There is an internal _clutter_actor_queue_redraw_with_clip API that gets
used for texture-from-pixmap to minimize what we redraw in response to
Damage events. It was previously working in terms of a ClutterActorBox
but it has now been changed so that an actor can queue a redraw of a
volume instead.
The plan is that clutter_actor_queue_redraw will start to transparently
use _clutter_actor_queue_redraw_with_clip when it can determine a paint
volume for the actor.
For the blur effect we use a BLUR_PADDING constant to pad out the volume
of the source actor on the x and y axes. Previously we were offsetting
the origin negatively using BLUR_PADDING and then adding BLUR_PADDING
to the width and height, but we should have been adding 2*BLUR_PADDING
instead.
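A minimal sketch of the corrected padding math, assuming a
ClutterPaintVolume describing the source actor; the BLUR_PADDING value
here is only a placeholder:

  #include <clutter/clutter.h>

  #define BLUR_PADDING 2 /* placeholder value for the sketch */

  static void
  pad_volume_for_blur (ClutterPaintVolume *volume)
  {
    ClutterVertex origin;
    gfloat width  = clutter_paint_volume_get_width (volume);
    gfloat height = clutter_paint_volume_get_height (volume);

    clutter_paint_volume_get_origin (volume, &origin);

    /* the origin moves back by BLUR_PADDING on both axes... */
    origin.x -= BLUR_PADDING;
    origin.y -= BLUR_PADDING;
    clutter_paint_volume_set_origin (volume, &origin);

    /* ...so the size must grow by 2 * BLUR_PADDING to pad both sides */
    clutter_paint_volume_set_width (volume, width + 2 * BLUR_PADDING);
    clutter_paint_volume_set_height (volume, height + 2 * BLUR_PADDING);
  }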
This ensures that clipped redraws are disabled when using
CLUTTER_PAINT=redraws. This may seem unintuitive given that this option
is for debugging clipped redraws, but we can't draw an outline outside
the clip region, and anything we draw inside it is liable to leave a
trailing mess on the screen, since it won't be cleared up by later
clipped redraws.
This adds a debug option to visualize the paint volumes of all actors.
When CLUTTER_PAINT=paint-volumes is exported in the environment before
running a Clutter application, all actors will have their bounding
volume drawn in green, with a label corresponding to the actor's type.
This is a fairly extensive second pass at exposing paint volumes for
actors.
The API has changed to allow clutter_actor_get_paint_volume to fail
since there are times - such as when an actor isn't a descendant of the
stage - when the volume can't be determined. Another example is when
something has connected to the "paint" signal of the actor and we simply
have no way of knowing what might be drawn in that handler.
The API has also been changed to return a const ClutterPaintVolume
pointer (transfer none) so we can avoid having to dynamically allocate
the volumes in the most common/performance-critical code paths.
Profiling showed the slice allocation of volumes taking about 1% of an
app's time,
for some fairly basic tests. Most volumes can now simply be allocated on
the stack; for clutter_actor_get_paint_volume we return a pointer to
&priv->paint_volume and if we need a more dynamic allocation there is
now a _clutter_stage_paint_volume_stack_allocate() mechanism which lets
us allocate data which expires at the start of the next frame.
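From the calling side that looks roughly like the sketch below: the
returned pointer is owned by the actor (transfer none) and the call can
fail, so the only thing to check is NULL. The printing helper is just
for illustration.

  #include <clutter/clutter.h>

  static void
  print_paint_volume (ClutterActor *actor)
  {
    const ClutterPaintVolume *volume;
    ClutterVertex origin;

    /* may return NULL, e.g. while the actor is not on a stage or a
     * "paint" handler makes the volume unknowable */
    volume = clutter_actor_get_paint_volume (actor);
    if (volume == NULL)
      return;

    clutter_paint_volume_get_origin (volume, &origin);
    g_print ("origin %.1f,%.1f,%.1f  size %.1f x %.1f x %.1f\n",
             origin.x, origin.y, origin.z,
             clutter_paint_volume_get_width (volume),
             clutter_paint_volume_get_height (volume),
             clutter_paint_volume_get_depth (volume));
  }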
The API has been extended to make it easier to implement
get_paint_volume for containers by using
clutter_actor_get_transformed_paint_volume and
clutter_paint_volume_union. The first allows you to query the paint
volume of a child but transformed into parent actor coordinates. The
second lets you combine volumes together so you can union all the
volumes for a container's children and report that as the container's
own volume.
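A hedged sketch of a container's get_paint_volume built from those two
calls; iterating the children via clutter_container_get_children and
bailing out when a child volume is unknowable are assumptions about how
a real container would use them.

  static gboolean
  my_group_get_paint_volume (ClutterActor       *actor,
                             ClutterPaintVolume *volume)
  {
    GList *children, *l;
    gboolean result = TRUE;

    children = clutter_container_get_children (CLUTTER_CONTAINER (actor));

    for (l = children; l != NULL; l = l->next)
      {
        ClutterActor *child = l->data;
        const ClutterPaintVolume *child_volume;

        /* the child's volume, transformed into our coordinate space */
        child_volume =
          clutter_actor_get_transformed_paint_volume (child, actor);
        if (child_volume == NULL)
          {
            /* if any child volume is unknowable, so is ours */
            result = FALSE;
            break;
          }

        clutter_paint_volume_union (volume, child_volume);
      }

    g_list_free (children);

    return result;
  }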
The representation of paint volumes has been updated to consider that
2D actors are the most common.
The effect APIs, clutter-texture and clutter-group have been updated
accordingly.
Previously we used the transformed allocation, but that doesn't take
into account actors with depth, which may be projected outside the
area covered by the transformed allocation.
The blur effect will sample pixels on the edges of the offscreen buffer,
so we want to add a padding to avoid clamping the blur.
We do this by creating a larger target texture, and updating the paint
volume of the actor during paint to take that padding into account.
We should be using the real, on-screen, transformed size of the actor to
size and position the offscreen buffer we use to paint the actor for an
effect.
An Effect implementation might override the paint volume of the actor
to which it is applied. The get_paint_volume() virtual function should
be added to the Effect class vtable so that any effect can get the
current paint volume and update it.
The clutter_actor_get_paint_volume() function becomes context aware, and
does the right thing if called from within a ClutterEffect pre_paint()
or post_paint() implementation, by allowing all effects in the chain up
to the caller to modify the paint volume.
An actor has an implicit "paint volume", that is, the volume in 3D
space it occupies when painting itself.
The paint volume is defined as a cuboid with the origin placed at the
top-left corner of the actor; the size of the cuboid is given by three
vectors: width, height and depth.
ClutterActor provides API to convert the paint volume into a 2D box in
screen coordinates, to compute the on-screen area that an actor will
occupy when painted.
Actors can override the default implementation of the get_paint_volume()
virtual function to provide a different volume.
ClutterAnimator currently has a number of bugs related to its
referencing of its internal timeline.
1) The default timeline created in _init is not unreffed (it appears
the author wrongly assumed that ClutterTimeline has a floating
reference, judging by the use of g_object_ref_sink in _set_timeline)
2) The timeline and slave_timeline variables are unreffed in finalize
instead of dispose
3) The signal handlers set up in _set_timeline are not disconnected
when the animator is disposed (see the dispose sketch below)
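A hedged sketch of what the dispose-time cleanup might look like; the
private field names follow the text above, while the handler callback
name (on_timeline_new_frame) is invented for illustration.

  static void
  clutter_animator_dispose (GObject *object)
  {
    ClutterAnimatorPrivate *priv = CLUTTER_ANIMATOR (object)->priv;

    if (priv->timeline != NULL)
      {
        /* drop the handlers installed by _set_timeline */
        g_signal_handlers_disconnect_by_func (priv->timeline,
                                              on_timeline_new_frame,
                                              object);
        g_object_unref (priv->timeline);
        priv->timeline = NULL;
      }

    if (priv->slave_timeline != NULL)
      {
        g_object_unref (priv->slave_timeline);
        priv->slave_timeline = NULL;
      }

    G_OBJECT_CLASS (clutter_animator_parent_class)->dispose (object);
  }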
http://bugzilla.clutter-project.org/show_bug.cgi?id=2347
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This reorganizes the loop for clutter_actor_contains so that it is a
for loop rather than a while loop. Although this is mostly just
nitpicking, this change could make the loop slightly faster in
unoptimized builds, because it doesn't perform the self == descendant
check twice, and it is clearer.
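For reference, a minimal sketch of the for-loop form, using the public
parent accessor so the snippet stands alone (the in-tree version may
walk the private parent pointer instead):

  gboolean
  clutter_actor_contains (ClutterActor *self,
                          ClutterActor *descendant)
  {
    ClutterActor *actor;

    /* walking up from the descendant naturally handles the
     * self == descendant case by returning TRUE on the first pass */
    for (actor = descendant;
         actor != NULL;
         actor = clutter_actor_get_parent (actor))
      {
        if (actor == self)
          return TRUE;
      }

    return FALSE;
  }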
The documentation for clutter_actor_contains didn't specify what
happens when self==descendant. A strict reading of it might lead you
to think that it would return FALSE because in that case the
descendant isn't an immediate child or a deeper descendant. The code
actually would return TRUE. I think this is more useful so this patch
fixes the docs rather than the code.
When removing all keys in a ClutterAnimator, the hash table with
object/property name pairs went out of sync. This change makes
the animator always clear this hash table upon key removal, and
refresh it if the animator's timeline is running.
Fixes bug #2335
Each time a material property changes we look to see if any of its
ancestry has become redundant and if so we prune that redundant
ancestry.
There was a problem with the logic that handles this though because we
weren't considering that a material which is a layer state authority may
still defer to ancestors to define the state of individual layers.
For example a material that derives from a parent with 5 layers can
become a STATE_LAYERS authority simply by changing its ->n_layers count
to 4, and in that case it can still defer to its ancestors to define the
state of those 4 layers.
This patch checks first if a material is a layer state authority and if
so only tries to prune its ancestry if it also *owns* all the individual
layers it depends on. (I.e. if g_list_length
(material->layer_differences) != material->n_layers then it's not safe
to try pruning its ancestry!)
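A sketch of the resulting guard; the fields are the internal
CoglMaterial members named in the text above, so treat this as
illustrative rather than public API.

  static gboolean
  material_owns_all_its_layers (CoglMaterial *material)
  {
    /* a layer-state authority may still defer individual layers to its
     * ancestors; only when it owns every layer it depends on is it
     * safe to try pruning the ancestry */
    return g_list_length (material->layer_differences) == material->n_layers;
  }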
http://bugzilla-attachments.gnome.org/attachment.cgi?id=170907
There is a GL_INVALID_ENUM error for GL_DEPTH_STENCIL when calling
glRenderbufferStorage() with the OpenGL ES backend, so enable this
only for the OpenGL backend.
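A hedged sketch of the guard; HAVE_COGL_GL is assumed as the build-time
switch for the full OpenGL backend, and the renderbuffer setup around
it is elided.

  static gboolean
  try_packed_depth_stencil (GLuint renderbuffer, int width, int height)
  {
  #ifdef HAVE_COGL_GL
    glBindRenderbuffer (GL_RENDERBUFFER, renderbuffer);
    glRenderbufferStorage (GL_RENDERBUFFER, GL_DEPTH_STENCIL,
                           width, height);
    return glGetError () == GL_NO_ERROR;
  #else
    /* GLES reports GL_INVALID_ENUM for GL_DEPTH_STENCIL, so fall back
     * to separate depth and stencil renderbuffers there */
    return FALSE;
  #endif
  }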
Signed-off-by: Robert Bragg <robert@linux.intel.com>
This is all internal, so we shouldn't need it; unfortunately, it seems
we're passing invalid data internally, so for the time being catching
inconsistencies should at least emit a warning for us to backtrace.
This adds a check in clutter_actor_real_queue_redraw, after calling
_clutter_actor_get_stage_internal, for the case where the actor doesn't
yet have an associated stage, so we can avoid passing a NULL stage
pointer to _clutter_stage_set_pick_buffer_valid, which could cause a
crash.
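A minimal sketch of the added guard; the internal helpers are the ones
named above and their exact signatures are assumptions.

  static void
  queue_redraw_invalidate_pick_buffer (ClutterActor *self)
  {
    ClutterActor *stage = _clutter_actor_get_stage_internal (self);

    /* the actor may not have been added to a stage yet */
    if (stage == NULL)
      return;

    _clutter_stage_set_pick_buffer_valid (CLUTTER_STAGE (stage), FALSE);
  }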
*** This is an API change ***
The general pattern for axis-aligned arguments is:
x argument
y argument
If we consider a column an x-aligned argument and a row a y-aligned
argument, then we need to update the TableLayout functions to take:
column
row
and not:
row
column
as illustrated by the usage sketch below.
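A usage sketch with the corrected ordering; the layout and child here
are placeholders.

  static void
  add_cell (ClutterTableLayout *table,
            ClutterActor       *child)
  {
    /* x-aligned argument (column) first, y-aligned argument (row) second */
    clutter_table_layout_pack (table, child, 2 /* column */, 1 /* row */);
  }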
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There was a problem though in that we were tracking this information in
the ClutterMainContext, but conceptually this doesn't really make sense
because the pick buffer is associated with a stage framebuffer and there
can be multiple stages for one context.
This patch moves the state tracking to ClutterStage.
This reverts commit d7e86e2696.
This was a half-baked patch that was pushed a bit early: it broke
test-texture-pick-with-alpha, and the commit message refers to a change
on the wip/paint-box branch that hasn't happened yet.
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There were two problems with how we were tracking this state though.
Firstly we were tracking this information in the ClutterMainContext, but
conceptually this doesn't really make sense because the pick buffer is
associated with a stage framebuffer and there can be multiple stages for
one context. Secondly - since the change to how redraws are queued - we
weren't marking the pick buffer as invalid when queuing a redraw; we
were only marking the buffer invalid when signaling/finishing the
queue-redraw process, which is now deferred until just before a paint.
This meant using clutter_stage_get_actor_at_pos after a scenegraph
change could give a wrong result if it just read from an existing (but
technically invalid) pick buffer.
This patch moves the state tracking to ClutterStage, and ensures the
buffer is invalidated in _clutter_stage_queue_actor_redraw.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2283
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The request mode set by the box layout was previously width-for-height
in a vertical layout and height-for-width in a horizontal layout which
seems to be wrong. For example, if width-for-height is used in a
vertical layout then the width request will come second with the
for_height parameter set. However a vertical layout doesn't pass the
for_height parameter on to its children so doing the requests in that
order doesn't help. If the layout contains a ClutterText then both the
width and height request for it will have -1 for the for_width and
for_height parameters so the text would end up allocated too small.
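In code the corrected mapping looks roughly like this sketch, where the
vertical flag stands in for however the layout stores its orientation:

  static ClutterRequestMode
  box_layout_request_mode (gboolean vertical)
  {
    /* a vertical layout gives children the full width, so width is
     * requested first and height is requested for that width; a
     * horizontal layout is the mirror image */
    return vertical ? CLUTTER_REQUEST_HEIGHT_FOR_WIDTH
                    : CLUTTER_REQUEST_WIDTH_FOR_HEIGHT;
  }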
http://bugzilla.clutter-project.org/show_bug.cgi?id=2328
If set_cogl_texture() is called after unsetting the Texture's material
then we really want to make a copy of the template.
Also, we should assert more often if the internal state goes horribly
wrong: at least, we'll have a backtrace.
The order of the row_span and column_span arguments was different in
the declaration from that in the definition. This was causing the
gtk-doc to also have the wrong order.
If COGL_OBJECT_DEBUG is defined then cogl-object-private.h will call
COGL_NOTE in the ref and unref macros. For this to work the debug
header needs to also be included or COGL_NOTE won't necessarily be
defined.
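The fix amounts to pulling the debug header into cogl-object-private.h;
the exact guard placement here is an assumption.

  #ifdef COGL_OBJECT_DEBUG
  /* the ref/unref macros below expand to COGL_NOTE, which is defined
   * by the debug header */
  #include "cogl-debug.h"
  #endif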
If the FlowLayout layout manager wasn't allocated the same size it
requested then it should blow its caches and recompute the layout
with the given allocation size.
Instead of using the fixed position and size API, use the newly added
update_allocation() virtual function in ClutterConstraint to change the
allocation of a ClutterActor. This allows using constraints inside
layout managers, and also allows Constraints to react to changes in the
size of an actor without causing relayout cycles.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2319
The Constraint should plug directly into the allocation mechanism, and
modify the allocation of the actor to which it is applied. This is
similar to the mechanism used by the Effect class to modify the paint
sequence of an actor.
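A hedged sketch of a constraint plugging into that mechanism; the
MySnapConstraint type is invented for illustration and only the
relevant virtual function is shown.

  static void
  my_snap_constraint_update_allocation (ClutterConstraint *constraint,
                                        ClutterActor      *actor,
                                        ClutterActorBox   *allocation)
  {
    /* adjust the allocation in place, e.g. snap it to pixel boundaries */
    clutter_actor_box_clamp_to_pixel (allocation);
  }

  static void
  my_snap_constraint_class_init (MySnapConstraintClass *klass)
  {
    ClutterConstraintClass *constraint_class =
      CLUTTER_CONSTRAINT_CLASS (klass);

    constraint_class->update_allocation =
      my_snap_constraint_update_allocation;
  }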
In line with the changes made in f5f066df9c to clean up how Clutter
deals with transformations of actors, this patch updates the code in
clutter-offscreen-effect.c. We now query the projection matrix from the
stage instead of the perspective, and instead of duplicating the logic
to set up the stage view transform we now use
_clutter_actor_apply_modelview_transform for the stage.
cogl_util_next_p2 is declared in cogl-util.h which is a private header
so it shouldn't be possible for an application to use it. It's
probably not a function we'd like to export from Cogl so it seems
better to keep it private. This patch renames it to _cogl_util_next_p2
so that it won't be exported from the shared library.
The documentation for the function was also slightly wrong because it
stated that the function returned the next power of two greater than
'a'. However the code would actually return 'a' if it's already a
power of two. I think the actual behaviour is more useful so this
patch changes the documentation rather than the code.
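For reference, a minimal sketch matching the documented behaviour
(return 'a' itself when it is already a power of two, otherwise the
next power of two above it); the in-tree signature may differ.

  static unsigned int
  _cogl_util_next_p2 (unsigned int a)
  {
    unsigned int result = 1;

    /* keep doubling until we reach or pass 'a' */
    while (result < a)
      result <<= 1;

    return result;
  }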
Previously CoglVertexBuffer would always set the flush options flags
to at least contain COGL_MATERIAL_FLUSH_FALLBACK_MASK. The code then
later checks whether any flags are set before deciding whether to copy
the material to implement the overrides. This means that it would
always end up copying the material even if there are no fallback
layers. This patch changes it so that it only sets
COGL_MATERIAL_FLUSH_FALLBACK_MASK if fallback_layers != 0.
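A sketch of the resulting logic inside the vertex-buffer flush path;
the options structure and its field names are assumptions based on the
flag named above.

    options.flags = 0;

    /* only request the fallback override when some layers actually
     * need to fall back; otherwise the material can be flushed
     * unmodified and never needs to be copied */
    if (fallback_layers != 0)
      {
        options.flags |= COGL_MATERIAL_FLUSH_FALLBACK_MASK;
        options.fallback_layers = fallback_layers;
      }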