We were always explicitly checking priv->needs_allocation in
_clutter_actor_queue_redraw_with_clip, but we only need to do that if
the CLUTTER_REDRAW_CLIPPED_TO_ALLOCATION flag is used.
This initializes priv->last_paint_box with a degenerate box, so a newly
allocated actor added to the scenegraph and made visible only needs to
trigger a redraw of its initial position. Without a valid
last_paint_box we would instead trigger a full stage redraw.
To make comparing the performance with culling/clipped redraws
enabled/disabled fairer we now avoid querying the paint box when they
are disabled, so that results should reflect how the cost of
transforming paint volumes into screen space etc gets offset against the
benefit of culling.
In clutter_stage_allocate at the end we were always querying the latest
allocation set and using the geometry to assert the viewport and then
kicking a full redraw. These only need to be done when the allocation
really changes, so we now read the previous allocation at the start of
the function and compare at the end. This was stopping clipped redraws
from being used in a lot of cases.
To allow for the fact that we've seen a number of drivers that can
struggle to get going and may produce a bad first frame, we now force
the first 2 frames to be full redraws. This became a serious issue
after we started using clipped redraws more aggressively because we
assumed that after the first frame the full framebuffer was valid and
we only redrew the content that changed. With buggy drivers though,
applications would be left with junk covering a lot of the stage until
some event triggered a full redraw.
This is a workaround for a race condition when resizing windows while
there are in-flight glXCopySubBuffer blits happening.
The problem stems from the fact that rectangles for the blits are
described relative to the bottom left of the window, and we can't
guarantee control over the X window gravity used when resizing, so the
gravity is typically NorthWest not SouthWest.
This means if you grow a window vertically the server will make sure to
place the old contents of the window at the top-left/north-west of your
new larger window, but that may happen asynchronously to GLX preparing to
do a blit specified relative to the bottom-left/south-west of the window
(based on the old smaller window geometry).
When the GLX-issued blit finally happens relative to the new bottom of
your window, the destination will have shifted relative to the
top-left where all the pixels you care about are, so it results in a
nasty artefact that makes resizing look very ugly!
We can't currently fix this completely, in-part because the window
manager tends to trample any gravity we might set. This workaround
instead simply disables blits for a while if we are notified of any
resizes happening so if the user is resizing a window via the window
manager then they may see an artefact for one frame, but then we will
fall back to redrawing the full stage until the cooling-off period is
over.
Instead of triggering a full stage redraw for Expose events we use the
geometry of the exposed region given in the event to queue a clipped
redraw of the stage.
Clutter has now taken responsibility for managing its viewport,
projection matrix and view transform as part of ClutterStage so
_cogl_setup_viewport is no longer used by anything, and since it's quite
an obscure API anyway we've taken the opportunity to remove the
function.
Since clutter_actor_queue_redraw now automatically clips redraws
according to the paint volume of the actor we have to be careful to
ensure we really force a full redraw when the stage is allocated a new
size or the stage viewport changes.
We have bent the originally documented semantics a bit so now where we
say "Queueing a new layout automatically queues a redraw as well" it
might be clearer to say "Queuing a new layout implicitly queues a redraw
as well if anything in the layout changes".
This should be close enough to the original semantics to not cause any
problems.
Without this change we fail to take advantage of clipped redraws
in a lot of cases because queuing a redraw with priv->needs_allocation
== TRUE will automatically be promoted to a full stage redraw since it's
not possible to determine a valid paint-volume.
Also, queuing a redraw here would end up registering a redundant
clipped redraw for the current location, doing quite a lot of
redundant transforms, and then later, when re-allocated during
layouting, another queued redraw would happen with the correct
paint-volume.
This uses actor paint volumes to perform culling during
clutter_actor_paint.
When performing a clipped redraw (because only a few localized actors
changed) then as we traverse the scenegraph painting the actors we can
now ignore actors that don't intersect the clip region. Early testing
shows this can have a big performance benefit; e.g. a 100% fps
improvement for test-state with culling enabled, and we hope that there
are even more compelling examples than that in the real world.
Most Clutter applications are 2Dish interfaces and have quite a lot of
actors that get continuously painted when anything is animated. The
dynamic actors are often localized to an area of user focus though so
with culling we can completely avoid painting any of the static actors
outside the current clip region.
Obviously the cost of culling has to be offset against the cost of
painting to determine if it's a win, but our (limited) testing suggests
it should be a win for most applications.
Note: we hope to bring another performance bump from culling in a
later iteration - hopefully in the 1.6 cycle - by doing the culling in
the stage's model space instead of screen space. This should let us
minimize the cost of transforming the actor volumes for culling.
This makes clutter_actor_queue_redraw transparently use an actor's paint
volume to queue a clipped redraw.
We save the actor's paint box each time it is painted so that when
clutter_actor_queue_redraw is called we can determine the old and new
location of the actor so we know the full bounds of what must be redrawn
to clear its old view and show the new.
This makes _clutter_actor_transform_and_project_box a static function
and removes the prototype from clutter-private.h since it is no longer
used outside clutter-actor.c
The base implementation for the actor queue_relayout method was queuing
an implicit redraw, but nothing about the mere process of queuing a
relayout should force us to queue a redraw.
If actors are moved as a part of relayouting later then they will queue
a redraw. Also clutter_actor_queue_relayout() still also explicitly
queues a redraw so I think this may have been doubly redundant.
If clutter_actor_allocate finds it necessary to update an actor's
allocation then it now also queues a redraw of that actor. Currently we
queue redraws for actors very early on when queuing a relayout instead
of waiting to determine the final outcome of relayouting to determine if
a redraw is really required. With this in place we can move away from
preemptive queuing of redraws.
clutter_actor_queue_relayout currently queues a relayout and a redraw,
but the plan is to change it to only queue a relayout and honour the
documentation by assuming that the process of relayouting will result
in queuing redraws for any actor whose allocation changes.
This doesn't make that change it just adds an internal
_clutter_actor_queue_only_relayout function which
clutter_actor_queue_relayout now uses as well as calling
clutter_actor_queue_redraw.
This adds a private ->relayout_pending boolean similar in spirit to
redraw_pending. This will allow us to queue a relayout without
implicitly queueing a redraw; instead we can depend on the actions
of a relayout to queue any necessary redraw.
When clutter_texture_new_from_actor is used we need to track when the
source actor queues a redraw or a relayout so we can also queue a redraw
or relayout for the texture actor.
There is an internal _clutter_actor_queue_redraw_with_clip API that gets
used for texture-from-pixmap to minimize what we redraw in response to
Damage events. It was previously working in terms of a ClutterActorBox
but it has now been changed so an actor can queue a redraw of a paint
volume instead.
The plan is that clutter_actor_queue_redraw will start to transparently
use _clutter_actor_queue_redraw_with_clip when it can determine a paint
volume for the actor.
For the blur effect we use a BLUR_PADDING constant to pad out the volume
of the source actor on the x and y axis. Previously we were offsetting
the origin negatively using BLUR_PADDING and then adding BLUR_PADDING
to the width and height, but we should have been adding 2*BLUR_PADDING
instead.
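As an illustration, the padded volume now looks something like this
(a sketch of the fix; the actual code in the blur effect may differ in
details):

    ClutterVertex origin;

    clutter_paint_volume_get_origin (volume, &origin);
    origin.x -= BLUR_PADDING;
    origin.y -= BLUR_PADDING;
    clutter_paint_volume_set_origin (volume, &origin);

    /* The origin moved back by BLUR_PADDING on both axes, so the size
     * must grow by twice that amount to keep the far edge padded
     * too. */
    clutter_paint_volume_set_width (volume,
        clutter_paint_volume_get_width (volume) + 2 * BLUR_PADDING);
    clutter_paint_volume_set_height (volume,
        clutter_paint_volume_get_height (volume) + 2 * BLUR_PADDING);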
This ensures that clipped redraws are disabled when using
CLUTTER_PAINT=redraws. This may seem unintuitive given that this option
is for debugging clipped redraws, but we can't draw an outline outside
the clip region and anything we draw inside the clip region is liable to
leave a trailing mess on the screen since it won't be cleared up by
later clipped redraws.
This adds a debug option to visualize the paint volumes of all actors.
When CLUTTER_PAINT=paint-volumes is exported in the environment before
running a Clutter application then all actors will have their bounding
volume drawn in green with a label corresponding to the actor's type.
This is a fairly extensive second pass at exposing paint volumes for
actors.
The API has changed to allow clutter_actor_get_paint_volume to fail
since there are times - such as when an actor isn't a descendant of the
stage - when the volume can't be determined. Another example is when
something has connected to the "paint" signal of the actor and we simply
have no way of knowing what might be drawn in that handler.
The API has also been changed to return a const ClutterPaintVolume pointer
(transfer none) so we can avoid having to dynamically allocate the
volumes in the most common/performance critical code paths. Profiling was
showing the slice allocation of volumes taking about 1% of an app's time,
for some fairly basic tests. Most volumes can now simply be allocated on
the stack; for clutter_actor_get_paint_volume we return a pointer to
&priv->paint_volume and if we need a more dynamic allocation there is
now a _clutter_stage_paint_volume_stack_allocate() mechanism which lets
us allocate data which expires at the start of the next frame.
The API has been extended to make it easier to implement
get_paint_volume for containers by using
clutter_actor_get_transformed_paint_volume and
clutter_paint_volume_union. The first allows you to query the paint
volume of a child but transformed into parent actor coordinates. The
second lets you combine volumes together so you can union all the
volumes for a container's children and report that as the container's
own volume.
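For example, a container's implementation could look roughly like this
(a minimal sketch; MyContainer and its child list are hypothetical):

    static gboolean
    my_container_get_paint_volume (ClutterActor       *actor,
                                   ClutterPaintVolume *volume)
    {
      MyContainer *self = MY_CONTAINER (actor);
      GList *l;

      /* union the transformed volume of every child into our own
       * volume; if any child can't report one, neither can we */
      for (l = self->children; l != NULL; l = l->next)
        {
          ClutterActor *child = l->data;
          const ClutterPaintVolume *child_volume;

          /* the child's volume, transformed into this actor's
           * coordinate space */
          child_volume =
            clutter_actor_get_transformed_paint_volume (child, actor);
          if (child_volume == NULL)
            return FALSE;

          clutter_paint_volume_union (volume, child_volume);
        }

      return TRUE;
    }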
The representation of paint volumes has been updated to consider that
2D actors are the most common.
The effect APIs, clutter-texture and clutter-group have been updated
accordingly.
Previously we used the transformed allocation but that doesn't take
into account actors with depth which may be projected outside the
area covered by the transformed allocation.
The blur effect will sample pixels on the edges of the offscreen buffer,
so we want to add a padding to avoid clamping the blur.
We do this by creating a larger target texture, and updating the paint
volume of the actor during paint to take that padding into account.
We should be using the real, on-screen, transformed size of the actor to
size and position the offscreen buffer we use to paint the actor for an
effect.
An Effect implementation might override the paint volume of the actor to
which it is applied. The get_paint_volume() virtual function should
be added to the Effect class vtable so that any effect can get the
current paint volume and update it.
The clutter_actor_get_paint_volume() function becomes context aware, and
does the right thing if called from within a ClutterEffect pre_paint()
or post_paint() implementation, by allowing all effects in the chain up
to the caller to modify the paint volume.
An actor has an implicit "paint volume", that is the volume in 3D space
occupied when painting itself.
The paint volume is defined as a cuboid with the origin placed at the
top-left corner of the actor; the size of the cuboid is given by three
vectors: width, height and depth.
ClutterActor provides API to convert the paint volume into a 2D box in
screen coordinates, to compute the on-screen area that an actor will
occupy when painted.
Actors can override the default implementation of the get_paint_volume()
virtual function to provide a different volume.
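For a flat 2D actor a custom implementation can be as simple as this
sketch (assuming the actor just covers its allocated size):

    static gboolean
    my_actor_get_paint_volume (ClutterActor       *self,
                               ClutterPaintVolume *volume)
    {
      gfloat width, height;

      clutter_actor_get_size (self, &width, &height);

      /* the origin defaults to (0, 0, 0): the actor's top-left
       * corner */
      clutter_paint_volume_set_width (volume, width);
      clutter_paint_volume_set_height (volume, height);

      /* the depth is left at 0 since this actor paints nothing with
       * depth */

      return TRUE;
    }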
*** WARNING: THIS COMMIT CHANGES THE BUILD ***
Do not recurse into the backend directories to build private, internal
libraries.
We only recurse from clutter/ into the cogl sub-directory; from there,
we don't recurse any further. All the backend-specific code in Cogl and
Clutter is compiled conditionally depending on the macros defined by the
configure script.
We still recurse from the top-level directory into doc, clutter and
tests, because gtk-doc and tests do not deal nicely with non-recursive
layouts.
This change makes Clutter compile slightly faster, and cleans up the
build system, especially when dealing with introspection data.
Ideally, we also want to make Cogl part of the top-level build, so that
we can finally drop the sed trick to change the shared library from the
GIR before compiling it.
Currently disabled:
‣ OSX backend
‣ Fruity backend
Currently enabled but untested:
‣ EGL backend
‣ Windows backend
ClutterAnimator currently has a number of bugs related to its
referencing of its internal timeline.
1) The default timeline created in _init is not unreffed (it appears
the programmer wrongly assumed ClutterTimeline has a floating
reference, based on the use of g_object_ref_sink in _set_timeline)
2) The timeline and slave_timeline vars are unreffed in finalize instead
of dispose
3) The signal handlers set up in _set_timeline are not disconnected when
the animator is disposed
http://bugzilla.clutter-project.org/show_bug.cgi?id=2347
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This reorganizes the loop for clutter_actor_contains so that it is a
for loop rather than a while loop. Although this is mostly just
nitpicking, I think this change could make the loop slightly faster if
not optimized because it doesn't perform the self == descendant check
twice and it is clearer.
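The reorganized loop looks roughly like this (a sketch of the idea,
not necessarily the exact code):

    gboolean
    clutter_actor_contains (ClutterActor *self,
                            ClutterActor *descendant)
    {
      ClutterActor *actor;

      /* the first iteration handles self == descendant, so there is
       * no need for a separate up-front check */
      for (actor = descendant;
           actor != NULL;
           actor = clutter_actor_get_parent (actor))
        {
          if (actor == self)
            return TRUE;
        }

      return FALSE;
    }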
The documentation for clutter_actor_contains didn't specify what
happens when self==descendant. A strict reading of it might lead you
to think that it would return FALSE because in that case the
descendant isn't an immediate child or a deeper descendant. The code
actually would return TRUE. I think this is more useful so this patch
fixes the docs rather than the code.
When removing all keys in a ClutterAnimator, the hash table with
object/property name pairs went out of sync. This change makes
the animator always clear this hash table upon key-removal; and
refreshing it if the animator's timeline is running.
Fixes bug #2335
Each time a material property changes we look to see if any of its
ancestry has become redundant and if so we prune that redundant
ancestry.
There was a problem with the logic that handles this though because we
weren't considering that a material which is a layer state authority may
still defer to ancestors to define the state of individual layers.
For example a material that derives from a parent with 5 layers can
become a STATE_LAYERS authority by simply changing its ->n_layers count
to 4 and in that case it can still defer to its ancestors to define the
state of those 4 layers.
This patch checks first if a material is a layer state authority and if
so only tries to prune its ancestry if it also *owns* all the individual
layers it depends on. (I.e. if g_list_length
(material->layer_differences) != material->n_layers then it's not safe
to try pruning its ancestry!)
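In other words, the guard added by this patch amounts to something
like the following (a sketch based on the fields named above; the
pruning helper name is illustrative):

    /* A layer-state authority may still defer individual layers to
     * its ancestors; only when it owns every layer it depends on is
     * it safe to prune redundant ancestry. */
    if (g_list_length (material->layer_differences) ==
        material->n_layers)
      prune_redundant_ancestry (material);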
http://bugzilla-attachments.gnome.org/attachment.cgi?id=170907
glRenderbufferStorage() raises a GL_INVALID_ENUM error for
GL_DEPTH_STENCIL when called with the OpenGL ES backend, so enable
this only for the OpenGL backend.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
This is all internal, so we shouldn't need it; unfortunately, it seems
we're passing invalid data internally, so for the time being catching
inconsistencies should at least emit a warning for us to backtrace.
This adds a check in clutter_actor_real_queue_redraw, after calling
_clutter_actor_get_stage_internal, for the case where the actor
doesn't yet have an associated stage, so we can avoid passing a NULL
stage pointer to _clutter_stage_set_pick_buffer_valid, which could
cause a crash.
*** This is an API change ***
The general pattern for axis-aligned arguments is:
x argument
y argument
If we consider columns an x-aligned argument, and row a y-aligned
argument, then we need to update the TableLayout functions to be:
column
row
and not:
row
column
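For example, packing a child now takes the column first (a sketch;
variable names are illustrative):

    /* before: clutter_table_layout_pack (layout, child, row, column); */
    clutter_table_layout_pack (layout, child, column, row);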
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There was a problem though in that we were tracking this information in
the ClutterMainContext, but conceptually this doesn't really make sense
because the pick buffer is associated with a stage framebuffer and there
can be multiple stages for one context.
This patch moves the state tracking to ClutterStage.
This reverts commit d7e86e2696.
This was a half-baked patch that was pushed a bit early since it broke
test-texture-pick-with-alpha, and the commit message refers to a change
on the wip/paint-box branch that hasn't happened yet.
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There were two problems with how we were tracking this state though.
Firstly we were tracking this information in the ClutterMainContext, but
conceptually this doesn't really make sense because the pick buffer is
associated with a stage framebuffer and there can be multiple stages for
one context. Secondly - since the change to how redraws are queued - we
weren't marking the pick buffer as invalid when queuing a redraw; we
were only marking the buffer invalid when signaling/finishing the
queue-redraw process, which is now deferred until just before a paint.
This meant using clutter_stage_get_actor_at_pos after a scenegraph
change could give a wrong result if it just read from an existing (but
technically invalid) pick buffer.
This patch moves the state tracking to ClutterStage, and ensures the
buffer is invalidated in _clutter_stage_queue_actor_redraw.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2283
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The request mode set by the box layout was previously width-for-height
in a vertical layout and height-for-width in a horizontal layout, which
seems to be wrong. For example, if width-for-height is used in a
vertical layout then the width request will come second with the
for_height parameter set. However a vertical layout doesn't pass the
for_height parameter on to its children so doing the requests in that
order doesn't help. If the layout contains a ClutterText then both the
width and height request for it will have -1 for the for_width and
for_height parameters so the text would end up allocated too small.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2328
If set_cogl_texture() is called after unsetting the Texture's material
then we really want to make a copy of the template.
Also, we should assert more often if the internal state goes horribly
wrong: at least, we'll have a backtrace.
The order of the row_span and column_span arguments was different in
the declaration from that in the definition. This was causing the
gtk-doc to also have the wrong order.
If COGL_OBJECT_DEBUG is defined then cogl-object-private.h will call
COGL_NOTE in the ref and unref macros. For this to work the debug
header needs to also be included or COGL_NOTE won't necessarily be
defined.
If the FlowLayout layout manager wasn't allocated the same size it
requested then it should blow its caches and recompute the layout
with the given allocation size.
Instead of using the fixed position and size API, use the newly added
update_allocation() virtual function in ClutterConstraint to change the
allocation of a ClutterActor. This allows using constraints inside
layout managers, and also allows Constraints to react to changes in the
size of an actor without causing relayout cycles.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2319
The Constraint should plug directly into the allocation mechanism, and
modify the allocation of the actor to which they are applied to. This is
similar to the mechanism used by the Effect class to modify the paint
sequence of an actor.
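A constraint can now hook into allocation along these lines (a minimal
sketch; MyConstraint is hypothetical):

    static void
    my_constraint_update_allocation (ClutterConstraint *constraint,
                                     ClutterActor      *actor,
                                     ClutterActorBox   *allocation)
    {
      /* modify the allocation in place; here we simply clamp the
       * width to at most 200 pixels */
      if (allocation->x2 - allocation->x1 > 200)
        allocation->x2 = allocation->x1 + 200;
    }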
In line with the changes made in f5f066df9c to clean up how Clutter
deals with transformations of actors, this patch updates the code in
clutter-offscreen-effect.c. We now query the projection matrix from the
stage instead of the perspective and instead of duplicating the logic to
setup the stage view transform we now use
_clutter_actor_apply_modelview_transform for the stage instead.
cogl_util_next_p2 is declared in cogl-util.h which is a private header
so it shouldn't be possible for an application to use it. It's
probably not a function we'd like to export from Cogl so it seems
better to keep it private. This patch renames it to _cogl_util_next_p2
so that it won't be exported from the shared library.
The documentation for the function is also slightly wrong because it
stated that the function returned the next power of two greater than
'a'. However the code would actually return 'a' if it's already a
power of two. I think the actual behaviour is more useful so this
patch changes the documentation rather than the code.
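For reference, the documented behaviour is something like this sketch:

    static unsigned int
    _cogl_util_next_p2 (unsigned int a)
    {
      unsigned int rval = 1;

      /* returns 'a' itself when 'a' is already a power of two */
      while (rval < a)
        rval <<= 1;

      return rval;
    }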
Previously CoglVertexBuffer would always set the flush options flags
to at least contain COGL_MATERIAL_FLUSH_FALLBACK_MASK. The code then
later checks whether any flags are set before deciding whether to copy
the material to implement the overrides. This means that it would
always end up copying the material even if there are no fallback
layers. This patch changes it so that it only sets
COGL_MATERIAL_FLUSH_FALLBACK_MASK if fallback_layers != 0.
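The change boils down to something like this (a sketch of the internal
logic; the options structure fields are named after the flags above):

    options.flags = 0;

    /* only request the fallback override when there actually are
     * fallback layers, so an unmodified material needn't be copied */
    if (fallback_layers != 0)
      {
        options.flags |= COGL_MATERIAL_FLUSH_FALLBACK_MASK;
        options.fallback_layers = fallback_layers;
      }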
If a single arbfp program is being shared between multiple CoglMaterials
then we need to make sure we update all program.local params when
switching between materials. Previously we had a dirty flag to track
when combine_constant params were changed but didn't take in to account
that different materials sharing the same program may have different
combine constants.
Previously the backend private state was used to either link to an
authority material or provide authoritative program state. The mechanism
seemed overly complex and felt very fragile. I made a recent commit
which added a lot of documentation to make it easier to understand,
but it still didn't feel very elegant.
This patch takes a slightly different approach; we now have a
ref-counted ArbfpProgramState object which encapsulates a single ARBfp
program and the backend private state now just has a single member which
is a pointer to one of these arbfp_program_state objects. We no longer
need to cache pointers to our arbfp-authority and so we can get rid of
a lot of awkward code that ensured these pointers were
updated/invalidated at the right times. The program state objects are
not tightly bound to a material so it will also allow us to later
implement a cache mechanism that lets us share state outside a materials
ancestry. This may help to optimize code not following the
recommendations of deriving materials from templates, avoiding one-shot
materials and not repeatedly modifying materials because even if a
material's ancestry doesn't naturally lead us to shareable state we can
fallback to searching for shareable state using central hash tables.
This adds a way to iterate the layer indices of the given material since
cogl_material_get_layers has been deprecated. The user provides a
callback to be called once for each layer.
Because modification of layers in the callback may potentially
invalidate any number of the internal CoglMaterialLayer structures and
invalidate the material's layer cache this should be more robust than
cogl_material_get_layers() which used to return a const GList *
pointing directly to internal state.
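Usage looks roughly like this (a sketch; the exact callback signature
here is an assumption based on the description):

    static gboolean
    dump_layer_cb (CoglMaterial *material,
                   int           layer_index,
                   void         *user_data)
    {
      g_print ("material has layer %d\n", layer_index);

      return TRUE; /* keep iterating */
    }

    /* iterate the layer indices of a material */
    cogl_material_foreach_layer (material, dump_layer_cb, NULL);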
This fixes the material backends to declare their constant vtable in the
c file with a corresponding extern declaration in the header. This
should fix complaints about duplicate symbols seen on OSX.
Instead of lazily incorporating combine constants as arbfp PARAM
constants in the source directly we now use program.local parameters
instead so we can avoid repeating codegen if a material's combine
constant is updated. This should be a big win for applications animating
a constant used for example in an animated interpolation, such as
gnome-shell.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2280
This makes it so we don't consider LAYER_STATE_TEXTURE changes to affect
the arbfp code. This should avoid a lot of unneeded passes of
code generation for applications modifying the texture for a layer.
This makes it so we only notify backends of either a single material
change or a single layer change. Previously all material STATE_LAYERS
changes would be followed by a more detailed layer change.
For backends that perform code generation for fragment processing they
typically need to understand the details of how layers get changed to
determine if they need to repeat codegen. It doesn't help them to report
a material STATE_LAYERS change for all layer changes since it's so
broad, they really need to wait for the layer change to be notified.
What does help though is to report a STATE_LAYERS change for a change in
material->n_layers because they typically do need to repeat codegen in
that case.
This fixes a number of issues relating to how we track the arbfp private
state associated with CoglMaterials. At the same time it adds much more
extensive code documentation to try and make it a bit more approachable.
When notifying a backend about a layer being modified we now pass the
layer's current owner for reference. NB: Although a layer can indirectly
be referenced by multiple layers, a layer is considered immutable once
it has dependants, so there is only ever one material associated with a
layer being modified. Passing the material pointer to the backend's
layer_pre_change callback can be useful for backends that associate
their private state with materials and may need to update that state in
response to layer changes.
This renames the get_arbfp_authority function to
get_arbfp_authority_no_check to clarify that the function doesn't
validate that the authority cache is still valid by looking at the age
of the referenced material. The function should only be used when we
*know* the cache has already been checked.
We now pass a boolean to _cogl_material_pre_change_notify to know when
a material change is as a result of a layer change. We plan to use this
information to avoid notifying the backends about material changes if
they are as a result of layer changes. This will simplify the handling
of state changes in the backends because they can assume that layer and
material changes are mutually exclusive.
This adds an internal _cogl_material_get_layer_combine_constant function
so we can query the current layer combine constant back. We should
probably make this a public property getter, but for now we just need
this so we can read the constant in the arbfp backend.
We are going to start tracking more per-texture unit state with arbfp
private state so this adds an internal UnitState type and we allocate an
array of these when setting up a new private state structure. The first
thing that has been moved into this is the sampled boolean to know when
a particular texture unit gets sampled from in the generated arbfp code.
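Conceptually the new per-unit state looks like this (a sketch):

    typedef struct _UnitState
    {
      /* TRUE when the generated arbfp code samples from this unit */
      unsigned int sampled : 1;
    } UnitState;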
This avoids the use of gcc constructor and destructor attributes to
initialize the cogl uprof context and optionally print a cogl uprof
report at app exit. We now initialize the uprof context in
cogl_context_create instead.
When building with --enable-profile we now depend on the uprof-0.3
developer release which brings a few improvements:
» It lets us "fix" how we initialize uprof so that instead of using a shared
object constructor/destructor (which was a hack used when first adding
uprof support to Clutter) we can now initialize as part of clutter's
normal initialization code. As a side note though, I found that the way
Clutter initializes has some quite serious problems whenever it
involves GOptionGroups. It is not able to guarantee the initialization
of dependencies like uprof and Cogl. For this reason we still use the
constructor/destructor approach to initialize uprof in Cogl.
» uprof-0.3 provides a better API for adding custom columns when reporting
timer and counter statistics which lets us remove quite a lot of manual
report generation code in clutter-profile.c.
» uprof-0.3 provides a shared context for tracking mainloop timer
statistics. This means any mainloop based library following the same
"Mainloop" timer naming convention can use the shared context and no
matter who ends up owning the final mainloop the statistics will always
be in the same place. This allows profiling of Clutter with an
external mainloop such as with the Mutter compositor.
» uprof-0.3 can export statistics over dbus and comes with an ncurses
based ui to visualize timer and counter stats live.
The latest version of uprof can be cloned from:
git://github.com/rib/UProf.git
When try_creating_fbo fails it deletes any intermediate render buffers
that were created. However it doesn't clear the list so I think if it
failed a second time it would try to delete the render buffers
again. This could potentially cause problems if a subsequent fbo is
created because the destructor for the original might delete the
renderbuffers of the new fbo.
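i.e. after deleting the intermediate renderbuffers the list should also
be reset, along these lines (a sketch; variable names are
illustrative):

    GSList *l;

    for (l = renderbuffers; l != NULL; l = l->next)
      {
        GLuint handle = GPOINTER_TO_UINT (l->data);
        glDeleteRenderbuffers (1, &handle);
      }

    g_slist_free (renderbuffers);
    /* clear the list so a later failure can't delete them again */
    renderbuffers = NULL;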
Since a ClutterClone may be allocated a different size than its source
actor we need to apply a scale factor before painting the source actor.
We were manually using cogl_scale to do this in clutter_clone_paint but
really this kind of thing is best handled in an implementation of the
apply_transform virtual so Clutter can be aware of the transform for
other purposes, such as input transformations. Also we want to provide
an implementation of the get_paint_volume virtual where Clutter will
also expect to be able to use the apply_transform virtual to transform
the volume into its parent's coordinate space.
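The scale now lives in an apply_transform implementation along these
lines (a sketch; priv->clone_source stands in for however the source
actor is stored):

    static void
    clutter_clone_apply_transform (ClutterActor *actor,
                                   CoglMatrix   *matrix)
    {
      ClutterClonePrivate *priv = CLUTTER_CLONE (actor)->priv;
      ClutterActorBox box, source_box;

      /* chain up so the usual actor transforms still apply */
      CLUTTER_ACTOR_CLASS (clutter_clone_parent_class)
        ->apply_transform (actor, matrix);

      clutter_actor_get_allocation_box (actor, &box);
      clutter_actor_get_allocation_box (priv->clone_source, &source_box);

      /* scale the source's allocation to fit our own */
      cogl_matrix_scale (matrix,
                         (box.x2 - box.x1) / (source_box.x2 - source_box.x1),
                         (box.y2 - box.y1) / (source_box.y2 - source_box.y1),
                         1.0);
    }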
If a NULL clip is passed to clutter_stage_glx_add_redraw_clip then we
update the redraw clip to have width of 0, but we weren't setting
stage_glx->initialized_redraw_clip = TRUE. This could result in a full,
unclipped stage redraw being reduced to a clipped redraw.
This adds a verbose warning that will be displayed if
clutter_actor_allocate is passed an actor that isn't a descendant of a
ClutterStage. Layouting should always bubble up from a stage so this
condition is likely to indicate a buggy container that is allocating a
child it has already unparented.
When building actor relative transforms, instead of using the matrix
stack to combine transformations and making assumptions about what is
currently on the stack we now just explicitly initialize an identity
matrix and apply transforms to that.
This removes the full_vertex_t typedef for internal transformation code
and we just use ClutterVertex.
ClutterStage now implements apply_transform like any other actor, and
the code we had in _cogl_setup_viewport has been moved to the
stage's apply_transform instead.
ClutterStage now tracks an explicit projection matrix and viewport
geometry. The projection matrix is derived from the perspective whenever
that changes, and the viewport is updated when the stage gets a new
allocation. The SYNC_MATRICES mechanism has been removed in favour of
_clutter_stage_dirty_viewport/projection() APIs that get used when
switching between multiple stages to ensure cogl has the latest
information about the onscreen framebuffer.
This adds _clutter_actor_get_stage_internal to clutter-private.h since
we plan to use it in clutter-offscreen-effect when preparing to
redirect an actor offscreen.
Instead of doing the shlib trick, build ClutterJson (if needed) inside
the top-level clutter/ directory - similar to a non-recursive layout.
Hopefully, one day, we'll be able to do this with a real non-recursive
layout.
Let's try to keep Cogl's build as non-recursive as possible, in the hope
that one day we'll be able to make it fully non-recursive along with the
rest of Clutter.
The keysyms defines in clutter-keysyms.h are generated from the X11 key
symbols headers by doing the equivalent of a pass of sed from XK_* to
CLUTTER_*. This might lead to namespace collisions, down the road.
Instead, we should use the CLUTTER_KEY_* namespace.
This commit includes the script, taken from GDK, that parses the X11
key symbols and generates two headers:
- clutter-keysyms.h: the default included header, with CLUTTER_KEY_*
- clutter-keysyms-compat.h: the compatibility header, with CLUTTER_*
The compat.h header file is included if CLUTTER_DISABLE_DEPRECATED is
not defined - essentially deprecating all the old key symbols.
This does not change any ABI and, assuming that an application or
library is not compiling with CLUTTER_DISABLE_DEPRECATED, the source
compatibility is still guaranteed.
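For example (assuming an application compiled without
CLUTTER_DISABLE_DEPRECATED):

    /* deprecated spelling, still available via
     * clutter-keysyms-compat.h */
    guint compat_key = CLUTTER_Escape;

    /* new, namespaced spelling from clutter-keysyms.h */
    guint key = CLUTTER_KEY_Escape;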
Make sure we don't use deprecated API internally by adding
CLUTTER_DISABLE_DEPRECATED to the AM_CPPFLAGS.
This requires adding -UCLUTTER_DISABLE_DEPRECATED to the introspection
scanner CFLAGS, otherwise the deprecated API will never be introspected
and the data generated will not be compatible.
When animating an actor through clutter_actor_animate() and friends we
might want to forcibly detach the animation instance from the actor in
order to start a new one - for instance, in response to user
interaction.
Currently, there is no way to do that except in a very convoluted way,
by emitting the ::completed signal and adding a special case in the
signal handlers; this is due to the fact that clutter_actor_animate()
adds more logic than the one added by clutter_animation_set_object(),
so calling set_object(NULL) or unreferencing the animation instance
itself won't be enough.
The right way to approach this is to add a new method to Clutter.Actor
that detaches any eventual Animation currently referencing it.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2051
If we're depending on an uninstalled .gir, use --include-uninstalled.
We need to explicitly specify Cogl to let the scanner know it's also
uninstalled.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Flushing the framebuffer state can cause some drawing to occur if the
framebuffer has a clip stack which needs the stencil buffer. This was
causing the array pointers set up by enable_state_for_drawing_buffer
to get mangled so it would crash when it hits glDrawArrays. This patch
moves the framebuffer state flush to before it sets up the array
pointers.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2297
I think this is what commit 2cf1405506 intended to do since it
specifically mentioned cleaning up the trap in
clutter_x11_texture_pixmap_set_pixmap, but although it moved the untrap
to only be done in the case where Pixmap != None it left the position of
the trap itself unchanged. This meant the error trapping wouldn't be
balanced if pixmap == None since the untrap wouldn't be done. We now
only trap and untrap around the XGetGeometry call done when pixmap !=
None.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2303
With currently distributed versions of Mesa, calling XFreePixmap()
before glXDestroyPixmap() will cause an X error from DRI. So, we
need to make sure that we get rid of the CoglTexturePixmapX11 before
we XFreePixmap().
clutter_x11_texture_pixmap_dispose(): Call
clutter_x11_texture_pixmap_set_pixmap() instead of using XFreePixmap
directly so that we leverage the text-clearing hack and destroy
things in the right order.
clutter_x11_texture_pixmap_set_pixmap(): Don't do a pointless roundtrip
and trap a pointless error when setting pixmap to None.
clutter_x11_texture_pixmap_set_pixmap(): Free damage resources when
we are setting Pixmap to None.
clutter_x11_texture_pixmap_set_window(): When setting a new window
or setting the window to None, immediately call
clutter_x11_texture_pixmap_set_pixmap(). This means that set_window(None)
immediately will free any referenced resources related to the window.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2303
Comprehensively add (out) annotations to function parameters
returning int/float/double.
Not handled here: structure out returns like ClutterColor or
ClutterPerspective or GValue that should get (out caller-allocates).
Not handled here: Cogl
http://bugzilla.clutter-project.org/show_bug.cgi?id=2302
(element-type) should have a full name like Clutter.Actor rather than
a non-namespaced name like Actor. gobject-introspection has become
more strict about this with the recent scanner rewrite.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2301
*** This is an API change ***
Replaced the original drag-threshold property with two separate
horizontal (x-drag-threshold) and vertical (y-drag-threshold)
thresholds.
It is sometimes necessary to have different drag thresholds for the
horizontal and vertical axes. For example, when a draggable actor is
inside a horizontal scrolling area, only vertical movement must begin
dragging. That can be achieved by setting the x-drag-threshold to
G_MAXUINT while y-drag-threshold is something usual, say, 20 pixels.
This is different from the drag axis, because after the threshold
has been cleared by the pointer, the draggable actor can be dragged
along both axes (if allowed by the drag-axis property).
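The scrolling-area case described above could be set up like this (a
sketch; property names and values are as per the description above):

    /* vertical movement starts dragging after 20 pixels; horizontal
     * movement alone never clears the threshold */
    g_object_set (drag_action,
                  "x-drag-threshold", G_MAXUINT,
                  "y-drag-threshold", 20,
                  NULL);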
http://bugzilla.clutter-project.org/show_bug.cgi?id=2291
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Creating new materials for every Texture instance results in a lot of
ARBfp programs being generated/compiled. Since most textures will just
be similar we should create a template material for all of them, and
then copy it in every instance. Cogl will try to optimize the generation
of the program and, hopefully, will reuse the same program most of the
time.
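i.e. something along these lines (a sketch; the function name is
illustrative):

    static CoglMaterial *
    create_default_material (void)
    {
      static CoglMaterial *texture_template_material = NULL;

      /* create the template once, then copy it for each Texture so
       * Cogl can recognize the materials as similar and reuse the
       * same generated program */
      if (G_UNLIKELY (texture_template_material == NULL))
        texture_template_material = cogl_material_new ();

      return cogl_material_copy (texture_template_material);
    }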
With this change, a simple test shows that loading 48 textures will
result in just two programs being compiled - with and without batching
enabled.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2295
When disposing a material layer of type 'texture' we should check that
the texture handle is still valid before calling cogl_handle_unref().
This avoids an assertion failure when disposing a ClutterTexture.
This patch merges in substantial work from
Emmanuele Bassi <ebassi@linux.intel.com>
* Use new introspection --include-uninstalled API since we don't want
to try to find the clutter-1.0.pc file before it's installed.
* Use --pkg-export for Clutter-1.0.gir, since we want the .gir file to
contain the associated pkg-config file.
* Drop the use of --pkg for dependencies; those come from the associated
.gir files. (Actually, --pkg is almost never needed)
* Add --quiet
http://bugzilla.clutter-project.org/show_bug.cgi?id=2292
Intel CE3100 and CE4100 have several planes (framebuffers) and a
hardware blender to blend the planes togeteher to produce the final
image.
clutter_cex100_set_plane() lets you configure which framebuffer clutter
will use for its rendering.
• Use the public COGL_HAS_GLES[12] define instead of the HAVE_COGL_*
ones which are private and defined in config.h,
• Install clutter-egl-headers.h which is needed by clutter-egl.h,
• Remove clutter-stage.h as it's unneeded and does not work since the
single clutter.h include policy,
• Install the egl headers into their own egl directory as the x11 and
glx backends do. The include should then be <clutter/egl/clutter-egl.h>,
so document it. It does not really break anything as nobody could
have used those broken headers.
Intel CE3100 and CE4100 SoCs are designed for TVs. They have separate
framebuffers that are blended together by a piece of hardware to make
the final output. The library that allows you to initialize and
configure those planes is called GDL. An EGL GDL winsys can then be
used with those planes as NativeWindowType to select which plane to use.
This patch adds a new ClutterBackendCex100 backend that can be
selected at compile time with the new --with-flavour=cex100 option.
Some minor fixes here and there: missing include, wrongly placed #endif,
unused variable warning fixes, missing #ifdef.
Make ClutterStageEGL a subclass of either ClutterStageX11 or GObject
depending if you compile with X11 support (EGLX) or not (native).
*** This is an API change ***
The create_target() virtual function should return a CoglHandle to a
texture; clutter_offscreen_effect_get_target(), instead, returns a
CoglMaterial to be painted in the implementation of the paint_target()
virtual function.
Instead of equating textures with materials, and confusing the user of
the API, we should mark the difference more prominently.
First of all, we should return a CoglMaterial* (now that we have that
as a public type) in get_target(); having handles all over the place
does not make it easier to distinguish the semantics of the virtual
functions.
Then we should rename create_target() to create_texture(), to make it
clear that what should be returned is a texture that is used as the
backing for the offscreen framebuffer.
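With the material-based target, a paint_target() implementation can
tweak the material directly before chaining up (a sketch; MyEffect is
hypothetical):

    static void
    my_effect_paint_target (ClutterOffscreenEffect *effect)
    {
      CoglMaterial *target = clutter_offscreen_effect_get_target (effect);

      /* modulate the redirected actor with a constant color before
       * the texture-backed material gets painted */
      cogl_material_set_color4ub (target, 255, 0, 0, 255);

      /* chain up to paint the target material as usual */
      CLUTTER_OFFSCREEN_EFFECT_CLASS (my_effect_parent_class)
        ->paint_target (effect);
    }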
Commit eae4561929 tried to clean up how it checks for the private actor
flags. However the check for the 'IN_DESTRUCTION' flag in the Win32
backend got inverted so it would always clear the current
context. This was causing _cogl_check_driver_valid to fail later and
then the realize would get stuck in an infinite loop.
When we free a state because there are no more keys with it as a
target, use a goto to re-initialize temporary variables that have
become invalid.
Fixes bug #2273
In 965907deb3 the picking was changed to render the full stage
instead of a single pixel whenever picking is performed more than once
between paints. However the condition in the if-statement was
backwards so it would end up always doing a full stage render.
The glx and egl(x) backends export some internal symbols. Hide these
symbols (using '_' prefix) to reduce ABI differentiation between the
glx and eglx flavours.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2267
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>