Allow the developer to set whether the Stage should receive key focus
when mapped. The implementation is fully backend-dependent. The default
value is TRUE because that matches the behaviour we have had so far.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2500
If an actor (unfortunately) queues a relayout from within a relayout, you
would end up with the stage's needs_allocation flag set to TRUE and
stage->relayout_pending set to TRUE. But if, in the same cycle, an actor
then calls clutter_actor_get_allocation_box, that will trigger another
(recursive) _clutter_stage_maybe_relayout, which will wrongly reset
relayout_pending to FALSE while not actually performing a new relayout
because of the re-entrancy protection.
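A minimal standalone sketch of the failure mode, with the guard and flag
modelled as plain booleans (illustrative names, not the actual Clutter
source):

    #include <glib.h>

    static gboolean in_relayout = FALSE;
    static gboolean relayout_pending = FALSE;

    static void
    maybe_relayout (void)
    {
      /* cleared unconditionally... */
      relayout_pending = FALSE;

      /* ...even when the re-entrancy protection bails out here */
      if (in_relayout)
        return;

      in_relayout = TRUE;
      /* Allocate the tree.  An actor that queues a relayout from here
       * sets relayout_pending back to TRUE, but if it then calls
       * clutter_actor_get_allocation_box() a nested maybe_relayout()
       * clears the flag again without doing any work, and the queued
       * relayout is silently lost. */
      in_relayout = FALSE;
    }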
http://bugzilla.clutter-project.org/show_bug.cgi?id=2503
The stage has a dirty flag to record whenever the viewport and
projection matrices need to be flushed. However, after flushing them, the
flags were never cleared, so the state would always be redundantly
updated.
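Illustrative fix pattern; the struct and field names are stand-ins for
the stage's private state:

    #include <glib.h>

    typedef struct {
      gboolean dirty_viewport;
      gboolean dirty_projection;
    } StageState;

    static void
    flush_viewport_and_projection (StageState *state)
    {
      if (state->dirty_viewport)
        {
          /* ... flush the viewport to Cogl ... */
          state->dirty_viewport = FALSE;   /* previously never cleared */
        }

      if (state->dirty_projection)
        {
          /* ... flush the projection matrix to Cogl ... */
          state->dirty_projection = FALSE; /* previously never cleared */
        }
    }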
http://bugzilla.clutter-project.org/show_bug.cgi?id=2480
In clutter_stage_real_queue_redraw we were checking to see if the
backend will ignore any subsequent redraw_clip so we can avoid the cost
of projecting the paint-volume of an actor into stage coordinates, but
we weren't ensuring that a full redraw would be queued; instead we just
bailed out immediately. This change makes sure to call
_clutter_stage_window_add_redraw_clip (stage_window, NULL) in that case,
so that the backend will do an un-clipped redraw.
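Sketch of the fixed control flow; only
_clutter_stage_window_add_redraw_clip() is taken from the description
above, the predicate name is illustrative:

    /* If the backend ignores redraw clips anyway, don't just bail out:
     * explicitly queue an un-clipped redraw (a NULL clip means "redraw
     * everything") and skip the paint-volume projection. */
    if (backend_ignores_redraw_clips (stage_window))   /* illustrative */
      {
        _clutter_stage_window_add_redraw_clip (stage_window, NULL);
        return;
      }

    /* ... otherwise project the actor's paint volume into stage
     * coordinates and add it as the redraw clip ... */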
Previously we were leaving it up to the default implementation of
get_paint_volume in ClutterGroup to handle the stage by determining the
bounding box of all contained children. This isn't the true bounding box
of the stage though since the stage is responsible for clearing the
entire framebuffer at the start of the frame. This adds a
get_paint_volume implementation for ClutterStage which simply returns
FALSE, which means Clutter has to assume the stage covers everything.
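The shape of that implementation, following the ClutterActorClass
get_paint_volume vfunc (a sketch, not the verbatim patch):

    #include <clutter/clutter.h>

    static gboolean
    clutter_stage_get_paint_volume (ClutterActor       *actor,
                                    ClutterPaintVolume *volume)
    {
      /* The stage clears the whole framebuffer at the start of each
       * frame, so a union of the children's volumes would under-report
       * what it paints.  Returning FALSE means the volume is undefined,
       * i.e. the stage must be assumed to cover everything. */
      return FALSE;
    }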
Once an actor had _clutter_stage_queue_redraw_entry_invalidate() called
on it, priv->queue_redraw_entry would point to an entry with entry->actor
set to NULL. _clutter_stage_queue_actor_redraw() doesn't handle this case
and no further redraws would be queued.
To fix this, NULL out priv->queue_redraw_entry and then make sure
we free the invalidated entry in
_clutter_stage_maybe_finish_queue_redraws() just as we do for
still valid entries.
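A simplified model of the fix; the types here are stand-ins for the
private entry and actor state:

    #include <glib.h>

    typedef struct {
      gpointer actor;              /* NULL once the entry is invalidated */
    } Entry;

    typedef struct {
      Entry *queue_redraw_entry;   /* the actor's pointer into the stage's
                                    * pending-redraw list */
    } ActorState;

    static void
    entry_invalidate (Entry *entry, ActorState *state)
    {
      entry->actor = NULL;              /* mark the entry as dead */
      state->queue_redraw_entry = NULL; /* the fix: forget the dead entry
                                         * so the next queue_redraw creates
                                         * a fresh one instead of reusing
                                         * the invalidated one */
    }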
http://bugzilla.clutter-project.org/show_bug.cgi?id=2389
* private-cleanup:
Add copyright notices
Clean up clutter-private.h/6
Clean up clutter-private.h/5
Clean up clutter-private.h/4
Clean up clutter-private.h/3
Clean up clutter-private.h/2
Clean up clutter-private.h/1
When handling an allocation on the stage, Clutter uses the opportunity
to inform Cogl of the new size of the framebuffer so that it can
handle the viewport correctly. It queries the size of the window
implementation using a backend virtual function. However it was doing
this before letting the backend handle the allocation so on Win32 it
would end up using the previous framebuffer size. This wasn't
affecting the X11 backend because in that case the resizes are
asynchronous so setting the stage size causes one allocation which
ends up sending a window size request. Eventually a ConfigureNotify is
received which causes the size of the stage to be set again and
another allocation is fired, meaning the framebuffer size will be set
again, this time with the correct size. On Win32 the resizes are
synchronous so we don't get this second allocation.
Move the private Backend API to a separate header.
This also allows us to finally move the class vtable and instance
structure to a separate file and plug the visibility hole that left
the Backend class bare for everyone to poke into.
In the case that an unclipped redraw of an actor is queued after a
clipped one, we should update any existing ClutterStageQueueRedrawEntry
so that entry->has_clip = FALSE and free the previous clip.
The clutter stage has a list of entries of actors waiting to be redrawn.
Each entry has a "clip" ClutterPaintVolume member which represents how
much of the actor needs to get redrawn. It's possible for there to be no
clip associated with the entry; in that case the clip member is invalid
and the has_clip member should be set to FALSE.
This commit fixes a bug where the has_clip member was not explicitly
initialized to FALSE for new entries, and was not explicitly reset to
FALSE in the event that the clip associated with the entry is freed.
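A sketch of the invariant being restored, with simplified types:
has_clip must always mirror whether the stored clip is valid.

    #include <glib.h>

    typedef struct {
      gpointer actor;
      gboolean has_clip;   /* FALSE whenever the clip member is invalid */
      gpointer clip;       /* stand-in for the ClutterPaintVolume member */
    } Entry;

    static Entry *
    entry_new (gpointer actor)
    {
      Entry *entry = g_new0 (Entry, 1);

      entry->actor = actor;
      entry->has_clip = FALSE;       /* explicitly start out unclipped */
      return entry;
    }

    static void
    entry_drop_clip (Entry *entry)
    {
      if (entry->has_clip)
        {
          /* ... release whatever the clip member holds ... */
          entry->clip = NULL;
          entry->has_clip = FALSE;   /* previously left stale */
        }
    }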
http://bugzilla.clutter-project.org/show_bug.cgi?id=2350
Signed-off-by: Robert Bragg <robert@linux.intel.com>
During destruction, the StageWindow implementation associated with a Stage
might be NULL. We need to add more checks for a) the IN_DESTRUCTION flag
being set and b) the StageWindow pointer being NULL. Otherwise, we will
get warnings during the destruction of the Stage.
Instead of immediately, recursively emitting the "queue-redraw" signal
when clutter_actor_queue_redraw is called we now defer this process
until all stage updates are complete. This allows us to aggregate
repeated _queue_redraw requests for the same actor avoiding redundant
paint volume transformations. By deferring we also increase the
likelihood that the actor will have a valid paint volume since it will
have an up to date allocation; this in turn means we will more often be
able to automatically queue clipped redraws which can have a big impact
on performance.
Here's an outline of the actor queue redraw mechanism:
The process starts in clutter_actor_queue_redraw or
_clutter_actor_queue_redraw_with_clip.
These functions queue an entry in a list associated with the stage: a
list of actors that queued a redraw while updating the timelines,
performing layout and processing other mainloop sources before the next
paint starts.
We aim to minimize the processing done at this point because there is a
good chance other events will happen while updating the scenegraph that
would invalidate any expensive work we might otherwise try to do here.
For example, we don't try to resolve the screen space bounding box of an
actor at this stage (which would determine how much of the screen to
redraw) because it's possible something else will happen which will force
a full redraw anyway.
When all updates are complete and we come to paint the stage (see
_clutter_stage_do_update) then we iterate this list and actually emit
the "queue-redraw" signals for each of the listed actors which will
bubble up to the stage for each actor and at that point we will
transform the actor's paint volume into screen coordinates to determine
the clip region for what needs to be redrawn in the next paint.
Note: actors are allowed to queue a redraw in response to a
queue-redraw signal so we repeat the processing of the list until it
remains empty. An example of when this happens is for Clone actors or
clutter_texture_new_from_actor actors which need to queue a redraw if
their source queues a redraw.
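In outline, the deferred processing loop looks roughly like this
(illustrative names, GLib list handling); it repeats until emission stops
adding new entries:

    #include <glib.h>

    /* Emit the "queue-redraw" signal for one pending entry; left as a
     * placeholder here. */
    static void
    emit_queue_redraw (gpointer entry)
    {
    }

    static void
    finish_queue_redraws (GSList **pending)
    {
      /* Handlers may queue further redraws (e.g. a ClutterClone whose
       * source is redrawn), so keep going until nothing new was added. */
      while (*pending != NULL)
        {
          GSList *entries = *pending;
          GSList *l;

          *pending = NULL;   /* detach first: emission can re-populate it */

          for (l = entries; l != NULL; l = l->next)
            emit_queue_redraw (l->data);

          g_slist_free (entries);
        }
    }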
In clutter_stage_allocate at the end we were always querying the latest
allocation set and using the geometry to assert the viewport and then
kicking a full redraw. These only need to be done when the allocation
really changes, so we now read the previous allocation at the start of
the function and compare at the end. This was stopping clipped redraws
from being used in a lot of cases.
Since clutter_actor_queue_redraw now automatically clips redraws
according to the paint volume of the actor we have to be careful to
ensure we really force a full redraw when the stage is allocated a new
size or the stage viewport changes.
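Sketch of the check, using the public ClutterActorBox fields; the actual
allocation work is elided:

    #include <clutter/clutter.h>

    static gboolean
    box_changed (const ClutterActorBox *a, const ClutterActorBox *b)
    {
      return a->x1 != b->x1 || a->y1 != b->y1 ||
             a->x2 != b->x2 || a->y2 != b->y2;
    }

    static void
    stage_allocate_sketch (ClutterActor *self)
    {
      ClutterActorBox prev, cur;

      clutter_actor_get_allocation_box (self, &prev);

      /* ... chain up / perform the actual allocation here ... */

      clutter_actor_get_allocation_box (self, &cur);

      if (box_changed (&prev, &cur))
        {
          /* the geometry really changed: reassert the viewport and
           * queue a full, un-clipped redraw of the stage */
        }
      /* otherwise do nothing, so clipped redraws remain possible */
    }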
This uses actor paint volumes to perform culling during
clutter_actor_paint.
When performing a clipped redraw (because only a few localized actors
changed) then as we traverse the scenegraph painting the actors we can
now ignore actors that don't intersect the clip region. Early testing
shows this can have a big performance benefit; e.g. a 100% fps
improvement for test-state with culling enabled, and we hope that there
are even more compelling examples than that in the real world.
Most Clutter applications are 2Dish interfaces and have quite a lot of
actors that get continuously painted when anything is animated. The
dynamic actors are often localized to an area of user focus though so
with culling we can completely avoid painting any of the static actors
outside the current clip region.
Obviously the cost of culling has to be offset against the cost of
painting to determine if it's a win, but our (limited) testing suggests
it should be a win for most applications.
Note: we hope we will be able to also bring another performance bump
from culling with another iteration - hopefully in the 1.6 cycle - to
avoid doing the culling in screen space and instead do it in the stage's
model space. This will hopefully let us minimize the cost of
transforming the actor volumes for culling.
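This is not the internal culling code (which works on paint volumes in
screen space), but the same test can be expressed with the public
clutter_actor_get_paint_box() helper:

    #include <clutter/clutter.h>

    /* Decide whether an actor needs painting for a clipped redraw whose
     * stage-space bounds are in 'clip'.  If the paint box can't be
     * determined we must paint, since we can't prove the actor lies
     * outside the clip. */
    static gboolean
    actor_needs_paint (ClutterActor *actor, const ClutterActorBox *clip)
    {
      ClutterActorBox box;

      if (!clutter_actor_get_paint_box (actor, &box))
        return TRUE;

      return box.x1 < clip->x2 && box.x2 > clip->x1 &&
             box.y1 < clip->y2 && box.y2 > clip->y1;
    }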
This adds a private ->relayout_pending boolean similar in spirit to
redraw_pending. This will allow us to queue a relayout without
implicitly queueing a redraw; instead we can depend on the actions
of a relayout to queue any necessary redraw.
There is an internal _clutter_actor_queue_redraw_with_clip API that gets
used for texture-from-pixmap to minimize what we redraw in response to
Damage events. It was previously working in terms of a ClutterActorBox
but it has now been changed so that an actor can queue a redraw of a
volume instead.
The plan is that clutter_actor_queue_redraw will start to transparently
use _clutter_actor_queue_redraw_with_clip when it can determine a paint
volume for the actor.
This is a fairly extensive second pass at exposing paint volumes for
actors.
The API has changed to allow clutter_actor_get_paint_volume to fail
since there are times - such as when an actor isn't a descendant of the
stage - when the volume can't be determined. Another example is when
something has connected to the "paint" signal of the actor and we simply
have no way of knowing what might be drawn in that handler.
The API has also been changed to return a const ClutterPaintVolume pointer
(transfer none) so we can avoid having to dynamically allocate the
volumes in the most common/performance critical code paths. Profiling was
showing the slice allocation of volumes taking about 1% of an app's time,
for some fairly basic tests. Most volumes can now simply be allocated on
the stack; for clutter_actor_get_paint_volume we return a pointer to
&priv->paint_volume and if we need a more dynamic allocation there is
now a _clutter_stage_paint_volume_stack_allocate() mechanism which lets
us allocate data which expires at the start of the next frame.
The API has been extended to make it easier to implement
get_paint_volume for containers by using
clutter_actor_get_transformed_paint_volume and
clutter_paint_volume_union. The first allows you to query the paint
volume of a child but transformed into parent actor coordinates. The
second lets you combine volumes together so you can union all the
volumes for a container's children and report that as the container's
own volume.
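For example, a container's get_paint_volume can be built from those two
helpers. The child list is passed in explicitly here to keep the sketch
self-contained; a real implementation would walk its own children:

    #include <clutter/clutter.h>

    static gboolean
    group_get_paint_volume (ClutterActor       *actor,
                            ClutterPaintVolume *volume,
                            GList              *children)
    {
      GList *l;

      for (l = children; l != NULL; l = l->next)
        {
          ClutterActor *child = l->data;
          const ClutterPaintVolume *child_volume;

          /* child volume expressed in this container's coordinate space */
          child_volume =
            clutter_actor_get_transformed_paint_volume (child, actor);
          if (child_volume == NULL)
            return FALSE;   /* unknown child volume: report failure */

          clutter_paint_volume_union (volume, child_volume);
        }

      return TRUE;
    }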
The representation of paint volumes has been updated to consider that
2D actors are the most common.
The effect APIs, clutter-texture and clutter-group have been updated
accordingly.
This is all internal, so we shouldn't need it; unfortunately, it seems
we're passing invalid data internally, so for the time being catching
inconsistencies should at least emit a warning for us to backtrace.
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There was a problem though in that we were tracking this information in
the ClutterMainContext, but conceptually this doesn't really make sense
because the pick buffer is associated with a stage framebuffer and there
can be multiple stages for one context.
This patch moves the state tracking to ClutterStage.
This reverts commit d7e86e2696.
This was a half-baked patch that was pushed a bit early, since it broke
test-texture-pick-with-alpha and the commit message refers to a change on
the wip/paint-box branch that hasn't happened yet.
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There were two problems with how we were tracking this state though.
Firstly we were tracking this information in the ClutterMainContext, but
conceptually this doesn't really make sense because the pick buffer is
associated with a stage framebuffer and there can be multiple stages for
one context. Secondly - since the change to how redraws are queued - we
weren't marking the pick buffer as invalid when queuing a redraw; we
were only marking the buffer invalid when signaling/finishing the
queue-redraw process, which is now deferred until just before a paint.
This meant using clutter_stage_get_actor_at_pos after a scenegraph
change could give a wrong result if it just read from an existing (but
technically invalid) pick buffer.
This patch moves the state tracking to ClutterStage, and ensures the
buffer is invalidated in _clutter_stage_queue_actor_redraw.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2283
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When building actor relative transforms, instead of using the matrix
stack to combine transformations and making assumptions about what is
currently on the stack we now just explicitly initialize an identity
matrix and apply transforms to that.
This removes the full_vertex_t typedef for internal transformation code
and we just use ClutterVertex.
ClutterStage now implements apply_transform like any other actor
and the code we had in _cogl_setup_viewport has been moved to the
stage's apply_transform instead.
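For reference, the general shape of an apply_transform override for an
ordinary actor (a sketch; whether the subclass transform goes before or
after the chain-up depends on the composition you want, and
my_actor_parent_class is assumed to come from G_DEFINE_TYPE):

    #include <clutter/clutter.h>

    static void
    my_actor_apply_transform (ClutterActor *actor,
                              CoglMatrix   *matrix)
    {
      /* let the parent class apply the inherited transformations */
      CLUTTER_ACTOR_CLASS (my_actor_parent_class)->apply_transform (actor,
                                                                    matrix);

      /* then append this actor's own transformation, e.g. a translation */
      cogl_matrix_translate (matrix, 10.0f, 0.0f, 0.0f);
    }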
ClutterStage now tracks an explicit projection matrix and viewport
geometry. The projection matrix is derived from the perspective whenever
that changes, and the viewport is updated when the stage gets a new
allocation. The SYNC_MATRICES mechanism has been removed in favour of
_clutter_stage_dirty_viewport/projection() APIs that get used when
switching between multiple stages to ensure cogl has the latest
information about the onscreen framebuffer.
Some apps or some use cases don't need to clear the stage on immediate
rendering GPUs. A media player playing a fullscreen video or a
tile-based game, for instance.
These apps are redrawing the whole screen, so we can avoid clearing the
color buffer when preparing to paint the stage, since there is no
blending with the stage color being performed.
We can add a private set of hints to ClutterStage, and expose accessors
for each potential hint; the first hint is the 'no-clear' one.
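Usage sketch, assuming the accessor ends up exposed as
clutter_stage_set_no_clear_hint(); suitable for an app that repaints
every pixel each frame, such as a fullscreen video player:

    #include <clutter/clutter.h>

    int
    main (int argc, char *argv[])
    {
      ClutterActor *stage;

      if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
        return 1;

      stage = clutter_stage_get_default ();

      /* the whole stage is redrawn every frame, so skip the implicit
       * color-buffer clear before painting */
      clutter_stage_set_no_clear_hint (CLUTTER_STAGE (stage), TRUE);

      clutter_actor_show (stage);
      clutter_main ();

      return 0;
    }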
http://bugzilla.clutter-project.org/show_bug.cgi?id=2058
The marshallers we use for the signals are declared in a private header,
and it stands to reason that they should also be hidden in the shared
object by using the common '_' prefix. We were also using some direct
g_cclosure_marshal_* symbols from GLib, instead of consistently using the
clutter_marshal_* symbols.
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» Queuing multiple clipped redraws will result in the bounding box of
each clip rectangle being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» Note: no empirical data was used to come up with this threshold so
we may need to tune this.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted you now also
need to override the signal handler and queue your own redraw.
Technically this is a semantic break, but it's assumed that no one
is currently doing this.
This still leaves a few unsolved issues with regards to optimizing sub
stage redraws that need to be addressed in further work so this can only
be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. It is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor because actors have no way to report a "paint box". (remember
actors can draw outside their allocation and actors with depth may
also be projected outside of their allocation)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
* stage-min-size-rework:
docs: Update minimum size accessors
actor: Use the TOPLEVEL flag instead of a type check
[stage] Use min-width/height props for min size
Since using addresses that might change is something that the FSF
finally acknowledges as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI for
getting the license in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521