During destruction, the StageWindow implementation associated with a Stage
might be NULL. We need to add more checks for a) the IN_DESTRUCTION flag
being set and b) the StageWindow pointer being NULL. Otherwise, we will
get warnings during the destruction of the Stage.
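A minimal sketch of the kind of guard needed (the function is
illustrative, and CLUTTER_ACTOR_IN_DESTRUCTION stands in for whichever
flag check the code actually uses):

  static void
  clutter_stage_do_something (ClutterStage *stage)
  {
    ClutterStagePrivate *priv = stage->priv;

    /* bail out early while the stage is being destroyed, or once the
     * StageWindow implementation is already gone */
    if (CLUTTER_ACTOR_IN_DESTRUCTION (stage) || priv->impl == NULL)
      return;

    /* safe to dereference the StageWindow implementation here */
  }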
Instead of immediately and recursively emitting the "queue-redraw" signal
when clutter_actor_queue_redraw is called, we now defer this process
until all stage updates are complete. This allows us to aggregate
repeated _queue_redraw requests for the same actor, avoiding redundant
paint volume transformations. By deferring we also increase the
likelihood that the actor will have a valid paint volume, since it will
have an up-to-date allocation; this in turn means we will more often be
able to automatically queue clipped redraws, which can have a big impact
on performance.
Here's an outline of the actor queue redraw mechanism:
The process starts in clutter_actor_queue_redraw or
_clutter_actor_queue_redraw_with_clip.
These functions add an entry to a list, associated with the stage, of
actors that queued a redraw while updating the timelines, performing
layouting and processing other mainloop sources before the next paint
starts.
We aim to minimize the processing done at this point because there is a
good chance other events will happen while updating the scenegraph that
would invalidate any expensive work we might otherwise try to do here.
For example, we don't try to resolve the screen-space bounding box of an
actor at this stage (which would let us minimize how much of the screen
we redraw) because it's possible something else will happen which will
force a full redraw anyway.
When all updates are complete and we come to paint the stage (see
_clutter_stage_do_update), we iterate this list and actually emit the
"queue-redraw" signal for each of the listed actors; each signal bubbles
up to the stage, and at that point we transform the actor's paint volume
into screen coordinates to determine the clip region for what needs to
be redrawn in the next paint.
Note: actors are allowed to queue a redraw in response to a
queue-redraw signal, so we repeat the processing of the list until it
remains empty. An example of when this happens is Clone actors, or
clutter_texture_new_from_actor actors, which need to queue a redraw if
their source queues a redraw.
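A sketch of the repeat-until-empty step in _clutter_stage_do_update
(list and emission details are illustrative):

  /* handlers may queue further redraws, so drain the list until it
   * stays empty */
  while (priv->pending_queue_redraws != NULL)
    {
      GList *entries = priv->pending_queue_redraws;
      GList *l;

      priv->pending_queue_redraws = NULL;

      for (l = entries; l != NULL; l = l->next)
        {
          ClutterActor *actor = l->data;

          /* emits "queue-redraw", which bubbles up to the stage */
          g_signal_emit_by_name (actor, "queue-redraw", actor);
        }

      g_list_free (entries);
    }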
At the end of clutter_stage_allocate we were always querying the latest
allocation set, using the geometry to assert the viewport, and then
kicking off a full redraw. These steps only need to happen when the
allocation really changes, so we now read the previous allocation at the
start of the function and compare it at the end. Always doing them was
stopping clipped redraws from being used in a lot of cases.
Since clutter_actor_queue_redraw now automatically clips redraws
according to the paint volume of the actor we have to be careful to
ensure we really force a full redraw when the stage is allocated a new
size or the stage viewport changes.
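A sketch of the compare-at-the-end approach (clutter_actor_box_equal is
assumed here; the real code may well compare the coordinates by hand):

  static void
  clutter_stage_allocate (ClutterActor           *self,
                          const ClutterActorBox  *box,
                          ClutterAllocationFlags  flags)
  {
    ClutterActorBox prev = { 0, };

    /* read the previous allocation before it gets replaced */
    clutter_actor_get_allocation_box (self, &prev);

    /* ... perform the normal allocation ... */

    /* only assert the viewport and kick a full stage redraw when the
     * allocation really changed */
    if (!clutter_actor_box_equal (box, &prev))
      {
        /* set up the viewport and queue a full, unclipped redraw */
      }
  }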
This uses actor paint volumes to perform culling during
clutter_actor_paint.
When performing a clipped redraw (because only a few localized actors
changed), as we traverse the scenegraph painting the actors we can now
ignore actors that don't intersect the clip region. Early testing shows
this can have a big performance benefit; e.g. a 100% fps improvement for
test-state with culling enabled, and we hope there are even more
compelling examples than that in the real world.
Most Clutter applications are 2D-ish interfaces and have quite a lot of
actors that get continuously painted when anything is animated. The
dynamic actors are often localized to an area of user focus, though, so
with culling we can completely avoid painting any of the static actors
outside the current clip region.
Obviously the cost of culling has to be offset against the cost of
painting to determine if it's a win, but our (limited) testing suggests
it should be a win for most applications.
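Conceptually, the per-actor test during a clipped paint boils down to
something like this sketch (the intersection helper is hypothetical):

  /* return TRUE if the actor can be skipped for this clipped redraw */
  static gboolean
  cull_actor (ClutterActor *actor, const ClutterGeometry *clip)
  {
    const ClutterPaintVolume *volume =
      clutter_actor_get_paint_volume (actor);

    if (volume == NULL)
      return FALSE; /* unknown volume: paint it to be safe */

    /* hypothetical helper: project the volume into screen space and
     * test it against the clip region */
    return !paint_volume_intersects (volume, clip);
  }

Actors for which this returns TRUE are simply skipped while traversing
the scenegraph in clutter_actor_paint.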
Note: we hope we will be able to also bring another performance bump
from culling with another iteration - hopefully in the 1.6 cycle - to
avoid doing the culling in screen space and instead do it in the stage's
model space. This will hopefully let us minimize the cost of
transforming the actor volumes for culling.
This adds a private ->relayout_pending boolean similar in spirit to
redraw_pending. This will allow us to queue a relayout without
implicitly queueing a redraw; instead we can depend on the actions
of a relayout to queue any necessary redraw.
There is an internal _clutter_actor_queue_redraw_with_clip API that gets
used for texture-from-pixmap to minimize what we redraw in response to
Damage events. It was previously working in terms of a ClutterActorBox,
but it has now been changed so an actor can queue a redraw of a volume
instead.
The plan is that clutter_actor_queue_redraw will start to transparently
use _clutter_actor_queue_redraw_with_clip when it can determine a paint
volume for the actor.
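The fall-through might look roughly like this (the internal signatures
here are assumptions, not the final API):

  void
  clutter_actor_queue_redraw (ClutterActor *self)
  {
    const ClutterPaintVolume *volume =
      clutter_actor_get_paint_volume (self);

    if (volume != NULL)
      /* clip the redraw to the actor's paint volume */
      _clutter_actor_queue_redraw_with_clip (self, volume);
    else
      /* no volume could be determined: queue an unclipped redraw */
      _clutter_actor_queue_redraw_full (self);
  }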
This is a fairly extensive second pass at exposing paint volumes for
actors.
The API has changed to allow clutter_actor_get_paint_volume to fail,
since there are times - such as when an actor isn't a descendant of the
stage - when the volume can't be determined. Another example is when
something has connected to the "paint" signal of the actor and we simply
have no way of knowing what might be drawn in that handler.
The API has also been changed to return a const ClutterPaintVolume
pointer (transfer none), so we can avoid having to dynamically allocate
the volumes in the most common/performance-critical code paths. Profiling
was showing the slice allocation of volumes taking about 1% of an app's
time for some fairly basic tests. Most volumes can now simply be
allocated on the stack; for clutter_actor_get_paint_volume we return a
pointer to &priv->paint_volume, and if we need a more dynamic allocation
there is now a _clutter_stage_paint_volume_stack_allocate() mechanism
which lets us allocate data which expires at the start of the next frame.
The API has been extended to make it easier to implement
get_paint_volume for containers by using
clutter_actor_get_transformed_paint_volume and
clutter_paint_volume_union. The first allows you to query the paint
volume of a child but transformed into parent actor coordinates. The
second lets you combine volumes together so you can union all the
volumes for a container's children and report that as the container's
own volume.
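For example, a container's get_paint_volume could be implemented along
these lines (the private child list is illustrative):

  static gboolean
  my_container_get_paint_volume (ClutterActor       *self,
                                 ClutterPaintVolume *volume)
  {
    GList *l;

    for (l = MY_CONTAINER (self)->priv->children; l; l = l->next)
      {
        ClutterActor *child = l->data;
        const ClutterPaintVolume *child_volume;

        /* the child's volume, transformed into self's coordinates */
        child_volume =
          clutter_actor_get_transformed_paint_volume (child, self);
        if (child_volume == NULL)
          return FALSE; /* a child failed, so our volume is unknown */

        clutter_paint_volume_union (volume, child_volume);
      }

    return TRUE;
  }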
The representation of paint volumes has been updated to consider that
2D actors are the most common.
The effect APIs, clutter-texture and clutter-group have been updated
accordingly.
This is all internal, so we shouldn't need it; unfortunately, it seems
we're passing invalid data internally, so for the time being catching
inconsistencies should at least emit a warning that we can backtrace.
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There was a problem though in that we were tracking this information in
the ClutterMainContext, but conceptually this doesn't really make sense
because the pick buffer is associated with a stage framebuffer and there
can be multiple stages for one context.
This patch moves the state tracking to ClutterStage.
This reverts commit d7e86e2696.
This was a half-baked patch that was pushed a bit early: it broke
test-texture-pick-with-alpha, and the commit message refers to a change
on the wip/paint-box branch that hasn't happened yet.
We have an optimization to track when there are multiple picks per
frame, so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There were two problems with how we were tracking this state though.
Firstly we were tracking this information in the ClutterMainContext, but
conceptually this doesn't really make sense because the pick buffer is
associated with a stage framebuffer and there can be multiple stages for
one context. Secondly - since the change to how redraws are queued - we
weren't marking the pick buffer as invalid when queuing a redraw; we
were only marking the buffer invalid when signaling/finishing the
queue-redraw process, which is now deferred until just before a paint.
This meant using clutter_stage_get_actor_at_pos after a scenegraph
change could give a wrong result if it just read from an existing (but
technically invalid) pick buffer.
This patch moves the state tracking to ClutterStage, and ensures the
buffer is invalidated in _clutter_stage_queue_actor_redraw.
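Sketched out, the invalidation is just a flag cleared whenever an actor
redraw is queued (the field name is hypothetical):

  void
  _clutter_stage_queue_actor_redraw (ClutterStage *stage,
                                     ClutterActor *actor)
  {
    ClutterStagePrivate *priv = stage->priv;

    /* any scenegraph change makes a cached pick render stale */
    priv->pick_buffer_valid = FALSE;

    /* ... add the actor to the pending queue-redraw list ... */
  }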
http://bugzilla.clutter-project.org/show_bug.cgi?id=2283
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When building actor relative transforms, instead of using the matrix
stack to combine transformations and making assumptions about what is
currently on the stack we now just explicitly initialize an identity
matrix and apply transforms to that.
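In other words, instead of reading back whatever happens to be on the
matrix stack, the code now does something like this sketch:

  CoglMatrix transform;
  gfloat x = 0.f, y = 0.f; /* illustrative actor position */

  /* start from a known state */
  cogl_matrix_init_identity (&transform);

  /* then explicitly apply each actor transform on top */
  cogl_matrix_translate (&transform, x, y, 0.f);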
This removes the full_vertex_t typedef for internal transformation code
and we just use ClutterVertex.
ClutterStage now implements apply_transform like any other actor, and
the code we had in _cogl_setup_viewport has been moved to the stage's
apply_transform instead.
ClutterStage now tracks an explicit projection matrix and viewport
geometry. The projection matrix is derived from the perspective whenever
that changes, and the viewport is updated when the stage gets a new
allocation. The SYNC_MATRICES mechanism has been removed in favour of
_clutter_stage_dirty_viewport/projection() APIs that get used when
switching between multiple stages to ensure cogl has the latest
information about the onscreen framebuffer.
Some apps or some use cases don't need to clear the stage on immediate
rendering GPUs. A media player playing a fullscreen video or a
tile-based game, for instance.
These apps are redrawing the whole screen, so we can avoid clearing the
color buffer when preparing to paint the stage, since there is no
blending with the stage color being performed.
We can add a private set of hints to ClutterStage, and expose accessors
for each potential hint; the first hint is the 'no-clear' one.
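Usage would then be along these lines (the accessor name follows the
pattern described above; consider it an assumption):

  ClutterActor *stage = clutter_stage_get_default ();

  /* the app redraws the whole stage every frame, so skip clearing
   * the color buffer */
  clutter_stage_set_no_clear_hint (CLUTTER_STAGE (stage), TRUE);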
http://bugzilla.clutter-project.org/show_bug.cgi?id=2058
The marshallers we use for the signals are declared in a private header,
and it stands to reason that they should also be hidden in the shared
object by using the common '_' prefix. We were also using some direct
g_cclosure_marshal_* symbols from GLib, instead of consistently using
the clutter_marshal_* symbols.
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» Queuing multiple clipped redraws will result in the bounding box of
  the clip rectangles being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» Note: no empirical data was used to come up with this threshold so
we may need to tune this.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
  is done via a new "queue-damage-redraw" signal that is emitted when
  the pixmap is updated. The default handler queues a clipped redraw
  with the assumption that the pixmap is being painted as a rectangle
  covering the actor's transformed allocation. If you subclass
  ClutterX11TexturePixmap and change how it's painted, you now also
  need to override the signal handler and queue your own redraw (see
  the sketch after these notes). Technically this is a semantic break,
  but it's assumed that no one is currently doing this.
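A sketch of such an override (the handler signature carrying the damage
rectangle is an assumption):

  static void
  my_pixmap_queue_damage_redraw (ClutterX11TexturePixmap *texture,
                                 gint                     x,
                                 gint                     y,
                                 gint                     width,
                                 gint                     height,
                                 gpointer                 user_data)
  {
    /* this subclass paints something other than a plain rectangle
     * covering the allocation, so queue a redraw clipped to what it
     * really paints instead of relying on the default handler */
  }

  g_signal_connect (texture, "queue-damage-redraw",
                    G_CALLBACK (my_pixmap_queue_damage_redraw), NULL);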
This still leaves a few unsolved issues with regards to optimizing sub-
stage redraws that need to be addressed in further work, so this can
only be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
  with which to queue clipped redraws. E.g. it is not possible for
  generic code in clutter-actor.c to queue a clipped redraw when hiding
  an actor, because actors have no way to report a "paint box" (remember
  that actors can draw outside their allocation, and actors with depth
  may also be projected outside of their allocation).
» The current plan is to add an actor_class->get_paint_cuboid()
  virtual so actors can report a bounding cube for everything they
  would draw in their current state, and use that to queue clipped
  redraws against the stage by projecting the paint cube into stage
  coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
* stage-min-size-rework:
docs: Update minimum size accessors
actor: Use the TOPLEVEL flag instead of a type check
[stage] Use min-width/height props for min size
Since using addresses that might change is something the FSF has
finally acknowledged as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI for
getting the license in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
Instead of shadowing these properties with different properties with the
same names on stage, actually use them. Behaviour should be identical,
except the minimum stage size can now be enforced by setting the
min-width/height properties as well as using the set_minimum_size
function.
The motion event compression should take the device field of the event
into account; that is, we should only compress motion events coming
from the same device.
The introduction of the StageManager in 0.8 implied that the first Stage
instance to be created was automatically assigned the status of "default
stage". This was all well and good, since the default stage was created
behind the curtains by the initialization sequence.
Now that the initialization sequence does not create a default stage any
longer, it means that the first stage created using clutter_stage_new()
gets to be the default, and all special and warm and fuzzy - which also
means that the first stage created by clutter_stage_new() cannot be
destroyed or handled as any other stage. Whoopsie.
Let's go back to the old semantics: the stage created by the first
invocation of clutter_stage_get_default() is the default stage, and
nothing else can be set as default. One day we'll be able to break the
API and the whole default stage business will be a thing of the past.
Setting/unsetting fullscreen on a mapped or unmapped window now works
correctly.
If you unfullscreen a window that was initially full-screened, it will
unset the fullscreen hint and the WM will likely push the size down to
the largest valid size.
If the window was previously un-fullscreened, Clutter will restore the
previous size.
Fullscreening also now works if the WM switches the hint without the
application's knowledge (as happens when you resize a window to the size
of the screen, for example, with stock metacity).
When resizing, we relied on the stage's allocate to re-initialise the
GL viewport. Unfortunately, if we resized from within Clutter, the new
size was cached before the window was actually resized, so glViewport
wasn't being called after resizing (some of the time; it's a race
condition).
Change the way resizing works slightly, so that we only resize when the
geometry size doesn't match our preferred size, and queue a relayout on
ConfigureNotify so that glViewport gets called.
Also change window creation slightly so that setting the size of a
window before it's realized works correctly.
The master clock might have a Stage during its destruction phase,
without a StageWindow attached to it. If this happens and we try
to dereference the StageWindow to get its class and call a virtual
function we might experience some slight turbulence and... then...
explode.
http://bugzilla.openedhand.com/show_bug.cgi?id=1987
This replaces code like this:

  if (CLUTTER_ACTOR_IS_VISIBLE (self))
    clutter_actor_queue_redraw (self);

with:

  clutter_actor_queue_redraw (self);
clutter_actor_queue_redraw internally knows what can be optimized when
the actor is not visible, but it also knows that the queue_redraw signal
must always be sent in case a ClutterClone is cloning a hidden actor.
If your OpenGL driver supports GLX_INTEL_swap_event, then when
glXSwapBuffers is called it returns immediately and an XEvent is sent
when the actual swap has finished.
Clutter can use the events that notify swap completion as a means to
throttle rendering in the master clock without blocking the CPU, so it
should help improve the performance of CPU-bound applications.
We want to set the default size without triggering the layout machinery,
so change the window creation process slightly so we start with a
640x480 window.
Due to the way the new sizing works, clutter stage must set its size in
init (to maintain old behaviour) and the properties on the X11 stage
must be initialised to 1x1 so that it actually goes ahead with the
resize.
Fixes stages that aren't user resizable and have no size set from
appearing at 1x1.
We want the actual window geometry in clutter_stage_set_minimum_size,
not the set size. Now that the geometry function has been changed to do
what it says, use it.
Get the current size of the stage correctly in
clutter_stage_set_minimum_size. The get_geometry StageWindow function is
not equivalent to the current size; use clutter_actor_get_size() instead.
Instead of creating the default stage during initialization we can
now safely create it whenever clutter_stage_get_default() is called.
To maintain the invariant, the default stage is immediately realized
by Clutter itself.
* device-manager: (37 commits)
x11: Re-enable XI1 extension keyboards
x11: Always handle core device events before XI events
docs: Documentation fixes for DeviceManager
device-manager: Fix the signals definition
docs: Add sections for InputDevice and DeviceManager
docs: Add clutter_input_device_get_device_name()
tests: Print out the device details on motion
Always register core devices
device: Remove unused is_default member
win32: Experimental implementation of device support
tests: Print the device name, as well as its Id
x11: Fill out the :name property of the InputDevices
device: Add the :name property to InputDevice
x11: Store core devices on the X11 Backend singleton
device: Unset the cursor actor when leaving the stage
device: Add pointer actor getter
x11: Discard the LeaveNotify for off-stage ButtonRelease
device: Do not overwrite the stage for an InputDevice
event: Off-stage button releases have a click count of 1
event: Scroll events do not have click count
...
ClutterStage has both set_key_focus() and get_key_focus() methods, but
there is no :key-focus property. This means that it is not possible to
get notifications when the key-focus has changed, except by connecting
to both the ::key-focus-in and ::key-focus-out signals and doing
additional bookkeeping.
http://bugzilla.openedhand.com/show_bug.cgi?id=1956
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The Stage field of an InputDevice is set by the backend, whenever the
pointer enters or leaves the Stage. The Stage should not overwrite the
stage field for every event it processes.
The InputDevice object stores pointer coordinates, state, stage and
the actor under the cursor, so if the current backend provides us with
one attached to the Event structure then we want the InputDevice itself
to update its state and give us the ClutterActor underneath the
pointer's cursor.
Using the ::event signal to match the CLUTTER_DELETE event type (and
block the stage destruction) can be costly, since it means checking
every single event.
The ::delete-event signal is similar in spirit to any other specialized
signal handler dealing with events, and retains the same semantics.
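For example, blocking the stage destruction now only needs a dedicated
handler:

  static gboolean
  on_delete_event (ClutterStage *stage,
                   ClutterEvent *event,
                   gpointer      user_data)
  {
    /* returning TRUE stops the default handler from destroying
     * the stage */
    return TRUE;
  }

  g_signal_connect (stage, "delete-event",
                    G_CALLBACK (on_delete_event), NULL);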
UProf is a small library that aims to help applications/libraries
provide domain-specific reports about performance. It currently provides
high-precision timer primitives (rdtsc on x86) and simple counters,
allows linking statistics between optional components at runtime, and
makes report generation easy.
This adds initial accounting for:
- Total mainloop time
- Painting
- Picking
- Layouting
- Idle time
The timing done by UProf is of wall-clock time. It's not based on
stochastic samples; we simply sample a counter at the start and end.
When dealing with the complexities of GPU drivers and with various kinds
of IO, this form of profiling can be quite enlightening, as it is able
to show where your application is blocking, unlike tools such as
sysprof.
To enable uprof accounting you must configure Clutter with --enable-profile
and have uprof-0.2 installed from git://git.moblin.org/uprof
If you want to see a report of statistics when Clutter applications exit you
should export CLUTTER_PROFILE_OUTPUT_REPORT=1 before running them.
Just a final word of caution: this stuff is new, and the manual nature
of adding UProf instrumentation means it is prone to some errors when
modifying code. This just means that when you question strange results,
don't rule out a mistake in the instrumentation. Obviously, though, we
hope the benefits outweigh the costs, e.g. by focusing on very key stats
and by having automatic reporting.