Use the buffer_age extension when available to recycle backbuffer contents
instead of blitting from the back to front buffer when doing clipped redraws.
Picking is now done in a pixel that is going to be repaired during the
next redraw cycle for non-static scenes.
This should improve performance and avoid tearing.
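As an illustration, a backend with the GLX_EXT_buffer_age extension
might choose between a clipped and a full redraw roughly as follows.
This is a hedged sketch, not the actual Clutter code; paint_full_stage(),
accumulate_damage() and paint_region() are hypothetical helpers.

    static void
    redraw_stage (Display *xdisplay, GLXDrawable glxwindow)
    {
      unsigned int age = 0;

      /* Ask the driver how many swaps old the back buffer contents are. */
      glXQueryDrawable (xdisplay, glxwindow, GLX_BACK_BUFFER_AGE_EXT, &age);

      if (age == 0)
        {
          /* Contents are undefined: the whole stage must be repainted. */
          paint_full_stage ();
        }
      else
        {
          /* The buffer holds the frame from `age` swaps ago, so repainting
           * the union of the damage from the last `age` frames suffices. */
          paint_region (accumulate_damage (age));
        }
    }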
Reviewed-by: Robert Bragg <robert@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=669122
This allows us to report to the backend that the stage's back buffer has been
trashed while handling picking. If the backend is keeping track of the contents
of back buffers so that it can minimize how much of the stage is redrawn, then
it needs to know when we do pick renders so it can invalidate the back buffer.
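A minimal sketch of what such a hook could look like; the names here are
illustrative, not the actual Clutter internals.

    /* Called after a pick render to tell the stage window implementation
     * that the back buffer no longer matches what was last presented. */
    static void
    _clutter_stage_dirty_back_buffer (ClutterStage *stage)
    {
      ClutterStageWindow *impl = _clutter_stage_get_window (stage);
      ClutterStageWindowIface *iface = CLUTTER_STAGE_WINDOW_GET_IFACE (impl);

      if (iface->dirty_back_buffer != NULL)
        iface->dirty_back_buffer (impl);
    }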
Based on patch from Robert Bragg <robert@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=669122
Since Cogl has started restricting what Cogl 1.x api is exposed when
COGL_ENABLE_EXPERIMENTAL_2_0_API is defined, and since we build all
Clutter internals with COGL_ENABLE_EXPERIMENTAL_2_0_API defined, this
patch makes a first pass at reducing our internal use of the Cogl 1.x
api.
The most notable api that's no longer exposed to us internally is the
cogl_material_ api, so this switches all Clutter internals to use the
cogl_pipeline_ api instead. This patch also makes quite a bit of
progress removing internal uses of CoglHandle, although there is still
more to go.
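For example, a typical hunk in this port changes along these lines (a
sketch; cogl_pipeline_new() takes a CoglContext in later Cogl releases,
here assumed to be in scope as `ctx`):

    /* Cogl 1.x api, hidden when COGL_ENABLE_EXPERIMENTAL_2_0_API is set: */
    CoglHandle material = cogl_material_new ();
    cogl_material_set_color4ub (material, 0xff, 0x00, 0x00, 0xff);

    /* Experimental 2.0 api used internally instead: */
    CoglPipeline *pipeline = cogl_pipeline_new (ctx);
    cogl_pipeline_set_color4ub (pipeline, 0xff, 0x00, 0x00, 0xff);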
The ClutterGeometry type is a poor substitute for cairo_rectangle_int_t,
with unsigned integers for width and height complicating matters further.
Let's remove the internal usage of ClutterGeometry and switch to the
rectangle type from Cairo.
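The switch is mostly mechanical; a sketch of the kind of change involved:

    /* Before: width/height are unsigned, so e.g. clamping a rectangle
     * against the stage can underflow. */
    ClutterGeometry geom = { 0, 0, 800, 600 };

    /* After: plain ints throughout, matching the rest of the Cairo api. */
    cairo_rectangle_int_t rect = { 0, 0, 800, 600 };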
https://bugzilla.gnome.org/show_bug.cgi?id=656663
This adds a public function to get the bounds of the current clipped
redraw on a stage. This should only be called while the stage is being
painted. The function diverts to a virtual function on the
ClutterStageWindow implementation. If the function isn't implemented
or it returns FALSE then the entire stage is reported. The clip bounds
are in integer pixel coordinates in the stage's coordinate space.
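Usage from a stage paint handler would look roughly like this, assuming
the entry point is named clutter_stage_get_redraw_clip_bounds() and
fills in a cairo_rectangle_int_t:

    static void
    on_stage_paint (ClutterActor *stage)
    {
      cairo_rectangle_int_t clip;

      /* Only meaningful while the stage is being painted; if the backend
       * can't report a clip, the bounds cover the entire stage. */
      clutter_stage_get_redraw_clip_bounds (CLUTTER_STAGE (stage), &clip);

      g_print ("repainting %d,%d %dx%d\n",
               clip.x, clip.y, clip.width, clip.height);
    }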
http://bugzilla.clutter-project.org/show_bug.cgi?id=2421
This adds an internal _clutter_stage_get_active_framebuffer function
that can be used to get a pointer to the CoglFramebuffer that is
currently in use, in association with a given stage.
The "active" infix in the function name is there because we shouldn't
assume that a stage will always correspond to only a single framebuffer
so we aren't getting a pointer to a sole framebuffer, we are getting
a pointer to the framebuffer that is currently in use/being painted.
This API is now used for culling purposes, where we need to check if we
are currently painting an actor to an offscreen framebuffer that doesn't
correspond to the stage.
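A hedged sketch of the culling check this enables
(cogl_get_draw_framebuffer() is the experimental Cogl api;
check_actor_against_clip_planes() is a hypothetical helper):

    static gboolean
    cull_against_stage (ClutterStage *stage, ClutterActor *actor)
    {
      /* Skip stage-based culling when we aren't painting to the
       * framebuffer associated with the stage, e.g. during an
       * offscreen redirect. */
      CoglFramebuffer *stage_fb =
        _clutter_stage_get_active_framebuffer (stage);

      if (cogl_get_draw_framebuffer () != stage_fb)
        return FALSE; /* can't cull against the stage's clip volume */

      return check_actor_against_clip_planes (actor);
    }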
The ParamSpec sub-classes we define are meant to be used only from the C
API, as high-level languages completely ignore them.
The ClutterStageWindow interface is an internal type that escaped into
the public headers; all its methods are private, but we cannot remove
the type until we break for 2.0.
A new (currently internal-only) API, _clutter_actor_queue_clipped_redraw,
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and the
Clutter backend, which may optionally use the information to optimize
stage redraws. The GLX backend in particular may scissor the next redraw
to the clip rectangle and use GLX_MESA_copy_sub_buffer to present the
stage subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
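For instance, an actor that knows exactly which part of itself changed
might do something like the following (a sketch: the real signature of
_clutter_actor_queue_clipped_redraw may differ; the clip is assumed to
be in actor coordinates as described above):

    static void
    my_actor_area_changed (ClutterActor *self,
                           int x, int y, int width, int height)
    {
      ClutterGeometry clip = { x, y, width, height };

      /* Queue a redraw of just the damaged region; the clip propagates
       * up to the stage and the backend. */
      _clutter_actor_queue_clipped_redraw (self, &clip);
    }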
Notes:
- If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
  ignores any clip rectangles.
- Queuing multiple clipped redraws will result in the bounding box of
  the clip rectangles being used.
- If a clipped redraw has a height > 300 pixels then it's promoted into
  a full stage redraw, so that the GPU doesn't end up blocking too long
  waiting for the vsync to reach the optimal position to avoid tearing.
  Note: no empirical data was used to come up with this threshold, so
  we may need to tune it.
- Currently only ClutterX11TexturePixmap makes use of this new API. This
  is done via a new "queue-damage-redraw" signal that is emitted when
  the pixmap is updated. The default handler queues a clipped redraw
  with the assumption that the pixmap is being painted as a rectangle
  covering the actor's transformed allocation. If you subclass
  ClutterX11TexturePixmap and change how it's painted, you now also
  need to override the signal handler and queue your own redraw; see
  the sketch after this list. Technically this is a semantic break,
  but it's assumed that no one is currently doing this.
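As a sketch of the subclassing caveat above (the handler signature is
assumed here, and MY_TYPE_PIXMAP is a hypothetical subclass):

    /* A subclass that paints the pixmap as a circle rather than a
     * rectangle can't rely on the default clipped redraw, so it queues
     * an unclipped one instead. */
    static void
    my_pixmap_queue_damage_redraw (ClutterX11TexturePixmap *texture,
                                   gint x, gint y,
                                   gint width, gint height)
    {
      clutter_actor_queue_redraw (CLUTTER_ACTOR (texture));
    }

    static void
    my_pixmap_class_init (MyPixmapClass *klass)
    {
      /* Replace the default "queue-damage-redraw" class handler. */
      g_signal_override_class_handler ("queue-damage-redraw",
                                       MY_TYPE_PIXMAP,
                                       G_CALLBACK (my_pixmap_queue_damage_redraw));
    }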
This still leaves a few unsolved issues with regard to optimizing sub
stage redraws that need to be addressed in further work, so this can
only be considered a stepping stone at this point:
- Because we have no reliable way to determine if the painting of any
  given actor is being modified, any optimizations implemented using
  _clutter_actor_queue_redraw_with_clip must be overridable by a
  subclass, and technically must be opt-in for existing classes to
  avoid a change in semantics. E.g. consider that a user connects to
  the paint signal for ClutterTexture and paints a circle instead of a
  rectangle: in this case any original logic to queue clipped redraws
  would be incorrect.
- Currently only the implementation of an actor has enough information
  with which to queue clipped redraws. E.g. it is not possible for
  generic code in clutter-actor.c to queue a clipped redraw when hiding
  an actor, because actors have no way to report a "paint box".
  (Remember: actors can draw outside their allocation, and actors with
  depth may also be projected outside of their allocation.)
- The current plan is to add an actor_class->get_paint_cuboid()
  virtual so actors can report a bounding cube for everything they
  would draw in their current state, and use that to queue clipped
  redraws against the stage by projecting the paint cube into stage
  coordinates.
- Our heuristics for promoting clipped redraws into full redraws to
  avoid blocking the GPU while we wait for the vsync need improving:
  - Vsync issues aren't relevant for redirected/composited applications,
    so they should use different heuristics. In this case we instead
    need to trade off the cost of blitting when using glXCopySubBuffer
    vs promoting to a full redraw and flipping instead.
If your OpenGL driver supports GLX_INTEL_swap_event, then when
glXSwapBuffers is called it returns immediately, and an XEvent is sent
when the actual swap has finished.
Clutter can use the events that notify swap completion as a means to
throttle rendering in the master clock without blocking the CPU and so it
should help improve the performance of CPU bound applications.
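In rough terms (a sketch: glx_event_base comes from glXQueryExtension
and start_next_frame() is a hypothetical stand-in for kicking the
master clock):

    /* Opt in to swap-complete events for the stage's drawable. */
    glXSelectEvent (xdisplay, glxwindow, GLX_BUFFER_SWAP_COMPLETE_INTEL_MASK);

    /* Later, in the X event loop: rather than blocking in glXSwapBuffers
     * until the vblank, start the next frame when the swap has really
     * finished. */
    if (xevent.type == glx_event_base + GLX_BufferSwapComplete)
      start_next_frame ();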
Instead of using ClutterActor for the base class of the Stage
implementation we should extend the StageWindow interface with
the required bits (geometry, realization) and use a simple object
class.
This requires a wee bit of change across Backend, Stage and
StageWindow, even though it's mostly re-shuffling.
First of all, StageWindow should get new virtual functions:
* geometry:
- resize()
- get_geometry()
* realization
- realize()
- unrealize()
This covers all the bits that we use from ClutterActor currently
inside the stage implementations.
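Sketched out, the interface vtable would gain something like this
(names taken from the list above; the actual layout may differ):

    struct _ClutterStageWindowIface
    {
      GTypeInterface parent_iface;

      /* geometry */
      void     (* resize)       (ClutterStageWindow *stage_window,
                                 gint                width,
                                 gint                height);
      void     (* get_geometry) (ClutterStageWindow *stage_window,
                                 ClutterGeometry    *geometry);

      /* realization */
      gboolean (* realize)      (ClutterStageWindow *stage_window);
      void     (* unrealize)    (ClutterStageWindow *stage_window);
    };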
The ClutterBackend::create_stage() virtual function should create
a StageWindow, and not an Actor (it should always have been; the
fact that it returned an Actor was a leak of the black magic going
on underneath). Since we never guaranteed ABI compatibility for
the Backend class, this is not a problem.
Internally to ClutterStage we can finally drop the shenanigans of
setting/unsetting actor flags on the implementation: if the realization
succeeds, for instance, we set the REALIZED flag on the Stage and
we're done.
As an initial proof of concept, the X11 and GLX stage implementations
have been ported to the New World Order(tm) and show no regressions.
The mapping and unmapping of the X11 stage implementation is
a bit broken. It's asynchronous, for starters, when it could
avoid that by tracking the state internally.
The ordering of the map/unmap sequence is also broken with
respect to the resizing.
By tracking the state internally into StageX11 we can safely
remove the MapNotify and UnmapNotify X event handling.
In theory, we should use _NET_WM_STATE a lot more, and reuse
the X11 state flags for fullscreening as well.
Bug #864 - Allow instantiating and subclassing of ClutterStage
* clutter/Makefile.am: Add clutter-stage-window.[ch]
* clutter/clutter-stage-manager.c:
(_clutter_stage_manager_remove_stage): Do not warn if removing
a stage we don't manage, as we might be invoked multiple times
during a ClutterStage dispose sequence.
* clutter/clutter-actor.c:
* clutter/clutter-backend.[ch]:
* clutter/clutter-main.c:
* clutter/clutter-private.h:
* clutter/clutter-stage.[ch]: Make ClutterStage a proxy actor,
with a private actor implementing the ClutterStageWindow
interface for handling the per-backend realization, painting
and unrealization, plus all the windowing system abstraction.
* clutter/x11/clutter-event-x11.c:
* clutter/x11/clutter-stage-x11.[ch]: Port the X11 backend
to the new backend and stage API and semantics.
* clutter/glx/clutter-backend-glx.c:
* clutter/glx/clutter-stage-glx.c: Port the GLX backend to
the new backend and stage API and semantics.
* clutter/eglx/clutter-backend-egl.[ch]:
* clutter/eglx/clutter-stage-egl.[ch]: Port the EGLX backend
to the new backend and stage API and semantics (untested).
* tests/test-multistage.c (on_button_press): Rename
clutter_stage_create_new() to clutter_stage_new().