This makes it possible to build Clutter against a standalone build of
Cogl instead of having the Clutter build traverse into the clutter/cogl
subdirectory.
This migrates all the GLX window system code down from the Clutter
backend code into a Cogl winsys. Moving OpenGL window system binding
code down from Clutter into Cogl is the biggest blocker to having Cogl
become a standalone 3D graphics library, so this is an important step in
that direction.
This extends visualization for CLUTTER_PAINT=redraws so it now also
draws outlines for actors to show how they are being culled. Actors get
a green outline if they are fully inside the clip region, blue if fully
outside, and greeny-blue if only partially inside.
Recently _cogl_swap_buffers_notify was added (in 142b229c5c) so that
Cogl would be notified when Clutter performs a swap buffers request for
a given onscreen framebuffer. It was expected this would be required for
the recent cogl_read_pixel optimization that was implemented (ref
1bdb0e6e98), but in the end it wasn't used, so this patch removes the
API.
When we come to presenting the result of a clipped redraw to the front
buffer with a blit we need to ensure that all the rendering is done,
otherwise redraw operations that are slower than the framerate can queue
up in the pipeline during a heavy animation, causing a larger and larger
backlog of rendering visible as lag to the user.
Note: since calling glFinish() and synchronizing the CPU with the GPU is
far from ideal, we hope that this is only a short term solution.
One idea is to use sync objects to track render completion so we can
throttle the backlog (ideally with an additional extension that lets us
get notifications in our mainloop instead of having to busy wait for the
completion.)
Another option is to support clipped redraws by reusing the contents of
old back buffers such that we can flip instead of using a blit and then
we can use GLX_INTEL_swap_events to throttle. For this though we would
still probably want an additional extension so we can report the limited
region of the window damage to X/compositors.
Thanks to Owen Taylor and Alexander Larsson for reporting the problem.
This implements a variation of frustum culling whereby we convert screen
space clip rectangles into eye space mini-frustums, so that we don't have
to repeatedly transform actor paint-volumes all the way into screen
coordinates to perform culling; we just have to apply the modelview
transform and then determine each point's distance from the planes that
make up the clip frustum.
By avoiding the projective transform, perspective divide and viewport
scale for each point culled, this makes culling much cheaper.
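As a rough illustration, the per-point test reduces to a plane-distance
check like the following (a minimal, self-contained sketch; the types
and names are illustrative, not the actual Clutter code):

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 normal; float d; } Plane; /* dot(normal, p) + d = 0 */

/* Signed distance of an eye space point from a clip plane; points
 * with a negative distance are behind the plane. */
static float
point_plane_distance (const Plane *plane, const Vec3 *p)
{
  return plane->normal.x * p->x
       + plane->normal.y * p->y
       + plane->normal.z * p->z
       + plane->d;
}

A paint volume can then be culled whenever all of its points lie behind
the same plane of the clip frustum.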
As noted in commit ce3f55292a an explicit glFlush is needed for
both glBlitFramebuffer and glXCopySubBuffer.
_clutter_backend_glx_blit_sub_buffer was already doing an explicit
flush when using glBlitFramebuffer, so just do it unconditionally
and remove the call from clutter_stage_glx_redraw.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2558
Unlike glXSwapBuffers, glXCopySubBuffer and glBlitFramebuffer don't
issue an implicit glFlush(), so we have to flush ourselves if we want the
request to complete in a finite amount of time, since otherwise the
driver can batch the command indefinitely.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2551
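For illustration, the rule boils down to something like this (a sketch;
glXCopySubBufferMESA is the real entry point of the MESA_copy_sub_buffer
extension, the surrounding variable names are illustrative):

/* Blit the updated region from the back buffer to the front. */
glXCopySubBufferMESA (xdpy, glxwindow, x, y, width, height);

/* Unlike glXSwapBuffers there is no implicit flush, so explicitly
 * submit the command so it completes in a finite amount of time. */
glFlush ();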
In the future, we want event translators to be the way to handle events
in backends. For this reason, they should be a part of the base abstract
ClutterBackend class, and not an X11-only concept.
Instead of asking all backends to do that for us, we can call
ClutterStageWindow::redraw ourselves by default.
This changeset fixes all backends to actually do the right thing, and
moves the stage implementation redraw inside the ClutterStageWindow
implementation itself.
Since we need to find the stage from the X11 Window, it's better to use
a static hashmap that gets updated every time the ClutterStageX11:xwin
member is changed, instead of iterating over every stage handled by the
global ClutterStageManager singleton.
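A minimal sketch of the idea (the setter name and simplified struct are
illustrative, not the actual implementation; XIDs are stuffed directly
into the pointer keys, assuming they fit in a pointer):

#include <glib.h>
#include <X11/Xlib.h>

typedef struct { Window xwin; /* ... */ } ClutterStageX11; /* simplified */

static GHashTable *stages_by_xwindow = NULL;

/* Keep the Window-to-stage map in sync whenever the xwin member changes. */
static void
clutter_stage_x11_set_xwindow (ClutterStageX11 *stage_x11, Window xwin)
{
  if (stages_by_xwindow == NULL)
    stages_by_xwindow = g_hash_table_new (NULL, NULL);

  if (stage_x11->xwin != None)
    g_hash_table_remove (stages_by_xwindow,
                         GUINT_TO_POINTER (stage_x11->xwin));

  stage_x11->xwin = xwin;

  if (xwin != None)
    g_hash_table_insert (stages_by_xwindow,
                         GUINT_TO_POINTER (xwin), stage_x11);
}

/* Looking up the stage for an X11 event is then O(1): */
static ClutterStageX11 *
clutter_stage_x11_lookup (Window xwin)
{
  return g_hash_table_lookup (stages_by_xwindow,
                              GUINT_TO_POINTER (xwin));
}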
Since we access it in order to get the X11 Display pointer, it makes
sense to have the ClutterBackendX11 already available inside the
ClutterStageX11 structure, and avoid the pattern:
ClutterBackend *backend = clutter_get_default_backend ();
ClutterBackendX11 *backend_x11 = CLUTTER_BACKEND_X11 (backend);
which costs us a function call, a type cast and an unused variable.
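With the backend stored on the stage structure, the lookup becomes a
plain field access (the member name here is illustrative):

ClutterBackendX11 *backend_x11 = stage_x11->backend;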
* xi2: (41 commits)
test-devices: Actually print the axis data
device-manager/xi2: Sync the stage of source devices
event: Clean up clutter_event_copy()
device: unset the axes array pointer when resetting
device-manager/xi2: Fix device hotplugging
glx: Clean up GLX implementation
device/x11: Store min/max keycode in the XI device class
x11: Hide all private symbols
docs: More documentation fixes for InputDevice
*/event: Never manipulate the event queue directly
win32: Update DeviceManager device creation
device: Allow enabling/disabling non-master devices
backend/eglx: Add newly created stages to the translators
device: Add more doc annotations
device: Use a double for translate_axis() argument
test-devices: Clean up and show axes data
event: Fix up clutter_event_copy()
device/xi2: Translate the axis data after setting devices
device: Add more accessors for properties
docs: Update API reference
...
This adds a stop-gap mechanism for Cogl to know when the window system
is requested to present the current backbuffer to the frontbuffer by
adding a _cogl_swap_buffers_notify function that backends are now
expected to call right after issuing the equivalent request to OpenGL
via the platform's OpenGL binding layer. This (blindly) updates all the
backends to call this new function.
For now Cogl doesn't do anything with the notification but the intention
is to use it as part of a planned read-pixel optimization which will
need to reset some state at the start of each new frame.
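For example, in a GLX-based backend the call sits right after the swap
request (a sketch; the exact signature of _cogl_swap_buffers_notify
isn't shown here, so the no-argument form below is an assumption, as are
the variable names):

glXSwapBuffers (backend_x11->xdpy, stage_glx->glxwin);

/* Tell Cogl the backbuffer was just presented (assumed signature). */
_cogl_swap_buffers_notify ();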
The x11 backend exposes a lot of symbols that are meant to only be used
when implementing a subclassed backend, like the glx and eglx ones.
The uninstalled headers are also filled with cruft declarations of
functions long since removed.
Let's try to clean up this mess.
This is a lump commit that is fairly difficult to break down without
either breaking bisecting or breaking the test cases.
The new design for handling X11 event translation works this way:
- ClutterBackend::translate_event() has been added as the central
point used by a ClutterBackend implementation to translate a
native event into a ClutterEvent;
- ClutterEventTranslator is a private interface that should be
implemented by backend-specific objects, like stage
implementations and ClutterDeviceManager sub-classes, and
allows dealing with class-specific event translation;
- ClutterStageX11 implements EventTranslator, and deals with the
stage-relative X11 events coming from the X11 event source;
- ClutterStageGLX overrides EventTranslator, in order to
deal with the INTEL_GLX_swap_event extension, and it chains up
to the X11 default implementation;
- ClutterDeviceManagerX11 has been split into two separate classes,
one that deals with core and (optionally) XI1 events, and the
other that deals with XI2 events; the selection is done at run-time,
since the core+XI1 and XI2 mechanisms are mutually exclusive.
All the other backends we officially support still use their own
custom event source and translation function, but the end goal is to
migrate them to the translate_event() virtual function, and have the
event source be a shared part of Clutter core.
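A sketch of what a per-object translator might look like under this
design (ClutterEventTranslator is a private interface, so the return
type and constants below are assumptions based on the description
above, not public API):

static ClutterTranslateReturn
clutter_stage_x11_translate_event (ClutterEventTranslator *translator,
                                   gpointer                native,
                                   ClutterEvent           *event)
{
  XEvent *xevent = native;
  ClutterStageX11 *stage_x11 = CLUTTER_STAGE_X11 (translator);

  /* Events for other windows are left for other translators. */
  if (xevent->xany.window != stage_x11->xwin)
    return CLUTTER_TRANSLATE_CONTINUE;

  /* ... fill in @event from @xevent ... */

  /* Put the translated ClutterEvent on the event queue. */
  return CLUTTER_TRANSLATE_QUEUE;
}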
Since 1.4 the ClutterGLXTexturePixmap is just a wrapper around
ClutterX11TexturePixmap, so we can safely deprecate it. All the
functionality it provided is now effectively available from the
superclass or directly from Cogl.
This tweaks the semantics of the has_redraw_clips vfunc so we can assume
that at the start of a new frame there is an implied, initial redraw_clip
that clips everything (i.e. nothing would be redrawn), so we would expect
the vfunc to return True at that point for backends that support
clipping.
Previously there was an ambiguity when this function returned False
since it could either mean a full screen redraw had been queued or it
could mean that the clip state wasn't yet initialized for that frame.
This would result in _clutter_stage_has_full_redraw_queued() returning
True at the start of a new frame even before any actors have been
updated, which in turn meant we would incorrectly ignore queue_redraw
requests for actors, believing them to be redundant.
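Under the new semantics an implementation looks roughly like this (a
sketch; bounding_redraw_clip and the degenerate 0-width convention are
illustrative member names, though initialized_redraw_clip is mentioned
elsewhere in this log):

static gboolean
clutter_stage_glx_has_redraw_clips (ClutterStageWindow *stage_window)
{
  ClutterStageGLX *stage_glx = CLUTTER_STAGE_GLX (stage_window);

  /* At the start of a new frame, before any clips are initialized,
   * there is an implied clip of everything, so report TRUE. */
  if (!stage_glx->initialized_redraw_clip)
    return TRUE;

  /* A degenerate 0-width clip is how a queued full redraw is marked. */
  if (stage_glx->bounding_redraw_clip.width == 0)
    return FALSE;

  return TRUE;
}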
Considering that we've seen a number of drivers that can struggle to get
going and may produce a bad first frame, we now force the first two
frames to be full redraws. This became a serious issue after we started
using clipped redraws more aggressively, because we assumed that after
the first frame the full framebuffer was valid and we would only redraw
the content that changed. With buggy drivers though, applications would be
left with junk covering a lot of the stage until some event triggered a
full redraw.
This is a workaround for a race condition when resizing windows while
there are in-flight glXCopySubBuffer blits happening.
The problem stems from the fact that rectangles for the blits are
described relative to the bottom left of the window, and because we can't
guarantee control over the X window gravity used when resizing, the
gravity is typically NorthWest, not SouthWest.
This means if you grow a window vertically the server will make sure to
place the old contents of the window at the top-left/north-west of your
new larger window, but that may happen asynchronously to GLX preparing to
do a blit specified relative to the bottom-left/south-west of the window
(based on the old smaller window geometry).
When the GLX issued blit finally happens relative to the new bottom of
your window, the destination will have shifted relative to the top-left
where all the pixels you care about are so it will result in a nasty
artefact making resizing look very ugly!
We can't currently fix this completely, in part because the window
manager tends to trample any gravity we might set. This workaround
instead simply disables blits for a while if we are notified of any
resizes happening, so if the user is resizing a window via the window
manager they may see an artefact for one frame, but then we will fall
back to redrawing the full stage until the cooling-off period is over.
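The cooling-off logic amounts to something like this (a sketch with
hypothetical names and an illustrative one-second window; the real code
may gate on frame counts or different timestamps):

static gint64 last_resize_time;

static void
on_stage_configure_notify (void)
{
  last_resize_time = g_get_monotonic_time ();
}

static gboolean
sub_buffer_blits_allowed (void)
{
  /* Force full redraws for a second after the last resize. */
  return g_get_monotonic_time () - last_resize_time > G_USEC_PER_SEC;
}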
This uses actor paint volumes to perform culling during
clutter_actor_paint.
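Conceptually the paint traversal now does something like this (a
sketch; paint_volume_intersects_clip is a hypothetical helper, and the
function name is illustrative):

static void
paint_actor (ClutterActor *self)
{
  const ClutterPaintVolume *volume =
    clutter_actor_get_paint_volume (self);

  /* If we know the actor's volume and it lies entirely outside the
   * current redraw clip, skip painting it and its children. */
  if (volume != NULL && !paint_volume_intersects_clip (volume))
    return;

  /* ... paint the actor as usual ... */
}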
When performing a clipped redraw (because only a few localized actors
changed) then as we traverse the scenegraph painting the actors we can
now ignore actors that don't intersect the clip region. Early testing
shows this can have a big performance benefit; e.g. a 100% fps
improvement for test-state with culling enabled, and we hope there are
even more compelling examples than that in the real world.
Most Clutter applications are 2D-ish interfaces and have quite a lot of
actors that get continuously painted when anything is animated. The
dynamic actors are often localized to an area of user focus though so
with culling we can completely avoid painting any of the static actors
outside the current clip region.
Obviously the cost of culling has to be offset against the cost of
painting to determine if it's a win, but our (limited) testing suggests
it should be a win for most applications.
Note: we hope we will be able to also bring another performance bump
from culling with another iteration - hopefully in the 1.6 cycle - to
avoid doing the culling in screen space and instead do it in the stage's
model space. This will hopefully let us minimize the cost of
transforming the actor volumes for culling.
This ensures that clipped redraws are disabled when using
CLUTTER_PAINT=redraws. This may seem unintuitive given that this option
is for debugging clipped redraws, but we can't draw an outline outside
the clip region and anything we draw inside the clip region is liable to
leave a trailing mess on the screen since it won't be cleared up by
later clipped redraws.
This is a fairly extensive second pass at exposing paint volumes for
actors.
The API has changed to allow clutter_actor_get_paint_volume to fail
since there are times - such as when an actor isn't a descendant of the
stage - when the volume can't be determined. Another example is when
something has connected to the "paint" signal of the actor and we simply
have no way of knowing what might be drawn in that handler.
The API has also been changed to return a const ClutterPaintVolume pointer
(transfer none) so we can avoid having to dynamically allocate the
volumes in the most common/performance critical code paths. Profiling was
showing the slice allocation of volumes taking about 1% of an app's time,
for some fairly basic tests. Most volumes can now simply be allocated on
the stack; for clutter_actor_get_paint_volume we return a pointer to
&priv->paint_volume and if we need a more dynamic allocation there is
now a _clutter_stage_paint_volume_stack_allocate() mechanism which lets
us allocate data which expires at the start of the next frame.
The API has been extended to make it easier to implement
get_paint_volume for containers by using
clutter_actor_get_transformed_paint_volume and
clutter_paint_volume_union. The first allows you to query the paint
volume of a child but transformed into parent actor coordinates. The
second lets you combine volumes together so you can union all the
volumes for a container's children and report that as the container's
own volume.
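For example, a container's implementation can union its children's
volumes like this (a sketch assuming a gboolean get_paint_volume vfunc
and a hypothetical priv->children list):

static gboolean
my_container_get_paint_volume (ClutterActor       *self,
                               ClutterPaintVolume *volume)
{
  MyContainerPrivate *priv = MY_CONTAINER (self)->priv;
  GList *l;

  for (l = priv->children; l != NULL; l = l->next)
    {
      ClutterActor *child = l->data;
      const ClutterPaintVolume *child_volume;

      /* Query the child's volume transformed into our coordinate
       * space; this can fail, and then so must we. */
      child_volume =
        clutter_actor_get_transformed_paint_volume (child, self);
      if (child_volume == NULL)
        return FALSE;

      clutter_paint_volume_union (volume, child_volume);
    }

  return TRUE;
}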
The representation of paint volumes has been updated to consider that
2D actors are the most common.
The effect APIs, clutter-texture and clutter-group have been updated
accordingly.
*** WARNING: THIS COMMIT CHANGES THE BUILD ***
Do not recurse into the backend directories to build private, internal
libraries.
We only recurse from clutter/ into the cogl sub-directory; from there,
we don't recurse any further. All the backend-specific code in Cogl and
Clutter is compiled conditionally depending on the macros defined by the
configure script.
We still recurse from the top-level directory into doc, clutter and
tests, because gtk-doc and tests do not deal nicely with non-recursive
layouts.
This change makes Clutter compile slightly faster, and cleans up the
build system, especially when dealing with introspection data.
Ideally, we also want to make Cogl part of the top-level build, so that
we can finally drop the sed trick to change the shared library from the
GIR before compiling it.
Currently disabled:
‣ OSX backend
‣ Fruity backend
Currently enabled but untested:
‣ EGL backend
‣ Windows backend
If a NULL clip is passed to clutter_stage_glx_add_redraw_clip then we
update the redraw clip to have a width of 0, but we weren't setting
stage_glx->initialized_redraw_clip = TRUE. This could result in a full,
unclipped stage redraw being reduced to a clipped redraw.
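The fix is simply to mark the clip state as initialized in that path
too (a sketch; bounding_redraw_clip is an illustrative member name,
initialized_redraw_clip is the member mentioned above):

if (clip == NULL)
  {
    /* NULL means redraw everything: record a degenerate clip, but
     * still remember that the clip state is initialized. */
    stage_glx->bounding_redraw_clip.width = 0;
    stage_glx->initialized_redraw_clip = TRUE;
    return;
  }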
The glx and egl(x) backends export some internal symbols. Hide these
symbols (using '_' prefix) to reduce ABI differentiation between the
glx and eglx flavours.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2267
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
DRM is available on more platforms than Linux (e.g. kFreeBSD), but
Clutter currently FTBFS there because there is no alternative to the
__linux__-guarded code (which should be guarded by HAVE_DRM instead).
Instead of copying the DRM data structures, we should use libdrm when
falling back to directly requesting to wait for the vblank.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2225
Based on a patch by: Emilio Pozuelo Monfort <pochu27@gmail.com>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Currently, we select input events and GLX events conditionally,
depending on whether the user has disabled event retrieval.
We should, instead, unconditionally select input events even with event
retrieval disabled because we need to guarantee that the Clutter
internal state is maintained when calling clutter_x11_handle_event()
without requiring applications or embedding toolkits to select events
themselves. If we did that, we'd have to document the events to be
selected, and also update applications and embedding toolkits each time
we added a new mask, or a new class of events - something that's clearly
not possible.
See:
http://bugzilla.clutter-project.org/show_bug.cgi?id=998
for the rationale of why we did conditional selection. It is now clear
that a compositor should clear out the input region, since it cannot
assume a perfectly clean slate coming from us.
See:
http://bugzilla.clutter-project.org/show_bug.cgi?id=2228
for an example of things that break if we do conditional event
selection on GLX events. In that specific case, the X11 server ≤ 1.8
always pushed GLX events on the queue, even without selecting them; this
has been fixed in the X11 server ≥ 1.9, which means that applications
like Mutter or toolkit integration libraries like Clutter-GTK would stop
working on recent Intel drivers providing the GLX_INTEL_swap_event
extension.
This change has been tested with Mutter and Clutter-GTK.
Moves the #ifdef __linux__ preprocessor check above the else statement,
avoiding the lack of an else block when __linux__ is not defined.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2212
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The pixmap handling of both of the texture pixmap actors in Clutter is
now removed and instead it just creates a CoglTexturePixmapX11. Both
actors are now equivalent so there is no need to choose between the
two.
When clipped redraws were first supported in Clutter a heuristic was
added to promote tall clipped redraws into full redraws due to a concern
that using glXCopySubBuffer for tall rectangles would block the GPU for
too long waiting for the vertical retrace to be in a suitable position so that
tearing isn't seen. We've so far been unable to measure any impact from
this blocking even with full height windows so we are removing the
arbitrary threshold of 300px that was originally "plucked out of thin
air".
http://bugzilla.o-hand.com/show_bug.cgi?id=2136
This adds a _cogl_bind_gl_texture_transient function that should be used
instead of glBindTexture, so that Cogl can maintain a consistent cache of
the textures bound to each texture unit and avoid some redundant
binding.
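The caching idea boils down to something like this (a minimal sketch
with hypothetical names and an illustrative unit count, not the actual
Cogl code):

#define MAX_TEXTURE_UNITS 8 /* illustrative */

static GLuint bound_texture[MAX_TEXTURE_UNITS];

static void
bind_gl_texture_transient (int unit, GLenum target, GLuint handle)
{
  if (bound_texture[unit] == handle)
    return; /* already bound on this unit: skip the redundant bind */

  glActiveTexture (GL_TEXTURE0 + unit);
  glBindTexture (target, handle);
  bound_texture[unit] = handle;
}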
If we have the GLX_SGI_video_sync extension then it's possible to always
keep track of the video sync counter each time we call glXSwapBuffers
or do a sub stage blit. This then allows us to avoid waiting before
issuing a blit if we can see that the counter has already progressed.
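With GLX_SGI_video_sync the check looks roughly like this (a sketch;
glXGetVideoSyncSGI and glXWaitVideoSyncSGI are the real extension entry
points, the variable names are illustrative):

static unsigned int last_video_sync; /* recorded after each swap/blit */

static void
wait_for_vblank_if_needed (void)
{
  unsigned int current;

  glXGetVideoSyncSGI (&current);
  if (current != last_video_sync)
    return; /* the retrace already passed; no need to wait */

  /* Wait for the next retrace, then record the new counter value. */
  glXWaitVideoSyncSGI (2, (current + 1) % 2, &current);
  last_video_sync = current;
}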
Also, since we expect that glXCopySubBuffer is synchronized to the
vblank, we don't need to use glFinish () in conjunction with the vblank
wait, since the vblank wait's only purpose is to add a delay.
The GLX_SGI_video_sync spec explicitly says that it's only supported for
direct contexts, so we don't set up the function pointers if
glXIsDirect () returns GL_FALSE.
Neither glXCopySubBuffer nor glBlitFramebuffer is integrated with the
swap interval of a framebuffer, which means that when we do partial stage
updates (as Mutter does in response to window damage) the blits aren't
throttled; applications that throw lots of damage events at the
compositor can effectively cause Clutter to run flat out, taking up all
the system resources by issuing more blits than can even be seen.
This patch now makes sure we use the GLX_SGI_video_sync or a
DRM_VBLANK_RELATIVE ioctl to throttle blits to the vblank frequency as
we do when using glXSwapBuffers.
Currently glXCopySubBufferMESA is used for sub stage redraws, but if a
driver does not support GLX_MESA_copy_sub_buffer we fall back to
redrawing the complete stage, which isn't really optimal.
So instead of falling back directly to complete redraws, try using
GL_EXT_framebuffer_blit to do the BACK to FRONT buffer copies.
http://bugzilla.openedhand.com/show_bug.cgi?id=2128
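The fallback copy amounts to something like this (a sketch using the
real EXT_framebuffer_blit entry points; x, y, width and height describe
the damaged rectangle, and note that GL coordinates are y-flipped
relative to X11 window coordinates):

glBindFramebufferEXT (GL_READ_FRAMEBUFFER_EXT, 0);
glBindFramebufferEXT (GL_DRAW_FRAMEBUFFER_EXT, 0);
glReadBuffer (GL_BACK);
glDrawBuffer (GL_FRONT);

/* Copy the damaged rectangle from the back to the front buffer. */
glBlitFramebufferEXT (x, y, x + width, y + height,
                      x, y, x + width, y + height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

glDrawBuffer (GL_BACK);
glFlush (); /* as noted above, blits need an explicit flush */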
While this is totally fine (None is 0L and, in the pointer context, will
be converted to the right internal NULL representation, which could be a
value with some bits set to 1), I believe it's clearer to use NULL
instead of None when we talk about pointers.