This is a lump commit that is fairly difficult to break down without
either breaking bisecting or breaking the test cases.
The new design for handling X11 event translation works this way:
- ClutterBackend::translate_event() has been added as the central
point used by a ClutterBackend implementation to translate a
native event into a ClutterEvent;
- ClutterEventTranslator is a private interface that should be
implemented by backend-specific objects, like stage
implementations and ClutterDeviceManager sub-classes, and
allows dealing with class-specific event translation;
- ClutterStageX11 implements EventTranslator, and deals with the
stage-relative X11 events coming from the X11 event source;
- ClutterStageGLX overrides EventTranslator, in order to
deal with the INTEL_GLX_swap_event extension, and it chains up
to the X11 default implementation;
- ClutterDeviceManagerX11 has been split into two separate classes,
one that deals with core and (optionally) XI1 events, and the
other that deals with XI2 events; the selection is done at run-time,
since the core+XI1 and XI2 mechanisms are mutually exclusive.
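As a rough illustration of the dispatch described above (the names and types here are hypothetical; the real interface is private to Clutter and may differ), the backend's translate_event() can simply walk a list of registered translators until one claims the native event:

    #include <glib.h>

    /* Hypothetical translator entry; in Clutter the interface is private. */
    typedef gboolean (* TranslateFunc) (gpointer translator,
                                        gpointer native_event,
                                        gpointer clutter_event);

    typedef struct {
      gpointer      translator;  /* e.g. a stage or a device manager */
      TranslateFunc translate;   /* its translate_event() implementation */
    } Translator;

    static gboolean
    backend_translate_event (GList    *translators,
                             gpointer  native_event,
                             gpointer  clutter_event)
    {
      GList *l;

      for (l = translators; l != NULL; l = l->next)
        {
          Translator *t = l->data;

          /* the first translator that handles the event wins */
          if (t->translate (t->translator, native_event, clutter_event))
            return TRUE;
        }

      return FALSE;
    }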
All the other backends we officially support still use their own
custom event source and translation function, but the end goal is to
migrate them to the translate_event() virtual function, and have the
event source be a shared part of Clutter core.
ClutterX11TexturePixmap watches for configure events to tell when it
needs to name a new pixmap for the window. However, ConfigureEvents
occur on moves in addition to resizes, and doing round trips and
naming new pixmaps every time a window is moved is a real performance
killer.
Add clutter_x11_texture_pixmap_sync_window_internal() that takes the
size/position of the window as arguments rather than always calling
XGetWindowAttributes. This allows us to bypass all work other than
notifying the window-x/window-y properties when we get a ConfigureEvent
for a move.
The last received width/height is saved to allow us to also omit
XGetWindowAttributes on MapNotify events.
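A minimal sketch of the idea (the field and function names are illustrative, not the actual ClutterX11TexturePixmap internals): the caller passes in the geometry it already knows, so a pure move never hits the X server:

    /* Sketch only: a pure move updates the cached position and notifies
     * window-x/window-y; only a size change names a new pixmap. */
    typedef struct {
      int x, y;
      unsigned int width, height;
    } WindowGeometry;

    static void
    sync_window_internal (WindowGeometry *cached,
                          int x, int y,
                          unsigned int width, unsigned int height)
    {
      cached->x = x;
      cached->y = y;
      /* notify the window-x and window-y properties here */

      if (width == cached->width && height == cached->height)
        return;  /* just a move: keep the current pixmap, no round trip */

      cached->width = width;
      cached->height = height;
      /* size changed: call XCompositeNameWindowPixmap() here and
       * rebuild the texture */
    }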
The public clutter_x11_texture_pixmap_sync_window() becomes a bit less
efficient since we no longer combine the roundtrips for
XGetWindowAttributes() and XCompositeNameWindowPixmap(), but it appears
to have no callers in current publicly available code.
Several FIXMEs are added for areas where there are still weird things
going on in the code or improvements could be made.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2356
When we handle Expose events we try to queue a clipped redraw of the
stage, but for some reason we were also redundantly calling
clutter_actor_queue_redraw for the stage, which would negate the request
to queue a clipped redraw.
The "watch" function functionality in xsettings-client.c is designed
for setups like GDK where filters are per-window. If we are going
to pass all events to _clutter_xsettings_client_process_event()
anyway, we can just pass in NULL for watch.
This avoids a nasty infinite loop where an event would get processed
triggering removing a filter and adding a new filter, which would
immediately run and remove a filter and add another, and so on
ad infinitum.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2415
* private-cleanup:
Add copyright notices
Clean up clutter-private.h/6
Clean up clutter-private.h/5
Clean up clutter-private.h/4
Clean up clutter-private.h/3
Clean up clutter-private.h/2
Clean up clutter-private.h/1
There was an array whose length was defined by a static const int
variable. GCC seems to consider this a variable-length array so it
will cause warnings now that -Wvla is enabled. We might as well make
this constant a #define instead to avoid the warning.
Move the private Backend API to a separate header.
This also allows us to finally move the class vtable and instance
structure to a separate file and plug the visibility hole that left
the Backend class bare for everyone to poke into.
Since we allow compiling Clutter without the XComposite extension
available, we need to protect the calls to the XComposite API with
the guards provided by the configure script.
One of the later changes made on the paint volume branch before merging
with master was to make paint volumes opt in only since we couldn't make
any safe assumptions about how custom actors may constrain their
painting. We added very conservative implementations for the existing
Clutter actors - including for ClutterTexture which
ClutterX11TexturePixmap is a sub-class of - but we were conservative to
the extent of explicitly checking the GType of the actor so we would
avoid making any assumptions about sub-classes. The upshot was that we
neglected to implement the get_paint_volume vfunc for
ClutterX11TexturePixmap.
This patch provides an implementation that simply reports the actor's
allocation as its paint volume. Also unlike for other core actors it
doesn't explicitly check the GType so we are assuming that all existing
sub-classes of ClutterX11TexturePixmap constrain their drawing to the
actor's transformed allocation. If anyone does want to draw outside the
allocation in future sub-classes, then they should also provide an
updated get_paint_volume implementation.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2349
This is a workaround for a race condition when resizing windows while
there are in-flight glXCopySubBuffer blits happening.
The problem stems from the fact that rectangles for the blits are
described relative to the bottom left of the window, and because we can't
guarantee control over the X window gravity used when resizing, the
gravity is typically NorthWest, not SouthWest.
This means if you grow a window vertically the server will make sure to
place the old contents of the window at the top-left/north-west of your
new larger window, but that may happen asynchronously to GLX preparing to
do a blit specified relative to the bottom-left/south-west of the window
(based on the old smaller window geometry).
When the GLX issued blit finally happens relative to the new bottom of
your window, the destination will have shifted relative to the top-left
where all the pixels you care about are, so it will result in a nasty
artefact making resizing look very ugly!
We can't currently fix this completely, in part because the window
manager tends to trample any gravity we might set. This workaround
instead simply disables blits for a while if we are notified of any
resizes happening; so if the user is resizing a window via the window
manager they may see an artefact for one frame, but then we will
fall back to redrawing the full stage until the cooling-off period is
over.
Instead of triggering a full stage redraw for Expose events we use the
geometry of the exposed region given in the event to queue a clipped
redraw of the stage.
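Sketch of the idea, with queue_clipped_redraw() standing in for the internal _clutter_actor_queue_redraw_with_clip() entry point (a hypothetical stub, not the real code):

    #include <X11/Xlib.h>

    /* hypothetical stand-in for the private clipped-redraw API */
    static void
    queue_clipped_redraw (int x, int y, int width, int height)
    {
      /* would call _clutter_actor_queue_redraw_with_clip() internally */
    }

    static void
    handle_expose (XExposeEvent *expose)
    {
      /* redraw only the exposed region instead of the whole stage */
      queue_clipped_redraw (expose->x, expose->y,
                            expose->width, expose->height);
    }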
There is an internal _clutter_actor_queue_redraw_with_clip API that gets
used for texture-from-pixmap to minimize what we redraw in response to
Damage events. It was previously working in terms of a ClutterActorBox,
but it has now been changed so an actor can queue a redraw of a volume
instead.
The plan is that clutter_actor_queue_redraw will start to transparently
use _clutter_actor_queue_redraw_with_clip when it can determine a paint
volume for the actor.
*** WARNING: THIS COMMIT CHANGES THE BUILD ***
Do not recurse into the backend directories to build private, internal
libraries.
We only recurse from clutter/ into the cogl sub-directory; from there,
we don't recurse any further. All the backend-specific code in Cogl and
Clutter is compiled conditionally depending on the macros defined by the
configure script.
We still recurse from the top-level directory into doc, clutter and
tests, because gtk-doc and tests do not deal nicely with non-recursive
layouts.
This change makes Clutter compile slightly faster, and cleans up the
build system, especially when dealing with introspection data.
Ideally, we also want to make Cogl part of the top-level build, so that
we can finally drop the sed trick to change the shared library from the
GIR before compiling it.
Currently disabled:
‣ OSX backend
‣ Fruity backend
Currently enabled but untested:
‣ EGL backend
‣ Windows backend
When building actor relative transforms, instead of using the matrix
stack to combine transformations and making assumptions about what is
currently on the stack we now just explicitly initialize an identity
matrix and apply transforms to that.
This removes the full_vertex_t typedef for internal transformation code
and we just use ClutterVertex.
ClutterStage now implements apply_transform like any other actor,
and the code we had in _cogl_setup_viewport has been moved to the
stage's apply_transform instead.
ClutterStage now tracks an explicit projection matrix and viewport
geometry. The projection matrix is derived from the perspective whenever
that changes, and the viewport is updated when the stage gets a new
allocation. The SYNC_MATRICES mechanism has been removed in favour of
_clutter_stage_dirty_viewport/projection() APIs that get used when
switching between multiple stages to ensure cogl has the latest
information about the onscreen framebuffer.
I think this is what commit 2cf1405506 intended to do since it
specifically mentioned cleaning up the trap in
clutter_x11_texture_pixmap_set_pixmap, but although it moved the untrap
to only be done in the case where Pixmap != None it left the position of
the trap itself unchanged. This meant the error trapping wouldn't be
balanced if pixmap == None since the untrap wouldn't be done. We now
only trap and untrap around the XGetGeometry call done when pixmap !=
None.
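The balanced pattern looks roughly like this, using the public clutter_x11_trap_x_errors()/clutter_x11_untrap_x_errors() helpers (a sketch, not the literal patch):

    #include <X11/Xlib.h>
    #include <clutter/x11/clutter-x11.h>

    static gboolean
    query_pixmap_geometry (Display *dpy, Pixmap pixmap,
                           unsigned int *width, unsigned int *height)
    {
      Window root;
      int x, y;
      unsigned int border, depth;
      Status ok;

      if (pixmap == None)
        return FALSE;  /* nothing to query, nothing to trap */

      clutter_x11_trap_x_errors ();
      ok = XGetGeometry (dpy, pixmap, &root, &x, &y,
                         width, height, &border, &depth);
      if (clutter_x11_untrap_x_errors () != 0 || !ok)
        return FALSE;  /* the pixmap went away underneath us */

      return TRUE;
    }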
http://bugzilla.clutter-project.org/show_bug.cgi?id=2303
With currently distributed versions of Mesa, calling XFreePixmap()
before glXDestroyPixmap() will cause an X error from DRI. So, we
need to make sure that we get rid of the CoglTexturePixmapX11 before
we XFreePixmap().
clutter_x11_texture_pixmap_dispose(): Call
clutter_x11_texture_pixmap_set_pixmap() instead of using XFreePixmap
directly so that we leverage the texture-clearing hack and destroy
things in the right order.
clutter_x11_texture_pixmap_set_pixmap(): Don't do a pointless roundtrip
and trap a pointless error when setting pixmap to None.
clutter_x11_texture_pixmap_set_pixmap(): Free damage resources when
we are setting Pixmap to None.
clutter_x11_texture_pixmap_set_window(): When setting a new window
or setting the window to None, immediately call
clutter_x11_texture_pixmap_set_pixmap(). This means that set_window(None)
immediately will free any referenced resources related to the window.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2303
This adds a wrapper macro to clutter-private that will use
g_object_notify_by_pspec if it's compiled against a version of GLib
that is sufficiently new. Otherwise it will notify by the property
name as before by extracting the name from the pspec. The objects can
then store a static array of GParamSpecs and notify using those as
suggested in the documentation for g_object_notify_by_pspec.
Note that the name of the variable used for storing the array of
GParamSpecs is obj_props instead of properties as used in the
documentation because some places in Clutter use 'properties' as the
name of a local variable.
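A sketch of such a wrapper (the macro name here is illustrative; g_object_notify_by_pspec() itself was added in GLib 2.26):

    #include <glib-object.h>

    #if GLIB_CHECK_VERSION (2, 26, 0)
    #define _clutter_notify_by_pspec(obj, pspec) \
      g_object_notify_by_pspec ((obj), (pspec))
    #else
    #define _clutter_notify_by_pspec(obj, pspec) \
      g_object_notify ((obj), (pspec)->name)
    #endif

    /* each class keeps its pspecs in a static array... */
    enum { PROP_0, PROP_FOO, PROP_LAST };
    static GParamSpec *obj_props[PROP_LAST];

    /* ...and notifies through it:
     *   _clutter_notify_by_pspec (gobject, obj_props[PROP_FOO]);
     */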
Most of the classes in Clutter have been converted using the script in
the bug report. Some classes have not been modified even though the
script picked them up as described here:
json-generator:
We probably don't want to modify the internal copy of JSON
behaviour-depth:
rectangle:
score:
stage-manager:
These aren't using the separate GParamSpec* variable style.
blur-effect:
win32/device-manager:
Don't actually define any properties even though they have the enum.
box-layout:
flow-layout:
Have some per-child properties that don't work automatically with
the script.
clutter-model:
The script gets confused with ClutterModelIter
stage:
Script gets confused because PROP_USER_RESIZE doesn't match
"user-resizable"
test-layout:
Don't really want to modify the tests
http://bugzilla.clutter-project.org/show_bug.cgi?id=2150
Mention the XFixes extension for compositors using input regions to let
events "pass through" the stage.
Thanks to: Robert Bragg <robert@linux.intel.com>
When we disable the event retrieval, we now just disable the X11 event
source, not the event selection. We need to make that clear to
applications, especially compositors, which might expect complete
control over the selection.
XGetGeometry is a great piece of API, since it gets a lot of stuff that
is moderately *not* geometry related - the root window and the depth
being two.
Since we have multiple conditions depending on the result of that call
we should split them up depending on the actual error - and each of them
should have a separate error message. This makes debugging simpler.
If we have XKB support then we should be using it to turn on the
detectable auto-repeat; this allows avoiding the peeking trick
that emulates it inside the event handling code.
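With XKB this boils down to a single call at backend initialization (sketch):

    #include <X11/XKBlib.h>

    static void
    enable_detectable_autorepeat (Display *dpy)
    {
      Bool supported = False;

      /* with detectable auto-repeat, a held key delivers KeyPress
       * events only, plus one final KeyRelease at the end */
      XkbSetDetectableAutoRepeat (dpy, True, &supported);

      if (!supported)
        {
          /* fall back to the KeyRelease-peeking emulation */
        }
    }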
Now that we have private, per-event platform data, we can start putting
it to good use. The first, most simple use is to store the key group
given the event's modifiers. Since we assume a modern X11, we use XKB
to retrieve it, or we simply fall back to 0 by default.
The data is exposed as a ClutterX11-specific function, within the
sanctioned clutter_x11_* namespace.
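Retrieving the group is cheap, since XKB encodes it in the core event state; a sketch of the fallback logic:

    #include <X11/XKBlib.h>

    static int
    key_event_group (XKeyEvent *xkey, Bool have_xkb)
    {
      if (have_xkb)
        return XkbGroupForCoreState (xkey->state);

      return 0;  /* no XKB: fall back to the default group */
    }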
We might want pieces higher in the stack (like Mx) to handle XSettings
events as well, and swallowing them by removing them from the event
queue would make that impossible.
A typo in clutter-event.c meant that the wrong struct location could be
used for the input device of key events. Also, a typo in the X11 event
code meant that key-presses would come from the pointer device (releases
would still come from the keyboard device).
The pixmap handling of both of the texture pixmap actors in Clutter is
now removed and instead it just creates a CoglTexturePixmapX11. Both
actors are now equivalent so there is no need to choose between the
two.
The Clutter X11 backend now passes all events through
_cogl_xlib_handle_event. This function can now internally be hooked
with _cogl_xlib_add_filter. These are added to a list of callbacks
which are all called in turn by _cogl_xlib_handle_event. This is
intended to be used internally in Cogl by any parts that need to see
Xlib events.
Cogl now also has an internally exposed function to set a pointer to
the Xlib display. This is stored in a global variable. The Clutter X11
backend sets this.
_cogl_xlib_handle_event and _cogl_xlib_set_display can be removed once
Cogl gains a proper window system abstraction.
Use the XSETTINGS machinery to get notification from foreign
environments about settings that might interest Clutter itself - namely:
the default font name, the font DPI, and the Xft font options that can
be mapped on cairo_font_options_t.
The marshallers we use for the signals are declared in a private header,
and it stands to reason that they should also be hidden in the shared
object by using the common '_' prefix. We are also using some direct
g_cclosure_marshal_* symbols from GLib, instead of consistently using
the clutter_marshal_* symbols.
While this is totally fine (None is 0L and, in the pointer context, will
be converted to the right internal NULL representation, which could be a
value with some bits set to 1), I believe it's clearer to use NULL instead
of None when we talk about pointers.
While this is totally fine (0 in the pointer context will be converted
to the right internal NULL representation, which could be a value with
some bits set to 1), I believe it's clearer to use NULL in the pointer
context.
It seems that, in most cases, it's more an oversight than a deliberate
choice to use FALSE/0 as NULL, e.g. copying a _COGL_GET_CONTEXT (ctx, 0)
or a g_return_val_if_fail (cond, 0) from a function returning a
gboolean.
Whether events come from the main loop source or from
clutter_x11_handle_event(), we need to feed them to the backend
virtual handle_event function. This fixes problems with clients
using clutter_x11_handle_event() hanging because
GLXBufferSwapComplete events aren't received.
http://bugzilla.openedhand.com/show_bug.cgi?id=2101
ClutterX11TexturePixmap calls get_allocation_box() when queueing a
clipped redraw. If the allocation is not valid, and if we queue a
lot of redraws in response to a series of damage events, the net
result is that we spend all our time in a re-layout. We can
short-circuit this by checking if the actor has a valid allocation, and
if not, just queue a redraw - the actor will be allocated by the time it
is going to be painted.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» queuing multiple clipped redraws will result in the bounding box of
all the clip rectangles being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» Note: no empirical data was used to come up with this threshold so
we may need to tune this.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted you now also
need to override the signal handler and queue your own redraw.
Technically this is a semantic break, but it's assumed that no one
is currently doing this.
This still leaves a few unsolved issues with regards to optimizing sub
stage redraws that need to be addressed in further work so this can only
be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. It is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor because actors have no way to report a "paint box". (remember
actors can draw outside their allocation and actors with depth may
also be projected outside of their allocation)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
Since using addresses that might change is something that the FSF
finally acknowledges as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI
for getting the license in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
There is no need for us to check for low-level functions and header
files, especially since we haven't been checking the results until
now. This makes cross-compiling slightly more bearable.
The installed _HEADERS should be the public ones and the enumeration
types; repeating clutter-x11-texture-pixmap.h breaks with automake 1.11
and doesn't strictly make any sense.
http://bugzilla.openedhand.com/show_bug.cgi?id=2002
The DeviceManager class should be abstract in Clutter, and implemented
by each backend, as different backends will have different ways to
detect, initialize and list devices; the X11 backend alone has *two*
ways of dealing with devices.
This commit makes DeviceManager an abstract class and delegates the
device initialization and enumeration to per-backend sub-classes.
The backend singleton is, obviously, responsible for creating the
device manager.
The X11 and Win32 backends have been updated to the new layout; the
Win32 backend has been updated blindly, so it might require additional
testing.
ConfigureNotify is delivered on window movements too, but there is no
need to queue a relayout on these as the viewport hasn't changed size.
Check for the window actually changing size on ConfigureNotify before
queueing a relayout.
This fixes laggy window movement when moving a window in response to
Clutter mouse motion events.
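The check is a straightforward comparison against the last known stage size (sketch):

    #include <X11/Xlib.h>

    static void
    handle_configure_notify (XConfigureEvent *xevent,
                             int *stage_width, int *stage_height)
    {
      if (xevent->width == *stage_width &&
          xevent->height == *stage_height)
        return;  /* just a move: the viewport is unchanged */

      *stage_width = xevent->width;
      *stage_height = xevent->height;
      /* the size really changed: queue a relayout here */
    }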
As well as manually setting the geometry size, we needed to queue a
relayout. This is what the ConfigureNotify handler would normally do,
but we don't get this event when using a foreign window (obviously).
This should fix resizing in things like gtk-clutter.
If we get into the resize function and it's a foreign window, set the
geometry size so that the allocate will set the backend size and call
glViewport.
Setting/unsetting fullscreen on a mapped or unmapped window now works
correctly.
If you unfullscreen a window that was initially full-screened, it will
unset the fullscreen hint and the WM will likely push the size down to
the largest valid size.
If the window was previously un-fullscreened, Clutter will restore the
previous size.
Fullscreening also now works if the WM switches the hint without the
application's knowledge (as happens when you resize a window to the size
of the screen, for example, with stock metacity).
When we resize, we relied on the stage's allocate to re-initialise the
GL viewport. Unfortunately, if we resized within Clutter, the new size
was cached before the window was actually resized, so glViewport wasn't
being called after resizing (some of the time, it's a race condition).
Change the way resizing works slightly so that we only resize when the
geometry size doesn't match our preferred size, and queue a relayout on
ConfigureNotify so the glViewport gets called.
Also change window creation slightly so that setting the size of a
window before it's realized works correctly.
If your OpenGL driver supports GLX_INTEL_swap_event that means when
glXSwapBuffers is called it returns immediately and an XEvent is sent when
the actual swap has finished.
Clutter can use the events that notify swap completion as a means to
throttle rendering in the master clock without blocking the CPU and so it
should help improve the performance of CPU-bound applications.
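Whether the extension is usable can be checked against the GLX extensions string (simplified sketch; a robust version would match whole tokens rather than substrings):

    #include <GL/glx.h>
    #include <string.h>

    static Bool
    have_intel_swap_event (Display *dpy, int screen)
    {
      const char *exts = glXQueryExtensionsString (dpy, screen);

      return exts != NULL &&
             strstr (exts, "GLX_INTEL_swap_event") != NULL;
    }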
We want to set the default size without triggering the layout machinery,
so change the window creation process slightly so we start with a
640x480 window.
Due to the way the new sizing works, ClutterStage must set its size in
init (to maintain old behaviour) and the properties on the X11 stage
must be initialised to 1x1 so that it actually goes ahead with the
resize.
Fixes stages that aren't user resizable and have no size set from
appearing at 1x1.
Calling clutter_actor_set_size in response to ConfigureNotify makes
setting the size of the stage racy - the most common result of which
seems to be that you can't set the stage dimensions to anything less
than 640x480.
Instead, add a first_allocation bit to the private structure of the X11
stage and force the first resize (necessary or the default stage will be
a 1x1 window).
Now that we have a minimum size getter on the stage object, change
get_geometry to actually always return the geometry. This fixes stages
that are set as user-resizable appearing at 1x1 size.
This will need changing in other back-ends too.
The extension keyboard support in XInput 1.x is hopelessly broken.
Nevertheless, it's possible to use some bits of it, as we prefer the
core keyboard events to the XInput events, thus at least having proper
handling for X11 key events on the Stage window.
The XI 1.0 layer is complementary to the X11 core devices handling; this
means that core events will still be emitted for the core pointer and
keyboard devices, and that secondary (floating) devices should be
handled on top of that.
Thus, the XI event handling code should be executed (if explicitly
compiled in and enabled) if the core device events have not been parsed.
Note: this is going away with XI2, which completely replaces both core and
XI1 events.
Even with XInput support we should always register core devices. This
allows us to handle enter and leave events correctly on the Stage and
to have working XInput 1.x support in Clutter.
Instead of overloading the device id of 0 and 1 we should treat the core
devices as special, and have a pointer inside the X11 backend singleton
structure, for fast access.
If the user presses a button on a pointer device and then moves out of
the Stage, X11 will emit the following events:
LeaveNotify ➔ MotionNotify ... ➔ ButtonRelease ➔ LeaveNotify
The second LeaveNotify differs from the first by the state field.
Unfortunately, ClutterCrossingEvent doesn't have a modifier_state field
like other events, so we cannot provide a way for programmatically
distinguishing them from a Clutter perspective. This is also an X11-ism
we might not even want to replicate on every backend with sane
enter/leave semantics.
For this reason we should check inside the X11 event processing if the
pointer device has already left the Stage and ignore the second
LeaveNotify.
The InputDevice object stores pointer coordinates, state, stage and
the actor under the cursor, so if the current backend provides us with
one attached to the Event structure then we want the InputDevice itself
to update its state and give us the ClutterActor underneath the
pointer's cursor.
Even when we are not using XInput we now have fallback devices; the
X11 backend should always assign the default devices when translating
the X events to Clutter events.
Use the device manager to store the input devices. Also, provide
two fallback devices when initializing the X11 backend: device 0
for the pointer and device 1 for the keyboard.
Since asking for ARGB by default is still somewhat experimental on X11,
and some toolkits or complex widgets (like WebKit) still do not like
dealing with ARGB visuals, we should switch back to RGB by default - now
that at least we know it works.
For applications (and toolkit integration libraries) that want to enable
the ClutterStage:use-alpha property there is a new function:
void clutter_x11_set_use_argb_visual (gboolean use_argb);
which needs to be called before clutter_init().
The CLUTTER_DISABLE_ARGB_VISUAL environment variable can still be used
to force this value off at run-time.
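For example, a toolkit that wants an ARGB stage would do something like this (sketch):

    #include <clutter/clutter.h>
    #include <clutter/x11/clutter-x11.h>

    int
    main (int argc, char *argv[])
    {
      /* must be called before clutter_init() */
      clutter_x11_set_use_argb_visual (TRUE);

      if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
        return 1;

      /* ... create a stage and enable ClutterStage:use-alpha ... */

      clutter_main ();

      return 0;
    }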
Destroy the dummy XImage we create even on success.
http://bugzilla.openedhand.com/show_bug.cgi?id=1918
Based on a patch by: Carlos Martín Nieto <carlos@cmartin.tk>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
* stage-use-alpha:
tests: Use accessor methods for :use-alpha
stage: Add accessors for :use-alpha
tests: Allow setting the stage opacity in test-paint-wrapper
stage: Premultiply the stage color
stage: Composite the opacity with the alpha channel
glx: Always request an ARGB visual
stage: Add :use-alpha property
materials: Get the right blend function for alpha
Old-style X11 terminals require that even modern X11 send KeyPress
and KeyRelease pairs when auto-repeating. For this reason modern(-ish)
API like XKB has a way to detect auto-repeat and do a single KeyRelease
at the end of a KeyPress sequence.
The newly added check emulates XKB's detectable auto-repeat by peeking
the next event after a KeyRelease and checking if it's a KeyPress for
the same key and timestamp - and then ignoring the KeyRelease if it
matches.
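Roughly, the check looks like this (a sketch; the real code lives in the X11 event translation path):

    #include <X11/Xlib.h>

    static Bool
    key_release_is_autorepeat (Display *dpy, XKeyEvent *release)
    {
      XEvent next;

      /* XPeekEvent() would block on an empty queue */
      if (XPending (dpy) == 0)
        return False;

      XPeekEvent (dpy, &next);

      return next.type == KeyPress &&
             next.xkey.keycode == release->keycode &&
             next.xkey.time == release->time;
    }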
If a Stage has been set to use a foreign Window then Clutter should not
be managing it; calling XWithdrawWindow and XMapWindow should be
reserved to the windows we manage ourselves.
When requesting the GLXFBConfig for creating the GLX context, we should
always request one that links to an ARGB visual instead of a plain RGB
one.
By using an ARGB visual we allow the ClutterStage:use-alpha property to
work as intended when running Clutter under a compositing manager.
The default behaviour of requesting an ARGB visual can be disabled by
setting the CLUTTER_DISABLE_ARGB_VISUAL environment variable.
This ensures that glViewport is called before the first stage paint.
Previously _clutter_stage_maybe_setup_viewport (which is done before we
start painting) was bailing out without calling cogl_setup_viewport because
the CLUTTER_STAGE_IN_RESIZE flag may be set if the stage was resized before
the first paint. (NB: The CLUTTER_STAGE_IN_RESIZE flag isn't removed until
we get an explicit event back from the X server since the window manager may
choose to deny/alter the resize.)
We now special-case the first resize - where the viewport hasn't previously
been initialized and use the requested geometry to initialize the
glViewport without waiting for a reply from the server.
As part of an incremental process to have Cogl be a standalone project we
want to re-consider how we organise the Cogl source code.
Currently this is the structure I'm aiming for:
cogl/
    cogl/
        <put common source here>
        winsys/
            cogl-glx.c
            cogl-wgl.c
        driver/
            gl/
            gles/
        os/ ?
        utils/
            cogl-fixed
            cogl-matrix-stack?
            cogl-journal?
            cogl-primitives?
    pango/
The new winsys component is a starting point for migrating window system
code (i.e. x11, glx, wgl, osx, egl, etc.) from Clutter to Cogl.
The utils/ and pango/ directories aren't added by this commit, but they are
noted because I plan to add them soon.
Overview of the planned structure:
* The winsys/ API is the API that binds OpenGL to a specific window system,
be that X11 or win32 etc. Examples are glx, wgl and egl. Much of the logic
under clutter/{glx,osx,win32 etc} should migrate here.
* Note there is also the idea of a winsys-base that may represent a window
system for which there are multiple winsys APIs. An example of this is
x11, since glx and egl may both be used with x11. (currently only Clutter
has the idea of a winsys-base)
* The driver/ represents a specific variant of OpenGL. Currently we have "gl"
representing OpenGL 1.4-2.1 (mostly fixed function) and "gles" representing
GLES 1.1 (fixed function) and 2.0 (fully shader based)
* Everything under cogl/ should fundamentally be supporting access to the
GPU. Essentially Cogl's most basic requirement is to provide a nice GPU
Graphics API and drawing a line between this and the utility functionality
we add to support Clutter should help keep this lean and maintainable.
* Code under utils/ as suggested builds on cogl/ adding more convenient
APIs or mechanism to optimize special cases. Broadly speaking you can
compare cogl/ to OpenGL and utils/ to GLU.
* clutter/pango will be moved to clutter/cogl/pango
How some of the internal configure.ac/pkg-config terminology has changed:
backendextra -> CLUTTER_WINSYS_BASE # e.g. "x11"
backendextralib -> CLUTTER_WINSYS_BASE_LIB # e.g. "x11/libclutter-x11.la"
clutterbackend -> {CLUTTER,COGL}_WINSYS # e.g. "glx"
CLUTTER_FLAVOUR -> {CLUTTER,COGL}_WINSYS
clutterbackendlib -> CLUTTER_WINSYS_LIB
CLUTTER_COGL -> COGL_DRIVER # e.g. "gl"
Note: The CLUTTER_FLAVOUR and CLUTTER_COGL defines are kept for apps
As the first thing to take advantage of the new winsys component in Cogl;
cogl_get_proc_address() has been moved from cogl/{gl,gles}/cogl.c into
cogl/common/cogl.c and this common implementation first tries
_cogl_winsys_get_proc_address() but if that fails then it falls back to
gmodule.
The only backend that tried to implement offscreen stages was the GLX backend
and even this has apparently been broken for some time without anyone noticing.
The property still remains, and since it already clearly states that
it may not work, I don't expect anyone to notice.
This simplifies quite a bit of the GLX code which is very desireable from the
POV that we want to start migrating window system code down to Cogl and the
simpler the code is, the more straightforward this work will be.
In the future when Cogl has a nicely designed API for framebuffer objects then
re-implementing offscreen stages cleanly for *all* backends should be quite
straightforward.
The user-initiated resize is conflicting with the allocated size. This
happens because we change the size of the stage's X Window behind the
back of the size allocation machinery.
Instead, we should change the size of the actor whenever we receive a
ConfigureNotify event to reflect the new size of the actor.
We force the redraw before mapping, in the hope that when a compositing
window manager maps the window it will have its contents ready; that is
not going to work: the solution for this problem requires the implementation
of a protocol for compositors, and not a hack.
Moreover, painting before mapping will cause a paint with the wrong
GL viewport size, which is the wrong thing to do on GLX.
Instead of using ClutterActor for the base class of the Stage
implementation we should extend the StageWindow interface with
the required bits (geometry, realization) and use a simple object
class.
This requires a wee bit of changes across Backend, Stage and
StageWindow, even though it's mostly re-shuffling.
First of all, StageWindow should get new virtual functions:
* geometry:
- resize()
- get_geometry()
* realization
- realize()
- unrealize()
This covers all the bits that we use from ClutterActor currently
inside the stage implementations.
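The extended interface would then look roughly like this (a hypothetical layout; the real private header may differ):

    #include <glib-object.h>
    #include <clutter/clutter.h>

    typedef struct _ClutterStageWindowIface
    {
      GTypeInterface parent_iface;

      /* geometry */
      void     (* resize)       (gpointer         stage_window,
                                 gint             width,
                                 gint             height);
      void     (* get_geometry) (gpointer         stage_window,
                                 ClutterGeometry *geometry);

      /* realization */
      gboolean (* realize)      (gpointer stage_window);
      void     (* unrealize)    (gpointer stage_window);
    } ClutterStageWindowIface;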
The ClutterBackend::create_stage() virtual function should create
a StageWindow, and not an Actor (it should always have been; the
fact that it returned an Actor was a leak of the black magic going
on underneath). Since we never guaranteed ABI compatibility for
the Backend class, this is not a problem.
Internally to ClutterStage we can finally drop the shenanigans of
setting/unsetting actor flags on the implementation: if the realization
succeeds, for instance, we set the REALIZED flag on the Stage and
we're done.
As an initial proof of concept, the X11 and GLX stage implementations
have been ported to the New World Order(tm) and show no regressions.
Clutter advertises itself on X11 as implementing the _NET_WM_PING protocol,
which is needed to be able to detect frozen applications; this allows us to
stop the destruction of the stage by blocking the CLUTTER_DELETE event and
wait for user feedback without the Window Manager thinking that the app has
gone unresponsive.
In order to implement the _NET_WM_PING protocol properly, though, we need
to add the _NET_WM_PID property on the Stage window, since the EWMH states:
[_NET_WM_PID] MAY be used by the Window Manager to kill windows which
do not respond to the _NET_WM_PING protocol.
Meaning that an unresponsive Clutter application might not be killable by
the window manager.
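Setting the property is a one-liner with Xlib (sketch):

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>
    #include <unistd.h>

    static void
    set_net_wm_pid (Display *dpy, Window stage_win)
    {
      /* data for 32-bit properties is passed as an array of long */
      long pid = (long) getpid ();

      XChangeProperty (dpy, stage_win,
                       XInternAtom (dpy, "_NET_WM_PID", False),
                       XA_CARDINAL, 32, PropModeReplace,
                       (unsigned char *) &pid, 1);
    }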
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1748
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The fix for bug 1750 inside commit b190448e made Clutter-GTK spew
BadWindow errors. The reason for that is that we call XDestroyWindow()
without checking if the old Window is None; this happens if we call
clutter_x11_set_stage_foreign() on a new ClutterStage before it has
been realized.
Since Clutter-GTK no longer needs to realize the Stage it is going to
embed (the only reason for that was to obtain a proper Visual, but now
there's ClutterBackendX11 API for that), the set_stage_foreign()
call is effectively setting the StageX11 Window for the first time.
When we replace the stage Window using a foreign one we also need to
destroy the Window we created, if needed, to avoid leaking resources
all around.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1750
If clutter_x11_texture_pixmap_set_window() was called after
clutter_x11_texture_pixmap_set_automatic(), then the Damage object would
not be properly created so updates to the window were ignored.
Refactor creation of the damage object to a separate function, and
call it from clutter_x11_texture_pixmap_set_window() and clutter_x11_texture_pixmap_set_pixmap()
as appropriate. Addition and removal of the filter function is made
conditional on priv->damage to make free_damage_resources() cleanly
idempotent.
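The idempotent teardown then looks roughly like this (sketch; the filter handling is elided):

    #include <X11/extensions/Xdamage.h>

    static void
    free_damage_resources (Display *dpy, Damage *damage)
    {
      if (*damage == None)
        return;  /* already freed: calling this twice is harmless */

      XDamageDestroy (dpy, *damage);
      *damage = None;

      /* remove the event filter function here as well */
    }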
See: http://bugzilla.gnome.org/show_bug.cgi?id=587189 for the original
bug report.
http://bugzilla.openedhand.com/show_bug.cgi?id=1710
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
A lot of applications change the size of the stage from the default
before the stage is initially shown. The size change won't take effect
until the first allocation run. However we want the window to be at
the correct size when we first map it so we should force an allocation
run before showing the stage.
There was an explicit call to XResizeWindow in
clutter_stage_x11_show. This is not needed anymore because
XResizeWindow will already have been called by the allocate method.
Updating the WM hints on the stage window short-circuits if the stage
is in WITHDRAWN state, so we need to move the update_wm_hints() call
after the flag has been unset.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The race we were experiencing in the X11 backends is apparently
back after the fix in commit 00a3c698.
This time, just delaying the setting of the SYNC_MATRICES flag
is not enough, so we can resume the use of a STAGE_IN_RESIZE
private flag.
This should also fix bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1668
Instead of using a specific function to check whether the X
server supports the XInput extension we can use the generic
Xlib function XQueryExtension(). This cuts down the extra
checks inside the configure.ac and simplifies the code inside
clutter_x11_register_xinput().
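The runtime check reduces to (sketch):

    #include <X11/Xlib.h>

    static Bool
    have_xinput (Display *dpy)
    {
      int opcode, event_base, error_base;

      /* "XInputExtension" is the name the XInput extension registers */
      return XQueryExtension (dpy, "XInputExtension",
                              &opcode, &event_base, &error_base);
    }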
Currently, XInput support requires a function call. In order to
make it easier for people to test it, we can also add a command
line switch that moves the pointer device detection and handling
to XInput. This should ensure that, at least for people building
Clutter with --enable-xinput, applications can be easily migrated
and regressions can be caught.
The StageManager singleton instance is already kept around
by the clutter_stage_manager_get_default() function; there is
no need to have it inside the main Clutter context as well.
Instead of using _clutter_context_get_default() and checking the
is_initialized flag, we should use the newly added private function
that does not cause side effects, especially for functions that have
to be called before any other Clutter function.
The input device API is split halfway through the backends in a very
weird way. The data structures are private, as they should be, but most
of the information should be available in the main API since it's
generic enough.
The device type enumeration, for instance, should be common across
every backend; the accessors for device type and id should live in the
core API. The internal API should always use ClutterInputDevice and
not the private X11 implementation when dealing with public structures
like ClutterEvent.
By adding accessors for the device type and id, and by moving the
device type enumeration into the core API we can cut down the amount
of symbols private and/or visible only to the X11 backends; this way
when other backends start implementing multi-pointer support we can
share the same API across the code.
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.
If we have a non-visible texture pixmap, we need to:
- Still update it if it is realized, since it won't be
updated when shown. And it might also be cloned.
- Queue a redraw even if not visible, since it
might be cloned.
http://bugzilla.openedhand.com/show_bug.cgi?id=1647
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
RGBA data in X pixmaps and in FBOs is already premultiplied; use
the right format when creating cogl textures.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
The :fullscreen property is very confusing as it is implemented.
It can be written to a value, but the whole process might fail. If we
set:
g_object_set (stage, "fullscreen", TRUE, NULL);
and the fullscreen process fails or it is not implemented, the value
will be reset to FALSE (if we're lucky) or left TRUE (most of the
times).
The writability is just a shorthand for invoking clutter_stage_fullscreen()
or clutter_stage_unfullscreen() depending on a boolean value without
using an if.
The :fullscreen property also greatly confuses high level languages,
since the same symbol is used:
- for a method name (Clutter.Stage.fullscreen())
- for a property name (Clutter.Stage.fullscreen)
- for a signal (Clutter.Stage::fullscreen)
For these reasons, the :fullscreen property should be renamed to :fullscreen-set
and be read-only. Implementations of the Stage should only emit the
StageState event to change from normal to fullscreen, and the Stage
will automatically update the value of the property and emit a notify
signal for it.
An implementation of realize() never needs to set the
CLUTTER_ACTOR_REALIZED flag, though it can unset the flag if
things fail unexpectedly. (Previously, stage backend implementations
had to do this since clutter_actor_realize() wasn't used; this
is no longer the case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1634
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Setting the stage size using clutter_actor_set_size() is almost always
wrong: the X11 stage implementation should save the size and queue a
relayout -- like it does when receiving a ConfigureNotify. The same
should happen when setting it to be full screen.
The XInput support in Clutter is still using XI 1.x. This will never
work correctly, and we are all waiting for XInput 2 anyway. The changes
internally should be minimal, so we can leave everything in place, but
it's better to disable XInput support by default -- at least for the
time being.
The mapping and unmapping of the X11 stage implementation is
a bit broken. It's asynchronous, for starters, when that could
be avoided by tracking the state internally.
The ordering of the map/unmap sequence is also broken with
respect to the resizing.
By tracking the state internally into StageX11 we can safely
remove the MapNotify and UnmapNotify X event handling.
In theory, we should use _NET_WM_STATE a lot more, and reuse
the X11 state flags for fullscreening as well.
Apparently, the XInput extension is using the same pkg-config
file ('xi') for both the 1.x and the 2.x API, so we need to
check for both the 1.x XGetExtensionVersion and the 2.x
XQueryInputVersion.
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask and flags. This gives us room for expansion
without breaking API/ABI, and allows encoding more information in
the allocation process instead of just changes of absolute origin.
The XVisualInfo for GL is created when a stage is being realized.
When embedding Clutter inside another toolkit we might not want to
realize a stage to extract the XVisualInfo, then set the stage
window using a foreign X Window -- which will cause a re-realization.
Instead, we should abstract as much as possible into the X11 backend.
Unfortunately, the XVisualInfo for GL is requested using GLX API; for
this reason we have to create a ClutterBackendX11 method that we
override inside the ClutterBackendGLX implementation.
This also allows us to move a little bit of complexity out of
the stage realization, which is currently a very delicate and hard
to debug section.
Setting the stage window using the set_stage_foreign() method will
lead to a re-realization. We need to make sure that the Drawable
currently associated to the GL context is set to None, to avoid a
BadDrawable error or, if we're unlucky, a segfault in the X server.
Currently, the default screen guard value is 0, which is a valid
screen number on X11, and it might not be the default.
Patch suggested by: Owen W. Taylor <otaylor@redhat.com>
With the recent change to internal floating point values, ClutterUnit
has become a redundant type, defined to be a float. All integer entry
points are being internally converted to floating point values to be
passed to the GL pipeline with the least amount of conversion.
ClutterUnit is thus exposed as just a "pixel with fractional bits",
and not -- as users might think -- as a generic, resolution and device
independent unit. Not that it was ever the case, but a definite number
of people were convinced it did provide this "feature", and were
flummoxed by the mere existence of this type.
So, having ClutterUnit exposed in the public API doubles the entry
points and has the following disadvantages:
- we have to maintain twice the amount of entry points in ClutterActor
- we still do an integer-to-float implicit conversion
- we introduce a weird impedance between pixels and "pixels with
fractional bits"
- language bindings will have to choose what to bind, and resort
to manually overriding the API
+ *except* for language bindings based on GObject-Introspection, as
they cannot do manual overrides, thus will replicate the entire
set of entry points
For these reasons, we should coalesce every Actor entry point for
pixels and for ClutterUnit into a single entry point taking a float,
like:
void clutter_actor_set_x (ClutterActor *self,
gfloat x);
void clutter_actor_get_size (ClutterActor *self,
gfloat *width,
gfloat *height);
gfloat clutter_actor_get_height (ClutterActor *self);
etc.
The issues I have identified are:
- we'll have two cases of compiler warnings:
- printf() format of the return values from %d to %f
- clutter_actor_get_size() taking floats instead of unsigned ints
- we'll have a problem with varargs when passing an integer instead
of a floating point value, except on 64bit platforms where the
size of a float is the same as the size of an int
To be clear: the *intent* of the API should not change -- we still use
pixels everywhere -- but:
- we remove ambiguity in the API with regard to pixels and units
- we remove entry points we get to maintain for the whole 1.0
version of the API
- we make things simpler to bind for both manual language bindings
and automatic (gobject-introspection based) ones
- we have the simplest API possible while still exposing the
capabilities of the underlying GL implementation
Bug 1138 - No trackable "mapped" state
* Add a VISIBLE flag tracking application programmer's
expected showing-state for the actor, allowing us to
always ensure we keep what the app wants while tracking
internal implementation state separately.
* Make MAPPED reflect whether the actor will be painted;
add notification on a ClutterActor::mapped property.
Keep MAPPED state updated as the actor is shown,
ancestors are shown, actor is reparented, etc.
* Require a stage and realized parents to realize; this means
at realization time the correct window system and GL resources
are known. But unparented actors can no longer be realized.
* Allow children to be unrealized even if parent is realized.
Otherwise in effect either all actors or no actors are realized,
i.e. it becomes a stage-global flag.
* Allow clutter_actor_realize() to "fail" if not inside a toplevel
* Rework clutter_actor_unrealize() so internally we have
a flavor that does not mess with visibility flag
* Add _clutter_actor_rerealize() to encapsulate a somewhat
tricky operation we were doing in a couple of places
* Do not realize/unrealize children in ClutterGroup,
ClutterActor already does it
* Do not realize impl by hand in clutter_stage_show(),
since showing impl already does that
* Do not unrealize in various dispose() methods, since
ClutterActor dispose implementation already does it
and chaining up is mandatory
* ClutterTexture uses COGL while unrealizable (before it's
added to a stage). Previously this breakage was affecting
ClutterActor because we had to allow realize outside
a stage. Move the breakage to ClutterTexture, by making
ClutterTexture just use COGL while not realized.
* Unrealize before we set parent to NULL in clutter_actor_unparent().
This means unrealize() implementations can get to the stage.
Because actors need the stage in order to detach from the stage.
* Update clutter-actor-invariants.txt to reflect latest changes
* Remove explicit hide/unrealize from ClutterActor::dispose since
unparent already forces those
Instead just assert that unparent() occurred and did the right thing.
* Check whether parent implements unrealize before chaining up
Needed because ClutterGroup no longer has to implement unrealize.
* Perform unrealize in the default handler for the signal.
This allows non-containers that have children to work properly,
and allows containers to override how it's done.
* Add map/unmap virtual methods and set MAPPED flag on self and
children in there. This allows subclasses to hook map/unmap.
These are not signals, because notify::mapped is better for
anything it's legitimate for a non-subclass to do.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Bug 1516 - Does not withdraw toplevels correctly
clutter_stage_x11_hide() needs to use XWithdrawWindow(), not
XUnmapWindow().
As it stands now, if the window is already unmapped (say the WM has
minimized it), then the WM will not know that Clutter has closed the
window and will keep the window managed, showing it in the task list
and so forth.
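The fix boils down to (sketch):

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    static void
    stage_x11_hide (Display *dpy, Window stage_win, int screen)
    {
      /* unlike XUnmapWindow(), this also tells the WM the window is
       * withdrawn even when it is currently unmapped (e.g. minimized) */
      XWithdrawWindow (dpy, stage_win, screen);
    }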