While this is totally fine (None is 0L and, in the pointer context, will
be converted to the correct internal NULL representation, which could be
a value with some bits set to 1), I believe it's clearer to use NULL
instead of None when we talk about pointers.
While this is totally fine (0 in the pointer context will be converted
to the correct internal NULL representation, which could be a value with
some bits set to 1), I believe it's clearer to use NULL in the pointer
context.
It seems that, in most cases, using FALSE/0 as NULL is more an oversight
than a deliberate choice, e.g. copying a _COGL_GET_CONTEXT (ctx, 0)
or a g_return_val_if_fail (cond, 0) from a function returning a
gboolean.
Whether events come from the main loop source or from
clutter_x11_handle_event(), we need to feed them to the backend
virtual handle_event function. This fixes problems with clients
using clutter_x11_handle_event() hanging because
GLXBufferSwapComplete events aren't received.
http://bugzilla.openedhand.com/show_bug.cgi?id=2101
ClutterX11TexturePixmap calls get_allocation_box() when queueing a
clipped redraw. If the allocation is not valid, and if we queue a
lot of redraws in response to a series of damage events, the net
result is that we spend all our time in a re-layout. We can
short-circuit this by checking if the actor has a valid allocation, and
if not, just queue a redraw - the actor will be allocated by the time it
is painted.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» Queuing multiple clipped redraws will result in the bounding box of
all the clip rectangles being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» No empirical data was used to come up with this threshold, so we
may need to tune it.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted, you now also
need to override the signal handler and queue your own redraw (see
the sketch after these notes). Technically this is a semantic break,
but it's assumed that no one is currently doing this.
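As a rough illustration of the semantic break above, a subclass that
paints a custom shape would now do something like this (the handler
signature is an assumption based on the description, not taken from
the headers):

  /* sketch: override the default clipped-redraw behaviour */
  static void
  my_texture_queue_damage_redraw (ClutterX11TexturePixmap *texture,
                                  gint x, gint y,
                                  gint width, gint height)
  {
    /* we paint a circle, so a clipped redraw of the damaged
     * rectangle would be wrong: queue a full redraw instead */
    clutter_actor_queue_redraw (CLUTTER_ACTOR (texture));
  }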
This still leaves a few unsolved issues with regards to optimizing sub
stage redraws that need to be addressed in further work so this can only
be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified, any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. it is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor, because actors have no way to report a "paint box".
(Remember that actors can draw outside their allocation, and actors
with depth may also be projected outside of their allocation.)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
Since using addresses that might change is something that the FSF
finally acknowledges as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI for
getting the license in case the library did not come with it.
Not that URIs cannot change as well, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
There is no need for us to check for low-level functions and header
files, especially since we haven't been checking the results until
now. This makes cross-compiling slightly more bearable.
The installed _HEADERS should be the public ones and the enumeration
types; repeating clutter-x11-texture-pixmap.h breaks with automake 1.11
and doesn't strictly make any sense.
http://bugzilla.openedhand.com/show_bug.cgi?id=2002
The DeviceManager class should be abstract in Clutter, and implemented
by each backend, as different backends will have different ways to
detect, initialize and list devices; the X11 backend alone has *two*
ways of dealing with devices.
This commit makes DeviceManager an abstract class and delegates the
device initialization and enumeration to per-backend sub-classes.
The backend singleton is, obviously, responsible for creating the
device manager.
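A rough sketch of the resulting shape of the class (the vfunc names
are illustrative, not lifted from the headers):

  /* sketch: the abstract class, with the per-backend operations
   * expressed as virtual functions */
  struct _ClutterDeviceManagerClass
  {
    GObjectClass parent_class;

    void           (* add_device)    (ClutterDeviceManager *manager,
                                      ClutterInputDevice   *device);
    void           (* remove_device) (ClutterDeviceManager *manager,
                                      ClutterInputDevice   *device);
    const GSList * (* get_devices)   (ClutterDeviceManager *manager);
  };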
The X11 and Win32 backends have been updated to the new layout; the
Win32 backend has been updated blindly, so it might require additional
testing.
ConfigureNotify is delivered on window movements too, but there is no
need to queue a relayout on these as the viewport hasn't changed size.
Check for the window actually changing size on ConfigureNotify before
queueing a relayout.
This fixes laggy window movement when moving a window in response to
Clutter mouse motion events.
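The check is along these lines (a sketch; the field names are
illustrative):

  /* sketch: only queue a relayout when the size actually changed */
  case ConfigureNotify:
    if (xevent->xconfigure.width  != stage_x11->xwin_width ||
        xevent->xconfigure.height != stage_x11->xwin_height)
      {
        stage_x11->xwin_width  = xevent->xconfigure.width;
        stage_x11->xwin_height = xevent->xconfigure.height;

        clutter_actor_queue_relayout (CLUTTER_ACTOR (stage));
      }
    break;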
As well as manually setting the geometry size, we needed to queue a
relayout. This is what the ConfigureNotify handler would normally do,
but we don't get this event when using a foreign window (obviously).
This should fix resizing in things like gtk-clutter.
If we get into the resize function and it's a foreign window, set the
geometry size so that the allocate will set the backend size and call
glViewport.
Setting/unsetting fullscreen on a mapped or unmapped window now works
correctly.
If you unfullscreen a window that was initially full-screened, it will
unset the fullscreen hint and the WM will likely push the size down to
the largest valid size.
If the window was previously un-fullscreened, Clutter will restore the
previous size.
Fullscreening also now works if the WM switches the hint without the
application's knowledge (as happens when you resize a window to the size
of the screen, for example, with stock metacity).
When we resize, we relied on the stage's allocate to re-initialise the
GL viewport. Unfortunately, if we resized within Clutter, the new size
was cached before the window is actually resized, so glViewport wasn't
being called after resizing (some of the time, it's a race condition).
Change the way resizing works slightly so that we only resize when the
geometry size doesn't match our preferred size, and queue a relayout on
ConfigureNotify so the glViewport gets called.
Also change window creation slightly so that setting the size of a
window before it's realized works correctly.
If your OpenGL driver supports GLX_INTEL_swap_event, glXSwapBuffers
returns immediately when called, and an XEvent is sent when the actual
swap has finished.
Clutter can use the events that notify swap completion as a means to
throttle rendering in the master clock without blocking the CPU and so it
should help improve the performance of CPU bound applications.
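A rough sketch of how the events are selected and recognized (not the
actual Clutter code):

  /* sketch: ask for swap-complete events on the stage's GLX window */
  int glx_error_base, glx_event_base;

  glXQueryExtension (xdisplay, &glx_error_base, &glx_event_base);
  glXSelectEvent (xdisplay, glx_window,
                  GLX_BUFFER_SWAP_COMPLETE_INTEL_MASK);

  /* later, in the X event handler: the swap for the last frame is
   * done, so the master clock may schedule the next redraw */
  if (xevent->type == glx_event_base + GLX_BufferSwapComplete)
    ...;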
We want to set the default size without triggering the layout machinery,
so change the window creation process slightly so we start with a
640x480 window.
Due to the way the new sizing works, ClutterStage must set its size in
init() (to maintain the old behaviour), and the properties on the X11
stage must be initialised to 1x1 so that it actually goes ahead with
the resize.
Fixes stages that aren't user resizable and have no size set from
appearing at 1x1.
Calling clutter_actor_set_size in response to ConfigureNotify makes
setting the size of the stage racy - the most common result of which
seems to be that you can't set the stage dimensions to anything less
than 640x480.
Instead, add a first_allocation bit to the private structure of the X11
stage and force the first resize (necessary, or the default stage would
be a 1x1 window).
Now that we have a minimum size getter on the stage object, change
get_geometry to actually always return the geometry. This fixes stages
that are set as user-resizable appearing at 1x1 size.
This will need changing in other back-ends too.
The extension keyboard support in XInput 1.x is hopelessly broken.
Nevertheless, it's possible to use some bits of it, as we prefer the
core keyboard events to the XInput events, thus at least having proper
handling for X11 key events on the Stage window.
The XI 1.0 layer is complementary to the X11 core devices handling; this
means that core events will still be emitted for the core pointer and
keyboard devices, and that secondary (floating) devices should be
handled on top of that.
Thus, the XI event handling code should be executed (if explicitly
compiled in and enabled) if the core device events have not been parsed.
Note: this is going away with XI2, which completely replaces both core and
XI1 events.
Even with XInput support we should always register core devices. This
allows us to handle enter and leave events correctly on the Stage and
to have a working XInput 1.x support in Clutter.
Instead of overloading the device ids 0 and 1, we should treat the core
devices as special, and keep a pointer to them inside the X11 backend
singleton structure for fast access.
If the user presses a button on a pointer device and then moves out of
the Stage, X11 will emit the following events:
LeaveNotify ➔ MotionNotify ... ➔ ButtonRelease ➔ LeaveNotify
The second LeaveNotify differs from the first by the state field.
Unfortunately, ClutterCrossingEvent doesn't have a modifier_state field
like other events, so we cannot provide a way to distinguish the two
programmatically from a Clutter perspective. This is also an X11-ism
we might not even want to replicate on backends with sane enter/leave
semantics.
For this reason we should check inside the X11 event processing if the
pointer device has already left the Stage and ignore the second
LeaveNotify.
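The check amounts to something like this (a sketch; the field names
are illustrative):

  /* sketch: drop the duplicate LeaveNotify if this device has
   * already left the stage */
  case LeaveNotify:
    if (device->stage == NULL)
      break; /* the pointer already left: nothing to do */
    /* otherwise emit the Clutter crossing event as usual */
    ...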
The InputDevice object stores pointer coordinates, state, stage and
the actor under the cursor, so if the current backend provides us with
one attached to the Event structure then we want the InputDevice itself
to update its state and give us the ClutterActor underneath the
pointer's cursor.
Even when we are not using XInput we now have fallback devices; the
X11 backend should always assign the default devices when translating
the X events to Clutter events.
Use the device manager to store the input devices. Also, provide
two fallback devices when initializing the X11 backend: device 0
for the pointer and device 1 for the keyboard.
Since asking for an ARGB visual by default is still somewhat
experimental on X11, and some toolkits or complex widgets (like WebKit)
still do not like dealing with ARGB visuals, we should switch back to
RGB by default, now that at least we know ARGB works.
For applications (and toolkit integration libraries) that want to enable
the ClutterStage:use-alpha property there is a new function:
void clutter_x11_set_use_argb_visual (gboolean use_argb);
which needs to be called before clutter_init().
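For example, in an application's setup code:

  clutter_x11_set_use_argb_visual (TRUE);

  clutter_init (&argc, &argv);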
The CLUTTER_DISABLE_ARGB_VISUAL environment variable can still be used
to force this value off at run-time.
Destroy the dummy XImage we create even on success.
http://bugzilla.openedhand.com/show_bug.cgi?id=1918
Based on a patch by: Carlos Martín Nieto <carlos@cmartin.tk>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
* stage-use-alpha:
tests: Use accessor methods for :use-alpha
stage: Add accessors for :use-alpha
tests: Allow setting the stage opacity in test-paint-wrapper
stage: Premultiply the stage color
stage: Composite the opacity with the alpha channel
glx: Always request an ARGB visual
stage: Add :use-alpha property
materials: Get the right blend function for alpha
Old-style X11 terminals require that even modern X servers send KeyPress
and KeyRelease pairs when auto-repeating. For this reason, modern(-ish)
APIs like XKB have a way to detect auto-repeat and deliver a single
KeyRelease at the end of a KeyPress sequence.
The newly added check emulates XKB's detectable auto-repeat by peeking
at the next event after a KeyRelease and checking if it's a KeyPress
for the same key with the same timestamp - and then ignoring the
KeyRelease if it matches.
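The check boils down to something like this (a sketch, not the literal
code):

  /* sketch: emulate detectable auto-repeat by peeking at the event
   * that follows a KeyRelease */
  if (xevent->type == KeyRelease &&
      XEventsQueued (xdisplay, QueuedAfterReading) > 0)
    {
      XEvent next;

      XPeekEvent (xdisplay, &next);
      if (next.type == KeyPress &&
          next.xkey.keycode == xevent->xkey.keycode &&
          next.xkey.time    == xevent->xkey.time)
        return; /* auto-repeat: ignore the synthetic KeyRelease */
    }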
If a Stage has been set to use a foreign Window then Clutter should not
be managing it; calling XWithdrawWindow and XMapWindow should be
reserved to the windows we manage ourselves.
When requesting the GLXFBConfig for creating the GLX context, we should
always request one that links to an ARGB visual instead of a plain RGB
one.
By using an ARGB visual we allow the ClutterStage:use-alpha property to
work as intended when running Clutter under a compositing manager.
The default behaviour of requesting an ARGB visual can be disabled by
setting the CLUTTER_DISABLE_ARGB_VISUAL environment variable.
This ensures that glViewport is called before the first stage paint.
Previously _clutter_stage_maybe_setup_viewport (which is done before we
start painting) was bailing out without calling cogl_setup_viewport because
the CLUTTER_STAGE_IN_RESIZE flag may be set if the stage was resized before
the first paint. (NB: The CLUTTER_STAGE_IN_RESIZE flag isn't removed until
we get an explicit event back from the X server since the window manager may
choose to deny/alter the resize.)
We now special-case the first resize - where the viewport hasn't
previously been initialized - and use the requested geometry to
initialize the glViewport without waiting for a reply from the server.
As part of an incremental process to have Cogl be a standalone project we
want to re-consider how we organise the Cogl source code.
Currently this is the structure I'm aiming for:
cogl/
    cogl/
        <put common source here>
        winsys/
            cogl-glx.c
            cogl-wgl.c
        driver/
            gl/
            gles/
        os/ ?
        utils/
            cogl-fixed
            cogl-matrix-stack?
            cogl-journal?
            cogl-primitives?
    pango/
The new winsys component is a starting point for migrating window system
code (i.e. x11,glx,wgl,osx,egl etc) from Clutter to Cogl.
The utils/ and pango/ directories aren't added by this commit, but they are
noted because I plan to add them soon.
Overview of the planned structure:
* The winsys/ API is the API that binds OpenGL to a specific window system,
be that X11 or win32 etc. Examples are glx, wgl and egl. Much of the logic
under clutter/{glx,osx,win32 etc} should migrate here.
* Note there is also the idea of a winsys-base that may represent a window
system for which there are multiple winsys APIs. An example of this is
x11, since glx and egl may both be used with x11. (currently only Clutter
has the idea of a winsys-base)
* The driver/ represents a specific variant of OpenGL. Currently we have "gl"
representing OpenGL 1.4-2.1 (mostly fixed function) and "gles" representing
GLES 1.1 (fixed function) and 2.0 (fully shader based).
* Everything under cogl/ should fundamentally be about supporting access to
the GPU. Essentially Cogl's most basic requirement is to provide a nice GPU
graphics API; drawing a line between this and the utility functionality we
add to support Clutter should help keep it lean and maintainable.
* Code under utils/, as suggested, builds on cogl/ adding more convenient
APIs or mechanisms to optimize special cases. Broadly speaking you can
compare cogl/ to OpenGL and utils/ to GLU.
* clutter/pango will be moved to clutter/cogl/pango
How some of the internal configure.ac/pkg-config terminology has changed:
backendextra -> CLUTTER_WINSYS_BASE # e.g. "x11"
backendextralib -> CLUTTER_WINSYS_BASE_LIB # e.g. "x11/libclutter-x11.la"
clutterbackend -> {CLUTTER,COGL}_WINSYS # e.g. "glx"
CLUTTER_FLAVOUR -> {CLUTTER,COGL}_WINSYS
clutterbackendlib -> CLUTTER_WINSYS_LIB
CLUTTER_COGL -> COGL_DRIVER # e.g. "gl"
Note: The CLUTTER_FLAVOUR and CLUTTER_COGL defines are kept for apps.
As the first thing to take advantage of the new winsys component in Cogl,
cogl_get_proc_address() has been moved from cogl/{gl,gles}/cogl.c into
cogl/common/cogl.c; this common implementation first tries
_cogl_winsys_get_proc_address() and, if that fails, falls back to
gmodule.
The only backend that tried to implement offscreen stages was the GLX backend
and even this has apparently been broken for some time without anyone noticing.
The property still remains and since the property already clearly states that
it may not work I don't expect anyone to notice.
This simplifies quite a bit of the GLX code, which is very desirable from the
POV that we want to start migrating window system code down to Cogl; the
simpler the code is, the more straightforward this work will be.
In the future when Cogl has a nicely designed API for framebuffer objects then
re-implementing offscreen stages cleanly for *all* backends should be quite
straightforward.
The user-initiated resize is conflicting with the allocated size. This
happens because we change the size of the stage's X Window behind the
back of the size allocation machinery.
Instead, we should change the size of the actor whenever we receive a
ConfigureNotify event, to reflect the new size of the X Window.
We force the redraw before mapping, in the hope that when a composited
window manager maps the window it will have its contents ready; that is
not going to work: the solution for this problem requires the implementation
of a protocol for compositors, and not a hack.
Moreover, painting before mapping will cause a paint with the wrong
GL viewport size, which is the wrong thing to do on GLX.
Instead of using ClutterActor for the base class of the Stage
implementation we should extend the StageWindow interface with
the required bits (geometry, realization) and use a simple object
class.
This requires a wee bit of change across Backend, Stage and
StageWindow, even though it's mostly re-shuffling.
First of all, StageWindow should get new virtual functions:
* geometry:
  - resize()
  - get_geometry()
* realization:
  - realize()
  - unrealize()
This covers all the bits that we use from ClutterActor currently
inside the stage implementations.
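Sketched out, the extended interface looks roughly like this
(illustrative, not copied from the header):

  struct _ClutterStageWindowIface
  {
    GTypeInterface parent_iface;

    /* geometry */
    void     (* resize)       (ClutterStageWindow *window,
                               gint                width,
                               gint                height);
    void     (* get_geometry) (ClutterStageWindow *window,
                               ClutterGeometry    *geometry);

    /* realization */
    gboolean (* realize)      (ClutterStageWindow *window);
    void     (* unrealize)    (ClutterStageWindow *window);
  };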
The ClutterBackend::create_stage() virtual function should create
a StageWindow, and not an Actor (it should always have been; the
fact that it returned an Actor was a leak of the black magic going
on underneath). Since we never guaranteed ABI compatibility for
the Backend class, this is not a problem.
Internally to ClutterStage we can finally drop the shenanigans of
setting/unsetting actor flags on the implementation: if the realization
succeeds, for instance, we set the REALIZED flag on the Stage and
we're done.
As an initial proof of concept, the X11 and GLX stage implementations
have been ported to the New World Order(tm) and show no regressions.
Clutter advertises itself on X11 as implementing the _NET_WM_PING protocol,
which is needed to be able to detect frozen applications; this allows us to
stop the destruction of the stage by blocking the CLUTTER_DELETE event and
wait for user feedback without the Window Manager thinking that the app has
gone unresponsive.
In order to implement the _NET_WM_PING protocol properly, though, we need
to add the _NET_WM_PID property on the Stage window, since the EWMH states:
[_NET_WM_PID] MAY be used by the Window Manager to kill windows which
do not respond to the _NET_WM_PING protocol.
Meaning that an unresponsive Clutter application might not be killable by
the window manager.
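Setting the property itself is straightforward Xlib (a sketch; uses
XA_CARDINAL from <X11/Xatom.h> and getpid() from <unistd.h>):

  /* sketch: advertise our PID so the WM can kill us if we stop
   * responding to _NET_WM_PING */
  Atom net_wm_pid = XInternAtom (xdisplay, "_NET_WM_PID", False);
  long pid = getpid ();

  XChangeProperty (xdisplay, xwindow,
                   net_wm_pid, XA_CARDINAL, 32,
                   PropModeReplace,
                   (unsigned char *) &pid, 1);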
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1748
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The fix for bug 1750 inside commit b190448e made Clutter-GTK spew
BadWindow errors. The reason for that is that we call XDestroyWindow()
without checking if the old Window is None; this happens if we call
clutter_x11_set_stage_foreign() on a new ClutterStage before it has
been realized.
Since Clutter-GTK no longer needs to realize the Stage it is going to
embed (the only reason for that was to obtain a proper Visual, but now
there's ClutterBackendX11 API for that), the set_stage_foreign() call
is effectively setting the StageX11 Window for the first time.
When we replace the stage Window using a foreign one we also need to
destroy the Window we created, if needed, to avoid leaking resources
all around.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1750
If clutter_x11_texture_pixmap_set_window() was called after
clutter_x11_texture_pixmap_set_automatic(), then the Damage object would
not be properly created, so updates to the window were ignored.
Refactor creation of the Damage object into a separate function, and
call it from clutter_x11_texture_pixmap_set_window() and
clutter_x11_texture_pixmap_set_pixmap() as appropriate. Addition and
removal of the filter function is made conditional on priv->damage to
make free_damage_resources() cleanly idempotent.
See: http://bugzilla.gnome.org/show_bug.cgi?id=587189 for the original
bug report.
http://bugzilla.openedhand.com/show_bug.cgi?id=1710
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
A lot of applications change the size of the stage from the default
before the stage is initially shown. The size change won't take effect
until the first allocation run. However we want the window to be at
the correct size when we first map it so we should force an allocation
run before showing the stage.
There was an explicit call to XResizeWindow in
clutter_stage_x11_show. This is not needed anymore because
XResizeWindow will already have been called by the allocate method.
Updating the WM hints on the stage window short-circuits if the stage
is in the WITHDRAWN state, so we need to move the update_wm_hints() call
after the flag has been unset.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The race we were experiencing in the X11 backends is apparently
back after the fix in commit 00a3c698.
This time, just delaying the setting of the SYNC_MATRICES flag
is not enough, so we can resume the use of a STAGE_IN_RESIZE
private flag.
This should also fix bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1668
Instead of using a specific function to check whether the X
server supports the XInput extension we can use the generic
Xlib function XQueryExtension(). This cuts down the extra
checks inside the configure.ac and simplifies the code inside
clutter_x11_register_xinput().
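The generic check is simply (a sketch):

  /* sketch: let Xlib tell us whether XInput is available, instead of
   * relying on XInput-specific checks at configure time */
  int opcode, event_base, error_base;
  gboolean have_xinput =
    XQueryExtension (xdisplay, "XInputExtension",
                     &opcode, &event_base, &error_base);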
Currently, XInput support requires a function call. In order to
make it easier for people to test it, we can also add a command
line switch that moves the pointer device detection and handling
to XInput. This should ensure that, at least for people building
Clutter with --enable-xinput, applications can be easily migrated
and regressions can be caught.
The StageManager singleton instance is already kept around
by the clutter_stage_manager_get_default() function; there is
no need to have it inside the main Clutter context as well.
Instead of using _clutter_context_get_default() and checking the
is_initialized flag, we should use the newly added private function
that does not cause side effects, especially for functions that have
to be called before any other Clutter function.
The input device API is split halfway through the backends in a very
weird way. The data structures are private, as they should be, but most
of the information should be available in the main API since it's
generic enough.
The device type enumeration, for instance, should be common across
every backend; the accessors for device type and id should live in the
core API. The internal API should always use ClutterInputDevice and
not the private X11 implementation when dealing with public structures
like ClutterEvent.
By adding accessors for the device type and id, and by moving the
device type enumeration into the core API we can cut down the amount
of symbols private and/or visible only to the X11 backends; this way
when other backends start implementing multi-pointer support we can
share the same API across the code.
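With the accessors in the core API, backend-agnostic code can do
something like this (a sketch; assumes the event carries its device):

  /* sketch: no X11-specific types needed to inspect the device */
  ClutterInputDevice *device = clutter_event_get_device (event);

  if (device != NULL)
    {
      gint id = clutter_input_device_get_device_id (device);
      ClutterInputDeviceType type =
        clutter_input_device_get_device_type (device);
      ...
    }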
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.
If we have a non-visible texture pixmap, we need to:
- Still update it if it is realized, since it won't be updated
when shown, and it might also be cloned.
- Queue a redraw even if it is not visible, since it might be
cloned.
http://bugzilla.openedhand.com/show_bug.cgi?id=1647
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
RGBA data in X pixmaps and in FBOs is already premultiplied; use
the right format when creating cogl textures.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
The :fullscreen property as currently implemented is very confusing.
It can be written to a value, but the whole process might fail. If we
set:
g_object_set (stage, "fullscreen", TRUE, NULL);
and the fullscreen process fails or is not implemented, the value
will be reset to FALSE (if we're lucky) or left TRUE (most of the
time).
The writability is just a shorthand for invoking clutter_stage_fullscreen()
or clutter_stage_unfullscreen() depending on a boolean value without
using an if.
The :fullscreen property also greatly confuses high level languages,
since the same symbol is used:
- for a method name (Clutter.Stage.fullscreen())
- for a property name (Clutter.Stage.fullscreen)
- for a signal (Clutter.Stage::fullscreen)
For these reasons, the :fullscreen property should be renamed to
:fullscreen-set and made read-only. Implementations of the Stage should
only emit the
StageState event to change from normal to fullscreen, and the Stage
will automatically update the value of the property and emit a notify
signal for it.
An implementation of realize() never needs to set the
CLUTTER_ACTOR_REALIZED flag, though it can unset the flag if
things fail unexpectedly. (Previously, stage backend implementations
had to do this since clutter_actor_realize() wasn't used; this
is no longer the case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1634
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Setting the stage size using clutter_actor_set_size() is almost always
wrong: the X11 stage implementation should save the size and queue a
relayout -- like it does when receiving a ConfigureNotify. The same
should happen when setting it to be full screen.
The XInput support in Clutter is still using XI 1.x. This will never
work correctly, and we are all waiting for XInput 2 anyway. The changes
internally should be minimal, so we can leave everything in place, but
it's better to disable XInput support by default -- at least for the
time being.
The mapping and unmapping of the X11 stage implementation is
a bit broken. It's asynchronous, for starters, when that could
really be avoided by tracking the state internally.
The ordering of the map/unmap sequence is also broken with
respect to the resizing.
By tracking the state internally into StageX11 we can safely
remove the MapNotify and UnmapNotify X event handling.
In theory, we should use _NET_WM_STATE a lot more, and reuse
the X11 state flags for fullscreening as well.
Apparently, the XInput extension is using the same pkg-config
file ('xi') for both the 1.x and the 2.x API, so we need to
check for both the 1.x XGetExtensionVersion and the 2.x
XQueryInputVersion.
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask of flags. This gives us room for expansion
without breaking API/ABI, and allows us to encode more information in
the allocation process than just changes of absolute origin.
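A sketch of the change (the flag names are illustrative, following the
pattern described above):

  /* sketch: a bitmask leaves room for future allocation hints */
  typedef enum
  {
    CLUTTER_ALLOCATION_NONE         = 0,
    CLUTTER_ABSOLUTE_ORIGIN_CHANGED = 1 << 1
  } ClutterAllocationFlags;

  void (* allocate) (ClutterActor           *self,
                     const ClutterActorBox  *box,
                     ClutterAllocationFlags  flags);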
The XVisualInfo for GL is created when a stage is being realized.
When embedding Clutter inside another toolkit we might not want to
realize a stage to extract the XVisualInfo, then set the stage
window using a foreign X Window -- which will cause a re-realization.
Instead, we should abstract as much as possible into the X11 backend.
Unfortunately, the XVisualInfo for GL is requested using GLX API; for
this reason we have to create a ClutterBackendX11 method that we
override inside the ClutterBackendGLX implementation.
This also allows us to move a little bit of complexity out of the
stage realization, which is currently a very delicate and hard to
debug section.
Setting the stage window using the set_stage_foreign() method will
lead to a re-realization. We need to make sure that the Drawable
currently associated to the GL context is set to None, to avoid a
BadDrawable error or, if we're unlucky, a segfault in the X server.
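The detach amounts to (a sketch):

  /* sketch: make sure the GL context no longer points at the
   * Window we are about to destroy */
  glXMakeCurrent (xdisplay, None, NULL);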
Currently, the default screen guard value is 0, which is a valid
screen number on X11, and it might not be the default.
Patch suggested by: Owen W. Taylor <otaylor@redhat.com>
With the recent change to internal floating point values, ClutterUnit
has become a redundant type, defined to be a float. All integer entry
points are being internally converted to floating point values to be
passed to the GL pipeline with the least amount of conversion.
ClutterUnit is thus exposed as just a "pixel with fractional bits",
and not -- as users might think -- as a generic, resolution and device
independent unit. Not that it was ever the case, but a definite number
of people were convinced it provided this "feature", and were flummoxed
by the mere existence of this type.
So, having ClutterUnit exposed in the public API doubles the entry
points and has the following disadvantages:
- we have to maintain twice the amount of entry points in ClutterActor
- we still do an integer-to-float implicit conversion
- we introduce a weird impedance between pixels and "pixels with
fractional bits"
- language bindings will have to choose what to bind, and resort
to manually overriding the API
+ *except* for language bindings based on GObject-Introspection, as
they cannot do manual overrides, thus will replicate the entire
set of entry points
For these reasons, we should coalesce every Actor entry point for
pixels and for ClutterUnit into a single entry point taking a float,
like:

  void   clutter_actor_set_x      (ClutterActor *self,
                                   gfloat        x);
  void   clutter_actor_get_size   (ClutterActor *self,
                                   gfloat       *width,
                                   gfloat       *height);
  gfloat clutter_actor_get_height (ClutterActor *self);

etc.
The issues I have identified are:
- we'll have two cases of compiler warnings:
- printf() format of the return values from %d to %f
- clutter_actor_get_size() taking floats instead of unsigned ints
- we'll have a problem with varargs when passing an integer instead
of a floating point value, except on 64bit platforms where the
size of a float is the same as the size of an int
To be clear: the *intent* of the API should not change -- we still use
pixels everywhere -- but:
- we remove ambiguity in the API with regard to pixels and units
- we remove entry points we get to maintain for the whole 1.0
version of the API
- we make things simpler to bind for both manual language bindings
and automatic (gobject-introspection based) ones
- we have the simplest API possible while still exposing the
capabilities of the underlying GL implementation
Bug 1138 - No trackable "mapped" state
* Add a VISIBLE flag tracking application programmer's
expected showing-state for the actor, allowing us to
always ensure we keep what the app wants while tracking
internal implementation state separately.
* Make MAPPED reflect whether the actor will be painted;
add notification on a ClutterActor::mapped property.
Keep MAPPED state updated as the actor is shown,
ancestors are shown, actor is reparented, etc.
* Require a stage and realized parents to realize; this means
at realization time the correct window system and GL resources
are known. But unparented actors can no longer be realized.
* Allow children to be unrealized even if parent is realized.
Otherwise in effect either all actors or no actors are realized,
i.e. it becomes a stage-global flag.
* Allow clutter_actor_realize() to "fail" if not inside a toplevel
* Rework clutter_actor_unrealize() so internally we have
a flavor that does not mess with visibility flag
* Add _clutter_actor_rerealize() to encapsulate a somewhat
tricky operation we were doing in a couple of places
* Do not realize/unrealize children in ClutterGroup,
ClutterActor already does it
* Do not realize impl by hand in clutter_stage_show(),
since showing impl already does that
* Do not unrealize in various dispose() methods, since
ClutterActor dispose implementation already does it
and chaining up is mandatory
* ClutterTexture uses COGL while unrealizable (before it's
added to a stage). Previously this breakage was affecting
ClutterActor because we had to allow realize outside
a stage. Move the breakage to ClutterTexture, by making
ClutterTexture just use COGL while not realized.
* Unrealize before we set parent to NULL in clutter_actor_unparent().
This means unrealize() implementations can get to the stage,
because actors need the stage in order to detach from it.
* Update clutter-actor-invariants.txt to reflect latest changes
* Remove explicit hide/unrealize from ClutterActor::dispose since
unparent already forces those
Instead just assert that unparent() occurred and did the right thing.
* Check whether parent implements unrealize before chaining up
Needed because ClutterGroup no longer has to implement unrealize.
* Perform unrealize in the default handler for the signal.
This allows non-containers that have children to work properly,
and allows containers to override how it's done.
* Add map/unmap virtual methods and set MAPPED flag on self and
children in there. This allows subclasses to hook map/unmap.
These are not signals, because notify::mapped is better for
anything it's legitimate for a non-subclass to do.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Bug 1516 - Does not withdraw toplevels correctly
clutter_stage_x11_hide() needs to use XWithdrawWindow(), not
XUnmapWindow().
As it stands now, if the window is already unmapped (say the WM has
minimized it), then the WM will not know that Clutter has closed the
window and will keep the window managed, showing it in the task list
and so forth.
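The fix is essentially a one-liner (a sketch):

  /* sketch: withdraw instead of unmap, so the WM is notified that
   * the window is gone even if it was already unmapped */
  XWithdrawWindow (xdisplay, xwindow, screen_number);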
A lockup was reported by fargoilas on #clutter, and removing the server
grab in clutter_x11_texture_pixmap_sync_window fixed the problem.
We were doing an X server grab to guarantee that, if the
XGetWindowAttributes request reported that our redirected window was
viewable, then we would also be able to get a valid pixmap name (the
Composite protocol says it's an error to request a name for a window
that is not viewable); without the grab there would be a race condition.
Instead, we now handle error conditions gracefully.
If the stage is unrealized (as will be the case if the stage was
created with clutter_stage_new), then fullscreening would set the size
of the stage to the size of the display, but it was not setting the
fullscreen_on_map flag, so the window never got the
_NET_WM_STATE_FULLSCREEN property. Now it always sets the flag
regardless of whether the window has been created yet.
There were also problems with dual-headed displays, because in that case
DisplayWidth/Height will return the size of the combined display, but
Metacity (and presumably other WMs) will sensibly try to fit the window
to only one of the monitors. However, we were setting the size hints so
that the minimum size is that of the combined display. Metacity tries
to honour this by setting the minimum size but then it no longer
positions the window at the top left of the screen.
The patch makes it avoid setting the minimum size when the stage is
fullscreen by checking the fullscreen_on_map flag. This also means we
can remove the static was_resizable flag which would presumably have
caused problems for multi-stage.
The call to "cmp" to compare a built file with its current version
should use the -s (silent) command line switch. This avoids an ugly
message on the console when building Clutter for the first time.