Destroy the dummy XImage we create even on success.
http://bugzilla.openedhand.com/show_bug.cgi?id=1918
Based on a patch by: Carlos Martín Nieto <carlos@cmartin.tk>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
* stage-use-alpha:
tests: Use accessor methods for :use-alpha
stage: Add accessors for :use-alpha
tests: Allow setting the stage opacity in test-paint-wrapper
stage: Premultiply the stage color
stage: Composite the opacity with the alpha channel
glx: Always request an ARGB visual
stage: Add :use-alpha property
materials: Get the right blend function for alpha
Old-style X11 terminals require that even modern X11 send KeyPress
and KeyRelease pairs when auto-repeating. For this reason modern(-ish)
API like XKB has a way to detect auto-repeat and do a single KeyRelease
at the end of a KeyPress sequence.
The newly added check emulates XKB's detectable auto-repeat by peeking
the next event after a KeyRelease and checking if it's a KeyPress for
the same key and timestamp - and then ignoring the KeyRelease if it
matches.
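A minimal sketch of that peek-ahead check, assuming a plain Xlib event
loop; the function and variable names are illustrative, not the actual
Clutter internals:

  #include <X11/Xlib.h>
  #include <stdbool.h>

  /* Returns true if this KeyRelease is only the first half of an
   * auto-repeat pair and should therefore be ignored. */
  static bool
  key_release_is_autorepeat (Display *dpy, const XKeyEvent *release)
  {
    XEvent next;

    if (XPending (dpy) == 0)
      return false;

    XPeekEvent (dpy, &next);

    /* auto-repeat shows up as a KeyPress for the same keycode with
     * the same timestamp immediately following the KeyRelease */
    return next.type == KeyPress &&
           next.xkey.keycode == release->keycode &&
           next.xkey.time == release->time;
  }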
If a Stage has been set to use a foreign Window then Clutter should not
be managing it; calling XWithdrawWindow and XMapWindow should be
reserved to the windows we manage ourselves.
When requesting the GLXFBConfig for creating the GLX context, we should
always request one that links to an ARGB visual instead of a plain RGB
one.
By using an ARGB visual we allow the ClutterStage:use-alpha property to
work as intended when running Clutter under a compositing manager.
The default behaviour of requesting an ARGB visual can be disabled by
setting the CLUTTER_DISABLE_ARGB_VISUAL environment variable.
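A hedged sketch of what the selection amounts to: ask GLX for a config
with an alpha channel, then verify with XRender that the associated
visual really carries an alpha mask, honouring the environment variable.
The helper name and attribute list are illustrative:

  #include <GL/glx.h>
  #include <X11/extensions/Xrender.h>
  #include <glib.h>

  static GLXFBConfig
  pick_fb_config (Display *dpy, int screen)
  {
    const int attribs[] = {
      GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
      GLX_RENDER_TYPE,   GLX_RGBA_BIT,
      GLX_RED_SIZE, 1, GLX_GREEN_SIZE, 1, GLX_BLUE_SIZE, 1,
      GLX_ALPHA_SIZE, 1,
      GLX_DOUBLEBUFFER, True,
      None
    };
    gboolean want_argb = g_getenv ("CLUTTER_DISABLE_ARGB_VISUAL") == NULL;
    int n_configs = 0;
    GLXFBConfig *configs = glXChooseFBConfig (dpy, screen, attribs, &n_configs);
    GLXFBConfig chosen = (configs != NULL) ? configs[0] : NULL;
    int i;

    /* prefer a config whose visual has a non-zero alpha mask */
    for (i = 0; want_argb && configs != NULL && i < n_configs; i++)
      {
        XVisualInfo *vinfo = glXGetVisualFromFBConfig (dpy, configs[i]);
        XRenderPictFormat *format =
          vinfo ? XRenderFindVisualFormat (dpy, vinfo->visual) : NULL;
        gboolean is_argb = (format != NULL && format->direct.alphaMask != 0);

        if (vinfo)
          XFree (vinfo);

        if (is_argb)
          {
            chosen = configs[i];
            break;
          }
      }

    if (configs)
      XFree (configs);

    return chosen;
  }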
This ensures that glViewport is called before the first stage paint.
Previously _clutter_stage_maybe_setup_viewport (which is done before we
start painting) was bailing out without calling cogl_setup_viewport because
the CLUTTER_STAGE_IN_RESIZE flag may be set if the stage was resized before
the first paint. (NB: The CLUTTER_STAGE_IN_RESIZE flag isn't removed until
we get an explicit event back from the X server since the window manager may
choose to deny/alter the resize.)
We now special-case the first resize, where the viewport hasn't previously
been initialized, and use the requested geometry to initialize the
glViewport without waiting for a reply from the server.
As part of an incremental process to have Cogl be a standalone project we
want to re-consider how we organise the Cogl source code.
Currently this is the structure I'm aiming for:
cogl/
    cogl/
        <put common source here>
        winsys/
            cogl-glx.c
            cogl-wgl.c
        driver/
            gl/
            gles/
        os/ ?
    utils/
        cogl-fixed
        cogl-matrix-stack?
        cogl-journal?
        cogl-primitives?
    pango/
The new winsys component is a starting point for migrating window system
code (i.e. x11, glx, wgl, osx, egl, etc.) from Clutter to Cogl.
The utils/ and pango/ directories aren't added by this commit, but they are
noted because I plan to add them soon.
Overview of the planned structure:
* The winsys/ API is the API that binds OpenGL to a specific window system,
be that X11 or win32 etc. Examples are glx, wgl and egl. Much of the logic
under clutter/{glx,osx,win32 etc} should migrate here.
* Note there is also the idea of a winsys-base that may represent a window
system for which there are multiple winsys APIs. An example of this is
x11, since glx and egl may both be used with x11. (currently only Clutter
has the idea of a winsys-base)
* The driver/ represents a specific variant of OpenGL. Currently we have "gl"
representing OpenGL 1.4-2.1 (mostly fixed function) and "gles" representing
GLES 1.1 (fixed function) and 2.0 (fully shader based)
* Everything under cogl/ should fundamentally be supporting access to the
GPU. Essentially Cogl's most basic requirement is to provide a nice GPU
graphics API; drawing a line between this and the utility functionality we
add to support Clutter should help keep it lean and maintainable.
* Code under utils/ as suggested builds on cogl/ adding more convenient
APIs or mechanisms to optimize special cases. Broadly speaking you can
compare cogl/ to OpenGL and utils/ to GLU.
* clutter/pango will be moved to clutter/cogl/pango
How some of the internal configure.ac/pkg-config terminology has changed:
backendextra -> CLUTTER_WINSYS_BASE # e.g. "x11"
backendextralib -> CLUTTER_WINSYS_BASE_LIB # e.g. "x11/libclutter-x11.la"
clutterbackend -> {CLUTTER,COGL}_WINSYS # e.g. "glx"
CLUTTER_FLAVOUR -> {CLUTTER,COGL}_WINSYS
clutterbackendlib -> CLUTTER_WINSYS_LIB
CLUTTER_COGL -> COGL_DRIVER # e.g. "gl"
Note: The CLUTTER_FLAVOUR and CLUTTER_COGL defines are kept for apps
As the first thing to take advantage of the new winsys component in Cogl,
cogl_get_proc_address() has been moved from cogl/{gl,gles}/cogl.c into
cogl/common/cogl.c; this common implementation first tries
_cogl_winsys_get_proc_address() and, if that fails, falls back to
gmodule.
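A hedged sketch of the fallback order; the winsys hook name comes from
this commit, the rest is illustrative:

  #include <cogl/cogl.h>
  #include <gmodule.h>

  /* the per-winsys hook referenced above */
  CoglFuncPtr _cogl_winsys_get_proc_address (const gchar *name);

  CoglFuncPtr
  cogl_get_proc_address (const gchar *name)
  {
    CoglFuncPtr address = _cogl_winsys_get_proc_address (name);

    if (address == NULL)
      {
        static GModule *module = NULL;

        /* fall back to looking the symbol up in the process itself */
        if (module == NULL)
          module = g_module_open (NULL, 0);

        if (module != NULL)
          g_module_symbol (module, name, (gpointer *) &address);
      }

    return address;
  }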
The only backend that tried to implement offscreen stages was the GLX backend,
and even that has apparently been broken for some time without anyone noticing.
The property still remains, and since it already clearly states that it may
not work, I don't expect anyone to notice.
This simplifies quite a bit of the GLX code, which is very desirable from the
POV that we want to start migrating window system code down to Cogl; the
simpler the code is, the more straightforward this work will be.
In the future when Cogl has a nicely designed API for framebuffer objects then
re-implementing offscreen stages cleanly for *all* backends should be quite
straightforward.
The user-initiated resize is conflicting with the allocated size. This
happens because we change the size of the stage's X Window behind the
back of the size allocation machinery.
Instead, we should change the size of the actor whenever we receive a
ConfigureNotify event to reflect the new size of the actor.
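Roughly, the event translation path would do something like the following
sketch (names are illustrative):

  #include <clutter/clutter.h>
  #include <X11/Xlib.h>

  /* reflect the size the X server actually gave us onto the stage
   * actor, instead of resizing the X Window behind the allocation
   * machinery's back */
  static void
  handle_configure_notify (ClutterStage *stage, const XConfigureEvent *xev)
  {
    clutter_actor_set_size (CLUTTER_ACTOR (stage), xev->width, xev->height);
  }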
We force the redraw before mapping, in the hope that when a compositing
window manager maps the window it will have its contents ready; that is
not going to work: the solution for this problem requires the implementation
of a protocol for compositors, and not a hack.
Moreover, painting before mapping will cause a paint with the wrong
GL viewport size, which is the wrong thing to do on GLX.
Instead of using ClutterActor for the base class of the Stage
implementation we should extend the StageWindow interface with
the required bits (geometry, realization) and use a simple object
class.
This requires a few changes across Backend, Stage and StageWindow,
even though it's mostly re-shuffling.
First of all, StageWindow should get new virtual functions:
* geometry:
- resize()
- get_geometry()
* realization:
- realize()
- unrealize()
This covers all the bits that we use from ClutterActor currently
inside the stage implementations.
The ClutterBackend::create_stage() virtual function should create
a StageWindow, and not an Actor (it should always have been; the
fact that it returned an Actor was a leak of the black magic going
on underneath). Since we never guaranteed ABI compatibility for
the Backend class, this is not a problem.
Internally to ClutterStage we can finally drop the shenanigans of
setting/unsetting actor flags on the implementation: if the realization
succeeds, for instance, we set the REALIZED flag on the Stage and
we're done.
As an initial proof of concept, the X11 and GLX stage implementations
have been ported to the New World Order(tm) and show no regressions.
Clutter advertises itself on X11 as implementing the _NET_WM_PING protocol,
which is needed to be able to detect frozen applications; this allows us to
stop the destruction of the stage by blocking the CLUTTER_DELETE event and
wait for user feedback without the Window Manager thinking that the app has
gone unresponsive.
In order to implement the _NET_WM_PING protocol properly, though, we need
to add the _NET_WM_PID property on the Stage window, since the EWMH states:
[_NET_WM_PID] MAY be used by the Window Manager to kill windows which
do not respond to the _NET_WM_PING protocol.
Meaning that an unresponsive Clutter application might not be killable by
the window manager.
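Setting the property is a one-liner with plain Xlib; a hedged sketch
(the helper name is illustrative):

  #include <X11/Xlib.h>
  #include <X11/Xatom.h>
  #include <unistd.h>

  /* advertise our PID so a window manager that pings us and gets no
   * answer knows which process to kill */
  static void
  set_net_wm_pid (Display *dpy, Window xwindow)
  {
    Atom net_wm_pid = XInternAtom (dpy, "_NET_WM_PID", False);
    long pid = (long) getpid ();

    XChangeProperty (dpy, xwindow, net_wm_pid,
                     XA_CARDINAL, 32, PropModeReplace,
                     (unsigned char *) &pid, 1);
  }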
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1748
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The fix for bug 1750 inside commit b190448e made Clutter-GTK spew
BadWindow errors. The reason for that is that we call XDestroyWindow()
without checking if the old Window is None; this happens if we call
clutter_x11_set_stage_foreign() on a new ClutterStage before it has
been realized.
Since Clutter-GTK no longer needs to realize the Stage it is going to
embed (the only reason for doing that was to obtain a proper Visual
but now there's ClutterBackendX11 API for that), the set_stage_foreign()
call is effectively setting the StageX11 Window for the first time.
When we replace the stage Window using a foreign one we also need to
destroy the Window we created, if needed, to avoid leaking resources
all around.
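In short, the destruction needs to be guarded; a minimal sketch (names
are illustrative):

  #include <X11/Xlib.h>

  /* only destroy the Window we created ourselves, and only if there
   * actually was one, i.e. the stage had already been realized */
  static void
  drop_old_stage_window (Display *xdisplay, Window old_xwindow)
  {
    if (old_xwindow != None)
      XDestroyWindow (xdisplay, old_xwindow);
  }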
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1750
If clutter_x11_texture_set_window() was called after
clutter_x11_texture_pixmap_set_automatic(), then the Damage object would
not be properly created so updates to the window were ignored.
Refactor creation of the damage object to a separate function, and
call it from clutter_x11_texture_set_window() and clutter_x11_texture_set_pixmap()
as appropriate. Addition and removal of the filter function is made
conditional on priv->damage to make free_damage_resources() cleanly
idempotent.
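A hedged sketch of the shared helper, assuming the usual XDamage calls
(the helper name and the comment about the filter are illustrative):

  #include <X11/extensions/Xdamage.h>

  /* create the Damage object only once, so the helper can be called
   * from both the set_window() and set_pixmap() paths */
  static void
  ensure_damage (Display *dpy, Drawable drawable, Damage *damage)
  {
    if (*damage != None)
      return;

    *damage = XDamageCreate (dpy, drawable, XDamageReportNonEmpty);
    /* the X event filter would be added here, guarded by the same
     * check, which keeps free_damage_resources() idempotent */
  }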
See: http://bugzilla.gnome.org/show_bug.cgi?id=587189 for the original
bug report.
http://bugzilla.openedhand.com/show_bug.cgi?id=1710
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
A lot of applications change the size of the stage from the default
before the stage is initially shown. The size change won't take effect
until the first allocation run. However we want the window to be at
the correct size when we first map it so we should force an allocation
run before showing the stage.
There was an explicit call to XResizeWindow in
clutter_stage_x11_show. This is not needed anymore because
XResizeWindow will already have been called by the allocate method.
Updating the WM hints on the stage window short-circuits if the stage
is in the WITHDRAWN state, so we need to move the update_wm_hints() call
after the flag has been unset.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The race we were experiencing in the X11 backends is apparently
back after the fix in commit 00a3c698.
This time, just delaying the setting of the SYNC_MATRICES flag
is not enough, so we can resume the use of a STAGE_IN_RESIZE
private flag.
This should also fix bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1668
Instead of using a specific function to check whether the X
server supports the XInput extension we can use the generic
Xlib function XQueryExtension(). This cuts down the extra
checks inside the configure.ac and simplifies the code inside
clutter_x11_register_xinput().
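The generic check is a single Xlib call; a minimal sketch:

  #include <X11/Xlib.h>

  static Bool
  have_xinput (Display *dpy)
  {
    int opcode, event_base, error_base;

    /* replaces the XInput-specific availability check */
    return XQueryExtension (dpy, "XInputExtension",
                            &opcode, &event_base, &error_base);
  }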
Currently, XInput support requires a function call. In order to
make it easier for people to test it, we can also add a command
line switch that moves the pointer device detection and handling
to XInput. This should ensure that, at least for people building
Clutter with --enable-xinput, applications can be easily migrated
and regressions can be caught.
The StageManager singleton instance is already kept around
by the clutter_stage_manager_get_default() function; there is
no need to have it inside the main Clutter context as well.
Instead of using _clutter_context_get_default() and checking the
is_initialized flag, we should use the newly added private function
that does not cause side effects, especially for functions that have
to be called before any other Clutter function.
The input device API is split halfway through the backends in a very
weird way. The data structures are private, as they should, but most
of the information should be available in the main API since it's
generic enough.
The device type enumeration, for instance, should be common across
every backend; the accessors for device type and id should live in the
core API. The internal API should always use ClutterInputDevice and
not the private X11 implementation when dealing with public structures
like ClutterEvent.
By adding accessors for the device type and id, and by moving the
device type enumeration into the core API we can cut down the amount
of symbols private and/or visible only to the X11 backends; this way
when other backends start implementing multi-pointer support we can
share the same API across the code.
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.
If we have a not-visible texture pixmap, we need to:
- Still update it if it is realized, since it won't be updated when
shown, and it might also be cloned.
- Queue a redraw even if it is not visible, since it might be cloned.
http://bugzilla.openedhand.com/show_bug.cgi?id=1647
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
RGBA data in X pixmaps and in FBOs is already premultiplied; use
the right format when creating cogl textures.
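A hedged sketch of what the fix amounts to when uploading such data; the
cogl_texture_new_from_data() signature below is the Cogl 1.x one as I
recall it, so treat the exact call as an assumption:

  #include <cogl/cogl.h>

  /* data coming from an X pixmap or an FBO is already premultiplied,
   * so tell Cogl that with the _PRE format instead of plain RGBA_8888 */
  static CoglHandle
  upload_premultiplied (const guint8 *data,
                        int width, int height, int rowstride)
  {
    return cogl_texture_new_from_data (width, height,
                                       COGL_TEXTURE_NONE,
                                       COGL_PIXEL_FORMAT_RGBA_8888_PRE,
                                       COGL_PIXEL_FORMAT_ANY,
                                       rowstride,
                                       data);
  }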
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
The :fullscreen property is very much confusing as it is implemented.
It can be written to a value, but the whole process might fail. If we
set:
g_object_set (stage, "fullscreen", TRUE, NULL);
and the fullscreen process fails or it is not implemented, the value
will be reset to FALSE (if we're lucky) or left TRUE (most of the time).
The writability is just a shorthand for invoking clutter_stage_fullscreen()
or clutter_stage_unfullscreen() depending on a boolean value without
using an if.
The :fullscreen property also greatly confuses high level languages,
since the same symbol is used:
- for a method name (Clutter.Stage.fullscreen())
- for a property name (Clutter.Stage.fullscreen)
- for a signal (Clutter.Stage::fullscreen)
For these reasons, the :fullscreen property should be renamed to :fullscreen-set
and be read-only. Implementations of the Stage should only emit the
StageState event to change from normal to fullscreen, and the Stage
will automatically update the value of the property and emit a notify
signal for it.
An implementation of realize() never needs to set the
CLUTTER_ACTOR_REALIZED flag, though it can unset the flag if
things fail unexpectedly. (Previously, stage backend implementations
had to do this since clutter_actor_realize() wasn't used; this
is no longer the case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1634
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Setting the stage size using clutter_actor_set_size() is almost always
wrong: the X11 stage implementation should save the size and queue a
relayout -- like it does when receiving a ConfigureNotify. The same
should happen when setting it to be full screen.
The XInput support in Clutter is still using XI 1.x. This will never
work correctly, and we are all waiting for XInput 2 anyway. The changes
internally should be minimal, so we can leave everything in place, but
it's better to disable XInput support by default -- at least for the
time being.
The mapping and unmapping of the X11 stage implementation is
a bit broken. It is asynchronous, for starters, when that could
easily be avoided by tracking the state internally.
The ordering of the map/unmap sequence is also broken with
respect to the resizing.
By tracking the state internally into StageX11 we can safely
remove the MapNotify and UnmapNotify X event handling.
In theory, we should use _NET_WM_STATE a lot more, and reuse
the X11 state flags for fullscreening as well.
Apparently, the XInput extension is using the same pkg-config
file ('xi') for both the 1.x and the 2.x API, so we need to
check for both the 1.x XGetExtensionVersion and the 2.x
XQueryInputVersion.
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask and flags. This gives us room for expansion
without breaking API/ABI, and allows us to encode more information in
the allocation process instead of just changes of absolute origin.
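From a subclass' point of view the override would look like this sketch
(MyActor and my_actor_parent_class are illustrative, as provided by
G_DEFINE_TYPE in a real subclass):

  #include <clutter/clutter.h>

  static void
  my_actor_allocate (ClutterActor           *actor,
                     const ClutterActorBox  *box,
                     ClutterAllocationFlags  flags)
  {
    if (flags & CLUTTER_ABSOLUTE_ORIGIN_CHANGED)
      {
        /* the actor moved with respect to the stage */
      }

    /* chain up; my_actor_parent_class comes from G_DEFINE_TYPE */
    CLUTTER_ACTOR_CLASS (my_actor_parent_class)->allocate (actor, box, flags);
  }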
The XVisualInfo for GL is created when a stage is being realized.
When embedding Clutter inside another toolkit we might not want to
realize a stage to extract the XVisualInfo, then set the stage
window using a foreign X Window -- which will cause a re-realization.
Instead, we should abstract as much as possible into the X11 backend.
Unfortunately, the XVisualInfo for GL is requested using GLX API; for
this reason we have to create a ClutterBackendX11 method that we
override inside the ClutterBackendGLX implementation.
This also allows us to move a little bit of complexity out of
the stage realization, which is currently a very delicate and hard
to debug section.
Setting the stage window using the set_stage_foreign() method will
lead to a re-realization. We need to make sure that the Drawable
currently associated with the GL context is set to None, to avoid a
BadDrawable error or, if we're unlucky, a segfault in the X server.
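Making sure of that boils down to releasing the context before the old
Window goes away; a minimal sketch:

  #include <GL/glx.h>

  static void
  detach_gl_context (Display *xdisplay)
  {
    /* unbind the GLX context from whatever Drawable it was using */
    glXMakeCurrent (xdisplay, None, NULL);
  }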
Currently, the default screen guard value is 0, which is a valid
screen number on X11 and might not be the default one.
Patch suggested by: Owen W. Taylor <otaylor@redhat.com>
With the recent change to internal floating point values, ClutterUnit
has become a redundant type, defined to be a float. All integer entry
points are being internally converted to floating point values to be
passed to the GL pipeline with the least amount of conversion.
ClutterUnit is thus exposed as just a "pixel with fractional bits", and
not -- as users might think -- as a generic, resolution and device
independent unit. Not that it ever was the case, but a fair number of
people were convinced it did provide this "feature", and were flummoxed
by the mere existence of the type.
So, having ClutterUnit exposed in the public API doubles the entry
points and has the following disadvantages:
- we have to maintain twice the amount of entry points in ClutterActor
- we still do an integer-to-float implicit conversion
- we introduce a weird impedance between pixels and "pixels with
fractional bits"
- language bindings will have to choose what to bind, and resort
to manually overriding the API
+ *except* for language bindings based on GObject-Introspection, as
they cannot do manual overrides, thus will replicate the entire
set of entry points
For these reasons, we should coalesce every Actor entry point for
pixels and for ClutterUnit into a single entry point taking a float,
like:
  void   clutter_actor_set_x      (ClutterActor *self,
                                   gfloat        x);
  void   clutter_actor_get_size   (ClutterActor *self,
                                   gfloat       *width,
                                   gfloat       *height);
  gfloat clutter_actor_get_height (ClutterActor *self);
etc.
The issues I have identified are:
- we'll have two cases of compiler warnings (see the example after this list):
- printf() format of the return values from %d to %f
- clutter_actor_get_size() taking floats instead of unsigned ints
- we'll have a problem with varargs when passing an integer instead
of a floating point value, except on 64bit platforms where the
size of a float is the same as the size of an int
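For the first case, the expected churn in application code looks roughly
like this (illustrative example):

  #include <clutter/clutter.h>

  static void
  print_actor_size (ClutterActor *actor)
  {
    gfloat width, height;

    clutter_actor_get_size (actor, &width, &height);
    g_print ("size: %.2f x %.2f\n", width, height); /* was "%u x %u" */
  }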
To be clear: the *intent* of the API should not change -- we still use
pixels everywhere -- but:
- we remove ambiguity in the API with regard to pixels and units
- we remove entry points we get to maintain for the whole 1.0
version of the API
- we make things simpler to bind for both manual language bindings
and automatic (gobject-introspection based) ones
- we have the simplest API possible while still exposing the
capabilities of the underlying GL implementation
Bug 1138 - No trackable "mapped" state
* Add a VISIBLE flag tracking application programmer's
expected showing-state for the actor, allowing us to
always ensure we keep what the app wants while tracking
internal implementation state separately.
* Make MAPPED reflect whether the actor will be painted;
add notification on a ClutterActor::mapped property.
Keep MAPPED state updated as the actor is shown,
ancestors are shown, actor is reparented, etc.
* Require a stage and realized parents to realize; this means
at realization time the correct window system and GL resources
are known. But unparented actors can no longer be realized.
* Allow children to be unrealized even if parent is realized.
Otherwise in effect either all actors or no actors are realized,
i.e. it becomes a stage-global flag.
* Allow clutter_actor_realize() to "fail" if not inside a toplevel
* Rework clutter_actor_unrealize() so internally we have
a flavor that does not mess with visibility flag
* Add _clutter_actor_rerealize() to encapsulate a somewhat
tricky operation we were doing in a couple of places
* Do not realize/unrealize children in ClutterGroup,
ClutterActor already does it
* Do not realize impl by hand in clutter_stage_show(),
since showing impl already does that
* Do not unrealize in various dispose() methods, since
ClutterActor dispose implementation already does it
and chaining up is mandatory
* ClutterTexture uses COGL while unrealizable (before it's
added to a stage). Previously this breakage was affecting
ClutterActor because we had to allow realize outside
a stage. Move the breakage to ClutterTexture, by making
ClutterTexture just use COGL while not realized.
* Unrealize before we set parent to NULL in clutter_actor_unparent().
This means unrealize() implementations can get to the stage,
because actors need the stage in order to detach from it.
* Update clutter-actor-invariants.txt to reflect latest changes
* Remove explicit hide/unrealize from ClutterActor::dispose since
unparent already forces those
Instead just assert that unparent() occurred and did the right thing.
* Check whether parent implements unrealize before chaining up
Needed because ClutterGroup no longer has to implement unrealize.
* Perform unrealize in the default handler for the signal.
This allows non-containers that have children to work properly,
and allows containers to override how it's done.
* Add map/unmap virtual methods and set MAPPED flag on self and
children in there. This allows subclasses to hook map/unmap.
These are not signals, because notify::mapped is better suited for
anything it is legitimate for a non-subclass to do.
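A hedged sketch of how a subclass with internal children would hook the
new vfunc (MyContainer, my_container_parent_class and the
get_background() helper are illustrative, not real API):

  #include <clutter/clutter.h>

  static void
  my_container_map (ClutterActor *self)
  {
    /* chain up first so ClutterActor sets the MAPPED flag */
    CLUTTER_ACTOR_CLASS (my_container_parent_class)->map (self);

    /* then map children kept outside the Container API, e.g. an
     * internal background actor (hypothetical helper) */
    clutter_actor_map (my_container_get_background (self));
  }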
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Bug 1516 - Does not withdraw toplevels correctly
clutter_stage_x11_hide() needs to use XWithdrawWindow(), not
XUnmapWindow().
As it stands now, if the window is already unmapped (say the WM has
minimized it), then the WM will not know that Clutter has closed the
window and will keep the window managed, showing it in the task list
and so forth.
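The difference is a single call; a minimal sketch (names illustrative):

  #include <X11/Xlib.h>

  static void
  hide_stage_window (Display *xdisplay, Window xwindow, int screen_number)
  {
    /* unlike XUnmapWindow(), this also notifies the window manager,
     * so it stops managing the window even if it was already unmapped */
    XWithdrawWindow (xdisplay, xwindow, screen_number);
  }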
A lockup was reported by fargoilas on #clutter and removing the server grab in
clutter_x11_texture_pixmap_sync_window fixed the problem.
We were doing an X server grab to guarantee that if the XGetWindowAttributes
request reported that our redirected window was viewable then we would also
be able to get a valid pixmap name (the Composite protocol says it is an
error to request a name for a window that is not viewable); without the grab
there would be a race condition.
Instead we now handle error conditions gracefully.
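A hedged sketch of the graceful path, using Clutter's X error traps
instead of a server grab (the helper name is illustrative):

  #include <clutter/x11/clutter-x11.h>
  #include <X11/extensions/Xcomposite.h>

  static Pixmap
  name_window_pixmap (Display *dpy, Window window)
  {
    Pixmap pixmap;

    /* just try to name the pixmap and cope with the error if we lose
     * the race with the window becoming unviewable */
    clutter_x11_trap_x_errors ();
    pixmap = XCompositeNameWindowPixmap (dpy, window);
    if (clutter_x11_untrap_x_errors () != 0)
      pixmap = None;

    return pixmap;
  }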