If the OpenGL driver supports GLX_INTEL_swap_event, glXSwapBuffers()
returns immediately and an XEvent is sent when the actual swap has
finished.
Clutter can use these swap-completion events to throttle rendering in
the master clock without blocking the CPU, which should help improve
the performance of CPU-bound applications.
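A rough sketch of how the extension is used (not Clutter's actual code;
dpy, drawable and glx_event_base -- obtained from glXQueryExtension() --
are assumed to be set up already, and the defines come from the
GLX_INTEL_swap_event extension header):

  #include <GL/glx.h>

  static void
  swap_and_wait (Display *dpy, GLXDrawable drawable, int glx_event_base)
  {
    /* ask for swap-completion events on this drawable */
    glXSelectEvent (dpy, drawable, GLX_BUFFER_SWAP_COMPLETE_INTEL_MASK);

    glXSwapBuffers (dpy, drawable);   /* returns immediately */

    for (;;)
      {
        XEvent xev;

        XNextEvent (dpy, &xev);
        if (xev.type == glx_event_base + GLX_BufferSwapComplete)
          break;                      /* the real swap has finished */
      }
  }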
The previous state for the device is used by the click-count machinery,
so we should not overwrite it at every event; instead, we should use
parallel storage for the current state coming from the windowing
system.
The LEAVE/ENTER event pairs should be queued during the InputDevice
update process, when we change the actor under the device pointer.
This commit cleans up the event emission code inside clutter-main.c
and the logic of the event processing.
The InputDevice object stores pointer coordinates, state, stage and
the actor under the cursor, so if the current backend provides us with
one attached to the Event structure then we want the InputDevice itself
to update its state and give us the ClutterActor underneath the
pointer's cursor.
Use the device manager to store the input devices. Also, provide
two fallback devices when initializing the X11 backend: device 0
for the pointer and device 1 for the keyboard.
ClutterActor checks, when destroying and reparenting, if the parent
actor implements the Container interface, and automatically calls the
remove() method to perform a clean removal.
Actors implementing Container, though, might have internal children;
that is, children that are not added through the Container API. It is
already possible to iterate through them using the Container API to
avoid breaking invariants - but calling clutter_actor_destroy() on
these children (even from the Container implementation, and thus outside
of Clutter's control) will either lead to leaks or to segmentation
faults.
Clutter needs a way to distinguish a clutter_actor_set_parent() done on
an internal child from one done on a "public" child; for this reason, a
push/pop pair of functions should be available to Actor implementations
to mark the section where they wish to add internal children:
  clutter_actor_push_internal ();
  ...
  clutter_actor_set_parent (child1, parent);
  clutter_actor_set_parent (child2, parent);
  ...
  clutter_actor_pop_internal ();
The set_parent() call will automatically set the newly added
INTERNAL_CHILD private flag on each child, and both
clutter_actor_destroy() and clutter_actor_unparent() will check for the
flag before deciding whether to call the Container's remove method.
When getting signals from higher-level toolkits, one occasionally
wants access to the underlying event; say, for a Button
widget's "clicked" signal, to get the keyboard state.
Rather than having all of the high-level widgets emit
ClutterEvent just for these more unusual use cases,
add a global function to access the event state.
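A sketch of the intended usage, assuming the accessor is along the
lines of clutter_get_current_event() (MyButton and the handler are
hypothetical):

  static void
  on_button_clicked (MyButton *button)
  {
    const ClutterEvent *event = clutter_get_current_event ();

    if (event != NULL &&
        (clutter_event_get_state (event) & CLUTTER_SHIFT_MASK))
      {
        /* the click happened with Shift held down */
      }
  }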
http://bugzilla.openedhand.com/show_bug.cgi?id=1888
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
There is a new internal Cogl function called _cogl_check_driver_valid
which looks at the value of the GL_VERSION string to determine whether
the driver is supported. Clutter now calls this after the stage is
realized. If it fails then the stage is marked as unrealized and a
warning is shown.
_cogl_features_init now also checks the version number before getting
the function pointers for glBlendFuncSeparate and
glBlendEquationSeparate. It is not safe to just check for the presence
of the functions because some drivers may define the function without
fully implementing the spec.
The GLES version of _cogl_check_driver_valid just always returns TRUE
because there are no version requirements yet.
Eventually the function could also check for mandatory extensions if
there were any.
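A sketch of the underlying idea (the helper, typedef and variable names
below are illustrative, not the actual Cogl symbols): parse the
"major.minor" prefix of GL_VERSION and only resolve entry points that
the reported core version guarantees:

  #include <stdio.h>
  #include <GL/gl.h>

  static gboolean
  parse_gl_version (int *major, int *minor)
  {
    const char *version = (const char *) glGetString (GL_VERSION);

    return version != NULL &&
           sscanf (version, "%d.%d", major, minor) == 2;
  }

  ...

  if (parse_gl_version (&major, &minor) &&
      (major > 1 || (major == 1 && minor >= 4)))
    {
      /* glBlendFuncSeparate has been core since OpenGL 1.4, so the
       * resolved pointer can be trusted to implement the full spec */
      blend_func_separate = (BlendFuncSeparateProc)
        cogl_get_proc_address ("glBlendFuncSeparate");
    }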
http://bugzilla.openedhand.com/show_bug.cgi?id=1875
The only backend that tried to implement offscreen stages was the GLX backend,
and even that has apparently been broken for some time without anyone noticing.
The property still remains and, since it already clearly states that
it may not work, I don't expect anyone to notice.
This simplifies quite a bit of the GLX code, which is very desirable from the
POV that we want to start migrating window-system code down to Cogl; the
simpler the code is, the more straightforward this work will be.
In the future when Cogl has a nicely designed API for framebuffer objects then
re-implementing offscreen stages cleanly for *all* backends should be quite
straightforward.
When computing the pixels value of a ClutterUnits value we should
cache the result to avoid recomputing it for every call of
clutter_units_to_pixels(). We already have a flag telling us to
return the cached value, but we lack a mechanism to evict the
cache whenever the Backend settings affecting the conversion (the
default font and the resolution) change.
In order to implement the eviction we can use a "serial"; the
Backend will have an internal serial field which we retrieve and
put inside the ClutterUnits structure (we split one of the two
64 bit padding fields into two 32 bit fields to maintain ABI); every
time we call clutter_units_to_pixels() we compare the units serial
with that of the Backend; if they match and pixels_set is set to
TRUE then we just return the stored pixels value. If the serials
do not match then we unset the pixels_set flag and recompute the
pixels value.
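In pseudo-C, the eviction then boils down to something like this (the
field and helper names are illustrative, not the exact ones):

  gfloat
  clutter_units_to_pixels (ClutterUnits *units)
  {
    /* the serial is bumped whenever the font name or resolution change */
    guint32 serial = backend_get_units_serial ();

    if (units->serial != serial)
      {
        units->pixels_set = FALSE;    /* settings changed: evict the cache */
        units->serial = serial;
      }

    if (!units->pixels_set)
      {
        units->pixels = recompute_pixels (units);  /* uses font + resolution */
        units->pixels_set = TRUE;
      }

    return units->pixels;
  }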
We can verify this by adding a simple test unit checking that
by changing the resolution of ClutterBackend we get different
pixel values for 1 em.
http://bugzilla.openedhand.com/show_bug.cgi?id=1843
Instead of using ClutterActor for the base class of the Stage
implementation we should extend the StageWindow interface with
the required bits (geometry, realization) and use a simple object
class.
This requires a wee bit of changes across Backend, Stage and
StageWindow, even though it's mostly re-shuffling.
First of all, StageWindow should get new virtual functions:
  * geometry:
    - resize()
    - get_geometry()
  * realization:
    - realize()
    - unrealize()
This covers all the bits that we use from ClutterActor currently
inside the stage implementations.
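The resulting interface vtable would look roughly like this (an
illustrative sketch of just the new bits, not the verbatim header):

  struct _ClutterStageWindowIface
  {
    GTypeInterface parent_iface;

    /* geometry */
    void     (* resize)       (ClutterStageWindow *window,
                               gint                width,
                               gint                height);
    void     (* get_geometry) (ClutterStageWindow *window,
                               ClutterGeometry    *geometry);

    /* realization */
    gboolean (* realize)      (ClutterStageWindow *window);
    void     (* unrealize)    (ClutterStageWindow *window);
  };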
The ClutterBackend::create_stage() virtual function should create
a StageWindow, and not an Actor (it should always have been; the
fact that it returned an Actor was a leak of the black magic going
on underneath). Since we never guaranteed ABI compatibility for
the Backend class, this is not a problem.
Internally to ClutterStage we can finally drop the shenanigans of
setting/unsetting actor flags on the implementation: if the realization
succeeds, for instance, we set the REALIZED flag on the Stage and
we're done.
As an initial proof of concept, the X11 and GLX stage implementations
have been ported to the New World Order(tm) and show no regressions.
The CLUTTER_TEXTURE_IN_CLONE_PAINT was used with the old CloneTexture
actor; now that we have ClutterClone nothing sets the private flag
anymore, and the flag itself is not needed.
The race we were experiencing in the X11 backends is apparently
back after the fix in commit 00a3c698.
This time, just delaying the setting of the SYNC_MATRICES flag
is not enough, so we can resume the use of a STAGE_IN_RESIZE
private flag.
This should also fix bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1668
The StageManager singleton instance is already kept around
by the clutter_stage_manager_get_default() function; there is
no need to have it inside the main Clutter context as well.
The _clutter_context_get_default() function will automatically
create the main Clutter context; if we just want to check whether
Clutter has been initialized this will complicate matters, by
requiring a call to g_type_init() inside the client code.
Instead, we should simply provide an internal API that checks
whether the main Clutter context exists and if it has been
initialized, without any side effect.
The input device API is split halfway through the backends in a very
weird way. The data structures are private, as they should be, but most
of the information should be available in the main API, since it's
generic enough.
The device type enumeration, for instance, should be common across
every backend; the accessors for device type and id should live in the
core API. The internal API should always use ClutterInputDevice and
not the private X11 implementation when dealing with public structures
like ClutterEvent.
By adding accessors for the device type and id, and by moving the
device type enumeration into the core API we can cut down the amount
of symbols private and/or visible only to the X11 backends; this way
when other backends start implementing multi-pointer support we can
share the same API across the code.
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.
A flag in the master clock is now set whenever the dispatch caused an
actual redraw of a stage. If this flag is not set during the prepare
and check functions then it will resort to limiting the redraw
attempts to the default frame rate as if vblank syncing was
disabled. Otherwise, if a timeline is running that does not cause the
scene to change, the master clock would busy-wait at 100% CPU until
the next frame.
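An illustrative sketch of the throttling in the source's prepare
function (the structure, field and helper names here are made up for
the example):

  static gboolean
  master_clock_prepare (GSource *source, gint *timeout)
  {
    ClutterMasterClock *clock = ((ClockSource *) source)->master_clock;

    if (!clock->updated_stages)
      {
        /* nothing was actually redrawn by the last dispatch: fall back
         * to the default frame rate instead of spinning at 100% CPU */
        glong frame_ms = 1000 / clutter_get_default_frame_rate ();
        glong elapsed  = elapsed_since_last_dispatch_ms (clock);

        if (elapsed < frame_ms)
          {
            *timeout = frame_ms - elapsed;
            return FALSE;
          }
      }

    *timeout = 0;
    return TRUE;
  }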
This fix was suggested by Owen Taylor in:
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of trying to guess about which motion events are
extraneous, queue up all events until we process a frame.
This allows us to look ahead and reliably compress consecutive
sequences of motion events.
clutter-main.c: Feed received events to the stage for queueing.
Remove old compression code. Remove clutter_get_motion_events_frequency()
and clutter_set_motion_events_frequency().
clutter-stage.c: Keep a queue of pending events.
clutter-master-clock.c: Add processing of queued events to the
clock source dispatch function.
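The compression itself becomes simple once the events are queued; a
sketch of the look-ahead (variable names are illustrative):

  while (!g_queue_is_empty (priv->event_queue))
    {
      ClutterEvent *event = g_queue_pop_head (priv->event_queue);
      ClutterEvent *next  = g_queue_peek_head (priv->event_queue);

      /* drop a motion event if it is immediately followed by another
       * motion event from the same device */
      if (event->type == CLUTTER_MOTION &&
          next != NULL &&
          next->type == CLUTTER_MOTION &&
          next->motion.device == event->motion.device)
        {
          clutter_event_free (event);
          continue;
        }

      clutter_do_event (event);
      clutter_event_free (event);
    }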
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When a redraw is queued on a stage, simply set a flag; then in
the check/prepare functions of the master clock source, check
for stages that need redrawing.
This avoids the complexity of having multiple competing sources
at the same priority and makes the update ordering more reliable and
understandable.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The clutter_redraw() function is used by libraries embedding
Clutter inside another toolkit, instead of queueing a redraw
on the embedded stage. This means that clutter_redraw() should
perform the same sequence of actions done by the redraw idle
callback.
Sometimes it is necessary for third-party code to have a
function called during the redraw process, so that it can
update the scenegraph before it is painted.
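Assuming the function added here is clutter_threads_add_repaint_func()
with the usual GSourceFunc-style triplet, usage would look like:

  static gboolean
  update_scene (gpointer data)
  {
    /* adjust actors, animations, custom state... this runs on every
     * redraw cycle, before the stages are painted */
    return TRUE;    /* keep the repaint function installed */
  }

  ...

  guint id = clutter_threads_add_repaint_func (update_scene, NULL, NULL);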
This is another step towards abstracting the backend operations
that are currently spread all across the board back into the
backend implementations where they belong.
The GL context creation, for instance, is delegated to the stage
realization, which makes it a critical path for every operation
that is bound to the GL context. This usually does not make any
difference, since we realize the default stage, but at some point we
might start looking into avoiding the default stage realization in
order to make the Clutter startup faster.
It also makes the code more maintainable, because every part is
self-contained and can be reworked with the minimum amount of pain.
The master clock is currently advanced using a frame source driven
by the default frame rate. This breaks the sync to vblank because
the vblanking rate could be different than 60 Hz -- or it might be
completely disabled (e.g. with CLUTTER_VBLANK=none).
We should be using the main loop to check if we have timelines
playing, and if so queue a redraw on the stages we own.
We should also prepare the subsequent frame at the end of the redraw
process, so that if there are new redraws we will have the scene
already in place.
This makes Clutter redraw at the maximum frame rate, which is
limited by the vblanking frequency.
Currently, the conversion from em to units is done by using the
default font name inside the backend. For actors using their own
font/text layout we need a way to specify the font name along
with the quantity we wish to transform.
With the recent change to internal floating point values, ClutterUnit
has become a redundant type, defined to be a float. All integer entry
points are being internally converted to floating point values to be
passed to the GL pipeline with the least amount of conversion.
ClutterUnit is thus exposed as just a "pixel with fractional bits",
and not -- as users might think -- as a generic, resolution- and
device-independent unit. Not that it ever was the case, but a
significant number of people were convinced it did provide this
"feature", and were flummoxed by the mere existence of the type.
So, having ClutterUnit exposed in the public API doubles the entry
points and has the following disadvantages:
- we have to maintain twice the amount of entry points in ClutterActor
- we still do an integer-to-float implicit conversion
- we introduce a weird impedance between pixels and "pixels with
  fractional bits"
- language bindings will have to choose what to bind, and resort
to manually overriding the API
+ *except* for language bindings based on GObject-Introspection, as
  they cannot do manual overrides and will thus replicate the entire
  set of entry points
For these reasons, we should coalesce every Actor entry point for
pixels and for ClutterUnit into a single entry point taking a float,
like:
  void clutter_actor_set_x (ClutterActor *self,
                            gfloat        x);

  void clutter_actor_get_size (ClutterActor *self,
                               gfloat       *width,
                               gfloat       *height);

  gfloat clutter_actor_get_height (ClutterActor *self);

etc.
The issues I have identified are:
  - we'll have two cases of compiler warnings:
    - printf() format of the return values changing from %d to %f
    - clutter_actor_get_size() taking floats instead of unsigned ints
  - we'll have a problem with varargs when passing an integer instead
    of a floating point value (see the example below), except on 64-bit
    platforms where the size of a float is the same as the size of an int
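For instance, once a property such as "x" is float-typed, a bare
integer literal passed through g_object_set() is collected with the
wrong representation:

  g_object_set (actor, "x", 100.0, NULL);   /* correct */
  g_object_set (actor, "x", 100,   NULL);   /* broken: an int is pushed
                                               on the stack, but the
                                               GValue collection code
                                               reads back a double */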
To be clear: the *intent* of the API should not change -- we still use
pixels everywhere -- but:
- we remove ambiguity in the API with regard to pixels and units
- we remove entry points we get to maintain for the whole 1.0
version of the API
- we make things simpler to bind for both manual language bindings
and automatic (gobject-introspection based) ones
- we have the simplest API possible while still exposing the
capabilities of the underlying GL implementation
The method of ClutterTimeline that advances the timeline by a
delta (in milliseconds) is going to be useful for testing the
timeline's behaviour -- and for unbreaking the timeline test suite
that was broken by the MasterClock merge.
With the introduction of the map/unmap flags and the split of the
visible state from the mapped state, we require that every part of
a scene graph branch is mapped in order to be painted. This breaks
the ability of a ClutterClone to paint a hidden source actor.
In order to fix this we need to introduce an override flag, similar
in spirit to the current modelview and paint opacity overrides that
Clone is already using.
The override flag, when set, will force a temporary map on a
Clone source (and its children).
Currently, all timelines install a timeout inside the TimeoutPool
they share. Every time the main loop spins, all the timeouts are
updated. This, in turn, will usually lead to redraws being queued
on the stages.
This behaviour leads to the potential starvation of timelines and
to excessive redraws.
One lesson learned from game developers is that the scenegraph
should be prepared in its entirety before the GL paint sequence is
initiated. This means making sure that every ::new-frame signal
handler is called before clutter_redraw() is invoked.
In order to do so a TimeoutPool is not enough: we need a master
clock. The clock will be responsible for advancing all the active
timelines created inside a scene, but only when the stage is
being redrawn.
The sequence is:
  + queue_redraw() is invoked on an actor and bubbles up
    to the stage
  + if no redraw() has already been scheduled, install an
    idle handler with a known priority
  + inside the idle handler:
    - advance the master clock, which will in turn advance
      every playing timeline by the amount of milliseconds
      elapsed since the last redraw; this will make every
      playing timeline emit the ::new-frame signal
    - queue a relayout
    - call the redraw() method of the backend
This way we trade multiple timeouts for a single frame source
that only runs when a timeline is playing and queues redraws on
the various stages.
Bug 1138 - No trackable "mapped" state
* Add a VISIBLE flag tracking application programmer's
expected showing-state for the actor, allowing us to
always ensure we keep what the app wants while tracking
internal implementation state separately.
* Make MAPPED reflect whether the actor will be painted;
add notification on a ClutterActor::mapped property.
Keep MAPPED state updated as the actor is shown,
ancestors are shown, actor is reparented, etc.
* Require a stage and realized parents to realize; this means
at realization time the correct window system and GL resources
are known. But unparented actors can no longer be realized.
* Allow children to be unrealized even if parent is realized.
Otherwise in effect either all actors or no actors are realized,
i.e. it becomes a stage-global flag.
* Allow clutter_actor_realize() to "fail" if not inside a toplevel
* Rework clutter_actor_unrealize() so internally we have
  a flavor that does not mess with the visibility flag
* Add _clutter_actor_rerealize() to encapsulate a somewhat
tricky operation we were doing in a couple of places
* Do not realize/unrealize children in ClutterGroup,
ClutterActor already does it
* Do not realize impl by hand in clutter_stage_show(),
since showing impl already does that
* Do not unrealize in various dispose() methods, since
ClutterActor dispose implementation already does it
and chaining up is mandatory
* ClutterTexture uses COGL while unrealizable (before it's
added to a stage). Previously this breakage was affecting
ClutterActor because we had to allow realize outside
a stage. Move the breakage to ClutterTexture, by making
ClutterTexture just use COGL while not realized.
* Unrealize before we set parent to NULL in clutter_actor_unparent().
  This means unrealize() implementations can get to the stage, because
  actors need the stage in order to detach from it.
* Update clutter-actor-invariants.txt to reflect latest changes
* Remove explicit hide/unrealize from ClutterActor::dispose since
unparent already forces those
Instead just assert that unparent() occurred and did the right thing.
* Check whether parent implements unrealize before chaining up
Needed because ClutterGroup no longer has to implement unrealize.
* Perform unrealize in the default handler for the signal.
This allows non-containers that have children to work properly,
and allows containers to override how it's done.
* Add map/unmap virtual methods and set MAPPED flag on self and
children in there. This allows subclasses to hook map/unmap.
These are not signals, because notify::mapped is better for
anything it's legitimate for a non-subclass to do.
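For instance, an actor with an internal child can now hook the process
like this (a sketch; MyActor and its private structure are
hypothetical):

  static void
  my_actor_map (ClutterActor *actor)
  {
    MyActorPrivate *priv = MY_ACTOR (actor)->priv;

    /* chain up to set the MAPPED flag on ourselves... */
    CLUTTER_ACTOR_CLASS (my_actor_parent_class)->map (actor);

    /* ...then map the internal child as well */
    clutter_actor_map (priv->child);
  }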
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Bug 1513 - Allow passing in ClutterPickMode to
clutter_stage_get_actor_at_pos()
At the moment, clutter_stage_get_actor_at_pos() uses CLUTTER_PICK_ALL
internally to find an actor. It would be useful to allow passing a
ClutterPickMode to clutter_stage_get_actor_at_pos(), so that the caller
can specify CLUTTER_PICK_REACTIVE as a criterion.
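With the extra parameter the call becomes, for instance (stage, x and
y are assumed to be in scope):

  ClutterActor *actor;

  actor = clutter_stage_get_actor_at_pos (CLUTTER_STAGE (stage),
                                          CLUTTER_PICK_REACTIVE,
                                          x, y);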
The clutter_get_current_event_time() function is a global function for
retrieving the timestamp of the current event being propagated
by Clutter. Such a function avoids the need to propagate the
timestamp through user code.
Instead of having a separate set of font options that override the
backend options when clutter_set_font_flags is called, it now just
directly sets the backend font options. So now the font flags are just
a convenience wrapper around the backend font options.
This also makes the ClutterText labels automatically update when the
font flags are changed because they will respond to the 'font-changed'
signal from the backend.
* generic-actor-clone:
    Remove CloneTexture from the API
    [tests] Clean up the Clone interactive test
    Rename ActorClone to Clone/2
    Rename ActorClone to Clone/1
    Improves the unit test to verify more awkward scaling and some corresponding fixes
    Implements a generic ClutterActorClone that doesn't need fbos.
The hope is that this function makes it easier to extend the font
settings with more flags without having to add a function for every
setting.
A new flag for enabling hinting has been added. If set, this changes
the font options on the global PangoContext and any newly created
PangoContexts. The options are only set if the flag is changed from
the default so it won't override any detailed setting chosen by the
backend.
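Assuming the new flag is exposed as CLUTTER_FONT_HINTING, enabling it
without clobbering the existing flags would look like:

  clutter_set_font_flags (clutter_get_font_flags () | CLUTTER_FONT_HINTING);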
Instead of recomputing the number of units needed to fit in
an em each time clutter_units_em() is called, we can store this
value into the default Backend along with the resolution and
font name. The value should also be updated each time the
resolution and font are changed, to keep it up to date.
Many use cases for cloning an actor don't require running a shader on the
resulting clone image, so requiring FBOs in these cases is overkill and
inefficient, as it requires kicking off and synchronizing a render for each
clone.
This approach basically just uses the paint function of another actor to
implement the painting for the clone actor, with some fiddling of the
modelview matrix to scale according to the different allocation box sizes
of each of the actors.
A simple unit test called test-actors2 was added for testing.
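In very rough terms the paint implementation amounts to the following
(a sketch of the approach, not the actual ClutterClone code):

  static void
  clone_paint (ClutterActor *self, ClutterActor *source)
  {
    ClutterActorBox box, source_box;

    clutter_actor_get_allocation_box (self, &box);
    clutter_actor_get_allocation_box (source, &source_box);

    cogl_push_matrix ();

    /* scale the modelview so the source's allocation fills ours */
    cogl_scale ((box.x2 - box.x1) / (source_box.x2 - source_box.x1),
                (box.y2 - box.y1) / (source_box.y2 - source_box.y1),
                1.0);

    /* reuse the source actor's own paint implementation */
    clutter_actor_paint (source);

    cogl_pop_matrix ();
  }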
Continuation of the fix in commit 00a3c69868.
Instead of using a separate flag for the resize process, just
delay the setting of the CLUTTER_ACTOR_SYNC_MATRICES flag on the
stage to the point when we receive a ConfigureNotify event from
X11.
This commit will break the stage embedding into other toolkits.
There is a race condition when we resize a stage before showing
it on X11.
The race goes like this:
  - clutter_init() creates the default stage and realizes it, which
    will cause a 640x480 Window to be created
  - calling set_size(800, 600) on the stage will cause the Window to be
    resized to 800x600
  - calling show() on the stage for the first time will cause COGL
    to set up an 800 by 600 GL viewport
  - the Window will be mapped, which will cause X to notify the
    window manager that the Window should be resized to 800x600
  - the window manager will approve the resize
  - X resizes the drawable to 800x600
To fix the race, we need to defer COGL from setting up the viewport
until we receive a ConfigureNotify event and the X server has resized
the Drawable.
In order to defer the call to cogl_setup_viewport() we add a new
private flag, CLUTTER_STAGE_IN_RESIZE; the flag is checked whenever
we need to change the viewport size along with the SYNC_MATRICES
private flag. Thus, cogl_setup_viewport() will be called only if
SYNC_MATRICES is set and IN_RESIZE is not set.
The _clutter_context_create_pango_context() function should create a new
context; the function returning the PangoContext stored inside the
MainContext structure should be named _get_pango_context() instead.
The PangoContext should be stored once, and inside the main
Clutter context. Each actor for which clutter_actor_get_pango_context()
has been called will hold a reference on the Pango context as well.
This makes it possible to update the text rendering for Clutter
by using only public API.
The clutter-private.h header already includes cogl-pango.h with
the correct inclusion path, because the main context stores a
pointer to the font map.
There is no need for clutter-text.c to include cogl-pango.h
again since it already includes clutter-private.h.