The idea is that if we see multiple picks per frame then that implies
the visible scene has become static. In this case we can promote the
next pick render to be unclipped so we have valid pick values for the
entire stage. Now we can continue to read from this cached buffer until
the stage contents do visibly change.
Thanks to Luca Bruno on #clutter for this idea!
This adds a wrapper macro to clutter-private that will use
g_object_notify_by_pspec if it's compiled against a version of GLib
that is sufficiently new. Otherwise it will notify by the property
name as before by extracting the name from the pspec. The objects can
then store a static array of GParamSpecs and notify using those as
suggested in the documentation for g_object_notify_by_pspec.
Note that the name of the variable used for storing the array of
GParamSpecs is obj_props instead of the 'properties' name used in the
documentation, because some places in Clutter already use 'properties'
as the name of a local variable.
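For illustration only, this is roughly the pattern; the wrapper name
(clutter_notify_by_pspec) and the PROP_NAME/PROP_LAST property used here
are hypothetical, not necessarily what ends up in clutter-private.h:

  /* use g_object_notify_by_pspec() on GLib >= 2.26, fall back to
   * notify-by-name on older versions */
  #if GLIB_CHECK_VERSION (2, 26, 0)
  #define clutter_notify_by_pspec(obj, pspec) \
    g_object_notify_by_pspec ((obj), (pspec))
  #else
  #define clutter_notify_by_pspec(obj, pspec) \
    g_object_notify ((obj), (pspec)->name)
  #endif

  /* per-class static array of GParamSpecs */
  static GParamSpec *obj_props[PROP_LAST] = { NULL, };

  /* in class_init(): install the property and keep the pspec around */
  obj_props[PROP_NAME] =
    g_param_spec_string ("name", "Name", "The name of the actor",
                         NULL, G_PARAM_READWRITE);
  g_object_class_install_property (gobject_class, PROP_NAME,
                                   obj_props[PROP_NAME]);

  /* later, instead of g_object_notify (G_OBJECT (self), "name"): */
  clutter_notify_by_pspec (G_OBJECT (self), obj_props[PROP_NAME]);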
Most of the classes in Clutter have been converted using the script in
the bug report. Some classes have not been modified even though the
script picked them up, for the reasons described here:
json-generator:
   We probably don't want to modify the internal copy of JSON-GLib.
behaviour-depth:
rectangle:
score:
stage-manager:
   These aren't using the separate GParamSpec* variable style.
blur-effect:
win32/device-manager:
   Don't actually define any properties even though they have the enum.
box-layout:
flow-layout:
   Have some per-child properties that don't work automatically with
   the script.
clutter-model:
   The script gets confused with ClutterModelIter.
stage:
   The script gets confused because PROP_USER_RESIZE doesn't match
   "user-resizable".
test-layout:
   We don't really want to modify the tests.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2150
The P_() macro adds a context for the property nick and blurb. In order
to make xgettext recognize it, we need to drop glib-gettextize inside the
autogen.sh script and ship a modified Makefile.in.in with Clutter.
Events allocated by Clutter should have a pointer to platform-specific
data; this would allow backends to add separate structures for holding
ancillary data, whilst retaining the ClutterEvent structure for use on
the stack.
In theory, for Clutter 2.x we might just want to drop Event and use an
opaque structure, or a typed data structure inheriting from
GTypeInstance instead.
Store a back pointer to the layout manager inside the container using
GObject instance data. This introduces a change in the implementation
of ClutterLayoutManager, though it's still binary compatible.
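A minimal sketch of the idea; the data key and helper names here are
made up, and the real implementation may well key the instance data
differently (e.g. with a GQuark):

  /* stash a back pointer to the layout manager on the container */
  static void
  layout_manager_set_container (ClutterLayoutManager *manager,
                                ClutterContainer     *container)
  {
    if (container != NULL)
      g_object_set_data (G_OBJECT (container),
                         "-clutter-layout-manager", manager);
  }

  /* retrieve the back pointer from the container later on */
  static ClutterLayoutManager *
  layout_manager_get_back_pointer (ClutterContainer *container)
  {
    return g_object_get_data (G_OBJECT (container),
                              "-clutter-layout-manager");
  }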
ClutterEffect is an abstract class that should be used to apply effects
on generic actors.
The ClutterEffect class just defines what an effect should implement; it
could be defined as an interface, but we might want to add some default
behavior dependent on the internal state at a later point.
The effect API applies to any actor, so we need to provide a way to
assign an effect to an actor, and let ClutterActor call the Effect
methods during the paint sequence.
Once an effect is attached to an actor we will perform the paint in this
order:
• Effect::pre_paint()
• Actor::paint signal emission
• Effect::post_paint()
Since an effect might collide with the Shader class, we only allow
either a shader or an effect for the time being.
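A sketch of the intended sequence from the actor's point of view; the
function and field names below are illustrative only, not the final API:

  /* inside ClutterActor's paint path, assuming one attached effect */
  static void
  clutter_actor_paint_with_effect (ClutterActor *self)
  {
    ClutterActorPrivate *priv = self->priv;

    if (priv->effect != NULL)
      clutter_effect_pre_paint (priv->effect);

    /* the usual painting: emit the ::paint signal */
    g_signal_emit (self, actor_signals[PAINT], 0);

    if (priv->effect != NULL)
      clutter_effect_post_paint (priv->effect);
  }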
We kind of assume that stuff will break well before, during the
ClutterBackend::create_context() implementation, if we fail to create a
GL context. We do, however, have error reporting in place inside the
Backend API to catch those cases. Unfortunately, since we switched to
lazy initialization of the Stage, there can be a case of GL context
creation failure that still leads to a successful initialization - and a
segmentation fault later on. This is clearly Not Good™.
Let's try to catch a failure in all the places calling create_context()
and report back to the user the error in a meaningful way, before
crashing and burning.
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» queueing multiple clipped redraws will result in the bounding box of
all the queued clip rectangles being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» Note: no empirical data was used to come up with this threshold so
we may need to tune this.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted you now also
need to override the signal handler and queue your own redraw.
Technically this is a semantic break, but it's assumed that no one
is currently doing this.
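As a sketch of what such a subclass would now have to do; the signal
signature, the clip type and the assumption that the damaged region maps
1:1 onto actor coordinates are all illustrative here:

  static void
  my_pixmap_queue_damage_redraw (ClutterX11TexturePixmap *texture,
                                 gint                     x,
                                 gint                     y,
                                 gint                     width,
                                 gint                     height,
                                 gpointer                 user_data)
  {
    ClutterGeometry clip = { x, y, width, height };

    /* queue a redraw clipped to however this subclass actually paints,
     * instead of relying on the default rectangle assumption */
    _clutter_actor_queue_clipped_redraw (CLUTTER_ACTOR (texture), &clip);
  }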
This still leaves a few unsolved issues with regard to optimizing
sub-stage redraws that need to be addressed in further work, so this can
only be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. It is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor because actors have no way to report a "paint box". (remember
actors can draw outside their allocation and actors with depth may
also be projected outside of their allocation)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
Since using addresses that might change is something that the FSF
finally acknowledged as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI
for getting the license in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
There is no need for us to check for low-level functions and header
files, especially since we haven't been checking the results until
now. This makes cross-compiling slightly more bearable.
The introduction of the StageManager in 0.8 implied that the first Stage
instance to be created was automatically assigned the status of "default
stage". This was all well and good, since the default stage was created
behind the curtains by the initialization sequence.
Now that the initialization sequence does not create a default stage any
longer, it means that the first stage created using clutter_stage_new()
gets to be the default, and all special and warm and fuzzy - which also
means that the first stage created by clutter_stage_new() cannot be
destroyed or handled as any other stage. Whoopsie.
Let's go back to the old semantics: the stage created by the first
invocation of clutter_stage_get_default() is the default stage, and
nothing else can be set as default. One day we'll be able to break the
API and the whole default stage business will be a thing of the past.
The DeviceManager class should be abstract in Clutter, and implemented
by each backend, as different backends will have different ways to
detect, initialize and list devices; the X11 backend alone has *two*
ways of dealing with devices.
This commit makes DeviceManager an abstract class and delegates the
device initialization and enumeration to per-backend sub-classes.
The backend singleton is, obviously, responsible for creating the
device manager.
The X11 and Win32 backends have been updated to the new layout; the
Win32 backend has been updated blindly, so it might require additional
testing.
Since the "internal" state is global, it will leak onto actors that you
didn't intend for it to, because it applies not just to the actors you
create, but also to any actors *they* create. E.g., if you have a dialog
box class, you might push/pop_internal around creating its buttons, so
that those buttons get marked as internal to the dialog box. But
ctx->internal_child will still be set during the *button*'s constructor
as well, and so, e.g., the label and icon inside the button actor will
*also* be marked as internal children, even if that isn't what the
button class wanted.
The least intrusive change at this point is to make push_internal() and
pop_internal() two methods of the Actor class, and take a ClutterActor
pointer as the argument - thus moving the locality of the internal_child
counter to the Actor itself.
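With this change, the hypothetical dialog box above would bracket only
its own internal children, e.g.:

  clutter_actor_push_internal (CLUTTER_ACTOR (dialog));

  clutter_actor_set_parent (button1, CLUTTER_ACTOR (dialog));
  clutter_actor_set_parent (button2, CLUTTER_ACTOR (dialog));

  clutter_actor_pop_internal (CLUTTER_ACTOR (dialog));

and the buttons would no longer leak the internal state onto the labels
and icons they create for themselves.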
http://bugzilla.openedhand.com/show_bug.cgi?id=1990
If your OpenGL driver supports GLX_INTEL_swap_event that means when
glXSwapBuffers is called it returns immediately and an XEvent is sent when
the actual swap has finished.
Clutter can use the events that notify swap completion as a means to
throttle rendering in the master clock without blocking the CPU and so it
should help improve the performance of CPU bound applications.
The previous state for the device is used by the click count machinery
and we should not be overwriting it at every event; instead, we should
use a parallel storage for the current state coming from the windowing
system.
The LEAVE/ENTER event pairs should be queued during the InputDevice
update process, when we change the actor under the device pointer.
This commit cleans up the event emission code inside clutter-main.c
and the logic of the event processing.
The InputDevice object stores pointer coordinates, state, stage and
the actor under the cursor, so if the current backend provides us with
one attached to the Event structure then we want the InputDevice itself
to update its state and give us the ClutterActor underneath the
pointer's cursor.
Use the device manager to store the input devices. Also, provide
two fallback devices when initializing the X11 backend: device 0
for the pointer and device 1 for the keyboard.
ClutterActor checks, when destroying and reparenting, if the parent
actor implements the Container interface, and automatically calls the
remove() method to perform a clean removal.
Actors implementing Container, though, might have internal children;
that is, children that are not added through the Container API. It is
already possible to iterate through them using the Container API to
avoid breaking invariants - but calling clutter_actor_destroy() on
these children (even from the Container implementation, and thus outside
of Clutter's control) will either lead to leaks or to segmentation
faults.
Clutter needs a way to distinguish a clutter_actor_set_parent() done on
an internal child from one done on a "public" child; for this reason, a
push/pop pair of functions should be available to Actor implementations
to mark the section where they wish to add internal children:
  clutter_actor_push_internal ();
  ...
  clutter_actor_set_parent (child1, parent);
  clutter_actor_set_parent (child2, parent);
  ...
  clutter_actor_pop_internal ();
The set_parent() call will automatically set the newly added
INTERNAL_CHILD private flag on each child, and both
clutter_actor_destroy() and clutter_actor_unparent() will check for the
flag before deciding whether to call the Container's remove method.
When getting signals from higher level toolkits, occasionally
one wants access to the underlying event; say for a Button
widget's "clicked" signal, to get the keyboard state.
Rather than having all of the high-level widgets emit
ClutterEvent just for the more unusual use cases,
add a global function to access the event state.
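For instance, a high-level "clicked" handler could do something like the
following; clutter_get_current_event_time() is used here as an example
of this kind of accessor, and the exact entry point added may differ:

  static void
  on_button_clicked (MyButton *button, gpointer user_data)
  {
    /* timestamp of the event that caused the emission, without the
     * widget having to re-emit the ClutterEvent itself */
    guint32 event_time = clutter_get_current_event_time ();

    g_print ("clicked at time %u\n", event_time);
  }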
http://bugzilla.openedhand.com/show_bug.cgi?id=1888
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
There is a new internal Cogl function called _cogl_check_driver_valid
which looks at the value of the GL_VERSION string to determine whether
the driver is supported. Clutter now calls this after the stage is
realized. If it fails then the stage is marked as unrealized and a
warning is shown.
_cogl_features_init now also checks the version number before getting
the function pointers for glBlendFuncSeparate and
glBlendEquationSeparate. It is not safe to just check for the presence
of the functions because some drivers may define the function without
fully implementing the spec.
The GLES version of _cogl_check_driver_valid just always returns TRUE
because there are no version requirements yet.
Eventually the function could also check for mandatory extensions if
there were any.
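A rough sketch of that kind of check (not the actual Cogl code, just its
shape, assuming the usual GL and GLib headers):

  /* returns TRUE if GL_VERSION reports at least min_major.min_minor */
  static gboolean
  gl_version_is_at_least (int min_major, int min_minor)
  {
    const char *version = (const char *) glGetString (GL_VERSION);
    int major = 0, minor = 0;

    if (version == NULL ||
        sscanf (version, "%d.%d", &major, &minor) != 2)
      return FALSE;

    return (major > min_major) ||
           (major == min_major && minor >= min_minor);
  }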
http://bugzilla.openedhand.com/show_bug.cgi?id=1875
The only backend that tried to implement offscreen stages was the GLX
backend, and even this has apparently been broken for some time without
anyone noticing.
The property still remains and since the property already clearly states that
it may not work I don't expect anyone to notice.
This simplifies quite a bit of the GLX code, which is very desirable from
the POV that we want to start migrating window system code down to Cogl,
and the simpler the code is the more straightforward this work will be.
In the future when Cogl has a nicely designed API for framebuffer objects then
re-implementing offscreen stages cleanly for *all* backends should be quite
straightforward.
When computing the pixel value of a ClutterUnits value we should
be caching the result to avoid recomputing it on every call of
clutter_units_to_pixels(). We already have a flag telling us to
return the cached value, but we lack a mechanism to evict the
cache whenever the Backend settings affecting the conversion (that
is, the default font and resolution) change.
In order to implement the eviction we can use a "serial"; the
Backend will have an internal serial field which we retrieve and
put inside the ClutterUnits structure (we split one of the two
64 bit padding fields into two 32 bit fields to maintain ABI); every
time we call clutter_units_to_pixels() we compare the units serial
with that of the Backend; if they match and pixels_set is set to
TRUE then we just return the stored pixels value. If the serials
do not match then we unset the pixels_set flag and recompute the
pixels value.
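In pseudo-C the conversion would then look something like this; the
field and helper names are assumptions, not the actual private layout:

  gfloat
  clutter_units_to_pixels (ClutterUnits *units)
  {
    guint32 backend_serial = backend_get_units_serial ();

    /* cache hit: nothing affecting the conversion has changed */
    if (units->pixels_set && units->serial == backend_serial)
      return units->pixels;

    /* cache miss: recompute with the current font and resolution, then
     * store the value and the serial it was computed against */
    units->pixels     = units_compute_pixels (units);
    units->pixels_set = TRUE;
    units->serial     = backend_serial;

    return units->pixels;
  }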
We can verify this by adding a simple unit test checking that
by changing the resolution of ClutterBackend we get different
pixel values for 1 em.
http://bugzilla.openedhand.com/show_bug.cgi?id=1843
Instead of using ClutterActor for the base class of the Stage
implementation we should extend the StageWindow interface with
the required bits (geometry, realization) and use a simple object
class.
This requires a wee bit of changes across Backend, Stage and
StageWindow, even though it's mostly re-shuffling.
First of all, StageWindow should get new virtual functions:
* geometry:
  - resize()
  - get_geometry()
* realization:
  - realize()
  - unrealize()
This covers all the bits that we use from ClutterActor currently
inside the stage implementations.
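Sketched out, the additions to the interface vtable would look roughly
like this (the exact signatures are an assumption at this stage):

  struct _ClutterStageWindowIface
  {
    /* ... existing vfuncs ... */

    /* geometry */
    void     (* resize)       (ClutterStageWindow *window,
                               gint                width,
                               gint                height);
    void     (* get_geometry) (ClutterStageWindow *window,
                               ClutterGeometry    *geometry);

    /* realization */
    gboolean (* realize)      (ClutterStageWindow *window);
    void     (* unrealize)    (ClutterStageWindow *window);
  };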
The ClutterBackend::create_stage() virtual function should create
a StageWindow, and not an Actor (it should always have been; the
fact that it returned an Actor was a leak of the black magic going
on underneath). Since we never guaranteed ABI compatibility for
the Backend class, this is not a problem.
Internally to ClutterStage we can finally drop the shenanigans of
setting/unsetting actor flags on the implementation: if the realization
succeeds, for instance, we set the REALIZED flag on the Stage and
we're done.
As an initial proof of concept, the X11 and GLX stage implementations
have been ported to the New World Order(tm) and show no regressions.
The CLUTTER_TEXTURE_IN_CLONE_PAINT was used with the old CloneTexture
actor; now that we have ClutterClone nothing sets the private flag
anymore, and the flag itself is not needed.
The race we were experiencing in the X11 backends is apparently
back after the fix in commit 00a3c698.
This time, just delaying the setting of the SYNC_MATRICES flag
is not enough, so we can resume the use of a STAGE_IN_RESIZE
private flag.
This should also fix bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1668
The StageManager singleton instance is already kept around
by the clutter_stage_manager_get_default() function; there is
no need to have it inside the main Clutter context as well.
The _clutter_context_get_default() function will automatically
create the main Clutter context; if we just want to check whether
Clutter has been initialized this will complicate matters, by
requiring a call to g_type_init() inside the client code.
Instead, we should simply provide an internal API that checks
whether the main Clutter context exists and if it has been
initialized, without any side effect.
The input device API is split halfway through the backends in a very
weird way. The data structures are private, as they should be, but most
of the information should be available in the main API since it's
generic enough.
The device type enumeration, for instance, should be common across
every backend; the accessors for device type and id should live in the
core API. The internal API should always use ClutterInputDevice and
not the private X11 implementation when dealing with public structures
like ClutterEvent.
By adding accessors for the device type and id, and by moving the
device type enumeration into the core API we can cut down the amount
of symbols private and/or visible only to the X11 backends; this way
when other backends start implementing multi-pointer support we can
share the same API across the code.
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.
A flag in the master clock is now set whenever the dispatch caused an
actual redraw of a stage. If this flag is not set during the prepare
and check functions then it will resort to limiting the redraw
attempts to the default frame rate as if vblank syncing was
disabled. Otherwise if a timeline is running that does not cause the
scene to change then it would busy-wait with 100% CPU until the next
frame.
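A sketch of the prepare side of this logic; the structure and names are
illustrative, not the actual master clock code:

  typedef struct
  {
    GSource  parent;
    gboolean last_dispatch_redrew_a_stage;  /* set by dispatch() */
  } ClockSource;

  static gboolean
  clock_source_prepare (GSource *source, gint *timeout_)
  {
    ClockSource *clock_source = (ClockSource *) source;

    if (!clock_source->last_dispatch_redrew_a_stage)
      {
        /* nothing actually changed on screen last time: behave as if
         * vblank syncing were disabled and throttle to the default
         * frame rate instead of busy-waiting on the next swap */
        *timeout_ = 1000 / clutter_get_default_frame_rate ();
        return FALSE;
      }

    *timeout_ = 0;
    return TRUE;
  }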
This fix was suggested by Owen Taylor in:
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of trying to guess about which motion events are
extraneous, queue up all events until we process a frame.
This allows us to look ahead and reliably compress consecutive
sequences of motion events.
clutter-main.c: Feed received events to the stage for queueing.
  Remove old compression code. Remove clutter_get_motion_events_frequency()
  and clutter_set_motion_events_frequency().
clutter-stage.c: Keep a queue of pending events.
clutter-master-clock.c: Add processing of queued events to the
  clock source dispatch function.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When a redraw is queued on a stage, simply set a flag; then in
the check/prepare functions of the master clock source, check
for stages that need redrawing.
This avoids the complexity of having multiple competing sources
at the same priority and makes the update ordering more reliable and
understandable.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The clutter_redraw() function is used by libraries embedding
Clutter inside another toolkit, instead of queueing a redraw
on the embedded stage. This means that clutter_redraw() should
perform the same sequence of actions done by the redraw idle
callback.
Sometimes it is necessary for third party code to have a
function called during the redraw process, so that you can
update the scenegraph before it is painted.
This is another step into abstracting the backend operations
that are currently spread all across the board back into the
backend implementations where they belong.
The GL context creation, for instance, is delegated to the stage
realization, which makes it a critical path for every operation
that is GL-context bound. This usually does not make any difference
since we realize the default stage, but at some point we might
start looking into avoiding the default stage realization in order
to make the Clutter startup faster.
It also makes the code more maintainable because every part is
self-contained and can be reworked with the minimum amount of pain.
The master clock is currently advanced using a frame source driven
by the default frame rate. This breaks the sync to vblank because
the vblanking rate could be different than 60 Hz -- or it might be
completely disabled (e.g. with CLUTTER_VBLANK=none).
We should be using the main loop to check if we have timelines
playing, and if so queue a redraw on the stages we own.
We should also prepare the subsequent frame at the end of the redraw
process, so if there are new redraws we will have the scene already
in place.
This makes Clutter redraw at the maximum frame rate, which is
limited by the vblanking frequency.
Currently, the conversion from em to units is done by using the
default font name inside the backend. For actors using their own
font/text layout we need a way to specify the font name along
with the quantity we wish to transform.
With the recent change to internal floating point values, ClutterUnit
has become a redundant type, defined to be a float. All integer entry
points are being internally converted to floating point values to be
passed to the GL pipeline with the least amount of conversion.
ClutterUnit is thus exposed as just a "pixel with fractional bits",
and not -- as users might think -- as a generic, resolution and device
independent unit. Not that it ever was the case, but a fair number of
people were convinced it did provide this "feature", and were flummoxed
by the mere existence of this type.
So, having ClutterUnit exposed in the public API doubles the entry
points and has the following disadvantages:
- we have to maintain twice the amount of entry points in ClutterActor
- we still do an integer-to-float implicit conversion
- we introduce a weird impedance between pixels and "pixels with
fractional bits"
- language bindings will have to choose what to bind, and resort
to manually overriding the API
+ *except* for language bindings based on GObject-Introspection, as
they cannot do manual overrides, thus will replicate the entire
set of entry points
For these reasons, we should coalesce every Actor entry point for
pixels and for ClutterUnit into a single entry point taking a float,
like:
  void   clutter_actor_set_x      (ClutterActor *self,
                                   gfloat        x);
  void   clutter_actor_get_size   (ClutterActor *self,
                                   gfloat       *width,
                                   gfloat       *height);
  gfloat clutter_actor_get_height (ClutterActor *self);
etc.
The issues I have identified are:
- we'll have two cases of compiler warnings:
- printf() format of the return values from %d to %f
- clutter_actor_get_size() taking floats instead of unsigned ints
- we'll have a problem with varargs when passing an integer instead
of a floating point value, except on 64bit platforms where the
size of a float is the same as the size of an int
To be clear: the *intent* of the API should not change -- we still use
pixels everywhere -- but:
- we remove ambiguity in the API with regard to pixels and units
- we remove entry points we get to maintain for the whole 1.0
version of the API
- we make things simpler to bind for both manual language bindings
and automatic (gobject-introspection based) ones
- we have the simplest API possible while still exposing the
capabilities of the underlying GL implementation