The introspection scanner has become slightly more annoying, in the hope
that people start fixing their annotations. As it turns out, it was the
right move.
It's time. Now that we have clutter_actor_allocate() respecting the
easing state of an actor, and that the LayoutManager animation virtual
functions have been deprecated, we can put ClutterAlpha on the chopping
block, and be done with it, once and for all.
So long, ClutterAlpha; and thanks for all the fish.
This semi-aborted API was broken for various reasons:
- it strongly depended on ClutterAlpha, a class we're trying to
deprecate;
- it requires a lot of boilerplate and copy-and-paste code;
- it requires a full relayout of the actor tree for something
that ought to be automatically handled by ClutterActor.
Now that clutter_actor_allocate() handles transitions using the easing
state of the actor, we can deprecate the LayoutManager API for the 1.x
series, and remove it for the 2.x series.
BoxLayout will use the easing state of the children it's allocating; the
current API is re-implemented in terms of an implicit easing state
forced on each child prior to allocating it.
Calling clutter_actor_allocate() should transition between the current
allocation and the new allocation, by using the defined implementation
of the easing state.
This means that:
clutter_actor_save_easing_state (actor);
clutter_actor_allocate (actor, &new_alloc, flags);
clutter_actor_restore_easing_state (actor);
will cause "actor" to transition between the current allocation and the
desired new allocation.
The trick is to ensure that this happens without invalidating the
entire actor tree, but only the portion of the tree that has the
transitioned actor as the local root. For this reason, we just call the
allocate() implementation from within the transition frame advancement,
without invalidating flags: the actor, after all, *has* a valid
allocation for the duration of the transition.
The :x-expand and :y-expand flags on ClutterActor are used to signal
that an actor should expand horizontally and/or vertically - i.e. that
its parent's layout management policy should try to assign extra space
to the actor when allocating it.
The expand flags are automatic: when set on a leaf node in the actor
tree, they will bubble up through the parent and grandparents up to the
top level actor; during allocation, the actors with children will lazily
compute whether their children need to expand.
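For illustration, a rough sketch of the intended usage, assuming
accessors along the lines of clutter_actor_set_x_expand() and
clutter_actor_needs_expand(), with 'child' and 'container' as
placeholder actors:

/* mark a leaf actor as willing to take extra horizontal space */
clutter_actor_set_x_expand (child, TRUE);

/* a container (or its layout manager) can then ask whether any of
 * its children bubbled up an expand flag for a given orientation
 */
if (clutter_actor_needs_expand (container, CLUTTER_ORIENTATION_HORIZONTAL))
  {
    /* hand the extra space to the expanding children */
  }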
The TransitionGroup class is a logical container for running multiple
transitions.
TransitionGroup is not a Score: it is a Transition that advances each
Transition it contains using the delta between frames, and ensures that
all transitions are in a consistent state; these transitions are not
advanced by the master clock.
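A sketch of the intended usage, assuming the constructor and the
add_transition() method below; 'fade' and 'move' are placeholder
transitions:

ClutterTransition *group = clutter_transition_group_new ();

/* the group advances both transitions in lock-step */
clutter_transition_group_add_transition (CLUTTER_TRANSITION_GROUP (group), fade);
clutter_transition_group_add_transition (CLUTTER_TRANSITION_GROUP (group), move);

/* a TransitionGroup is itself a Transition, hence a Timeline */
clutter_timeline_set_duration (CLUTTER_TIMELINE (group), 250);
clutter_timeline_start (CLUTTER_TIMELINE (group));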
There are cases when we want to advance a timeline from another time
source. We cannot use _clutter_timeline_do_tick() directly, as that
assumes that the timeline is already playing, so we'll need to create a
wrapper that toggles the playing flag around it.
Given that we can create a ClutterInterval without initial and final
values using g_object_new(), it stands to reason that we ought to be
able to create an instance when passing NULL GValue pointers to the
new_with_values() constructor as well.
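In other words, something like the following sketch should be valid,
with the interval values filled in later:

/* no initial or final value yet */
ClutterInterval *interval =
  clutter_interval_new_with_values (G_TYPE_FLOAT, NULL, NULL);

/* ... and set them once they are known */
clutter_interval_set_interval (interval, 0.0, 100.0);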
We tend to use float comparison for structured data types like Vertex,
Point, and Size; we should take into consideration fluctuations in the
floating point representation as well.
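A hypothetical helper along these lines, where the epsilon value is
purely illustrative:

#include <math.h>

#define POINT_EPSILON (1e-4f)   /* illustrative tolerance */

/* compare two points within a tolerance, not with the == operator */
static gboolean
point_near_enough (const ClutterPoint *a,
                   const ClutterPoint *b)
{
  return fabsf (a->x - b->x) < POINT_EPSILON &&
         fabsf (a->y - b->y) < POINT_EPSILON;
}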
Instead of a single new() constructor that both allocates and
initializes, split the allocation and initialization into two separate
functions for types that are typically used on the stack, and rarely
allocated on the heap, like ClutterPoint and friends.
This is also applied retroactively to ClutterActorBox and ClutterVertex,
given that the same considerations on usage apply to them as well; we
can add a return value to clutter_actor_box_init() and
clutter_vertex_init() in an ABI-compatible way, so that
clutter_actor_box_new() and clutter_vertex_new() can be effectively
reimplemented as "init (alloc ())".
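A sketch of the resulting usage patterns, using ClutterPoint as the
example:

/* stack allocation: no heap involved */
ClutterPoint p;
clutter_point_init (&p, 10.f, 20.f);

/* heap allocation: new() becomes just init (alloc ()) */
ClutterPoint *q = clutter_point_init (clutter_point_alloc (), 10.f, 20.f);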
Using a compound type property for position and size has various
advantages: it reduces the amount of checks; it reduces the amount
of notify signals to connect to; it reduces the amount of transitions
generated.
The ClutterCanvas content implementation should be used instead, to
avoid stringing along the ClutterTexture API and implementation.
This change requires some minor surgery, as the deprecated section
already contains a header for the previously deprecated methods; plus,
we don't want to deprecate clutter_cairo_set_source_color(). This means
creating a new header to be used for Cairo-related API.
The get_distance() API uses machine integers to compute the distance;
this means that on 32bit we can overflow the integer size. This gets
hidden by the fact that get_distance() returns an unsigned integer as
well.
In reality, ClutterPath is an unmitigated mess, and the only way to
actually fix it is to break API.
https://bugzilla.gnome.org/show_bug.cgi?id=652521
This commit adds a further conditional check for calling
clutter_actor_show() when adding a child to an actor. We cannot
unconditionally change the value of the show-on-set-parent property like
the original solution of commit 81b19a78f5
as that breaks the documented invariant that show-on-set-parent will be
changed iff an actor is without a parent.
The new ADD_CHILD_SHOW_ON_SET_PARENT flag is part of the default and
legacy flags, thus retaining the default behaviour when adding a child;
the flag is not passed when reordering the list of children, which means
we ignore the state of the show-on-set-parent property.
The conformance test suite fully passes, including the newly added test
to verify that changing the paint order does not trigger visibility.
https://bugzilla.gnome.org/show_bug.cgi?id=674510
This reverts commit 81b19a78f5.
The commit breaks the conformance test unit for the invariants we
guarantee for the 1.x API:
ERROR:actor-invariants.c:307:actor_show_on_set_parent: assertion failed: (show_on_set_parent)
It's possible to run Clutter with the 'null' input backend, which means
that clutter_device_manager_get_default() may return NULL. In the future
we may add a default dummy device manager, but right now it's safer to
just add a simple NULL check in the places where we ask for the device
manager.
I can't think of any reason why it would do this and there's no
comment explaining it so let's just remove it. The global fog state
has been removed in Cogl 2.0 so it will cause problems later.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
CoglVertexBuffer is deprecated so here is a fairly simple replacement
to use the experimental CoglPrimitive API.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Drop a bunch of variables that are not meant to be used by Cally; also,
drop the wrong library name from the Libs key: Clutter has a single
shared library, now.
https://bugzilla.gnome.org/show_bug.cgi?id=674105
The interface looked like a good idea around the time Clutter 0.2 was
out, but in reality we never had a proper, and supported implementation
outside of clutter-gst - thus, ClutterMedia was acting like a wrapper
around GStreamer, leading to hilarious issues of impedance mismatch
between API and all sorts of indirection issues typical of wrong
abstractions.
In theory, ClutterMedia should have been deprecated and removed before
we hit 1.0, but we kept flip-flopping on the issue.
For 2.0, it's time to take it out.
And shoot it in the face.
It's been a year and change, and two stable releases, since we
introduced the paint volume mechanism to allow actors to paint outside
their allocation safely in environments that support clipped redraws.
The time has come to flip the switch, and return a valid paint volume,
matching the actor's allocation, by default - at least for Actor
instances from classes that do not override paint() and
get_paint_volume().
If an actor has a paint signal handler then it's the user's responsibility
not to paint outside the allocation - and to suffer the consequences of
doing so; in an ideal world, paint() would not be a signal in the first
place anyway. Plus, the idea that painting can happen at any time and
still have a valid surface greatly conflicts with the design goal of
making Clutter's rendering operations fully retained into a render tree.
We can still revert this commit before spinning 1.12, if need be.
When removing the last Action, Constraint, or Effect, we should also be
clearing the corresponding MetaGroup: code inside ClutterActor relies on
NULL checks, and changing them all to check for NULL && n_items == 0
would not be fun.
configure.ac defines XINPUT_2_2 if XI 2.2 support was found. The code
expects XINPUT_2_2 in the device manager, but HAVE_XINPUT_2_2 in the x11
backend.
On newer X servers, the latter causes a BadValue when XIQueryDevice sends a
different major/minor than gdk's device manager (gnome-control-center).
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
https://bugzilla.gnome.org/show_bug.cgi?id=673961
We need to remove the transition only if the current repeat is equal to
the number of repeats, and if the transition was marked as remove on
complete. Otherwise, the transition has to remain where it is.
The opacity internal setter will do it for us, and it will take into
consideration any eventual flatten effect applied to the actor.
This unbreaks the actor-offscreen-redirect conformance test.
The get_geometry() implementation is broken on multi-head systems; the
only solution is to use XRandR to get the size of the CRTC that contains
the stage.
In XInput 2, the proximity events of XInput 1 have been replaced by an
axis on a valuator class. This means that, on devices that support
proximity information, for instance pens of a tablet, you will start
receiving events with the distance as an axis value - similarly to how
the pressure and tilt are presented in the API.
We were using g_list_foreach() prior to the first Apocalypse, and that
function is resilient against changes to the list while iterating it;
since we are not using a GList any more, we need to handle this case
ourselves.
This also avoids the warning
Cogl-WARNING **: ./cogl-buffer.c:215: GL error (1285): Out of memory
generated by cogl_buffer_map when the CoglBuffer has zero length.
When the easing state has a duration of zero milliseconds we can skip
the entire create_transition() call inside set_width() and set_height(),
to avoid what may be a costly call to get_preferred_*.
If we update a transition that is currently playing, we need to check
the current easing state, and look at the eventual duration, in case
the user wants to cancel the transition.
Instead of checking the duration of the current easing state we should
check if there's a transition in progress, and update it
unconditionally.
If there is no easing state, or the easing state has a duration of zero
milliseconds, then create_transition() should bail out early and set the
requested final state.
This allows us to write:
clutter_actor_save_easing_state (actor);
clutter_actor_set_x (actor, 200);
clutter_actor_restore_easing_state (actor);
[...]
clutter_actor_set_x (actor, 100);
and have the second set_x() update the easing in progress, instead of
being ignored.
https://bugzilla.gnome.org/show_bug.cgi?id=672945
Commit 80626e7584 removed an
IN_DESTRUCTION check from within the add_child_internal() method,
outlining an option for bringing it back. It was too late for the 1.10
cycle to do it, and eventually pick up the pieces, but now that we're
at the beginning of the 1.11 cycle we can restore it, and add checks
elsewhere to balance it.
This patch fixes clutter to not crash when multiple animations share
the same timeline and the actors are explicitly destroyed before
the timeline completes (bug 672890)
Some of the Clutter code was using GL types for the primitive types
such as GLint and GLubyte and then passing these to Cogl. This doesn't
make much sense because the Cogl functions directly take native C
types. This patch just replaces them with either a native C type or a
glib type.
Some of the cogl conformance tests are trying to directly call GL for
example to test creating a foreign texture. These tests have been
changed to manually define the GL enum values instead of relying on a
GL header to define them.
This is necessary because Cogl may soon stop including a GL header
from its public headers.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
The disjunction operator was misspelt as -O which tests whether the
following file is owned by the calling user. This doesn't take enough
arguments so bash was showing an error and the test was always
failing. This meant that NEED_XKB_UTILS was always false which should
have broken the build but the Makefile was mistakenly including
clutter-xkb-utils.c again if SUPPORT_WAYLAND is defined.
See 1b77565e for reference.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
The commit 90e5088 added some extra compiler warning options that were
triggering warnings when enabling the wayland build due to missing
header includes. This adds those header includes in.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Because the wayland-server-protocol.h header includes symbols that
collide with wayland-client-protocol.h Cogl now provides top level
<cogl/cogl-wayland-server.h> and <cogl/cogl-wayland-client.h> headers so
that applications can ensure they only include one of the wayland
protocol headers in a particular compilation unit. This updates clutter
accordingly to include those headers.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Should not have been there in the first place: the animatable will be
set either using ClutterTransition API, or when adding the transition
to a ClutterActor.
When adding a transition to a ClutterActor, the actor should hold a
reference on it, and release it only when we remove it. This makes
transitions just like other objects held by ClutterActor.
The ::completed signal emission is part of the current cycle; repeating,
like the automatic reverse of the timeline's direction, happens after
the ::completed chain of handlers has been called.
We still use XKeycodeToKeysym() in a fallback path in case we're not
running on a decent enough system; XKeycodeToKeysym() is deprecated as
of version 1.12 of the X server, but since I don't want to copy a bunch
of code from GDK or, god forbid, from Xlib, for a fallback path, it's
probably more reasonable to just silence the compiler warnings - at
least until we can drop all the X compatibility crap, and just use
modern, or semi-modern, API.
Some events may contain precise scrolling information coming from
devices like trackpads and touchscreens. ClutterEvent should allow
setting and getting this information.
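A sketch of what reading the deltas might look like, assuming a getter
along the lines of clutter_event_get_scroll_delta():

if (clutter_event_type (event) == CLUTTER_SCROLL &&
    clutter_event_get_scroll_direction (event) == CLUTTER_SCROLL_SMOOTH)
  {
    gdouble dx, dy;

    clutter_event_get_scroll_delta (event, &dx, &dy);

    /* scroll the contents by (dx, dy) */
  }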
While you can get a per-transition notification of completion, it can be
convenient to also have a way to notify that all the transitions
involving an actor are complete. A simple signal triggered by the
removal of the last transition fits the bill pretty neatly.
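Assuming the signal ends up being called ::transitions-completed,
usage would look like:

static void
on_transitions_completed (ClutterActor *actor,
                          gpointer      user_data)
{
  /* every transition involving the actor has finished */
}

g_signal_connect (actor, "transitions-completed",
                  G_CALLBACK (on_transitions_completed), NULL);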
When handling Configure events from the X server we update the
internal copy of the window size. Unfortunately we may be updating the
wrong stage implementation because we use the one related to the event
translator (which is the first created stage).
This patch fixes flickering/redrawing issues with multiple stages by
looking up the right stage implementation associated with an XEvent.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
If restore_easing_state() is called on the last easing state on the
stack, clean up the stack, so that we don't leave stale pointers
around to later segfault on.
Setting the easing mode, duration, or delay without ever having called
clutter_actor_save_easing_state() is confusing, and not really nice.
In the future, we'll have a default easing state implicitly created by
the actor itself, but for the time being explicitly opting in is
preferable.
If the pointer is inside the window frame when it's shown then we need
to synthesize and emit a NSMouseEnterEvent ourselves, as Quartz won't
do it for us.
This is a bit of a blind commit - but it's taken from an equivalent
patch that has been verified to work in GDK.
Yes, it's not really the proper GL name for a minification filter that
is linear on every axis of a texture and linear between mipmap levels,
but it
has three redeeming qualities as a name:
- LINEAR_MIPMAP_LINEAR sucks, as it introduces GL concepts like
mipmaps in the API naming, while we're trying to avoid that;
- people using GL already know what 'trilinear' means in this context
without going all Khronos on their asses;
- we're using 2D textures anyway, so 'linear on two axes and linear
between mipmap levels' can be effectively approximated to
'trilinear'.
I mean, if even the OpenGL official wiki says:
Unfortunately, what most people think of as "trilinear" is not linear
filtering of a 3D texture, but what in OpenGL terms is GL_LINEAR mag
filter and GL_LINEAR_MIPMAP_LINEAR in the min filter in a 2D texture.
That is, it is bilinear filtering of each appropriate mipmap level,
and doing a third linear filter between the adjacent mipmap levels.
Hence the term "trilinear".
-- http://www.opengl.org/wiki/Texture
then the horse has already been flogged to death, and I don't intend to
be accused of necrophilia and sadism by flogging it some more.
Prior art: every single GL tutorial in the history of ever;
CoreAnimation's scaling filter enumerations.
If people want to start using 1D or 3D textures then they are probably
going to be using Cogl API directly, and that has the GL naming scheme
for minification and magnification filters anyway.
At least for the time being, we only expose the parts of the API that we
want to use internally and for new, out-of-tree Content implementations.
The full PaintNode tree API will be made public in 1.12 once we branch
master.
It's a bit late in the game for changing the emission of the paint
signal with actors that use paint nodes - mostly because we have both
implicit paint nodes (background color, content) and explicit paint
nodes (the paint_node virtual).
When we branch for 1.12 we can revert this change.
It's safer, and consistent with the rest of Clutter, to make sure that
the template pipeline we use for ClutterTextureNode has its wrap mode
set to clamp-to-edge.
Both ClutterCanvas and ClutterImage should use the minification and
magnification filters set on the actor, just like they use the content
box and the paint opacity.
The ::paint signal is the old way to paint an actor; the paint_node()
virtual function is the new way. It's still not possible to traverse the
whole scene graph and build a render tree of PaintNode instances, but
with this change we simultaneously cut out the ::paint signal emission
from the critical path for actors that are using the new PaintNode-based
API, and we retain backward compatibility in the interim period between
1.10 and 2.0.
Use JsonNode and JsonBuilder instead of our homegrown string building;
this at least ensures that we're generating proper data, instead of
random strings. Plus, we can ask the PaintNode subclasses to serialize
themselves in a sensible way.
ClutterContent is an interface for creating delegate objects that handle
what an actor is going to paint.
Since they are a newly added type, they only hook into the new PaintNode
based API.
The position and size of the content is controlled in part by the
content's own preferred size, and by the ClutterContentGravity
enumeration.
The ::paint-node virtual inside ClutterActor is what we want people to
use when painting their actors.
Right now, it's a new code path that gets called while painting; the
paint_node() implementation should only paint the actor itself, and not
its children — they will get their own paint_node() called when needed.
Internally, ClutterActor will automatically create a dummy PaintNode and
paint the background color; then control will be handed out to the
implementation on the class. This is required to maintain compatibility
with the old ::paint signal emission.
Once we are able to get rid of the paint (and pick) sequences, we'll
switch to a fully retained render tree.
Now that we have a proper scene graph API, we should split out the
rendering part from the logical and event handling part.
ClutterPaintNode is a lightweight fundamental type that encodes only the
paint operations: pipeline state and geometry. At its simplest, it is a
way to structure setting up the programmable pipeline using a
CoglPipeline, and submitting Cogl primitives. The important takeaway
from this API is that you are not allowed to call Cogl API like
cogl_set_source() or cogl_primitive_draw() directly.
The interesting approach to this is that, in the future, we should be
able to move to a purely retained mode: we will decide which actors need
to be painted, they will update their own branch of the render graph,
and we'll take the render graph and build all the rendering commands
from that.
For the 1.x API, we will have to maintain invariants and the existing
behaviour, but as soon as we can break API, the old paint signal will
just go away, and Actors will only be allowed to manipulate the render
tree.
As it turns out, we do end up recursing inside the ::paint signal
emission - especially inside the conformance test suite.
This thoroughly sucks - and we'll only be able to fix it properly
when we bump API for 2.0.
ClutterActor should be able to hold all transitions, even the ones that
have been explicitly created.
This will allow us to add new transition types in the future, like the
keyframe-based one, or the transition group.
It should be possible to ask a timeline what its duration is, taking
into account any repeats, and which repeat is the one currently
in progress.
These two functions allow writing animations that depend on the current
state of another timeline.
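Assuming getters along the lines of the ones below, a sketch:

/* full duration, including repeats, in milliseconds */
gint64 total = clutter_timeline_get_duration_hint (timeline);

/* the repeat currently in progress, starting from 0 */
gint current = clutter_timeline_get_current_repeat (timeline);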
It should be possible to set up the delay of a transition, but since
we start the Transition instance before returning control to the caller,
we cannot use clutter_actor_get_transition() to do it without something
extra-awkward, like:
transition = clutter_actor_get_transition (actor, "width");
clutter_timeline_stop (transition);
clutter_timeline_set_delay (transition, 1000);
clutter_timeline_start (transition);
for each property involved. It's much easier to add a delay to the
easing state of an actor.
Clutter is meant to be, and I quote from the README, a toolkit:
for creating fast, compelling, portable, and dynamic graphical
user interfaces
and yet the default mode of operation for setting an actor's state on
the scene graph (position, size, opacity, rotation, scaling, depth,
etc.) is *not* dynamic. We assume a static UI, and then animate it.
This is the wrong way to design an API for a toolkit meant to be used to
create animated user interfaces. The default mode of operation should be
to implicitly animate every state transition, and only allow skipping
the animation if the user consciously decides to do so — i.e. the design
tenet of the API should be to make The Right Thing™ by default, and make
it really hard (or even impossible) to do The Wrong Thing™.
So we should identify "animatable" properties, i.e. those properties
that should be implicitly animated by ClutterActor, and use the
animation framework we provide to tween the transitions between the
current state and the desired state; the implicit animation should
happen when setting these properties using the public accessors, and not
through some added functionality. For instance, the following:
clutter_actor_set_position (actor, newX, newY);
should not make the actor jump to the (newX, newY) point; it should
tween the actor's position between the current point and the desired
point.
Since we have to maintain backward compatibility with existing
applications, we still need to mark the transitions explicitly, but we
can be smart about it, and treat transition states as a stack that can
be pushed and popped, e.g.:
clutter_actor_save_easing_state (actor);
clutter_actor_set_easing_duration (actor, 500);
clutter_actor_set_position (actor, newX, newY);
clutter_actor_set_opacity (actor, newOpacity);
clutter_actor_restore_easing_state (actor);
And we can even start stacking animations, e.g.:
clutter_actor_save_easing_state (actor);
clutter_actor_set_easing_duration (actor, 500);
clutter_actor_set_position (actor, newX, newY);
clutter_actor_save_easing_state (actor);
clutter_actor_set_easing_duration (actor, 500);
clutter_actor_set_easing_mode (actor, CLUTTER_LINEAR);
clutter_actor_set_opacity (actor, newOpacity);
clutter_actor_set_depth (actor, newDepth);
clutter_actor_restore_easing_state (actor);
clutter_actor_restore_easing_state (actor);
And so on, and so forth.
The implementation takes advantage of the newly added Transition API,
which uses only ClutterTimeline sub-classes and ClutterInterval, to cut
down the amount of signal emissions and memory management of object
instances, as well as using the ClutterAnimatable interface for custom
properties and interpolation of values.
As a convenience for the C API.
Language bindings should already be using the GValue variants.
This commit also moves the custom progress functions out of
clutter-interval.c, as they are meant to be generic interpolation
functions and not ClutterInterval-specific.
The ::paint, ::queue-redraw, and ::queue-relayout signals should be
marked as no-recurse and no-hooks; these signals are emitted *a lot*
during each frame, and since GLib has a bunch of optimizations for
signals with no closures, we should try and squeeze every single CPU
cycle we can.
Dist clutter-version.h for use under non-autotools-based build
environments (e.g. MSVC), as this header is now generic across the
systems Clutter supports.
Checked with Emmanuele Bassi on IRC.
The frame_budget member of ClutterMasterClock is only enabled when
CLUTTER_ENABLE_DEBUG is enabled, so fix the build with this.
Checked with Emmanuele Bassi on IRC.
Normally this leak goes unnoticed because basic fundamental types
are typically used with clutter_actor_animate(); the leak shows up
if boxed or object types are passed (such as ClutterVertex in the
case I stumbled upon).
The ClutterBrightnessContrastEffect effect class allows changing the
brightness and contrast levels of an actor.
Modified-by: Emmanuele Bassi <ebassi@linux.intel.com>
Modified-by: Neil Roberts <neil@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=656156
We can use the __COUNTER__ macro or, failing that, the __LINE__ macro to
ensure that we don't declare dummy variables more than once with the
same name.
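The usual token-pasting trick, sketched with hypothetical macro names
and falling back to __LINE__ where __COUNTER__ is unavailable:

#define CLUTTER_MACRO_PASTE_1(a, b)   a ## b
#define CLUTTER_MACRO_PASTE(a, b)     CLUTTER_MACRO_PASTE_1 (a, b)

#ifdef __COUNTER__
#define CLUTTER_DUMMY_VARIABLE  CLUTTER_MACRO_PASTE (_clutter_dummy_, __COUNTER__)
#else
#define CLUTTER_DUMMY_VARIABLE  CLUTTER_MACRO_PASTE (_clutter_dummy_, __LINE__)
#endif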
The comment says that we're going to load textures in a loop while we
still have work to do, bailing out if one iteration takes more than 5
milliseconds in order to avoid blowing our frame budget; but the check
is for 5 seconds, which is hardly a sensible value.
We remove 2 pixels from the height of the cursor, but we should also
remove the same amount from the position on the y axis, so that the
cursor caret appears centered in the allocated height.
https://bugzilla.gnome.org/show_bug.cgi?id=655491
ClutterScript should be able to automatically call gettext() and friends
on strings loaded from a UI definition, prior to passing the string to
the object it is constructing.
The basic implementation is trivial:
- set a translation domain on the ClutterScript instance
- mark the translatable strings inside the JSON data, like:
"property" : {
"translatable" : true,
"string" : "a translatable string"
}
The hard part is now getting the tools we use to extract the
translatable strings to understand the JSON format we use inside
ClutterScript.
Let's keep a budget of 16.6 milliseconds per frame, and reduce it by the
amount of time spent in each phase of the frame processing. If any phase
goes over the allocated budget then we use the diagnostic mode
facilities to warn the app developer.
It is sometimes useful to have better control over when a repaint
function is called. Currently, all repaint functions are called prior
to the stages' update phase of the frame processing.
We can introduce flags to represent the point in the frame update
process at which we wish Clutter to call the repaint function.
As a bonus, we can also add a flag that causes adding a repaint function
to spin the master clock.
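A sketch of the resulting API, assuming flags and a _full() variant
along these lines:

static gboolean
on_post_paint (gpointer data)
{
  /* runs after the stages have been painted */
  return TRUE; /* keep the repaint function installed */
}

clutter_threads_add_repaint_func_full (CLUTTER_REPAINT_FLAGS_POST_PAINT |
                                       CLUTTER_REPAINT_FLAGS_QUEUE_REDRAW_ON_ADD,
                                       on_post_paint,
                                       NULL, NULL);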
In theory, handlers connected to the ::allocation-changed signal may be
able to modify the actor's real allocation and allocation flags,
especially now that we use STATIC_SCOPE; let's avoid this, so that we
don't regret it later.
The show and hide implementation inside ClutterStage ended up being
recursive, and the hide implementation would actually show the children
of the stage unconditionally.
Whoopsie.
The ActorBox passed to the ::allocation-changed signal should be
annotated as STATIC_SCOPE, given that it's a pointer to a structure
inside ClutterActorPrivate - hence there's no risk of it actually being
freed from a signal handler. This allows the GSignal machinery to avoid
a costly copy/free for each signal emission.
If the redraw entry is not cleared, queueing a redraw from a signal
handler could reinsert the same object in the stage redraw list,
causing the segfault later (as the object is immediately freed)
https://bugzilla.gnome.org/show_bug.cgi?id=671173
This just adds some padding pointers so that we can later add more
virtual functions without breaking ABI.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Clutter applications using evdev are typically fullscreen applications
associated with a single virtual terminal. When switching away from
the application's associated tty, Clutter should stop managing all
evdev devices and re-probe for devices when the application regains
focus by switching back to the tty. To facilitate this, this patch
adds clutter_evdev_release_devices() and clutter_evdev_reclaim_devices()
functions.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
When using the Wayland backend this sets a constraint that the
CoglRenderer selects the Wayland EGL winsys.
When a Wayland compositor display is set it now also sets a constraint
that the renderer should use EGL because only EGL renderers will set up
the required wl_drm global object.
The X11 backend now sets the X11 constraint.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
The core input devices when XInput doesn't work were being created as
generic ClutterInputDevices instead of ClutterInputDeviceX11s. This
meant the keycode_to_evdev virtual wouldn't work.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a virtual function to ClutterInputDevice to translate a
keycode from the hardware_keycode member of ClutterKeyEvent to an
evdev keycode. The function can fail so that input backends that don't
have a sensible way to translate to evdev keycodes can return FALSE.
There are implementations for evdev, wayland and X. The X
implementation assumes that the X server is using an evdev driver in
which case the hardware keycodes are the evdev codes plus 8.
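For illustration, a caller-side sketch; 'device' and 'event' are
placeholders for the current input device and key event:

guint evdev_code;

/* returns FALSE if the backend has no sensible mapping */
if (clutter_input_device_keycode_to_evdev (device,
                                           event->hardware_keycode,
                                           &evdev_code))
  {
    /* use evdev_code; with an evdev X driver this is simply
     * hardware_keycode - 8
     */
  }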
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Because evdev isn't associated with the display system, it doesn't
have any easy way to associate an input device with a stage.
Previously Clutter would never set a stage for an input device and
leave it up to the application to set it. To make it easier for
applications which just have a single fullscreen stage (which is
probably the most common use case for evdev) the device manager now
associates all input devices with the first stage that is created
unless something has already set a stage.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
The shm buffer format enum values were renamed and the explicitly
premultiplied format was dropped since it's now assumed if the buffer
has an alpha component then it's premultiplied.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
The pick method doesn't do anything special over the default pick
method provided by ClutterActor so there's no need to implement it.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a signal that's emitted whenever a wayland surface is damaged
that allows sub-classes to override the default handler to change
how clipped redraws are queued if the sub-class doesn't simply draw
a rectangle. The signal can also be used just to track damage.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a "cogl-texture" gobject property so that a compositor may
listen for notifications of changes to the texture used to paint.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This patch renames the width/height properties to
surface-width/surface-height so that they won't override the
width/height properties of ClutterActor which have different
semantics.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a NEEDS_XKB_UTILS automake conditional that's set to true if
either the wayland backend is enabled or the evdev input backend is
enabled since they both depend on clutter-xkb-utils.c and we need
to avoid listing the file twice since that leads to duplicate symbols
and the build fails.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
When a new buffer is attached and we update the width and height
properties for the surface, we now also call clutter_actor_set_size().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a clutter_wayland_surface_get_surface() function for querying
the struct wl_surface * associated with a ClutterWaylandSurface.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This exposes a clutter_wayland_surface_set_surface() function. The
implementation ignores requests to re-set the same surface, and now
has code to clean up old surface state before setting the new surface
(previously the surface was construct-only, so this wasn't necessary).
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
When disposing a ClutterWaylandSurface we now make sure to unref any
pipeline we created and unref any surface buffer textures we created.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
There was a GArray member named damage that wasn't being used which this
patch removes.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
If wayland compositor support has been enabled then we make sure to
install the corresponding public headers and a
clutter-wayland-compositor.pc pkgconfig file.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
We currently check for the IN_DESTRUCTION flag inside the
add_child_internal() function.
This check disallows calling methods that change the stacking order
within the destruction sequence, by triggering a critical warning first,
and leaving the actor in an undefined state, which then ends up being
caught by an assertion.
The reproducible sequence is:
- actor gets destroyed;
- another actor, linked to the first, will try to change the
stacking order of the first actor;
- changing the stacking order is a composite operation composed
by the following steps:
1. ref() the child;
2. remove_child_internal(), which removes the reference;
3. add_child_internal(), which adds a reference;
- the state of the actor is not changed between (2) and (3), as
it could be an expensive recomputation;
- if (3) bails out, then the actor is in an undefined state, but
still alive;
- the destruction sequence terminates, but the actor is unparented
while its state indicates being parented instead.
- assertion failure.
The obvious fix would be to decompose each set_child_*_sibling() method
into proper remove_child()/add_child(), with state validation; this may
cause excessive work, though, and trigger a cascade of other bugs in
code that assumes that a change in the stacking order is an atomic
operation.
Another potential fix is to just remove this check here, and let code
doing stacking order changes inside the destruction sequence of an actor
continue doing the work.
The third fix is to silently bail out early from every
set_child_*_sibling() and set_child_at_index() method, and avoid doing
work.
I have a preference for the second solution, since it involves the least
amount of work, and the least amount of code duplication.
See bug: https://bugzilla.gnome.org/show_bug.cgi?id=670647
This solves a crash in GNOME Shell: computing the extents for some
StWidgets could lead to calling st_widget_get_theme_node(), and it is
a fatal error to call this on a widget that has not been added to a
stage.
Make it like the clutter-version.h.in template. Since we no longer have
Windows-specific items in here (such as CLUTTER_FLAVOUR), perhaps we
could put the dllexport stuff in clutter-version.h.in, where it can be
used when necessary, and this file could go away.
GLib introduced macros that allow defining the lower and upper bounds
of the API to be used by application code.
The lower bound defines the minimum version that will trigger
deprecation warnings; the upper bound defines the maximum version that
will trigger compiler warnings for unavailable symbols.
This scheme allows gradually porting application code to a new version
of the API, especially in case of resynchronization after multiple
development cycles.
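Following the GLib scheme, application code would then do something
along these lines (the version macros shown are illustrative):

/* only warn about API deprecated up to 1.10, and warn about any
 * symbol introduced after 1.12
 */
#define CLUTTER_VERSION_MIN_REQUIRED  CLUTTER_VERSION_1_10
#define CLUTTER_VERSION_MAX_ALLOWED   CLUTTER_VERSION_1_12

#include <clutter/clutter.h>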
Now that ClutterActor has a default paint volume, subclasses may wish
to retrieve it without chaining up to the parent's implementation of
the get_paint_volume() function.
The get_default_paint_volume() returns a ClutterPaintVolume pointer
to the paint volume as computed by the default implementation of the
get_paint_volume() virtual function; it can only be used immediately,
as it's not guaranteed to survive across multiple frames.
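A sketch of how a subclass might use it from its own get_paint_volume()
implementation; 'my_actor' is a hypothetical subclass:

static gboolean
my_actor_get_paint_volume (ClutterActor       *actor,
                           ClutterPaintVolume *volume)
{
  const ClutterPaintVolume *base;

  /* start from the allocation-based volume... */
  base = clutter_actor_get_default_paint_volume (actor);
  if (base == NULL)
    return FALSE;

  clutter_paint_volume_union (volume, base);

  /* ...then grow it to cover whatever gets painted outside of the
   * allocation
   */

  return TRUE;
}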
Creating PaintVolume instances is not possible, and it's not recommended
anyway. It is, though, necessary to union paint volumes, especially with
2D boxes, in some cases.
Clutter should provide a simple convenience function that allows
unioning volumes to boxes in a moderately efficient way.
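Something along the lines of the following sketch, where 'volume',
'width', and 'height' are placeholders:

ClutterActorBox box = { 0.f, 0.f, width, height };

/* grow the volume to cover the 2D box, without creating a
 * temporary paint volume
 */
clutter_paint_volume_union_box (volume, &box);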
https://bugzilla.gnome.org/show_bug.cgi?id=670021
It should be possible to adapt the abicheck.sh script so that it
actually tests the ABI of libclutter-1.0.so taking into account
the backends that were compiled into Clutter, and avoid expected
failures if Clutter was not built with a specific backend.
https://bugzilla.gnome.org/show_bug.cgi?id=670680
We cannot deprecate ClutterAlpha yet; nor can we implement
ClutterAlpha in terms of ClutterTimeline, because multiple Alpha
instances can be attached to the same Timeline. So we can start
with a "soft" deprecation: just a warning in the documentation
stating that ClutterAlpha will be deprecated, and removed, in the
future, and that newly-written code should use ClutterTimeline
instead.
We can use ClutterTimeline and its progress mode inside
ClutterAnimation; obviously, we have to maintain the invariants because
of the ClutterAnimation:alpha property, but if all you set is the :mode
property using one of the Clutter animation modes then we can skip the
ClutterAlpha entirely.
Instead of having the easing functions depend on ClutterAlpha, and
remain static to the clutter-alpha.c source file, we should make them
generic and move them to their own internal header and source files.
This will allow us to reuse them in the near future.
Since Cogl has started restricting what cogl 1.x api is exposed when
COGL_ENABLE_EXPERIMENTAL_2_0_API is defined and since we build all
Clutter internals with COGL_ENABLE_EXPERIMENTAL_2_0_API defined this
patch makes a first pass at reducing our internal use of the Cogl 1.x
api.
The most notable api that's no longer exposed to us internally is
the cogl_material_ api so this switches all Clutter internals to use the
cogl_pipeline_ api instead. This patch also makes quite a bit of
progress removing internal uses of CoglHandle although there is still
more to go.
The experimental cogl_texture_pixmap_x11_new() api was recently changed
to take an explicit context argument and return a GError on failures.
This updates Clutter's use of the api accordingly.
We were only exposing clutter_backend_get_cogl_context() if
COGL_ENABLE_EXPERIMENTAL_2_0_API had been defined but the CoglContext
api is also available if COGL_ENABLE_EXPERIMENTAL_API has been defined.
As it was it meant that code opting into the experimental Cogl api
but not limiting to the 2.0 only api would have to #define
COGL_ENABLE_EXPERIMENTAL_2_0_API before including clutter.h but make
sure it wasn't defined when including cogl.h which was particularly
awkward.
Recently the cogl_framebuffer_swap_* apis were moved into the
cogl_onscreen_* namespace since only CoglOnscreen framebuffers can be
double buffered. This renames all uses of the cogl_framebuffer_swap_*
apis in Clutter.
The experimental cogl_pipeline_new() api was recently changed so it
explicitly takes a CoglContext. This updates all calls to
cogl_pipeline_new() in clutter accordingly.
ATK_ROLE_CANVAS is not a suitable role, as the user (in general) can't
draw on the Stage. CallyStage implements AtkWindow, so the proper role
is ATK_ROLE_WINDOW
Remove atkcomponent, focus_tracker, etc., and emit the focus state
change from the stage. Now things are simpler, and we stop using some
of the soon-to-be-deprecated signals on ATK.
It should be possible to do:
clutter_stage_set_fullscreen (stage, TRUE);
clutter_actor_show (stage);
and have the stage be full screen as soon as it is shown.
Currently, we need to call clutter_actor_realize() prior to calling
set_fullscreen(), otherwise the backing X window will not be set,
and ClutterStageX11 will silently discard the change.
If set_fullscreen() was called prior to realization, ClutterStageX11
should delay setting the fullscreen hint until the realize() chain
has been successfully executed.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2515
If you execute the following sequence:
stage = clutter_stage_new ();
clutter_actor_set_size (stage, 1280, 800);
clutter_actor_realize (stage);
Then you end up creating an onscreen buffer of size 1280x800 but
ClutterStageX11 storing the stage size as 640x480.
This patch resyncs the two implementations by using the ClutterStage's
size in both classes when realizing.
Signed-off-by: Lionel Landwerlin <llandwerlin@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=667540
Unconditionally creating CoglPipeline and CoglSnippets inside the class
initialization functions does not seem to be enough when dealing with
headless builds.
Our last resort is to lazily create the base pipeline the first time we
try to copy it, during the instance initialization.
The class initialization function may be called when Clutter hasn't been
fully initialized — for instance, when scanning the source with gtk-doc
or with the introspection scanner.
The allocation code for BoxLayout contains a sequence of brain farts
that have made it barely work since the layout algorithm was
synchronized with the one in GtkBox.
The origin of the layout is inverted, and it doesn't take into
consideration a modified allocation origin (for actors that provide
padding or margin).
The pack-start property is broken, and it only works because we walk the
children list backwards; this horribly breaks when a child changes
visibility. Plus, we count invisible children, which leads to
allocations getting insane origins (either close to -MAX_FLOAT or
MAX_FLOAT).
Finally, the allocation is applied twice even for non-animated cases.
https://bugzilla.gnome.org/show_bug.cgi?id=669291
* clutter_wayland_input_device_get_wl_input_device for the input device
* clutter_wayland_stage_get_wl_surface for the Wayland surface
* clutter_wayland_stage_get_wl_shell_surface for the shell surface
This converts the blur, colorize and desaturate effects to use
snippets instead of CoglPrograms. Cogl can handle the snippets much
more efficiently than programs so this should be a performance win. It
also fixes the problem that Cogl would end up recompiling the program
for every instance of the effects because Clutter was not reusing the
same program.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
The blur effect needs to pass a uniform to the GLSL shader so that it
can know the texture coordinate offset from one texel to another. To
calculate this the blur effect was previously using the allocation
size of the actor rounded up to the next power of two. Presumably the
assumption was that Cogl would round up the size of the texture to the
next power of two when allocating the texture. However, this is not
true if the driver supports NPOT textures. Also it doesn't take into
account the paint volume of the actor which may cause the texture to
be a completely different size. This patch just changes to directly
use the size of the texture.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Sometimes a subclass of ClutterOffscreenEffect wants to paint with a
completely custom material. In that case it is awkward to modify the
material owned by ClutterOffscreenEffect, so it makes more
sense to just get the texture and manage its own material.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
All of the pipelines used for ClutterTexture actors share a common
pipeline ancestor created with cogl_pipeline_copy. Previously this
ancestor had a dummy 1x1 texture attached to it so that it would end
up with the same state as the child pipelines that will render with a
texture. Cogl now has a mechanism to specify that a texture will be
used with a pipeline layer without having to create an actual texture.
This patch makes it use that to avoid having an unused texture.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we have N children and the user passes N (or a number beyond N) to
clutter_actor_insert_child_at_index, we should respond by adding the
child at the end, not silently doing nothing.
This should avoid trying to fix the origin of a paint volume set from
the allocation's origin, and thus breaking everything.
A PaintVolume for an actor is defined to be relative to the actor's
modelview unless specifically modified by internal functions; the origin
of an actor's allocation is, on the other hand, parent-relative.
There are times when we don't want to remove all children and count on
the reference count dropping to 0 to ensure destruction; there are cases,
such as managed environments, where it's preferable to ensure that the
children of an actor get actually destroyed.
ClutterActor has a background-color property, now; we should use it for
the Stage, re-implement the :color property in terms of background-color,
and deprecate the Stage property.
Being able to easily set the number of repeats has been a request for
the animation framework for some time now. The usual way to implement
this is: connect to the ::completed signal, use a static counter, and
stop the timeline when the counter hits a specific spot.
In the same light as the :auto-reverse property, we can make it easier
to implement this common functionality by adding a :repeat-count
property that, when set, limits the number of loops that a Timeline can
perform before stopping itself.
In fact, we can implement the :loop property in terms of the
:repeat-count property just by using a sentinel value mapping to
"infinity", and map loop=FALSE to repeat-count=0, and loop=TRUE to
repeat-count=-1.
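For instance, assuming a setter named along the usual lines:

/* repeat twice after the first run, then stop */
clutter_timeline_set_repeat_count (timeline, 2);

/* equivalent to the old loop=TRUE */
clutter_timeline_set_repeat_count (timeline, -1);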
The clutter_timeline_clone() method was a pretty dumb idea when it was
introduced, back when we still had the ClutterEffectTemplate and the
clutter_effect_* animation API. It has since become an API wart: we
cannot change or add new properties to be cloned without the risk of
breaking existing code. All in all, cloning a GObject is just a matter
of calling g_object_new() with the wanted properties.
Let's deprecate this throwback of the Olden Days™, so that we can remove
it for good once we break for 2.0.
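In other words, the moral equivalent of a clone is just a sketch like
this, picking a couple of properties for illustration:

ClutterTimeline *copy =
  g_object_new (CLUTTER_TYPE_TIMELINE,
                "duration", clutter_timeline_get_duration (timeline),
                "loop", clutter_timeline_get_loop (timeline),
                NULL);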
When the ClutterTextBuffer support inside ClutterText was merged, it
introduced a regression that was identified and fixed in bug 659116.
The optimization to not paint empty ClutterText actors is only valid
if the actor is not editable, or if the cursor is not visible.
Courtesy of GLib and GTK+. The abicheck.sh is a simple, Linux-only,
script that checks that we're not leaking private symbols, and that the
clutter.symbols file has been kept up to date.
In theory, it should go inside the distcheck phase.
A bunch of private symbols have escaped into the SO; let's rectify this
situation by using the '_' private prefix, or making them static as they
should have been.
Cogl now requires that all applications integrate their main loop with
Cogl so that it can listen for events from winsys. This patch just
adds Cogl's GSource to the main loop.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Some of Cogl's experimental apis have changed so that the buffer apis
now need to be passed a context argument and some drawing apis have been
replaced with cogl_framebuffer_ drawing apis that take explicit
framebuffer and pipeline arguments.
These changes were made as part of Cogl moving towards a more stateless
api that doesn't rely on a global context.
This patch updates Clutter to work with the latest Cogl api and bumps
the required Cogl version to 1.9.5.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Similar to the clutter_actor_iter_remove(), but it'll call destroy()
instead of remove_child().
We can also reimplement the ::destroy default handler using it, and make
it more compact.
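The reimplemented default handler boils down to roughly:

ClutterActorIter iter;
ClutterActor *child;

clutter_actor_iter_init (&iter, self);
while (clutter_actor_iter_next (&iter, &child))
  clutter_actor_iter_destroy (&iter);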
There is a typo in the check for a negative index: the index variable
should be index_, not index - unfortunately, the latter can still be
resolved to index(3), so compiler and linker are perfectly happy.
https://bugzilla.gnome.org/show_bug.cgi?id=669730
An editable ClutterText will reset the selection and cursor whenever the
contents are changed — even if those contents are the same. As this may
confuse the user, we should check if we're setting the exact same string,
and bail out if necessary.
The reverse of position_to_coords().
While providing documentation on how to implement it using the
PangoLayout API, I realized that the verbosity of it all, plus the usage
of the Pango API, was not worth it, and decided to expose the method we
are using internally.
GValueArray is on its way to deprecation in GLib; as far as the
ListModel class is concerned, a plain C array of GValue is a perfectly
suitable replacement for the GValueArray usage. It actually is an
improvement, given that it's going to take less memory.
ClutterActor stopped requiring to override the map and unmap virtual
functions some time ago.
Now that ClutterActor implements the Container interface, overriding map
and unmap to control the MAPPED state of the children is pretty much
going to be a source of bugs and misunderstandings.
Plus, the ordering of the unmap, destroy, dispose, and finalize calls
should be documented properly.
The documentation should clarify all that.
When calling clutter_actor_destroy(), ClutterActor calls
update_map_state() on itself to unset the REALIZED and MAPPED states,
prior to running the dispose() implementation.
The default dispose() will call remove_child() (either directly or
through the Container implementation), which will check for the MAPPED
state and then run update_map_state() again. We use the previously set
MAPPED state to decide whether or not the parent should queue for a
relayout/redraw when removing a visible child.
If the MAPPED flag was cleared prior to remove_child(), though, it'll
always be unset by the time we get to remove_child(), and this will
cause missing redraws/relayouts; we were ignoring this prior to the
post-First Apocalypse changes because we were doing:
if (was_mapped)
clutter_actor_queue_relayout (parent);
clutter_actor_queue_redraw (parent);
which is obviously wrong. Once I removed that glaring brain damage from
the remove_child() implementation, bugs started appearing — bugs that
were probably the reason why we introduced that brain damage in the
first place, instead of checking the source of those bugs.
The obvious fix is to avoid clearing up the actor's state on destroy()
until we remove the actor from its parent. This also reduces the amount
of work we do, and the code paths that can potentially go wrong.
Since the code dealing with ClutterShader is pretty self-contained, now,
we can safely move it outside of the main ClutterActor source file and
into its own. This will allow us to just drop a bunch of files when
branching for 2.0.
The YUV support depends on the driver and, not only do few drivers
support YUV natively, the supported colorspaces are pretty much
useless.
The proper way to do YUV to RGB colorspace conversion on the GPU is to
use a fragment shader; for that, ClutterTexture and Cogl provide enough
API to achieve a good result - see the Clutter-GStreamer implementation,
for instance.
ClutterFixedLayout is the default layout manager for ClutterActor.
Existing subclasses of ClutterActor will get a fixed layout manager
regardless of whether they are going to use it, but since it sets the
CLUTTER_ACTOR_NO_LAYOUT flag, it will introduce regressions on actors
that perform their own layout management.
The CLUTTER_ACTOR_NO_LAYOUT flag was a bit of a mistake in the first
place, as it was introduced as a last minute workaround in the 1.0
process to deal with broken stuff in Moblin. It's going to be a target
for deprecation towards a removal when we start the 2.0 process.
Iterating over children and ancestors of an actor is a relatively common
operation. Currently, you only have one option: start a for() loop, get
the first child of the actor, and advance to the next sibling for the
list of children; or start a for() loop and advance to the parent of the
actor.
These operations can be easily done through the ClutterActor API, but
they all require going through the public API, and performing multiple
type checks on the arguments.
Along with the DOM API, it would be nice to have an ancillary, utility
API that uses an iterator structure to hold the state, and can be
advanced in a loop.
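The intended shape of the API, roughly, with 'container' as a
placeholder actor:

ClutterActorIter iter;
ClutterActor *child;

clutter_actor_iter_init (&iter, container);
while (clutter_actor_iter_next (&iter, &child))
  {
    /* no per-iteration type checks, no list copies */
  }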
https://bugzilla.gnome.org/show_bug.cgi?id=668669
During the gutting of ClutterBox, the destroy and dispose implementations
were removed. The former, especially, destroyed all children - which
usually meant that the redraw queues for the children were cleared as
well. The removal introduced crashes when a Box was destroyed while its
children were still queueing redraws.
Symbolic names are better than magic numbers, even if they are
well-established and won't likely change.
This maps to a commit in GTK+ that introduced the same names; it
was decided to go for PRIMARY, MIDDLE, and SECONDARY because of
the confusion that may arise when the button order gets flipped
in left-handed configurations - the "left" button (i.e. 1) becomes
the right-most button, and the "right" button (i.e. 3) becomes
the left-most button.
https://bugzilla.gnome.org/show_bug.cgi?id=668692
Also update the code that sets the size of the stage so it uses the size
of the output. In future versions of the Wayland protocol we'll get a configure
message advising of us of the size we can be to achieve fullscreen.
* stage-state:
docs: Update ClutterStageState flags
wayland: Use the Stage state tracking
gdk: Use the Stage state tracking
win32: Use the Stage state tracking
x11: Use the Stage state tracking
osx: Use the Stage state tracking
stage: Add state tracking
State changes on the Stage are currently deferred to the windowing
system backends, but the code is generally the same, and it should
be abstracted neatly inside the Stage class itself.
There's also the extra caveat for backends that state changes on a
Stage must also emit a ClutterEvent of type CLUTTER_STAGE_STATE, a
requirement that needlessly complicates the backend code.
GLib has gained support for compiling ancillary data files into the same
binary blob as a library or as an executable.
We should add this feature to ClutterScript, so that it's possible to
bundle UI definitions with an application.
The only actor that results in a mix of the old Container API and the
new Actor API is ClutterStage. By inheritance, a Stage is a Group, but
we don't want it to behave like a Group - as it already overrides most
of the Actor API, and the reason why it was made as a Group in the
first place was convenience for adding/removing children.
Given that touching Group to make it aware of the new Actor API has
rapidly devolved into a struggle between a Demiurge that tries to
avoid breakage and a Chaos that finds new and interesting ways to
break ClutterGroup, let's declare API bankruptcy here and now.
ClutterStage should override ClutterContainer methods, and use the
layout management of ClutterFixedLayout as the proper class that it
was meant to be ages ago. Let ClutterGroup rot in pieces.
Now that we reinstated Group to its "former glory", we need to ensure
that applications using the deprecated containers with the new DOM API
in ClutterActor can actually work - or, at least, not break horribly.
This actually means making sure that ClutterStage and ClutterGroup can
cope with the DOM, while retaining their old implementations, as well as
their bizarre idiosyncrasies and their utter, utter brokenness.
Making Group just a proxy to Actor broke some behaviour that application
and toolkit code was relying on. Let's keep Group around to fight
another day.
This commit fixes gnome-shell as far as I can test it.
A Group is a just a ClutterActor with the layout-manager property set at
instance initialization time. It doesn't need anything else from
ClutterActor's vtable, except the slightly custom show_all/hide_all
implementation, and a simplified get_paint_volume.
Instead of chaining up, given that we want to bypass chaining up and
just set the allocation. This also allows us to bail out of the
overridden allocate vfunc check, given that we want the default Actor
behaviour to apply - including eventual layout manager delegates.
The usual way to implement a container actor is to override the
allocate() virtual function, chain up, and then allocate the actor's
children.
Clutter now has the ability to delegate layout management to
ClutterLayoutManager directly; in the allocation, this is done by
checking whether the actor has children, and then call
clutter_layout_manager_allocate() from within the default implementation
of the ClutterActor::allocate() vfunc. The same vfunc that everyone has
been chaining up to.
Whoopsie.
Well, we can check if there's a layout manager, and if it's NULL, we
bail out. Except that there's a default layout manager, and it's the
fixed layout manager, so that classes like Group and Stage work by
default.
Double whoopsie.
The fix for this scenario is a bit nasty; we have to check if the actor
class has overridden the allocate() vfunc or not, before actually
looking at the layout manager. This means that classes that override the
allocate() vfunc are expected to do everything that ClutterActor's
default implementation does - which I think is a fair requirement to
have.
For newly written code, though, it would probably be best if we just
provided a function that does the right thing by default, and that
you're supposed to be calling from within the allocate() vfunc
implementation, if you ever chose to override it. This new function,
clutter_actor_set_allocation(), should come with a warning the size of
Texas, to avoid people thinking it's a way to override the whole "call
allocate() on each child" mechanism. Plus, it should check if we're
inside an allocation sequence, and bail out if not.