We currently check for the IN_DESTRUCTION flag inside the
add_child_internal() function.
This check disallows calling methods that change the stacking order
during the destruction sequence: a critical warning is triggered and
the actor is left in an undefined state, which then ends up being
caught by an assertion.
The reproducible sequence is:
- actor gets destroyed;
- another actor, linked to the first, will try to change the
stacking order of the first actor;
- changing the stacking order is a composite operation, made up of
the following steps:
1. ref() the child;
2. remove_child_internal(), which removes the reference;
3. add_child_internal(), which adds a reference;
- the state of the actor is not changed between (2) and (3), as
it could be an expensive recomputation;
- if (3) bails out, then the actor is in an undefined state, but
still alive;
- the destruction sequence terminates, but the actor is unparented
while its state indicates being parented instead;
- assertion failure.
The obvious fix would be to decompose each set_child_*_sibling() method
into proper remove_child()/add_child(), with state validation; this may
cause excessive work, though, and trigger a cascade of other bugs in
code that assumes that a change in the stacking order is an atomic
operation.
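Roughly, using the public API (an illustrative fragment, not the
actual internal code; self, child, and sibling stand for the obvious
actors), each stacking change would then become a full remove/add
cycle:

  g_object_ref (child);
  clutter_actor_remove_child (self, child);
  clutter_actor_insert_child_above (self, child, sibling);
  g_object_unref (child);

which means unparenting and reparenting the child for every
reordering, with all the signal emission and invalidation that
entails.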
Another potential fix is to just remove this check here, and let code
doing stacking order changes inside the destruction sequence of an actor
continue doing the work.
The third fix is to silently bail out early from every
set_child_*_sibling() and set_child_at_index() method, and avoid doing
work.
I have a preference for the second solution, since it involves the least
amount of work, and the least amount of code duplication.
See bug: https://bugzilla.gnome.org/show_bug.cgi?id=670647
Now that ClutterActor has a default paint volume, subclasses may wish
to retrieve it without chaining up to the parent's implementation of
the get_paint_volume() function.
The get_default_paint_volume() function returns a ClutterPaintVolume pointer
to the paint volume as computed by the default implementation of the
get_paint_volume() virtual function; it can only be used immediately,
as it's not guaranteed to survive across multiple frames.
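A subclass could use it along these lines (a sketch; MyActor and the
extra drawing are hypothetical):

  static gboolean
  my_actor_get_paint_volume (ClutterActor       *actor,
                             ClutterPaintVolume *volume)
  {
    const ClutterPaintVolume *default_volume;

    /* start from the volume computed by the default implementation,
     * without chaining up */
    default_volume = clutter_actor_get_default_paint_volume (actor);
    if (default_volume == NULL)
      return FALSE;

    clutter_paint_volume_union (volume, default_volume);

    /* ...grow the volume here to cover any additional drawing... */

    return TRUE;
  }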
The experimental cogl_pipeline_new() api was recently changed so it
explicitly takes a CoglContext. This updates all calls to
cogl_pipeline_new() in clutter accordingly.
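The change at each call site looks roughly like this (assuming a
CoglContext *ctx is at hand):

  CoglPipeline *pipeline;

  /* before: implicitly used the global context */
  /* pipeline = cogl_pipeline_new (); */

  /* after: the context is passed explicitly */
  pipeline = cogl_pipeline_new (ctx);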
If we have N children and the user passes N (or a number beyond N) to
clutter_actor_insert_child_at_index(), we should respond by adding the
child at the end, not by silently doing nothing.
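For instance (hypothetical call site), with three children already
present:

  /* index 3 is just past the last child: the child gets appended,
   * exactly as if clutter_actor_add_child() had been used */
  clutter_actor_insert_child_at_index (actor, child, 3);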
This should avoid trying to fix the origin of a paint volume by setting
it from the allocation's origin, and thus breaking everything.
A PaintVolume for an actor is defined to be relative to the actor's
modelview unless specifically modified by internal functions; the origin
of an actor's allocation is, on the other hand, parent-relative.
There are times when we don't want to just remove all children and count
on the reference count dropping to 0 to ensure destruction; there are
cases, such as managed environments, where it's preferable to ensure
that the children of an actor actually get destroyed.
A bunch of private symbols have escaped into the SO; let's rectify this
situation by using the '_' private prefix, or by making them static, as
they should have been.
Some of Cogl's experimental apis have changed so that the buffer apis
now need to be passed a context argument and some drawing apis have been
replaced with cogl_framebuffer_* drawing apis that take explicit
framebuffer and pipeline arguments.
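As a rough illustration of the pattern (assuming a CoglFramebuffer *fb
and a CoglPipeline *pipeline are in scope), a call such as:

  cogl_set_source (pipeline);
  cogl_rectangle (x1, y1, x2, y2);

now becomes:

  cogl_framebuffer_draw_rectangle (fb, pipeline, x1, y1, x2, y2);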
These changes were made as part of Cogl moving towards a more stateless
api that doesn't rely on a global context.
This patch updates Clutter to work with the latest Cogl api and bumps
the required Cogl version to 1.9.5.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Similar to clutter_actor_iter_remove(), but it'll call destroy()
instead of remove_child().
We can also reimplement the ::destroy default handler using it, and make
it more compact.
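A sketch of the reimplemented default handler (simplified; the handler
name is illustrative):

  static void
  clutter_actor_real_destroy (ClutterActor *actor)
  {
    ClutterActorIter iter;

    /* destroy every child in place, using the iterator to keep
     * the iteration valid while children are removed */
    clutter_actor_iter_init (&iter, actor);
    while (clutter_actor_iter_next (&iter, NULL))
      clutter_actor_iter_destroy (&iter);
  }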
There is a typo in the check for a negative index: the index variable
should be index_, not index - unfortunately, the latter can still be
resolved to index(3), so the compiler and the linker are perfectly happy.
https://bugzilla.gnome.org/show_bug.cgi?id=669730
ClutterActor stopped requiring subclasses to override the map and unmap
virtual functions some time ago.
Now that ClutterActor implements the Container interface, overriding map
and unmap to control the MAPPED state of the children is pretty much
going to be a source of bugs and misunderstandings.
Plus, the ordering of the unmap, destroy, dispose, and finalize calls
should be documented properly.
The documentation should clarify all that.
When calling clutter_actor_destroy(), ClutterActor calls
update_map_state() on itself to unset the REALIZED and MAPPED states,
prior to running the dispose() implementation.
The default dispose() will call remove_child() (either directly or
through the Container implementation), which will check for the MAPPED
state and then run update_map_state() again. We use the previously set
MAPPED state to decide whether or not the parent should queue a
relayout/redraw when removing a visible child.
If the MAPPED flag was cleared prior to remove_child(), though, it'll
always be unset by the time we get to remove_child(), and this will
cause missing redraws/relayouts; we were ignoring this prior to the
post-First Apocalypse changes because we were doing:
  if (was_mapped)
    clutter_actor_queue_relayout (parent);

  clutter_actor_queue_redraw (parent);
which is obviously wrong. Once I removed that glaring brain damage from
the remove_child() implementation, bugs started appearing — bugs that
were probably the reason why we introduced that brain damage in the
first place, instead of checking the source of those bugs.
The obvious fix is to avoid clearing up the actor's state on destroy()
until we remove the actor from its parent. This also reduces the amount
of work we do, and the code paths that can potentially go wrong.
Since the code dealing with ClutterShader is pretty self-contained, now,
we can safely move it outside of the main ClutterActor source file and
into its own. This will allow us to just drop a bunch of files when
branching for 2.0.
Iterating over children and ancestors of an actor is a relatively common
operation. Currently, you only have one option for each: start a for()
loop, get the first child of the actor, and advance to the next sibling
to walk the list of children; or start a for() loop and advance to the
parent of the actor to walk its ancestors.
These operations can be easily done through the ClutterActor API, but
they all require going through the public API, and performing multiple
type checks on the arguments.
Along with the DOM API, it would be nice to have an ancillary, utility
API that uses an iterator structure to hold the state, and can be
advanced in a loop.
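Something along these lines (a sketch of the intended usage pattern;
container is the ClutterActor whose children we want to walk):

  ClutterActorIter iter;
  ClutterActor *child;

  clutter_actor_iter_init (&iter, container);
  while (clutter_actor_iter_next (&iter, &child))
    {
      /* operate on each child in turn */
    }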
https://bugzilla.gnome.org/show_bug.cgi?id=668669
Now that we reinstated Group to its "former glory", we need to ensure
that applications using the deprecated containers with the new DOM API
in ClutterActor can actually work - or, at least, not break horribly.
This actually means making sure that ClutterStage and ClutterGroup can
cope with the DOM, while retaining their old implementations, as well as
their bizarre idiosyncrasies and their utter, utter brokenness.
Making Group just a proxy to Actor broke some behaviour that application
and toolkit code was relying on. Let's keep Group around to fight
another day.
This commit fixes gnome-shell as far as I can test it.
The usual way to implement a container actor is to override the
allocate() virtual function, chain up, and then allocate the actor's
children.
Clutter now has the ability to delegate layout management to
ClutterLayoutManager directly; in the allocation, this is done by
checking whether the actor has children, and then calling
clutter_layout_manager_allocate() from within the default implementation
of the ClutterActor::allocate() vfunc. The same vfunc that everyone has
been chaining up to.
Whoopsie.
Well, we can check if there's a layout manager, and if it's NULL, we
bail out. Except that there's a default layout manager, and it's the
fixed layout manager, so that classes like Group and Stage work by
default.
Double whoopsie.
The fix for this scenario is a bit nasty; we have to check if the actor
class has overridden the allocate() vfunc or not, before actually
looking at the layout manager. This means that classes that override the
allocate() vfunc are expected to do everything that ClutterActor's
default implementation does - which I think is a fair requirement to
have.
For newly written code, though, it would probably be best if we just
provided a function that does the right thing by default, and that
you're supposed to be calling from within the allocate() vfunc
implementation, if you ever choose to override it. This new function,
clutter_actor_set_allocation(), should come with a warning the size of
Texas, to avoid people thinking it's a way to override the whole "call
allocate() on each child" mechanism. Plus, it should check if we're
inside an allocation sequence, and bail out if not.
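A subclass doing its own layout could then look roughly like this (a
sketch; MyActor and the child box computation are hypothetical, and
here every child simply gets the full size of the actor):

  static void
  my_actor_allocate (ClutterActor           *actor,
                     const ClutterActorBox  *box,
                     ClutterAllocationFlags  flags)
  {
    ClutterActorIter iter;
    ClutterActor *child;

    /* store the allocation without going through the default
     * layout manager machinery */
    clutter_actor_set_allocation (actor, box, flags);

    clutter_actor_iter_init (&iter, actor);
    while (clutter_actor_iter_next (&iter, &child))
      {
        ClutterActorBox child_box;

        /* child allocations are relative to the parent */
        child_box.x1 = 0.f;
        child_box.y1 = 0.f;
        child_box.x2 = box->x2 - box->x1;
        child_box.y2 = box->y2 - box->y1;

        clutter_actor_allocate (child, &child_box, flags);
      }
  }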
If we want to set a default layout manager, we need to do so inside
init(), as it's not guaranteed that people subclassing Actor and
overriding ::constructed will actually chain up as they should.
The default pick() behaviour does not take into consideration the
children of a ClutterActor because the existing container actors
usually override pick(), chain up, and then paint their children.
With ClutterActor now a concrete class, though, we need a way to pick
its children without requiring a sub-class; we could simply iterate over
the children inside the default pick() implementation, but this would
lead to double painting, which is not acceptable.
A moderately gross hack is to check if the Actor instance did override
the pick() implementation, and if it is not the case, paint the children
in pick mode.
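The check boils down to comparing the vfunc pointer with our own
default implementation, roughly like this (internal names are
illustrative):

  /* only paint the children in pick mode if pick() was not overridden */
  if (CLUTTER_ACTOR_GET_CLASS (self)->pick == clutter_actor_real_pick)
    {
      ClutterActorIter iter;
      ClutterActor *child;

      clutter_actor_iter_init (&iter, self);
      while (clutter_actor_iter_next (&iter, &child))
        clutter_actor_paint (child);
    }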
The hide_all() method is pretty much pointless, as hiding an actor will
automatically prevent its children from being painted. The show_all()
method would only be marginally useful, if actors weren't set to be
visible by default when added to another actor - which was the case when
we introduced show_all() and hide_all().
The concept of "internal child" only meant anything when we had a
separate API for containers and actors. Now that we plugged that
particular hole, we can drop all the hacks we used to have in place
to work around its design limitations.
It can be convenient to be able to set, or get, all the components of an
actor's margin at the same time; since we already have a boxed type for
storing a margin, an accessor pair based on it is not a complicated
addition to the API.
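For instance (assuming the accessors end up being
clutter_actor_set_margin() and clutter_actor_get_margin()):

  ClutterMargin margin;

  margin.top = margin.bottom = 6.f;
  margin.left = margin.right = 12.f;

  clutter_actor_set_margin (actor, &margin);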
Inside the set_child_[above|below]_sibling() and set_child_at_index() we
should be using the internal API for mutating the children list, instead
of the delegate functions. This ensures that we go through a single,
well-defined code path for all operations on the list of children of
an actor.
We have a replacement in ClutterActor, now.
The old ClutterContainer API needs to be deprecated, and the raise() and
lower() virtual functions need a default implementation, so we can check
for implementations overriding them, by using the diagnostic mode like
we do for add(), remove(), and foreach().
The sort_depth_order() virtual function now just doesn't do anything, as
should have been the case ages ago.
The Actor wrappers for the Container methods also need to be deprecated.