The paint opacity for a top level is always overridden to be the full
value, since it's a composited value and we want to paint our scene.
When clearing the stage framebuffer, though, we want to use the actual
opacity, if ClutterStage:use-alpha is set.
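A minimal sketch of the resulting clear logic, assuming we derive the
clear alpha from the stage's own opacity only when :use-alpha is set:

    #include <clutter/clutter.h>

    /* illustrative helper: pick the alpha used when clearing the stage
     * framebuffer; the paint opacity is forced to 255 for top levels,
     * so we only honour the actor opacity when :use-alpha is set */
    static float
    get_stage_clear_alpha (ClutterStage *stage)
    {
      if (clutter_stage_get_use_alpha (stage))
        return clutter_actor_get_opacity (CLUTTER_ACTOR (stage)) / 255.0f;

      return 1.0f;
    }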
These are just terrible API that we've been stringing along since the
0.x days. We cannot truly deprecate them, because they are, in some
cases, the only actual API to perform some operation, and porting all
the code in the world is not going to make us any friends.
For the time being, we use a deprecation notice in the documentation.
The macros are useless for language bindings, and terribly unsafe when
used from C as well. There's always the option of using the GObject
properties they refer to, but it's more efficient to just have a simple
getter function.
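For example (assuming a getter such as clutter_actor_is_mapped() is the
intended replacement for the corresponding flag-checking macro):

    #include <clutter/clutter.h>

    static void
    dump_actor_state (ClutterActor *actor)
    {
      /* the CLUTTER_ACTOR_IS_MAPPED() macro is C-only and unsafe;
       * a plain getter is bindable and cheaper than reading the
       * GObject property */
      if (clutter_actor_is_mapped (actor))
        g_print ("actor is mapped\n");
    }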
Nobody has been compiling Clutter with profiling enabled in a long time.
UProf itself hasn't been updated in 5 years, and it still depends on
deprecated components like dbus-glib, with no port to GDBus in sight.
The profiling code was moderately useful in the past, but these days
it's probably better to profile Cogl than Clutter itself; timing
information can be extracted from the timestamp on each diagnostic message
that is now available by default in the CLUTTER_NOTE macro, and we can
add ad hoc counters where needed.
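For instance, inside Clutter's own code (CLUTTER_NOTE is the internal
macro from clutter-debug.h), an ad hoc counter plus a timestamped debug
note looks roughly like this sketch:

    /* built inside clutter/, where clutter-debug.h is available */
    static void
    note_repaint (void)
    {
      static unsigned int repaint_counter = 0;

      /* each note is timestamped by default, which gives us coarse
       * timing information without UProf */
      CLUTTER_NOTE (PAINT, "stage repaint %u", ++repaint_counter);
    }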
Toolkits may need to paint actors internally outside the normal tree
(for example to create a shadow shape), in which case they need to
control the opacity directly.
https://bugzilla.gnome.org/show_bug.cgi?id=677412
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>
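A sketch of the out-of-tree paint a toolkit might do, assuming an
override entry point along the lines of
clutter_actor_set_opacity_override() (the name is an assumption, not
dictated by this commit):

    #include <clutter/clutter.h>

    static void
    paint_shadow_shape (ClutterActor *source)
    {
      /* force the source fully opaque while painting it out of tree,
       * then drop the override again (-1 unsets it) */
      clutter_actor_set_opacity_override (source, 255);
      clutter_actor_paint (source);
      clutter_actor_set_opacity_override (source, -1);
    }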
The easing state is part of the AnimationInfo structure, which is stored
inside the GObject's datalist. Each instance frees the data stored there
during finalization, so there is no point in restoring an easing
state (which may or may not be the last one) just to have everything
cleared out once we chain up to GObject's own finalize() implementation.
And move the only private ClutterConstraint method to it.
This commit also sneaks in a change that helps when debugging the
update_allocation() method: a check for whether the allocation was
actually changed.
We want to recompute the content box when changing the content instance,
in case the preferred size is different and the content gravity uses the
preferred size; changing to a content with a different preferred size
while keeping the same gravity should also trigger an implicit
transition.
https://bugzilla.gnome.org/show_bug.cgi?id=711182
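A sketch of the situation this covers, using a gravity that honours the
content's preferred size:

    #include <clutter/clutter.h>

    static void
    swap_content (ClutterActor *actor, ClutterContent *new_content)
    {
      /* CENTER positions the content using its preferred size, so a new
       * content with a different preferred size must recompute the
       * content box, and that change should be implicitly animated */
      clutter_actor_set_content_gravity (actor, CLUTTER_CONTENT_GRAVITY_CENTER);
      clutter_actor_set_content (actor, new_content);
    }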
Some actors want to have a preferred size driven by their content, not
by their children or by their fixed size.
In order to achieve that, we can extend the ClutterRequestMode
enumeration so that clutter_actor_get_preferred_size() defers to the
ClutterContent's own preferred size.
https://bugzilla.gnome.org/show_bug.cgi?id=676326
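A sketch of the intended usage, assuming the new enumeration member is
called CLUTTER_REQUEST_CONTENT_SIZE (the name is an assumption here):

    #include <clutter/clutter.h>

    static void
    size_actor_to_content (ClutterActor *actor, ClutterContent *content)
    {
      clutter_actor_set_content (actor, content);

      /* defer the preferred size to the content rather than to the
       * children or to a fixed size */
      clutter_actor_set_request_mode (actor, CLUTTER_REQUEST_CONTENT_SIZE);
    }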
For a variety of complicated reasons, ClutterText currently sets fields
on the PangoContext when creating a layout. This causes ClutterText to
behave somewhat erratically in certain cases, since the PangoContext is
currently shared between all actors.
GTK+ creates a PangoContext for every single GtkWidget, so it seems like
we should do the same here.
Move the private code that was previously in clutter-main.c into
clutter-actor.c and clean it up a bit. This gives every actor its own
PangoContext that it can mutilate to its heart's content.
https://bugzilla.gnome.org/show_bug.cgi?id=739050
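A sketch of what this buys us, assuming clutter_actor_get_pango_context()
now hands out a context owned by the actor itself:

    #include <clutter/clutter.h>
    #include <pango/pango.h>

    static PangoLayout *
    create_layout (ClutterActor *actor, const char *text)
    {
      /* the context is per-actor, so tweaking it (font options,
       * resolution, base direction) no longer leaks into other actors */
      PangoContext *context = clutter_actor_get_pango_context (actor);
      PangoLayout *layout = pango_layout_new (context);

      pango_layout_set_text (layout, text, -1);

      return layout;
    }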
Just like unmapped children.
Apparently, layers above Clutter allow mapped children without an
allocation, instead of unmapping them. This means we need to ignore
them when computing the paint volume.
Patch originally by: Adel Gadllah <adel.gadllah@gmail.com>
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>
https://bugzilla.gnome.org/show_bug.cgi?id=736682
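A sketch of the resulting union loop in a get_paint_volume()
implementation (the helper shape is illustrative):

    #include <clutter/clutter.h>

    static gboolean
    union_children_volume (ClutterActor *self, ClutterPaintVolume *volume)
    {
      ClutterActor *child;

      for (child = clutter_actor_get_first_child (self);
           child != NULL;
           child = clutter_actor_get_next_sibling (child))
        {
          const ClutterPaintVolume *child_volume;

          /* skip unmapped children, and mapped children that were never
           * allocated: their volume would be meaningless */
          if (!CLUTTER_ACTOR_IS_MAPPED (child) ||
              !clutter_actor_has_allocation (child))
            continue;

          child_volume = clutter_actor_get_paint_volume (child);
          if (child_volume == NULL)
            return FALSE;

          clutter_paint_volume_union (volume, child_volume);
        }

      return TRUE;
    }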
We need to release the temporary reference we acquired in order for the
signal emission to work.
https://bugzilla.gnome.org/show_bug.cgi?id=734761
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>
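The pattern in question, for reference (a plain GObject idiom rather
than anything Clutter-specific):

    #include <glib-object.h>

    static void
    emit_with_temporary_ref (GObject *instance, const char *signal_name)
    {
      /* keep the instance alive across the emission, then release the
       * temporary reference so it is not leaked */
      g_object_ref (instance);
      g_signal_emit_by_name (instance, signal_name);
      g_object_unref (instance);
    }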
Cogl 1.18 deprecated the global clipping API in favour of the
per-framebuffer one, but since we're using the 2.0 API internally we
don't have access to the deprecated symbols any more.
This is pretty much a mechanical port for all the places where we're
still using the old 1.x API.
Instead of asking every internal user to get the stage and get the
active framebuffer from it, we can wrap it up ourselves, and do some
sanity checks as well.
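The mechanical shape of the port, going from the deprecated global
calls to the per-framebuffer ones (fb being whatever the new wrapper
hands back for the stage's active framebuffer):

    #include <cogl/cogl.h>

    static void
    draw_clipped (CoglFramebuffer *fb, float x1, float y1, float x2, float y2)
    {
      /* 1.x: cogl_clip_push_rectangle (x1, y1, x2, y2); */
      cogl_framebuffer_push_rectangle_clip (fb, x1, y1, x2, y2);

      /* ... paint ... */

      /* 1.x: cogl_clip_pop (); */
      cogl_framebuffer_pop_clip (fb);
    }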
When support for implicit animation of actor position was added,
the optimization for not queueing when allocating an actor back
to the same location was lost. This optimization is important
since when we are hierarchically allocating down from the top of
the stage we constantly reallocate the actors at the top of the
hierarchy back to the same place.
https://bugzilla.gnome.org/show_bug.cgi?id=719368
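A sketch of the restored fast path (the check itself; where it gets
wired into clutter_actor_allocate() is left out here):

    #include <clutter/clutter.h>

    static gboolean
    allocation_unchanged (ClutterActor *actor, const ClutterActorBox *new_box)
    {
      ClutterActorBox old_box;

      clutter_actor_get_allocation_box (actor, &old_box);

      /* same box as last time: no need to queue redraws or relayouts
       * for the whole subtree */
      return clutter_actor_box_equal (&old_box, new_box);
    }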
When the source actor potentially changes size, that shouldn't
necessarily result in the target actor being redrawn - it should
be like when a child of a container is reallocated due to changes
in its siblings or parent - it should redraw only to the extent
that it is moved and resized. Privately export an internal function
from clutter-actor.c to allow getting this right.
https://bugzilla.gnome.org/show_bug.cgi?id=719367
The implicitly created transitions are removed when complete by the
implicit transition machinery. The remove-on-complete hint is for
user-provided transitions.
https://bugzilla.gnome.org/show_bug.cgi?id=705739
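For reference, the hint is meant for the explicit case, e.g.:

    #include <clutter/clutter.h>

    static void
    add_fade_out (ClutterActor *actor)
    {
      ClutterTransition *transition = clutter_property_transition_new ("opacity");

      clutter_timeline_set_duration (CLUTTER_TIMELINE (transition), 250);
      clutter_transition_set_to (transition, G_TYPE_UINT, 0);

      /* user-provided transitions are the ones that need the hint;
       * implicit transitions are already removed when they complete */
      clutter_transition_set_remove_on_complete (transition, TRUE);

      clutter_actor_add_transition (actor, "fade-out", transition);
    }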
There is no reasonable use case for having the functions, the virtual
functions, and the signals for realization and unrealization; the
concept belongs to an older era, when we thought it would have been
possible to migrate actors across different GL contexts, or in case a GL
context would not have been available until the main loop started
spinning. That is most definitely not possible today, and too much code
would utterly break if we ever supported that.
While we still don't want to perform implicit transitions on unmapped
actors, we can relax the requirement of having been painted once; the
was_painted flag was introduced to avoid performing implicit transitions
on the :allocation property, but for that we can use the
needs_allocation flag instead, as needs_allocation will be set to FALSE
when we have been painted as well.
Thus, we retain our original goal of not having actors "flying" into
position on their first allocation, without the side effect of
preventing animations when emitting the ::show signal.
When we changed the MetaGroup to handle internal effects, we updated
has_effects(), but forgot to fix the equivalent has_constraints() and
has_actions() methods.
Now, if we clear the constraints or the actions on an actor and we
call has_constraints() or has_actions(), we get a false positive.
The "should this implicit transition be skipped" check should live into
its own function, where we can actually explain what it does and which
conditions should be respected.
Instead of just blindly skipping actors that are unmapped, or haven't
been painted yet, we should add a couple of escape hatches.
First of all, we don't want :allocation to be implicitly animated until
we have been painted (thus allocated) once; this avoids actors "flying
in" into their allocation.
We also want to allow implicit transitions on the opacity even if we
haven't been painted yet; the internal optimization we employ in
clutter_actor_paint() to skip painting fully transparent actors is
exactly that: an internal optimization. Calling code should not be aware
of it, and it should not influence code outside of ClutterActor
itself.
The rest of the conditions are the same: if the easing state's duration
is zero, or if the actor is both unmapped and not in a cloned branch of
the scene graph, then implicit transitions are pointless, as they won't
be painted.
https://bugzilla.gnome.org/show_bug.cgi?id=698766
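A sketch of the extracted predicate; the flags mirror the ones the
message above refers to, and the exact names and signature are
illustrative:

    #include <clutter/clutter.h>

    static gboolean
    should_skip_implicit_transition (ClutterActor *actor,
                                     const char   *property_name,
                                     guint         easing_duration,
                                     gboolean      was_painted,
                                     gboolean      in_cloned_branch)
    {
      /* a zero duration means there is nothing to animate */
      if (easing_duration == 0)
        return TRUE;

      /* opacity may animate even before the first paint: skipping fully
       * transparent actors in clutter_actor_paint() is purely internal */
      if (g_strcmp0 (property_name, "opacity") == 0)
        return FALSE;

      /* don't animate :allocation before the first paint, so actors
       * don't "fly in" into their initial position */
      if (g_strcmp0 (property_name, "allocation") == 0 && !was_painted)
        return TRUE;

      /* unmapped and not in a cloned branch: the transition would never
       * be painted anyway */
      if (!CLUTTER_ACTOR_IS_MAPPED (actor) && !in_cloned_branch)
        return TRUE;

      return FALSE;
    }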
If an actor has not been painted yet, or it's not going to be painted,
we can ignore transitions queued on it.
By ignoring transitions on actors that have not been painted yet, we can
avoid doing work during the set up phase of the scene graph, as well as
avoiding actors "flying in" from nowhere.
Obviously, we have to take into account potential clones, so we need to
check that the actor is not part of a cloned branch of the scene graph,
as well as checking if the actor has mapped clones.
If an actor is unmapped then it won't be painted, so we can safely
short-circuit out of _clutter_actor_queue_redraw_full() if the mapped
flag is not set.
We need, on the other hand, to make an exception for Clones, otherwise
they won't receive notification that the source actor has changed
and they won't be painted.
This allows us to ignore redraws queued on children of invisible
parents, and avoid traversing the scene graph.
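A sketch of the early return (has_mapped_clones stands in for whatever
private check ClutterActor ends up using):

    #include <clutter/clutter.h>

    static gboolean
    redraw_can_be_skipped (ClutterActor *actor, gboolean has_mapped_clones)
    {
      /* unmapped actors are not painted, so queueing a redraw on them is
       * pointless, unless a Clone still needs to know about the change */
      return !CLUTTER_ACTOR_IS_MAPPED (actor) && !has_mapped_clones;
    }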
Instead of using signal notifications, we should be able to keep track
of the clones of an actor from within ClutterActor itself, using private
API. There's no point in pretending that people can actually create a
Clone class out of tree, given the number of invariants we have to punch
through in order to implement a proper replicator node of the scene
graph, so we can skip the signal emissions and just do the right
thing at the right time.
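An illustrative sketch of the bookkeeping (the real code presumably
keeps the set in the actor's private struct; object data is used here
only to keep the example self-contained):

    #include <clutter/clutter.h>

    static void
    attach_clone (ClutterActor *source, ClutterActor *clone)
    {
      GHashTable *clones = g_object_get_data (G_OBJECT (source), "clones");

      if (clones == NULL)
        {
          clones = g_hash_table_new (NULL, NULL);
          g_object_set_data_full (G_OBJECT (source), "clones", clones,
                                  (GDestroyNotify) g_hash_table_unref);
        }

      /* the source can now walk this set directly when it needs to
       * propagate redraws, instead of emitting signals for the Clone */
      g_hash_table_add (clones, clone);
    }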
More comments are warranted: these functions are pretty much full of
potential side effects, and I'd really like to avoid keeping everything
in my head forever.
Along with the comments and the type casting reduction, I sneaked in a
one line change that is clearly correct after reading the flow of the
whole thing: we queue only a relayout after three potential redraws have
been queued. If we manage to miss a redraw and yet still get a relayout
then it means that most of our assumptions are fundamentally wrong, and
that we ought to dump this whole business of computer programming, and
just go back to being a hunter-gatherer species.