The implicitly created transitions are removed by the implicit
transition machinery when they complete; the remove-on-complete hint is for
user-provided transitions.
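For illustration, given some ClutterActor *actor, an explicitly added
transition only goes away on its own if the caller asks for it (the
transition name and values here are arbitrary):
ClutterTransition *transition =
  clutter_property_transition_new ("opacity");

clutter_timeline_set_duration (CLUTTER_TIMELINE (transition), 250);
clutter_transition_set_from (transition, G_TYPE_UINT, 255);
clutter_transition_set_to (transition, G_TYPE_UINT, 0);

/* user-provided transitions stay on the actor once they complete,
 * unless we explicitly ask for them to be removed */
clutter_transition_set_remove_on_complete (transition, TRUE);

clutter_actor_add_transition (actor, "fade-out", transition);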
https://bugzilla.gnome.org/show_bug.cgi?id=705739
There is no reasonable use case for having the functions, the virtual
functions, and the signals for realization and unrealization; the
concept belongs to an older era, when we thought it would have been
possible to migrate actors across different GL contexts, or in case a GL
context would not have been available until the main loop started
spinning. That is most definitely not possible today, and too much code
would utterly break if we ever supported that.
While we still don't want to perform implicit transitions on unmapped
actors, we can relax the requirement on having been painted once; the
was_painted flag was introduced to avoid performing implicit transitions
on the :allocation property, but for that we can use the
needs_allocation flag instead, as needs_allocation will be set to FALSE
when we have been painted as well.
Thus, we retain our original goal of not having actors "flying" into
position on their first allocation, without the side effect of
preventing animations when emitting the ::show signal.
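For instance, an implicit fade-in set up while showing an actor is no
longer skipped (a minimal usage sketch; the duration is arbitrary):
clutter_actor_set_opacity (actor, 0);
clutter_actor_show (actor);

clutter_actor_save_easing_state (actor);
clutter_actor_set_easing_duration (actor, 250);
clutter_actor_set_opacity (actor, 255);
clutter_actor_restore_easing_state (actor);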
When we changed the MetaGroup to handle internal effects, we updated
has_effects(), but forgot to fix the equivalent has_constraints() and
has_actions() methods.
Now, if we clear the constraints or the actions on an actor and then
call has_constraints() or has_actions(), we get a false positive.
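A minimal reproduction of the false positive, using the public API and
some other actor as the constraint source:
clutter_actor_add_constraint (actor,
  clutter_bind_constraint_new (source, CLUTTER_BIND_X, 0));
g_assert (clutter_actor_has_constraints (actor));

clutter_actor_clear_constraints (actor);
/* before the fix, this assertion would fail */
g_assert (!clutter_actor_has_constraints (actor));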
The "should this implicit transition be skipped" check should live into
its own function, where we can actually explain what it does and which
conditions should be respected.
Instead of just blindly skipping actors that are unmapped, or haven't
been painted yet, we should add a couple of escape hatches.
First of all, we don't want :allocation to be implicitly animated until
we have been painted (thus allocated) once; this avoids actors "flying
in" into their allocation.
We also want to allow implicit transitions on the opacity even if we
haven't been painted yet; the internal optimization we employ in
clutter_actor_paint(), which skips painting fully transparent actors, is
exactly that: an internal optimization. Caller code should not be aware
of it, and it should not influence behaviour outside of ClutterActor
itself.
The rest of the conditions are the same: if the easing state's duration
is zero, or if the actor is both unmapped and not in a cloned branch of
the scene graph, then implicit transitions are pointless, as they won't
be painted.
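Put together, the check ends up looking roughly like the sketch below;
the helper names for the private needs_allocation flag and for the
cloned-branch test are purely illustrative:
static gboolean
should_skip_implicit_transition (ClutterActor *self,
                                 const char   *property_name)
{
  /* a zero duration means "set the final value directly" */
  if (clutter_actor_get_easing_duration (self) == 0)
    return TRUE;

  /* no "flying in": skip :allocation until the actor has been
   * allocated (and thus painted) at least once; every other
   * property, opacity included, is free to animate */
  if (g_strcmp0 (property_name, "allocation") == 0 &&
      actor_needs_allocation (self)) /* stands for the private flag */
    return TRUE;

  /* unmapped and not in a cloned branch: nobody will paint it */
  if (!CLUTTER_ACTOR_IS_MAPPED (self) &&
      !actor_in_cloned_branch (self)) /* hypothetical helper */
    return TRUE;

  return FALSE;
}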
https://bugzilla.gnome.org/show_bug.cgi?id=698766
If an actor has not been painted yet, or it's not going to be painted,
we can ignore transitions queued on it.
By ignoring transitions on actors that have not been painted yet, we can
avoid doing work during the set-up phase of the scene graph, as well as
avoid actors "flying in" from nowhere.
Obviously, we have to take into account potential clones, so we need to
check that the actor is not part of a cloned branch of the scene graph,
as well as checking if the actor has mapped clones.
If an actor is unmapped then it won't be painted, so we can safely
short-circuit out of _clutter_actor_queue_redraw_full() if the mapped
flag is not set.
We need, on the other hand, to make an exception for Clones, otherwise
they won't receive notification that the source actor has changed
and they won't be painted.
This allows us to ignore redraws queued on children of invisible
parents, and avoid traversing the scene graph.
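The early return boils down to something like this (a sketch; the
cloned-branch check stands in for whatever private book-keeping
ClutterActor uses):
/* inside _clutter_actor_queue_redraw_full(): an unmapped actor is
 * never painted, unless a ClutterClone somewhere is painting it */
if (!CLUTTER_ACTOR_IS_MAPPED (self) &&
    !actor_in_cloned_branch (self)) /* hypothetical helper */
  return;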
Instead of using signal notifications, we should be able to keep track
of the clones of an actor from within ClutterActor itself, using private
API. There's no point in pretending that people can actually create a
Clone class out of tree, given the amount of invariants we have to punch
through in order to implement a proper replicator node of the scene
graph, so we can skip the signal emissions and just do the right
thing at the right time.
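The book-keeping itself can be as small as a private set of clones on
the source actor, updated by ClutterClone whenever its source changes;
a sketch with hypothetical names for the private field:
/* hypothetical private API, called by ClutterClone when the source
 * actor is set or unset */
void
_clutter_actor_attach_clone (ClutterActor *actor,
                             ClutterActor *clone)
{
  ClutterActorPrivate *priv = actor->priv;

  if (priv->clones == NULL)
    priv->clones = g_hash_table_new (NULL, NULL);

  g_hash_table_add (priv->clones, clone);
}

void
_clutter_actor_detach_clone (ClutterActor *actor,
                             ClutterActor *clone)
{
  ClutterActorPrivate *priv = actor->priv;

  if (priv->clones != NULL)
    g_hash_table_remove (priv->clones, clone);
}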
More comments are warranted: these functions are pretty much full of
potential side effects, and I'd really like to avoid keeping everything
in my head forever.
Along with the comments and the type casting reduction, I sneaked in a
one line change that is clearly correct after reading the flow of the
whole thing: we queue only a relayout after three potential redraws have
been queued. If we manage to miss a redraw and yet still get a relayout
then it means that most of our assumptions are fundamentally wrong, and
that we ought to dump this whole business of computer programming, and
just go back to being a hunter-gatherer species.
The original code inside ClutterActor that dealt with Transitions
stopping was written for the ::completed signal, thus the code was
correctly handling the lifetime of the instances; when we moved to the
::stopped signal, we assumed that it worked in the same way - with fewer
conditions to check, obviously, but fundamentally similar to the
::completed signal. Sadly, I screwed up the signal definition, and the
signal ended up calling our handlers, but not the default one that did
the cleanup and released references on the Animatable instance.
After fixing the Timeline::stopped signal, we can go back to the
previous code.
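The fix itself lives in ClutterTimeline, but the gist is plain GObject:
a signal registered with a class offset of 0 never invokes the default
class handler, so the definition has to point at the vfunc.
Illustratively (hypothetical signal-array name, not the exact Clutter
code):
timeline_signals[STOPPED] =
  g_signal_new ("stopped",
                G_TYPE_FROM_CLASS (klass),
                G_SIGNAL_RUN_LAST,
                G_STRUCT_OFFSET (ClutterTimelineClass, stopped),
                NULL, NULL,
                g_cclosure_marshal_VOID__BOOLEAN,
                G_TYPE_NONE, 1,
                G_TYPE_BOOLEAN);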
Thanks to Craig Hughes for the help in tracking down this mess.
https://bugzilla.gnome.org/show_bug.cgi?id=695158
When stopping the transition we need to release the reference we
maintain while removing the Transition from the hash table inside an
actor. If we fail to do so, the Transition is never released, which
means we leak the Animatable instance we tied to it.
https://bugzilla.gnome.org/show_bug.cgi?id=695158
If we pass TRUE for x_align and FALSE for y_align, the full available
width should be passed to clutter_actor_get_preferred_height(), and the same
should be true in the other dimension.
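In other words, the negotiation should look roughly like this (a sketch
of the intent with hypothetical child/available_* variables, not the
actual code):
gfloat child_width, child_height;

if (x_fill)
  child_width = available_width;
else
  clutter_actor_get_preferred_width (child, -1, NULL, &child_width);

/* the height request is made against the width the child will
 * actually get: the full available width when filling horizontally */
if (y_fill)
  child_height = available_height;
else
  clutter_actor_get_preferred_height (child, child_width,
                                      NULL, &child_height);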
https://bugzilla.gnome.org/show_bug.cgi?id=694237
Instead of placing the whole body of the function inside an if block,
let's make it clear what each part of the function does. Also, add more
comments.
When setting an explicit transform with clutter_actor_set_transform()
and a non (0,0) pivot-point, clutter_actor_apply_transform() will fail
to roll back the pivot-point translation done before multiplying the
transformation matrix due to the "out:" label being slightly misplaced
in clutter_actor_real_apply_transform().
This works properly:
clutter_actor_set_pivot_point (actor, 0.5, 0.5);
clutter_actor_set_rotation_angle (actor, CLUTTER_Z_AXIS, 30);
This results in the actor being moved to the pivot-point position:
clutter_actor_set_pivot_point (actor, 0.5, 0.5);
clutter_matrix_init_identity (&matrix);
cogl_matrix_rotate (&matrix, 30, 0, 0, 1.0);
clutter_actor_set_transform (actor, &matrix);
This also adds a conformance test checking that, even when using a
pivot-point, no matter how a rotation is set the resulting
transformation matrix will be the same.
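The test essentially boils down to comparing the two resulting
matrices, along these lines (a simplified sketch with two hypothetical,
identically positioned actors, not the test as committed):
ClutterMatrix by_angle, by_matrix, matrix;

clutter_actor_set_pivot_point (actor_a, 0.5, 0.5);
clutter_actor_set_rotation_angle (actor_a, CLUTTER_Z_AXIS, 30);
clutter_actor_get_transform (actor_a, &by_angle);

clutter_actor_set_pivot_point (actor_b, 0.5, 0.5);
clutter_matrix_init_identity (&matrix);
cogl_matrix_rotate (&matrix, 30, 0, 0, 1.0);
clutter_actor_set_transform (actor_b, &matrix);
clutter_actor_get_transform (actor_b, &by_matrix);

g_assert (cogl_matrix_equal (&by_angle, &by_matrix));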
https://bugzilla.gnome.org/show_bug.cgi?id=690214
clutter_actor_allocate_preferred_size is supposed to use the fixed
position of an actor. Unfortunately, recent refactorings made it so
that it accidentally used the current allocation. As the current
allocation may be adjusted by the actor, or have been previously
allocated in a strange spot, it may have unintended side effects. Use
the fixed positioning of the actor instead.
This fixes weird issues with margins colliding with
ClutterFixedLayout, causing strange offsets on relayout.
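For reference, the expected behaviour looks like this (a minimal
sketch; flags stands for the ClutterAllocationFlags received from the
parent's allocate() implementation):
clutter_actor_set_fixed_position_set (child, TRUE);
clutter_actor_set_position (child, 50, 20);

/* must end up allocated at (50, 20) with its preferred size, not at
 * whatever the previous allocation happened to be */
clutter_actor_allocate_preferred_size (child, flags);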
https://bugzilla.gnome.org/show_bug.cgi?id=689316
When trying to clamp to pixel a box that is exactly in between 2
pixels, the clutter_actor_box_clamp_to_pixel() function changes the
size of the box.
Here is an example:
ClutterActorBox box = { 10.5, 10, 20.5, 20};
g_message ("%fx%f -> %fx%f", box.x1, box.y1, box.x2, box.y2);
clutter_actor_box_clamp_to_pixel (&box);
g_message ("%fx%f -> %fx%f", box.x1, box.y1, box.x2, box.y2);
Here is what you get:
** Message: 10.500000x10.000000 -> 20.500000x20.000000
** Message: 10.000000x10.000000 -> 21.000000x20.000000
That is because of the properties of the ceilf() and floorf() functions
used to do the clamping.
For example, ceil(0.5) is 1.0, and ceil(-0.5) is 0.0.
And, floor(0.5) is 0.0, and floor(-0.5) is -1.0.
To work around that problem, this patch retains the distances between
the x coordinates and between the y coordinates, and applies those
differences before calling ceilf() on x2 and y2.
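In other words, the snapping becomes something along these lines (a
sketch of the idea, not necessarily the exact committed code):
float width = box->x2 - box->x1;
float height = box->y2 - box->y1;

box->x1 = floorf (box->x1);
box->y1 = floorf (box->y1);

/* snap the far edge from the snapped origin plus the original size,
 * so a box sitting exactly on a half pixel keeps its size */
box->x2 = ceilf (box->x1 + width);
box->y2 = ceilf (box->y1 + height);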
https://bugzilla.gnome.org/show_bug.cgi?id=689073
The code for calculating the content box when the aspect ratio is
greater than 1 was broken. The same code that did the calculation for
aspect ratio less than 1 should be used in all cases.
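For reference, a uniform aspect-fit calculation that behaves the same
on both sides of 1.0 looks roughly like this (a sketch with assumed
content_w/content_h and alloc_w/alloc_h values and a content_box
ClutterActorBox to fill, not the actual get_content_box() code):
gdouble scale = MIN (alloc_w / content_w, alloc_h / content_h);
gdouble new_w = content_w * scale;
gdouble new_h = content_h * scale;

content_box.x1 = (alloc_w - new_w) / 2.0;
content_box.y1 = (alloc_h - new_h) / 2.0;
content_box.x2 = content_box.x1 + new_w;
content_box.y2 = content_box.y1 + new_h;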
Fixes: https://bugzilla.gnome.org/682161
When changing an implicit transition mid-flight we may end up with an
easing state with a duration of zero milliseconds; this leads to the
implicit transition machinery setting the final state directly onto the
actor. If there is a running transition, though, we need to remove it
from the transitions table, otherwise it will keep running.
This regression happened when the update_transition() internal function
was merged into the create_transition() one.
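A minimal way to trigger the situation from application code (a sketch;
durations and values are arbitrary):
/* start an implicit transition on :x... */
clutter_actor_save_easing_state (actor);
clutter_actor_set_easing_duration (actor, 500);
clutter_actor_set_x (actor, 200);
clutter_actor_restore_easing_state (actor);

/* ...then, while it is still running, jump to a final value with a
 * zero duration: the running "x" transition must be removed, not
 * left running against the new value */
clutter_actor_save_easing_state (actor);
clutter_actor_set_easing_duration (actor, 0);
clutter_actor_set_x (actor, 300);
clutter_actor_restore_easing_state (actor);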
Tested-by: Lionel Landwerlin <llandwerlin@gmail.com>
Only the signal connection is deprecated: when using G_ENABLE_DIAGNOSTIC,
a warning will be emitted for every connection to the signal.
We should try to discourage people from ever using the paint signal
again, until we can safely remove it in Clutter 2.0.
We cannot fully deprecate Geometry, because ClutterActor and ClutterText
are actually using the type in signals and properties; but we can
deprecate the API that uses this type, so that 2.0 will be able to avoid
it entirely.
The :clip property still uses ClutterGeometry, which is a very bad
rectangle type. Since we cannot change the type of the property
compatibly, we should introduce a new property using ClutterRect
instead. This also matches the ClutterActor.set_clip() API, which uses a
decomposed rectangle with floating point values, like we do with
set_position() and set_size().
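Usage would then look roughly like this, assuming the new property ends
up being called :clip-rect (a sketch, not settled API):
ClutterRect clip;

clutter_rect_init (&clip, 0, 0, 100, 100);
g_object_set (actor, "clip-rect", &clip, NULL);

/* same effect as the decomposed setter we already have */
clutter_actor_set_clip (actor, 0, 0, 100, 100);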
We need to make sure that ClutterActor::transition-stopped is emitted
after the transition was removed from the actor's list of transitions.
We cannot just remove the TransitionClosure from the hash table that
holds the transitions associated with an actor, and let the
TransitionClosure free function stop the timeline, thus emitting the
::transition-stopped signal - otherwise, stopping the timeline will end
up trying to remove the transition from the hash table, and we'll get
either a warning or a segfault.
Since we know we're removing the timeline anyway, we can emit the signal
ourselves in case the timeline is playing, in both the implicit and
explicit cases.
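In sketch form, with hypothetical names standing in for the per-actor
transitions table, its closure, and the signal id array:
gboolean was_playing =
  clutter_timeline_is_playing (CLUTTER_TIMELINE (closure->transition));

/* remove the closure from the table before anything can try to stop
 * the timeline and recurse into the table */
g_hash_table_remove (priv->transitions, transition_name);

/* the timeline was still running: emit ::transition-stopped
 * ourselves, since nothing else will at this point */
if (was_playing)
  g_signal_emit (self, actor_signals[TRANSITION_STOPPED],
                 g_quark_from_string (transition_name),
                 transition_name, FALSE);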