When using a ClutterOffscreenEffect, the size of the offscreen buffer
allocated to perform the effect is currently computed using the paint
volume of the actor it's attached to; if the paint volume cannot be
computed, the effect falls back to using the stage's size.
If you scale an actor enough that its paint volume becomes much bigger
than the stage, you can end up running out of memory (which leads to
your application crashing).
https://bugzilla.gnome.org/show_bug.cgi?id=699675
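For illustration only, a minimal program along these lines can trigger
the problem on affected versions; the blur effect simply stands in for
any ClutterOffscreenEffect subclass, and the sizes and scale factor are
arbitrary:

  #include <clutter/clutter.h>

  int
  main (int argc, char *argv[])
  {
    if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
      return 1;

    ClutterActor *stage = clutter_stage_new ();
    clutter_actor_set_size (stage, 640, 480);

    /* a modest actor with an offscreen-based effect attached... */
    ClutterActor *actor = clutter_actor_new ();
    clutter_actor_set_size (actor, 256, 256);
    clutter_actor_set_background_color (actor, CLUTTER_COLOR_Red);
    clutter_actor_add_effect (actor, clutter_blur_effect_new ());

    /* ...scaled far beyond the stage: the paint volume, and with it
     * the offscreen buffer, grows with the scale factor instead of
     * being clamped to the stage size */
    clutter_actor_set_scale (actor, 100.0, 100.0);

    clutter_actor_add_child (stage, actor);
    clutter_actor_show (stage);

    clutter_main ();

    return 0;
  }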
The "should this implicit transition be skipped" check should live into
its own function, where we can actually explain what it does and which
conditions should be respected.
Instead of just blindly skipping actors that are unmapped, or haven't
been painted yet, we should add a couple of escape hatches.
First of all, we don't want :allocation to be implicitly animated until
we have been painted (thus allocated) once; this avoids actors "flying
in" into their allocation.
We also want to allow implicit transitions on the opacity even if we
haven't been painted yet; the internal optimization in
clutter_actor_paint() that skips painting fully transparent actors is
exactly that: an internal optimization. Caller code should not be aware
of it, and it should not influence code outside of ClutterActor
itself.
The rest of the conditions are the same: if the easing state's duration
is zero, or if the actor is both unmapped and not in a cloned branch of
the scene graph, then implicit transitions are pointless, as they won't
be painted.
https://bugzilla.gnome.org/show_bug.cgi?id=698766
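A rough sketch of what the predicate could look like; the helper name
and the two boolean arguments stand in for private state that
ClutterActor tracks internally, so this is not the actual
implementation:

  #include <clutter/clutter.h>

  static gboolean
  should_skip_implicit_transition (ClutterActor *self,
                                   GParamSpec   *pspec,
                                   gboolean      was_painted,
                                   gboolean      in_cloned_branch)
  {
    /* a zero easing duration means there is nothing to animate */
    if (clutter_actor_get_easing_duration (self) == 0)
      return TRUE;

    /* an unmapped actor outside of a cloned branch of the scene graph
     * will not be painted, so any transition would be pointless */
    if (!CLUTTER_ACTOR_IS_MAPPED (self) && !in_cloned_branch)
      return TRUE;

    /* never animate :allocation before the first paint (and thus the
     * first allocation), to avoid actors "flying in" */
    if (g_strcmp0 (pspec->name, "allocation") == 0 && !was_painted)
      return TRUE;

    /* everything else, including :opacity on actors that have not been
     * painted yet, may transition: skipping fully transparent actors in
     * clutter_actor_paint() is an internal optimization and should not
     * be observable here */
    return FALSE;
  }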
This should actually ensure that the calculation of the Z translation
for the projection matrix resolves to "variable * CONSTANT". The
original factors are left in the code so it's trivial to revert to the
trigonometric operations if need be, even without reverting this commit.
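To be clear about the general idea (this is not the actual stage code,
and the 60 degree field of view is only the assumed default): instead
of calling into the trigonometric functions every time, the factor is
computed once and the per-frame cost becomes a single multiplication:

  #include <glib.h>
  #include <math.h>

  #define FOVY_DEG 60.0f

  /* original form: recompute the tangent every time */
  static float
  compute_z_translation_trig (float height)
  {
    return 0.5f * height / tanf (FOVY_DEG / 2.0f * (G_PI / 180.0f));
  }

  /* resolved form: 0.5 / tan (30 degrees), precomputed once */
  #define Z_TRANSLATION_FACTOR 0.866025404f

  static float
  compute_z_translation (float height)
  {
    return height * Z_TRANSLATION_FACTOR;
  }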
The ClutterActor::paint signal is deprecated, and even connecting to it
just to get notifications will disable clipped redraws, because doing
so counts as a violation of the paint volume.
The only actual valid use case for notifications of a successful frame
is on the ClutterStage, so we should add new (experimental) API for it,
so that users can actually subscribe to it — at least if you're writing
a compositor.
Shoving a signal in a performance critical path is not an option, and
I'm not sure I want to commit to an API like this yet. I reserve the
right to revisit this decision in the future.
https://bugzilla.gnome.org/show_bug.cgi?id=698783
Currently, clutter_canvas_set_size() causes invalidation of the canvas
contents only if the newly set size is different. There are cases when
we want to invalidate the content regardless of the size set, but we
cannot do that right now without possibly causing two invalidations,
for instance:
  clutter_canvas_set_size (canvas, new_width, new_height);
  clutter_content_invalidate (CLUTTER_CONTENT (canvas));
will cause two invalidations if the newly set size is different than
the existing one. One way to work around it is to check the current
size of the canvas and either call set_size() or invalidate() depending
on whether the size differs or not, respectively:
  int width, height;

  g_object_get (canvas, "width", &width, "height", &height, NULL);

  if (width != new_width || height != new_height)
    clutter_canvas_set_size (canvas, new_width, new_height);
  else
    clutter_content_invalidate (CLUTTER_CONTENT (canvas));

This, however, implies knowledge of the internals of ClutterCanvas,
and of its optimizations — and encodes a certain behaviour in third
party code, which makes changes further down the line harder.
We could remove the optimization, and just issue an invalidation
regardless of the surface size, but it's not something I'd be happy to
do. Instead, we can add a new function specifically for ClutterCanvas
that causes a forced invalidation regardless of the size change. If we
ever decide to remove the optimization further down the road, we can
simply deprecate the function, and make it an alias of invalidate()
or set_size().
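For reference, the workaround above packaged as a compilable helper
(the function name is invented for the example); the point of the new
ClutterCanvas entry point is precisely to avoid having to write this:

  #include <clutter/clutter.h>

  /* force exactly one invalidation, whether or not the size changes */
  static void
  canvas_resize_and_invalidate (ClutterCanvas *canvas,
                                int            new_width,
                                int            new_height)
  {
    int width = 0, height = 0;

    g_object_get (canvas, "width", &width, "height", &height, NULL);

    if (width != new_width || height != new_height)
      clutter_canvas_set_size (canvas, new_width, new_height);
    else
      clutter_content_invalidate (CLUTTER_CONTENT (canvas));
  }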
Since we are trying to eliminate the ClutterGeometry type, we should
replace the only entry point still using it: the ::cursor-event signal
of ClutterText.
Instead of passing the cursor geometry, we should add an accessor
function.
The combination of signal and getter for the cursor geometry means that
we can deprecate ClutterText::cursor-event, and mark it for removal in
Clutter 2.0.
https://bugzilla.gnome.org/show_bug.cgi?id=682789
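Assuming the accessor ends up being clutter_text_get_cursor_rect() and
the companion notification is the ClutterText::cursor-changed signal
(both names may differ in the final API), usage would look roughly like
this:

  #include <clutter/clutter.h>

  static void
  on_cursor_changed (ClutterText *text,
                     gpointer     user_data)
  {
    ClutterRect rect;

    clutter_text_get_cursor_rect (text, &rect);

    g_print ("cursor at %.2f,%.2f (%.2f x %.2f)\n",
             rect.origin.x, rect.origin.y,
             rect.size.width, rect.size.height);
  }

  /* g_signal_connect (text, "cursor-changed",
   *                   G_CALLBACK (on_cursor_changed), NULL); */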
On the other backends we will get some sort of expose event after
showing the stage's window, which will queue a redraw. These expose
events don't exist on Wayland so nothing will cause Clutter to queue a
redraw. Weston doesn't bother displaying anything for the stage's
surface until the first buffer is sent, which of course it will never
receive if Clutter doesn't paint anything. This patch just makes it
explicitly queue a redraw after the stage is shown so that we will
always pass at least one frame to the compositor.
The bug can be seen by running test-stage-sizing. That example doesn't
have any animations so it won't try to queue any redraws until
something interacts with it. On the other hand something like
test-actors works fine without the patch because it constantly queues
redraws anyway in order to display the animation.
https://bugzilla.gnome.org/show_bug.cgi?id=696791
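For affected versions, the same effect can be approximated from
application code by queueing the first redraw by hand after showing the
stage, which is essentially what the patch does internally:

  #include <clutter/clutter.h>

  int
  main (int argc, char *argv[])
  {
    if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
      return 1;

    ClutterActor *stage = clutter_stage_new ();
    clutter_actor_show (stage);

    /* without this, a static scene on Wayland never attaches a buffer
     * to the stage's surface, so nothing shows up */
    clutter_actor_queue_redraw (stage);

    clutter_main ();

    return 0;
  }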
If clutter is built with both X11 and Wayland support, prefer the
(more complete for now) X11 backend. This matches GTK+'s current
ordering.
This allows distributions to ship a Clutter version with both backends
built, and to use an envvar to switch to the Wayland backend when
testing applications.
In the future, applications will be able to choose which backend
they prefer, and in which order.
https://bugzilla.gnome.org/show_bug.cgi?id=695838
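Assuming the selection variable is CLUTTER_BACKEND, testing an
application against the Wayland backend would look like this (the same
thing can be done by exporting the variable in the shell before
launching the program):

  #include <clutter/clutter.h>

  int
  main (int argc, char *argv[])
  {
    /* must be set before clutter_init() is called */
    g_setenv ("CLUTTER_BACKEND", "wayland", TRUE);

    if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
      return 1;

    /* ... */

    return 0;
  }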
If an actor has not been painted yet, or it's not going to be painted,
we can ignore transitions queued on it.
By ignoring transitions on actors that have not been painted yet, we can
avoid doing work during the set up phase of the scene graph, as well as
avoiding actors "flying in" from nowhere.
Obviously, we have to take into account potential clones, so we need to
check that the actor is not part of a cloned branch of the scene graph,
as well as checking if the actor has mapped clones.
If an actor is unmapped then it won't be painted, so we can safely
short-circuit out of _clutter_actor_queue_redraw_full() if the mapped
flag is not set.
We need, on the other hand, to make an exception for Clones, otherwise
they won't receive notification that the source actor has changed,
and they won't be painted.
This allows us to ignore redraws queued on children of invisible
parents, and avoid traversing the scene graph.
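Reduced to a standalone predicate (the has_mapped_clones argument
stands in for the private clone-tracking state, so this is only a
sketch of the check):

  #include <clutter/clutter.h>

  static gboolean
  queue_redraw_is_useful (ClutterActor *actor,
                          gboolean      has_mapped_clones)
  {
    /* mapped actors will be painted, so the redraw must go through */
    if (CLUTTER_ACTOR_IS_MAPPED (actor))
      return TRUE;

    /* unmapped actors can be skipped, unless a Clone somewhere else in
     * the scene graph still paints them */
    return has_mapped_clones;
  }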
Instead of using signal notifications, we should be able to keep track
of the clones of an actor from within ClutterActor itself, using private
API. There's no point in pretending that people can actually create a
Clone class out of tree, given the number of invariants we have to punch
through in order to implement a proper replicator node of the scene
graph, so we can skip the signal emissions and just do the right
thing at the right time.
More comments are warranted: these functions are pretty much full of
potential side effects, and I'd really like to avoid keeping everything
in my head forever.
Along with the comments and the type casting reduction, I sneaked in a
one line change that is clearly correct after reading the flow of the
whole thing: we queue only a relayout after three potential redraws have
been queued. If we manage to miss a redraw and yet still get a relayout
then it means that most of our assumptions are fundamentally wrong, and
that we ought to dump this whole business of computer programming, and
just go back to being a hunter-gatherer species.
Since XIQueryVersion, the bad API that it is, chooses the first client
version that it gets, we need to ensure that we pass XIQueryVersion the
new XI2.3 version, knowing full well that Clutter won't be confused
by the new features.
https://bugzilla.gnome.org/show_bug.cgi?id=692466
The X server should fill in the minor version that it supports in the
case where it only supports the older version. We should not get a
BadRequest or fail the version check if we pass something higher.
https://bugzilla.gnome.org/show_bug.cgi?id=692466
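For reference, the call looks like this: the client announces the
highest version it can handle, and the server writes back the version
it actually supports, which is not an error if it is lower:

  #include <glib.h>
  #include <X11/Xlib.h>
  #include <X11/extensions/XInput2.h>

  static gboolean
  check_xi2 (Display *xdisplay)
  {
    int major = 2, minor = 3;

    switch (XIQueryVersion (xdisplay, &major, &minor))
      {
      case Success:
        /* major/minor now hold the version the server supports */
        return major >= 2;

      case BadRequest:
        /* the XInput 2 extension is not available at all */
        return FALSE;

      default:
        return FALSE;
      }
  }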
The text exposed by the AtkText methods should be the text
displayed to the user (i.e. what the internal method
clutter_text_get_display_text() returns), so it should use the
password character if one is set.
This is also a security concern, as the real contents of a password
entry would otherwise be exposed through the accessibility layer.
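A sketch of the idea (clutter_text_get_display_text() is private to
Clutter, so this stands in for what the accessible implementation does
internally; offset handling is simplified):

  #include <clutter/clutter.h>

  static gchar *
  get_text_for_accessibility (ClutterText *text,
                              gint         start_offset,
                              gint         end_offset)
  {
    gunichar password_char = clutter_text_get_password_char (text);

    if (password_char == 0)
      return g_utf8_substring (clutter_text_get_text (text),
                               start_offset, end_offset);

    /* every character the user typed is shown as the password
     * character, and that is also what AtkText must expose */
    GString *masked = g_string_new (NULL);
    glong i;

    for (i = start_offset; i < end_offset; i++)
      g_string_append_unichar (masked, password_char);

    return g_string_free (masked, FALSE);
  }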
The original code inside ClutterActor that dealt with Transitions
stopping was written for the ::completed signal, thus the code was
correctly handling the lifetime of the instances; when we moved to the
::stopped signal, we assumed that it worked in the same way - with fewer
conditions to be checked, obviously, but fundamentally similar to the
::completed signal. Sadly, I screwed up the signal definition, and the
signal ended up calling our handlers, but not the default one that did
the cleanup and released references on the Animatable instance.
After fixing the Timeline::stopped signal, we can go back to the
previous code.
Thanks to Craig Hughes for the help in tracking down this mess.
https://bugzilla.gnome.org/show_bug.cgi?id=695158
A copy and paste thinko: the ::stopped signal is using the
ClutterTimelineClass.completed slot instead of the .stopped one,
thus preventing sub-classes of ClutterTimeline from overriding the
signal's default closure.
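Sketch of the corrected registration, as it would appear in
clutter_timeline_class_init() (marshaller and flags are simplified
here); the important bit is the class_offset argument:

  #include <clutter/clutter.h>

  enum { STOPPED, LAST_SIGNAL };
  static guint timeline_signals[LAST_SIGNAL] = { 0, };

  static void
  register_stopped_signal (ClutterTimelineClass *klass)
  {
    timeline_signals[STOPPED] =
      g_signal_new ("stopped",
                    G_TYPE_FROM_CLASS (klass),
                    G_SIGNAL_RUN_LAST,
                    /* must point at the .stopped slot, not .completed,
                     * or subclasses cannot override the default handler */
                    G_STRUCT_OFFSET (ClutterTimelineClass, stopped),
                    NULL, NULL,
                    g_cclosure_marshal_VOID__BOOLEAN,
                    G_TYPE_NONE, 1,
                    G_TYPE_BOOLEAN);
  }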
When stopping the transition we need to release the reference we
maintain while removing the Transition from the hash table inside an
actor. If we fail to do so, the Transition is never released, which
means we leak the Animatable instance we tied to it.
https://bugzilla.gnome.org/show_bug.cgi?id=695158
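The ownership rule, roughly (this is only an illustration with invented
names, not the actual ClutterActor code): the table holds a reference,
so removing the entry has to be balanced by an unref once the tear-down
is done:

  #include <clutter/clutter.h>

  static void
  stop_and_remove_transition (GHashTable  *transitions,
                              const gchar *name)
  {
    ClutterTransition *transition = g_hash_table_lookup (transitions, name);

    if (transition == NULL)
      return;

    /* keep the instance alive while we tear it down */
    g_object_ref (transition);

    g_hash_table_remove (transitions, name);

    /* ... emit ::stopped, detach the Animatable, etc. ... */

    /* without this unref the Transition, and the Animatable tied to
     * it, are never released */
    g_object_unref (transition);
  }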
The ClutterOffscreenEffect.get_target_size() method has been deprecated,
and replaced by the get_target_rect() one. We can easily switch to the
latter, and avoid the deprecation warning.
https://bugzilla.gnome.org/show_bug.cgi?id=670004
The target size is not always enough: there are cases where the offset
used to paint the target must also be available to developers
implementing an OffscreenEffect.
The get_target_rect() method returns the rectangle used to paint the
target, with the offsets in the ClutterRect:origin and the texture size
in the ClutterRect:size fields, respectively.
The get_target_size() method should be deprecated, given that its
replacement is generally more useful.
https://bugzilla.gnome.org/show_bug.cgi?id=670004
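A sketch of how an OffscreenEffect subclass would use it from its
paint_target() implementation (the function and variable names here are
placeholders):

  #include <clutter/clutter.h>

  static void
  my_effect_use_target_rect (ClutterOffscreenEffect *effect)
  {
    ClutterRect rect;

    if (!clutter_offscreen_effect_get_target_rect (effect, &rect))
      return;

    /* rect.origin is the offset used to paint the target,
     * rect.size is the size of the texture */
    g_debug ("target at %.2f,%.2f, size %.2f x %.2f",
             rect.origin.x, rect.origin.y,
             rect.size.width, rect.size.height);
  }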