Both ClutterCanvas and ClutterImage should use the minification and
magnification filters set on the actor, just like they use the content
box and the paint opacity.
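For instance, the filters are set on the actor, and the content delegate
is expected to honour them when painting; a minimal sketch, assuming the
Clutter 1.10 actor API:

ClutterActor *actor = clutter_actor_new ();
ClutterContent *canvas = clutter_canvas_new ();

clutter_actor_set_content (actor, canvas);

/* the content delegate should pick these up when painting */
clutter_actor_set_content_scaling_filters (actor,
                                           CLUTTER_SCALING_FILTER_TRILINEAR,
                                           CLUTTER_SCALING_FILTER_LINEAR);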
The ::paint signal is the old way to paint an actor; the paint_node()
virtual function is the new way. It's still not possible to traverse the
whole scene graph and build a render tree of PaintNode instances, but
with this change we simultaneously cut out the ::paint signal emission
from the critical path for actors that are using the new PaintNode-based
API, and we retain backward compatibility in the interim period between
1.10 and 2.0.
Use JsonNode and JsonBuilder instead of our homegrown string building;
this at least ensures that we're generating proper data instead of
random strings. Plus, with JsonNode and JsonBuilder we can ask the
PaintNode subclasses to serialize themselves in a sensible way.
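A minimal sketch of what such a serializer could look like with
json-glib; the entry point and the member names here are illustrative,
not the actual Clutter internals:

/* hypothetical serializer for a node subclass; "node_type" and
 * "node_color" stand in for the subclass' own state */
static JsonNode *
paint_node_serialize (const char *node_type,
                      const ClutterColor *node_color)
{
  JsonBuilder *builder = json_builder_new ();
  JsonNode *root;
  gchar *color = clutter_color_to_string (node_color);

  json_builder_begin_object (builder);

  json_builder_set_member_name (builder, "type");
  json_builder_add_string_value (builder, node_type);

  json_builder_set_member_name (builder, "color");
  json_builder_add_string_value (builder, color);

  json_builder_end_object (builder);

  root = json_builder_get_root (builder);

  g_object_unref (builder);
  g_free (color);

  return root;
}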
ClutterContent is an interface for creating delegate objects that handle
what an actor is going to paint.
Since content objects are a newly added type, they only hook into the
new PaintNode-based API.
The position and size of the content are controlled in part by the
content's own preferred size, and in part by the ClutterContentGravity
enumeration.
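As a sketch, assuming an existing actor and a ClutterImage already
filled through clutter_image_set_data(), the gravity decides how the
content's preferred size is laid out inside the allocation:

ClutterContent *image = clutter_image_new ();

/* ... fill the image with clutter_image_set_data() ... */

clutter_actor_set_content (actor, image);

/* keep the image at its preferred size, centered in the actor */
clutter_actor_set_content_gravity (actor, CLUTTER_CONTENT_GRAVITY_CENTER);

/* or: scale it to fit, preserving the aspect ratio */
clutter_actor_set_content_gravity (actor, CLUTTER_CONTENT_GRAVITY_RESIZE_ASPECT);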
The ::paint-node virtual inside ClutterActor is what we want people to
use when painting their actors.
Right now it's a new code path that gets called while painting; a
paint_node() implementation should only paint the actor itself, and not
its children: they will get their own paint_node() call when needed.
Internally, ClutterActor will automatically create a dummy PaintNode and
paint the background color; control is then handed over to the class
implementation. This is required to maintain compatibility with the old
::paint signal emission.
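A minimal sketch of a subclass hooking into the new code path; FooActor
and the foo_actor_* names are hypothetical, and the implementation adds
nodes only for the actor itself:

static void
foo_actor_paint_node (ClutterActor     *actor,
                      ClutterPaintNode *root)
{
  ClutterColor color = { 255, 0, 0, 255 };
  ClutterActorBox box = { 0.f, 0.f, 0.f, 0.f };
  ClutterPaintNode *node;

  /* geometry in actor-relative coordinates */
  clutter_actor_get_size (actor, &box.x2, &box.y2);

  node = clutter_color_node_new (&color);
  clutter_paint_node_add_rectangle (node, &box);
  clutter_paint_node_add_child (root, node);
  clutter_paint_node_unref (node);
}

static void
foo_actor_class_init (FooActorClass *klass)
{
  CLUTTER_ACTOR_CLASS (klass)->paint_node = foo_actor_paint_node;
}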
Once we are able to get rid of the paint (and pick) sequences, we'll
switch to a fully retained render tree.
Now that we have a proper scene graph API, we should split out the
rendering part from the logical and event handling part.
ClutterPaintNode is a lightweight fundamental type that encodes only the
paint operations: pipeline state and geometry. At its simplest, it is a
way to structure setting up the programmable pipeline through a
CoglPipeline and submitting Cogl primitives. The important takeaway from
this API is that you are not allowed to call Cogl API like
cogl_set_source() or cogl_primitive_draw() directly.
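Instead, pipeline state and geometry are described through nodes and
handed to the render tree; a sketch, assuming a CoglTexture, a target
ClutterActorBox, and a root node coming from the caller:

ClutterColor white = { 255, 255, 255, 255 };
ClutterPaintNode *node;

node = clutter_texture_node_new (texture, &white,
                                 CLUTTER_SCALING_FILTER_LINEAR,
                                 CLUTTER_SCALING_FILTER_LINEAR);
clutter_paint_node_add_texture_rectangle (node, &box,
                                          0.0, 0.0, 1.0, 1.0);
clutter_paint_node_add_child (root, node);
clutter_paint_node_unref (node);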
The interesting part of this approach is that, in the future, we should
be able to move to a purely retained mode: we will decide which actors need
to be painted, they will update their own branch of the render graph,
and we'll take the render graph and build all the rendering commands
from that.
For the 1.x API, we will have to maintain invariants and the existing
behaviour, but as soon as we can break API, the old paint signal will
just go away, and Actors will only be allowed to manipulate the render
tree.
As it turns out, we do end up recursing inside the ::paint signal
emission - especially inside the conformance test suite.
This thoroughly sucks - and we'll only be able to fix it properly
when we bump API for 2.0.
ClutterActor should be able to hold all transitions, even the ones that
have been explicitly created.
This will allow us to add new transition types in the future, like a
keyframe-based transition or a transition group.
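As a sketch of the intended use, an explicitly created transition gets
attached to, and held by, the actor under a name ("fade-out" here is
arbitrary, and the actor is assumed to exist):

ClutterTransition *transition;

transition = clutter_property_transition_new ("opacity");
clutter_transition_set_interval (transition,
                                 clutter_interval_new (G_TYPE_UINT, 255, 0));
clutter_timeline_set_duration (CLUTTER_TIMELINE (transition), 500);

/* the actor now holds the transition alongside the implicit ones */
clutter_actor_add_transition (actor, "fade-out", transition);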
It should be possible to ask a timeline for its full duration, taking
any repeats into account, and for which repeat is currently in
progress.
These two functions allow writing animations that depend on the current
state of another timeline.
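Assuming the pair referred to here is clutter_timeline_get_duration_hint()
and clutter_timeline_get_current_repeat(), a dependent animation could be
set up along these lines, with the timeline coming from the caller:

gint64 total;
gint repeat;

clutter_timeline_set_duration (timeline, 1000);
clutter_timeline_set_repeat_count (timeline, 3);

/* full duration, taking the repeat count into account */
total = clutter_timeline_get_duration_hint (timeline);

/* which repeat is currently in progress (0 for the first run) */
repeat = clutter_timeline_get_current_repeat (timeline);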
It should be possible to set up the delay of a transition, but since
we start the Transition instance before returning control to the caller,
we cannot use clutter_actor_get_transition() to do it without something
extra-awkward, like:
transition = clutter_actor_get_transition (actor, "width");
clutter_timeline_stop (CLUTTER_TIMELINE (transition));
clutter_timeline_set_delay (CLUTTER_TIMELINE (transition), 1000);
clutter_timeline_start (CLUTTER_TIMELINE (transition));
for each property involved. It's much easier to add a delay to the
easing state of an actor.
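A sketch of that easier path, using the implicit animation API on an
existing actor:

clutter_actor_save_easing_state (actor);

/* every property animated inside this easing state picks up the delay */
clutter_actor_set_easing_delay (actor, 1000);
clutter_actor_set_easing_duration (actor, 500);

clutter_actor_set_width (actor, 200);
clutter_actor_set_height (actor, 200);

clutter_actor_restore_easing_state (actor);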