When adding a transition to a ClutterActor, the actor should hold a
reference on it, and release it only when we remove it. This makes
transitions just like other objects held by ClutterActor.
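For instance, the expected ownership flow would look roughly like this (just a sketch; assumes an existing actor and the 1.10 transition API):

    ClutterTransition *transition =
      clutter_property_transition_new ("opacity");

    clutter_timeline_set_duration (CLUTTER_TIMELINE (transition), 250);

    /* the actor takes its own reference on the transition */
    clutter_actor_add_transition (actor, "fade-out", transition);

    /* so the caller can safely drop theirs right away */
    g_object_unref (transition);

    /* the actor's reference is released here */
    clutter_actor_remove_transition (actor, "fade-out");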
The ::completed signal emission is part of the current cycle; repeating,
like the automatic reverse of the timeline's direction, happens after
the ::completed chain of handlers has been called.
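In other words, with a repeating, auto-reversing timeline the ::completed handlers still run at the end of every cycle, before the direction flips and the next cycle starts; roughly:

    static void
    on_completed (ClutterTimeline *timeline,
                  gpointer         data)
    {
      /* runs once per cycle, before the repeat/auto-reverse kicks in */
      g_print ("cycle completed\n");
    }

    static void
    start_pulse (void)
    {
      ClutterTimeline *timeline = clutter_timeline_new (1000);

      clutter_timeline_set_repeat_count (timeline, -1);
      clutter_timeline_set_auto_reverse (timeline, TRUE);
      g_signal_connect (timeline, "completed",
                        G_CALLBACK (on_completed), NULL);
      clutter_timeline_start (timeline);
    }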
We still use XKeycodeToKeysym() in a fallback path in case we're not
running on a decent enough system; XKeycodeToKeysym() is deprecated as
of version 1.12 of the X server, but since I don't want to copy a bunch
of code from GDK or, god forbid, from Xlib, for a fallback path, it's
probably more reasonable to just silence the compiler warnings - at
least until we can drop all the X compatibility crap, and just use
modern, or semi-modern, API.
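One way to silence the warning around just the fallback call is a pair of GCC/Clang diagnostic pragmas; only a sketch, and the helper name is made up:

    static KeySym
    fallback_keycode_to_keysym (Display *xdisplay,
                                KeyCode  keycode)
    {
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
      return XKeycodeToKeysym (xdisplay, keycode, 0);
    #pragma GCC diagnostic pop
    }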
Some events may contain precise scrolling information coming from
devices like trackpads and touchscreens. ClutterEvent should allow
setting and getting this information.
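A sketch of how the getter side could be used on a smooth-scroll event, assuming the clutter_event_get_scroll_delta() / clutter_event_set_scroll_delta() names:

    static gboolean
    on_scroll (ClutterActor *actor,
               ClutterEvent *event,
               gpointer      data)
    {
      if (clutter_event_get_scroll_direction (event) == CLUTTER_SCROLL_SMOOTH)
        {
          gdouble dx, dy;

          clutter_event_get_scroll_delta (event, &dx, &dy);
          g_print ("precise scroll: %.2f, %.2f\n", dx, dy);
        }

      return CLUTTER_EVENT_STOP;
    }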
While you can get a per-transition notification of completion, it can be
convenient to also have a way to notify that all the transitions
involving an actor are complete. A simple signal triggered by the
removal of the last transition fits the bill pretty neatly.
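Something like this, assuming the new signal ends up being called ::transitions-completed:

    static void
    on_transitions_completed (ClutterActor *actor,
                              gpointer      data)
    {
      /* emitted when the last transition is removed from the actor */
      g_print ("all transitions done\n");
    }

    static void
    watch_transitions (ClutterActor *actor)
    {
      g_signal_connect (actor, "transitions-completed",
                        G_CALLBACK (on_transitions_completed), NULL);
    }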
When handling Configure events from the X server we update the
internal copy of the window size. Unfortunately we may be updating the
wrong stage implementation because we use the one related to the event
translator (which is the first created stage).
This patch fixes flickering/redrawing issues with multiple stages by
looking up the stage implementation associated with an XEvent.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
If restore_easing_state() is called on the last easing state on the
stack, clean up the stack, so that we don't leave stale pointers
around to later segfault on.
Setting the easing mode, duration, or delay without having ever
called clutter_actor_save_easing_state() is confusing, and not
really nice.
In the future, we'll have a default easing state implicitly created by
the actor itself, but for the time being explicitly opting in is
preferable.
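The pattern we want people to opt into looks like this (sketch):

    static void
    fade_out (ClutterActor *actor)
    {
      clutter_actor_save_easing_state (actor);
      clutter_actor_set_easing_mode (actor, CLUTTER_EASE_OUT_CUBIC);
      clutter_actor_set_easing_duration (actor, 500);
      clutter_actor_set_easing_delay (actor, 100);
      clutter_actor_set_opacity (actor, 0);
      clutter_actor_restore_easing_state (actor);
    }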
If the pointer is inside the window frame when it's shown then we need
to synthesize and emit an NSMouseEnterEvent ourselves, as Quartz won't
do it for us.
This is a bit of a blind commit - but it's taken from an equivalent
patch that has been verified to work in GDK.
Yes, it's not really the proper GL name for a linear-on-every-texture-axis
plus linear-between-mipmap-levels minification filter, but it has three
redeeming qualities as a name:
- LINEAR_MIPMAP_LINEAR sucks, as it introduces GL concepts like
mipmaps in the API naming, while we're trying to avoid that;
- people using GL already know what 'trilinear' means in this context
without going all Khronos on their asses;
- we're using 2D textures anyway, so 'linear on two axes and linear
between mipmap levels' can be effectively approximated to
'trilinear'.
I mean, if even the official OpenGL wiki says:
Unfortunately, what most people think of as "trilinear" is not linear
filtering of a 3D texture, but what in OpenGL terms is GL_LINEAR mag
filter and GL_LINEAR_MIPMAP_LINEAR in the min filter in a 2D texture.
That is, it is bilinear filtering of each appropriate mipmap level,
and doing a third linear filter between the adjacent mipmap levels.
Hence the term "trilinear".
-- http://www.opengl.org/wiki/Texture
then the horse has already been flogged to death, and I don't intend to
be accused of necrophilia and sadism by flogging it some more.
Prior art: every single GL tutorial in the history of ever;
CoreAnimation's scaling filter enumerations.
If people want to start using 1D or 3D textures they are probably
going to be using the Cogl API directly, and that has the GL naming
scheme for minification and magnification filters anyway.
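To spell out the mapping (a sketch only; the helper is hypothetical, and the enum/Cogl values are assumed to be the existing ones):

    static CoglPipelineFilter
    min_filter_to_cogl (ClutterScalingFilter filter)
    {
      switch (filter)
        {
        case CLUTTER_SCALING_FILTER_NEAREST:
          return COGL_PIPELINE_FILTER_NEAREST;

        case CLUTTER_SCALING_FILTER_TRILINEAR:
          /* bilinear on each mipmap level, linear between levels */
          return COGL_PIPELINE_FILTER_LINEAR_MIPMAP_LINEAR;

        case CLUTTER_SCALING_FILTER_LINEAR:
        default:
          return COGL_PIPELINE_FILTER_LINEAR;
        }
    }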
At least for the time being, we only expose the parts of the API that we
want to use internally and for new, out-of-tree Content implementations.
The full PaintNode tree API will be made public in 1.12 once we branch
master.
It's a bit late in the game for changing the emission of the paint
signal with actors that use paint nodes - mostly because we have both
implicit paint nodes (background color, content) and explicit paint
nodes (the paint_node virtual).
When we branch for 1.12 we can revert this change.
It's safer, and consistent with the rest of Clutter, to make sure that
the template pipeline we use for ClutterTextureNode has its wrap mode
set to clamp-to-edge.
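Roughly this kind of set-up for the template (sketch; assumes the Cogl experimental pipeline API and an existing CoglContext):

    static CoglPipeline *
    create_texture_template (CoglContext *ctx)
    {
      CoglPipeline *tmpl = cogl_pipeline_new (ctx);

      /* avoid bleeding artefacts when sampling at the texture edges */
      cogl_pipeline_set_layer_wrap_mode (tmpl, 0,
                                         COGL_PIPELINE_WRAP_MODE_CLAMP_TO_EDGE);

      return tmpl;
    }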
Both ClutterCanvas and ClutterImage should use the minification and
magnification filters set on the actor, just like they use the content
box and the paint opacity.
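From the application side that means a set-up like this should just work (sketch; assumes the 1.10 content API names):

    static ClutterActor *
    create_image_actor (ClutterContent *image)
    {
      ClutterActor *actor = clutter_actor_new ();

      clutter_actor_set_content (actor, image);

      /* the content picks these up, like it picks up the content box
       * and the paint opacity */
      clutter_actor_set_content_scaling_filters (actor,
                                                 CLUTTER_SCALING_FILTER_TRILINEAR,
                                                 CLUTTER_SCALING_FILTER_LINEAR);

      return actor;
    }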
The ::paint signal is the old way to paint an actor; the paint_node()
virtual function is the new way. It's still not possible to traverse the
whole scene graph and build a render tree of PaintNode instances, but
with this change we simultaneously cut out the ::paint signal emission
from the critical path for actors that are using the new PaintNode-based
API, and we retain backward compatibility in the interim period between
1.10 and 2.0.
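For reference, the new-style path is an actor subclass overriding paint_node() and attaching its own nodes, instead of connecting to ::paint; a minimal sketch (the subclass and its vfunc are hypothetical):

    /* in the subclass' class_init:
     *   actor_class->paint_node = my_actor_paint_node;
     */
    static void
    my_actor_paint_node (ClutterActor     *actor,
                         ClutterPaintNode *root)
    {
      const ClutterColor color = { 0xcc, 0x00, 0x00, 0xff };
      ClutterActorBox box = { 0.f, 0.f, 0.f, 0.f };
      ClutterPaintNode *node;

      /* paint in actor-relative coordinates */
      clutter_actor_get_size (actor, &box.x2, &box.y2);

      node = clutter_color_node_new (&color);
      clutter_paint_node_add_rectangle (node, &box);
      clutter_paint_node_add_child (root, node);
      clutter_paint_node_unref (node);
    }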