The PaintNode hierarchy should have the ability to retrieve the
current active framebuffer by itself, instead of asking Cogl using the
global state API.
In order to do this, we ask the root node of a PaintNode graph for the
active framebuffer. In the current 1.x compatibility mode we have two
potential root node types: ClutterRootNode, used by ClutterStage; and
ClutterDummyNode, used as a local root for each actor. The former takes
a framebuffer as part of its construction; the latter takes the actor
that acts as the local top-level during the actor's paint sequence,
which means we can get the active framebuffer from the stage associated
with the actor.
By keeping track of the active framebuffer on the nodes themselves we
can drop the usage of cogl_get_draw_framebuffer() in their
implementation.
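A minimal sketch of the idea; the helper name, the root-node fields,
and the stage accessor used here are illustrative rather than the
exact symbols in clutter-paint-node.c:

  static CoglFramebuffer *
  paint_node_get_framebuffer (ClutterPaintNode *node)
  {
    ClutterPaintNode *root = node;

    /* walk up to the root of the paint node graph */
    while (root->parent != NULL)
      root = root->parent;

    if (CLUTTER_IS_ROOT_NODE (root))
      {
        /* the root node was constructed with a framebuffer */
        return CLUTTER_ROOT_NODE (root)->framebuffer;
      }
    else if (CLUTTER_IS_DUMMY_NODE (root))
      {
        /* the dummy node knows its actor, so we can go through the
         * stage the actor belongs to
         */
        ClutterActor *actor = CLUTTER_DUMMY_NODE (root)->actor;
        ClutterActor *stage = clutter_actor_get_stage (actor);

        return clutter_stage_get_active_framebuffer (CLUTTER_STAGE (stage));
      }

    return cogl_get_draw_framebuffer ();
  }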
Cogl 1.18 deprecated the global clipping API in favour of the
per-framebuffer one, but since we're using the 2.0 API internally we
don't have access to the deprecated symbols any more.
This is pretty much a mechanical port for all the places where we're
still using the old 1.x API.
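The shape of the port is the same everywhere: the clip stack moves from
global state to the framebuffer. Roughly, with illustrative variable
names:

  /* before: global clip state, deprecated in Cogl 1.18 */
  cogl_clip_push_rectangle (x_1, y_1, x_2, y_2);
  /* ... drawing ... */
  cogl_clip_pop ();

  /* after: the clip stack lives on the framebuffer */
  cogl_framebuffer_push_rectangle_clip (framebuffer, x_1, y_1, x_2, y_2);
  /* ... drawing ... */
  cogl_framebuffer_pop_clip (framebuffer);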
The TextureNode premultiplies the blend color passed to the node
constructor, so we need to document this properly to avoid applying
premultiplication twice.
We can also allow passing NULL for a color, and use a fully opaque
white, to make the code slightly more friendly.
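A sketch of the documented behaviour, using the usual Cogl colour
helpers; the exact code in the node constructor may differ:

  CoglColor cogl_color;

  /* NULL means "fully opaque white" */
  if (color == NULL)
    {
      static const ClutterColor white = { 255, 255, 255, 255 };
      color = &white;
    }

  cogl_color_init_from_4ub (&cogl_color,
                            color->red,
                            color->green,
                            color->blue,
                            color->alpha);

  /* premultiplied once, here, so callers must pass an
   * unpremultiplied color
   */
  cogl_color_premultiply (&cogl_color);
  cogl_pipeline_set_color (pipeline, &cogl_color);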
If we don't get passed a CoglFramebuffer when creating the root paint
node then we ask Cogl to give us the current draw buffer.
This allows the text-cache conformance test to pass, but it'll require
further investigation.
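In other words, the root node constructor grows a small fallback along
these lines (the parameter name is illustrative):

  if (framebuffer == NULL)
    framebuffer = cogl_get_draw_framebuffer ();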
If we allow content repeats on the texture nodes, then we need to use
the "automatic" wrap mode for the texture layer in the pipeline, instead
of the clamp-to-edge one.
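Something along these lines, assuming the repeat flags come from the
content or the actor, and that the texture sits on layer 0:

  if (repeat_x || repeat_y)
    cogl_pipeline_set_layer_wrap_mode (pipeline, 0,
                                       COGL_PIPELINE_WRAP_MODE_AUTOMATIC);
  else
    cogl_pipeline_set_layer_wrap_mode (pipeline, 0,
                                       COGL_PIPELINE_WRAP_MODE_CLAMP_TO_EDGE);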
Reported-by: Matthew Watson <matthew@endlessm.com>
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>
The introspection scanner has become slightly more annoying, in the hope
that people start fixing their annotations. As it turns out, it was the
right move.
Yes, it's not really the proper GL name for a minification filter that
is linear on every axis of a texture and linear between mipmap levels,
but it has three redeeming qualities as a name:
- LINEAR_MIPMAP_LINEAR sucks, as it introduces GL concepts like
mipmaps in the API naming, while we're trying to avoid that;
- people using GL already know what 'trilinear' means in this context
without going all Khronos on their asses;
- we're using 2D textures anyway, so 'linear on two axes and linear
between mipmap levels' can be effectively approximated to
'trilinear'.
I mean, if even the OpenGL official wiki says:
Unfortunately, what most people think of as "trilinear" is not linear
filtering of a 3D texture, but what in OpenGL terms is GL_LINEAR mag
filter and GL_LINEAR_MIPMAP_LINEAR in the min filter in a 2D texture.
That is, it is bilinear filtering of each appropriate mipmap level,
and doing a third linear filter between the adjacent mipmap levels.
Hence the term "trilinear".
-- http://www.opengl.org/wiki/Texture
then the horse has already been flogged to death, and I don't intend to
be accused of necrophilia and sadism by flogging it some more.
Prior art: every single GL tutorial in the history of ever;
CoreAnimation's scaling filter enumerations.
If people want to start using 1D or 3D textures they are probably
going to be using the Cogl API directly, and that has the GL naming
scheme for minification and magnification filters anyway.
At least for the time being, we only expose the parts of the API that we
want to use internally and for new, out-of-tree Content implementations.
The full PaintNode tree API will be made public in 1.12 once we branch
master.
It's safer, and consistent with the rest of Clutter, to make sure that
the template pipeline we use for ClutterTextureNode has its wrap mode
set to clamp-to-edge.
Both ClutterCanvas and ClutterImage should use the minification and
magnification filters set on the actor, just like they use the content
box and the paint opacity.
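A sketch of what a Content implementation would do when building its
texture node; the surrounding variables are illustrative:

  ClutterScalingFilter min_f, mag_f;
  ClutterPaintNode *node;

  clutter_actor_get_content_scaling_filters (actor, &min_f, &mag_f);

  node = clutter_texture_node_new (texture, &color, min_f, mag_f);
  clutter_paint_node_add_rectangle (node, &content_box);
  clutter_paint_node_add_child (root, node);
  clutter_paint_node_unref (node);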
Use JsonNode and JsonBuilder instead of our homegrown string building;
this at least ensures that we're generating well-formed data instead of
random strings. Plus, we can ask the PaintNode subclasses to serialize
themselves in a sensible way.
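An illustrative sketch of what a per-class serialization hook could
look like on top of json-glib; the function name and the serialized
members are assumptions:

  static JsonNode *
  clutter_color_node_serialize (ClutterPaintNode *node)
  {
    JsonBuilder *builder = json_builder_new ();
    JsonNode *res;

    json_builder_begin_object (builder);

    json_builder_set_member_name (builder, "type");
    json_builder_add_string_value (builder, "ClutterColorNode");

    json_builder_end_object (builder);

    res = json_builder_get_root (builder);
    g_object_unref (builder);

    return res;
  }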
Now that we have a proper scene graph API, we should split out the
rendering part from the logical and event handling part.
ClutterPaintNode is a lightweight fundamental type that encodes only
the paint operations: pipeline state and geometry. At its simplest, it
is a way to structure setting up the programmable pipeline using a
CoglPipeline, and submitting Cogl primitives. The important takeaway
from this API is that you are not allowed to call Cogl API like
cogl_set_source() or cogl_primitive_draw() directly.
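A sketch of the intended usage pattern from a
ClutterContentIface.paint_content() implementation: state and geometry
go into nodes, and no Cogl drawing call is issued directly. The type
and the solid colour are made up for the example:

  static void
  my_content_paint_content (ClutterContent   *content,
                            ClutterActor     *actor,
                            ClutterPaintNode *root)
  {
    ClutterActorBox box;
    ClutterColor color;
    ClutterPaintNode *node;

    clutter_actor_get_content_box (actor, &box);
    clutter_color_init (&color, 255, 0, 0,
                        clutter_actor_get_paint_opacity (actor));

    /* the node carries the pipeline state (a solid colour here)... */
    node = clutter_color_node_new (&color);

    /* ... and the geometry it should be painted with */
    clutter_paint_node_add_rectangle (node, &box);

    clutter_paint_node_add_child (root, node);
    clutter_paint_node_unref (node);
  }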
The interesting part of this approach is that, in the future, we should
be able to move to a purely retained mode: we will decide which actors
need to be painted, they will update their own branch of the render
graph, and we'll take the render graph and build all the rendering
commands from that.
For the 1.x API, we will have to maintain invariants and the existing
behaviour, but as soon as we can break API, the old paint signal will
just go away, and Actors will only be allowed to manipulate the render
tree.