There's an optimisation in clutter-actor.c to avoid calculating the
last known paint volume whenever culling and clipped redraws are both
disabled. However, there was a small thinko in the logic, so it would
also avoid calculating the paint volume whenever only one of the
debug flags was enabled. This fixes it to explicitly check both flags,
so that the paint volume calculation is only skipped when the two
debug flags are both enabled.
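As a rough illustration (a sketch, not necessarily the exact code; the
helper name is an assumption), the corrected condition only skips
updating the last paint volume when both debug flags are set:

    if ((clutter_paint_debug_flags &
         (CLUTTER_DEBUG_DISABLE_CULLING |
          CLUTTER_DEBUG_DISABLE_CLIPPED_REDRAWS)) !=
        (CLUTTER_DEBUG_DISABLE_CULLING |
         CLUTTER_DEBUG_DISABLE_CLIPPED_REDRAWS))
      {
        /* at least one of culling/clipped redraws is still in use,
         * so keep the last paint volume up to date */
        _clutter_actor_update_last_paint_volume (self);
      }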
Instead of unconditionally combining the modelview and projection
matrices and then iterating over each of the vertices to call
cogl_matrix_transform_point for each one in turn, we now only combine the
matrices if there are more than 4 vertices (with fewer than 4 vertices
it's less work to transform them separately), and we use the new
cogl_vertex_{transform,project}_points APIs which can hopefully
vectorize the transformations.
Finally, the perspective divide and viewport scale are done in a separate
loop at the end, and we no longer do the spurious perspective divide and
viewport scale for the z component.
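Schematically the idea is something like the following sketch (variable
names, buffers and the y-axis convention are illustrative, not the
actual Clutter code):

    if (n_vertices > 4)
      {
        CoglMatrix modelview_projection;

        /* one combined matrix, then a single pass over all the points */
        cogl_matrix_multiply (&modelview_projection, projection, modelview);
        /* ... project all points with the combined matrix ... */
      }
    else
      {
        /* with only a few points it's cheaper to apply the modelview
         * and projection matrices separately ... */
      }

    /* perspective divide and viewport scale done in a separate loop,
     * and not applied to the z component at all */
    for (i = 0; i < n_vertices; i++)
      {
        float w_inv = 1.0f / vertices_tmp[i].w;

        vertices_out[i].x =
          viewport[0] + viewport[2] * (vertices_tmp[i].x * w_inv + 1.0f) * 0.5f;
        vertices_out[i].y =
          viewport[1] + viewport[3] * (vertices_tmp[i].y * w_inv + 1.0f) * 0.5f;
        vertices_out[i].z = vertices_tmp[i].z;
      }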
Previously, each time we needed to retrieve the model transform for a
given actor we would call the apply_transform vfunc, which would build up
a transformation matrix based on the actor's current anchor point, its
scale, its allocation and rotation. The apply_transform implementation
would repeatedly call API like cogl_matrix_rotate, cogl_matrix_translate
and cogl_matrix_scale.
All these micro matrix-manipulation APIs were starting to show up in the
profiles of dynamic applications, so this adds a priv->transform matrix
cache which maintains the combined result of the actor's scale, rotation,
anchor point etc. Whenever something like the rotation changes the
matrix is marked as dirty, but so long as the matrix isn't dirty the
apply_transform vfunc now just calls cogl_matrix_multiply with the
cached transform matrix.
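The resulting pattern looks roughly like this (field and function names
are illustrative, not the actual implementation):

    static void
    clutter_actor_real_apply_transform (ClutterActor *actor,
                                        CoglMatrix   *matrix)
    {
      ClutterActorPrivate *priv = actor->priv;

      if (!priv->transform_valid)
        {
          /* rebuild the cached matrix from the allocation, rotation,
           * scale and anchor point */
          cogl_matrix_init_identity (&priv->transform);
          /* ... cogl_matrix_translate/rotate/scale calls ... */
          priv->transform_valid = TRUE;
        }

      /* the fast path: a single multiply with the cached matrix */
      cogl_matrix_multiply (matrix, matrix, &priv->transform);
    }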
This implements a variation of frustum culling whereby we convert screen
space clip rectangles into eye space mini-frustums so that we don't have
to repeatedly transform actor paint volumes all the way into screen
coordinates to perform culling; we just have to apply the modelview
transform and then determine each point's distance from the planes that
make up the clip frustum.
By avoiding the projective transform, perspective divide and viewport
scale for each point culled, this makes culling much cheaper.
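The per-plane test this boils down to is roughly the following (the
plane representation and names are assumptions for illustration):

    typedef struct
    {
      float v0[3]; /* a point on the plane */
      float n[3];  /* the plane normal, pointing into the frustum */
    } Plane;

    static gboolean
    point_is_inside (const Plane *plane, const float p[3])
    {
      /* signed distance of the eye-space point from the plane */
      float d = (p[0] - plane->v0[0]) * plane->n[0]
              + (p[1] - plane->v0[1]) * plane->n[1]
              + (p[2] - plane->v0[2]) * plane->n[2];

      return d >= 0.0f;
    }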
This simplifies the implementation of the ClutterStage apply_transform
vfunc by using the new cogl_matrix_view_2d_in_perspective utility API
which can set up a view transform for a given perspective so that a
cross section of the view frustum at an arbitrary depth can be mapped
directly to 2D stage coordinates with (0,0) at the top left.
This adds two new experimental functions to cogl-matrix.c:
cogl_matrix_view_2d_in_perspective and cogl_matrix_view_2d_in_frustum,
which can be used to set up a view transform that maps a 2D coordinate
system (0,0) top left and (width,height) bottom right to the current
viewport.
Toolkits such as Clutter that want to mix 2D and 3D drawing can use
these APIs to position a 2D coordinate system at an arbitrary depth
inside a 3D perspective projected viewing frustum.
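For example, a toolkit could set up its modelview along these lines (a
sketch: fovy, aspect, z_near, z_2d, stage_width and stage_height are
placeholder values, and the argument order reflects the experimental API
as described here):

    CoglMatrix view;

    cogl_matrix_init_identity (&view);
    cogl_matrix_view_2d_in_perspective (&view,
                                        fovy, aspect, z_near,
                                        z_2d,          /* depth of the 2D plane */
                                        stage_width,   /* maps to (width, height) */
                                        stage_height); /* at the bottom right */
    cogl_set_modelview_matrix (&view);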
Firstly, Clutter shouldn't be using OpenGL directly, so this needed
changing; but also, conceptually it doesn't make sense for
clutter_stage_read_pixels to validate the requested read area against
the viewport: it would make more sense to compare against the window
size. Finally, checking that the width of the area is less than the
viewport or window width without considering the x coordinate isn't
enough to know whether the area extends outside the window's bounds
(the same goes for the height).
This patch removes the validation of the read area from
clutter_stage_read_pixels and instead we now simply rely on the
semantics of cogl_read_pixels for reading areas outside the window
bounds.
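A caller can now simply read the area it wants and rely on
cogl_read_pixels to deal with any part that falls outside the window,
e.g.:

    guchar *pixels = g_malloc (width * height * 4);

    cogl_read_pixels (x, y, width, height,
                      COGL_READ_PIXELS_COLOR_BUFFER,
                      COGL_PIXEL_FORMAT_RGBA_8888,
                      pixels);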
OpenGL < 4.0 only supports integer-based viewports, and internally we
have a mixture of code using floats and integers for viewports. This
patch switches all viewports throughout Clutter and Cogl to be
represented using floats, considering that in the future we may want to
take advantage of floating point viewports with modern hardware/drivers.
This makes a change to the original point_in_poly algorithm from:
http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html
The aim was to tune the test so that tests against screen-aligned
rectangles are more resilient to some imprecision in how we transformed
that rectangle into screen coordinates. In particular gnome-shell was
finding that for some stage sizes row 0 of the stage would become a
dead zone when going through the software picking fast-path, and this was
because the y position of screen-aligned rectangles could end up as
something like 0.00024, and the way the algorithm works it doesn't have
any epsilon/fuzz factor to absorb that imprecision.
We've avoided introducing an epsilon factor to the comparisons since we
feel there's a risk of changing some semantics in ways that might not be
desirable. One of those is that if you transform two polygons which
share an edge and test a point close to that edge then this algorithm
will currently give a positive result for only one polygon.
Another concern is the way this algorithm resolves the corner case where
the horizontal ray being cast to count edge crossings may cross directly
through a vertex. The solution is based on the "idea of Simulation of
Simplicity" and "pretends to shift the ray infinitesimally down so that
it either clearly intersects, or clearly doesn't touch". I'm not
familiar with the idea myself so I expect a misplaced epsilon is likely
to break that aspect of the algorithm.
The simple solution this patch applies is to pixel align the polygon
vertices, which should eradicate most noise due to imprecision.
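A sketch of the resulting test, i.e. the pnpoly loop from the link above
run over pixel-aligned vertices (function name is illustrative; roundf()
needs <math.h>):

    static gboolean
    point_in_poly (float x, float y, const ClutterVertex *verts, int n_verts)
    {
      gboolean inside = FALSE;
      int i, j;

      for (i = 0, j = n_verts - 1; i < n_verts; j = i++)
        {
          /* pixel align the vertices to eradicate most of the noise
           * coming from the imprecision of the transform */
          float xi = roundf (verts[i].x), yi = roundf (verts[i].y);
          float xj = roundf (verts[j].x), yj = roundf (verts[j].y);

          if (((yi > y) != (yj > y)) &&
              (x < (xj - xi) * (y - yi) / (yj - yi) + xi))
            inside = !inside;
        }

      return inside;
    }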
https://bugzilla.gnome.org/show_bug.cgi?id=641197
Anything that is not CLUTTER_INIT_SUCCESS is to be considered an error.
This fixes the Clutter initialization sequence to actually error out
on pre-condition failures and backend initialization failures.
clutter_offscreen_effect_pre_paint was using the uninitialized value of
the ‘box’ variable whenever the actor doesn't have a paint
volume. This patch makes it just set the offset to 0,0 instead.
When removing the opacity override in the post_paint implementation,
ClutterOffscreenEffect would always set the override back to -1. This
ends up cancelling out the effect of any overrides from outer effects,
which means that if an actor has multiple effects attached then it
would apply the opacity multiple times.
To fix this, the effect now preserves the old value of the opacity
override and restores that instead of setting -1.
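The resulting pre/post paint pattern looks roughly like this (the
accessor names are assumed; the getter is provided by the companion
change described below):

    /* pre_paint: remember whatever an outer effect may have set */
    priv->old_opacity_override =
      _clutter_actor_get_opacity_override (priv->actor);
    _clutter_actor_set_opacity_override (priv->actor, 0xff);

    /* post_paint: restore the outer value instead of forcing -1 */
    _clutter_actor_set_opacity_override (priv->actor,
                                         priv->old_opacity_override);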
http://bugzilla.clutter-project.org/show_bug.cgi?id=2541
This is needed if an effect wants to temporarily override the paint
opacity. It needs to be able to restore the old opacity override in
the post_paint handler otherwise it would replace the effect of the
opacity override from any outer effects.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2541
The OffscreenEffect class needs to expose a way for sub-classes to
track the size of FBO it creates, in case it has to do some geometry
deformations like the DeformEffect sub-classes.
Let's move the private symbol we used internally in 1.6 to fix
DeformEffect to the list of public symbols of OffscreenEffect.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2570
The table we use for converting between keysyms and Unicode should be
static and constified, so that it can live in the .rodata section of
the ELF shared object, and be shared among processes.
This change moves the table to a source file, instead of a header; the
change also requires the clutter_keysym_to_unicode() function to be
moved from clutter-event.c into this new source file. The declaration is
still in clutter-event.h, so we don't need to do anything special.
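After the change the table has roughly this shape (names and the example
entry are for illustration only):

    /* clutter-keysyms-table.c */
    static const struct
    {
      guint16 keysym;
      guint16 ucs;
    } clutter_keysym_to_unicode_tab[] = {
      { 0x01a1, 0x0104 }, /* Aogonek */
      /* ... */
    };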
Creating a synthetic event requires direct access to the ClutterEvent
union members; this kind of access does not map well to bindings for
high-level languages, especially run-time bindings using
GObject-Introspection.
It's also mildly annoying from C, as it unnecessarily exposes the guts of
ClutterEvent - something we might want to fix in the future.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2575
Many people expect clutter_init to work the same way as gtk_init, which
exits the program on init failure. clutter_init, however, returns a
status code on failure, which applications need to handle, because if
the init fails then any further Clutter calls are likely to crash. In
Clutter 2.0 we may want to change this to be more like GTK+.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2574
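Until then, applications are expected to check the return value
themselves, e.g.:

    if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
      {
        g_critical ("Unable to initialize Clutter");
        return EXIT_FAILURE;
      }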
When using a pipeline and the journal to blit images between
framebuffers, blending should be disabled. Otherwise the blit will end up
blending the source texture with uninitialised garbage in the
destination texture.
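For instance, with the experimental pipeline API blending can be turned
off by installing a blend string that ignores the destination (a sketch,
not necessarily the exact call used here):

    /* write the source colour straight through, instead of blending it
     * with whatever garbage the destination framebuffer may contain */
    cogl_pipeline_set_blend (pipeline, "RGBA = ADD (SRC_COLOR, 0)", NULL);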
Converting from Pango units to pixels with C's truncating integer
conversion might cause us to lose a pixel; since we're doing the same for
the height, we should use ceilf() to round up the width and the line
height.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2573
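In practice that amounts to something like the following (logical_rect
standing in for the Pango extents of the layout):

    width  = ceilf ((gfloat) logical_rect.width  / (gfloat) PANGO_SCALE);
    height = ceilf ((gfloat) logical_rect.height / (gfloat) PANGO_SCALE);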
The ClutterDeformEffect sub-classes are effectively deforming the
texture target of an FBO, not the actor itself. Thus, we need to
use the FBO's size, and not the actor's allocated size, given that
the actor might be transformed prior to applying an effect.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2571
Since the FBO target might have a different size than the mere paint box
of the actor, we need API to get it out of the ClutterOffscreenEffect
private data structure and on to sub-classes.
Since we cannot add new API in a stable cycle, we need a private
function; we'll leave it there even when opening 1.7, since it's useful
for internal purposes.
Once upon a time, the land of Clutter had a stage singleton. It was
created automatically at initialization time and stayed around even
after the main loop was terminated. The singleton was content in
being all there was. There also was a global API to handle the
configuration of the stage singleton that would affect the behaviour
on other classes, signals and properties.
Then, an evil wizard came along and locked the stage singleton in his
black tower, and twisted it until it was possible to create new stages.
These new stages were pesky, and didn't have the same semantics as the
singleton: they didn't stay around when closed, or terminate the main
loop on delete events.
The evil wizard also started moving all the stage-related API from the
global context into class-specific methods.
Finally, the evil wizard cast a spell, and the stage singleton was
demoted to creation on demand - and until somebody called the
clutter_stage_get_default() function, the singleton remained in a limbo
of NULL pointers and undefined memory areas.
There was a last bit - literally - of information still held by the
global API; a tiny, little flag that disabled per-actor motion events.
The evil wizard added private accessors for it, and stored it inside the
stage private structure, in preparation for a deprecation that would
come in a future development cycle.
The evil wizard looked down upon the land of Clutter from the height of
his black tower; the lay of the land had been reshaped into a crucible
of potential, and the last dregs of the original force of creation were
either molded into new, useful shapes, or blasted away by the sheer fury
of his will.
All was good.
The clutter-id-pool.h header is private and not installed; yet, all the
clutter_id_pool_* symbols are public. Let's correct this oversight we've
been stringing along since forever.
Only allow access to the ClutterMainContext through the private
_clutter_context_get_default() function, so we can easily grep
it and remove the unwanted usage of the global context.
The shader stack held by ClutterMainContext should only be accessed
using functions, and not directly.
Since it's a stack, we can use stack-like operations: push, pop and
peek.
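The resulting private entry points would look something like this
(prototypes assumed from the description above):

    void           _clutter_context_push_shader_stack (ClutterShader *shader);
    ClutterShader *_clutter_context_pop_shader_stack  (void);
    ClutterShader *_clutter_context_peek_shader_stack (void);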
The _clutter_do_redraw() function should really be moved inside
ClutterStage, since all it does is call private stage and
backend functions. This also allows us to fix a long-standing
issue where the fps counter was global to all stages, instead of
per-stage.
Let's try and start reducing the size of ClutterActorPrivate by moving
some optional, out-of-band data from it to GObject data.
The ShaderData structure is a prime candidate for this migration: it
does not need to be inspected by the actor, and its relationship with an
actor is transient and optional.
By attaching it to the actor's instance through g_object_set_data() we
neatly tie its lifetime to the instance, and we don't have to take care
of cleaning it up in the finalize()/dispose() implementation of
ClutterActor itself.
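A sketch of the attachment (the key string, the ShaderData type usage
and the shader_data_free helper are illustrative):

    static ShaderData *
    clutter_actor_get_shader_data (ClutterActor *actor,
                                   gboolean      create)
    {
      ShaderData *shader_data =
        g_object_get_data (G_OBJECT (actor), "-clutter-actor-shader-data");

      if (shader_data == NULL && create)
        {
          shader_data = g_new0 (ShaderData, 1);

          /* the data is destroyed together with the actor instance */
          g_object_set_data_full (G_OBJECT (actor),
                                  "-clutter-actor-shader-data",
                                  shader_data,
                                  (GDestroyNotify) shader_data_free);
        }

      return shader_data;
    }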