g_list_insert_sorted inserts the new actor before all others that
compare equal, so for the normal case when all actors have depth==0
this has the surprising behaviour of layering the actors in reverse
order. To fix this, the code now manually inserts the actor in the
right place by searching until it finds an actor at a higher depth and
inserting before it.
http://bugzilla.openedhand.com/show_bug.cgi?id=1988
This reverts commit 939e56e2b1.
Changing the depth sort function to have inconsistent behaviour for
nodes that compare equal breaks the stability of g_list_sort. The
result is that every time clutter_container_sort_depth_order is called,
the order of all actors with the same depth is reversed.
http://bugzilla.openedhand.com/show_bug.cgi?id=1988
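For illustration, a minimal sketch (not the actual Clutter code) of a
depth comparator that keeps g_list_sort() stable, because nodes with
equal depths consistently compare as 0:
  static gint
  sort_by_depth (gconstpointer a,
                 gconstpointer b)
  {
    gfloat depth_a = clutter_actor_get_depth ((ClutterActor *) a);
    gfloat depth_b = clutter_actor_get_depth ((ClutterActor *) b);
    /* equal depths must always return 0, otherwise g_list_sort()
     * loses its stability and reshuffles equal elements */
    if (depth_a < depth_b)
      return -1;
    if (depth_a > depth_b)
      return 1;
    return 0;
  }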
To aid in the debugging of Clutter stage resize issues, this adds a
COGL_DEBUG=opengl option that will trace "some select OpenGL calls"
(currently just glViewport calls).
Most Cogl debugging code conditions are marked as G_UNLIKELY with the
intention of having the CPU branch prediction always assume the
path is disabled, so that having debugging support in release binaries
has negligible overhead.
This patch simply fixes a few cases where we weren't using G_UNLIKELY.
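A hedged illustration of the pattern (the flag and variable names are
assumptions, not the exact Cogl symbols):
  /* the branch is predicted as not taken, so a disabled debug flag
   * costs almost nothing in release builds */
  if (G_UNLIKELY (cogl_debug_flags & COGL_DEBUG_OPENGL))
    g_message ("glViewport (%d, %d, %d, %d)", x, y, width, height);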
COGL_DEBUG=all wasn't previously useful, as there are several options
that change the behaviour of Cogl and enabling them all together
wouldn't help anyone debug anything.
This patch makes it so COGL_DEBUG=all|verbose now only enables options
that don't change the behaviour of Cogl, i.e. they only affect the
amount of noise we'll print to a terminal.
In addition to that, this patch also improves the output from
COGL_DEBUG=help, so we now print a table of options including one-line
descriptions of what each option enables.
Some of the ClutterDebugFlags are not meant as a logging facility: they
actually change Clutter's behaviour at run-time.
It would be useful to have this distinction ratified, and thus split
ClutterDebugFlags into two: one set of flags for logging facilities and
another set of flags for behavioural changes.
This split is warranted because:
• it should be possible to do "CLUTTER_DEBUG=all" and only have
log messages on the output
• it should be possible to use behavioural modifiers even on a
Clutter that has been compiled without debugging messages
support
The commit adds two new debugging flags:
ClutterPickDebugFlags  - controlled by the CLUTTER_PICK environment
                         variable
ClutterPaintDebugFlags - controlled by the CLUTTER_PAINT environment
                         variable
The PickDebugFlags are:
  nop-picking
  dump-pick-buffers
while the only PaintDebugFlag is:
  disable-swap-events
The mechanism is equivalent to the CLUTTER_DEBUG environment variable,
but it does not depend on the debug level selected when configuring and
compiling Clutter. The picking and painting debugging flags are
initialized at clutter_init() time.
http://bugzilla.openedhand.com/show_bug.cgi?id=1991
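A minimal sketch of what the new flag sets could look like (the symbol
names and bit values are assumptions derived from the option names
above, not the verbatim headers):
  typedef enum
  {
    CLUTTER_DEBUG_NOP_PICKING       = 1 << 0,
    CLUTTER_DEBUG_DUMP_PICK_BUFFERS = 1 << 1
  } ClutterPickDebugFlag;
  typedef enum
  {
    CLUTTER_DEBUG_DISABLE_SWAP_EVENTS = 1 << 0
  } ClutterPaintDebugFlag;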
The motion event compression should take the device field of the event
into account; that is, we should only compress motion events coming
from the same device.
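A hedged sketch of the kind of check this implies (the helper name and
structure are illustrative only):
  static gboolean
  can_compress_motion (ClutterEvent *a,
                       ClutterEvent *b)
  {
    /* only collapse consecutive motion events generated by the
     * same input device */
    return a->type == CLUTTER_MOTION &&
           b->type == CLUTTER_MOTION &&
           clutter_event_get_device (a) == clutter_event_get_device (b);
  }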
If an actor is on the boundary of a Stage and the pointer for a device
enters the Stage over that actor, the sequence of events currently is:
➔ ENTER (source: actor, related: NULL)
➔ MOTION
Thus the Stage never gets an ENTER event. This is a regression from
Clutter 1.0.
The correct sequence is:
➔ ENTER (source: stage, related: NULL)
➔ ENTER (source: actor, related: stage)
➔ MOTION
This also maps to the sequence of events synthesized by Clutter when
leaving the Stage through an actor overlapping the Stage boundary.
http://bugzilla.moblin.org/show_bug.cgi?id=9781
The introduction of the StageManager in 0.8 implied that the first Stage
instance to be created was automatically assigned the status of "default
stage". This was all well and good, since the default stage was created
behind the curtains by the initialization sequence.
Now that the initialization sequence does not create a default stage any
longer, it means that the first stage created using clutter_stage_new()
gets to be the default, and all special and warm and fuzzy - which also
means that the first stage created by clutter_stage_new() cannot be
destroyed or handled like any other stage. Whoopsie.
Let's go back to the old semantics: the stage created by the first
invocation of clutter_stage_get_default() is the default stage, and
nothing else can be set as default. One day we'll be able to break the
API and the whole default stage business will be a thing of the past.
Embedding toolkits should benefit from a proper documentation of
clutter_input_device_update_from_event(): its meaning, its use and
the caveats for the "update_stage" argument.
We now never query the width and height of the given texture object
from OpenGL. The problem is that the user may be creating a Cogl
texture from a texture_from_pixmap object where glTexImage2D was
never called and the texture_from_pixmap spec doesn't clarify that
it's reliable to query the width from OpenGL.
This should address:
http://bugzilla.openedhand.com/show_bug.cgi?id=1502
Thanks to Johan Bilien for reporting
Embedding toolkits most likely will disable the event handling, so all
the input device code will not be executed. Unfortunately, the newly
added synthetic event generation of ENTER and LEAVE event pairs depends
on having input devices.
In order to unbreak things without reintroducing the madness of the
previous code we should allow embedding toolkits to just update the
state of an InputDevice by using the data contained inside the
ClutterEvent. This strategy makes sense for two obvious reasons:
• the embedding toolkit is creating a ClutterEvent by translating
a toolkit-native event anyway
• this is exactly what ClutterStage does when processing events
We are, essentially, deferring input device handling to the embedding
toolkits, just like we're deferring event handling to them.
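A hedged sketch of the intended usage from an embedding toolkit (the
translate_native_event() helper is hypothetical):
  ClutterEvent *event = translate_native_event (native_event);
  ClutterInputDevice *device = clutter_event_get_device (event);
  /* keep the device state (position, stage, pointer actor) in sync
   * before letting Clutter process the event */
  if (device != NULL)
    clutter_input_device_update_from_event (device, event, TRUE);
  clutter_do_event (event);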
The DeviceManager class should be abstract in Clutter, and implemented
by each backend, as different backends will have different ways to
detect, initialize and list devices; the X11 backend alone has *two*
ways of dealing with devices.
This commit makes DeviceManager an abstract class and delegates the
device initialization and enumeration to per-backend sub-classes.
The backend singleton is, obviously, responsible for creating the
device manager.
The X11 and Win32 backends have been updated to the new layout; the
Win32 backend has been updated blindly, so it might require additional
testing.
ConfigureNotify is delivered on window movements too, but there is no
need to queue a relayout on these as the viewport hasn't changed size.
Check for the window actually changing size on ConfigureNotify before
queueing a relayout.
This fixes laggy window movement when moving a window in response to
Clutter mouse motion events.
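A hedged sketch of the check (the stage_x11 field names are assumptions
about the X11 backend's private structure):
  /* inside the X11 backend's event translation switch */
  case ConfigureNotify:
    if (xevent->xconfigure.width  != stage_x11->xwin_width ||
        xevent->xconfigure.height != stage_x11->xwin_height)
      {
        stage_x11->xwin_width  = xevent->xconfigure.width;
        stage_x11->xwin_height = xevent->xconfigure.height;
        /* the viewport only changes when the size changes, so a
         * plain move does not queue a relayout */
        clutter_actor_queue_relayout (CLUTTER_ACTOR (stage));
      }
    break;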
The size and position of the window rectangle for clipping in
try_pushing_rect_as_window_rect is calculated by projecting the
rectangle coordinates. Due to rounding errors, this can end up with
slightly off numbers like 34.999999. These were then being cast
directly to an integer so it could end up off by one.
This uses a new macro called COGL_UTIL_NEARBYINT which is a
replacement for the C99 nearbyint function.
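A hedged sketch of what such a macro can look like (round half away
from zero, which is close enough to nearbyint() for this purpose; the
real macro may differ):
  #define COGL_UTIL_NEARBYINT(x) \
    ((int) ((x) < 0.0f ? (x) - 0.5f : (x) + 0.5f))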
If an actor is lying on the border of the Stage it might miss the LEAVE
event when the pointer of a device leaves the Stage window. Since the
backend is unsetting the Stage back pointer on the InputDevice we can
queue the emission of a LEAVE event on the pointer actor as well.
http://bugzilla.moblin.org/show_bug.cgi?id=9677
As well as manually setting the geometry size, we needed to queue a
relayout. This is what the ConfigureNotify handler would normally do,
but we don't get this event when using a foreign window (obviously).
This should fix resizing in things like gtk-clutter.
If we get into the resize function and it's a foreign window, set the
geometry size so that the allocate will set the backend size and call
glViewport.
Setting/unsetting fullscreen on a mapped or unmapped window now works
correctly.
If you unfullscreen a window that was initially full-screened, it will
unset the fullscreen hint and the WM will likely push the size down to
the largest valid size.
If the window was previously un-fullscreened, Clutter will restore the
previous size.
Fullscreening also now works if the WM switches the hint without the
application's knowledge (as happens when you resize a window to the size
of the screen, for example, with stock metacity).
If FBOs aren't supported then reorganizing the atlas will end up being
very slow. Also, currently the CoglTexture2D backend will refuse to
create any textures anyway, so the full atlas texture won't be created.
cogl_texture_2d_new may fail in certain circumstances so
cogl_atlas_texture_reserve_space should detect this and also
fail. This will cause cogl_texture_new to fall back to a sliced
texture.
Thanks to Vladimir Ivakin for reporting this problem.
When we resized, we relied on the stage's allocate to re-initialise the
GL viewport. Unfortunately, if we resized within Clutter, the new size
was cached before the window was actually resized, so glViewport wasn't
being called after resizing (some of the time; it's a race condition).
Change the way resizing works slightly so that we only resize when the
geometry size doesn't match our preferred size, and queue a relayout on
ConfigureNotify so the glViewport gets called.
Also change window creation slightly so that setting the size of a
window before it's realized works correctly.
Since the "internal" state is global, it will leak onto actors that you
didn't intend for it to, because it applies not just to the actors you
create, but also to any actors *they* create. E.g., if you have a dialog
box class, you might push/pop_internal around creating its buttons, so
that those buttons get marked as internal to the dialog box. But
ctx->internal_child will still be set during the *button*'s constructor
as well, and so, e.g., the label and icon inside the button actor will
*also* be marked as internal children, even if that isn't what the
button class wanted.
The least intrusive change at this point is to make push_internal() and
pop_internal() two methods of the Actor class, and take a ClutterActor
pointer as the argument - thus moving the locality of the internal_child
counter to the Actor itself.
http://bugzilla.openedhand.com/show_bug.cgi?id=1990
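A hedged usage sketch (the dialog type and the create_button() helper
are hypothetical):
  /* only the children created between push and pop are marked as
   * internal to the dialog; whatever create_button() builds inside
   * the button is unaffected */
  clutter_actor_push_internal (CLUTTER_ACTOR (dialog));
  dialog->ok_button     = create_button (dialog, "OK");
  dialog->cancel_button = create_button (dialog, "Cancel");
  clutter_actor_pop_internal (CLUTTER_ACTOR (dialog));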
The master clock might be holding a Stage that is in its destruction
phase, without a StageWindow attached to it. If this happens and we try
to dereference the StageWindow to get its class and call a virtual
function we might experience some slight turbulence and... then...
explode.
http://bugzilla.openedhand.com/show_bug.cgi?id=1987
The signal-swapped-after:: modifier for signal connection inside the
clutter_actor_animate* variadic arguments functions is not mentioned in
the documentation.
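For reference, a hedged example of the modifier in use (hide the actor
once the animation completes; because of the swapped prefix the actor
is passed as the first argument to the handler):
  clutter_actor_animate (actor, CLUTTER_EASE_OUT_CUBIC, 250,
                         "opacity", 0,
                         "signal-swapped-after::completed",
                           G_CALLBACK (clutter_actor_hide), actor,
                         NULL);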
In the frenzy of the last 10mins before API freeze, I obviously forgot
to update the OpenGL path for _cogl_buffer_hints_to_gl_enum(). This
commit fixes this.
When the atlas is reorganised we could potentially be moving around
textures that are already referenced in the journal. We therefore need
to flush the journal otherwise they will be rendered with incorrect
texture coordinates. We also need to flush the journal even if we are
not reorganizing so that we can rely on the old texture contents
remaining in the atlas after migrating a texture out.
When creating a Cogl sub-texture, if the full texture is also a sub
texture it will now just offset the x and y and reference the full
texture instead. This avoids one level of indirection when rendering
the texture which reduces the chances of getting rounding errors in
the calculations.
Since get_paint_opacity() recurses through the hierarchy it might lead
to a lot of type checks while we walk the parent-child chain. We can
split the recursive function from the public entry point and perform the
type check just once.
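A hedged sketch of the pattern (the private structure members are
assumptions):
  static guint8
  get_paint_opacity_internal (ClutterActor *self)
  {
    ClutterActorPrivate *priv = self->priv;
    /* no type check here: the caller already validated the actor
     * and the parent pointer comes from Clutter itself */
    if (priv->parent_actor != NULL)
      return priv->opacity
           * get_paint_opacity_internal (priv->parent_actor)
           / 255;
    return priv->opacity;
  }
  guint8
  clutter_actor_get_paint_opacity (ClutterActor *self)
  {
    g_return_val_if_fail (CLUTTER_IS_ACTOR (self), 0);
    return get_paint_opacity_internal (self);
  }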
• Remove unused variables.
• Do not pre-initialize ClutterActor's GType; pre-emptive optimizations
like these are more black magic than real optimization.
Remove a useless assignment: n_expand_children is not used outside
the extra_space check, and if n_expand_children is 0 then the extra
space we allocate is 0.
• Remove one unused variable.
• We ignore the result of get_timeline_internal() so we need to tell
the compiler that - though a better solution would be to split the
timeline implicit creation into its own function.
Do not de-reference a void*; use a temporary variable -- after
checking the contents of the pointer. This actually simplifies
the readability and avoids pulling a Lisp with the parentheses.
The function _cogl_get_max_texture_units is called quite often while
rendering and it returns a constant value so we might as well cache
the result. Calling glGetIntegerv on Mesa can be expensive because it
flushes a lot of state.
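A hedged sketch of the caching pattern:
  static GLint
  _cogl_get_max_texture_units (void)
  {
    static GLint max_texture_units = -1;
    /* the value is constant for the lifetime of the GL context, so
     * only ask the driver the first time round */
    if (G_UNLIKELY (max_texture_units == -1))
      glGetIntegerv (GL_MAX_TEXTURE_UNITS, &max_texture_units);
    return max_texture_units;
  }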
An initial pass over the Cogl source code using the Clang static
analysis tool flagged a few low-hanging issues, such as unused variables
or redundant variable initializations, which this patch fixes.
All the cogl_rectangle* APIs normalize their input into an array of
_CoglMutiTexturedRect rectangles and pass these on to our workhorse,
_cogl_rectangles_with_multitexture_coords. The definition of
_CoglMutiTexturedRect had 4 separate float members, x_1, y_1, x_2 and
y_2, which meant for some common cases we were having to copy out from
an array into these members. We are now able to simply point into the
user's array, avoiding a copy, which seems desirable when submitting
lots of rectangles.
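A hedged sketch of the new layout (member names are assumptions):
  typedef struct _CoglMutiTexturedRect
  {
    const float *position;       /* x_1, y_1, x_2, y_2 */
    const float *tex_coords;     /* (tx_1, ty_1, tx_2, ty_2) per layer */
    int          tex_coords_len;
  } CoglMutiTexturedRect;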
This uses the G_GNUC_DEPRECATED macros to mark the
cogl_{texture,vertex_buffer,shader}_ref and unref APIs as deprecated.
Since this flagged that cogl-pango-display-list.c and
clutter-glx-texture-pixmap.c were still using deprecated _ref/_unref
APIs they have now been changed to use the cogl_handle_ref/unref API
instead.
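For example, the deprecated prototypes now carry the attribute along
these lines (a sketch, not the verbatim header):
  CoglHandle
  cogl_texture_ref (CoglHandle handle) G_GNUC_DEPRECATED;
  void
  cogl_texture_unref (CoglHandle handle) G_GNUC_DEPRECATED;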
The function prototypes for the primitives API were spread between
cogl-path.h and cogl-texture.h and should have been in a
cogl-primitives.h.
As well as shuffling the prototypes around into more sensible places,
this commit splits the cogl-path API out from cogl-primitives.c into
a cogl-path.c.
We've had complaints that our Cogl code/headers are a bit "special" so
this is a first pass at tidying things up by giving them some
consistency. These changes are all consistent with how new code in Cogl
is being written, but the style isn't consistently applied across all
code yet.
There are two parts to this patch; but since each one required a large
amount of effort to maintain tidy indenting it made sense to combine the
changes to reduce the time spent re-indenting the same lines.
The first change is to use a consistent style for declaring function
prototypes in headers. Cogl headers now consistently use this style for
prototypes:
return_type
cogl_function_name (CoglType arg0,
                    CoglType arg1);
Not everyone likes this style, but it seems that most of the currently
active Cogl developers agree on it.
The second change is to constrain the use of redundant glib data types
in Cogl. Uses of gint, guint, gfloat, glong, gulong and gchar have all
been replaced with int, unsigned int, float, long, unsigned long and char
respectively. When talking about pixel data; use of guchar has been
replaced with guint8, otherwise unsigned char can be used.
The glib types that we continue to use for portability are gboolean,
gint{8,16,32,64}, guint{8,16,32,64} and gsize.
The general intention is that Cogl should look palatable to the widest
range of C programmers, including those outside the Gnome community, so
- especially for the public API - we want to minimize the number of
foreign-looking typedefs.
OpenGL is an implementation detail for Cogl so it's not appropriate to
expose OpenGL extensions through the Cogl API.
Note: Clutter is currently still using this API, because it is still
doing raw GL calls in ClutterGLXTexturePixmap, so this introduces a
couple of (legitimate) build warnings while compiling Clutter.
This replaces code like this:
  if (CLUTTER_ACTOR_IS_VISIBLE (self))
    clutter_actor_queue_redraw (self);
with:
  clutter_actor_queue_redraw (self);
clutter_actor_queue_redraw internally knows what can be optimized when
the actor is not visible, but it also knows that the queue_redraw signal
must always be sent in case a ClutterClone is cloning a hidden actor.
ClutterGroup::foreach was recently changed (ref: ce030a3fce) to use
g_list_foreach() to iterate the children instead of manually iterating
the list so it would safely handle calls like:
  clutter_container_foreach (container, CLUTTER_CALLBACK (clutter_actor_destroy), NULL);
(In this example, clutter_actor_destroy will cause the list item
currently being iterated to be freed.)
There is a lot of duplication between ClutterGroup and ClutterBox so
this makes the two files diff-able so that new fixes can easily be
ported to both and bug fixes missing in one or the other can be spotted
more easily. This doesn't change the behaviour of either actor; it's
really just a shuffle around of code and normalizes the coding style to
make the files comparable.
This has already uncovered one bug in ClutterBox, and also highlights
a bug in ClutterGroup + many other actors:
1) ClutterGroup::real_foreach was recently changed to use
g_list_foreach instead of manually iterating the child list so it can
safely handle calls like:
  clutter_container_foreach (container, CLUTTER_CALLBACK (clutter_actor_destroy), NULL);
ClutterBox is still manually iterating the list.
2) In ClutterGroup we guard _queue_redraw() calls like this:
  if (CLUTTER_ACTOR_IS_VISIBLE (container))
    clutter_actor_queue_redraw (CLUTTER_ACTOR (container));
In ClutterBox we don't.
I think ClutterBox is correct here because
clutter_actor_queue_redraw already optimizes the case where the
actor's not visible, but it also considers that the actor may be
cloned, and so the guard in ClutterGroup could break clones. This
actually highlights a wider Clutter bug, since the same kinds of
guards can be found in all other Clutter actors.
The signbit macro is defined in C99 so it should be available but some
versions of GCC don't appear to define it by default. If it's not
available we can use a hack to test the bit directly.
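A hedged sketch of such a fallback for single-precision floats (the
helper name is illustrative and the real macro may differ):
  #ifndef signbit
  static inline int
  float_signbit (float x)
  {
    guint32 bits;
    /* copy the float's bit pattern and test the sign bit directly */
    memcpy (&bits, &x, sizeof (bits));
    return (bits & 0x80000000u) != 0;
  }
  #define signbit(x) float_signbit (x)
  #endif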
If the stage associated with the InputDevice is not set we should
short-circuit out and return NULL. This will result in a pick()
done on the event's stage - if applicable.
http://bugzilla.moblin.org/show_bug.cgi?id=9602
Instead of returning a sub-pixel height, round up the preferred height
to the nearest integral value that is not less than the size reported
by Pango, once converted to pixels.
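One way to express that rounding, assuming the extents come straight
from Pango in Pango units, is Pango's own ceiling macro:
  /* round up to the nearest whole pixel instead of truncating */
  height = PANGO_PIXELS_CEIL (logical_rect.height);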
This fixes some backwards logic for asserting that we have a GLX major
version == 1 and a minor version >= 2. (NB: Although we technically
depend on GLX 1.3 features, we still have to support drivers that report
GLX 1.2, because there are a lot of Mesa drivers out there that
incorrectly report GLX 1.2 even though they export extensions that
depend on GLX 1.3.)
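A hedged sketch of the intended check (variable names are assumed):
  /* require GLX 1.2 or later, i.e. major == 1 and minor >= 2 */
  if (!(glx_major == 1 && glx_minor >= 2))
    {
      g_critical ("GLX 1.2 or better is required");
      return FALSE;
    }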
Two material layers cannot be considered equal if they are using
different texture filtering modes. This was causing problems where rectangles
with different filters would end up batched together and then rendered
with the wrong filter mode.
If your OpenGL driver supports GLX_INTEL_swap_event, then when
glXSwapBuffers is called it returns immediately and an XEvent is sent
when the actual swap has finished.
Clutter can use the events that notify swap completion as a means to
throttle rendering in the master clock without blocking the CPU, so it
should help improve the performance of CPU-bound applications.
Some extensions only support GLX versions >= 1.3 and may not support
old-style X Windows as GLXDrawables, so we now create GLXWindows for
stages when possible.
Commit d2bdd3cb62 fixed some compiler warnings but also broke the
ability to create a stage. Although not having warnings from the
compiler is nice, it is also nice to be able to create a stage, so let's
not invert the meaning of the error check.
The function modifies the pixels pointed to by p in-place, so the
pointer cannot be constant. The compiler was accepting this because the
modification is done from inline assembler.
_cogl_texture_driver_gen is needed to set the texture minification
mode to Cogl's default of GL_LINEAR. There was also a line to set this
in _cogl_texture_2d_new_with_size but it wasn't working because it was
called *before* the texture was bound. If the texture was later
rendered with the default material, it would end up with GL's
default mipmap filtering mode but without mipmaps, so it would render
white squares instead.
This adds a fast path for premultiplying an RGBA image using SSE2
instructions. SSE registers are 128-bit and we need at least 16-bits
per component for the intermediate result of the multiplication so we
can do two pixels in parallel with one register. The function
interleaves 2 SSE registers to multiply 4 pixels in one function call
with the hope that this will pipeline better.
http://bugzilla.openedhand.com/show_bug.cgi?id=1939
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
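For reference, a scalar sketch of the per-pixel arithmetic that the
SSE2 path vectorizes (this is not the committed code; it assumes the
alpha byte is last):
  static void
  premultiply_rgba_scalar (guint8 *p)
  {
    guint alpha = p[3];
    guint i;
    for (i = 0; i < 3; i++)
      {
        /* t / 255 computed with the usual (t + (t >> 8)) >> 8 trick */
        guint t = p[i] * alpha + 128;
        p[i] = (t + (t >> 8)) >> 8;
      }
  }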
• Add the function name in the warning, since the text is the same in
both clutter_actor_raise() and clutter_actor_lower().
• If an actor has a name then prefer it to the type name.
OpenGL ES has no PBO extension, so we fall back to using a malloc'ed
buffer. Make sure the OpenGL-only defines don't leak into the OpenGL ES
compilation.
First, let's add a new public feature called, surprisingly,
COGL_FEATURE_PBOS to check the availability of PBOs and provide a
fallback path when running on older GL implementations or on OpenGL ES.
In case the underlying OpenGL implementation does not provide PBOs, we
need a fallback path (a malloc'ed buffer). The CoglPixelBuffer
constructors will instantiate a subclass of CoglBuffer that handles
map/unmap and set_data() with a malloc'ed buffer.
The public feature is useful to check before using set_data() on a
buffer, as set_data() implies a memcpy() when PBOs are not supported
(in that case, it's better to create the texture directly instead of
going through a CoglBuffer).
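A hedged sketch of checking the feature before deciding how to upload
(the format, rowstride and data variables are illustrative):
  if (!cogl_features_available (COGL_FEATURE_PBOS))
    {
      /* no PBOs: set_data() would imply an extra memcpy(), so it's
       * cheaper to create the texture directly from the client data */
      texture = cogl_texture_new_from_data (width, height,
                                            COGL_TEXTURE_NONE,
                                            format, format,
                                            rowstride, data);
    }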
The only goal of using Cogl buffers is to use them to create
textures. cogl_texture_new_from_buffer() is the new symbol to create
textures out of buffers.
This subclass of CoglBuffer aims at wrapping PBOs or other system
surfaces like DRM buffer objects. Two constructors are available:
cogl_pixel_buffer_new() with a size when you only care about the size of
the buffer (such a buffer can be used to store the data of several
textures, such as the three planes of an I420 frame).
cogl_pixel_buffer_new_full() is more of a 1:1 mapping between the data and
an underlying surface, with the possibility of having access to a low
level memory buffer that may have a stride.