The pipeline private data is accessed both from the private data set
on a CoglPipeline and the destroy notify function of a weak material
that the vertex buffer creates when it needs to override the wrap
mode. However when a CoglPipeline is destroyed, the CoglObject code
first removes all of the private data set on the object and then the
CoglPipeline code gets invoked to destroy all of the weak children. At
this point the vertex buffer's weak override destroy notify function
will get invoked and try to use the private data, which has already
been freed, causing a crash.
This patch instead adds a reference count to the pipeline private data
struct so that we can avoid freeing it until both the private data on
the pipeline has been destroyed and all of the weak materials are
destroyed.
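For illustration, a minimal sketch of the refcounting idea (the
struct and function names here are illustrative, not the actual Cogl
symbols):

#include <glib.h>

/* Stand-in for the vertex buffer's pipeline private data; the real
 * struct carries the wrap mode override state. */
typedef struct
{
  unsigned int ref_count;
  /* ... wrap mode override state ... */
} PipelinePrivData;

static PipelinePrivData *
pipeline_priv_data_ref (PipelinePrivData *priv)
{
  priv->ref_count++;
  return priv;
}

static void
pipeline_priv_data_unref (PipelinePrivData *priv)
{
  /* Called from both the CoglObject private data destroy notify and
   * the weak material's destroy notify; whichever runs last frees
   * the struct, so neither destruction order can crash. */
  if (--priv->ref_count == 0)
    g_free (priv);
}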
http://bugzilla.clutter-project.org/show_bug.cgi?id=2544
In cogl_pipeline_set_layer_combine_constant it was comparing whether
the new color is the same as the old color using a memcmp on the
constant_color parameter. However the combine constant is stored in
the layer data as an array of four floats but the passed in color is a
CoglColor (which is currently an array of four guint8s). This was
causing valgrind errors, and presumably the check for setting the
same color twice would also always fail.
This patch makes it do the conversion to a float array upfront before
the comparison.
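Roughly like this (the cogl_color_get_*_float getters are real API;
the float[4] parameter stands in for the layer's stored combine
constant):

#include <string.h>
#include <cogl/cogl.h>

/* Sketch: returns TRUE if constant_color matches the combine
 * constant already stored on the layer as four floats. */
static gboolean
combine_constant_equal (const CoglColor *constant_color,
                        const float     *layer_combine_constant)
{
  float color_as_floats[4];

  color_as_floats[0] = cogl_color_get_red_float (constant_color);
  color_as_floats[1] = cogl_color_get_green_float (constant_color);
  color_as_floats[2] = cogl_color_get_blue_float (constant_color);
  color_as_floats[3] = cogl_color_get_alpha_float (constant_color);

  return memcmp (color_as_floats, layer_combine_constant,
                 4 * sizeof (float)) == 0;
}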
Instead of just setting the input device pointer in the private event
data, it should also set the field in the event sub-types, so that
direct access to the structures still works.
cogl_matrix_project_points and cogl_matrix_transform_points had an
optimization for the common case where the stride parameters exactly
match the size of the corresponding structures. The code for both,
when generated by gcc with -O2 on x86-64, uses two registers to hold
the addresses of the input and output arrays. In the strided version
these
pointers are incremented by adding the value of a register and in the
packed version they are incremented by adding an immediate value. I
think the difference in cost here would be negligible and it may even
be faster to add a register.
Also GCC appears to retain the loop counter in a register for the
strided version, but in the packed version it can optimize it out and
directly use the input pointer as the counter. I think it would be
possible to reorder the code a bit to explicitly use the input pointer
as the counter if this were a problem.
Getting rid of the packed versions tidies up the code a bit, and it
could potentially even be faster, since the code differences are
small and we avoid an extra conditional in
cogl_matrix_transform_points.
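The surviving strided path looks roughly like this (names and layout
are illustrative); with stride == sizeof (Point3f) it visits exactly
the same memory as the packed fast path did, the only difference
being that the pointers advance by a register instead of an
immediate:

#include <stddef.h>

typedef struct { float x, y, z; } Point3f;

static void
transform_points_strided (const float *matrix, /* 4x4, column major */
                          size_t stride_in, const void *points_in,
                          size_t stride_out, void *points_out,
                          int n_points)
{
  int i;

  for (i = 0; i < n_points; i++)
    {
      const Point3f *p =
        (const Point3f *) ((const char *) points_in + i * stride_in);
      Point3f *o =
        (Point3f *) ((char *) points_out + i * stride_out);

      o->x = matrix[0] * p->x + matrix[4] * p->y +
             matrix[8] * p->z + matrix[12];
      o->y = matrix[1] * p->x + matrix[5] * p->y +
             matrix[9] * p->z + matrix[13];
      o->z = matrix[2] * p->x + matrix[6] * p->y +
             matrix[10] * p->z + matrix[14];
    }
}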
Use a DeviceManager sub-class similar to the Win32 backend one, which
creates two InputDevices: a core pointer and a core keyboard.
The event translation code then uses these two devices to fill out the
.device field of the events.
Throw in enter/leave tracking, given that we need to update the device's
state.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2490
Implementation of an event loop which works with GLib events, native OS X
events and Clutter events.
The event loop source code comes from the equivalent code in the Quartz
GDK backend from GTK+ 2.22.1, which is LGPL v2.1+ and thus compatible
with Clutter's licensing terms.
The code has been tested with libsoup, which previously did not work
together with Clutter.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
http://bugzilla.clutter-project.org/show_bug.cgi?id=2490
Since we need to find the stage from the X11 Window, it's better to use
a static hashmap that gets updated every time the ClutterStageX11:xwin
member is changed, instead of iterating over every stage handled by the
global ClutterStageManager singleton.
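Sketched with assumed names (the real table lives in the X11 backend
and is keyed on the stage's X Window):

#include <glib.h>
#include <X11/Xlib.h>

static GHashTable *stages_by_xwindow = NULL;

/* Called whenever ClutterStageX11:xwin changes */
static void
stage_x11_register_xwindow (gpointer stage_x11, Window xwin)
{
  if (G_UNLIKELY (stages_by_xwindow == NULL))
    stages_by_xwindow = g_hash_table_new (NULL, NULL);

  g_hash_table_insert (stages_by_xwindow,
                       GUINT_TO_POINTER (xwin), stage_x11);
}

/* O(1) lookup from the event translation code */
static gpointer
stage_x11_lookup_by_xwindow (Window xwin)
{
  if (stages_by_xwindow == NULL)
    return NULL;

  return g_hash_table_lookup (stages_by_xwindow,
                              GUINT_TO_POINTER (xwin));
}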
Clutter should just require that the windowing system used by a backend
adds a device to the stage when the device enters, and removes it from
the stage when the device leaves; with this information, we can
synthesize every crossing event and update the device state without
further intervention from the backend-specific code.
The generation of additional crossing events for actors that are
covering the stage at the coordinates of the crossing event should be
delegated to the event processing code.
The x11 and win32 backends need to be modified to relay the enter and
leave events from the windowing system.
When synthesizing events coming from input devices it should be
possible to just call a setter function, to avoid a huge switch
on the type of the event.
Clutter should also store the device pointer inside the private
data, for faster access of the pointer in allocated events.
Finally, the get_device_id() and get_device_type() accessors should
just be wrappers around clutter_event_get_device(), to reduce the
amount of code duplication.
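The wrappers would be thin, along these lines (bodies assumed; the
fallback values are illustrative):

#include <clutter/clutter.h>

gint
clutter_event_get_device_id (const ClutterEvent *event)
{
  ClutterInputDevice *device = clutter_event_get_device (event);

  return device != NULL
       ? clutter_input_device_get_device_id (device)
       : -1;
}

ClutterInputDeviceType
clutter_event_get_device_type (const ClutterEvent *event)
{
  ClutterInputDevice *device = clutter_event_get_device (event);

  return device != NULL
       ? clutter_input_device_get_device_type (device)
       : CLUTTER_POINTER_DEVICE;
}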
Since we access it in order to get the X11 Display pointer, it makes
sense to have the ClutterBackendX11 already available inside the
ClutterStageX11 structure, and avoid the pattern:
ClutterBackend *backend = clutter_get_default_backend ();
ClutterBackendX11 *backend_x11 = CLUTTER_BACKEND_X11 (backend);
which costs us a function call, a type cast and an unused variable.
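With the backend cached in the structure (field name assumed) the
same access becomes a plain field read:

ClutterBackendX11 *backend_x11 = stage_x11->backend;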
When we receive a ConfigureNotify event that doesn't affect the size
of the window (only the position), we were still calling
clutter_stage_ensure_viewport, which ends up queueing a full stage
redraw. This patch makes it so that the viewport is only ensured when
the size changes, as was already done to avoid queueing a relayout.
It now also avoids setting the clipped-redraws cool-off period when
the window only moves, under the assumption that that's only
necessary for size changes.
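In outline (variable and field names assumed):

#include <clutter/clutter.h>
#include <X11/Xlib.h>

static void
handle_configure_notify (ClutterStage *stage,
                         gint *cached_width, gint *cached_height,
                         const XConfigureEvent *xev)
{
  /* Only a real size change ensures the viewport (and queues the
   * relayout); a position-only change does neither. */
  if (*cached_width != xev->width || *cached_height != xev->height)
    {
      *cached_width = xev->width;
      *cached_height = xev->height;

      clutter_actor_set_size (CLUTTER_ACTOR (stage),
                              xev->width, xev->height);
      clutter_stage_ensure_viewport (stage);
    }
}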
Since the XI2 device manager code is going to be compiled only on
POSIX compliant systems, we can safely assume the presence of stdint.h
and include it unconditionally.
CLUTTER_BIND_POSITION and CLUTTER_BIND_SIZE are two convenience
enumeration values for binding x and y, and width and height
respectively, using a single ClutterBindConstraint.
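For example, assuming actor and source are in scope, a single
constraint now replaces separate CLUTTER_BIND_X and CLUTTER_BIND_Y
ones:

ClutterConstraint *constraint =
  clutter_bind_constraint_new (source, CLUTTER_BIND_POSITION, 0.0f);

clutter_actor_add_constraint (actor, constraint);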
When copying COMBINE state in
_cogl_pipeline_layer_init_multi_property_sparse_state we would read some
state from the destination layer (potentially invalid data), then
redundantly set the value back on the destination. This was picked up by
valgrind, and the code is now more careful about how it references the
src layer vs the destination layer.
There is currently a problem with per-framebuffer journals in that
it's possible to create a framebuffer from a texture which then gets
rendered to, but the framebuffer (and corresponding journal) can be
freed before the texture gets used to draw with.
Conceptually we want to make sure when freeing a framebuffer that - if
it is associated with a texture - we flush the journal as the last thing
before really freeing the framebuffer's meta data. Technically though
this is awkward to implement since the obvious mechanism for us to be
notified about the framebuffer's destruction (by setting some user data
internally with a callback) notifies when the framebuffer has a
ref-count of 0. This means we'd have to be careful what we do with the
framebuffer, considering e.g. recursive destruction: anything that
would set more user data on the framebuffer while it is being
destroyed, and ensuring nothing else gets notified of the
framebuffer's destruction before the journal has been flushed.
For simplicity, for now, this patch provides another solution which is
to flush framebuffer journals whenever we switch away from a given
framebuffer via cogl_set_framebuffer or cogl_push/pop_framebuffer. The
disadvantage of this approach is that we can't batch all the geometry of
a scene that involves intermediate renders to offscreen framebuffers.
Clutter is doing this more and more with applications that use the
ClutterEffect APIs so this is a shame. Hopefully this will only be a
stop-gap solution while we consider how to reliably support journal
logging across framebuffer changes.
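The stop-gap amounts to something like this (internal function names
assumed):

void
cogl_set_framebuffer (CoglFramebuffer *framebuffer)
{
  CoglFramebuffer *current = _cogl_get_framebuffer ();

  /* flush the journal of the framebuffer we are switching away
   * from, so a texture it renders to can safely outlive it */
  if (current != NULL && current != framebuffer)
    _cogl_framebuffer_flush_journal (current);

  /* ... make framebuffer current as before ... */
}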
When flushing a clip stack that contains more than one rectangle which
needs to use the stencil buffer the code takes a different path so
that it can combine the new rectangle with the existing contents of
the stencil buffer. However it was not correctly flushing the
modelview and projection matrices, so the rectangle would end up in
the wrong place.
This adds a COGL_DEBUG=clipping option that reports how the clip is
being flushed. This is needed to determine whether the scissor, the
stencil buffer, clip planes or software clipping is being used.
The CoglDebugFlags are now stored in an array of unsigned ints rather
than a single variable. The flags are accessed using macros instead of
directly peeking at the cogl_debug_flags variable. The index values
are stored in the enum rather than the actual mask values so that the
enum doesn't need to be more than 32 bits wide. The hope is that the
code to determine the index into the array can be optimized out by the
compiler so it should have exactly the same performance as the old
code.
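A sketch of the scheme (macro and array names assumed;
COGL_DEBUG_N_FLAGS stands for the last value of the index enum):

#define COGL_DEBUG_N_INTS ((COGL_DEBUG_N_FLAGS + 31) / 32)

extern unsigned int _cogl_debug_flags[COGL_DEBUG_N_INTS];

/* The enum stores bit indices, so the index -> (slot, mask) math
 * below folds to compile-time constants and the test costs the same
 * as peeking at the old single variable. */
#define COGL_DEBUG_ENABLED(flag) \
  (!!(_cogl_debug_flags[(flag) / 32] & (1U << ((flag) & 31))))

#define COGL_DEBUG_SET_FLAG(flag) \
  (_cogl_debug_flags[(flag) / 32] |= (1U << ((flag) & 31)))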
The lighting parameters such as the diffuse and ambient colors were
previously only flushed in the fixed vertend. This meant that if a
vertex shader was used then they would not be set. The lighting
parameters are uniforms which are just as useful in a fragment shader
so it doesn't really make sense to set them in the vertend. They are
now flushed in the common cogl-pipeline-opengl code but the code is
#ifdef'd for GLES2 because they need to be part of the progend in that
case.
The uniforms for the alpha test reference value and point size on
GLES2 were previously updated using similar code. This generalizes
the code so
that there is a static array of predefined builtin uniforms which
contains the uniform name, a pointer to a function to get the value
from the pipeline, a pointer to a function to update the uniform and a
flag representing which CoglPipelineState change affects the
uniform. The uniforms are then updated in a loop. This should simplify
adding more builtin uniforms.
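The table-driven update looks roughly like this (types and names are
illustrative, not the actual Cogl internals):

typedef struct _CoglPipeline CoglPipeline; /* stand-in declaration */

typedef struct
{
  const char *uniform_name;
  /* fetch the current value from the pipeline */
  float (* get_value) (const CoglPipeline *pipeline);
  /* upload the value to the program */
  void (* update) (int location, float value);
  /* which CoglPipelineState change dirties the uniform */
  unsigned long change_mask;
} BuiltinUniform;

static void
update_builtin_uniforms (const CoglPipeline *pipeline,
                         unsigned long changes,
                         const int *locations,
                         const BuiltinUniform *uniforms,
                         int n_uniforms)
{
  int i;

  /* only re-upload the uniforms whose state actually changed */
  for (i = 0; i < n_uniforms; i++)
    if (changes & uniforms[i].change_mask)
      uniforms[i].update (locations[i],
                          uniforms[i].get_value (pipeline));
}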
The builtin uniforms are accessible from either the vertex shader or
the fragment shader so we should define them in the common
section. This doesn't really matter for the current list of uniforms
because it's pretty unlikely that you'd want to access the matrices
from the fragment shader, but for other builtins such as the lighting
material properties it makes sense.
Between Clutter 0.8 and 1.0, the new-frame signal of ClutterTimeline
changed the second parameter to be an elapsed time in milliseconds
rather than the frame number. However a few places in clutter were
still calling the parameter 'frame_num' which is a bit
misleading. Notably the signature for the signal class closure in the
header was using the wrong name. This changes them to use 'msecs'.
ClutterTimeline has special handling for the first time do_tick is
called, which did not emit a new-frame signal. This meant that an
application which directly uses the timeline would have to manually
setup the initial state of an animation after starting a timeline to
avoid painting a single frame with the wrong state. It seems to make
more sense to instead emit the new-frame signal so that the
application always sees a new-frame when the progress changes before a
paint.
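With the change, a handler like the following also runs for the first
frame after clutter_timeline_start (), so the initial state no longer
needs to be set up by hand (example code, not from the patch):

#include <clutter/clutter.h>

static void
on_new_frame (ClutterTimeline *timeline,
              gint             msecs,
              gpointer         user_data)
{
  ClutterActor *actor = user_data;
  gdouble progress = clutter_timeline_get_progress (timeline);

  /* fade the actor in as the timeline progresses */
  clutter_actor_set_opacity (actor, (guint8) (255 * progress));
}

/* ... */
g_signal_connect (timeline, "new-frame",
                  G_CALLBACK (on_new_frame), actor);
clutter_timeline_start (timeline);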
This adds a custom "rows" property that allows defining the rows of a
ClutterModel. A single row can be either an array of all columns or
an object with column-name : column-value pairs.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2528
Allow 'abusing' the clutter_script_parse_node function by calling it
with an initialized GValue instead of a valid GParamSpec argument, to
obtain the correctly typed value from the json node.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2528
* xi2: (41 commits)
test-devices: Actually print the axis data
device-manager/xi2: Sync the stage of source devices
event: Clean up clutter_event_copy()
device: unset the axes array pointer when resetting
device-manager/xi2: Fix device hotplugging
glx: Clean up GLX implementation
device/x11: Store min/max keycode in the XI device class
x11: Hide all private symbols
docs: More documentation fixes for InputDevice
*/event: Never manipulate the event queue directly
win32: Update DeviceManager device creation
device: Allow enabling/disabling non-master devices
backend/eglx: Add newly created stages to the translators
device: Add more doc annotations
device: Use a double for translate_axis() argument
test-devices: Clean up and show axes data
event: Fix up clutter_event_copy()
device/xi2: Translate the axis data after setting devices
device: Add more accessors for properties
docs: Update API reference
...
When we added the texture->framebuffers member a _cogl_texture_init
function was added to initialize the list of framebuffers associated
with a texture to NULL. All the backends were updated except the
x11 tfp backend. This was causing crashes in test-pixmap.
This is part of a broader cleanup of some of the experimental Cogl API.
One of the reasons for this particular rename is to reduce the verbosity
of using the API. Another reason is that CoglVertexArray is going to be
renamed CoglAttributeBuffer and we want to help emphasize the
relationship between CoglAttributes and CoglAttributeBuffers.
We have a bunch of experimental convenience functions like
cogl_primitive_p2/p2t2 that have corresponding vertex structures but it
seemed a bit odd to have the vertex annotation, e.g. "P2T2", be an
infix of the type name, as in CoglP2T2Vertex, instead of a postfix,
as in CoglVertexP2T2. This switches them all to follow the postfix
naming
style.
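For example, the 2D position plus texture coordinate vertex now reads
(layout as in the experimental headers at the time):

typedef struct
{
  float x, y; /* P2: position */
  float s, t; /* T2: texture coordinate */
} CoglVertexP2T2;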
COGL_DEBUG=disable-fast-read-pixel can be used to disable the
optimization for reading a single pixel colour back by looking at the
geometry in the journal and not involving the GPU. With this disabled we
will always flush the journal, render to the framebuffer and then use
glReadPixels to get the result.
This adds a transparent optimization to cogl_read_pixels for when a
single pixel is being read back and it happens that all the geometry of
the current frame is still available in the framebuffer's associated
journal.
The intention is to indirectly optimize Clutter's render based picking
mechanism in such a way that, in the 99% of cases where scenes are
comprised of trivial quad primitives that can easily be intersected,
we can avoid
the latency of kicking a GPU render and blocking for the result when we
know we can calculate the result manually on the CPU probably faster
than we could even kick a render.
A nice property of this solution is that it maintains all the
flexibility of the render based picking provided by Clutter and it can
gracefully fall back to GPU rendering if actors are drawn using anything
more complex than a quad for their geometry.
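Conceptually the fast path does something like this (a sketch only,
with made-up types; the real code walks the journal entries):

#include <glib.h>

typedef struct { float r, g, b, a; } Color;

typedef struct
{
  gboolean is_simple_quad; /* axis-aligned, flat-colored quad? */
  float x1, y1, x2, y2;    /* screen-space bounds */
  Color color;
} Entry;

static gboolean
try_pick_pixel (const Entry *entries, int n_entries,
                float x, float y, Color *color_out)
{
  int i;

  /* walk back-to-front so the top-most primitive wins */
  for (i = n_entries - 1; i >= 0; i--)
    {
      if (!entries[i].is_simple_quad)
        return FALSE; /* bail out: kick a render + glReadPixels */

      if (x >= entries[i].x1 && x < entries[i].x2 &&
          y >= entries[i].y1 && y < entries[i].y2)
        {
          *color_out = entries[i].color;
          return TRUE;
        }
    }

  return FALSE; /* nothing drawn here; the clear color applies */
}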
It seems worth noting that there is a limitation to the extensibility of
this approach in that it can only optimize picking against geometry
that passes through Cogl's journal which isn't something Clutter
directly controls. For now though this really doesn't matter since
basically all apps should end up hitting this fast-path. The current
idea to address this longer term would be a pick2 vfunc for ClutterActor
that can support geometry and render based input regions of actors and
move this optimization up into Clutter instead.
Note: currently we don't have a primitive count threshold beyond
which we'd consider a scene to have enough geometry that kicking a
render and utilizing the GPU would determine the result more
efficiently, compensating for the cost of the render. We don't
currently expect this to be common though.
Note: in the future it could still be interesting to revive something
like the wip/async-pbo-picking branch to provide an asynchronous
read-pixels based optimization for Clutter picking in cases where more
complex input regions that necessitate rendering are in use or if we do
add a threshold for rendering as mentioned above.