* elliot/cookbook-actors-composite:
docs: Add reference to useful GObject tutorial
docs: Explain why destroy() is implemented
docs: Implement destroy() rather than dispose()
docs: Don't use clutter_stage_get_default()
docs: Change text on button
docs: Add a note about other state variables
docs: Complete composite actor recipe
docs: Change order of functions in example to match docs
docs: Add more comments on how allocate() works
docs: Include code examples in the recipe
docs: Explain enums for properties and signals
docs: Don't set explicit size on button
docs: Add example of preferred_height() and preferred_width()
docs: Add recipe for creating a custom ClutterActor with composition
docs: Add more comments on code example for composite actor
docs: Improve example code formatting
docs: Add some gtk-doc annotations to example
docs: Add custom ClutterActor example which uses composition
Use a DeviceManager sub-class similar to the Win32 backend one, which
creates two InputDevices: a core pointer and a core keyboard.
The event translation code then uses these two devices to fill out the
.device field of the events.
Throw in enter/leave tracking, given that we need to update the device's
state.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2490
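For illustration, the event translation code can assign the core devices created by the backend's DeviceManager straight into the event structures. The helper names below are illustrative, not the actual backend code:

/* Hypothetical sketch: filling the .device field of translated events
 * with the backend's core pointer and core keyboard. */
static void
translate_motion_event (ClutterEvent       *event,
                        ClutterInputDevice *core_pointer)
{
  event->motion.device = core_pointer;
}

static void
translate_key_event (ClutterEvent       *event,
                     ClutterInputDevice *core_keyboard)
{
  event->key.device = core_keyboard;
}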
Implementation of an event loop that works with GLib events, native
OS X events and Clutter events.
The event loop source code comes from the equivalent code in the Quartz
GDK backend from GTK+ 2.22.1, which is LGPL v2.1+ and thus compatible
with Clutter's licensing terms.
The code has been tested with libsoup, which previously did not work
together with Clutter.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
http://bugzilla.clutter-project.org/show_bug.cgi?id=2490
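The integration with the GLib main loop is done with a custom GSource. A minimal, self-contained skeleton of that mechanism is sketched below; the prepare/check/dispatch bodies are placeholders, not the backend's actual logic:

#include <glib.h>

static gboolean
event_prepare (GSource *source, gint *timeout)
{
  *timeout = -1;   /* no fixed timeout; readiness depends on native events */
  return FALSE;
}

static gboolean
event_check (GSource *source)
{
  /* the real source would poll the native OS X event queue here */
  return FALSE;
}

static gboolean
event_dispatch (GSource *source, GSourceFunc callback, gpointer user_data)
{
  /* the real source would translate and process pending native events */
  return TRUE;     /* keep the source installed */
}

static GSourceFuncs event_funcs = {
  event_prepare,
  event_check,
  event_dispatch,
  NULL
};

GSource *
event_source_new (void)
{
  return g_source_new (&event_funcs, sizeof (GSource));
}

The backend would then attach the returned source to the default main context with g_source_attach().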
Remove the dispose() implementation and replace it
with destroy().
This should be promoted as the standard approach
for implementing a composite actor, as it emits a
signal when instances of the actor subclass are destroyed.
Add some extra detail to the Discussion section of the
composite actor recipe, concentrating on the pros and
cons of this approach.
Also explain more about the Clutter parts of the implementation,
and generally tidy up the language and style.
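A minimal sketch of the destroy() pattern this refers to, for a composite actor built around a single child; the CbButton names and the priv->child field are illustrative:

static void
cb_button_destroy (ClutterActor *self)
{
  CbButtonPrivate *priv = CB_BUTTON (self)->priv;

  /* destroy the single child actor the button is composed from */
  if (priv->child != NULL)
    {
      clutter_actor_destroy (priv->child);
      priv->child = NULL;
    }

  /* chain up so the parent class's destroy handler runs too */
  if (CLUTTER_ACTOR_CLASS (cb_button_parent_class)->destroy != NULL)
    CLUTTER_ACTOR_CLASS (cb_button_parent_class)->destroy (self);
}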
Add some extra description to the allocate() function,
explaining how the allocation has to be adjusted to
coordinates relative to the actor as a whole before being
applied to the single child actor it is composed from.
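A sketch of that adjustment: the child's box is expressed relative to the composite actor's own origin, so it starts at (0, 0) and covers the whole allocation. The CbButton names are illustrative:

static void
cb_button_allocate (ClutterActor           *actor,
                    const ClutterActorBox  *box,
                    ClutterAllocationFlags  flags)
{
  CbButtonPrivate *priv = CB_BUTTON (actor)->priv;
  ClutterActorBox child_box = { 0, };

  /* set the allocation of the composite actor itself */
  CLUTTER_ACTOR_CLASS (cb_button_parent_class)->allocate (actor, box, flags);

  /* the child's coordinates are relative to the actor as a whole,
   * so its box spans from (0, 0) to the actor's full size */
  child_box.x1 = 0.0;
  child_box.y1 = 0.0;
  child_box.x2 = clutter_actor_box_get_width (box);
  child_box.y2 = clutter_actor_box_get_height (box);

  clutter_actor_allocate (priv->child, &child_box, flags);
}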
Include all the code examples inline as part of the recipe.
Remove sections around each code example, as these are
unnecessary; leave full discussion for the Discussion section
instead of trying to cram it in around the code example.
As most actor subclasses will probably want to implement
size requisition, give a simple example of how to do this
on the basis of the composed actor's size, plus some padding.
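A sketch of that size requisition for the height; get_preferred_width() follows the same pattern. The CbButton names and the 20px padding are illustrative:

static void
cb_button_get_preferred_height (ClutterActor *self,
                                gfloat        for_width,
                                gfloat       *min_height_p,
                                gfloat       *natural_height_p)
{
  CbButtonPrivate *priv = CB_BUTTON (self)->priv;

  /* start from the composed actor's preferred height... */
  clutter_actor_get_preferred_height (priv->child,
                                      for_width,
                                      min_height_p,
                                      natural_height_p);

  /* ...and add some padding on top of it */
  *min_height_p += 20.0;
  *natural_height_p += 20.0;
}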
Since we need to find the stage from the X11 Window, it's better to use
a static hashmap that gets updated every time the ClutterStageX11:xwin
member is changed, instead of iterating over every stage handled by the
global ClutterStageManager singleton.
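A sketch of how such a map could be kept up to date whenever the xwin member changes; the helper name and the exact fields are illustrative:

static GHashTable *clutter_stages_by_xid = NULL;

static void
clutter_stage_x11_set_xwindow (ClutterStageX11 *stage_x11,
                               Window           xwin)
{
  if (G_UNLIKELY (clutter_stages_by_xid == NULL))
    clutter_stages_by_xid = g_hash_table_new (NULL, NULL);

  /* drop the previous mapping, if any, before registering the new one */
  if (stage_x11->xwin != None)
    g_hash_table_remove (clutter_stages_by_xid,
                         GINT_TO_POINTER (stage_x11->xwin));

  stage_x11->xwin = xwin;

  g_hash_table_insert (clutter_stages_by_xid,
                       GINT_TO_POINTER (xwin),
                       stage_x11);
}

Looking up the stage for an X11 Window then becomes a single g_hash_table_lookup() instead of a walk over every stage.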
Clutter should just require that the windowing system used by a backend
adds a device to the stage when the device enters, and removes it from
the stage when the device leaves; with this information, we can
synthesize every crossing event and update the device state without
other intervention from the backend-specific code.
The generation of additional crossing events for actors that are
covering the stage at the coordinates of the crossing event should be
delegated to the event processing code.
The x11 and win32 backends need to be modified to relay the enter and
leave events from the windowing system.
When synthesizing events coming from input devices it should be
possible to just call a setter function, to avoid a huge switch
on the type of the event.
Clutter should also store the device pointer inside the private
data, for faster access to the pointer in allocated events.
Finally, the get_device_id() and get_device_type() accessors should
just be wrappers around clutter_event_get_device(), to reduce the
amount of code duplication.
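A sketch of what those wrappers could look like; the bodies are illustrative, but the accessors named are the existing public ones:

gint
clutter_event_get_device_id (ClutterEvent *event)
{
  ClutterInputDevice *device = clutter_event_get_device (event);

  if (device != NULL)
    return clutter_input_device_get_device_id (device);

  return -1;
}

ClutterInputDeviceType
clutter_event_get_device_type (ClutterEvent *event)
{
  ClutterInputDevice *device = clutter_event_get_device (event);

  if (device != NULL)
    return clutter_input_device_get_device_type (device);

  return CLUTTER_POINTER_DEVICE;
}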
Since we access it in order to get the X11 Display pointer, it makes
sense to have the ClutterBackendX11 already available inside the
ClutterStageX11 structure, and avoid the pattern:
ClutterBackend *backend = clutter_get_default_backend ();
ClutterBackendX11 *backend_x11 = CLUTTER_BACKEND_X11 (backend);
which costs us a function call, a type cast and an unused variable.
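A sketch of the resulting access pattern, assuming the backend pointer is stored in the stage structure at construction time; the field names are illustrative:

static Display *
stage_x11_get_display (ClutterStageX11 *stage_x11)
{
  /* no clutter_get_default_backend() call and no cast needed */
  return stage_x11->backend->xdpy;
}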
Cairo has recently changed so that it no longer adds a final move-to
command when the path ends with a close. This patch makes the test
check the run-time version number of Cairo to avoid duplicating this
behaviour when testing the conversion to and from a Cairo path.
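A sketch of such a run-time check; the version used as the cutoff here is only a placeholder, not the actual release where Cairo's behaviour changed:

#include <cairo.h>

static int
cairo_adds_final_move_to (void)
{
  /* placeholder cutoff: adjust to the Cairo release that changed this */
  return cairo_version () < CAIRO_VERSION_ENCODE (1, 10, 0);
}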
When we receive a ConfigureNotify event that doesn't affect the size
of the window (only the position) we were still calling
clutter_stage_ensure_viewport, which ends up queueing a full stage
redraw. This patch makes it so that the viewport is only ensured when
the size changes, as was already done to avoid queueing a relayout.
It now also avoids setting the clipped-redraws cool-off period when
the window only moves, on the assumption that this is only necessary
for size changes.
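A sketch of the check, with illustrative field names standing in for the backend's cached window size:

static void
handle_configure_notify (ClutterStage    *stage,
                         ClutterStageX11 *stage_x11,
                         XConfigureEvent *xev)
{
  gboolean size_changed = (xev->width  != stage_x11->xwin_width ||
                           xev->height != stage_x11->xwin_height);

  /* a move alone needs neither a relayout nor a new viewport */
  if (!size_changed)
    return;

  stage_x11->xwin_width  = xev->width;
  stage_x11->xwin_height = xev->height;

  clutter_actor_queue_relayout (CLUTTER_ACTOR (stage));
  clutter_stage_ensure_viewport (stage);
}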
Since the XI2 device manager code is going to be compiled only on
POSIX compliant systems, we can safely assume the presence of stdint.h
and include it unconditionally.
CLUTTER_BIND_POSITION and CLUTTER_BIND_SIZE are two convenience
enumeration values for binding x and y, and width and height
respectively, using a single ClutterBindConstraint.
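For example, an overlay actor can track both the position and the size of another actor with just two constraints instead of four:

static void
bind_overlay_geometry (ClutterActor *overlay,
                       ClutterActor *source)
{
  /* one constraint binds both x and y to the source actor... */
  clutter_actor_add_constraint (overlay,
      clutter_bind_constraint_new (source, CLUTTER_BIND_POSITION, 0.0));

  /* ...and one binds both width and height */
  clutter_actor_add_constraint (overlay,
      clutter_bind_constraint_new (source, CLUTTER_BIND_SIZE, 0.0));
}

Previously this required separate constraints for CLUTTER_BIND_X, CLUTTER_BIND_Y, CLUTTER_BIND_WIDTH and CLUTTER_BIND_HEIGHT.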
When copying COMBINE state in
_cogl_pipeline_layer_init_multi_property_sparse_state we would read some
state from the destination layer (potentially invalid data), then
redundantly set the value back on the destination. This was picked up by
valgrind, and the code is now more careful about how it references the
source layer versus the destination layer.
There is currently a problem with per-framebuffer journals in that it's
possible to create a framebuffer from a texture which then gets rendered
to, but the framebuffer (and corresponding journal) can be freed before
the texture gets used to draw with.
Conceptually we want to make sure when freeing a framebuffer that - if
it is associated with a texture - we flush the journal as the last thing
before really freeing the framebuffer's meta data. Technically though
this is awkward to implement, since the obvious mechanism for being
notified about the framebuffer's destruction (setting some user data
internally with a callback) only notifies when the framebuffer has a
ref-count of 0. This means we'd have to be careful about what we do with
the framebuffer at that point, handling e.g. recursive destruction
(anything that would set more user data on the framebuffer while it is
being destroyed) and ensuring nothing else gets notified of the
framebuffer's destruction before the journal has been flushed.
For simplicity, for now, this patch provides another solution which is
to flush framebuffer journals whenever we switch away from a given
framebuffer via cogl_set_framebuffer or cogl_push/pop_framebuffer. The
disadvantage of this approach is that we can't batch all the geometry of
a scene that involves intermediate renders to offscreen framebuffers.
Clutter is doing this more and more with applications that use the
ClutterEffect APIs, so this is a shame. Hopefully this will only be a
stop-gap solution while we consider how to reliably support journal
logging across framebuffer changes.
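For illustration, the kind of pattern this change covers, written against the public Cogl offscreen API; with this patch the offscreen's journal is flushed when switching away from it, so the texture is safe to sample even if the offscreen is freed afterwards:

#include <cogl/cogl.h>

static void
draw_scene_with_intermediate_render (void)
{
  CoglHandle tex = cogl_texture_new_with_size (256, 256,
                                               COGL_TEXTURE_NONE,
                                               COGL_PIXEL_FORMAT_RGBA_8888);
  CoglHandle offscreen = cogl_offscreen_new_to_texture (tex);

  cogl_push_framebuffer (offscreen);
  /* ... draw the intermediate scene here ... */
  cogl_pop_framebuffer ();   /* the offscreen's journal is flushed here */

  /* now draw with the texture on the previous framebuffer */
  cogl_set_source_texture (tex);
  cogl_rectangle (0, 0, 256, 256);

  cogl_handle_unref (offscreen);
  cogl_handle_unref (tex);
}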
When flushing a clip stack that contains more than one rectangle which
needs to use the stencil buffer the code takes a different path so
that it can combine the new rectangle with the existing contents of
the stencil buffer. However it was not correctly flushing the
modelview and projection matrices, so the rectangle would end up in
the wrong place.
With test-clip it's possible to draw three different shapes depending
on what mouse button is used: a rectangle, an ellipse or a path
containing multiple shapes. However the ellipse is also a path, so it
doesn't really test anything beyond what the third option covers. This
replaces the ellipse with a rectangle that is first rotated by the
modelview matrix. The rotated rectangle can't be clipped with the
scissor, so it can be used to test stencil and clip-plane clipping.
This adds a COGL_DEBUG=clipping option that reports how the clip is
being flushed. This is needed to determine whether the scissor, the
stencil buffer, the clip planes or software clipping is being used.
The CoglDebugFlags are now stored in an array of unsigned ints rather
than a single variable. The flags are accessed using macros instead of
directly peeking at the cogl_debug_flags variable. The index values
are stored in the enum rather than the actual mask values so that the
enum doesn't need to be more than 32 bits wide. The hope is that the
code to determine the index into the array can be optimized out by the
compiler so it should have exactly the same performance as the old
code.
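A sketch of the scheme described above; the names are illustrative rather than the actual Cogl symbols:

/* the flag enum stores array indices, not mask values, so it can grow
 * beyond 32 entries without widening the enum */
typedef enum
{
  EXAMPLE_DEBUG_CLIPPING,
  EXAMPLE_DEBUG_BATCHING,
  /* ... */
  EXAMPLE_DEBUG_N_FLAGS
} ExampleDebugFlag;

#define EXAMPLE_FLAG_BITS (sizeof (unsigned int) * 8)

/* enough unsigned ints to hold one bit per flag */
static unsigned int example_debug_flags[(EXAMPLE_DEBUG_N_FLAGS +
                                         EXAMPLE_FLAG_BITS - 1) /
                                        EXAMPLE_FLAG_BITS];

/* the macro turns the index into an array offset and a bit mask; with
 * a constant flag the compiler can fold this down to a single test */
#define EXAMPLE_DEBUG_ENABLED(flag) \
  ((example_debug_flags[(flag) / EXAMPLE_FLAG_BITS] & \
    (1u << ((flag) % EXAMPLE_FLAG_BITS))) != 0)

#define EXAMPLE_DEBUG_SET(flag) \
  (example_debug_flags[(flag) / EXAMPLE_FLAG_BITS] |= \
    (1u << ((flag) % EXAMPLE_FLAG_BITS)))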
The lighting parameters such as the diffuse and ambient colors were
previously only flushed in the fixed vertend. This meant that if a
vertex shader was used then they would not be set. The lighting
parameters are uniforms which are just as useful in a fragment shader
so it doesn't really make sense to set them in the vertend. They are
now flushed in the common cogl-pipeline-opengl code but the code is
#ifdef'd for GLES2 because they need to be part of the progend in that
case.
The uniforms for the alpha test reference value and point size on
GLES2 are updated using similar code. This generalizes the code so
that there is a static array of predefined builtin uniforms which
contains the uniform name, a pointer to a function to get the value
from the pipeline, a pointer to a function to update the uniform and a
flag representing which CoglPipelineState change affects the
uniform. The uniforms are then updated in a loop. This should simplify
adding more builtin uniforms.
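A sketch of the table-driven approach, assuming Cogl's internal headers are available; the type, helper and flag names below are illustrative, not the actual internals:

#include <glib.h>

/* illustrative stand-ins for a CoglPipelineState dirty flag and the
 * per-uniform helpers */
#define EXAMPLE_STATE_POINT_SIZE (1UL << 0)

static void
example_get_point_size (CoglPipeline *pipeline, float *value_out)
{
  /* the real getter would read the value from the pipeline */
  *value_out = 1.0f;
}

static void
example_update_float (int uniform_location, const float *value)
{
  /* the real updater would set the GL uniform at uniform_location */
}

typedef struct
{
  const char *uniform_name;    /* name of the uniform in the shader */
  void (* getter) (CoglPipeline *pipeline, float *value_out);
  void (* updater) (int uniform_location, const float *value);
  unsigned long dirty_flag;    /* pipeline state change that affects it */
} BuiltinUniformData;

static const BuiltinUniformData builtin_uniforms[] =
{
  { "example_point_size",
    example_get_point_size, example_update_float, EXAMPLE_STATE_POINT_SIZE },
  /* adding another builtin uniform is just another row here */
};

static void
update_builtin_uniforms (CoglPipeline  *pipeline,
                         unsigned long  pipelines_difference,
                         const int     *uniform_locations)
{
  unsigned int i;

  for (i = 0; i < G_N_ELEMENTS (builtin_uniforms); i++)
    if ((pipelines_difference & builtin_uniforms[i].dirty_flag) &&
        uniform_locations[i] != -1)
      {
        float value;

        builtin_uniforms[i].getter (pipeline, &value);
        builtin_uniforms[i].updater (uniform_locations[i], &value);
      }
}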
The builtin uniforms are accessible from either the vertex shader or
the fragment shader so we should define them in the common
section. This doesn't really matter for the current list of uniforms
because it's pretty unlikely that you'd want to access the matrices
from the fragment shader, but for other builtins such as the lighting
material properties it makes sense.