There is a lot of duplication between ClutterGroup and ClutterBox, so
this makes the two files diff-able: new fixes can easily be ported to
both, and bug fixes missing from one or the other can be spotted more
easily. This doesn't change the behaviour of either actor; it's really
just a shuffling around of code that normalizes the coding style to
make the files comparable.
This has already uncovered one bug in ClutterBox, and also highlights
a bug in ClutterGroup + many other actors:
1) ClutterGroup::real_foreach was recently changed to use
g_list_foreach instead of manually iterating the child list so it can
safely handle calls like:
clutter_container_foreach (container, clutter_actor_destroy);
ClutterBox is still manually iterating the list (see the sketch after
point 2 below).
2) In ClutterGroup we guard _queue_redraw() calls like this:

  if (CLUTTER_ACTOR_IS_VISIBLE (container))
    clutter_actor_queue_redraw (CLUTTER_ACTOR (container));

In ClutterBox we don't.
I think ClutterBox is correct here:
clutter_actor_queue_redraw() already optimizes the case where the
actor isn't visible, but it also considers that the actor may be
cloned, so the guard in ClutterGroup could break clones. This
actually highlights a wider Clutter bug, since the same kind of
guard can be found in all the other Clutter actors.
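A minimal sketch of the g_list_foreach() approach from point 1 (the
private struct and its children field are named as assumptions here,
not copied from the sources):

  static void
  clutter_group_real_foreach (ClutterContainer *container,
                              ClutterCallback   callback,
                              gpointer          user_data)
  {
    ClutterGroupPrivate *priv = CLUTTER_GROUP (container)->priv;

    /* g_list_foreach() fetches the next pointer before invoking the
     * callback, so it survives the callback destroying the current
     * child (e.g. clutter_actor_destroy); a hand-rolled loop reading
     * l->next after the callback would walk freed memory
     */
    g_list_foreach (priv->children, (GFunc) callback, user_data);
  }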
The signbit macro is defined in C99, so it should be available, but some
versions of GCC don't appear to define it by default. If it's not
available we can use a hack to test the bit directly.
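A minimal sketch of such a bit test, assuming IEEE-754 single precision
floats (the helper name is illustrative, not the one used in the tree):

  #include <stdint.h>

  #ifndef signbit
  static inline int
  float_sign_bit (float value)
  {
    union { float f; uint32_t u; } bits;

    bits.f = value;

    /* IEEE-754 single precision keeps the sign in the top bit */
    return (bits.u & 0x80000000u) != 0;
  }
  #define signbit(x) float_sign_bit ((float) (x))
  #endif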
If the stage associated with the InputDevice is not set we should
short-circuit out and return NULL. This will result in a pick() being
done on the event's stage - if applicable.
http://bugzilla.moblin.org/show_bug.cgi?id=9602
Instead of returning a sub-pixel height, round up the preferred height
to the nearest integral value that is not less than the size reported by
Pango, once converted to pixels.
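A sketch of the rounding, assuming the height comes from
pango_layout_get_extents() and is therefore in Pango units (the helper
name is illustrative):

  #include <math.h>
  #include <pango/pango.h>

  static float
  get_preferred_height_px (PangoLayout *layout)
  {
    PangoRectangle logical_rect;

    pango_layout_get_extents (layout, NULL, &logical_rect);

    /* round up instead of truncating, so we never report a height
     * smaller than what Pango needs
     */
    return ceilf (logical_rect.height / (float) PANGO_SCALE);
  }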
This fixes some backwards logic for asserting that we have a GLX major
version == 1 and a minor version >= 2. (NB: Although we technically
depend on GLX 1.3 features, we still have to support drivers that report
GLX 1.2, because there are a lot of Mesa drivers out there that
incorrectly report GLX 1.2 even though they export extensions that
depend on GLX 1.3.)
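A sketch of the intended check (the helper name and error handling are
illustrative, not the actual backend code):

  #include <glib.h>
  #include <GL/glx.h>

  static gboolean
  check_glx_version (Display *xdisplay)
  {
    int glx_major = 0, glx_minor = 0;

    if (!glXQueryVersion (xdisplay, &glx_major, &glx_minor))
      return FALSE;

    /* require GLX 1.2+, accepting drivers that report 1.2 while
     * exposing the GLX 1.3 entry points as extensions
     */
    return glx_major == 1 && glx_minor >= 2;
  }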
Two material layers cannot be considered equal if they are using
different texture filtering modes. This was causing problems where
rectangles with different filters would end up batched together and then
rendered with the wrong filter mode.
If your OpenGL driver supports GLX_INTEL_swap_event, glXSwapBuffers
returns immediately and an XEvent is sent when the actual swap has
finished.
Clutter can use the events that notify swap completion as a means to
throttle rendering in the master clock without blocking the CPU, so it
should help improve the performance of CPU-bound applications.
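A sketch of how the events are requested, assuming the GLX headers
define the GLX_INTEL_swap_event mask (the helper name is illustrative):

  #include <GL/glx.h>

  static void
  enable_swap_events (Display *xdisplay, GLXDrawable drawable)
  {
    /* ask for GLX_BufferSwapComplete events to be sent when
     * glXSwapBuffers() has actually completed for this drawable
     */
    glXSelectEvent (xdisplay, drawable,
                    GLX_BUFFER_SWAP_COMPLETE_INTEL_MASK);
  }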
Some extensions only support GLX versions > 1.3 and may not support
old style X Windows as GLXDrawables, so we now create GLXWindows for
stages when possible.
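A sketch of the GLX 1.3 call used for this, with illustrative parameter
names:

  #include <GL/glx.h>

  static GLXWindow
  create_stage_glx_window (Display     *xdisplay,
                           GLXFBConfig  fb_config,
                           Window       xwin)
  {
    /* wrap the stage's X Window in a GLXWindow, so extensions that
     * don't accept plain X Windows as GLXDrawables still work
     */
    return glXCreateWindow (xdisplay, fb_config, xwin, NULL);
  }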
Commit d2bdd3cb62 fixed some compiler warnings but also broke the
ability to create a stage. Although not having warnings from the
compiler is nice, it is also nice to be able to create a stage, so let's
not invert the meaning of the error check.
The function modifies the pixels pointed to by p in place, so the
pointer cannot be constant. The compiler was accepting this because the
modification is done from inline assembler.
_cogl_texture_driver_gen is needed to set the texture minification
mode to Cogl's default of GL_LINEAR. There was also a line to set this
in _cogl_texture_2d_new_with_size but it wasn't working because it was
called *before* the texture was bound. If the texture was later
rendered with the default material then it would end up with GL's
default mipmap filtering mode but without mipmaps so it would render
white squares instead.
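A sketch of the ordering issue in plain GL terms (not the Cogl driver
code itself):

  #include <GL/gl.h>

  static GLuint
  gen_texture_with_default_filters (void)
  {
    GLuint tex;

    glGenTextures (1, &tex);
    glBindTexture (GL_TEXTURE_2D, tex);

    /* glTexParameteri() affects the currently bound texture, so the
     * filters must be set after glBindTexture(); setting them before
     * silently changes whatever texture was bound at the time
     */
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    return tex;
  }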
This adds a fast path for premultiplying an RGBA image using SSE2
instructions. SSE registers are 128-bit and we need at least 16 bits
per component for the intermediate result of the multiplication, so we
can do two pixels in parallel with one register. The function
interleaves 2 SSE registers to multiply 4 pixels in one function call
with the hope that this will pipeline better.
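For reference, the per-pixel operation the SSE2 path implements, as a
plain scalar sketch (the rounding in the actual assembler may differ):

  #include <stdint.h>

  /* premultiply one RGBA pixel in place */
  static void
  premult_pixel (uint8_t *p)
  {
    uint32_t a = p[3];

    p[0] = (p[0] * a + 127) / 255;
    p[1] = (p[1] * a + 127) / 255;
    p[2] = (p[2] * a + 127) / 255;
  }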
http://bugzilla.openedhand.com/show_bug.cgi?id=1939
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
• Add the function name in the warning, since the text is the same in
both clutter_actor_raise() and clutter_actor_lower().
• If an actor has a name then prefer it to the type name.
OpenGL ES has no PBO extension, so we fall back to using a malloc'ed
buffer. Make sure the OpenGL-only defines don't leak into the OpenGL ES
compilation.
First, let's add a new public feature called, surprisingly,
COGL_FEATURE_PBOS to check the availability of PBOs and provide a
fallback path when running on older GL implementations or on OpenGL ES.
In case the underlying OpenGL implementation does not provide PBOs, we
need a fallback path (a malloc'ed buffer). The CoglPixelBuffer
constructors will instantiate a subclass of CoglBuffer that handles
map/unmap and set_data() with a malloc'ed buffer.
The public feature is useful to check before using set_data() on a
buffer as it will mean doing a memcpy() when not supporting PBOs (in
that case, it's better to create the texture directly instead of using a
CoglBuffer).
The only goal of using COGL buffers is to use them to create
textures. cogl_texture_new_from_buffer() is the new symbol to create
textures out of buffers.
This subclass of CoglBuffer aims at wrapping PBOs or other system
surfaces like DRM buffer objects. Two constructors are available:
cogl_pixel_buffer_new() with a size when you only care about the size of
the buffer (such a buffer can be used to store the data of several
textures, such as the three planes of an I420 frame).
cogl_pixel_buffer_new_full() is more of a 1:1 mapping between the data
and an underlying surface, with the possibility of having access to a
low-level memory buffer that may have a stride.
Buffer objects are cool! This abstracts the buffer API first introduced
by GL_ARB_vertex_buffer_object and then extended to other objects.
The CoglBuffer abstract class is intended to be the base class of all
the buffer objects, letting the user map() buffers. If the underlying
implementation does not support buffer objects (or only supports VBOs
but not PBOs, for instance), fallback paths should be provided.
The only way the user has to set the mipmap filters is through the
material/layer API. This API defaults to GL_LINEAR/GL_LINEAR for the mag
and min filters. With the main use case of Cogl being 2D interfaces, it
makes sense to default to GL_LINEAR for the min filter.
When creating new textures, we did not set any filter on them, using
OpenGL's defaults: GL_NEAREST_MIPMAP_LINEAR for the min filter and
GL_LINEAR for the mag filter. This will make the driver allocate memory
for the mipmap tree, memory that will not be used in the nominal case
(as the material API defaults to GL_LINEAR).
This patch tries to ensure that the min filter is set to GL_LINEAR
before any glTexImage*() call is done on the texture by setting the
filter when generating new OpenGL handles.
Some GL functions have a return value that the GE() macro is not able to
handle. Let's define a new GE_RET() macro which will be able to handle
functions such as glMapBuffer().
While at it, remove the unused variadic dots from the GE() macro.
* animator-parser:
docs: Describe the Animation definition syntax
animator: Provide a ClutterScript parser
animator: Allow retrieving the property type from a key
script: Use a node when resolving an animation mode
The whole point of having the Animator class is that the developer can
describe a complex animation using ClutterScript. Hence, ClutterAnimator
should hook into the Script machinery and parse a specific description
format for its keys.
When asking a key for its target value we also ask the developer to pass
in an initialized GValue - but we don't make it easy to know the type of
the GValue. A developer has to ask the GObject class for the GParamSpec
and then initialize the GValue, instead.
Since we know the type of the GValue we should provide a getter for it.
We should also allow developers to throw at us GValues with compatible
and transformable types.
Finally, all the accessors should be constified.
Instead of taking a string and duplicating the "is it a string or an
integer" check in both Alpha and Animation, the function in
ClutterScript that resolves the animation mode values should take a
JsonNode and do all the checks it needs.
When we trashed the contents of the stencil buffer during
_cogl_path_fill_nodes we marked the clip stack state as dirty and expected
the clip stack code would clean up our glStencilFunc state.
The problem is that we only try and update the clip state during
_cogl_journal_init (when we flush the framebuffer state) which is only
called when the journal first gets something logged in it.
To make sure the stencil state is cleaned up we now also flush the journal
so _cogl_journal_init will be called for the next logged rectangle.
If we aren't syncing to vblank or if the last dispatch didn't cause a
redraw then the master clock will try to wait at least a small amount
of time before dispatching again. However if time goes backwards then
it would not do a dispatch until time catches up again. To fix this it
now just runs a dispatch immediately if time goes backwards.
This is related to Moblin bug #3839. There was a similar fix for this
in 9dc012c07, however that only fixed the case where timelines
wouldn't update. If there are no animations running then the master
clock won't even try updating timelines until time catches up.
http://bugzilla.o-hand.com/show_bug.cgi?id=1974
* origin/cwiiis-stage-resize:
[stage-x11] Set the default size differently
[stage] Set default size correctly
Revert "[x11] Don't set actor size on ConfigureNotify"
[x11] Don't set actor size on ConfigureNotify
[stage] Now that get_geometry works, use it
[stage-x11] make get_geometry always get geometry
[stage] Get the current size correctly
[stage] Set minimum width/height to 1x1
[stage] Add set/get_minimum_size
ClutterAnimator is a class for managing the animation of multiple
properties of multiple actors over time with keyframing of values.
The Animator class is meant to be used to effectively describe
animations using the ClutterScript definition format, and to construct
complex implicit animations from the ground up.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
We want to set the default size without triggering the layout machinery,
so change the window creation process slightly so we start with a
640x480 window.
Due to the way the new sizing works, the Clutter stage must set its size
in init (to maintain the old behaviour) and the properties on the X11 stage
must be initialised to 1x1 so that it actually goes ahead with the
resize.
Fixes stages that aren't user resizable and have no size set from
appearing at 1x1.
Calling clutter_actor_set_size in response to ConfigureNotify makes
setting the size of the stage racy - the most common result of which
seems to be that you can't set the stage dimensions to anything less
than 640x480.
Instead, add a first_allocation bit to the private structure of the X11
stage and force the first resize (necessary or the default stage will be
a 1x1 window).
We want the actual window geometry in clutter_stage_set_minimum_size,
not the set size. Now that the geometry function has been changed to do
what it says, use it.
Now that we have a minimum size getter on the stage object, change
get_geometry to actually always return the geometry. This fixes stages
that are set as user-resizable appearing at 1x1 size.
This will need changing in other back-ends too.
Get the current size of the stage correctly in
clutter_stage_set_minimum_size. The get_geometry StageWindow function is
not equivalent to the current size, so use clutter_actor_get_size()
instead.
This adds three new texture backends.
- CoglTexture2D: This is a trimmed down version of CoglTexture2DSliced
which only supports a single texture and only works with the
GL_TEXTURE_2D target. The code is a lot simpler so it has less
overhead than dealing with slices. Cogl will use this wherever
possible.
- CoglSubTexture: This is used to get a CoglHandle to represent a
subregion of another texture. The texture can be used as if it was a
standalone texture but it does not need to copy the resources.
- CoglAtlasTexture: This collects RGB and RGBA textures into a single
GL texture with the aim of reducing texture state changes and
increasing batching. The backend will try to manage the atlas and
may move the textures around to close gaps in the texture. By
default all textures will be placed in the atlas.
There was a typo in getting the height of the full texture to check
whether the sub-region fits, so it was using the width
instead. This was causing crashes when debugging is enabled for some
apps.
The reason why we have a dummy, offscreen Window when we create the
GLX context is that GLX does not like it when you ask the context for
features if it's not made current to a Drawable. Maybe in the future
it will allow us to do so, but right now we have to make do with what
GLX offers us.
In cogl_texture_new_from_file we create and own a temporary
bitmap. There's no need to copy this data if we need to do a premult
conversion, so instead the conversion is just done on it before passing
it on to cogl_texture_new_from_bitmap.
The Cogl atlas code was using _cogl_texture_prepare_for_upload with a
NULL pointer for the dst_bmp to determine the internal format of the
texture without converting the bitmap. It needs to do this to decide
whether the texture will go in the atlas before wasting time on the
conversion. This use of the function is a little confusing so that
part of it has been split out into a new function called
_cogl_texture_determine_internal_format. The code to decide whether a
premult conversion is needed has also been split out.
Bind ctrl-backspace and ctrl-del to functions that delete a word before
or after the cursor, respectively.
Selection does not affect the deletion, but current selection is
preserved. This mimics GTK+ functionality in GtkTextView and GtkEntry.
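A sketch of how such bindings are installed through the ClutterText
binding pool (the action and callback names here are illustrative):

  clutter_binding_pool_install_action (binding_pool, "delete-prev-word",
                                       CLUTTER_BackSpace,
                                       CLUTTER_CONTROL_MASK,
                                       G_CALLBACK (real_del_prev_word),
                                       NULL, NULL);
  clutter_binding_pool_install_action (binding_pool, "delete-next-word",
                                       CLUTTER_Delete,
                                       CLUTTER_CONTROL_MASK,
                                       G_CALLBACK (real_del_next_word),
                                       NULL, NULL);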
http://bugzilla.openedhand.com/show_bug.cgi?id=1767
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The SDL API is far too limited for the windowing system needs of
Clutter; the status of the SDL backend was always experimental, and
since the Windows platform is supported by a native backend there is
no point in having the SDL backend around any more.
The Win32 backend now implements the create_context method which
creates a context and binds it to a 1x1 invisible window. That way
there will always be a context bound and the features can be retrieved
without creating the default stage. This reflects the changes in
1c6ffc8..b245d55 to the GLX backend.
Instead of using g_critical() inside the create_context() implementation
of the ClutterBackendGLX we should use the passed GError, so that the
error message can bubble up to the caller.
Instead of creating the default stage during initialization we can
now safely create it whenever clutter_stage_get_default() is called.
To maintain the invariant, the default stage is immediately realized
by Clutter itself.
Since we must guarantee that Cogl has a GL context to query, it is too
late to use the "dummy Window" trick from within the get_features()
virtual function implementation.
Instead, we can create a dummy Window from create_context() itself and
leave it around - basically trading a default stage with a dummy X
window.
We need to have the dummy X window around all the time so that the
GLX context can be selected and made current.
High level toolkits might wish to construct a PangoFontDescription and
then set it directly on a ClutterText actor proxy or sub-class.
ClutterText should have a :font-description property to set (and get)
the PangoFontDescription.
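A usage sketch of the new property ('text' is assumed to be a
ClutterText actor):

  PangoFontDescription *font_desc =
    pango_font_description_from_string ("Sans Bold 24");

  g_object_set (text, "font-description", font_desc, NULL);

  /* the property stores its own copy of the description */
  pango_font_description_free (font_desc);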
http://bugzilla.openedhand.com/show_bug.cgi?id=1960
Commit 92a375ab4 changed the initial value of max_texcoord_attrib_unit
to -1 so that it could disable the texture coord array for the first
texture unit when there are no texture coords used in the vbo. However
max_texcoord_attrib_unit was an unsigned value so this actually became
G_MAXUINT. The disabling loop at the bottom still worked because
G_MAXUINT+1==0 but the check for whether any texture unit is greater
than max_texcoord_attrib_unit was failing so it would always end up
disabling all texture units. This is now fixed by changing
max_texcoord_attrib_unit to be signed.
The commit ecbb7ce41a exposed some issues
when positioning the cursor with the mouse pointer: the selection is
not moved along with the cursor when inserting a single character or a
string.
Also, some freeze_notify() calls are made too early, leading to them
being decoupled from their respective thaw_notify() calls.
http://bugzilla.openedhand.com/show_bug.cgi?id=1955
The documentation for ClutterGroup behaviour when setting an explicit
size is not accurate - or, actually, it was accurate at the time
ClutterGroup was first written but has been neglected in the following
release cycles.
To avoid confusion for new users of Clutter the documentation should be
slightly expanded, mentioning the exact semantics of ClutterGroup with
regards to: preferred size, explicitly set size and how to constrain the
visible area of a ClutterGroup to an explicitly set size.
Based on a patch by: Neil Roberts <neil@linux.intel.com>
When we disable per-actor event delivery Clutter replicates the X11
implicit soft grab for motion events with off-stage coordinates. The
implicit grab
is done whenever the pointer of a device leaves a window with a button
still pressed; with the implicit grab in place the window still receives
motion events even after the LeaveNotify - until the button is released.
The implicit grab is not honoured in the per-actor event delivery case,
though, so we have a mismatch between two in theory equivalent cases.
Luckily, the fix is pretty trivial: when we check for a motion event
with a stage set but without an actor set, and that has off-stage
coordinates, we arbitrarily set the source to be the stage of the event
and emit the pointer event.
When deciding if two material layers are equal, we now compare the GL
target and texture number if the textures are not sliced. This is
needed to get batching across atlased textures.
Cogl accepts a pixel format for both the data in memory and the
internal format to be used for the texture. If they do not match then
it would convert them using the CoglBitmap functions before uploading
the data. However, GL also lets you specify both formats so it makes
more sense to let GL do the conversion. The driver may need the
texture in a specific format so it may end up being converted anyway.
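In plain GL terms, the driver already accepts both formats, as in this
sketch (the function name and the BGRA example are illustrative):

  #include <GL/gl.h>

  static void
  upload_bgra_pixels (int width, int height, const unsigned char *data)
  {
    glTexImage2D (GL_TEXTURE_2D, 0,
                  GL_RGBA,                    /* internal format */
                  width, height, 0,
                  GL_BGRA, GL_UNSIGNED_BYTE,  /* format of the data in memory */
                  data);
  }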
The cogl_texture_upload_data functions have been removed and replaced
with a single function to prepare the bitmap. This will only do the
premultiplication conversion because that is the only part that GL
can't do directly.
The premult part of _cogl_convert_premult has now been split out as
_cogl_convert_premult_status. _cogl_convert_premult has been renamed
to _cogl_convert_format to make it less confusing. The premult
conversion is now done in-place instead of copying the
buffer. Previously it was copying the buffer once for the format
conversion and then copying it again for the premult conversion. The
premult conversion never changes the size of the buffer so it's quite
easy to do in place. We can also use the separated out function
independently.
The internal format of the atlas texture is still set to the
appropriate format so Cogl will disable blending for textures that are
intended to be RGB. This should end up ignoring the alpha channel from
the texture in the atlas. This makes the code slightly easier to
maintain and should also improve the chances of batching.
Since we're allowing allocation cycles, saying that calling
queue_relayout() inside an allocation cycle "is not allowed" is kind of
confusing. We should say that it "is not recommended" instead.
* device-manager: (37 commits)
x11: Re-enable XI1 extension keyboards
x11: Always handle core device events before XI events
docs: Documentation fixes for DeviceManager
device-manager: Fix the signals definition
docs: Add sections for InputDevice and DeviceManager
docs: Add clutter_input_device_get_device_name()
tests: Print out the device details on motion
Always register core devices
device: Remove unused is_default member
win32: Experimental implementation of device support
tests: Print the device name, as well as its Id
x11: Fill out the :name property of the InputDevices
device: Add the :name property to InputDevice
x11: Store core devices on the X11 Backend singleton
device: Unset the cursor actor when leaving the stage
device: Add pointer actor getter
x11: Discard the LeaveNotify for off-stage ButtonRelease
device: Do not overwrite the stage for an InputDevice
event: Off-stage button releases have a click count of 1
event: Scroll events do not have click count
...
Added a "selection-bound" notify on clutter_text_clear_selection as it
changes the value.
Added a utility function, clutter_text_set_positions(), in order to
change both the cursor position and the selection bound inside a
g_object_[freeze/thaw]_notify block.
Added g_object_[freeze/thaw]_notify to other functions that change
both the cursor position and the selection bound.
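A minimal sketch of the pattern (the real utility works on the
ClutterText internals; this version just shows the freeze/thaw
bracketing):

  static void
  clutter_text_set_positions (ClutterText *self,
                              gint         position,
                              gint         selection_bound)
  {
    g_object_freeze_notify (G_OBJECT (self));

    clutter_text_set_cursor_position (self, position);
    clutter_text_set_selection_bound (self, selection_bound);

    g_object_thaw_notify (G_OBJECT (self));
  }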
Solves http://bugzilla.openedhand.com/show_bug.cgi?id=1955
ClutterStage has both set_key_focus() and get_key_focus() methods, but
there is no :key-focus property. This means that it is not possible to
get notifications when the key focus has changed, except by connecting
to both the ::key-focus-in and ::key-focus-out signals and doing
additional bookkeeping.
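A usage sketch of what the property enables ('stage' is assumed to be a
ClutterStage instance):

  static void
  on_key_focus_changed (GObject    *gobject,
                        GParamSpec *pspec,
                        gpointer    user_data)
  {
    ClutterActor *focus =
      clutter_stage_get_key_focus (CLUTTER_STAGE (gobject));

    g_print ("key focus is now on: %p\n", (void *) focus);
  }

  g_signal_connect (stage, "notify::key-focus",
                    G_CALLBACK (on_key_focus_changed), NULL);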
http://bugzilla.openedhand.com/show_bug.cgi?id=1956
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The TimeoutPool is not used by ClutterTimeline any more, so we need to
remove a sentence from its description. We also need to fix the gtk-doc
syntax errors.
Instead of assigning a new colour to each quad of a batch, the
rectangle debugging code now assigns a new colour to each batch so
that it can be used to visually see what is being batched. The colour
is stored in a global variable that is reset during cogl_clear. This
improves the chances that the same colour will be used for a batch in
the next frames to avoid flickering.
When setting up the state for the vertex buffer,
enable_state_for_drawing_buffer tries to keep track of the highest
numbered texture unit in use. It then disables any texture arrays for
units that were previously enabled if they are greater than that
number. However if there is no texturing in the VBO then the max used
unit would be left at 0 which it would later think meant unit 0 is
still in use so it wouldn't disable it. To fix this it now initialises
the max used unit to -1 which it should interpret as ‘no units are in
use’ so it will later disable the arrays for all units.
Thanks to Jon Mayo for reporting the bug.
http://bugzilla.openedhand.com/show_bug.cgi?id=1957
We were checking the number of texture units against the GL enum that is
used in glGetIntegerv() to query that number. Let's abstract this into a
little function.
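A sketch of such a helper (the name is illustrative, and the enum may
come from glext.h depending on the GL headers):

  #include <GL/gl.h>

  static GLint
  get_max_texture_image_units (void)
  {
    GLint n_units = 0;

    /* return the limit itself rather than the enum used to query it */
    glGetIntegerv (GL_MAX_TEXTURE_IMAGE_UNITS, &n_units);

    return n_units;
  }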
Took the opportunity to dig a bit into the usage of GL limits for the
number of texture (image) units and document our use of them. We'll need
something finer grained if we want to fully exploit texture image units
with a programmable pipeline.
The index field of CoglTextureUnit was never set, leading to the
creation of units with index set to 0. When trying to retrieve a texture
unit by its index (!= 0) with _cogl_get_texture_unit(), a new one was
created, as the unit could not be found in the list of texture units:
ctx->texture_units.
http://bugzilla.openedhand.com/show_bug.cgi?id=1958
The :opacity property is defined using a GParamSpecUchar. This usually
leads to issues with language bindings that don't have an 'unsigned
char' type and that need to explicitly handle the conversion between
G_TYPE_UCHAR and G_TYPE_INT or G_TYPE_UINT.
The property definition already specifies an interval size of [0, 255]
on the values; more importantly, GObject already implicitly transforms
between G_TYPE_UCHAR and G_TYPE_UINT (the GValue transformation
functions are registered at type system initialization time) so
switching between a GParamSpecUchar and a GParamSpecUint should not be
an ABI break.
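A sketch of the property definition after the change (the blurb text
and the PROP_OPACITY/gobject_class names are illustrative):

  pspec = g_param_spec_uint ("opacity",
                             "Opacity",
                             "Opacity of an actor",
                             0, 255,
                             255,
                             G_PARAM_READWRITE);
  g_object_class_install_property (gobject_class, PROP_OPACITY, pspec);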
I have tested a simple program using the opacity property before and
after the change and I cannot see any run-time warnings related to this
issue.
Be more drastic if the internal state is broken, and assert() if the
expected Alpha and Timeline instances we need are not valid. This
usually implies a library bug or a massive heap corruption.
The Animation code does transformation of values between type A and A'
after checking for compatibility using g_value_type_compatible(). This
is incorrect: compatibility means that the two types can be copied. The
correct conversion should follow:
  if (compatible (type (A), type (A')))
    copy (A, A');
  else
    if (transformable (type (A), type (A')))
      transform (A, A');
    else
      error ("Unable to transform type A into A'");
The transformation might still fail, so we need to check for errors
there as well as a fall-through case.
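A sketch of the check using the GValue API directly (the helper name is
illustrative):

  static gboolean
  assign_value (const GValue *src, GValue *dst)
  {
    if (g_value_type_compatible (G_VALUE_TYPE (src), G_VALUE_TYPE (dst)))
      {
        g_value_copy (src, dst);
        return TRUE;
      }

    if (g_value_type_transformable (G_VALUE_TYPE (src), G_VALUE_TYPE (dst)))
      return g_value_transform (src, dst);

    g_warning ("Unable to transform a value of type '%s' into '%s'",
               g_type_name (G_VALUE_TYPE (src)),
               g_type_name (G_VALUE_TYPE (dst)));

    return FALSE;
  }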
We should not just check for compatibility, but also for the ability to
transform a GValue of type A into another of type A'.
Usually compatibility is enough, especially if types can be
introspected beforehand; sometimes, though, we also need to check for
transformability as a type can provide the transformation functions
necessary for the operation.
The commit 1c69c61745 which improved the
protection against timeline removals during the master clock advancement
was only doing half the job - and actually broke the chaining of
animations inside the ::completed signal.
We cannot simply take a reference on the timelines and still use the list
held by the master clock because the do_tick() might result in the
creation of a new timeline, which gets added at the end of the list with
no reference increase and thus gets disposed at the end of the iteration.
We also cannot steal the master clock timelines list because a timeline
might be removed as the direct result of do_tick() and remove_timeline()
would not find the timeline, failing and leaving a dangling pointer
behind.
For this reason we copy the list of timelines out of the one that the
Master Clock holds, take a reference on each timeline, advance them all,
release the reference and free the list.
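A sketch of the scheme (the timelines list, the cur_tick field and the
do_tick() call stand in for the master clock's private details):

  static void
  master_clock_advance_timelines (ClutterMasterClock *master_clock)
  {
    GSList *timelines, *l;

    /* copy the list and take a reference on each timeline: timelines
     * added by do_tick() only end up in the master clock's own list,
     * and removals cannot dispose a timeline we are about to advance
     */
    timelines = g_slist_copy (master_clock->timelines);
    g_slist_foreach (timelines, (GFunc) g_object_ref, NULL);

    for (l = timelines; l != NULL; l = l->next)
      _clutter_timeline_do_tick (l->data, &master_clock->cur_tick);

    g_slist_foreach (timelines, (GFunc) g_object_unref, NULL);
    g_slist_free (timelines);
  }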
The extension keyboard support in XInput 1.x is hopelessly broken.
Nevertheless, it's possible to use some bits of it, as we prefer the
core keyboard events to the XInput events, thus at least having proper
handling for X11 key events on the Stage window.
The XI 1.0 layer is complementary to the X11 core devices handling; this
means that core events will still be emitted for the core pointer and
keyboard devices, and that secondary (floating) devices should be
handled on top of that.
Thus, the XI event handling code should be executed (if explicitly
compiled in and enabled) if the core device events have not been parsed.
Note: this is going away with XI2, which completely replaces both core and
XI1 events.
Even with XInput support we should always register core devices. This
allows us to handle enter and leave events correctly on the Stage and
to have a working XInput 1.x support in Clutter.
Mostly lifted from the core pointer and keyboard X11 backend support.
The win32 backend registers two devices (a core pointer and a core
keyboard) and assigns them to the event structure when doing the
translation from native events to Clutter events.
Thanks to: Samuel Degrande <Samuel.Degrande@lifl.fr> for testing this
patch.
Instead of overloading the device ids 0 and 1 we should treat the core
devices as special, and have a pointer inside the X11 backend singleton
structure, for fast access.
When an InputDevice leaves a stage we set the stage member of
InputDevice to NULL. We should also unset the cursor_actor (as the
device is obviously not on an actor any more).
When the device re-enters the Stage the ENTER/LEAVE event generation
machinery will then be able to emit the ENTER event on the Stage.
If the user presses a button on a pointer device and then moves out of
the Stage, X11 will emit the following events:
LeaveNotify ➔ MotionNotify ... ➔ ButtonRelease ➔ LeaveNotify
The second LeaveNotify differs from the first by the state field.
Unfortunately, ClutterCrossingEvent doesn't have a modifier_state field
like other events, so we cannot provide a way for programmatically
distinguishing them from a Clutter perspective. This is also an X11-ism
we might not even want to replicate on every backend with sane
enter/leave semantics.
For this reason we should check inside the X11 event processing if the
pointer device has already left the Stage and ignore the second
LeaveNotify.
The Stage field of an InputDevice is set by the backend, whenever the
pointer enters or leaves the Stage. The Stage should not overwrite the
stage field for every event it processes.
The previous state for the device is used by the click count machinery
and we should not be overwriting it at every event; instead, we should
use a parallel storage for the current state coming from the windowing
system.
The device manager does not need to update the state of the devices
when the user has disabled the delivery of motion events to actors:
the events will always be delivered as they are to the stage.
The LEAVE/ENTER event pairs should be queued during the InputDevice
update process, when we change the actor under the device pointer.
This commit cleans up the event emission code inside clutter-main.c
and the logic of the event processing.
The InputDevice object stores the pointer coordinates, state, stage and
the actor under the cursor, so if the current backend provides us with
one attached to the Event structure then we want the InputDevice itself
to update its state and give us the ClutterActor underneath the
pointer's cursor.
Even when we are not using XInput we now have fallback devices; the
X11 backend should always assign the default devices when translating
the X events to Clutter events.
Use the device manager to store the input devices. Also, provide
two fallback devices when initializing the X11 backend: device 0
for the pointer and device 1 for the keyboard.
Previously the atlas textures were being created with whatever format
the first sub texture is in. Only three formats are supported so this
only matters if the first texture is a premultiplied alpha
texture. Instead it now masks out the premultiplied bit so that the
textures are always either RGB_888 or RGBA_8888.
The win32 backend now handles the WM_SETCURSOR message and sets a
fully transparent cursor if the cursor-visible property has been
cleared on the stage. The icon is stored in the library via a resource
file. The instance handle for the DLL is needed to load the resource
so there is now a DllMain function to grab the handle.
g_list_foreach has better protection against the current node being
removed. This will happen for example if someone calls
clutter_container_foreach(container, clutter_actor_destroy). This was
causing valgrind errors for the conformance tests which do just that.
When uploading texture data it was just calling cogl_texture_set_data
on the large texture. This would attempt to convert the data to the
format of the large texture. All of the textures with alpha channels
are stored together regardless of whether they are premultiplied so
this was causing premultiplied textures to be unpremultiplied
again. It now just uploads the data ignoring the premult bit of the
format so that it only gets converted once.
With the atlas texture backend, ensuring the mipmaps can make the
texture become a completely different texture, which will have different
texture coordinates or may even be sliced. Therefore we need to ensure the
mipmaps before deciding which quads to log in the journal. This adds a
new private function to cogl-material which ensures the mipmaps if
needed.
The sub texture backend doesn't work well as a completely general
texture backend because for example when rendering with cogl_polygon
it needs to be able to transform arbitrary texture coordinates without
reference to the other coordinates. This can't be done when the texture
coordinates are a multiple of one because sometimes the coordinate
should represent the left or top edge and sometimes it should
represent the right or bottom edge. For example if the s coordinates are
0 and 1 then 1 represents the right edge but if they are 1 and 2 then
1 represents the left edge.
Instead the sub-textures are now documented not to support coordinates
outside the range [0,1]. The coordinates for the sub-region are now
represented as integers as this helps avoid rounding issues. The
region can no longer be a super-region of the texture as this
simplifies the code quite a lot.
There are two new texture virtual functions:
transform_quad_coords_to_gl - This transforms two pairs of coordinates
representing a quad. It will return FALSE if the coordinates can
not be transformed. The sub texture backend uses this to detect
coordinates that require repeating which causes cogl-primitives
to use manual repeating.
ensure_non_quad_rendering - This is used in cogl_polygon and
cogl_vertex_buffer to inform the texture backend that
transform_quad_coords_to_gl is going to be used. The atlas backend
migrates the texture out of the atlas when it hits this.
When calculating the next integer position for negative coordinates it
would not increment if the position is already a multiple of one so we
need to manually add one.
When try_creating_fbo fails it returns 0 to report the error and if it
succeeds it returns ‘flags’. However cogl_offscreen_new_to_texture
also passes in 0 for the flags as the last fallback to create the fbo
with nothing but the color buffer. In that case it will return 0
regardless of whether it succeeded so the last fallback will always be
considered a failure.
To fix this it now just returns a gboolean to indicate whether it
succeeded and the flags used for each attempt is assigned when passing
the argument rather than from the return value of the function.
Also if the only configuration that succeeded was with flags==0 then
it would always try all combinations because last_working_flags would
also be zero. To avoid this it now uses a separate gboolean to mark
whether we found a successful set of flags.
http://bugzilla.openedhand.com/show_bug.cgi?id=1873
Use the newly-added profiling timers inside the master clock dispatch
function to see how much time we spend:
• in the whole function
• in the event processing for each stage
• in the timeline advancement