Clutter short-circuits painting when an actor's opacity is
zero. However, if the actor is being painted from a ClutterClone,
it will be painted using the clone's opacity instead, so that test
was broken.
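As an illustration of the case described above, a minimal sketch (the
rectangle, the opacity values and the container are only examples):

  #include <clutter/clutter.h>

  static void
  show_clone_of_hidden_actor (ClutterContainer *stage)
  {
    /* painting the fully transparent source directly is short-circuited... */
    ClutterActor *source = clutter_rectangle_new ();
    /* ...but when it is painted through the clone, the clone's opacity
     * is used instead, so the source must still be painted */
    ClutterActor *clone = clutter_clone_new (source);

    clutter_actor_set_opacity (source, 0);
    clutter_actor_set_opacity (clone, 255);

    clutter_container_add_actor (stage, source);
    clutter_container_add_actor (stage, clone);
  }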
* 1.0-integration: (138 commits)
[x11] Disable XInput by default
[xinput] Invert the XI extension version check
[cogl-primitives] Fix an unused variable warning when building GLES
[clutter-stage-egl] Pass -1,-1 to clutter_stage_x11_fix_window_size
Update the GLES backend to have the layer filters in the material
[gles/cogl-shader] Add a missing semicolon
[cogl] Move the texture filters to be a property of the material layer
[text] Fix Pango unit to pixels conversion
[actor] Force unrealization on destroy only for non-toplevels
[x11] Rework map/unmap and resizing
[xinput] Check for the XInput entry points
[units] Validate units against the ParamSpec
[actor] Add the ::allocation-changed signal
[actor] Use flags to control allocations
[units] Rework Units into logical distance value
Remove a stray g_value_get_int()
Remove usage of Units and macros
[cogl-material] Allow setting a layer with an invalid texture handle
[timeline] Remove the concept of frames from timelines
[gles/cogl-shader] Fix parameter spec for cogl_shader_get_info_log
...
Conflicts:
configure.ac
The XInput support in Clutter is still using XI 1.x. This will never
work correctly, and we are all waiting for XInput 2 anyway. The changes
internally should be minimal, so we can leave everything in place, but
it's better to disable XInput support by default -- at least for the
time being.
The texture filters are now a property of the material layer rather
than the texture object. Whenever a texture is painted with a material
it sets the filters on all of the GL textures in the Cogl texture. The
filter is cached so that it won't be changed unnecessarily.
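For example, a sketch using the public material API (the texture handle
is assumed to exist already):

  #include <cogl/cogl.h>

  static CoglHandle
  make_material_for_texture (CoglHandle texture)
  {
    CoglHandle material = cogl_material_new ();

    cogl_material_set_layer (material, 0, texture);

    /* the filters now live on the material layer, not on the texture */
    cogl_material_set_layer_filters (material, 0,
                                     COGL_MATERIAL_FILTER_LINEAR_MIPMAP_LINEAR,
                                     COGL_MATERIAL_FILTER_LINEAR);

    return material;
  }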
The automatic mipmap generation has changed so that the mipmaps are
only generated when the texture is painted instead of every time the
data changes. Changing the texture sets a flag to mark that the
mipmaps are dirty. This works better if the FBO extension is available
because we can use glGenerateMipmap. If the extension is not available
it will temporarily enable automatic mipmap generation and reupload
the first pixel of each slice. This requires tracking the data for the
first pixel.
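Roughly, the lazy update looks like the sketch below; this is purely
illustrative and the structure, fields and helper names are not the
actual Cogl internals:

  #include <glib.h>
  #include <GL/gl.h>
  #include <GL/glext.h>

  typedef struct {
    GLuint    gl_handle;          /* one GL texture slice */
    gboolean  mipmaps_dirty;      /* set whenever the data changes */
    GLenum    first_pixel_format; /* tracked copy of the first pixel */
    GLenum    first_pixel_type;
    guint8    first_pixel_data[4];
  } TextureSlice;

  static void
  slice_ensure_mipmaps (TextureSlice *slice, gboolean have_fbo_extension)
  {
    if (!slice->mipmaps_dirty)
      return;

    glBindTexture (GL_TEXTURE_2D, slice->gl_handle);

    if (have_fbo_extension)
      {
        /* regenerate the whole chain on demand */
        glGenerateMipmapEXT (GL_TEXTURE_2D);
      }
    else
      {
        /* temporarily enable automatic generation and re-upload the
         * first pixel to trigger it */
        glTexParameteri (GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
        glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, 1, 1,
                         slice->first_pixel_format,
                         slice->first_pixel_type,
                         slice->first_pixel_data);
        glTexParameteri (GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);
      }

    slice->mipmaps_dirty = FALSE;
  }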
The COGL_TEXTURE_AUTO_MIPMAP flag has been replaced with
COGL_TEXTURE_NO_AUTO_MIPMAP so that it will default to
auto-mipmapping. The mipmap generation is now effectively free if you
are not using a mipmap filter mode, so you would only want to disable
it if you had some special reason to generate your own mipmaps.
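For example, assuming the cogl_texture_new_from_file() entry point (the
file name is a placeholder):

  #include <cogl/cogl.h>

  /* auto-mipmapping is now the default; pass COGL_TEXTURE_NO_AUTO_MIPMAP
   * only if you have a reason to generate the mipmaps yourself */
  static CoglHandle
  load_texture_without_mipmaps (const char *filename, GError **error)
  {
    return cogl_texture_new_from_file (filename,
                                       COGL_TEXTURE_NO_AUTO_MIPMAP,
                                       COGL_PIXEL_FORMAT_ANY,
                                       error);
  }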
ClutterTexture no longer has to store its own copy of the filter
mode. Instead it stores it in the material and the property is
directly set and read from that. This fixes problems with the filters
getting out of sync when a cogl handle is set on the texture
directly. It also avoids the mess of having to re-realize the texture
if the filter quality changes to HIGH, because Cogl will take care of
generating the mipmaps if needed.
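For instance, a sketch (the replacement Cogl handle is assumed to come
from elsewhere):

  #include <clutter/clutter.h>

  static void
  set_high_quality (ClutterTexture *texture, CoglHandle new_handle)
  {
    /* stored in the underlying material: no re-realization needed */
    clutter_texture_set_filter_quality (texture,
                                        CLUTTER_TEXTURE_QUALITY_HIGH);

    /* setting a Cogl handle directly no longer loses the filter mode */
    clutter_texture_set_cogl_texture (texture, new_handle);
  }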
The mapping and unmapping of the X11 stage implementation is
a bit broken. It is asynchronous, for starters, when that could
be avoided by tracking the state internally.
The ordering of the map/unmap sequence is also broken with
respect to resizing.
By tracking the state internally in StageX11 we can safely
remove the MapNotify and UnmapNotify X event handling.
In theory, we should use _NET_WM_STATE a lot more, and reuse
the X11 state flags for fullscreening as well.
Apparently, the XInput extension is using the same pkg-config
file ('xi') for both the 1.x and the 2.x API, so we need to
check for both the 1.x XGetExtensionVersion and the 2.x
XQueryInputVersion.
When declaring a property using ClutterParamSpecUnits we pass a
default type to limit the type of units we accept as valid values
for the property.
This means that we need to add the unit type check as part of the
validation process.
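For example, when installing such a property; this is a sketch, the
property name and limits are illustrative and the argument order
follows the ClutterParamSpecUnits constructor as documented:

  #include <clutter/clutter.h>

  enum { PROP_0, PROP_MARGIN };

  static void
  install_margin_property (GObjectClass *gobject_class)
  {
    GParamSpec *pspec;

    /* only pixel-based units are accepted as valid values */
    pspec = clutter_param_spec_units ("margin",
                                      "Margin",
                                      "Margin around the actor",
                                      CLUTTER_UNIT_PIXEL,
                                      0.0f, G_MAXFLOAT,
                                      0.0f,
                                      G_PARAM_READWRITE);

    g_object_class_install_property (gobject_class, PROP_MARGIN, pspec);
  }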
Sometimes it is useful to be able to track changes in the allocation
flags, like the absolute origin, inside children of a container.
Using the notify::allocation signal is not enough, in these cases, so
we need a specific signal that gives us both the allocation box and the
allocation flags.
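For example (a sketch; the handler just prints what it receives):

  #include <clutter/clutter.h>

  static void
  on_allocation_changed (ClutterActor           *actor,
                         const ClutterActorBox  *box,
                         ClutterAllocationFlags  flags,
                         gpointer                user_data)
  {
    /* both the new box and the flags are available to the handler */
    g_print ("new size: %.2f x %.2f (absolute origin %s)\n",
             box->x2 - box->x1,
             box->y2 - box->y1,
             (flags & CLUTTER_ABSOLUTE_ORIGIN_CHANGED) ? "changed"
                                                       : "unchanged");
  }

  static void
  watch_child_allocation (ClutterActor *child)
  {
    g_signal_connect (child, "allocation-changed",
                      G_CALLBACK (on_allocation_changed), NULL);
  }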
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask of flags. This gives us room for expansion
without breaking API/ABI, and allows us to encode more information
in the allocation process instead of just changes of the absolute
origin.
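Inside an actor's ::allocate implementation this looks roughly like the
following sketch (MyGroup is a placeholder type defined elsewhere with
G_DEFINE_TYPE):

  static void
  my_group_allocate (ClutterActor           *self,
                     const ClutterActorBox  *box,
                     ClutterAllocationFlags  flags)
  {
    /* chain up to store the allocation */
    CLUTTER_ACTOR_CLASS (my_group_parent_class)->allocate (self, box, flags);

    /* what used to be a boolean "absolute_origin_changed" argument
     * is now one bit in the flags */
    if (flags & CLUTTER_ABSOLUTE_ORIGIN_CHANGED)
      {
        /* recompute anything that depends on the stage-relative origin */
      }
  }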
Units as they have been implemented since Clutter 0.4 have always been
misdefined as a "logical distance unit", while they were just pixels
with fractional bits.
Units should be reworked into opaque structures holding a value and
its unit type, which can then be converted into pixels when Clutter
needs to paint or compute size requisitions and perform allocations.
The previous API should be completely removed to avoid collisions, and
a new type:
ClutterUnits
should be added; the ability to install GObject properties using
ClutterUnits should be maintained.
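For example, under the reworked API (a minimal sketch):

  #include <clutter/clutter.h>

  static gfloat
  ten_millimeters_in_pixels (void)
  {
    ClutterUnits units;

    /* a logical distance: 10 millimeters, regardless of the screen DPI */
    clutter_units_from_mm (&units, 10.0f);

    /* converted to pixels only when a pixel value is actually needed */
    return clutter_units_to_pixels (&units);
  }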
It was previously possible to create a material layer with no texture
by setting some property on it such as the matrix. However it was not
possible to get back to that state without removing the layer and
recreating it. It is useful to be able to remove the texture to free
resources without forgetting the state of the layer so we can put a
different texture in later.
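A sketch of the use case (the material and texture handles are assumed
to exist):

  #include <cogl/cogl.h>

  static void
  swap_layer_texture (CoglHandle material, CoglHandle new_texture)
  {
    /* drop the old texture to free its resources, but keep the layer
     * state (matrix, combine mode, ...) */
    cogl_material_set_layer (material, 0, COGL_INVALID_HANDLE);

    /* ...later, plug a different texture into the same layer */
    cogl_material_set_layer (material, 0, new_texture);
  }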
Timelines no longer work in terms of a frame rate and a number of
frames but instead just have a duration in milliseconds. This better
matches the working of the master clock, where, if any timelines are
running, it will redraw as fast as possible rather than limiting
itself to the lowest-rated timeline.
Most applications will just create animations and expect them to
finish in a certain amount of time without caring about how many
frames are drawn. If a frame is going to be drawn it might as well
update all of the animations to some fraction of the total animation
rather than rounding to the nearest whole frame.
The 'frame_num' parameter of the new-frame signal is now 'msecs' which
is a number of milliseconds progressed along the
timeline. Applications should use clutter_timeline_get_progress
instead of the frame number.
Markers can now only be attached at a time value. The position is
stored in milliseconds rather than as a frame number.
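A minimal sketch of the millisecond-based API (the duration, marker
name and callback are illustrative):

  #include <clutter/clutter.h>

  static void
  on_new_frame (ClutterTimeline *timeline,
                gint             msecs,
                gpointer         user_data)
  {
    /* use the normalized progress instead of a frame number */
    gdouble progress = clutter_timeline_get_progress (timeline);

    g_print ("%d ms elapsed, %.0f%% done\n", msecs, progress * 100.0);
  }

  static ClutterTimeline *
  make_timeline (void)
  {
    /* one second long; no frame rate involved */
    ClutterTimeline *timeline = clutter_timeline_new (1000);

    /* markers are attached at a time, in milliseconds */
    clutter_timeline_add_marker_at_time (timeline, "half-way", 500);

    g_signal_connect (timeline, "new-frame",
                      G_CALLBACK (on_new_frame), NULL);

    return timeline;
  }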
test-timeline-smoothness and test-timeline-dup-frames have been
removed because they no longer make sense.
The clutter_actor_map and unmap functions need to be called to
properly update the mapped state. This matches the changes to the X11
stage in 125bded8.
If the application code calls for destruction of an actor we need
to make sure that the actor is unrealized before running the dispose
sequence; otherwise, we might trigger an assertion failure on composite
actors.
The commit 762873e79e is completely
and utterly wrong and I should have never pushed it.
Serves me right for trying to work on three different branches and
on three different things.
Currently, the clock source spins a redraw every time there is at
least one timeline running. If the timelines were not advanced in
the previous frame, though, because their interval is larger than
the vblanking interval, this leads to excessive redraws of the
scenegraph even if nothing has changed.
To avoid this a simple guard should be set by the MasterClock::advance
method in case no timeline was effectively advanced, and checked
before dispatching the stage redraws.
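A purely illustrative sketch of that guard; the names and structure do
not match the actual ClutterMasterClock code:

  #include <glib.h>

  typedef struct {
    GSList   *timelines;          /* currently running timelines */
    gboolean  timelines_advanced; /* the guard described above */
  } MasterClock;

  static gboolean timeline_do_tick    (gpointer timeline);    /* hypothetical */
  static void     stages_queue_redraw (MasterClock *clock);   /* hypothetical */

  static void
  master_clock_advance (MasterClock *clock)
  {
    gboolean advanced = FALSE;
    GSList *l;

    for (l = clock->timelines; l != NULL; l = l->next)
      advanced |= timeline_do_tick (l->data);

    /* remember whether anything actually moved this frame */
    clock->timelines_advanced = advanced;
  }

  static void
  master_clock_dispatch (MasterClock *clock)
  {
    /* no timeline advanced since the last frame: skip the redraw */
    if (!clock->timelines_advanced)
      return;

    stages_queue_redraw (clock);
  }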
When creating a Cogl texture from a Cogl bitmap, it would steal the
data by setting the bitmap_owner flag and clearing the data pointer
from the bitmap. The data would be freed by the time new_from_bitmap
finished. There is no reason to do this, because the data will be
freed when the Cogl bitmap is unref'd, and it is confusing not to be
able to reuse the bitmap for creating multiple textures.
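For example, the following is now safe (a sketch; the flags and the
file name are arbitrary):

  #include <cogl/cogl.h>

  static void
  make_two_textures (const char *filename)
  {
    CoglHandle bitmap, tex_plain, tex_mipmapped;

    bitmap = cogl_bitmap_new_from_file (filename, NULL);

    /* the bitmap's data is no longer stolen, so it can be reused */
    tex_plain = cogl_texture_new_from_bitmap (bitmap,
                                              COGL_TEXTURE_NO_AUTO_MIPMAP,
                                              COGL_PIXEL_FORMAT_ANY);
    tex_mipmapped = cogl_texture_new_from_bitmap (bitmap,
                                                  COGL_TEXTURE_NONE,
                                                  COGL_PIXEL_FORMAT_ANY);

    /* the data is finally freed when the bitmap itself is unref'd */
    cogl_handle_unref (bitmap);

    /* ...use tex_plain and tex_mipmapped... */
  }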
clutter_color_from_string() only supported the "#rrggbbaa" format with
an alpha channel; this patch adds support for "#rgba".
Colors in "#rrggbb" format were parsed manually; this is now left to
the Pango color parsing fallback, since that handles it just fine.
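For example (the color values are arbitrary):

  #include <clutter/clutter.h>

  static void
  parse_colors (void)
  {
    ClutterColor long_form, short_form;

    /* previously only the long form with alpha was accepted... */
    clutter_color_from_string (&long_form, "#ff0088aa");

    /* ...now the "#rgba" shorthand works as well */
    clutter_color_from_string (&short_form, "#f08a");
  }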
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The cogl_shader_get_info_log() function is very inconvenient for
language bindings and for regular use, as it requires a static
buffer to be filled -- basically just providing a wrapper around
glGetInfoLogARB().
Since COGL aims to be a more convenient API than raw GL we should
just make cogl_shader_get_info_log() return an allocated string
with the GLSL compiler log.
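With the new signature the caller simply frees the returned string (a
sketch; the shader source is assumed to come from elsewhere):

  #include <cogl/cogl.h>
  #include <glib.h>

  static void
  compile_and_report (const char *glsl_source)
  {
    CoglHandle shader = cogl_create_shader (COGL_SHADER_TYPE_FRAGMENT);
    char *log;

    cogl_shader_source (shader, glsl_source);
    cogl_shader_compile (shader);

    /* the log is returned as a newly allocated string */
    log = cogl_shader_get_info_log (shader);
    g_print ("compiler log:\n%s\n", log);
    g_free (log);

    cogl_handle_unref (shader);
  }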
Instead of using GL_TRIANGLES and uploading the indices every time, it
now uses GL_QUADS on OpenGL. Under GLES it still uses indices, but it
uses the new cogl_vertex_buffer_indices_get_for_quads function
to avoid uploading the indices every time.
This requires the _cogl_vertex_buffer_indices_pointer_from_handle
function to be exposed privately to the rest of Cogl.
The static_indices array has been removed from the Cogl context.
The GIR file for Clutter still contains symbols from COGL, even
though we provide a Cogl GIR as well. The Clutter GIR should
depend on the Cogl GIR instead.
All the underlying implementation and the public entry points have
been switched to floats; the only missing bits are the Actor properties
that deal with positioning and sizing.
This usually means a major pain when dealing with GValues and varargs
functions. While GValue will warn you about the wrong conversions,
varargs will simply die a horrible (and hard to debug) death via
segfault. Nothing much to do here, except warn people in the release
notes and hope for the best.
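For example, once a positioning property such as ClutterActor:x becomes
a float (a hedged sketch; at the time of this commit the actor
properties had not been switched yet):

  #include <clutter/clutter.h>

  static void
  move_actor (ClutterActor *actor)
  {
    GValue value = { 0, };

    /* GValue-based API: a wrong conversion is caught and warned about */
    g_value_init (&value, G_TYPE_FLOAT);
    g_value_set_float (&value, 100.0f);
    g_object_set_property (G_OBJECT (actor), "x", &value);
    g_value_unset (&value);

    /* varargs API: the literal must be a double, because nothing will
     * convert it for you; passing a plain integer (e.g. 100) reads
     * garbage off the stack and can crash */
    g_object_set (actor, "x", 100.0, NULL);
  }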
The documentation for ClutterTexture's set_from_rgb_data() and
set_from_yuv_data() says:
Note: This function is likely to change in future versions.
This is not true, since they'll remain for the whole 1.x API cycle.
Now that CoglVertexBuffers support indices we can use them with GLES
to avoid duplicating vertices. Regular GL still uses GL_QUADS because
it has been shown to still have a performance benefit over indices
with the Intel drivers.
This function can be used as an efficient way of drawing groups of
quads without using GL_QUADS. It generates a VBO containing the
indices needed to render the quads as pairs of GL_TRIANGLES. The VBO
is globally cached so that it only needs to be re-uploaded when more
indices are requested than ever before.
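A sketch of how a caller might use it, assuming a CoglVertexBuffer
already set up with four vertices per quad (the helper and the quad
count are illustrative):

  #include <cogl/cogl.h>

  static void
  draw_quads (CoglHandle vbo, int n_quads)
  {
    /* 6 indices per quad: two triangles sharing two vertices */
    CoglHandle indices =
      cogl_vertex_buffer_indices_get_for_quads (n_quads * 6);

    cogl_vertex_buffer_draw_elements (vbo,
                                      COGL_VERTICES_MODE_TRIANGLES,
                                      indices,
                                      0,                /* min_index */
                                      n_quads * 4 - 1,  /* max_index */
                                      0,                /* indices_offset */
                                      n_quads * 6);     /* count */
  }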
The allocate_available_size() method is a convenience method in
the same spirit as allocate_preferred_size(). While the latter
will allocate the preferred size of an actor regardless of the
available size provided by the actor's parent -- and thus it's
suitable for simple fixed layout managers like ClutterGroup -- the
former will take into account the available size provided by the
parent and never allocate more than that; it is, thus, suitable
for simple fluid layout managers.
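A sketch of the intended use inside a fluid container's ::allocate
implementation (MyFluidActor is a placeholder; how the container tracks
its child is elided):

  static void
  my_fluid_actor_allocate (ClutterActor           *self,
                           const ClutterActorBox  *box,
                           ClutterAllocationFlags  flags)
  {
    ClutterActor *child = g_object_get_data (G_OBJECT (self), "child");
    gfloat avail_width  = box->x2 - box->x1;
    gfloat avail_height = box->y2 - box->y1;

    CLUTTER_ACTOR_CLASS (my_fluid_actor_parent_class)->allocate (self,
                                                                 box,
                                                                 flags);

    /* never hand the child more than the parent itself was given;
     * clutter_actor_allocate_preferred_size() would ignore the
     * available size instead */
    clutter_actor_allocate_available_size (child,
                                           0, 0,
                                           avail_width, avail_height,
                                           flags);
  }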
The cogl-enum-types.h file is created by glib-mkenums under
/clutter/cogl/common, and then copied into /clutter/cogl in order
to make the inclusion of that file from cogl.h work.
Since we are copying it into a different location, the Makefile
for that location has to clean up the copy.
Notifications get fired off from both the internal timeline and
the wrapping animation here, so notifications should be frozen around
these property setters to avoid redundant emissions.
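The pattern, in generic GObject terms (a sketch, not the actual
ClutterAnimation code):

  static void
  update_with_frozen_notify (ClutterAnimation *animation)
  {
    /* coalesce the notifications from the animation and from its
     * internal timeline into a single batch */
    g_object_freeze_notify (G_OBJECT (animation));

    /* ...set the property on both the animation and the timeline... */

    g_object_thaw_notify (G_OBJECT (animation));
  }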
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Just a couple of final cleanups after the reimplementation of the
Animation model.
i) _set_mode does not need to set the timeline on the alpha
ii) freeze notifications around the setting of a new alpha
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>