The clutter_redraw() function is used by embedding toolkits to
force a redraw on a stage. Since everything is performed by
toggling a flag inside the Stage itself and then letting the
master clock advance, we need a ClutterStage method to ensure
that we start the master clock and redraw.
Instead of calculating a delta in the master clock, and passing that
into each timeline, make each timeline individually responsible for
remembering the last time and computing the delta.
This:
- Fixes a problem where we could spin infinitely processing
timeline-only frames with < 1msec differences.
- Makes timelines consistently start timing on the first frame,
instead of doing different things for the first started timeline
and for other timelines.
- Improves accuracy of elapsed time computations by avoiding
accumulating microsecond => millisecond truncation errors.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
clutter-master-clock.c clutter-master-clock.h: When the
SYNC_TO_VBLANK feature is not available, wait for 1/frame_rate
seconds since the start of the last frame before drawing the next
frame. Add _clutter_master_clock_start_running() to abstract
the usage of g_main_context_wakeup()
clutter-stage.c: Add _clutter_master_clock_start_running()
clutter-main.c: Update docs for clutter_set_default_frame_rate()
clutter_get_default_frame_rate() to no longer talk about timeline
frame rates.
test-text-perf.c test-text.c: Set a frame rate of 1000fps so that
frame-rate limiting doesn't affect the result.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Change CLUTTER_PRIORITY_REDRAW to be lower than the GTK+ resize
and relayout priorities to avoid starving GTK+ when run in the
same process as clutter.
Remove the unused CLUTTER_PRIORITY_TIMELINE
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of trying to guess about which motion events are
extraneous, queue up all events until we process a frame.
This allows us to look ahead and reliably compress consecutive
sequences of motion events.
clutter-main.c: Feed received events to the stage for queueing.
Remove old compression code. Remove clutter_get_motion_events_frequency()
clutter_set_motion_events_frequency()
clutter-stage.c: Keep a queue of pending events.
clutter-master-clock.c: Add processing of queued events to the
clock source dispatch function.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When a redraw is queued on a stage, simply set a flag; then in
the check/prepare functions of the master clock source, check
for stages that need redrawing.
This avoids the complexity of having multiple competing sources
at the same priority and makes the update ordering more reliable and
understandable.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If a timeline is added from a different thread, we need to
call g_main_context_wakeup() to wake the main thread up to
start updating the timeline.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of keeping a list of all timelines, and connecting to
signals and weak notifies, simply keep a list of running timelines;
this greatly simplifies both the book-keeping, and also determining
if there are any running timelines.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Remove code to advance the master clock after drawing a frame; if
there are any running timelines the master clock will do another
frame by itself, and the clock will be advanced before running
that frame.
With this change, there is no point in queueing an extra frame
redraw after completing a timeline, since we are always advancing
the timeline *before* redrawing, so remove that code as well.
(This does mean that calling clutter_timeline_stop() won't implicitly
cause the stage to be redrawn; this doesn't seem like something
an app should rely on in any case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The clutter_stage_fullscreen() and clutter_stage_unfullscreen() functions are
a GDK-ism. The underlying implementation is already using an accessor
with a boolean parameter.
This should bring the number of collisions between properties, methods
and signals down to zero.
The :fullscreen property is very confusing as currently implemented.
It can be written to a value, but the whole process might fail. If we
set:
g_object_set (stage, "fullscreen", TRUE, NULL);
and the fullscreen process fails or is not implemented, the value
will be reset to FALSE (if we're lucky) or left TRUE (most of the
time).
The writability is just a shorthand for invoking clutter_stage_fullscreen()
or clutter_stage_unfullscreen() depending on a boolean value without
using an if.
The :fullscreen property also greatly confuses high level languages,
since the same symbol is used:
- for a method name (Clutter.Stage.fullscreen())
- for a property name (Clutter.Stage.fullscreen)
- for a signal (Clutter.Stage::fullscreen)
For these reasons, the :fullscreen property should be renamed to :fullscreen-set
and be read-only. Implementations of the Stage should only emit the
StageState event to change from normal to fullscreen, and the Stage
will automatically update the value of the property and emit a notify
signal for it.
There have been changes in JSON-GLib upstream to clean up the
data structures, and facilitate introspection.
We still do not use the updated JsonParser with the (private) JsonScanner
code, since it's a fork of GLib's GScanner.
Otherwise if there is an error before the slices are created it will
try to free the first_pixels array and crash.
It now also checks whether first_pixels has been created before using
it to update the mipmaps. This should only happen for
cogl_texture_new_from_foreign and doesn't matter if the FBO extension
is available. It would be better in this case to fetch the first pixel
using glGetTexImage as Owen mentioned in the last commit.
tex->first_pixels was never set for foreign textures, leading
to a crash when the texture object is freed.
As a quick fix, simply set to NULL. A more complete fix would
require remembering if we had ever seen the first pixel uploaded,
and if not, doing a glReadPixels to get it before triggering the
mipmap update.
http://bugzilla.openedhand.com/show_bug.cgi?id=1645
Signed-off-by: Neil Roberts <neil@linux.intel.com>
It's very common that there's no reasonable fallback to do if the
blend or combine string you set isn't supported. So, rather than
requiring everybody to pass in a GError purely to catch syntax errors,
automatically g_warning() if a parse error is encountered and @error
is NULL.
http://bugzilla.openedhand.com/show_bug.cgi?id=1642
Signed-off-by: Robert Bragg <robert@linux.intel.com>
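For illustration, a minimal sketch of the resulting convention (the
material and combine string here are just examples, not part of this
change):
  CoglHandle material = cogl_material_new ();

  /* with a NULL GError a syntax error is now reported through
   * g_warning() instead of being silently ignored */
  cogl_material_set_layer_combine (material, 0,
                                   "RGBA = MODULATE (PREVIOUS, TEXTURE)",
                                   NULL);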
When we complete a timeline, we clamp the elapsed_time variable
to the range of the timeline. We need to adjust msecs_delta so that
when we check for hit markers we have the correct interval.
http://bugzilla.openedhand.com/show_bug.cgi?id=1641
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The Animation should be referenced during the notification of the
alpha value, since the callback is invoked depending on the Alpha
and it won't keep the Animation instance alive for us.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1537
ClutterEvent is not really gobject-introspection friendly because
of the whole discriminated union thing. In particular, if you get
a ClutterEvent in a signal handler, you probably can't access the
event-type-specific fields, and you probably can't call methods
like clutter_key_event_symbol() either, because you can't cast the
ClutterEvent to a ClutterKeyEvent.
The cleanest solution is to turn every accessor into ClutterEvent
methods, accepting a ClutterEvent* and internally checking the event
type.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1585
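For example, a signal handler can now stay entirely within the
ClutterEvent API (a minimal sketch; the handler and actor are
illustrative):
  static gboolean
  on_event (ClutterActor *actor,
            ClutterEvent *event,
            gpointer      data)
  {
    /* the accessor checks the event type internally, no cast needed */
    if (clutter_event_type (event) == CLUTTER_KEY_PRESS)
      g_print ("key symbol: %u\n", clutter_event_get_key_symbol (event));

    return FALSE;
  }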
According to the clutter_texture_set_cogl_texture() documentation you should
unref the handle as the texture takes its own reference.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
The OpenGL spec states that if you create a pixmap using glXCreatePixmap you
should use glXDestroyPixmap to destroy it.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Setting the pixmap for an unrealized ClutterGLXTexturePixmap should
not cause it to be realized, and certainly shouldn't cause the
REALIZED flag to be set without using clutter_actor_realize().
This patch uses the simple approach that:
- pixmap changes on an unrealized ClutterGLXTexturePixmap
are ignored
- when the ClutterGLXTexturePixmap is realized, we then create
the GLXPixmap and the corresponding texture.
The call to clutter_glx_texture_pixmap_update_area() is moved
from create_cogl_texture() to
clutter_glx_texture_pixmap_create_glx_pixmap() since
create_cogl_texture() is only called from one place, and updating
the area is really something we do *after* creating the texture,
not part of creating the texture.
clutter_glx_texture_pixmap_create_glx_pixmap() is reorganized a
bit to avoid debug-logging confusingly if it's called before a pixmap
has been set, and for readability.
http://bugzilla.openedhand.com/show_bug.cgi?id=1635
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
An implementation of realize() never needs to set the
CLUTTER_ACTOR_REALIZED flag, though it can unset the flag if
things fail unexpectedly. (Previously, stage backend implementations
had to do this since clutter_actor_realize() wasn't used; this
is no longer the case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1634
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Due to the accumulation of floating point errors, natural_width
and min_width can diverge significantly even if the math for
computing them is correct. So just clamp natural_width to
min_width instead of warning about it.
http://bugzilla.openedhand.com/show_bug.cgi?id=1632
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we use float temporaries when computing the bounds of
a group, then, depending on what variables are kept in registers
and what stored on the stack, the accumulated difference between
natural_width and min_width can be more than FLOAT_EPSILON.
Using double temporaries will eliminate the difference in most
cases, or, very rarely, reduce it to a last-bit error.
http://bugzilla.openedhand.com/show_bug.cgi?id=1632
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we are cloning a source actor with an unmapped parent, then when
we temporarily map the source actor:
- We need to skip the check that a mapped actor has a mapped
parent.
- We need to realize the actor's parents before mapping it,
or we'll get an assertion failure in clutter_actor_update_map_state()
because an actor with an unmapped parent is !may_be_realized.
http://bugzilla.openedhand.com/show_bug.cgi?id=1633
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Setting the stage size using clutter_actor_set_size() is almost always
wrong: the X11 stage implementation should save the size and queue a
relayout -- like it does when receiving a ConfigureNotify. The same
should happen when setting it to be full screen.
Since we build the Cogl GIR inside /clutter/cogl we should be looking
there when building the Clutter GIR. Otherwise g-ir-scanner will look
inside the gir directory -- and if you never built Clutter before it
will error out.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1638
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The load-finished signal has a GError* argument which is meant to
signify whether the loading was successful. However many of the
places in ClutterTexture that emit this signal directly pass their
'error' variable which is a GError** and will be NULL or not
completely independently of whether there was an error. If the
argument was dereferenced it would probably crash.
The test-texture-async interactive test case should also verify
that the ::load-finished signal is correctly emitted.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1622
Correctly apply De Morgan's laws to the short-circuit test in
clutter_timeline_pause(); it was short-circuiting always and
never actually pausing.
http://bugzilla.openedhand.com/show_bug.cgi?id=1629
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The commit 2c95b378 prevents clutter_animation_setup_property from being
called with fixed:: property names. This patch adds an additional
parameter "is_fixed" to clutter_animation_setup_property instead of
searching for "fixed::" in property_name.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
It should be possible to render a single PangoLayout with different
colors without recalculating the layout. This was not working because
the color used at the first render was being stored in the display
list. This broke changing the opacity on a ClutterText.
Now each node in the display list has a 'color override' flag which
marks whether it should use the base color or not. The base color is
now passed in from _cogl_pango_display_list_render_texture. The alpha
value is always taken from the base color.
The clutter_redraw() function is used by libraries embedding
Clutter inside another toolkit, instead of queueing a redraw
on the embedded stage. This means that clutter_redraw() should
perform the same sequence of actions done by the redraw idle
callback.
Clutter short-circuits painting when an actor's opacity is
zero. However if the actor is being painted from a ClutterClone then
it will be painted using the clone's opacity instead so the test was
broken.
* 1.0-integration: (138 commits)
[x11] Disable XInput by default
[xinput] Invert the XI extension version check
[cogl-primitives] Fix an unused variable warning when building GLES
[clutter-stage-egl] Pass -1,-1 to clutter_stage_x11_fix_window_size
Update the GLES backend to have the layer filters in the material
[gles/cogl-shader] Add a missing semicolon
[cogl] Move the texture filters to be a property of the material layer
[text] Fix Pango unit to pixels conversion
[actor] Force unrealization on destroy only for non-toplevels
[x11] Rework map/unmap and resizing
[xinput] Check for the XInput entry points
[units] Validate units against the ParamSpec
[actor] Add the ::allocation-changed signal
[actor] Use flags to control allocations
[units] Rework Units into logical distance value
Remove a stray g_value_get_int()
Remove usage of Units and macros
[cogl-material] Allow setting a layer with an invalid texture handle
[timeline] Remove the concept of frames from timelines
[gles/cogl-shader] Fix parameter spec for cogl_shader_get_info_log
...
Conflicts:
configure.ac
The XInput support in Clutter is still using XI 1.x. This will never
work correctly, and we are all waiting for XInput 2 anyway. The changes
internally should be minimal, so we can leave everything in place, but
it's better to disable XInput support by default -- at least for the
time being.
The texture filters are now a property of the material layer rather
than the texture object. Whenever a texture is painted with a material
it sets the filters on all of the GL textures in the Cogl texture. The
filter is cached so that it won't be changed unnecessarily.
The automatic mipmap generation has changed so that the mipmaps are
only generated when the texture is painted instead of every time the
data changes. Changing the texture sets a flag to mark that the
mipmaps are dirty. This works better if the FBO extension is available
because we can use glGenerateMipmap. If the extension is not available
it will temporarily enable automatic mipmap generation and reupload
the first pixel of each slice. This requires tracking the data for the
first pixel.
The COGL_TEXTURE_AUTO_MIPMAP flag has been replaced with
COGL_TEXTURE_NO_AUTO_MIPMAP so that it will default to
auto-mipmapping. The mipmap generation is now effectively free if you
are not using a mipmap filter mode so you would only want to disable
it if you had some special reason to generate your own mipmaps.
ClutterTexture no longer has to store its own copy of the filter
mode. Instead it stores it in the material and the property is
directly set and read from that. This fixes problems with the filters
getting out of sync when a cogl handle is set on the texture
directly. It also avoids the mess of having to rerealize the texture
if the filter quality changes to HIGH because Cogl will take care of
generating the mipmaps if needed.
The mapping and unmapping of the X11 stage implementation is
a bit broken. It's asynchronous, for starters, when it really
can avoid it by tracking the state internally.
The ordering of the map/unmap sequence is also broken with
respect to the resizing.
By tracking the state internally into StageX11 we can safely
remove the MapNotify and UnmapNotify X event handling.
In theory, we should use _NET_WM_STATE a lot more, and reuse
the X11 state flags for fullscreening as well.
Apparently, the XInput extension is using the same pkg-config
file ('xi') for both the 1.x and the 2.x API, so we need to
check for both the 1.x XGetExtensionVersion and the 2.x
XQueryInputVersion.
When declaring a property using ClutterParamSpecUnits we pass a
default type to limit the type of units we accept as valid values
for the property.
This means that we need to add the unit type check as part of the
validation process.
Sometimes it is useful to be able to track changes in the allocation
flags, like the absolute origin, inside children of a container.
Using the notify::allocation signal is not enough, in these cases, so
we need a specific signal that gives us both the allocation box and the
allocation flags.
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask and flags. This gives us room for expansion
without breaking API/ABI, and allows us to encode more information in
the allocation process instead of just changes of absolute origin.
Units as they have been implemented since Clutter 0.4 have always been
misdefined as "logical distance unit", while they were just pixels with
fractional bits.
Units should be reworked to be opaque structures to hold a value and
its unit type, that can be then converted into pixels when Clutter needs
to paint or compute size requisitions and perform allocations.
The previous API should be completely removed to avoid collisions, and
a new type:
ClutterUnits
should be added; the ability to install GObject properties using
ClutterUnits should be maintained.
It was previously possible to create a material layer with no texture
by setting some property on it such as the matrix. However it was not
possible to get back to that state without removing the layer and
recreating it. It is useful to be able to remove the texture to free
resources without forgetting the state of the layer so we can put a
different texture in later.
Timelines no longer work in terms of a frame rate and a number of
frames but instead just have a duration in milliseconds. This better
matches the working of the master clock where if any timelines are
running it will redraw as fast as possible rather than limiting itself
to the lowest timeline frame rate.
Most applications will just create animations and expect them to
finish in a certain amount of time without caring about how many
frames are drawn. If a frame is going to be drawn it might as well
update all of the animations to some fraction of the total animation
rather than rounding to the nearest whole frame.
The 'frame_num' parameter of the new-frame signal is now 'msecs' which
is a number of milliseconds progressed along the
timeline. Applications should use clutter_timeline_get_progress
instead of the frame number.
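For example, a minimal sketch of the duration-based usage (assuming
clutter_timeline_new() now takes the duration in milliseconds; the
actor and values are illustrative):
  static void
  on_new_frame (ClutterTimeline *timeline,
                gint             msecs,
                gpointer         data)
  {
    ClutterActor *actor = data;

    /* drive the animation from the overall progress (0.0 .. 1.0)
     * instead of a frame number */
    gdouble progress = clutter_timeline_get_progress (timeline);

    clutter_actor_set_opacity (actor, (guint8) (255 * progress));
  }

  static void
  start_fade_in (ClutterActor *actor)
  {
    ClutterTimeline *timeline = clutter_timeline_new (500 /* msecs */);

    g_signal_connect (timeline, "new-frame",
                      G_CALLBACK (on_new_frame), actor);
    clutter_timeline_start (timeline);
  }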
Markers can now only be attached at a time value. The position is
stored in milliseconds rather than at a frame number.
test-timeline-smoothness and test-timeline-dup-frames have been
removed because they no longer make sense.
The clutter_actor_map and unmap functions need to be called to
properly update the mapped state. This matches the changes to the X11
stage in 125bded8.
If the application code calls for destruction of an actor we need
to make sure that the actor is unrealized before running the dispose
sequence; otherwise, we might trigger an assertion failure on composite
actors.
The commit 762873e79e is completely
and utterly wrong and I should have never pushed it.
Serves me right for trying to work on three different branches and
on three different things.
Currently, the clock source spins a redraw every time there is at
least a timeline running. If the timelines were not advanced in
the previous frame, though, because their interval is larger than
the vblanking interval then this will lead to excessive redraws of
the scenegraph even if nothing has changed.
To avoid this a simple guard should be set by the MasterClock::advance
method in case no timeline was effectively advanced, and checked
before dispatching the stage redraws.
When creating a Cogl texture from a Cogl bitmap it would steal the
data by setting the bitmap_owner flag and clearing the data pointer
from the bitmap. The data would be freed by the time the
new_from_bitmap is finished. There is no reason to do this because the
data will be freed when the Cogl bitmap is unref'd and it is confusing
not to be able to reuse the bitmap for creating multiple textures.
clutter_color_from_string() only supported the "#rrggbbaa" format with
alpha channel, this patch adds support for "#rgba".
Colors in "#rrggbb" format were parsed manually, this is now left to
the pango color parsing fallback, since that's handling it just fine.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
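A minimal sketch of the two forms (the values are illustrative):
  ClutterColor color;

  /* new short form with alpha; each nibble is doubled, so this is
   * equivalent to "#ff00aa88" */
  clutter_color_from_string (&color, "#f0a8");

  /* the long form with alpha keeps working as before */
  clutter_color_from_string (&color, "#ff00aa88");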
The cogl_shader_get_info_log() function is very inconvenient for
language bindings and for regular use, as it requires a static
buffer to be filled -- basically just providing a wrapper around
glGetInfoLogARB().
Since COGL aims to be a more convenient API than raw GL we should
just make cogl_shader_get_info_log() return an allocated string
with the GLSL compiler log.
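A minimal sketch of the resulting usage (the GLSL source string is
assumed to be defined elsewhere):
  CoglHandle shader = cogl_create_shader (COGL_SHADER_TYPE_FRAGMENT);

  cogl_shader_source (shader, glsl_source);
  cogl_shader_compile (shader);

  if (!cogl_shader_is_compiled (shader))
    {
      /* the log is now returned as a newly allocated string */
      char *log = cogl_shader_get_info_log (shader);

      g_warning ("GLSL compilation failed: %s", log);
      g_free (log);
    }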
Instead of using GL_TRIANGLES and uploading the indices every time, it
now uses GL_QUADS instead on OpenGL. Under GLES it still uses indices
but it uses the new cogl_vertex_buffer_indices_get_for_quads function
to avoid uploading the indices every time.
This requires the _cogl_vertex_buffer_indices_pointer_from_handle
function to be exposed privately to the rest of Cogl.
The static_indices array has been removed from the Cogl context.
The GIR file for Clutter still contains symbols from COGL, even
though we provide a Cogl GIR as well. The Clutter GIR should
depend on the Cogl GIR instead.
All the underlying implementation and the public entry points have
been switched to floats; the only missing bits are the Actor properties
that deal with positioning and sizing.
This usually means a major pain when dealing with GValues and varargs
functions. While GValue will warn you when dealing with the wrong
conversions, varargs will simply die a horrible (and hard to debug)
death via segfault. Nothing much to do here, except warn people in the
release notes and hope for the best.
The documentation for ClutterTexture's set_from_rgb_data() and
set_from_yuv_data() says:
Note: This function is likely to change in future versions.
This is not true, since they'll remain for the whole 1.x API cycle.
Now that CoglVertexBuffers support indices we can use them with GLES
to avoid duplicating vertices. Regular GL still uses GL_QUADS because
it is shown to still have a performance benefit over indices with the
Intel drivers.
This function can be used as an efficient way of drawing groups of
quads without using GL_QUADS. It generates a VBO containing the
indices needed to render using pairs of GL_TRIANGLES. The VBO is
globally cached so that it only needs to be uploaded whenever more
indices are requested than ever before.
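A minimal sketch of how a renderer would use it (the quad count is
illustrative, and the exact draw call is omitted):
  int n_quads = 128;

  /* 6 indices per quad (two GL_TRIANGLES); the returned handle is
   * globally cached, so asking again for the same or a smaller count
   * does not re-upload anything */
  CoglHandle indices =
    cogl_vertex_buffer_indices_get_for_quads (n_quads * 6);

  /* ... pass the handle to cogl_vertex_buffer_draw_elements () ... */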
The allocate_available_size() method is a convenience method in
the same spirit as allocate_preferred_size(). While the latter
will allocate the preferred size of an actor regardless of the
available size provided by the actor's parent -- and thus it's
suitable for simple fixed layout managers like ClutterGroup -- the
former will take into account the available size provided by the
parent and never allocate more than that; it is, thus, suitable
for simple fluid layout managers.
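A minimal sketch of a fluid container using it inside its allocate()
implementation (MyContainer and its single child member are
hypothetical):
  static void
  my_container_allocate (ClutterActor           *actor,
                         const ClutterActorBox  *box,
                         ClutterAllocationFlags  flags)
  {
    MyContainer *self = MY_CONTAINER (actor);
    gfloat avail_w = box->x2 - box->x1;
    gfloat avail_h = box->y2 - box->y1;

    CLUTTER_ACTOR_CLASS (my_container_parent_class)->allocate (actor, box, flags);

    /* the child is never allocated more than the available size */
    clutter_actor_allocate_available_size (self->child,
                                           0, 0,
                                           avail_w, avail_h,
                                           flags);
  }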
The cogl-enum-types.h file is created by glib-mkenums under
/clutter/cogl/common, and then copied in /clutter/cogl in order
to make the inclusion of that file work inside cogl.h.
Since we're copying it in a different location, the Makefile
for that location has to clean up the copy.
Notifications should be fired off from both the internal timeline and
the wrapping animation here, so notifiers should be frozen around these
property setters.
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Just a couple of final cleanups after the reimplementation of the
Animation model.
i) _set_mode does not need to set the timeline on the alpha
ii) freeze notifications around the setting of a new alpha
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The "started" signal is sent first after the timeline has been set to the
'running' state. For this reason, checking if the clock has any running
timelines running will always return true in the "started" signal handler:
the timeline that sent the signal is running.
What needs to be checked in the signal handler is if there are any
timelines running other than the one that emitted the ::started signal,
which we know is running anyway.
This prevents frames from being lost at the beginning of an animation when
a timeline is started after a quiescent period.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1617
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
We avoid rebuilding cogl-enum-types.h and cogl-enum-types.c by
using a "guard" -- a stamp file that will block Makefile. Since
we need cogl-enum-types.h into /clutter/cogl as well for the
cogl.h include to work, if we copy the cogl-enum-types.h
unconditionally it will cause a rebuild of the whole COGL; which
will cause a full rebuild.
To solve this, we can copy the header file when generating it
under the stamp file.
The libclutter-cogl internal object should be the only dependency
for Clutter, since we are already copying it inside clutter/cogl
for the introspection scanner. For this reason, the backend-specific,
real internal object should be built with the backend encoded into
the file name, like libclutter-common. This makes the build output
a little bit clearer: instead of having two:
LINK libclutter-cogl-common.la
...
LINK libclutter-cogl.la
LINK libclutter-cogl.la
We'll have:
LINK libclutter-cogl-common.la
...
LINK libclutter-cogl-gl.la
LINK libclutter-cogl.la
Same applies for the GLES backend.
Just like we do with GObject types and G_DEFINE_TYPE, we should
use the g_once_init_enter/g_once_init_leave mechanism to make the
GType registration of enumeration types thread safe.
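A minimal sketch of a generated registration function using that
pattern (the values table is abbreviated for illustration):
  GType
  cogl_pixel_format_get_type (void)
  {
    static volatile gsize type_id__volatile = 0;

    if (g_once_init_enter (&type_id__volatile))
      {
        static const GEnumValue values[] = {
          { COGL_PIXEL_FORMAT_ANY, "COGL_PIXEL_FORMAT_ANY", "any" },
          { 0, NULL, NULL }
        };
        GType type_id =
          g_enum_register_static (g_intern_static_string ("CoglPixelFormat"),
                                  values);

        g_once_init_leave (&type_id__volatile, type_id);
      }

    return type_id__volatile;
  }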
The setup_viewport() function should only be used by Clutter and
not by application code.
It can be emulated by changing the Stage size and perspective and
requeueing a redraw after calling clutter_stage_ensure_viewport().
The backface culling enabling function was split and renamed, just
like the depth testing one, so we need to add the macro to the
cogl-deprecated.h header.
Previously indices were tightly bound to a particular Cogl vertex buffer
but we would like to be able to share indices so now we have
cogl_vertex_buffer_indices_new () which returns a CoglHandle.
In particular we would like to have a shared set of indices for drawing
lists of quads that can be shared between the pango renderer and the
Cogl journal.
At the moment Cogl doesn't do much batching of quads so most of the time we
are flushing a single quad at a time. This patch simplifies how we submit
those quads to OpenGL by using glDrawArrays with GL_TRIANGLE_FAN mode
instead of sending indexed vertices using GL_TRIANGLES mode.
Note: I hope to follow up soon with changes that improve our batching and
also move the indices into a VBO so they don't need to be re-validated every
time we call glDrawElements.
To assist people porting code from 0.8, the cogl_texture_* functions that
have been replaced now have defines that give some hint as to how they
should be replaced.
cogl_enable_depth_test and cogl_enable_backface_culling have been renamed
and now have corresponding getters, the new functions are:
cogl_set_depth_test_enabled
cogl_get_depth_test_enabled
cogl_set_backface_culling_enabled
cogl_get_backface_culling_enabled
This adds cogl_matrix api for multiplying matrices either by a perspective
or ortho projective transform. The internal matrix stack and current-matrix
APIs also have corresponding support added.
New public API:
cogl_matrix_perspective
cogl_matrix_ortho
cogl_ortho
cogl_set_modelview_matrix
cogl_set_projection_matrix
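A minimal sketch of setting up a projection with the new calls (the
values are illustrative):
  static void
  setup_projection (int width, int height)
  {
    CoglMatrix projection;

    cogl_matrix_init_identity (&projection);
    cogl_matrix_perspective (&projection,
                             60.0f,                          /* fov_y */
                             (float) width / (float) height, /* aspect */
                             0.1f,                           /* z near */
                             100.0f);                        /* z far */

    cogl_set_projection_matrix (&projection);
  }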
cogl_create_context is dealt with internally when _cogl_get_default_context
is called, and cogl_destroy_context is currently never called.
It might be nicer later to get an object back when creating a context so
Cogl can support multiple contexts, so these functions are being removed
from the API until we get a chance to address context management properly.
For now cogl_destroy_context is still exported as _cogl_destroy_context so
Clutter could at least install a library deinit handler to call it.
Originally cogl_vertex_buffer_add_indices let the user pass in their own unique
ID for the indices; now the ID is generated internally and returned to the
caller.
It's now possible to add arrays of indices to a Cogl vertex buffer and
they will be put into an OpenGL vertex buffer object. Since it's quite
common for index arrays to be static it saves the OpenGL driver from
having to validate them repeatedly.
This changes the cogl_vertex_buffer_draw_elements API: It's no longer
possible to provide a pointer to an index array at draw time. So
cogl_vertex_buffer_draw_elements now takes an indices identifier that
should correspond to an identifier returned when calling
cogl_vertex_buffer_add_indices ()
This is being removed before we release Clutter 1.0 since the implementation
wasn't complete, and so we assume no one is using this yet. Until we have
someone with a good use case, we can't pretend to support breaking out into
raw OpenGL.
There were a number of functions intended to support creating new
primitives using materials, but at this point they aren't used outside of
Cogl so, until someone has a use case and we can get feedback on this
API, it's being removed before we release Clutter 1.0.
This removes the following API:
cogl_material_set_blend_factors
cogl_material_set_layer_combine_function
cogl_material_set_layer_combine_arg_src
cogl_material_set_layer_combine_arg_op
These were rather awkward to use, so since it's expected very few people are
using them at this point and it should be straightforward to switch over
to blend strings, the API is being removed before we release Clutter 1.0.
Setting up layer combine functions and blend modes is very awkward to do
programatically. This adds a parser for string based descriptions which are
more consise and readable.
E.g. a material layer combine function could now be given as:
"RGBA = ADD (TEXTURE[A], PREVIOUS[RGB])"
or
"RGB = REPLACE (PREVIOUS)"
"A = MODULATE (PREVIOUS, TEXTURE)"
The simple syntax and grammar are only designed to expose standard fixed
function hardware, more advanced combining must be done with shaders.
This includes standalone documentation of blend strings covering the aspects
that are common to blending and texture combining, and adds documentation
with examples specific to the new cogl_material_set_blend() and
cogl_material_layer_set_combine() functions.
Note: The hope is to remove the now redundant bits of the material API
before 1.0
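For example, standard source-over blending can be set up with a blend
string like this (a minimal sketch):
  CoglHandle material = cogl_material_new ();
  GError *error = NULL;

  if (!cogl_material_set_blend (material,
                                "RGBA = ADD (SRC_COLOR * (SRC_COLOR[A]), "
                                "DST_COLOR * (1-SRC_COLOR[A]))",
                                &error))
    {
      g_warning ("blend string not supported: %s", error->message);
      g_clear_error (&error);
    }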
After long deliberation, the Animation class handling of the
:mode, :duration and :loop properties, as well as the conditions
for creating the Alpha and Timeline instances, came out as far too
complicated for their own good.
This is a rework of the API/parameters matrix and behaviour:
- :mode accessors will create an Alpha, if needed
- :duration and :loop accessors will create an Alpha and a Timeline
if needed
- :alpha will set or unset the Alpha
- :timeline will set or unset the Timeline
Plus, more documentation on the Animation class itself.
Many thanks to Jonas Bonn <jonas@southpole.se> for the feedback
and the ideas.
The Animatable interface implementation will always have the computed
value applied, whilst the non-Animatable objects go through the
interval validation first to avoid incurring in assertions and
warnings.
The Animatable::animate_property() should also be able to validate the
property it's supposed to interpolate, and eventually discard it. This
requires adding a return value to the virtual function (and its wrapper
function).
The Animation code will then apply the computed value only if the
animate_property() returns TRUE -- unifying the code path with the
non-Animatable objects.
The Animation class should proxy the :mode, :duration and :loop
properties whenever possible, to avoid them going out of sync when
changed using the Alpha and Timeline instances directly.
Currently, if Timeline:duration is changed, querying Animation:duration
will yield the old value, but the animation itself (being driven by
the Timeline) will use the Timeline's :duration new value. This holds
for the :loop and :mode properties as well.
Instead, the getters for the Animation's :duration, :loop and
:mode properties should ask the relevant object -- if any. The
loop, duration and mode values inside AnimationPrivate should only
be used if no Timeline or no Alpha instances are available, or
when creating new instances.
The Animation should not directly manipulate a Timeline instance,
but it should defer to the Alpha all handling of the timeline.
This means that:
- set_duration() and set_loop() will either create a Timeline or
will set the :duration and :loop properties on the Timeline; if
the Timeline must be created, and no Alpha instance is available,
then a new Alpha instance will be created as well and the newly
created Timeline will be assigned to the Alpha
- if set_mode() is called on an Animation instance without an Alpha, the
Alpha will be created; a Timeline will also be created
- set_alpha() will replace the Alpha; if the new Alpha does not
have a Timeline associated then a Timeline will be created using
the current :duration and :loop properties of Animation; otherwise,
if the replaced Alpha had a timeline, the timeline will be
transferred to the new one
The CoglTexture constructors expose the "max-waste" argument for
controlling the maximum amount of wasted areas for slicing or,
if set to -1, disables slicing.
Slicing is really relevant only for large images that are never
repeated, so it's a useful feature only in controlled use cases.
Specifying the amount of wasted area is, on the other hand, just
a way to mess up this feature; 99% of the time, you either pull this
number out of thin air, hoping it's right, or you try to do the
right thing and you choose the wrong number anyway.
Instead, we can use the CoglTextureFlags to control whether the
texture should not be sliced (useful for Clutter-GST and for the
texture-from-pixmap actors) and provide a reasonable value for
enabling the slicing ourselves. At some point, we might even
provide a way to change the default at compile time or at run time,
for particular platforms.
Since max_waste is gone, the :tile-waste property of ClutterTexture
becomes read-only, and it proxies the cogl_texture_get_max_waste()
function.
Inside Clutter, the only cases where the max_waste argument was
not set to -1 are in the Pango glyph cache (which is a POT texture
anyway) and inside the test cases where we want to force slicing;
for the latter we can create larger textures that will be bigger than
the threshold we set.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Signed-off-by: Neil Roberts <neil@linux.intel.com>
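A minimal sketch of the resulting constructor usage (the file name is
illustrative):
  GError *error = NULL;

  /* slicing is now disabled with a flag instead of max_waste = -1 */
  CoglHandle tex = cogl_texture_new_from_file ("image.png",
                                               COGL_TEXTURE_NO_SLICING,
                                               COGL_PIXEL_FORMAT_ANY,
                                               &error);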
Sometimes it is necessary for third party code to have a
function called during the redraw process, so that you can
update the scenegraph before it is painted.
If we are short-circuiting the paint when the opacity is zero we still
need to clear the queued_redraw flag otherwise it won't be possible to
queue another redraw of the actor until something else has caused a
paint first.
* master:
[cogl-vertex-buffer] Ensure the clip state before rendering
[test-text-perf] Small fix-ups
Add a test for text performance
[build] Ensure that cogl-debug is disabled by default
[build] The cogl GE macro wasn't passing an int according to the format string
Use the right internal format for GL_ARB_texture_rectangle
[actor_paint] Ensure painting is a NOP for actors with opacity = 0
Make backface culling work with vertex buffers
Before any rendering is done by Cogl it needs to ensure the clip stack
is set up correctly by calling cogl_clip_ensure. This was not being
done for the Cogl vertex buffer so it would still use the clip from
the previous render.
Now that everything is float, the marshalling function of the
size-change signal should reflect that fact.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
GLES doesn't support GL_QUADS. This patch makes it use GL_TRIANGLES
instead in that case. Unfortunately this means submitting two extra
vertices per quad. It could be better to use indexed elements once
CoglVertexBuffers gains support for that.
When ClutterGLXTexturePixmap uses GL_ARB_texture_rectangle,
it needs to pass the right internal format (GL_RGB or GL_RGBA)
when it initializes the texture with glTexImage2D() or later
handling won't recognize the alpha channel.
http://bugzilla.openedhand.com/show_bug.cgi?id=1586
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Since it is convenient to use geometry with an opacity of 0 for input only
purposes it's a worthwhile optimization to avoid submitting anything
for such actors while painting.
Backface culling is enabled as part of cogl_enable so the different
rendering functions in Cogl need to explicitly opt-in to have backface
culling enabled. Cogl vertex buffers should allow backface culling so
they should check whether it is enabled and then set the appropriate
cogl_enable flag.
In order to cope with the situation where an application renders with
a PangoLayout, makes some changes and then renders again with the same
layout, CoglPangoRenderer needs to detect that the changes have
occurred so that it can recreate the display list. This is achieved by
keeping a reference to the first line of the layout. When the layout
is changed Pango will clear the layout pointer in the first line and
create a new line. So if the layout pointer in the line becomes NULL
then we know the layout has changed. This trick was suggested by
Behdad Esfahbod in this email:
http://mail.gnome.org/archives/gtk-i18n-list/2009-May/msg00019.html
When a position is given to cogl_pango_render_layout_subpixel it
translates the GL matrix by the coordinates. However it was not
dividing by PANGO_SCALE so the coordinates were completely wrong.
Most of the operations involving the texture's allocated area require
floats -- either for computations or for setting the geometry into
COGL. So it doesn't make any sense to use get_allocation_coords() and
cast everything to floats.
Currently, COGL depends on defining debug symbols by manually
modifying the source code. When it's done, it will forcefully
print stuff to the console.
Since COGL also has a pretty, runtime-selectable debugging API
we might as well switch everything to it.
In order for this to happen, configure needs a new:
--enable-cogl-debug
command line switch; this will enable COGL debugging, the
CoglHandle debugging and will also turn on the error checking
for each GL operation.
The default setting for the COGL debug defines is off, since
it slows down the GL operations; enabling it for a particular
debug build is trivial, though.
COGL has a debug message system like Clutter's own. In parallel,
it also uses a couple of #defines. Spread around there are also
calls to printf() instead of the more correct g_log* wrappers.
This commit tries to unify and clean up the macros and the
debug message handling inside COGL to be more consistent.
ClutterTexture has many properties that can only be accessed using
the GObject API. This is fairly inefficient and makes binding the
class overly complicated.
The Texture class should have accessor methods for all its properties,
properly documented.
The code for the conversion of the GL error enumeration code
into a string is not following the code style and conventions
we follow in Clutter and COGL.
The GE() macro is also using fprintf(stderr) directly instead
of using g_warning() -- which is redirectable to an alternative
logging system using the g_log* API.
We use math routines inside Cogl, so it's correct to have it in
the LIBADD line. In normal usage something else was pulling in
-lm, but the introspection is relying on linking against the
convenience library.
Based on a patch by: Colin Walters <walters@verbum.org>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The timeline created when calling set_timeline(NULL) is referenced
even though we implicitly own it. When the Animation is destroyed,
the timeline is then leaked.
Thanks to: Richard Heatley <richard.heatley@starleaf.com>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1548
Add a method for deleting the current selection inside a Text actor.
This is useful for subclasses.
See bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1521
Based on a patch by: Raymond Liu <raymond.liu@intel.com>
ClutterAnimation currently inherits the initial floating reference
semantics from GInitiallyUnowned. An Animation is, though, meant to
be used as a top-level object, like a Timeline or a Behaviour, and
not "owned" by another object. For this reason, the initial floating
reference does not make any sense.
Document that repeated calls to clutter_cairo_texture_create()
continue drawing on the same cairo_surface_t. Add
clutter_cairo_texture_clear() for when you don't want that behavior.
http://bugzilla.openedhand.com/show_bug.cgi?id=1599
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
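A minimal sketch of the documented behaviour (the drawing is
illustrative):
  static void
  redraw_badge (ClutterCairoTexture *texture)
  {
    cairo_t *cr;

    /* erase the previous contents; without this, create() keeps
     * drawing onto the same surface */
    clutter_cairo_texture_clear (texture);

    cr = clutter_cairo_texture_create (texture);
    cairo_set_source_rgba (cr, 1.0, 0.0, 0.0, 1.0);
    cairo_arc (cr, 16, 16, 14, 0, 2 * G_PI);
    cairo_fill (cr);
    cairo_destroy (cr);
  }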
The cursor x position is already translated, so we do not need to take the
actor's allocation into account when calculating scrolling.
Additionally, we need to update the text_x value before running
clutter_text_ensure_cursor_position.
Add any scrolling offset to the x value when in single line mode.
Now that the offset is taken into account in the position_to_coords
function, we do not need to adjust the cursor x manually in
clutter_text_paint.
If the cursor is already at the end of the Text contents then we
need to maintain its position when deleting the previous character
using the relative key binding.
The required "fake" libclutter-cogl.la upon with the main clutter
shared object depends is only built with introspection enabled
instead of being built unconditionally.
Passing:
--library=clutter-@CLUTTER_FLAVOUR@-@CLUTTER_API_VERSION@
to g-ir-scanner, when building Cogl was causing g-ir-scanner to
link the introspection program against the installed clutter library,
if it existed, or to fail otherwise. Instead, copy the handling from
the json/ directory where we link against the convenience library
to scan, and do the generation of the typelib later in the
main clutter/directory.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1594
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If text is set, ClutterText should never return less than the layout
height for minimum and preferred heights.
This holds unless ellipsize and wrap are enabled, in which case the
minimum height should be the height of the first line -- which is
the height needed to at the very least show the ellipsization.
Based on a patch by: Thomas Wood <thomas@openedhand.com>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1598
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This is another step in abstracting the backend operations
that are currently spread all across the board back into the
backend implementations where they belong.
The GL context creation, for instance, is delegated to the stage
realization which makes it a critical path for every operation
that is GL-context bound. This usually does not make any difference
since we realize the default stage, but at some point we might
start looking into avoiding the default stage realization in order
to make the Clutter startup faster.
It also makes the code maintainable because every part is self
contained and can be reworked with the minimum amount of pain.
The master clock is using the redraw priority to create the source
that will be used to spin the paint sequence if something is being
animated using a timeline.
Unfortunately, the priority is too high and this causes starvation
when embedding into other toolkits -- like gtk+.
Thanks to Havoc Pennington for catching this.
The XVisualInfo for GL is created when a stage is being realized.
When embedding Clutter inside another toolkit we might not want to
realize a stage to extract the XVisualInfo, then set the stage
window using a foreign X Window -- which will cause a re-realization.
Instead, we should abstract as much as possible into the X11 backend.
Unfortunately, the XVisualInfo for GL is requested using GLX API; for
this reason we have to create a ClutterBackendX11 method that we
override inside the ClutterBackendGLX implementation.
This also allows us to move a little bit of complexity from out of
the stage realization, which is currently a very delicate and hard
to debug section.
I was seeing clutter_text_get_selection trying to malloc up to 4Gb due to
unexpected negative arithmetic for the start/end offsets which resulted
in a crash.
This just tests for positions of -1 before deciding if the start/end
positions need to be swapped. The conversion from position to byte offset
already works with -1.
cogl_clip_push_window_rect is implemented using GPU scissoring which allows
the GPU to cull anything that falls outside a given rectangle. Since in the
case of picking we only ever care about a single pixel we can get the GPU to
ignore all geometry that doesn't intersect that pixel and only rasterize for
one pixel.
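A minimal sketch of the picking fast path (the render step is
omitted):
  static void
  pick_at (int x, int y)
  {
    /* scissor away everything except the pixel we care about */
    cogl_clip_push_window_rect (x, y, 1, 1);

    /* ... render the scene in pick mode: the GPU only rasterizes the
     * geometry intersecting that single pixel ... */

    cogl_clip_pop ();
  }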
Previously clipping could only be specified in object coordinates, now
rectangles can also be pushed in window coordinates.
Internally rectangles pushed this way are intersected and then clipped using
scissoring. We also transparently try to convert rectangles pushed in
object coordinates into window coordinates as we anticipate the scissoring
path will be faster than the clip planes and undoubtedly it will be faster
than using the stencil buffer.
The stencil buffer is always cleared the first time a clip is used
that needs it and the stencil test is disabled otherwise so there is
no need to clear before a paint.
COGLenum, COGLint and COGLuint which were simply typedefs for GL{enum,int,uint}
have been removed from the API and replaced with specialised enum typedefs, int
and unsigned int. These were causing problems for generating bindings and
were also considered poor style.
The cogl texture filter defines CGL_NEAREST and CGL_LINEAR etc are now replaced
by a namespaced typedef 'CoglTextureFilter' so they should be replaced with
COGL_TEXTURE_FILTER_NEAREST and COGL_TEXTURE_FILTER_LINEAR etc.
The shader type defines CGL_VERTEX_SHADER and CGL_FRAGMENT_SHADER are handled by
a CoglShaderType typedef and should be replaced with COGL_SHADER_TYPE_VERTEX and
COGL_SHADER_TYPE_FRAGMENT.
cogl_shader_get_parameteriv has been replaced by cogl_shader_get_type and
cogl_shader_is_compiled. More getters can be added later if desired.
Calling glReadPixels is bad enough in forcing us to synchronize the CPU with
the GPU, but glFinish has even stronger synchronization semantics than
glReadPixels which may negate some driver optimizations possible in
glReadPixels.
Commit 43fa38fcf5 broke out-of-tree builds by removing some of the
builddir directories from the include path. builddir/clutter/cogl and
builddir/clutter are needed because cogl.h and cogl-defines-gl.h are
automatically generated by the configure script. The main clutter
headers are in the srcdir so this needs to be in the path too.
When the stage state changes between active and inactive, send out
key-focus-in/key-focus-out signal for the current key focused
actor on the stage.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1503
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Setting the stage window using the set_stage_foreign() method will
lead to a re-realization. We need to make sure that the Drawable
currently associated to the GL context is set to None, to avoid a
BadDrawable error or, if we're unlucky, a segfault in the X server.
This commit reverts part of commit 5bcde25c - specifically the
part that forced a realization of the stage if we are ensuring
the GL context with it. This makes Clutter behave like it did
prior to commit 5bcde25c: if we are asked to ensure the GL context
with an unrealized stage we simply pass NULL to the backend
implementation.
When showing a Stage for the first time we end up realizing the stage
implementation before realizing the wrapper. This leads to segmentation
faults or errors coming from the backend because we're fumbling the
state and realization sequence.
Since we are destroying any previously set VisualInfo we keep, we know
for sure that stage->xvisinfo is going to be None; hence, no reason to
check this condition.
The verify_map_state() internal method is conditionally compiled
if we have CLUTTER_ENABLE_DEBUG set; for this reason, all calls to
that method should be made conditional.
When building Clutter with introspection enabled everything stops
at Cogl GIR generation because it depends on the installed library
to work. Since we still require some changes in the API to be able
to build the GIR and the typelib for Cogl we should disable the
generation of the GIR as well.
The fix for bug 1138 broke multi-stage support on GLX, causing
X11 to segfault with the following stack trace:
Backtrace:
0: /usr/X11R6/bin/X(xf86SigHandler+0x7e) [0x80c91fe]
1: [0xb7eea400]
2: /usr/lib/xorg/modules/extensions//libglx.so [0xb7ae880c]
3: /usr/lib/xorg/modules/extensions//libglx.so [0xb7aec0d6]
4: /usr/X11R6/bin/X [0x8154c24]
5: /usr/X11R6/bin/X(Dispatch+0x314) [0x808de54]
6: /usr/X11R6/bin/X(main+0x4b5) [0x8074795]
7: /lib/i686/cmov/libc.so.6(__libc_start_main+0xe5) [0xb7c75775]
8: /usr/X11R6/bin/X(FontFileCompleteXLFD+0x21d) [0x8073a81]
which I can only track down to clutter_backend_glx_ensure_current()
being passed a NULL stage -- something that happens when a stage
is not correctly realized. That should lead to a glXMakeCurrent(None)
and not to a segmentation fault, though.
When destroying a top-level actor we can actually relax the verification
of the map state, since it might be fully asynchronous and we might not
re-enter the main loop in time to receive the unmap notification.
If the filter means that the there should be no rows left in the model,
clutter_model_get_iter_at_row (model, 0) should return NULL.
However the current implementation misbehaves and returns a bad iterator.
This change resolves the issue by tracking if we actually found any
non-filtered rows in our pass through the sequence.
OH Bugzilla: 1591
The stage is chaining up to the ClutterGroup::paint instead of
the ClutterGroup::pick method. This works anyway because we
detect the stage by default, but it's not a reliable solution
in case we decide to change the picking further on.
The master clock is currently advanced using a frame source driven
by the default frame rate. This breaks the sync to vblank because
the vblanking rate could be different than 60 Hz -- or it might be
completely disabled (e.g. with CLUTTER_VBLANK=none).
We should be using the main loop to check if we have timelines
playing, and if so queue a redraw on the stages we own.
We should also prepare the subsequent frame at the end of the redraw
process, so if there are new redraws we will have the scene already
in place.
This makes Clutter redraw at the maximum frame rate, which is
limited by the vblanking frequency.
Currently, picking in ClutterGroup pollutes the CLUTTER_DEBUG=paint
logs since it just calls the paint function. Reimplementing the pick
doesn't make us lose anything -- it might even be slightly faster
since we don't have to do a (typed) cast and a class dereference.
The timeline created when calling set_timeline(NULL) is referenced
even though we implicitly own it. When the Animation is destroyed,
the timeline is then leaked.
Thanks to: Richard Heatley <richard.heatley@starleaf.com>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1548
Currently, the conversion from em to units is done by using the
default font name inside the backend. For actors using their own
font/text layout we need a way to specify the font name along
with the quantity we wish to transform.
Commit 515350a7 renamed ::focus-in and ::focus-out to ::key-focus-in
and ::key-focus-out respectively. One signal emission for ::focus-out
escaped the renaming in ClutterStage.
Currently, the default screen guard value is 0, which is a valid
screen number on X11, and it might not be the default.
Patch suggested by: Owen W. Taylor <otaylor@redhat.com>
Currently, the introspection data for Cogl is built right into
Clutter's own typelib. This makes functions like:
cogl_path_round_rectangle()
Appear as:
Clutter.cogl_path_round_rectangle()
It should be possible, instead, to have a Cogl namespace and:
Cogl.path_round_rectangle()
This means building introspection data for Cogl alone. Unfortunately,
there are three types defined in Cogl that confuse the introspection
scanner, and make it impossible to build a typelib:
COGLint
COGLuint
COGLenum
These three types should go away before 1.0, substituted by int,
unsigned int and proper enumeration types. For this reason, we can
just set up the GIR build and wait until the last moment to create
the typelib. Once that has been done, we will be able to safely
remove the Cogl API from the Clutter GIR and typelib and let
people import Cogl if they want to use the Cogl API via introspection.