Each actor managed by a BinLayout policy should reside inside its
own "layer", with horizontal and vertical alignment. The :x-align
and :y-align properties of the BinLayout are the default alignment
policies, which are copied to each new "layer" when it is created.
The set_alignment() and get_alignment() methods of BinLayout can
be used to operate on a specific "layer".
The whole machinery uses the new ChildMeta support inside the
LayoutManager base abstract class.
The ChildMeta object is a storage for child-container properties,
that is properties that exist only when an actor is inside a specific
container. The LayoutManager delegate class should also have
layout-specific properties -- so, for this job, we can "recycle"
ChildMeta as the storage.
Emit the ::layout-changed signal when the BinLayout alignment policies change.
This will result in a queue_relayout() on the containers using the
BinLayout layout manager.
* Use ::layout-changed to queue a relayout when the layout changes
* Destroy the Box children when destroying the Box
* Allow getting the layout manager from the Box
If a sub-class of LayoutManager wishes to implement a parametrized
layout policy it also needs a way to notify the container using the
layout manager that the layout has changed. We cannot do it directly
and automatically from the LayoutManager because a) it has no back
link to the actor that is using it and b) it can be attached to
multiple actors.
This is a job for <cue raising dramatic music> signals!
By adding ClutterLayoutManager::layout-changed (and its corresponding
emitter function) we can notify actors using the layout manager that
the layout parameters have been changed, and thus they should queue
a relayout.
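As a rough sketch (MyLayout, its spacing field and the setter are
illustrative, not actual API), a parametrized layout manager would
notify its containers like this:
  static void
  my_layout_set_spacing (MyLayout *layout, gfloat spacing)
  {
    if (layout->spacing != spacing)
      {
        layout->spacing = spacing;
        /* every container using this manager queues a relayout */
        clutter_layout_manager_layout_changed (CLUTTER_LAYOUT_MANAGER (layout));
      }
  }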
A BinLayout is a simple layout manager that allocates a single cell,
providing alignment on both the horizontal and vertical axis.
If the container associated with the BinLayout has more than one child,
the preferred size returned by the layout manager will be as big as
the maximum of the children preferred sizes; the allocation will be
applied to all children - but it will still depend on each child's
preferred size and the BinLayout horizontal and vertical alignment
properties.
The supported alignment properties are:
* center: align the child by centering it
* start: align the child at the top or left border of the layout
* end: align the child at the bottom or right border of the layout
* fill: expand the child to fill the size of the layout
* fixed: let the child position itself
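A minimal usage sketch, assuming the constructor takes the default
horizontal and vertical alignment policies and that child is an
existing actor (the exact signatures may differ):
  ClutterLayoutManager *layout;
  ClutterActor *box;
  layout = clutter_bin_layout_new (CLUTTER_BIN_ALIGNMENT_CENTER,
                                   CLUTTER_BIN_ALIGNMENT_CENTER);
  box = clutter_box_new (layout);
  /* the child gets its own "layer" and is centered in the box */
  clutter_container_add_actor (CLUTTER_CONTAINER (box), child);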
A layout manager instance only makes sense if it's owned by a
container. For this reason, it should have a floating reference
instead of a full reference on construction; this allows constructing
Boxes like:
box = clutter_box_new (clutter_fixed_layout_new ());
without leaking the layout manager instance.
The LayoutManager class is an abstract proxy for the size requisition
and size allocation process in ClutterActor.
A ClutterLayoutManager sub-class must implement get_preferred_width(),
get_preferred_height() and allocate(); a ClutterContainer using the
LayoutManager API will then proxy the corresponding Actor virtual
functions to the LayoutManager instance. This allows having a generic
"blank" ClutterActor sub-class, implementing the ClutterContainer
interface, which leaves only the layout management implementation to
the application developers.
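A sketch of the proxying, with MyBox and its priv->manager field as
illustrative names:
  static void
  my_box_get_preferred_width (ClutterActor *actor,
                              gfloat        for_height,
                              gfloat       *min_width_p,
                              gfloat       *natural_width_p)
  {
    MyBoxPrivate *priv = MY_BOX (actor)->priv;
    /* the container defers its size request to its layout manager */
    clutter_layout_manager_get_preferred_width (priv->manager,
                                                CLUTTER_CONTAINER (actor),
                                                for_height,
                                                min_width_p,
                                                natural_width_p);
  }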
The rules to create signal marshallers and enumeration GTypes are
usually copied and pasted all over different projects, though they
are pretty generic and, given a little bit of parametrization, can
be put in separate Makefile.am files and included whenever needed.
The set_default_stage() method of StageManager should not be used
by application code; technically, nothing in Clutter uses it, and
StageManager's API is not considered public anyway.
The IN_DESTRUCTION flag is set around the unrealization and disposal of
the actor in clutter_actor_destroy() but is never unset (it's set twice
instead).
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Currently, setting the :text property has the side-effect of
setting the :use-markup property to FALSE. This prevents
constructing a Text actor, or setting its properties, like:
  g_object_set (text,
                "use-markup", TRUE,
                "text", some_string,
                NULL);
as the ordering becomes important. Unfortunately, the ordering
of the properties cannot be enforced with ClutterScript or
with language bindings.
The documentation of the clutter_text_set_text() method should
be expanded to properly specify that the set_text() method will
change the :use-markup property to FALSE as a side effect.
Transform functions allow the use of g_value_transform() to cast
GValues. It's very handy to have casts to and from G_TYPE_STRING as it
allows generic serialization and parsing of GTypes.
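For instance, a to-string transform function can be registered like
this (ClutterColor is used purely as an illustrative type here):
  static void
  color_to_string (const GValue *src, GValue *dest)
  {
    g_value_take_string (dest,
                         clutter_color_to_string (g_value_get_boxed (src)));
  }
  ...
  g_value_register_transform_func (CLUTTER_TYPE_COLOR, G_TYPE_STRING,
                                   color_to_string);
  /* g_value_transform() can now serialize the boxed color to a string */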
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Current parsing of units has a number of shortcomings:
* a number followed by trailing space (without any unit specified) was
not recognized,
* "5 emeralds" was parsed as 5em,
* the way we parse the digits after the separator makes us lose
precision for no good reason (5.0 is parsed as 5.00010014...f which
makes g_assert_cmpfloat() fail)
Let's define a stricter grammar we can recognize and try to do so. The
description is in EBNF form, removing the optional <> which is a pain
when having to write DocBook, and using '' for the terminal symbols.
Last step, add more ClutterUnits unit tests to get better coverage of
the grammar we want to parse.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Parse #rgb and #rrggbb in addition to forms with the alpha channel
specified. This allows conversion of colour strings from documents such as
CSS where the alpha channel is not specified when using '#' notation.
This patch also adds the relevant conformance test.
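A sketch of the intended behaviour, assuming the missing alpha
channel defaults to fully opaque:
  ClutterColor color;
  if (clutter_color_from_string (&color, "#f80"))
    g_assert_cmpint (color.alpha, ==, 0xff);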
gdk is an optional clutter dependency, so the pick buffer debugging option
needs some guards so we don't break, for example, the OSX builds. This also
adds a comment for the bit fiddling done on the pick colors used to ensure
the pick colors are more distinguishable while debugging. (we swap the
nibbles of each color component so that pick buffers don't just look black.)
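The swap boils down to something like this (a sketch of the idea, the
helper name is made up):
  static inline guint8
  swap_nibbles (guint8 byte)
  {
    /* low pick ids end up in the high nibble, so the resulting colors
     * are no longer clustered around black */
    return ((byte & 0x0f) << 4) | ((byte & 0xf0) >> 4);
  }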
ClutterGroup previously calculated the size as the distance from the
left edge of the leftmost child to the right edge of the rightmost
child, except that if there were any children left of the origin then the
left edge would be zero.
However the group is always allocated its size relative to its
origin so if all of the children are to the right of the origin then
the preferred size would not be large enough to reach the rightmost
child.
origin
┼──────────┐
│Group │
│ ┌────────┼─┐
│ │Child │ │
│ │ │ │
└─┼────────┘ │
│ │
└──────────┘
group size
╟──────────╢
This patch makes it so the size is always just the rightmost edge.
origin
┼────────────┐
│Group │
│ ┌──────────┤
│ │Child │
│ │ │
│ │ │
│ │ │
└─┴──────────┘
group size
╟────────────╢
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1825
Since the Great Rework of ClutterUnits, functions have been using
'units' not 'unit' in their name. clutter_value_get_unit() is a
leftover from a dark age: its declaration and documentation have been
updated, but not the symbol itself.
Instead of having an assertion failure with a message of dubious
usefulness, we should probably use a more verbose warning explaining
what the problem is and what the cause might be.
The user-initiated resize is conflicting with the allocated size. This
happens because we change the size of the stage's X Window behind the
back of the size allocation machinery.
Instead, we should change the size of the actor whenever we receive a
ConfigureNotify event to reflect the new size of the actor.
We force the redraw before mapping, in the hope that when a composited
window manager maps the window it will have its contents ready; that is
not going to work: the solution for this problem requires the implementation
of a protocol for compositors, and not a hack.
Moreover, painting before mapping will cause a paint with the wrong
GL viewport size, which is the wrong thing to do on GLX.
When not building a debug build the compiler was warning about empty
else clauses with no braces due to code like:
  if (blah)
    do_foo();
  else
    COGL_NOTE (DRAW, "a-wibble");
This simply ensures that even for non debug builds COGL_NOTE will expand to
a single statement.
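The usual fix, sketched here with GLib's G_STMT_START/G_STMT_END
(the do/while(0) idiom), is to make the empty expansion a single
statement:
  /* in non-debug builds COGL_NOTE must still expand to one statement,
   * so that "else COGL_NOTE (...);" stays well formed */
  #define COGL_NOTE(type,...)  G_STMT_START { } G_STMT_END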
glVertexPointer expects positions with 2, 3 or 4 components, glColorPointer
expects colors with 3 or 4 components and glNormalPointer expects normals
with three components, so when adding vertex buffer attributes with the names
"gl_Vertex", "gl_Color" or "gl_Normal" we assert these constraints and print
an explanation to the developer if not met.
This also fixes the previously incorrect constraint that gl_Normal attributes
must have n_components == 1; thanks to Cat Sidhe for reporting this:
Bug: http://bugzilla.openedhand.com/show_bug.cgi?id=1819
Now if you export CLUTTER_DEBUG=dump-pick-buffers clutter will write out a
png, e.g. pick-buffer-00000.png, each time _clutter_to_pick() is called.
It's a rather crude way to debug the picking (realtime visualization in a
second stage would probably be nicer) but we've used this approach
successfully numerous times when debugging Clutter picking issues so it
makes sense to have a debug option for it.
It looks like the intention was to duplicate an XVisualInfo in such a way
that the pointer could be returned and then later freed using XFree. But
Xalloc isn't an Xlib counterpart to XFree; Xlib doesn't provide a general
purpose malloc wrapper afaik. By shuffling things about a bit, it was
possible to avoid the need for this hack.
By default, float * is considered as an out argument by gobject
introspection which is wrong for quite a few Cogl symbols. Start adding
annotations to fix that for the ones in the "Primitives" gtk-doc
section.
In the default implementation of container::destroy_child_meta, set the
child meta qdata to NULL on the child and not on the container, since the child
is the object that owns the data.
The lifetime of the journal VBO is entirely within the scope of the
cogl_journal_flush function so there is no need to store it globally
in the Cogl context. Instead, upload_vertices_to_vbo just returns the
new VBO. cogl_journal_flush stores this in a local variable and
destroys it before returning.
This also fixes an assertion when using the GLES backend which was
caused by nothing initialising the journal_vbo variable.
If the system clock rolls back between two frames then we need
to account for the change, to avoid stopping the timeline.
The best option, since a roll back can be any arbitrary amount
of milliseconds, is to skip a frame.
Fixes bug:
http://bugzilla.moblin.org/show_bug.cgi?id=3839
The framebuffer_object spec isn't clear in defining whether attaching a
texture as a renderbuffer with mipmap filtering enabled while the mipmaps
have not been uploaded should result in an incomplete framebuffer object.
(different drivers make different decisions)
To avoid an error with drivers that do consider this a problem we explicitly
set non-mipmapped filters before calling glCheckFramebufferStatusEXT. The
filters will later be reset when the texture is actually used for rendering
according to the filters set on the corresponding CoglMaterial.
Since an actor can only be parented to one container we don't need
the extra complications of maintaining a list of ChildMeta objects
attached to an actor in the default implementation of the Container
interface.
Instead of using ClutterActor for the base class of the Stage
implementation we should extend the StageWindow interface with
the required bits (geometry, realization) and use a simple object
class.
This requires a wee bit of changes across Backend, Stage and
StageWindow, even though it's mostly re-shuffling.
First of all, StageWindow should get new virtual functions:
* geometry:
- resize()
- get_geometry()
* realization
- realize()
- unrealize()
This covers all the bits that we use from ClutterActor currently
inside the stage implementations.
The ClutterBackend::create_stage() virtual function should create
a StageWindow, and not an Actor (it should always have been; the
fact that it returned an Actor was a leak of the black magic going
on underneath). Since we never guaranteed ABI compatibility for
the Backend class, this is not a problem.
Internally to ClutterStage we can finally drop the shenanigans of
setting/unsetting actor flags on the implementation: if the realization
succeeds, for instance, we set the REALIZED flag on the Stage and
we're done.
As an initial proof of concept, the X11 and GLX stage implementations
have been ported to the New World Order(tm) and show no regressions.
The old code checked whether the property began with 'signal-' and
then checked for 'signal-swapped' and 'signal-after'. This prevented
you from animating a property called for example 'signal-strength'.
The check for the prefix is now in a separate function which also adds
a 'signal-swapped-after' prefix for completeness.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1798
The blend string compiler checks that the syntax of a function name is
[A-Za-z_]*, preventing the use of DOT3_RGB[A].
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The column names are optional - ClutterModel will use the GType name
if there is no user-specified column name. Hence, the ::finalize vfunc
should not try to free an empty column names vector.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1790
The default floating point type for JSON is double precision; this means
that we need to convert to single precision when setting a property with
type G_TYPE_FLOAT.
Currently, to update a property inside an animation you have to
get the interval for that property and then call the set_final_value()
method.
We can provide a simpler, bind()-like method for the convenience of
the developers that just validates everything and then calls the
Interval.set_final_value().
Right now we just check for a NULL stage before calling glXMakeCurrent().
We can, though, get a valid stage without an implementation attached to
it while we are disposing a stage after a CLUTTER_DELETE event, since the
events processing is performed on a vblank-locked basis.
ClutterClone bases its preferred size on the preferred size of
the source actor, so it needs to invalidate its cached preferred
size when the preferred size of the source actor changes.
In order for this to work, we need to have notification when
the size of the source actor changes, so add a ::queue-relayout
signal to ClutterActor.
Then connect to this from ClutterClone and queue a relayout
on the clone when a relayout is queued on the source.
http://bugzilla.openedhand.com/show_bug.cgi?id=1755
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This reverts commit 3c47a3beb5.
Of course I remembered just after pushing the patch why we hadn't done
this before :-) If you look in the glsl spec:
http://www.khronos.org/registry/gles/specs/2.0/es_full_spec_2.0.24.pdf
Section 3.7.10 Texture Completeness and Non-Power-Of-Two Textures
you can see GLES 2.0 doesn't support mipmaps for npot textures.
There is possibly some way we could support this in Cogl but at least
it's not as simple as or-ing in the feature flag, sadly.
The core GLES2 API supports NPOT textures, i.e. there is no extension as for
OpenGL, so we now add COGL_FEATURE_TEXTURE_NPOT to the feature flags in
_cogl_features_init.
Thanks to Gordon Williams for spotting this.
Don't let stringify.sh write to the $srcdir + use the BUILT_SOURCES var in
Makefile.am so as to ensure all .c and .h files get generated from their
corresponding .glsl files before building other targets.
If the timeline is running backwards, the completed signal handler should
set the final state from the interval's initial value.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When rendering a glyph run from a texture we were premultiplying the
colour once in display_list_render() and then again in
display_list_render_texture(). This was causing the color to come out
wrong. This fixes bug #1775.
Thanks to Pierre-Luc Beaudoin for reporting.
The wrong part of an expression was bracketed in the test to determine
when a new texture matrix needed to be loaded which resulted in the
first pass through _cogl_material_layer_flush_gl_sampler_state
not uploading any user matrix.
clutter_text_move_word_backward/forward() calls did not use the
start argument consistently. Also, the clutter_text_move_word_forward()
bounds check checked the wrong end.
Fixes #1765
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When a letter key is pressed with the control key held down one of
three things will happen:
a) If the stage is embedded within a GtkClutterEmbed the unicode value
will be filled from gdk_keyval_to_unicode. This will be the same
value as if control was not pressed (so Ctrl+V will be 'v').
b) If the stage is not in a GtkClutterEmbed and Clutter is running on
the X11 backend then it will try to fill in the unicode value from
XLookupString. This *will* take into account the control so the
unicode value will represent a control character (Ctrl+V will be
'\x16').
c) Most other backends will not bother to fill in the unicode
value. Therefore clutter_keysym_to_unicode will be used which also
does not take into account the control key (so Ctrl+V will be 'v').
For cut and paste to work in Nbtk, the control keys need to bubble up
to the parent NbtkEntry container. This works fine for 'b' but not 'a'
and 'c'.
This patch makes ClutterText always allow the event to bubble if the
key is not handled by the binding pool and the control modifier is
down.
Ideally ClutterText would always get a unicode value that takes into
account the modifiers but this is probably best left up to the input
methods.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
g-ir-compiler currently opens the library for the .gir it is compiling;
to make that work we need to set LD_LIBRARY_PATH before running
g-ir-compiler to include .libs.
(I think this may have been working earlier because there was a
hack that substituted .so with .la and tried opening that; that
works for the incorrect libclutter-glx-1.0.so but not for the
correct libclutter-glx-1.0.so.0)
http://bugzilla.openedhand.com/show_bug.cgi?id=1771
Update the sed hack for the shared library to be more robust.
Remove the --shared-library command line argument from g-ir-scanner,
as no distribution will ever ship the .la files. This effectively
reverts commit 68f8a98cfb.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Use the --shared-library option to specify the shared object to link
against when compiling the typelib from the GIR data.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Following bug #1762, the syntax of g-ir-scanner was changed in
gobject-introspection, so Clutter does not build anymore with 0.6.4.
See the bugzilla bug:
http://bugzilla.gnome.org/show_bug.cgi?id=591669
GObject-Introspection now uses a different mechanism to extract the
SONAME when building the gir file and it needs the libtool archive as
option.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When dumping a ClutterUnits structure to a string we are using a bare
g_strdup_printf(), which unfortunately is locale-dependent. So, for
instance, a type of CLUTTER_UNIT_EM and a value of 42 are stringified
as:
  C:      42.00 em
  en_GB:  42.00 em
  it_IT:  42,00 em
  fr_FR:  42,00 em
This would not be a problem -- clutter_units_from_string() allows both
'.' and ',' as fractional part delimiters. The test suite, on the
other hand, does not know that, and it checks for exact matches with
the C locale.
Calling setlocale(LC_ALL,"C") at the beginning of the conformance test
suite is not a good idea, because it would prevent external testing; and
it's a lame cop out from doing exactly what we have to do -- pick a format
and stick with it.
Like other platforms, languages and frameworks before us, we opt to
be less liberal in what we create; so, we choose to always stringify
ClutterUnits with fractional parts using '.' as the delimiter.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1763
Clutter advertises itself on X11 as implementing the _NET_WM_PING protocol,
which is needed to be able to detect frozen applications; this allows us to
stop the destruction of the stage by blocking the CLUTTER_DELETE event and
wait for user feedback without the Window Manager thinking that the app has
gone unresponsive.
In order to implement the _NET_WM_PING protocol properly, though, we need
to add the _NET_WM_PID property on the Stage window, since the EWMH states:
[_NET_WM_PID] MAY be used by the Window Manager to kill windows which
do not respond to the _NET_WM_PING protocol.
Meaning that an unresponsive Clutter application might not be killable by
the window manager.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1748
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
* Do _not_ use CLUTTER_MAJORMINOR to define the installation path
for the headers; we must use CLUTTER_API_VERSION for that.
* Do not put the C compiler flags in the INCLUDES directive.
Based on a patch by: Gary Ching-Pang Lin <glin@novell.com>
It is possible to unset the size of an actor specified with set_width()
and set_height() by using:
clutter_actor_set_size (actor, -1, -1);
Which works by unsetting the :min-*-set and the :natural-*-set properties.
Calling set_width(-1) and set_height(-1) separately, though, doesn't work
thus implicitly breaking the assumption that set_size() is nothing more
than set_width()+set_height(). This was obviously due to the fact that
pre-1.0 set_width() and set_height() took an unsigned integer as an
argument.
ClutterActor parses positional and dimensional properties with a
custom deserializer. We need to:
- handle G_TYPE_INT64, the default integer type for JSON-GLib
- use G_TYPE_FLOAT for properties, since Actor switched to it
for the pixel-based ones
This makes ClutterScript work again.
The json-types.h header is found by the mere fact of it being
in the project; if we are compiling against the system JSON-GLib
this could be horribly out of date.
We need to use clutter-json.h, which will include the right
header for us.
JSON-GLib moved to a single include scheme, so we should only include
json-glib.h. If we use the internal copy it doesn't matter, since the
header does the right thing.
The clutter-script-parser.c file does not have copyright and license
notices; even though the LGPL is a per-project license and not a
per-file license, having those notices in every source file is a
good idea.
The OS X backend Makefile.am was missing a line concatenation, and
so the -xobjective-c directive was always ignored.
Instead of dumping everything into INCLUDES and LDADD we should follow
what the rest of the backends do, and use per-target CFLAGS and LDADD,
and reserve the INCLUDES to -D and -I directives.
Thanks to: Christian Hergert <chris@dronelabs.com>
Don't install inside the clutter-MAJOR_MINOR/ directory, but use
the API_VERSION (1.0).
Otherwise we'd have the Clutter headers for 1.x inside:
$includedir/clutter-1.0/clutter
And the JSON-related headers inside:
$includedir/clutter-1.<minor>/clutter
The fix for bug 1750 inside commit b190448e made Clutter-GTK spew
BadWindow errors. The reason for that is that we call XDestroyWindow()
without checking if the old Window is None; this happens if we call
clutter_x11_set_stage_foreign() on a new ClutterStage before it has
been realized.
Since Clutter-GTK does not need to realize the Stage it is going to
embed anymore (the only reason for that was to obtain a proper Visual
but now there's ClutterBackendX11 API for that), the set_stage_foreign()
call is effectively setting the StageX11 Window for the first time.
When we replace the stage Window using a foreign one we also need to
destroy the Window we created, if needed, to avoid leaking resources
all around.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1750
The "catch all" warning for a the mapped invariant violation is too
generic: it doesn't tell you why the invariant was broken in case
we are trying to map an unparented actor - e.g. through a Clone.
The right gcc define is __GNUC__ not __GNUC_. This typo had the side
effect that we were using the non gcc specific debug macros leading to
a less optimised CLUTTER_NOTE () than one could have dreamed of.
The vertex that should be used by the apply_relative_transform
is the one passed in as const, and the result should be placed
inside the non-const ClutterVertex. Currently, we are using
the latter, and thus the function is completely useless.
We should follow the convention for boxed types initializers of:
<type_name>_from_<another_type> (boxed, value)
For ClutterUnits as well; so:
clutter_units_pixels -> clutter_units_from_pixels
clutter_units_em -> clutter_units_from_em
...
We should still keep the short-hand version as a macro, though.
This makes clutter_stage_win32_show/hide be implementations of
ClutterStageWindowIface rather than overriding the methods in
ClutterActor. This reflects the changes in e4ff24bc for the X11
backend.
In case we are skipping too many frames, we should force the animation
instance to apply the final state of the animated interval inside the
::completed signal handler.
The Stage:offscreen property hasn't been tested for ages, and it
should really just use an FBO, not indirect rendering on an X Pixmap
only on X11. There are better ways to get the current contents of
the ClutterStage as a buffer anyway.
We might remove it at any later date, or actually make it work
properly.
When requesting a GLX visual from the X server we should explicitly
set the GL_DEPTH_SIZE and the GL_ALPHA_SIZE bits, otherwise some
functionality might just not work, or work unreliably.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1723
The GValue wrappers for ClutterShader types should always store
values using GL types (GLfloat, GLint) internally, but give and
take generic C types (float, int) to the Clutter side.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1359
The HLS to RGB conversion in case the S value is zero is:
R = G = B = luminance
ClutterColor uses a byte (0 to 255) for the R, G and B channels
encoding, while luminance is expressed using a floating point value
in the closed interval [0, 1]; thus the case above becomes:
R = G = B = (luminance * 255)
The clutter_color_from_hls() code is missing the final step of
de-normalizing the luminance value, and so it breaks the roundtrip
colorspace conversion between RGB and HLS.
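In code, the zero-saturation branch should therefore look roughly
like this (a sketch, assuming luminance is in the [0, 1] range):
  if (s == 0)
    {
      /* de-normalize the luminance into the 8-bit channel range */
      color->red   = (guint8) (luminance * 255);
      color->green = (guint8) (luminance * 255);
      color->blue  = (guint8) (luminance * 255);
    }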
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1695
If clutter_x11_texture_set_window() was called after
clutter_x11_texture_pixmap_set_automatic(), then the Damage object would
not be properly created so updates to the window were ignored.
Refactor creation of the damage object to a separate function, and
call it from clutter_x11_texture_set_window() and clutter_x11_texture_set_pixmap()
as appropriate. Addition and removal of the filter function is made
conditional on priv->damage to make free_damage_resources() cleanly
idempotent.
See: http://bugzilla.gnome.org/show_bug.cgi?id=587189 for the original
bug report.
http://bugzilla.openedhand.com/show_bug.cgi?id=1710
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If you select all the text in a ClutterText, there is an invisible
cursor position either at the beginning or end. If it's at the beginning,
the bug is that left arrow won't clear the selection. If it's at the end,
the bug is that the right arrow won't.
Here are the ways to reproduce it:
a. Ctrl-A selects all and moves the hidden cursor position to the left.
b. For single line: End, Shift-Home does the same.
c. Or manually moving to the end and doing Shift-Left Arrow to the
beginning.
These all put it in the state where right arrow will properly clear
selection and move to cursor position 1, but left arrow fails to clear
the selection.
For b and c above, the opposite will give you the end case where right
arrow doesn't work.
Anyway, it turns out clear_selection is getting called, it just doesn't
show up because it's not doing a queue_redraw. So the attached patch
seems to fix things.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
It might be desirable for some applications and/or platforms to get
every motion event that was delivered to Clutter from the windowing
backend. By adding a per-stage flag we can bypass the throttling
done when processing the events.
http://bugzilla.openedhand.com/show_bug.cgi?id=1665
Keep the CoglContext in sync between GL and GLES backends. We ought
to find a way to have a generic context, though, and have backend
specific sections.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1698
We need to explicitly force order so that ClutterJson.gir and Cogl.gir
are present in the parent directory before we try to build Clutter.typelib.
http://bugzilla.openedhand.com/show_bug.cgi?id=1700
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
On some platforms (anything but Linux, and on obscure Linux
architectures) dolt isn't used, so $(top_builddir)/doltlibtool
won't exist. $(top_builddir)/libtool will always be generated
even if dolt is used, so just use that unconditionally. We don't
need the extra speed when linking the single program for
introspection.
http://bugzilla.openedhand.com/show_bug.cgi?id=1699
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
A lot of applications change the size of the stage from the default
before the stage is initially shown. The size change won't take effect
until the first allocation run. However we want the window to be at
the correct size when we first map it so we should force an allocation
run before showing the stage.
There was an explicit call to XResizeWindow in
clutter_stage_x11_show. This is not needed anymore because
XResizeWindow will already have been called by the allocate method.
By default NSWindow does not listen to mousemoved events and hence the
default behaviour for Actors using the "motion-event" signal differs
from backend to backend.
Using setAcceptsMouseMovedEvents seems to fix it; unfortunately, I
cannot verify it, but since nobody is currently working on the Quartz
backend I guess it cannot get more broken than it currently is.
Thanks to: Michael <michael@f3k.org>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1687
It would be useful inside a custom actor's paint function to be able to
tell if this is a primary paint call, or if we are in fact painting on
behalf of a clone.
In Mutter we have an optimization not to paint occluded windows; this is
desirable for the windows per se, to conserve bandwidth to the card, but
if something like an application switcher is using clones of these windows,
they will not get painted either; currently we have no way of
differentiating between the two.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1685
The CLUTTER_TEXTURE_IN_CLONE_PAINT was used with the old CloneTexture
actor; now that we have ClutterClone nothing sets the private flag
anymore, and the flag itself is not needed.
Updating the WM hints on the stage window short-circuits if the stage
is in WITHDRAWN state, so we need to move the update_wm_hints() call
after the flag has been unset.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we manually wait for the VBLANK with:
- SGI_video_sync
- Direct usage of the DRM ioctl
Then we should call glFinish() first, or otherwise the swap-buffers
may be delayed by pending drawing and cause a tear.
http://bugzilla.openedhand.com/show_bug.cgi?id=1636
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
commit e2c4a2a9f8 fixed one thing but broke many other things :-/
Hopefully this fixes that.
It turned out that the journal was mistakenly setting the OVERRIDE_LAYER0
flush option for all entries, but some other logic errors were also
uncovered in _cogl_material_equal.
Added an internal clutter function, _clutter_master_clock_ensure_next_iteration(),
which ensures another iteration of the master clock; it can be called from
repaint functions as well as from other threads.
To help us handle sliced textures: when flushing materials there is an
override option that can be given to replace the texture name for layer0
so we may iterate the slices without needing to modify the material
in use.
Since improving the journal's ability to batch state changes we added a
_cogl_material_equals function that is used by the journal to compare
materials and identify when a state change is required, but this wasn't
correctly considering the layer0 override resulting in false positives that
meant the journal wouldn't update the GL state and the first texture name
was used for all slices.
The cost of glGetFloatv with Mesa still represents a majority of our
time in OpenGL for some applications, and the last thing left using this is
the current-matrix API when getting the projection matrix.
This adds a matrix stack for the projection matrix, so all getting, setting
and modification of the projection matrix is now managed by Cogl and it's only
when we come to draw that we flush changes to the matrix to OpenGL.
This also brings us closer to being able to drop internal use of the
deprecated OpenGL matrix functions, re: commit 54159f5a1d
The clutter_actor_get_allocation_coords() is not used, and since
the switch to floats in the Actor's API, it returns exactly what
the get_allocation_box() returns.
Currently, the transformation matrix for an actor is constructed
from scenegraph-related accessors. An actor, though, can call COGL
API to add new transformations inside the paint() implementation,
for instance:
  static void
  my_foo_paint (ClutterActor *a)
  {
    ...
    cogl_translate (-scroll_x, -scroll_y, 0);
    ...
  }
Unfortunately these transformations will be completely ignored by
the scenegraph machinery; for instance, getting the actor-relative
coordinates from event coordinates is going to break badly because
of this.
In order to make the scenegraph aware of the potential of additional
transformations, we need a ::apply_transform() virtual function. This
vfunc will pass a CoglMatrix which can be used to apply additional
operations:
  static void
  my_foo_apply_transform (ClutterActor *a, CoglMatrix *m)
  {
    CLUTTER_ACTOR_CLASS (my_foo_parent_class)->apply_transform (a, m);
    ...
    cogl_matrix_translate (m, -scroll_x, -scroll_y, 0);
    ...
  }
The ::paint() implementation will be called with the actor already
using the newly applied transformation matrix, as expected:
  static void
  my_foo_paint (ClutterActor *a)
  {
    ...
  }
The ::apply_transform() implementations *must* chain up, so that the
various transformations of each class are preserved. The default
implementation inside ClutterActor applies all the transformations
defined by the scenegraph-related accessors.
Actors performing transformations inside the paint() function will
continue to work as previously.
Scanners like gtk-doc and g-ir-scanner get confused by:
  typedef struct _Foo {
    ...
  } Foo;
And expect instead:
  typedef struct _Foo Foo;
  struct _Foo {
    ...
  };
CoglMatrix definition should be changed to avoid the former type.
The race we were experiencing in the X11 backends is apparently
back after the fix in commit 00a3c698.
This time, just delaying the setting of the SYNC_MATRICES flag
is not enough, so we can resume the use of a STAGE_IN_RESIZE
private flag.
This should also fix bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1668
In order to validate the sequence of:
XResizeWindow
ConfigureNotify
glViewport
that should happen on X11 we need to add debug annotations to the
calls to glViewport() done through COGL.
This avoids some calls to glGetFloatv, which have at least proven to be very
inefficient in Mesa at this point in time, since it always updates all derived
state even when it may not relate to the state being requested.
Fixes and adds a unit test for creating and drawing using materials with
COGL_INVALID_HANDLE texture layers.
This may be valid if for example the user has set a texture combine string
that only references a constant color.
_cogl_material_flush_layers_gl_state will bind the fallback texture for any
COGL_INVALID_HANDLE layer, later though we could explicitly check when the
current blend mode doesn't actually reference a texture source in which case
binding the fallback texture is redundant.
This tests drawing using cogl_rectangle, cogl_polygon and
cogl_vertex_buffer_draw.
Although we wouldn't recommend developers try and interleave OpenGL drawing
with Cogl drawing - we would prefer patches that improve Cogl to avoid this
if possible - we are providing a simple mechanism that will at least give
developers a fighting chance if they find it necessary.
Note: we aren't helping developers change OpenGL state to modify the
behaviour of Cogl drawing functions - it's unlikely that can ever be
reliably supported - but if they are trying to do something like:
- setup some OpenGL state.
- draw using OpenGL (e.g. glDrawArrays() )
- reset modified OpenGL state.
- continue using Cogl to draw
They should surround their blocks of raw OpenGL with cogl_begin_gl() and
cogl_end_gl():
cogl_begin_gl ();
- setup some OpenGL state.
- draw using OpenGL (e.g. glDrawArrays() )
- reset modified OpenGL state.
cogl_end_gl ();
- continue using Cogl to draw
Again: we aren't supporting code like this:
- setup some OpenGL state.
- use Cogl to draw
- reset modified OpenGL state.
When the internals of Cogl evolves, this is very liable to break.
cogl_begin_gl() will flush all internally batched Cogl primitives, and emit
all internal Cogl state to OpenGL as if it were going to draw something
itself.
The result is that the OpenGL modelview matrix will be setup; the state
corresponding to the current source material will be setup and other world
state such as backface culling, depth and fogging enabledness will also
be sent to OpenGL.
Note: no special material state is flushed, so if developers want Cogl to setup
a simplified material state it is their responsibility to set a simple
source material before calling cogl_begin_gl. E.g. by calling
cogl_set_source_color4ub().
Note: it is the developer's responsibility to restore any OpenGL state that
they modify to how it was after calling cogl_begin_gl(); if they don't do this
then the result of further Cogl calls is undefined.
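Concretely, a block of raw OpenGL would be wrapped like this (a
sketch; the GL calls and n_vertices are placeholders):
  cogl_set_source_color4ub (0xff, 0xff, 0xff, 0xff);
  cogl_begin_gl ();
  glEnable (GL_BLEND);
  glDrawArrays (GL_TRIANGLES, 0, n_vertices);
  glDisable (GL_BLEND);
  cogl_end_gl ();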
This function should only need to be called in exceptional circumstances
since Cogl can normally determine internally when a flush is necessary.
As an optimization Cogl drawing functions may batch up primitives
internally, so if you are trying to use raw GL outside of Cogl you stand a
better chance of being successful if you ask Cogl to flush any batched
geometry before making your state changes.
cogl_flush() ensures that the underlying driver is issued all the commands
necessary to draw the batched primitives. It provides no guarantees about
when the driver will complete the rendering.
This provides no guarantees about the GL state upon returning and to avoid
confusing Cogl you should aim to restore any changes you make before
resuming use of Cogl.
If you are making state changes with the intention of affecting Cogl drawing
primitives you are 100% on your own since you stand a good chance of
conflicting with Cogl internals. For example clutter-gst which currently
uses direct GL calls to bind ARBfp programs will very likely break when Cogl
starts to use ARBfp programs internally for the material API, but for now it
can use cogl_flush() to at least ensure that the ARBfp program isn't applied
to additional primitives.
This does not provide a robust generalized solution supporting safe use of
raw GL, its use is very much discouraged.
Previously we would call _cogl_material_pre_change_notify unconditionally, but
now we wait until we really know we are removing a layer before notifying the
change, which will require a journal flush.
Since the convenience functions cogl_set_source_color4ub and
cogl_set_source_texture share a single material, cogl_set_source_color4ub
always calls cogl_material_remove_layer. Often this is a NOP though and
shouldn't require a journal flush.
This gets performance back to where it was before reverting the per-actor
material commits.
This reverts commit 8cf42ea8ac5c05f6b443c453f9c6c2a3cd75acfa.
Since the journal puts material colors in the vertex array accumulated for
drawing we don't need to flush the journal simply due to color changes which
means using cogl_set_source_color4ub is no longer a concern.
This reverts commit 85243da382025bd516937c76a61b8381f6e74689.
Since the journal puts material colors in the vertex array accumulated for
drawing we don't need to flush the journal simply due to color changes
which means using cogl_set_source_color4ub is no longer a concern.
Before any cogl vertex buffer drawing we call
enable_state_for_drawing_buffer which sets up the GL state, but we weren't
disabling unused client texture coord arrays.
This simplifies the vertex data uploading in the journal, and could improve
performance. Modifying a VBO mid-scene could require synchronizing with the
GPU or some form of shadowing/copying to avoid modifying data that the GPU
is currently processing; the buffer was also being marked as GL_STATIC_DRAW
which could have made things worse.
Now we simply create a GL_STATIC_DRAW VBO for each flush and delete it
when we are finished.
Using cogl_rectangle (and thus the journal) in
_cogl_add_path_to_stencil_buffer means we have to consider all the state
that the journal may change in case it may interfere with the direct GL calls
used. This has proven to be error prone and in this case the journal is an
unnecessary overhead. We now simply call glRectf instead of using
cogl_rectangle.
For small runs of text like icon labels, we can get better performance
going through the Cogl journal since text may then be batched together
with other geometry.
For larger runs of text though we still use VBOs since the cost of logging
the quads becomes too expensive, including the software transform which
isn't at all optimized at this point. VBOs also have the further advantage
of avoiding repeated validation of vertices by the driver and repeated
mapping of data into the GPU so long as the text doesn't change.
Currently the threshold is 100 vertices/25 quads. This number was plucked
out of thin air and should be tuned later.
With this change I see ~180% fps improvement for test-text. (x61s + i965 +
Mesa 7.6-devel)
We were missing the simplest test of all: are the two CoglHandles equal and
are the flush option flags for each material equal? This should improve
batching for some common cases.
Whenever we modify a material we call _cogl_material_pre_change_notify which
checks to see if the material is referenced by the journal and if so flushes
it before we modify the material.
Since the journal logs material colors directly into a vertex array (to
avoid us repeatedly calling glColor) then we know we never need to flush
the journal when material colors change.
Since most Clutter actors aren't much more than textured quads, flushing the
journal typically involves lots of 'change modelview; draw quad' sequences.
The amount of overhead involved in uploading a new modelview and queuing
that primitive is huge in comparison to simply transforming 4 vertices by
the current modelview when logging quads. (Note if your GPU supports HW
vertex transform, then it still does the projective and viewport transforms)
At the same time a --cogl-debug=disable-software-transform option has been
added for comparison and debugging.
This change allows typical pick scenes to be batched into a single draw call
and I'm seeing test-pick run over 200% faster with this. (i965 + Mesa
7.6-devel)
Enabling this option makes Cogl trace how the journal is managing to batch
your rectangles. The journal staggers how it emits state to the GL driver
and the batches will normally get smaller for each stage, but ideally you
don't want to be in a situation where Cogl is only able to draw one quad per
modelview change and draw call.
E.g. this is a fairly ideal example:
BATCHING: journal len = 101
BATCHING: vbo offset batch len = 101
BATCHING: material batch len = 101
BATCHING: modelview batch len = 101
This isn't:
BATCHING: journal len = 1
BATCHING: vbo offset batch len = 1
BATCHING: material batch len = 1
BATCHING: modelview batch len = 1
BATCHING: journal len = 1
BATCHING: vbo offset batch len = 1
BATCHING: material batch len = 1
BATCHING: modelview batch len = 1
<repeat>
When this option is used Cogl will print a trace of all quads that get
logged into the journal, and a trace of quads as they get flushed.
If you are seeing a bug with the geometry being drawn by Cogl this may give
some clues by letting you sanity check the numbers being logged vs the
numbers being emitted.
For testing the VBO fallback paths it helps to be able to disable the
COGL_FEATURE_VBOS feature flag. When VBOs aren't available Cogl should use
client side malloc()'d buffers instead.
Previously we only used the Cogl matrix stack API for indirect contexts, but
it's too costly to keep on requesting modelview matrices from GL (for
logging in the journal) even for direct rendering.
I also experimented with a patch for mesa to improve performance and
discussed this with upstream, but we agreed to consider the GL matrix API
essentially deprecated. (For reference the GLES 2 and GL 3 specs have
removed the matrix APIs)
CoglColors shouldn't be compared using memcmp since they may contain
uninitialized padding bytes.
The prototype is also suitable for passing to g_hash_table_new as the
key_equal_func.
_cogl_pango_display_list_add_texture now uses this instead of memcmp.
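A sketch of how such a comparison might look, matching the GEqualFunc
signature expected by g_hash_table_new:
  gboolean
  cogl_color_equal (gconstpointer v1, gconstpointer v2)
  {
    const CoglColor *a = v1;
    const CoglColor *b = v2;
    /* compare the components, never the raw struct memory */
    return cogl_color_get_red_byte (a)   == cogl_color_get_red_byte (b)   &&
           cogl_color_get_green_byte (a) == cogl_color_get_green_byte (b) &&
           cogl_color_get_blue_byte (a)  == cogl_color_get_blue_byte (b)  &&
           cogl_color_get_alpha_byte (a) == cogl_color_get_alpha_byte (b);
  }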
We now put the color of materials into the vertex array used by the journal
instead of calling glColor() but the number of requests for the material
color were quite expensive so we have changed the material color to
internally be byte components instead of floats to avoid repeat conversions
and added _cogl_material_get_colorubv as a fast-path for the journal to
copy data into the vertex array.
To improve batching of geometry in the Cogl journal we need to avoid modifying
materials midscene.
Currently cogl_set_source_color and cogl_set_source_texture simply modify a
single shared material. In the future we can improve this so they use a pool
of materials that gets recycled as the journal is flushed, but for now we
give all ClutterRectangles their own private materials for painting with.
Currently cogl_set_source_color uses a single shared material which means
each actor that uses it causes the journal to flush if the color changes.
Until we improve cogl_set_source_color to use a pool of materials that can
be recycled as the journal is flushed we avoid mid-scene material changes by
giving all actors a private material instead.
The number of material layers enabled when logging a quad in the journal
determines the stride of the corresponding vertex data (since we need a set
of texture coordinates for each layer.) By padding data in the case where we
have only one layer we can avoid a change in stride if we are mixing single
and double layer primitives in a scene (e.g. relevant for a composite
manager that may use 2 layers for all shaped windows). Avoiding stride
changes means we can minimize calls to gl{Vertex,Color}Pointer when flushing
the journal.
Since we need to update the texcoord pointers when the actual number of
layers changes, this adds another batch_and_call() stage to deal with
glTexCoordPointer and enabling/disabling the client arrays.
Previously the journal was always flushed at the end of
_cogl_rectangles_with_multitexture_coords, (i.e. the end of any
cogl_rectangle* calls) but now we have broadened the potential for batching
geometry. In ideal circumstances we will only flush once per scene.
In summary the journal works like this:
When you use any of the cogl_rectangle* APIs then nothing is emitted to the
GPU at this point, we just log one or more quads into the journal. A
journal entry consists of the quad coordinates, an associated material
reference, and a modelview matrix. Ideally the journal only gets flushed
once at the end of a scene, but in fact there are things to consider that
may cause unwanted flushing, including:
- modifying materials mid-scene
This is because each quad in the journal has an associated material
reference (i.e. not copy), so if you try and modify a material that is
already referenced in the journal we force a flush first.
NOTE: For now this means you should avoid using cogl_set_source_color()
since that currently uses a single shared material. Later we
should change it to use a pool of materials that is recycled
when the journal is flushed.
- modifying any state that isn't currently logged, such as depth, fog and
backface culling enables.
The first thing that happens when flushing, is to upload all the vertex data
associated with the journal into a single VBO.
We then go through a process of splitting up the journal into batches that
have compatible state so they can be emitted to the GPU together. This is
currently broken up into 3 levels so we can stagger the state changes:
1) we break the journal up according to changes in the number of material layers
associated with logged quads. The number of layers in a material determines
the stride of the associated vertices, so we have to update our vertex
array offsets at this level. (i.e. calling gl{Vertex,Color}Pointer etc)
2) we further split batches up according to material compatibility. (e.g.
materials with different textures) We flush material state at this level.
3) Finally we split batches up according to modelview changes. At this level
we update the modelview matrix and emit the actual draw command.
This commit is largely about putting the initial design in-place; this will be
followed by other changes that take advantage of the extended batching.
Removed the G_PARAM_CONSTRUCT from the property registration of
"load-async" and "load-data-async". It made it impossible to use only
load-data-async, as the async loading state would be unset when
load-async got set to its default FALSE value.
Use signed integers while combining window space clip rectangles, so we avoid
arithmetic errors later resulting in glScissor getting negative width and
height arguments.
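A minimal sketch of the idea, with made-up window/clip coordinate
names: intersect in signed space and clamp before calling glScissor.
  int x0 = MAX (win_x0, clip_x0);
  int y0 = MAX (win_y0, clip_y0);
  int x1 = MIN (win_x1, clip_x1);
  int y1 = MIN (win_y1, clip_y1);
  glScissor (x0, y0, MAX (0, x1 - x0), MAX (0, y1 - y0));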
Previously this was RGBA_8888. It shouldn't really make a difference but for
consistency we expect almost all textures in use to have an internally
premultiplied pixel format.
_cogl_texture_download_from_gl needs to create transient CoglBitmaps when
downloading sliced textures from GL, and then copies these as subregions
into the final target_bitmap. _cogl_texture_download_from_gl also supports
target_bitmaps with a different format to the source CoglTexture being
downloaded.
The problem was that in the case of sliced textures we were always looking
at the format of the CoglTexture, not of the target_bitmap when setting
up the transient slice bitmap.
To allow for flushing of batched geometry within Cogl we can't support users
directly calling glReadPixels. glReadPixels is also awkward, not least
because it returns upside down image data.
All the unit tests have been switched over and clutter_stage_read_pixels now
sits on top of this too.
I've found this is something I do quite often when debugging rendering
problems since it's a simple way to wipe out lots of geometry and removes a
lot of unpredictable noise when logging geometry passing through the Cogl
journal.
We were calculating our vertex stride and allocating our vertex array
differently depending on whether the user passed TRUE for use_color or not.
The problem was that we were always writing color data to the array
regardless of use_color.
There was also a bug with _cogl_texture_sliced_polygon in that it was
writing byte color components but we were expecting float components. We
now use byte components in _cogl_multitexture_unsliced_polygon too and pass
GL_UNSIGNED_BYTE to glColorPointer.
Cogl already adds similar defines but with the CLUTTER namespace
(CLUTTER_COGL_HAS_GL and CLUTTER_COGL_HAS_GLES). Let's just add two
similar defines with the COGL namespace. Removing the CLUTTER_COGL ones
could break applications silently for no real good reason.
While grepping through the public headers looking for invalid use of
private HAVE_* defines, I stumbled upon two out of sync comments. Yes
it's a very minor trivial change.
JSON-GLib provides simple accessors for basic types so that we
can avoid getting the JsonNode out of a complex type. This makes
the code simpler to understand.
Currently, Clutter depends on the internal copy of JSON-GLib for
the ClutterScript parser. This is done to allow building Clutter
on platforms that do not have the library installed on the system.
Just like we use the internal PNG/JPEG loader as a fallback in
case we don't have GdkPixbuf or CoreGraphics available, we should
use the internal copy of JSON-GLib only in case the system copy
is not present.
The change is simply to move the default for the --with-json
configure switch from "internal" to "check".
In order to allow stricter compliance, a third setting should
be present: "system", which fails if the system copy is not
available.
We should also change the introspection generation to avoid
breaking in case we require the installed Json-1.0.gir instead
of the generated ClutterJson.gir
The clutter_actor_pick() function just emits the ::pick signal
on the actor. Nobody should be using it, since the paint() method
is already context sensitive and will result in a ::pick emission
by itself. The clutter_actor_pick() is just confusing things.
In the int-to-float switch for actor properties, the ::size-change signal
was moved to use floats instead of integers. Sub-pixel precision for image
size is meaningless, though, so we should revert it back to ints.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1659
Instead of using a specific function to check whether the X
server supports the XInput extension we can use the generic
Xlib function XQueryExtension(). This cuts down the extra
checks inside the configure.ac and simplifies the code inside
clutter_x11_register_xinput().
Currently, XInput support requires a function call. In order to
make it easier for people to test it, we can also add a command
line switch that moves the pointer device detection and handling
to XInput. This should ensure that, at least for people building
Clutter with --enable-xinput, applications can be easily migrated
and regressions can be caught.
The StageManager singleton instance is already kept around
by the clutter_stage_manager_get_default() function; there is
no need to have it inside the main Clutter context as well.
Instead of using _clutter_context_get_default() and checking the
is_initialized flag, we should use the newly added private function
that does not cause side effects, especially for functions that have
to be called before any other Clutter function.
The _clutter_context_get_default() function will automatically
create the main Clutter context; if we just want to check whether
Clutter has been initialized this will complicate matters, by
requiring a call to g_type_init() inside the client code.
Instead, we should simply provide an internal API that checks
whether the main Clutter context exists and if it has been
initialized, without any side effect.
The input device API is split halfway through the backends in a very
weird way. The data structures are private, as they should, but most
of the information should be available in the main API since it's
generic enough.
The device type enumeration, for instance, should be common across
every backend; the accessors for device type and id should live in the
core API. The internal API should always use ClutterInputDevice and
not the private X11 implementation when dealing with public structures
like ClutterEvent.
By adding accessors for the device type and id, and by moving the
device type enumeration into the core API we can cut down the amount
of symbols private and/or visible only to the X11 backends; this way
when other backends start implementing multi-pointer support we can
share the same API across the code.
HAVE_COGL_GLES2 is defined in config.h through the configure script and
should not be used in public headers.
The patch makes configure generate the right define that can be used
later in the header.
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.
ActorBox should have methods for easily extracting the X and Y
coordinates of the origin, and the width and height separately.
These methods will make it easier for high-level language bindings
to manipulate ActorBox instances and avoid the Geometry type.
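For instance, assuming the accessors end up as clutter_actor_box_get_x(),
get_y(), get_width() and get_height(), reading the allocated geometry no
longer requires ClutterGeometry or manual box arithmetic (a sketch):

  ClutterActorBox box;

  clutter_actor_get_allocation_box (actor, &box);

  /* the accessors replace manual box.x2 - box.x1 arithmetic */
  g_print ("origin: %.2f, %.2f  size: %.2f x %.2f\n",
           clutter_actor_box_get_x (&box),
           clutter_actor_box_get_y (&box),
           clutter_actor_box_get_width (&box),
           clutter_actor_box_get_height (&box));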
The Vertex and ActorBox boxed types are meant to be used across
the API, but are fairly difficult to bind. Their memory management
is also unclear, and has to go through the indirection of
g_boxed_copy() and g_boxed_free().
Cairo stores the image data in ARGB native byte order so we need to
upload this as BGRA on little endian architectures and ARGB on big
endian. ClutterTexture doesn't currently expose any flags to describe
ARGB format so until we can fix the Clutter API it now uses the Cogl
API directly.
In order to chain up animations using clutter_actor_animate() and
friends you have to use an idle handler that guarantees that the
main loop spins at least once after the animation pointer has been
detached from the actor.
This has several drawbacks, first and foremost the fact that the
slice of the main loop for the idle handler might be starved by
other operations, like redrawing. This inevitably leads to tricks
with priorities and the like, contributing to the overall complexity.
Instead, we should guarantee that the animation instance created by
clutter_actor_animate() is valid for the ::completed signal until
it reaches its default handler; after that, the animation is detached
from the actor and destroyed. This means that it's possible to
create a new animation after the first is complete by simply using
g_signal_connect_after().
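For example, chaining two animations no longer needs an idle handler;
connecting after the default ::completed handler is enough. A minimal
sketch, where the mode and property values are only examples:

  #include <clutter/clutter.h>

  static void
  on_completed (ClutterAnimation *animation,
                gpointer          user_data)
  {
    ClutterActor *actor = user_data;

    /* the previous animation has already been detached from the
     * actor at this point, so this creates a brand new one */
    clutter_actor_animate (actor, CLUTTER_EASE_IN_CUBIC, 500,
                           "x", 0.0,
                           NULL);
  }

  /* start the first animation and queue the second one after it */
  ClutterAnimation *animation =
    clutter_actor_animate (actor, CLUTTER_EASE_OUT_CUBIC, 500,
                           "x", 200.0,
                           NULL);
  g_signal_connect_after (animation, "completed",
                          G_CALLBACK (on_completed), actor);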
This unfortunately makes it impossible to keep a reference to the
animation pointer attached to the actor by using g_object_ref(); a
way to "fix" this would be to have a clutter_animation_attach()
and a clutter_animation_detach() pair of methods that allow attaching
any animation to an actor. This might overcomplicate what is meant
to be a simple animation API, though, so it's currently not implemented
and left for future versions.
The test-easing interactive demo has been modified to show how
the animation queuing works by adding a command line switch that
recenters the animated actor once the first animation has ended.
Continuing in the tradition of making clutter_actor_animate() the
next g_object_connect(), here's the addition of the signal-after::
and signal-swapped:: modifiers for the automagic signal connection
arguments.
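With the new modifiers the chaining described above can be expressed in a
single call; a sketch, with an illustrative callback name and mode:

  clutter_actor_animate (actor, CLUTTER_EASE_OUT_CUBIC, 500,
                         "x", 200.0,
                         /* same as g_signal_connect_after() on the
                          * animation's ::completed signal */
                         "signal-after::completed",
                         G_CALLBACK (on_completed), actor,
                         NULL);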
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1646
In order to be ready for the next major version of GLib we need to
disable single header inclusion by using the G_DISABLE_SINGLE_INCLUDES
define in the build process.
We can't short-circuit the emission of ::queue-redraw for not-visible
actors, since ClutterClone uses that signal to know when things need
to be redrawn.
Calling clutter_actor_queue_redraw() out of clutter_actor_real_map() /
clutter_actor_real_unmap() was causing the flag state to get set
incorrectly from _clutter_actor_set_enable_paint_unmapped(), because
a paint queueing a redraw was not expected.
Moving queuing the redraw to clutter_actor_hide()/show() fixes this, and
also fixes a problem where showing a child of a cloned actor wouldn't
cause the clone to be repainted.
http://bugzilla.openedhand.com/show_bug.cgi?id=1484
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we have a not-visible texture pixmap, we need to:
- Still update it if it is realized, since it won't be
updated when shown. And it might also be cloned.
- Queue a redraw even if it is not visible, since it
might be cloned.
http://bugzilla.openedhand.com/show_bug.cgi?id=1647
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
My patch to choose a premultiplied format when the user gives
COGL_PIXEL_FORMAT_ANY for the internal_format broke the case where the data
in question doesn't have an alpha channel.
This was accidentally missed when merging the premultiplication branch
since I merged a local version of the branch that missed this commit.
Although the underlying materials should allow layers with INVALID_HANDLES
it shouldn't be necessary to expose that via cogl_set_source_texture() and
it's easier to resolve a warning/crash here than odd artefacts/crashes later
in the pipeline.
Merge branch 'premultiplication'
[cogl-texture docs] Improves the documentation of the internal_format args
[test-premult] Adds a unit test for texture upload premultiplication semantics
[fog] Document that fogging only works with opaque or unmultipled colors
[test-blend-strings] Explicitly request RGBA_888 tex format for test textures
[premultiplication] Be more conservative with what data gets premultiplied
[bitmap] Fixes _cogl_bitmap_fallback_unpremult
[cogl-bitmap] Fix minor copy and paste error in _cogl_bitmap_fallback_premult
Avoid unnecesary unpremultiplication when saving to local data
Don't unpremultiply Cairo data
Default to a blend function that expects premultiplied colors
Implement premultiplication for CoglBitmap
Use correct texture format for pixmap textures and FBO's
Add cogl_color_premultiply()
Clarifies that if you give COGL_PIXEL_FORMAT_ANY as the internal format for
cogl_texture_new_from_file or cogl_texture_new_from_data then Cogl will
choose a premultiplied internal format.
The fixed function fogging provided by OpenGL only works with unmultiplied
colors (or if the color has an alpha of 1.0), so since we now premultiply
textures and colors by default, a note to this effect has been added to
clutter_stage_set_fog and cogl_set_fog.
test-depth.c no longer uses clutter_stage_set_fog for this reason.
In the future when we can depend on fragment shaders we should also be
able to support fogging of premultiplied primitives.
We don't want to force texture data to be premultipled if the user
explicitly specifies a non premultiplied internal_format such as
COGL_PIXEL_FORMAT_RGBA_8888. So now Cogl will only automatically
premultiply data when COGL_PIXEL_FORMAT_ANY is given for the
internal_format, or a premultiplied internal format such as
COGL_PIXEL_FORMAT_RGBA_8888_PRE is requested but non-premultiplied source
data is given.
This approach is consistent with OpenVG image formats which have already
influenced Cogl's pixel format semantics.
The _cogl_unpremult_alpha_{first,last} functions which
_cogl_bitmap_fallback_unpremult depends on were incorrectly casting each
of the byte components of a texel to a gulong and performing shifts as
if it were dealing with the whole texel.
It now just uses array indexing to access the byte components without
needing to cast or manually shift any bits around.
Even though we used to depend on unpremult whenever we used a
ClutterCairoTexture, clutter_cairo_texture_context_destroy had its own
unpremult code that worked, which is why this bug wouldn't have been
noticed before.
Now that we typically have premultiplied data stored in Cogl
textures, when fetching a texture into local data for temporary
storage, use a premultiplied format to avoid an unpremultiply/
premultiply roundtrip.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Instead of unpremultiplying the Cairo data, pass it directly to Cogl
in premultiplied form; we now *prefer* premultiplied data.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Many operations, like mixing two textures together or alpha-blending
onto a destination with alpha, are done most logically if texture data
is in premultiplied form. We also have many sources of premultiplied
texture data, like X pixmaps, FBOs, cairo surfaces. Rather than trying
to work with two different types of texture data, simplify things by
always premultiplying texture data before uploading to GL.
Because the default blend function is changed to accommodate this,
uses of pure-color CoglMaterial need to be adapted to add
premultiplication.
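For example, code that previously set a plain color on a material now
needs something along these lines; a sketch using the
cogl_color_premultiply() helper added in this series, where the color
setters are assumed to be cogl_color_set_from_4ub() and
cogl_material_set_color():

  CoglColor color;

  cogl_color_set_from_4ub (&color, 0xff, 0x00, 0x00, 0x80);

  /* the default blend function now expects premultiplied colors */
  cogl_color_premultiply (&color);
  cogl_material_set_color (material, &color);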
gl/cogl-texture.c gles/cogl-texture.c: Always premultiply
non-premultiplied texture data before uploading to GL.
cogl-material.c cogl-material.h: Switch the default blend functions
to ONE, ONE_MINUS_SRC_ALPHA so they work correctly with premultiplied
data.
cogl.c: Make cogl_set_source_color() premultiply the color.
cogl.h.in color-material.h: Add some documentation about
premultiplication and its interaction with color values.
cogl-pango-render.c clutter-texture.c tests/interactive/test-cogl-offscreen.c:
Use premultiplied colors.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
cogl-bitmap.c cogl-bitmap-pixbuf.c cogl-bitmap-fallback.c cogl-bitmap-private.h:
Add _cogl_bitmap_can_premult(), _cogl_bitmap_premult() and implement
a reasonably fast implementation in the "fallback" code.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
RGBA data in X pixmaps and in FBOs is already premultiplied; use
the right format when creating cogl textures.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Since commit d743aeaa updated the internal copy of JSON-GLib and
added a new private header file, we need to fix the build to avoid
a distcheck failure.
Merge branch 'master-clock-updates'
* master-clock-updates: (22 commits)
Change the paint forcing on the Text cache text
[timelines] Improve marker hit check and don't fudge the delta
Revert "[timeline] Don't clamp the elapsed time when a looping tl reaches the end"
[tests] Don't add a newline to the end of g_test_message calls
[test-timeline] Add a marker at the beginning of the timeline
[timeline] Don't clamp the elapsed time when a looping tl reaches the end
[master-clock] Throttle if no redraw was performed
[docs] Update Clutter's API reference
Force a paint instead of calling clutter_redraw()
Fix clutter_redraw() to match the redraw cycle
Run the repaint functions inside the redraw cycle
Remove useless manual timeline ticking
Move elapsed-time calculations into ClutterTimeline
Limit the frame rate when not syncing to VBLANK
Decrease the main-loop priority of the frame cycle
Avoid motion-compression in test-picking test
Compress events as part of the frame cycle
Remove stage update idle and do updates from the master clock
Call g_main_context_wakeup() when we start running timelines
Remove unused msecs_delta member
...
Markers added at the start of the timeline need to be special cased
because we effectively check whether the marker time is greater than
the old frame time or less than or equal to the new frame time but we
never actually emit a frame for the start of the timeline.
If the timeline is looping then it adjusts the position to interpolate
a wrapped around position. However we do not emit a new frame after
setting this position so we need to check for markers again there.
clutter_timeline_get_delta should return the actual wall clock time
between emissions of the new-frame signal - even if the elapsed time
has been fudged at the end of the timeline. To make this work it no
longer directly manipulates priv->msecs_delta but instead passes a
delta value to check_markers.
Someone might hide the previously focused actor in the focus-out signal
handler; since key focus still appears to be on that actor, we'd get a
re-entrant call and a GLib critical warning from g_object_weak_unref().
This changes clutter_stage_get_key_focus() to return the stage (or NULL)
during focus-out signal emission. It used to return the actor that
focus-out was being emitted on.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1547
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This reverts commit 9c5663d671.
The patch was causing problems for applications that expect the
elapsed_time to be at either end of the timeline when the completed
signal is fired. For example test-behave swaps the direction of the
timeline in the completed handler but if the time has overflowed the
end then the timeline would only take a short time to get back to the
beginning. This caused the animation to just vibrate around the
beginning.
The new-frame signal of a timeline was previously guaranteed to be
emitted with the elapsed_time set to the end before it emits the
completed signal. This doesn't necessarily make sense for looping
timelines because it would cause the elapsed time to be clamped to a
slightly off value whenever the timeline restarts. This patch makes it
perform the wrap around before emitting the new-frame signal so that
the elapsed time always corresponds to the time elapsed since the
timeline was started.
Additionally it no longer fudges the msecs_delta property to make the
marker check work so clutter_timeline_get_delta will always return the
wall clock time since the last frame.
A flag in the master clock is now set whenever the dispatch caused an
actual redraw of a stage. If this flag is not set during the prepare
and check functions then it will resort to limiting the redraw
attempts to the default frame rate as if vblank syncing was
disabled. Otherwise if a timeline is running that does not cause the
scene to change then it would busy-wait with 100% CPU until the next
frame.
This fix was suggested by Owen Taylor in:
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
We do not need the whole redraw machinery inside
clutter_stage_read_pixels(): instead, we want Clutter to drop everything,
paint and call glReadPixels() with the current buffer.
The clutter_redraw() function is used by embedding toolkits to
force a redraw on a stage. Since everything is performed by
toggling a flag inside the Stage itself and then letting the
master clock advance, we need a ClutterStage method to ensure
that we start the master clock and redraw.
Instead of calculating a delta in the master clock, and passing that
into each timeline, make each timeline individually responsible for
remembering the last time and computing the delta.
This:
- Fixes a problem where we could spin infinitely processing
timeline-only frames with < 1msec differences.
- Makes timelines consistently start timing on the first frame,
instead of doing different things for the first started timeline
and other timelines.
- Improves accuracy of elapsed time computations by avoiding
accumulating microsecond => millisecond truncation errors.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
clutter-master-clock.c clutter-master-clock.h: When the
SYNC_TO_VBLANK feature is not available, wait for 1/frame_rate
seconds since the start of the last frame before drawing the next
frame. Add _clutter_master_clock_start_running() to abstract
the usage of g_main_context_wakeup()
clutter-stage.c: Add _clutter_master_clock_start_running()
clutter-main.c: Update docs for clutter_set_default_frame_rate()
clutter_get_default_frame_rate() to no longer talk about timeline
frame rates.
test-text-perf.c test-text.c: Set a frame rate of 1000fps so that
frame-rate limiting doesn't affect the result.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Change CLUTTER_PRIORITY_REDRAW to be lower than the GTK+ resize
and relayout priorities to avoid starving GTK+ when run in the
same process as clutter.
Remove the unused CLUTTER_PRIORITY_TIMELINE
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of trying to guess about which motion events are
extraneous, queue up all events until we process a frame.
This allows us to look ahead and reliably compress consecutive
sequence of motion events.
clutter-main.c: Feed received events to the stage for queueing.
Remove old compression code. Remove clutter_get_motion_events_frequency()
clutter_set_motion_events_frequency()
clutter-stage.c: Keep a queue of pending events.
clutter-master-clock.c: Add processing of queued events to the
clock source dispatch function.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When a redraw is queued on a stage, simply set a flag; then in
the check/prepare functions of the master clock source, check
for stages that need redrawing.
This avoids the complexity of having multiple competing sources
at the same priority and makes the update ordering more reliable and
understandable.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If a timeline is added from a different thread, we need to
call g_main_context_wakeup() to wake the main thread up to
start updating the timeline.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of keeping a list of all timelines, and connecting to
signals and weak notifies, simply keep a list of running timelines;
this greatly simplifies both the book-keeping, and also determining
if there are any running timelines.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Remove code to advance the master clock after drawing a frame; if
there are any running timelines the master clock will do another
frame by itself, and the clock will be advanced before running
that frame.
With this change, there is no point in queueing an extra frame
redraw after completing a timeline, since we are always advancing
the timeline *before* redrawing, so remove that code as well.
(This does mean that calling clutter_timeline_stop() won't implicitly
cause the stage to be redrawn; this doesn't seem like something
an app should rely on in any case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The clutter_stage_fullscreen() and clutter_stage_unfullscreen() are
a GDK-ism. The underlying implementation is already using an accessor
with a boolean parameter.
This should bring the number of collisions between properties, methods
and signals down to zero.
The :fullscreen property is very much confusing as it is implemented.
It can be written to a value, but the whole process might fail. If we
set:
g_object_set (stage, "fullscreen", TRUE, NULL);
and the fullscreen process fails or it is not implemented, the value
will be reset to FALSE (if we're lucky) or left TRUE (most of the
times).
The writability is just a shorthand for invoking clutter_stage_fullscreen()
or clutter_stage_unfullscreen() depending on a boolean value without
using an if.
The :fullscreen property also greatly confuses high level languages,
since the same symbol is used:
- for a method name (Clutter.Stage.fullscreen())
- for a property name (Clutter.Stage.fullscreen)
- for a signal (Clutter.Stage::fullscreen)
For these reasons, the :fullscreen should be renamed to :fullscreen-set
and be read-only. Implementations of the Stage should only emit the
StageState event to change from normal to fullscreen, and the Stage
will automatically update the value of the property and emit a notify
signal for it.
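Under this scheme application code requests the state change through the
boolean setter and tracks the outcome through the read-only property; a
sketch, assuming the setter mentioned above is exposed as
clutter_stage_set_fullscreen() and with an illustrative notify callback:

  /* ask the backend to fullscreen the stage; this may fail or be
   * unsupported, so the result is reported asynchronously */
  clutter_stage_set_fullscreen (CLUTTER_STAGE (stage), TRUE);

  /* the read-only :fullscreen-set property reflects what actually
   * happened once the StageState event has been processed */
  g_signal_connect (stage, "notify::fullscreen-set",
                    G_CALLBACK (on_fullscreen_changed), NULL);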
There have been changes in JSON-GLib upstream to clean up the
data structures, and facilitate introspection.
We still do not use the updated JsonParser with the (private) JsonScanner
code, since it's a fork of GLib's GScanner.
Otherwise if there is an error before the slices are created it will
try to free the first_pixels array and crash.
It now also checks whether first_pixels has been created before using
it to update the mipmaps. This should only happen for
cogl_texture_new_from_foreign and doesn't matter if the FBO extension
is available. It would be better in this case to fetch the first pixel
using glGetTexImage as Owen mentioned in the last commit.
tex->first_pixels was never set for foreign textures, leading
to a crash when the texture object is freed.
As a quick fix, simply set to NULL. A more complete fix would
require remembering if we had ever seen the first pixel uploaded,
and if not, doing a glReadPixel to get it before triggering the
mipmap update.
http://bugzilla.openedhand.com/show_bug.cgi?id=1645
Signed-off-by: Neil Roberts <neil@linux.intel.com>
It's very common that there's no reasonable fallback to do if the
blend or combine string you set isn't supported. So, rather than
requiring everybody to pass in a GError purely to catch syntax errors,
automatically g_warning() if a parse error is encountered and @error
is NULL.
http://bugzilla.openedhand.com/show_bug.cgi?id=1642
Signed-off-by: Robert Bragg <robert@linux.intel.com>
When we complete a timeline, we clamp the elapsed_time variable
to the range of the timeline. We need to adjust msecs_delta so that
when we check for hit markers we have the correct interval.
http://bugzilla.openedhand.com/show_bug.cgi?id=1641
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The Animation should be referenced during the notification of the
alpha value, since the callback is invoked depending on the Alpha
and it won't vivify the Animation instance for us.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1537
ClutterEvent is not really gobject-introspection friendly because
of the whole discriminated union thing. In particular, if you get
a ClutterEvent in a signal handler, you probably can't access the
event-type-specific fields, and you probably can't call methods
like clutter_key_event_symbol() either, because you can't cast the
ClutterEvent to a ClutterKeyEvent.
The cleanest solution is to turn every accessor into ClutterEvent
methods, accepting a ClutterEvent* and internally checking the event
type.
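A sketch of a generic handler using such accessors, assuming they end up
named clutter_event_type() and clutter_event_get_key_symbol():

  #include <clutter/clutter.h>

  static gboolean
  on_captured_event (ClutterActor *actor,
                     ClutterEvent *event,
                     gpointer      user_data)
  {
    /* no cast to ClutterKeyEvent is needed: the accessor checks
     * the event type internally */
    if (clutter_event_type (event) == CLUTTER_KEY_PRESS)
      g_print ("key symbol: %u\n", clutter_event_get_key_symbol (event));

    return FALSE;
  }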
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1585
According to clutter_texture_set_cogl_texture you should unref the handle as
the texture takes its own.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
The OpenGL spec states that if you create a pixmap using glXCreatePixmap you
should use glXDestroyPixmap to destroy it.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Setting the pixmap for an unrealized ClutterGLXTexturePixmap should
not cause it to be realized, and certainly shouldn't cause the
REALIZED flag to be set without using clutter_actor_realize().
This patch uses the simple approach that:
- pixmap changes on an unrealized ClutterGLXTexturePixmap
are ignored
- when the ClutterGLXTexturePixmap is realized, we then create
the GLXPixmap and the corresponding texture.
The call to clutter_glx_texture_pixmap_update_area() is moved
from create_cogl_texture() to
clutter_glx_texture_pixmap_create_glx_pixmap() since
create_cogl_texture() is only called from one place, and updating
the area is really something we do *after* creating the texture,
not part of creating the texture.
clutter_glx_texture_pixmap_create_glx_pixmap() is reorganized a
bit to avoid debug-logging confusingly if it's called before a pixmap
has been set, and for readability.
http://bugzilla.openedhand.com/show_bug.cgi?id=1635
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
An implementation of realize() never needs to set the
CLUTTER_ACTOR_REALIZED flag, though it can unset the flag if
things fail unexpectedly. (Previously, stage backend implementations
had to do this since clutter_actor_realize() wasn't used; this
is no longer the case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1634
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Due to the accumulation of floating point errors, natural_width
and min_width can diverge significantly even if the math for
computing them is correct. So just clamp natural_width to
min_width instead of warning about it.
http://bugzilla.openedhand.com/show_bug.cgi?id=1632
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we use float temporaries when computing the bounds of
a group, then, depending on what variables are kept in registers
and what stored on the stack, the accumulated difference between
natural_width and min_width can be more than FLOAT_EPSILON.
Using double temporaries will eliminate the difference in most
cases, or, very rarely, reduce it to a last-bit error.
http://bugzilla.openedhand.com/show_bug.cgi?id=1632
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we are cloning a source actor with an unmapped parent, then when
we temporarily map the source actor:
- We need to skip the check that a mapped actor has a mapped
parent.
- We need to realize the actor's parents before mapping it,
or we'll get an assertion failure in clutter_actor_update_map_state()
because an actor with an unmapped parent is !may_be_realized.
http://bugzilla.openedhand.com/show_bug.cgi?id=1633
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Setting the stage size using clutter_actor_set_size() is almost always
wrong: the X11 stage implementation should save the size and queue a
relayout -- like it does when receiving a ConfigureNotify. The same
should happen when setting it to be full screen.
Since we build the Cogl GIR inside /clutter/cogl we should be looking
there when building the Clutter GIR. Otherwise g-ir-scanner will look
inside the gir directory -- and if you never built Clutter before it
will error out.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1638
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The load-finished signal has a GError* argument which is meant to
signify whether the loading was successful. However many of the
places in ClutterTexture that emit this signal directly pass their
'error' variable which is a GError** and will be NULL or not
completely independently of whether there was an error. If the
argument was dereferenced it would probably crash.
The test-texture-async interactive test case should also verify
that the ::load-finished signal is correctly emitted.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1622
Correctly apply De Morgan's laws to the short-circuit test in
clutter_timeline_pause(); it was short-circuiting always and
never actually pausing.
http://bugzilla.openedhand.com/show_bug.cgi?id=1629
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The commit 2c95b378 prevents clutter_animation_setup_property from being
called with fixed:: property names. This patch adds an additional
parameter "is_fixed" to clutter_animation_setup_property instead of
searching for "fixed::" in property_name.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
It should be possible to render a single PangoLayout with different
colors without recalculating the layout. This was not working because
the color used at the first edit was being stored in the display
list. This broke changing the opacity on a ClutterText.
Now each node in the display list has a 'color override' flag which
marks whether it should use the base color or not. The base color is
now passed in from _cogl_pango_display_list_render_texture. The alpha
value is always taken from the base color.
The clutter_redraw() function is used by libraries embedding
Clutter inside another toolkit, instead of queueing a redraw
on the embedded stage. This means that clutter_redraw() should
perform the same sequence of actions done by the redraw idle
callback.
Clutter short-circuits painting when an actor's opacity is
zero. However if the actor is being painted from a ClutterClone then
it will be painted using the clone's opacity instead so the test was
broken.
* 1.0-integration: (138 commits)
[x11] Disable XInput by default
[xinput] Invert the XI extension version check
[cogl-primitives] Fix an unused variable warning when building GLES
[clutter-stage-egl] Pass -1,-1 to clutter_stage_x11_fix_window_size
Update the GLES backend to have the layer filters in the material
[gles/cogl-shader] Add a missing semicolon
[cogl] Move the texture filters to be a property of the material layer
[text] Fix Pango unit to pixels conversion
[actor] Force unrealization on destroy only for non-toplevels
[x11] Rework map/unmap and resizing
[xinput] Check for the XInput entry points
[units] Validate units against the ParamSpec
[actor] Add the ::allocation-changed signal
[actor] Use flags to control allocations
[units] Rework Units into logical distance value
Remove a stray g_value_get_int()
Remove usage of Units and macros
[cogl-material] Allow setting a layer with an invalid texture handle
[timeline] Remove the concept of frames from timelines
[gles/cogl-shader] Fix parameter spec for cogl_shader_get_info_log
...
Conflicts:
configure.ac
The XInput support in Clutter is still using XI 1.x. This will never
work correctly, and we are all waiting for XInput 2 anyway. The changes
internally should be minimal, so we can leave everything in place, but
it's better to disable XInput support by default -- at least for the
time being.
The texture filters are now a property of the material layer rather
than the texture object. Whenever a texture is painted with a material
it sets the filters on all of the GL textures in the Cogl texture. The
filter is cached so that it won't be changed unnecessarily.
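A sketch of how the filters are now configured on the material layer
rather than on the texture object, assuming the setter is
cogl_material_set_layer_filters() and using layer 0:

  /* filters live on the material layer, not on the texture */
  cogl_material_set_layer (material, 0, texture);
  cogl_material_set_layer_filters (material, 0,
                                   COGL_MATERIAL_FILTER_LINEAR_MIPMAP_LINEAR,
                                   COGL_MATERIAL_FILTER_LINEAR);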
The automatic mipmap generation has changed so that the mipmaps are
only generated when the texture is painted instead of every time the
data changes. Changing the texture sets a flag to mark that the
mipmaps are dirty. This works better if the FBO extension is available
because we can use glGenerateMipmap. If the extension is not available
it will temporarily enable automatic mipmap generation and reupload
the first pixel of each slice. This requires tracking the data for the
first pixel.
The COGL_TEXTURE_AUTO_MIPMAP flag has been replaced with
COGL_TEXTURE_NO_AUTO_MIPMAP so that it will default to
auto-mipmapping. The mipmap generation is now effectively free if you
are not using a mipmap filter mode so you would only want to disable
it if you had some special reason to generate your own mipmaps.
ClutterTexture no longer has to store its own copy of the filter
mode. Instead it stores it in the material and the property is
directly set and read from that. This fixes problems with the filters
getting out of sync when a cogl handle is set on the texture
directly. It also avoids the mess of having to rerealize the texture
if the filter quality changes to HIGH because Cogl will take care of
generating the mipmaps if needed.
The mapping and unmapping of the X11 stage implementation is
a bit broken. It's asynchronous, for starters, when it really
can avoid it by tracking the state internally.
The ordering of the map/unmap sequence is also broken with
respect to the resizing.
By tracking the state internally into StageX11 we can safely
remove the MapNotify and UnmapNotify X event handling.
In theory, we should use _NET_WM_STATE a lot more, and reuse
the X11 state flags for fullscreening as well.
Apparently, the XInput extension is using the same pkg-config
file ('xi') for both the 1.x and the 2.x API, so we need to
check for both the 1.x XGetExtensionVersion and the 2.x
XQueryInputVersion.
When declaring a property using ClutterParamSpecUnits we pass a
default type to limit the type of units we accept as valid values
for the property.
This means that we need to add the unit type check as part of the
validation process.
Sometimes it is useful to be able to track changes in the allocation
flags, like the absolute origin, inside children of a container.
Using the notify::allocation signal is not enough, in these cases, so
we need a specific signal that gives us both the allocation box and the
allocation flags.
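A sketch of a handler for the new signal, assuming its signature carries
the allocation box and the flags; the callback name is illustrative:

  static void
  on_allocation_changed (ClutterActor           *actor,
                         const ClutterActorBox  *box,
                         ClutterAllocationFlags  flags,
                         gpointer                user_data)
  {
    /* notify::allocation alone cannot tell us this */
    if (flags & CLUTTER_ABSOLUTE_ORIGIN_CHANGED)
      g_debug ("absolute origin changed: %.2f, %.2f", box->x1, box->y1);
  }

  g_signal_connect (child, "allocation-changed",
                    G_CALLBACK (on_allocation_changed), NULL);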
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask and flags. This gives us room for expansion
without breaking API/ABI, and allows to encode more information to
the allocation process instead of just changes of absolute origin.
Units as they have been implemented since Clutter 0.4 have always been
misdefined as "logical distance unit", while they were just pixels with
fractional bits.
Units should be reworked to be opaque structures to hold a value and
its unit type, that can be then converted into pixels when Clutter needs
to paint or compute size requisitions and perform allocations.
The previous API should be completely removed to avoid collisions, and
a new type:
ClutterUnits
should be added; the ability to install GObject properties using
ClutterUnits should be maintained.
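A sketch of the intended usage, assuming constructors and converters along
the lines of clutter_units_from_em() and clutter_units_to_pixels():

  ClutterUnits width;

  /* store a logical value together with its unit type... */
  clutter_units_from_em (&width, 5.0);

  /* ...and resolve it into pixels only when sizing or allocating */
  clutter_actor_set_width (actor, clutter_units_to_pixels (&width));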
It was previously possible to create a material layer with no texture
by setting some property on it such as the matrix. However it was not
possible to get back to that state without removing the layer and
recreating it. It is useful to be able to remove the texture to free
resources without forgetting the state of the layer so we can put a
different texture in later.
Timelines no longer work in terms of a frame rate and a number of
frames but instead just have a duration in milliseconds. This better
matches the working of the master clock where if any timelines are
running it will redraw as fast as possible rather than limiting to the
lowest rated timeline.
Most applications will just create animations and expect them to
finish in a certain amount of time without caring about how many
frames are drawn. If a frame is going to be drawn it might as well
update all of the animations to some fraction of the total animation
rather than rounding to the nearest whole frame.
The 'frame_num' parameter of the new-frame signal is now 'msecs' which
is a number of milliseconds progressed along the
timeline. Applications should use clutter_timeline_get_progress
instead of the frame number.
Markers can now only be attached at a time value. The position is
stored in milliseconds rather than at a frame number.
test-timeline-smoothness and test-timeline-dup-frames have been
removed because they no longer make sense.
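In practice timeline code now looks roughly like the following; the
new-frame handler receives the elapsed milliseconds rather than a frame
number, and the handler name and durations are only examples:

  #include <clutter/clutter.h>

  static void
  on_new_frame (ClutterTimeline *timeline,
                gint             msecs,
                gpointer         user_data)
  {
    /* a normalized [0, 1] value, independent of any frame rate */
    gdouble progress = clutter_timeline_get_progress (timeline);

    g_print ("%d ms elapsed, progress: %.2f\n", msecs, progress);
  }

  ClutterTimeline *timeline = clutter_timeline_new (2000 /* ms */);
  clutter_timeline_add_marker_at_time (timeline, "half-way", 1000);
  g_signal_connect (timeline, "new-frame",
                    G_CALLBACK (on_new_frame), NULL);
  clutter_timeline_start (timeline);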
The clutter_actor_map and unmap functions need to be called to
properly update the mapped state. This matches the changes to the X11
stage in 125bded8.
If the application code calls for destruction of an actor we need
to make sure that the actor is unrealized before running the dispose
sequence; otherwise, we might trigger an assertion failure on composite
actors.
The commit 762873e79e is completely
and utterly wrong and I should have never pushed it.
Serves me right for trying to work on three different branches and
on three different things.
Currently, the clock source spins a redraw every time there is at
least a timeline running. If the timelines were not advanced in
the previous frame, though, because their interval is larger than
the vblanking interval then this will lead to excessive redraws of
the scenegraph even if nothing has changed.
To avoid this a simple guard should be set by the MasterClock::advance
method in case no timeline was effectively advanced, and checked
before dispatching the stage redraws.
When creating a Cogl texture from a Cogl bitmap it would steal the
data by setting the bitmap_owner flag and clearing the data pointer
from the bitmap. The data would be freed by the time the
new_from_bitmap is finished. There is no reason to do this because the
data will be freed when the Cogl bitmap is unref'd and it is confusing
not to be able to reuse the bitmap for creating multiple textures.
clutter_color_from_string() only supported the "#rrggbbaa" format with
an alpha channel; this patch adds support for "#rgba".
Colors in "#rrggbb" format were parsed manually, this is now left to
the pango color parsing fallback, since that's handling it just fine.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The cogl_shader_get_info_log() function is very inconvenient for
language bindings and for regular use, as it requires a static
buffer to be filled -- basically just providing a wrapper around
glGetInfoLogARB().
Since COGL aims to be a more convenient API than raw GL we should
just make cogl_shader_get_info_log() return an allocated string
with the GLSL compiler log.
Instead of using GL_TRIANGLES and uploading the indices every time, it
now uses GL_QUADS instead on OpenGL. Under GLES it still uses indices
but it uses the new cogl_vertex_buffer_indices_get_for_quads function
to avoid uploading the vertices every time.
This requires the _cogl_vertex_buffer_indices_pointer_from_handle
function to be exposed privately to the rest of Cogl.
The static_indices array has been removed from the Cogl context.
The GIR file for Clutter still contains symbols from COGL, even
though we provide a Cogl GIR as well. The Clutter GIR should
depend on the Cogl GIR instead.
All the underlying implementation and the public entry points have
been switched to floats; the only missing bits are the Actor properties
that deal with positioning and sizing.
This usually means a major pain when dealing with GValues and varargs
functions. While GValue will warn you when dealing with the wrong
conversions, varargs will simply die a horrible (and hard to debug)
death via segfault. Nothing much to do here, except warn people in the
release notes and hope for the best.
The documentation for ClutterTexture's set_from_rgb_data() and
set_from_yuv_data() says:
Note: This function is likely to change in future versions.
This is not true, since they'll remain for the whole 1.x API cycle.
Now that CoglVertexBuffers support indices we can use them with GLES
to avoid duplicating vertices. Regular GL still uses GL_QUADS because
it is shown to still have a performance benefit over indices with the
Intel drivers.
This function can be used as an efficient way of drawing groups of
quads without using GL_QUADS. It generates a VBO containing the
indices needed to render using pairs of GL_TRIANGLES. The VBO is
globally cached so that it only needs to be uploaded whenever more
indices are requested than ever before.
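A heavily hedged sketch of the intended usage, assuming the draw call keeps
the indices-handle signature described elsewhere in this series; the vertex
buffer itself is set up beforehand and n_quads is a placeholder:

  /* 6 indices describe the two triangles of each quad; the returned
   * VBO is cached globally and only grows when more are requested */
  CoglHandle indices =
    cogl_vertex_buffer_indices_get_for_quads (n_quads * 6);

  cogl_vertex_buffer_draw_elements (vertex_buffer,
                                    COGL_VERTICES_MODE_TRIANGLES,
                                    indices,
                                    0,                /* min index */
                                    n_quads * 4 - 1,  /* max index */
                                    0,                /* indices offset */
                                    n_quads * 6);     /* count */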
The allocate_available_size() method is a convenience method in
the same spirit as allocate_preferred_size(). While the latter
will allocate the preferred size of an actor regardless of the
available size provided by the actor's parent -- and thus it's
suitable for simple fixed layout managers like ClutterGroup -- the
former will take into account the available size provided by the
parent and never allocate more than that; it is, thus, suitable
for simple fluid layout managers.
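Inside a fluid container's allocate() implementation the call might look
like this; a sketch where the child position, the available size and the
flags are placeholders coming from the container's own allocation:

  /* never allocates more than the available width/height provided
   * by the parent, unlike allocate_preferred_size() */
  clutter_actor_allocate_available_size (child,
                                         x, y,
                                         available_width,
                                         available_height,
                                         flags);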
The cogl-enum-types.h file is created by glib-mkenums under
/clutter/cogl/common, and then copied in /clutter/cogl in order
to make the inclusion of that file work inside cogl.h.
Since we're copying it in a different location, the Makefile
for that location has to clean up the copy.
Notifications should be fired off from both the internal timeline and
the wrapping animation here, so notifiers should be frozen around these
property setters.
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Just a couple of final cleanups after the reimplementation of the
Animation model.
i) _set_mode does not need to set the timeline on the alpha
ii) freeze notifications around the setting of a new alpha
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The "started" signal is sent first after the timeline has been set to the
'running' state. For this reason, checking if the clock has any
timelines running will always return true in the "started" signal handler:
the timeline that sent the signal is running.
What needs to be checked in the signal handler is if there are any
timelines running other than the one that emitted the ::started signal,
which we know is running anyway.
This prevents frames from being lost at the beginning of an animation when
a timeline is started after a quiescent period.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1617
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
We avoid rebuilding cogl-enum-types.h and cogl-enum-types.c by
using a "guard" -- a stamp file that keeps the Makefile rules from
running again. Since we need cogl-enum-types.h inside /clutter/cogl
as well for the cogl.h include to work, copying cogl-enum-types.h
unconditionally would trigger a rebuild of the whole of COGL, and
thus a full rebuild.
To solve this, we can copy the header file when generating it
under the stamp file.
The libclutter-cogl internal object should be the only dependency
for Clutter, since we are already copying it inside clutter/cogl
for the introspection scanner. For this reason, the backend-specific,
real internal object should be built with the backend encoded into
the file name, like libclutter-common. This makes the build output
a little bit clearer: instead of having two identical lines:
LINK libclutter-cogl-common.la
...
LINK libclutter-cogl.la
LINK libclutter-cogl.la
We'll have:
LINK libclutter-cogl-common.la
...
LINK libclutter-cogl-gl.la
LINK libclutter-cogl.la
Same applies for the GLES backend.
Just like we do with GObject types and G_DEFINE_TYPE, we should
use the g_once_init_enter/g_once_init_leave mechanism to make the
GType registration of enumeration types thread safe.
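The generated *_get_type() functions would then follow the same pattern
used by G_DEFINE_TYPE; a sketch of the template output, where the
CoglExampleEnum type and its value are made up for illustration:

  GType
  cogl_example_enum_get_type (void)
  {
    static volatile gsize g_define_type_id__volatile = 0;

    if (g_once_init_enter (&g_define_type_id__volatile))
      {
        static const GEnumValue values[] = {
          { COGL_EXAMPLE_FOO, "COGL_EXAMPLE_FOO", "foo" },
          { 0, NULL, NULL }
        };

        GType g_define_type_id =
          g_enum_register_static (g_intern_static_string ("CoglExampleEnum"),
                                  values);

        /* only one thread wins the race; the others wait here */
        g_once_init_leave (&g_define_type_id__volatile, g_define_type_id);
      }

    return g_define_type_id__volatile;
  }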
The setup_viewport() function should only be used by Clutter and
not by application code.
It can be emulated by changing the Stage size and perspective and
requeueing a redraw after calling clutter_stage_ensure_viewport().
The backface culling enabling function was split and renamed, just
like the depth testing one, so we need to add the macro to the
cogl-deprecated.h header.
Previously indices were tightly bound to a particular Cogl vertex buffer
but we would like to be able to share indices so now we have
cogl_vertex_buffer_indices_new () which returns a CoglHandle.
In particular we would like to have a shared set of indices for drawing
lists of quads that can be shared between the pango renderer and the
Cogl journal.
At the moment Cogl doesn't do much batching of quads so most of the time we
are flushing a single quad at a time. This patch simplifies how we submit
those quads to OpenGL by using glDrawArrays with GL_TRIANGLE_FAN mode
instead of sending indexed vertices using GL_TRIANGLES mode.
Note: I hope to follow up soon with changes that improve our batching and
also move the indices into a VBO so they don't need to be re-validated every
time we call glDrawElements.
To assist people porting code from 0.8, the cogl_texture_* functions that
have been replaced now have defines that give some hint as to how they
should be replaced.
cogl_enable_depth_test and cogl_enable_backface_culling have been renamed
and now have corresponding getters, the new functions are:
cogl_set_depth_test_enabled
cogl_get_depth_test_enabled
cogl_set_backface_culling_enabled
cogl_get_backface_culling_enabled
This adds cogl_matrix api for multiplying matrices either by a perspective
or ortho projective transform. The internal matrix stack and current-matrix
APIs also have corresponding support added.
New public API:
cogl_matrix_perspective
cogl_matrix_ortho
cogl_ortho
cogl_set_modelview_matrix
cogl_set_projection_matrix
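For example, setting up a projection without touching GL directly could
look like this; a sketch using the new entry points listed above, where
cogl_matrix_init_identity() is assumed and width/height are placeholders:

  CoglMatrix projection;

  cogl_matrix_init_identity (&projection);

  /* right-multiply the matrix by a perspective transform */
  cogl_matrix_perspective (&projection,
                           60.0f,   /* field of view, in degrees */
                           width / (float) height,
                           0.1f,    /* z near */
                           100.0f); /* z far */

  cogl_set_projection_matrix (&projection);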
cogl_create_context is dealt with internally when _cogl_get_default context
is called, and cogl_destroy_context is currently never called.
It might be nicer later to get an object back when creating a context so
Cogl can support multiple contexts, so these functions are being removed
from the API until we get a chance to address context management properly.
For now cogl_destroy_context is still exported as _cogl_destroy_context so
Clutter could at least install a library deinit handler to call it.
Originally cogl_vertex_buffer_add_indices let the user pass in their own unique
ID for the indices; now the ID is generated internally and returned to the
caller.
It's now possible to add arrays of indices to a Cogl vertex buffer and
they will be put into an OpenGL vertex buffer object. Since it's quite
common for index arrays to be static it saves the OpenGL driver from
having to validate them repeatedly.
This changes the cogl_vertex_buffer_draw_elements API: It's no longer
possible to provide a pointer to an index array at draw time. So
cogl_vertex_buffer_draw_elements now takes an indices identifier that
should correspond to an identifier returned when calling
cogl_vertex_buffer_add_indices ()
This is being removed before we release Clutter 1.0 since the implementation
wasn't complete, and so we assume no one is using this yet. Until we have
someone with a good use case, we can't pretend to support breaking out into
raw OpenGL.
There were a number of functions intended to support creating of new
primitives using materials, but at this point they aren't used outside of
Cogl so until someone has a usecase and we can get feedback on this
API, it's being removed before we release Clutter 1.0.
This removes the following API:
cogl_material_set_blend_factors
cogl_material_set_layer_combine_function
cogl_material_set_layer_combine_arg_src
cogl_material_set_layer_combine_arg_op
These were rather awkward to use, so since it's expected very few people are
using them at this point and it should be straightforward to switch over
to blend strings, the API is being removed before we release Clutter 1.0.
Setting up layer combine functions and blend modes is very awkward to do
programmatically. This adds a parser for string-based descriptions which are
more concise and readable.
E.g. a material layer combine function could now be given as:
"RGBA = ADD (TEXTURE[A], PREVIOUS[RGB])"
or
"RGB = REPLACE (PREVIOUS)"
"A = MODULATE (PREVIOUS, TEXTURE)"
The simple syntax and grammar are only designed to expose standard fixed
function hardware, more advanced combining must be done with shaders.
This includes standalone documentation of blend strings covering the aspects
that are common to blending and texture combining, and adds documentation
with examples specific to the new cogl_material_set_blend() and
cogl_material_layer_set_combine() functions.
Note: The hope is to remove the now redundant bits of the material API
before 1.0
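For instance, instead of the removed blend-factor setters the same
configuration can now be described with a string; a sketch using
cogl_material_set_blend() and the GError convention of the parser (the
blend string shown is the premultiplied-alpha default):

  GError *error = NULL;

  if (!cogl_material_set_blend (material,
                                "RGBA = ADD (SRC_COLOR, DST_COLOR * (1 - SRC_COLOR[A]))",
                                &error))
    {
      g_warning ("invalid blend string: %s", error->message);
      g_error_free (error);
    }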
After long deliberation, the Animation class handling of the
:mode, :duration and :loop properties, as well as the conditions
for creating the Alpha and Timeline instances, came out as far too
complicated for their own good.
This is a rework of the API/parameters matrix and behaviour:
- :mode accessors will create an Alpha, if needed
- :duration and :loop accessors will create an Alpha and a Timeline
if needed
- :alpha will set or unset the Alpha
- :timeline will set or unset the Timeline
Plus, more documentation on the Animation class itself.
Many thanks to Jonas Bonn <jonas@southpole.se> for the feedback
and the ideas.
The Animatable interface implementation will always have the computed
value applied, whilst the non-Animatable objects go through the
interval validation first to avoid incurring assertions and
warnings.
The Animatable::animate_property() should also be able to validate the
property it's supposed to interpolate, and eventually discard it. This
requires adding a return value to the virtual function (and its wrapper
function).
The Animation code will then apply the computed value only if the
animate_property() returns TRUE -- unifying the code path with the
non-Animatable objects.
The Animation class should proxy the :mode, :duration and :loop
properties whenever possible, to avoid them going out of sync when
changed using the Alpha and Timeline instances directly.
Currently, if Timeline:duration is changed, querying Animation:duration
will yield the old value, but the animation itself (being driven by
the Timeline) will use the Timeline's :duration new value. This holds
for the :loop and :mode properties as well.
Instead, the getters for the Animation's :duration, :loop and
:mode properties should ask the relevant object -- if any. The
loop, duration and mode values inside AnimationPrivate should only
be used if no Timeline or no Alpha instances are available, or
when creating new instances.
The Animation should not directly manipulate a Timeline instance,
but it should defer to the Alpha all handling of the timeline.
This means that:
- set_duration() and set_loop() will either create a Timeline or
will set the :duration and :loop properties on the Timeline; if
the Timeline must be created, and no Alpha instance is available,
then a new Alpha instance will be created as well and the newly
created Timeline will be assigned to the Alpha
- if set_mode() on an Animation instance without an Alpha, the
Alpha will be created; a Timeline will also be created
- set_alpha() will replace the Alpha; if the new Alpha does not
have a Timeline associated then a Timeline will be created using
the current :duration and :loop properties of Animation; otherwise,
if the replaced Alpha had a timeline, the timeline will be
transferred to the new one
The CoglTexture constructors expose the "max-waste" argument for
controlling the maximum amount of wasted areas for slicing or,
if set to -1, disables slicing.
Slicing is really relevant only for large images that are never
repeated, so it's a useful feature only in controlled use cases.
Specifying the amount of wasted area is, on the other hand, just
a way to mess up this feature; 99% of the time, you either pull this
number out of thin air, hoping it's right, or you try to do the
right thing and you choose the wrong number anyway.
Instead, we can use the CoglTextureFlags to control whether the
texture should not be sliced (useful for Clutter-GST and for the
texture-from-pixmap actors) and provide a reasonable value for
enabling the slicing ourselves. At some point, we might even
provide a way to change the default at compile time or at run time,
for particular platforms.
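After this change, requesting an unsliced texture is just a flag at
creation time; a sketch, assuming a COGL_TEXTURE_NO_SLICING flag and the
new_from_file signature shown here, with a placeholder file name:

  GError *error = NULL;

  /* slicing is on by default; disable it for the cases (such as
   * texture-from-pixmap) that cannot use slices */
  CoglHandle texture =
    cogl_texture_new_from_file ("image.png",
                                COGL_TEXTURE_NO_SLICING,
                                COGL_PIXEL_FORMAT_ANY,
                                &error);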
Since max_waste is gone, the :tile-waste property of ClutterTexture
becomes read-only, and it proxies the cogl_texture_get_max_waste()
function.
Inside Clutter, the only cases where the max_waste argument was
not set to -1 are in the Pango glyph cache (which is a POT texture
anyway) and inside the test cases where we want to force slicing;
for the latter we can create larger textures that will be bigger than
the threshold we set.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Signed-off-by: Neil Roberts <neil@linux.intel.com>
Sometimes it is necessary for third party code to have a
function called during the redraw process, so that you can
update the scenegraph before it is painted.
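A sketch of such a hook, assuming the entry point is a
clutter_threads_add_repaint_func() style API; the callback is illustrative:

  static gboolean
  update_scenegraph (gpointer user_data)
  {
    /* runs inside the redraw cycle, before the stages are painted;
     * update actors here so the changes land in the same frame */
    return TRUE;  /* keep the repaint function installed */
  }

  clutter_threads_add_repaint_func (update_scenegraph, NULL, NULL);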
If we are short-circuiting the paint when the opacity is zero we still
need to clear the queued_redraw flag otherwise it won't be possible to
queue another redraw of the actor until something else has caused a
paint first.
* master:
[cogl-vertex-buffer] Ensure the clip state before rendering
[test-text-perf] Small fix-ups
Add a test for text performance
[build] Ensure that cogl-debug is disabled by default
[build] The cogl GE macro wasn't passing an int according to the format string
Use the right internal format for GL_ARB_texture_rectangle
[actor_paint] Ensure painting is a NOP for actors with opacity = 0
Make backface culling work with vertex buffers
Before any rendering is done by Cogl it needs to ensure the clip stack
is set up correctly by calling cogl_clip_ensure. This was not being
done for the Cogl vertex buffer so it would still use the clip from
the previous render.
Now that everything is float, the marshalling function of the
size-change signal should reflect that fact.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
GLES doesn't support GL_QUADS. This patch makes it use GL_TRIANGLES
instead in that case. Unfortunately this means submitting two extra
vertices per quad. It could be better to use indexed elements once
CoglVertexBuffers gains support for that.
When ClutterGLXTexturePixmap uses GL_ARB_texture_rectangle,
it needs to pass the right internal format (GL_RGB or GL_RGBA)
when it initializes the texture with glTexImage2D() or later
handling won't recognize the alpha channel.
http://bugzilla.openedhand.com/show_bug.cgi?id=1586
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Since it is convenient to use geometry with an opacity of 0 for input only
purposes it's a worthwhile optimization to avoid submitting anything
for such actors while painting.
Backface culling is enabled as part of cogl_enable so the different
rendering functions in Cogl need to explicitly opt-in to have backface
culling enabled. Cogl vertex buffers should allow backface culling so
they should check whether it is enabled and then set the appropriate
cogl_enable flag.
In order to cope with the situation where an application renders with
a PangoLayout, makes some changes and then renders again with the same
layout, CoglPangoRenderer needs to detect that the changes have
occurred so that it can recreate the display list. This is achieved by
keeping a reference to the first line of the layout. When the layout
is changed Pango will clear the layout pointer in the first line and
create a new line. So if the layout pointer in the line becomes NULL
then we know the layout has changed. This trick was suggested by
Behdad Esfahbod in this email:
http://mail.gnome.org/archives/gtk-i18n-list/2009-May/msg00019.html
When a position is given to cogl_pango_render_layout_subpixel it
translates the GL matrix by the coordinates. However it was not
dividing by PANGO_SCALE so the coordinates were completely wrong.
Most of the operations involving the texture's allocated area require
floats -- either for computations or for setting the geometry into
COGL. So it doesn't make any sense to use get_allocation_coords() and
cast everything to floats.
Currently, COGL depends on defining debug symbols by manually
modifying the source code. When it's done, it will forcefully
print stuff to the console.
Since COGL has also a pretty, runtime selectable debugging API
we might as well switch everything to it.
In order for this to happen, configure needs a new:
--enable-cogl-debug
command line switch; this will enable COGL debugging, the
CoglHandle debugging and will also turn on the error checking
for each GL operation.
The default setting for the COGL debug defines is off, since
it slows down the GL operations; enabling it for a particular
debug build is trivial, though.
COGL has a debug message system like Clutter's own. In parallel,
it also uses a couple of #defines. Spread around there are also
calls to printf() instead to the more correct g_log* wrappers.
This commit tries to unify and clean up the macros and the
debug message handling inside COGL to be more consistent.
ClutterTexture has many properties that can only be accessed using
the GObject API. This is fairly inefficient and makes binding the
class overly complicated.
The Texture class should have accessor methods for all its properties,
properly documented.
The code for the conversion of the GL error enumeration code
into a string is not following the code style and conventions
we follow in Clutter and COGL.
The GE() macro is also using fprintf(stderr) directly instead
of using g_warning() -- which is redirectable to an alternative
logging system using the g_log* API.
We use math routines inside Cogl, so it's correct to have it in
the LIBADD line. In normal usage something else was pulling in
-lm, but the introspection is relying on linking against the
convenience library.
Based on a patch by: Colin Walters <walters@verbum.org>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The timeline created when calling set_timeline(NULL) is referenced
even though we implicitly own it. When the Animation is destroyed,
the timeline is then leaked.
Thanks to: Richard Heatley <richard.heatley@starleaf.com>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1548
Add a method for deleting the current selection inside a Text actor.
This is useful for subclasses.
See bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1521
Based on a patch by: Raymond Liu <raymond.liu@intel.com>
ClutterAnimation currently inherits the initial floating reference
semantics from GInitiallyUnowned. An Animation is, though, meant to
be used as a top-level object, like a Timeline or a Behaviour, and
not "owned" by another object. For this reason, the initial floating
reference does not make any sense.
Document that repeated calls to clutter_cairo_texture_create()
continue drawing on the same cairo_surface_t. Add
clutter_cairo_texture_clear() for when you don't want that behavior.
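A sketch of the documented behaviour; the size and the drawing are
placeholders:

  ClutterActor *texture = clutter_cairo_texture_new (200, 200);

  /* each create() returns a context drawing onto the same surface... */
  cairo_t *cr = clutter_cairo_texture_create (CLUTTER_CAIRO_TEXTURE (texture));
  cairo_set_source_rgb (cr, 1.0, 0.0, 0.0);
  cairo_paint (cr);
  cairo_destroy (cr);

  /* ...so wipe the surface explicitly when starting from scratch */
  clutter_cairo_texture_clear (CLUTTER_CAIRO_TEXTURE (texture));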
http://bugzilla.openedhand.com/show_bug.cgi?id=1599
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The cursor x position is already translated, so we do not need to take the
actor's allocation into account when calculating scrolling.
Additionally, we need to update the text_x value before running
clutter_text_ensure_cursor_position.
Add any scrolling offset to the x value when in single line mode.
Now that the offset is taken into account in the position_to_coords
function, we do not need to adjust the cursor x manually in
clutter_text_paint.
If the cursor is already at the end of the Text contents then we
need to maintain its position when deleting the previous character
using the relative key binding.
The required "fake" libclutter-cogl.la upon with the main clutter
shared object depends is only built with introspection enabled
instead of being built unconditionally.
Passing:
--library=clutter-@CLUTTER_FLAVOUR@-@CLUTTER_API_VERSION@
to g-ir-scanner when building Cogl was causing g-ir-scanner to
link the introspection program against the installed clutter library
if it existed, or to fail otherwise. Instead, copy the handling from
the json/ directory, where we link against the convenience library
to scan, and do the generation of the typelib later in the
main clutter/ directory.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1594
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If text is set, ClutterText should never return less than the layout
height for minimum and preferred heights.
This holds unless ellipsize and wrap are enabled, in which case the
minimum height should be the height of the first line -- which is
the height needed to show, at the very least, the ellipsis.
Based on a patch by: Thomas Wood <thomas@openedhand.com>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1598
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This is another step in abstracting the backend operations
that are currently spread all across the board back into the
backend implementations where they belong.
The GL context creation, for instance, is delegated to the stage
realization, which makes it a critical path for every operation
that is GL-context bound. This usually does not make any difference,
since we realize the default stage, but at some point we might
start looking into avoiding the default stage realization in order
to make the Clutter startup faster.
It also makes the code more maintainable, because every part is
self-contained and can be reworked with the minimum amount of pain.
The master clock is using the redraw priority to create the source
that will be used to spin the paint sequence if something is being
animated using a timeline.
Unfortunately, the priority is too high and this causes starvation
when embedding into other toolkits -- like gtk+.
Thanks to Havoc Pennington for catching this.
The XVisualInfo for GL is created when a stage is being realized.
When embedding Clutter inside another toolkit we might not want to
realize a stage to extract the XVisualInfo, then set the stage
window using a foreign X Window -- which will cause a re-realization.
Instead, we should abstract as much as possible into the X11 backend.
Unfortunately, the XVisualInfo for GL is requested using GLX API; for
this reason we have to create a ClutterBackendX11 method that we
override inside the ClutterBackendGLX implementation.
This also allows us to move a little bit of complexity out of
the stage realization, which is currently a very delicate and hard
to debug section.
I was seeing clutter_text_get_selection trying to malloc up to 4Gb due to
unexpected negative arithmetic for the start/end offsets which resulted
in a crash.
This just tests for positions of -1 before deciding if the start/end
positions need to be swapped. The conversion from position to byte offset
already works with -1.
cogl_clip_push_window_rect is implemented using GPU scissoring which allows
the GPU to cull anything that falls outside a given rectangle. Since in the
case of picking we only ever care about a single pixel we can get the GPU to
ignore all geometry that doesn't intersect that pixel and only rasterize for
one pixel.
Previously clipping could only be specified in object coordinates, now
rectangles can also be pushed in window coordinates.
Internally, rectangles pushed this way are intersected and then clipped using
scissoring. We also transparently try to convert rectangles pushed in
object coordinates into window coordinates, as we anticipate the scissoring
path will be faster than the clip planes and undoubtedly faster
than using the stencil buffer.
The stencil buffer is always cleared the first time a clip is used
that needs it and the stencil test is disabled otherwise so there is
no need to clear before a paint.
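As a rough sketch of how the window-coordinate clip can be used for the
single-pixel picking case described above (pick_x and pick_y stand for
the pick coordinates; the exact prototype of cogl_clip_push_window_rect()
may still change):

  /* clip to the single pixel we care about, in window coordinates;
   * the GPU scissors away everything outside of it */
  cogl_clip_push_window_rect (pick_x, pick_y, 1, 1);

  /* ... render the scene in pick mode ... */

  cogl_clip_pop ();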
COGLenum, COGLint and COGLuint which were simply typedefs for GL{enum,int,uint}
have been removed from the API and replaced with specialised enum typedefs, int
and unsigned int. These were causing problems for generating bindings and also
considered poor style.
The cogl texture filter defines CGL_NEAREST and CGL_LINEAR etc. are now replaced
by a namespaced typedef, CoglTextureFilter, so they should be replaced with
COGL_TEXTURE_FILTER_NEAREST and COGL_TEXTURE_FILTER_LINEAR etc.
The shader type defines CGL_VERTEX_SHADER and CGL_FRAGMENT_SHADER are handled by
a CoglShaderType typedef and should be replaced with COGL_SHADER_TYPE_VERTEX and
COGL_SHADER_TYPE_FRAGMENT.
cogl_shader_get_parameteriv has been replaced by cogl_shader_get_type and
cogl_shader_is_compiled. More getters can be added later if desired.
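In practice, porting code should look something like the following
sketch (tex and shader are assumed CoglHandles; prototypes are as
described above and may differ in detail):

  /* texture filters: CGL_NEAREST/CGL_LINEAR become CoglTextureFilter values */
  cogl_texture_set_filters (tex,
                            COGL_TEXTURE_FILTER_LINEAR,
                            COGL_TEXTURE_FILTER_LINEAR);

  /* shader introspection: cogl_shader_get_parameteriv() is replaced
   * by the dedicated getters */
  if (!cogl_shader_is_compiled (shader))
    g_warning ("shader failed to compile");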
Calling glReadPixels is bad enough in forcing us to synchronize the CPU with
the GPU, but glFinish has even stronger synchronization semantics than
glReadPixels, which may negate some driver optimizations possible in
glReadPixels.
Commit 43fa38fcf5 broke out-of-tree builds by removing some of the
builddir directories from the include path. builddir/clutter/cogl and
builddir/clutter are needed because cogl.h and cogl-defines-gl.h are
automatically generated by the configure script. The main clutter
headers are in the srcdir so this needs to be in the path too.
When the stage state changes between active and inactive, send out
the key-focus-in/key-focus-out signals for the currently key-focused
actor on the stage.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1503
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Setting the stage window using the set_stage_foreign() method will
lead to a re-realization. We need to make sure that the Drawable
currently associated to the GL context is set to None, to avoid a
BadDrawable error or, if we're unlucky, a segfault in the X server.
This commit reverts part of commit 5bcde25c - specifically the
part that forced a realization of the stage if we are ensuring
the GL context with it. This makes Clutter behave like it did
prior to commit 5bcde25c: if we are asked to ensure the GL context
with an unrealized stage we simply pass NULL to the backend
implementation.
When showing a Stage for the first time we end up realizing the stage
implementation before realizing the wrapper. This leads to segmentation
faults or errors coming from the backend because we're fumbling the
state and realization sequence.
Since we destroy any previously set XVisualInfo we keep around, we know
for sure that stage->xvisinfo is going to be None; hence, there is no
reason to check this condition.
The verify_map_state() internal method is conditionally compiled
if we have CLUTTER_ENABLE_DEBUG set; for this reason, all calls to
that method should be made conditional.
When building Clutter with introspection enabled everything stops
at Cogl GIR generation because it depends on the installed library
to work. Since we still require some changes in the API to be able
to build the GIR and the typelib for Cogl we should disable the
generation of the GIR as well.
The fix for bug 1138 broke multi-stage support on GLX, causing
X11 to segfault with the following stack trace:
Backtrace:
0: /usr/X11R6/bin/X(xf86SigHandler+0x7e) [0x80c91fe]
1: [0xb7eea400]
2: /usr/lib/xorg/modules/extensions//libglx.so [0xb7ae880c]
3: /usr/lib/xorg/modules/extensions//libglx.so [0xb7aec0d6]
4: /usr/X11R6/bin/X [0x8154c24]
5: /usr/X11R6/bin/X(Dispatch+0x314) [0x808de54]
6: /usr/X11R6/bin/X(main+0x4b5) [0x8074795]
7: /lib/i686/cmov/libc.so.6(__libc_start_main+0xe5) [0xb7c75775]
8: /usr/X11R6/bin/X(FontFileCompleteXLFD+0x21d) [0x8073a81]
which I can only track down to clutter_backend_glx_ensure_current()
being passed a NULL stage -- something that happens when a stage
is not correctly realized. That should lead to a glXMakeCurrent(None)
and not to a segmentation fault, though.
When destroying a top-level actor we can actually relax the verification
of the map state, since it might be fully asynchronous and we might not
re-enter inside the mainloop in time to receive the unmap notification.
If the filter means that there should be no rows left in the model,
clutter_model_get_iter_at_row (model, 0) should return NULL.
However, the current implementation misbehaves and returns a bad iterator.
This change resolves the issue by tracking if we actually found any
non-filtered rows in our pass through the sequence.
OH Bugzilla: 1591
The stage is chaining up to the ClutterGroup::paint instead of
the ClutterGroup::pick method. This works anyway because we
detect the stage by default, but it's not a reliable solution
in case we decide to change the picking further on.
The master clock is currently advanced using a frame source driven
by the default frame rate. This breaks the sync to vblank because
the vblanking rate could be different than 60 Hz -- or it might be
completely disabled (e.g. with CLUTTER_VBLANK=none).
We should be using the main loop to check if we have timelines
playing, and if so queue a redraw on the stages we own.
We should also prepare the subsequent frame at the end of the redraw
process, so that if there are new redraws we will already have the
scene in place.
This makes Clutter redraw at the maximum frame rate, which is
limited by the vblanking frequency.
Currently, picking in ClutterGroup pollutes the CLUTTER_DEBUG=paint
logs since it just calls the paint function. Reimplementing the pick
doesn't make us lose anything -- it might even be slightly faster
since we don't have to do a (typed) cast and a class dereference.
Currently, the conversion from em to units is done by using the
default font name inside the backend. For actors using their own
font/text layout we need a way to specify the font name along
with the quantity we wish to transform.
Commit 515350a7 renamed ::focus-in and ::focus-out to ::key-focus-in
and ::key-focus-out respectively. One signal emission for ::focus-out
escaped the renaming in ClutterStage.
Currently, the default screen guard value is 0, which is a valid
screen number on X11, and it might not be the default.
Patch suggested by: Owen W. Taylor <otaylor@redhat.com>
Currently, the introspection data for Cogl is built right into
Clutter's own typelib. This makes functions like:
cogl_path_round_rectangle()
Appear as:
Clutter.cogl_path_round_rectangle()
It should be possible, instead, to have a Cogl namespace and:
Cogl.path_round_rectangle()
This means building introspection data for Cogl alone. Unfortunately,
there are three types defined in Cogl that confuse the introspection
scanner, and make it impossible to build a typelib:
COGLint
COGLuint
COGLenum
These three types should go away before 1.0, substituted by int,
unsigned int and proper enumeration types. For this reason, we can
just set up the GIR build and wait until the last moment to create
the typelib. Once that has been done, we will be able to safely
remove the Cogl API from the Clutter GIR and typelib and let
people import Cogl if they want to use the Cogl API via introspection.
For consistency, and since those signals are key-related, the
::focus-in signal is now ::key-focus-in and the ::focus-out
signal is now ::key-focus-out.
With the recent change to internal floating point values, ClutterUnit
has become a redundant type, defined to be a float. All integer entry
points are being internally converted to floating point values to be
passed to the GL pipeline with the least amount of conversion.
ClutterUnit is thus exposed as just a "pixel with fractional bits",
and not -- as users might think -- as a generic, resolution- and
device-independent unit. Not that it ever was the case, but a fair
number of people were convinced it did provide this "feature", and
were flummoxed by the mere existence of this type.
So, having ClutterUnit exposed in the public API doubles the entry
points and has the following disadvantages:
- we have to maintain twice the amount of entry points in ClutterActor
- we still do an integer-to-float implicit conversion
- we introduce a weird impedance mismatch between pixels and "pixels
with fractional bits"
- language bindings will have to choose what to bind, and resort
to manually overriding the API
+ *except* for language bindings based on GObject-Introspection, as
they cannot do manual overrides, thus will replicate the entire
set of entry points
For these reasons, we should coalesce every Actor entry point for
pixels and for ClutterUnit into a single entry point taking a float,
like:
void clutter_actor_set_x (ClutterActor *self,
                          gfloat        x);
void clutter_actor_get_size (ClutterActor *self,
                             gfloat       *width,
                             gfloat       *height);
gfloat clutter_actor_get_height (ClutterActor *self);
etc.
The issues I have identified are:
- we'll have two cases of compiler warnings:
  - printf() format of the return values changing from %d to %f
  - clutter_actor_get_size() taking floats instead of unsigned ints
- we'll have a problem with varargs when passing an integer instead
of a floating point value, except on 64bit platforms where the
size of a float is the same as the size of an int
To be clear: the *intent* of the API should not change -- we still use
pixels everywhere -- but:
- we remove ambiguity in the API with regard to pixels and units
- we remove entry points we get to maintain for the whole 1.0
version of the API
- we make things simpler to bind for both manual language bindings
and automatic (gobject-introspection based) ones
- we have the simplest API possible while still exposing the
capabilities of the underlying GL implementation
When calling clutter_model_iter_next () / clutter_model_iter_prev () we need
to update the row for the iterator. In order to improve the performance of
iterating, this change adds a private row mutator and switches ClutterListModel
to use it.
In order to carry out various comparisons for e.g. identifying the first
iterator in the model we need a ClutterModelIter. This change switches from
creating a new iterator each time to reusing an existing iterator.
The flags field of ClutterActor should have accessor methods, for
language bindings.
Also, the set_flags() and unset_flags() methods should actively
emit notifications for the changed properties.
This is simply a wrapper around cogl_color_set_from_4f and
cogl_material_set_color. We already had a prototype for this, it was
an oversight that it wasn't already implemented.
Several functions that I believe no one is currently using, and that were
only implemented in the GL backend (cogl_offscreen_blit_region and
cogl_offscreen_blit), have simply been removed, so we have a chance to
think about the design later with a real use case.
There was one nonsense function (cogl_offscreen_new_multisample) that
sounded exciting, but in all cases it just returned COGL_INVALID_HANDLE
(though at least for GL it checked for multisampling support first!?);
it has also been removed.
The MASK draw buffer type has been removed. If we want to expose color
masking later then I think it at least would be nicer to have the mask be a
property that can be set on any draw buffer.
The cogl_draw_buffer and cogl_{push,pop}_draw_buffer function prototypes
have been moved up into cogl.h since they are for managing global Cogl state
and not for modifying or creating the actual offscreen buffers.
This also documents the API, so that, for example, deciphering the
semantics of cogl_offscreen_new_to_texture() should be a bit easier now.
These are necessary if nesting redirections to an fbo,
otherwise there's no way to know how to restore
previous state.
glPushAttrib(GL_COLOR_BUFFER_BIT) would save draw buffer
state, but also saves a lot of other stuff, and
cogl_draw_buffer() relies on knowing about all draw
buffer state changes. So we have to implement a
draw buffer stack ourselves.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
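A hedged sketch of the push/pop usage this enables when redirecting to
an offscreen buffer (function names as mentioned above; the exact
signatures and the "tex" handle are assumed):

  CoglHandle offscreen = cogl_offscreen_new_to_texture (tex);

  cogl_push_draw_buffer ();
  cogl_draw_buffer (COGL_OFFSCREEN_BUFFER, offscreen);

  /* ... draw into the offscreen buffer; nested redirections can
   * push/pop again without losing the previous target ... */

  cogl_pop_draw_buffer (); /* restore whatever was current before */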
It is valid in some situations to have a material layer with an invalid texture
handle (e.g. if you setup a texture combine mode before setting the texture)
and so _cogl_material_layer_free needs to check for a valid handle before
attempting to unref it.
Adds missing notices, and ensures all the notices are consistent. The Cogl
blurb also now reads:
* Cogl
*
* An object oriented GL/GLES Abstraction/Utility Layer
Redundant clearing of depth and stencil buffers every render can be very
expensive, so cogl now gives control over which auxiliary buffers are
cleared.
Note: For now clutter continues to clear the color, depth and stencil buffer
each paint.
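For example (a sketch; the flag names are assumed to follow the
COGL_BUFFER_BIT_* pattern), a caller that knows it does not need the
depth and stencil buffers for a frame can now do:

  CoglColor black;

  cogl_color_set_from_4ub (&black, 0x00, 0x00, 0x00, 0xff);

  /* clear only the color buffer and skip the redundant (and
   * potentially expensive) depth and stencil clears */
  cogl_clear (&black, COGL_BUFFER_BIT_COLOR);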
The clutter frame source tries to average out the frame deltas so that
if one frame takes 1 interval plus a little bit of extra time then the
next frame will be 1 interval minus that little bit of extra
time. Therefore the deltas can sometimes be less than the frame
interval. ClutterTimeline should accumulate these small differences,
otherwise it will end up skipping frames and the total duration of
the timeline will be a lot longer.
For example this was causing test-actors to appear to run very slow.
The method of ClutterTimeline that advances the timeline by a
delta (in milliseconds) is going to be useful for testing the
timeline's behaviour -- and for unbreaking the timeline test suite
that was broken by the MasterClock merge.
With the introduction of the map/unmap flags and the split of the
visible state from the mapped state we require that every part of
a scene graph branch is mapped in order to be painted. This breaks
the ability of a ClutterClone to paint a hidden source actor.
In order to fix this we need to introduce an override flag, similar
in spirit to the current modelview and paint opacity overrides that
Clone is already using.
The override flag, when set, will force a temporary map on a
Clone source (and its children).
ClutterContainer provides a foreach_with_internals() vfunc for
iterating over all of a container's children, whether they were added
using the Container API or are internal to the container itself.
We should be using the foreach_with_internals() function instead
of the plain foreach().
When replacing the fbo_handle with a new handle it first unrefs the
old handle. This was previously a call to cogl_offscreen_unref which
silently ignored attempts to unref COGL_INVALID_HANDLE. However the
new cogl_handle_unref does check for this so we should make sure the
handle is valid to avoid the warning.
Bug 1565 - test-fbo case failed in many platform
clutter_texture_new_from_actor was broken because it created the FBO
texture but then never attached it to the material so it was never
used for rendering. The old behaviour in Clutter 0.8 was to assign the
texture directly to priv->tex. In 0.9 priv->tex is replaced with
priv->material which has a reference to the tex in layer 0. So putting
the FBO texture directly in layer 0 more closely matches the original
behaviour.
When rendering a pango layout CoglPangoRenderer now records the
operations into a list called a CoglPangoDisplayList. The entries in
the list are either glyph renderings from a texture, rectangles or
trapezoids. Multiple consecutive glyph renderings from the same glyph
cache texture are combined into a single entry. Note the
CoglPangoDisplayList has nothing to do with a GL display list.
After the display list is built it is attached to the PangoLayout with
g_object_set_qdata so that next time the layout is rendered it can
bypass rebuilding it.
The glyph rendering entries are drawn via a VBO. The VBO is attached
to the display list so it can be used multiple times. This makes the
common case of rendering a PangoLayout contained in a single texture
subsequent times usually boil down to a single call to glDrawArrays
with an already uploaded VBO.
The VBOs are accessed via the cogl_vertex_buffer API so if VBOs are
not available in GL it will resort to a fallback.
Note this will fall apart if the pango layout is altered after the
first render. I can't find a way to detect when the layout is
altered. However this won't affect ClutterText because it creates a
new PangoLayout whenever any properties are changed.
Currently ClutterModel::get_iter_at_row() ignores whether we have
a filter in place. This also extends to the get_n_rows() method.
The more consistent, more intuitive and surely more correct way to
handle a Model with a filter in place is to take into account the
presence of the filter itself -- that is:
- get_n_rows() should take into account the filter and return the
number of *filtered* rows
- get_iter_at_row() should also take the filter into account and
get the first non-filtered row
These two changes make the ClutterModel with a filter function
behave like a subset of the original Model without a filter in
place.
For instance, given a model with four rows:
- [row 0] <does not match filter>
- [row 1] <matches filter>
- [row 2] <matches filter>
- [row 3] <does not match filter>
The get_n_rows() method will return "2", since only two rows will
match the filter; the get_first_iter() method will ask for the
zero-eth row, which will return an iterator pointing to the contents
of row 1 (but the :row property of the iterator will be set to 0);
the get_last_iter() method will ask for the last row, which will
return an iterator pointing to the contents of row 2 (but the :row
property of the iterator will be set to 1).
These changes will hopefully make the Model API more consistent
in its usage, whether there is a filter in place or not.
Currently, there is no way for implementations of the ClutterModel
abstract class to know whether there is a filter in place. Since
subclasses might implement some optimization in case there is no
filter present, we need a simple (and public) API to ask the model
itself.
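A hypothetical sketch of the kind of optimization this enables in a
subclass; the accessor name clutter_model_get_filter_set() and the
foo_model_* helpers are assumptions, not actual API:

  static guint
  foo_model_get_n_rows (ClutterModel *model)
  {
    FooModel *self = FOO_MODEL (model);

    /* fast path: with no filter in place, every stored row is visible */
    if (!clutter_model_get_filter_set (model))
      return foo_model_store_length (self);

    /* slow path: walk the rows and count the ones matching the filter */
    return foo_model_count_filtered_rows (self);
  }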
Setting the wrap mode on the PangoLayout seems to have disappeared
during the text-actor-layout-height branch merge so this brings it
back. The test for this in test-text-cache no longer needs to be
disabled.
We also shouldn't set the width on the layout if there is no wrapping
or ellipsizing because otherwise it implicitly enables wrapping. This
only matters if the actor gets allocated smaller than its natural
size.
Bug 1484 - Redraw ClutterClone when the source changes, even for
!visible sources
Connect to ::queue-redraw on the clone source and queue a redraw.
This allows redrawing the Clone when the source changes, even in
case of a non visible source actor.
http://bugzilla.openedhand.com/show_bug.cgi?id=1484
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Currently, all timelines install a timeout inside the TimeoutPool
they share. Every time the main loop spins, all the timeouts are
updated. This, in turn, will usually lead to redraws being queued
on the stages.
This behaviour leads to the potential starvation of timelines and
to excessive redraws.
One lesson learned from game developers is that the scenegraph
should be prepared in its entirety before the GL paint sequence is
initiated. This means making sure that every ::new-frame signal
handler is called before clutter_redraw() is invoked.
In order to do so a TimeoutPool is not enough: we need a master
clock. The clock will be responsible for advancing all the active
timelines created inside a scene, but only when the stage is
being redrawn.
The sequence is:
+ queue_redraw() is invoked on an actor and bubbles up
to the stage
+ if no redraw() has already been scheduled, install an
idle handler with a known priority
+ inside the idle handler:
- advance the master clock, which will in turn advance
every playing timeline by the amount of milliseconds
elapsed since the last redraw; this will make every
playing timeline emit the ::new-frame signal
- queue a relayout
- call the redraw() method of the backend
This way we trade multiple timeouts with a single frame source
that only runs if a timeline is playing and queues redraws on
the various stages.
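A simplified, hypothetical sketch of the idle handler described above;
the MasterClock type and the master_clock_* helpers are illustrative,
not the actual Clutter internals:

  typedef struct _MasterClock MasterClock;

  /* hypothetical helpers, bodies omitted */
  static void     master_clock_advance       (MasterClock *clock);
  static void     master_clock_redraw_stages (MasterClock *clock);
  static gboolean master_clock_has_timelines (MasterClock *clock);

  static gboolean
  master_clock_idle_cb (gpointer data)
  {
    MasterClock *clock = data;

    /* advance every playing timeline by the milliseconds elapsed
     * since the last redraw; this emits ::new-frame on each one */
    master_clock_advance (clock);

    /* queue a relayout and redraw the stages that asked for it */
    master_clock_redraw_stages (clock);

    /* keep the source installed only while something is animating */
    return master_clock_has_timelines (clock);
  }

  static void
  master_clock_schedule (MasterClock *clock)
  {
    g_idle_add_full (CLUTTER_PRIORITY_REDRAW,
                     master_clock_idle_cb, clock, NULL);
  }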
Bug 1547 - when an actor is unmapped while owning the focus, it should
release it
When an actor is unmapped while owning the focus, it should release it.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Bug 1138 - No trackable "mapped" state
* Add a VISIBLE flag tracking application programmer's
expected showing-state for the actor, allowing us to
always ensure we keep what the app wants while tracking
internal implementation state separately.
* Make MAPPED reflect whether the actor will be painted;
add notification on a ClutterActor::mapped property.
Keep MAPPED state updated as the actor is shown,
ancestors are shown, actor is reparented, etc.
* Require a stage and realized parents to realize; this means
at realization time the correct window system and GL resources
are known. But unparented actors can no longer be realized.
* Allow children to be unrealized even if parent is realized.
Otherwise in effect either all actors or no actors are realized,
i.e. it becomes a stage-global flag.
* Allow clutter_actor_realize() to "fail" if not inside a toplevel
* Rework clutter_actor_unrealize() so internally we have
a flavor that does not mess with visibility flag
* Add _clutter_actor_rerealize() to encapsulate a somewhat
tricky operation we were doing in a couple of places
* Do not realize/unrealize children in ClutterGroup,
ClutterActor already does it
* Do not realize impl by hand in clutter_stage_show(),
since showing impl already does that
* Do not unrealize in various dispose() methods, since
ClutterActor dispose implementation already does it
and chaining up is mandatory
* ClutterTexture uses COGL while unrealizable (before it's
added to a stage). Previously this breakage was affecting
ClutterActor because we had to allow realize outside
a stage. Move the breakage to ClutterTexture, by making
ClutterTexture just use COGL while not realized.
* Unrealize before we set parent to NULL in clutter_actor_unparent().
This means unrealize() implementations can get to the stage.
Because actors need the stage in order to detach from the stage.
* Update clutter-actor-invariants.txt to reflect latest changes
* Remove explicit hide/unrealize from ClutterActor::dispose since
unparent already forces those
Instead just assert that unparent() occurred and did the right thing.
* Check whether parent implements unrealize before chaining up
Needed because ClutterGroup no longer has to implement unrealize.
* Perform unrealize in the default handler for the signal.
This allows non-containers that have children to work properly,
and allows containers to override how it's done.
* Add map/unmap virtual methods and set MAPPED flag on self and
children in there. This allows subclasses to hook map/unmap
(see the sketch below).
These are not signals, because notify::mapped is better for
anything it's legitimate for a non-subclass to do.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
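A minimal sketch of the map/unmap hook mentioned in the list above;
FooActor is hypothetical and assumed to keep an internal child actor in
its instance structure (usual G_DEFINE_TYPE boilerplate omitted):

  static void
  foo_actor_map (ClutterActor *self)
  {
    FooActor *foo = FOO_ACTOR (self);

    /* chain up first, so the actor itself gets the MAPPED flag set */
    CLUTTER_ACTOR_CLASS (foo_actor_parent_class)->map (self);

    /* then explicitly map the internal child we manage ourselves */
    clutter_actor_map (foo->child);
  }

  static void
  foo_actor_unmap (ClutterActor *self)
  {
    FooActor *foo = FOO_ACTOR (self);

    clutter_actor_unmap (foo->child);

    CLUTTER_ACTOR_CLASS (foo_actor_parent_class)->unmap (self);
  }

  static void
  foo_actor_class_init (FooActorClass *klass)
  {
    ClutterActorClass *actor_class = CLUTTER_ACTOR_CLASS (klass);

    actor_class->map = foo_actor_map;
    actor_class->unmap = foo_actor_unmap;
  }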
Bug 1228 - Unnecessary glColorMask on alpha drops performance
With DRI2, alpha is allowed in the window's framebuffer
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Bug 1513 - Allow passing in ClutterPickMode to
clutter_stage_get_actor_at_pos()
At the moment, clutter_stage_get_actor_at_pos() uses CLUTTER_PICK_ALL
internally to find an actor. It would be useful to allow passing in
ClutterPickMode to clutter_stage_get_actor_at_pos(), so that the caller
can specify CLUTTER_PICK_REACTIVE as a criteria.
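For example (a sketch; stage, x and y are assumed to be in scope), a
caller that only cares about reactive actors can now write:

  ClutterActor *actor;

  actor = clutter_stage_get_actor_at_pos (CLUTTER_STAGE (stage),
                                          CLUTTER_PICK_REACTIVE,
                                          x, y);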
Bug 1516 - Does not withdraw toplevels correctly
clutter_stage_x11_hide() needs to use XWithdrawWindow(), not
XUnmapWindow().
As it stands now, if the window is already unmapped (say the WM has
minimized it), then the WM will not know that Clutter has closed the
window and will keep the window managed, showing it in the task list
and so forth.
Bug 1517 - clutter_container_foreach_with_internals()
This allows us to iterate over all children (for things like maintaining
map/realize invariants) or only children that apps added and care about.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
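A small usage sketch (the callback and the "container" variable are
illustrative, not part of the change):

  static void
  relayout_child (ClutterActor *child,
                  gpointer      data)
  {
    clutter_actor_queue_relayout (child);
  }

  /* unlike clutter_container_foreach(), this also walks the
   * container's internal children */
  clutter_container_foreach_with_internals (container,
                                            relayout_child,
                                            NULL);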
Bug 1561 - Bad code in clutter-alpha.c
The implementation of the easing modes equations followed closely
the JavaScript and ActionScript counterparts. Obviously, JS and AS
are not C-compatible, so later versions of gcc (4.4.0 for instance)
would complain about uninitialized variables and such. The code is
also obfuscated and hard to debug/understand.
For these reasons, the implementation should be unobfuscated and
sanitized.
A lockup was reported by fargoilas on #clutter and removing the server grab in
clutter_x11_texture_pixmap_sync_window fixed the problem.
We were doing an X server grab to guarantee that if the XGetWindowAttributes
request reported that our redirected window was viewable, then we would also
be able to get a valid pixmap name (the composite protocol says it's an
error to request a name for a window if it's not viewable). Without the grab
there would be a race condition.
Instead we now handle error conditions gracefully.
The documentation for the ::cursor-event says that the signal
is emitted only when the cursor position changes. Right now it
is being emitted every time we ensure the cursor position during
the Text paint sequence.
The clutter_text_ensure_cursor_position() function should check
whether the stored cursor position has changed since the last
paint, and emit the ::cursor-event signal only when there has
been a change.
We only want to eschew the pango_layout_set_width() on editable,
single-line Text actors because they can "scroll" the PangoLayout.
This fixes the ellipsization on entries.
In unifying the {gl,gles}/cogl.c code recently, moving most of the code into
common/cogl.c the gmodule.h include was also mistakenly moved.
Thanks to Felix Rabe for reporting this issue.
Note: I haven't tested this fix myself, as I'm not set up to be able to
build for OS X
Buffer objects aren't currently available for glx indirect contexts, so we
now have a fallback that simply allocates fake client side vbos to store the
attributes.
This makes the #if 0'd debug code that was in _cogl_journal_flush_quad_batch
- which we have repeatedly found useful for debugging various geometry
issues in Clutter apps - a runtime debug option.
The outline colors rotate in order from red to green to blue, which can also
help confirm the order in which your geometry is really drawn.
The outlines are not affected by the current material state, so if you e.g.
have a blending bug where geometry mysteriously disappears this can confirm
if the underlying rectangles are actually being emitted but blending is
causing them to be invisible.
Editable ClutterText will scroll when allocated less width than is
necessary to lay out the entire layout on a single line, so return 1 for
the minimum width in this case.
Approved by Emmanuele Bassi, fixes bug #1555.
Since we have to do (z_far - z_near) and use it in a division we
should check that the user is not passing a value that would
cause a division by zero.
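A minimal sketch of the kind of guard intended; the function shown is
illustrative, not the actual entry point:

  static void
  setup_frustum (float z_near, float z_far)
  {
    if (z_far == z_near)
      {
        g_warning ("%s: z_near and z_far must not be equal, since "
                   "(z_far - z_near) is used as a divisor",
                   G_STRLOC);
        return;
      }

    /* safe to compute 1.0f / (z_far - z_near) from here on */
  }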
If we need to check that the layout sequence is correct in
terms of order of execution and with respect to caching, then
having a CLUTTER_DEBUG_LAYOUT debug flag would make things
easier.
Ellipsizing was effectively broken for two reasons. There was a typo
in the code to set the width, so it always ended up being some massive
value. And when no height should be set on the layout, it was being set
to G_MAXINT; setting a height greater than 0 enables wrapping, so
ellipsizing is not performed. It should be left at the default of -1
instead.
Bug 1476 - JSON Parser memory leak
Static analysis of the code showed that the in-tree copy of
the JsonParser object leaks objects and arrays on parse errors.
Thanks to Gordon Williams <gordon.williams@collabora.co.uk>
If the stage is unrealized (such as will be the case if the stage was
created with clutter_stage_new) then it would set the size of the
stage display but it was not setting the fullscreen_on_map flag so it
never got the _NET_WM_STATE_FULLSCREEN property. Now it always sets
the flag regardless of whether the window is created yet.
There were also problems with dual-headed displays, because in that case
DisplayWidth/Height will return the size of the combined display, but
Metacity (and presumably other WMs) will sensibly try to fit the window
onto only one of the monitors. However, we were setting the size hints so
that the minimum size was that of the combined display. Metacity tries
to honour this by setting the minimum size, but then it no longer
positions the window at the top left of the screen.
The patch makes it avoid setting the minimum size when the stage is
fullscreen by checking the fullscreen_on_map flag. This also means we
can remove the static was_resizable flag which would presumably have
caused problems for multi-stage.
Adds a new property so that the selection color can be different from
the cursor color. If no selection color is specified it will use the
cursor color as before. If no cursor color is specified either it will
use the text color.
The debug macros for tracking reference counting of CoglHandles had
some typos introduced in c3d9f0 which meant it failed to compile when
COGL_DEBUG is 1.
Since the Cogl material branch merge, when changing the color of a part
of the text using Pango attributes (such as <span color="red" /> markup)
it would not return to the default color for the rest of the
layout. pango_renderer_get_color() returns NULL if there is no color
override in which case it needs to revert to the color specified as
the argument to cogl_pango_render_layout. The 'color' member of
CoglPangoRenderer has been reinstated to store that default color and
now cogl_pango_render_set_color_for_part is the only place that sets
the material color.
The clutter_actor_animate*() family of functions should only connect
to the Animation::completed signal once, during the construction of
the Animation object attached to the Actor. Otherwise, the completed
signal handler will be run multiple times, and will try to unref()
the Animation for each call -- leading to a segmentation fault.
The Animation class is missing a ::started signal matching the
::completed one. A ::started signal is useful for debugging,
initial state set up, and checks.
Bug 1535 - Complete animation always unrefs ClutterAnimation (even
after g_object_ref_sink)
Animations created through clutter_animation_new() should not
automagically unref themselves by default on ::complete. We
only want that behaviour for Animations created by the
clutter_actor_animate* family of functions, since those provide
the automagic memory management.
ClutterGroup still ships with API deprecated since 0.4. We did
promise to keep it around for a minor release cycle -- not for 3.
Since we plan on shipping 1.0 without the extra baggage of the
deprecated entry points, here's the chance to remove the accumulated
cruft.
All the removed methods and signals have a ClutterContainer
counterpart.
Since we're planning to release 1.0 without any of the deprecated
API baggage, we can simply remove the set_uniform_1f() method from
ClutterShader public API and add it to the deprecated header.
The cogl_is_* functions were showing up quite high on profiles due to
iterating through arrays of cogl handles.
This does away with all the handle arrays and implements a simple struct
inheritance scheme. All cogl objects now add a CoglHandleObject _parent
member to their main structures. The base object currently includes two
members: a ref_count, and a klass pointer. The klass in turn gives you a
type and a virtual function for freeing objects of that type.
Each handle type has a _cogl_##handle_type##_get_type () function
automatically defined which returns a GQuark of the handle type, so now
implementing the cogl_is_* funcs is just a case of comparing with
obj->klass->type.
Another outcome of the re-work is that cogl_handle_{ref,unref} are also much
more efficient, and no longer need extending for each handle type added to
cogl. The cogl_##handle_type##_{ref,unref} functions are now deprecated and
are no longer used internally to Clutter or Cogl. Potentially we can remove
them completely before 1.0.
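An illustrative sketch of the scheme (field and function names follow
the description above; CoglFoo is a stand-in for any handle type and
the real structures may differ in detail):

  typedef struct _CoglHandleClass
  {
    GQuark type;                    /* unique id for this handle type */
    void (* virt_free) (void *obj); /* virtual destructor */
  } CoglHandleClass;

  typedef struct _CoglHandleObject
  {
    unsigned int ref_count;
    CoglHandleClass *klass;
  } CoglHandleObject;

  /* every handle type embeds the base object as its first member */
  typedef struct _CoglFoo
  {
    CoglHandleObject _parent;
    /* ... CoglFoo specific state ... */
  } CoglFoo;

  /* so cogl_is_foo() boils down to a quark comparison */
  gboolean
  cogl_is_foo (CoglHandle handle)
  {
    CoglHandleObject *obj = (CoglHandleObject *) handle;

    if (handle == COGL_INVALID_HANDLE)
      return FALSE;

    return obj->klass->type == _cogl_foo_get_type ();
  }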
A layer object may be instantiated when setting a combine mode, but before a
texture is associated (e.g. this is done by the Pango renderer). If this is
the case, we shouldn't call cogl_texture_get_format() with an invalid cogl
handle.
This patch skips over layers without a texture handle when determining if any
textures have an alpha channel.
Bug 1518 - [Patch] Widget derivied from ClutterText will crash on
key_press_event
In clutter_text_key_press() we are using G_OBJECT_TYPE_NAME to find
out the actor's type name. However, if some widget is derived from
ClutterText, when the key press handler is called, G_OBJECT_TYPE_NAME
will return the name of the derived widget.
The default implementation should get the binding pool for the base
class.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
ClutterText should offer multiple selection modes depending on the
number of pointer clicks.
Following the GTK+ conventions:
- double click selects the current word
- triple click selects the current line
Added support for registering a handler for the completed signal
directly amongst the varargs making it easier to attach code
to be executed when animations complete.
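For example (a sketch; "actor" and on_fade_done are assumed), fading out
an actor and reacting when the animation completes can now be written in
a single call:

  static void
  on_fade_done (ClutterAnimation *animation,
                gpointer          user_data)
  {
    /* runs when the implicit animation emits ::completed */
  }

  clutter_actor_animate (actor, CLUTTER_EASE_OUT_CUBIC, 250,
                         "opacity", 0,
                         "signal::completed", G_CALLBACK (on_fade_done), NULL,
                         NULL);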
Bug 1529 - Selection bound out of sync with an empty Text actor
When the user clicks on a Text actor the cursor position and the
selection bound are set using bytes_to_offset(); if the Text is
empty, this means placing them both to 0. Setting the selection
bound to 0 means that when the user inserts a character by typing
it into the Text, the selection will be [0,1]; this will select
the first character, which will then be overwritten when typing
a new character.
The Text actor should, instead, check if there are no contents
and set the cursor position and the selection bound to -1.
The clutter_text_set_selection_bound() method should also validate
the value passed, in case it's bigger than the text length or
smaller than -1.