If you call get_n_columns() during the instance initialization phase but
before set_name()/set_types() have been called, you'll get a (guint) -1.
This is less than ideal.
If columns haven't been initialized we should just return 0, which was
the intent of the API since the beginning.
Based on a patch by: Bastian Winkler <buz@netbuz.org>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The int storage, and the initial value of -1, is used as a guard when
subclassing ClutterListModel to allow the sub-class to call
clutter_model_set_names() and clutter_model_set_types().
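A minimal sketch of the resulting getter (the private field access is an
assumption; the point is that the -1 guard value never escapes as a guint):

  guint
  clutter_model_get_n_columns (ClutterModel *model)
  {
    g_return_val_if_fail (CLUTTER_IS_MODEL (model), 0);

    /* n_columns stays at -1 until set_names()/set_types() have been
     * called; report 0 to callers until then */
    return model->priv->n_columns < 0 ? 0 : model->priv->n_columns;
  }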
This reverts commit c274118a8f.
This makes it more likely that consumers will notice invalid unreferences.
GObject has the same assertion.
http://bugzilla.openedhand.com/show_bug.cgi?id=2029
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
clutter_model_get_n_columns is supposed to return a guint, so the
n_columns field needs to be a guint with the initial value set to 0.
http://bugzilla.openedhand.com/show_bug.cgi?id=2017
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When entering cogl_texture_2d_new_from_bitmap the internal format can
be COGL_PIXEL_FORMAT_ANY. This was causing _cogl_texture_2d_can_create
to use an invalid GL format type. Mesa apparently ignores this but it
was causing errors when Cogl is compiled with debugging under NVidia.
http://bugzilla.openedhand.com/show_bug.cgi?id=2026
Add a return result from CoglTexture.transform_quad_coords_to_gl(),
so that we can properly determine the nature of repeats in
the face of GL_TEXTURE_RECTANGLE_ARB, where the returned
coordinates are not normalized.
The comment "We also work out whether any of the texture
coordinates are outside the range [0.0,1.0]. We need to do
this after calling transform_coords_to_gl in case the texture
backend is munging the coordinates (such as in the sub texture
backend)." is disregarded and removed, since it's actually
the virtual coordinates that determine whether we repeat,
not the GL coordinates.
Warnings about disregarded layers are used in all cases where
applicable, including for subtextures.
http://bugzilla.openedhand.com/show_bug.cgi?id=2016
Signed-off-by: Neil Roberts <neil@linux.intel.com>
Fix Clutter initialisation when ARGB visuals are enabled by setting a
border color when creating the dummy window. This should avoid the
BadMatch that happens when the depth of the root window visual is not
the same as the depth of the ARGB visual.
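A sketch of the kind of change involved (plain Xlib; variable names are
illustrative): when the visual's depth differs from the parent's, the
border pixel and colormap must be supplied explicitly or XCreateWindow()
fails with BadMatch.

  XSetWindowAttributes attrs;

  /* supplying CWBorderPixel (and a matching colormap) avoids the
   * BadMatch raised when the window depth does not match the parent's */
  attrs.border_pixel = 0;
  attrs.colormap = XCreateColormap (xdisplay, xroot,
                                    xvisinfo->visual, AllocNone);

  dummy_xwin = XCreateWindow (xdisplay, xroot,
                              -100, -100, 1, 1, 0,
                              xvisinfo->depth,
                              CopyFromParent,
                              xvisinfo->visual,
                              CWBorderPixel | CWColormap,
                              &attrs);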
http://bugzilla.openedhand.com/show_bug.cgi?id=2011
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Backport of the upstream JSON-GLib commit that improved the strictness
of JsonParser.
The original upstream commit is:
29881f03468db08bfb404cfcd5b61b4cdc419a87
In _cogl_texture_2d_sliced_foreach_sub_texture_in_region(), don't
assert that the target is GL_TEXTURE_2D; instead conditionalize
normalization on the target.
http://bugzilla.openedhand.com/show_bug.cgi?id=2015
If the EGL context is already created then we shouldn't try to create
another one. This was causing problems where one context would be
created from calling _clutter_feature_init and the other was created
from _clutter_backend_get_features. Cogl would set up its state using
the first context and then assume the state was still valid when the
second context was made current, so blending was not working correctly.
http://bugzilla.openedhand.com/show_bug.cgi?id=2020
The documentation and name of the get_transformation_matrix function
implies that 'matrix' is purely an out parameter. However it wasn't
initializing the matrix before calling the 'apply_transform' virtual
so it was basically just a wrapper for the virtual. The virtual
assumes the matrix parameter is in/out and applies the actor's
transformation on top of any existing transformations. This causes
unexpected semantics that are inconsistent with the documentation.
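A sketch of the implied fix (not the exact Clutter code): initialise the
matrix to the identity before chaining to the virtual, so callers really
do get a pure out parameter.

  void
  clutter_actor_get_transformation_matrix (ClutterActor *self,
                                           CoglMatrix   *matrix)
  {
    g_return_if_fail (CLUTTER_IS_ACTOR (self));

    /* start from the identity so 'matrix' behaves as an out parameter */
    cogl_matrix_init_identity (matrix);

    CLUTTER_ACTOR_GET_CLASS (self)->apply_transform (self, matrix);
  }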
This changes clutter_glx_texture_pixmap_update_area so it defers the
call to glXBindTexImageEXT until our pre "paint" signal handler which
makes clutter_glx_texture_pixmap_update_area cheap to call.
The hope is that mutter can switch to reporting raw damage updates to
ClutterGLXTexturePixmap and we can use these to queue clipped redraws.
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» Queuing multiple clipped redraws will result in the bounding box of
each clip rectangle being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» Note: no empirical data was used to come up with this threshold so
we may need to tune this.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted you now also
need to override the signal handler and queue your own redraw.
Technically this is a semantic break, but it's assumed that no one
is currently doing this.
This still leaves a few unsolved issues with regards to optimizing sub
stage redraws that need to be addressed in further work so this can only
be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. It is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor because actors have no way to report a "paint box". (remember
actors can draw outside their allocation and actors with depth may
also be projected outside of their allocation)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
commit 511e5ceb51 accidentally removed the #ifdef COGL_ENABLE_DEBUG
guards around the "cogl-debug" and "cogl-no-debug" cogl_args[] which
this patch restores.
The FlowLayout fails to provide a preferred size if no sizing is
specified on one axis. It should, instead, report a preferred size equal
to the sum of its children's sizes along the axis set by the orientation
property.
http://bugzilla.openedhand.com/show_bug.cgi?id=2013
Some EGL drivers for embedded devices require a specific framebuffer
device to be opened and passed to eglCreateWindowSurface(). Since it's
optional, we can provide an environment variable called
CLUTTER_FB_DEVICE that can be used to specify the path of the device
to be opened.
http://bugzilla.openedhand.com/show_bug.cgi?id=1997
Update the EGL native framebuffer backend to be 1.2-ready:
» create the EGL context and the surface inside the create_context()
implementation so that a context is always available
» simplify the StageWindow implementation
» clean up old code
http://bugzilla.openedhand.com/show_bug.cgi?id=1997
Just like _cogl_texture_2d_new_with_size(),
_cogl_texture_2d_new_from_bitmap() needs to check if an unsliced
texture can be created at the given size, or if hardware
limitations prevent this.
http://bugzilla.openedhand.com/show_bug.cgi?id=2014
Signed-off-by: Neil Roberts <neil@linux.intel.com>
cogl_read_pixels() no longer asserts that the format passed in is
RGBA_8888 but instead accepts any format. The appropriate GL enums for
the format are passed to glReadPixels so OpenGL should perform a
conversion if necessary.
It currently assumes glReadPixels will always give us premultiplied
data. This will usually be correct because the result of the default
blending operations for Cogl ends up with premultiplied data in the
framebuffer. However it is possible for the framebuffer to be in
any format, depending on which CoglMaterial is used to render to
it. Eventually we may want to add a way for an application to inform
Cogl that the framebuffer is not premultiplied in case it is being
used for some special purpose.
If the requested format is not premultiplied then Cogl will convert
it. The tests have been changed to read the data as premultiplied so
that they won't be affected by the conversion. Picking in Clutter has
been changed to use COGL_PIXEL_FORMAT_RGB_888 because it doesn't need
the alpha component. clutter_stage_read_pixels is left unchanged
because the application can't specify a format for that so it seems to
make most sense to store unpremultiplied values.
http://bugzilla.openedhand.com/show_bug.cgi?id=1959
* stage-min-size-rework:
docs: Update minimum size accessors
actor: Use the TOPLEVEL flag instead of a type check
[stage] Use min-width/height props for min size
The clutter-profile.c print_report() code would crash if no stats had
been gathered because uprof would return NULL for the "Redrawing" timer
which we then dereferenced.
This changes the code to start by checking for the "Mainloop",
"Redrawing" and "Do Pick" timers and if none are present it returns
immediately without generating any report.
Since using addresses that might change is something that finally
the FSF acknowledge as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI
for getting the license in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
There is no need for us to check for low-level functions and header
files, especially since we haven't been checking the results until
now. This makes cross-compiling slightly more bearable.
If the actor is an internal child of another actor then we should call
unparent() when destroying it, like clutter_actor_reparent() does;
otherwise we'll leak the actor, since the parent holds a reference to
it.
http://bugzilla.openedhand.com/show_bug.cgi?id=2009
Instead of shadowing these properties with different properties with the
same names on stage, actually use them. Behaviour should be identical,
except the minimum stage size can now be enforced by setting the
min-width/height properties as well as using the set_minimum_size
function.
If we do not unset the Stage we will have stale data, and the Crossing
event when re-entering a Stage will not be emitted, as the actor under
the pointer might be the same as before.
This adds a COGL_INDICES_TYPE_UNSIGNED_INT enum value so that unsigned
ints can be used with cogl_vertex_buffer_indices_new. Unsigned ints
are not supported in core on GLES so a feature flag has also been
added to advertise this. GLES only sets the feature if the
GL_OES_element_index_uint extension is available. It is an error to
call indices_new() with unsigned ints unless the feature is
advertised.
http://bugzilla.openedhand.com/show_bug.cgi?id=1998
Allow a ClutterModel to be constructed through the ClutterScript API.
Currently this allows a model to be generated like this:
  {
    "id" : "test-model",
    "type" : "ClutterListModel",
    "columns" : [
      [ "text-column", "gchararray" ],
      [ "int-column", "gint" ],
      [ "actor-column", "ClutterRectangle" ]
    ]
  }
where 'columns' is an array containing arrays of column-name,
column-type pairs.
http://bugzilla.openedhand.com/show_bug.cgi?id=2007
The code has gotten really complicated to follow.
As soon as we have a sync-to-vblank mechanism we should just bail out.
Also, __GL_SYNC_TO_VBLANK (which is used by nVidia) should be assumed
equivalent to a CLUTTER_VBLANK_GLX_SWAP.
We should explain what a "key frame" is for ClutterAnimator, possibly
with some sort of visual cue.
This allows me to demonstrate my poor skills at using Inkscape, as well
as my overall bad taste for graphics design.
The top-level types list was comically out of date, and it was only
determining whether the type we were constructing was initially unowned
or a full object. We can safely replace it with a simple type check.
It would be useful to be able to share the Timeline across different
animator instances, or with different animation constructs. Also this
allows sharing definitions of Timelines in ClutterScript.
The arguments for remove_key() can be NULL, but there is an extraneous
assertion that fails if they are. The pre-conditions should match the
documentation, in this case.
A sub-class of ClutterBox might add ChildMeta support, and since
pack_at() does not go through clutter_container_add_actor(), we
need to manually call the create_child_meta() ourselves.
It is conceivable that Container implementations might add children
outside of the Container::add() implementation - e.g. for packing at
a specific index. Since the addition (and removal) might happen outside
the common path we need to expose all the API that is implicitly called
by ClutterContainer when adding and removing a child - namely the
ChildMeta creation and destruction.
Previously the GLES2 backend needed a special wrapper for
glBindTexture because it needed to know the internal GL format of the
texture in order to correctly implement the GL_MODULATE texture env
mode. When GL_MODULATE is used then the RGB values are taken from the
previous texture layer rather than being fetched from the
texture. However since the material API was added Cogl no longer uses
the GL_MODULATE texture env mode but instead always uses GL_COMBINE.
Compiling the GLES2 backend broke since the more-texture-backends
branch merge because the cogl_get_internal_gl_format function was
removed and there was one place in GLES2 specific code that was using
this to bind the texture.
The texture layer combine functions are now hard coded to GL_COMBINE
instead of GL_MODULATE. The combine function can be customized with
all the parameters of GL_COMBINE. A shader is generated to implement
the given parameters.
Currently it will try to generate code for the constant color but it
will use a uniform which does not exist.
The GLES2 backend for Cogl is failing to compile because
GL_MAX_TEXTURE_UNITS is not defined. Let's define it and provide a
wrapper which uses GL_MAX_TEXTURE_IMAGE_UNITS or
COGL_GLES2_MAX_TEXTURE_UNITS, whichever is the smallest.
A bogus ClutterInterpolation argument had been carried from
clutter_animator_set_interpolation to clutter_animator_get_interpolation
in copy and paste.
GLib 2.24 (but starting from the 2.23.2 unstable release) added a new
macro for collecting GValues from a va_list.
The newly added G_VALUE_COLLECT_INIT() macro should be used in place
of initializing the GValue and calling G_VALUE_COLLECT(), and improves
the collection performances by avoiding multiple checks, free and
initialization calls.
The installed _HEADERS should be the public ones and the enumeration
types; repeating clutter-x11-texture-pixmap.h breaks with automake 1.11
and doesn't strictly make any sense.
http://bugzilla.openedhand.com/show_bug.cgi?id=2002
When set_container() is called with a NULL container we cannot use the
passed pointer to unset the CLUTTER_ACTOR_NO_LAYOUT flag. We should
store a back pointer to the container as object data (there's no need
to add a Private data structure in this case) and unset the flag on the
back pointer instead.
Previously only ClutterGroup was able to set the CLUTTER_ACTOR_NO_LAYOUT
flag which allows clutter-actor.c to avoid a relayout when showing or
hiding fixed layout containers. Instead of it being the responsibility
of the container to set this flag this patch makes the layout manager
itself decide in the ::set_container method. This way both ClutterBox
and ClutterGroup can take advantage of the optimization.
g_list_insert_sorted inserts the new actor before all others that
compare equal so for the normal case when all actors have depth==0
this has the surprising behaviour of layering the actors in reverse
order. To fix this it now manually inserts the actor in the right
place by searching until it finds an actor at a higher depth and
inserting before that.
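A sketch of the manual insertion (list and field names are assumptions):

  GList *l;

  /* keep insertion order for children with equal depth: walk until we
   * find the first child at a strictly greater depth */
  for (l = priv->children; l != NULL; l = l->next)
    {
      ClutterActor *child = l->data;

      if (clutter_actor_get_depth (child) > clutter_actor_get_depth (actor))
        break;
    }

  /* a NULL sibling means "append at the end" */
  priv->children = g_list_insert_before (priv->children, l, actor);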
http://bugzilla.openedhand.com/show_bug.cgi?id=1988
This reverts commit 939e56e2b1.
Changing the depth sort function to have inconsistent behaviour for
nodes that compare equal breaks the stability of g_list_sort. It ends
up so that every time clutter_container_sort_depth_order is called the
order of all actors with the same depth is reversed.
http://bugzilla.openedhand.com/show_bug.cgi?id=1988
To aid in the debugging of Clutter stage resize issues this adds a
COGL_DEBUG=opengl option that will trace "some select OpenGL calls"
(currently just glViewport calls).
Most Cogl debugging code conditions are marked as G_UNLIKELY with the
intention of having the CPU branch prediction always assume the
path is disabled so having debugging support in release binaries has
negligible overhead.
This patch simply fixes a few cases where we weren't using G_UNLIKELY.
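The pattern in question looks roughly like this (the flag names are
illustrative):

  if (G_UNLIKELY (cogl_debug_flags & COGL_DEBUG_BATCHING))
    g_print ("BATCHING: journal split\n");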
COGL_DEBUG=all wasn't previously useful as there are several options
that change the behaviour of Cogl and all together wouldn't help anyone
debug anything.
This patch makes it so COGL_DEBUG=all|verbose now only enables options
that don't change the behaviour of Cogl, i.e. they only affect the
amount of noise we'll print to a terminal.
In addition to that this patch also improves the output from
COGL_DEBUG=help so we now print a table of options including one liner
descriptions of what each option enables.
Some of the ClutterDebugFlags are not meant as a logging facility: they
actually change Clutter's behaviour at run-time.
It would be useful to have this distinction ratified, and thus split
ClutterDebugFlags into two: one DebugFlags for logging facilities and
another set of flags for behavioural changes.
This split is warranted because:
• it should be possible to do "CLUTTER_DEBUG=all" and only have
log messages on the output
• it should be possible to use behavioural modifiers even on a
Clutter that has been compiled without debugging messages
support
The commit adds two new debugging flags:
ClutterPickDebugFlags - controlled by the CLUTTER_PICK environment
variable
ClutterPaintDebugFlags - controlled by the CLUTTER_PAINT environment
variable
The PickDebugFlags are:
nop-picking
dump-pick-buffers
While the PaintDebugFlags are:
disable-swap-events
The mechanism is equivalent to the CLUTTER_DEBUG environment variable,
but it does not depend on the debug level selected when configuring and
compiling Clutter. The picking and painting debugging flags are
initialized at clutter_init() time.
http://bugzilla.openedhand.com/show_bug.cgi?id=1991
The motion event compression should take into account the device field
of the event; that is, we should only compress motion events coming from
the same device.
If an actor is on the boundary of a Stage and the pointer for a device
enters the Stage over that actor, the sequence of events currently is:
➔ ENTER (source: actor, related: NULL)
➔ MOTION
Thus the Stage never gets an ENTER event. This is a regression from
Clutter 1.0.
The correct sequence is:
➔ ENTER (source: stage, related: NULL)
➔ ENTER (source: actor, related: stage)
➔ MOTION
This also maps to the sequence of events synthesized by Clutter when
leaving the Stage through an actor overlapping the Stage boundary.
http://bugzilla.moblin.org/show_bug.cgi?id=9781
The introduction of the StageManager in 0.8 implied that the first Stage
instance to be created was automatically assigned the status of "default
stage". This was all well and good, since the default stage was created
behind the curtains by the initialization sequence.
Now that the initialization sequence does not create a default stage any
longer, it means that the first stage created using clutter_stage_new()
gets to be the default, and all special and warm and fuzzy - which also
means that the first stage created by clutter_stage_new() cannot be
destroyed or handled as any other stage. Whoopsie.
Let's go back to the old semantics: the stage created by the first
invocation of clutter_stage_get_default() is the default stage, and
nothing else can be set as default. One day we'll be able to break the
API and the whole default stage business will be a thing of the past.
Embedding toolkits should benefit from a proper documentation of
clutter_input_device_update_from_event(): its meaning, its use and
the caveats for the "update_stage" argument.
We now never query the width and height of the given texture object
from OpenGL. The problem is that the user may be creating a Cogl
texture from a texture_from_pixmap object where glTexImage2D was
never called and the texture_from_pixmap spec doesn't clarify that
it's reliable to query the width from OpenGL.
This should address:
http://bugzilla.openedhand.com/show_bug.cgi?id=1502
Thanks to Johan Bilien for reporting
Embedding toolkits most likely will disable the event handling, so all
the input device code will not be executed. Unfortunately, the newly
added synthetic event generation of ENTER and LEAVE event pairs depends
on having input devices.
In order to unbreak things without reintroducing the madness of the
previous code we should allow embedding toolkits to just update the
state of an InputDevice by using the data contained inside the
ClutterEvent. This strategy has two obvious reasons:
• the embedding toolkit is creating a ClutterEvent by translating
a toolkit-native event anyway
• this is exactly what ClutterStage does when processing events
We are, essentially, deferring input device handling to the embedding
toolkits, just like we're deferring event handling to them.
The DeviceManager class should be abstract in Clutter, and implemented
by each backend, as different backends will have different ways to
detect, initialize and list devices; the X11 backend alone has *two*
ways of dealing with devices.
This commit makes DeviceManager an abstract class and delegates the
device initialization and enumeration to per-backend sub-classes.
The backend singleton is, obviously, responsible for creating the
device manager.
The X11 and Win32 backends have been updated to the new layout; the
Win32 backend has been updated blindly, so it might require additional
testing.
ConfigureNotify is delivered on window movements too, but there is no
need to queue a relayout on these as the viewport hasn't changed size.
Check for the window actually changing size on ConfigureNotify before
queueing a relayout.
This fixes laggy window movement when moving a window in response to
Clutter mouse motion events.
The size and position of the window rectangle for clipping in
try_pushing_rect_as_window_rect is calculated by projecting the
rectangle coordinates. Due to rounding errors, this can end up with
slightly off numbers like 34.999999. These were then being cast
directly to an integer so it could end up off by one.
This uses a new macro called COGL_UTIL_NEARBYINT which is a
replacement for the C99 nearbyint function.
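One plausible definition (an assumption, shown only to illustrate the
rounding behaviour):

  /* round to the nearest integer instead of truncating, so a projected
   * coordinate of 34.999999 becomes 35 rather than 34 */
  #define COGL_UTIL_NEARBYINT(x) \
    ((int) ((x) < 0.0f ? (x) - 0.5f : (x) + 0.5f))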
If an actor is lying on the border of the Stage it might miss the LEAVE
event when the pointer of a device leaves the Stage window. Since the
backend is unsetting the Stage back pointer on the InputDevice we can
queue the emission of a LEAVE event on the pointer actor as well.
http://bugzilla.moblin.org/show_bug.cgi?id=9677
As well as manually setting the geometry size, we needed to queue a
relayout. This is what the ConfigureNotify handler would normally do,
but we don't get this event when using a foreign window (obviously).
This should fix resizing in things like gtk-clutter.
If we get into the resize function and it's a foreign window, set the
geometry size so that the allocate will set the backend size and call
glViewport.
Setting/unsetting fullscreen on a mapped or unmapped window now works
correctly.
If you unfullscreen a window that was initially full-screened, it will
unset the fullscreen hint and the WM will likely push the size down to
the largest valid size.
If the window was previously un-fullscreened, Clutter will restore the
previous size.
Fullscreening also now works if the WM switches the hint without the
application's knowledge (as happens when you resize a window to the size
of the screen, for example, with stock metacity).
If FBOs aren't supported then reorganizing the atlas will end up being
very slow. Also currently the CoglTexture2D backend will refuse to
create any textures anyway so the full atlas texture won't be created.
cogl_texture_2d_new may fail in certain circumstances so
cogl_atlas_texture_reserve_space should detect this and also
fail. This will cause cogl_texture_new to fallback to a sliced
texture.
Thanks to Vladimir Ivakin for reporting this problem.
When we resize, we relied on the stage's allocate to re-initialise the
GL viewport. Unfortunately, if we resized within Clutter, the new size
was cached before the window was actually resized, so glViewport wasn't
being called after resizing (some of the time, it's a race condition).
Change the way resizing works slightly so that we only resize when the
geometry size doesn't match our preferred size, and queue a relayout on
ConfigureNotify so the glViewport gets called.
Also change window creation slightly so that setting the size of a
window before it's realized works correctly.
Since the "internal" state is global, it will leak onto actors that you
didn't intend for it to, because it applies not just to the actors you
create, but also to any actors *they* create. Eg, if you have a dialog
box class, you might push/pop_internal around creating its buttons, so
that those buttons get marked as internal to the dialog box. But
ctx->internal_child will still be set during the *button*'s constructor
as well, and so, eg, the label and icon inside the button actor will
*also* be marked as internal children, even if that isn't what the
button class wanted.
The least intrusive change at this point is to make push_internal() and
pop_internal() two methods of the Actor class, and take a ClutterActor
pointer as the argument - thus moving the locality of the internal_child
counter to the Actor itself.
http://bugzilla.openedhand.com/show_bug.cgi?id=1990
The master clock might have a Stage during its destruction phase,
without a StageWindow attached to it. If this happens and we try
to dereference the StageWindow to get its class and call a virtual
function we might experience some slight turbulence and... then...
explode.
http://bugzilla.openedhand.com/show_bug.cgi?id=1987
The signal-swapped-after:: modifier for signal connection inside the
clutter_actor_animate* variadic arguments functions is not mentioned in
the documentation.
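An illustrative use (actor and callback are made up): fade out and hide
the actor once the animation has completed, with the connection swapped
so the actor is passed as the instance.

  clutter_actor_animate (actor, CLUTTER_EASE_OUT_CUBIC, 250,
                         "opacity", 0,
                         "signal-swapped-after::completed",
                           G_CALLBACK (clutter_actor_hide), actor,
                         NULL);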
In the frenzy of the last 10mins before API freeze, I obviously forgot
to update the OpenGL path for _cogl_buffer_hints_to_gl_enum(). This
commit fixes this.
When the atlas is reorganised we could potentially be moving around
textures that are already referenced in the journal. We therefore need
to flush the journal otherwise they will be rendered with incorrect
texture coordinates. We also need to flush the journal even if we are
not reorganizing so that we can rely on the old texture contents
remaining in the atlas after migrating a texture out.
When creating a Cogl sub-texture, if the full texture is also a sub
texture it will now just offset the x and y and reference the full
texture instead. This avoids one level of indirection when rendering
the texture which reduces the chances of getting rounding errors in
the calculations.
Since get_paint_opacity() recurses through the hierarchy it might lead
to a lot of type checks while we walk the parent-child chain. We can
split the recursive function from the public entry point and perform the
type check just once.
• Remove unused variables.
• Do not pre-initialize ClutterActor's GType; pre-emptive optimizations
like these are more black magic than real optimization.
Remove a useless assignment. The n_expand_children variable is not used outside
the extra_space check, and if n_expand_children is 0 then the extra
space we allocate is 0.
• Remove one unused variable.
• We ignore the result of get_timeline_internal() so we need to tell
the compiler that - though a better solution would be to split the
timeline implicit creation into its own function.
Do not de-reference a void*; use a temporary variable -- after
checking the contents of the pointer. This actually simplifies
the readability and avoids pulling a Lisp with the parentheses.
The function _cogl_get_max_texture_units is called quite often while
rendering and it returns a constant value so we might as well cache
the result. Calling glGetInteger on Mesa can be expensive because it
flushes a lot of state.
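A sketch of the caching (the exact GL enum queried here is an
assumption):

  static GLint
  _cogl_get_max_texture_units (void)
  {
    static GLint max_texture_units = -1;

    /* glGetInteger can be expensive on Mesa, so only query once */
    if (max_texture_units == -1)
      GE (glGetIntegerv (GL_MAX_TEXTURE_IMAGE_UNITS, &max_texture_units));

    return max_texture_units;
  }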
An initial pass over the Cogl source code using the Clang static
analysis tool flagged a few low-hanging issues such as unused variables
or redundant initializing of variables which this patch fixes.
All the cogl_rectangle* APIs normalize their input into an array of
_CoglMutiTexturedRect rectangles and pass these on to our workhorse,
_cogl_rectangles_with_multitexture_coords. The definition of
_CoglMutiTexturedRect had 4 separate float members, x_1, y_1, x_2 and
y_2 which meant for some common cases we were having to copy out from an
array into these members. We are now able to simply point into the users
array, avoiding a copy, which seems desirable when submitting lots of
rectangles.
This uses the G_GNUC_DEPRECATED macros to mark the
cogl_{texture,vertex_buffer,shader}_ref and unref APIs as deprecated.
Since this flagged that cogl-pango-display-list.c and
clutter-glx-texture-pixmap.c were still using deprecated _ref/_unref
APIs they have now been changed to use the cogl_handle_ref/unref API
instead.
The function prototypes for the primitives API were spread between
cogl-path.h and cogl-texture.h and should have been in a
cogl-primitives.h.
As well as shuffling the prototypes around into more sensible places
this commit splits the cogl-path API out from cogl-primitives.c into
a cogl-path.c
We've had complaints that our Cogl code/headers are a bit "special" so
this is a first pass at tidying things up by giving them some
consistency. These changes are all consistent with how new code in Cogl
is being written, but the style isn't consistently applied across all
code yet.
There are two parts to this patch; but since each one required a large
amount of effort to maintain tidy indenting it made sense to combine the
changes to reduce the time spent re-indenting the same lines.
The first change is to use a consistent style for declaring function
prototypes in headers. Cogl headers now consistently use this style for
prototypes:
  return_type
  cogl_function_name (CoglType arg0,
                      CoglType arg1);
Not everyone likes this style, but it seems that most of the currently
active Cogl developers agree on it.
The second change is to constrain the use of redundant glib data types
in Cogl. Uses of gint, guint, gfloat, glong, gulong and gchar have all
been replaced with int, unsigned int, float, long, unsigned long and char
respectively. When talking about pixel data; use of guchar has been
replaced with guint8, otherwise unsigned char can be used.
The glib types that we continue to use for portability are gboolean,
gint{8,16,32,64}, guint{8,16,32,64} and gsize.
The general intention is that Cogl should look palatable to the widest
range of C programmers including those outside the Gnome community so
- especially for the public API - we want to minimize the number of
foreign looking typedefs.
OpenGL is an implementation detail for Cogl so it's not appropriate to
expose OpenGL extensions through the Cogl API.
Note: Clutter is currently still using this API, because it is still
doing raw GL calls in ClutterGLXTexturePixmap, so this introduces a
couple of (legitimate) build warnings while compiling Clutter.
This replaces code like this:
  if (CLUTTER_ACTOR_IS_VISIBLE (self))
    clutter_actor_queue_redraw (self);

with:

  clutter_actor_queue_redraw (self);
clutter_actor_queue_redraw internally knows what can be optimized when
the actor is not visible, but it also knows that the queue_redraw signal
must always be sent in case a ClutterClone is cloning a hidden actor.
ClutterGroup::foreach was recently changed (ref: ce030a3fce) to use
g_list_foreach() to iterate the children instead of manually iterating
the list so it would safely handle calls like:
clutter_container_foreach (container, clutter_actor_destroy);
(In this example clutter_actor_destroy will result in the current
list item being iterated being freed.)
There is a lot of duplication between ClutterGroup and ClutterBox so
this makes the two files diff-able so that new fixes can easily be
ported to both and bug fixes missing in one or the other can be spotted
more easily. This doesn't change the behaviour of either actor; it's
really just a shuffle around of code and normalizes the coding style to
make the files comparable.
This has already uncovered one bug in ClutterBox, and also highlights
a bug in ClutterGroup + many other actors:
1) ClutterGroup::real_foreach was recently changed to use
g_list_foreach instead of manually iterating the child list so it can
safely handle calls like:
clutter_container_foreach (container, clutter_actor_destroy);
ClutterBox is still manually iterating the list.
2) In ClutterGroup we guard _queue_redraw() calls like this:
  if (CLUTTER_ACTOR_IS_VISIBLE (container))
    clutter_actor_queue_redraw (CLUTTER_ACTOR (container));
In ClutterBox we don't.
I think ClutterBox is correct here because
clutter_actor_queue_redraw already optimizes the case where the
actor's not visible, but it also considers that the actor may be
cloned and so the guard in ClutterGroup could break clones. This
actually highlights a wider clutter bug since the same kinds of
guards can be found in all other clutter actors.
The signbit macro is defined in C99 so it should be available but some
versions of GCC don't appear to define it by default. If it's not
available we can use a hack to test the bit directly.
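A hedged sketch of the fallback (the real macro may differ):

  #include <string.h>

  /* if the C99 signbit macro is missing, test the IEEE 754 sign bit of
   * the float directly */
  static inline gboolean
  cogl_util_float_signbit (float x)
  {
    guint32 bits;

    memcpy (&bits, &x, sizeof (bits));

    return (bits & 0x80000000u) != 0;
  }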
If the stage associated to the InputDevice is not set we should
short-circuit out and return NULL. This will result in a pick()
done on the event's stage - if applicable.
http://bugzilla.moblin.org/show_bug.cgi?id=9602
Instead of returning a sub-pixel height, round up the preferred height
to the nearest integral value that is not less than the size reported by
Pango, once converted to pixels.
This fixes some backwards logic for asserting that we have a GLX major
version == 1 and a minor version >= 2. (NB: Although we technically
depend on GLX 1.3 features, we still have to support drivers that report
GLX 1.2 because there are a lot of mesa drivers out there incorrectly
report GLX 1.2 even though they export extensions that depend on GLX
1.3)
A material layer cannot be considered equal if it is using different
texture filtering modes. This was causing problems where rectangles
with different filters would end up batched together and then rendered
with the wrong filter mode.
If your OpenGL driver supports GLX_INTEL_swap_event that means when
glXSwapBuffers is called it returns immediately and an XEvent is sent when
the actual swap has finished.
Clutter can use the events that notify swap completion as a means to
throttle rendering in the master clock without blocking the CPU and so it
should help improve the performance of CPU bound applications.
Some extensions only support GLX versions > 1.3 and may not support
old style X Windows as GLXDrawables, so we now create GLXWindows for
stages when possible.
Commit d2bdd3cb62 fixed some compiler warnings but also broke the
ability to create a stage. Although not having warnings from the
compiler is nice, it is also nice to be able to create a stage so let's
not invert the meaning of the error check.
The function modifies the pixels pointed to by p in-place, so the pointer
cannot be constant. The compiler was accepting this because the
modification is done from inline assembler.
_cogl_texture_driver_gen is needed to set the texture minification
mode to Cogl's default of GL_LINEAR. There was also a line to set this
in _cogl_texture_2d_new_with_size but it wasn't working because it was
called *before* the texture was bound. If the texture was later
rendered with the default material then it would end up with GL's
default mipmap filtering mode but without mipmaps so it would render
white squares instead.
This adds a fast path for premultiplying an RGBA image using SSE2
instructions. SSE registers are 128-bit and we need at least 16-bits
per component for the intermediate result of the multiplication so we
can do two pixels in parallel with one register. The function
interleaves 2 SSE registers to multiply 4 pixels in one function call
with the hope that this will pipeline better.
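For reference, the per-pixel operation that the SSE2 path accelerates
looks like this scalar sketch (the real code uses inline assembler,
handles four pixels per call and may use different rounding):

  static inline void
  premultiply_rgba_pixel (guint8 *p)
  {
    guint8 a = p[3];

    /* scale each colour component by alpha / 255 */
    p[0] = (p[0] * a + 127) / 255;
    p[1] = (p[1] * a + 127) / 255;
    p[2] = (p[2] * a + 127) / 255;
  }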
http://bugzilla.openedhand.com/show_bug.cgi?id=1939
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
• Add the function name in the warning, since the text is the same in
both clutter_actor_raise() and clutter_actor_lower().
• If an actor has a name then prefer it to the type name.
OpenGL ES has no PBO extension, so we fallback to using a malloc'ed
buffer. Make sure the OpenGL-only defines don't leak into the OpenGL ES
compilation.
First, let's add a new public feature called, surprisingly,
COGL_FEATURE_PBOS to check the availability of PBOs and provide a
fallback path when running on older GL implementations or on OpenGL ES
In case the underlying OpenGL implementation does not provide PBOs, we
need a fallback path (a malloc'ed buffer). The CoglPixelBuffer
constructors will instantiate a subclass of CoglBuffer that handles
map/unmap and set_data() with a malloc'ed buffer.
The public feature is useful to check before using set_data() on a
buffer as it will mean doing a memcpy() when not supporting PBOs (in
that case, it's better to create the texture directly instead of using a
CoglBuffer).
The only goal of using COGL buffers is to use them to create
textures. cogl_texture_new_from_buffer() is the new symbol to create
textures out of buffers.
This subclass of CoglBuffer aims at wrapping PBOs or other system
surfaces like DRM buffer objects. Two constructors are available:
cogl_pixel_buffer_new() with a size when you only care about the size of
the buffer (such a buffer can be used to store the data for several
textures, such as the three planes of an I420 frame).
cogl_pixel_buffer_new_full() is more of a 1:1 mapping between the data and
an underlying surface, with the possibility of having access to a low
level memory buffer that may have a stride.
Buffer objects are cool! This abstracts the buffer API first introduced
by GL_ARB_vertex_buffer_object and then extended to other objects.
The CoglBuffer abstract class is intended to be the base class of all
the buffer objects, letting the user map() buffers. If the underlying
implementation does not support buffer objects (or only supports VBOs
but not PBOs, for instance), fallback paths should be provided.
The only way the user has to set the mipmap filters is through the
material/layer API. This API defaults to GL_LINEAR/GL_LINEAR for the max
and min filters. With the main use case of Cogl being 2D interfaces, it
makes sense to default to GL_LINEAR for the min filter.
When creating new textures, we did not set any filter on them, using
OpenGL's defaults: GL_NEAREST_MIPMAP_LINEAR for the min filter and
GL_LINEAR for the max filter. This will make the driver allocate memory
for the mipmap tree, memory that will not be used in the nominal case
(as the material API defaults to GL_LINEAR).
This patch tries to ensure that the min filter is set to GL_LINEAR
before any glTexImage*() call is done on the texture by setting the
filter when generating new OpenGL handles.
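A sketch of setting the filter at handle-generation time (the
driver-internal signature is an assumption):

  GLuint
  _cogl_texture_driver_gen (GLenum gl_target)
  {
    GLuint tex;

    GE (glGenTextures (1, &tex));
    GE (glBindTexture (gl_target, tex));

    /* avoid GL's default of GL_NEAREST_MIPMAP_LINEAR, which would make
     * the driver allocate a mipmap tree the nominal case never uses */
    GE (glTexParameteri (gl_target, GL_TEXTURE_MIN_FILTER, GL_LINEAR));

    return tex;
  }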
Some GL functions have a return value that the GE() macro is not able to
handle. Let's define a new GE_RET() macro which will be able to handle
functions such as glMapBuffer().
While at it, removed the unused variadic dots from the GE() macro.
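One plausible shape for the new macro (an assumption, shown only to
illustrate the idea):

  #define GE_RET(ret, x) G_STMT_START {                      \
    GLenum _gl_error;                                        \
    (ret) = (x);                                             \
    while ((_gl_error = glGetError ()) != GL_NO_ERROR)       \
      g_warning ("%s: GL error 0x%x", G_STRLOC, _gl_error);  \
  } G_STMT_END

  /* usage: */
  GE_RET (data, glMapBuffer (GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY));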
* animator-parser:
docs: Describe the Animation definition syntax
animator: Provide a ClutterScript parser
animator: Allow retrieving the property type from a key
script: Use a node when resolving an animation mode
The whole point of having the Animator class is that the developer can
describe a complex animation using ClutterScript. Hence, ClutterAnimator
should hook into the Script machinery and parse a specific description
format for its keys.
When asking a key for its target value we also ask the developer to pass
in an initialized GValue - but we don't make it easy to know the type of
the GValue. A developer has to ask the GObject class for the GParamSpec
and then initialize the GValue, instead.
Since we know the type of the GValue we should provide a getter for it.
We should also allow developers to throw at us GValues with compatible and
transformable types.
Finally, all the accessors should be constified.
Instead of taking a string and duplicating the "is it a string or an
integer" check in both Alpha and Animation, the function in
ClutterScript that resolves the animation mode values should take a
JsonNode and do all the checks it needs.
When we trashed the contents of the stencil buffer during
_cogl_path_fill_nodes we marked the clip stack state as dirty and expected
the clip stack code would clean up our glStencilFunc state.
The problem is that we only try and update the clip state during
_cogl_journal_init (when we flush the framebuffer state) which is only
called when the journal first gets something logged in it.
To make sure the stencil state is cleaned up we now also flush the journal
so _cogl_journal_init will be called for the next logged rectangle.
If we aren't syncing to vblank or if the last dispatch didn't cause a
redraw then the master clock will try to wait at least a small amount
of time before dispatching again. However if time goes backwards then
it would not do a dispatch until time catches up again. To fix this it
now just runs a dispatch immediately if time goes backwards.
This is related to Moblin bug #3839. There was a similar fix for this
in 9dc012c07, however that only fixed the case where timelines
wouldn't update. If there are no animations running then the master
clock won't even try updating timelines until time catches up.
http://bugzilla.o-hand.com/show_bug.cgi?id=1974
* origin/cwiiis-stage-resize:
[stage-x11] Set the default size differently
[stage] Set default size correctly
Revert "[x11] Don't set actor size on ConfigureNotify"
[x11] Don't set actor size on ConfigureNotify
[stage] Now that get_geometry works, use it
[stage-x11] make get_geometry always get geometry
[stage] Get the current size correctly
[stage] Set minimum width/height to 1x1
[stage] Add set/get_minimum_size
ClutterAnimator is a class for managing the animation of multiple
properties of multiple actors over time with keyframing of values.
The Animator class is meant to be used to effectively describe
animations using the ClutterScript definition format, and to construct
complex implicit animations from the ground up.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
We want to set the default size without triggering the layout machinery,
so change the window creation process slightly so we start with a
640x480 window.
Due to the way the new sizing works, clutter stage must set its size in
init (to maintain old behaviour) and the properties on the X11 stage
must be initialised to 1x1 so that it actually goes ahead with the
resize.
Fixes stages that aren't user-resizable and have no size set from
appearing at 1x1.
Calling clutter_actor_set_size in response to ConfigureNotify makes
setting the size of the stage racy - the most common result of which
seems to be that you can't set the stage dimensions to anything less
than 640x480.
Instead, add a first_allocation bit to the private structure of the X11
stage and force the first resize (necessary or the default stage will be
a 1x1 window).
We want the actual window geometry in clutter_stage_set_minimum_size,
not the set size. Now that the geometry function has been changed to do
what it says, use it.
Now that we have a minimum size getter on the stage object, change
get_geometry to actually always return the geometry. This fixes stages
that are set as user-resizable appearing at 1x1 size.
This will need changing in other back-ends too.
Get the current size of the stage correctly in
clutter_stage_set_minimum_size. The get_geometry StageWindow function is
not equivalent to the current size; use clutter_actor_get_size() instead.
This adds three new texture backends.
- CoglTexture2D: This is a trimmed down version of CoglTexture2DSliced
which only supports a single texture and only works with the
GL_TEXTURE_2D target. The code is a lot simpler so it has a less
overheads than dealing with slices. Cogl will use this wherever
possible.
- CoglSubTexture: This is used to get a CoglHandle to represent a
subregion of another texture. The texture can be used as if it was a
standalone texture but it does not need to copy the resources.
- CoglAtlasTexture: This collects RGB and RGBA textures into a single
GL texture with the aim of reducing texture state changes and
increasing batching. The backend will try to manage the atlas and
may move the textures around to close gaps in the texture. By
default all textures will be placed in the atlas.
There was a typo in getting the height of the full texture to check
whether the sub region fits so that it was using the width
instead. This was causing crashes when debugging is enabled for some
apps.
The reason why we have a dummy, offscreen Window when we create the
GLX context is that GLX does not like it when you ask the context for
features if it's not made current to a Drawable. Maybe in the future
it will allow us to do so, but right now we have to make do with what
GLX offers us.
In cogl_texture_new_from_file we create and own a temporary
bitmap. There's no need to copy this data if we need to do a premult
conversion so instead it just does conversion before passing it on to
cogl_texture_new_from_bitmap.
The Cogl atlas code was using _cogl_texture_prepare_for_upload with a
NULL pointer for the dst_bmp to determine the internal format of the
texture without converting the bitmap. It needs to do this to decide
whether the texture will go in the atlas before wasting time on the
conversion. This use of the function is a little confusing so that
part of it has been split out into a new function called
_cogl_texture_determine_internal_format. The code to decide whether a
premult conversion is needed has also been split out.
Bind ctrl-backspace and ctrl-del to functions that delete a word before
or after the cursor, respectively.
Selection does not affect the deletion, but current selection is
preserved. This mimics GTK+ functionality in GtkTextView and GtkEntry.
http://bugzilla.openedhand.com/show_bug.cgi?id=1767
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The SDL API is far too limited for the windowing system needs of
Clutter; the status of the SDL backend was always experimental, and
since the Windows platform is supported by a native backend there is
no point in having the SDL backend around any more.
The Win32 backend now implements the create_context method which
creates a context and binds it to a 1x1 invisible window. That way
there will always be a context bound and the features can be retrieved
without creating the default stage. This reflects the changes in
1c6ffc8..b245d55 to the GLX backend.
Instead of using g_critical() inside the create_context() implementation
of the ClutterBackendGLX we should use the passed GError, so that the
error message can bubble up to the caller.
Instead of creating the default stage during initialization we can
now safely create it whenever clutter_stage_get_default() is called.
To maintain the invariant, the default stage is immediately realized
by Clutter itself.
Since we must guarantee that Cogl has a GL context to query, it is too
late to use the "dummy Window" trick from within the get_features()
virtual function implementation.
Instead, we can create a dummy Window from create_context() itself and
leave it around - basically trading a default stage with a dummy X
window.
We need to have the dummy X window around all the time so that the
GLX context can be selected and made current.
High level toolkits might wish to construct a PangoFontDescription and
then set it directly on a ClutterText actor proxy or sub-class.
ClutterText should have a :font-description property to set (and get)
the PangoFontDescription.
http://bugzilla.openedhand.com/show_bug.cgi?id=1960
Commit 92a375ab4 changed the initial value of max_texcoord_attrib_unit
to -1 so that it could disable the texture coord array for the first
texture unit when there are no texture coords used in the vbo. However
max_texcoord_attrib_unit was an unsigned value so this actually became
G_MAXUINT. The disabling loop at the bottom still worked because
G_MAXUINT+1==0 but the check for whether any texture unit is greater
than max_texcoord_attrib_unit was failing so it would always end up
disabling all texture units. This is now fixed by changing
max_texcoord_attrib_unit to be signed.
The commit ecbb7ce41a exposed some issues
when positioning the cursor with the mouse pointer: the selection is
not moved along with the cursor when inserting a single character or a
string.
Also, some freeze_notify() calls are made too early, decoupling them
from their respective thaw_notify() calls.
http://bugzilla.openedhand.com/show_bug.cgi?id=1955
The documentation for ClutterGroup behaviour when setting an explicit
size is not accurate - or, actually, it was accurate by the time
ClutterGroup was first written but has been neglected in the following
release cycles.
To avoid confusion for new users of Clutter the documentation should be
slightly expanded, mentioning the exact semantics of ClutterGroup with
regards to: preferred size, explicitly set size and how to constrain the
visible area of a ClutterGroup to an explicitly set size.
Based on a patch by: Neil Roberts <neil@linux.intel.com>
When we disable the per-actor events delivery Clutter replicates the X11
implicit soft grab for motion events with off-stage coordinates. The implicit grab
is done whenever the pointer of a device leaves a window with a button
still pressed; with the implicit grab in place the window still receives
motion events even after the LeaveNotify - until the button is released.
The implicit grab is not honoured in the per-actor event delivery case,
though, so we have a mismatch between two in theory equivalent cases.
Luckily, the fix is pretty trivial: when we check for a motion event
with a stage set but without an actor set, and that has off-stage
coordinates, we arbitrarily set the source to be the stage of the event
and emit the pointer event.
When deciding if a material layer is equal it now compares the GL
target and texture number if the textures are not sliced. This is
needed to get batching across atlased textures.
Cogl accepts a pixel format for both the data in memory and the
internal format to be used for the texture. If they do not match then
it would convert them using the CoglBitmap functions before uploading
the data. However, GL also lets you specify both formats so it makes
more sense to let GL do the conversion. The driver may need the
texture in a specific format so it may end up being converted anyway.
The cogl_texture_upload_data functions have been removed and replaced
with a single function to prepare the bitmap. This will only do the
premultiplication conversion because that is the only part that GL
can't do directly.
The premult part of _cogl_convert_premult has now been split out as
_cogl_convert_premult_status. _cogl_convert_premult has been renamed
to _cogl_convert_format to make it less confusing. The premult
conversion is now done in-place instead of copying the
buffer. Previously it was copying the buffer once for the format
conversion and then copying it again for the premult conversion. The
premult conversion never changes the size of the buffer so it's quite
easy to do in place. We can also use the separated out function
independently.
The internal format of the atlas texture is still set to the
appropriate format so Cogl will disable blending for textures that are
intended to be RGB. This should end up ignoring the alpha channel from
the texture in the atlas. This makes the code slightly easier to
maintain and should also improve the chances of batching.
Since we're allowing allocation cycles, saying that calling
queue_relayout() inside an allocation cycle "is not allowed" is kind of
confusing. We should say that "it is not recommended" instead.
* device-manager: (37 commits)
x11: Re-enable XI1 extension keyboards
x11: Always handle core device events before XI events
docs: Documentation fixes for DeviceManager
device-manager: Fix the signals definition
docs: Add sections for InputDevice and DeviceManager
docs: Add clutter_input_device_get_device_name()
tests: Print out the device details on motion
Always register core devices
device: Remove unused is_default member
win32: Experimental implementation of device support
tests: Print the device name, as well as its Id
x11: Fill out the :name property of the InputDevices
device: Add the :name property to InputDevice
x11: Store core devices on the X11 Backend singleton
device: Unset the cursor actor when leaving the stage
device: Add pointer actor getter
x11: Discard the LeaveNotify for off-stage ButtonRelease
device: Do not overwrite the stage for an InputDevice
event: Off-stage button releases have a click count of 1
event: Scroll events do not have click count
...
Added a "selection-bound" notify on clutter_text_clear_selection as it
changes the value.
Added utility function clutter_text_set_positions, in order to
change both cursor position and selection bound inside a
g_object_[freeze/thaw]_notify block
Added g_object_[freeze/thaw]_notify in other functions that change
both cursor position and selection bound
Solves http://bugzilla.openedhand.com/show_bug.cgi?id=1955
ClutterStage has both set_key_focus() and get_key_focus() methods, but
there is no :key-focus property. This means that it is not possible to
get notifications when the key-focus has changes except by connecting to
both the ::key-focus-in and ::key-focus-out signals and do additional
bookkeeping.
http://bugzilla.openedhand.com/show_bug.cgi?id=1956
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The TimeoutPool is not used by ClutterTimeline any more, so we need to
remove a sentence from its description. We also need to fix the gtk-doc
syntax errors.
Instead of assigning a new colour to each quad of a batch, the
rectangle debugging code now assigns a new colour to each batch so
that it can be used to visually see what is being batched. The colour
is stored in a global variable that is reset during cogl_clear. This
improves the chances that the same colour will be used for a batch in
the next frames to avoid flickering.
When setting up the state for the vertex buffer,
enable_state_for_drawing_buffer tries to keep track of the highest
numbered texture unit in use. It then disables any texture arrays for
units that were previously enabled if they are greater than that
number. However if there is no texturing in the VBO then the max used
unit would be left at 0 which it would later think meant unit 0 is
still in use so it wouldn't disable it. To fix this it now initialises
the max used unit to -1 which it should interpret as ‘no units are in
use’ so it will later disable the arrays for all units.
Thanks to Jon Mayo for reporting the bug.
http://bugzilla.openedhand.com/show_bug.cgi?id=1957
We were checking the number of texture units against the GL enum that is
used in glGetInteger() to query that number. Let's abstract this in a
little function.
Took the opportunity to dig a bit into the usage of GL limits for the
number of texture (image) units and document our use of them. We'll need
something finer grained if we want to fully exploit texture image units
with a programmable pipeline.
The index field of CoglTextureUnit was never set, leading to the
creation of units with index set to 0. When trying to retrieve a texture
unit by its index (!= 0) with _cogl_get_texture_unit(), a new one was
created as it could not find it back in the list of textures units:
ctx->texture_units.
http://bugzilla.openedhand.com/show_bug.cgi?id=1958
The :opacity property is defined using a GParamSpecUchar. This usually
leads to issues with language bindings that don't have an 'unsigned
char' type and that need to explicitly handle the conversion between
G_TYPE_UCHAR and G_TYPE_INT or G_TYPE_UINT.
The property definition already specifies an interval size of [0, 255]
on the values; more importantly, GObject already implicitly transforms
between G_TYPE_UCHAR and G_TYPE_UINT (the GValue transformation
functions are registered at type system initialization time) so
switching between a GParamSpecUchar and a GParamSpecUint should not be
an ABI break.
I have tested a simple program using the opacity property before and
after the change and I cannot see any run-time warnings related to this
issue.
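A minimal sketch of the kind of property definition described above,
assuming it lives in a GObject class_init function (the enum value,
default and blurb are illustrative, not the exact Clutter source):
#include <glib-object.h>

/* Hedged sketch, not the actual Clutter source: install :opacity as a
 * uint property clamped to [0, 255]; GObject's built-in GValue transform
 * handles UCHAR <-> UINT conversions for language bindings. */
enum { PROP_0, PROP_OPACITY };

static void
example_actor_class_init (GObjectClass *object_class)
{
  g_object_class_install_property (object_class,
                                   PROP_OPACITY,
                                   g_param_spec_uint ("opacity",
                                                      "Opacity",
                                                      "Opacity of an actor",
                                                      0, 255, /* same interval as the uchar spec */
                                                      255,    /* assumed default: fully opaque */
                                                      G_PARAM_READWRITE));
}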
Be more drastic if the internal state is broken, and assert() if the
expected Alpha and Timeline instances we need are not valid. This
usually implies a library bug or a massive heap corruption.
The Animation code does transformation of values between type A and A'
after checking for compatibility using g_value_type_compatible(). This
is incorrect: compatibility means that the two types can be copied. The
correct conversion should follow:
  if (compatible (type (A), type (A')))
    copy (A, A');
  else
    if (transformable (type (A), type (A')))
      transform (A, A');
    else
      error ("Unable to transform type A in A'");
The transformation might still fail, so we need to check for errors
there as well as a fall-through case.
We should not just check for compatibility, but also for the ability to
transform a GValue of type A into another of type A'.
Usually compatibility is enough, especially if types can be
introspected beforehand; sometimes, though, we also need to check for
transformability as a type can provide the transformation functions
necessary for the operation.
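A minimal sketch of the check-then-transform logic described above,
written with plain GLib calls (the function and variable names are
illustrative, not the Animation code itself):
#include <glib-object.h>

/* Hedged sketch: 'src' and 'dst' are initialized GValues of type A and A'. */
static gboolean
copy_or_transform_value (const GValue *src, GValue *dst)
{
  if (g_value_type_compatible (G_VALUE_TYPE (src), G_VALUE_TYPE (dst)))
    {
      /* compatible means the value can be copied straight across */
      g_value_copy (src, dst);
      return TRUE;
    }

  if (g_value_type_transformable (G_VALUE_TYPE (src), G_VALUE_TYPE (dst)))
    {
      /* the transformation itself can still fail, so check the result */
      return g_value_transform (src, dst);
    }

  g_warning ("Unable to transform a value of type '%s' into '%s'",
             G_VALUE_TYPE_NAME (src), G_VALUE_TYPE_NAME (dst));
  return FALSE;
}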
The commit 1c69c61745 which improved the
protection against timeline removals during the master clock advancement
was only doing half the job - and actually broke the chaining of
animations inside the ::completed signal.
We cannot simply take a reference on the timelines and still use the list
held by the master clock because the do_tick() might result in the
creation of a new timeline, which gets added at the end of the list with
no reference increase and thus gets disposed at the end of the iteration.
We also cannot steal the master clock timelines list because a timeline
might be removed as the direct result of do_tick() and remove_timeline()
would not find the timeline, failing and leaving a dangling pointer
behind.
For this reason we copy the list of timelines out of the one that the
Master Clock holds, take a reference on each timeline, advance them all,
release the reference and free the list.
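A sketch of the copy-ref-advance-unref pattern described above, assuming
the master clock keeps its timelines in a GSList; advance_one() is a
hypothetical stand-in for the clock's private tick call:
#include <clutter/clutter.h>

/* Hypothetical stand-in for the master clock's private do_tick() call. */
static void
advance_one (gpointer timeline)
{
  /* advance the ClutterTimeline to the current tick here */
}

/* Hedged sketch: snapshot the list, hold a reference on each timeline
 * while advancing, then drop the references and free the snapshot. */
static void
advance_timelines (GSList *master_timelines)
{
  GSList *copy = NULL, *l;

  /* copy the list and take a reference on every timeline */
  for (l = master_timelines; l != NULL; l = l->next)
    copy = g_slist_prepend (copy, g_object_ref (l->data));

  /* additions or removals during advancement cannot invalidate 'copy' */
  for (l = copy; l != NULL; l = l->next)
    advance_one (l->data);

  /* release the references and free the snapshot */
  g_slist_foreach (copy, (GFunc) g_object_unref, NULL);
  g_slist_free (copy);
}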
The extension keyboard support in XInput 1.x is hopelessly broken.
Nevertheless, it's possible to use some bits of it, as we prefer the
core keyboard events to the XInput events, thus at least having proper
handling for X11 key events on the Stage window.
The XI 1.0 layer is complementary to the X11 core devices handling; this
means that core events will still be emitted for the core pointer and
keyboard devices, and that secondary (floating) devices should be
handled on top of that.
Thus, the XI event handling code should be executed (if explicitly
compiled in and enabled) if the core device events have not been parsed.
Note: this is going away with XI2, which completely replaces both core and
XI1 events.
Even with XInput support we should always register core devices. This
allows us to handle enter and leave events correctly on the Stage and
to have a working XInput 1.x support in Clutter.
Mostly lifted from the core pointer and keyboard X11 backend support.
The win32 backend registers two devices (a core pointer and a core
keyboard) and assigns them to the event structure when doing the
translation from native events to Clutter events.
Thanks to: Samuel Degrande <Samuel.Degrande@lifl.fr> for testing this
patch.
Instead of overloading the device id of 0 and 1 we should treat the core
devices as special, and have a pointer inside the X11 backend singleton
structure, for fast access.
When an InputDevice leaves a stage we set the stage member of
InputDevice to NULL. We should also unset the cursor_actor (as the
device is obviously not on an actor any more).
When the device re-enters the Stage the ENTER/LEAVE event generation
machinery will then be able to emit the ENTER event on the Stage.
If the user presses a button on a pointer device and then moves out of the
Stage, X11 will emit the following events:
LeaveNotify ➔ MotionNotify ... ➔ ButtonRelease ➔ LeaveNotify
The second LeaveNotify differs from the first by the state field.
Unfortunately, ClutterCrossingEvent doesn't have a modifier_state field
like other events, so we cannot provide a way for programmatically
distinguishing them from a Clutter perspective. This is also an X11-ism
we might not even want to replicate on every backend with sane
enter/leave semantics.
For this reason we should check inside the X11 event processing if the
pointer device has already left the Stage and ignore the second
LeaveNotify.
The Stage field of an InputDevice is set by the backend, whenever the
pointer enters or leaves the Stage. The Stage should not overwrite the
stage field for every event it processes.
The previous state for the device is used by the click count machinery
and we should not be overwriting it at every event; instead, we should
use a parallel storage for the current state coming from the windowing
system.
The device manager does not need to update the state of the devices
when the user has disabled the delivery of motion events to actors:
the events will always be delivered as they are to the stage.
The LEAVE/ENTER event pairs should be queued during the InputDevice
update process, when we change the actor under the device pointer.
This commit cleans up the event emission code inside clutter-main.c
and the logic of the event processing.
The InputDevice objects stores pointer coordinates, state, stage and
the actor under the cursor, so if the current backend provides us with
one attached to the Event structure then we want the InputDevice itself
to update its state and give us the ClutterActor underneath the
pointer's cursor.
Even when we are not using XInput we now have fallback devices; the
X11 backend should always assign the default devices when translating
the X events to Clutter events.
Use the device manager to store the input devices. Also, provide
two fallback devices when initializing the X11 backend: device 0
for the pointer and device 1 for the keyboard.
Previously the atlas textures were being created with whatever format
the first sub texture is in. Only three formats are supported so this
only matters if the first texture is a premultiplied alpha
texture. Instead it now masks out the premultiplied bit so that the
textures are always either RGB_888 or RGBA_8888.
The win32 backend now handles the WM_SETCURSOR message and sets a
fully transparent cursor if the cursor-visible property has been
cleared on the stage. The icon is stored in the library via a resource
file. The instance handle for the DLL is needed to load the resource
so there is now a DllMain function to grab the handle.
g_list_foreach has better protection against the current node being
removed. This will happen for example if someone calls
clutter_container_foreach(container, clutter_actor_destroy). This was
causing valgrind errors for the conformance tests which do just that.
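For reference, the kind of call that exercises this pattern (the 'group'
variable is illustrative; the API calls are real Clutter ones):
/* Destroy every child of a container; each callback invocation removes
 * the current node from the children list, which is why the iteration
 * has to be robust against node removal. */
clutter_container_foreach (CLUTTER_CONTAINER (group),
                           CLUTTER_CALLBACK (clutter_actor_destroy),
                           NULL);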
When uploading texture data it was just calling cogl_texture_set_data
on the large texture. This would attempt to convert the data to the
format of the large texture. All of the textures with alpha channels
are stored together regardless of whether they are premultiplied so
this was causing premultiplied textures to be unpremultiplied
again. It now just uploads the data ignoring the premult bit of the
format so that it only gets converted once.
With the atlas texture backend, ensuring the mipmaps can make the texture
become a completely different texture which will have different texture
coordinates or may even be sliced. Therefore we need to ensure the
mipmaps before deciding which quads to log in the journal. This adds a
new private function to cogl-material which ensures the mipmaps if
needed.
The sub texture backend doesn't work well as a completely general
texture backend because for example when rendering with cogl_polygon
it needs to be able to transform arbitrary texture coordinates without
reference to the other coordinates. This can't be done when the texture
coordinates are a multiple of one because sometimes the coordinate
should represent the left or top edge and sometimes it should
represent the right or bottom edge. For example if the s coordinates are
0 and 1 then 1 represents the right edge but if they are 1 and 2 then
1 represents the left edge.
Instead the sub-textures are now documented not to support coordinates
outside the range [0,1]. The coordinates for the sub-region are now
represented as integers as this helps avoid rounding issues. The
region can no longer be a super-region of the texture as this
simplifies the code quite a lot.
There are two new texture virtual functions:
transform_quad_coords_to_gl - This transforms two pairs of coordinates
representing a quad. It will return FALSE if the coordinates can
not be transformed. The sub texture backend uses this to detect
coordinates that require repeating which causes cogl-primitives
to use manual repeating.
ensure_non_quad_rendering - This is used in cogl_polygon and
cogl_vertex_buffer to inform the texture backend that
transform_quad_coords_to_gl is going to be used. The atlas backend
migrates the texture out of the atlas when it hits this.
When calculating the next integer position for negative coordinates it
would not increment if the position is already a multiple of one so we
need to manually add one.
When try_creating_fbo fails it returns 0 to report the error and if it
succeeds it returns ‘flags’. However cogl_offscreen_new_to_texture
also passes in 0 for the flags as the last fallback to create the fbo
with nothing but the color buffer. In that case it will return 0
regardless of whether it succeeded so the last fallback will always be
considered a failure.
To fix this it now just returns a gboolean to indicate whether it
succeeded and the flags used for each attempt is assigned when passing
the argument rather than from the return value of the function.
Also if the only configuration that succeeded was with flags==0 then
it would always try all combinations because last_working_flags would
also be zero. To avoid this it now uses a separate gboolean to mark
whether we found a successful set of flags.
http://bugzilla.openedhand.com/show_bug.cgi?id=1873
Use the newly-added profiling timers inside the master clock dispatch
function to see how much time we spend:
• in the whole function
• in the event processing for each stage
• in the timeline advancement
Reading back the texture data in unrealize does not seem like a
desirable feature any more; Clutter has evolved a lot since it was
implemented.
What's wrong with it now:
* It takes *a lot* of time to read the data back with glReadPixel(),
* When several textures share the same CoglTexture, the same data can
be read back multiple times,
* If the underlying material uses multiple texture units, only the
first one was copied back,
* In ClutterCairoTexture, we end up having two separate copies of the
data,
* GL actually manages texture memory across system/video memory
for us!
For all the reasons above, let's get rid of the glReadPixel() in
Texture::unrealize()
Fixes: OHB#1842
Since 755cce33a7 the framebuffer code is using the GL enums
GL_DEPTH_ATTACHMENT and GL_DEPTH_COMPONENT16. These aren't available
directly under GLES except with the OES suffix so we need to define
them manually as we do with the other framebuffer constants.
These macros used to define Cogl wrappers for the GLenum values. There are
now Cogl enums everywhere in the API where these were required so we
shouldn't need them anymore. They were in the public headers but as
they are not necessary and were not in the API docs for Clutter 1.0
it should be safe to remove them.
Using the ::event signal to match the CLUTTER_DELETE event type (and
block the stage destruction) can be costly, since it means checking
every single event.
The ::delete-event signal is similar in spirit to any other specialized
signal handler dealing with events, and retains the same semantics.
If a user supplied multiple groups of texture coordinates with
cogl_rectangle_with_multitexture_coords() then we would repeatedly log only
the first group in the journal. This fixes that bug and adds a conformance
test to verify the fix.
Thanks to Gord Allott for reporting this bug.
The Intel drivers in Mesa 7.6 (and possibly earlier versions) don't
support creating FBOs with a stencil buffer but without a depth
buffer. This reworks framebuffer allocation so that we try a number
of fallback options before failing.
The options we try in order are:
- the same options that were successful last time if available
- combined depth and stencil
- separate depth and stencil
- just stencil, no depth
- just depth, no stencil
- neither depth nor stencil
Allow the user of the ClutterMedia interface to specify a Pango font
description to display subtitles. Even if the underlying implementation
of the interface does not natively use Pango, it must be capable of
parsing the grammar that pango_font_description_from_string() accepts.
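A usage sketch, assuming the setter added by this change is named
clutter_media_set_subtitle_font_name() and 'video_texture' is an existing
ClutterMedia implementation (both names are assumptions here):
/* Hedged sketch: the font string follows the grammar accepted by
 * pango_font_description_from_string(). */
clutter_media_set_subtitle_font_name (CLUTTER_MEDIA (video_texture),
                                      "Sans Bold 24");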
Some source files should not be passed through the introspection parser,
as they are fully private and do not expose any valuable API.
Also the clutter-profile.h header is private and should not be
installed.
We weren't taking a reference on the texture to be used as the color buffer
for offscreen rendering, so it was possible to free the texture leaving the
framebuffer in an inconsistent state.
This gives Cogl a dedicated UProf context which will be linked together
with Clutter's context during clutter_init_real().
Initial timers cover _cogl_journal_flush and _cogl_journal_log_quad
You can explicitly ask for a report of Cogl statistics by exporting
COGL_PROFILE_OUTPUT_REPORT=1 but since the context is linked with Clutter's
the statistics will also be shown in the automatic Clutter reports.
This suspends and resumes all uprof timers and counters except while dealing
with picking, so as to give more focused statistics.
Be aware that there are still some issues with this profile option since
there are a few special case counters and timers that shouldn't be
suspended; notably the frame counters are incorrect so the per frame stats
can't be trusted.
As we have for debugging, this adds the ability to control profiling flags
either via the command line or an environment variable.
The first option added is CLUTTER_PROFILE=disable-report
This also changes the reporting to be opt-out so you don't need to export
CLUTTER_PROFILE_OUTPUT_REPORT=1 to see a report but you can use
CLUTTER_PROFILE=disable-report to disable it if desired.
UProf is a small library that aims to help applications/libraries provide
domain specific reports about performance. It currently provides high
precision timer primitives (rdtsc on x86) and simple counters, the ability
to link statistics between optional components at runtime and makes report
generation easy.
This adds initial accounting for:
- Total mainloop time
- Painting
- Picking
- Layouting
- Idle time
The timing done by uprof is of wall clock time. It's not based on stochastic
samples; we simply sample a counter at the start and end. When dealing with
the complexities of GPU drivers and with various kinds of IO this form of
profiling can be quite enlightening, as it will be able to represent where
your application is blocking, unlike tools such as sysprof.
To enable uprof accounting you must configure Clutter with --enable-profile
and have uprof-0.2 installed from git://git.moblin.org/uprof
If you want to see a report of statistics when Clutter applications exit you
should export CLUTTER_PROFILE_OUTPUT_REPORT=1 before running them.
Just a final word of caution; this stuff is new and the manual nature of
adding uprof instrumentation means it is prone to some errors when modifying
code. This just means that when you question strange results don't rule out
a mistake in the instrumentation. Obviously though we hope the benefits
outweigh the costs, e.g. by focusing on very key stats and by having
automatic reporting.
Since asking for ARGB by default is still somewhat experimental on X11,
and some toolkits or complex widgets (like WebKit) do not like dealing
with ARGB visuals, we should switch back to RGB by default - now that at
least we know it works.
For applications (and toolkit integration libraries) that want to enable
the ClutterStage:use-alpha property there is a new function:
void clutter_x11_set_use_argb_visual (gboolean use_argb);
which needs to be called before clutter_init().
The CLUTTER_DISABLE_ARGB_VISUAL environment variable can still be used
to force this value off at run-time.
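A minimal usage sketch; the only requirement stated above is that the
call comes before clutter_init():
#include <clutter/clutter.h>
#include <clutter/x11/clutter-x11.h>

int
main (int argc, char *argv[])
{
  /* opt back in to ARGB visuals; must be called before clutter_init() */
  clutter_x11_set_use_argb_visual (TRUE);

  if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
    return 1;

  /* ... create a stage with :use-alpha set to TRUE ... */

  return 0;
}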
Currently, ClutterActor detects a relayout cycle (an actor causing a
relayout to be queued from within an allocate() function) and aborts
after printing out a warning. This might be a little bit too anal
retentive, and it currently breaks GTK+ embedding inside clutter-gtk
so we should probably relax the behaviour a bit. Now we just emit the
warning but we still go ahead with the relayout.
When Clutter tries to pick an ARGB visual it tried to set the
GLX_TRANSPARENT_TYPE attribute of the FBConfig to
GLX_TRANSPARENT_RGB. However the code to do this was broken so that it
was actually trying to set the non-existent attribute number 0x8008
instead. Mesa silently ignored this so it appeared as if it was
working but the Nvidia drivers do not like it.
It appears that the TRANSPARENT_TYPE attribute is not necessary for
getting an ARGB visual anyway and instead it is intended to support
color-key transparency. Therefore we can just remove it and get all of
the FBConfigs. Then if we need an ARGB visual we can just walk the
list to look for one with depth == 32.
The fbconfig is now stored in a single variable instead of having a
separate variable for the rgb and rgba configs because the old code
only ever retrieved one of them anyway.
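A sketch of the kind of walk described above, using plain GLX calls with
error handling trimmed (this is not the actual backend code):
#include <GL/glx.h>
#include <glib.h>

/* Hedged sketch: walk every FBConfig on the screen and prefer one whose
 * X visual has depth == 32 when an ARGB visual is wanted. */
static GLXFBConfig
find_fbconfig (Display *xdisplay, int screen, gboolean want_argb)
{
  int n_configs = 0;
  GLXFBConfig *configs = glXGetFBConfigs (xdisplay, screen, &n_configs);
  GLXFBConfig retval;
  int i;

  if (configs == NULL || n_configs == 0)
    return NULL;

  retval = configs[0];  /* plain RGB fallback */

  for (i = 0; want_argb && i < n_configs; i++)
    {
      XVisualInfo *vinfo = glXGetVisualFromFBConfig (xdisplay, configs[i]);

      if (vinfo != NULL)
        {
          gboolean is_argb = (vinfo->depth == 32);

          XFree (vinfo);

          if (is_argb)
            {
              retval = configs[i];
              break;
            }
        }
    }

  XFree (configs);
  return retval;
}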
Previously when the markup property is set it would generate an
attribute list from the markup and then merge it with the attributes
from the attribute property and store it as the effective
attributes. The markup attributes and the marked up text would then be
forgotten. This breaks if the application then later changes the
attributes property because it would try to regenerate the effective
attributes from the markup text but the stored text no longer contains
any markup. If the original markup text happened to contain entities
like '&lt;' they would end up causing parse errors because they would
be converted to the actual symbols.
To fix this the attributes from the markup are now stored
independently from the effective attributes. The effective attributes
are now regenerated if either set of attributes changes right before a
layout is created.
http://bugzilla.openedhand.com/show_bug.cgi?id=1940
Destroy the dummy XImage we create even on success.
http://bugzilla.openedhand.com/show_bug.cgi?id=1918
Based on a patch by: Carlos Martín Nieto <carlos@cmartin.tk>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
PropertyInfo should store a copy of the JsonNodes it references, so
that property_info_free() can safely dispose them, and we can reference
values across different UI definition data.
The implicit timeline parsing code is not copying the JsonNode; this
leads to a double free in some cases, which is masked by the GSlice
allocator and produces a heap corruption later on.
Allow the user of the ClutterMedia interface to specify an external (as
in not multiplexed with the audio/video streams) location of a subtitle
stream.
Both the ::insert-text and ::delete-text signals are "action" signals, that
is, signals that are safe to (and should) be emitted using g_signal_emit()
directly.
A timeline advancement might cause another timeline to be
destroyed, which will likely lead to a segmentation fault.
Before advancing the timelines we should take a reference
on them - just like we do for the stages before doing
event processing. This will prevent dispose() from running
until the end of the advancement.
http://bugzilla.openedhand.com/show_bug.cgi?id=1854
Apparently, calling g_set_prgname() multiple times is not allowed
anymore, and hence clutter_init_* calls should not do that. Though this
is really GLib's fault - and a massive nuisance for us - we should
probably comply to avoid the test suite dying on us.
* animate-layout-manager:
layout-manager: Document the animation support
layout-manager: Rewind the timeline in begin_animation()
box-layout: Remove the allocations hash table
docs: Clean up the README file
layout: Let begin_animation() return the Alpha
box-layout: Add knobs for controlling animations
box-layout: Animate layout properties
layout: Add animation support to LayoutManager
Add ActorBox animation methods
Add a section inside the LayoutManager class API reference documenting,
with examples, how to implement animation support inside a layout
manager sub-class.
If the default implementation begin_animation() is called twice then we
should rewind the timeline, as well as updating its duration and the
easing mode of the alpha.
The BoxLayout uses a HashTable to map the latest stable allocation of
each child, in order to use that as the initial value during an
animation; this in spite of already having a perfectly valid per-child
storage as part of the layout manager: ClutterBoxChild.
The last stable allocation should be stored inside the ClutterBoxChild
instead of having it in the private data for ClutterBoxLayout. The
access remains O(1), since there is a 1:1 mapping between child and
BoxChild instances, but we save a little bit of memory and we avoid
keeping around allocations for old children.
* stage-use-alpha:
tests: Use accessor methods for :use-alpha
stage: Add accessors for :use-alpha
tests: Allow setting the stage opacity in test-paint-wrapper
stage: Premultiply the stage color
stage: Composite the opacity with the alpha channel
glx: Always request an ARGB visual
stage: Add :use-alpha property
materials: Get the right blend function for alpha
ClutterActor checks, when destroying and reparenting, if the parent
actor implements the Container interface, and automatically calls the
remove() method to perform a clean removal.
Actors implementing Container, though, might have internal children;
that is, children that are not added through the Container API. It is
already possible to iterate through them using the Container API to
avoid breaking invariants - but calling clutter_actor_destroy() on
these children (even from the Container implementation, and thus outside
of Clutter's control) will either lead to leaks or to segmentation
faults.
Clutter needs a way to distinguish a clutter_actor_set_parent() done on
an internal child from one done on a "public" child; for this reason, a
push/pop pair of functions should be available to Actor implementations
to mark the section where they wish to add internal children:
➔ clutter_actor_push_internal ();
...
clutter_actor_set_parent (child1, parent);
clutter_actor_set_parent (child2, parent);
...
➔ clutter_actor_pop_internal ();
The set_parent() call will automatically set the newly added
INTERNAL_CHILD private flag on each child, and both
clutter_actor_destroy() and clutter_actor_unparent() will check for the
flag before deciding whether to call the Container's remove method.
When beginning a new animation for a LayoutManager, the implementation
should return the ClutterAlpha used. This allows controlling the
timeline and/or modifying the animation parameters on the fly.
ClutterLayoutManager does not have any state associated with it, and
defers all the state to its sub-classes.
The BoxLayout is thus in charge of controlling:
• whether or not animations should be used
• the duration of the animation
• the easing mode of the animation
By adding three new properties:
• ClutterBoxLayout:use-animations
• ClutterBoxLayout:easing-duration
• ClutterBoxLayout:easing-mode
And their relative accessor pairs, we can make BoxLayout decide whether
or not, and with which parameters, to call the begin_animation() method of
ClutterLayoutManager.
The test-box-layout has been modified to reflect this new functionality,
by checking the key-press event for the 'a' key symbol to toggle the use
of animations.
Use the newly added animation support inside LayoutManager to animate
between state changes of the BoxLayout properties.
The implementation is based on equivalent code from Mx, written by:
Thomas Wood <thomas.wood@intel.com>
In order to animate a fluid layout we cannot use the common animation
code paths as they will override the size request and allocation paths
that are handled by the layout manager itself.
One way to introduce animations in the allocation sequence is to use a
Timeline and an Alpha to compute a progress value and then use that
value to interpolate an ActorBox between the initial and final states of
the animation - with the initial state being the last allocation of the
child prior to the animation start, and the final state the allocation
of the child at the end; for every frame of the Timeline we then queue a
relayout on the layout manager's container, which will result in an
animation.
ClutterLayoutManager is the most likely place to add a generic API for
beginning and ending an animation, as well as the place to provide a
default code path to create the ancillary Timeline and Alpha instances
needed to drive the animation.
A LayoutManager sub-class will need to:
• call clutter_layout_manager_begin_animation() whenever it should
animate between two states, for instance: whenever a layout property
changes value;
• eventually override begin_animation() and end_animation() in case
further state needs to be set up, and then chain up to the default
implementation provided by LayoutManager;
• if a completely different implementation is required, the layout
manager sub-class should override begin_animation(), end_animation()
and get_animation_progress().
Inside the allocate() implementation the sub-class should also
interpolate between the last known allocation of a child and the newly
computed allocation.
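As a sketch, a layout property setter could kick off the transition with
something like the following; the duration, easing mode and function name
are illustrative, and the begin_animation() signature is assumed from the
description above:
#include <clutter/clutter.h>

/* Hedged sketch: start (or restart) the managed animation whenever a
 * layout property changes; the returned Alpha drives the interpolation. */
static void
my_layout_set_spacing (ClutterLayoutManager *manager, guint spacing)
{
  /* ... store the new spacing in the layout manager's own state ... */

  clutter_layout_manager_begin_animation (manager,
                                          250,  /* duration, in ms */
                                          CLUTTER_EASE_OUT_CUBIC);
}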
ClutterActorBox should have an interpolate() method that allows computing
the intermediate values between two states, given a progress
value, e.g.:
clutter_actor_box_interpolate (start, end, alpha, &result);
Another utility method, useful for layout managers, is a modifier
that clamps the members of the actor box to the nearest integer
value.
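A short usage sketch combining the two utilities, assuming the clamp
helper is exposed as clutter_actor_box_clamp_to_pixel() (the surrounding
function and parameter names are illustrative):
#include <clutter/clutter.h>

/* Hedged sketch: blend between the start and end allocations of a child
 * using the animation progress, then snap the result to whole pixels. */
static void
allocate_animated_child (ClutterActor           *child,
                         const ClutterActorBox  *start,
                         const ClutterActorBox  *end,
                         ClutterAlpha           *alpha,
                         ClutterAllocationFlags  flags)
{
  ClutterActorBox result;
  gdouble progress = clutter_alpha_get_alpha (alpha);

  clutter_actor_box_interpolate (start, end, progress, &result);
  clutter_actor_box_clamp_to_pixel (&result);

  clutter_actor_allocate (child, &result, flags);
}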
When getting signals from higher level toolkits, occasionally
one wants access to the underlying event; say for a Button
widget's "clicked" signal, to get the keyboard state.
Rather than having all of the high-level widgets emit
ClutterEvent just for the more unusual use cases,
add a global function to access the event state.
http://bugzilla.openedhand.com/show_bug.cgi?id=1888
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
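A sketch of how a toolkit's "clicked" handler might use such a function,
assuming it is exposed as clutter_get_current_event_state() and returns a
ClutterModifierType (both the name and the signature are assumptions):
#include <clutter/clutter.h>

/* Hedged sketch: check whether Shift was held down when the high-level
 * "clicked" signal was emitted, without plumbing the ClutterEvent through. */
static void
on_button_clicked (gpointer button, gpointer user_data)
{
  ClutterModifierType state = clutter_get_current_event_state ();

  if (state & CLUTTER_SHIFT_MASK)
    g_print ("shift-click\n");
}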
Old-style X11 terminals require that even modern X11 send KeyPress
and KeyRelease pairs when auto-repeating. For this reason modern(-ish)
API like XKB has a way to detect auto-repeat and do a single KeyRelease
at the end of a KeyPress sequence.
The newly added check emulates XKB's detectable auto-repeat by peeking
the next event after a KeyRelease and checking if it's a KeyPress for
the same key and timestamp - and then ignoring the KeyRelease if it
matches.
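A sketch of the peek-ahead check using plain Xlib calls, simplified from
what the backend would actually do:
#include <X11/Xlib.h>
#include <glib.h>

/* Hedged sketch: treat a KeyRelease as auto-repeat if the next queued
 * event is a KeyPress with the same keycode and timestamp. */
static gboolean
key_release_is_autorepeat (Display *xdisplay, const XEvent *xevent)
{
  XEvent next;

  if (xevent->type != KeyRelease || XPending (xdisplay) == 0)
    return FALSE;

  XPeekEvent (xdisplay, &next);

  return next.type == KeyPress &&
         next.xkey.keycode == xevent->xkey.keycode &&
         next.xkey.time == xevent->xkey.time;
}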
If a Stage has been set to use a foreign Window then Clutter should not
be managing it; calling XWithdrawWindow and XMapWindow should be
reserved to the windows we manage ourselves.
Some actor implementation might avoid imposing any layout on their
children. The Actor base class usually assumes some sort of layout
management is in place, so it will queue relayouts when, for instance,
an actor is shown or is hidden. If the parent of the actor does not
impose any layout, though, showing or hiding one of its children will
not affect the layout of the others.
An example of this kind of container is ClutterGroup.
By adding a new Actor flag, CLUTTER_ACTOR_NO_LAYOUT, and by making
the Group actor set it on itself, the Actor base class can now decide
whether or not to queue a relayout. The flag is not meant to be used
by application code, and should only be set when implementing a new
container.
http://bugzilla.openedhand.com/show_bug.cgi?id=1838
When the texture is in the atlas, ensuring the mipmaps can effectively
make it become a completely different texture so we should do this
before getting the GL handle.
Mipmaps don't work very well in the current atlas because there is not
enough padding between the textures. If ensure_mipmaps is called it
will now create a new texture and migrate the atlased texture to
it. It will use the same blit mechanism as when migrating so it will
try to use an FBO for a fast blit. However if this is not possible it
will end up downloading the data for the entire atlas which is not
ideal.
When reorganizing the textures, we can avoid downloading the entire
texture data if we bind the source texture in a framebuffer object and
copy the destination using glCopyTexSubImage2D. This is also
implemented using a much faster path in Mesa.
Currently it is calling the GL framebuffer API directly but ideally it
would use the Cogl offscreen API. However there is no way to tell Cogl
not to create a stencil renderbuffer which seems like a waste in this
situation.
If FBOs are not available it will fallback to reading back the entire
texture data as before.
This adds a 'dump-atlas-image' debug category. When enabled, CoglAtlas
will use Cairo to create a png which visualizes the leaf rectangles of
the atlas.
This adds an 'atlas' category to the COGL_DEBUG environment
variable. When enabled Cogl will display messages when textures are
added to the atlas and when the atlas is reorganized.
When space can't be found in the atlas for a new texture it will now
try to reorganize the atlas to make space. A new CoglAtlas is created
and all of the textures are re-added in decreasing size order. If the
textures still don't fit then the size of the atlas is doubled until
either we find a space or we reach the texture size limits. If we
successfully find an organization that fits then all of the textures
will be migrated to a new texture. This involves copying the texture
data into CPU memory and then uploading it again. Potentially it could
eventually use a PBO or an FBO to transfer the image without going
through the CPU.
The algorithm for laying out the textures works a lot better if the
rectangles are added in order so we might eventually want some API for
creating multiple textures in one go to avoid reorganizing the atlas
as far as possible.
This adds a CoglAtlas type which is a data structure that keeps track
of unused sub rectangles of a larger rectangle. There is a new atlased
texture backend which uses this to put multiple textures into a single
larger texture.
Currently the atlas is always sized 256x256 and the textures are never
moved once they are put in. Eventually it needs to be able to
reorganise the atlas and grow it if necessary. It also needs to
migrate the textures out of the atlas if mipmaps are required.
clutter_actor_get_preferred_width/height currently caches only one size
request for a given height/width.
It's common for a layout manager to call get_preferred_width with two
different heights during the same allocation cycle: typically once in
the size request, once in the allocation. If
clutter_actor_get_preferred_width is called alternately with two
different for_height values, the cache is totally inefficient, and we
end up always querying the actor size even when the actor does not need
a re-allocation.
http://bugzilla.openedhand.com/show_bug.cgi?id=1876
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Fix a copy-and-paste thinko where the cell size was computed using the
minimum size instead of the natural size. For actors with a minimum size
of zero, like Textures, this always implied a zero allocation.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This is an optimised version of CoglTexture2DSliced that always deals
with a single texture and always uses the GL_TEXTURE_2D
target. cogl_texture_new_from_bitmap now tries to use this backend
first. If it can't create a texture with that size then it falls back to
the sliced backend.
cogl_texture_upload_data_prepare has been split into two functions
because the sliced backend needs to know the real internal format
before the conversion is performed. Otherwise the converted bitmap
will be wasted if the backend can't support the size.
This provides a way to upload the entire data for a texture without
having to first call glTexImage and then glTexSubImage. This should be
faster especially with indirect rendering where it would needlessly
send the data for the texture twice.
new_from_data and new_from_file can be implemented in terms of
new_from_bitmap so it makes sense to move these to cogl-texture rather
than having to implement them in every texture backend.
This adds a new texture backend which represents a sub texture of a
larger texture. The texture is created with a reference to the full
texture and a set of coordinates describing the region. The backend
simply defers to the full texture for all operations and maps the
coordinates to the other range. You can also use coordinates outside
the range [0,1] to create a repeated version of the full texture.
A new public API function called cogl_texture_new_from_sub_texture is
available to create the sub texture.
The CoglTextureSliceCallback function pointer now takes const pointers
for the texture coordinates. This makes it clearer that the callback
should not modify the array and therefore the backend can use the same
array for both sets of coords.
Given a region of texture coordinates this utility invokes a callback
enough times to cover the region with a subregion that spans the
texture at most once. E.g. if called with tx1 and tx2 as 0.5 and 3.0 it
would invoke the callback with:
0.5,1.0 1.0,2.0 2.0,3.0
Manual repeating is needed by all texture backends regardless of
whether they can support hardware repeating because when Cogl calls
the foreach_sub_texture_in_region method then it sets the wrap mode to
GL_CLAMP_TO_EDGE and no hardware repeating is possible.
In _cogl_multitexture_quad_single_primitive we use a wrap mode of
GL_CLAMP_TO_EDGE if the texture coordinates are all in the range [0,1]
or GL_REPEAT otherwise. This is to avoid pulling in pixels from either
side when using GL_LINEAR filter mode and rendering the entire
texture. Previously it was checking using the unconverted texture
coordinates. This is ok unless the texture backend is radically
transforming the texture coordinates, such as in the sub texture
backend where the coordinates may map to something completely
different. We now check whether the coordinates are in range after
converting them.
Most of the fields that were previously in CoglTexture are specific to
the implementation of CoglTexture2DSliced so they should be placed
there instead. For example, the 'mipmaps_dirty' flag is an
implementation detail of the ensure_mipmaps function so it doesn't
make sense to force all texture backends to have this function.
Other fields such as width, height, gl_format and format may make
sense for all textures but I've added them as virtual functions
instead. This may make more sense for a sub-texture backend for
example where it can calculate these based on the full texture.
The CoglTexture struct previously contained some fields which are only
used to upload data such as the CoglBitmap and the source GL
format. These are now moved to a separate CoglTextureUploadData struct
which only exists for the duration of one of the cogl_texture_*_new
functions. In cogl-texture there are utility functions which operate
on this new struct rather than on CoglTexture directly.
Some of the fields that were previously stored in the CoglBitmap
struct are now copied to the CoglTexture such as the width, height,
format and internal GL format.
The rowstride was previously stored in CoglTexture and this was
publicly accessible with the cogl_texture_get_rowstride
function. However this doesn't seem to be a useful function because
there is no need to use the same rowstride again when uploading or
downloading new data. Instead cogl_texture_get_rowstride now just
calculates a suitable rowstride from the format and width of the
texture.
Commit 558b17ee1e added support for rectangle textures to the
framebuffer code. Under GLES there is no GL_TEXTURE_RECTANGLE_ARB
definition so this was breaking the build. The rest of Cogl uses
ifdef's around that constant so we should do the same here.
• The debug flags are pre-processor ones, so they should be listed
inside AM_CPPFLAGS.
• Clutter's publicly exported symbols match the following regular
expression:
^(clutter|cogl|json)_*
The old one also listed "pango" as a possible prefix, but the
Pango API is now under the Cogl namespace.
We need to add the row-spacing value when calculating the y position for lines
of actors in horizontal flowing layouts.
Similarly we need to add the col-spacing value when calculating the x position
for actors in vertical flowing layouts.
When requesting the GLXFBConfig for creating the GLX context, we should
always request one that links to an ARGB visual instead of a plain RGB
one.
By using an ARGB visual we allow the ClutterStage:use-alpha property to
work as intended when running Clutter under a compositing manager.
The default behaviour of requesting an ARGB visual can be disabled by
using the CLUTTER_DISABLE_ARGB_VISUAL environment variable.
The ClutterStage:use-alpha property is used to let a stage know that it
should honour the alpha component of the ClutterStage:color property.
If :use-alpha is set to FALSE the stage always uses the full opacity
when clearing itself before a paint(); otherwise, the alpha value is
used.
The correct blend function for the alpha channel is:
GL_ONE, GL_ONE_MINUS_SRC_ALPHA
As per bug 1406. This fix was dropped when the switch to premultiplied
alpha was merged.
* text-direction:
docs: Add text-direction accessors
Set the default language on the Pango context
actor: Set text direction on parenting
tests: Display the index inside text-box-layout
box-layout: Honour :text-direction
text: Dirty layout cache on text direction changes
actor: Add :text-direction property
Use the newly added ClutterTextDirection enumeration
Add ClutterTextDirection enumeration
The ClutterLayoutMeta instances should be created on demand, whenever
the layout manager needs them - if the layout manager supports layout
properties.
This removes the requirement to call add_child_meta() and
remove_child_meta() on add and remove respectively; it also simplifies
the implementation of LayoutManager sub-classes since we can add
fallback code in the base abstract class.
Eventually, this will also lead to an easier to implement ClutterScript
parser for layout properties.
With the new scheme, the ClutterLayoutMeta instance is created whenever
the layout manager tries to access it; if there isn't an instance
already attached to the container's child, one is created -- assuming
that the LayoutManager sub-class has overridden the
get_child_meta_type() virtual function and it's returning a valid GType.
We can also provide a default implementation for create_child_meta(),
by getting the GType and instantiating a ClutterLayoutMeta with all the
fields already set. If the layout manager requires more work then it can
obviously override the default implementation (and even chain up to it).
The ClutterBox actor has been updated, as well as the ClutterBoxLayout
layout manager, to take advantage of the changes of LayoutManager.
The colour test for the stage in _clutter_do_pick checks for white to
determine whether the stage was picked but since 47db7af4d we were
setting the colour to black. This usually worked because the id of the
default stage ends up being 0 which equates to black. However if a
second stage is created then it will always end up picking the first
stage.
We currently enable blending if the material colour has
transparency. This patch makes it also enable blending if any of the
lighting colours have transparency. Arguably this isn't necessary
because we don't expose any API to enable lighting so there is no
bug. However it is currently possible to enable lighting with a direct
call to glEnable and this otherwise works so it is a shame not to have
it.
http://bugzilla.openedhand.com/show_bug.cgi?id=1907
The stage's pick id can be written to the framebuffer when we call
cogl_clear so there's no need for the stage to also chain up in its pick
function resulting in clutter-actor.c also emitting a rectangle for the
stage.
cogl_push_draw_buffer, cogl_set_draw_buffer and cogl_pop_draw_buffer are now
deprecated and new code should use the new cogl_framebuffer_* API instead.
Code that previously did:
cogl_push_draw_buffer ();
cogl_set_draw_buffer (COGL_OFFSCREEN_BUFFER, buffer);
/* draw */
cogl_pop_draw_buffer ();
should now be re-written as:
cogl_push_framebuffer (buffer);
/* draw */
cogl_pop_framebuffer ();
As can be seen from the example above the rename has been used as an
opportunity to remove the redundant target argument from
cogl_set_draw_buffer; it now only takes one call to redirect to an offscreen
buffer, and finally the term framebuffer may be a bit more familiar to
anyone coming from an OpenGL background.
Instead of storing an enum with the backend type for each texture and
then using a switch statement to decide which function to call, we
should store pointers to all of the functions in a struct and have
each texture point to that struct. This is potentially slightly faster
when there are more backends and it makes implementing new backends
easier because it's more obvious which functions have to be
implemented.
cogl_offscreen_new_to_texture previously bailed out if the given texture's
GL target was anything but GL_TEXTURE_2D, but it now also allows
foreign GL_TEXTURE_RECTANGLE_ARB textures.
Thanks to Owen for reporting this issue, ref:
https://bugzilla.gnome.org/show_bug.cgi?id=601032
cogl_material_copy can be used to create a new CoglHandle referencing a copy
of some given material.
From now on we will advise that developers always aim to use this function
instead of cogl_material_new() when creating a material that is in any way
derived from another.
By using cogl_material_copy, Cogl can maintain an ancestry for each material
and keep track of "similar" materials. The plan is that Cogl will use this
information to minimize the cost of GPU state transitions.
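A short usage sketch of the pattern being advised above (the wrapper
function and the colour tweak are illustrative):
#include <cogl/cogl.h>

/* Hedged sketch: derive one material from another so Cogl can track
 * their ancestry and minimize state changes when switching between them. */
static CoglHandle
create_translucent_copy (CoglHandle base)
{
  CoglHandle derived = cogl_material_copy (base);

  /* only change what actually differs from the base material */
  cogl_material_set_color4ub (derived, 0xff, 0xff, 0xff, 0x80);

  return derived;
}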
This function was #if 0'd before we released Clutter 1.0 so there's no
implementation of it. At some point we thought it might assist with
developers breaking out into raw OpenGL. Breaking out to raw GL is a
difficult problem though so we decided instead we will wait for a specific
use case to arise before trying to support it.
Actors, unlike objects, can effectively go away whilst being
animated - usually because of an explicit destroy().
The Animation created by clutter_actor_animate() and friends
should keep a weak reference on the actor and eventually
get rid of the animation itself in case the actor goes away
whilst being animated.
_cogl_material_get_layer expects a CoglMaterial* pointer but it was
being called with a CoglHandle. This doesn't matter because the
CoglHandle is actually just the CoglMaterial* pointer anyway but it
breaks the ability to change the _cogl_material_pointer_from_handle
macro.
• Use the same style for the Cogl API reference as the one used for
the Clutter API reference.
• Fix the introspection annotations for cogl_bitmap_get_size_from_file()
The imported Mesa matrix code has some documentation annotations
that make gtk-doc very angry. Since it's all private anyway we
can safely make gtk-doc ignore the offending stuff.
$(COGL_DRIVER)/cogl-defines.h is generated in the configure script so
it ends up in the build directory. Therefore the build rule for
cogl/cogl-defines.h should depend on the file in $(builddir) not
$(srcdir).
The deprecation notices in gtk-doc should also refer to the
release that added the deprecation, and if the deprecated
symbol has been replaced by something else then the new symbol
should be correctly referenced.
The main COGL header cogl.h is currently created at configure time
because it conditionally includes the driver-dependent defines. This
sometimes leads to a stale cogl.h with old definitions which can
break the build until you clean out the whole tree and start from
scratch.
We can generate a stable cogl-defines.h at build time from the
equivalent driver-dependent header and let cogl.h include that
file instead.
_cogl_feature_check expects the array of function names to be
terminated with a NULL pointer but I forgot to add this. This was
causing crashes depending on what happened to be in memory after the
array.
For VBOs, we don't need to check for the extension if the GL version
is greater than 1.5. Non-power-of-two textures are core in GL 2.0.
We could also assume shader support in GL 2.0 except that the function
names are different from those in the extension so it wouldn't work
well with the current mechanism.
Previously if you need to depend on a new GL feature you had to:
- Add typedefs for all of the functions in cogl-defines.h.in
- Add function pointers for each of the functions in
cogl-context-driver.h
- Add an initializer for the function pointers in
cogl-context-driver.c
- Add a check for the extension and all of the functions in
cogl_features_init. If the extension is available under multiple
names then you have to duplicate the checks.
This is quite tedious and error prone. This patch moves all of the
features and their functions into a list of macro invocations in
cogl-feature-functions.h. The macros can be redefined to implement all
of the above tasks from the same header.
The features are described in a struct with a pointer to a table of
functions. A new function takes the feature description from this
struct and checks for its availability. The feature can take a list of
extension names with a list of alternate namespaces (such as "EXT" or
"ARB"). It can also detect the feature from a particular version of
GL.
The typedefs are now gone and instead the function pointer in the Cogl
context just directly contains the type.
Some of the functions in the context were previously declared with the
'ARB' extension. This has been removed so that now all the functions
have no suffix. This makes more sense when the extension could
potentially be merged into GL core as well.
There is a new internal Cogl function called _cogl_check_driver_valid
which looks at the value of the GL_VERSION string to determine whether
the driver is supported. Clutter now calls this after the stage is
realized. If it fails then the stage is marked as unrealized and a
warning is shown.
_cogl_features_init now also checks the version number before getting
the function pointers for glBlendFuncSeparate and
glBlendEquationSeparate. It is not safe to just check for the presence
of the functions because some drivers may define the function without
fully implementing the spec.
The GLES version of _cogl_check_driver_valid just always returns TRUE
because there are no version requirements yet.
Eventually the function could also check for mandatory extensions if
there were any.
http://bugzilla.openedhand.com/show_bug.cgi?id=1875
We cannot process events for a stage that has been destroyed, so we
should make sure that the events for the stage are removed from the
global event queue during dispose.
http://bugzilla.openedhand.com/show_bug.cgi?id=1882
Both ClutterAlpha:mode and ClutterAnimation:mode can be defined using:
• an integer id
• the "nick" field of the AnimationMode GEnumValue
• a custom, tweener-like string
All these methods should be documented.
Like in ClutterAlpha, ClutterAnimation:mode must be overridden when
parsing a Script definition, as we accept both a numeric id and the
string id for easing modes.
When _cogl_add_path_to_stencil_buffer is used to draw a path we don't
need to clear the entire stencil buffer. Instead it can clear just the
bounding box of the path. This adds an extra parameter called
'need_clear' which is only set if the stencil buffer is being used for
clipping.
http://bugzilla.openedhand.com/show_bug.cgi?id=1829
When the text direction changes we should evict the cached layouts
to avoid stale entries in case the direction change produces a layout
with the same size.
Every actor should have a property for retrieving (and setting) the
text direction.
The text direction is used to provide a consistent behaviour in both
left-to-right and right-to-left languages. For instance, ClutterText
should perform key navigation following text direction. Layout
managers should also take into account text direction to derive the
right packing order for their children.
Instead of using PangoDirection directly we should use the
ClutterTextDirection enumeration.
We also need a pair of accessor functions for setting and
getting the default text direction.
This fixes a warning about an uninitialised value. It could also
potentially fix some crashes for example if the enable_flags value
happened to include a bit for enabling a vertex array if no vertex
buffer pointer was set.
While loading a JPEG from disk (with clutter_texture_new_from_file),
I got the following:
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8
integer bits/component; 24 bits/pixel; 3-component colorspace;
kCGImageAlphaNone; 3072 bytes/row.
<Error>: CGContextDrawImage: invalid context
Looking around, I found that CGBitmapContextCreate can't make 24bpp
offscreen pixmaps without an alpha channel...
This fixes the bug, and seems to not break other things...
http://bugzilla.openedhand.com/show_bug.cgi?id=1159
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>