When printing out the property value during a ClutterScript debug run
we generate the value's contents using g_strdup_value_contents() - and
we do it unconditionally. The contents might not be printed at all
(most likely they won't be) and will be freed right afterwards. This is
unnecessary: we can allocate the contents string after checking if we're
going to print out the debug note, thus avoiding the whole
allocation/free cycle unless strictly needed.
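Roughly, the shape of the change is something like this (a sketch only,
assuming Clutter's CLUTTER_HAS_DEBUG()/CLUTTER_NOTE() debugging macros;
the function and variable names are illustrative):

  #include <glib-object.h>
  #include "clutter-debug.h"

  static void
  note_property_value (const gchar  *name,
                       const GValue *value)
  {
  #ifdef CLUTTER_ENABLE_DEBUG
    /* only pay for g_strdup_value_contents() if the debug note is
     * actually going to be printed */
    if (CLUTTER_HAS_DEBUG (SCRIPT))
      {
        gchar *contents = g_strdup_value_contents (value);

        CLUTTER_NOTE (SCRIPT, "Setting property '%s' to '%s'",
                      name, contents);

        g_free (contents);
      }
  #endif /* CLUTTER_ENABLE_DEBUG */
  }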
When setting up the state for a layer, we need to switch texture
units before we do anything that might bind the texture, or
we'll bind the wrong texture to the previous unit.
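In raw GL terms the required ordering is simply this (a sketch; the
real code goes through Cogl's own state tracking):

  #include <GL/gl.h>

  static void
  bind_layer_texture (int layer_index, GLuint gl_texture)
  {
    /* make the layer's unit active *before* anything that might call
     * glBindTexture(), otherwise the texture ends up bound on
     * whichever unit was active previously */
    glActiveTexture (GL_TEXTURE0 + layer_index);
    glBindTexture (GL_TEXTURE_2D, gl_texture);
  }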
http://bugzilla.openedhand.com/show_bug.cgi?id=2033
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
* Add a new clutter_geometry_union(), because writing rectangle union
code is harder than it looks. This fixes two problems with the inline
code in clutter_stage_glx_add_redraw_clip():
1) The ->x and ->y of the geometry were reassigned before being used
to compute the new width and height.
2) Since ClutterGeometry has an unsigned width, x + width is unsigned,
and the comparison goes wrong if either rectangle has a negative
x + width. (We made the width of GdkRectangle signed for GTK+ 2.0;
unsigned widths are a potent source of bugs.)
* Use it in clutter_stage_glx_add_redraw_clip()
* Account for the case where the incoming rectangle is empty, and don't
end up with the stage being entirely redrawn.
* Account for the case where the stage already has a degenerate
width, and don't end up redrawing only the new rectangle and not
the rest of the stage.
The better fix for the latter two problems is to stop using a 0
width to mean the entire stage, but this should work for now.
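A sketch of a union that avoids both pitfalls - compute the edges in
signed arithmetic first, then write the result (the struct here just
mirrors ClutterGeometry for illustration):

  #include <glib.h>

  typedef struct { gint x, y; guint width, height; } Geometry;

  static void
  geometry_union (const Geometry *a, const Geometry *b, Geometry *result)
  {
    /* cast the unsigned widths/heights before adding, so a negative
     * right/bottom edge doesn't wrap around; only assign to 'result'
     * once everything has been computed */
    gint x1 = MIN (a->x, b->x);
    gint y1 = MIN (a->y, b->y);
    gint x2 = MAX (a->x + (gint) a->width, b->x + (gint) b->width);
    gint y2 = MAX (a->y + (gint) a->height, b->y + (gint) b->height);

    result->x = x1;
    result->y = y1;
    result->width = MAX (x2 - x1, 0);
    result->height = MAX (y2 - y1, 0);
  }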
http://bugzilla.openedhand.com/show_bug.cgi?id=2040
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
We need to set up the rowstride and alignment properly in
CoglTexture2D before reading texture data.
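The kind of pack state that needs to be in place before the read-back,
as a sketch (the rowstride and bytes-per-pixel come from the
destination bitmap):

  #include <GL/gl.h>

  static void
  prepare_pack_state (int rowstride, int bpp)
  {
    /* tell GL how long a destination row really is ... */
    glPixelStorei (GL_PACK_ROW_LENGTH, rowstride / bpp);

    /* ... and which alignment each row starts on */
    if (rowstride % 8 == 0)
      glPixelStorei (GL_PACK_ALIGNMENT, 8);
    else if (rowstride % 4 == 0)
      glPixelStorei (GL_PACK_ALIGNMENT, 4);
    else if (rowstride % 2 == 0)
      glPixelStorei (GL_PACK_ALIGNMENT, 2);
    else
      glPixelStorei (GL_PACK_ALIGNMENT, 1);
  }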
http://bugzilla.openedhand.com/show_bug.cgi?id=2036
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If you forgot to call clutter_init() then you currently end up with a
warning saying that the stage cannot be initialized because the backend
does not support multiple stages. Clearly not useful.
We can catch some of the missing initialization in the features API,
since we will likely end up asking for a feature at some point.
We more or less assume that things will break during the
ClutterBackend::create_context() implementation if we fail to create a
GL context. We do, however, have error reporting in place inside the
Backend API to catch those cases. Unfortunately, since we switched to
lazy initialization of the Stage, there can be a case of GL context
creation failure that still leads to a successful initialization - and
to a segmentation fault later on. This is clearly Not Good™.
Let's try to catch a failure in all the places calling create_context()
and report back to the user the error in a meaningful way, before
crashing and burning.
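A sketch of what the call sites should look like, assuming the vfunc
has the shape gboolean create_context (ClutterBackend *, GError **):

  static gboolean
  ensure_gl_context (ClutterBackend *backend)
  {
    GError *error = NULL;

    if (!CLUTTER_BACKEND_GET_CLASS (backend)->create_context (backend, &error))
      {
        /* report the failure instead of carrying on and crashing later */
        if (error != NULL)
          {
            g_critical ("Unable to create the GL context: %s", error->message);
            g_error_free (error);
          }
        else
          g_critical ("Unable to create the GL context: unknown error");

        return FALSE;
      }

    return TRUE;
  }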
If you call get_n_columns() during the instance initialization phase,
but before set_names()/set_types() have been called, you'll get a
(guint) -1. This is less than ideal.
If the columns haven't been initialized we should just return 0, which
has been the intent of the API since the beginning.
Based on a patch by: Bastian Winkler <buz@netbuz.org>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The int storage and the initial value of -1 are used as a guard when
subclassing ClutterListModel, to allow the subclass to call
clutter_model_set_names() and clutter_model_set_types().
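The accessor then just needs to translate the guard value, along these
lines (a sketch; the private struct layout is assumed here):

  guint
  clutter_model_get_n_columns (ClutterModel *model)
  {
    g_return_val_if_fail (CLUTTER_IS_MODEL (model), 0);

    /* n_columns is a gint starting out at -1 so that subclasses can
     * still call set_names()/set_types(); until then the model simply
     * has no columns */
    if (model->priv->n_columns < 0)
      return 0;

    return model->priv->n_columns;
  }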
This reverts commit c274118a8f.
This makes it more likely that consumers will notice invalid unreferences.
GObject has the same assertion.
http://bugzilla.openedhand.com/show_bug.cgi?id=2029
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
clutter_model_get_n_columns is supposed to return a guint, so the
n_columns field needs to be a guint with the initial value set to 0.
http://bugzilla.openedhand.com/show_bug.cgi?id=2017
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When entering cogl_texture_2d_new_from_bitmap() the internal format can
be COGL_PIXEL_FORMAT_ANY. This was causing _cogl_texture_2d_can_create()
to use an invalid GL format type. Mesa apparently ignores this, but it
was causing errors when Cogl is compiled with debugging and run under
the NVIDIA driver.
http://bugzilla.openedhand.com/show_bug.cgi?id=2026
Add a return result from CoglTexture.transform_quad_coords_to_gl(),
so that we can properly determine the nature of repeats in
the face of GL_TEXTURE_RECTANGLE_ARB, where the returned
coordinates are not normalized.
The comment "We also work out whether any of the texture
coordinates are outside the range [0.0,1.0]. We need to do
this after calling transform_coords_to_gl in case the texture
backend is munging the coordinates (such as in the sub texture
backend)." is disregarded and removed, since it's actually
the virtual coordinates that determine whether we repeat,
not the GL coordinates.
Warnings about disregarded layers are used in all cases where
applicable, including for subtextures.
http://bugzilla.openedhand.com/show_bug.cgi?id=2016
Signed-off-by: Neil Roberts <neil@linux.intel.com>
Fix Clutter initialisation when ARGB visuals are enabled by setting a
border color when creating the dummy window. This should avoid a
BadMatch error when the depth of the root window visual is not the same
as the depth of the ARGB visual.
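A sketch of the dummy window creation; when a window's depth differs
from its parent's, X requires both a border pixel and a colormap, so
both are shown here even though the commit itself only adds the border
color:

  #include <X11/Xlib.h>
  #include <X11/Xutil.h>

  static Window
  create_dummy_window (Display *xdpy, XVisualInfo *xvisinfo)
  {
    Window xroot = DefaultRootWindow (xdpy);
    XSetWindowAttributes attrs;

    attrs.colormap = XCreateColormap (xdpy, xroot, xvisinfo->visual, AllocNone);
    attrs.border_pixel = 0;

    return XCreateWindow (xdpy, xroot,
                          -100, -100, 1, 1,
                          0, /* border width */
                          xvisinfo->depth,
                          InputOutput,
                          xvisinfo->visual,
                          CWColormap | CWBorderPixel,
                          &attrs);
  }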
http://bugzilla.openedhand.com/show_bug.cgi?id=2011
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Backport of the upstream JSON-GLib commit that improved the strictness
of JsonParser.
The original upstream commit is:
29881f03468db08bfb404cfcd5b61b4cdc419a87
In _cogl_texture_2d_sliced_foreach_sub_texture_in_region(), don't
assert that the target is GL_TEXTURE_2D; instead conditionalize
normalization on the target.
http://bugzilla.openedhand.com/show_bug.cgi?id=2015
If the EGL context is already created then we shouldn't try to create
another one. This was causing problems where one context would be
created from calling _clutter_feature_init and the other was created
from _clutter_backend_get_features. Cogl would set up its state using
the first context and then assume that state was still valid when the
second context came into use, so blending was not working correctly.
http://bugzilla.openedhand.com/show_bug.cgi?id=2020
The documentation and name of the get_transformation_matrix function
implies that 'matrix' is purely an out parameter. However it wasn't
initializing the matrix before calling the 'apply_transform' virtual
so it was basically just a wrapper for the virtual. The virtual
assumes the matrix parameter is in/out and applies the actor's
transformation on top of any existing transformations. This causes
unexpected semantics that are inconsistent with the documentation.
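The shape of the fix, roughly: initialize the matrix to the identity
before handing it to the vfunc (sketch):

  void
  clutter_actor_get_transformation_matrix (ClutterActor *self,
                                           CoglMatrix   *matrix)
  {
    g_return_if_fail (CLUTTER_IS_ACTOR (self));
    g_return_if_fail (matrix != NULL);

    /* make 'matrix' a pure out parameter: the apply_transform vfunc
     * multiplies the actor's transformation on top of whatever is
     * already there */
    cogl_matrix_init_identity (matrix);

    CLUTTER_ACTOR_GET_CLASS (self)->apply_transform (self, matrix);
  }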
This changes clutter_glx_texture_pixmap_update_area so that it defers
the call to glXBindTexImageEXT until our pre-"paint" signal handler,
which makes clutter_glx_texture_pixmap_update_area cheap to call.
The hope is that mutter can switch to reporting raw damage updates to
ClutterGLXTexturePixmap and we can use these to queue clipped redraws.
A new (currently internal-only) API, _clutter_actor_queue_clipped_redraw,
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
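As an illustration only - the exact signature of the internal API is
not spelled out here, so the clip type below is an assumption - an
actor that knows which area changed would do something like:

  static void
  queue_damage_redraw (ClutterActor *actor,
                       gint x, gint y, gint width, gint height)
  {
    /* clip rectangle in actor coordinates covering the damaged area;
     * ClutterGeometry is assumed here purely for the sketch */
    ClutterGeometry clip = { x, y, width, height };

    _clutter_actor_queue_clipped_redraw (actor, &clip);
  }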
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» Queuing multiple clipped redraws will result in the bounding box of
all the clip rectangles being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» Note: no empirical data was used to come up with this threshold so
we may need to tune this.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted you now also
need to override the signal handler and queue your own redraw.
Technically this is a semantic break, but it's assumed that no one
is currently doing this.
This still leaves a few unsolved issues with regard to optimizing sub
stage redraws that need to be addressed in further work, so this can
only be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. It is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor because actors have no way to report a "paint box". (remember
actors can draw outside their allocation and actors with depth may
also be projected outside of their allocation)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
commit 511e5ceb51 accidentally removed the #ifdef COGL_ENABLE_DEBUG
guards around the "cogl-debug" and "cogl-no-debug" cogl_args[] which
this patch restores.
The FlowLayout fails to provide a preferred size when no sizing is
specified on one axis. It should, instead, report a preferred size
equal to the sum of its children's sizes, depending on the orientation
property.
http://bugzilla.openedhand.com/show_bug.cgi?id=2013
Some EGL drivers for embedded devices require a specific framebuffer
device to be opened and passed to eglCreateWindowSurface(). Since it's
optional, we can provide an environment variable called
CLUTTER_FB_DEVICE that can be used to specify the path of the device
to be opened.
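A sketch of the lookup; how the resulting descriptor is then handed
over to eglCreateWindowSurface() is driver-specific and not shown:

  #include <glib.h>
  #include <fcntl.h>

  static int
  open_fb_device (void)
  {
    const gchar *fb_device = g_getenv ("CLUTTER_FB_DEVICE");

    /* the variable is optional: fall back to the default behaviour
     * when it is unset or empty */
    if (fb_device == NULL || *fb_device == '\0')
      return -1;

    return open (fb_device, O_RDWR);
  }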
http://bugzilla.openedhand.com/show_bug.cgi?id=1997
Update the EGL native framebuffer backend to be 1.2-ready:
» create the EGL context and the surface inside the create_context()
implementation so that a context is always available
» simplify the StageWindow implementation
» clean up old code
http://bugzilla.openedhand.com/show_bug.cgi?id=1997
Just like _cogl_texture_2d_new_with_size(),
_cogl_texture_2d_new_from_bitmap() needs to check if an unsliced
texture can be created at the given size, or if hardware
limitations prevent this.
http://bugzilla.openedhand.com/show_bug.cgi?id=2014
Signed-off-by: Neil Roberts <neil@linux.intel.com>
cogl_read_pixels() no longer asserts that the format passed in is
RGBA_8888 but instead accepts any format. The appropriate GL enums for
the format are passed to glReadPixels, so OpenGL should perform a
conversion if necessary.
It currently assumes glReadPixels will always give us premultiplied
data. This will usually be correct because the result of the default
blending operations for Cogl ends up with premultiplied data in the
framebuffer. However it is possible for the framebuffer to be in
whatever format depending on what CoglMaterial is used to render to
it. Eventually we may want to add a way for an application to inform
Cogl that the framebuffer is not premultiplied in case it is being
used for some special purpose.
If the requested format is not premultiplied then Cogl will convert
it. The tests have been changed to read the data as premultiplied so
that they won't be affected by the conversion. Picking in Clutter has
been changed to use COGL_PIXEL_FORMAT_RGB_888 because it doesn't need
the alpha component. clutter_stage_read_pixels is left unchanged
because the application can't specify a format for that so it seems to
make most sense to store unpremultiplied values.
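For instance, the picking code can now read a single pixel back without
caring about premultiplication, along these lines (sketch):

  #include <cogl/cogl.h>

  static void
  read_pick_pixel (int x, int y, guint8 pixel[3])
  {
    /* picking only needs the colour components, so RGB_888 is enough
     * and sidesteps the premultiply handling entirely */
    cogl_read_pixels (x, y, 1, 1,
                      COGL_READ_PIXELS_COLOR_BUFFER,
                      COGL_PIXEL_FORMAT_RGB_888,
                      pixel);
  }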
http://bugzilla.openedhand.com/show_bug.cgi?id=1959
* stage-min-size-rework:
  docs: Update minimum size accessors
  actor: Use the TOPLEVEL flag instead of a type check
  [stage] Use min-width/height props for min size
The clutter-profile.c print_report() code would crash if no stats had
been gathered because uprof would return NULL for the "Redrawing" timer
which we then dereferenced.
This changes the code to start by checking for the "Mainloop",
"Redrawing" and "Do Pick" timers and if none are present it returns
immediately without generating any report.
Since using addresses that might change is something the FSF finally
acknowledges as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI
for getting the license in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
There is no need for us to check for low-level functions and header
files, especially since we haven't been checking the results until
now. This makes cross-compiling slightly more bearable.
If the actor is an internal child of another actor then we should call
unparent() when destroying it, like clutter_actor_reparent() does;
otherwise we'll leak the actor, since the parent holds a reference to
it.
http://bugzilla.openedhand.com/show_bug.cgi?id=2009
Instead of shadowing these properties with separate properties of the
same name on the stage, actually use them. The behaviour should be
identical, except that the minimum stage size can now be enforced by
setting the min-width/height properties as well as by using the
set_minimum_size function.
If we do not unset the Stage we will have stale data, and the Crossing
event when re-entering a Stage will not be emitted, as the actor under
the pointer might be the same as before.
This adds a COGL_INDICES_TYPE_UNSIGNED_INT enum value so that unsigned
ints can be used with cogl_vertex_buffer_indices_new. Unsigned ints
are not supported in core on GLES so a feature flag has also been
added to advertise this. GLES only sets the feature if the
GL_OES_element_index_uint extension is available. It is an error to
call indices_new() with unsigned ints unless the feature is
advertised.
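Usage then looks roughly like this (index_data and n_indices are
illustrative, and the feature flag name below is the assumed one):

  #include <cogl/cogl.h>

  static CoglHandle
  make_indices (const guint32 *index_data, int n_indices)
  {
    /* only use 32-bit indices if the driver advertises support */
    if (!cogl_features_available (COGL_FEATURE_UNSIGNED_INT_INDICES))
      return COGL_INVALID_HANDLE; /* caller falls back to 16-bit batches */

    return cogl_vertex_buffer_indices_new (COGL_INDICES_TYPE_UNSIGNED_INT,
                                           index_data,
                                           n_indices);
  }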
http://bugzilla.openedhand.com/show_bug.cgi?id=1998
Allow a ClutterModel to be constructed through the ClutterScript API.
Currently this allows a model to be generated like this:
  {
    "id" : "test-model",
    "type" : "ClutterListModel",
    "columns" : [
      [ "text-column", "gchararray" ],
      [ "int-column", "gint" ],
      [ "actor-column", "ClutterRectangle" ]
    ]
  }
where 'columns' is an array containing arrays of column-name,
column-type pairs.
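The model can then be pulled out of the script like any other object
(the file name below is illustrative):

  #include <clutter/clutter.h>

  static ClutterModel *
  load_model (void)
  {
    ClutterScript *script = clutter_script_new ();
    GError *error = NULL;

    clutter_script_load_from_file (script, "test-model.json", &error);
    if (error != NULL)
      g_error ("Unable to load the model definition: %s", error->message);

    return CLUTTER_MODEL (clutter_script_get_object (script, "test-model"));
  }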
http://bugzilla.openedhand.com/show_bug.cgi?id=2007
The code has gotten really complicated to follow.
As soon as we have a sync-to-vblank mechanism we should just bail out.
Also, __GL_SYNC_TO_VBLANK (which is used by nVidia) should be assumed
equivalent to a CLUTTER_VBLANK_GLX_SWAP.
We should explain what a "key frame" is for ClutterAnimator, possibly
with some sort of visual cue.
This allows me to demonstrate my poor skills at using Inkscape, as well
as my overall bad taste for graphics design.
The top-level types list was comically out of date, and it was only
determining whether the type we were constructing was initially unowned
or a full object. We can safely replace it with a simple type check.
It would be useful to be able to share the Timeline across different
animator instances, or with different animation constructs. Also this
allows sharing definitions of Timelines in ClutterScript.
The arguments for remove_key() can be NULL, but there is an extraneous
assertion that fails if they are. The pre-conditions should match the
documentation, in this case.
A sub-class of ClutterBox might add ChildMeta support, and since
pack_at() does not go through clutter_container_add_actor(), we
need to call create_child_meta() ourselves.
It is conceivable that Container implementations might add children
outside of the Container::add() implementation - e.g. for packing at
a specific index. Since the addition (and removal) might happen outside
the common path we need to expose all the API that is implicitly called
by ClutterContainer when adding and removing a child - namely the
ChildMeta creation and destruction.
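So a pack_at()-style code path ends up looking something like this
simplified sketch (the real clutter_box_pack_at() also handles child
properties, which are omitted here):

  static void
  box_insert_child_at (ClutterBox *box, ClutterActor *actor, gint position)
  {
    ClutterContainer *container = CLUTTER_CONTAINER (box);

    /* ... insert 'actor' into the children list at 'position' ... */

    /* we're not going through clutter_container_add_actor(), so the
     * ChildMeta has to be created explicitly */
    clutter_container_create_child_meta (container, actor);

    clutter_actor_set_parent (actor, CLUTTER_ACTOR (box));
    clutter_actor_queue_relayout (CLUTTER_ACTOR (box));
  }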
Previously the GLES2 backend needed a special wrapper for
glBindTexture because it needed to know the internal GL format of the
texture in order to correctly implement the GL_MODULATE texture env
mode. When GL_MODULATE is used with an alpha-only texture the RGB
values are taken from the previous texture layer rather than being
fetched from the texture. However, since the material API was added,
Cogl no longer uses
the GL_MODULATE texture env mode but instead always uses GL_COMBINE.
Compiling the GLES2 backend has been broken since the more-texture-backends
branch merge because the cogl_get_internal_gl_format function was
removed and there was one place in GLES2 specific code that was using
this to bind the texture.
The texture layer combine functions are now hard coded to GL_COMBINE
instead of GL_MODULATE. The combine function can be customized with
all the parameters of GL_COMBINE. A shader is generated to implement
the given parameters.
Currently it will try to generate code for the constant color but it
will use a uniform which does not exist.
The GLES2 backend for Cogl is failing to compile because
GL_MAX_TEXTURE_UNITS is not defined. Let's define it and provide a
wrapper which uses GL_MAX_TEXTURE_IMAGE_UNITS or
COGL_GLES2_MAX_TEXTURE_UNITS, whichever is smaller.
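Something along these lines (a sketch; the GL enum value is the
standard one, and the COGL_GLES2_MAX_TEXTURE_UNITS value here is just
illustrative):

  #include <GLES2/gl2.h>
  #include <glib.h>

  #ifndef GL_MAX_TEXTURE_UNITS
  #define GL_MAX_TEXTURE_UNITS 0x84e2
  #endif

  /* illustrative value: whatever the generated shader code can handle */
  #define COGL_GLES2_MAX_TEXTURE_UNITS 8

  static GLint
  get_max_texture_units (void)
  {
    GLint max_image_units = 0;

    /* GLES 2.0 has no fixed-function units, so report the smaller of
     * what the hardware exposes and what our shader generator handles */
    glGetIntegerv (GL_MAX_TEXTURE_IMAGE_UNITS, &max_image_units);

    return MIN (max_image_units, COGL_GLES2_MAX_TEXTURE_UNITS);
  }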
A bogus ClutterInterpolation argument had been carried over from
clutter_animator_set_interpolation() to
clutter_animator_get_interpolation() by a copy-and-paste.
GLib 2.24 (but starting from the 2.23.2 unstable release) added a new
macro for collecting GValues from a va_list.
The newly added G_VALUE_COLLECT_INIT() macro should be used in place
of initializing the GValue and calling G_VALUE_COLLECT(), and improves
the collection performances by avoiding multiple checks, free and
initialization calls.
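I.e. instead of g_value_init() followed by G_VALUE_COLLECT(), a
collection site can now do something like this (sketch):

  #include <stdarg.h>
  #include <glib-object.h>
  #include <gobject/gvaluecollector.h>

  static void
  collect_one_value (GType gtype, va_list var_args)
  {
    GValue value = { 0, };
    gchar *error = NULL;

    /* initializes and collects in one go, skipping the redundant
     * checks and the free/init cycle of the two-step version */
    G_VALUE_COLLECT_INIT (&value, gtype, var_args, 0, &error);

    if (error != NULL)
      {
        g_warning ("Unable to collect value: %s", error);
        g_free (error);
        return;
      }

    /* ... use the value ... */

    g_value_unset (&value);
  }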
The installed _HEADERS should be the public ones and the enumeration
types; repeating clutter-x11-texture-pixmap.h breaks with automake 1.11
and doesn't strictly make any sense.
http://bugzilla.openedhand.com/show_bug.cgi?id=2002
When set_container() is called with a NULL container we cannot use the
passed pointer to unset the CLUTTER_ACTOR_NO_LAYOUT flag. We should
store a back pointer to the container as object data (there's no need
to add a Private data structure in this case) and unset the flag on the
back pointer instead.
Previously only ClutterGroup was able to set the CLUTTER_ACTOR_NO_LAYOUT
flag which allows clutter-actor.c to avoid a relayout when showing or
hiding fixed layout containers. Instead of it being the responsibility
of the container to set this flag, this patch makes the layout manager
itself decide in the ::set_container method. This way both ClutterBox
and ClutterGroup can take advantage of the optimization.
g_list_insert_sorted() inserts the new actor before all others that
compare equal, so for the normal case when all actors have depth == 0
this has the surprising behaviour of layering the actors in reverse
order. To fix this it now manually inserts the actor in the right
place by searching until it finds an actor at a higher depth and
inserting before that.
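The insertion logic is basically this (sketch):

  #include <clutter/clutter.h>

  static GList *
  insert_child_by_depth (GList *children, ClutterActor *child)
  {
    GList *l;

    /* stop at the first sibling with a *strictly* higher depth so
     * that actors sharing the same depth keep their insertion order */
    for (l = children; l != NULL; l = l->next)
      {
        ClutterActor *sibling = l->data;

        if (clutter_actor_get_depth (sibling) > clutter_actor_get_depth (child))
          break;
      }

    return g_list_insert_before (children, l, child);
  }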
http://bugzilla.openedhand.com/show_bug.cgi?id=1988
This reverts commit 939e56e2b1.
Changing the depth sort function to have inconsistent behaviour for
nodes that compare equal breaks the stability of g_list_sort. It ends
up so that every time clutter_container_sort_depth_order is called the
order of all actors with the same depth is reversed.
http://bugzilla.openedhand.com/show_bug.cgi?id=1988
To aid in the debugging of Clutter stage resize issues this adds a
COGL_DEBUG=opengl option that will trace "some select OpenGL calls"
(currently just glViewport calls).
Most Cogl debugging code conditions are marked as G_UNLIKELY with the
intention of having the CPU branch prediction always assume the
path is disabled so having debugging support in release binaries has
negligible overhead.
This patch simply fixes a few cases where we weren't using G_UNLIKELY.
COGL_DEBUG=all wasn't previously useful, as there are several options
that change the behaviour of Cogl, and enabling them all together
wouldn't help anyone debug anything.
This patch makes it so COGL_DEBUG=all|verbose now only enables options
that don't change the behaviour of Cogl, i.e. they only affect the
amount of noise we'll print to a terminal.
In addition to that this patch also improves the output from
COGL_DEBUG=help so we now print a table of options including one liner
descriptions of what each option enables.
Some of the ClutterDebugFlags are not meant as a logging facility: they
actually change Clutter's behaviour at run-time.
It would be useful to have this distinction ratified, and thus split
ClutterDebugFlags into two: one DebugFlags for logging facilities and
another set of flags for behavioural changes.
This split is warranted because:
• it should be possible to do "CLUTTER_DEBUG=all" and only have
log messages on the output
• it should be possible to use behavioural modifiers even on a
Clutter that has been compiled without debugging messages
support
The commit adds two new debugging flags:
ClutterPickDebugFlags - controlled by the CLUTTER_PICK environment
variable
ClutterPaintDebugFlags - controlled by the CLUTTER_PAINT environment
variable
The PickDebugFlags are:
nop-picking
dump-pick-buffers
While the only PaintDebugFlag is:
disable-swap-events
The mechanism is equivalent to the CLUTTER_DEBUG environment variable,
but it does not depend on the debug level selected when configuring and
compiling Clutter. The picking and painting debugging flags are
initialized at clutter_init() time.
http://bugzilla.openedhand.com/show_bug.cgi?id=1991
The motion event compression should take into account the device field
of the event; that is, we should only compress motion events coming
from the same device.
If an actor is on the boundary of a Stage and the pointer for a device
enters the Stage over that actor, the sequence of events currently is:
➔ ENTER (source: actor, related: NULL)
➔ MOTION
Thus the Stage never gets an ENTER event. This is a regression from
Clutter 1.0.
The correct sequence is:
➔ ENTER (source: stage, related: NULL)
➔ ENTER (source: actor, related: stage)
➔ MOTION
This also maps to the sequence of events synthesized by Clutter when
leaving the Stage through an actor overlapping the Stage boundary.
http://bugzilla.moblin.org/show_bug.cgi?id=9781
The introduction of the StageManager in 0.8 implied that the first Stage
instance to be created was automatically assigned the status of "default
stage". This was all well and good, since the default stage was created
behind the curtains by the initialization sequence.
Now that the initialization sequence does not create a default stage any
longer, it means that the first stage created using clutter_stage_new()
gets to be the default, and all special and warm and fuzzy - which also
means that the first stage created by clutter_stage_new() cannot be
destroyed or handled as any other stage. Whoopsie.
Let's go back to the old semantics: the stage created by the first
invocation of clutter_stage_get_default() is the default stage, and
nothing else can be set as default. One day we'll be able to break the
API and the whole default stage business will be a thing of the past.
Embedding toolkits should benefit from proper documentation of
clutter_input_device_update_from_event(): its meaning, its use and
the caveats for the "update_stage" argument.