* Add new clutter_geometry_union(), because writing a correct rectangle
union is harder than it looks. Fixes two problems with the inline code in
clutter_stage_glx_add_redraw_clip().
1) The ->x and ->y of the destination were reassigned before being used
to compute the new width and height.
2) Since ClutterGeometry has an unsigned width, x + width is unsigned,
and the comparison goes wrong if either rectangle has a negative
x + width. (We made width signed for GdkRectangle in GTK+-2.0; this is
a potent source of bugs.) A sketch of the corrected logic follows the
list below.
* Use it in clutter_stage_glx_add_redraw_clip()
* Account for the case where the incoming rectangle is empty, and don't
end up with the stage being entirely redrawn.
* Account for the case where the stage already has a degenerate
width and don't end up with redrawing only the new rectangle and not
the rest of the stage.
The better fix for the latter two problems is to stop using a 0 width
to mean the entire stage, but this should work for now.
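A minimal sketch of the union logic (names are illustrative, not the
actual implementation), assuming a ClutterGeometry-like struct with
signed x/y, unsigned width/height, and GLib's MIN()/MAX() macros:

  static void
  geometry_union (const ClutterGeometry *a,
                  const ClutterGeometry *b,
                  ClutterGeometry       *result)
  {
    /* Cast width/height to signed before comparing, so a negative
     * x + width (e.g. x = -10, width = 5) doesn't wrap as unsigned. */
    gint x_1 = MIN (a->x, b->x);
    gint y_1 = MIN (a->y, b->y);
    gint x_2 = MAX (a->x + (gint) a->width,  b->x + (gint) b->width);
    gint y_2 = MAX (a->y + (gint) a->height, b->y + (gint) b->height);

    /* Only assign ->x and ->y after both edges have been computed,
     * avoiding the reassignment-before-use bug from problem 1). */
    result->x = x_1;
    result->y = y_1;
    result->width = x_2 - x_1;
    result->height = y_2 - y_1;
  }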
http://bugzilla.openedhand.com/show_bug.cgi?id=2040
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
We need to set up the rowstride and alignment properly in
CoglTexture2D before reading texture data.
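A minimal sketch of the idea (variable names are illustrative, not the
actual Cogl code), assuming the texture is already bound and that
width, height and the GL format/type are known:

  int bpp = 4;                     /* bytes per pixel, e.g. RGBA */
  int rowstride = width * bpp;     /* destination buffer rowstride */
  guint8 *data = g_malloc (height * rowstride);

  /* Describe the destination buffer to GL before the read-back,
   * otherwise rows get misaligned whenever the rowstride doesn't
   * match GL's default 4-byte pack alignment. */
  glPixelStorei (GL_PACK_ROW_LENGTH, rowstride / bpp);
  glPixelStorei (GL_PACK_ALIGNMENT, 1);
  glGetTexImage (GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);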
http://bugzilla.openedhand.com/show_bug.cgi?id=2036
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The system JSON-GLib installation should be the preferred way of parsing
JSON in Clutter. The internal copy is limited by the need to
re-synchronize with upstream, and by the fact that upstream contains a
fork of GScanner that allows parsing escaped UTF-8. We should warn
users compiling Clutter
with the internal copy, just like we warn about the internal image
backend.
The X11TexturePixmap actor uses the XComposite API directly, without
guards. It has been doing so for a while, even though we only check for
the XComposite extension - we never actually depend on it. As soon as
you try building Clutter on X11 without the XComposite extension
available, all hell breaks loose.
The obvious fix is to make Clutter depend on XComposite - basically
ratifying the current state of things.
If you forgot to call clutter_init() then you currently end up with a
warning saying that the stage cannot be initialized because the backend
does not support multiple stages. Clearly not useful.
We can catch some of the missing initialization in the features API,
since we will likely end up asking for a feature at some point.
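A sketch of the kind of guard this allows; the static bookkeeping
variables here are illustrative, not the actual implementation:

  static ClutterFeatureFlags features = 0;
  static gboolean features_initialized = FALSE;

  ClutterFeatureFlags
  clutter_feature_get_all (void)
  {
    if (G_UNLIKELY (!features_initialized))
      {
        g_critical ("Unable to retrieve the features: "
                    "did you forget to call clutter_init()?");
        return 0;
      }

    return features;
  }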
We kind of assume that, if we fail to create a GL context, things will
break early on, inside the ClutterBackend::create_context()
implementation. We do, however, have error reporting in place inside the
Backend API to catch those cases. Unfortunately, since we switched to
lazy initialization of the Stage, there can be a case of GL context
creation failure that still leads to a successful initialization - and a
segmentation fault later on. This is clearly Not Good™.
Let's try to catch a failure in all the places calling create_context()
and report back to the user the error in a meaningful way, before
crashing and burning.
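A sketch of the calling pattern; the exact internal signature of
_clutter_backend_create_context() is an assumption based on this
description:

  GError *error = NULL;

  if (!_clutter_backend_create_context (backend, &error))
    {
      g_critical ("Unable to create the GL context: %s",
                  error != NULL ? error->message : "unknown error");
      g_clear_error (&error);
      return FALSE;
    }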
If you call get_n_columns() during the instance initialization phase but
before set_name()/set_types() have been called, you'll get a (guint) -1.
This is less than ideal.
If the columns haven't been initialized we should just return 0, which
has been the intent of the API since the beginning.
Based on a patch by: Bastian Winkler <buz@netbuz.org>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The int storage, and the initial value of -1, is used as a guard when
subclassing ClutterListModel to allow the sub-class to call
clutter_model_set_names() and clutter_model_set_types().
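A sketch of the resulting guard (direct access to the private struct is
illustrative):

  guint
  clutter_model_get_n_columns (ClutterModel *model)
  {
    g_return_val_if_fail (CLUTTER_IS_MODEL (model), 0);

    /* -1 means set_names()/set_types() have not been called yet;
     * report 0 instead of letting it wrap around to (guint) -1. */
    if (model->priv->n_columns < 0)
      return 0;

    return model->priv->n_columns;
  }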
This reverts commit c274118a8f.
This makes it more likely consumers notice invalid unreferences.
GObject has the same assertion.
http://bugzilla.openedhand.com/show_bug.cgi?id=2029
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
clutter_model_get_n_columns is supposed to return a guint, so the
n_columns field needs to be a guint with the initial value set to 0.
http://bugzilla.openedhand.com/show_bug.cgi?id=2017
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When entering cogl_texture_2d_new_from_bitmap the internal format can
be COGL_PIXEL_FORMAT_ANY. This was causing _cogl_texture_2d_can_create
to use an invalid GL format type. Mesa apparently ignores this, but it
was causing errors when Cogl is compiled with debugging under NVidia.
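A sketch of the fix (field names are illustrative): resolve the ANY
format to the bitmap's real format before deriving the GL format and
type:

  if (internal_format == COGL_PIXEL_FORMAT_ANY)
    internal_format = bmp->format;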
http://bugzilla.openedhand.com/show_bug.cgi?id=2026
Add a return result from CoglTexture.transform_quad_coords_to_gl(),
so that we can properly determine the nature of repeats in
the face of GL_TEXTURE_RECTANGLE_ARB, where the returned
coordinates are not normalized.
The comment "We also work out whether any of the texture
coordinates are outside the range [0.0,1.0]. We need to do
this after calling transform_coords_to_gl in case the texture
backend is munging the coordinates (such as in the sub texture
backend)." is disregarded and removed, since it's actually
the virtual coordinates that determine whether we repeat,
not the GL coordinates.
Warnings about disregarded layers are used in all cases where
applicable, including for subtextures.
http://bugzilla.openedhand.com/show_bug.cgi?id=2016
Signed-off-by: Neil Roberts <neil@linux.intel.com>
Fix Clutter initialisation when argb visuals are enabled by setting a
border color when creating the dummy window. This should avoid a
BadMatch error when the depth of the root window visual is not the same
as the depth of the argb visual.
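A sketch of the dummy-window creation (variable names are illustrative);
the key part is passing CWBorderPixel, along with a colormap matching
the visual, so the server doesn't raise BadMatch when the window depth
differs from the root window depth:

  XSetWindowAttributes attrs;
  Window dummy;

  attrs.colormap = XCreateColormap (xdisplay, xroot, xvisual, AllocNone);
  attrs.border_pixel = 0;

  dummy = XCreateWindow (xdisplay, xroot,
                         -100, -100, 1, 1,
                         0,              /* border width */
                         xdepth,         /* e.g. 32 for argb visuals */
                         CopyFromParent,
                         xvisual,
                         CWColormap | CWBorderPixel,
                         &attrs);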
http://bugzilla.openedhand.com/show_bug.cgi?id=2011
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Backport of the upstream JSON-GLib commit that improved the strictness
of JsonParser.
The original upstream commit is:
29881f03468db08bfb404cfcd5b61b4cdc419a87
The create_context() and ensure_context() sections should be clearer
about the role of these functions, and about possible caveats, like
being called multiple times.
The test creates a GL_TEXTURE_RECTANGLE_ARB texture using
cogl_texture_new_from_foreign and confirms that rendering it works
correctly. If the rectangle texture extension isn't available then
this test always succeeds.
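A rough shape of the test (values and variable names are illustrative,
and the real test also verifies the rendered output):

  GLuint gl_tex;
  CoglHandle tex;

  glGenTextures (1, &gl_tex);
  glBindTexture (GL_TEXTURE_RECTANGLE_ARB, gl_tex);
  glTexImage2D (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA,
                width, height, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, data);

  tex = cogl_texture_new_from_foreign (gl_tex,
                                       GL_TEXTURE_RECTANGLE_ARB,
                                       width, height,
                                       0, 0, /* no waste */
                                       COGL_PIXEL_FORMAT_RGBA_8888);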
http://bugzilla.openedhand.com/show_bug.cgi?id=2015
In _cogl_texture_2d_sliced_foreach_sub_texture_in_region(), don't
assert that the target is GL_TEXTURE_2D; instead conditionalize
normalization on the target.
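A sketch of the conditional normalization as a helper (the name and
shape are illustrative, not the actual code):

  static float
  maybe_normalize (GLenum gl_target, float coord, float extent)
  {
    /* GL_TEXTURE_RECTANGLE_ARB uses unnormalized texel coordinates,
     * so only divide through for GL_TEXTURE_2D targets. */
    return gl_target == GL_TEXTURE_2D ? coord / extent : coord;
  }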
http://bugzilla.openedhand.com/show_bug.cgi?id=2015
If the EGL context is already created then we shouldn't try to create
another one. This was causing problems where one context would be
created from calling _clutter_feature_init and the other was created
from _clutter_backend_get_features. Cogl would set up its state using
the first context and then assume the state was still valid when the
second context started being used, so blending was not working correctly.
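A sketch of the guard (field names are illustrative):

  if (backend_egl->egl_context != EGL_NO_CONTEXT)
    return TRUE;  /* reuse the existing context */

  backend_egl->egl_context =
    eglCreateContext (backend_egl->edpy, config,
                      EGL_NO_CONTEXT, NULL);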
http://bugzilla.openedhand.com/show_bug.cgi?id=2020
The documentation and name of the get_transformation_matrix function
implies that 'matrix' is purely an out parameter. However it wasn't
initializing the matrix before calling the 'apply_transform' virtual
so it was basically just a wrapper for the virtual. The virtual
assumes the matrix parameter is in/out and applies the actor's
transformation on top of any existing transformations. This causes
unexpected semantics that are inconsistent with the documentation.
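A sketch of the fix, initializing the matrix so that it really is a
pure out parameter (the wrapper shape is an assumption based on this
description):

  void
  clutter_actor_get_transformation_matrix (ClutterActor *self,
                                           CoglMatrix   *matrix)
  {
    g_return_if_fail (CLUTTER_IS_ACTOR (self));

    cogl_matrix_init_identity (matrix);

    CLUTTER_ACTOR_GET_CLASS (self)->apply_transform (self, matrix);
  }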
This changes clutter_glx_texture_pixmap_update_area so it defers the
call to glXBindTexImageEXT until our pre "paint" signal handler which
makes clutter_glx_texture_pixmap_update_area cheap to call.
The hope is that mutter can switch to reporting raw damage updates to
ClutterGLXTexturePixmap and we can use these to queue clipped redraws.
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
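A hypothetical usage sketch from an actor implementation; the exact
signature of the internal API is an assumption based on the description
above:

  /* Queue a redraw of just the damaged region, in actor
   * coordinates, instead of the whole stage: */
  ClutterGeometry clip = { x, y, width, height };

  _clutter_actor_queue_clipped_redraw (actor, &clip);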
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» Queuing multiple clipped redraws will result in the bounding box of
all the clip rectangles being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» No empirical data was used to come up with this threshold, so we
may need to tune it.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted you now also
need to override the signal handler and queue your own redraw.
Technically this is a semantic break, but it's assumed that no one
is currently doing this.
This still leaves a few unsolved issues with regards to optimizing sub
stage redraws that need to be addressed in further work so this can only
be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. it is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor, because actors have no way to report a "paint box". (Remember
that actors can draw outside their allocation, and actors with depth
may also be projected outside of their allocation.)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
commit 511e5ceb51 accidentally removed the #ifdef COGL_ENABLE_DEBUG
guards around the "cogl-debug" and "cogl-no-debug" cogl_args[] entries,
which this patch restores.
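The restored guards look roughly like this (the callback names and
descriptions are paraphrased, not copied from the source):

  static GOptionEntry cogl_args[] = {
  #ifdef COGL_ENABLE_DEBUG
    { "cogl-debug", 0, 0, G_OPTION_ARG_CALLBACK, cogl_arg_debug_cb,
      "Cogl debugging flags to set", "FLAGS" },
    { "cogl-no-debug", 0, 0, G_OPTION_ARG_CALLBACK, cogl_arg_no_debug_cb,
      "Cogl debugging flags to unset", "FLAGS" },
  #endif /* COGL_ENABLE_DEBUG */
    { NULL, },
  };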