If a Texture has been set to:
• keep its size synchronized with the image data
• maintain the aspect ratio of the image data
then it should also change its request mode depending on the orientation
of the image data, so that layout managers have a fighting chance of
sizing it correctly.
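A minimal sketch of the idea, assuming private fields named sync_size,
keep_aspect_ratio, image_width and image_height (the real names may
differ):

  /* pick the request mode from the orientation of the image data */
  if (priv->sync_size && priv->keep_aspect_ratio)
    {
      ClutterRequestMode mode;

      mode = priv->image_height > priv->image_width
           ? CLUTTER_REQUEST_WIDTH_FOR_HEIGHT
           : CLUTTER_REQUEST_HEIGHT_FOR_WIDTH;

      clutter_actor_set_request_mode (CLUTTER_ACTOR (texture), mode);
    }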
The marshallers we use for the signals are declared in a private header,
and it stands to reason that they should also be hidden in the shared
object by using the common '_' prefix. We are also using some
g_cclosure_marshal_* symbols directly from GLib, instead of consistently
using the clutter_marshal_* symbols.
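As a sketch of the convention: a signal registration that passed

  g_cclosure_marshal_VOID__VOID

as its marshaller would instead consistently use the underscore-prefixed

  _clutter_marshal_VOID__VOID

so the symbol stays hidden from the shared object.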
In create_pick_material we were using a static boolean to gate when we
show a warning, but that would mean that if the problem recurs between
different textures then the warning will only be shown once. We now use
a private bitfield flag instead, just so we don't spew millions of
warnings if the problem is recurring.
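A minimal sketch of the approach, with a hypothetical flag and warning:

  struct _ClutterTexturePrivate
  {
    ...
    guint seen_create_pick_material_warning : 1;
  };

  if (!priv->seen_create_pick_material_warning)
    g_warning ("Unable to create the pick material");
  priv->seen_create_pick_material_warning = TRUE;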
This adds a boolean "pick-with-alpha" property to ClutterTexture; when
true, it will use the texture's alpha channel to define the actor's shape
when picking.
Users should be aware that it's a bit more expensive to pick textures like
this (so probably best not to blindly enable it on *all* your textures)
since it implies rasterizing the texture during picking whereas we would
otherwise just send a solid-filled quad to the GPU. It will also interrupt
the internal batching of geometry for pick renders, which can otherwise
often be done in a single draw call.
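Enabling it is a single toggle, assuming the property ships with the
usual accessor pair:

  /* opt in to (more expensive) shape-accurate picking */
  clutter_texture_set_pick_with_alpha (texture, TRUE);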
clutter_texture_paint shouldn't need to optimize the case where
paint_opacity == 0 and bail out early, since we've been doing this
optimization for all actors in clutter_actor_paint for a while now.
In 125bded81 some comments were introduced to ClutterTexture
complaining that it can have a Cogl texture before being
realized. Clutter always assumes that the single GL context is current
so there is no need to wait until the actor is realized before setting
a texture. This patch replaces the comments with clarification that
this should not be a problem.
The patch also changes the documentation about the realized state in
various places to clarify that it is acceptable to create any Cogl
resources before the actor is realized.
http://bugzilla.openedhand.com/show_bug.cgi?id=2075
Since using addresses that might change is something the FSF has finally
acknowledged as a plausible scenario (after changing address twice), the
license blurb in the source files should use the URI for getting the
license in case the library did not come with a copy of it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
tracker.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
This replaces code like this:
  if (CLUTTER_ACTOR_IS_VISIBLE (self))
    clutter_actor_queue_redraw (self);
with:
clutter_actor_queue_redraw (self);
clutter_actor_queue_redraw internally knows what can be optimized when
the actor is not visible, but it also knows that the queue_redraw signal
must always be sent in case a ClutterClone is cloning a hidden actor.
Reading back the texture data in unrealize does not seem like a
desirable feature any more; Clutter has evolved a lot since it was
implemented.
What's wrong with it now:
* It takes *a lot* of time to read the data back with glReadPixels(),
* When several textures share the same CoglTexture, the same data can
be read back multiple times,
* If the underlying material uses multiple texture units, only the
first one is copied back,
* In ClutterCairoTexture, we end up having two separate copies of the
data,
* GL actually manages texture memory across system/video memory
for us!
For all the reasons above, let's get rid of the glReadPixels() call in
Texture::unrealize().
Fixes: OHB#1842
cogl_push_draw_buffer, cogl_set_draw_buffer and cogl_pop_draw_buffer are now
deprecated, and new code should use the new cogl_push_framebuffer and
cogl_pop_framebuffer API instead.
Code that previously did:
cogl_push_draw_buffer ();
cogl_set_draw_buffer (COGL_OFFSCREEN_BUFFER, buffer);
/* draw */
cogl_pop_draw_buffer ();
should now be re-written as:
cogl_push_framebuffer (buffer);
/* draw */
cogl_pop_framebuffer ();
As can be seen from the example above, the rename has been used as an
opportunity to remove the redundant target argument from
cogl_set_draw_buffer; it now takes only one call to redirect to an
offscreen buffer. Finally, the term "framebuffer" may be a bit more
familiar to anyone coming from an OpenGL background.
When rendering to an fbo to support clutter_texture_new_from_actor, we
use an fbo the same size as the source actor, but with a viewport the
same size as the stage. We offset the viewport so that when we render the
source actor in its normal transformed stage position it lands on the fbo.
Previously we were rounding the transformed position given as a float by
truncating the fraction (just using a C cast), but that resulted in an
incorrect pixel offset when rendering offscreen, depending on the source
position.
We now simply add 0.5 before casting (or subtract 0.5 for negative
numbers).
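In code, this round-to-nearest looks something like:

  /* round to nearest integer rather than truncating towards zero */
  x_pos = (int) (x >= 0 ? x + 0.5f : x - 0.5f);
  y_pos = (int) (y >= 0 ? y + 0.5f : y - 0.5f);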
For supporting clutter_texture_new_from_actor(): when updating a
ClutterTexture's fbo we previously set up an offset frustum in the
perspective matrix before rendering source actors to an offscreen draw
buffer so as to give a perspective as if it were being drawn at its
original stage location.
Now that Cogl supports offset viewports there is a simpler way...
When we come to render the source actor to our offscreen draw buffer we
now copy the projection matrix from the stage; we create a viewport
that's also the same size as the stage (though larger than the offscreen
draw buffer) and, as before, we apply the modelview transformations of
the source actor's ancestry before painting it.
The only trick we need now is to offset the viewport according to the
transformed (to screen space) allocation of the source actor (something we
required previously too). We negatively offset the stage-sized viewport
such that the smaller offscreen draw buffer is positioned to sit underneath
the source actor in stage coordinates.
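Sketched out, the viewport setup looks roughly like this (variable names
hypothetical):

  /* stage-sized viewport, shifted so the fbo sits under the actor */
  cogl_set_viewport (-actor_x, -actor_y, stage_width, stage_height);

where (actor_x, actor_y) is the transformed stage position of the source
actor's allocation.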
To help keep clutter_texture_paint maintainable, this splits out a big
chunk of standalone code that's responsible for updating the fbo when
clutter_texture_new_from_actor has been used.
When updating the FBO for a source actor (to support
clutter_texture_new_from_actor()) we used to simply set an offscreen draw
buffer to be current, paint the source actor and then explicitly set the
window to be current again. This precluded chaining
clutter_texture_new_from_actor() though, because updating another FBO
associated with a source actor would end up restoring the window as the
current buffer instead of the previous offscreen buffer. Now that we use
Cogl's draw buffer stack, chaining clutter_texture_new_from_actor()
should be possible.
cogl_viewport only accepted a viewport width and height, but there are times
when it's also desirable to have a viewport offset so that a scene can be
translated after projection but before hitting the framebuffer.
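The new entry point presumably takes the offset alongside the size:

  void
  cogl_set_viewport (int x,
                     int y,
                     int width,
                     int height);

with the old cogl_viewport (width, height) behaviour being equivalent to
a (0, 0) offset.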
When cogl_texture_new_from_data() fails in clutter_texture_set_from_data()
and no GError is provided, the app will segfault when dereferencing the
NULL GError ** while emitting the ::load-finished signal.
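A minimal sketch of the fix (the actual emission site may differ): only
dereference the caller's GError ** when one was actually provided:

  g_signal_emit (texture, signals[LOAD_FINISHED], 0,
                 error != NULL ? *error : NULL);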
Based on a patch by: Haakon Sporsheim <haakon.sporsheim@gmail.com>
http://bugzilla.openedhand.com/show_bug.cgi?id=1806
The CLUTTER_TEXTURE_IN_CLONE_PAINT was used with the old CloneTexture
actor; now that we have ClutterClone nothing sets the private flag
anymore, and the flag itself is not needed.
Removed the G_PARAM_CONSTRUCT flag from the property registration of
"load-async" and "load-data-async". It made it impossible to use only
load-data-async, as the async loading state would be unset when
load-async got set to its default FALSE value.
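A sketch of the corrected registration (nick and blurb strings are
placeholders):

  pspec = g_param_spec_boolean ("load-async",
                                "Load asynchronously",
                                "...",
                                FALSE,
                                CLUTTER_PARAM_READWRITE);

i.e. plain read-write flags, without G_PARAM_CONSTRUCT forcing the FALSE
default to be applied at construction time.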
In the int-to-float switch for actor properties, the ::size-change signal
was moved to use floats instead of integers. Sub-pixel precision for image
size is meaningless, though, so we should revert it back to ints.
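With the revert, the signal handler signature goes back to integers:

  void (* size_change) (ClutterTexture *texture,
                        gint            width,
                        gint            height);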
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1659
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.
Now that we typically have premultiplied data stored in Cogl
textures, when fetching a texture into local data for temporary
storage, use a premultiplied format to avoid an unpremultiply/
premultiply roundtrip.
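For example, reading back into a premultiplied format (a sketch):

  rowstride = cogl_texture_get_rowstride (tex);
  cogl_texture_get_data (tex,
                         COGL_PIXEL_FORMAT_RGBA_8888_PRE,
                         rowstride,
                         data);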
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Many operations, like mixing two textures together or alpha-blending
onto a destination with alpha, are done most logically if texture data
is in premultiplied form. We also have many sources of premultiplied
texture data, like X pixmaps, FBOs, cairo surfaces. Rather than trying
to work with two different types of texture data, simplify things by
always premultiplying texture data before uploading to GL.
Because the default blend function is changed to accommodate this,
uses of pure-color CoglMaterial need to be adapted to add
premultiplication.
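For a pure-color material, that means premultiplying the color before
setting it, along these lines:

  CoglColor color;

  cogl_color_set_from_4ub (&color, 0xff, 0x00, 0x00, 0x80);
  cogl_color_premultiply (&color);
  cogl_material_set_color (material, &color);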
gl/cogl-texture.c gles/cogl-texture.c: Always premultiply
non-premultiplied texture data before uploading to GL.
cogl-material.c cogl-material.h: Switch the default blend functions
to ONE, ONE_MINUS_SRC_ALPHA so they work correctly with premultiplied
data.
cogl.c: Make cogl_set_source_color() premultiply the color.
cogl.h.in color-material.h: Add some documentation about
premultiplication and its interaction with color values.
cogl-pango-render.c clutter-texture.c tests/interactive/test-cogl-offscreen.c:
Use premultiplied colors.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
RGBA data in X pixmaps and in FBOs is already premultiplied; use
the right format when creating cogl textures.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
The load-finished signal has a GError* argument which is meant to
signify whether the loading was successful. However many of the
places in ClutterTexture that emit this signal directly pass their
'error' variable, which is a GError ** and may or may not be NULL
completely independently of whether there was an error. If the
argument were dereferenced it would probably crash.
The test-texture-async interactive test case should also verify
that the ::load-finished signal is correctly emitted.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1622
The texture filters are now a property of the material layer rather
than the texture object. Whenever a texture is painted with a material
it sets the filters on all of the GL textures in the Cogl texture. The
filter is cached so that it won't be changed unnecessarily.
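The per-layer API presumably looks like this (a sketch, with mipmapped
minification and linear magnification as an example):

  cogl_material_set_layer_filters (material, 0,
                                   COGL_MATERIAL_FILTER_LINEAR_MIPMAP_LINEAR,
                                   COGL_MATERIAL_FILTER_LINEAR);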
The automatic mipmap generation has changed so that the mipmaps are
only generated when the texture is painted instead of every time the
data changes. Changing the texture sets a flag to mark that the
mipmaps are dirty. This works better if the FBO extension is available
because we can use glGenerateMipmap. If the extension is not available
it will temporarily enable automatic mipmap generation and reupload
the first pixel of each slice. This requires tracking the data for the
first pixel.
The COGL_TEXTURE_AUTO_MIPMAP flag has been replaced with
COGL_TEXTURE_NO_AUTO_MIPMAP so that it will default to
auto-mipmapping. The mipmap generation is now effectively free if you
are not using a mipmap filter mode, so you would only want to disable
it if you had some special reason to generate your own mipmaps.
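Opting out now means passing the new flag at creation time, e.g.:

  texture = cogl_texture_new_from_file (filename,
                                        COGL_TEXTURE_NO_AUTO_MIPMAP,
                                        COGL_PIXEL_FORMAT_ANY,
                                        &error);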
ClutterTexture no longer has to store its own copy of the filter
mode. Instead it stores it in the material and the property is
directly set and read from that. This fixes problems with the filters
getting out of sync when a cogl handle is set on the texture
directly. It also avoids the mess of having to re-realize the texture
if the filter quality changes to HIGH, because Cogl will take care of
generating the mipmaps if needed.
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask and flags. This gives us room for expansion
without breaking API/ABI, and allows us to encode more information for
the allocation process instead of just changes of absolute origin.
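The virtual function now takes a flags argument:

  void (* allocate) (ClutterActor           *actor,
                     const ClutterActorBox  *box,
                     ClutterAllocationFlags  flags);

with CLUTTER_ABSOLUTE_ORIGIN_CHANGED covering what the old boolean
expressed, and room left for more flags later.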
The documentation for ClutterTexture's set_from_rgb_data() and
set_from_yuv_data() says:
Note: This function is likely to change in future versions.
This is not true, since they'll remain for the whole 1.x API cycle.
The setup_viewport() function should only be used by Clutter and
not by application code.
It can be emulated by changing the Stage size and perspective and
requeueing a redraw after calling clutter_stage_ensure_viewport().
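A sketch of the emulation described above (assuming a stage pointer and
a filled-in ClutterPerspective):

  clutter_actor_set_size (stage, new_width, new_height);
  clutter_stage_set_perspective (CLUTTER_STAGE (stage), &perspective);
  clutter_stage_ensure_viewport (CLUTTER_STAGE (stage));
  clutter_actor_queue_redraw (stage);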
The CoglTexture constructors expose the "max-waste" argument for
controlling the maximum amount of wasted areas for slicing or,
if set to -1, disables slicing.
Slicing is really relevant only for large images that are never
repeated, so it's a useful feature only in controlled use cases.
Specifying the amount of wasted area is, on the other hand, just
a way to mess up this feature; 99% of the time, you either pull this
number out of thin air, hoping it's right, or you try to do the
right thing and choose the wrong number anyway.
Instead, we can use the CoglTextureFlags to control whether the
texture should not be sliced (useful for Clutter-GST and for the
texture-from-pixmap actors) and provide a reasonable value for
enabling the slicing ourselves. At some point, we might even
provide a way to change the default at compile time or at run time
for particular platforms.
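For instance, creating a texture with slicing disabled (a sketch):

  tex = cogl_texture_new_with_size (width, height,
                                    COGL_TEXTURE_NO_SLICING,
                                    COGL_PIXEL_FORMAT_RGBA_8888);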
Since max_waste is gone, the :tile-waste property of ClutterTexture
becomes read-only, and it proxies the cogl_texture_get_max_waste()
function.
Inside Clutter, the only cases where the max_waste argument was
not set to -1 are in the Pango glyph cache (which is a POT texture
anyway) and inside the test cases where we want to force slicing;
for the latter we can create larger textures that will be bigger than
the threshold we set.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Signed-off-by: Neil Roberts <neil@linux.intel.com>
Now that everything is float, the marshalling function of the
size-change signal should reflect that fact.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Most of the operations involving the texture's allocated area require
floats -- either for computations or for setting the geometry into
COGL. So it doesn't make any sense to use get_allocation_coords() and
cast everything to floats.
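That is, instead of get_allocation_coords() plus casts, fetch the box
directly; a sketch:

  ClutterActorBox box;

  clutter_actor_get_allocation_box (self, &box);
  /* box.x1/y1/x2/y2 are already floats */
  cogl_rectangle (0, 0, box.x2 - box.x1, box.y2 - box.y1);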
ClutterTexture has many properties that can only be accessed using
the GObject API. This is fairly inefficient and makes binding the
class overly complicated.
The Texture class should have accessor methods for all its properties,
properly documented.
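For example, instead of g_object_get()/g_object_set() round-trips
(assuming an accessor pair per property, following the usual Clutter
naming scheme):

  clutter_texture_set_keep_aspect_ratio (texture, TRUE);
  sync_size = clutter_texture_get_sync_size (texture);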
The stencil buffer is always cleared the first time a clip is used
that needs it, and the stencil test is disabled otherwise, so there is
no need to clear it before a paint.