Some internal symbols used for the GLES 2 wrapper were accidentally
being exported. This prepends an underscore to them so they won't
appear in the shared library.
It is often useful to determine if one actor is an ancestor of
another. Add a method to do that.
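A minimal usage sketch, assuming the new method ends up being called
clutter_actor_contains():

    /* TRUE if "descendant" sits anywhere inside "ancestor"'s sub-tree */
    if (clutter_actor_contains (ancestor, descendant))
      g_debug ("the actor is inside the ancestor's sub-tree");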
http://bugzilla.openedhand.com/show_bug.cgi?id=2162
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
By default, ShaderEffect creates a fragment shader; in order to be able
to deprecate ClutterShader we need a way for ShaderEffect sub-classes to
create a vertex shader if needed, by using a write-only, construct-only
property.
ClutterShader has, internally, a ClutterShaderType enumeration that can
be used exactly for this. We just need to expose it and create a GObject
property for ClutterShaderEffect.
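A hedged sketch of what instantiating such a sub-class could look like,
assuming the construct-only property ends up being called "shader-type"
and reuses the ClutterShaderType values:

    #include <clutter/clutter.h>

    /* MY_TYPE_VERTEX_EFFECT is a hypothetical ShaderEffect sub-class */
    ClutterEffect *effect =
      g_object_new (MY_TYPE_VERTEX_EFFECT,
                    "shader-type", CLUTTER_VERTEX_SHADER,
                    NULL);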
Whenever a path or a rectangle is added to the clip stack it now also
stores a screen space bounding box in the entry. Then when the clip
stack is flushed the bounding box is first used to set up the
scissor. That way when we eventually come to use the stencil buffer
the clear will be affected by the scissor so we don't have to clear
the entire buffer.
_cogl_path_get_bounds is no longer static and is exported in
cogl-path-private.h so that it can be used in the clip stack code. The
old version of the function returned x/y and width/height. However
this was mostly used to call cogl_rectangle which takes x1/y1
x2/y2. The function has been changed to just directly return the
second form because it is more useful. Anywhere that was previously
using the function now just directly looks at path->path_nodes_min and
path->path_nodes_max instead.
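A sketch of how a caller can now feed the returned bounds straight into
cogl_rectangle(); the exact parameter order of the private function is an
assumption:

    float x_1, y_1, x_2, y_2;

    _cogl_path_get_bounds (path, &x_1, &y_1, &x_2, &y_2);
    cogl_rectangle (x_1, y_1, x_2, y_2);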
The transform_point function takes a modelview matrix, projection
matrix and a viewport and performs all three transformations on a
point to give a Cogl window coordinate. This is useful in a number of
places in Cogl so this patch moves it to cogl.c and adds it to
cogl-internal.h
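Roughly, the function boils down to the following sketch; the exact name,
prototype and y-flip convention here are assumptions based on the
description above:

    static void
    transform_point (const CoglMatrix *modelview,
                     const CoglMatrix *projection,
                     const float      *viewport, /* x, y, width, height */
                     float            *x,
                     float            *y)
    {
      float z = 0.0f, w = 1.0f;

      /* eye space, then clip space */
      cogl_matrix_transform_point (modelview, x, y, &z, &w);
      cogl_matrix_transform_point (projection, x, y, &z, &w);

      /* perspective divide ... */
      *x /= w;
      *y /= w;

      /* ... and viewport transform, flipping y for a top-left origin */
      *x = viewport[0] + (*x + 1.0f) * viewport[2] / 2.0f;
      *y = viewport[1] + (1.0f - *y) * viewport[3] / 2.0f;
    }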
For sliced 2D textures, _cogl_texture_2d_sliced_get_data() uses the
bitmap width, instead of the rowstride, when memcpy()ing into the
dest buffer.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
We only had getters for the red, green, blue and alpha channels of a
color. This meant that, if you wanted to change, say, the alpha component
of a color, you would need to query the red, green and blue channels and
use set_from_4ub() or set_from_4f().
Instead of this, just provide some setters for CoglColor, using the same
naming scheme as the existing getters.
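A hedged example of the intended usage, assuming the setters mirror the
getter names (e.g. cogl_color_set_alpha()):

    CoglColor color;

    cogl_color_set_from_4ub (&color, 0xff, 0x00, 0x00, 0xff);
    /* tweak only the alpha channel, leaving red/green/blue untouched */
    cogl_color_set_alpha (&color, 0.5);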
For some operations on pre-multiplied colors (say, replace the alpha
value), you need to unpremultiply the color.
This patch provides the counterpart to cogl_color_premultiply().
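The intended round trip looks something like this sketch, where
unpremultiplying divides the color channels by the alpha value (the
inverse of what premultiply does):

    /* color currently holds premultiplied components */
    cogl_color_unpremultiply (&color);
    cogl_color_set_alpha (&color, 0.5);   /* setter name assumed */
    cogl_color_premultiply (&color);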
The place where we actually change the framebuffer is
_cogl_framebuffer_flush_state(), so if we changed to a new frame buffer
we need to initialize the color bits there.
http://bugzilla.openedhand.com/show_bug.cgi?id=2094
OpenGL 3.0 deprecated querying of the GL_{RED,GREEN,BLUE}_BITS
constants, and the FBO extension provides a mechanism to query for the
color buffer sizes which *should* work even with the default
framebuffer. Unfortunately, this doesn't seem to hold for Mesa - so we
just use this for the offscreen CoglFramebuffer type, and we fall back
to glGetIntegerv() for the onscreen one.
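A sketch of the query for the offscreen case, using the GL 3.0 /
ARB_framebuffer_object tokens (the surrounding variable names are
illustrative):

    GLint red_bits = 0;

    if (is_offscreen)
      glGetFramebufferAttachmentParameteriv (GL_FRAMEBUFFER,
                                             GL_COLOR_ATTACHMENT0,
                                             GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE,
                                             &red_bits);
    else
      glGetIntegerv (GL_RED_BITS, &red_bits);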
http://bugzilla.openedhand.com/show_bug.cgi?id=2094
DeformEffect is an abstract class that should be used to write effects
that change the geometry of an actor before submitting it to the GPU.
Just like the ShaderEffect class, DeformEffect renders the actor to
which it has been applied into an FBO; then it creates a mesh and stores
it inside a VBO. Sub-classes can control vertex attributes like
position, texel coordinates and the color.
Instead of using the stage offsets when painting we can simply translate
the current modelview. This allows sub-classes to fully override the
paint_target() virtual function without chaining up.
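A hedged sketch of what a DeformEffect sub-class hook could look like; the
vfunc name and the use of CoglTextureVertex here are assumptions based on
the description above:

    static void
    my_ripple_effect_deform_vertex (ClutterDeformEffect *effect,
                                    gfloat               width,
                                    gfloat               height,
                                    CoglTextureVertex   *vertex)
    {
      /* displace each vertex vertically to create a simple ripple */
      vertex->y += 10.0f * sinf (vertex->x / width * 2.0f * G_PI);
    }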
This function had two problems. Firstly it would clear the enable
blend flag before calling pre_change_notify so that if blending was
previously enabled the journal would end up being flushed while the
flag was still cleared. Secondly it would call the pre change notify
whenever blending is needed regardless of whether it was already
needed previously.
This was causing problems in test-depth.
This adds a _cogl_bind_gl_texture_transient function that should be used
instead of glBindTexture, so we can keep a consistent cache of the
textures bound to each texture unit and avoid some redundant binding.
As part of an effort to improve the architecture of CoglMaterial
internally this overhauls how we flush layer state to OpenGL by adding a
formal backend abstraction for fragment processing and further
formalizing the CoglTextureUnit abstraction.
There are three backends: "glsl", "arbfp" and "fixed". The fixed backend
uses the OpenGL fixed function APIs to setup the fragment processing,
the arbfp backend uses code generation to handle fragment processing
using an ARBfp program, and the GLSL backend is currently only there as
a formality to handle user programs associated with a material. (i.e.
the glsl backend doesn't yet support code generation)
The GLSL backend has the highest precedence, then arbfp and finally the
fixed backend. If a backend can't support some particular CoglMaterial
feature then it will fall back to the next backend.
This adds the following new COGL_DEBUG options:
* "disable-texturing" disables all texturing, as expected
* "disable-arbfp" always makes the arbfp backend fall back
* "disable-glsl" always makes the glsl backend fall back
* "show-source" shows the code generated by the arbfp/glsl backends
_cogl_atlas_texture_blit_begin binds a texture to use as the
destination and it expects it to stay bound until
_cogl_atlas_texture_end_blit is called. However there was a call to
_cogl_journal_flush directly after setting up the blit state which
could cause the wrong texture to be bound. This just moves the flush
to before the call to _cogl_atlas_texture_blit_begin.
This was breaking test-cogl-sub-texture.
1) Always flush when migrating textures out of an atlas: although the
original texture data will remain valid in the original texture, we can't
assume that journal entries have resolved the GL texture that will be
used; that is only true if a layer0_override has been used.
2) Don't flush at the point of creating a new atlas; simply flush
immediately before reorganizing an atlas. This means we are now assuming
that we will never see recursion due to atlas textures being modified
during a journal flush. This means it's the responsibility of the
primitives code to _ensure_mipmaps for example not the responsibility of
_cogl_material_flush_gl_state.
We want to make sure that the material state flushing code will never
result in changes to the texture storage for that material. So for
example mipmaps need to be ensured by the primitives code.
Changes to the texture storage will invalidate the texture coordinates
in the journal and we want to avoid a recursion of journal flushing.
This adds a way to compare two CoglMatrix structures to see if they
represent the same transformations. memcmp can't be used because a
CoglMatrix contains private flags and padding.
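A sketch of the comparison, assuming the 16 float components are laid out
at the start of the structure, before the private flags and padding:

    gboolean
    cogl_matrix_equal (const CoglMatrix *a,
                       const CoglMatrix *b)
    {
      /* only compare the matrix components, not the private fields */
      return memcmp (a, b, sizeof (float) * 16) == 0;
    }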
THIS IS A WORK IN PROGRESS
Mesa is building a big shader when using ARB_texture_env_combine. The
idea is to bypass that computation, do it ourselves and cache the
compiled program in a CoglMaterial.
For now that feature can be enabled by setting the COGL_PIPELINE
environment variable to "arbfp". COGL_SHOW_FP_SOURCE can be set to a
non-empty string to dump the fragment program source too.
TODO:
* fog (really easy, using OPTION)
* support tex env combiner operands, DOT3, ADD_SIGNED, INTERPOLATE
  combine modes (needs refactoring of how temporary variables are
  generated) (not too hard)
* alpha testing for GLES 2.0?
The Cogl context now has a feature_flags_private enum that will allow us
to query and use OpenGL features without exposing them in the public
API.
The ARB_fragment_program extension is the first user of those flags.
Looking for this extension only happens in the gl driver, as the gles
drivers will not expose it.
One can use _cogl_features_available_private() to check for the
availability of such private features.
While at it, reindent cogl-internal.h as described in CODING_STYLE.
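A minimal sketch of how such a check might look; the flag name used here
is an assumption:

    if (_cogl_features_available_private (COGL_FEATURE_PRIVATE_ARB_FP))
      {
        /* safe to generate and bind an ARBfp program here */
      }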
Every time we request a CoglPangoFontMap, either internally or
externally, we should have one available.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
If we have the GLX_SGI_video_sync extension then it's possible to always
keep track of the video sync counter each time we call glXSwapBuffers
or do a sub stage blit. This then allows us to avoid waiting before
issuing a blit if we can see that the counter has already progressed.
Also since we expect that glXCopySubBuffer is synchronized to the vblank
we don't need to use glFinish () in conjunction with the vblank wait
since the vblank wait's only purpose is to add a delay.
The GLX_SGI_video_sync spec explicitly says that it's only supported for
direct contexts so we don't setup up the function pointers if
glXIsDirect () returns GL_FALSE.
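The counter check boils down to something like this sketch (the SGI
function pointers are assumed to have been resolved already, and
last_video_sync stands in for wherever the counter from the previous
swap/blit was stored):

    static void
    maybe_wait_for_vblank (unsigned int *last_video_sync)
    {
      unsigned int current_count;

      glXGetVideoSyncSGI (&current_count);
      if (current_count == *last_video_sync)
        /* nothing has been displayed since our last swap/blit, so wait
         * for the next vblank before issuing the blit */
        glXWaitVideoSyncSGI (2, (current_count + 1) % 2, &current_count);

      *last_video_sync = current_count;
    }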
Neither glXCopySubBuffer nor glBlitFramebuffer is integrated with the
swap interval of a framebuffer, which means that when we do partial stage
updates (as Mutter does in response to window damage) the blits aren't
throttled. Applications that throw lots of damage events at the
compositor can therefore cause Clutter to run flat out, taking up all the
system resources by issuing more blits than can even be seen.
This patch now makes sure we use the GLX_SGI_video_sync or a
DRM_VBLANK_RELATIVE ioctl to throttle blits to the vblank frequency as
we do when using glXSwapBuffers.
Currently glXCopySubBufferMESA is used for sub stage redraws, but if a
driver does not support GLX_MESA_copy_sub_buffer we fall back to redrawing
the complete stage, which isn't really optimal.
So instead of directly falling back to complete redraws, try using
GL_EXT_framebuffer_blit to do the BACK to FRONT buffer copies.
http://bugzilla.openedhand.com/show_bug.cgi?id=2128
At two places in cogl_wrap_prepare_for_draw it was trying to loop over
the texture units to flush some state. However it was retrieving the
texture unit pointer using w->active_texture_unit instead of the loop
index so it would end up with the wrong state.
Also in glEnableClientState it was using the active unit instead of
the client active unit.
Layout properties work similarly to child properties, with the added
headache that they require the 3-tuple:
( layout manager, container, actor )
to be valid in order to be inspected, parsed and applied. This means
using the newly added back-pointer from the container to the layout
manager and then rejigging a bit how the ScriptParser handles the
unresolved properties.
Similarly to the child properties, which use the "child::" prefix, the
layout manager properties use the "layout::" prefix and are defined with
the child of a container holding a layout manager.
Store a back pointer of the layout manager inside the container using
the GObject instance data. This introduces a change in the implementation
of ClutterLayoutManager, though it's still binary compatible.
• 3 general fixes (typos, copy/paste),
• ignore cogl-object-private.h,
• cogl_fixed_atani() was in reality cogl_fixed_atan(), fixed in commit
43564f05.
• Fix the cogl-vector section: sections must have a </SECTION> tag at
the end. Also the cogl-vector section was added in the middle of the
cogl-buffer one. Let's shuffle it out and add that </SECTION> tag.
GCC can catch errors when it knows that a variadic function must be
ended with NULL. Let's use the GLib macro encapsulating that GCC
attribute for clutter_animator_set() and clutter_state_set().
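For illustration only (this is not the exact Clutter prototype), the
annotation looks like this; G_GNUC_NULL_TERMINATED expands to GCC's
sentinel attribute:

    void clutter_animator_set (ClutterAnimator *animator,
                               gpointer         first_object,
                               const gchar     *first_property_name,
                               ...) G_GNUC_NULL_TERMINATED;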
As with a351ff2af earlier, distributing headers generated at configure
time conflicts with out of tree builds as the distributed headers will
be included first instead of including the generated ones.
This provides a mechanism for associating private data with any
CoglObject. We expect Clutter will use this to associate weak materials
with normal materials.
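A hedged sketch of the intended pattern; the exact entry point and
callback names used here are assumptions:

    static void
    weak_material_destroy_cb (void *user_data)
    {
      /* dispose of the weak material here */
    }

    static CoglUserDataKey weak_material_key;

    cogl_object_set_user_data (COGL_OBJECT (material),
                               &weak_material_key,
                               weak_material,
                               weak_material_destroy_cb);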
clutter-json.h is generated at configure time; we should not distribute it.
This caused a build issue when compiling from a tarball with out of tree
builds, as we ended up with two clutter-json.h files, one in
$(top_srcdir)/json and the other in $(top_builddir)/json, and picked up
the wrong one ($(top_srcdir)/json is included first in the include search
path).
Stacking multiple effects sub-classing ClutterOffscreenEffect requires
a small fix in the code that computes the screen coordinates of the
actor to position the FBO correctly with regards to the stage.
Since ClutterEffect is an ActorMeta it should be possible to animate the
properties of named effects using the @effects syntax, just like it
happens for actions and constraints.
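A hedged example of the syntax, using a hypothetical "blur" effect with a
"radius" property:

    clutter_actor_animate (actor, CLUTTER_EASE_IN_CUBIC, 500,
                           "@effects.blur.radius", 4.0,
                           NULL);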
Sub-classes of ShaderEffect currently have to get the handle for the
Cogl shader and call cogl_shader_source(); this makes it awkward to
implement a ShaderEffect, and it exposes handles and Cogl API that we
might want to change in the future.
We should provide a ClutterShaderEffect method that allows sub-classes to
(safely) set the shader source at the right time.
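A sketch of the intended usage, assuming the new method is called
clutter_shader_effect_set_shader_source(); my_effect_parent_class and
my_glsl_source are placeholders for the sub-class boilerplate and its
GLSL string:

    static void
    my_effect_set_actor (ClutterActorMeta *meta,
                         ClutterActor     *actor)
    {
      /* chain up to let ShaderEffect set up its state */
      CLUTTER_ACTOR_META_CLASS (my_effect_parent_class)->set_actor (meta, actor);

      if (actor != NULL)
        clutter_shader_effect_set_shader_source (CLUTTER_SHADER_EFFECT (meta),
                                                 my_glsl_source);
    }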
The OffscreenEffect should set up the offscreen draw buffer so that it
has the same projection and modelview as if it were on screen; we achieve
that by setting up the viewport to be the same size as the stage but with
an initial offset given by the left-most vertex of the actor.
When we paint the texture attached to the FBO we then set up the
modelview matrix of the on screen draw buffer so that it's the same as
the stage one: this way, the texture will be painted in screen
coordinates and it will occupy the same area as the actor would have
had.
A simple, GLSL shader-based blur effect.
The blur shader is taken straight from the test-shader.c interactive
test case. It's a fairly clunky, inefficient and visually incorrect
implementation of a box blur, but it's all we have right now until I
figure out a way to do multi-pass shading with the current API.
The ShaderEffect class is an abstract base type for shader-based
effects. GLSL-based effects should be implemented by sub-classing
ShaderEffect and overriding ActorMeta::set_actor() to set the source
code of the shader, and Effect::pre_paint() to update the uniform
values, if any.
The ShaderEffect has a generic API for sub-classes to set the values
of the uniforms defined by their shaders, and it uses the shader
types we defined for ClutterShader, to avoid re-inventing the wheel
every time.
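For instance, a sub-class might update a uniform from its pre_paint()
override along these lines (a sketch; the set_uniform() entry point and
vfunc signature are assumed, and "u_time" / my_effect_get_elapsed() are
placeholders):

    static gboolean
    my_effect_pre_paint (ClutterEffect *effect)
    {
      clutter_shader_effect_set_uniform (CLUTTER_SHADER_EFFECT (effect),
                                         "u_time",
                                         G_TYPE_FLOAT, 1,
                                         my_effect_get_elapsed (effect));

      return CLUTTER_EFFECT_CLASS (my_effect_parent_class)->pre_paint (effect);
    }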
The OffscreenEffect class is meant to be used to implement Effect
sub-classes that create an offscreen framebuffer and redirect the
actor's paint sequence there. The OffscreenEffect is useful for
effects using fragment shaders.
Any shader-based effect being applied to an actor through an offscreen
buffer should be used before painting the resulting target material and
not for every actor. This means that doing:
pre_paint: cogl_program_use(program)
set up offscreen buffer
paint: [ actors ] → offscreen buffer → target material
post_paint: paint target material
cogl_program_use(null)
is not correct. Unfortunately, we cannot really do:
post_paint: cogl_program_use(program)
paint target material
cogl_program_use(null)
Because the OffscreenEffect::post_paint() implementation also pops the
offscreen buffer and re-instates the previous framebuffer:
post_paint: cogl_program_use(program)
change frame buffer ← ouch!
paint target material
cogl_program_use(null)
One way to fix it is to allow using the shader right before painting
the target material - which means adding a new virtual inside the
OffscreenEffect class vtable in addition to the ones defined by the
parent Effect class.
The newly-added paint_target() virtual allows the correct sequence of
actions by adding an entry point for sub-classes to wrap the "paint
target material" operation with custom code, in order to implement the
case above correctly as:
post_paint: change frame buffer
cogl_program_use(program)
paint target material
cogl_program_use(null)
The added upside is that sub-classes of OffscreenEffect involving
shaders really just need to override the prepare() and paint_target()
virtuals, since the pre_paint() and post_paint() do all that's needed.
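Concretely, a shader-based sub-class can now do something like this
sketch, where "program" stands in for whatever CoglHandle the sub-class
created for its shader:

    static void
    my_effect_paint_target (ClutterOffscreenEffect *effect)
    {
      MyEffect *self = MY_EFFECT (effect);

      cogl_program_use (self->program);

      /* chaining up paints the offscreen texture through the
       * target material */
      CLUTTER_OFFSCREEN_EFFECT_CLASS (my_effect_parent_class)->paint_target (effect);

      cogl_program_use (COGL_INVALID_HANDLE);
    }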
ClutterEffect is an abstract class that should be used to apply effects
on generic actors.
The ClutterEffect class just defines what an effect should implement; it
could be defined as an interface, but we might want to add some default
behavior dependent on the internal state at a later point.
The effect API applies to any actor, so we need to provide a way to
assign an effect to an actor, and let ClutterActor call the Effect
methods during the paint sequence.
Once an effect is attached to an actor we will perform the paint in this
order:
• Effect::pre_paint()
• Actor::paint signal emission
• Effect::post_paint()
Since an effect might collide with the Shader class, we either allow a
shader or an effect for the time being.
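A hedged usage sketch, assuming the actor-side entry point is
clutter_actor_add_effect():

    /* my_effect_new() stands in for any ClutterEffect sub-class */
    clutter_actor_add_effect (actor, my_effect_new ());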
When getting the relative modelview matrix we need to reset it to the
stage's initial state or, at least, initialize it to the identity
matrix, instead of assuming we have an empty stack.
This replaces the use of CoglHandle with strongly typed CoglClipStack *
pointers instead. The only function not converted for now is
cogl_is_clip_stack which will be done in a later commit.
This replaces the use of CoglHandle with strongly typed CoglBitmap *
pointers instead. The only function not converted for now is
cogl_is_bitmap which will be done in a later commit.
This replaces the use of CoglHandle with strongly typed CoglPath *
pointers instead. The only function not converted for now is
cogl_is_path which will be done in a later commit.
This patch makes it so that only the backwards compatibility
COGL_HANDLE_DEFINE macro defines a _cogl_xyz_handle_new function. The
new COGL_OBJECT_DEFINE macro only defines a _cogl_xyz_object_new
function.
It's valid C to declare a function omitting its prototype, but it seems
to be good practice to always declare a function with its corresponding
prototype.
While this is totally fine (None is 0L and, in a pointer context, will be
converted to the right internal NULL representation, which could be a
value with some bits set to 1), I believe it's clearer to use NULL instead
of None when we are talking about pointers.
While this is totally fine (0 in a pointer context will be converted to
the right internal NULL representation, which could be a value with some
bits set to 1), I believe it's clearer to use NULL in a pointer context.
It seems that, in most cases, it's more of an oversight than a deliberate
choice to use FALSE/0 as NULL, e.g. copying a _COGL_GET_CONTEXT (ctx, 0)
or a g_return_val_if_fail (cond, 0) from a function returning a gboolean.
This replaces the use of CoglHandle with strongly typed CoglBuffer *
pointers instead. The only function not converted for now is
cogl_is_buffer which will be done in a later commit.
CoglHandle is a common source of complaints and confusion because people
expect a "handle" to be some form of integer type with some indirection
to look up the corresponding objects as opposed to a direct pointer.
This patch starts by renaming CoglHandle to CoglObject * and creating
corresponding cogl_object_ APIs to replace the cogl_handle ones.
The next step, though, is to remove all use of CoglHandle in the Cogl
APIs and replace it with strongly typed pointer types such as
CoglMaterial * or CoglTexture *; all occurrences of COGL_INVALID_HANDLE
can then just use NULL instead.
After this we will consider switching to GTypeInstance internally so we
can have inheritance for our types and hopefully improve how we handle
bindings.
Note all these changes will be done in a way that maintains the API and
ABI.
In create_pick_material we were using a static boolean to gate when we
show a warning, but that meant that if the problem recurred across
different textures the warning would only be shown once. We now have a
private bitfield flag instead, just so we don't spew millions of warnings
if the problem keeps recurring.
This adds a boolean "pick-with-alpha" property to ClutterTexture; when
true, it will use the texture's alpha channel to define the actor's shape
when picking.
Users should be aware that it's a bit more expensive to pick textures like
this (so probably best not to blindly enable it on *all* your textures)
since it implies rasterizing the texture during picking whereas we would
otherwise just send a solid filled quad to the GPU. It will also interrupt
the internal batching of geometry for pick renders which can otherwise often
be done in a single draw call.
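Enabling it is a one-liner on the GObject property (the property name
comes from above; a dedicated setter may exist as well):

    g_object_set (texture, "pick-with-alpha", TRUE, NULL);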
Since the default alpha test function of GL_ALWAYS is equivalent to
GL_ALPHA_TEST being disabled, we don't need to worry about
enabling/disabling it when flushing material state; instead it's enough
to leave it always enabled. We will assume that any driver worth its salt
won't incur any additional cost for glEnable (GL_ALPHA_TEST) + GL_ALWAYS
vs glDisable (GL_ALPHA_TEST).
This patch simply calls glEnable (GL_ALPHA_TEST) in cogl_create_context.
clutter_texture_paint shouldn't need to optimize the case where
paint_opacity == 0 and bail out, since we've been doing this optimization
for all actors in clutter_actor_paint for a while now.
When _cogl_disable_other_texcoord_arrays is called it disables the
necessary texcoord arrays and then removes the bits for the disabled
arrays in ctx->texcoord_arrays_enabled. However, none of the places that
call the function then set any bits in ctx->texcoord_arrays_enabled, so
the arrays they enable would never get marked and would never get
disabled again.
This patch just changes it so that _cogl_disable_other_texcoord_arrays
also sets the corresponding bits in ctx->texcoord_arrays_enabled.
Since emit_drag_end() can be called from a MOTION event capture we
cannot call clutter_event_get_button(). We should, instead, use the
press_button value because if we're emitting ::drag-end it means we
also emitted ::drag-begin and the value is valid.
We need to tell the introspection scanner all the dependencies we
require, including the pkg-config name to use when compiling the
GIR file into a typelib object.
New virtual functions cannot be added wherever we want if we need to
preserve the ABI.
Also, the coding style should match the rest of ClutterActor and
Clutter's own coding style.
When destroying an Actor the various ActorMeta instance should already
be disposed - unless something is holding a reference to them, in which
case we should use the ::destroy signal to unset the ActorMeta:actor
back pointer.
ClickAction adds "clickable" semantics to an actor. It provides all
the business logic to emit a high-level "clicked" signal from the
various low-level signals inside ClutterActor.
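A hedged usage sketch (clutter_click_action_new() and
clutter_actor_add_action() are assumed to be the public entry points):

    static void
    on_clicked (ClutterClickAction *action,
                ClutterActor       *actor,
                gpointer            user_data)
    {
      /* high-level click handling goes here */
    }

    ClutterAction *action = clutter_click_action_new ();

    g_signal_connect (action, "clicked", G_CALLBACK (on_clicked), NULL);
    clutter_actor_add_action (actor, action);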
The DragAction should, by default, drag the actor to which it has been
applied, instead of delegating what to do to the developer. If custom
code needs to override it, g_signal_stop_emission_by_name() can be called
to stop the default handler from ever running.
Instead of directly using a guint32 to store a bitmask for each used
texcoord array, it now stores them in a CoglBitmask. This removes the
limitation of 32 layers (although there are still other places in Cogl
that imply this restriction). To disable texcoord arrays code should
call _cogl_disable_other_texcoord_arrays which takes a bitmask of
texcoord arrays that should not be disabled. There are two extra
bitmasks stored in the CoglContext which are used temporarily for this
function to avoid allocating a new bitmask each time.
http://bugzilla.openedhand.com/show_bug.cgi?id=2132
This implements a growable array of bits called CoglBitmask. The
CoglBitmask is intended to be cheap if less than 32 bits are used. If
more bits are required it will allocate a GArray. The type is meant to
be allocated on the stack but because it can require additional
resources it also has a destroy function.
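The intended usage pattern looks roughly like this (the _cogl_bitmask_*
names are internal and assumed from the description):

    CoglBitmask mask;

    _cogl_bitmask_init (&mask);
    _cogl_bitmask_set (&mask, 34, TRUE);   /* > 31 spills into the GArray */
    if (_cogl_bitmask_get (&mask, 34))
      {
        /* bit 34 is set */
      }
    _cogl_bitmask_destroy (&mask);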
http://bugzilla.openedhand.com/show_bug.cgi?id=2132
Previously the counter for the number of layers was only updated
whenever the texture handle for a layer changed. However, there are many
other ways for a new layer to be created, for example by setting a layer
combine constant. Also, by default the texture on a layer is
COGL_INVALID_HANDLE so if the application tries to create an explicit
layer with no texture by calling cogl_material_set_layer with
COGL_INVALID_HANDLE then it also wouldn't update the count.
This patch fixes that by incrementing the count in
cogl_material_get_layer instead. This function is called by all
functions that may end up creating a layer so it seems like the most
appropriate place.
http://bugzilla.openedhand.com/show_bug.cgi?id=2132
It should be quite acceptable to use a texture without defining any
texture coords. For example a shader may be in use that is doing
texture lookups without referencing the texture coordinates. Also it
should be possible to replace the vertex colors using a texture layer
without a texture but with a constant layer color.
enable_state_for_drawing_buffer no longer sets any disabled layers in
the overrides. Instead of counting the number of units with texture
coordinates it now keeps them in a mask. This means there can now be
gaps in the list of enabled texture coordinate arrays. To cope with
this, the Cogl context now also stores a mask to track the enabled
arrays. Instead of code manually iterating each enabled array to
disable them, there is now an internal function called
_cogl_disable_texcoord_arrays which disables a given mask.
I think this could also fix potential bugs when a vertex buffer has
gaps in the texture coordinate attributes that it provides. For
example if the vertex buffer only had texture coordinates for layer 2
then the disabling code would not disable the coordinates for layers 0
and 1 even though they are not used. This could cause a crash if the
previous data for those arrays is no longer valid.
http://bugzilla.openedhand.com/show_bug.cgi?id=2132
Added the implementation of clutter_actor_get_accessible, a virtual
ClutterActor function, used to obtain the accessible object of any
ClutterActor.
As it is defined as virtual, it is possible to override it, so any custom
clutter actor can implement its own accessibility object, without relying
totally on an accessibility implementation module. See GtkIconView as an
example.
http://bugzilla.openedhand.com/show_bug.cgi?id=2070