This is to try to improve API consistency. Simple Cogl structures that
don't derive from CoglObject and which can be allocated on the stack,
such as CoglColor and CoglMatrix should all have "_init" or
"_init_from" functions to initialize all the structure members. (As
opposed to a cogl_xyz_new() function for CoglObjects). CoglColor
previously used the naming scheme "_set_from" for these initializers but
"_set" is typically reserved for setting individual properties of a
structure/object.
This adds three _init functions:
cogl_color_init_from_4ub
cogl_color_init_from_4f
cogl_color_init_from_4fv
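As a rough usage sketch (the drawing calls and values here are just for
illustration), initializing a stack-allocated color now looks like:

  CoglColor color;

  /* Initialize every member of the stack-allocated CoglColor in one
   * call, instead of using the now-deprecated _set_from variants */
  cogl_color_init_from_4f (&color, 1.0f, 0.0f, 0.0f, 0.5f);

  cogl_set_source_color (&color);
  cogl_rectangle (0, 0, 100, 100);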
The _set_from functions are now deprecated but only with a gtk-doc
annotation for now. This is because the cogl_color_set_from API is quite
widely used already, so we're giving a grace period before enabling a
GCC deprecation warning; otherwise the MX maintainers will complain to me
that I've made their build logs look messy.
The journal logs colors as 4 bytes into a vertex array, and since we are
planning to make CoglMaterial track its color using a CoglColor instead
of a byte array this convenience will be useful for re-implementing
_cogl_material_get_colorubv.
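A rough sketch of how _cogl_material_get_colorubv could then be
re-implemented (the per-channel byte getters and the material->color
field are assumptions about the exact helpers involved):

  static void
  _cogl_material_get_colorubv (CoglMaterial *material, guint8 *color)
  {
    /* Pack the material's CoglColor into the 4 byte RGBA layout that
     * the journal logs into its vertex array */
    color[0] = cogl_color_get_red_byte (&material->color);
    color[1] = cogl_color_get_green_byte (&material->color);
    color[2] = cogl_color_get_blue_byte (&material->color);
    color[3] = cogl_color_get_alpha_byte (&material->color);
  }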
When converting the floating point allocation width to an integer
multiple of PANGO_SCALE to give to the PangoLayout it can sometimes
end up slightly short of the allocated size due to rounding
errors. This can cause some of the lines to be wrapped differently
when a non-integer-aligned position is used (such as when animating
text). It works better to round the number to the nearest integer by
adding 0.5 instead of letting the default float cast truncate it
downwards.
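A sketch of the rounding, assuming the width arrives as a float from the
actor's allocation:

  /* Round to the nearest integer multiple of PANGO_SCALE instead of
   * truncating, so a non-integer-aligned allocation doesn't come up
   * slightly short and re-wrap the lines */
  pango_layout_set_width (layout,
                          (int) (allocation_width * PANGO_SCALE + 0.5f));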
http://bugzilla.openedhand.com/show_bug.cgi?id=2170
Both ::drag-begin and ::drag-end have a "button" argument - even though
we assume internally, and externally, that dragging can only be the
result of a primary button operation.
* wip/deform-effect:
docs: Add DeformEffect and PageTurnEffect to the API reference
effect: Add PageTurnEffect
effect: Add DeformEffect
  offscreen-effect: Translate the modelview with the offsets
docs: Fix Effect subclassing section
The marshallers we use for the signals are declared in a private header,
and it stands to reason that they should also be hidden in the shared
object by using the common '_' prefix. We were also using some
g_cclosure_marshal_* symbols directly from GLib, instead of consistently
using the clutter_marshal_* symbols.
Some internal symbols used for the GLES 2 wrapper were accidentally
being exported. This prepends an underscore to them so they won't
appear in the shared library.
It is often useful to determine if one actor is an ancestor of
another. Add a method to do that.
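Assuming the method ends up as something like clutter_actor_contains(),
usage would look like:

  if (clutter_actor_contains (ancestor, actor))
    {
      /* 'actor' is somewhere inside the subtree rooted at 'ancestor' */
    }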
http://bugzilla.openedhand.com/show_bug.cgi?id=2162
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
By default, ShaderEffect creates a fragment shader; in order to be able
to deprecate ClutterShader we need a way for ShaderEffect sub-classes to
create a vertex shader if needed, by using a write-only, constructor-only
property.
ClutterShader has, internally, a ClutterShaderType enumeration that can
be used exactly for this. We just need to expose it and create a GObject
property for ClutterShaderEffect.
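A sketch of the property installation this implies (the property name,
nick and blurb strings are assumptions):

  pspec = g_param_spec_enum ("shader-type",
                             "Shader Type",
                             "The type of shader used by the effect",
                             CLUTTER_TYPE_SHADER_TYPE,
                             CLUTTER_FRAGMENT_SHADER,
                             G_PARAM_WRITABLE | G_PARAM_CONSTRUCT_ONLY);
  g_object_class_install_property (gobject_class, PROP_SHADER_TYPE, pspec);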
Whenever a path or a rectangle is added to the clip stack it now also
stores a screen space bounding box in the entry. Then when the clip
stack is flushed the bounding box is first used to set up the
scissor. That way when we eventually come to use the stencil buffer
the clear will be affected by the scissor so we don't have to clear
the entire buffer.
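Roughly, flushing an entry can now do something like this before any
stencil work (the bounds variables and the y flip are illustrative):

  /* Scissor to the entry's screen space bounding box so the stencil
   * clear only touches the region we actually need */
  GE (glEnable (GL_SCISSOR_TEST));
  GE (glScissor (bounds_x0,
                 framebuffer_height - bounds_y1,
                 bounds_x1 - bounds_x0,
                 bounds_y1 - bounds_y0));
  GE (glClear (GL_STENCIL_BUFFER_BIT));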
_cogl_path_get_bounds is no longer static and is exported in
cogl-path-private.h so that it can be used in the clip stack code. The
old version of the function returned x/y and width/height. However
this was mostly used to call cogl_rectangle which takes x1/y1
x2/y2. The function has been changed to just directly return the
second form because it is more useful. Anywhere that was previously
using the function now just directly looks at path->path_nodes_min and
path->path_nodes_max instead.
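The new shape of the function is along these lines (the exact prototype
in cogl-path-private.h may differ):

  void
  _cogl_path_get_bounds (CoglPath *path,
                         float *min_x,
                         float *min_y,
                         float *max_x,
                         float *max_y);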
The transform_point function takes a modelview matrix, projection
matrix and a viewport and performs all three transformations on a
point to give a Cogl window coordinate. This is useful in a number of
places in Cogl so this patch moves it to cogl.c and adds it to
cogl-internal.h
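Roughly, the helper looks like this (the exact name and argument order
in cogl-internal.h are assumptions):

  /* Transform a point by the modelview and projection matrices and
   * then apply the viewport transform, giving Cogl window coordinates */
  void
  _cogl_transform_point (const CoglMatrix *matrix_mv,
                         const CoglMatrix *matrix_p,
                         const float *viewport,
                         float *x,
                         float *y);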
For sliced 2D textures, _cogl_texture_2d_sliced_get_data() uses the
bitmap width, instead of the rowstride, when memcpy()ing into the
dest buffer.
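The fix amounts to the usual rowstride-aware copy, sketched here with
illustrative variable names:

  /* Copy one row at a time using the source rowstride rather than
   * assuming rows are tightly packed at width * bpp bytes */
  for (y = 0; y < height; y++)
    memcpy (dest + y * dest_rowstride,
            src + y * src_rowstride,
            width * bpp);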
Signed-off-by: Robert Bragg <robert@linux.intel.com>
We only had getters for the red, green, blue and alpha channels of a
color. This meant that, if you wanted to change, say, the alpha
component of a color, you would need to query the red, green and blue
channels and use set_from_4ub() or set_from_4f().
Instead of this, just provide some setters for CoglColor, using the same
naming scheme as the existing getters.
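With the setters the alpha-only change becomes a couple of lines (treat
the exact setter name as an assumption):

  CoglColor color;

  cogl_material_get_color (material, &color);
  /* Only touch the alpha channel; red, green and blue stay as-is */
  cogl_color_set_alpha (&color, 0.5f);
  cogl_material_set_color (material, &color);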
For some operations on pre-multiplied colors (say, replace the alpha
value), you need to unpremultiply the color.
This patch provides the counterpart to cogl_color_premultiply().
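So a typical round-trip for editing a premultiplied color would look
like this sketch (new_alpha is illustrative):

  /* Undo the premultiplication, replace the alpha value and then
   * premultiply again before handing the color back */
  cogl_color_unpremultiply (&color);
  cogl_color_set_alpha (&color, new_alpha);
  cogl_color_premultiply (&color);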
The place where we actually change the framebuffer is
_cogl_framebuffer_flush_state(), so if we change to a new framebuffer
we need to initialize the color bits there.
http://bugzilla.openedhand.com/show_bug.cgi?id=2094
OpenGL 3.0 deprecated querying of the GL_{RED,GREEN,BLUE}_BITS
constants, and the FBO extension provides a mechanism to query for the
color buffer sizes which *should* work even with the default
framebuffer. Unfortunately, this doesn't seem to hold for Mesa - so we
just use this for the offscreen CoglFramebuffer type, and we fall back
to glGetIntegerv() for the onscreen one.
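The offscreen query is roughly the FBO-style one, with the
glGetIntegerv() fallback for the onscreen case (a sketch; the
is_offscreen flag is illustrative):

  GLint red_bits = 0;

  if (is_offscreen)
    GE (glGetFramebufferAttachmentParameteriv
          (GL_FRAMEBUFFER,
           GL_COLOR_ATTACHMENT0,
           GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE,
           &red_bits));
  else
    GE (glGetIntegerv (GL_RED_BITS, &red_bits));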
http://bugzilla.openedhand.com/show_bug.cgi?id=2094
DeformEffect is an abstract class that should be used to write effects
that change the geometry of an actor before submitting it to the GPU.
Just like the ShaderEffect class, DeformEffect renders the actor to
which it has been applied into an FBO; then it creates a mesh and stores
it inside a VBO. Sub-classes can control vertex attributes such as
position, texture coordinates and color.
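A sub-class mostly just overrides the per-vertex deformation hook; very
roughly (the vfunc name and CoglTextureVertex fields as assumed here):

  static void
  my_effect_deform_vertex (ClutterDeformEffect *effect,
                           gfloat               width,
                           gfloat               height,
                           CoglTextureVertex   *vertex)
  {
    /* Displace each vertex of the mesh; position, texture coordinates
     * and color can all be tweaked here */
    vertex->z = 10.0f * sinf (vertex->x / width * G_PI);
  }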
Instead of using the stage offsets when painting we can simply translate
the current modelview. This allows sub-classes to fully override the
paint_target() virtual function without chaining up.
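In the painting code this amounts to (the offset fields are named
illustratively):

  cogl_push_matrix ();
  /* Translate the current modelview by the offsets once, so that
   * paint_target() implementations don't have to know about them */
  cogl_translate (priv->x_offset, priv->y_offset, 0.0f);
  /* ... run the paint_target() virtual ... */
  cogl_pop_matrix ();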
This function had two problems. Firstly it would clear the enable
blend flag before calling pre_change_notify so that if blending was
previously enabled the journal would end up being flushed while the
flag was still cleared. Secondly, it would call the pre change notify
whenever blending was needed, regardless of whether it was already
needed previously.
This was causing problems in test-depth.
This adds a _cogl_bind_gl_texture_transient function that should be used
instead of glBindTexture, so that we can keep a consistent cache of the
textures bound to each texture unit and avoid some redundant binding.
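Call sites change along these lines (the exact argument list is an
assumption):

  /* Goes through Cogl's per texture unit binding cache instead of
   * calling glBindTexture() directly */
  _cogl_bind_gl_texture_transient (GL_TEXTURE_2D,
                                   tex->gl_texture,
                                   FALSE /* is_foreign */);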
As part of an effort to improve the architecture of CoglMaterial
internally this overhauls how we flush layer state to OpenGL by adding a
formal backend abstraction for fragment processing and further
formalizing the CoglTextureUnit abstraction.
There are three backends: "glsl", "arbfp" and "fixed". The fixed backend
uses the OpenGL fixed function APIs to set up the fragment processing,
the arbfp backend uses code generation to handle fragment processing
using an ARBfp program, and the glsl backend is currently only there as
a formality to handle user programs associated with a material (i.e. the
glsl backend doesn't yet support code generation).
The GLSL backend has the highest precedence, then arbfp and finally
fixed. If a backend can't support some particular CoglMaterial feature
then it will fall back to the next backend.
This adds four new COGL_DEBUG options:
* "disable-texturing" as expected disables all texturing
* "disable-arbfp" always makes the arbfp backend fall back
* "disable-glsl" always makes the glsl backend fall back
* "show-source" shows the code generated by the arbfp/glsl backends
_cogl_atlas_texture_blit_begin binds a texture to use as the
destination and it expects it to stay bound until
_cogl_atlas_texture_end_blit is called. However there was a call to
_cogl_journal_flush directly after setting up the blit state which
could cause the wrong texture to be bound. This just moves the flush
to before the call to _cogl_atlas_texture_blit_begin.
This was breaking test-cogl-sub-texture.
1) Always flush when migrating textures out of an atlas because, although
it's true that the original texture data will remain valid in the
original texture, we can't assume that journal entries have resolved the
GL texture that will be used. That is only true if a layer0_override has
been used.
2) Don't flush at the point of creating a new atlas; simply flush
immediately before reorganizing an atlas. This means we are now assuming
that we will never see recursion due to atlas textures being modified
during a journal flush, and that it's the responsibility of the
primitives code to _ensure_mipmaps, for example, not the responsibility
of _cogl_material_flush_gl_state.
We want to make sure that the material state flushing code will never
result in changes to the texture storage for that material. So for
example mipmaps need to be ensured by the primitives code.
Changes to the texture storage will invalidate the texture coordinates
in the journal and we want to avoid a recursion of journal flushing.
This test breaks out into raw OpenGL to create a foreign texture so it
needs to be careful not to trample on any state that may be cached by
Cogl internally.
This test breaks out into raw OpenGL to create a RECTANGLE texture so it
needs to be careful not to trample on any state that may be cached by
Cogl internally.
This adds a way to compare two CoglMatrix structures to see if they
represent the same transformations. memcmp can't be used because a
CoglMatrix contains private flags and padding.
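Assuming the comparison ends up as something like cogl_matrix_equal(),
it only looks at the 16 public components (a sketch):

  gboolean
  cogl_matrix_equal (gconstpointer v1, gconstpointer v2)
  {
    const CoglMatrix *a = v1;
    const CoglMatrix *b = v2;

    /* Compare just the 16 matrix components at the start of the
     * struct; the private flags and padding must be ignored */
    return memcmp (a, b, sizeof (float) * 16) == 0;
  }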
THIS IS A WORK IN PROGRESS
Mesa is building a big shader when using ARB_texture_env_combine. The
idea is to bypass that computation, do it ourselves and cache the
compiled program in a CoglMaterial.
For now that feature can be enabled by setting the COGL_PIPELINE
environment variable to "arbfp". COGL_SHOW_FP_SOURCE can be set to a
non-empty string to dump the fragment program source too.
TODO:
* fog (really easy, using OPTION)
* support tex env combiner operands, and the DOT3, ADD_SIGNED and
  INTERPOLATE combine modes (needs refactoring of the generation of
  temporary variables) (not too hard)
* alpha testing for GLES 2.0?
The Cogl context now has a feature_flags_private enum that will allow us
to query and use OpenGL features without exposing them in the public
API.
The ARB_fragment_program extension is the first user of those flags.
Looking for this extension only happens in the GL driver, as the GLES
drivers will not expose it.
One can use _cogl_features_available_private() to check for the
availability of such private features.
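Usage is then along these lines (the exact signature and flag name are
assumptions):

  if (_cogl_features_available_private (COGL_FEATURE_PRIVATE_ARB_FP))
    {
      /* safe to generate and bind an ARBfp program */
    }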
While at it, reindent cogl-internal.h as described in CODING_STYLE.
Every time we request a CoglPangoFontMap, either internally or
externally, we should have one available.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
If we have the GLX_SGI_video_sync extension then it's possible to always
keep track of the video sync counter each time we call glXSwapBuffers
or do a sub stage blit. This then allows us to avoid waiting before
issuing a blit if we can see that the counter has already progressed.
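Conceptually the throttling then becomes (a sketch using the
GLX_SGI_video_sync entry points; the last_sync_count bookkeeping is
illustrative):

  unsigned int sync_count;

  glXGetVideoSyncSGI (&sync_count);
  /* Only wait if no vblank has passed since the last swap/blit;
   * otherwise we have effectively been throttled already */
  if (sync_count == last_sync_count)
    glXWaitVideoSyncSGI (2, (sync_count + 1) % 2, &sync_count);

  last_sync_count = sync_count;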
Also, since we expect that glXCopySubBuffer is synchronized to the vblank,
we don't need to use glFinish () in conjunction with the vblank wait,
since the vblank wait's only purpose is to add a delay.
The GLX_SGI_video_sync spec explicitly says that it's only supported for
direct contexts so we don't set up the function pointers if
glXIsDirect () returns GL_FALSE.
Neither glXCopySubBuffer nor glBlitFramebuffer is integrated with the
swap interval of a framebuffer, which means that when we do partial stage
updates (as Mutter does in response to window damage) the blits aren't
throttled. Applications that throw lots of damage events at the
compositor can therefore cause Clutter to run flat out, taking up all
the system resources and issuing more blits than can even be seen.
This patch now makes sure we use the GLX_SGI_video_sync or a
DRM_VBLANK_RELATIVE ioctl to throttle blits to the vblank frequency as
we do when using glXSwapBuffers.
Currently glXCopySubBufferMESA is used for sub stage redraws, but if a
driver does not support GLX_MESA_copy_sub_buffer we fall back to redrawing
the complete stage, which isn't really optimal.
So, instead of directly falling back to complete redraws, try using
GL_EXT_framebuffer_blit to do the BACK to FRONT buffer copies.
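The fallback copy is roughly a back-to-front blit on the window system
framebuffer (a sketch; flipping the rectangle into GL's coordinate space
is glossed over):

  /* Read from the back buffer (the default read buffer) and write the
   * damaged rectangle to the front buffer using EXT_framebuffer_blit */
  glDrawBuffer (GL_FRONT);
  glBlitFramebufferEXT (x, y, x + width, y + height,
                        x, y, x + width, y + height,
                        GL_COLOR_BUFFER_BIT, GL_NEAREST);
  glDrawBuffer (GL_BACK);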
http://bugzilla.openedhand.com/show_bug.cgi?id=2128