Previously the uniform overrides were stored in a linked list. Now
they are stored in a g_malloc'd array. The values are still tightly
packed so that there is only a value for each uniform that has a
corresponding bit in override_mask. The allocated size of the array
always exactly corresponds to the number of bits set in the
override_mask. This means that when a new uniform value is set on a
pipeline it will have to grow the array and copy the old values
in. The assumption is that setting a value for a new uniform is much
less frequent than setting a value for an existing uniform, so it
makes more sense to optimise the latter.
The advantage of using an array is that we can quickly jump to the
right boxed value for a given uniform location by counting the number
of set bits in the bitmask below that location. This can be done in
O(1) time whereas the old approach of walking a list would scale with
the number of bits set.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds _cogl_bitmask_popcount which returns a count of all the
bits that are set in the bitmask.
There is now also a _cogl_bitmask_popcount_upto which counts the
number of bits set up to but not including the given bit index. This
will be useful to determine the number of uniform overrides to skip if
we tightly pack the values in an array.
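As a hedged illustration of the difference between the two counts on a
single word (the real functions operate on a CoglBitmask, not a plain
long):

unsigned long w = 0x2d;                  /* bits 0, 2, 3 and 5 set */

int total = __builtin_popcountl (w);                      /* 4 */
int below = __builtin_popcountl (w & ((1UL << 3) - 1));   /* 2: bits 0 and 2 */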
The test-bitmask test has been modified to check these two functions.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
The test tries all of the various combinations of setting uniform
values on a pipeline and verifies the expected results with some
example shaders.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds the following new public experimental functions to set
uniform values on a CoglPipeline:
void
cogl_pipeline_set_uniform_1f (CoglPipeline *pipeline,
                              int uniform_location,
                              float value);

void
cogl_pipeline_set_uniform_1i (CoglPipeline *pipeline,
                              int uniform_location,
                              int value);

void
cogl_pipeline_set_uniform_float (CoglPipeline *pipeline,
                                 int uniform_location,
                                 int n_components,
                                 int count,
                                 const float *value);

void
cogl_pipeline_set_uniform_int (CoglPipeline *pipeline,
                               int uniform_location,
                               int n_components,
                               int count,
                               const int *value);

void
cogl_pipeline_set_uniform_matrix (CoglPipeline *pipeline,
                                  int uniform_location,
                                  int dimensions,
                                  int count,
                                  gboolean transpose,
                                  const float *value);
These are similar to the old functions used to set uniforms on a
CoglProgram. To get a value to pass in as the uniform_location there
is also:
int
cogl_pipeline_get_uniform_location (CoglPipeline *pipeline,
                                    const char *uniform_name);
Conceptually the uniform locations are tied to the pipeline, so the
application is expected to call this function whenever it starts
setting values on a new pipeline. However, in practice the uniform
locations are global to the CoglContext. The names are stored in a
linked list where the position in the list is the uniform location.
The global indices are used so that each pipeline can store a mask of
which uniforms it overrides. That way it is quicker to detect which
uniforms are different from the last pipeline that used the same
CoglProgramState so it can avoid flushing uniforms that haven't
changed. Currently the values are not actually compared which means
that it will only avoid flushing a uniform if there is a common
ancestor that sets the value (or if the same pipeline is being flushed
again - in which case the pipeline and its common ancestor are the
same thing).
The uniform values are stored in the big state of the pipeline as a
sparse linked list. A bitmask stores which values have been overridden
and only overridden values are stored in the linked list.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds a _cogl_bitmask_set_flags function which can be used to copy
the values from a CoglBitmask to an array of unsigned longs which can
be used with the COGL_FLAGS_* macros. The values are or'd in so that
it can be used multiple times to combine multiple bitmasks.
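A hedged sketch of how it is intended to be used (the
COGL_FLAGS_N_LONGS_FOR_SIZE macro and the variable names here are
assumptions for illustration):

unsigned long flags[COGL_FLAGS_N_LONGS_FOR_SIZE (N_BITS)] = { 0 };

_cogl_bitmask_set_flags (&bitmask_a, flags);
/* Because the bits are or'd in, a second call combines another
 * bitmask into the same array instead of replacing it */
_cogl_bitmask_set_flags (&bitmask_b, flags);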
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This moves the POPCOUNTL macro from cogl-winsys-glx to cogl-util and
renames it to _cogl_util_popcountl so that it can be used in more
places. The fallback function for when the GCC builtin is not
available has been replaced with an 8-bit lookup table because the
HAKMEM implementation doesn't look like it would work when longs are
64-bit so it's not suitable for a general purpose function on 64-bit
architectures. Some of the articles written about population counting
suggest that using a lookup table is the fastest method anyway.
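A minimal sketch of that kind of table-based fallback (not the exact
implementation used):

/* Table giving the number of set bits in each possible byte value */
static unsigned char byte_popcount_table[256];

static void
init_byte_popcount_table (void)
{
  int i;

  /* popcount (i) == (i & 1) + popcount (i >> 1), and i >> 1 < i so the
   * entry it needs has already been filled in */
  for (i = 0; i < 256; i++)
    byte_popcount_table[i] = (i & 1) + byte_popcount_table[i >> 1];
}

static int
popcountl_fallback (unsigned long num)
{
  int count = 0;

  while (num)
    {
      count += byte_popcount_table[num & 0xff];
      num >>= 8;
    }

  return count;
}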
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds a function to copy one boxed value to another. It is assumed
that the destination boxed value is totally uninitialised (so it won't
try to free any memory in it).
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This wraps all of the calls to glUniform* in the GE() macro so that it
will detect GL errors in the right place.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
The code for manipulating CoglBoxedValues is now separated from
cogl-program.c into its own file. That way when we add support for
setting uniform values on a CoglPipeline the code for storing the
values can be shared.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds some macros to iterate over all the bits set in an array of
flags. The macros are a bit awkward because they try to avoid using a
callback pointer so that the code is inlined.
cogl_bitmask is now using these macros as well so that the logic can
be shared.
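A hedged sketch of the general shape of such macros (names and details
are illustrative rather than the actual Cogl macros):

/* The start/end pair lets the loop body be written inline at the call
 * site instead of being passed in as a callback function pointer. */
#define FLAGS_FOREACH_START(array, n_longs, bit_var)                   \
  {                                                                    \
    int flags_i_;                                                      \
    for (flags_i_ = 0; flags_i_ < (n_longs); flags_i_++)               \
      {                                                                \
        int flags_bit_;                                                \
        for (flags_bit_ = 0;                                           \
             flags_bit_ < (int) (sizeof (unsigned long) * 8);          \
             flags_bit_++)                                             \
          if ((array)[flags_i_] & (1UL << flags_bit_))                 \
            {                                                          \
              (bit_var) = (flags_i_ *                                  \
                           (int) (sizeof (unsigned long) * 8) +        \
                           flags_bit_);

#define FLAGS_FOREACH_END                                              \
            }                                                          \
      }                                                                \
  }

/* Usage:
 *
 *   int bit;
 *
 *   FLAGS_FOREACH_START (flags, n_longs, bit)
 *     handle_bit (bit);
 *   FLAGS_FOREACH_END
 */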
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Previously cogl-flags was using an array of ints to store the
flags. There was a comment saying that it would be nice to use longs,
but this is awkward because g_parse_debug_string can only work with
ints. This is a silly reason not to use longs because we can just
parse multiple sets of flags per long. This patch therefore changes
cogl-flags to use longs and tweaks the debug key parsing code.
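A hedged sketch of the parsing idea, assuming 64-bit longs and two
hypothetical GDebugKey tables whose values each fit in an unsigned int:

unsigned long flags = 0;

/* g_parse_debug_string() returns a guint, so each call can only cover
 * an int's worth of keys; parse two tables and pack them into one long */
flags |= g_parse_debug_string (debug_env, keys_0, n_keys_0);
flags |= ((unsigned long)
          g_parse_debug_string (debug_env, keys_1, n_keys_1)) << 32;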
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Instead of testing each bit when iterating a bitmask, we can use ffsl
to skip over unset bits in a single instruction. That way it scales
with the number of bits set, not the total number of bits.
ffsl is a non-standard function which glibc only provides when
_GNU_SOURCE is defined. However if we are compiling with GCC we can
avoid that mess and just use the equivalent builtin. When not
compiling with GCC it will fall back to _cogl_util_ffs if the sizes of
int and long are the same (which is the case on i686). Otherwise it
falls back to a slow function implementation.
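A hedged sketch of the skipping loop for one long's worth of flags
(handle_bit and the surrounding variables are hypothetical):

unsigned long word = flags[i];

while (word)
  {
    /* __builtin_ffsl returns the 1-based position of the lowest set
     * bit, or 0 if no bits are set */
    int bit = __builtin_ffsl ((long) word) - 1;

    handle_bit (i * (int) (sizeof (unsigned long) * 8) + bit);

    /* clear the bit we just handled so the next ffsl finds the next one */
    word &= ~(1UL << bit);
  }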
Reviewed-by: Robert Bragg <robert@linux.intel.com>
The boilerplate for defining a GType for CoglFixed lived in
cogl-util.c but this didn't seem to make much sense seeing as nothing
in the cogl-util.h header file relates to CoglFixed and there is
already a separate C file for CoglFixed code. This also removes some
redundant header includes from cogl-util.c.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds a test which tries manipulating some bits on a CoglBitmask
and verifies that it gets the expected result. This test is fairly
unusual in that it is directly testing some internal Cogl code that
isn't exposed through the public API. To make this work it directly
includes the source for CoglBitmask.
CoglBitmask does some somewhat dodgy things with converting longs to
pointers and back so it makes sense to have a test case to verify that
this is working on all platforms.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Instead of storing only 31 bits in the pointer for a CoglBitmask, it
now assumes it can store a whole unsigned long minus the one bit used
to mark whether it has been converted to a GArray or not. This works
on the assumption that we can cast an unsigned long to a pointer and
back without losing information which I think should be true for any
platforms that Cogl is interested in. This has the advantage that on
64-bit architectures we can store 63 bits, at no extra cost, before we
have to resort to using a GArray. The values in the GArray are now
stored as unsigned longs as well on the assumption that it is more
efficient to load and store data in chunks of longs rather than ints.
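A hedged sketch of the representation trick (the names and the choice
of marker bit are illustrative, not the real CoglBitmask internals):

#include <glib.h>

typedef void *Bitmask;

/* Assume the lowest bit being set means "the bits are stored inline";
 * a real GArray pointer is aligned so its lowest bit is always clear. */
static gboolean
bitmask_has_array (Bitmask b)
{
  return ((unsigned long) b & 1UL) == 0;
}

/* The remaining 31 or 63 bits of the long hold the flags directly,
 * shifted up past the marker bit. */
static unsigned long
bitmask_get_inline_bits (Bitmask b)
{
  return (unsigned long) b >> 1;
}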
Reviewed-by: Robert Bragg <robert@linux.intel.com>
The size of the framebuffer may not match the size we requested, so we
should use the actual framebuffer size in the calculations that
position the crate in the center.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Cogl keeps a pointer to the last used onscreen framebuffer from the
context to implement the deprecated cogl_set_draw_buffer function
which can take COGL_WINDOW_BUFFER as the target to use the last
onscreen buffer. Previously the context would also take a reference on
that framebuffer. However that was causing a circular reference between
the framebuffer and the context, which made it impossible to clean up
resources properly when onscreen buffers are used. This patch instead
changes it to just store the pointer and then clear that pointer during
_cogl_onscreen_free as a kind of cheap weak reference.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This tweaks the backface culling test to use the experimental pipeline
API as well as the legacy API. All of the primitives are now rendered
with all 16 combinations of front winding, cull face mode and legacy
state.
The test to 'draw a regular rectangle' has been removed. I think this
initially existed because there were different functions to draw a
rectangle with and without texturing. This is no longer the case so it
is no longer useful, and it's awkward to implement because it needs a
separate pipeline to disable the texturing.
https://bugzilla.gnome.org/show_bug.cgi?id=663628
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds two new experimental public functions to replace the old
internal _cogl_pipeline_set_cull_face_state function:
void
cogl_pipeline_set_cull_face_mode (CoglPipeline *pipeline,
                                  CoglPipelineCullFaceMode cull_face_mode);

void
cogl_pipeline_set_front_face_winding (CoglPipeline *pipeline,
                                      CoglWinding front_winding);
There are also the corresponding getters.
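For example (a minimal illustration; the enum value names here follow
the usual Cogl naming conventions):

/* Cull back faces, treating counter-clockwise geometry as front facing */
cogl_pipeline_set_cull_face_mode (pipeline,
                                  COGL_PIPELINE_CULL_FACE_MODE_BACK);
cogl_pipeline_set_front_face_winding (pipeline,
                                      COGL_WINDING_COUNTER_CLOCKWISE);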
https://bugzilla.gnome.org/show_bug.cgi?id=663628
Reviewed-by: Robert Bragg <robert@linux.intel.com>
cogl-utils.h needs to include cogl-defines.h so that it knows whether
COGL_HAS_GLIB_SUPPORT is defined.
https://bugzilla.gnome.org/show_bug.cgi?id=663578
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This patch moves the call to _cogl_destroy_texture_units() from
_cogl_context_free() to later on. When destroying a GL texture the
texture units are checked. This would end up accessing invalid memory
so we need to try to destroy the texture units only after everything
that might be referencing a texture has been destroyed.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Although there was a comment in cogl_texture_2d_copy_from_framebuffer
explaining that we shouldn't flush the clip state, the comment was a
bit misleading, implying we were going to explicitly set a NULL clip.
Also we weren't actually avoiding flushing the clip state since we were
passing 0 for the CoglDrawFlags.
We now pass COGL_FRAMEBUFFER_FLUSH_SKIP_CLIP_STATE in the flags when
flushing the framebuffer state, and the comment has been updated to
explain that clipping won't affect reading from the framebuffer so we
don't need to flush it.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This api is deprecated and documented to be a NOP, which wasn't
actually true. This patch updates the function so that it really is a
NOP. It's nice that this gets rid of a point where we flush framebuffer
state because we are looking to add a new VirtualFramebuffer interface
which will need special consideration at each of the points we flush
framebuffer state.
It was a mistake that this API was ever published. We don't believe
anyone is using the api, but until we break API we have to keep the
symbol. The documented semantics are vague enough that a NOP is ok
since we never explicitly documented how the state would be flushed to
GL, so there would be no way to reliably use that state anyway.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This change is one logical update to the Wayland support. It comprises
the following parts:
* Binding to both the shell and compositor global objects - necessary
  since the API for setting top level status moved to the wl_shell
  interface
* The Wayland visual API went away and instead you set up the EGL
  surface appropriately
* The message handling was refined to reflect the current behaviour -
  now-obsolete comments were removed and new comments added
Reviewed-by: Robert Bragg <robert@linux.intel.com>
There was a comment implying that if an rgba config had been requested
but no suitable config was found then we would automatically fall back
to an rgb config instead. Actually, if no rgba visual is found we
simply fail without any automatic fallback, because Cogl is not in a
good position to judge whether automatic fallbacks are acceptable for
higher level APIs such as Clutter.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
On GLES2, we need to specify an array size for the texture coord
varying array. Previously this size would be decided in one of the
following ways:
- For generated vertex shaders it is always the number of layers in
  the pipeline.
- For generated fragment shaders it is the highest sampled texture
  unit in the pipeline or the number of attributes supplied by the
  primitive, whichever is higher.
- For user shaders it is usually the number of attributes supplied by
  the primitive. However, if the application tries to compile the
  shader and query the result before using it, it will always be at
  least 4.
These shaders can quite easily end up with different values for the
declaration, which makes the program fail to link. This patch changes
it so that all of the shaders are generated with the maximum of the
number of texture coordinate attributes supplied by the primitive and
the number of layers in the pipeline. If this value changes then the
shaders are regenerated, including user shaders. That way all of the
shaders will always have the same value.
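A hedged sketch of the sizing rule (the GString, the attribute count
variable and the varying name are illustrative):

int array_size = MAX (n_tex_coord_attribs,
                      cogl_pipeline_get_n_layers (pipeline));

/* Both halves of the program must declare the varying array with the
 * same size or linking will fail on GLES2 */
g_string_append_printf (header,
                        "varying vec4 tex_coord_varying[%i];\n",
                        array_size);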
https://bugzilla.gnome.org/show_bug.cgi?id=662184
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Previously the layer combine on the test pipeline was set up to
replace the incoming color with the layer constant. This patch changes
it to sample the replacement color from a 1x1 texture instead. This
exposes a bug on the GLES2 backend where the vertex shader will be
generated with a size for cogl_tex_coord_out of 4 but the
corresponding declaration in the fragment shader will have n_layers,
which is 1. This makes the program fail to link and the test fails.
https://bugzilla.gnome.org/show_bug.cgi?id=662184
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This documents that cogl_texture_get_rowstride() is deprecated (or
rather, that it was a mistake the api was ever published). It also
clarifies the rowstride argument documentation for
cogl_texture_get_data() to explain how the rowstride is automatically
calculated when 0 is passed, to help avoid misleading people into
thinking that cogl_texture_get_rowstride() is an appropriate way to
get a valid rowstride for that call.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Xlib headers define many trivially named objects which can later cause
name collision problems when only the cogl.h header is included in a
program or library. Xlib headers are now only pulled in by including
the standalone cogl-xlib.h header.
https://bugzilla.gnome.org/show_bug.cgi?id=661174
Reviewed-by: Robert Bragg <robert@linux.intel.com>
As part of the ongoing effort to port the conformance tests that were
originally written as Clutter tests to be standalone Cogl tests, this
patch ports the test-sub-texture test to be standalone.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
For conformance tests that want to read back a region of pixels and
check that they all have the same color, there is now a
test_utils_check_region utility function.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Instead of emulating _CLAMP_TO_EDGE in cogl-primitives.c we now defer to
cogl_meta_texture_foreach_in_region() to support _CLAMP_TO_EDGE for us.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
cogl_meta_texture_foreach_in_region() now directly supports
CLAMP_TO_EDGE wrap modes. This means the cogl_rectangle code will be
able to build on this and makes the logic accessible to anyone
developing custom primitives that need to support meta textures.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
When associating indices with a CoglPrimitive you are now forced to
specify the number of indices that should be read when drawing.
It's easy to forget to call cogl_primitive_set_n_vertices() after
associating indices with a primitive (and in any case someone could be
led to believe that Cogl can determine the count implicitly somehow),
so this should avoid a lot of mistakes when using the API.
We'd expect that setting indices and updating the n_vertices property
would go hand in hand 99% of the time anyway so this change should
be more convenient as well as less error prone.
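For example, with the updated API (the index count of 6 here is just an
assumed example describing two triangles):

/* The number of indices to read is now given when the indices are
 * associated, so n_vertices no longer needs to be set separately for
 * the common case */
cogl_primitive_set_indices (primitive, indices, 6 /* n_indices */);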
This patch adds some documentation for cogl_primitive_set_indices and
cogl_primitive_get/set_n_vertices. It also tries to clarify how the
CoglPrimitive:n_vertices property is updated and what that property
means in relation to other functions too.
https://bugzilla.gnome.org/show_bug.cgi?id=661019
Reviewed-by: Neil Roberts <neil@linux.intel.com>