When the flags contain a value that only has the most-significant bit
set, ffsl will return the number of bits in an unsigned long. According
to the C spec it is undefined behaviour to shift by a number greater
than or equal to the width of the left operand. On Intel (and probably
others) this seems to end up being a no-op so the iteration breaks. To
fix this we can split the shift into two separate shifts. We always
need to shift by at least one bit so we can put this one-bit shift into
a separate operation.
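A minimal sketch of the resulting pattern (the names here are
illustrative rather than the real Cogl code, and the GCC builtin is
assumed):

#include <limits.h>
#include <stdio.h>

/* Print the index of every set bit in `flags' */
static void
example_foreach_bit (unsigned long flags)
{
  int base = 0;

  while (flags)
    {
      /* 1-based position of the lowest set bit (GCC builtin) */
      int bit = __builtin_ffsl ((long) flags);

      printf ("bit %d is set\n", base + bit - 1);

      /* Shifting by `bit' in one go is undefined when bit equals
       * CHAR_BIT * sizeof (unsigned long) (i.e. only the top bit was
       * set), so split it into a shift of (bit - 1) followed by a
       * separate one-bit shift. */
      flags >>= (bit - 1);
      flags >>= 1;
      base += bit;
    }
}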
Reviewed-by: Robert Bragg <robert@linux.intel.com>
The uniform names are now stored in a GPtrArray instead of a linked
list. There is also a hash table to speed up converting names to
locations.
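A rough sketch of the idea with made-up names (the real structures in
Cogl differ):

#include <glib.h>

typedef struct
{
  /* location -> name; owns the strings */
  GPtrArray *uniform_names;
  /* name -> location + 1, created with g_str_hash/g_str_equal so that
   * a lookup result of NULL/0 means "not registered yet" */
  GHashTable *uniform_name_hash;
} ExampleContext;

static int
example_get_uniform_location (ExampleContext *ctx, const char *name)
{
  gpointer value = g_hash_table_lookup (ctx->uniform_name_hash, name);
  char *name_copy;

  if (value)
    return GPOINTER_TO_INT (value) - 1;

  /* Not seen before: append the name and remember its index */
  name_copy = g_strdup (name);
  g_ptr_array_add (ctx->uniform_names, name_copy);
  g_hash_table_insert (ctx->uniform_name_hash,
                       name_copy,
                       GINT_TO_POINTER ((int) ctx->uniform_names->len));

  return ctx->uniform_names->len - 1;
}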
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Previously the uniform overrides were stored in a linked list. Now
they are stored in a g_malloc'd array. The values are still tightly
packed so that there is only a value for each uniform that has a
corresponding bit in override_mask. The allocated size of the array
always exactly corresponds to the number of bits set in the
override_mask. This means that when a new uniform value is set on a
pipeline it will have to grow the array and copy the old values
in. The assumption is that setting a value for a new uniform is much
less frequent than setting a value for an existing uniform so it makes
more sense to optimise the latter.
The advantage of using an array is that we can quickly jump to the
right boxed value given a uniform location by counting the number of
bits set in the mask below the given uniform location. This can be
done in O(1) time whereas the old approach using a list would scale
with the number of bits set.
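For example, finding the boxed value for a given location could look
roughly like this (a sketch assuming the mask fits in one unsigned
long and that GCC's popcount builtin is available):

typedef struct { float v[4]; } ExampleBoxedValue; /* stand-in type */

static const ExampleBoxedValue *
example_get_override (unsigned long override_mask,
                      const ExampleBoxedValue *values,
                      int location)
{
  /* The number of overridden uniforms with a smaller location is the
   * index into the tightly packed array. Assumes location is less
   * than the number of bits in an unsigned long. */
  unsigned long below = override_mask & ((1UL << location) - 1UL);

  return &values[__builtin_popcountl (below)];
}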
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds _cogl_bitmask_popcount which returns the number of bits that
are set in the bitmask.
There is now also a _cogl_bitmask_popcount_upto which counts the
number of bits set up to but not including the given bit index. This
will be useful to determine the number of uniform overrides to skip if
we tightly pack the values in an array.
The test-bitmask test has been modified to check these two functions.
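The kind of check added might look something like this (a hypothetical
sketch; the helper names and the internal CoglBitmask API are quoted
from memory):

static void
example_test_popcount (void)
{
  CoglBitmask mask;

  _cogl_bitmask_init (&mask);
  _cogl_bitmask_set (&mask, 1, TRUE);
  _cogl_bitmask_set (&mask, 4, TRUE);
  _cogl_bitmask_set (&mask, 5, TRUE);

  g_assert_cmpint (_cogl_bitmask_popcount (&mask), ==, 3);
  /* Set bits below index 5: bits 1 and 4 */
  g_assert_cmpint (_cogl_bitmask_popcount_upto (&mask, 5), ==, 2);

  _cogl_bitmask_destroy (&mask);
}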
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds the following new public experimental functions to set
uniform values on a CoglPipeline:
void
cogl_pipeline_set_uniform_1f (CoglPipeline *pipeline,
                              int uniform_location,
                              float value);

void
cogl_pipeline_set_uniform_1i (CoglPipeline *pipeline,
                              int uniform_location,
                              int value);

void
cogl_pipeline_set_uniform_float (CoglPipeline *pipeline,
                                 int uniform_location,
                                 int n_components,
                                 int count,
                                 const float *value);

void
cogl_pipeline_set_uniform_int (CoglPipeline *pipeline,
                               int uniform_location,
                               int n_components,
                               int count,
                               const int *value);

void
cogl_pipeline_set_uniform_matrix (CoglPipeline *pipeline,
                                  int uniform_location,
                                  int dimensions,
                                  int count,
                                  gboolean transpose,
                                  const float *value);
These are similar to the old functions used to set uniforms on a
CoglProgram. To get a value to pass in as the uniform_location there
is also:
int
cogl_pipeline_get_uniform_location (CoglPipeline *pipeline,
                                    const char *uniform_name);
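For example (a sketch of typical usage; the uniform names are
hypothetical):

static void
example_setup_uniforms (CoglPipeline *pipeline)
{
  float light_pos[3] = { 0.0f, 1.0f, 2.0f };
  int alpha_location =
    cogl_pipeline_get_uniform_location (pipeline, "alpha");
  int light_pos_location =
    cogl_pipeline_get_uniform_location (pipeline, "light_pos");

  cogl_pipeline_set_uniform_1f (pipeline, alpha_location, 0.5f);
  cogl_pipeline_set_uniform_float (pipeline, light_pos_location,
                                   3, /* n_components */
                                   1, /* count */
                                   light_pos);
}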
Conceptually the uniform locations are tied to the pipeline so that
whenever setting a value for a new pipeline the application is
expected to call this function. However in practice the uniform
locations are global to the CoglContext. The names are stored in a
linked list where the position in the list is the uniform location.
The global indices are used so that each pipeline can store a mask of
which uniforms it overrides. That way it is quicker to detect which
uniforms are different from the last pipeline that used the same
CoglProgramState so it can avoid flushing uniforms that haven't
changed. Currently the values are not actually compared which means
that it will only avoid flushing a uniform if there is a common
ancestor that sets the value (or if the same pipeline is being flushed
again - in which case the pipeline and its common ancestor are the
same thing).
The uniform values are stored in the big state of the pipeline as a
sparse linked list. A bitmask stores which values have been overridden
and only overridden values are stored in the linked list.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds a _cogl_bitmask_set_flags function which can be used to copy
the values from a CoglBitmask to an array of unsigned longs which can
be used with the COGL_FLAGS_* macros. The values are or'd in so that
it can be used multiple times to combine multiple bitmasks.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This moves the POPCOUNTL macro from cogl-winsys-glx to cogl-util and
renames it to _cogl_util_popcountl so that it can be used in more
places. The fallback function for when the GCC builtin is not
available has been replaced with an 8-bit lookup table because the
HAKMEM implementation doesn't look like it would work when longs are
64-bit so it's not suitable for a general purpose function on 64-bit
architectures. Some of the pages regarding population counts seem to
suggest that using a lookup table is the fastest method anyway.
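The fallback is conceptually along these lines (a sketch; shown with a
4-bit table to keep the listing short, whereas the real code uses an
8-bit table indexed a byte at a time):

static int
example_popcountl (unsigned long n)
{
  /* Number of bits set in each 4-bit value 0..15 */
  static const unsigned char nibble_counts[16] =
    { 0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4 };
  int count = 0;

  while (n)
    {
      count += nibble_counts[n & 0xf];
      n >>= 4;
    }

  return count;
}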
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds a function to copy one boxed value to another. It is assumed
that the destination boxed value is totally uninitialised (so it won't
try to free any memory in it).
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This wraps all of the calls to glUniform* in the GE() macro so that it
will detect GL errors in the right place.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
The code for manipulating CoglBoxedValues is now separated from
cogl-program.c into its own file. That way when we add support for
setting uniform values on a CoglPipeline the code for storing the
values can be shared.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds some macros to iterate over all the bits set in an array of
flags. The macros are a bit awkward because they try to avoid using a
callback pointer so that the code is inlined.
cogl_bitmask is now using these macros as well so that the logic can
be shared.
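The shape of the macros is roughly as follows (a simplified sketch
with made-up names; the real macros also handle the ffs fallbacks):

/* Iterate every set bit in an array of `n_longs' unsigned longs. The
 * body between START and END is compiled inline; no callback pointer
 * is involved. */
#define EXAMPLE_FLAGS_FOREACH_START(array, n_longs, bit_var)        \
  {                                                                 \
    int _i;                                                         \
    for (_i = 0; _i < (n_longs); _i++)                              \
      {                                                             \
        unsigned long _mask = (array)[_i];                          \
        while (_mask)                                               \
          {                                                         \
            int _pos = __builtin_ffsl ((long) _mask) - 1;           \
            (bit_var) =                                             \
              _i * (int) (sizeof (unsigned long) * 8) + _pos;       \
            _mask &= ~(1UL << _pos);                                \
            {

#define EXAMPLE_FLAGS_FOREACH_END                                   \
            }                                                       \
          }                                                         \
      }                                                             \
  }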
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Previously cogl-flags was using an array of ints to store the
flags. There was a comment saying that it would be nice to use longs
but this is awkward because g_parse_debug_string can only work with
ints. This is a silly reason not to use longs because we can just
parse multiple sets of flags per long. This patch therefore changes
cogl-flags to use longs and tweaks the debug key parsing code.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Instead of testing each bit when iterating a bitmask, we can use ffsl
to skip over unset bits in a single instruction. That way it will scale
with the number of bits set, not the total number of bits.
ffsl is a non-standard function which glibc only provides when
_GNU_SOURCE is defined. However if we are compiling with GCC we can
avoid that mess and just use the equivalent builtin. When not compiling
with GCC it will fall back to _cogl_util_ffs if the sizes of int and
long are the same (which is the case on i686). Otherwise it falls back
to a slow function implementation.
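A sketch of how the selection might be arranged (the configure-time
size checks here are hypothetical):

#if defined (__GNUC__)
/* The builtin avoids needing the glibc prototype at all */
#define example_util_ffsl(x) __builtin_ffsl (x)
#elif SIZEOF_INT == SIZEOF_LONG /* hypothetical configure defines */
/* int and long are the same size (e.g. on i686) */
#define example_util_ffsl(x) _cogl_util_ffs ((int) (x))
#else
/* Slow but portable fallback */
static int
example_util_ffsl_slow (long num)
{
  unsigned long bits = (unsigned long) num;
  int pos = 1;

  if (bits == 0)
    return 0;

  while ((bits & 1UL) == 0)
    {
      bits >>= 1;
      pos++;
    }

  return pos;
}
#define example_util_ffsl(x) example_util_ffsl_slow (x)
#endif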
Reviewed-by: Robert Bragg <robert@linux.intel.com>
The boilerplate for defining a GType for CoglFixed lived in
cogl-util.c but this didn't seem to make much sense seeing as nothing
in the cogl-util.h header file relates to CoglFixed and there is
already a separate C file for CoglFixed code. This also removes some
redundant header includes from cogl-util.c
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Instead of storing only 31 bits in the pointer for a CoglBitmask, it
now assumes it can store a whole unsigned long minus the one bit used
to mark whether it has been converted to a GArray or not. This works
on the assumption that we can cast an unsigned long to a pointer and
back without losing information which I think should be true for any
platforms that Cogl is interested in. This has the advantage that on
64-bit architectures we can store 63 bits before we have to resort to
using a GArray at no extra cost. The values in the GArray are now
stored as unsigned longs as well on the assumption that it is more
efficient to load and store data in chunks of longs rather than ints.
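Conceptually the encoding looks something like this (a sketch with
made-up names, assuming an unsigned long survives a round trip through
a pointer):

typedef struct { void *data; } ExampleBitmask;

/* The lowest bit of the pointer marks "converted to a GArray"; the
 * remaining bits hold the flags directly. */

static int
example_bitmask_has_array (const ExampleBitmask *bm)
{
  return ((unsigned long) bm->data) & 1UL;
}

static unsigned long
example_bitmask_get_bits (const ExampleBitmask *bm)
{
  return ((unsigned long) bm->data) >> 1;
}

static void
example_bitmask_set_bits (ExampleBitmask *bm, unsigned long bits)
{
  /* bits must fit in one bit less than an unsigned long before we
   * resort to the GArray representation */
  bm->data = (void *) (bits << 1);
}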
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Cogl keeps a pointer to the last used onscreen framebuffer from the
context to implement the deprecated cogl_set_draw_buffer function
which can take COGL_WINDOW_BUFFER as the target to use the last
onscreen buffer. Previously this would also take a reference on that
framebuffer. However that was causing a circular reference between the
framebuffer and the context which makes it impossible to clean up
resources properly when onscreen buffers are used. This patch instead
changes it to just store the pointer and then clear the pointer during
_cogl_onscreen_free as a kind of cheap weak reference.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds two new experimental public functions to replace the old
internal _cogl_pipeline_set_cull_face_state function:
void
cogl_pipeline_set_cull_face_mode (CoglPipeline *pipeline,
                                  CoglPipelineCullFaceMode cull_face_mode);

void
cogl_pipeline_set_front_face_winding (CoglPipeline *pipeline,
                                      CoglWinding front_winding);
There are also the corresponding getters.
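For example (a sketch assuming the standard enum value names):

static void
example_enable_backface_culling (CoglPipeline *pipeline)
{
  /* Cull back faces, treating counter-clockwise geometry as front
   * facing */
  cogl_pipeline_set_cull_face_mode (pipeline,
                                    COGL_PIPELINE_CULL_FACE_MODE_BACK);
  cogl_pipeline_set_front_face_winding (pipeline,
                                        COGL_WINDING_COUNTER_CLOCKWISE);
}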
https://bugzilla.gnome.org/show_bug.cgi?id=663628
Reviewed-by: Robert Bragg <robert@linux.intel.com>
cogl-utils.h needs to include cogl-defines.h so that it knows whether
COGL_HAS_GLIB_SUPPORT is defined.
https://bugzilla.gnome.org/show_bug.cgi?id=663578
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This patch moves the call to _cogl_destroy_texture_units() from
_cogl_context_free() to later on. When destroying a GL texture the
texture units are checked. This would end up accessing invalid memory
so we need to try to destroy the texture units only after everything
that might be referencing a texture has been destroyed.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Although there was a comment in cogl_texture_2d_copy_from_framebuffer
explaining that we shouldn't flush the clip state, the comment was a bit
misleading, implying we were going to explicitly set a NULL clip. Also
we weren't actually avoiding flushing the clip state since we were
passing 0 for the CoglDrawFlags.
We now pass COGL_FRAMEBUFFER_FLUSH_SKIP_CLIP_STATE in to the flags when
flushing the framebuffer state and the comment has been updated to explain
that clipping won't affect reading from the framebuffer so we don't need
to flush it.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This API is deprecated and documented to be a NOP, which wasn't
actually true. This patch actually updates the function to be a
NOP. It's nice
that this gets rid of a point where we flush framebuffer state because
we are looking to add a new VirtualFramebuffer interface which will need
special consideration at each of the points we flush framebuffer state.
It was a mistake that this API was ever published; we don't believe
anyone is using the API, but until we break API we have to keep the
symbol. The documented semantics are vague enough that a NOP is OK since
we never explicitly documented how the state would be flushed to GL so
there would be no way to reliably use that state anyway.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This change is one logical update to the Wayland support. It comprises
the following parts:
* Binding to both the shell and compositor global objects - necessary since
the API for setting top level status moved to the wl_shell interface
* The Wayland visual API went away and instead you set up the EGL surface
appropriately
* The message handling was refined to reflect the current behaviour -
now-obsolete comments were removed and the remaining comments updated
Reviewed-by: Robert Bragg <robert@linux.intel.com>
There was a comment implying that if an rgba config has been requested
but no suitable config was found then we would automatically fall back
to an rgb config instead. Actually if no rgba visual is found we simply
fail without any automatic fallback because Cogl is not in a good
position to judge whether automatic fallbacks are acceptable for higher
level APIs such as Clutter.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
On GLES2, we need to specify an array size for the texture coord
varying array. Previously this size would be decided in one of the
following ways:
- For generated vertex shaders it is always the number of layers in
the pipeline.
- For generated fragment shaders it is the highest sampled texture
unit in the pipeline or the number of attributes supplied by the
primitive, whichever is higher.
- For user shaders it is usually the number of attributes supplied by
the primitive. However, if the application tries to compile the
shader and query the result before using it, it will always be at
least 4.
These shaders can quite easily end up with different values for the
declaration, which makes the program fail to link. This patch changes it so that
all of the shaders are generated with the maximum of the number of
texture attributes supplied by the primitive and the number of layers
in the pipeline. If this value changes then the shaders are
regenerated, including user shaders. That way all of the shaders will
always have the same value.
https://bugzilla.gnome.org/show_bug.cgi?id=662184
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This documents that cogl_texture_get_rowstride() is deprecated (or
rather it was a mistake that the api was ever published) and also
clarifies the rowstride argument documentation for
cogl_texture_get_data() to explain how it's automatically calculated
when 0 is passed. This should help avoid misleading people into
thinking that cogl_texture_get_rowstride() is an appropriate way to
get a valid rowstride to pass there.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Xlib headers define many trivially named objects which can later cause
name collision problems when only the cogl.h header is included in a
program or library. Xlib headers are now only pulled in by explicitly
including the standalone header cogl-xlib.h.
https://bugzilla.gnome.org/show_bug.cgi?id=661174
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Instead of emulating _CLAMP_TO_EDGE in cogl-primitives.c we now defer to
cogl_meta_texture_foreach_in_region() to support _CLAMP_TO_EDGE for us.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
cogl_meta_texture_foreach_in_region() now directly supports
CLAMP_TO_EDGE wrap modes. This means the cogl_rectangle code will be
able to build on this and makes the logic accessible to anyone
developing custom primitives that need to support meta textures.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
When associating indices with a CoglPrimitive you are now forced to
specify the number of indices that should be read when drawing.
It's easy to forget to call cogl_primitive_set_n_vertices() after
associating indices with a primitive (and in any case someone could be
led to believe that Cogl can determine that implicitly somehow) so this
should avoid a lot of mistakes when using the API.
We'd expect that setting indices and updating the n_vertices property
would go hand in hand 99% of the time anyway so this change should
be more convenient as well as less error prone.
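For example (a sketch; the exact cogl_indices_new() constructor
arguments are quoted from memory and may differ):

static void
example_set_quad_indices (CoglContext *ctx, CoglPrimitive *primitive)
{
  /* Two triangles forming one quad */
  static const unsigned char quad_indices[] = { 0, 1, 2, 0, 2, 3 };

  CoglIndices *indices =
    cogl_indices_new (ctx,
                      COGL_INDICES_TYPE_UNSIGNED_BYTE,
                      quad_indices,
                      6 /* n_indices */);

  /* The number of indices to read is now given together with the
   * indices themselves so n_vertices no longer needs to be set
   * separately. */
  cogl_primitive_set_indices (primitive, indices, 6);
}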
This patch adds some documentation for cogl_primitive_set_indices and
cogl_primitive_get/set_n_vertices. It also tries to clarify how the
CoglPrimitive:n_vertices property is updated and what that property
means in relation to other functions too.
https://bugzilla.gnome.org/show_bug.cgi?id=661019
Reviewed-by: Neil Roberts <neil@linux.intel.com>
If a normalize factor of 1 is passed then we don't need to normalize the
starting point to find the closest point equivalent to 0.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This exposes cogl_sub_texture_new() and cogl_is_sub_texture() as
experimental public API. Previously sub-textures were only exposed via
cogl_texture_new_from_sub_texture() so there wasn't a corresponding
CoglSubTexture type. A CoglSubTexture is a high-level texture defined
as a sub-region of some other parent texture. CoglSubTextures implement
the CoglMetaTexture interface, which can be used to manually handle
texture repeating.
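For example (a sketch):

static CoglSubTexture *
example_make_region (CoglContext *ctx, CoglTexture *parent_texture)
{
  /* A 64x64 region starting at (16, 16) within the parent texture */
  return cogl_sub_texture_new (ctx,
                               parent_texture,
                               16, 16,  /* x, y within the parent */
                               64, 64); /* width, height */
}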
Reviewed-by: Neil Roberts <neil@linux.intel.com>
It's useful to be able to query back the number of
point_samples_per_pixel that may have previously been chosen using
cogl_framebuffer_set_samples_per_pixel().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This exposes CoglTextureRectangle in the experimental cogl 2.0 api. For
now we just expose a single constructor,
cogl_texture_rectangle_new_with_size(), but we can add more later.
This is part of ongoing work to improve our texture APIs with more
emphasis on providing low-level access to the varying semantics of
different texture types understood by the gpu instead of only trying to
present a lowest common denominator api.
CoglTextureRectangle is notably useful for never being restricted to
power-of-two sizes and for being sampled with non-normalized texture
coordinates, which can be convenient for use as lookup tables in GLSL
due to not needing separate uniforms for mapping normalized coordinates to
texels. Unlike CoglTexture2D though rectangle textures can't have a
mipmap and they only support the _CLAMP_TO_EDGE wrap mode.
Applications wanting to use CoglTextureRectangle should first check
cogl_has_feature (COGL_FEATURE_ID_TEXTURE_RECTANGLE).
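For example (a sketch; the exact constructor arguments, such as the
internal format and GError parameters, are quoted from memory and may
differ):

static CoglTextureRectangle *
example_make_lookup_table (CoglContext *ctx)
{
  GError *error = NULL;
  CoglTextureRectangle *lut;

  if (!cogl_has_feature (ctx, COGL_FEATURE_ID_TEXTURE_RECTANGLE))
    return NULL;

  /* A 256x1 table sampled with non-normalized coordinates 0..255 */
  lut = cogl_texture_rectangle_new_with_size (ctx,
                                              256, 1,
                                              COGL_PIXEL_FORMAT_RGBA_8888,
                                              &error);
  if (!lut)
    g_warning ("Failed to create rectangle texture: %s", error->message);

  return lut;
}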
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Since we've had several developers from admirable projects say they
would like to use Cogl but would really prefer not to pull in
gobject, gmodule and glib as extra dependencies we are investigating if
we can get to the point where glib is only an optional dependency.
Actually we feel like we only make minimal use of glib anyway, so it may
well be quite straightforward to achieve this.
This adds a --disable-glib configure option that can be used to disable
features that depend on glib.
Actually --disable-glib doesn't strictly disable glib at this point
because it's more helpful if cogl continues to build as we make
incremental progress towards this.
The first use of glib that this patch tackles is the use of
g_return_val_if_fail and g_return_if_fail which have been replaced with
equivalent _COGL_RETURN_VAL_IF_FAIL and _COGL_RETURN_IF_FAIL macros.
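The replacement macros are conceptually along these lines (a sketch;
the real definitions differ in detail and the non-glib branch assumes
stdio.h is available):

#ifdef COGL_HAS_GLIB_SUPPORT

#define _COGL_RETURN_IF_FAIL(EXPR) g_return_if_fail (EXPR)
#define _COGL_RETURN_VAL_IF_FAIL(EXPR, VAL) g_return_val_if_fail (EXPR, VAL)

#else /* COGL_HAS_GLIB_SUPPORT */

#define _COGL_RETURN_IF_FAIL(EXPR) do {                               \
    if (!(EXPR))                                                      \
      {                                                               \
        fprintf (stderr, "file %s: line %d: assertion `%s' failed\n", \
                 __FILE__, __LINE__, #EXPR);                          \
        return;                                                       \
      }                                                               \
  } while (0)

#define _COGL_RETURN_VAL_IF_FAIL(EXPR, VAL) do {                      \
    if (!(EXPR))                                                      \
      {                                                               \
        fprintf (stderr, "file %s: line %d: assertion `%s' failed\n", \
                 __FILE__, __LINE__, #EXPR);                          \
        return (VAL);                                                 \
      }                                                               \
  } while (0)

#endif /* COGL_HAS_GLIB_SUPPORT */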
Reviewed-by: Neil Roberts <neil@linux.intel.com>
When printing a matrix we aim to print out the matrix type but we
weren't checking the flags first to see if the type is valid. We now
check for the DIRTY_TYPE flag and, if it is not set, we also validate
that the matrix type isn't out of range.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
In cogl_matrix_look_at we have a tmp CoglMatrix allocated on the stack
but we weren't initializing its flags before passing it to
cogl_matrix_translate which meant if we were using COGL_DEBUG=matrices
we would end up trying to print out an invalid matrix type resulting in
a crash when overrunning an array of type names.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This factors out the CoglOnscreen code from cogl-framebuffer.c so we now
have cogl-onscreen.c, cogl-onscreen.h and cogl-onscreen-private.h.
Notably some of the functions pulled out are currently namespaced as
cogl_framebuffer but we know we are planning on renaming them to be in
the cogl_onscreen namespace; such as cogl_framebuffer_swap_buffers().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This adds a COGL_PIPELINE_WRAP_MODE_MIRRORED_REPEAT enum value so that mirrored
texture repeating can be used. This also adds support for emulating the
MIRRORED_REPEAT mode via the cogl-spans API so it can also be used with
meta textures such as sliced and atlas textures.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
CoglMetaTexture is an interface for dealing with high level textures
that may be comprised of one or more low-level textures internally. The
interface allows the development of primitive drawing APIs that can draw
with high-level textures (such as atlas textures) even though the
GPU doesn't natively understand these texture types.
There is currently just one function that's part of this interface:
cogl_meta_texture_foreach_in_region() which allows an application to
resolve the internal, low-level textures of a high-level texture.
cogl_rectangle() uses this API for example so that it can easily emulate
the _REPEAT wrap mode for textures that the hardware can't natively
handle repeating of.
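A sketch of how a custom primitive might use it (the callback
signature here is quoted from memory and may not match exactly):

static void
example_emit_slice (CoglTexture *sub_texture,
                    const float *sub_texture_coords,
                    const float *meta_coords,
                    void *user_data)
{
  /* Emit one quad textured with sub_texture using sub_texture_coords;
   * meta_coords describe which region of the high-level texture this
   * slice covers. */
}

static void
example_draw_repeated (CoglTexture *texture)
{
  /* Resolve the low-level textures covering the virtual region
   * (0,0)-(2,2), i.e. the texture repeated twice in each direction */
  cogl_meta_texture_foreach_in_region (COGL_META_TEXTURE (texture),
                                       0.0f, 0.0f, 2.0f, 2.0f,
                                       COGL_PIPELINE_WRAP_MODE_REPEAT,
                                       COGL_PIPELINE_WRAP_MODE_REPEAT,
                                       example_emit_slice,
                                       NULL /* user_data */);
}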
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Instead of using integers to represent spans we now use floats.
This means we are no longer forced to iterate using non-normalized
coordinates so we should hopefully be able to avoid numerous redundant
unnormalize/normalize steps when using the spans api.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Since we can assume that all slices are CoglTexture2D textures we don't
need to chain on our implementation of _foreach_sub_texture_in_region
by calling _cogl_texture_foreach_sub_texture_in_region() for each slice.
We now simply determine the normalized virtual coordinates for the
current span inline and call the given callback inline too.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Currently features are represented as bits in a 32-bit mask so we
obviously can't have more than 32 features with that approach. The new
approach is to use the COGL_FLAGS_ macros, which let us handle bitmasks
without a size limit, and to change the public API to accept individual
feature enums instead of a mask. This way there is no limit on the
number of features we can add to Cogl.
Instead of using cogl_features_available() there is a new
cogl_has_feature() function and for checking multiple features there is
cogl_has_features() which takes a zero terminated vararg list of
features.
In addition to being able to check for individual features this also
adds a way to query all the features currently available via
cogl_foreach_feature() which will call a callback for each feature.
Since the new functions take an explicit context pointer there is also
no longer any ambiguity over when users can first start to query
features.
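For example (a sketch of typical usage; the exact feature enums shown
are illustrative):

#include <stdio.h>

static void
example_print_feature (CoglFeatureID feature, void *user_data)
{
  printf ("feature %d is available\n", feature);
}

static void
example_check_features (CoglContext *ctx)
{
  /* Check a single feature */
  if (cogl_has_feature (ctx, COGL_FEATURE_ID_TEXTURE_NPOT))
    {
      /* take the npot texture path */
    }

  /* Check several features at once; the list is zero terminated */
  if (cogl_has_features (ctx,
                         COGL_FEATURE_ID_GLSL,
                         COGL_FEATURE_ID_OFFSCREEN,
                         0))
    {
      /* take the fancy path */
    }

  /* Enumerate everything the context supports */
  cogl_foreach_feature (ctx, example_print_feature, NULL);
}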
Reviewed-by: Neil Roberts <neil@linux.intel.com>