Previously the private feature flags were stored in an enum and we
already had 31 flags. Adding the 32nd flag would presumably introduce
-2³¹ as one of the values (overflowing a signed 32-bit int), which
might cause problems. To avoid this we'll just use a fixed-size array
of longs and use indices for the enum values, like we do for the
public features.
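As a rough illustration of the scheme (the names here are
hypothetical; the public features already use the same pattern):

  #define N_LONGS_FOR_SIZE(size) \
    (((size) + sizeof (unsigned long) * 8 - 1) / \
     (sizeof (unsigned long) * 8))

  typedef enum
  {
    PRIVATE_FEATURE_FOO, /* an index, not a bitmask value */
    PRIVATE_FEATURE_BAR,
    /* ...a 33rd entry is harmless here... */
    N_PRIVATE_FEATURES
  } PrivateFeature;

  unsigned long features[N_LONGS_FOR_SIZE (N_PRIVATE_FEATURES)];

  static inline void
  flags_set (unsigned long *flags, unsigned int flag)
  {
    flags[flag / (sizeof (unsigned long) * 8)] |=
      1UL << (flag % (sizeof (unsigned long) * 8));
  }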
A slight complication with this is in the CoglDriverDescription,
where we were previously using a statically initialised value to
describe the set of features that the driver supports. We can't
easily do this with the flags array, so instead the features are
stored in a fixed-size array of indices.
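Roughly, then, the driver description trades a statically initialised
bitmask for a statically initialisable index list (field names here
are hypothetical):

  typedef struct
  {
    /* ... */
    /* features the driver supports, as enum indices; converted
     * into the runtime flags array when the driver is set up */
    CoglPrivateFeature features[N_PRIVATE_FEATURES];
  } CoglDriverDescription;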
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d94cb984e3c93630f3c2e6e3be9d189672aa20f3)
Conflicts:
cogl/cogl-context-private.h
cogl/cogl-context.c
cogl/cogl-private.h
cogl/cogl-renderer.c
cogl/driver/gl/cogl-pipeline-opengl.c
cogl/driver/gl/gl/cogl-driver-gl.c
cogl/driver/gl/gl/cogl-pipeline-progend-fixed-arbfp.c
cogl/driver/gl/gles/cogl-driver-gles.c
cogl/driver/nop/cogl-driver-nop.c
cogl-texture-private.h needs to be included since it declares
_cogl_texture_needs_premult_conversion, so that we can avoid implicit
declaration warnings for that symbol (aka C4013 on MSVC).
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 4afe9dc1fea646e2a9576f9a0dbd1ffafa40485b)
This removes the gl centric _cogl_texture_prepare_for_upload api from
cogl-texture.c and instead adds a _cogl_bitmap_convert_for_upload() api
which everything now uses. GL specific code that needed the gl
internal/format/type enums returned by _cogl_texture_prepare_for_upload
now uses ->pixel_format_to_gl directly.
Since there was a special case optimization in
cogl_texture_new_from_file that aimed to avoid copying the temporary
bitmap created for the given file and to allow conversions to happen
in-place, the new _cogl_bitmap_convert_for_upload() api supports
converting in place depending on a 'can_convert_in_place' argument.
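For reference, the new api presumably has a shape along these lines
(signature assumed from the description above):

  CoglBitmap *
  _cogl_bitmap_convert_for_upload (CoglBitmap *src_bmp,
                                   CoglPixelFormat internal_format,
                                   CoglBool can_convert_in_place,
                                   CoglError **error);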
This ability to convert bitmaps in-place has been integrated across the
different components as appropriate.
In updating cogl-texture-2d-sliced.c we were also able to remove a
number of other GL specific parts of how spans are set up.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit e190dd23c655da34b9c5c263a9f6006dcc0413b0)
Conflicts:
cogl/cogl-auto-texture.c
cogl/cogl.symbols
Previously the functions for packing and unpacking pixels were
generated by token pasting together a function name along with its
type, like the following:
_cogl_pack_ ## uint8_t
Then later in cogl-bitmap-conversion.c it would directly refer to the
function names without token pasting.
This wouldn't work however if the system headers define the stdint
types using #defines instead of typedefs, because in that case the
function name generated using token pasting would get the expanded
type name while the reference that doesn't use token pasting wouldn't.
This patch adds an extra macro passed to the cogl-bitmap-packing.h
header which just has the type size. That way the function can be
defined like this instead:
_cogl_pack_ ## 8
That should prevent it from hitting problems with #defined types.
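A minimal sketch of the failure mode and the fix (the macro names
here are illustrative, not the actual cogl-bitmap-packing.h
contents):

  #define CONCAT(a, b) a ## b
  #define PASTE(a, b) CONCAT (a, b)

  /* If a system header does `#define uint8_t unsigned char` then
   * PASTE (_cogl_pack_, uint8_t) expands the type name first and
   * yields the broken `_cogl_pack_unsigned char`, which no longer
   * matches a literal reference to _cogl_pack_uint8_t.
   *
   * Pasting with the component size is immune, because `8` is not
   * a macro: */
  PASTE (_cogl_pack_, 8) /* -> _cogl_pack_8, always */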
https://bugzilla.gnome.org/show_bug.cgi?id=691945
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d6b5d7085b004ebd48c1543b820331802395ee63)
_cogl_bitmap_new_with_malloc_buffer() now takes a CoglError for throwing
exceptional errors and all callers have been updated to pass through
any application error pointer as appropriate.
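The updated internal entry point presumably looks something like this
(signature assumed):

  CoglBitmap *
  _cogl_bitmap_new_with_malloc_buffer (CoglContext *context,
                                       unsigned int width,
                                       unsigned int height,
                                       CoglPixelFormat format,
                                       CoglError **error);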
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 67cad9c0eb5e2650b75aff16abde49f23aabd0cc)
This allows apps to catch out-of-memory errors when allocating textures.
Textures can be pretty huge at times and so it's quite possible for an
application to try to allocate more memory than is available. It's also
very possible that the application can take some action in response to
reduce memory pressure (such as freeing up texture caches, perhaps), so
we shouldn't just automatically abort like we do for trivial heap
allocations.
These public functions now take a CoglError argument so applications can
catch out of memory errors:
cogl_buffer_map
cogl_buffer_map_range
cogl_buffer_set_data
cogl_framebuffer_read_pixels_into_bitmap
cogl_pixel_buffer_new
cogl_texture_new_from_data
cogl_texture_new_from_bitmap
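For example, catching an allocation failure when creating a texture
might look like this (a sketch assuming the post-change signature,
with hypothetical width/height/rowstride/data variables):

  CoglError *error = NULL;
  CoglTexture *tex =
    cogl_texture_new_from_data (width, height,
                                COGL_TEXTURE_NONE,
                                COGL_PIXEL_FORMAT_RGBA_8888,
                                COGL_PIXEL_FORMAT_ANY,
                                rowstride,
                                data,
                                &error);
  if (!tex)
    {
      /* probably OOM: free up texture caches and retry, or
       * degrade gracefully instead of aborting */
      cogl_error_free (error);
    }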
Note: we've been quite conservative with how many apis we let throw OOM
CoglErrors since we don't really want to put a burden on developers to
be checking for errors with every cogl api call. So long as there is
some lower level api for apps to use that lets them catch OOM errors
for everything necessary, that's enough, and we don't have to make the
more convenient apis more awkward to use.
The main focus is on bitmaps and texture allocations since they
can be particularly large and prone to failing.
A new cogl_attribute_buffer_new_with_size() function has been added in
case developers need to catch OOM errors when allocating attribute
buffers: they can first use cogl_attribute_buffer_new_with_size()
(which doesn't take a CoglError) followed by cogl_buffer_set_data(),
which will lazily allocate the buffer storage and report OOM errors.
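A sketch of that two-step pattern (assuming the signatures described
above; ctx and vertices are hypothetical):

  CoglError *error = NULL;
  CoglAttributeBuffer *buffer =
    cogl_attribute_buffer_new_with_size (ctx, sizeof (vertices));

  /* storage is allocated lazily, so OOM is reported here: */
  if (!cogl_buffer_set_data (COGL_BUFFER (buffer),
                             0, /* offset */
                             vertices, sizeof (vertices),
                             &error))
    {
      /* out of memory: shrink caches and retry, or bail out */
      cogl_error_free (error);
    }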
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f7735e141ad537a253b02afa2a8238f96340b978)
Note: since we can't break the API for Cogl 1.x, the main purpose of
cherry-picking this patch is to keep in line with changes on the
master branch so that we can continue to easily cherry-pick patches.
All the api changes relating to stable apis released on the 1.12
branch have been reverted as part of cherry-picking this patch, so
this mostly just applies the internal plumbing changes that enable us
to correctly propagate OOM errors.
This commit introduces some new framebuffer api for enabling texture
based depth buffers for a framebuffer (currently only supported for
offscreen framebuffers) and, once allocated, for retrieving the depth
buffer as a texture for further usage, say, to implement shadow
mapping.
The API works as follows:
* Before the framebuffer is allocated, you can request that a depth
texture is created with
cogl_framebuffer_set_depth_texture_enabled()
* cogl_framebuffer_get_depth_texture() can then be used to grab a
CoglTexture once the framebuffer has been allocated.
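Putting those together, a usage sketch (`tex` and error handling are
assumed for brevity):

  CoglOffscreen *offscreen = cogl_offscreen_new_to_texture (tex);
  CoglFramebuffer *fb = COGL_FRAMEBUFFER (offscreen);

  cogl_framebuffer_set_depth_texture_enabled (fb, TRUE);

  if (cogl_framebuffer_allocate (fb, NULL))
    {
      CoglTexture *depth_texture =
        cogl_framebuffer_get_depth_texture (fb);
      /* e.g. sample depth_texture in a shadow mapping pass */
    }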
The coding style has for a long time said to avoid using redundant glib
data types such as gint or gchar etc, because we feel that they make
the code look unnecessarily foreign to developers coming from outside
of the Gnome developer community.
Note: when we tried to find the historical rationale for the types we
just found that they were apparently only added for consistent syntax
highlighting, which didn't seem that compelling.
Up until now we have been continuing to use some of the platform
specific types such as gint{8,16,32,64} and gsize, but this patch
switches us over to the standard c99 equivalents so we can further
ensure that our code looks familiar to the widest range of C developers
who might potentially contribute to Cogl.
So instead of the gint{8,16,32,64} and guint{8,16,32,64} types, this
switches all Cogl code to use the int{8,16,32,64}_t and
uint{8,16,32,64}_t c99 types.
Instead of gsize we now use size_t.
For now we are not going to use the c99 _Bool type; instead we have
introduced a new CoglBool type to use in place of gboolean.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 5967dad2400d32ca6319cef6cb572e81bf2c15f0)
This creates a CoglBitmap which points into an existing buffer in
system memory. That way it can be used to create a texture or to read
pixel data into. The function replaces the existing internal function
_cogl_bitmap_new_from_data but removes the destroy notify callback.
If the application wants notification of destruction it can just use
the cogl_object_set_user_data function as normal. Internally there is
now a convenience function to create a bitmap for system memory and
automatically free the buffer using that mechanism.
The name of the function is inspired by
cairo_image_surface_create_for_data which has similar semantics.
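Presumably that gives a usage pattern along these lines (the function
name is inferred from the cairo analogy and the signature is assumed):

  static CoglUserDataKey pixels_key;

  uint8_t *pixels = g_malloc (height * rowstride);
  CoglBitmap *bmp =
    cogl_bitmap_new_for_data (ctx,
                              width, height,
                              COGL_PIXEL_FORMAT_RGBA_8888,
                              rowstride,
                              pixels);

  /* replaces the old destroy notify: free the buffer with the
   * bitmap via the normal user-data mechanism */
  cogl_object_set_user_data (COGL_OBJECT (bmp), &pixels_key,
                             pixels, g_free);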
Reviewed-by: Robert Bragg <robert@linux.intel.com>
If the fast-path in-place premult conversion can't be used then it will
now fall back to unpacking the buffer into a row of guint16s and using
the generic conversion.
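The generic path premultiplies on the unpacked 16-bit components,
roughly like this (an illustrative per-pixel sketch, not the actual
code):

  static inline void
  premult_pixel_16 (guint16 *p) /* p = { r, g, b, a } */
  {
    guint32 a = p[3];

    /* the multiply is done in 32 bits to avoid overflow */
    p[0] = (p[0] * a) / 65535;
    p[1] = (p[1] * a) / 65535;
    p[2] = (p[2] * a) / 65535;
  }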
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This adds _cogl_bitmap_convert_into_bitmap which is the same as
_cogl_bitmap_convert except that it writes into an existing bitmap
instead of allocating a new one. _cogl_bitmap_convert now just
allocates a buffer and calls the new function. This is used in
_cogl_read_pixels to avoid allocating a second intermediate buffer
when the pixel format to store in is not GL_RGBA.
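So the split presumably looks like this (signatures assumed):

  /* converts into a caller-provided bitmap */
  CoglBool
  _cogl_bitmap_convert_into_bitmap (CoglBitmap *src_bmp,
                                    CoglBitmap *dst_bmp,
                                    CoglError **error);

  /* allocates a destination and calls the function above */
  CoglBitmap *
  _cogl_bitmap_convert (CoglBitmap *src_bmp,
                        CoglPixelFormat dst_format,
                        CoglError **error);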
Reviewed-by: Robert Bragg <robert@linux.intel.com>
If we are going to unpack the data into a known format anyway we might
as well do the premult conversion at that point instead of delaying it
to be done in-place. This helps because not all formats with alpha
channels are
handled by the in-place premult conversion code. This removes the
_cogl_bitmap_convert_format_and_premult function so that now
_cogl_bitmap_convert is a completely general purpose function that can
convert from anything to anything. _cogl_bitmap_convert now includes a
fast path for when the base formats are the same and the premult
conversion can be handled with the in-place code so that we don't need
to unpack and can just copy the bitmap instead.
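Conceptually the general-purpose convert now decides between the two
paths like this (helper names are hypothetical):

  if (base_format (src_format) == base_format (dst_format) &&
      premult_diff_is_handled_in_place (src_format, dst_format))
    {
      /* fast path: plain copy plus in-place premult fixup */
    }
  else
    {
      /* general path: unpack each row, convert the premult status
       * and format, then pack into the destination */
    }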
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Previously the bitmap code was set up so that there could be an image
library used to convert between formats and then some 'fallback' code
when the image library can't handle the conversion. However there was
never any implementation of the conversion in the image library so the
fallback was always used. I don't think this split really makes sense
so this patch renames cogl-bitmap-fallback to cogl-bitmap-conversion
and removes the stub conversion functions in the image library.
Reviewed-by: Robert Bragg <robert@linux.intel.com>