Commit Graph

4 Commits

Neil Roberts
51f3e28c1f bitmap: Don't try to token paste the typenames from stdint.h
Previously the functions for packing and unpacking pixels were
generated by token pasting together a function name along with its
type, like the following:

 _cogl_pack_ ## uint8_t

Then later in cogl-bitmap-conversion.c it would directly refer to the
function names without token pasting.

However, this wouldn't work if the system headers define the stdint
types using #defines instead of typedefs: in that case the function
name generated using token pasting would get the expanded type name,
but the direct reference that doesn't use token pasting wouldn't.

This patch adds an extra macro, passed to the cogl-bitmap-packing.h
header, which contains just the type size. That way the function can
be defined like this instead:

 _cogl_pack_ ## 8

That should prevent it from hitting problems with #defined types.
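
In isolation the pattern looks like the minimal sketch below. The
PASTE helpers and the component_size/component_type macro names are
illustrative stand-ins rather than Cogl's exact spellings:

 #include <stdint.h>
 #include <stdio.h>

 /* Indirect paste helper (in the style of GLib's G_PASTE): the
  * extra level makes the preprocessor expand the arguments before
  * pasting them together. */
 #define PASTE_ARGS(a, b) a ## b
 #define PASTE(a, b) PASTE_ARGS (a, b)

 /* Simulates one inclusion of the packing header. Pasting the size
  * rather than the type name means the generated symbol is always
  * _cogl_pack_8, even if uint8_t is a #define instead of a
  * typedef. */
 #define component_type uint8_t
 #define component_size 8

 static void
 PASTE (_cogl_pack_, component_size) (const component_type *src,
                                      int n_components)
 {
   int i;
   for (i = 0; i < n_components; i++)
     printf ("%u\n", (unsigned int) src[i]);
 }

 #undef component_type
 #undef component_size

 int
 main (void)
 {
   uint8_t data[3] = { 1, 2, 3 };

   /* Callers refer to _cogl_pack_8 directly, with no token pasting
    * and no dependence on how stdint.h spells uint8_t. */
   _cogl_pack_8 (data, 3);
   return 0;
 }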

https://bugzilla.gnome.org/show_bug.cgi?id=691945

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit d6b5d7085b004ebd48c1543b820331802395ee63)
2013-01-22 18:00:11 +00:00
Damien Lespiau
87bc616d34 framebuffer: Support texture based depth buffers
This commit introduces some new framebuffer api to enable texture
based depth buffers for a framebuffer (currently only supported for
offscreen framebuffers) and, once the framebuffer is allocated, to
retrieve the depth buffer as a texture for further usage, say, to
implement shadow mapping.

The API works as follows (a short usage sketch follows the list):
  * Before the framebuffer is allocated, you can request that a depth
    texture be created with
    cogl_framebuffer_set_depth_texture_enabled()
  * cogl_framebuffer_get_depth_texture() can then be used to grab a
    CoglTexture once the framebuffer has been allocated.
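
A minimal usage sketch, assuming fb is an offscreen CoglFramebuffer
that hasn't been allocated yet (error handling elided):

 #include <cogl/cogl.h>

 static CoglTexture *
 get_shadow_map (CoglFramebuffer *fb)
 {
   /* Must be requested before the framebuffer is allocated */
   cogl_framebuffer_set_depth_texture_enabled (fb, TRUE);

   /* Allocation creates the depth texture along with the buffer */
   if (!cogl_framebuffer_allocate (fb, NULL))
     return NULL;

   /* ... render the shadow casters into fb here ... */

   /* The depth buffer is now available as a CoglTexture, e.g. for
    * sampling during a later shadow-mapping pass */
   return cogl_framebuffer_get_depth_texture (fb);
 }
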
2013-01-18 10:53:29 +00:00
Robert Bragg
54735dec84 Switch use of primitive glib types to c99 equivalents
The coding style has for a long time said to avoid using redundant
glib data types such as gint or gchar etc. because we feel that they
make the code look unnecessarily foreign to developers coming from
outside the Gnome developer community.

Note: when we tried to find the historical rationale for these types,
we found that they were apparently only added for consistent syntax
highlighting, which didn't seem that compelling.

Up until now we have continued to use some of the platform-specific
types such as gint{8,16,32,64} and gsize, but this patch switches us
over to the standard c99 equivalents so we can further ensure that
our code looks familiar to the widest range of C developers who might
potentially contribute to Cogl.

So instead of the gint{8,16,32,64} and guint{8,16,32,64} types, this
switches all Cogl code to the c99 int{8,16,32,64}_t and
uint{8,16,32,64}_t types.

Instead of gsize we now use size_t.

For now we are not going to use the c99 _Bool type and instead we have
introduced a new CoglBool type to use instead of gboolean.
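
The substitution is mechanical; a small illustration (ExampleState is
a made-up struct, not actual Cogl code):

 #include <stdint.h>    /* int8_t, uint32_t, ... */
 #include <stddef.h>    /* size_t */
 #include <cogl/cogl.h> /* CoglBool */

 typedef struct
 {
   /* previously: gint8, guint32, gsize and gboolean */
   int8_t   depth;
   uint32_t n_vertices;
   size_t   buf_size;
   CoglBool dirty;
 } ExampleState;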

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 5967dad2400d32ca6319cef6cb572e81bf2c15f0)
2012-08-06 14:27:39 +01:00
Neil Roberts
908ba29be5 bitmap-fallback: Support converting all pixel format types
_cogl_bitmap_fallback_convert now supports converting to and from all
of the pixel formats, except that it continues to preserve the premult
status of the original bitmap. The pixels are unpacked into a
temporary RGBA buffer with either 8 bits or 16 bits per component,
depending on whether the destination format uses more than 8 bits per
component (e.g. RGBA_1010102). The packing and unpacking code is kept
in a separate header which is included twice to generate the functions
needed for both sizes of unpacked data. The hope is that when
converting between two formats that are both 8-bit sized, such as
swizzling between BGRA and RGBA, the multiplications and divisions in
the code will be optimized out so the conversion shouldn't be too
inefficient. Previously the inner switch statement that decided which
conversion to use operated on only one pixel at a time, so it was
probably relatively slow.
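
The include-twice pattern is easy to sketch. The stand-in below stamps
out the same body at both component sizes with a macro rather than a
header inclusion, which has the same effect in a single file; the
names are illustrative, not Cogl's actual symbols:

 #include <stdint.h>

 /* One shared body, instantiated once per component size. In Cogl
  * the body lives in cogl-bitmap-packing.h and is included twice. */
 #define DEFINE_UNPACK(size, component_type)                           \
   static void                                                         \
   _cogl_unpack_rgba_##size (const uint8_t *src,                       \
                             component_type *dst,                      \
                             int n_pixels)                             \
   {                                                                   \
     int i;                                                            \
     /* scale each 8-bit component to the destination range */         \
     for (i = 0; i < n_pixels * 4; i++)                                \
       dst[i] = (component_type) (src[i] * ((1u << size) - 1) / 255);  \
   }

 /* For the 8-bit instantiation the scale factor is 255 / 255, which
  * the compiler can fold away, so 8-bit to 8-bit swizzles such as
  * BGRA <-> RGBA stay cheap. */
 DEFINE_UNPACK (8, uint8_t)
 DEFINE_UNPACK (16, uint16_t)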

Reviewed-by: Robert Bragg <robert@linux.intel.com>
2012-03-05 17:44:12 +00:00