Commit Graph

3 Commits

Author SHA1 Message Date
Damien Lespiau
87bc616d34 framebuffer: Support texture based depth buffers
This commit introduces new framebuffer API for enabling texture
based depth buffers on a framebuffer (currently only supported for
offscreen framebuffers) and, once the framebuffer has been
allocated, for retrieving the depth buffer as a texture for further
use, for example to implement shadow mapping.

The API works as follows (see the usage sketch below):
  * Before the framebuffer is allocated, you can request that a depth
    texture is created with
    cogl_framebuffer_set_depth_texture_enabled()
  * cogl_framebuffer_get_depth_texture() can then be used to grab a
    CoglTexture once the framebuffer has been allocated.
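
A minimal usage sketch of the two calls (the function name here is
made up, and the offscreen/allocate/error details are assumptions
about the surrounding Cogl API of this era rather than part of this
commit):

  /* Sketch only: create an offscreen framebuffer whose depth buffer
   * is backed by a texture. `color_tex` is an existing CoglTexture
   * used as the color buffer. */
  static CoglTexture *
  create_depth_capture_fb (CoglTexture *color_tex,
                           CoglFramebuffer **fb_out)
  {
    CoglOffscreen *offscreen = cogl_offscreen_new_to_texture (color_tex);
    CoglFramebuffer *fb = COGL_FRAMEBUFFER (offscreen);
    CoglError *error = NULL;

    /* Must be requested before the framebuffer is allocated */
    cogl_framebuffer_set_depth_texture_enabled (fb, TRUE);

    if (!cogl_framebuffer_allocate (fb, &error))
      return NULL;

    *fb_out = fb;

    /* Once allocated, the depth buffer can be retrieved as a texture
     * and sampled later, e.g. in a shadow mapping pass */
    return cogl_framebuffer_get_depth_texture (fb);
  }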
2013-01-18 10:53:29 +00:00
Robert Bragg
54735dec84 Switch use of primitive glib types to c99 equivalents
The coding style has long said to avoid redundant glib data types
such as gint or gchar, because we feel that they make the code look
unnecessarily foreign to developers coming from outside the Gnome
developer community.

Note: when we tried to find the historical rationale for these types,
we found that they were apparently only added for consistent syntax
highlighting, which didn't seem that compelling.

Up until now we have continued to use some of the platform-specific
types such as gint{8,16,32,64} and gsize, but this patch switches us
over to the standard C99 equivalents so we can further ensure that
our code looks familiar to the widest range of C developers who might
potentially contribute to Cogl.

So instead of the gint{8,16,32,64} and guint{8,16,32,64} types, this
switches all Cogl code to the C99 int{8,16,32,64}_t and
uint{8,16,32,64}_t types.

Instead of gsize we now use size_t.

For now we are not going to use the C99 _Bool type; instead we have
introduced a new CoglBool type to replace gboolean.
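
To illustrate the substitution on a hypothetical declaration (the
function name is made up; only the type spellings change):

  /* Before the change (glib types): */
  gboolean
  copy_block (guint8 *dst, const guint8 *src, gsize len);

  /* After the change (C99 types plus CoglBool): */
  CoglBool
  copy_block (uint8_t *dst, const uint8_t *src, size_t len);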

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 5967dad2400d32ca6319cef6cb572e81bf2c15f0)
2012-08-06 14:27:39 +01:00
Neil Roberts
908ba29be5 bitmap-fallback: Support converting all pixel format types
_cogl_bitmap_fallback_convert now supports converting to and from all
of the pixel formats, except that it continues to preserve the premult
status of the original bitmap. The pixels are unpacked into a
temporary buffer that is either 8-bit or 16-bit per component RGBA,
depending on whether the destination format is going to use more than
8 bits per component (e.g. RGBA_1010102). The packing and unpacking
code is stored in a separate header which is included twice to
generate the functions needed for both sizes of unpacked data. The
hope is that when converting between two formats that both use 8-bit
components, such as swizzling between BGRA and RGBA, the
multiplications and divisions in the code will be optimized out and
the conversion shouldn't be too inefficient. Previously, the inner
switch statement that decided which conversion to use operated on one
pixel at a time, so it was probably relatively slow.
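
A rough sketch of the include-twice pattern described above (the file
and function names are illustrative, not the actual Cogl ones):

  /* unpack-impl.h: expects UNPACK_TYPE and UNPACK_BITS to be defined
   * by the includer; token pasting gives each inclusion its own
   * function name. */
  #define PASTE_(a, b) a ## b
  #define PASTE(a, b) PASTE_ (a, b)

  static void
  PASTE (unpack_pixel_rgba_, UNPACK_BITS) (CoglPixelFormat format,
                                           const uint8_t *src,
                                           UNPACK_TYPE *dst)
  {
    /* per-format switch expanding each component to UNPACK_BITS; for
     * formats already using 8 bits per component the scaling maths
     * should constant-fold away */
  }

  #undef PASTE
  #undef PASTE_

  /* in the bitmap conversion .c file: */
  #define UNPACK_TYPE uint8_t
  #define UNPACK_BITS 8
  #include "unpack-impl.h"    /* generates unpack_pixel_rgba_8 () */
  #undef UNPACK_TYPE
  #undef UNPACK_BITS

  #define UNPACK_TYPE uint16_t
  #define UNPACK_BITS 16
  #include "unpack-impl.h"    /* generates unpack_pixel_rgba_16 () */
  #undef UNPACK_TYPE
  #undef UNPACK_BITS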

Reviewed-by: Robert Bragg <robert@linux.intel.com>
2012-03-05 17:44:12 +00:00