This uses the G_GNUC_DEPRECATED macro to mark the
cogl_{texture,vertex_buffer,shader}_ref and unref APIs as deprecated.
Since this flagged that cogl-pango-display-list.c and
clutter-glx-texture-pixmap.c were still using the deprecated _ref/_unref
APIs, they have now been changed to use the cogl_handle_ref/unref API
instead.
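
For reference, this is roughly what the markup looks like in a header
(a minimal sketch; the real headers may additionally guard such
declarations behind a deprecation define):

/* Appending G_GNUC_DEPRECATED makes GCC warn at every call site
 * that still uses the old API. */
CoglHandle
cogl_texture_ref (CoglHandle handle) G_GNUC_DEPRECATED;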
The function prototypes for the primitives API were spread between
cogl-path.h and cogl-texture.h and should have been in a
cogl-primitives.h.
As well as shuffling the prototypes around into more sensible places,
this commit splits the cogl-path API out from cogl-primitives.c into a
new cogl-path.c.
We've had complaints that our Cogl code/headers are a bit "special" so
this is a first pass at tidying things up by giving them some
consistency. These changes are all consistent with how new code in Cogl
is being written, but the style isn't consistently applied across all
code yet.
There are two parts to this patch, but since each one required a large
amount of effort to maintain tidy indenting it made sense to combine the
changes to reduce the time spent re-indenting the same lines.
The first change is to use a consistent style for declaring function
prototypes in headers. Cogl headers now consistently use this style for
prototypes:
return_type
cogl_function_name (CoglType arg0,
                    CoglType arg1);
Not everyone likes this style, but it seems that most of the currently
active Cogl developers agree on it.
The second change is to constrain the use of redundant glib data types
in Cogl. Uses of gint, guint, gfloat, glong, gulong and gchar have all
been replaced with int, unsigned int, float, long, unsigned long and char
respectively. When talking about pixel data, use of guchar has been
replaced with guint8; otherwise unsigned char can be used.
The glib types that we continue to use for portability are gboolean,
gint{8,16,32,64}, guint{8,16,32,64} and gsize.
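
As a trivial before/after illustration of the substitution (hypothetical
snippet):

/* Before: */
gint width; gfloat x; gchar *name; guchar *pixels;

/* After: */
int width; float x; char *name; guint8 *pixels; /* pixel data: guint8 */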
The general intention is that Cogl should look palatable to the widest
range of C programmers, including those outside the GNOME community, so
we want to minimize the number of foreign-looking typedefs, especially
in the public API.
OpenGL is an implementation detail for Cogl so it's not appropriate to
expose OpenGL extensions through the Cogl API.
Note: Clutter is currently still using this API, because it is still
doing raw GL calls in ClutterGLXTexturePixmap, so this introduces a
couple of (legitimate) build warnings while compiling Clutter.
The signbit macro is defined in C99 so it should be available, but some
versions of GCC don't appear to define it by default. If it's not
available we can use a hack to test the bit directly.
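
A minimal sketch of the fallback idea, assuming IEEE 754 single
precision floats (the helper name here is illustrative, not Cogl's):

#include <stdint.h>
#include <string.h>

static inline int
float_sign_bit (float x)
{
  uint32_t bits;

  /* Copy the float's representation into an integer; with IEEE 754
   * the sign is the most significant bit. memcpy avoids
   * strict-aliasing problems. */
  memcpy (&bits, &x, sizeof (bits));
  return (bits >> 31) & 1;
}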
Two material layers can not be considered equal if they are using
different texture filtering modes. This was causing problems where
rectangles with different filters would end up batched together and
then rendered with the wrong filter mode.
The function modifies the pixels pointed to by p in place, so the
pointer can not be constant. The compiler was accepting this because the
modification is done from inline assembler.
Gtk-doc is reporting a lot of false positives in the unused-symbols
text file, mostly because of new private files that have been added to
Cogl but not to the gtk-doc ignore list for the Cogl API reference.
Once the false positives have been removed we have a couple of really
missing symbols that should be added to the cogl-sections.txt file.
The PixelFormat bit and mask #defines should not be used and are there
mostly for convenience, so we can push them to the "private" sub-section
of the API reference.
This pushes Cogl's API reference coverage to 100%.
_cogl_texture_driver_gen is needed to set the texture minification
mode to Cogl's default of GL_LINEAR. There was also a line to set this
in _cogl_texture_2d_new_with_size but it wasn't working because it was
called *before* the texture was bound. If the texture was later
rendered with the default material it would then end up with GL's
default mipmap filtering mode but without mipmaps, so it would render
white squares instead.
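
A sketch of the ordering issue in plain GL calls (not Cogl's actual
driver code): glTexParameteri() affects whichever texture is currently
bound, so the bind has to come first.

GLuint tex;

glGenTextures (1, &tex);
glBindTexture (GL_TEXTURE_2D, tex);
/* Only now does this affect the new texture; calling it before the
 * bind would have modified the previously bound texture instead. */
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);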
This adds a fast path for premultiplying an RGBA image using SSE2
instructions. SSE registers are 128-bit and we need at least 16 bits
per component for the intermediate result of the multiplication, so we
can do two pixels in parallel with one register. The function
interleaves 2 SSE registers to multiply 4 pixels in one function call
with the hope that this will pipeline better.
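
For illustration, a simplified sketch that premultiplies just two
pixels per call (the committed code interleaves two registers to handle
four; the function name and R,G,B,A byte order are assumptions):

#include <emmintrin.h>
#include <stdint.h>

static void
premult_two_pixels_sse2 (uint8_t *p)
{
  const __m128i zero = _mm_setzero_si128 ();
  /* 0x00ff in the two alpha lanes so that alpha is multiplied by 255
   * and therefore left unchanged by the divide below. */
  const __m128i amask = _mm_set_epi16 (0xff, 0, 0, 0, 0xff, 0, 0, 0);
  __m128i pix, alpha, t;

  /* Load two RGBA pixels and widen each byte to a 16-bit lane. */
  pix = _mm_loadl_epi64 ((const __m128i *) p);
  pix = _mm_unpacklo_epi8 (pix, zero);

  /* Broadcast each pixel's alpha across its four lanes. */
  alpha = _mm_shufflelo_epi16 (pix, _MM_SHUFFLE (3, 3, 3, 3));
  alpha = _mm_shufflehi_epi16 (alpha, _MM_SHUFFLE (3, 3, 3, 3));
  alpha = _mm_or_si128 (alpha, amask);

  /* (c*a + 128 + ((c*a + 128) >> 8)) >> 8 == round (c*a / 255) */
  t = _mm_mullo_epi16 (pix, alpha);
  t = _mm_add_epi16 (t, _mm_set1_epi16 (128));
  t = _mm_add_epi16 (t, _mm_srli_epi16 (t, 8));
  t = _mm_srli_epi16 (t, 8);

  /* Narrow back to bytes and store the two pixels. */
  _mm_storel_epi64 ((__m128i *) p, _mm_packus_epi16 (t, zero));
}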
http://bugzilla.openedhand.com/show_bug.cgi?id=1939
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
OpenGL ES has no PBO extension, so we fall back to using a malloc'ed
buffer. Make sure the OpenGL-only defines don't leak into the OpenGL ES
compilation.
First, let's add a new public feature called, surprisingly,
COGL_FEATURE_PBOS to check the availability of PBOs and provide a
fallback path when running on older GL implementations or on OpenGL ES.
In case the underlying OpenGL implementation does not provide PBOs, we
need a fallback path (a malloc'ed buffer). The CoglPixelBuffer
constructors will instantiate a subclass of CoglBuffer that handles
map/unmap and set_data() with a malloc'ed buffer.
The public feature is useful to check before using set_data() on a
buffer, as it will mean doing a memcpy() when PBOs are not supported (in
that case, it's better to create the texture directly instead of using a
CoglBuffer).
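
A sketch of how a caller might use the new check (assuming the usual
cogl_features_available() entry point):

if (cogl_features_available (COGL_FEATURE_PBOS))
  {
    /* set_data()/map() talk directly to an OpenGL pixel buffer
     * object, no extra copy */
  }
else
  {
    /* set_data() implies a memcpy() into a malloc'ed buffer, so
     * creating the texture directly may be cheaper here */
  }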
The only goal of using Cogl buffers is to use them to create
textures. cogl_texture_new_from_buffer() is the new symbol to create
textures out of buffers.
This subclass of CoglBuffer aims at wrapping PBOs or other system
surfaces like DRM buffer objects. Two constructors are available:
cogl_pixel_buffer_new() with a size when you only care about the size of
the buffer (such a buffer can be used to store the data of several
textures, such as the three planes of an I420 frame).
cogl_pixel_buffer_new_full() is more of a 1:1 mapping between the data and
an underlying surface, with the possibility of having access to a low
level memory buffer that may have a stride.
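
Putting the pieces together, usage might look roughly like this (a
sketch: the exact signatures of cogl_buffer_map() and
cogl_texture_new_from_buffer() are assumptions based on the
descriptions above):

CoglHandle buffer, texture;
guchar *data;

buffer = cogl_pixel_buffer_new (width * height * 4);
data = cogl_buffer_map (buffer, COGL_BUFFER_ACCESS_WRITE);
/* ... write width * height RGBA pixels into data ... */
cogl_buffer_unmap (buffer);

texture = cogl_texture_new_from_buffer (buffer,
                                        width, height,
                                        COGL_TEXTURE_NONE,
                                        COGL_PIXEL_FORMAT_RGBA_8888,
                                        COGL_PIXEL_FORMAT_ANY,
                                        0,  /* rowstride: tightly packed */
                                        0); /* offset into the buffer */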
Buffer objects are cool! This abstracts the buffer API first introduced
by GL_ARB_vertex_buffer_object and then extended to other objects.
The CoglBuffer abstract class is intended to be the base class of all
the buffer objects, letting the user map() buffers. If the underlying
implementation does not support buffer objects (or only supports VBOs
but not PBOs, for instance), fallback paths should be provided.
The only way the user has to set the mipmap filters is through the
material/layer API. This API defaults to GL_LINEAR/GL_LINEAR for the mag
and min filters. With the main use case of Cogl being 2D interfaces, it
makes sense to default to GL_LINEAR for the min filter.
When creating new textures, we did not set any filter on them, using
OpenGL's defaults: GL_NEAREST_MIPMAP_LINEAR for the min filter and
GL_LINEAR for the mag filter. This will make the driver allocate memory
for the mipmap tree, memory that will not be used in the nominal case
(as the material API defaults to GL_LINEAR).
This patch tries to ensure that the min filter is set to GL_LINEAR
before any glTexImage*() call is done on the texture by setting the
filter when generating new OpenGL handles.
Some GL functions have a return value that the GE() macro is not able to
handle. Let's define a new GE_RET() macro which will be able to handle
functions such as glMapBuffer().
While at it, remove the unused variadic dots from the GE() macro.
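
A sketch of the idea behind the new macro (simplified; not the macro's
actual definition):

#define GE_RET(ret, gl_code) G_STMT_START {                          \
    GLenum __err;                                                    \
    (ret) = (gl_code);                                               \
    while ((__err = glGetError ()) != GL_NO_ERROR)                   \
      g_warning ("%s failed with GL error 0x%04x", #gl_code, __err); \
  } G_STMT_END

/* e.g. GE_RET (data, glMapBuffer (GL_PIXEL_UNPACK_BUFFER,
 *                                 GL_WRITE_ONLY)); */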
* animator-parser:
docs: Describe the Animation definition syntax
animator: Provide a ClutterScript parser
animator: Allow retrieving the property type from a key
script: Use a node when resolving an animation mode
When we trashed the contents of the stencil buffer during
_cogl_path_fill_nodes we marked the clip stack state as dirty and
expected the clip stack code to clean up our glStencilFunc state.
The problem is that we only try to update the clip state during
_cogl_journal_init (when we flush the framebuffer state), which is only
called when the journal first gets something logged in it.
To make sure the stencil state is cleaned up we now also flush the
journal so that _cogl_journal_init will be called for the next logged
rectangle.
* origin/cwiiis-stage-resize:
[stage-x11] Set the default size differently
[stage] Set default size correctly
Revert "[x11] Don't set actor size on ConfigureNotify"
[x11] Don't set actor size on ConfigureNotify
[stage] Now that get_geometry works, use it
[stage-x11] make get_geometry always get geometry
[stage] Get the current size correctly
[stage] Set minimum width/height to 1x1
[stage] Add set/get_minimum_size
This adds three new texture backends.
- CoglTexture2D: This is a trimmed down version of CoglTexture2DSliced
which only supports a single texture and only works with the
GL_TEXTURE_2D target. The code is a lot simpler so it has less
overhead than dealing with slices. Cogl will use this wherever
possible.
- CoglSubTexture: This is used to get a CoglHandle to represent a
subregion of another texture. The texture can be used as if it was a
standalone texture but it does not need to copy the resources (see
the sketch after this list).
- CoglAtlasTexture: This collects RGB and RGBA textures into a single
GL texture with the aim of reducing texture state changes and
increasing batching. The backend will try to manage the atlas and
may move the textures around to close gaps in the texture. By
default all textures will be placed in the atlas.
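
For instance, with CoglSubTexture a region of an existing texture can
be aliased without copying any pixels (sketch; the constructor name
and signature are assumptions):

/* Wrap the 64x64 top-left corner of full_tex; the handle just remaps
 * texture coordinates into the parent texture. */
CoglHandle sub_tex =
  cogl_texture_new_from_sub_texture (full_tex, 0, 0, 64, 64);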
There was a typo in getting the height of the full texture to check
whether the sub region fits, so it was using the width instead. This
was causing crashes for some apps when debugging is enabled.
In cogl_texture_new_from_file we create and own a temporary
bitmap. There's no need to copy this data if we need to do a premult
conversion, so instead the conversion is now done in place before
passing the bitmap on to cogl_texture_new_from_bitmap.
The Cogl atlas code was using _cogl_texture_prepare_for_upload with a
NULL pointer for the dst_bmp to determine the internal format of the
texture without converting the bitmap. It needs to do this to decide
whether the texture will go in the atlas before wasting time on the
conversion. This use of the function is a little confusing so that
part of it has been split out into a new function called
_cogl_texture_determine_internal_format. The code to decide whether a
premult conversion is needed has also been split out.
Commit 92a375ab4 changed the initial value of max_texcoord_attrib_unit
to -1 so that it could disable the texture coord array for the first
texture unit when there are no texture coords used in the vbo. However
max_texcoord_attrib_unit was an unsigned value so this actually became
G_MAXUINT. The disabling loop at the bottom still worked because
G_MAXUINT+1==0 but the check for whether any texture unit is greater
than max_texcoord_attrib_unit was failing so it would always end up
disabling all texture units. This is now fixed by changing
max_texcoord_attrib_unit to be signed.
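
The pitfall in miniature (illustrative snippet, not the actual code):

#include <glib.h>

int
main (void)
{
  unsigned int max_used = -1;          /* wraps to G_MAXUINT */
  unsigned int unit = 3;

  g_assert (max_used == G_MAXUINT);
  g_assert (!(unit > max_used));       /* never true for any unit */

  int signed_max = -1;                 /* the fix: keep it signed */
  g_assert ((int) unit > signed_max);  /* now behaves as intended */

  return 0;
}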
When deciding whether two material layers are equal, the comparison now
includes the GL target and texture number if the textures are not
sliced. This is needed to get batching across atlased textures.
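
A sketch of the comparison criteria accumulated so far, including the
filtering check from the earlier fix (hypothetical struct and names,
not Cogl's internals):

typedef struct
{
  GLenum gl_target;   /* e.g. GL_TEXTURE_2D */
  GLuint gl_texture;  /* GL texture object name */
  GLenum min_filter;
  GLenum mag_filter;
} LayerState;

static gboolean
layer_states_equal (const LayerState *a, const LayerState *b)
{
  return a->gl_target  == b->gl_target &&
         a->gl_texture == b->gl_texture &&
         a->min_filter == b->min_filter &&
         a->mag_filter == b->mag_filter;
}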
Cogl accepts both a pixel format for the data in memory and an internal
format to be used for the texture. If they do not match then it would
convert between them using the CoglBitmap functions before uploading
the data. However, GL also lets you specify both formats, so it makes
more sense to let GL do the conversion. The driver may need the texture
in a specific format so it may end up being converted anyway.
The cogl_texture_upload_data functions have been removed and replaced
with a single function to prepare the bitmap. This will only do the
premultiplication conversion because that is the only part that GL
can't do directly.
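
For instance, plain GL will happily take BGRA client data for an
RGBA-backed texture and convert it internally (standard GL call, not
Cogl code; GL_BGRA needs GL 1.2 or EXT_bgra):

glTexImage2D (GL_TEXTURE_2D,
              0,                /* mipmap level */
              GL_RGBA,          /* internal format the driver stores */
              width, height,
              0,                /* border */
              GL_BGRA,          /* format of the data in memory */
              GL_UNSIGNED_BYTE,
              data);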
The premult part of _cogl_convert_premult has now been split out as
_cogl_convert_premult_status. _cogl_convert_premult has been renamed
to _cogl_convert_format to make it less confusing. The premult
conversion is now done in place instead of copying the
buffer. Previously it was copying the buffer once for the format
conversion and then copying it again for the premult conversion. The
premult conversion never changes the size of the buffer so it's quite
easy to do in place. We can also use the separated-out function
independently.
The internal format of the atlas texture is still set to the
appropriate format so Cogl will disable blending for textures that are
intended to be RGB. This should end up ignoring the alpha channel from
the texture in the atlas. This makes the code slightly easier to
maintain and should also improve the chances of batching.
* device-manager: (37 commits)
x11: Re-enable XI1 extension keyboards
x11: Always handle core device events before XI events
docs: Documentation fixes for DeviceManager
device-manager: Fix the signals definition
docs: Add sections for InputDevice and DeviceManager
docs: Add clutter_input_device_get_device_name()
tests: Print out the device details on motion
Always register core devices
device: Remove unused is_default member
win32: Experimental implementation of device support
tests: Print the device name, as well as its Id
x11: Fill out the :name property of the InputDevices
device: Add the :name property to InputDevice
x11: Store core devices on the X11 Backend singleton
device: Unset the cursor actor when leaving the stage
device: Add pointer actor getter
x11: Discard the LeaveNotify for off-stage ButtonRelease
device: Do not overwrite the stage for an InputDevice
event: Off-stage button releases have a click count of 1
event: Scroll events do not have click count
...
Instead of assigning a new colour to each quad of a batch, the
rectangle debugging code now assigns a new colour to each batch so
that it can be used to see visually what is being batched. The colour
is stored in a global variable that is reset during cogl_clear. This
improves the chances that the same colour will be used for a batch in
subsequent frames, to avoid flickering.
When setting up the state for the vertex buffer,
enable_state_for_drawing_buffer tries to keep track of the highest
numbered texture unit in use. It then disables any texture arrays for
units that were previously enabled if they are greater than that
number. However if there is no texturing in the VBO then the max used
unit would be left at 0 which it would later think meant unit 0 is
still in use so it wouldn't disable it. To fix this it now initialises
the max used unit to -1 which it should interpret as ‘no units are in
use’ so it will later disable the arrays for all units.
Thanks to Jon Mayo for reporting the bug.
http://bugzilla.openedhand.com/show_bug.cgi?id=1957
We were checking the number of texture units against the GL enum that
is used in glGetIntegerv() to query that number. Let's abstract this in
a little function.
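
A sketch of such a helper (illustrative name; the real one is private
to Cogl):

static GLint
get_max_texture_image_units (void)
{
  GLint n = 1;

  /* The number of samplers a fragment program/shader can reference */
  glGetIntegerv (GL_MAX_TEXTURE_IMAGE_UNITS, &n);
  return n;
}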
Took the opportunity to dig a bit into the usage of GL limits for the
number of texture (image) units and to document our use of them. We'll
need something finer grained if we want to fully exploit texture image
units with a programmable pipeline.