The array allocated for storing the difference flags for each layer in
cogl-pipeline-opengl.c was being cleared with the size of a pointer
instead of the size actually allocated for the array. Presumably this
meant that if there was more than one layer the array wouldn't be
cleared properly.
Also the size of the array was slightly wrong because it was allocating
the size of a pointer for each layer instead of the size of an
unsigned long.
This was originally reported by Jasper St. Pierre on #clutter.
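A minimal sketch of the fix described above, using illustrative names
rather than the exact Cogl internals:

  #include <string.h>

  /* Hypothetical helper: clear one unsigned long of difference flags
   * per layer rather than just sizeof (a pointer) bytes. */
  static void
  clear_layer_differences (unsigned long *layer_differences, int n_layers)
  {
    /* Broken: memset (layer_differences, 0, sizeof (layer_differences)); */
    memset (layer_differences, 0, n_layers * sizeof (unsigned long));
  }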
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1e134dd7cd5317651be158a483c7cb2723ce8869)
Even if Cogl decides to set a zero timeout because there are events
queued, it still makes sense to give the winsys a chance to add file
descriptors to the list. The winsys might be relying on the list of
CoglPollFDs passed to poll_dispatch to decide whether to read from a
file descriptor and that should happen even if Cogl also woke up the
main loop because the event queue isn't empty.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 6d2f3bc4913d0f1570c09e3714ac8fe2dbfc7a03)
It is expected that cogl_sdl_idle() will be called from the
application immediately before blocking in SDL_WaitEvent. However,
dispatching the onscreen events may cause more events to be queued. If
that happens we need to make sure the blocking returns immediately.
This patch makes it post the dummy event that the application chose in
order to make that happen.
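As a rough illustration (not the actual Cogl code), posting the dummy
event with SDL looks something like this, where app_event_type stands
for the type the application registered with Cogl:

  #include <string.h>
  #include <SDL.h>

  SDL_Event dummy;
  memset (&dummy, 0, sizeof dummy);
  dummy.type = app_event_type;  /* the application's chosen type */
  SDL_PushEvent (&dummy);       /* wakes up a blocking SDL_WaitEvent */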
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 9e34a1e8ce97b67ebb2889c622f2c9f1076b087d)
In SDL1 the event type numbers were a single byte so Cogl was only
reserving a byte to store the application's chosen type in
CoglRenderer. However in SDL2 they are a Uint32 and SDL_USEREVENT is
0x8000 so if the application was using that then Cogl would actually
end up posting event type 0.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 39c9177776ac601a92c6f4112558464af6968ea0)
It seems like it would be quite a reasonable design for an application
to immediately paint the buffer and call swap_buffers within the
handler for the sync event. This previously wouldn't work.
When using the GLX winsys if swap_region is called then it immediately
tries to set the pending notification flag. However if this is called
from the event callback then when the callback is complete it will
clear the flag again and the pending notification will be lost. This
patch just makes it clear the pending flag before invoking the
callback so that it can be safely queued again.
With any winsys that doesn't directly handle the sync event
notification it would almost work except that it was iterating the
live list of pending events. If the callback causes another event to
be added to this list by issuing a buffer swap then the iteration
would never complete and cogl_poll_dispatch would never return. This
patch just makes it steal the list before iterating so that any
additions will be dispatched by a later call to cogl_poll_dispatch
instead.
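An illustrative sketch (not the actual Cogl code) of the "steal the
list before iterating" pattern, with made-up names:

  #include <glib.h>

  typedef struct { GList *pending_events; } Dispatcher;

  static void
  dispatch_pending (Dispatcher *self, void (*handler) (gpointer event))
  {
    /* Steal the live list; events queued from handlers land on
     * self->pending_events and are handled by a later dispatch call. */
    GList *stolen = self->pending_events;
    self->pending_events = NULL;

    for (GList *l = stolen; l; l = l->next)
      handler (l->data);

    g_list_free (stolen);
  }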
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 2263b31594900b73900d2ce22cf70c68e7e793c6)
The first hunk from commit 93b7b4c850dd928bf21ee168a95641a8d631f713
turned out to be redundant because GLX guarantees that configs returned
by glXChooseFBConfig should be sorted with non-MSAA configs coming
first. The second hunk is required since we use glXGetFBConfigs in that
case which doesn't sort the configs.
I had meant to drop this part of the patch before landing it but forgot.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b19fcc1869275826e952925af922125daf8a48de)
There is no guarantee that glXGetFBConfigs will return fbconfigs
ordered with non-MSAA configs first. This patch makes sure that a
non-MSAA config gets chosen.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 93b7b4c850dd928bf21ee168a95641a8d631f713)
Add an API to get the current time in the time system that Cogl
reports timestamps in. This is to be used to convert timestamps
into a different time system.
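A hedged sketch of how the api might be used; the function names here
(cogl_get_clock_time and cogl_frame_info_get_presentation_time) are
assumed from this series and may not match exactly:

  /* relate a frame's presentation timestamp to "now" in the same clock */
  int64_t now = cogl_get_clock_time (ctx);
  int64_t presented = cogl_frame_info_get_presentation_time (frame_info);
  int64_t delta = now - presented;  /* same units as the Cogl clock */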
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 9f3735a0c37adcfcffa485f81699b53a4cc0caf8)
Add a CoglFrameInfo object that tracks timing information for frames
that are drawn. We track a frame counter and frame timing information
for each CoglOnscreen. Internally a CoglFrameInfo is automatically
created for each frame, delimited by cogl_onscreen_swap_buffers() or
cogl_onscreen_swap_region() calls.
CoglFrameInfos are delivered to applications via frame event callbacks
that can be registered with a new cogl_onscreen_add_frame_callback()
api. Two initial event types (dispatched on all platforms) have been
defined: a _SYNC event used for throttling the frame rate of
applications and a _COMPLETE event used to signify the end of a frame.
Note: This new _add_frame_callback() api makes the
cogl_onscreen_add_swap_complete_callback() api redundant and so it
should be considered deprecated. Since the _add_swap_complete_callback()
api is still experimental api, we will be looking to quickly migrate
users to the new api so we can remove the old api.
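A rough usage sketch; the callback and registration signatures below
are recalled from the experimental api and should be treated as
approximate:

  static void
  frame_cb (CoglOnscreen *onscreen,
            CoglFrameEvent event,
            CoglFrameInfo *info,
            void *user_data)
  {
    if (event == COGL_FRAME_EVENT_SYNC)
      {
        /* a good point to paint and swap the next frame (throttling) */
      }
    else if (event == COGL_FRAME_EVENT_COMPLETE)
      {
        /* the frame described by info has reached the screen */
      }
  }

  cogl_onscreen_add_frame_callback (onscreen, frame_cb,
                                    NULL,  /* user data */
                                    NULL); /* destroy notify */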
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 700401667db2522045e4623d78797b17f9184501)
When we block waiting for the swap, prefer doing that using
glXWaitForMsc() from OML_sync_control because that returns a system
time value for the precise time of the swap.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1e8114aabc78b90373d3d5f3f7c0224f8786e399)
This adds a cogl_renderer_foreach_output() function that can be used to
iterate the display outputs for a particular renderer.
This also updates cogl-info to use this new api so it can dump out all
the output information.
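A hedged example of the new api; the callback signature and the
CoglOutput getters used here are assumptions based on this description:

  static void
  dump_output (CoglOutput *output, void *user_data)
  {
    g_print ("output: %dx%d at %d,%d\n",
             cogl_output_get_width (output),
             cogl_output_get_height (output),
             cogl_output_get_x (output),
             cogl_output_get_y (output));
  }

  cogl_renderer_foreach_output (renderer, dump_output, NULL);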
Reviewed-by: Owen W. Taylor <otaylor@fishsoup.net>
(cherry picked from commit a2abf4c4c1fd5aeafd761f965d07a0fe9a362afc)
The CoglOutput object represents one output such as a monitor or
laptop panel, with information about attributes of the output such as
the position of the output within the global coordinate space, and
the refresh rate.
We don't yet publicly export the ability to get output information but
we track it for the GLX backend, where we'll use it to track the refresh
rate.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d7ef9d8d71488d0e6874f1ffc6e48700d5c82a31)
There is a cogl_renderer_get_n_fragment_texture_units() function which
is documented to return the number of texture units that are
accessible from a fragment program. This just directly returns the
value from GL_MAX_TEXTURE_IMAGE_UNITS which is available in either the
GLSL extensions or the ARBfp extension. Clutter-GST relies on this to
determine whether it can use a program to convert the YUV data on the
GPU.
When the GL3 driver was added in 66c9db993595b this was changed to
only query the value when the GLSL feature is available. Previously it
would always query the value when the GL or GLES2 driver is used. This
change makes sense on master because there is no API for an
application to make its own ARBfp programs so the only way to access
texture units from a program is via GLSL. However on the 1.14 branch
this patch broke clutter-gst when GLSL is disabled because it thinks
the ARBfp programs can't use multi-texturing.
This patch just changes it to also query the value when ARBfp support
is available.
Note: it's probably not a good idea to apply this patch to master,
but only to the 1.14 branch. On master the function probably needs to
be changed anyway because it is using _COGL_GET_CONTEXT().
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Similar to commit 2c0cfdefbb9d1 for the SDL2 winsys, the GLX and EGL
window systems need to bind the dummy surface or drawable when the
currently bound onscreen is destroyed so that there will always be a
valid context bound.
Previously I got the idea that this would not be necessary on GLX
because the documentation for glXDestroyDrawable states that the
drawable won't actually be destroyed if it is currently bound until it
becomes unbound. However it doesn't say what happens if the underlying
X window is also destroyed and after testing it seems this causes a
segfault in Mesa in GLX and an XError for EGLX.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 4a464eec8c5b5832b9fd6b69746ab4ab36229182)
GL_TEXTURE_MAX_LEVEL is not supported on GLES so we can't set it. It
looks like Mesa was letting us get away with this but on other drivers
it may cause errors. The enum is not defined in the GLES headers so it
was failing to compile unless the GL driver is also enabled.
The test-texture-mipmap-get-set test is now marked as n/a on GLES2
because it can't support limiting the sampled mipmaps.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ba51c393818582b058f5f1e66cf8d13835ad10e5)
Conflicts:
tests/conform/test-conform-main.c
The GLES2 driver wasn't compiling unless the GL driver is also enabled
because some run-time conditional code was directly using GL-only
defines.
This should also fix compiling using the stock GL headers on OS X
which don't define GL_NUM_EXTENSIONS.
https://bugzilla.gnome.org/show_bug.cgi?id=692420
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 661e1719aa0b95c409c568ec91ea52b8ff90519b)
Previously when creating a foreign rectangle texture it would ignore
the passed in texture information and query the texture directly when
using COGL_DRIVER_GL. However this should also work for
COGL_DRIVER_GL3. This patch changes it to check the private feature
flags for the texture querying feature instead of directly checking
the driver value.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 258c98b82027cb5074afe7844ff3954bbe928757)
This was generating warnings when the GL driver is disabled.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f26682dcc04642fed9db959c63d6c6e4261d2148)
Conflicts:
cogl/cogl-auto-texture.c
This adds support for the EGL_EXT_buffer_age extension which is a
counterpart to the GLX_EXT_buffer_age extension.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 92d869764c03d0bac6b51dac833510c22669ac4a)
Add a new BUFFER_AGE winsys feature and a get_buffer_age method to
cogl-onscreen that allows the buffer age to be queried.
https://bugzilla.gnome.org/show_bug.cgi?id=669122
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Note: When landing the patch I made some gtk-doc updates and changed
_get_buffer_age to return an age of 0 always if the age feature isn't
supported instead of using _COGL_RETURN_VAL_IF_FAIL. -- Robert Bragg
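Assumed usage, with the method name taken from the description above:

  /* How many frames old are the current back buffer contents?
   * 0 means the contents are undefined (e.g. the feature is
   * unsupported). */
  int age = cogl_onscreen_get_buffer_age (onscreen);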
(cherry picked from commit 427b1038051e9b53a071d8c229b363b075bb1dc0)
Comparing the pointed-to value is clearly what was meant.
Found by Coverity.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f676352210fad856ae85962733e488bc1a832411)
Previously the functions for packing and unpacking pixels were
generated by token pasting together a function name along with its
type, like the following:
_cogl_pack_ ## uint8_t
Then later in cogl-bitmap-conversion.c it would directly refer to the
function names without token pasting.
This wouldn't work however if the system headers define the stdint
types using #defines instead of typedefs because in that case the
function name generated using token pasting would get the expanded
type name but the reference that doesn't use token pasting wouldn't.
This patch adds an extra macro passed to the cogl-bitmap-packing.h
header which just has the type size. That way the function can be
defined like this instead:
_cogl_pack_ ## 8
That should prevent it from hitting problems with #defined types.
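An illustrative (non-Cogl) demonstration of the underlying problem with
#defined stdint types; the macro names are made up for the example:

  #define CONCAT_(a, b) a ## b
  #define CONCAT(a, b) CONCAT_ (a, b)  /* expands its arguments first */

  /* Imagine a system header doing this instead of a typedef: */
  #define uint8_t __uint8_t

  /* Pasting through the expanding macro yields _cogl_pack___uint8_t... */
  void CONCAT (_cogl_pack_, uint8_t) (void);
  /* ...but a direct reference to _cogl_pack_uint8_t names a different
   * symbol, so the two no longer match. Pasting a plain size such as
   * "8" avoids this because numbers are never macros. */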
https://bugzilla.gnome.org/show_bug.cgi?id=691945
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d6b5d7085b004ebd48c1543b820331802395ee63)
This makes autogen.sh look for automake-1.13 and also updates all
Makefile.am files to no longer use the INCLUDES variable, which automake
1.13 warns is deprecated in favour of AM_CPPFLAGS.
https://bugzilla.gnome.org/show_bug.cgi?id=690891
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 5de5569e960102afe979a5f2f0403e1defebca62)
We have a workaround in Cogl to fix viewport clipping with Mesa Intel
Gen 6 drivers but this was breaking the semantics of
cogl_framebuffer_clear() which should not be affected by viewport
clipping. This makes sure we disable and restore the workaround when
clearing the framebuffer. This fixes Clutter's test-cogl-viewport
conformance test.
This tweaks the ordering of some struct members in some of the more
important structs so that the compiler won't insert wasted padding to
avoid breaking the alignment. Some members that were previously
unsigned long have been changed to unsigned int. These members need to
be able to fit in 32-bits to run on 32-bit machines anyway so there's
no point in having them extend to 64-bit on 64-bit machines. This
doesn't affect the public API.
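For illustration (these are not actual Cogl structs), reordering
members like this avoids the compiler inserting padding on a typical
64-bit ABI:

  #include <stdint.h>

  struct before {          /* sizeof == 24 on a typical LP64 ABI */
    uint32_t count;        /* 4 bytes + 4 bytes padding */
    void *data;            /* 8 bytes */
    uint32_t flags;        /* 4 bytes + 4 bytes tail padding */
  };

  struct after {           /* sizeof == 16 */
    void *data;            /* 8 bytes */
    uint32_t count;        /* 4 bytes */
    uint32_t flags;        /* 4 bytes */
  };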
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit b721af236680005464e39f7f4dd11381d95efb16)
In commit 1fa7c0f10a8a0 the sliced texture code which creates the
array of pointers to the texture slices was changed so that the
textures are appended to the end of the array instead of initially
creating the array with the right size upfront and then shrinking the
array on error. However it was then still also setting the size of the
array after creating it so the new textures would actually end up in
an unused part of the array. The part of the array that is used was
left uninitialised so it would crash. This just removes the call to set
the size of the array.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 7df09d505ba28a1a960df867346af67118e96718)
GL3 has support for clip planes but they are used differently and
involve writing to a builtin output variable in the vertex shader. The
current clip plane code assumes it is only used with a fixed function
driver and tries to directly push to the matrix builtins. This
obviously won't work on GL3 so for now let's just disable clip planes.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5f621589467ab961f5130590298dc8e26d658a92)
When a component-alpha texture is made using a GL3 context a GL_RED
texture is actually used and a swizzle is set up to hide it. However
if a framebuffer is then bound to that texture then when the bits are
queried this workaround will leak out of the API. To fix this it now
detects the situation and reports the number of red bits as the number
of alpha bits.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 425cfb2675912a2cbcaaaeed7c2196d563948222)
Previously when the context was initialised Cogl would query the
number of stencil bits and set a private feature flag to mark that it
can use the buffer for clipping if there were at least 3 bits. The problem
with this is that the number of stencil bits returned by
GL_STENCIL_BITS depends on the currently bound framebuffer. This patch
adds an internal function to query the number of stencil bits in a
framebuffer and makes it use that instead when determining whether it
can push the clip using the stencil buffer.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e928d21516a6c07798655341f4f0f8e3c1d1686c)
Cogl publicly exposes the depth buffer state so we might as well have
a function to query the number of depth bits of a framebuffer.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 853143eb10387f50f8d32cf09af31b8829dc1e01)
The GL framebuffer driver now makes sure to bind the framebuffer
before counting the number of bits. Previously it would just query the
number of bits for whatever framebuffer happened to be used last.
In addition the virtual for querying the framebuffer bits has been
modified to take a pointer to a structure instead of a separate
pointer to each component. This should make it slightly more efficient
and easier to maintain.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e9c58b2ba23a7cebcd4e633ea7c3191f02056fb5)
The GL3 context is created using the glXCreateContextAttribs function
which is part of the GLX_ARB_create_context extension. However
previously the function pointers from GLX extensions were only
retrieved once the GL context is created. That meant that the GL3
context creation function would always assume that the extension is
not supported so it would always fail.
This patch changes it to query the functions when the renderer is set
up instead. The base winsys feature flags that are determined while
querying the functions are stored in a member of CoglGLXRenderer.
These are then copied to the CoglContext when it is initialised.
The spec for glXGetProcAddress says that the functions returned are
context-independent. That implies that it is safe to call it without
binding a context although that is not explicitly stated as far as I
can tell. A bit of googling finds this DRI documentation which says it
can be used without a context:
http://dri.freedesktop.org/wiki/glXGetProcAddressNeverReturnsNULL
And also this code sample:
http://www.opengl.org/wiki/Tutorial:_OpenGL_3.0_Context_Creation_%28GLX%29
One point that makes me concerned that this might not always work in
practice is that the code in SDL2 to create a GL3 context first
creates a dummy GL2 context in order to have something bound before it
calls glXGetProcAddress. I think this may just be a misunderstanding
based on how wglGetProcAddress works however.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 04a7aca9a98e84e43ac5559305a1358112902e30)
_cogl_texture_spans_foreach_in_region first swaps over the texture
coordinates if they are flipped so that it can always iterate in a
positive direction. It sets a flag so that it will remember that the
coordinates are flipped. Before invoking the callback it is meant to
reflip the coordinates so that the callee doesn't need to be aware of
the flipping. However it was only flipping the sub-texture coordinates
and not the virtual coordinates. This was causing sliced textures to
draw their slice rectangles with the wrong geometry.
test-backface-culling was failing because of this.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e7338a1e09cb22151374aefa6f0bb58485af9189)
There were a few problems with the sub texture iterating code of
sliced textures which were causing some conformance tests to fail when
NPOT textures are disabled:
• The spans are stored in un-normalized coordinates and the
coordinates passed to the foreach function are normalized. The
function was trying to un-normalize them before passing them to the
span iterator code but it was using the wrong factor which was
causing it to actually doubly normalize them.
• The shim function to renormalize the coordinates before passing them
to the callback was renormalizing the sub-texture coordinates
instead of the virtual coordinates. The sub-texture coordinates are
already in the right scale for whatever is the underlying texture so
we don't need to touch them. Instead we need to normalize the
virtual coordinates because these are coming from the un-normalized
coordinates that we passed to the span iterating code.
• The normalize factors passed to the span iterating code were always 1.
The code uses this normalizing factor to round the incoming
coordinates to the nearest multiple of a full texture. It divides
the coordinates by the factor rather than multiplying so it looks
like we should be passing the virtual texture size here.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c9773566b0ec0a17b34c440090529de8cff9609e)
This adds a cogl_texture_set_data function that is basically just a
convenience wrapper around cogl_texture_set_region. In the common case
where you want to upload the full contents of a mipmap level, though
this api takes 4 fewer arguments (6 in total) so it's a bit simpler.
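A hedged sketch of the convenience wrapper; the argument order is
recalled from the api and may not be exact:

  CoglError *error = NULL;

  cogl_texture_set_data (texture,
                         COGL_PIXEL_FORMAT_RGBA_8888,
                         width * 4,   /* rowstride */
                         pixels,
                         0,           /* mipmap level */
                         &error);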
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit e651dbdc4e4f03016a3dee513e3680270a4a9142)
Consistent with how we lazily allocate framebuffers this patch allows us
to instantiate textures but still specify constraints and requirements
before allocating storage so that we can be sure to allocate the most
appropriate/efficient storage.
This adds a cogl_texture_allocate() function that is analogous to
cogl_framebuffer_allocate() which can optionally be called to explicitly
allocate storage and catch any errors. If this function isn't used
explicitly then Cogl will implicitly ensure textures are allocated
before the storage is needed.
It is generally recommended to rely on lazy storage allocation or at
least perform explicit allocation as late as possible so Cogl can be
fully informed about the best way to allocate storage.
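A minimal sketch of explicit allocation, assuming the api described
above and a texture that has already been instantiated but not yet
allocated:

  CoglError *error = NULL;

  if (!cogl_texture_allocate (tex, &error))
    {
      g_warning ("Failed to allocate texture storage: %s", error->message);
      cogl_error_free (error);
    }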
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 1fa7c0f10a8a03043e3c75cb079a49625df098b7)
Note: This reverts the cogl_texture_rectangle_new_with_size API change
that dropped the CoglError argument and keeps the semantics of
allocating the texture immediately. This is because Mutter currently
uses this API so we will probably look at updating this later once
we have a corresponding Mutter patch prepared. The other API changes
were kept since they only affected experimental api.
There was a lot of redundancy in how we tracked the width and height of
different texture types which is greatly simplified by adding width and
height members to CoglTexture directly and removing the get_width and
get_height vfuncs from CoglTextureVtable.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 3236e47723e4287d5e0023f29083521aeffc75dd)
This moves the _cogl_texture_get_gl_format function from cogl-texture.c
to cogl-texture-gl.c and renames it _cogl_texture_gl_get_format.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f8deec01eff7d8d9900b509048cf1ff1c86ca879)
This moves the direct use of GL in cogl-framebuffer.c for handling
cogl_framebuffer_read_pixels_into_bitmap() into
driver/gl/cogl-framebuffer-gl.c and adds a
->framebuffer_read_pixels_into_bitmap vfunc to CoglDriverVtable.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 2f893054d6754e6bc7983f061b27c7858f1a593c)
This removes cogl-internal.h in favour of using cogl-private.h. Some
things in cogl-internal.h were moved to driver/gl/cogl-util-gl-private.h
and the _cogl_gl_error_to_string function whose prototype was moved from
cogl-internal.h to cogl-util-gl-private.h has had its implementation
moved from cogl.c to cogl-util-gl.c
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 01cc82ece091aa3bec4c07fdd6bc9e5135fca573)
Symbols changed names, %1 makes gtk-doc sad and some referenced symbols
were missing in the -sections.txt file.
(cherry picked from commit c12919c321186ac7b223bc4f82c588ca2f199d67)
For external (non gtk-doc even) constants, we can use <constant> to
correctly tag those without gtk-doc trying to cross-reference them.
(cherry picked from commit 78d22c6cd44a2279adcd2b94c3317292af861c70)
gtk-doc is not smart enough to parse things like:
typedef struct
{
...
} CoglFoo;
but needs the '{' at the end of the first line.
(cherry picked from commit d1187550ef547305fdeb8a22a7e39a95611a0e1d)
gtk-doc needs the types in -sections.txt to be able to do
cross-references. Add all those currently generating warnings.
(cherry picked from commit e57a21d2608f0885e6f2eb3a017feb7dffb7a63c)
That's actually for signals in gtk-doc and we're not dealing with
GObjects so it's not really appropriate. Used <structfield> as it's the
closest tag I could find to describe a 'property' of a CoglObject and
gives a generic style in the produced HTML.
(cherry picked from commit 8b485d57577cff227a0c7a2e6c06d8d277821374)
I just added the general types creating warnings in the current state of the
documentation (i.e. the ones referenced by already documented functions)
and moved the section from the 'Utility' section to the 'General'
section which I believe is a better fit as they are used by more than
one type and not really utilities.
(cherry picked from commit c51b147789763863ef32482d7ffa936160ed7c93)
Various changes have led to the depth state now being separate from the
pipeline; this commit fixes the remaining warning around that.
(cherry picked from commit 111e687e722ad67a0e1c09f881c6282ccb06410b)
Instead of just having the reference at the end of the paragraph.
Usually seen as more usable.
(cherry picked from commit 6988d3ae61ab16fb298b34d2bd31860833f04186)
Argument names and @$arg suffered from various little mismatches, fix
them in a batch commit.
(cherry picked from commit d2ac3c5a88d980e7519c98bd261111b93cf73a6e)
cogl-index-range was the old API, update the section name to match what
is declared in the documentation. Also update the short description to
better match the new API.
(cherry picked from commit d73df38ff2a8ebe477e139e5ac20838c8f4364bb)
gtk-doc complains that having a sentence starting with Return is a bit
ambiguous and it'd rather see 'Returns:' spelled out.
Fixes 2 warnings:
warning: Free-form return value description in $symbol. Use `Returns:'
to avoid ambiguities
(cherry picked from commit 9718f31717b3a0e01b7c4c69cea138f39d23c0e0)
COGL_HAS_* and COGL_ENABLE_DEBUG are either defined in config.h or not.
So let's test whether they are defined, not their truth value; this
allows us to use -Wundef to catch undefined macros in preprocessor
directives.
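In other words (an illustrative example; the macro name here is just a
representative COGL_HAS_* define):

  /* Before: trips -Wundef when the macro is not defined in config.h */
  #if COGL_HAS_XLIB_SUPPORT
  #include <X11/Xlib.h>
  #endif

  /* After: simply tests whether the macro is defined */
  #ifdef COGL_HAS_XLIB_SUPPORT
  #include <X11/Xlib.h>
  #endif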
(cherry picked from commit 73b62832f24711073b0876a6c0f5c61727842c1c)
Cogl always needs to have the context bound to something so that it
can freely create resources such as textures even if there is no
current window. When the currently bound SDLWindow is destroyed, SDL
apparently explicitly unbinds the GL context. If something then later
for example tries to create a texture Cogl would start getting GL
errors and fail. To fix this the SDL winsys now just binds the dummy
window before deiniting the currently bound onscreen.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 2c0cfdefbb9d1ac5097d98887d3581b67a324fae)
We have found several times now when writing code using Cogl that it
would really help if Cogl's matrix stack api was public as a utility
api. In Rig for example we want to avoid redundant arithmetic when
deriving the matrices of entities used to render and we aren't able
to simply use the framebuffer's matrix stack to achieve this. Also when
implementing cairo-cogl we found that it would be really useful if we
could have a matrix stack utility api.
(cherry picked from commit d17a01fd935d88fab96fe6cc0b906c84026c0067)
At times there can be huge numbers of CoglMatrixEntry structures
allocated if they are being used to track the transform of many drawing
commands or objects in a scenegraph. Therefore the size of a
CoglMatrixEntry should be kept as small as possible both to help reduce
the memory usage of applications but also to improve cache usage since
matrix stack manipulations are performance critical at times.
This reduces the size of CoglMatrixEntry structures for non-debug builds
by removing the composite_gets counter used to sanity check how often
the transform for an entry is resolved.
(cherry picked from commit c400b86681a328b1e12b7e120e9c3f4f12c356e0)
This moves the parent pointer member to the top of the CoglMatrixEntry
structure since its previous position led to wasted padding when
building for 64-bit cpus.
(cherry picked from commit 42b4750070286a6404b103d8a827a46efb6b344c)
When unrefing a CoglMatrixEntry we walk up the ancestry unrefing and
freeing entries until we find an entry that doesn't need to be freed.
The problem fixed by this patch was that we didn't dereference the
parent member of each entry until after the entry was freed and so there
was the potential for reading a junk parent pointer back.
(cherry picked from commit e5d836b84acb35a009854a0cc0892320023789d1)
It is considered an error to pass a NULL data pointer to
cogl_attribute_buffer_new so we now call
cogl_attribute_buffer_new_with_size instead.
(cherry picked from commit 8e201574b9c35847aa4e999a391741538a0b356b)
Neither of the texture drivers was handling errors correctly when a
CoglPixelBuffer was used to set the contents of an entire texture.
This was causing it to hit an assertion failure in the pixel buffer
tests.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 888733d3c3b24080d2f136cedb3876a41312e4cf)
cogl_texture_set_region() and cogl_texture_set_region_from_bitmap() now
have a level argument so image data can be uploaded to a specific mipmap
level.
The prototype for cogl_texture_set_region was also updated to simplify
the arguments.
The arguments for cogl_texture_set_region_from_bitmap were reordered to
be consistent with cogl_texture_set_region with the source related
arguments listed first followed by the destination arguments.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 3a336a8adcd406b53731a6de0e7d97ba7932c1a8)
Note: Public API changes were reverted in cherry-picking this patch
This removes several uses of _COGL_GET_CONTEXT in cogl-atlas-texture.c.
Notably this involved making CoglPangoGlyphCache track an associated
CoglContext pointer which cogl-pango can pass to
_cogl_atlas_texture_new_with_size().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit d66afbd0758539330490945c699a05c0749c76aa)
Removes some (not all) use of _COGL_GET_CONTEXT() from cogl-winsys-glx.c
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 698a131c4991e4393ce966b968637fba194f252c)
This removes all use of _COGL_GET_CONTEXT from cogl-journal.c
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b5c8ec5db52a6cb71f29b338a59fb3772506fef7)
The _cogl_propagate_error() function takes ownership of the incoming
error pointer so there's no need to allocate a new error when passing
it on. The errors can potentially be passed up from a number of layers
so it seems worthwhile to avoid the allocation.
The _cogl_propagate_gerror() function was previously using
_cogl_propagate_error(). Presumably this would not have worked because
that function would try to free the error from glib using
cogl_error_free but that would use the wrong free function and thus
the wrong slice allocator. The GError propagating function is only
used when gdk-pixbuf is enabled which now requires glib support anyway
so we can just avoid defining the function when compiling without
glib.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 91266162bef9f89fb42c01be0f929d5079758096)
‘Propagate’ was misspelled as ‘propogate’.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5fb4a6178c3e64371c01510690d9de1e8a740bde)
This makes _cogl_framebuffer_blit take explicit src and dest framebuffer
pointers and updates all the texture blitting strategies in cogl-blit.c
to avoid pushing/popping to/from the framebuffer stack.
This removes the last user of the framebuffer stack, which we've been
aiming to remove before Cogl 2.0.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 598ca33950a93dd7a201045c4abccda2a855e936)
This adds a driver/gl/cogl-texture-gl.c file and moves some gl specific
bits from cogl-texture.c into it. The moved symbols were also given a
_gl_ infix and the calling code was updated accordingly.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 2c9e81de70cc02d72b1ce9013c49e39300a05b6a)
This ensures we initialize the value of cache->flipped in
_cogl_matrix_entry_cache_init()
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 507814d27298231c9ae50d74b386fb00f0909922)
_cogl_bitmap_new_with_malloc_buffer() now takes a CoglError for throwing
exceptional errors and all callers have been updated to pass through
any application error pointer as appropriate.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 67cad9c0eb5e2650b75aff16abde49f23aabd0cc)
This splits out the very high level texture constructors that may
internally construct one of several types of lower level texture due to
various constraints.
This also updates the prototypes for these constructors to take an
explicit context pointer and return a CoglError consistent with other
texture constructors.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit a1cabfae6ad50c51006c608cdde7d631b7832e71)
Previously we were passing NULL to
cogl_texture_2d_new_{from_bitmap,with_size} so if there was an error the
application would be aborted. This ensures we pass an internal CoglError
so errors can be caught and suppressed instead.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b8d1a1db482e1417979df9f88f92da47aa954bd0)
This allows apps to catch out-of-memory errors when allocating textures.
Textures can be pretty huge at times and so it's quite possible for an
application to try and allocate more memory than is available. It's also
very possible that the application can take some action in response to
reduce memory pressure (such as freeing up texture caches perhaps) so
we shouldn't just automatically abort like we do for trivial heap
allocations.
These public functions now take a CoglError argument so applications can
catch out of memory errors:
cogl_buffer_map
cogl_buffer_map_range
cogl_buffer_set_data
cogl_framebuffer_read_pixels_into_bitmap
cogl_pixel_buffer_new
cogl_texture_new_from_data
cogl_texture_new_from_bitmap
Note: we've been quite conservative with how many apis we let throw OOM
CoglErrors since we don't really want to put a burden on developers to
be checking for errors with every cogl api call. So long as there is
some lower level api for apps to use that lets them catch OOM errors
for everything necessary, that's enough and we don't have to make more
convenient apis more awkward to use.
The main focus is on bitmaps and texture allocations since they
can be particularly large and prone to failing.
A new cogl_attribute_buffer_new_with_size() function has been added in
case developers need to catch OOM errors when allocating attribute buffers
whereby they can first use _buffer_new_with_size() (which doesn't take a
CoglError) followed by cogl_buffer_set_data() which will lazily allocate
the buffer storage and report OOM errors.
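A hedged sketch of that pattern; the error-taking signature of
cogl_buffer_set_data is assumed from this description and applies to
master, since the public api changes were reverted on this branch:

  CoglError *error = NULL;
  CoglAttributeBuffer *buffer =
    cogl_attribute_buffer_new_with_size (ctx, n_bytes);

  cogl_buffer_set_data (COGL_BUFFER (buffer),
                        0, /* offset */
                        vertices, n_bytes,
                        &error);
  if (error)
    {
      /* storage allocation failed, e.g. COGL_SYSTEM_ERROR_NO_MEMORY */
      cogl_error_free (error);
    }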
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f7735e141ad537a253b02afa2a8238f96340b978)
Note: since we can't break the API for Cogl 1.x, the main purpose of
cherry-picking this patch is to stay in line with changes on the master
branch so that we can easily cherry-pick patches.
All the api changes relating to stable apis released on the 1.12 branch
have been reverted as part of cherry-picking this patch, so this mostly
just applies the internal plumbing changes that enable us to correctly
propagate OOM errors.
Constant attributes don't have a corresponding buffer so
_cogl_attribute_free shouldn't try to unref it. Also, for good measure,
in the case of constant attributes we should call
_cogl_boxed_value_destroy() (although currently we know there is no
dynamic data associated with the boxed values).
(cherry picked from commit 89d6dc90d10c59676e0deed87c2c15a0c9712737)
This makes it possible to create vertex attributes that efficiently
represent constant values without duplicating the constant for every
vertex. This adds the following new constructors for constant
attributes (a short usage sketch follows the list):
cogl_attribute_new_const_1f
cogl_attribute_new_const_2fv
cogl_attribute_new_const_3fv
cogl_attribute_new_const_4fv
cogl_attribute_new_const_2f
cogl_attribute_new_const_3f
cogl_attribute_new_const_4f
cogl_attribute_new_const_2x2fv
cogl_attribute_new_const_3x3fv
cogl_attribute_new_const_4x4fv
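A hedged usage sketch; the constructor signature (context, attribute
name, component values) is assumed from the naming above:

  /* e.g. a constant red color shared by every vertex */
  CoglAttribute *color =
    cogl_attribute_new_const_4f (ctx, "cogl_color_in",
                                 1.0f, 0.0f, 0.0f, 1.0f);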
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6507216f8030e84dcf2e63b8ecfe906ac47f2ca7)
_cogl_pipeline_progend_glsl_pre_change_notify and
_cogl_pipeline_progend_glsl_layer_pre_change_notify were only dirtying
the current program state for changes related to fragment processing.
This makes both functions also check for changes that affect vertex
shader codegen.
This also fixes a mistake where
_cogl_pipeline_progend_glsl_layer_pre_change_notify was checking for
non-layer related changes which would never be seen, and instead it
should be checking for layer based changes only.
This adds back compatibility for CoglShaders that reference the
cogl_tex_coord_in[] or cogl_tex_coord_out[] varyings. Unlike the
previous way this was done this patch maintains the use of layer numbers
for attributes and maintains forwards compatibility by letting shaders
alternatively access the per-layer tex_coord varyings via
cogl_tex_coord%i_in/out defines that index into the array.
This removes the need to maintain an array of tex_coord varyings and
instead we now just emit a varying per-layer uniquely named using a
layer_number infix like cogl_tex_coord0_out and cogl_tex_coord0_in.
Notably this patch also had to change the journal flushing code to use
pipeline layer numbers to determine the name of texture coordinate
attributes.
We now also break batches by using a deeper comparison of layers, such
that two pipelines with the same number of layers can now cause a
batch break if they use different layer numbers.
This adds an internal _cogl_pipeline_layer_numbers_equal() function that
takes two pipelines and returns TRUE if they have the same number of
layers and all the layer numbers are the same too, otherwise it returns
FALSE.
Where we used to break batches based on changes to the number of layers
we now break according to the status of
_cogl_pipeline_layer_numbers_equal
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit e55b64a9cdc93285049d9b969bef67484c2d9fb3)
Note: this will cause a temporary regression for the Cogl 1.x CoglShader
api since it will break compatibility with existing shaders that
reference the texture varyings from the fragment shader.
The intention is to follow up with another patch to add back
CoglShader compatibility.
This makes Cogl explicitly check for out-of-memory errors reported by
the opengl driver in cogl_texture_3d_new_with_size() calls. This allows
us to throw a COGL_SYSTEM_ERROR_NO_MEMORY error and return NULL so
applications may gracefully handle this condition.
This patch only affects the cogl_texture_3d_new_with_size() api not
_new_from_data() or _new_from_bitmap().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit a602cae233b16d2ec9ad6fd238b169720467cf75)
This makes Cogl explicitly check for out-of-memory errors reported by
the opengl driver in cogl_texture_2d_new_with_size() calls. This allows
us to throw a COGL_SYSTEM_ERROR_NO_MEMORY error and return NULL so
applications may gracefully handle this condition.
This patch only affects the cogl_texture_2d_new_with_size() api not
_new_from_data() or _new_from_bitmap().
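A hedged sketch of catching the error; the constructor signature here
reflects the experimental api of the time and may differ on other
branches:

  CoglError *error = NULL;
  CoglTexture2D *tex =
    cogl_texture_2d_new_with_size (ctx, 8192, 8192,
                                   COGL_PIXEL_FORMAT_RGBA_8888,
                                   &error);
  if (!tex)
    {
      /* e.g. COGL_SYSTEM_ERROR_NO_MEMORY: free some caches or retry
       * with a smaller size instead of aborting */
      cogl_error_free (error);
    }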
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 0283423dad59ba3d3e4cde400c29ac8e7803f888)