Interleaving multiple snippets with different hooks
(COGL_SNIPPET_HOOK_VERTEX and COGL_SNIPPET_HOOK_VERTEX_TRANSFORM,
for instance) used to cause a bug during shader code generation.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 38ca76695d54bbbfe3b940a6d0b2ae879e6fd66b)
Adding a layer difference may mean the pipeline overrides all of the
layers of its parent which might make the parent redundant so we
should try to prune the hierarchy.
This is particularly important for CoglGst because whenever a new
frame is ready it tries to make a copy of the pipeline it last used
and then replace all of the textures in the layers. Without this patch
the new pipeline would keep the parent pipeline alive which means also
keeping the old textures alive so all of the frames of the video would
effectively be leaked.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 576c7b55aa835448c977f1d79d128dffd40e7cd8)
Add the symbols newly introduced during the development cycle and drop
those that were removed. Also, clean up the private symbols that were
exported; those still left in cogl.symbols are the ones still being
referenced by Cogl-Pango.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This reverts commit 83dbf79986981fac9ec0f2575b7c7cb32f629f0f.
On further consideration we realized that needing this change either
indicated a bug in the code using cogl, or that it was a symptom of
some other bug in Cogl resulting in us returning NULL in
cogl_buffer_map_range but not returning a CoglError too.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 8c5127c712570f1ea0d495a7fe7290ae5ee60ce6)
If a pipeline that disables depth writing has been flushed and we then
try to clear the framebuffer with cogl_framebuffer_clear4f, passing
COGL_BUFFER_BIT_DEPTH, we need to make sure that depth writing is
re-enabled before issuing the glClear call. We also need to make sure
that when the next primitive is flushed we re-check what state the
depth mask should be in.
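As a rough sketch of the idea at the GL level (the helper and the cached
flag here are invented for illustration, not Cogl's actual state
tracking):

    #include <GL/gl.h>
    #include <stdbool.h>

    /* Hypothetical helper: make sure depth writes are enabled before a
     * depth clear; returns true if the caller's cached depth-mask state
     * is now stale and must be re-flushed with the next primitive. */
    static bool
    prepare_depth_clear (GLbitfield buffers, bool depth_writing_enabled)
    {
      if ((buffers & GL_DEPTH_BUFFER_BIT) && !depth_writing_enabled)
        {
          glDepthMask (GL_TRUE); /* glClear honours the depth mask */
          return true;
        }

      return false;
    }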
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 3cf497042897d1aa6918bc55b71a36ff67e560b9)
This makes sure that a viewport change when comparing between separate
framebuffers also implies a clip change when we are applying the Intel
gen6 workaround for broken viewport clipping. Without this, switching
between framebuffers of different sizes could leave a scissor
matching the size of a previous framebuffer.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f23f2129c58550f819cff783f47039d7bd91391e)
This makes some changes to _cogl_bitmap_gl_bind to be more paranoid
about bad access arguments and to make sure we don't mark a bitmap as
bound if there was an error in _cogl_buffer_gl_bind.
We now validate the access argument upfront to check that one of _READ
or _WRITE access has been requested. If Cogl is built without debug
support we will still detect a bad access argument later and now
explicitly return before marking the bitmap as bound, just in case the
g_assert_not_reached has somehow been disabled. Finally we defer
setting bitmap->bound = TRUE until after we have checked for any error
from _cogl_buffer_gl_bind.
https://bugzilla.gnome.org/show_bug.cgi?id=686770
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 1720d5cf32449a189fd9d400cf5e6696cd50a9fa)
Previously the sampler uniform declarations such as cogl_sampler0 were
generated by walking the list of layers in the shader state. This had
two problems. Firstly it would only generate the declarations for
layers that have been referenced. If a layer has a combine mode of
replace then the samplers from previous layers couldn't be used by
custom snippets. Secondly it meant that the samplers couldn't be
referenced by functions in the declarations sections because the
samplers are declared too late.
This patch fixes it to generate the layer declarations in the backend
start function using all of the layers on the pipeline instead. In
addition it adds the sampler declarations to the vertex shader as they
were previously missing.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1824df902bbb9995cae6ffb7a413913f2df35eef)
Conflicts:
cogl/driver/gl/cogl-pipeline-fragend-glsl.c
cogl/driver/gl/cogl-pipeline-vertend-glsl.c
This adds hook points to add global function and variable declarations
to either the fragment or vertex shader. The declarations can then be
used by subsequent snippets. Only the ‘declarations’ string of the
snippet is used and the code is directly put in the global scope near
the top of the shader.
The reason this is necessary rather than just adding a normal snippet
with the declarations is that for the other hooks Cogl assumes that
the snippets are independent of each other. That means if a snippet
has a replace string then it will assume that it doesn't even need to
generate the code for earlier hooks which means the global
declarations would be lost.
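As a rough illustration of how the new hook can be used (the shader code
and pipeline setup below are invented, not part of the commit):

    #include <cogl/cogl.h>

    /* Declare a helper function in the fragment shader's global scope
     * so that a later snippet can call it; only the declarations string
     * is used for the GLOBALS hooks. */
    static void
    add_desaturate_helper (CoglPipeline *pipeline)
    {
      CoglSnippet *globals =
        cogl_snippet_new (COGL_SNIPPET_HOOK_FRAGMENT_GLOBALS,
                          "vec3 desaturate (vec3 color) {\n"
                          "  return vec3 (dot (color,\n"
                          "                    vec3 (0.299, 0.587, 0.114)));\n"
                          "}\n",
                          NULL /* post string is ignored for this hook */);
      CoglSnippet *use =
        cogl_snippet_new (COGL_SNIPPET_HOOK_FRAGMENT,
                          NULL,
                          "cogl_color_out.rgb = "
                          "desaturate (cogl_color_out.rgb);\n");

      cogl_pipeline_add_snippet (pipeline, globals);
      cogl_pipeline_add_snippet (pipeline, use);
      cogl_object_unref (globals);
      cogl_object_unref (use);
    }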
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ebb82d5b0bc30487b7101dc66b769160b40f92ca)
To avoid linking trouble in C++ code
Reviewed-By: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b194f1bf58055ef1f5075508f19336cae648a0c8)
Conflicts:
cogl/cogl-object.h
This fixes some minor errors and warnings that were preventing Cogl
building with mingw32:
• cogl-framebuffer-gl.c was not including cogl-texture-private.h.
Presumably something else ends up including that when building for
GLX.
• The WGL winsys was not including cogl-error-private.h
• A call to strsplit in the WGL winsys was wrong.
• For some reason the test-wrap-rectangle-textures test was trying to
include the GdkPixbuf header.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5380343399f834d9f96ca3b137d49c9c2193900a)
Mesa's libGLU tesselator code has had a commit on it since it was
copied into Cogl. It sounds like it fixes a potential crash so we
should probably have it in Cogl too.
http://cgit.freedesktop.org/mesa/glu/commit/?id=bfdf99d6ff64b9c2
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c6b2429546d3ea0aa91caa47c7c90f932984ea33)
glMapBufferRange is documented to fail with GL_INVALID_OPERATION if
GL_MAP_INVALIDATE_BUFFER_BIT is set as well as GL_MAP_READ_BIT. I
guess this makes sense when only read access is requested because
there would be no point in reading back uninitialised data. However,
Clutter requests read/write access with the discard hint when
rendering to a CoglBitmap with Cairo. The data is new so the discard
hint makes sense but it also needs read access so that it can read
back the data it just wrote for blending.
This patch works around the GL restriction by skipping the discard
hints if read access is requested. If the buffer discard hint is set
along with read access it will recreate the buffer store as an
alternative way to discard the buffer as it does in the case where the
GL_ARB_map_buffer_range extension is not supported.
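A simplified sketch of that workaround logic (the function and parameter
names are made up; the real code lives in Cogl's GL buffer backend):

    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* GL forbids GL_MAP_INVALIDATE_BUFFER_BIT together with
     * GL_MAP_READ_BIT, so when read access is requested we drop the
     * discard flag and orphan the buffer store instead. */
    static void *
    map_range_with_discard (GLenum target, GLsizeiptr buffer_size,
                            GLintptr offset, GLsizeiptr length,
                            GLbitfield access, GLenum usage_hint)
    {
      if ((access & GL_MAP_READ_BIT) &&
          (access & GL_MAP_INVALIDATE_BUFFER_BIT))
        {
          access &= ~(GLbitfield) GL_MAP_INVALIDATE_BUFFER_BIT;
          /* re-specifying the store discards the old contents */
          glBufferData (target, buffer_size, NULL, usage_hint);
        }

      return glMapBufferRange (target, offset, length, access);
    }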
https://bugzilla.gnome.org/show_bug.cgi?id=694164
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 986675d6043e8701f2d65415cf72ffc91734debd)
The journal manually flushes its own modelview matrix state so it
needs to mark the state as dirty so that if a primitive is drawn with
the same matrix state as the last primitive it will correctly reflush
it.
https://bugzilla.gnome.org/show_bug.cgi?id=693612
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c7290c994c742456ff0977cb394c289afb377049)
If we make this per-context and create two Cogl contexts, some types
won't re-register, and we'll be in a broken state where some types will
be considered not to be texture types.
https://bugzilla.gnome.org/show_bug.cgi?id=693696
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 567f049d20554bb8ea4e40fa5e72a9fd0bbd409e)
When Cogl is compiled with support for both the GL and GLES drivers it
only includes the GL header and not the GLES header. That means in
that case it would not compile in the code for the
GL_EXT_discard_framebuffer extension even though it could be used on
the GLES driver. This patch makes it use the standard GL_COLOR,
GL_STENCIL, etc. names instead of the _EXT suffixed names and
manually defines them if we are using the GLES headers. That way the
discard code can be used unconditionally.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 59c30292d0f3c28d6e0e08bc5bf3b4b10545d856)
This patch just adds a call to _cogl_framebuffer_flush_state to ensure
the correct framebuffer is bound before discarding its buffers.
Previously it would presumably just discard the buffers of whatever
framebuffer happened to be used last.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 37c390a5b33d4f65ff6c834e9be2f8de716635ee)
The array allocated for storing the difference flags for each layer in
cogl-pipeline-opengl.c was being cleared with the size of a pointer
instead of the size actually allocated for the array. Presumably this
would mean that if there is more than one layer it wouldn't clear the
array properly.
Also the size of the array was slightly wrong because it was allocating
the size of a pointer for each layer instead of the size of an
unsigned long.
This was originally reported by Jasper St. Pierre on #clutter.
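The underlying mistake is the classic sizeof-a-pointer slip; roughly
(illustrative names, not the actual Cogl code):

    #include <glib.h>
    #include <string.h>

    static unsigned long *
    alloc_layer_differences (unsigned int n_layers)
    {
      unsigned long *differences =
        g_malloc (sizeof (unsigned long) * n_layers);

      /* memset (differences, 0, sizeof (differences));  <- only clears
       * the 4 or 8 bytes of the pointer itself */
      memset (differences, 0, sizeof (unsigned long) * n_layers);

      return differences;
    }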
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1e134dd7cd5317651be158a483c7cb2723ce8869)
Even if Cogl decides to set a zero timeout because there are events
queued, it still makes sense to give the winsys a chance to add file
descriptors to the list. The winsys might be relying on the list of
CoglPollFDs passed to poll_dispatch to decide whether to read from a
file descriptor and that should happen even if Cogl also woke up the
main loop because the event queue isn't empty.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 6d2f3bc4913d0f1570c09e3714ac8fe2dbfc7a03)
It is expected that cogl_sdl_idle() will be called from the
application immediately before blocking in SDL_WaitEvent. However,
dispatching the onscreen events may cause more events to be queued. If
that happens we need to make sure the blocking returns immediately.
This patch makes it post the dummy event that the application chose in
order to make that happen.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 9e34a1e8ce97b67ebb2889c622f2c9f1076b087d)
In SDL1 the event type numbers were a single byte so we were only
reserving a byte to store the application's chosen type in
CoglRenderer. However in SDL2 they are a Uint32 and SDL_USEREVENT is
0x8000 so if the application was using that then Cogl would actually
end up posting event type 0.
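For illustration, posting the wake-up event in SDL2 with the type kept
as a full Uint32 looks roughly like this (the helper name is invented):

    #include <SDL.h>

    static void
    post_wakeup_event (Uint32 app_event_type)
    {
      SDL_Event event;

      SDL_zero (event);
      event.type = app_event_type; /* e.g. a value >= SDL_USEREVENT */

      SDL_PushEvent (&event);
    }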
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 39c9177776ac601a92c6f4112558464af6968ea0)
It seems like it would be quite a reasonable design for an application
to immediately paint the buffer and call swap_buffers within the
handler for the sync event. This previously wouldn't work.
When using the GLX winsys if swap_region is called then it immediately
tries to set the pending notification flag. However if this is called
from the event callback then when the callback is complete it will
clear the flag again and the pending notification will be lost. This
patch just makes it clear the pending flag before invoking the
callback so that it can be safely queued again.
With any winsys that doesn't directly handle the sync event
notification it would almost work except that it was iterating the
live list of pending events. If the callback causes another event to
be added to this list by issuing a buffer swap then the iteration
would never complete and cogl_poll_dispatch would never return. This
patch just makes it steal the list before iterating so that any
additions will be dispatched by a later call to cogl_poll_dispatch
instead.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 2263b31594900b73900d2ce22cf70c68e7e793c6)
The first hunk from commit 93b7b4c850dd928bf21ee168a95641a8d631f713
turned out to be redundant because GLX guarantees that configs returned
by glXChooseFBConfig should be sorted with non msaa configs coming
first. The second hunk is required since we use glXGetFBConfigs in that
case which doesn't sort the configs.
I had meant to drop this part of the patch before landing it but forgot.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b19fcc1869275826e952925af922125daf8a48de)
There is no guarantee that glXGetFBConfigs will return fbconfigs
ordered with non msaa configs first. This patch makes sure that a non
msaa config gets chosen.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 93b7b4c850dd928bf21ee168a95641a8d631f713)
Add an API to get the current time in the time system that Cogl
reports timestamps in. This can be used to convert timestamps into a
different time system.
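For example, an application could compare a frame's presentation
timestamp with the current time on the same clock; a sketch assuming
the experimental cogl_get_clock_time() and
cogl_frame_info_get_presentation_time() entry points:

    #include <cogl/cogl.h>

    /* Age of a presented frame, in nanoseconds, on the clock that Cogl
     * reports timestamps in. */
    static int64_t
    frame_age_ns (CoglContext *context, CoglFrameInfo *info)
    {
      return cogl_get_clock_time (context) -
             cogl_frame_info_get_presentation_time (info);
    }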
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 9f3735a0c37adcfcffa485f81699b53a4cc0caf8)
Add a CoglFrameInfo object that tracks timing information for frames
that are drawn. We track a frame counter and frame timing information
for each CoglOnscreen. Internally a CoglFrameInfo is automatically
created for each frame, delimited by cogl_onscreen_swap_buffers() or
cogl_onscreen_swap_region() calls.
CoglFrameInfos are delivered to applications via frame event callbacks
that can be registered with a new cogl_onscreen_add_frame_callback()
api. Two initial event types (dispatched on all platforms) have been
defined: a _SYNC event used for throttling the frame rate of
applications and a _COMPLETE event used to signify the end of a frame.
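A sketch of how an application might hook into the new api (only the
callback wiring is shown; the redraw logic is omitted):

    #include <cogl/cogl.h>
    #include <glib.h>

    static void
    frame_cb (CoglOnscreen *onscreen,
              CoglFrameEvent event,
              CoglFrameInfo *info,
              void *user_data)
    {
      if (event == COGL_FRAME_EVENT_SYNC)
        {
          /* the previous frame has been handed off; safe to start
           * drawing the next one */
        }
      else if (event == COGL_FRAME_EVENT_COMPLETE)
        g_print ("frame %" G_GINT64_FORMAT " presented\n",
                 (gint64) cogl_frame_info_get_frame_counter (info));
    }

    static void
    watch_frames (CoglOnscreen *onscreen)
    {
      cogl_onscreen_add_frame_callback (onscreen, frame_cb,
                                        NULL, /* user_data */
                                        NULL /* destroy notify */);
    }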
Note: This new _add_frame_callback() api makes the
cogl_onscreen_add_swap_complete_callback() api redundant and so it
should be considered deprecated. Since the _add_swap_complete_callback()
api is still experimental api, we will be looking to quickly migrate
users to the new api so we can remove the old api.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 700401667db2522045e4623d78797b17f9184501)
When we block waiting for the swap, prefer doing that using
glXWaitForMsc() from OML_sync_control because that returns a system
time value for the precise time of the swap.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1e8114aabc78b90373d3d5f3f7c0224f8786e399)
This adds a cogl_renderer_foreach_output() function that can be used to
iterate the display outputs for a particular renderer.
This also updates cogl-info to use this new api so it can dump out all
the output information.
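Roughly what cogl-info does with the new api, assuming the experimental
CoglOutput accessors shown below:

    #include <cogl/cogl.h>
    #include <glib.h>

    static void
    print_output (CoglOutput *output, void *user_data)
    {
      g_print ("output at %d,%d  %dx%d  %.2f Hz\n",
               cogl_output_get_x (output),
               cogl_output_get_y (output),
               cogl_output_get_width (output),
               cogl_output_get_height (output),
               cogl_output_get_refresh_rate (output));
    }

    static void
    dump_outputs (CoglRenderer *renderer)
    {
      cogl_renderer_foreach_output (renderer, print_output, NULL);
    }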
Reviewed-by: Owen W. Taylor <otaylor@fishsoup.net>
(cherry picked from commit a2abf4c4c1fd5aeafd761f965d07a0fe9a362afc)
The CoglOutput object represents one output such as a monitor or
laptop panel, with information about attributes of the output such as
the position of the output within the global coordinate space, and
the refresh rate.
We don't yet publicly export the ability to get output information but
we track it for the GLX backend, where we'll use it to track the refresh
rate.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d7ef9d8d71488d0e6874f1ffc6e48700d5c82a31)
There is a cogl_renderer_get_n_fragment_texture_units() function which
is documented to return the number of texture units that are
accessible from a fragment program. This just directly returns the
value from GL_MAX_TEXTURE_IMAGE_UNITS which is available in either the
GLSL extensions or the ARBfp extension. Clutter-GST relies on this to
determine whether it can use a program to convert the YUV data on the
GPU.
When the GL3 driver was added in 66c9db993595b this was changed to
only query the value when the GLSL feature is available. Previously it
would always query the value when the GL or GLES2 driver is used. This
change makes sense on master because there is no API for an
application to make its own ARBfp programs so the only way to access
texture units from a program is via GLSL. However on the 1.14 branch
this patch broke clutter-gst when GLSL is disabled because it thinks
the ARBfp programs can't use multi-texturing.
This patch just changes it to also query the value when ARBfp support
is available.
Note: it's probably not a good idea to apply this patch to master,
but only to the 1.14 branch. On master the function probably needs to
be changed anyway because it is using _COGL_GET_CONTEXT().
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Similar to commit 2c0cfdefbb9d1 for the SDL2 winsys, the GLX and EGL
window systems need to bind the dummy surface or drawable when the
currently bound onscreen is destroyed so that there will always be a
valid context bound.
Previously I got the idea that this would not be necessary on GLX
because the documentation for glXDestroyDrawable states that the
drawable won't actually be destroyed if it is currently bound until it
becomes unbound. However it doesn't say what happens if the underlying
X window is also destroyed and after testing it seems this causes a
segfault in Mesa in GLX and an XError for EGLX.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 4a464eec8c5b5832b9fd6b69746ab4ab36229182)
GL_TEXTURE_MAX_LEVEL is not supported on GLES so we can't set it. It
looks like Mesa was letting us get away with this but on other drivers
it may cause errors. The enum is not defined in the GLES headers so it
was failing to compile unless the GL driver is also enabled.
The test-texture-mipmap-get-set test is now marked as n/a on GLES2
because it can't support limiting the sampled mipmaps.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ba51c393818582b058f5f1e66cf8d13835ad10e5)
Conflicts:
tests/conform/test-conform-main.c
The GLES2 driver wasn't compiling unless the GL driver is also enabled
because some run-time conditional code was directly using GL-only
defines.
This should also fix compiling using the stock GL headers on OS X
which don't define GL_NUM_EXTENSIONS.
https://bugzilla.gnome.org/show_bug.cgi?id=692420
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 661e1719aa0b95c409c568ec91ea52b8ff90519b)
Previously when creating a foreign rectangle texture it would ignore
the passed in texture information and query the texture directly when
using COGL_DRIVER_GL. However this should also work for
COGL_DRIVER_GL3. This patch changes it to check the private feature
flags for the texture querying feature instead of directly checking
the driver value.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 258c98b82027cb5074afe7844ff3954bbe928757)
This was generating warnings when the GL driver is disabled.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f26682dcc04642fed9db959c63d6c6e4261d2148)
Conflicts:
cogl/cogl-auto-texture.c
This adds support for the EGL_EXT_buffer_age extension which is a
counterpart to the GLX_EXT_buffer_age extension.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 92d869764c03d0bac6b51dac833510c22669ac4a)
Add a new BUFFER_AGE winsys feature and a get_buffer_age method to
cogl-onscreen that allows the value to be queried.
https://bugzilla.gnome.org/show_bug.cgi?id=669122
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Note: When landing the patch I made some gtk-doc updates and changed
_get_buffer_age to return an age of 0 always if the age feature isn't
supported instead of using _COGL_RETURN_VAL_IF_FAIL. -- Robert Bragg
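A small usage sketch: because the age is 0 whenever the contents are
undefined or the feature is unsupported, callers can simply fall back
to a full repaint in that case.

    #include <cogl/cogl.h>

    static CoglBool
    can_reuse_back_buffer (CoglOnscreen *onscreen)
    {
      /* 0 means undefined contents or no buffer-age support */
      return cogl_onscreen_get_buffer_age (onscreen) > 0;
    }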
(cherry picked from commit 427b1038051e9b53a071d8c229b363b075bb1dc0)
Comparing the pointed-to value is clearly what was meant.
Found by Coverity.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f676352210fad856ae85962733e488bc1a832411)
Previously the functions for packing and unpacking pixels were
generated by token pasting together a function name along with its
type, like the following:
_cogl_pack_ ## uint8_t
Then later in cogl-bitmap-conversion.c it would directly refer to the
function names without token pasting.
This wouldn't work however if the system headers define the stdint
types using #defines instead of typedefs because in that case the
function name generated using token pasting would get the expanded
type name but the reference that doesn't use token pasting wouldn't.
This patch adds an extra macro passed to the cogl-bitmap-packing.h
header which just has the type size. That way the function can be
defined like this instead:
_cogl_pack_ ## 8
That should prevent it from hitting problems with #defined types.
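A contrived illustration of the naming change (these macros are not the
actual packing macros):

    /* before: names were built by pasting on the type name, which can
     * end up disagreeing with plain references when the system headers
     * #define uint8_t rather than typedef it */
    #define DEFINE_PACK_BY_TYPE(type)  static void _cogl_pack_ ## type (void) {}

    /* after: names are built by pasting on the component size, which is
     * always the same token */
    #define DEFINE_PACK_BY_SIZE(size)  static void _cogl_pack_ ## size (void) {}

    DEFINE_PACK_BY_SIZE (8)   /* expands to _cogl_pack_8 */
    DEFINE_PACK_BY_SIZE (16)  /* expands to _cogl_pack_16 */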
https://bugzilla.gnome.org/show_bug.cgi?id=691945
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d6b5d7085b004ebd48c1543b820331802395ee63)
This makes autogen.sh look for automake-1.13 and also updates all
Makefile.am files to no longer use the INCLUDES variable, which
automake 1.13 warns is deprecated in favour of AM_CPPFLAGS.
https://bugzilla.gnome.org/show_bug.cgi?id=690891
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 5de5569e960102afe979a5f2f0403e1defebca62)
We have a workaround in Cogl to fix viewport clipping with Mesa Intel
Gen 6 drivers but this was breaking the semantics of
cogl_framebuffer_clear() which should not be affected by viewport
clipping. This makes sure we disable and restore the workaround when
clearing the framebuffer. This fixes Clutter's test-cogl-viewport
conformance test.
This tweaks the ordering of some struct members in some of the more
important structs so that the compiler won't need to insert wasted
padding to keep members aligned. Some members that were previously
unsigned long have been changed to unsigned int. These members need to
be able to fit in 32-bits to run on 32-bit machines anyway so there's
no point in having them extend to 64-bit on 64-bit machines. This
doesn't affect the public API.
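For illustration (these are not Cogl's structs), interleaving 4-byte and
8-byte members on a 64-bit machine forces the compiler to insert
padding, while grouping them avoids it:

    #include <stdio.h>

    struct interleaved { unsigned int a; void *p; unsigned int b; void *q; };
    struct grouped     { void *p; void *q; unsigned int a; unsigned int b; };

    int
    main (void)
    {
      /* typically prints "32 vs 24" on an LP64 system */
      printf ("%zu vs %zu\n",
              sizeof (struct interleaved), sizeof (struct grouped));
      return 0;
    }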
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit b721af236680005464e39f7f4dd11381d95efb16)
In commit 1fa7c0f10a8a0 the sliced texture code which creates the
array of pointers to the texture slices was changed so that the
textures are appended to the end of the array instead of initially
creating the array with the right size upfront and then shrinking the
array on error. However it was then still also setting the size of the
array after creating it so the new textures would actually end up in
an unused part of the array. The part of the array that is used was
left uninitialised so it would crash. This just removes the call to set
the size of the array.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 7df09d505ba28a1a960df867346af67118e96718)
GL3 has support for clip planes but they are used differently and
involve writing to a builtin output variable in the vertex shader. The
current clip plane code assumes it is only used with a fixed function
driver and tries to directly push to the matrix builtins. This
obviously won't work on GL3 so for now let's just disable clip planes.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5f621589467ab961f5130590298dc8e26d658a92)
When a component-alpha texture is made using a GL3 context a GL_RED
texture is actually used and a swizzle is set up to hide it. However
if a framebuffer is then bound to that texture then when the bits are
queried this workaround will leak out of the API. To fix this it now
detects the situation and reports the number of red bits as the number
of alpha bits.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 425cfb2675912a2cbcaaaeed7c2196d563948222)
Previously when the context was initialised Cogl would query the
number of stencil bits and set a private feature flag to mark that it
can use the buffer for clipping if there were at least 3 bits. The problem
with this is that the number of stencil bits returned by
GL_STENCIL_BITS depends on the currently bound framebuffer. This patch
adds an internal function to query the number of stencil bits in a
framebuffer and makes it use that instead when determining whether it
can push the clip using the stencil buffer.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e928d21516a6c07798655341f4f0f8e3c1d1686c)
Cogl publicly exposes the depth buffer state so we might as well have
a function to query the number of depth bits of a framebuffer.
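A trivial usage sketch, assuming the new getter is the public
cogl_framebuffer_get_depth_bits():

    #include <cogl/cogl.h>

    static CoglBool
    framebuffer_has_depth (CoglFramebuffer *framebuffer)
    {
      return cogl_framebuffer_get_depth_bits (framebuffer) > 0;
    }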
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 853143eb10387f50f8d32cf09af31b8829dc1e01)