When running in a purely swrast environment (such as with
LIBGL_ALWAYS_SOFTWARE), the extension is not exposed by Mesa,
but Wayland is still usable via wl_shm.
https://bugzilla.gnome.org/show_bug.cgi?id=704750
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 8d4e4b0e8062708cece4d4c929abccc492ee21cc)
It was an oversight when making the CoglAtlasTexture api public that we
continued to use the COGL_TEXTURE_INTERNAL_DEFINE macro. This updates
the code to now use COGL_TEXTURE_DEFINE which means the
cogl_is_atlas_texture() function will now be exported in the public api.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Lionel Landwerlin <llandwerlin@gmail.com>
(cherry picked from commit ecbe209f48be80fe45b48f92b277a2aee08d5704)
The recommended usage model for rendering pipelines with minor changes
is to make a copy of a base pipeline just before rendering and then
modify that. The new pipeline can then be used as the base pipeline
for the next paint. Currently this has a known problem when modifying
uniform values in that Cogl won't prune the redundant ancestry and
instead it will end up with an ever-growing chain of pipelines. This
is particularly bad for something like CoglGST where it could also end
up leaking textures for the video frames if the pipelines are used to
render video.
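As a minimal sketch of that usage model (the uniform name and value here
are purely illustrative):

    /* set up once */
    CoglPipeline *base = cogl_pipeline_new (ctx);

    /* every paint: copy, tweak, draw, then promote the copy */
    CoglPipeline *frame = cogl_pipeline_copy (base);
    int location = cogl_pipeline_get_uniform_location (frame, "alpha");
    cogl_pipeline_set_uniform_1f (frame, location, 0.5f);
    /* ... draw with frame ... */
    cogl_object_unref (base);
    base = frame;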
The patch adds a test case for that situation so that we won't forget
about the problem. The test is marked as a known failure. Additionally
the patch adds a similar test for setting the blend constant to
contrast the test with some state that does work correctly.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 2550181543389d6e9e1cb9618d17cd352a0cf9b6)
This reverts the change in semantics for cogl_texture_new_with_size so
that it goes back to allocating textures synchronously and returning
NULL if there was a failure to allocate. Only the new/2.0 texture apis
will have the lazy allocation behaviour so we avoid breaking existing
code. This also fixes a potential crasher by removing a code path
that was passing NULL to cogl_texture_allocate() which would have
caused an abort if there was an error.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This adds a #define for gl_PointCoord to all shaders so that it can be
accessed with a name in the Cogl namespace.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c28fc054788e88627bcc2346f4c4c368870ff777)
Previously we would only add the #version pragma to shaders when
point sprite texture coordinates were enabled for a layer so that we
can access the gl_PointCoord builtin. However I don't think there's
any good reason not to just always request GLSL version 1.2 if it's
available. That way applications can always use gl_PointCoord without
having to enable point sprite texture coordinates.
This adds a glsl_version_to_use member to CoglContext which is used to
generate the #version pragma as part of the shader boilerplate. On
desktop GL this is set to 120 if version 1.2 is available, otherwise
it is left at 110. On GLES it is always left as 100.
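In other words the selection boils down to something like this sketch
(the helper booleans are illustrative, not the actual code):

    if (driver_is_gles)
      context->glsl_version_to_use = 100;
    else if (gl_supports_glsl_1_2)
      context->glsl_version_to_use = 120;
    else
      context->glsl_version_to_use = 110;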
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e4dfe8b07e8af111ecbcb0da20ff2a2875a2b5d0)
Conflicts:
cogl/driver/gl/gl/cogl-driver-gl.c
The documentation for the builtin varyings for the texture coordinates
was wrongly claiming that the varyings are stored in an array. This
was changed in e55b64a9cdc9 so that each layer gets its own
independent varying.
The documentation was also referring to texture units instead of layer
numbers. The texture units are no longer publicly exposed in the
shaders and instead everything should in theory be expressed in terms
of layer numbers.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit bf6b509c340bdc3be30e1a81fb96710b3176e9dc)
The deprecation macros need to be placed before the function prototype
for Visual Studio, and this placement is also accepted by GCC.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Add the symbols that were added to the public Cogl API, and remove the
export of an internal API that was also removed. Unfortunately
_cogl_system_error_quark needs to be exported for the conformance test
programs.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Cogl-Path was split out from the main Cogl library to become a standalone
library, but many libraries/applications using Cogl (such as Clutter)
expect that Cogl-Path is still in Cogl. Define COGL_HAS_COGL_PATH_SUPPORT
here, as it will always be needed, at least for the 1.16 release series,
so that builds of projects using Cogl, such as Clutter, do not break.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
The autotools variables substituted for versioning in 1.16 are wrong, so we
have incorrect versioning info during a release as the variables are for
Cogl-2.x. Fix this so we can have the correct versioning info for the
Cogl/Cogl-Pango DLLs during a release.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
cogl-texture-private.h needs to be included as it declares
_cogl_texture_needs_premult_conversion, so that we can avoid implicit
declaration warnings for that symbol (aka C4013 on MSVC).
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 4afe9dc1fea646e2a9576f9a0dbd1ffafa40485b)
So we don't read an uninitialised value later on. Caught by valgrind:
Conditional jump or move depends on uninitialised value(s)
_cogl_object_texture_rectangle_indirect_free (cogl-texture-rectangle.c:105)
_cogl_object_context_indirect_free (cogl-context.c:453)
...
main (text.c:149)
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 90415aae9495749a2a9e85fb17425a3c7f6a08c8)
Mesa has started getting picky about specifying the precision for
floating types in the fragment shader. We already have a default
precision specifier in all the fragment shaders but apparently this
wasn't working because it is only used when the __VERSION__ define is
100 and Mesa is reporting 110. Regardless of whether Mesa is doing the
right thing or not I think it makes sense to use GL_ES instead of
__VERSION__ because we will also need the precision specifier if we
start requesting GLSL 3.0. The GLES specification explicitly states
that GL_ES will only be defined for GLES and this is similar to what
the internal meta shaders do in Mesa.
http://cgit.freedesktop.org/mesa/mesa/commit/?id=cabd45773b58d6aa482
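For reference, the kind of boilerplate this results in, sketched as the
C string prepended to fragment shaders (not the literal string Cogl
emits):

    static const char fragment_boilerplate[] =
      "#ifdef GL_ES\n"
      "precision highp float;\n"
      "#endif\n";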
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 37f5205739cfb0745ea74dedaec117081ba0088b)
Since e07d0fc7441 the COGL_DRIVER environment variable was able to
override the application's driver selection. This doesn't seem like a
good idea because if the application is specifying a driver explicitly
then presumably it can not work with any other driver. This patch
changes it so that if a driver is selected in the configuration and by
the application then they must match, otherwise it will fail with a
CoglError.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 4e0942d74c8d69aa48e0176bfecf27f64a950d0f)
Only COGL_RENDERER_CONSTRAINT_SUPPORTS_COGL_GLES2 affects the driver
selection and all of the other constraints are only relevant to
the winsys selection. However Cogl was trying to apply all of the
constraints to the driver selection which meant that if any other
constraint was specified then it would always fail. This patch makes
the driver selection filter out all other constraints based on a mask
defined in cogl-renderer-private.h.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f07febc8913b97fb828e7f2cc2857813af2d3657)
Currently it's only possible to set an onscreen template on a
CoglDisplay by passing a template to cogl_display_new(). For
applications that want to deal with fallbacks then they may want to
replace the onscreen template so this adds a
cogl_display_set_onscreen_template() function.
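A rough usage sketch (assuming the existing cogl_onscreen_template_new()
and cogl_display_new() signatures):

    CoglOnscreenTemplate *tmpl = cogl_onscreen_template_new (NULL);
    cogl_onscreen_template_set_swap_throttled (tmpl, FALSE);

    CoglDisplay *display = cogl_display_new (renderer, NULL);
    /* e.g. replace the template later as part of a fallback path */
    cogl_display_set_onscreen_template (display, tmpl);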
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f307c9545791dae5472a9568fef6b31b3bf52854)
This updates the static assertion in cogl-renderer.c to only check that
the flags will fit in 32 bits instead of asserting the type has the same
size as an unsigned long.
(cherry picked from commit c6893fa3c9eda0f13b79d3a1fc03f8b79c42a8f6)
WebGL doesn't allow you to separately attach buffers to the
STENCIL_ATTACHMENT and DEPTH_ATTACHMENT framebuffer attachment points
and instead requires you to use the DEPTH_STENCIL_ATTACHMENT whenever
you want a depth and stencil buffer.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit ec7b6360c9c4e45e0b113f9dca7bb1502e7e93be)
This adds a COGL_DRIVER_WEBGL enum and a new driver description for
webgl in cogl-renderer.c. This also adds a COGL_DRIVER_FLAG_OPENGL_WEB
driver flag and a HAVE_COGL_WEBGL define which we can start to use to
handle special cases where webgl differs from gles2.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 2c167b7a4ee56241827322bbe7cb287b0628437c)
This adds a table of driver descriptions to cogl-renderer.c in order of
preference and when choosing what driver to use we now iterate the table
instead of repeating boilerplate checks. For handling the "default driver"
that can be specified when building cogl and handling driver overrides
there is a foreach_driver_description() that will make sure to iterate
the default driver first or if an override has been set then nothing but
the override will be considered.
This patch introduces some driver flags that let us broadly categorize
what kind of GL driver we are currently running on. Since there are
numerous OpenGL apis with different broad feature sets and new apis
may be introduced in the future by Khronos then we should tend to
avoid using the driver id to do runtime feature checking. These flags
provide a more stable quantity for broad feature checks.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit e07d0fc7441dddc3f0a2bc33a6a37d62ddc3efc0)
Add API to allow complex applications using the KMS backend
to go almost straight to direct configuration (which is not possible
because Cogl needs to be in charge of buffers and FB objects).
https://bugzilla.gnome.org/show_bug.cgi?id=705837
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 52fb8e1c33d8c83c731c05cee767928fdd5991d7)
This now updates the guard to ignore whether
COGL_ENABLE_EXPERIMENTAL_2_0_API is defined since we need to work for
clutter which does define that, as well as clutter users that don't.
cogl.h was meant to include cogl-path.h so long as
COGL_ENABLE_EXPERIMENTAL_2_0_API is not defined but it was actually
requiring it to be defined which was breaking clutter applications.
The eglTerminate code in Mesa will try to destroy the wl_drm object
which involves using data structures in the wl_display. Cogl was
disconnecting the display before calling eglTerminate which meant that
this would end up accessing potentially garbage data.
https://bugzilla.gnome.org/show_bug.cgi?id=705591
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 358d85f35d0fe36698b758163729c4551fe5fd25)
The deprecation macros, which expand to __declspec (deprecated) on Visual
Studio, are expected to come before the return type of the function
annotated by them, and having the deprecation macros there is also
accepted by GCC.
This will fix the builds of all applications/libraries using Cogl under
Visual Studio.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Add and rename the symbols that have been added/renamed during the
development cycle, and also remove those that have been dropped during the
process.
Also continue the quest of purging internal APIs from the exports lists,
as some of those are no longer referenced by other Cogl components.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Unfortunately named initializers are a feature that is not supported by
all compilers (such as pre-2013 Visual Studio) so avoid using them.
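I.e. initializers of this form (a generic C example, not a specific Cogl
struct):

    struct config { int width; int height; };

    /* named (designated) initializer: rejected by pre-2013 MSVC */
    struct config a = { .width = 640, .height = 480 };

    /* portable positional form used instead */
    struct config b = { 640, 480 };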
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5a5659f9861dfe7a4808f2a5284de8fe6175bec2)
Use the HAVE_STRINGS_H check before we include strings.h, as it is not
universally available.
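I.e. the include is now guarded along these lines:

    #ifdef HAVE_STRINGS_H
    #include <strings.h>
    #endif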
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ff65144c84a16f9470d3f3931dc91cc9a6ef5938)
...For both the regular WGL winsys and SDL winsys builds, ensure that
COGL_HAS_GTYPE_SUPPORT is defined, so that the builds won't break, as
Visual Studio builds assume an existing installation of GLib.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ef41aea2796315a47693bf278f08b41ca6703566)
Full GL treats the position attribute specially and requires that it
be bound to generic attribute location 0, unlike GLES 2.0 or
GL 3.2 core. We now make sure to unconditionally bind the
cogl_position_in attribute to location 0 before linking any glsl program
in cogl.
For reference the relevant part of the GL 3.0 spec that covers these
semantics is Section 2.7 "Vertex Specification" pg 27
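Concretely the binding has to happen before linking, along these lines
(a sketch of the raw GL calls; the real code goes through Cogl's GL
function pointers):

    /* must be called before glLinkProgram to take effect */
    glBindAttribLocation (program, 0, "cogl_position_in");
    glLinkProgram (program);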
After this change there was one remaining problem in
test-custom-attributes where the test_short_verts() test was using its
own "pos" attribute instead of cogl_position_in and so cogl wasn't able
to ensure it would be bound to location 0.
This updates the test to use cogl_position_in but to work around the
fact that glVertexPointer doesn't support UNSIGNED_SHORT components we
force the test to use the glsl backend by setting a shader snippet on
the pipeline.
https://bugs.freedesktop.org/show_bug.cgi?id=67548
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 992ef7b3b49ebb56adde2133bb36330c04133a3f)
This renames cogl_offscreen_new_to_texture to
cogl_offscreen_new_with_texture. The intention is to then cherry-pick
this back to the cogl-1.16 branch so we can maintain a parallel
cogl_offscreen_new_to_texture() function which keeps the synchronous
allocation semantics that some clutter applications are currently
relying on.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit ecc6d2f64481626992b2fe6cdfa7b999270b28f5)
Note: Since we can't break the 1.x api on this branch this keeps a
thin shim around cogl_offscreen_new_with_texture to implement
cogl_offscreen_new_to_texture with its synchronous allocation
semantics.
The API says that it should return NULL on failure but it does not do that
due to the lazy allocation.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=703174
This splits out the cogl_path_ api into a separate cogl-path sub-library
like cogl-pango and cogl-gst. This enables developers to build Cogl with
this sub-library disabled if they don't need it, which can be useful when
it's important to keep the size of an application and its dependencies
down to a minimum. The functions cogl_framebuffer_{fill,stroke}_path
have been renamed to cogl_path_{fill,stroke}.
There were a few places in core cogl and cogl-gst that referenced the
CoglPath api and these have been decoupled by using the CoglPrimitive
api instead. In the case of cogl_framebuffer_push_path_clip() the core
clip stack no longer accepts path clips directly but it's now possible
to get a CoglPrimitive for the fill of a path and so the implementation
of cogl_framebuffer_push_path_clip() now lives in cogl-path and works as
a shim that first gets a CoglPrimitive and uses
cogl_framebuffer_push_primitive_clip instead.
We may want to consider renaming cogl_framebuffer_push_path_clip to
put it in the cogl_path_ namespace.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 8aadfd829239534fb4ec8255cdea813d698c5a3f)
So as to avoid breaking the 1.x API or even the ABI since we are quite
late in the 1.16 development cycle the patch was modified to build
cogl-path as a noinst_LTLIBRARY before building cogl and link the code
directly into libcogl.so as it was previously. This way we can wait
until the start of the 1.18 cycle before splitting the code into a
separate libcogl-path.so.
This also adds shims for cogl_framebuffer_fill/stroke_path() to avoid
breaking the 1.x API/ABI.
Otherwise, if we try egl-wayland first, we get the environment
variable from that, which crashes Mesa by making it try to open the gbm
device as a Wayland display.
https://bugzilla.gnome.org/show_bug.cgi?id=705836
Almost nothing draws attributes directly and for those things that do
it's trivial to adapt them to instead draw via the cogl_primitive api.
This simplifies the Cogl api a bit.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 7395925bcc01aad6c695fd0d9af78b784b3c64d4)
Conflicts:
cogl/cogl-framebuffer.c
cogl/cogl-framebuffer.h
When splitting out the CoglPath api we saw that we would be left with
inconsistent drawing apis if the drawing apis in core Cogl were lumped
into the cogl_framebuffer_ api considering other Cogl sub-libraries or
that others will want to create higher level drawing apis outside of
Cogl but can't use the same namespace.
So that we can aim for a more consistent style, this adds a
cogl_primitive_draw() api, comparable to cogl_path_fill() or
cogl_pango_show_layout(), that's intended to replace
cogl_framebuffer_draw_primitive().
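A minimal sketch of the change in call style (assuming the framebuffer
and pipeline are now passed explicitly to the primitive API):

    /* old style */
    cogl_framebuffer_draw_primitive (framebuffer, pipeline, primitive);

    /* new style */
    cogl_primitive_draw (primitive, framebuffer, pipeline);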
Note: the attribute and rectangle drawing apis are still in the
cogl_framebuffer_ namespace and this might potentially change but in
these cases there is no single object representing the thing being drawn
so it seems more reasonable that they live in the framebuffer
namespace for now.
Note: the cogl_framebuffer_draw_primitive() api isn't removed by this
patch so it can more conveniently be cherry picked to the 1.16 branch so
we can mark it deprecated for a short while. Even though it's marked as
experimental api we know that there are people using the api so we'd
like to give them a chance to switch to the new api.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 418912b93ff81a47f9b38114d05335ab76277c48)
Conflicts:
cogl-pango/cogl-pango-display-list.c
cogl/Makefile.am
cogl/cogl-framebuffer.c
cogl/cogl-pipeline-layer-state.h
cogl/cogl2-path.c
cogl/driver/gl/cogl-clip-stack-gl.c
This updates cogl_bitmap_new_for_data() to calculate the rowstride from
the width and bpp if the given rowstride is 0, to be consistent with how
the texture apis work.
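For example (a sketch assuming the existing argument order of
cogl_bitmap_new_for_data()):

    /* a rowstride of 0 now means width * bytes-per-pixel */
    CoglBitmap *bmp =
      cogl_bitmap_new_for_data (ctx,
                                width, height,
                                COGL_PIXEL_FORMAT_RGBA_8888,
                                0, /* rowstride: calculated */
                                data);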
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 1c809210092a8c5e223edfcab1e378b205cf35d6)
In preparation for removing the automagic cogl-auto-texture apis this
adds a more minimal version of the cogl_texture_new_with_size code to
cogl-atlas.c for creating textures used to migrate images out of an
atlas and to cogl-texture-pixmap-x11.c.
Note: It turned out that both of these minimal versions were the same so
I did consider keeping a shared utility, but since the implementations
are very small and potentially due to the differing requirements for
atlas and pixmap-x11 textures we might even want them to differ later I
chose to keep them separate.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6d64307483713e7a5a7ef554275619def51b840f)
Conflicts:
cogl/cogl-atlas.c
cogl/winsys/cogl-texture-pixmap-x11.c
This adds cogl_texture_2d_sliced_new_from_bitmap/data/file apis in
preparation for removing the cogl_texture_new_from_bitmap/file apis that
are considered a bit too magic, but we don't want to lose the
convenience they have.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 218da8e1349d7658f45c6933b9736c0d32941b8b)
Conflicts:
cogl/cogl-auto-texture.c
This adds a cogl_texture_2d_new_from_file() api since we are planning to
remove cogl_texture_new_from_file() but don't want to lose the
convenience it had.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 10e91aa513123ed277a8d45976f8d75445d7dc9c)
This exposes the CoglAtlasTexture api, making the following public:
cogl_atlas_texture_new_with_size
cogl_atlas_texture_new_from_file
cogl_atlas_texture_new_from_data
cogl_atlas_texture_new_from_bitmap
The plan is to remove auto-texture apis like cogl_texture_new_from_file
since they are a bit too magic, but that means we need an explicit way
for users to allocate textures that will go in the atlas.
Although the _new_from_file() api is arguably redundant since you can
use _bitmap_new_from_file() followed by _atlas_texture_new_from_bitmap()
we don't want to lose any of the convenience that
cogl_texture_new_from_file() had.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit fe515e6063ba4c3ddb5cd00d2c8527d9a6336a12)
Conflicts:
cogl/Makefile.am
This removes the gl centric _cogl_texture_prepare_for_upload api from
cogl-texture.c and instead adds a _cogl_bitmap_convert_for_upload() api
which everything now uses instead. GL specific code that needed the gl
internal/format/type enums returned by _cogl_texture_prepare_for_upload
now uses ->pixel_format_to_gl directly.
Since there was a special case optimization in
cogl_texture_new_from_file that aimed to avoid copying the temporary
bitmap that's created for the given file and allow conversions to
happen in-place the new _cogl_bitmap_convert_for_upload() api supports
converting in place depending on a 'can_convert_in_place' argument.
This ability to convert bitmaps in-place has been integrated across the
different components as appropriate.
In updating cogl-texture-2d-sliced.c it was also possible to remove a
number of other GL specific parts of how spans are set up.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit e190dd23c655da34b9c5c263a9f6006dcc0413b0)
Conflicts:
cogl/cogl-auto-texture.c
cogl/cogl.symbols
cogl_display_new() takes a ref on the renderer, so code creating a
renderer and not keeping a pointer to it to unref later needs to drop
the ref immediately.
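I.e. the intended pattern is:

    CoglRenderer *renderer = cogl_renderer_new ();
    CoglDisplay *display = cogl_display_new (renderer, NULL);
    /* cogl_display_new() took its own reference */
    cogl_object_unref (renderer);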
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5433555f19ac73f3f236026f1bafca758d63c9fa)
Just like:
commit f3adec1faeb651dd97095a02256932cc82761f40
Author: Neil Roberts <neil@linux.intel.com>
Date: Thu Jul 11 13:51:28 2013 +0100
Initialise dirty_real_blend_enable in _cogl_pipeline_copy
But this time for unknown_color_alpha.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e6a6a2752fb1cc14860cbc559f41f25f7e7f195e)
Mesa annotates the GL version string with "(Core Profile)" when using
the OpenGL 3 core profile and so our heuristics that try and determine
what vendor and GPU is being used were being confused. This updates
the check_mesa_driver_package() function to consider this optional
annotation.
This adds a small unit test to verify the parsing of some example
version strings. We can update this with more real world version strings
if the format changes again in the future.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 1a074173d20857c7bedb6a862958713e5ef8d2d1)
When making a copy of a pipeline, the flag to mark whether the real
blend enable is valid was not being initialised.
Thanks to Damien Lespiau for pointing this out
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f3adec1faeb651dd97095a02256932cc82761f40)
Instead of queuing the frame sync event immediately after a swap, the
Wayland winsys now installs a frame callback and queues the event when
Wayland reports that the frame is complete. It also reports the
COGL_FRAME_EVENT_COMPLETE event at the same time because there is no
more information we can give.
This patch is a bit of a divergence from how the events are handled in
the GLX winsys. Instead of installing its own idle function, the
_cogl_onscreen_queue_event() function has now been made non-static so
that it can be used by the Wayland winsys. The frame callback now just
queues an event using that. The pending_frame_infos queue on the
CoglOnscreen isn't used and instead the CoglFrameInfo is immediately
popped off the queue so that it can be stored as part of the closure
data when the frame callback is set up. That way it would use the
right frame info even if somehow the Wayland callbacks were invoked in
the wrong order and the code is a bit simpler.
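For reference, the underlying Wayland idiom looks roughly like this (a
sketch; the real closure carries the CoglFrameInfo popped off the
queue):

    #include <wayland-client.h>

    static void
    frame_done (void *data, struct wl_callback *callback, uint32_t time)
    {
      wl_callback_destroy (callback);
      /* queue the COGL_FRAME_EVENT_SYNC/_COMPLETE events here */
    }

    static const struct wl_callback_listener frame_listener = {
      frame_done
    };

    static void
    request_frame_callback (struct wl_surface *surface, void *user_data)
    {
      struct wl_callback *cb = wl_surface_frame (surface);
      wl_callback_add_listener (cb, &frame_listener, user_data);
    }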
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f7ea370a0d5013c9f0263f37c7f892adc8a2f087)
Previously if the Wayland socket gets closed then Cogl would ignore
the error when dispatching events which meant the socket would be
constantly ready for reading, the main loop would never go idle and it
would sit at 100% CPU. When Wayland encounters an error it will
actually close the socket which means if something else opened another
file then we might even end up polling on a completely unrelated FD.
This patch makes it remove the FD from the main loop as soon as it
hits an error so that it will at least avoid breaking the main loop.
However I think most applications would probably want to abort in this
case so we might also want to add a way to inform the application of
this or even just abort directly.
The cogl_poll_* functions have been changed so that they can cope if
the pending and dispatch callbacks remove their own FD while they are
invoked.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 85857b10687a5a246a0a4ef42711e560c7a6f45d)
This makes it easy to calculate shades of the same color or pick colors
with the same saturation/luminance. In short, all sorts of interesting
things.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit edcbeaf3c941f7a2335fbec47d5248cd9b0e8088)
This adds cogl_wayland_renderer_set_event_dispatch_enabled() which can
be used to prevent Cogl from adding the socket for the Wayland display
to its list of file descriptors to poll. This can be used in
applications that want to integrate Cogl with existing code that is
reading from the Wayland socket itself.
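Typical usage, before connecting the renderer (the exact signature is an
assumption based on the description above):

    CoglRenderer *renderer = cogl_renderer_new ();
    /* the application will read from the Wayland socket itself */
    cogl_wayland_renderer_set_event_dispatch_enabled (renderer, FALSE);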
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f5b8d98676ab3e90ad80459019c737ec2ff90aa4)
The Wayland 1.0 protocol supports multiple independent components querying the
available interfaces by retrieving their own wl_registry object so the
application doesn't need to pass them down anymore.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8ca36a1d1ab7236fec0f4d7b7361ca96e14c32be)
The idea with the framebuffer allocation is that it will lazily
allocate so that if you don't want to handle errors then you don't
have to be aware that there is an allocation step. In order for this
to work any accessors that get data that is only available after
allocation should implicitly allocate the framebuffer. This patch
makes that change for cogl_wayland_onscreen_get_surface and
cogl_wayland_onscreen_get_shell_surface.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 0c4ba78787323fedd162d7b71b86b460908b9b98)
cogl_wayland_onscreen_get_surface previously only worked if the
onscreen had a foreign surface on it. However there is no reason why
this shouldn't also work fine for manipulating the surface that Cogl
created as well. We may want to consider adding a separate getter for
the foreign surface that can be used before the framebuffer is
allocated.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 6bc12947a51224b70525893143bfe421723ce255)
CoglFixed was trying to use the __FLOAT_WORD_ORDER macro in order to
do some fast float conversions but it wasn't including any header that
could define it so it was giving an annoying warning. This patch
checks for the macro in endian.h in the configure script and only
checks its value if it's available.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Commit 839cf49763 changed the inline ARM assembler so that it
won't be used when targeting the Thumb instruction set. I manually
applied the patch but I messed up the #if so it was generating a
warning.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
The 1.x branch needs to use some of the deprecated API internally in
order to set up some deprecated state. This was causing a lot of
annoying warnings so instead we'll just disable the deprecation
attribute when COGL_COMPILATION is defined.
It probably wouldn't be a good idea to apply this to the 2.0 branch
because at least for now we want to get warnings if we accidentally
use deprecated API internally.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
The Wayland server API has changed so that wl_shm_buffer is no longer
a type of wl_buffer and it instead must be retrieved directly from the
resource.
cogl_wayland_texture_2d_new_from_buffer now takes a resource pointer
instead of directly taking a wl_buffer and it will do different things
depending on whether it can get a wl_shm_buffer out of the resource
instead of trying to query the buffer type.
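The relevant server-side call is wl_shm_buffer_get(), which returns NULL
for non-SHM buffers, so the logic is roughly:

    struct wl_shm_buffer *shm_buffer = wl_shm_buffer_get (resource);

    if (shm_buffer)
      {
        /* upload from the shared-memory data */
      }
    else
      {
        /* try importing the buffer through EGL instead */
      }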
Cogland has also been updated so that it tracks a resource for buffers
of surfaces instead of directly tracking a wl_buffer. These are pointed
to by a new CoglandBuffer struct which can be referenced by a
CoglandBufferReference. The WL_BUFFER_RELEASE event will be posted
when the last reference to the buffer is removed instead of directly
whenever a new buffer is attached. This is similar to how Weston
works.
https://bugzilla.gnome.org/show_bug.cgi?id=702999
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 9b35e1651ad0e46ed489893b60563e2c25457701)
Conflicts:
examples/cogland.c
Previously Cogl would only call wl_display_flush after doing a swap
buffers on the onscreen because that is the only place where Cogl
itself would end up queueing requests. However since commit
323fe188748 Cogl takes control of calling wl_display_dispatch as well
which effectively makes it very difficult for the application to
handle the Wayland event queue itself. Therefore it needs to rely on
Cogl to do it which means that other parts of the application may also
queue requests that need to be flushed.
This patch tries to copy the display fd handling of window.c in the
Weston example clients. wl_display_flush will always be called in the
prepare function for the fd, which means it will always be called
before going idle. If flushing the display causes the socket buffer to
become full, it will additionally poll for write on the FD to try
flushing again when it becomes empty.
We also need to call wl_display_dispatch_pending in the prepare
because apparently calling eglSwapBuffers can cause it to read data
from the FD to receive events for a different queue. In that case
there will be events that need to be handled but the FD will no longer
be ready for reading so we won't wake up the main loop any other way.
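In outline the prepare callback now does something like this (a sketch;
the write-polling bookkeeping is omitted):

    /* dispatch anything already read, e.g. by eglSwapBuffers */
    wl_display_dispatch_pending (display);

    if (wl_display_flush (display) < 0 && errno == EAGAIN)
      {
        /* socket buffer full: also poll for POLLOUT on the fd and
         * retry the flush once it becomes writable */
      }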
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 962d1825105a87dd8358a765353b77f6af8fe760)
_cogl_poll_renderer_modify_fd can be used internally to modify the
event mask on an FD to be polled. This will be used in the Wayland
backend to start blocking on write whenever flushing the display fills
the socket's buffer. Modifying the FD's events causes the poll age to
increase.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8bc0df53ee508687b87e547c1cbac5e8d7d5fc80)
Eventually the Wayland winsys will want to do useful work in its
prepare callback before the main loop goes idle. Previously
cogl_poll_renderer_get_info would stop calling any further prepare
functions if it found one with a zero timeout. That would mean the
Wayland prepare function might not get called before going idle in
some cases. This patch changes it so that it continues to call all of
the prepare functions regardless of the timeout.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 02f7fa538c9d2b383fa0f601177140b571ecf315)
If we don't do this then it might leak connections to the display if
multiple different renderers are tried.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8e5b4d40a4d960d0d20927d30ee68a37387fe776)
When a primitive is drawn with an attribute that contains texture
coordinates Cogl will fetch the corresponding layer in order to
determine the unit number. However if the pipeline didn't actually
have a layer it would end up redundantly creating it. It's probably
not a good idea to be modifying the pipeline while flushing the
attributes state so this patch makes it pass the no-create flag to the
get_layer function and then skips enabling the attribute if the
layer didn't already exist.
https://bugzilla.gnome.org/show_bug.cgi?id=702570
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 7507ad1a55a2aeb5beb8c0e3343e1e1f2805ddde)
When a layer is added to a pipeline without setting a texture it ends
up sampling from a default 1x1 texture which is meant to be solid
white. However for some reason we were creating the texture with 0
opacity which is effectively an invalid premultiplied colour. This
would make the blending behave oddly if it was used.
https://bugzilla.gnome.org/show_bug.cgi?id=702570
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 2ffc77565fb6395b986d3274f8bdb6eee6addbf9)
Unlike in GError, the policy in Cogl for when NULL is passed as the
CoglError argument is that the program should abort with a fatal
error. Previously however any errors that were being propagated were
being silently dropped if the application passed NULL. This patch
fixes it to also log a fatal error in that case.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 41e233b4b27de579f77b82115cf43a618bf0c93f)
When determining whether a pipeline needs blending, it was previously
returning TRUE if the pipeline has no snippets, whereas it should be
the other way around because we can't determine the final colour when
there are snippets.
https://bugzilla.gnome.org/show_bug.cgi?id=702570
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 109c815bf747fe027a74f098b4fcb6ea4846a482)
Previously on GLES2 where there is no builtin point size uniform then
we would always add a line to the vertex shader to write to the
builtin point size output because when generating the shader it is not
possible to determine if the pipeline will be used to draw points or
not. This patch changes it so that the default point size is 0.0f
which is documented to have undefined results when drawing points.
That way we can avoid adding the point size code to the shader in that
case. The assumption is that any application that is drawing points
will probably have explicitly set the point size on the pipeline
anyway so it is not a big deal to change the default size from 1.0f.
This adds a new pipeline state flag to track whether the point size is
non-zero. This needs to be its own state because altering it needs to
cause a different shader to be added to the pipeline cache. The state
flags that affect the vertex shader have been changed from a constant
to a runtime function because they will be different depending on
whether there is a builtin point size uniform.
There is also a unit test to ensure that changing the point size does
or doesn't generate a new shader depending on the values.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit b2eba06e16b587acbf5c57944a70ceccecb4f175)
Conflicts:
cogl/cogl-pipeline-private.h
cogl/cogl-pipeline-state-private.h
cogl/cogl-pipeline-state.c
cogl/cogl-pipeline.c
The handler for ConfigureNotify events in the EGL X11 winsys was
incorrectly trying dereference the onscreen pointer even if it didn't
find an onscreen for the X window that has resized. This meant that if
the application has other windows that weren't created by Cogl then it
would crash when handling events for them.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit a0056df61903d74180d4e4caa1046e68396d1be0)
Add register constraints to prevent asm statement complaints like:
{standard input}:382: rdhi, rdlo and rm must all be different
Signed-off-by: Donn Seeley <donn.seeley@windriver.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
There are two asm() statements in cogl-fixed.c that can't be assembled
in Thumb mode. This patch switches it to the generic code in Thumb
mode.
Signed-off-by: Donn Seeley <donn.seeley@windriver.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Previously when trying the three different texture types to create an
automagic texture it would handle the out-of-memory error specially
and bypass trying the remaining texture types. Presumably the idea is
that out-of-memory is a serious error and it can't be recovered from.
However, in the case of atlas textures, this error will be thrown if
the texture is too large to fit into an atlas. In that case it makes
sense to try another texture type so that it can fallback to using a
sliced texture. I think conceptually each different texture type will
have different memory requirements so it seems reasonable to try the
others if there is not enough memory for one of them.
This was causing cogl_texture_new_from_data to break when loading very
large textures because it wouldn't end up slicing them.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ad6968135a01823eb6a94668dd22c7a4df6f9327)
This removes cogl-queue.h and adds a copy of Wayland's embedded list
implementation. The advantage of the Wayland model is that it is much
simpler and so it is easier to follow. It also doesn't require
defining a typedef for every list type.
The downside is that there is only one list type which is a
doubly-linked list where the head has a pointer to both the beginning
and the end. The BSD implementation has many more combinations some of
which we were taking advantage of to reduce the size of critical
structs where we didn't need a pointer to the end of the list.
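For reference, the Wayland list model embeds the links directly in the
containing struct (shown here with wl_list names; Cogl's copy may rename
them):

    struct wl_list {
      struct wl_list *prev;
      struct wl_list *next;
    };

    typedef struct {
      int payload;
      struct wl_list link; /* embedded node */
    } Item;

    /* the head keeps pointers to both ends; iteration recovers the
     * containing struct from the embedded link member */
    struct wl_list head;
    Item item = { 42 };

    wl_list_init (&head);
    wl_list_insert (&head, &item.link);

    Item *pos;
    wl_list_for_each (pos, &head, link)
      use_item (pos); /* use_item is a hypothetical consumer */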
The corresponding changes to uses of cogl-queue.h are:
• COGL_STAILQ_* was used for the onscreen list of events and dirty
notifications. This makes the size of the CoglContext grow by one
pointer.
• COGL_TAILQ_* was used for fences.
• COGL_LIST_* for CoglClosures. In this case the list head now has an
extra pointer which means CoglOnscreen will grow by the size of
three pointers, but this doesn't seem like a particularly important
struct to optimise for size anyway.
• COGL_LIST_* was used for the list of foreign GLES2 offscreens.
• COGL_TAILQ_* was used for the list of sub stacks in a
CoglMemoryStack.
• COGL_LIST_* was used to track the list of layers that haven't had
code generated yet while generating a fragment shader for a
pipeline.
• COGL_LIST_* was used to track the pipeline hierarchy in CoglNode.
The last part is a bit more controversial because it increases the
size of CoglPipeline and CoglPipelineLayer by one pointer in order to
have the redundant tail pointer for the list head. Normally we try to
be very careful about the size of the CoglPipeline struct. Because
CoglPipeline is slice-allocated, this effectively ends up adding two
pointers to the size because GSlice rounds up to the size of two
pointers.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 13abf613b15f571ba1fcf6d2eb831ffc6fa31324)
Conflicts:
cogl/cogl-context-private.h
cogl/cogl-context.c
cogl/driver/gl/cogl-pipeline-fragend-glsl.c
doc/reference/cogl-2.0-experimental/Makefile.am
Previously CoglPipelineSnippetList was using the BSD embedded list
type with a mini struct to combine the list node with a pointer to the
snippet. This is effectively equivalent to just using a GList so we
might as well do that. This will help if we eventually want to get rid
of cogl-queue.h
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 54a168f3c7829c427d54ab517533bb9f7384d022)
The free function for atlas textures was previously always assuming
that there will be a valid sub_texture pointer but this might not be
the case if the texture was never successfully allocated. This was
causing Cogl to crash if the application tries to make a texture that
can not fit in the atlas using the automagic texture API.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit a7a8b7aefc8cb03fe8b716bee06b3449a7dba85f)
This adds a new function to enable per-vertex point size on a
pipeline. This can be set with
cogl_pipeline_set_per_vertex_point_size(). Once enabled the point size
can be set either by drawing with an attribute named
'cogl_point_size_in' or by writing to the 'cogl_point_size_out'
builtin from a snippet.
There is a feature flag which must be checked for before using
per-vertex point sizes. This will only be set on GL >= 2.0 or on GLES
2.0. GL will only let you set a per-vertex point size from GLSL by
writing to gl_PointSize. This is only available in GL2 and not in the
older GLSL extensions.
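Rough usage (the setter's exact signature and the feature enum name are
assumptions here):

    if (cogl_has_feature (ctx, COGL_FEATURE_ID_PER_VERTEX_POINT_SIZE))
      {
        cogl_pipeline_set_per_vertex_point_size (pipeline, TRUE, NULL);
        /* then either draw with an attribute named "cogl_point_size_in"
         * or write to cogl_point_size_out from a vertex snippet */
      }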
The per-vertex point size has its own pipeline state flag so that it
can be part of the state that affects vertex shader generation.
Having to enable the per vertex point size with a separate function is
a bit awkward. Ideally it would work like the color attribute where
you can just set it for every vertex in your primitive with
cogl_pipeline_set_color or set it per-vertex by just using the
attribute. This is harder to get working with the point size because
we need to generate a different vertex shader depending on what
attributes are bound. I think if we wanted to make this work
transparently we would still want to internally have a pipeline
property describing whether the shader was generated with per-vertex
support so that it would work with the shader cache correctly.
Potentially we could make the per-vertex property internal and
automatically make a weak pipeline whenever the attribute is bound.
However we would then also need to automatically detect when an
application is writing to cogl_point_size_out from a snippet.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8495d9c1c15ce389885a9356d965eabd97758115)
Conflicts:
cogl/cogl-context.c
cogl/cogl-pipeline-private.h
cogl/cogl-pipeline.c
cogl/cogl-private.h
cogl/driver/gl/cogl-pipeline-progend-fixed.c
cogl/driver/gl/gl/cogl-pipeline-progend-fixed-arbfp.c
This ensures we only add a static_breadcrumb pointer to every
CoglPipeline when built with debugging enabled. Since applications may
allocate a lot of pipelines we want to keep the basic size of pipelines
(ignoring optional sparse state) down to a minimum.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 4716312e14bc253cd174a22b3db9d2c9cf031fa1)
This moves the code in test-bitmask into a UNIT_TEST() directly in
cogl-bitmask.c which will now be run as a tests/unit/ test.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 693c85e0cde8a1ffbffc03a5f8fcc1f92e8d0ac7)
Includes a fix to build the conform tests with -I$(top_builddir)/cogl to
be able to find cogl-gl-header.h.
This adds a white-box unit test that verifies that GL_BLEND is disabled
when drawing an opaque rectangle, enabled when drawing a transparent
rectangle and then disabled again when drawing a transparent rectangle
but with a blend string that effectively disables blending.
This shares the test utilities and launcher infrastructure we are using
for conformance tests so we get consistent reporting and so unit tests
will be run against a range of different drivers.
This adds a --enable-unit-tests configure option which is enabled by
default but if disabled will make all UNIT_TESTS() into static inline
functions that we should expect the compiler to discard since they won't
be referenced by anything.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 9047cce06bbf9051ec77e622be2fdbb96ed767a8)
Since _cogl_pipeline_update_blend_enable() can sometimes show up quite
high in profiles; instead of calling
_cogl_pipeline_update_blend_enable() whenever we change pipeline state
that may affect blending we now just set a dirty flag and when we flush
a pipeline we check this dirty flag and lazily calculate whether
blending really needs to be enabled if it's set.
Since it turns out we were too optimistic in assuming most GL drivers
would recognize blending with ADD(src,0) is equivalent to disabling
GL_BLEND we now check this case ourselves so we can always explicitly
disable GL_BLEND if we know we don't need blending.
This introduces the idea of an 'unknown_color_alpha' boolean to the
pipeline flush code which is set whenever we can't guarantee that the
color attribute is opaque. For example this is set whenever a user
specifies a color attribute with 4 components when drawing a primitive.
This boolean needs to be cached along with every pipeline because
pipeline::real_blend_enabled depends on this and so we need to also call
_cogl_pipeline_update_blend_enable() if the status of this changes.
Incidentally with this patch we now no longer ever use
_cogl_pipeline_set_blend_enable() internally. For now the internal api
hasn't been removed though since we might want to consider re-purposing
it as a public api since it will now not conflict with our own internal
state tracking and could provide a more convenient way to disable
blending than setting a blend string.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit ab2ae18f3207514c91fa6fd9f2d3f2ed93a86497)
_cogl_egl_query_wayland_buffer was using _COGL_RETURN_IF_FAIL but the
function needs to return a CoglBool so it was giving a warning.
(cherry picked from commit d0290eb19fc9bf56fb24f8eab573e19966ea7e1a)
This adds a callback that can be registered with
cogl_onscreen_add_dirty_callback which will get called whenever the
window system determines that the contents of the window is dirty and
needs to be redrawn. Under the two X-based winsys's, this is reported
off the back of the Expose events, under SDL it is reported from
SDL_VIDEOEXPOSE or SDL_WINDOWEVENT_EXPOSED and under Windows from the
WM_PAINT messages. The Wayland winsys doesn't really have the concept
of dirtying the buffer but in order to allow applications to work the
same way on all platforms it will emit the event when the surface is
first shown and whenever it is resized.
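Rough usage (treat the exact callback prototype as an assumption):

    static void
    on_dirty (CoglOnscreen *onscreen,
              const CoglOnscreenDirtyInfo *info,
              void *user_data)
    {
      /* redraw at least the region described by info */
    }

    cogl_onscreen_add_dirty_callback (onscreen, on_dirty, user_data, NULL);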
There is a private feature flag to specify whether dirty events are
supported. If the winsys does not set this then Cogl will simulate
dirty events by emitting one when the window is first allocated and
when it is resized. The only winsys's that don't set this flag are
things like KMS or the EGL null winsys where there is no windowing
system and showing and hiding the onscreen doesn't really make any
sense. In that case Cogl can assume the buffer will only become dirty
once when it is first allocated.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 85c5a9ba419b2247bd768284c79ee69164a0c098)
Conflicts:
cogl/cogl-private.h
After discussing with Kristian Høgsberg it seems that the semantics of
wl_egl_window_resize is meant to be that if nothing has been drawn to
the framebuffer since the last swap then the resize will take effect
immediately. Cogl was previously always delaying the call to
wl_egl_window_resize until the next swap. That meant that if you
wanted to resize the surface you would have to call
cogl_wayland_onscreen_resize and then redundantly draw a frame at the
old size so that you can swap to get the resize to occur before
drawing again at the right size. Typically an application would decide
to resize at the start of its paint sequence so it should be able to
just resize immediately.
In current Mesa master it seems that there is a bug which means that
it won't actually delay a resize that is done mid-scene and instead it
will just discard what came before. To get consistent behaviour in
Cogl, the code to delay the call to wl_egl_window_resize is still used
if it determines that the buffer is dirty. There is an existing
_cogl_framebuffer_mark_mid_scene call which was being used to track
when the framebuffer becomes dirty since the last clear. This function
is now also used to track a new flag to track whether something has
been drawn since the last swap. It is called ‘mid_scene’ under the
assumption that this may also be useful for other things later.
cogl_framebuffer_clear has been slightly altered to always call
_cogl_framebuffer_mark_mid_scene even if it determines that it doesn't
need to clear because the framebuffer should still be considered to be
in the middle of a scene. Adding a quad to the journal now also begins
the scene.
This also fixes a potential bug where it looks like pending_dx/dy were
never cleared so they would always be accumulated even after the
resize is flushed.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 945689a62903990a20abb87a85d2c96eb3985fe7)
In some later patches we want to be able to use the term ‘dirty’ as a
public facing concept which represents expose events from the window
system. In that case the internal concept of dirtying the framebuffer
is confusing, so this patch changes the name to instead mean that
we've doing something which causes the framebuffer to be in the middle
of a frame.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 88eed85b52c29f66659ea112038f3522c9bd864e)
If we delay setting the surface to toplevel until it is shown then
that gives the application an opportunity to avoid calling show so
that it can set its own surface type.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ab59c3a421968d7f159d89ca2f0ba8a9f098cbf6)
Previously the WGL winsys was expecting the application to send all
windows messages to Cogl via the cogl_win32_renderer_handle_event
function. When using a GLib main loop we can make this work
transparently to the application with a GSource for the magic
G_WIN32_MSG_HANDLE file descriptor. That causes the GMainLoop to wake
up whenever a message is available.
This patch makes the WGL winsys add that magic value as a source fd.
This will only have any meaning if the application is using glib, but
it shouldn't matter because the cogl_poll_renderer_get_info function
is documented to only work on Unix-based winsys's anyway.
This patch is an API break because by default Cogl will now start
stealing all of the Windows messages. Something like Clutter that wants to handle
its own event retrieval would now need to call
cogl_win32_renderer_set_event_retrieval_enabled to stop Cogl from
stealing the events.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 99a7f84d7149f24f3e86c5d3562f9f2632ff6df8)
The implementation of cogl_wayland_texture_2d_new_from_buffer now uses
eglQueryWaylandBuffer to query the format of the buffer before trying to
create a texture from the buffer. This makes sure we don't try and
create a texture from YUV buffers for instance that may actually require
multiple textures. We now also report an error when we don't understand
the buffer type or format.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 79252d4e419e2462c5bc89ea4614b40bddc932c5)
This removes the various checks for != COGL_DRIVER_GLES1 when tracking
blend state that was trying to avoid checking the equation or alpha
component factors when they are known to be fixed on gles1. Now we just
rely on the opengl driver to do the right thing for the different
drivers and ignore the differences in the general pipeline state
tracking.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f67c7eaf23e1e2088e9956cb2b66dfdc9abc8b3b)
This enables basic Emscripten support in Cogl via the SDL winsys.
Assuming you have set up an emscripten toolchain you can configure Cogl
like this:
emconfigure ./configure --enable-debug --enable-emscripten
Building the examples will build .html files that can be loaded directly
by a WebGL enabled browser.
Note: at this point the emscripten support has just barely been smoke
tested so it's expected that as we continue to build on this we will
learn about more things we need to change in Cogl to fully support this
environment.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit a3bc2e7539391b074e697839dfae60b69c37cf10)
The ARB_sync api depends on a GLsync type which may not be available if
GL_ARB_sync isn't defined, such as when building for gles2 only. This
guards the prototypes with #ifdef GL_ARB_sync to fix compilation when a
GLsync type isn't defined.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit ba79020e0f5b102e8b25cd831c408dd68d241297)
Fixes 'undefined reference to cogl_error_free' when using g++.
Signed-off-by: Andreas Oberritter <obi@saftware.de>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 41c54fcaba5b4bf76a0e943bac6bca777f3dae2f)
cogl_framebuffer_add_fence creates a synchronisation fence, which will
invoke a user-specified callback when the GPU has finished executing all
commands provided to it up to that point in time.
Support is currently provided for GL 3.x's GL_ARB_sync extension, and
EGL's EGL_KHR_fence_sync (when used with OpenGL ES).
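Rough usage of the new call (the callback prototype and argument list
are assumptions):

    static void
    on_fence_complete (CoglFence *fence, void *user_data)
    {
      /* the GPU has finished all work submitted before the fence */
    }

    cogl_framebuffer_add_fence (framebuffer, on_fence_complete, user_data);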
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=691752
(cherry picked from commit e6d37470da9294adc1554c0a8c91aa2af560ed9f)
This adds a _cogl_poll_renderer_add_source() function that we can use
within cogl to hook into the mainloop without necessarily having a file
descriptor to poll. Since the intention is to use this to support
polling for fence completions this also updates the
CoglPollCheckCallback type to take a timeout pointer so sources can
optionally update the timeout that will be passed to poll.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 81c1ce0ffce4e75e08622e20848405987e00b3cc)
If this happens, XRRGetScreenResources will return NULL, so just treat
that like nothing happened.
https://bugzilla.gnome.org/show_bug.cgi?id=699431
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 57a79912ac70080a2f9cbe65181a25b00bf1192a)
This makes sure we include cogl-defines.h in cogl-matrix.h before
checking if COGL_HAS_GTYPE_SUPPORT is defined.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit aa5ba324bb3b2ce77be29942f8716d61919cefeb)
This adds api to be able to request a swap_buffers and also pass a list of
damage rectangles that can be passed on to a compositor to enable it to
minimize how much of the screen it needs to recompose.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 0d9684c7b7c2018bb42715c369555330d38514a2)
Instead of driving event dispatching through a per winsys poll_dispatch
vfunc it's now possible to associate a check and dispatch function with
each file descriptor that is registered for polling. This means we can
remove the winsys get_dispatch_timeout and poll_dispatch vfuncs and it
also makes it easier for more orthogonal internal components to add file
descriptors for polling to the mainloop.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 627947622df36dd529b9dc60a3ae9e6083532b19)
This adds a _cogl_poll_renderer_add_idle api that can be used internally
for queuing an idle callback without needing to make any assumption
about the system mainloop that is being used. This is now used to avoid
having the _cogl_poll_renderer_dispatch() directly check for all kinds of
events to dispatch, and to avoid having the winsys dispatch vfuncs need
to directly know about CoglContext. This means we can now avoid having a
back reference from CoglRenderer to the CoglContext.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit a1e169f18f4257caec58760adccfe4ec09b9805d)
This adds some utility code to help us manage lists of closures
consistently within Cogl. The utilities are from Rig and were originally
written by Neil Roberts.
This adapts the way we track CoglOnscreen resize and frame closures to
use the new utilities.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 2e15fc76eb29bf5932418f7ee80f1fcb2f6a816c)
This updates the cogl_poll_ apis to allow dispatching events before we
have a CoglContext and also to enable pollfd state to be changed in a
more ad-hoc way by different Cogl components by replacing the
winsys->get_poll_info with _cogl_poll_renderer_add/remove_fd functions
and a winsys->get_dispatch_timeout vfunc.
One of the intentions here is that applications should be able to run
their mainloop before creating a CoglContext to potentially get events
relating to CoglOutputs.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 667e58c9cb2662aef5f44e580a9eda42dc8d0176)
When adding the frame callback API in 70040166 we decided on a common
idiom for adding callbacks which would return an opaque pointer
representing the closure for the callback. This pointer can then be
used to later remove the callback. The closure can also contain an
optional callback to invoke when the user data parameter is destroyed.
The resize callback didn't work this way and instead had an integer
handle to identify the closure. This patch changes it to work the same
way as the frame callback.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 33164c4b04d253ebe0ff41b12c1e90232c519274)
This adds support for optionally providing a foreign Wayland surface to
a CoglOnscreen before allocation. Setting a foreign surface prevents
Cogl from creating a toplevel Wayland shell surface for the OnScreen.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e447d9878f3bcfe5fe336d367238383b02879223)
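A minimal sketch of the intended usage, assuming the setter is
cogl_wayland_onscreen_set_foreign_surface() and that the Wayland-specific
Cogl header is available:

  /* onscreen has been created but not yet allocated; surface is a
   * wl_surface the application created and owns itself */
  static void
  use_foreign_surface (CoglOnscreen *onscreen,
                       struct wl_surface *surface)
  {
    cogl_wayland_onscreen_set_foreign_surface (onscreen, surface);

    /* Allocation will now target the given surface instead of
     * creating a toplevel shell surface */
    cogl_framebuffer_allocate (COGL_FRAMEBUFFER (onscreen), NULL);
  }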
This prevents leaking the Wayland shell surface associated with a Cogl
OnScreen when it is finalised.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 760fc9f3af5475530262b82a55df311fceca358a)
This adds compiler symbol deprecation declarations for old Cogl APIs so
that users can easily see via compiler warnings when they are using these
symbols, and also see a hint for what the apis should be replaced with.
So that users of Cogl can manage when to show these warnings this
introduces a scheme borrowed from glib whereby you can declare what
version of the Cogl api you are using:
COGL_VERSION_MIN_REQUIRED can be defined to indicate the oldest Cogl api
that the application wants to use. Cogl will only warn about
deprecations for symbols that were deprecated earlier than this required
version. If this is left undefined then by default Cogl will warn about
all deprecations.
COGL_VERSION_MAX_ALLOWED can be defined to indicate the newest api
that the application uses. If the application uses symbols newer than
this then Cogl will give a warning about that.
This patch removes the need to maintain the COGL_DISABLE_DEPRECATED
guards around deprecated symbols.
This patch fixes a few uses of deprecated symbols in the examples/ directory.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
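For illustration, an application would opt in before including the Cogl
header along these lines; the per-version value macros shown here are
assumptions modelled on the glib scheme:

  /* Only use api available in Cogl 1.14 and warn about anything
   * deprecated before then */
  #define COGL_VERSION_MIN_REQUIRED COGL_VERSION_1_14
  #define COGL_VERSION_MAX_ALLOWED COGL_VERSION_1_14

  #include <cogl/cogl.h>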
Call wl_display_dispatch on POLLIN. This follows the implementation
in weston/clients/window.c and improves integration of input events,
at least.
Signed-off-by: Andreas Oberritter <obi@saftware.de>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 323fe1887487f19c3e26aa6b7644de31d8d0a532)
https://bugzilla.gnome.org/show_bug.cgi?id=697330
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit a7b4930e14add7d955c22f396178b71083dfb52f)
Conflicts:
cogl/Makefile.am
Fixes compilation with C++ compiler.
Signed-off-by: Andreas Oberritter <obi@saftware.de>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 7b3c6dd7f6810f3c8dec62904daa887c917ab7e2)
When a pipeline is added to the cache, a normal copy would previously be
made to use as the key in the hash table. This copy keeps a reference
to the real pipeline which means all of the resources it contains are
retained forever, even if they aren't necessary to generate the hash.
This patch changes it to create a trimmed down copy that only has the
state necessary to generate the hash. A new function called
_cogl_pipeline_deep_copy is added which makes a new pipeline that is
directly a child of the root pipeline. It then copies over the
pertinent state from the original pipeline. The pipeline state is
copied using the existing _cogl_pipeline_copy_differences function.
There was no equivalent function for the layer state so I have added
one.
That way the pipeline key doesn't have the texture data state and it
doesn't hold a reference to the original pipeline so it should be much
cheaper to keep around.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e27e01c1215e7e7c7c0183ded11dd769bb112c5c)
Calculating the hash value for a pipeline can be a bit expensive.
Previously when adding a new pipeline to the hash table we would end
up calculating the hash value once when checking whether the pipeline
is already in the hash table and then again when adding the pipeline
to the hash table. Ideally GHashTable would provide some API to add an
entry with a precalculated hash value to avoid the recalculation, but
seeing as it doesn't do that we can force it to avoid recalculating by
storing the hash value as part of the struct we are using for the key.
That way the hash func passed to GHashTable can simply return the
precalculated value and we can calculate the hash outside of the
GHashTable calls.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 4a0460a452fd1be382fd5a92d8cfd5e0cdfd4403)
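The technique reads roughly like the following sketch (simplified, with
illustrative names rather than the real internal structs):

  typedef struct
  {
    unsigned int hash_value; /* calculated once, outside GHashTable */
    CoglPipeline *pipeline;
  } PipelineCacheEntry;

  /* The hash func handed to g_hash_table_new() just returns the value
   * that was precalculated by the caller */
  static unsigned int
  entry_hash (const void *data)
  {
    const PipelineCacheEntry *entry = data;
    return entry->hash_value;
  }

That way both the lookup and the insertion reuse the value computed once
in the caller instead of hashing the pipeline twice.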
The pipeline cache contains three separate hash tables, one for the
state affecting the vertex shaders, one for the fragment shaders and
one for the resulting combined program. Previously these hash tables
had a fair bit of duplicated code to calculate the hashes, check for
equality and copy the pipeline when it is added. This patch moves the
common bits of code to a new type called CoglPipelineHashTable which
just wraps a GHashTable with a given set of state flags to use for
hashing and checking for equality.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 402796430c839038339e531363b8c2463f9b2a9e)
Conflicts:
cogl/Makefile.am
Since 67cad9c0 and f7735e141a the bitmap allocation and mapping
functions now take an extra error argument. The quartz image backend
was missed in this update so Cogl would fail to compile if
--enable-quartz-image is used.
https://bugzilla.gnome.org/show_bug.cgi?id=696730
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1827965befccf331b0787f71cb191d370640a9de)
This makes sure the EGL_KHR_create_context enums are always defined in
cogl-winsys-egl.c so we will build with drivers that don't support this
extension. Cogl will do runtime checks to explicitly check that the
extension is available before ever referencing these enums so this is
safe to do.
https://bugzilla.gnome.org/show_bug.cgi?id=694537
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit bd034b7451e7d9c602bcc91f1a00f6aaa7b05ec0)
Interleaving multiple snippets with different hooks
(COGL_SNIPPET_HOOK_VERTEX and COGL_SNIPPET_HOOK_VERTEX_TRANSFORM,
for instance) used to cause a bug during shader code generation.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 38ca76695d54bbbfe3b940a6d0b2ae879e6fd66b)
Adding a layer difference may mean the pipeline overrides all of the
layers of its parent which might make the parent redundant so we
should try to prune the hierarchy.
This is particularly important for CoglGst because whenever a new
frame is ready it tries to make a copy of the pipeline it last used
and then replace all of the textures in the layers. Without this patch
the new pipeline would keep the parent pipeline alive which means also
keeping the old textures alive so all of the frames of the video would
effectively be leaked.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 576c7b55aa835448c977f1d79d128dffd40e7cd8)
Add the newly-added symbols during the development cycle, and drop those
that are dropped. Also, clean up the private symbols that were exported;
those that are still left in cogl.symbols are the ones still being
referenced by Cogl-Pango.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This reverts commit 83dbf79986981fac9ec0f2575b7c7cb32f629f0f.
On further consideration we realized that needing this change either
indicated a bug in the code using cogl, or that it was a symptom of
some other bug in Cogl resulting in us returning NULL in
cogl_buffer_map_range but not returning a CoglError too.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 8c5127c712570f1ea0d495a7fe7290ae5ee60ce6)
If a pipeline has been flushed that disables depth writing and then we
try to clear the framebuffer with cogl_framebuffer_clear4f, passing
COGL_BUFFER_BIT_DEPTH then we need to make sure that depth writing is
re-enabled before issuing the glClear call. We also need to make sure
that when the next primitive is flushed that we re-check what state the
depth mask should be in.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 3cf497042897d1aa6918bc55b71a36ff67e560b9)
This makes sure that a viewport change when comparing between separate
framebuffers also implies a clip change when we are applying the Intel
gen6 workaround for broken viewport clipping. Without this then
switching between different size framebuffers could leave a scissor
matching the size of a previous framebuffer.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f23f2129c58550f819cff783f47039d7bd91391e)
This makes some changes to _cogl_bitmap_gl_bind to be more paranoid
about bad access arguments and make sure we don't mark a bitmap as bound
if there was an error in _cogl_buffer_gl_bind.
We now validate the access argument upfront to check that one of _READ
or _WRITE access has been requested. In the case that cogl is built
without debug support then we will still detect a bad access argument
later and now explicitly return before marking the bitmap as bound, just
in case the g_assert_not_reached() has been somehow disabled. Finally we
defer setting bitmap->bound = TRUE until after we have checked for any
error from _cogl_buffer_gl_bind.
https://bugzilla.gnome.org/show_bug.cgi?id=686770
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 1720d5cf32449a189fd9d400cf5e6696cd50a9fa)
Previously the sampler uniform declarations such as cogl_sampler0 were
generated by walking the list of layers in the shader state. This had
two problems. Firstly it would only generate the declarations for
layers that have been referenced. If a layer has a combine mode of
replace then the samplers from previous layers couldn't be used by
custom snippets. Secondly it meant that the samplers couldn't be
referenced by functions in the declarations sections because the
samplers are declared too late.
This patch fixes it to generate the layer declarations in the backend
start function using all of the layers on the pipeline instead. In
addition it adds the sampler declarations to the vertex shader as they
were previously missing.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1824df902bbb9995cae6ffb7a413913f2df35eef)
Conflicts:
cogl/driver/gl/cogl-pipeline-fragend-glsl.c
cogl/driver/gl/cogl-pipeline-vertend-glsl.c
This adds hook points to add global function and variable declarations
to either the fragment or vertex shader. The declarations can then be
used by subsequent snippets. Only the ‘declarations’ string of the
snippet is used and the code is directly put in the global scope near
the top of the shader.
The reason this is necessary rather than just adding a normal snippet
with the declarations is that for the other hooks Cogl assumes that
the snippets are independent of each other. That means if a snippet
has a replace string then it will assume that it doesn't even need to
generate the code for earlier hooks which means the global
declarations would be lost.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ebb82d5b0bc30487b7101dc66b769160b40f92ca)
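For illustration, adding a shared GLSL helper that later snippets can
call might look like the sketch below; the hook name
COGL_SNIPPET_HOOK_FRAGMENT_GLOBALS is an assumption based on the
description above:

  static void
  add_shared_helpers (CoglPipeline *pipeline)
  {
    CoglSnippet *globals =
      cogl_snippet_new (COGL_SNIPPET_HOOK_FRAGMENT_GLOBALS,
                        /* declarations: placed near the top of the
                         * generated fragment shader */
                        "vec3 desaturate (vec3 color)\n"
                        "{\n"
                        "  float l = dot (color, vec3 (0.299, 0.587, 0.114));\n"
                        "  return vec3 (l);\n"
                        "}\n",
                        NULL /* the post string is unused here */);

    cogl_pipeline_add_snippet (pipeline, globals);
    cogl_object_unref (globals);
  }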
To avoid linking trouble in C++ stuff
Reviewed-By: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b194f1bf58055ef1f5075508f19336cae648a0c8)
Conflicts:
cogl/cogl-object.h
This fixes some minor errors and warnings that were preventing Cogl
building with mingw32:
• cogl-framebuffer-gl.c was not including cogl-texture-private.h.
Presumably something else ends up including that when building for
GLX.
• The WGL winsys was not including cogl-error-private.h
• A call to strsplit in the WGL winsys was wrong.
• For some reason the test-wrap-rectangle-textures test was trying to
include the GDKPixbuf header.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5380343399f834d9f96ca3b137d49c9c2193900a)
Mesa's libGLU tesselator code has had a commit on it since it was
copied into Cogl. It sounds like it fixes a potential crash so we
should probably have it in Cogl too.
http://cgit.freedesktop.org/mesa/glu/commit/?id=bfdf99d6ff64b9c2
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c6b2429546d3ea0aa91caa47c7c90f932984ea33)
glMapBufferRange is documented to fail with GL_INVALID_OPERATION if
GL_MAP_INVALIDATE_BUFFER_BIT is set as well as GL_MAP_READ_BIT. I
guess this makes sense when only read access is requested because
there would be no point in reading back uninitialised data. However,
Clutter requests read/write access with the discard hint when
rendering to a CoglBitmap with Cairo. The data is new so the discard
hint makes sense but it also needs read access so that it can read
back the data it just wrote for blending.
This patch works around the GL restriction by skipping the discard
hints if read access is requested. If the buffer discard hint is set
along with read access it will recreate the buffer store as an
alternative way to discard the buffer as it does in the case where the
GL_ARB_map_buffer_range extension is not supported.
https://bugzilla.gnome.org/show_bug.cgi?id=694164
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 986675d6043e8701f2d65415cf72ffc91734debd)
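Conceptually the workaround amounts to something like this sketch (not
the actual Cogl source; recreate_store() stands in for whatever
re-creates the buffer storage):

  static CoglBufferMapHint
  adjust_map_hints (CoglBuffer *buffer,
                    CoglBufferAccess access,
                    CoglBufferMapHint hints)
  {
    if ((access & COGL_BUFFER_ACCESS_READ) &&
        (hints & COGL_BUFFER_MAP_HINT_DISCARD))
      {
        /* GL forbids GL_MAP_READ_BIT together with
         * GL_MAP_INVALIDATE_BUFFER_BIT so discard by re-creating the
         * buffer store and drop the invalidate hint instead */
        recreate_store (buffer); /* hypothetical helper */
        hints &= ~COGL_BUFFER_MAP_HINT_DISCARD;
      }

    return hints;
  }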
The journal manually flushes its own modelview matrix state so it
needs to mark the state as dirty so that if a primitive is drawn with
the same matrix state as the last primitive it will correctly reflush
it.
https://bugzilla.gnome.org/show_bug.cgi?id=693612
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c7290c994c742456ff0977cb394c289afb377049)
If we make this per-context and create two Cogl contexts, some types
won't re-register, and we'll be in a broken state where some types will
be considered not to be texture types.
https://bugzilla.gnome.org/show_bug.cgi?id=693696
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 567f049d20554bb8ea4e40fa5e72a9fd0bbd409e)
When Cogl is compiled with support for both the GL and GLES drivers it
only includes the GL header and not the GLES header. That means in
that case it would not compile in the code for the
GL_EXT_discard_framebuffer extension even though it could be used on
the GLES driver. This patch makes it use the standard names for the
GL_COLOR, GL_STENCIL etc names instead of the _EXT suffixed names and
manually defines them if we are using the GLES headers. That way the
discard code can be used unconditionally.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 59c30292d0f3c28d6e0e08bc5bf3b4b10545d856)
This patch just adds a call to _cogl_framebuffer_flush_state to ensure
the correct framebuffer is bound before discarding its buffers.
Previously it would presumably just discard the buffers of whatever
framebuffer happened to be used last.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 37c390a5b33d4f65ff6c834e9be2f8de716635ee)
The array allocated for storing the difference flags for each layer in
cogl-pipeline-opengl.c was being cleared with the size of a pointer
instead of the size actually allocated for the array. Presumably this
would mean that if there is more than one layer it wouldn't clear the
array properly.
Also the size of the array was slightly wrong because it was allocating
the size of a pointer for each layer instead of the size of an
unsigned long.
This was originally reported by Jasper St. Pierre on #clutter.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1e134dd7cd5317651be158a483c7cb2723ce8869)
Even if Cogl decides to set a zero timeout because there are events
queued, it still makes sense to give the winsys a chance to add file
descriptors to the list. The winsys might be relying on the list of
CoglPollFDs passed to poll_dispatch to decide whether to read from a
file descriptor and that should happen even if Cogl also woke up the
main loop because the event queue isn't empty.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 6d2f3bc4913d0f1570c09e3714ac8fe2dbfc7a03)
It is expected that cogl_sdl_idle() will be called from the
application immediately before blocking in SDL_WaitEvent. However,
dispatching the onscreen events may cause more events to be queued. If
that happens we need to make sure the blocking returns immediately.
This patch makes it post the dummy event that the application chose in
order to make that happen.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 9e34a1e8ce97b67ebb2889c622f2c9f1076b087d)
In SDL1 the event type numbers were a single byte so Cogl was only
reserving a byte to store the application's chosen type in
CoglRenderer. However in SDL2 they are a Uint32 and SDL_USEREVENT is
0x8000 so if the application was using that then Cogl would actually
end up posting event type 0.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 39c9177776ac601a92c6f4112558464af6968ea0)
It seems like it would be quite a reasonable design for an application
to immediately paint the buffer and call swap_buffers within the
handler for the sync event. This previously wouldn't work.
When using the GLX winsys if swap_region is called then it immediately
tries to set the pending notification flag. However if this is called
from the event callback then when the callback is complete it will
clear the flag again and the pending notification will be lost. This
patch just makes it clear the pending flag before invoking the
callback so that it can be safely queued again.
With any winsys that doesn't directly handle the sync event
notification it would almost work except that it was iterating the
live list of pending events. If the callback causes another event to
be added to this list by issuing a buffer swap then the iteration
would never complete and cogl_poll_dispatch would never return. This
patch just makes it steal the list before iterating so that any
additions will be dispatched by a later call to cogl_poll_dispatch
instead.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 2263b31594900b73900d2ce22cf70c68e7e793c6)
The first hunk from commit 93b7b4c850dd928bf21ee168a95641a8d631f713
turned out to be redundant because GLX guarantees that configs returned
by glXChooseFBConfig should be sorted with non msaa configs coming
first. The second hunk is required since we use glXGetFBConfigs in that
case which doesn't sort the configs.
I had meant to drop this part of the patch before landing it but forgot.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b19fcc1869275826e952925af922125daf8a48de)
There is no guarantee that glXGetFBConfigs will return fbconfigs ordered
with non-msaa configs first. This patch makes sure that a non-msaa
config gets chosen.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 93b7b4c850dd928bf21ee168a95641a8d631f713)
Add an API to get the current time in the time system in which Cogl
reports timestamps. This is to be used to convert timestamps
into a different time system.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 9f3735a0c37adcfcffa485f81699b53a4cc0caf8)
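A small sketch of translating a Cogl timestamp into an application's own
clock, assuming the new entry point is cogl_get_clock_time() and returns
nanoseconds:

  #include <stdint.h>
  #include <cogl/cogl.h>

  static int64_t
  to_app_clock_us (CoglContext *context,
                   int64_t cogl_timestamp_ns,
                   int64_t app_now_us)
  {
    /* Sample the Cogl clock now and use the offset between the two
     * clocks to translate the timestamp */
    int64_t cogl_now_ns = cogl_get_clock_time (context);

    return app_now_us + (cogl_timestamp_ns - cogl_now_ns) / 1000;
  }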
Add a CoglFrameInfo object that tracks timing information for frames
that are drawn. We track a frame counter and frame timing information
for each CoglOnscreen. Internally a CoglFrameInfo is automatically
created for each frame, delimited by cogl_onscreen_swap_buffers() or
cogl_onscreen_swap_region() calls.
CoglFrameInfos are delivered to applications via frame event callbacks
that can be registered with a new cogl_onscreen_add_frame_callback()
api. Two initial event types (dispatched on all platforms) have been
defined; a _SYNC event used for throttling the frame rate of
applications and a _COMPLETE event used to signify the end of a frame.
Note: This new _add_frame_callback() api makes the
cogl_onscreen_add_swap_complete_callback() api redundant and so it
should be considered deprecated. Since the _add_swap_complete_callback()
api is still experimental api, we will be looking to quickly migrate
users to the new api so we can remove the old api.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 700401667db2522045e4623d78797b17f9184501)
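A minimal sketch of hooking into the new callback; treat the exact type
and event names as assumptions based on the description above:

  static void
  frame_event_cb (CoglOnscreen *onscreen,
                  CoglFrameEvent event,
                  CoglFrameInfo *info,
                  void *user_data)
  {
    if (event == COGL_FRAME_EVENT_SYNC)
      {
        /* The display is ready for another frame: a good point to
         * start drawing without outrunning the compositor */
      }
    else if (event == COGL_FRAME_EVENT_COMPLETE)
      {
        /* The frame described by info has been presented */
      }
  }

  static void
  setup_frame_events (CoglOnscreen *onscreen)
  {
    cogl_onscreen_add_frame_callback (onscreen,
                                      frame_event_cb,
                                      NULL,  /* user data */
                                      NULL); /* destroy notify */
  }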
When we block waiting for the swap, prefer doing that using
glXWaitForMsc() from OML_sync_control because that returns a system
time value for the precise time of the swap.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1e8114aabc78b90373d3d5f3f7c0224f8786e399)
This adds a cogl_renderer_foreach_output() function that can be used to
iterate the display outputs for a particular renderer.
This also updates cogl-info to use this new api so it can dump out all
the output information.
Reviewed-by: Owen W. Taylor <otaylor@fishsoup.net>
(cherry picked from commit a2abf4c4c1fd5aeafd761f965d07a0fe9a362afc)
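A sketch of iterating the outputs; the CoglOutput getters used here are
assumptions based on the attributes described in the following entry:

  #include <stdio.h>
  #include <cogl/cogl.h>

  static void
  print_output (CoglOutput *output, void *user_data)
  {
    printf ("output at %d,%d: %dx%d @ %.2f Hz\n",
            cogl_output_get_x (output),
            cogl_output_get_y (output),
            cogl_output_get_width (output),
            cogl_output_get_height (output),
            cogl_output_get_refresh_rate (output));
  }

  static void
  dump_outputs (CoglRenderer *renderer)
  {
    cogl_renderer_foreach_output (renderer, print_output, NULL);
  }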
The CoglOutput object represents one output such as a monitor or
laptop panel, with information about attributes of the output such as
the position of the output within the global coordinate space, and
the refresh rate.
We don't yet publicly export the ability to get output information but
we track it for the GLX backend, where we'll use it to track the refresh
rate.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d7ef9d8d71488d0e6874f1ffc6e48700d5c82a31)
There is a cogl_renderer_get_n_fragment_texture_units() function which
is documented to return the number of texture units that are
accessible from a fragment program. This just directly returns the
value from GL_MAX_TEXTURE_IMAGE_UNITS which is available in either the
GLSL extensions or the ARBfp extension. Clutter-GST relies on this to
determine whether it can use a program to convert the YUV data on the
GPU.
When the GL3 driver was added in 66c9db993595b this was changed to
only query the value when the GLSL feature is available. Previously it
would always query the value when the GL or GLES2 driver is used. This
change makes sense on master because there is no API for an
application to make its own ARBfp programs so the only way to access
texture units from a program is via GLSL. However on the 1.14 branch
this patch broke clutter-gst when GLSL is disabled because it thinks
the ARBfp programs can't use multi-texturing.
This patch just changes it to also query the value when ARBfp support
is available.
Note: it's probably not a good idea to apply this patch to master,
but only to the 1.14 branch. On master the function probably needs to
be changed anyway because it is using _COGL_GET_CONTEXT().
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Similar to commit 2c0cfdefbb9d1 for the SDL2 winsys, the GLX and EGL
window systems need to bind the dummy surface or drawable when the
currently bound onscreen is destroyed so that there will always be a
valid context bound.
Previously I got the idea that this would not be necessary on GLX
because the documentation for glXDestroyDrawable states that the
drawable won't actually be destroyed if it is currently bound until it
becomes unbound. However it doesn't say what happens if the underlying
X window is also destroyed and after testing it seems this causes a
segfault in Mesa in GLX and an XError for EGLX.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 4a464eec8c5b5832b9fd6b69746ab4ab36229182)
GL_TEXTURE_MAX_LEVEL is not supported on GLES so we can't set it. It
looks like Mesa was letting us get away with this but on other drivers
it may cause errors. The enum is not defined in the GLES headers so it
was failing to compile unless the GL driver is also enabled.
The test-texture-mipmap-get-set test is now marked as n/a on GLES2
because it can't support limiting the sampled mipmaps.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ba51c393818582b058f5f1e66cf8d13835ad10e5)
Conflicts:
tests/conform/test-conform-main.c
The GLES2 driver wasn't compiling unless the GL driver is also enabled
because some run-time conditional code was directly using GL-only
defines.
This should also fix compiling using the stock GL headers on OS X
which don't define GL_NUM_EXTENSIONS.
https://bugzilla.gnome.org/show_bug.cgi?id=692420
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 661e1719aa0b95c409c568ec91ea52b8ff90519b)
Previously when creating a foreign rectangle texture it would ignore
the passed in texture information and query the texture directly when
using COGL_DRIVER_GL. However this should also work for
COGL_DRIVER_GL3. This patch changes it to check the private feature
flags for the texture querying feature instead of directly checking
the driver value.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 258c98b82027cb5074afe7844ff3954bbe928757)
This was generating warnings when the GL driver is disabled.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f26682dcc04642fed9db959c63d6c6e4261d2148)
Conflicts:
cogl/cogl-auto-texture.c
This adds support for the EGL_EXT_buffer_age extension which is a
counterpart to the GLX_EXT_buffer_age extension.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 92d869764c03d0bac6b51dac833510c22669ac4a)
Add a new BUFFER_AGE winsys feature and a get_buffer_age method to
cogl-onscreen that allows querying the value.
https://bugzilla.gnome.org/show_bug.cgi?id=669122
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Note: When landing the patch I made some gtk-doc updates and changed
_get_buffer_age to return an age of 0 always if the age feature isn't
supported instead of using _COGL_RETURN_VAL_IF_FAIL. -- Robert Bragg
(cherry picked from commit 427b1038051e9b53a071d8c229b363b075bb1dc0)
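In practice the query looks something like this sketch; an age of 0
means the buffer contents are undefined and the whole frame needs to be
redrawn:

  static void
  redraw (CoglOnscreen *onscreen)
  {
    int age = cogl_onscreen_get_buffer_age (onscreen);

    if (age == 0)
      {
        /* Unknown contents (or the feature isn't supported):
         * repaint everything */
      }
    else
      {
        /* The buffer still holds what was rendered age frames ago so
         * only the regions damaged since then need repainting */
      }
  }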
Comparing the pointed-to value is clearly what was meant.
Found by Coverity.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f676352210fad856ae85962733e488bc1a832411)
Previously the functions for packing and unpacking pixels were
generated by token pasting together a function name along with its
type, like the following:
_cogl_pack_ ## uint8_t
Then later in cogl-bitmap-conversion.c it would directly refer to the
function names without token pasting.
This wouldn't work however if the system headers define the stdint
types using #defines instead of typedefs because in that case the
function name generated using token pasting would get the expanded
type name but the reference that doesn't use token pasting wouldn't.
This patch adds an extra macro passed to the cogl-bitmap-packing.h
header which just has the type size. That way the function can be
defined like this instead:
_cogl_pack_ ## 8
That should prevent it from hitting problems with #defined types.
https://bugzilla.gnome.org/show_bug.cgi?id=691945
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d6b5d7085b004ebd48c1543b820331802395ee63)
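A simplified illustration of the failure mode and the fix (the macro
names are illustrative, not the exact Cogl internals):

  /* Two-level paste so the arguments get macro-expanded first */
  #define PASTE2(a, b) a ## b
  #define PASTE(a, b) PASTE2 (a, b)

  /* If a system header does "#define uint8_t unsigned char" then
   * PASTE (_cogl_pack_, uint8_t) expands the argument before pasting
   * and no longer produces _cogl_pack_uint8_t, so it won't match a
   * direct reference to that name elsewhere. Pasting a plain size
   * token instead is immune to this:
   *
   *   PASTE (_cogl_pack_, 8)  ->  _cogl_pack_8
   */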
This makes autogen.sh look for automake-1.13 and also updates all
Makefile.am files to no longer use the INCLUDES variable which automake
1.13 warns is deprecated in favour of AM_CPPFLAGS.
https://bugzilla.gnome.org/show_bug.cgi?id=690891
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 5de5569e960102afe979a5f2f0403e1defebca62)
We have a workaround in Cogl to fix viewport clipping with Mesa Intel
Gen 6 drivers but this was breaking the semantics of
cogl_framebuffer_clear() which should not be affected by viewport
clipping. This makes sure we disable and restore the workaround when
clearing the framebuffer. This fixes Clutter's test-cogl-viewport
conformance test.
This tweaks the ordering of some struct members in some of the more
important structs so that the compiler won't insert wasted padding to
avoid breaking the alignment. Some members that were previously
unsigned long have been changed to unsigned int. These members need to
be able to fit in 32-bits to run on 32-bit machines anyway so there's
no point in having them extend to 64-bit on 64-bit machines. This
doesn't affect the public API.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit b721af236680005464e39f7f4dd11381d95efb16)
In commit 1fa7c0f10a8a0 the sliced texture code which creates the
array of pointers to the texture slices was changed so that the
textures are appended to the end of the array instead of initially
creating the array with the right size upfront and then shrinking the
array on error. However it was then still also setting the size of the
array after creating it so the new textures would actually end up in
an unused part of the array. The part of the array that is used was
left uninitialised so it would crash. This just removes the call to set
the size of the array.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 7df09d505ba28a1a960df867346af67118e96718)
GL3 has support for clip planes but they are used differently and
involve writing to a builtin output variable in the vertex shader. The
current clip plane code assumes it is only used with a fixed function
driver and tries to directly push to the matrix builtins. This
obviously won't work on GL3 so for now let's just disable clip planes.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5f621589467ab961f5130590298dc8e26d658a92)
When a component-alpha texture is made using a GL3 context a GL_RED
texture is actually used and a swizzle is set up to hide it. However
if a framebuffer is then bound to that texture then when the bits are
queried this workaround will leak out of the API. To fix this it now
detects the situation and reports the number of red bits as the number
of alpha bits.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 425cfb2675912a2cbcaaaeed7c2196d563948222)
Previously when the context was initialised Cogl would query the
number of stencil bits and set a private feature flag to mark that it
can use the buffer for clipping if there was at least 3. The problem
with this is that the number of stencil bits returned by
GL_STENCIL_BITS depends on the currently bound framebuffer. This patch
adds an internal function to query the number of stencil bits in a
framebuffer and makes it use that instead when determining whether it
can push the clip using the stencil buffer.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e928d21516a6c07798655341f4f0f8e3c1d1686c)
Cogl publicly exposes the depth buffer state so we might as well have
a function to query the number of depth bits of a framebuffer.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 853143eb10387f50f8d32cf09af31b8829dc1e01)
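Usage is expected to be a one-liner along these lines (assuming the
getter is named cogl_framebuffer_get_depth_bits()):

  #include <cogl/cogl.h>

  static CoglBool
  has_depth_buffer (CoglFramebuffer *framebuffer)
  {
    return cogl_framebuffer_get_depth_bits (framebuffer) > 0;
  }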
The GL framebuffer driver now makes sure to bind the framebuffer
before counting the number of bits. Previously it would just query the
number of bits for whatever framebuffer happened to be used last.
In addition the virtual for querying the framebuffer bits has been
modified to take a pointer to a structure instead of a separate
pointer to each component. This should make it slightly more efficient
and easier to maintain.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e9c58b2ba23a7cebcd4e633ea7c3191f02056fb5)
The GL3 context is created using the glXCreateContextAttribs function
which is part of the GLX_ARB_create_context extension. However
previously the function pointers from GLX extensions were only
retrieved once the GL context is created. That meant that the GL3
context creation function would always assume that the extension is
not supported so it would always fail.
This patch changes it to query the functions when the renderer is set
up instead. The base winsys feature flags that are determined while
querying the functions are stored in a member of CoglGLXRenderer.
These are then copied to the CoglContext when it is initialised.
The spec for glXGetProcAddress says that the functions returned are
context-independent. That implies that it is safe to call it without
binding a context although that is not explicitly stated as far as I
can tell. A bit of googling finds this DRI documentation which says it
can be used without a context:
http://dri.freedesktop.org/wiki/glXGetProcAddressNeverReturnsNULL
And also this code sample:
http://www.opengl.org/wiki/Tutorial:_OpenGL_3.0_Context_Creation_%28GLX%29
One point that makes me concerned that this might not always work in
practice is that the code in SDL2 to create a GL3 context first
creates a dummy GL2 context in order to have something bound before it
calls glXGetProcAddress. I think this may just be a misunderstanding
based on how wglGetProcAddress works however.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 04a7aca9a98e84e43ac5559305a1358112902e30)
_cogl_texture_spans_foreach_in_region first swaps over the texture
coordinates if they are flipped so that it can always iterate in a
positive direction. It sets a flag so that it will remember that the
coordinates are flipped. Before invoking the callback it is meant to
reflip the coordinates so that the callee doesn't need to be aware of
the flipping. However it was only flipping the sub-texture coordinates
and not the virtual coordinates. This was causing sliced textures to
draw their slice rectangles with the wrong geometry.
test-backface-culling was failing because of this.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e7338a1e09cb22151374aefa6f0bb58485af9189)