Previously we just set it to "none", but this was not enough, since various
calls depend not just on the context being active, but also on the main
rendering surface.
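An illustrative sketch of the idea (EGL-style; the surface names are
assumptions, and the same reasoning applies to other winsys backends):

    /* Making the context current with no surface at all: */
    eglMakeCurrent (egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE, egl_context);
    /* ...is not enough; keep the main rendering surface bound instead: */
    eglMakeCurrent (egl_display, main_surface, main_surface, egl_context);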
Fixes https://gitlab.gnome.org/GNOME/mutter/issues/21
(cherry picked from commit ae26cd0774)
By the looks of it, commit 95e9fa10ef was papering over an Intel DRI bug
that would make it return post-swizzling pixel data on glReadPixels().
There have been reports over time of that commit resulting in wrong colors
on other drivers, and lately Mesa >17.3 started showing the same symptoms
on Intel.
But texture swizzling works by changing parameters before the fragment
shaders run, and reading pixels from an already drawn FBO/texture doesn't
involve those. This should thus use pixel_format_to_gl_with_target(), which
will result in correctly requesting the same pixel format as the underlying
texture, while still considering it BGRA for the upper layers in the
swizzling case.
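A sketch of the consequence for readback (formats shown are illustrative):
with swizzling, BGRA bytes were uploaded untouched into an RGBA texture and
are only swapped at sampling time, so glReadPixels() must request the
texture's real storage format:

    /* Reading back from an FBO backed by a swizzled "BGRA" texture: */
    glReadPixels (0, 0, width, height,
                  GL_RGBA,             /* the underlying storage format */
                  GL_UNSIGNED_BYTE, pixels);
    /* ...the upper layers then keep treating the bytes as BGRA. */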
https://gitlab.gnome.org/GNOME/mutter/issues/72
Closes: #72
We just arbitrarily chose the first EGL config matching the passed
attributes, but we then assumed we always got GBM_FORMAT_XRGB8888. That
was not a correct assumption. Instead, make sure we always pick the
format we expect.
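A sketch of the described pick (the helper name and array bounds are
illustrative; the EGL calls themselves are the standard ones):

    static EGLBoolean
    choose_egl_config_for_format (EGLDisplay    display,
                                  const EGLint *attribs,
                                  uint32_t      gbm_format,
                                  EGLConfig    *config_out)
    {
      EGLConfig configs[64];
      EGLint i, n_configs = 0;

      if (!eglChooseConfig (display, attribs, configs, 64, &n_configs))
        return EGL_FALSE;

      /* For GBM, EGL_NATIVE_VISUAL_ID carries the GBM format code */
      for (i = 0; i < n_configs; i++)
        {
          EGLint visual_id;

          if (eglGetConfigAttrib (display, configs[i],
                                  EGL_NATIVE_VISUAL_ID, &visual_id) &&
              (uint32_t) visual_id == gbm_format)
            {
              *config_out = configs[i];
              return EGL_TRUE;
            }
        }

      return EGL_FALSE;
    }

i.e. called with GBM_FORMAT_XRGB8888 rather than blindly trusting configs[0].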
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/2
On drivers that do not support glGetTexImage (i.e. on GLES),
cogl_texture_get_data() has a "feature" that allows it to download
texture data by rendering the texture to an intermediate framebuffer
object and then reading the data back from there. However, this
feature requires the user to have previously set an "active"
framebuffer object in the context, which makes this very tricky,
because it is not clear to developers that they need to do that
in order for some code to work on GLES (of course it works on
desktop GL, so nobody notices...), and additionally the code actually
crashes if an active fbo is not set!
This patch basically removes this feature in order to prevent
the crash and is in line with how this code has evolved in cogl-2.0:
https://git.gnome.org/browse/cogl/commit/?id=6d6a277b8e9a63a8268046e5258877ba94a1da5b
https://bugzilla.gnome.org/show_bug.cgi?id=789961
When creating a renderer with a custom winsys (which is always how
mutter uses cogl), make it possible to pass user data along with the
winsys. Still unused.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
The GL_BGRA definition is not available for GLES2 contexts, which use
the EXT_texture_format_BGRA8888 extension instead, causing a build
failure when trying to use it in those contexts.
Fortunately, this hack is only relevant for GL, so let's guard it to
prevent the build failure on GLES2.
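The shape of such a guard, as a sketch (HAVE_COGL_GL is an assumed name
for the build macro, and the body merely stands in for the actual hack):

    #ifdef HAVE_COGL_GL
      /* GL_BGRA is only defined in the desktop GL headers */
      if (gl_format == GL_BGRA)
        gl_format = GL_RGBA;
    #endif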
https://bugzilla.gnome.org/show_bug.cgi?id=786568
Those are cached and reused across runs, which indeed doesn't qualify
as "static" to mesa. Marking them as dynamic is more accurate, and
brings a slight performance benefit just by avoiding the resulting
(and later silenced) mesa warning.
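A minimal sketch of the change (the buffer target and call site are
illustrative):

    glBindBuffer (GL_ARRAY_BUFFER, buffer);
    glBufferData (GL_ARRAY_BUFFER, size, data,
                  GL_DYNAMIC_DRAW /* previously GL_STATIC_DRAW */);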
https://bugzilla.gnome.org/show_bug.cgi?id=782344
Fixes cogl_texture_get_data() resorting to the wrong conversions when
extracting the texture data. This notably resulted in RGB/RGBA buffers
copied as-is into BGRA buffers, for instance for the fullscreen animation,
or single-window screenshots of such buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
The ARB_robustness extension defined the following tokens as
returned by GetGraphicsResetStatusARB (see spec at [1]):
GUILTY_CONTEXT_RESET_ARB 0x8253
INNOCENT_CONTEXT_RESET_ARB 0x8254
UNKNOWN_CONTEXT_RESET_ARB 0x8255
These tokens might not be defined in some GL implementations,
such as Mesa 13's implementation of GLES 2.0, so we need to
define them ourselves so as not to break those builds.
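The fallback definitions, using the values from the spec quoted above
(with the usual GL_ prefix the C headers add):

    #ifndef GL_GUILTY_CONTEXT_RESET_ARB
    #define GL_GUILTY_CONTEXT_RESET_ARB 0x8253
    #endif
    #ifndef GL_INNOCENT_CONTEXT_RESET_ARB
    #define GL_INNOCENT_CONTEXT_RESET_ARB 0x8254
    #endif
    #ifndef GL_UNKNOWN_CONTEXT_RESET_ARB
    #define GL_UNKNOWN_CONTEXT_RESET_ARB 0x8255
    #endif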
[1] https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_robustness.txt
https://bugzilla.gnome.org/show_bug.cgi?id=781398
We already do have a texture with an internal format in these paths,
so we should check the required format according to it.
This fixes CoglAtlasTexture (and CoglPangoRenderer indirectly): it
forces an RGBA format on its texture, but pixel_format_to_gl()
assumed swizzling would be performed on the texture anyway, which is
not the case.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
This is used by the GL driver in order to determine whether swizzling
actually applies given the bitmap and target texture internal format.
If both agree that they store BGRA, then swizzling may apply.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
If the GL implementation/hw supports the GL_*_texture_swizzle extension,
pretend that BGRA textures shall contain RGBA data, and let the flipping
happen when the texture is used in the rendering pipeline.
This avoids rather expensive format conversions when forcing BGRA buffers
into RGBA textures, which happens rather often with WL_SHM_FORMAT_ARGB8888
buffers (like gtk+ uses) on little-endian machines.
On intel/mesa/wayland, the performance improvement is rather noticeable:
CPU% as seen by top decreases from 45-50% to 25-30% when running
gtk+/tests/scrolling-performance with a cairo renderer.
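A sketch of the mechanism (not the actual cogl code): the BGRA bytes are
uploaded untouched into an RGBA texture, and GL is told to swap the red
and blue channels at sampling time:

    static const GLint bgra_swizzle[] = { GL_BLUE, GL_GREEN, GL_RED, GL_ALPHA };

    glBindTexture (GL_TEXTURE_2D, tex);
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                  GL_RGBA, GL_UNSIGNED_BYTE, bgra_pixels); /* no conversion */
    glTexParameteriv (GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, bgra_swizzle);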
https://bugzilla.gnome.org/show_bug.cgi?id=779234
Because the threaded-swap-wait functionality requires XInitThreads(),
and because it isn't clear that it is a win for all applications,
add an API function to conditionally enable it.
Fix the cogl-crate example not to just have a hard-coded dependency
on libX11.
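A sketch of the opt-in (the setter name is an assumption of what the API
looks like):

    XInitThreads ();  /* must happen before any other Xlib call */
    renderer = cogl_renderer_new ();
    cogl_xlib_renderer_set_threaded_swap_wait_enabled (renderer, TRUE);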
https://bugzilla.gnome.org/show_bug.cgi?id=779039
It's a good guess that the buffer swap will occur at the next vblank,
so use glXWaitVideoSync in a separate thread to deliver a sync event,
rather than just letting the client block when drawing a frame, which
can significantly change app logic as compared to the INTEL_swap_event
case.
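The rough shape of that thread (helper names are illustrative; error
handling omitted):

    static void *
    swap_wait_thread (void *data)
    {
      unsigned int count;

      glXGetVideoSyncSGI (&count);
      /* Block until the next retrace, the earliest the swap can complete */
      glXWaitVideoSyncSGI (2, (count + 1) % 2, &count);
      deliver_sync_event ();  /* hypothetical: wake up the frame clock */
      return NULL;
    }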
https://bugzilla.gnome.org/show_bug.cgi?id=779039
When we don't have GLX_OML_sync_control, we can still set the
frame presentation time, but we always use the system monotonic time,
so return that from get_clock_time().
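As a sketch (the signature and units are assumptions here):

    static int64_t
    get_clock_time (CoglContext *context)
    {
      return g_get_monotonic_time ();  /* µs, system monotonic clock */
    }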
https://bugzilla.gnome.org/show_bug.cgi?id=779039
As previously commented in the code, SGI_video_sync is per-display, rather
than per-renderer. The is_direct flag for the renderer was tested before
it was initialized (per-display) and that resulted in SGI_video_sync
never being used.
https://bugzilla.gnome.org/show_bug.cgi?id=779039
In order to minimize the amount of breakage, while at the same time
making it easier to make the backward incompatible changes needed to
continue turning libmutter into a capable Wayland compositor, make
libmutter and friends (libmutter-clutter, libmutter-cogl*) parallel
installable by adding a version number to the name. This changes
various filenames; for example, what previously was libmutter.so is now
libmutter-0.so (assuming the version for now is 0), and
libmutter-clutter-1.0.so is now libmutter-clutter-0.so. The pkg-config
filenames and GObject introspection have been renamed to reflect this
as well.
This enables a downstream compositor to rely on a specific version of
the libmutter API, while gracefully handling API/ABI changes by
updating to new versions at its own pace.
https://bugzilla.gnome.org/show_bug.cgi?id=777317
Different libEGL will do different things for eglGetDisplay since it has
to guess what kind of display it's been handed. Better to just use the
API that makes it explicit.
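As a sketch, the explicit variant for the GBM case (the platform token
varies per backend):

    EGLDisplay display =
      eglGetPlatformDisplayEXT (EGL_PLATFORM_GBM_KHR, gbm_device, NULL);
    /* instead of: eglGetDisplay ((EGLNativeDisplayType) gbm_device); */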
Signed-off-by: Adam Jackson <ajax@redhat.com>
https://bugzilla.gnome.org/show_bug.cgi?id=772422
We need a GLES2 extension macro in cogl-texture-2d-gl.c, but we can't
include GLES2/gl2ext.h because it will conflict with things in
GL/glext.h. We can't rely on cogl including anything for us since it'd
only include GLES2/gl2ext.h if OpenGL support was explicitly disabled.
https://bugzilla.gnome.org/show_bug.cgi?id=774891
EGLDevice requires a define from GLES2, even when GL is used instead.
As type definitions may conflict between the two, we shouldn't include
both at the same time. Instead, provide the missing define explicitly
when not using GLES2.
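The usual shape of that, as a sketch (assuming the define in question is
the external texture target token):

    /* Provide the GLES2 token without dragging in GLES2/gl2ext.h */
    #ifndef GL_TEXTURE_EXTERNAL_OES
    #define GL_TEXTURE_EXTERNAL_OES 0x8D65
    #endif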
https://bugzilla.gnome.org/show_bug.cgi?id=774891
Add API to enable the caller to have a custom method for allocating an
external texture. This will make it possible for mutter to generate a
texture from, for example, an EGLStream, without having to add support
for that in Cogl.
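A hypothetical sketch of what such an entry point could look like (all
names here are illustrative, not the actual Cogl API):

    typedef CoglBool (* MyExternalTexAlloc) (CoglTexture2D *tex_2d,
                                             void          *user_data,
                                             CoglError    **error);

    texture = cogl_texture_2d_new_from_egl_image_external (ctx,
                                                           width, height,
                                                           my_alloc_cb,
                                                           user_data,
                                                           &error);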
https://bugzilla.gnome.org/show_bug.cgi?id=773629
Don't rely on the Cogl layer having Wayland specific paths; determine
the buffer type and create the EGLImage ourselves, using the newly
exposed CoglTexture-from-EGLImage API. This changes the API used by
MetaWaylandSurface, making the MetaWaylandBuffer API aware of when the
buffer is being attached. For SHM and EGL buffers, only the first
attach results in a new texture being allocated, but later, for
EGLStreams, more logic will be needed on every attach.
https://bugzilla.gnome.org/show_bug.cgi?id=773629
The WL_bind_wayland_display spec says that EGL images should be created
using EGL_WAYLAND_BUFFER_WL as the target and a NULL context. Mesa
seems to be lenient and accept any context, however some other stacks
aren't so forgiving and fail if anything apart from EGL_NO_CONTEXT is
used.
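I.e., per the spec (sketch; the attribute list is omitted):

    EGLImageKHR image =
      eglCreateImageKHR (egl_display,
                         EGL_NO_CONTEXT,             /* not the GL context */
                         EGL_WAYLAND_BUFFER_WL,
                         (EGLClientBuffer) wl_buffer,
                         NULL);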
Signed-off-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
https://bugzilla.gnome.org/show_bug.cgi?id=769731
Mutter (and libmutter users) are the only users of this version of
cogl, and will more or less only use the cogl-1.0, cogl-2.0 and cogl
experimental API variants. Having the possibility of getting different
API versions of the same API depending on what file includes it is
error prone and confusing, so let's just remove the possibility of
having different versions of the same API.
https://bugzilla.gnome.org/show_bug.cgi?id=768977
We bypass our build configuration to fetch API from a version which
isn't the one we actually use. Stop bypassing and just admit that the
1.0 API is still there, albeit deprecated.
https://bugzilla.gnome.org/show_bug.cgi?id=768977