As we will start adding support for more pixel formats, we will need to
define a notion of planes. This commit doesn't make any functional
change, but starts introducing the idea that pixel formats can (at
this point only theoretically) have multiple planes.
Since a lot of code in Mutter assumes it only ever deals with
single-plane pixel formats, this commit also adds assertions and
if-checks to make sure we don't accidentally try something that
doesn't make sense.
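A minimal sketch of the guard pattern, with cogl_pixel_format_get_n_planes()
standing in for the helper this commit introduces (names illustrative):

    /* Most of Mutter can only handle one plane per buffer, so reject
     * multi-planar formats (e.g. NV12 would report more than one
     * plane) until they are actually supported. */
    static void
    validate_format (CoglPixelFormat format)
    {
      g_return_if_fail (cogl_pixel_format_get_n_planes (format) == 1);
    }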
https://gitlab.gnome.org/GNOME/mutter/merge_requests/858
Depends on "cogl: Replace ANGLE with GLES3 and NV framebuffer_blit"
Allow blitting between onscreen and offscreen framebuffers by doing the y-flip
as necessary. This was not possible with ANGLE, but now with ANGLE gone,
glBlitFramebuffer supports flipping the copied image.
This will be useful in follow-up work to copy from onscreen primary GPU
framebuffer to an offscreen secondary GPU framebuffer.
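For illustration, the flip is just an inverted rectangle in the
glBlitFramebuffer call (FBO names and dimensions are assumed):

    /* Copy the whole framebuffer while flipping vertically: the
     * destination rectangle is specified with its Y range inverted. */
    glBindFramebuffer (GL_READ_FRAMEBUFFER, src_fbo);
    glBindFramebuffer (GL_DRAW_FRAMEBUFFER, dst_fbo);
    glBlitFramebuffer (0, 0, width, height,   /* source rect */
                       0, height, width, 0,   /* destination rect, flipped */
                       GL_COLOR_BUFFER_BIT, GL_NEAREST);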
https://gitlab.gnome.org/GNOME/mutter/merge_requests/615
We already effectively require GLSL, because there's no fixed-function
backend anymore. OpenGL 2.0 drivers don't really exist in the wild, so
just go ahead and require 2.1 or better. 2.1 implies GLSL 1.20 or
better, so simplify that as well.
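As a sketch, the driver probe reduces to a single gate (macro and error
names follow Cogl's style but are illustrative here):

    /* Require OpenGL 2.1+; GLSL 1.20 is then guaranteed, so no
     * separate GLSL version probe is needed. */
    if (!COGL_CHECK_GL_VERSION (gl_major, gl_minor, 2, 1))
      {
        g_set_error_literal (error, COGL_DRIVER_ERROR,
                             COGL_DRIVER_ERROR_INVALID_VERSION,
                             "OpenGL 2.1 or better is required");
        return FALSE;
      }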
https://gitlab.gnome.org/GNOME/mutter/merge_requests/489
This is effectively a revert of:
commit 6cfc93f26f
Author: Robert Bragg <robert@linux.intel.com>
Date: Tue Oct 2 11:44:00 2012 +0100
clip-stack: workaround intel gen6 viewport clip bug
It's been over six years; if this bug is still present, we should just
fix Mesa already.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/481
As it was originally reported on
https://bugzilla.gnome.org/show_bug.cgi?id=779234#c0, the hottest path was
convert_ubyte() in Mesa. Reverting this shows no trace of those hot paths,
nor any higher-than-usual CPU activity.
As the improvements at the time were real, I can only conclude that pixel
conversion was happening somewhere further down the pipeline, and swizzling
just helped indirectly. That eventually got fixed, so the swizzling stayed
around only to cause grief. And plenty of grief it caused.
Time to bin this, it seems.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/486
This basically reverts commit 54735dec, which tried to avoid the
GLib-defined types in favor of the standard C ones. One exception to
this is the bool type, for which the commit introduced a new type,
CoglBool.
Let's just get rid of this type in favor of consistency with the
GLib types. Note, by the way, that neither CoglBool nor gboolean
(which has the size of an `int`) is completely compatible with bool
(which has the size of a `char`).
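The size mismatch is easy to demonstrate with a standalone check (a
minimal sketch; exact sizes are platform-dependent):

    #include <glib.h>
    #include <stdbool.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* gboolean is a typedef for gint, while C99 bool is typically
       * one byte, so the two cannot be exchanged through pointers or
       * in structs whose layout must match. */
      printf ("sizeof (gboolean) = %zu\n", sizeof (gboolean)); /* usually 4 */
      printf ("sizeof (bool)     = %zu\n", sizeof (bool));     /* usually 1 */
      return 0;
    }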
https://gitlab.gnome.org/GNOME/mutter/merge_requests/321
Commit 25f416c13d added additional compilation warnings, including
-Werror=return-type. There are several places where this results
in build failures if `g_assert_not_reached()` is disabled at compile
time and a code path is consequently left without a return value.
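A sketch of the failure mode and the fix, using a hypothetical function:

    #include <glib.h>

    static int
    frobnicate (int mode)
    {
      switch (mode)
        {
        case 0:
          return 1;
        default:
          /* With G_DISABLE_ASSERT this expands to a no-op, leaving the
           * path below without a return; the explicit return keeps
           * -Werror=return-type happy in both configurations. */
          g_assert_not_reached ();
          return 0;
        }
    }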
https://gitlab.gnome.org/GNOME/mutter/issues/447
While normal textures should be bound using GL_TEXTURE_2D, binding an
external texture using GL_TEXTURE_2D results in an error.
Reading the specification for GL_TEXTURE_EXTERNAL_OES, it is unclear
whether getting pixel data from such a texture is possible at all, and
tests show it doesn't return any data, but in case it eventually starts
working, at least bind the correct target for now.
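A minimal sketch of the target selection (variable names assumed):

    /* External textures (e.g. imported via EGL) must be bound with
     * GL_TEXTURE_EXTERNAL_OES; everything else keeps GL_TEXTURE_2D. */
    GLenum gl_target = is_external ? GL_TEXTURE_EXTERNAL_OES : GL_TEXTURE_2D;
    glBindTexture (gl_target, gl_handle);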
https://gitlab.gnome.org/GNOME/mutter/merge_requests/362
Don't just set the internal format to the dummy format "any", as that
causes code intended to be unreachable to be reached. It's not possible
to actually know the internal format of an external texture, however, so
the chosen format might not correspond to the real one.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/362
Presumably glReadPixels itself can do pixel format conversions more
efficiently than a fix-up conversion on the CPU afterwards. Hence,
pick required_format based on the destination rather than the source,
so that it has a better chance of avoiding the fix-up conversion.
With CoglOnscreen objects, CoglFramebuffer::internal_format (the source
format) is also wrong. It is left at a default value and never updated
to reflect reality. In other words, read-pixels used an arbitrary
intermediate pixel format in glReadPixels, and the fix-up conversion
then made the result work for the destination.
The render buffers (GBM surface) are allocated as DRM_FORMAT_XRGB8888.
If the destination buffer is allocated as the same format, the Cogl
read-pixels first converts with glReadPixels XRGB -> ABGR because of the
above default format, and then the fix-up conversion does ABGR -> XRGB.
This case was observed with DisplayLink outputs, where the native
renderer must use the CPU copy path to fill the "secondary GPU"
framebuffers.
This patch stops using internal_format and uses the desired destination
format instead.
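In sketch form, the change boils down to which side drives the format
choice (variable names are illustrative):

    /* Before: the GL read format was derived from the framebuffer's
     * internal_format, which for onscreens is a stale default.
     * After: derive it from the destination bitmap, letting
     * glReadPixels convert directly and skipping the CPU fix-up. */
    CoglPixelFormat required_format = cogl_bitmap_get_format (destination);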
_cogl_framebuffer_gl_read_pixels_into_bitmap() will still use
internal_format to determine alpha premultiplication state and multiply
or un-multiply as needed. Luckily all the formats involved in the
DisplayLink use case are always _PRE, and so is the default
internal_format, so things work in practice.
Furthermore, the GL texture_swizzle extension can never apply to
glReadPixels. Not even with FBOs, as found in this discussion:
https://gitlab.gnome.org/GNOME/mutter/issues/72
Therefore the target_format argument is hardcoded to something that can
never match anything, which will prevent the swizzle from being assumed.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/313
This function gets hit even today on relatively modern Intel systems (I
have a Haswell desktop with Mesa 18.2.4) if the pixel format is right.
Presumably it makes things slower for no good reason anymore.
According to cb146dc515, this
functionality was refactored into a workaround path in 2012. The commit
message mentions the problem existing before Mesa 8.0.2. The number
refers to https://bugs.freedesktop.org/show_bug.cgi?id=46631.
The use case where I hit this is when improving support for DisplayLink
video outputs. These are used through a "secondary GPU", and since
DisplayLink does not have a GPU, Mutter uses the CPU copy path with Cogl
read-pixels[1]. If the DisplayLink framebuffer was allocated as
DRM_FORMAT_XRGB8888 (the only format it currently handles correctly),
mesa_46631_slow_read_pixels_workaround would get hit. The render buffer
has the same format as the framebuffer, yet doing the copy XRGB -> XRGB
ends up being slower than XRGB -> XBGR, which makes no sense.
This patch is not sufficient to fix the XRGB -> XRGB copy performance,
but it is required.
This patch reverts CoglGpuInfoDriverBug into what it was before
cb146dc515.
[1] This is not actually true until
https://gitlab.gnome.org/GNOME/mutter/merge_requests/278 is
merged.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/313
The depth buffer is marked as invalid when 1) the framebuffer is first
created, and 2) whenever GL_DEPTH_TEST is enabled on it. This ensures
the framebuffer's attached depth buffer (if any) is properly cleared
before it's actually used, while avoiding needless clears as long as
depth testing is disabled (the default).
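A sketch of the resulting logic (field and variable names are
illustrative):

    /* Clear lazily: pay for the clear only once depth testing actually
     * turns on, instead of clearing on every frame unconditionally. */
    if (framebuffer->depth_buffer_clear_needed && depth_test_enabled)
      {
        glClear (GL_DEPTH_BUFFER_BIT);
        framebuffer->depth_buffer_clear_needed = FALSE;
      }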
https://bugzilla.gnome.org/show_bug.cgi?id=782344
By the looks of it, commit 95e9fa10ef was papering over an Intel DRI bug
that would make it return post-swizzling pixel data on glReadPixels().
There have been reports over time of that commit resulting in wrong
colors on other drivers, and lately Mesa >17.3 started showing the same
symptoms on Intel.
But texture swizzling works by changing parameters before fragment
shaders run, and reading pixels from an already drawn FBO/texture
doesn't involve those. This should thus use
pixel_format_to_gl_with_target(), which will correctly request the same
pixel format as the underlying texture, while still treating it as BGRA
for the upper layers in the swizzling case.
https://gitlab.gnome.org/GNOME/mutter/issues/72
Closes: #72
The GL_BGRA definition is not available in GLES2 contexts, which use
the EXT_texture_format_BGRA8888 extension instead, so trying to use it
there causes a build failure.
Fortunately, this hack is only relevant for GL, so guard it to prevent
the failure on GLES2.
https://bugzilla.gnome.org/show_bug.cgi?id=786568
Fixes cogl_texture_get_data() resorting to the wrong conversions when
extracting the texture data. This notably resulted in RGB/RGBA buffers
being copied as-is into BGRA buffers, for instance in the fullscreen
animation or in single-window screenshots of such buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
We already have a texture with an internal format in these paths,
so we should determine the required format according to it.
This fixes CoglAtlasTexture (and CoglPangoRenderer indirectly): it
forces an RGBA format on its texture, but pixel_format_to_gl()
assumed swizzling would be performed on the texture anyway, which is
not the case.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
This is used by the GL driver in order to determine whether swizzling
actually applies given the bitmap and target texture internal format.
If both agree that they store BGRA, then swizzling may apply.
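As a hypothetical sketch (the predicate itself is illustrative; the
format name is a real Cogl enum):

    /* Swizzling only helps when both sides really store BGRA bytes. */
    static gboolean
    swizzle_applies (CoglPixelFormat bitmap_format,
                     CoglPixelFormat texture_internal_format)
    {
      return bitmap_format == COGL_PIXEL_FORMAT_BGRA_8888_PRE &&
             texture_internal_format == COGL_PIXEL_FORMAT_BGRA_8888_PRE;
    }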
https://bugzilla.gnome.org/show_bug.cgi?id=779234
If the GL implementation/hardware supports the GL_*_texture_swizzle
extension, pretend that BGRA textures contain RGBA data, and let the
channel reordering happen when the texture is used in the rendering
pipeline.
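The reordering itself is just a pair of texture parameters; a standard
GL sketch for the BGRA case:

    /* The BGRA bytes are uploaded unconverted into an "RGBA" texture,
     * so the stored red channel actually holds blue data and vice
     * versa; swizzle the lookups so sampling returns correct colors. */
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_BLUE);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);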
This avoids rather expensive format conversions when forcing BGRA
buffers into RGBA textures, which happens rather often with
WL_SHM_FORMAT_ARGB8888 buffers (such as those gtk+ uses) on
little-endian machines.
On Intel/Mesa/Wayland, the performance improvement is quite noticeable:
CPU usage as seen by top decreases from 45-50% to 25-30% when running
gtk+/tests/scrolling-performance with a Cairo renderer.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
We need a GLES2 extension macro in cogl-texture-2d-gl.c, but we can't
include GLES2/gl2ext.h because it will conflict with things in
GL/glext.h. We can't rely on cogl including anything for us since it'd
only include GLES2/gl2ext.h if OpenGL support was explicitly disabled.
https://bugzilla.gnome.org/show_bug.cgi?id=774891
EGLDevice requires a define from GLES2, even when GL is used instead.
As type definitions may conflict between the two, we shouldn't include
both at the same time. Instead, provide the missing define explicitly
when not using GLES2.
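The guard pattern looks roughly like this (GL_TEXTURE_EXTERNAL_OES is
shown as an assumed example of such a define; the value matches the
GLES2 headers):

    /* Provide the define ourselves instead of pulling in
     * GLES2/gl2ext.h alongside the GL headers. */
    #ifndef GL_TEXTURE_EXTERNAL_OES
    #define GL_TEXTURE_EXTERNAL_OES 0x8D65
    #endif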
https://bugzilla.gnome.org/show_bug.cgi?id=774891
Add API to enable the caller to have a custom method for allocating an
external texture. This makes it possible for mutter to generate a
texture from, for example, an EGLStream without having to add support
for that in Cogl.
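One hypothetical shape for such an API, purely illustrative of the
callback idea (none of these names are from the actual patch):

    /* The caller supplies the code that binds the external source
     * (e.g. an EGLStream) to the texture during allocation. */
    typedef gboolean (* CoglTexture2DExternalAlloc) (CoglTexture2D *tex,
                                                     gpointer user_data,
                                                     GError **error);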
https://bugzilla.gnome.org/show_bug.cgi?id=773629
In cogl, use cogl-config.h, and in clutter, use clutter-build-config.h.
We can't use clutter-config.h in clutter because it's already used and
installed.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
When using a context with robustness, glGetError() may return
GL_CONTEXT_LOST at any time and this error doesn't get cleared until
the application calls glGetGraphicsResetStatus(). This means that our
error checking can't call glGetError() in a loop without checking for
that return value and returning in that case.
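A sketch of the adjusted error-draining loop:

    /* GL_CONTEXT_LOST stays set until glGetGraphicsResetStatus() is
     * called, so bail out instead of spinning on it forever. */
    GLenum gl_error;

    while ((gl_error = glGetError ()) != GL_NO_ERROR)
      {
        if (gl_error == GL_CONTEXT_LOST)
          break;
        g_warning ("GL error (0x%04x)", gl_error);
      }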
https://bugzilla.gnome.org/show_bug.cgi?id=739178