So we can properly handle matching DRM and WL_SHM formats in a unified
manner.
Add extensive testing of copies between these and the existing
pre-multiplied alpha formats, i.e. all formats we support on Wayland.
Note that unfortunately, for some format combinations the value in the
alpha channel is not cleared as expected, likely because of fast paths
in Cogl. If both the source and destination formats are opaque, however,
it always works; this includes all cases where they are the same.
Co-Authored-By: Jonas Ådahl <jadahl@gmail.com>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3065>
On GLES2, reading and writing some Cogl formats is not supported
natively. In those cases we use another format to do the reading and
writing. When the internal format and the temporary format differ in
premultiplication, Cogl tries to adjust for it.
Opaque Cogl formats don't have the premult bit set, but our internal
format is a premult format. Cogl tries to adjust for the mismatch but
completely misses that the opaque format doesn't have an alpha channel,
so no adjustment should happen at all.
So skip the premult adjustment when the Cogl format has no alpha
channel.
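A minimal sketch of the resulting check, assuming helpers along the
lines of cogl_pixel_format_has_alpha() and the COGL_PREMULT_BIT flag:
```
/* Sketch only, not the actual Cogl code: only adjust for a
 * premultiplication mismatch when the format has an alpha channel. */
static gboolean
needs_premult_adjustment (CoglPixelFormat internal_format,
                          CoglPixelFormat tmp_format)
{
  /* An opaque format has no alpha channel, so there is nothing
   * to (un)premultiply. */
  if (!cogl_pixel_format_has_alpha (tmp_format))
    return FALSE;

  return (internal_format & COGL_PREMULT_BIT) !=
         (tmp_format & COGL_PREMULT_BIT);
}
```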
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3065>
glGetIntegerv() with GL_RED_BITS/GL_GREEN_BITS/GL_BLUE_BITS/etc. is not
supported with the GL core context, so there is no point in falling back
to it when COGL_PRIVATE_FEATURE_QUERY_FRAMEBUFFER_BITS is not supported,
as this will cause a GL error.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3047>
The function ensure_bits_initialized() in cogl-gl-framebuffer-fbo.c
checks for COGL_PRIVATE_FEATURE_QUERY_FRAMEBUFFER_BITS, whereas the same
function in cogl-gl-framebuffer-back.c simply checks for the driver
being COGL_DRIVER_GL3.
Change the latter to use the COGL_PRIVATE_FEATURE_QUERY_FRAMEBUFFER_BITS
flag as well.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3047>
Cogl's feature COGL_PRIVATE_FEATURE_QUERY_FRAMEBUFFER_BITS is required
to use the GL_FRAMEBUFFER_ATTACHMENT_* queries.
Unfortunately, the test for the availability of the private feature is
actually inverted in ensure_bits_initialized(), which causes that whole
portion of code to be skipped, falling back to the glGetIntegerv()
method, which isn't supported in core profiles.
As Mesa has recently become stricter about this, it causes the CI tests
in mutter to fail.
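A hedged sketch of the corrected condition (details may differ from the
actual code):
```
/* ensure_bits_initialized(), corrected: the feature test must not
 * be inverted. Sketch only, not the actual Cogl code. */
if (_cogl_has_private_feature (ctx,
                               COGL_PRIVATE_FEATURE_QUERY_FRAMEBUFFER_BITS))
  {
    /* Query bits via GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE etc. */
  }
else
  {
    /* Fall back to glGetIntegerv (GL_RED_BITS, ...), which core
     * profiles don't support. */
  }
```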
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3047>
The only consumer of this type of rect was the scissor clipping,
which was removed by the previous commit.
Remove window rects from CoglClipStack, and all dependent code.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3006>
OpenGL requires more hand-holding in the driver regarding what pixel
memory layouts can be written when calling glReadPixels(), compared to
GLES2. Let's move the details of this logic to the corresponding
backends, so that in the future the GLES2 backend can be adapted to
handle more formats without placing that logic in the generic layer.
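Sketched direction, with a hypothetical vtable entry (name and
signature invented for illustration; see the MR for the real interface):
```
/* Hypothetical per-driver hook: let each backend pick the pixel
 * format glReadPixels() should read into, instead of encoding
 * GL-vs-GLES2 quirks in the generic layer. */
typedef CoglPixelFormat (* CoglDriverGetReadPixelsFormat) (CoglContext     *context,
                                                           CoglPixelFormat  from,
                                                           CoglPixelFormat  to,
                                                           GLenum          *gl_format,
                                                           GLenum          *gl_type);
```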
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2461>
COGL_PIXEL_FORMAT_ABGR_2101010 is defined to mean the 2 A bits are
placed on the most significant bits of a 32 bit unsigned integer,
followed by B on the next 10 bits, and so on, with R on the 10 least
significant bits.
UNSIGNED_INT_2_10_10_10_REV_EXT is defined to represent color channels
as
```
|31 30|29 28 27 26 25 24 23 22 21 20|19 18 17 16 15 14 13 12 11 10| 9  8  7  6  5  4  3  2  1  0|
+-----+-----------------------------+-----------------------------+-----------------------------+
|  a  |              b              |              g              |              r              |
+-----+-----------------------------+-----------------------------+-----------------------------+
```
As can be seen, this matches COGL_PIXEL_FORMAT_ABGR_2101010, meaning
that's the format we can directly read and write.
In Cogl, when looking up the GL formats for a given CoglPixelFormat, we
get back the tuple with the GL format, but we are also returned a
"required format" (CoglPixelFormat). This required format represents the
format that is required when reading actual pixels from GLES. In GLES,
the above mentioned format is the only one supported by the
EXT_texture_type_2_10_10_10_REV extension, so for other types we need to
do the CPU side conversion ourselves. To achieve this, correctly return
COGL_PIXEL_FORMAT_ABGR_2101010 as the required format.
The internal format should also be GL_RGB10_A2, not GL_RGBA.
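For illustration, a minimal sketch of that layout as hypothetical C
helpers (not part of Cogl):
```
/* Hypothetical helpers showing the bit layout:
 * a in bits 31-30, b in 29-20, g in 19-10, r in 9-0. */
#include <stdint.h>

static inline uint32_t
pack_abgr_2101010 (uint32_t a, uint32_t b, uint32_t g, uint32_t r)
{
  return ((a & 0x3) << 30) |
         ((b & 0x3ff) << 20) |
         ((g & 0x3ff) << 10) |
         (r & 0x3ff);
}

static inline void
unpack_abgr_2101010 (uint32_t  pixel,
                     uint32_t *a, uint32_t *b, uint32_t *g, uint32_t *r)
{
  *a = (pixel >> 30) & 0x3;
  *b = (pixel >> 20) & 0x3ff;
  *g = (pixel >> 10) & 0x3ff;
  *r = pixel & 0x3ff;
}
```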
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2461>
Both the fragend and vertend shader state were called
"CoglPipelineShaderState", which was rather annoying, especially when
the type needs to be exposed outside of the .c file as part of moving
unit tests out. Make the type names unique. This also avoids confusion
about which type one is looking at.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2555>
Because both code paths require the existence of `GL_TIMESTAMP[_EXT]`
which is only guaranteed if `ARB_timer_query` (included in GL core 3.3)
is implemented.
We know that is true when `context->glGenQueries` and
`context->glQueryCounter` are non-NULL. So that is the minimum
requirement for any use of `GL_TIMESTAMP`, even when it is used with
`glGetInteger64v`.
Until now, Raspberry Pi (OpenGL 2.1) would find a working implementation
of `glGetInteger64v` but fail to check whether the driver understands
`GL_TIMESTAMP` (it doesn't).
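A minimal sketch of that requirement check, assuming Cogl-style function
pointers on the context:
```
/* Sketch only: GL_TIMESTAMP is safe to use (including with
 * glGetInteger64v) only if the timer query entry points resolved. */
static gboolean
have_timestamp_queries (CoglContext *context)
{
  return context->glGenQueries != NULL &&
         context->glQueryCounter != NULL;
}
```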
Fixes: https://gitlab.gnome.org/GNOME/mutter/-/issues/2107
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2253>
When Cogl gained support for importing pixmaps, I think there was a
misunderstanding that GLX and EGL differ in how this works: GLX needs to
rebind the pixmap in order to guarantee that changes are reflected in
the texture after it detects damage, whereas with EGL it supposedly
doesn't. The GLX spec makes it pretty clear that it does need to rebind,
whereas the EGL spec is a bit harder to follow. As a
fallout from Mesa MR 12869, it seems like the compositor really does
need to rebind the image to comply with the spec. Notably, in
OES_EGL_image_external there is:
"Binding (or re-binding if already bound) an external texture by calling
BindTexture after all modifications are complete guarantees that
sampling done in future draw calls will return values corresponding to
the values in the buffer at or after the time that BindTexture is
called."
So this commit changes the x11_damage_notify handler for EGL to lazily
queue a rebind like GLX does. The code that binds the image while
allocating the texture has been moved into a reusable helper function.
It seems like there is a bit of a layering violation when accessing the
GL driver internals from the EGL winsys code, but I noticed that the GLX
code also includes the driver GL headers and otherwise it seems pretty
tricky to do properly.
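A hypothetical sketch of the lazy rebind (field and function names
invented for illustration):
```
/* On DamageNotify, just mark the texture; don't rebind immediately. */
static void
damage_notify (CoglTexturePixmap *tex_pixmap)
{
  tex_pixmap->bind_tex_image_queued = TRUE;
}

/* Before sampling, re-bind the EGLImage if damage was seen, so that
 * future draw calls see the current pixmap contents. */
static void
maybe_rebind (CoglTexturePixmap *tex_pixmap)
{
  if (!tex_pixmap->bind_tex_image_queued)
    return;

  glBindTexture (GL_TEXTURE_2D, tex_pixmap->gl_texture);
  glEGLImageTargetTexture2DOES (GL_TEXTURE_2D, tex_pixmap->egl_image);
  tex_pixmap->bind_tex_image_queued = FALSE;
}
```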
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2062>
These match their alpha counterparts, apart from not setting the
alpha bit. This allows our internal machinery to more easily
distinguish whether we need a slow alpha-pass during rendering or not.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1810>
It seems glGetString(GL_RENDERER) can return NULL in the wild, causing
issues with strstr(). Handle this more gracefully by using
g_return_val_if_fail(), treating a NULL renderer as meaning software
rendering.
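A minimal sketch of that defensive check (the renderer substrings are
illustrative):
```
static gboolean
is_software_renderer (void)
{
  const char *renderer = (const char *) glGetString (GL_RENDERER);

  /* Some drivers have been seen returning NULL here; assume
   * software rendering in that case. */
  g_return_val_if_fail (renderer != NULL, TRUE);

  return strstr (renderer, "llvmpipe") != NULL ||
         strstr (renderer, "softpipe") != NULL;
}
```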
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1931>
Add utilities that allow getting the current GPU timestamp and creating
a query which completes upon completion of all operations currently
submitted on a framebuffer. Combined, these two allow measuring how long
it took the GPU to finish rendering something to a framebuffer.
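Illustrative GL-level usage of the kind these utilities wrap
(ARB_timer_query; not the new Cogl API itself):
```
GLint64 gpu_start;
GLuint64 gpu_end;
GLuint query;

/* Current GPU timestamp, before submitting work. */
glGetInteger64v (GL_TIMESTAMP, &gpu_start);

/* ... submit rendering to the framebuffer ... */

/* The query result becomes available once all previously submitted
 * operations have completed on the GPU. */
glGenQueries (1, &query);
glQueryCounter (query, GL_TIMESTAMP);

/* Later, once the result is available: */
glGetQueryObjectui64v (query, GL_QUERY_RESULT, &gpu_end);
/* gpu_end - gpu_start approximates how long the GPU took. */
```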
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1762>
Previously we were using a mask of 0x1 for the lifetime of the stencil.
This was wrong for two reasons:
* The intersection algorithm needs to count up to a maximum of 2, so a
  mask of 1 would clamp to 1 instead. Then decrementing all pixels
  resulted in all pixels being zero even though we want some to be 1.
  So the stencil then blocked some color buffer pixels from being
  rendered.
* The lifetime of the mask was too long. By leaving it non-zero at
  the end of the function we could accidentally end up modifying the
  stencil contents during our later color buffer paints.
This fixes faulty rendering of some actors seen in gnome-shell with
test case: `env COGL_DEBUG=stencilling`
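An illustrative GL-level sketch of the corrected approach (not the
actual Cogl code):
```
/* Use the full write mask so intersection counting can reach 2;
 * a 0x1 mask would clamp it to 1. */
glEnable (GL_STENCIL_TEST);
glStencilMask (0xff);
glStencilFunc (GL_ALWAYS, 0, 0xff);
glStencilOp (GL_KEEP, GL_INCR, GL_INCR);
/* ... draw both clip regions; pixels in the intersection reach 2 ... */

/* Decrement everything: the intersection drops to 1, the rest to 0. */
glStencilOp (GL_KEEP, GL_DECR, GL_DECR);
/* ... draw a rectangle covering the whole stencil area ... */

/* End the mask's lifetime so later color buffer paints can't
 * accidentally modify the stencil contents. */
glStencilMask (0x0);
glStencilFunc (GL_EQUAL, 1, 0xff);
glStencilOp (GL_KEEP, GL_KEEP, GL_KEEP);
```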
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1873>
Previously we were using a mask of 0x1 for the lifetime of the stencil.
This was wrong for two reasons:
* The intersection algorithm needs to count up to a maximum of 2, so a
  mask of 1 would clamp to 1 instead. Then decrementing all pixels
  resulted in all pixels being zero even though we want some to be 1.
  So the stencil then blocked some color buffer pixels from being
  rendered.
* The lifetime of the mask was too long. By leaving it non-zero at
  the end of the function we could accidentally end up modifying the
  stencil contents during our later color buffer paints.
This fixes missing rendering of some actors seen in gnome-shell with
test case: `env COGL_DEBUG=stencilling CLUTTER_PAINT=disable-clipped-redraws`
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1873>
It's currently only handled by a surface backend framebuffer (assuming
the right GLX extensions are available). While it's theoretically
possible to do the same with the offscreen by having multiple textures,
it's not supported, so leave the FBO variant with a single warning if we
end up there.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1514>
The object was still pretending to be a CoglFramebuffer itself, using
naming and calling conventions that made it seem like one. Fix that by
passing around the driver instead of the framebuffer.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1514>
The framebuffer driver was lazily initialized on demand in some cases
(onscreen), and up front in others (offscreen). Replace this with more
predictable up-front initialization, done at framebuffer allocation.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1514>