Marking the depth/stencil buffers as discarded before swapping buffers
for the screen signals to the GPU that we don't need to keep them
around for the future.
This helps performance by reducing memory bandwidth usage: some GPUs
can then avoid writing those buffers back to memory at all after
rendering, since they would just be cleared right afterwards anyway.
It is not necessary to mark buffers as discarded after swapping
buffers. According to the spec this should have no effect (since the
swap is going to be followed by new rendering commands which make the
buffers valid again), and removing it has shown no impact in
performance tests.
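As a rough sketch of the idea (using the public Cogl API; the exact
call site in the paint loop is illustrative):

```
/* Mark depth/stencil as discardable so the driver can avoid writing
 * them back to memory, then present. */
cogl_framebuffer_discard_buffers (COGL_FRAMEBUFFER (onscreen),
                                  COGL_BUFFER_BIT_DEPTH |
                                  COGL_BUFFER_BIT_STENCIL);
cogl_onscreen_swap_buffers (onscreen);
```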
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2091>
cogl_framebuffer_discard_buffers required the color buffer to be
passed, although users might want to discard e.g. just the depth
and/or stencil buffers.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2091>
When sysprof-4 and libsysprof-capture-4 are installed into different
prefixes, such as with the Nix package manager, the D-Bus interfaces
are likely not discoverable from the latter package.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2572>
OpenGL requires more hand-holding in the driver regarding which pixel
memory layouts can be written when calling glReadPixels(), compared to
GLES2. Let's move the details of this logic to the corresponding
backends, so that in the future the GLES2 backend can be adapted to
handle more formats without placing that logic in the generic layer.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2461>
COGL_PIXEL_FORMAT_ABGR_2101010 is defined to mean that the 2 A bits
are placed in the most significant bits of a 32 bit unsigned integer,
followed by B on the following 10 bits, and so on, until R on the 10
least significant bits.
UNSIGNED_INT_2_10_10_10_REV_EXT is defined to represent color channels
as
```
31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
-------------------------------------------------------------------------------------
| a  |              b              |              g              |         r         |
-------------------------------------------------------------------------------------
```
As can be seen, this matches COGL_PIXEL_FORMAT_ABGR_2101010, meaning
that's the format we can directly read and write.
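For illustration, unpacking one such pixel on the CPU follows directly
from that layout (hypothetical helper, not part of this change):

```
#include <stdint.h>

/* Extract the channels of one UNSIGNED_INT_2_10_10_10_REV_EXT pixel,
 * following the bit layout above (A in bits 31-30, then B, G, R). */
static inline void
unpack_abgr_2101010 (uint32_t px,
                     uint16_t *a, uint16_t *b, uint16_t *g, uint16_t *r)
{
  *a = (px >> 30) & 0x3;    /* 2 most significant bits */
  *b = (px >> 20) & 0x3ff;  /* next 10 bits */
  *g = (px >> 10) & 0x3ff;  /* next 10 bits */
  *r = px & 0x3ff;          /* 10 least significant bits */
}
```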
In Cogl, when finding the GL formats, we get a tuple with the GL
format corresponding to the format we pass, but we also get back a
"required format" (CoglPixelFormat). This required format represents
the format that is required when reading actual pixels from GLES. In
GLES, the above mentioned format is the only one supported by the
EXT_texture_type_2_10_10_10_REV extension, so for other types we need
to do the CPU side conversion ourselves. To achieve this, correctly
return COGL_PIXEL_FORMAT_ABGR_2101010 as the required format.
The internal format should also be GL_RGB10_A2, not GL_RGBA.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2461>
Unused since https://gitlab.gnome.org/GNOME/mutter/merge_requests/932
AFAICT.
As a bonus, gets rid of these compiler warnings:
../cogl/cogl/cogl-pipeline-layer-state.c: In function ‘cogl_pipeline_get_layer_min_filter’:
../cogl/cogl/cogl-pipeline-layer-state.c:1279:10: warning: ‘min_filter’ may be used uninitialized [-Wmaybe-uninitialized]
1279 | return min_filter;
| ^~~~~~~~~~
../cogl/cogl/cogl-pipeline-layer-state.c:1274:22: note: ‘min_filter’ was declared here
1274 | CoglPipelineFilter min_filter;
| ^~~~~~~~~~
../cogl/cogl/cogl-pipeline-layer-state.c: In function ‘cogl_pipeline_get_layer_mag_filter’:
../cogl/cogl/cogl-pipeline-layer-state.c:1291:10: warning: ‘mag_filter’ may be used uninitialized [-Wmaybe-uninitialized]
1291 | return mag_filter;
| ^~~~~~~~~~
../cogl/cogl/cogl-pipeline-layer-state.c:1287:22: note: ‘mag_filter’ was declared here
1287 | CoglPipelineFilter mag_filter;
| ^~~~~~~~~~
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2757>
Both the fragend and vertend shader state were called
"CoglPipelineShaderState", which was rather annoying, especially when
the type needs to be exposed outside of the .c file as part of moving
out unit tests. Make the types unique. This also avoids confusion
about which type one is looking at.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2555>
It consists of only a macro and build description logic.
Adds a macro for simpler tests that don't require a context; unit
tests requiring a context should use the same framework as conform
tests.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2555>
All working tests have already migrated to the test suite using mutter;
move the old unported tests over too, and remove the conform test
framework, as it is no longer used.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2555>
They're just typedefs to CoglObject and CoglObjectClass, respectively,
and the latter is unused already. The former is used in a couple of
deprecated headers, which can easily be changed.
Change all CoglHandleObject instances to CoglObject, and drop the
types.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2355>
There is no need to make CoglJournal a CoglObject, nor any kind of
object for that matter, since it doesn't require refcounting at all.
CoglJournal is entirely private to, and managed by, CoglFramebuffer,
which only needs to create and destroy it.
Make CoglJournal a free-form struct, and adjust CoglFramebuffer to
call _cogl_journal_free() instead of cogl_object_unref().
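A minimal sketch of the resulting shape (everything except
_cogl_journal_free() is illustrative):

```
/* CoglJournal as a plain struct with an explicit create/free pair,
 * owned directly by CoglFramebuffer rather than being refcounted. */
typedef struct _CoglJournal CoglJournal;

CoglJournal * _cogl_journal_new  (CoglFramebuffer *framebuffer);
void          _cogl_journal_free (CoglJournal     *journal);

/* In CoglFramebuffer's teardown path, instead of cogl_object_unref(): */
/*   g_clear_pointer (&framebuffer->journal, _cogl_journal_free);      */
```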
Related: https://gitlab.gnome.org/GNOME/mutter/-/issues/2087
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2355>
DMA buffers might be allocatable, but that doesn't mean the driver
won't fail when we try to allocate a buffer with an implicit modifier.
With the proprietary NVIDIA driver, for example, it will fail. Let's
catch this up front and avoid advertising DMA buffer support when we
know it won't work.
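A rough sketch of such an up-front probe (helper name, buffer size and
format are illustrative):

```
#include <gbm.h>
#include <glib.h>

/* Try a small allocation with an implicit modifier and only advertise
 * DMA buffer support if the driver actually succeeds. */
static gboolean
can_allocate_dma_bufs (struct gbm_device *gbm_device)
{
  struct gbm_bo *bo;

  bo = gbm_bo_create (gbm_device, 1, 1,
                      GBM_FORMAT_XRGB8888,
                      GBM_BO_USE_RENDERING | GBM_BO_USE_LINEAR);
  if (!bo)
    return FALSE;

  gbm_bo_destroy (bo);
  return TRUE;
}
```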
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2383>
Previously, we would only check for EXT_swap_buffers_with_damage,
which will generally find an implementation. However, some EGL
implementations do not appear to support that naming of the extension,
and instead only advertise KHR_swap_buffers_with_damage.
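A sketch of the idea, assuming a plain extension-string check
(function and variable names are illustrative):

```
#include <string.h>
#include <EGL/egl.h>
#include <glib.h>

/* Accept either naming of the extension when probing. */
static gboolean
check_swap_buffers_with_damage (EGLDisplay egl_display)
{
  const char *exts = eglQueryString (egl_display, EGL_EXTENSIONS);

  return strstr (exts, "EGL_EXT_swap_buffers_with_damage") != NULL ||
         strstr (exts, "EGL_KHR_swap_buffers_with_damage") != NULL;
}
```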
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2316>
Returns TRUE if the active renderer backend can allocate DMA buffers.
This is the case for hardware accelerated GBM backends, but FALSE for
surfaceless (i.e. no render node) and EGLDevice (legacy NVIDIA paths).
While software based GBM devices can allocate DMA buffers, we don't
want to allocate them for offscreen rendering, as we really only use
these for inter process transfers; and since buffers allocated for
scanout don't go through the relevant API, making it return FALSE for
these devices solves that.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1939>
Because both code paths require the existence of `GL_TIMESTAMP[_EXT]`
which is only guaranteed if `ARB_timer_query` (included in GL core 3.3)
is implemented.
We know when that is true because `context->glGenQueries` and
`context->glQueryCounter` are non-NULL. So that is the minimum
requirement for any use of `GL_TIMESTAMP`, even when it is used in
`glGetInteger64v`.
Until now, on the Raspberry Pi (OpenGL 2.1) we would find a working
implementation of `glGetInteger64v` but fail to check whether the
driver understands `GL_TIMESTAMP` (it doesn't).
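As a sketch, the guard amounts to something like this (the `ctx`
members mirror Cogl's convention of storing resolved GL entry points
on the context; names are illustrative):

```
/* Only use GL_TIMESTAMP when the timer-query entry points resolved,
 * i.e. ARB_timer_query (or the EXT equivalent) is available. */
if (ctx->glGenQueries != NULL && ctx->glQueryCounter != NULL)
  {
    GLint64 gpu_time_ns = 0;

    ctx->glGetInteger64v (GL_TIMESTAMP, &gpu_time_ns);
  }
```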
Fixes: https://gitlab.gnome.org/GNOME/mutter/-/issues/2107
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2253>
When Cogl gained support for importing pixmaps, I think there was a
misunderstanding that GLX and EGL differ here: that GLX needs to
rebind the pixmap in order to guarantee that changes are reflected in
the texture after it detects damage, whereas EGL does not. The GLX
spec makes it pretty clear that it does need to rebind, whereas the
EGL spec is a bit harder to follow. As a
fallout from Mesa MR 12869, it seems like the compositor really does
need to rebind the image to comply with the spec. Notably, in
OES_EGL_image_external there is:
"Binding (or re-binding if already bound) an external texture by calling
BindTexture after all modifications are complete guarantees that
sampling done in future draw calls will return values corresponding to
the values in the buffer at or after the time that BindTexture is
called."
So this commit changes the x11_damage_notify handler for EGL to lazily
queue a rebind like GLX does. The code that binds the image while
allocating the texture has been moved into a reusable helper function.
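The rebind itself is essentially the following (a sketch, assuming the
GL_OES_EGL_image entry point has already been resolved; the helper
name is illustrative):

```
/* Re-bind the EGLImage to its texture after damage so that subsequent
 * sampling sees the updated pixmap contents. */
static void
rebind_image_texture (GLuint tex, EGLImageKHR image)
{
  glBindTexture (GL_TEXTURE_2D, tex);
  glEGLImageTargetTexture2DOES (GL_TEXTURE_2D, (GLeglImageOES) image);
}
```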
It seems like there is a bit of a layering violation when accessing the
GL driver internals from the EGL winsys code, but I noticed that the GLX
code also includes the driver GL headers and otherwise it seems pretty
tricky to do properly.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2062>