They're just typedefs to CoglObject and CoglObjectClass, respectively,
and the latter is unused already. The former is used in a couple of
deprecated headers, which can easily be changed.
Change all CoglHandleObject instances to CoglObject, and drop the types.
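For context, the removed definitions were plain aliases, roughly
(a sketch; the exact spelling in the header may differ):

    typedef struct _CoglObject      CoglHandleObject;
    typedef struct _CoglObjectClass CoglHandleObjectClass;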
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2355>
There is no need to make CoglJournal a CoglObject, nor any kind of
object for that matter, since it doesn't require refcounting at all.
CoglJournal is entirely private to, and managed by, CoglFramebuffer,
and it only needs to create and destroy it.
Make CoglJournal a free-form struct, and adjust CoglFramebuffer to
call _cogl_journal_free() instead of cogl_object_unref().
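The resulting ownership model, sketched (signatures follow the commit;
the g_clear_pointer() call in the dispose path is illustrative):

    /* CoglJournal is now a plain, non-refcounted struct: */
    CoglJournal *_cogl_journal_new  (CoglFramebuffer *framebuffer);
    void         _cogl_journal_free (CoglJournal     *journal);

    /* In CoglFramebuffer's dispose path, instead of cogl_object_unref(): */
    g_clear_pointer (&priv->journal, _cogl_journal_free);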
Related: https://gitlab.gnome.org/GNOME/mutter/-/issues/2087
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2355>
DMA buffers might appear to be allocatable, but that doesn't mean the
driver won't fail when we actually try to allocate a buffer with an
implicit modifier. With the proprietary NVIDIA driver, for example, it
will. Let's catch this up front and avoid advertising DMA buffer
support when we know it won't work.
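A sketch of such an up-front probe, assuming a GBM device is at hand
(gbm_bo_create() and its flags are real GBM API; the helper itself is
illustrative):

    #include <gbm.h>
    #include <glib.h>

    /* A 1x1 dummy allocation with an implicit modifier; if the driver
     * refuses it, don't advertise DMA buffer support at all. */
    static gboolean
    can_allocate_dma_buf (struct gbm_device *device)
    {
      struct gbm_bo *bo;

      bo = gbm_bo_create (device, 1, 1, GBM_FORMAT_XRGB8888,
                          GBM_BO_USE_RENDERING | GBM_BO_USE_LINEAR);
      if (!bo)
        return FALSE;

      gbm_bo_destroy (bo);
      return TRUE;
    }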
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2383>
Previously, we would only check for EXT_swap_buffers_with_damage,
which will generally find an implementation. However, some EGL
implementations do not appear to support that naming of the extension,
preferring to advertise only KHR_swap_buffers_with_damage.
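The resulting lookup, sketched with libepoxy (epoxy_has_egl_extension()
and both entry points are real; Cogl's actual feature tables express
this differently):

    if (epoxy_has_egl_extension (egl_display,
                                 "EGL_EXT_swap_buffers_with_damage"))
      swap_with_damage = eglSwapBuffersWithDamageEXT;
    else if (epoxy_has_egl_extension (egl_display,
                                      "EGL_KHR_swap_buffers_with_damage"))
      swap_with_damage = eglSwapBuffersWithDamageKHR;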
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2316>
Returns TRUE if the active renderer backend can allocate DMA buffers.
This is the case for hardware-accelerated GBM backends, but FALSE for
surfaceless (i.e. no render node) and EGLDevice (legacy NVIDIA paths).
While software-based GBM devices can allocate DMA buffers too, we don't
want to allocate them for offscreen rendering, as we really only use
these for inter-process transfers. Buffers allocated for scanout don't
go through the relevant API, so making it return FALSE for software
devices solves that.
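Callers can then gate their DMA buffer paths on the renderer, along
these lines (a sketch; the function spellings follow this series'
Cogl API, so treat them as assumptions):

    if (!cogl_renderer_is_dma_buf_supported (renderer))
      return FALSE;

    dmabuf_handle = cogl_renderer_create_dma_buf (renderer,
                                                  width, height,
                                                  &error);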
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1939>
Because both code paths require the existence of `GL_TIMESTAMP[_EXT]`,
which is only guaranteed if `ARB_timer_query` (included in GL core 3.3)
is implemented.
We know when that is true because `context->glGenQueries` and
`context->glQueryCounter` are non-NULL. So that is the minimum
requirement for any use of `GL_TIMESTAMP`, even when it is used in
`glGetInteger64v`.
Until now, Raspberry Pi (OpenGL 2.1) would find a working implementation
of `glGetInteger64v` but fail to check whether the driver understands
`GL_TIMESTAMP` (it doesn't).
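The check, sketched (GL_TIMESTAMP and glGetInteger64v() are real GL;
the vtable-style pointers follow Cogl's convention):

    /* Non-NULL ARB_timer_query entry points are the minimum
     * requirement for any use of GL_TIMESTAMP: */
    if (ctx->glGenQueries && ctx->glQueryCounter)
      {
        GLint64 gpu_time_ns;

        ctx->glGetInteger64v (GL_TIMESTAMP, &gpu_time_ns);
      }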
Fixes: https://gitlab.gnome.org/GNOME/mutter/-/issues/2107
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2253>
When Cogl gained support for importing pixmaps, I think there was a
misunderstanding that there is a difference in how it works in GLX and
EGL where GLX needs to rebind the pixmap in order to guarantee that
changes are reflected in the texture after it detects damage, whereas
with EGL it doesn’t. The GLX spec makes it pretty clear that it does
need to rebind, whereas the EGL spec is a bit harder to follow. As
fallout from Mesa MR 12869, it seems like the compositor really does
need to rebind the image to comply with the spec. Notably, in
OES_EGL_image_external there is:
"Binding (or re-binding if already bound) an external texture by calling
BindTexture after all modifications are complete guarantees that
sampling done in future draw calls will return values corresponding to
the values in the buffer at or after the time that BindTexture is
called."
So this commit changes the x11_damage_notify handler for EGL to lazily
queue a rebind like GLX does. The code that binds the image while
allocating the texture has been moved into a reusable helper function.
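The rebind itself looks roughly like this (glEGLImageTargetTexture2DOES()
is the real OES_EGL_image entry point; the field names are illustrative):

    /* In the DamageNotify handler: just mark the texture dirty. */
    tex_pixmap->bind_tex_image_queued = TRUE;

    /* Before the texture is next sampled: */
    if (tex_pixmap->bind_tex_image_queued)
      {
        glBindTexture (GL_TEXTURE_2D, tex_pixmap->gl_texture);
        glEGLImageTargetTexture2DOES (GL_TEXTURE_2D,
                                      tex_pixmap->egl_image);
        tex_pixmap->bind_tex_image_queued = FALSE;
      }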
It seems like there is a bit of a layering violation when accessing the
GL driver internals from the EGL winsys code, but I noticed that the GLX
code also includes the driver GL headers and otherwise it seems pretty
tricky to do properly.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2062>
Because POSIX sh was, with hindsight, not a particularly well-designed
programming language, if we don't 'set -e', then we'll respond to failure
of a setup command such as cd by carrying on regardless.
Signed-off-by: Simon McVittie <smcv@debian.org>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2009>
The assumption here seems to be that it's an overlay onto the
current environment, which would make sense; but the implementation in
gnome-desktop-testing currently removes all other environment variables
(see GNOME/gnome-desktop-testing#1). This causes test failures when the
tests are run in Debian's autopkgtest framework, possibly because PATH
is cleared.
Signed-off-by: Simon McVittie <smcv@debian.org>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2009>
Add support for a cogl function to set the damage_region on an onscreen
framebuffer.
The goal of this is to enable using the EGL_KHR_partial_update
extension, which can reduce memory bandwidth usage on some GPUs,
particularly embedded GPUs that operate on a tiled rendering model.
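Underneath, this maps onto the extension's one entry point
(eglSetDamageRegionKHR() is real EGL_KHR_partial_update API; per the
spec it must be called once per frame, before any rendering to the
surface):

    /* Rectangles are x, y, width, height in surface coordinates: */
    EGLint rect[4] = { 0, 0, 64, 64 };

    eglSetDamageRegionKHR (egl_display, egl_surface, rect, 1);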
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2023>
These match their alpha counterparts, apart from not setting the
alpha bit. This allows our internal machinery to more easily
distinguish whether we need a slow alpha pass during rendering or not.
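For instance, a renderer can then branch directly on the format's
alpha flag (a sketch; COGL_A_BIT is the alpha bit in Cogl's pixel
format encoding):

    gboolean needs_alpha_pass = (format & COGL_A_BIT) != 0;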
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1810>
These match their alpha counterparts, apart from not setting the
alpha bit. This allows our internal machinery to more easily
distinguish whether we need a slow alpha pass during rendering or not.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1810>
Using "framebuffer_discard" in the list refers to an invalid extension.
The extension which introduces glDiscardFramebufferEXT is
"GL_EXT_discard_framebuffer".
With this fix, cogl_gl_framebuffer_fbo_discard_buffers can actually call
glDiscardFramebufferEXT.
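Sketched with a plain extension check (epoxy_has_gl_extension() and
glDiscardFramebufferEXT() are real; Cogl drives this from its feature
tables instead):

    if (epoxy_has_gl_extension ("GL_EXT_discard_framebuffer"))
      {
        static const GLenum attachments[] = { GL_COLOR_ATTACHMENT0,
                                              GL_DEPTH_ATTACHMENT };

        glDiscardFramebufferEXT (GL_FRAMEBUFFER,
                                 G_N_ELEMENTS (attachments), attachments);
      }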
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1974>
A trace "anchor" is a trace head (CoglTraceHead) that is placed in a
certain scope (e.g. function scope), but then only triggered in another
scope, e.g. a condition.
This makes it possible to have optional trace instrumentation that is
only enabled given e.g. a debug flag.
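Usage looks roughly like this (macro spellings per the new cogl-trace
API; treat the exact names as a sketch):

    static void
    paint_frame (void)
    {
      /* The head is placed in function scope... */
      COGL_TRACE_SCOPED_ANCHOR (DetailedPaint);

      /* ...but the span only begins if the debug flag is set; */
      if (detailed_tracing_enabled)
        COGL_TRACE_BEGIN_ANCHORED (DetailedPaint, "Paint (detailed)");

      /* ...and it ends automatically when the scope is left. */
    }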
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1700>
Seems glGetString(GL_RENDERER) in the wild can return NULL, causing
issues with strstr(). Handle this more gracefully by using
g_return_val_if_fail(), assuming that a NULL renderer means software
rendering.
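Sketched (g_return_val_if_fail() is the real GLib macro; the function
is condensed):

    static gboolean
    renderer_is_hardware_accelerated (void)
    {
      const char *renderer = (const char *) glGetString (GL_RENDERER);

      /* Some drivers return NULL here; treat that as software
       * rendering instead of crashing in strstr(). */
      g_return_val_if_fail (renderer != NULL, FALSE);

      return !strstr (renderer, "llvmpipe") /* && ... */;
    }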
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1931>
Add utilities that allow getting the current GPU timestamp and creating
a query which completes upon completion of all operations currently
submitted on a framebuffer. Combined, these two allow measuring how long
it took the GPU to finish rendering something to a framebuffer.
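In GL terms the two utilities boil down to this (glGetInteger64v(),
glGenQueries() and glQueryCounter() are real ARB_timer_query usage;
the surrounding code is condensed):

    /* Current GPU timestamp, without stalling the pipeline: */
    GLint64 now_ns;
    glGetInteger64v (GL_TIMESTAMP, &now_ns);

    /* A query that completes when all work submitted so far does: */
    GLuint query;
    glGenQueries (1, &query);
    glQueryCounter (query, GL_TIMESTAMP);

    /* Later, glGetQueryObjectui64v() yields the completion timestamp;
     * subtracting now_ns gives the GPU rendering duration. */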
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1762>
We need to call eglBindAPI() with GLES before we set up the secondary
GPU blit. We've been lucky to not really need this so far, as GLES,
which is what the secondary blit uses, has been the default. But in
order to not depend on the default, or if we want to create the
secondary blit objects after initializing cogl, we must make sure to
bind the right API at the right time.
As we need to bind the GLES API when setting up the secondary blit, we
also need to make sure that cogl gets the right API bound back when
that's done, so Cogl can continue working. For this, add a "bind_api()"
method on the CoglRenderer object that knows which API is the correct
one to bind.
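The flow, sketched (eglBindAPI() and the API enums are real EGL; the
CoglRenderer method spelling is illustrative):

    /* Setting up the secondary GPU blit requires GLES: */
    eglBindAPI (EGL_OPENGL_ES_API);
    /* ... create the blit's contexts and shaders ... */

    /* Then hand the choice back to Cogl, which knows what it needs: */
    cogl_renderer_bind_api (renderer);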
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1828>
Just like we do on EGL. Two bits are required because
`cogl-clip-stack-gl.c` needs each stencil buffer element to be able to
count from 0 to 2.
This mistake probably went unnoticed because:
* Drivers usually provide more than 1 anyway; and
* Optimizations in `cogl-clip-stack-gl.c` avoid calling the code that
needs to count past 1 in most cases.
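On GLX this amounts to a one-line change in the framebuffer config
attributes (GLX_STENCIL_SIZE is the real attribute; surrounding
entries elided):

    static const int attribs[] =
      {
        /* ... */
        GLX_STENCIL_SIZE, 2, /* the clip stack counts 0..2 per pixel */
        None
      };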
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1873>
Previously we were using a mask of 0x1 for the lifetime of the stencil.
This was wrong for two reasons:
* The intersection algorithm needs to count up to a maximum of 2, so a
mask of 1 would clamp to 1 instead. Then decrementing all pixels
resulted in all pixels being zero even though we want some to be 1.
So the stencil then blocked some color buffer pixels being rendered.
* The lifetime of the mask was too long. By leaving it non-zero at
the end of the function we could accidentally end up modifying the
stencil contents during our later color buffer paints.
This fixes faulty rendering of some actors seen in gnome-shell with
test case: `env COGL_DEBUG=stencilling`
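The corrected pattern, sketched with plain GL stencil calls (values
per the description above; the real code lives in
`cogl-clip-stack-gl.c`):

    glStencilMask (0x3);                /* enough bits to count 0..2 */
    glStencilFunc (GL_ALWAYS, 0x0, 0x3);
    glStencilOp (GL_KEEP, GL_INCR, GL_INCR);
    /* ... draw each clip geometry; overlaps reach 2 ... */
    glStencilOp (GL_KEEP, GL_DECR, GL_DECR);
    /* ... full-screen pass; the intersection is left at 1 ... */

    glStencilMask (0x0);                /* stop writing before the later
                                           color buffer paints */
    glStencilFunc (GL_EQUAL, 0x1, 0x3); /* pass only inside the clip */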
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1873>
Previously we were using a mask of 0x1 for the lifetime of the stencil.
This was wrong for two reasons:
* The intersection algorithm needs to count up to a maximum of 2, so a
mask of 1 would clamp to 1 instead. Then decrementing all pixels
resulted in all pixels being zero even though we want some to be 1.
So the stencil then blocked some color buffer pixels being rendered.
* The lifetime of the mask was too long. By leaving it non-zero at
the end of the function we could accidentally end up modifying the
stencil contents during our later color buffer paints.
This fixes missing rendering of some actors seen in gnome-shell with
test case: `env COGL_DEBUG=stencilling CLUTTER_PAINT=disable-clipped-redraws`
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1873>
The cogl tests need a display server to run against, but since we
use TestEnvironment, only the listed env variables will be exposed to
the test, and so no DISPLAY will be set when launching it with
gnome-desktop-testing-runner.
Given this, just run the tests using xvfb-run, so that we match what
happens in CI and ensure that the tests run in a safe environment.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1876>
As documented in g_once_init_enter(): "While @location has a volatile
qualifier, this is a historical artifact and the pointer passed to it
should not be volatile." And effectively this now warns with modern
GLib.
Drop the "volatile" qualifier from these static variables, as expected.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1785>
This concerns only the cases when the presentation timestamp is received
directly from the device (from KMS or from GLX). In the majority of
cases this timestamp is already MONOTONIC. When it isn't, after this
commit, the current value of the MONOTONIC clock is sampled instead.
The alternative is to store the clock id alongside the timestamp, with
possible values of MONOTONIC, REALTIME (from KMS) and GETTIMEOFDAY (from
GLX; this might be the same as REALTIME, I'm not sure), and then
"convert" the timestamp to MONOTONIC when needed. An example of such a
conversion was done in compositor.c (removed in this commit). It would
also be needed for the presentation-time Wayland protocol. However, it
seems that the vast majority of up-to-date systems are using MONOTONIC
anyway, making this effort not justified.
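The resulting policy, sketched (g_get_monotonic_time() is the real
GLib call; the plausibility test for the device timestamp is purely
illustrative):

    int64_t now_us = g_get_monotonic_time ();

    /* Keep device timestamps that are plausibly MONOTONIC already;
     * otherwise fall back to sampling the MONOTONIC clock now. */
    if (presentation_time_us <= 0 || presentation_time_us > now_us)
      presentation_time_us = now_us;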
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1484>
KMS and GLX device timestamps have microsecond precision, and whenever
we sample the time ourselves it's not the real presentation time anyway,
so nanosecond precision for that case is unnecessary.
The presentation timestamp in ClutterFrameInfo is in microseconds, too,
so this commit makes them have the same precision.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1484>