If the EGL_KHR_no_config_context extension is supported, use it to
choose a format per onscreen which is compatible with the scanout CRTC
and the GL rendering API used.
Suggested by Jonas Ådahl.
v2:
* Drop code which checked for GLES3 renderability. Makes no sense for
  various reasons, in particular that EGLConfigs are about EGLSurfaces,
  whereas secondary GPU contexts use an FBO for blitting.
* Use error parameter directly for meta_renderer_native_choose_gbm_format
call (Jonas Ådahl)
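Roughly, the per-onscreen choice looks like the sketch below; the
helper name and the exact `meta_renderer_native_choose_gbm_format`
signature are assumed for illustration:

```c
/* Sketch only: with no-config contexts, each onscreen can pick a GBM
 * format compatible with its scanout CRTC instead of sharing one
 * renderer-wide EGLConfig. Helper names and signatures are assumed. */
static gboolean
choose_onscreen_format (MetaRendererNative  *renderer_native,
                        MetaCrtcKms         *crtc_kms,
                        uint32_t            *out_gbm_format,
                        GError             **error)
{
  if (!renderer_has_egl_extension (renderer_native,
                                   "EGL_KHR_no_config_context"))
    {
      /* Without the extension, stick to the single shared format. */
      *out_gbm_format = GBM_FORMAT_XRGB8888;
      return TRUE;
    }

  return meta_renderer_native_choose_gbm_format (renderer_native,
                                                 crtc_kms,
                                                 out_gbm_format,
                                                 error);
}
```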
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3139>
If the EGL_KHR_no_config_context extension is supported, pass
EGL_NO_CONFIG_KHR to eglCreateContext. This will allow binding the
context to surfaces created with different configs.
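A minimal sketch, assuming a libepoxy-style extension check:

```c
/* Create the context without a fixed EGLConfig when the extension is
 * available, so it can later bind to surfaces created with different
 * configs. */
EGLContext context = EGL_NO_CONTEXT;

if (epoxy_has_egl_extension (egl_display, "EGL_KHR_no_config_context"))
  context = eglCreateContext (egl_display,
                              EGL_NO_CONFIG_KHR,
                              EGL_NO_CONTEXT,
                              context_attribs);
```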
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3139>
In profilers with a timeline or flame graph views it is a very common
scenario that a span name must be displayed in an area too short to fit
it. In this case, profilers may implement automatic shortening to show
the most important part of the span name in the available area. This
makes it easier to tell what's going on without having to zoom all the
way in.
The current trace span names in Mutter don't follow any consistent
system and cannot be shortened automatically.
The Tracy profiler shortens with C++ in mind. Consider an example C++
name:
SomeNamespace::SomeClass::some_method(args)
The method name is the most important part; the arguments and the
class name are cut if necessary, in that order of importance.
This logic makes sense for other languages too, like Rust. I can see it
being implemented in other profilers like Sysprof, since it's generally
useful.
Hence, this commit adjusts our trace names to look like C++ and
arranges the parts of each name in that order of importance.
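For illustration, a span in the new style using Cogl's tracing macro
(the span name here is made up):

```c
/* Illustrative span name in the new C++-like layout: the "method"
 * part comes last, so a profiler trims the arguments and class or
 * namespace first when space runs out. */
COGL_TRACE_BEGIN_SCOPED (StagePaint, "ClutterStage::paint()");
```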
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3402>
Group all three config files from Clutter/Cogl/Meta into one, remove
unused configurations, and deduplicate the rest.
This also fixes Cogl's usage of HAS_X11/HAS_XLIB to match the expected
build options.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3368>
- Make Texture a parent GObject class, moving the vtable funcs to vfuncs
  instead of using an interface, as we would like dispose to free the
  TextureLoader.
- Make the various texture sub-types inherit from it.
- Make all the sub-types' constructors return a CoglTexture instead of
  their respective specific type, since most of the time the consuming
  functions accept a CoglTexture, just like GTK widget constructors
  return GtkWidget.
- Fix up the basics of gi-docgen for all these types.
- Remove CoglPrimitiveTexture as it is useless: it is just a texture under the hood.
- Remove CoglMetaTexture: for the exact same reason as above.
- Switch various memory management functions to the g_ variants instead of the cogl_ ones.
Note we still want to get rid of _cogl_texture_init; that is left for
the next commit.
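A sketch of the resulting calling convention (exact constructor
signature assumed):

```c
/* Constructors now hand back the base type directly, so no cast is
 * needed at the call site, much like gtk_button_new() returning a
 * GtkWidget pointer. */
CoglTexture *texture;

texture = cogl_texture_2d_new_with_size (context, width, height);
cogl_pipeline_set_layer_texture (pipeline, 0, texture);
```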
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3193>
Every `mtk_x11_error_trap_push()` must be paired
with an `mtk_x11_error_trap_pop[_with_return]()` call;
otherwise all future errors will be caught and ignored
even if they shouldn't be.
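For illustration, a typical pairing; the X request in between is
arbitrary and the signatures are sketched from the names above:

```c
/* Every push is matched by exactly one pop, so stray errors outside
 * this window are still reported normally. */
mtk_x11_error_trap_push (xdisplay);
XMapWindow (xdisplay, xwindow);
if (mtk_x11_error_trap_pop_with_return (xdisplay) != Success)
  g_warning ("Mapping window failed");
```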
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3328>
Or else `glXQueryDrawable` will fail per the `GLX_EXT_buffer_age` spec:
> If querying GLX_BACK_BUFFER_AGE_EXT and <draw> is not bound to
> the calling thread's current context a GLXBadDrawable error is
> generated.
This mistake went unnoticed until `mtk_x11_error_trap_push` was
introduced (55e3b2e5193), because for some reason it is incapable of
trapping `glXQueryDrawable` errors. Prior to that, it seems
`cogl_onscreen_glx_get_buffer_age` would trap the error and so always
return zero.
This means we're reenabling clipped redraws on X11 here for the first
time in a long time.
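A sketch of the required ordering:

```c
/* The drawable must be current before its back buffer age can be
 * queried, per GLX_EXT_buffer_age. */
unsigned int age = 0;

glXMakeContextCurrent (xdisplay, glx_window, glx_window, glx_context);
glXQueryDrawable (xdisplay, glx_window, GLX_BACK_BUFFER_AGE_EXT, &age);
```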
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/3007
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3255>
In future commits, we will want to create DMA-BUFs with pixel
formats other than COGL_PIXEL_FORMAT_BGRX_8888. In preparation
for that, let's start passing a new pixel format parameter to
this function, and the corresponding winsys vfunc.
All callers of this function pass COGL_PIXEL_FORMAT_BGRX_8888
for now; subsequent commits will change that.
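The resulting call shape looks roughly like this (the exact signature
is assumed for illustration):

```c
/* Callers now spell the pixel format out explicitly instead of it
 * being implied by the function. */
CoglDmaBufHandle *dmabuf_handle;

dmabuf_handle = cogl_renderer_create_dma_buf (renderer,
                                              COGL_PIXEL_FORMAT_BGRX_8888,
                                              width, height,
                                              &error);
```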
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3175>
This is an X request that may result in errors, so it is better
to have it covered by an error trap.
Thus far it is not, explicitly at least, which means other, less
lenient error traps might not like what happens here. Make the
three-way error-trap interplay between the backend, X11 and Cogl
happen less by chance here.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2864>
Previously, we would only check for EXT_swap_buffers_with_damage, which
will generally find an implementation. However, some EGL implementations
do not advertise that name for the extension, preferring to
only advertise KHR_swap_buffers_with_damage.
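A sketch of probing both names, using libepoxy-style checks:

```c
/* Both names resolve to the same function pointer type; prefer EXT,
 * fall back to KHR. */
PFNEGLSWAPBUFFERSWITHDAMAGEEXTPROC swap_with_damage = NULL;

if (epoxy_has_egl_extension (egl_display,
                             "EGL_EXT_swap_buffers_with_damage"))
  swap_with_damage =
    (void *) eglGetProcAddress ("eglSwapBuffersWithDamageEXT");
else if (epoxy_has_egl_extension (egl_display,
                                  "EGL_KHR_swap_buffers_with_damage"))
  swap_with_damage =
    (void *) eglGetProcAddress ("eglSwapBuffersWithDamageKHR");
```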
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2316>
Returns TRUE if the active renderer backend can allocate DMA buffers.
This is the case for hardware accelerated GBM backends, but FALSE for
surfaceless (i.e. no render node) and EGLDevice (legacy NVIDIA paths).
While software based GBM devices can allocate DMA buffers, we don't want
to allocate them for offscreen rendering, as we really only use these
for inter-process transfers. Buffers allocated for scanout don't use the
relevant API, so making it return FALSE for software devices solves
that.
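Hypothetical usage from a consumer such as the screencast code (the
two helpers here are made up for illustration):

```c
/* Only offer a DMA-BUF based buffer type when the renderer can
 * actually allocate them; otherwise fall back to shared memory. */
if (cogl_renderer_is_dma_buf_supported (cogl_renderer))
  advertise_dma_buf_buffers ();
else
  advertise_shm_buffers ();
```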
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1939>
Because both code paths require the existence of `GL_TIMESTAMP[_EXT]`
which is only guaranteed if `ARB_timer_query` (included in GL core 3.3)
is implemented.
We know when that is true because `context->glGenQueries` and
`context->glQueryCounter` are non-NULL. So that is the minimum
requirement for any use of `GL_TIMESTAMP`, even when it is used in
`glGetInteger64v`.
Until now, Raspberry Pi (OpenGL 2.1) would find a working implementation
of `glGetInteger64v`, but we failed to check whether the driver
understands `GL_TIMESTAMP` (it doesn't).
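A sketch of the gating condition described above:

```c
/* GL_TIMESTAMP is only safe to use (even with glGetInteger64v) when
 * the timer query entry points resolved, i.e. ARB_timer_query or
 * GL core 3.3+ is present. */
static gboolean
have_gl_timestamp (CoglContext *context)
{
  return context->glGenQueries != NULL &&
         context->glQueryCounter != NULL;
}
```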
Fixes: https://gitlab.gnome.org/GNOME/mutter/-/issues/2107
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2253>
When Cogl gained support for importing pixmaps, I think there was a
misunderstanding that there is a difference in how it works in GLX and
EGL where GLX needs to rebind the pixmap in order to guarantee that
changes are reflected in the texture after it detects damage, whereas
with EGL it doesn’t. The GLX spec makes it pretty clear that it does
need to rebind whereas the EGL spec is a bit harder to follow. As a
fallout from Mesa MR 12869, it seems like the compositor really does
need to rebind the image to comply with the spec. Notably, in
OES_EGL_image_external there is:
"Binding (or re-binding if already bound) an external texture by calling
BindTexture after all modifications are complete guarantees that
sampling done in future draw calls will return values corresponding to
the values in the buffer at or after the time that BindTexture is
called."
So this commit changes the x11_damage_notify handler for EGL to lazily
queue a rebind like GLX does. The code that binds the image while
allocating the texture has been moved into a reusable helper function.
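A sketch of the lazy rebind, with assumed field names:

```c
/* On damage we only set a flag; the actual re-import happens at the
 * next bind, per the OES_EGL_image_external wording quoted above.
 * Field names are assumed. */
if (tex_pixmap->bind_tex_image_queued)
  {
    glBindTexture (GL_TEXTURE_2D, tex_pixmap->gl_texture);
    glEGLImageTargetTexture2DOES (GL_TEXTURE_2D, tex_pixmap->egl_image);
    tex_pixmap->bind_tex_image_queued = FALSE;
  }
```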
It seems like there is a bit of a layering violation when accessing the
GL driver internals from the EGL winsys code, but I noticed that the GLX
code also includes the driver GL headers and otherwise it seems pretty
tricky to do properly.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2062>
Add support for a cogl function to set the damage_region on an onscreen
framebuffer.
The goal of this is to enable using the EGL_KHR_partial_update extension
which can potentially reduce memory bandwidth usage by some GPUs,
particularly on embedded GPUs that operate on a tiling rendering model.
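On the EGL winsys this would boil down to something like the following
(API shape assumed):

```c
/* Forward the damage region to EGL_KHR_partial_update; rects use the
 * { x, y, width, height } layout eglSetDamageRegionKHR expects. */
EGLint rects[4] = { x, y, width, height };

if (epoxy_has_egl_extension (egl_display, "EGL_KHR_partial_update"))
  eglSetDamageRegionKHR (egl_display, egl_surface, rects, 1);
```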
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2023>
We need to call eglBindAPI() with GLES before we set up the secondary
GPU blit. We've been lucky not to really need this, as GLES, which is
what the secondary blit uses, has been the default. In order not to
depend on the default, and to be able to create the secondary blit
objects after initializing Cogl, we must make sure to bind the right
API at the right time.
As we need to bind the GLES API when setting up the secondary blit, we
must make sure Cogl gets the right API bound back when that's done, so
Cogl can continue working. For this, add a "bind_api()" method on the
CoglRenderer object that knows which API is correct to bind.
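The resulting flow, sketched with assumed function names:

```c
/* Bind GLES for the secondary GPU blit setup, then let Cogl re-bind
 * whichever API its driver actually needs. The helper and method
 * names here are illustrative. */
eglBindAPI (EGL_OPENGL_ES_API);
init_secondary_gpu_state (renderer_native, onscreen);
cogl_renderer_bind_api (cogl_renderer);
```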
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1828>
Just like we do on EGL. Two bits are required because
`cogl-clip-stack-gl.c` needs each stencil buffer element to be able to
count from 0 to 2.
This mistake probably went unnoticed because:
* Drivers usually provide more than 1 anyway; and
* Optimizations in `cogl-clip-stack-gl.c` avoid calling the code that
needs to count past 1 in most cases.
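A sketch of the GLX side of this fix:

```c
/* Request at least 2 stencil bits when choosing the framebuffer
 * config, so each stencil element can count from 0 to 2 as the clip
 * stack requires. Attribute list shortened for illustration. */
static const int fbconfig_attribs[] = {
  GLX_DOUBLEBUFFER, True,
  GLX_RED_SIZE, 1,
  GLX_GREEN_SIZE, 1,
  GLX_BLUE_SIZE, 1,
  GLX_STENCIL_SIZE, 2,
  None
};
```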
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1873>
This concerns only the cases when the presentation timestamp is received
directly from the device (from KMS or from GLX). In the majority of
cases this timestamp is already MONOTONIC. When it isn't, after this
commit, the current value of the MONOTONIC clock is sampled instead.
The alternative is to store the clock id alongside the timestamp, with
possible values of MONOTONIC, REALTIME (from KMS) and GETTIMEOFDAY (from
GLX; this might be the same as REALTIME, I'm not sure), and then
"convert" the timestamp to MONOTONIC when needed. An example of such a
conversion was done in compositor.c (removed in this commit). It would
also be needed for the presentation-time Wayland protocol. However, it
seems that the vast majority of up-to-date systems use MONOTONIC
anyway, making this effort unjustified.
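A sketch of the resulting policy (variable names illustrative):

```c
/* Keep the device timestamp only if it is already MONOTONIC;
 * otherwise sample the MONOTONIC clock now. */
int64_t presentation_time_us;

if (timestamp_is_monotonic)
  presentation_time_us = device_timestamp_us;
else
  presentation_time_us = g_get_monotonic_time ();
```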
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1484>
KMS and GLX device timestamps have microsecond precision, and whenever
we sample the time ourselves it's not the real presentation time anyway,
so nanosecond precision for that case is unnecessary.
The presentation timestamp in ClutterFrameInfo is in microseconds, too,
so this commit makes them have the same precision.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1484>
A flag indicating whether the presentation timestamp was provided by the
display hardware (rather than sampled in user space).
It will be used for the presentation-time Wayland protocol.
This is definitely the case for page_flip_handler(), and I'm assuming
this is also the case for the two instances in the GLX code.
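In the KMS page flip handler this would look roughly like the
following (flag and field names assumed):

```c
/* Mark the timestamp as coming from the display hardware so the
 * presentation-time protocol can report a hardware clock later. */
frame_info->presentation_time = presentation_time_us;
frame_info->flags |= CLUTTER_FRAME_INFO_FLAG_HW_CLOCK;
```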
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1484>
Mutter needs to fetch the X11 Window ID from the onscreen and did that
by using an X11 specific API on the CoglOnscreen, where the X11 type was
"expanded" (Window -> uint32_t). Change this by introducing an interface
called CoglX11Onscreen, implemented by both the Xlib and GLX onscreen
implementations, that keeps the right type (Window), while avoiding X11
specific API for CoglOnscreen.
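A sketch of the interface shape this describes (macro and vfunc names
assumed, following the usual GObject interface conventions):

```c
/* The getter keeps the native Window type instead of widening it to
 * uint32_t; both the Xlib and GLX onscreens implement the vfunc. */
Window
cogl_x11_onscreen_get_x11_window (CoglX11Onscreen *x11_onscreen)
{
  CoglX11OnscreenInterface *iface =
    COGL_X11_ONSCREEN_GET_IFACE (x11_onscreen);

  return iface->get_x11_window (x11_onscreen);
}
```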
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1514>
Mutter didn't use the APIs for resizability of CoglOnscreens but
managed the size itself. For the native backend, we don't ever resize
onscreens. Thus, remove this unused functionality.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1514>