Recent versions of Xwayland can allow or disallow X11 clients with a
different endianness to connect.
Add a setting to configure this feature from Mutter, which spawns
Xwayland.
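A minimal sketch of how such a setting could translate into Xwayland
command line arguments when spawning it; the
+byteswappedclients/-byteswappedclients option names and the helper
below are assumptions for illustration, not necessarily the actual code:
```
#include <glib.h>

static void
append_byte_swapping_args (GPtrArray *xwayland_argv,
                           gboolean   allow_byte_swapped_clients)
{
  /* Assumed option names: +byteswappedclients to allow and
   * -byteswappedclients to deny byte-swapped X11 clients. */
  g_ptr_array_add (xwayland_argv,
                   g_strdup (allow_byte_swapped_clients ?
                             "+byteswappedclients" :
                             "-byteswappedclients"));
}
```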
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2785>
This define was dropped by commit 0e8aaebc00 (xwayland: Make
XSetIOErrorExitHandler() mandatory), but some #ifdef checks were
brought back by commit 36f30341ac (wayland: Add a prepare-shutdown
signal).
Since there is no define anymore in config.h, these pieces of code
were unintentionally disabled, and a meta_get_display() call was also
left over. Remove the ifdefs and update the code to build again.
Fixes: 36f30341ac - wayland: Add a prepare-shutdown signal
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2786>
This XChangeWindowAttributes call was never surrounded by an error trap
and was not really expected to fail with BadWindow, since the frame window
would be owned by Mutter itself.
This, however, is no longer true, and given the right timing we might get
a BadWindow error for the frame window.
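A minimal sketch of wrapping the call in an X error trap, assuming
mutter's meta_x11_error_trap_push()/meta_x11_error_trap_pop() helpers;
the function and parameter names are illustrative:
```
#include <X11/Xlib.h>
#include <meta/meta-x11-errors.h>

static void
set_frame_event_mask (MetaX11Display       *x11_display,
                      Display              *xdisplay,
                      Window                frame_xwindow,
                      XSetWindowAttributes *attrs)
{
  meta_x11_error_trap_push (x11_display);
  XChangeWindowAttributes (xdisplay, frame_xwindow, CWEventMask, attrs);
  /* A BadWindow for a frame window that was destroyed in the meantime is
   * now swallowed by the trap instead of terminating the X connection. */
  meta_x11_error_trap_pop (x11_display);
}
```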
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2745>
Commit 4e0ffba5c attempted to fix initialization of keyboard a11y,
but mousekeys attempt to create a virtual input device at a point
where it is still too early to do so.
Defer this operation until keyboard devices are added, so that the
seat input thread is guaranteed to already be set up.
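A minimal sketch of such a deferral using Clutter's public seat API; the
struct and callback names are illustrative, not the actual identifiers
in the native backend:
```
#include <clutter/clutter.h>

typedef struct
{
  ClutterVirtualInputDevice *virtual_pointer;
} MouseKeysState;

static void
on_device_added (ClutterSeat        *seat,
                 ClutterInputDevice *device,
                 gpointer            user_data)
{
  MouseKeysState *state = user_data;

  if (clutter_input_device_get_device_type (device) != CLUTTER_KEYBOARD_DEVICE)
    return;

  /* By the time a keyboard device shows up, the seat input thread is
   * already set up, so creating the virtual device is safe. */
  if (!state->virtual_pointer)
    state->virtual_pointer =
      clutter_seat_create_virtual_device (seat, CLUTTER_POINTER_DEVICE);
}
```
The callback above would be connected to the seat's "device-added"
signal.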
Fixes: 4e0ffba5c - backends/native: Initialize keyboard a11y on startup
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2778>
Quoting Carlos:
The META_PRIORITY_EVENTS ± 1 happening below are in order to set these idles
and timeouts in a priority that is relative to the literal GDK event priority,
making those diverge is a likely way to sneakily break things.
But that's unlikely to happen, and decoupling mutter from GTK further
should make it moot, so perhaps it's alright after all.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2407>
Clutter has an API to get the text direction but used to depend
on gtk3's translation domain. In order to avoid broken i18n
in case gtk3 is not installed, move the translatable string to
clutter itself.
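A minimal sketch of the usual gettext trick this relies on, with the
string living in clutter's own translation domain; the helper name is
illustrative and the exact string may differ:
```
#include <clutter/clutter.h>
#include <glib/gi18n.h>

static ClutterTextDirection
guess_default_text_direction (void)
{
  /* Translators: translate this to "default:RTL" for right-to-left
   * locales, leave it untranslated otherwise. */
  if (g_strcmp0 (_("default:LTR"), "default:RTL") == 0)
    return CLUTTER_TEXT_DIRECTION_RTL;

  return CLUTTER_TEXT_DIRECTION_LTR;
}
```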
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2407>
Now that dynamic max render time uses a new algorithm and takes dispatch
lateness into account, this seems worth a shot. We'll see how it works
out in the wild.
The net result compared to before these changes is still a slightly
higher (by ~0.5 ms) minimum latency for me, as measured by
weston-presentation-shm. It should be less vulnerable to frame drops
though.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2500>
Store only two values per kind of duration: The short term and long term
maximum.
The short term maximum is updated in each frame clock dispatch. The long
term maximum is updated at most once per second: If the short term
maximum is higher, the long term maximum is updated to match it.
Otherwise, a fraction of the delta between the two maxima is subtracted
from the long term maximum.
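A minimal sketch of that update rule; the identifiers, the struct layout
and the per-second reset of the short term value are illustrative
assumptions rather than the actual code:
```
#include <glib.h>

#define LONG_TERM_DIVISOR 2 /* v2: divisor 2 instead of 4 */

typedef struct
{
  gint64 short_term_max_us; /* updated on every frame clock dispatch */
  gint64 long_term_max_us;  /* updated at most once per second */
} DurationMaxima;

static void
update_short_term_max (DurationMaxima *m,
                       gint64          duration_us)
{
  m->short_term_max_us = MAX (m->short_term_max_us, duration_us);
}

/* Called at most once per second. */
static void
update_long_term_max (DurationMaxima *m)
{
  if (m->short_term_max_us > m->long_term_max_us)
    {
      m->long_term_max_us = m->short_term_max_us;
    }
  else
    {
      /* Slow exponential drop-off: subtract a fraction of the delta. */
      m->long_term_max_us -=
        (m->long_term_max_us - m->short_term_max_us) / LONG_TERM_DIVISOR;
    }

  /* Start a fresh short term window for the next second (assumption). */
  m->short_term_max_us = 0;
}
```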
Compared to the previous algorithm:
* The calculations are simpler.
* The calculated max render time has a slow exponential drop-off (by at
most a few milliseconds every second) instead of potentially abruptly
dropping after as few as 16 frames.
This should fix https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/4830
since the short term maximum should always include a sample from the
clock's second tick.
v2:
* Use divisor 2 instead of 4.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2500>
Dispatch lateness is the difference between when we wanted frame clock
dispatch to run and when it actually started running. This can be up to
1ms even under normal circumstances due to process scheduling
granularity, or even higher under load.
This keeps track of dispatch lateness of the last 16 frame clock
dispatches, and incorporates the maximum into the dynamic render time
estimate.
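A minimal sketch of such tracking with a ring buffer of 16 samples; the
identifiers are illustrative, not the actual ones:
```
#include <glib.h>

#define N_LATENESS_SAMPLES 16

typedef struct
{
  gint64 samples_us[N_LATENESS_SAMPLES];
  int next_sample;
} DispatchLateness;

static void
record_dispatch_lateness (DispatchLateness *dl,
                          gint64            expected_dispatch_time_us,
                          gint64            actual_dispatch_time_us)
{
  dl->samples_us[dl->next_sample] =
    actual_dispatch_time_us - expected_dispatch_time_us;
  dl->next_sample = (dl->next_sample + 1) % N_LATENESS_SAMPLES;
}

static gint64
max_dispatch_lateness_us (const DispatchLateness *dl)
{
  gint64 max_us = 0;
  int i;

  /* The maximum over the window is what gets added to the dynamic
   * render time estimate. */
  for (i = 0; i < N_LATENESS_SAMPLES; i++)
    max_us = MAX (max_us, dl->samples_us[i]);

  return max_us;
}
```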
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2500>
There are two tests; one checks that clearing with a color that cannot
be represented using 8 bits per channel doesn't lose precision when
painted and then read back using glReadPixels(). If the texture backing
store had 8 bits per channel instead of 10, we'd get a different value
back.
The other test checks that painting from one fbo to another also doesn't
lose that precision.
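A rough sketch of the shape of the first test, assuming Cogl's
framebuffer clear/read-back API; the framebuffer setup, size and helper
name are illustrative:
```
#include <stdint.h>
#include <glib.h>
#include <cogl/cogl.h>

static void
check_rgb10_clear_precision (CoglFramebuffer *offscreen_rgb10_fb)
{
  /* 1/1023 has no exact representation with 8 bits per channel. */
  float value = 1.0f / 1023.0f;
  uint32_t pixel;

  cogl_framebuffer_clear4f (offscreen_rgb10_fb, COGL_BUFFER_BIT_COLOR,
                            value, value, value, 1.0f);
  cogl_framebuffer_read_pixels (offscreen_rgb10_fb, 0, 0, 1, 1,
                                COGL_PIXEL_FORMAT_ABGR_2101010,
                                (uint8_t *) &pixel);

  /* With a 10 bpc backing store the red channel reads back as 1;
   * an 8 bpc store would have rounded the value away to 0. */
  g_assert_cmpint (pixel & 0x3ff, ==, 1);
}
```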
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2461>
OpenGL requires more hand holding in the driver regarding what pixel
memory layouts can be written when calling glReadPixels(), compared to
GLES2. Let's move the details of this logic to the corresponding
backends, so that in the future the GLES2 backend can be adapted to
handle more formats without placing that logic in the generic layer.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2461>
COGL_PIXEL_FORMAT_ABGR_2101010 is defined to mean the 2 A bits are
placed on the most significant bits of a 32 bit unsigned integer,
followed by B on the following 10 bits, and so on, until R on the 10
least significant bits.
UNSIGNED_INT_2_10_10_10_REV_EXT is defined to represent color channels
as
```
31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
-------------------------------------------------------------------------------------
| a | b | g | r |
-------------------------------------------------------------------------------------
```
As can be seen, this matches COGL_PIXEL_FORMAT_ABGR_2101010, meaning
that's the format we can directly read and write.
In Cogl, when finding the GL formats, we get the tuple with the GL
format given the format we pass, but we also get back a "required
format" (CoglPixelFormat). This required format represents the format
that is required when reading actual pixels from GLES. In GLES, the
above mentioned format is the only one supported by the
EXT_texture_type_2_10_10_10_REV extension, thus for other types we need
to do the CPU side conversion ourselves. To achieve this, correctly
return COGL_PIXEL_FORMAT_ABGR_2101010 as the required format.
The internal format should also be GL_RGB10_A2, not GL_RGBA.
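For illustration, a minimal sketch of the CPU side unpacking implied by
that layout; the helper name is hypothetical:
```
#include <stdint.h>

static void
unpack_abgr_2101010 (uint32_t  pixel,
                     uint16_t *red,
                     uint16_t *green,
                     uint16_t *blue,
                     uint16_t *alpha)
{
  *alpha = (pixel >> 30) & 0x3;   /* 2 most significant bits */
  *blue  = (pixel >> 20) & 0x3ff; /* next 10 bits */
  *green = (pixel >> 10) & 0x3ff;
  *red   = pixel & 0x3ff;         /* 10 least significant bits */
}
```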
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2461>
Commit bf84b24 created meta-enums.h, but it's still pretty empty; the
vast majority of enum definitions are still in common.h.
Move the Meta enum definitions to meta-enums.h as one would expect them
to be found.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2467>
This replaces the v1 implementation, which is now renamed to
legacy-xdg-foreign. Both implementations use the same data structures
internally, so that protocol version mismatches between the importer
client and the exporter client don't cause failures.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2770>