The helper doesn't do anything that makes it worth exposing as public
API. End users such as GNOME Shell could have an in-tree helper if they
end up using it that much.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3086>
The shell might raise windows and mark them as recently used on another
workspace when an app gets activated on that workspace. Marking the
windows as recently used only on the current workspace thus results in
inconsistent focus when another window of the same app is closed.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3315>
There are existing extensions that implement desktop icons as
a combination of a GTK program and a small extension to make
the Wayland window behave as if it were of type DESKTOP on X11.
That's quite painful, as it requires reimplementing WM behavior
that is already implemented in mutter itself (stacking, stickiness,
skip-taskbar, ...), as well as modifying gnome-shell to consider
the window in addition to "real" DESKTOP windows (workspace-switch
animations, ctrl-alt-tab, ...).
In addition to that, other extensions may also have special handling
of DESKTOP windows, and their code cannot easily be monkey-patched
to handle "alternative" desktop icons.
This whole game of whack-a-mole can easily be avoided by allowing
desktop-icons extensions to mark their desktop windows as DESKTOP,
so do just that.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3305>
Assigning the corresponding stack layer for DESKTOP windows is
currently X11-specific, because there is no way for Wayland
clients to set the DESKTOP window type.
This is about to change, so move the code to the generic layer
handling.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3305>
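As an illustration of what the generic layer handling amounts to, here is a simplified sketch of window-type based layer assignment (not the exact mutter code; the real logic in core/stack.c considers more state):

```c
/* Simplified sketch: map a window's type to its stack layer. */
static MetaStackLayer
get_standalone_layer (MetaWindow *window)
{
  switch (window->type)
    {
    case META_WINDOW_DESKTOP:
      return META_LAYER_DESKTOP;
    case META_WINDOW_DOCK:
      return META_LAYER_DOCK;
    default:
      return META_LAYER_NORMAL;
    }
}
```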
Change the order of events to adhere to the Wayland specification for
wl_keyboard.enter, which mandates:
> The compositor must send the wl_keyboard.modifiers event after
> this event.
Mutter currently sends the modifiers event before the enter event,
which may break applications that require information about the focused
surface in order to properly handle the modifiers.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/2231
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3341>
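A minimal sketch of the corrected ordering, using the wayland-server API (serial and modifier state handling are omitted; the variables come from the compositor's keyboard focus code):

```c
/* Send wl_keyboard.enter first, then wl_keyboard.modifiers, as the
 * protocol requires. */
wl_keyboard_send_enter (keyboard_resource, serial, surface_resource, &keys);
wl_keyboard_send_modifiers (keyboard_resource, serial,
                            mods_depressed, mods_latched,
                            mods_locked, group);
```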
These functions end up calling gdk-pixbuf to load textures/bitmaps
from a file, and they don't seem to be used anywhere.
These changes are only useful together with the follow-up commit.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3097>
Create a fake monitor region to the right of the right-most monitor,
and if a horizontal barrier extends into that region, fail the barrier.
Barriers are aligned on the top/left edge of the pixel, so the most
natural barrier (e.g. 0-1024) is also wrong - it extends one pixel into
the next monitor.
Check this for nonexistent screens on the right too, to avoid clients
suddenly failing when multiple monitors are present.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3319>
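Illustrative sketch of the check described above (names and types are simplified and not taken from mutter's actual implementation):

```c
#include <glib.h>

typedef struct { int x, y, width, height; } Rect;

static gboolean
horizontal_barrier_extends_past_rightmost (Rect rightmost,
                                           int  barrier_y,
                                           int  barrier_x_end)
{
  /* Fake monitor region placed immediately to the right of the
   * right-most monitor. */
  Rect fake = {
    rightmost.x + rightmost.width, rightmost.y,
    rightmost.width, rightmost.height,
  };

  if (barrier_y < fake.y || barrier_y >= fake.y + fake.height)
    return FALSE;

  /* Barrier points are aligned on the left edge of a pixel, so an end
   * point at fake.x already lies one pixel inside the fake monitor. */
  return barrier_x_end >= fake.x;
}
```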
Assuming two 1920x1080 screens next to each other: a horizontal barrier
starting at 1920 going east is always outside the left screen.
Assuming two 1920x1080 screens on top of each other: a vertical barrier
starting at 1080 going south is always outside the top screen.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3319>
When running headless, only the invalid modifiers are advertised.
That breaks the NVIDIA proprietary driver, which then rejects the
buffers created with the invalid modifier; that in turn kills Xwayland,
meaning that running Xwayland on top of a mutter-based compositor
headless is not possible.
The reason the modifiers are not sent is that AddFb2 is not supported
when running headless.
Other compositors (weston, wlroots) would still send the modifiers even
without AddFb2, and Xwayland works fine on those compositors when
running headless.
Remove the requirement for AddFb2 to send the modifiers, so that
Xwayland can work fine on top of mutter headless with the NVIDIA
proprietary driver.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/3060
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3320>
`clutter_actor_destroy()` queues a stage update. Under certain
circumstances - e.g. when run in a very slow container - this can race
with the stage update triggered by the following
`clutter_virtual_input_device_notify_button()`, occasionally resulting in
`wait_stage_updated()` returning before the
`on_event_return_propagate()` callbacks have run, making the test fail.
This notably became more common since
8f27ebf87e (clutter/frame-clock: Start next update ASAP after idle period)
landed.
Thus wait for a stage update to happen after `clutter_actor_destroy()`,
preventing the race.
Fixes: f6da583d06 (tests/clutter/event-delivery: Add tests for implicit grabbing)
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3332>
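The fix, roughly, as a sketch based on the function names mentioned above (the test's helper and variable names may differ):

```c
/* Destroy the actor, then explicitly wait for the stage update it
 * queued before synthesizing the next button event, so the two updates
 * cannot race. */
clutter_actor_destroy (actor);
wait_stage_updated (&was_updated);

clutter_virtual_input_device_notify_button (virtual_pointer,
                                            g_get_monotonic_time (),
                                            CLUTTER_BUTTON_PRIMARY,
                                            CLUTTER_BUTTON_STATE_PRESSED);
wait_stage_updated (&was_updated);
```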
For frame updates in response to sporadic user interaction, this results
in input → output latency somewhere between the minimum possible and the
minimum plus the length of one display refresh cycle (assuming the frame
update can complete within a refresh cycle).
Applying a max_render_time based deadline that corresponds to more than
the minimum possible latency would thus raise the effective minimum
latency for sporadic user interaction.
This was discovered by Ivan Molodetskikh, based on measurements
described in https://mastodon.online/@YaLTeR/110848066454900941 .
v2:
* Set min_render_time_allowed_us = 0 as well, to avoid unthrottled
frame events. (Robert Mader)
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3174>
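A worked example of the latency range described above (the numbers are illustrative, not measurements from this change):

```c
/* With a 60 Hz display the refresh interval is ~16.7 ms, so a sporadic
 * frame update lands somewhere between the minimum possible latency and
 * that minimum plus ~16.7 ms, depending on where in the refresh cycle
 * the input arrived. */
static double
worst_case_latency_ms (double min_latency_ms, double refresh_interval_ms)
{
  /* A max_render_time based deadline larger than the minimum would
   * shift both ends of this range upwards. */
  return min_latency_ms + refresh_interval_ms;
}
```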
Instead of g_get_monotonic_time. This makes sure last_presentation_time_us
advances by refresh_interval_us.
Doesn't affect test results at this point, but it will with the next
commit.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3174>
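In other words, the test now does something along these lines (a sketch; variable names are assumed):

```c
/* Advance the simulated presentation timestamp by exactly one refresh
 * interval instead of sampling the real clock with
 * g_get_monotonic_time(). */
int64_t time_us = last_presentation_time_us + refresh_interval_us;
```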
When more than one refresh interval has passed since
last_presentation_time_us.
I honestly can't tell if the previous calculation was correct or not,
but I'm confident the new one is, and it's simpler.
v2:
* ASCII art diagram didn't make sense anymore, try to improve
(Ivan Molodetskikh)
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3330>
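The idea behind the simpler calculation, as a sketch (not the exact clutter/frame-clock code):

```c
/* If several refresh intervals have passed since the last presentation,
 * project the next presentation time onto the refresh grid by skipping
 * the whole intervals that already elapsed. */
int64_t now_us = g_get_monotonic_time ();
int64_t since_last_us = now_us - last_presentation_time_us;
int64_t elapsed_intervals = since_last_us / refresh_interval_us;
int64_t next_presentation_time_us =
  last_presentation_time_us + (elapsed_intervals + 1) * refresh_interval_us;
```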
Every `mtk_x11_error_trap_push()` must be paired
with an `mtk_x11_error_trap_pop[_with_return]()` call,
otherwise all future errors will be caught and ignored
even if they shouldn't be.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3328>
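Typical usage pattern, as a sketch (assuming the trap functions take the X display; the X request shown is just an example):

```c
/* Trap X errors around a request that may fail, then restore normal
 * error handling so later, unrelated errors are not silently swallowed. */
mtk_x11_error_trap_push (xdisplay);
XFreePixmap (xdisplay, pixmap);
if (mtk_x11_error_trap_pop_with_return (xdisplay) != Success)
  g_warning ("Freeing pixmap failed");
```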
Certain kernel drivers can take an unreasonably long time to
complete mode setting operations. That excessive CPU time is charged
against the process's rlimits, which can lead to the process getting
killed if the thread is a real-time thread.
This commit inhibits real-time scheduling around mode setting
commits, since those are the operations currently proving to be
excessively slow.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/3037
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3324>
At the moment, if a thread is made real-time there's no going back;
it stays real-time for the rest of its life.
That's suboptimal, because RTKit expects real-time threads to have an
rlimit on their CPU time, and certain GPU drivers in the kernel can
exceed that CPU time during operations like DPMS off.
This commit adds two new ref-counted functions:
meta_thread_{un,}inhibit_realtime_in_impl
that allow switching a thread between real-time and normal scheduling.
At the same time, this commit stores the RTKit proxy as private data on
the thread so that it can be reused by the above APIs.
A subsequent commit will use the new APIs.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3324>
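A usage sketch of the new pair (the function names come from the message above; the surrounding code is illustrative):

```c
/* Temporarily drop real-time scheduling around an operation known to
 * burn a lot of CPU time in the kernel, then restore it. The calls are
 * ref counted, so nested users compose correctly. */
meta_thread_inhibit_realtime_in_impl (thread);
perform_slow_mode_set ();  /* placeholder for the expensive operation */
meta_thread_uninhibit_realtime_in_impl (thread);
```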
Most of the code writes "real-time" as "realtime", not "real_time".
The only exception is the function `request_real_time_scheduling`.
This commit renames that function for consistency.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3324>
If we queued a mode set but didn't end up compositing all frames, we'll
have pending mode sets in a hash table waiting to be applied. If we then
try to reconfigure things before all monitors have been composited
again, we should drop the old pending mode sets and start fresh.
We already do this when generating views, but when just unsetting modes
we didn't, so fix that.
Related: https://bugzilla.redhat.com/show_bug.cgi?id=2242612
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3318>
We had a function called "reset_modes()" on MetaRendererNative, but what
it was expected to do was to unset all modes on all CRTCs. Despite this,
it had code to unset modes on unconfigured CRTCs, probably because it
was used for multiple things in the past.
Make this a bit easier to follow by renaming the function to
"unset_modes()" and folding the helper doing the unsetting into the
function itself.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3318>
Intel has started to advertise big gamma LUT sizes on some hardware
because the hardware supports segmented LUTs. These LUTs have a lot
more precision in some segments than in others. The uAPI can't expose
this functionality meaningfully, so they chose to expose a huge number
of taps from which their segmented LUT is sampled.
This increase in uAPI LUT size resulted in stack overflows because we
allocated the LUT on the stack. This commit moves it to the heap
instead.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/3064
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3322>
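The gist of the fix, as a sketch (the actual KMS code differs in structure):

```c
#include <glib.h>
#include <stdint.h>

/* Allocate the gamma LUT channels on the heap, since the LUT size
 * reported by some Intel hardware is far too large for the stack. */
static void
set_gamma_lut (size_t lut_size)
{
  uint16_t *red = g_new0 (uint16_t, lut_size);
  uint16_t *green = g_new0 (uint16_t, lut_size);
  uint16_t *blue = g_new0 (uint16_t, lut_size);

  /* ... fill the channels and hand them to the kernel ... */

  g_free (blue);
  g_free (green);
  g_free (red);
}
```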
1. Centralize stride calculation in one function.
2. For dmabufs, query the stride instead of assuming a certain value.
3. For system memory buffers, use the pixel format to calculate the
stride.
4. Stop negotiating `SPA_PARAM_BUFFERS_size` and
`SPA_PARAM_BUFFERS_stride`.
Item 2 fixes an actual bug where we reported wrong max buffer sizes,
resulting in crashes in GStreamer when doing area screencasts on AMD
GPUs.
The reasoning for item 4 is that the values were possibly wrong for
dmabufs, as the negotiation happens before we create any buffers.
Furthermore, neither Mutter nor the common consumers require it: the
latter either ignore the values (OBS), always accept them
(gstpipewiresrc), or calculate the exact same possibly wrong values
(libwebrtc).
Closes: https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/6747
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3311>
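A sketch of the centralized stride calculation for system memory buffers (the helper name and the alignment choice are illustrative assumptions; dmabufs instead query the stride from the allocated buffer):

```c
#include <spa/utils/defs.h>

/* Derive the stride from the negotiated pixel format for system memory
 * buffers, rather than negotiating a fixed SPA_PARAM_BUFFERS_stride up
 * front. */
static int
calculate_stride (int width, int bytes_per_pixel)
{
  /* Round up to a 4-byte alignment, as consumers commonly expect. */
  return SPA_ROUND_UP_N (width * bytes_per_pixel, 4);
}
```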
With EI support wired to XTEST and oeffis enabled in Xwayland, XTEST
will always go through the XDG portal.
While this is the intended behavior for the general use case of Xwayland
running rootless on a desktop compositor, it breaks when Xwayland is
running on a nested compositor, because the portal applies to the entire
session and is not limited to the nested Wayland compositor.
Enable XDG portal support in Xwayland only when we managed to connect
to the GNOME session manager, which means we are running in a full
desktop session and not in any form of nested mode.
This is determined simply by using the status returned by set_gnome_env(),
which fails if we are not connected to a GNOME session manager.
See-also: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1586
See-also: https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/1170
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/3047
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3303>
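Conceptually (a sketch only; the Xwayland option name, the variable, and the argv handling here are assumptions, not copied from the change):

```c
/* Only let Xwayland route XTEST through the XDG portal when we are in a
 * full GNOME session, as indicated by set_gnome_env() having reached
 * the session manager. */
gboolean in_full_session = set_gnome_env_succeeded;  /* hypothetical */

if (in_full_session)
  g_ptr_array_add (xwayland_argv, g_strdup ("-enable-ei-portal"));
```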