The framerate for screen cast sources was set to a variable range between
1 FPS and the framerate of the monitor being screen cast. This meant that
if the sink didn't match the framerate (e.g. had a lower maximum
framerate), the formats would not match and a stream would not be
established. Allow the sink to clamp the framerate range by leaving the
framerate unset, allowing it to be negotiated.
Make it so that each logical monitor has a reference to all the
monitors that are assigned to it.
Each monitor has a reference to each output that belongs to it.
Each output has a reference to the CRTC it has been assigned, if any.
https://bugzilla.gnome.org/show_bug.cgi?id=786929
(cherry picked from commit 768ec15ea0)
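For clarity, a minimal sketch of the resulting reference chain, using
simplified stand-ins for mutter's MetaLogicalMonitor, MetaMonitor and
MetaOutput types (the real structs carry many more fields):

  #include <glib.h>

  typedef struct _MetaCrtc MetaCrtc;

  typedef struct
  {
    MetaCrtc *crtc;    /* CRTC this output has been assigned, or NULL */
  } MetaOutput;

  typedef struct
  {
    GList *outputs;    /* MetaOutputs belonging to this monitor */
  } MetaMonitor;

  typedef struct
  {
    GList *monitors;   /* MetaMonitors assigned to this logical monitor */
  } MetaLogicalMonitor;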
When using two monitors side by side with different scales, as the
cursor moves from one output to the other, its size changes based on
the scale of the given output.
Changing the size of the cursor can cause the cursor area to change
output again if the hotspot is not exactly at the top-left corner of
the area, causing the texture of the cursor to change, which triggers
another output change, and so on and so forth, causing a continuous
stream of surface enter/leave events which flood the clients and
eventually kill them.
Change the logic to use only the actual cursor position to determine
whether it's on the given logical monitor, so that it remains immune
to size changes induced by output scale differences.
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/83
(cherry picked from commit 67917db45f)
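A hedged sketch of the position-only check; the function and types here
are illustrative, not mutter's actual API:

  #include <glib.h>

  typedef struct { int x, y, width, height; } Rect;  /* stand-in for MetaRectangle */

  /* Only the cursor position is tested; the cursor sprite's area is
   * deliberately ignored, so scale-induced size changes cannot flip
   * the result back and forth between outputs. */
  static gboolean
  is_cursor_on_logical_monitor (const Rect *layout, float x, float y)
  {
    return x >= layout->x && x < layout->x + layout->width &&
           y >= layout->y && y < layout->y + layout->height;
  }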
While MetaStage, MetaWindowGroup and MetaDBusDisplayConfigSkeleton don't
appear explicitly in the public API, their GTypes are still exposed via
meta_get_stage_for_screen(), meta_get_*window_group_for_screen() and
MetaMonitorManager's parent type. Newer versions of gjs will warn about
undefined properties when they encounter a GType without introspection
information, so expose those types to silence the warnings.
https://bugzilla.gnome.org/show_bug.cgi?id=781471
Various code assumed PipeWire function calls would never fail. Some can
actually fail for real reasons, and some currently can only fail due to
OOM situations, but we should still not assume that will always be the
case.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/102
(cherry picked from commit 0f9c6aef99)
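As an illustration, the kind of check this adds, sketched with
pw_loop_new() (assuming the PipeWire 0.3 signature; mutter's actual
handling differs per call site):

  #include <pipewire/pipewire.h>
  #include <glib.h>

  static gboolean
  init_pipewire_loop (struct pw_loop **loop_out)
  {
    struct pw_loop *loop;

    /* Previously assumed infallible; pw_loop_new() can return NULL
     * (e.g. on OOM), so propagate the failure instead of crashing. */
    loop = pw_loop_new (NULL);
    if (!loop)
      {
        g_warning ("Failed to create PipeWire loop");
        return FALSE;
      }

    *loop_out = loop;
    return TRUE;
  }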
Before, we just set it to "none", but this was not enough, since various
calls depend not just on the context being active, but also on the main
rendering surface being current.
Fixes https://gitlab.gnome.org/GNOME/mutter/issues/21
(cherry picked from commit ae26cd0774)
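A minimal sketch of the change in plain EGL terms; the variables are
placeholders and the real code lives in mutter's renderer:

  #include <EGL/egl.h>

  static void
  make_context_current (EGLDisplay egl_display,
                        EGLSurface egl_surface,
                        EGLContext egl_context)
  {
    /* Previously the equivalent of passing EGL_NO_SURFACE for both the
     * draw and read surfaces; various calls depend on a current main
     * rendering surface, so bind it too. */
    eglMakeCurrent (egl_display, egl_surface, egl_surface, egl_context);
  }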
When deriving the global scale directly from the current hardware state
(as done when using the X11 backend), we inspect the logical state the
monitors had prior to the most recent hot plug. That means that a
primary monitor might have been disabled, and a new primary monitor may
not have been assigned yet.
Stop assuming the primary monitor has an active mode before having
reconstructed the logical state, by finding some active monitor to use
if the old primary monitor was disabled. This avoids a crash when
trying to derive the global scale from a disabled monitor.
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/130
(cherry picked from commit 0b3a1c9c31)
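A hedged sketch of the fallback; meta_monitor_is_active() is real mutter
API, while find_active_monitor() and derive_scale_from_monitor() are
illustrative:

  static int
  derive_global_scale (MetaMonitorManager *monitor_manager,
                       MetaMonitor        *primary_monitor)
  {
    MetaMonitor *monitor = primary_monitor;

    /* The old primary monitor may have been disabled by the hot plug;
     * pick any active monitor rather than deriving the scale from a
     * disabled one. */
    if (!monitor || !meta_monitor_is_active (monitor))
      monitor = find_active_monitor (monitor_manager);

    return monitor ? derive_scale_from_monitor (monitor) : 1;
  }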
As a follow-up to the patch from a95cbd0a, we need to make sure that
the pointer is out of the way when monitors change as well, since
that's the event that will prevail in some cases. Besides, this is
also consistent with the code before a95cbd0a, which initialized the
pointer position in the same way both in this case and in
real_post_init().
Closes: https://gitlab.gnome.org/GNOME/gnome-shell/issues/157
Centering the pointer at startup causes undesired behaviour if it ends
up hovering over reactive elements that might react to that
positioning, causing confusion. This is the case with the login dialog
when a list of different users is shown, as centering the pointer at
startup then gets the user in the center of the screen pre-selected,
rather than the expected behaviour of pre-selecting the first one.
Fix this by simply moving the pointer out of the way, close to the
bottom-right corner, during initialization.
Closes: https://gitlab.gnome.org/GNOME/gnome-shell/issues/157
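A hedged sketch of the idea; the geometry lookup and warp call stand in
for whatever the backend provides:

  static void
  park_pointer_out_of_the_way (MetaBackend *backend)
  {
    int width, height;

    /* Park the pointer near the bottom-right corner so it does not
     * hover over (and thereby activate) reactive UI elements. */
    get_screen_size (backend, &width, &height);    /* illustrative */
    warp_pointer (backend, width - 1, height - 1); /* illustrative */
  }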
The ResetIdletime API can be used instead of an "XTest" binary to
programmatically reset the idle time, as if the user pressed a button on
a keyboard.
This is necessary since we stopped using the XSync extension to monitor
idle times, as it didn't consider inhibitors as busy, and mutter's
clutter code ignores "Core Events" as generated by XTest.
This patch will require minimal changes to gnome-settings-daemon's power
test suite so that "key press" idletime resets are triggered through
this D-Bus interface rather than through XTest and a roundtrip through
the X server.
https://bugzilla.gnome.org/show_bug.cgi?id=705942
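A hedged example of calling the new API from C over the session bus;
the bus name, object path and interface follow mutter's IdleMonitor,
but verify them against the installed introspection data:

  #include <gio/gio.h>

  static void
  reset_idletime (GDBusConnection *session_bus, GError **error)
  {
    GVariant *ret;

    /* Equivalent to a key press as far as idle tracking is concerned */
    ret = g_dbus_connection_call_sync (session_bus,
                                       "org.gnome.Mutter.IdleMonitor",
                                       "/org/gnome/Mutter/IdleMonitor/Core",
                                       "org.gnome.Mutter.IdleMonitor",
                                       "ResetIdletime",
                                       NULL, NULL,
                                       G_DBUS_CALL_FLAGS_NONE,
                                       -1, NULL, error);
    if (ret)
      g_variant_unref (ret);
  }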
Take idle inhibitions into account when deciding when to fire idle
watches requested by OS components.
This should stop gnome-session and gnome-settings-daemon from
considering the session idle when idle has been inhibited for longer
than their timeouts, for example to avoid the screensaver activating,
or the computer suspending, after watching a film.
https://bugzilla.gnome.org/show_bug.cgi?id=705942
Now that we've removed the X11 specific backend of the idle monitor,
add back a cut-down version of it for the explicit purpose of being
told about idle time resets when XTest events are used.
XTest events are usually used by test suites and remote display software
to inject events into an X11 session. We should consider somebody moving
the mouse remotely to be just as "active" as somebody moving it locally.
https://bugzilla.gnome.org/show_bug.cgi?id=705942
And use the old "native" backend for both X11 and Wayland. This will
allow us to share fixes between implementations without having to delve
into the XSync X11 extension code.
https://bugzilla.gnome.org/show_bug.cgi?id=705942
The output ID is set equal to 'i' later in the loop, but 'i' was never
incremented, so all outputs were getting the same ID (equal to the
number of CRTCs, because 'i' was reused from the previous loop).
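A simplified sketch of the fix; MetaOutput here is a stand-in type:

  #include <glib.h>

  typedef struct { guint winsys_id; } MetaOutput;  /* stand-in */

  static void
  assign_output_ids (GList *outputs)
  {
    GList *l;
    guint i;

    /* 'i' is now incremented per output; previously it kept the value
     * left over from the preceding CRTC loop, so every output got the
     * same ID. */
    for (l = outputs, i = 0; l; l = l->next, i++)
      {
        MetaOutput *output = l->data;

        output->winsys_id = i;
      }
  }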
Make it possible to re-enable via a hidden experimental feature. To enable,
add "kms-modifiers" to the org.gnome.mutter.experimental-features
GSettings entry.
https://gitlab.gnome.org/GNOME/mutter/issues/81
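For illustration, the same setting applied programmatically through
GSettings from C; note this replaces the whole feature list, so merge
with the existing value if other experimental features should be kept:

  #include <gio/gio.h>

  static void
  enable_kms_modifiers (void)
  {
    GSettings *settings = g_settings_new ("org.gnome.mutter");
    const char * const features[] = { "kms-modifiers", NULL };

    g_settings_set_strv (settings, "experimental-features", features);
    g_object_unref (settings);
  }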
If we attempt GBM surface allocation with a set of modifiers but the
allocation fails, fall back to non-modifier allocations. This fixes
startup on Pineview-based Atom systems, where KMS provides us a set of
modifiers but the GBM implementation doesn't support modifier use.
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/84
(cherry picked from commit e6109cfc22)
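A sketch of the fallback using the standard GBM entry points:

  #include <gbm.h>

  static struct gbm_surface *
  create_surface (struct gbm_device *gbm_device,
                  int width, int height,
                  const uint64_t *modifiers, unsigned int n_modifiers)
  {
    struct gbm_surface *surface;

    surface = gbm_surface_create_with_modifiers (gbm_device,
                                                 width, height,
                                                 GBM_FORMAT_XRGB8888,
                                                 modifiers, n_modifiers);
    if (!surface)
      {
        /* KMS advertised modifiers but GBM can't use them (e.g. on
         * Pineview Atoms); retry with the non-modifier path. */
        surface = gbm_surface_create (gbm_device,
                                      width, height,
                                      GBM_FORMAT_XRGB8888,
                                      GBM_BO_USE_SCANOUT |
                                      GBM_BO_USE_RENDERING);
      }

    return surface;
  }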
Rotating an output would show duplicate cursors when the pointer is
located over an area which would be within the output if not rotated.
Make sure to swap the width/height of the output when rotated.
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/85
(cherry picked from commit ebff7fd7f4)
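A sketch of the swap; meta_monitor_transform_is_rotated() exists in
mutter, though the surrounding code is simplified:

  static void
  get_transformed_size (MetaMonitorTransform transform,
                        int *width, int *height)
  {
    /* For 90/270 degree transforms the dimensions are swapped */
    if (meta_monitor_transform_is_rotated (transform))
      {
        int tmp = *width;

        *width = *height;
        *height = tmp;
      }
  }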
Rendering the next frame (which mostly happens as part of the flush done
in swap buffers) is a task that the GPU can complete independently of
the CPU having to wait for previous page flips. So reverse their order
to get the GPU started earlier, with the aim of greater GPU-CPU
parallelism.
(cherry picked from commit 6e415353e3)
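In illustrative terms (the type and function names are placeholders),
the reordering is simply:

  static void
  present_frame (View *view)
  {
    /* swap_buffers() flushes the next frame's rendering, which the GPU
     * can start on without waiting for previous flips to complete, so
     * do it before blocking on them. Previously the order was
     * reversed. */
    swap_buffers (view);
    wait_for_pending_flips (view);
  }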
Mutter recently gained the ability to deal with multiple GPUs
rendering to different displays. These GPUs would have a display
connected to them, and Mutter was adapted in order to be aware
of different GPUs and their outputs.
However, one specific edge case appeared: PRIME systems. PRIME
systems have two GPUs:
* The integrated GPU (iGPU), usually Intel, which has connectors
and deals with the routine load.
* The dedicated GPU (dGPU), usually AMD or NVidia, which has no
connectors at all and is there just to aid with heavy loads.
On those systems, the dGPU is aggressively put to sleep by the
kernel to avoid energy waste. Waking it up is a costly operation.
With Mutter's adaptation to deal with multiple GPUs, Mutter began
waking the dGPU every time some rendering had to be done. This
was causing stuttering every time the dGPU was put to sleep and
Mutter asked it to wake up again.
To fix this situation, this commit ignores GPUs with no connectors
attached.
Issue: #77
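A hedged sketch; meta_gpu_get_outputs() is real mutter API, the loop
around it is illustrative:

  static void
  render_on_usable_gpus (GList *gpus)
  {
    GList *l;

    for (l = gpus; l; l = l->next)
      {
        MetaGpu *gpu = l->data;

        /* A PRIME dGPU typically exposes no connectors; ignore it so
         * it is never woken up just for rendering. */
        if (!meta_gpu_get_outputs (gpu))
          continue;

        render_on_gpu (gpu);  /* illustrative */
      }
  }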
This is a small mistake spotted while working on a solution
for #77. When a GPU fails to initialize, we were adding it
anyway, which might have pretty bad consequences when trying
to use these NULL GPUs.
Issue: #77
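A short sketch of the corrected pattern; the names approximate
mutter's, and the add call is illustrative:

  static void
  maybe_add_gpu (MetaMonitorManagerKms *monitor_manager_kms,
                 const char            *kms_device_path)
  {
    GError *error = NULL;
    MetaGpuKms *gpu_kms;

    gpu_kms = meta_gpu_kms_new (monitor_manager_kms,
                                kms_device_path, &error);
    if (!gpu_kms)
      {
        g_warning ("Failed to open GPU %s: %s",
                   kms_device_path, error->message);
        g_clear_error (&error);
        return;
      }

    /* Only add the GPU when initialization actually succeeded */
    add_gpu (monitor_manager_kms, gpu_kms);  /* illustrative */
  }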
We just arbitrarily chose the first EGL config matching the passed
attributes, but we then assumed we always got GBM_FORMAT_XRGB8888. That
was not a correct assumption. Instead, make sure we always pick the
format we expect.
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/2
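A sketch of picking a config by its native visual format, a standard
EGL/GBM pairing technique (the helper name is illustrative):

  #include <EGL/egl.h>
  #include <glib.h>
  #include <stdint.h>

  static EGLConfig
  choose_config_with_format (EGLDisplay    egl_display,
                             const EGLint *attrs,
                             uint32_t      gbm_format)
  {
    EGLConfig *configs;
    EGLConfig config = NULL;
    EGLint n_configs, i;

    if (!eglChooseConfig (egl_display, attrs, NULL, 0, &n_configs) ||
        n_configs < 1)
      return NULL;

    configs = g_new0 (EGLConfig, n_configs);
    eglChooseConfig (egl_display, attrs, configs, n_configs, &n_configs);

    for (i = 0; i < n_configs; i++)
      {
        EGLint visual_id;

        /* Pick the config whose native visual is the format we will
         * actually allocate GBM buffers with, not just the first
         * match. */
        if (eglGetConfigAttrib (egl_display, configs[i],
                                EGL_NATIVE_VISUAL_ID, &visual_id) &&
            (uint32_t) visual_id == gbm_format)
          {
            config = configs[i];
            break;
          }
      }

    g_free (configs);
    return config;
  }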
In order to let applications gracefully handle version mismatches, add
a version property to the APIs. Also add a warning to the APIs that
these are not meant for public consumption.
If the coordinates were for a stream not at the stage position (0, 0),
they'd be incorrect. Fix this by correctly translating the coordinates
according to the stream position.
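The translation itself is a simple offset; the rectangle type stands in
for the stream's position on the stage:

  static void
  transform_position (const MetaRectangle *stream_rect,
                      double stage_x, double stage_y,
                      double *stream_x, double *stream_y)
  {
    /* Translate stage coordinates into stream-local coordinates */
    *stream_x = stage_x - stream_rect->x;
    *stream_y = stage_y - stream_rect->y;
  }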
When the buffer modifier is DRM_FORMAT_MOD_LINEAR, we can use the
old code path. That means not specifying any modifier parameter.
This was an issue when the primary GPU created a linear GBM surface
and a secondary GPU (not supporting modifiers) tried to import it.
The import failed because the driver could not use the
import_modifiers extension, even though it could in theory easily
import the buffer.
https://gitlab.gnome.org/GNOME/mutter/issues/18
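A hedged sketch of the attribute handling, using the real
EGL_EXT_image_dma_buf_import_modifiers attribute names; the
surrounding attribute-building code is simplified:

  #include <EGL/egl.h>
  #include <EGL/eglext.h>
  #include <drm_fourcc.h>

  static int
  append_modifier_attribs (EGLint *attribs, int i, uint64_t modifier)
  {
    /* DRM_FORMAT_MOD_LINEAR takes the legacy, attribute-less path, so
     * drivers lacking the import_modifiers extension can still import
     * the buffer implicitly. */
    if (modifier != DRM_FORMAT_MOD_LINEAR &&
        modifier != DRM_FORMAT_MOD_INVALID)
      {
        attribs[i++] = EGL_DMA_BUF_PLANE0_MODIFIER_LO_EXT;
        attribs[i++] = (EGLint) (modifier & 0xffffffff);
        attribs[i++] = EGL_DMA_BUF_PLANE0_MODIFIER_HI_EXT;
        attribs[i++] = (EGLint) (modifier >> 32);
      }

    return i;
  }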
We were retrieving the supported KMS modifiers for all GPUs, even
though what we really need is to intersect these sets of modifiers:
1) KMS supported modifiers for the primary GPU, if the GPU is used
for scanout;
2) EGL supported modifiers for secondary GPUs (different from the
primary GPU used for rendering);
3) GBM supported modifiers when creating the surface (already
taken care of by gbm_surface_create_with_modifiers());
https://gitlab.gnome.org/GNOME/mutter/issues/18
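Conceptually the needed operation is a set intersection over modifier
lists; an illustrative helper:

  #include <glib.h>
  #include <stdint.h>

  static GArray *
  intersect_modifiers (GArray *a, GArray *b)
  {
    GArray *result = g_array_new (FALSE, FALSE, sizeof (uint64_t));
    guint i, j;

    for (i = 0; i < a->len; i++)
      {
        uint64_t modifier = g_array_index (a, uint64_t, i);

        for (j = 0; j < b->len; j++)
          {
            if (g_array_index (b, uint64_t, j) == modifier)
              {
                g_array_append_val (result, modifier);
                break;
              }
          }
      }

    return result;
  }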
So the changes can be instantly applied while the tool is in proximity.
Before, we would just do it on proximity-in, which doesn't provide a
good look and feel while modifying the tool settings in g-c-c.
https://gitlab.gnome.org/GNOME/mutter/issues/38
Closes: #38
The property has been 32 bits since around 2011 and has not changed,
but mutter expects it to be 8 bits. The mismatch causes
change_property() to never actually change the property.
https://gitlab.gnome.org/GNOME/mutter/issues/26
Closes: #26
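A generic illustration of the format rule with XChangeProperty(); the
property here is a placeholder:

  #include <X11/Xlib.h>
  #include <X11/Xatom.h>

  static void
  set_cardinal_property (Display *xdisplay, Window window, Atom property)
  {
    unsigned long value = 1;

    /* The format argument (32) must match the property's actual
     * format; writing with format 8 against a 32-bit property is why
     * the change never took effect. */
    XChangeProperty (xdisplay, window, property,
                     XA_CARDINAL, 32, PropModeReplace,
                     (unsigned char *) &value, 1);
  }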
This was done by the clutter X11 backend prior to introducing
MetaRenderer, but during that work, enabling said extension was lost.
Let's turn it on again.
https://bugzilla.gnome.org/show_bug.cgi?id=739178
There seems to be a kernel race when one disconnects an external
monitor connected to a DisplayPort via a USB-C adapter. The race
results in a connector being reported as connected, but without any
modes supported.
This had the side effect that we tried to set a preferred mode to
the first listed mode, but as no modes were available, we instead tried
to dereference the first element of a NULL array, causing a
segmentation fault.
Mitigate this by skipping the output if no supported modes are
advertised and the output doesn't support scaling, while moving the
fallback path for calculating a preferred output mode to after possibly
adding the common modes, to avoid the unintentional NULL dereference.
https://bugzilla.gnome.org/show_bug.cgi?id=789501
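A hedged sketch of the mitigation in terms of libdrm's
drmModeConnector; supports_scaling() and add_output() are illustrative:

  #include <xf86drmMode.h>

  static void
  add_outputs (int fd, drmModeRes *resources)
  {
    int i;

    for (i = 0; i < resources->count_connectors; i++)
      {
        drmModeConnector *connector =
          drmModeGetConnector (fd, resources->connectors[i]);

        if (!connector)
          continue;

        /* Kernel race: reported as connected but exposing no modes;
         * if the output can't scale either, skip it instead of
         * dereferencing a NULL mode array later. */
        if (connector->connection == DRM_MODE_CONNECTED &&
            connector->count_modes == 0 &&
            !supports_scaling (connector))
          {
            drmModeFreeConnector (connector);
            continue;
          }

        add_output (connector);  /* illustrative */
        drmModeFreeConnector (connector);
      }
  }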
Opening and closing the device may result in XI2 grabs being cut short,
resulting in pad buttons being rendered ineffective, and other possible
misbehaviors. This is an XInput flaw that fell in the gap between XI1
and XI2, and has no easy fix. It's the price we pay for mixing both
versions, I guess...
Work around this by keeping the XI1 XDevice attached to the
ClutterInputDevice; this way it will live long enough that this is not
a concern.
Investigation of this bug was mostly carried out by Peter Hutterer; I'm
just the executing hand.
https://gitlab.gnome.org/GNOME/mutter/issues/7
Closes: #7
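A hedged sketch of the workaround; the object-data key and helper are
illustrative, not mutter's actual implementation:

  #include <X11/extensions/XInput.h>
  #include <glib-object.h>

  static XDevice *
  ensure_xi1_device (GObject *clutter_device,
                     Display *xdisplay,
                     XID      device_id)
  {
    XDevice *xdevice = g_object_get_data (clutter_device, "xi1-xdevice");

    if (!xdevice)
      {
        /* Keep the XDevice open for the lifetime of the input device,
         * instead of opening/closing it per use, which cut XI2 grabs
         * short. */
        xdevice = XOpenDevice (xdisplay, device_id);
        g_object_set_data (clutter_device, "xi1-xdevice", xdevice);
      }

    return xdevice;
  }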