DRM_EVENT_CONTEXT_VERSION is the latest context version supported by
whatever version of libdrm is present. Mutter was blindly claiming to
support whatever that version happens to be, even when it actually didn't.
With libdrm 2.4.78, setting a context version higher than 2 makes libdrm
call the page_flip_handler2 vfunc if it is non-NULL, and since that slot
was a random chunk of stack memory, it might well have been.
Set the version to 2; it should only be bumped together with the
appropriate version checks.
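A minimal sketch of the intended pattern (not mutter's exact code):
zero-initialize the event context and pin the version to 2, so libdrm
never reads handler slots this code does not actually set.

  #include <string.h>
  #include <xf86drm.h>

  static void
  on_page_flip (int           fd,
                unsigned int  frame,
                unsigned int  sec,
                unsigned int  usec,
                void         *user_data)
  {
    /* A queued page flip has completed; sec/usec say when. */
  }

  static void
  read_drm_events (int fd)
  {
    drmEventContext ctx;

    /* Zero everything so unused vfunc slots are NULL, and claim only
     * version 2, which is what this code implements. */
    memset (&ctx, 0, sizeof ctx);
    ctx.version = 2;
    ctx.page_flip_handler = on_page_flip;

    drmHandleEvent (fd, &ctx);
  }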
https://bugzilla.gnome.org/show_bug.cgi?id=781034
Add support for getting hardware presentation times from KMS (Wayland
sessions). Also implement cogl_get_clock_time, which is needed to compare
presentation timestamps and judge their age.
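As a rough sketch of the comparison involved (the helper names below are
illustrative, and it is assumed that KMS reports flip times on
CLOCK_MONOTONIC, the common case): convert the sec/usec pair delivered
with a page-flip event to nanoseconds so it can be compared against a
monotonic "now" reading in the same unit.

  #include <stdint.h>
  #include <time.h>

  static int64_t
  flip_time_to_ns (unsigned int tv_sec, unsigned int tv_usec)
  {
    return (int64_t) tv_sec * 1000000000 + (int64_t) tv_usec * 1000;
  }

  static int64_t
  monotonic_now_ns (void)
  {
    struct timespec ts;

    clock_gettime (CLOCK_MONOTONIC, &ts);
    return (int64_t) ts.tv_sec * 1000000000 + ts.tv_nsec;
  }

  /* Age of a presentation timestamp, in nanoseconds. */
  static int64_t
  presentation_age_ns (unsigned int flip_sec, unsigned int flip_usec)
  {
    return monotonic_now_ns () - flip_time_to_ns (flip_sec, flip_usec);
  }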
For single monitor systems this is straightforward. For multi-monitor
systems, though, we have to choose a display to sync to. The compositor
already partially solves this for us when only one display is updating,
because it will only use the subset of monitors that are changing. When
multiple monitors consume the same frame concurrently, however, we choose
the fastest one (of those in use at the time).
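A sketch of that selection, taking "fastest" to mean the highest refresh
rate; MonitorInfo here is a stand-in, not mutter's real type:

  /* Illustrative only: pick the monitor with the highest refresh rate
   * among those consuming the current frame. */
  typedef struct
  {
    float refresh_rate; /* Hz */
  } MonitorInfo;

  static const MonitorInfo *
  pick_sync_monitor (const MonitorInfo *monitors, int n_monitors)
  {
    const MonitorInfo *best = NULL;
    int i;

    for (i = 0; i < n_monitors; i++)
      {
        if (!best || monitors[i].refresh_rate > best->refresh_rate)
          best = &monitors[i];
      }

    return best;
  }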
Note however that we also need !73 to land in order to fully realize
multiple monitors running at full speed.
drmModePageFlip() is guaranteed to fail for the invalid FB id 0.
Therefore it never makes sense to call this function with such an argument.
Disabling a CRTC must be done with SetCrtc instead, for example.
Trying to flip to FB 0 not only fails, it also causes Mutter to never
attempt a page flip on this output again, falling back to
drmModeSetCrtc() instead.
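A simplified sketch of the guard this implies (the per-CRTC bookkeeping
is left out): only call drmModePageFlip() with a real framebuffer, and
turn a CRTC off with a SetCrtc call that carries no fb, mode or
connectors.

  #include <stddef.h>
  #include <stdint.h>
  #include <xf86drm.h>
  #include <xf86drmMode.h>

  static int
  flip_or_disable (int       fd,
                   uint32_t  crtc_id,
                   uint32_t  fb_id,
                   void     *user_data)
  {
    if (fb_id == 0)
      {
        /* FB 0 is never valid for a page flip; disabling is done by
         * setting the CRTC with no fb, mode or connectors. */
        return drmModeSetCrtc (fd, crtc_id,
                               0, 0, 0,    /* fb, x, y */
                               NULL, 0,    /* connectors */
                               NULL);      /* mode */
      }

    return drmModePageFlip (fd, crtc_id, fb_id,
                            DRM_MODE_PAGE_FLIP_EVENT, user_data);
  }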
We need a way for mutter to exit if no available GPUs are going to work.
For example, if GDM starts gnome-shell and we're using a DRM driver that
doesn't work with KMS, we should exit so that GDM can try Xorg instead,
rather than keep running in headless mode.
Related: https://gitlab.gnome.org/GNOME/mutter/issues/223
If drmModeSetCrtc() is called with no fb, mode or connectors for some
CRTC, it may still fail, and we should handle that gracefully instead of
assuming it failed to set a non-disabled state.
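A hedged sketch of that distinction (disable_crtc is a made-up helper,
not mutter's): a SetCrtc call with no fb, mode or connectors is a
disable request, so its failure is logged but not treated as a failed
mode set.

  #include <stdint.h>
  #include <stdio.h>
  #include <xf86drm.h>
  #include <xf86drmMode.h>

  /* Hypothetical helper: disable a CRTC and tolerate failure. */
  static void
  disable_crtc (int fd, uint32_t crtc_id)
  {
    int ret;

    ret = drmModeSetCrtc (fd, crtc_id,
                          0, 0, 0,   /* no fb, no x/y */
                          NULL, 0,   /* no connectors */
                          NULL);     /* no mode */
    if (ret != 0)
      {
        /* Not fatal: nothing was being enabled, so don't treat this as
         * having programmed a wrong, non-disabled state. */
        fprintf (stderr, "Failed to disable CRTC %u, ignoring\n", crtc_id);
      }
  }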
Closes https://gitlab.gnome.org/GNOME/mutter/issues/70
When using the EGLStream backend, MetaRendererNative passed a bare
GClosure to KMS, but the KMS flip callback event handler in
meta-gpu-kms.c expects the closure to be wrapped in a closure container,
so the EGLStream path would crash. Make the flip handler get what it
expects also when using EGLStreams by wrapping the flip closure in the
container before handing it over to EGL.
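Roughly the shape of the fix (the container type, its field and the
constructor below are placeholders, not mutter's actual ones):

  #include <glib-object.h>

  /* Placeholder for the container the KMS flip handler unwraps; the
   * real struct carries more state. */
  typedef struct
  {
    GClosure *flip_closure;
  } FlipClosureContainer;

  static FlipClosureContainer *
  flip_closure_container_new (GClosure *flip_closure)
  {
    FlipClosureContainer *container;

    container = g_new0 (FlipClosureContainer, 1);
    container->flip_closure = g_closure_ref (flip_closure);

    return container;
  }

  /* The EGLStream path now passes such a container, rather than the
   * bare closure, as the user data attached to the flip event. */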
https://bugzilla.gnome.org/show_bug.cgi?id=790316
There seems to be a kernel race when one disconnects an external
monitor that is connected via DisplayPort through a USB-C adapter. The
race results in the connector being reported as connected, but without
any supported modes.
This had the side effect that we tried to use the first listed mode as
the preferred mode, but as no modes were available, we ended up
dereferencing the first element of a NULL array, causing a segmentation
fault.
Mitigate this by skipping the output if no supported modes are
advertised and the output doesn't support scaling, and by moving the
fallback path that picks a preferred output mode to after the common
modes may have been added, avoiding the inadvertent NULL dereference.
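A rough sketch of that ordering; the supports_scaling flag stands in for
querying the real output object:

  #include <glib.h>
  #include <xf86drmMode.h>

  static GArray *
  build_output_modes (drmModeConnector *connector,
                      gboolean          supports_scaling)
  {
    GArray *modes;
    int i;

    /* The race leaves the connector "connected" but with no modes; if
     * it cannot scale either, there is nothing usable, so skip the
     * output instead of dereferencing connector->modes[0]. */
    if (connector->count_modes == 0 && !supports_scaling)
      return NULL;

    modes = g_array_new (FALSE, FALSE, sizeof (drmModeModeInfo));
    for (i = 0; i < connector->count_modes; i++)
      g_array_append_val (modes, connector->modes[i]);

    /* ... possibly append the common modes here ... */

    /* Only now fall back to using the first mode as the preferred mode,
     * when at least one mode is guaranteed to exist. */

    return modes;
  }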
https://bugzilla.gnome.org/show_bug.cgi?id=789501
If a monitor's max resolution is a portrait resolution, then assume it is
a native portrait monitor and add portrait versions of the common modes.
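A minimal sketch of the heuristic, detached from mutter's actual data
structures: if the largest native mode is taller than it is wide, add
each common mode with its dimensions swapped.

  typedef struct
  {
    int width;
    int height;
  } CommonMode;

  /* If the monitor's maximum resolution is portrait, rotate the common
   * modes to match the panel's native orientation. */
  static CommonMode
  orient_common_mode (CommonMode mode, int max_width, int max_height)
  {
    if (max_height > max_width)
      {
        int tmp = mode.width;

        mode.width = mode.height;
        mode.height = tmp;
      }

    return mode;
  }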
https://bugzilla.gnome.org/show_bug.cgi?id=782294
The DRM properties container must be destroyed with
drmModeFreeObjectProperties, and the connectors must be freed in every
caller. Also make sure that gbm_device structs are destroyed together
with the MetaRendererNativeGpuData that owns them.
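The libdrm get/free pairing in question looks like this (simplified,
with error handling trimmed):

  #include <stdint.h>
  #include <xf86drm.h>
  #include <xf86drmMode.h>

  static void
  inspect_connector_properties (int fd, uint32_t connector_id)
  {
    drmModeConnector *connector;
    drmModeObjectProperties *props;

    connector = drmModeGetConnector (fd, connector_id);
    if (!connector)
      return;

    props = drmModeObjectGetProperties (fd, connector_id,
                                        DRM_MODE_OBJECT_CONNECTOR);
    if (props)
      {
        /* ... inspect the properties ... */

        /* Every drmModeObjectGetProperties() needs a matching
         * drmModeFreeObjectProperties(). */
        drmModeFreeObjectProperties (props);
      }

    /* Likewise every drmModeGetConnector() needs drmModeFreeConnector()
     * in every caller, including error paths. */
    drmModeFreeConnector (connector);
  }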
https://bugzilla.gnome.org/show_bug.cgi?id=789984
When drmHandleEvent() returns an error and errno is set to EAGAIN,
instead of ending up in a busy loop, poll() the fd until there is
anything to read.
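A rough sketch of the pattern (simplified, with error handling trimmed):

  #include <errno.h>
  #include <poll.h>
  #include <xf86drm.h>

  static void
  dispatch_drm_events (int fd, drmEventContext *ctx)
  {
    while (drmHandleEvent (fd, ctx) != 0)
      {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        if (errno != EAGAIN)
          break;

        /* Nothing to read yet: block until the fd becomes readable
         * instead of spinning on drmHandleEvent(). */
        if (poll (&pfd, 1, -1) < 0 && errno != EINTR)
          break;
      }
  }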
https://bugzilla.gnome.org/show_bug.cgi?id=785381
In order to eventually support multiple GPUs with their own connectors,
split out related metadata management (i.e. outputs, CRTCs and CRTC
modes) into a new MetaGpu GObject.
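Shaped roughly like the following sketch of the GObject boilerplate (the
real class carries more state and vfuncs than shown here):

  #include <glib-object.h>

  #define META_TYPE_GPU (meta_gpu_get_type ())
  G_DECLARE_DERIVABLE_TYPE (MetaGpu, meta_gpu, META, GPU, GObject)

  struct _MetaGpuClass
  {
    GObjectClass parent_class;
  };

  typedef struct _MetaGpuPrivate
  {
    /* Per-GPU resources previously tracked elsewhere. */
    GList *outputs; /* MetaOutput */
    GList *crtcs;   /* MetaCrtc */
    GList *modes;   /* MetaCrtcMode */
  } MetaGpuPrivate;

  G_DEFINE_TYPE_WITH_PRIVATE (MetaGpu, meta_gpu, G_TYPE_OBJECT)

  static void
  meta_gpu_class_init (MetaGpuClass *klass)
  {
  }

  static void
  meta_gpu_init (MetaGpu *gpu)
  {
  }

  GList *
  meta_gpu_get_outputs (MetaGpu *gpu)
  {
    MetaGpuPrivate *priv = meta_gpu_get_instance_private (gpu);

    return priv->outputs;
  }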
The Xrandr backend assumes there is always only a single "GPU", as the
GPU is abstracted by the X server; only the native backend (aside
from the test backend) will eventually see more than one GPU.
The Xrandr backend still moves some management to MetaGpuXrandr, in
order to behave more similarly to the KMS counterparts.
https://bugzilla.gnome.org/show_bug.cgi?id=785381