We're currently always waiting for unfinished page flips before flipping
again. This is awkward when we are in an asynchronous retry-page-flip
loop, as we may end up synchronously waiting for a pending KMS page flip
event.
To avoid ending up in such situations, just freeze the frame clock
while we're retrying, then thaw it when we succeeded.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/506
We rely on the frame clock to compress input events, thus if the frame
clock stops, input events are not dispatched. At the same time, there
is no reason to redraw at a full frame rate, as nothing will be
presented anyway, so slow down to 10Hz (compared to the most common
60Hz). Note that we'll only actually reach 10Hz if there is an active
animation being displayed, which won't happen e.g. if there is a screen
shield in the way.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/506
When we're in a page-flip retry loop due to the FIFO being full
(drmModePageFlip() failing with EBUSY), we should not continue to try
when starting to power save, as that means we're blocking new frames,
which itself blocks input events due to them being compressed using the
frame clock.
We'd also hit an assert that assumes we only try to page flip when not
power saving.
Thus, fake that we flipped if we end up reaching a power saving state
while retrying.
Fixes: https://gitlab.gnome.org/GNOME/mutter/issues/509
https://gitlab.gnome.org/GNOME/mutter/merge_requests/506
It tried to add an (implicitly cast) float to a uint64_t, and due to
floating point precision issues, timestamps intended to be in the future
could actually end up in the past. Fix this by first casting the delay to
a uint64_t, then adding it to the timestamp.
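Roughly, the fix looks like the following sketch; the variable names and
the assumption that both values are in microseconds are illustrative, not
taken from the actual code:

  /* Adding a float to a uint64_t converts the large timestamp to float,
   * losing precision; cast the delay first and stay in integer math. */
  uint64_t now_us = g_get_monotonic_time ();
  float retry_delay_us = 16667.0f; /* assumed: delay in microseconds */
  uint64_t retry_time_us;

  retry_time_us = now_us + (uint64_t) retry_delay_us;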
https://gitlab.gnome.org/GNOME/mutter/merge_requests/506
We might fail to page flip a new buffer, often after resuming, due to
the FIFO being full. Prior to this commit, we handled this by switching
over to plain mode setting instead of page flipping. This is bad because
we won't be synchronized to the refresh rate anymore, but just the
clock.
Instead, deal with this by trying again until the FIFO is no longer
full. Do this on a v-sync based interval, until it works.
This also changes the error handling code for drivers not supporting
page flipping to rely on them returning -EINVAL. The handling is moved
from pretending a page flip worked to doing explicit mode setting in
meta-renderer-native.c.
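A sketch of the resulting decision logic; everything except the
drmModePageFlip () call and the errno values is a placeholder:

  ret = drmModePageFlip (kms_fd, crtc_id, next_fb_id,
                         DRM_MODE_PAGE_FLIP_EVENT, flip_closure);
  if (ret == -EBUSY)
    schedule_page_flip_retry ();  /* placeholder: retry on a vsync-based timer */
  else if (ret == -EINVAL)
    fall_back_to_mode_set ();     /* placeholder: handled in meta-renderer-native.c */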
Fixes: https://gitlab.gnome.org/GNOME/mutter/issues/460
Under the native backend, a renderer view has since long ago always had
a logical monitor associated with it, so remove the code handling the
legacy non-stage view case.
https://gitlab.gnome.org/GNOME/mutter/issues/460
If the extension is missing, the GPU copy path would not work. The code sets
the error but forgets to return a failure. Fix this.
While adding the necessary return FALSE, also destroy the EGL context we just
created. A small refactoring shares the destroy code.
Found by reading code.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/416
If the GPU copy path would use a software renderer, fall back to the CPU
copy path. The CPU copy path is possibly faster and avoids screen
corruption issues that were observed on an Intel Haswell desktop. The
corruption was likely due to texturing from an unfinished rendering or
memory caching issues.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/325
Print the pixel format chosen for an output on a secondary GPU for
debugging. Knowing the format can aid in debugging e.g. red/blue channel
swaps and CPU copy performance issues.
This adds a DRM format printing helper in meta-crtc-kms.h. This header
is included in most native backend files, making it widely available,
while DRM formats are specific to the native backend. It could be shared
with the Wayland bits, as DRM format codes are used there too.
The helper makes the pixel format much more readable than a "%x".
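The helper could look roughly like this sketch (the name and signature
are illustrative, not necessarily what meta-crtc-kms.h ends up with;
needs <ctype.h> and <stdint.h>):

  /* Render a DRM FourCC code as its four ASCII characters. */
  static const char *
  drm_format_to_string (uint32_t drm_format, char buf[5])
  {
    int i;

    for (i = 0; i < 4; i++)
      {
        char c = (drm_format >> (i * 8)) & 0xff;

        buf[i] = isprint ((unsigned char) c) ? c : '?';
      }
    buf[4] = '\0';

    return buf;
  }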
https://gitlab.gnome.org/GNOME/mutter/merge_requests/341
When setting up an output on a secondary GPU with the CPU copy mode,
allocate the dumb buffers with a DRM format that is advertised supported
instead of hardcoding a format.
In particular, DisplayLink devices do not quite yet support the hardcoded
DRM_FORMAT_XBGR8888. The proprietary driver stack actually ignores the
format, assuming it is DRM_FORMAT_XRGB8888, which results in the display
having red and blue channels swapped. This patch fixes the color swap
right now, while taking advantage of XBGR if the driver adds support for
it later.
The preferred_formats ordering is somewhat arbitrary. Here it is written
from the glReadPixels point of view, based on my benchmarks on an Intel
Haswell Desktop machine. This ordering prefers the format that was
hardcoded before.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/341
As with the earlier commits, this also adds const qualifiers where
expected. However, the const variables are cast to non-const variants
so they can be passed to glib functions that take non-const arguments but
expect const-like input.
Mutter prefers platform devices over anything else as the primary GPU.
This will not work too well when a platform device does not actually
have a rendering GPU but is a display-only device. An example of this
is a DisplayLink device with the proprietary driver stack, which exposes
a DRM KMS platform device but without any rendering driver.
Mutter cannot rely on EGL init failing on such devices either, because
nowadays Mesa supports software renderers on GBM, so the initialization
may well succeed.
The hardware rendering capability is recognized by matching the GL
renderer string to the known Mesa software renderers. At this time,
there is no better alternative to detecting this.
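The matching could look roughly like this (the list of renderer names is
an assumption; the actual set checked may differ):

  /* Needs <glib.h> and <string.h>. */
  static gboolean
  is_software_gl_renderer (const char *gl_renderer)
  {
    static const char *software_renderers[] = { "llvmpipe", "softpipe", "swrast" };
    gsize i;

    for (i = 0; i < G_N_ELEMENTS (software_renderers); i++)
      {
        if (strstr (gl_renderer, software_renderers[i]))
          return TRUE;
      }

    return FALSE;
  }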
The secondary GPU data is abused for the GL renderer, as the Cogl
context may not have been created yet. Also, the Cogl context would
only be created on the primary GPU, but at this point the primary GPU
has not been chosen yet. Hence, the GPU copy path GL context is used as a
proxy and predictor of what the Cogl context might be if it were created.
Mind that even the GL flavour is not the same between Cogl and
secondary contexts, so this is a stretch, but it should be just enough.
The logic to choose the primary GPU is changed to always prefer hardware
rendering devices while also maintaining the old order of preferring
platform over boot_vga devices.
Co-authored-by: Emilio Pozuelo Monfort <emilio.pozuelo@collabora.co.uk>
https://gitlab.gnome.org/GNOME/mutter/merge_requests/271
Moves the primary GPU choosing to after all secondary GPU data has been
created.
This makes it possible for a future patch to start looking at secondary
GPU data in choose_primary_gpu () to determine if it is using a hardware
driver or a software renderer.
Co-authored-by: Pekka Paalanen <pekka.paalanen@collabora.com>
https://gitlab.gnome.org/GNOME/mutter/merge_requests/271
Initialize the secondary GPU data for all GPUs, even the primary one. By
not looking at the primary_gpu_kms member, a future patch is allowed to
postpone choosing the primary GPU.
A future patch will use the secondary GPU data to decide which GPU will
become the primary GPU.
Co-authored-by: Pekka Paalanen <pekka.paalanen@collabora.com>
https://gitlab.gnome.org/GNOME/mutter/merge_requests/271
create_renderer_gpu_data_egl_device () relied on the primary GPU being
already chosen for the "EGLDevice currently only works with single GPU
systems" error message. A future patch will choose the primary GPU after
this, not before, so this check needs to be rewritten before the
initialization order is changed.
The new check is implemented exactly as the error message says: there
must be exactly one GPU, otherwise fail.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/271
Make the choosing and identity of the primary GPU an internal detail to
the native renderer. MonitorManagerKms did not need it for anything.
The primary GPU logic remains unchanged.
This allows follow-up patches to change how the renderer chooses the
primary GPU. It will be easier for the renderer to use private
information for choosing.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/271
If a KMS device has DRM_CAP_DUMB_PREFER_SHADOW set and a software-based
GL driver is used, always use a shadow fb. This will speed up readbacks
in the llvmpipe OpenGL implementation, making blend operations faster.
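The capability query itself is a small libdrm call; a sketch (needs
<glib.h> and <xf86drm.h>):

  /* Returns TRUE if dumb buffers on this device prefer a shadow fb. */
  static gboolean
  kms_prefers_shadow_fb (int kms_fd)
  {
    uint64_t prefer_shadow = 0;

    if (drmGetCap (kms_fd, DRM_CAP_DUMB_PREFER_SHADOW, &prefer_shadow) != 0)
      return FALSE;

    return prefer_shadow != 0;
  }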
Fixes: https://gitlab.gnome.org/GNOME/mutter/issues/106
Since we no longer set the swap throttled value based
on sync-to-vblank, we can effectively remove it from
Cogl. Throttling swap buffers in Cogl is as much a
historical artifact as sync-to-vblank. Furthermore,
it doesn't make sense to disable it on a compositor,
which is the case with the embedded Cogl.
In addition to that, the winsys vfunc for updating
whenever swap throttling changes could also be removed,
since swap throttling is always enabled now.
Removing it means less code, fewer branches when running,
and one less config option to deal with.
This also removes the micro-perf test, since it doesn't
make sense for the case where Cogl is embedded into the
compositor.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/191
Externally setting the sync-to-vblank setting was a feature
added as a workaround for old Intel and ATI graphics cards, and
is not needed anymore. Furthermore, it doesn't make sense to
change it in a compositor at all.
This commit removes all the ways to externally change this
setting, as well as the now unused API.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/191
We haven't supported disabling stage views in the native backend since
commit 70edc7dda4
Author: Jonas Ådahl <jadahl@gmail.com>
Date: Mon Jul 24 12:31:32 2017 +0800
backends/native: Stop supporting stage views being disabled
There were still some left-over checks; let's remove them.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/343
Because it is implemented and always on. By advertising this fact
the master clock is able to sync to the native refresh rate instead
of always using the fallback of 60.00Hz.
https://bugzilla.gnome.org/show_bug.cgi?id=781296
Add support for getting hardware presentation times from KMS (Wayland
sessions). Also implement cogl_get_clock_time which is required to compare
and judge the age of presentation timestamps.
For single monitor systems this is straightforward. For multi-monitor
systems though we have to choose a display to sync to. The compositor
already partially solves this for us in the case of only one display
updating because it will only use the subset of monitors that are
changing. In the case of multiple monitors consuming the same frame
concurrently however, we choose the fastest one (in use at the time).
Note however that we also need !73 to land in order to fully realize
multiple monitors running at full speed.
Use cogl_framebuffer_read_pixels_into_bitmap () instead of
glReadPixels () for the CPU copy path in multi-GPU support.
The cogl function employs several tricks to make the read-pixels as fast
as possible and does the y-flip as necessary. This should make the copy
more performant across all kinds of hardware.
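A sketch of that copy, with illustrative names for the dumb buffer
mapping, stride and format:

  /* Wrap the mmap()ed dumb buffer in a CoglBitmap and let Cogl pick the
   * fastest read-pixels path, including the y-flip. */
  CoglBitmap *bitmap;

  bitmap = cogl_bitmap_new_for_data (cogl_context,
                                     width, height,
                                     cogl_format,        /* from the dumb FB format */
                                     dumb_stride_bytes,  /* driver-chosen stride */
                                     dumb_map);
  cogl_framebuffer_read_pixels_into_bitmap (onscreen_framebuffer,
                                            0, 0,
                                            COGL_READ_PIXELS_COLOR_BUFFER,
                                            bitmap);
  cogl_object_unref (bitmap);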
This is expected to be used on virtual outputs (e.g. DisplayLink USB
docks and monitors) foremost, where the dumb buffer memory is just
regular system memory. If the dumb buffer memory is somehow slow, like
residing in discrete VRAM or having an unexpected caching mode, it may
be possible for the cogl function to perform worse because it might do the
y-flip in-place in the dumb buffer. Hopefully that does not happen in
any practical scenario.
Calling meta_renderer_native_gles3_read_pixels () here was conceptually
wrong to begin with because it was done with the Cogl GL context of the
primary GPU, not on the GL ES 3 context of a secondary GPU. However,
due to eglBindAPI being a no-op in Mesa and the glReadPixels () arguments
being compatible, it worked.
This patch adds a pixel format conversion table between DRM and Cogl
formats. For completeness, it contains more formats than absolutely
necessary and a texture components field which is currently unused. See
Mutter issue #323. Making the table more complete better documents how
the pixel formats actually map so that posterity should be less likely
to be confused. This table could be shared with
shm_buffer_get_cogl_pixel_format () as well, but not with
meta_wayland_dma_buf_buffer_attach ().
On HP ProBook 4520s laptop (Mesa DRI Intel(R) Ironlake Mobile, Mesa
18.0.5), without this patch copy_shared_framebuffer_cpu () for a
DisplayLink output takes 5 seconds with a 1080p frame. Obviously that
makes Mutter and gnome-shell completely unusable. With this patch, that
function takes 13-18 ms which makes it usable if not fluent.
On Intel i7-4790 (Mesa DRI Intel(R) Haswell Desktop) machine, this patch
makes no significant difference (the copy takes 4-5 ms).
The format will be needed in a following commit in the CPU copy path
which stops hardcoding another format and starts using the format the
dumb FB was created with.
Change the callers of init_dumb_fb () to use DRM format codes. DRM and
GBM format codes are identical, but since this is about dumb buffers,
DRM formats fit better.
The header /usr/include/gbm.h installed by Mesa says:
* The FourCC format codes are taken from the drm_fourcc.h definition, and
* re-namespaced. New GBM formats must not be added, unless they are
* identical ports from drm_fourcc.
That refers to the GBM_FORMAT_* codes.
The order and structure of the include macros was chaotic, with no
real common thread between files. Try to tidy up the mess with a
common scheme.
There was a race in setting next_fb_id when a secondary GPU was using
the CPU copy path. Losing this race caused an attempt to
drmModePageFlip () to FB ID 0, which is invalid and always fails. Failing
to flip causes Mutter to fall back to drmModeSetCrtc () permanently.
In meta_onscreen_native_swap_buffers_with_damage ():
- update_secondary_gpu_state_pre_swap_buffers ()
  - copy_shared_framebuffer_cpu () but only on the CPU copy path
    - secondary_gpu_state->gbm.next_fb_id is set
- wait_for_pending_flips ()
  - Waits for any remaining page flip events and executes and destroys
    the related page flip closures.
    - on_crtc_flipped ()
      - meta_onscreen_native_swap_drm_fb ()
        - swap_secondary_drm_fb ()
          - secondary_gpu_state->gbm.next_fb_id = 0;
- meta_onscreen_native_flip_crtcs ()
  - meta_onscreen_native_flip_crtc ()
    - meta_gpu_kms_flip_crtc () gets called with fb_id = 0
This race was observed to be lost when running 'mutter --wayland' on a machine
with two outputs on Intel and one output on DisplayLink USB dock, and
wiggling around a weston-terminal window between the Intel and
DisplayLink outputs. It took from a second to a minute to trigger. For
testing with DisplayLink outputs Mutter also needed a patch to take the
DisplayLink output into use, as it would otherwise have been ignored,
being a platform device rather than a PCI device.
Fix this race by first waiting for pending flips and only then
proceeding with the swap operations. This should be safe, because the
pending flips could have completed already before entering
meta_onscreen_native_swap_buffers_with_damage ().
meta_renderer_native_gles3_read_pixels() was assuming that the target
buffer stride == width * 4. This is not generally true. When a DRM
driver allocates a dumb buffer, it is free to choose a stride so that
the buffer can actually work on the hardware.
Record the driver chosen stride in MetaDumbBuffer, and use it in the CPU
copy path. This should fix any possible stride issues in
meta_renderer_native_gles3_read_pixels().
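A sketch of recording the pitch at creation time; the MetaDumbBuffer
field names here are illustrative:

  struct drm_mode_create_dumb create_arg = {
    .width = width,
    .height = height,
    .bpp = 32,
  };

  if (drmIoctl (kms_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create_arg) != 0)
    return FALSE;

  dumb_fb->handle = create_arg.handle;
  dumb_fb->stride_bytes = create_arg.pitch; /* may be larger than width * 4 */
  dumb_fb->size_bytes = create_arg.size;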
Track the allocated dumb buffer size in MetaDumbBuffer. Assert that the
size is as expected in copy_shared_framebuffer_cpu().
This is just to ensure that the size from Cogl and the real size match. The size from
Cogl was used in the copy, so getting that wrong might have written
beyond the allocation.
This is a safety measure and has not been observed to happen yet.
If drmModeAddFB2() does not work, the fallback to drmModeAddFB() can
only handle a single specific format. Make sure the requested format is
that one format, and fail the operation otherwise.
This should at least make the failure mode obvious on such old systems
where the kernel does not support AddFB2, rather than producing wrong
colors.
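A sketch of the guard, assuming DRM_FORMAT_XRGB8888 is the one format
the legacy depth 24 / bpp 32 drmModeAddFB () call corresponds to:

  ret = drmModeAddFB2 (kms_fd, width, height, drm_format,
                       handles, strides, offsets, &fb_id, 0);
  if (ret != 0)
    {
      if (drm_format != DRM_FORMAT_XRGB8888)
        return FALSE; /* legacy AddFB can't express this format */

      ret = drmModeAddFB (kms_fd, width, height, 24, 32,
                          strides[0], handles[0], &fb_id);
    }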
The "backends: Move MetaOutput::crtc field into private struct"
accidentally changed the view transform calculation code to assume that
"MetaCrtc::transform" corresponds to the transform of the CRTC; so is
not the case yet; one must calculate the transform from the logical
monitor, and check whether it is supported by the CRTC using
meta_monitor_manager_is_transform_handled(). This commit restores the
old behaviour that doesn't use MetaCrtc::transform when calculating the
view transform.
Fixes: https://gitlab.gnome.org/GNOME/mutter/issues/216
Commit c0d9b08ef9 replaced the old GBM API calls
with the multi-plane GBM API. However, the call to gbm_bo_get_handle_for_plane
fails for some DRI drivers (in particular i915). Due to missing error checks,
the subsequent call to drmModeAddFB[2] fails and the screen output locks up.
This commit adds the missing error checks and falls back to the old GBM API
(non-planar) if necessary.
v5: test success of gbm_bo_get_handle_for_plane instead of errno
This commit adopts the solution proposed by Daniel van Vugt to check the return
value of gbm_bo_get_handle_for_plane on plane 0 and fall back to the old
non-planar method if the call fails. This removes the errno check (for
ENOSYS) that could abort if mesa ever sets a different value.
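A sketch of the plane 0 probe and fallback (error handling trimmed):

  union gbm_bo_handle plane_handle = gbm_bo_get_handle_for_plane (bo, 0);

  if (plane_handle.s32 == -1)
    {
      /* Fall back to the old non-planar GBM getters. */
      handles[0] = gbm_bo_get_handle (bo).u32;
      strides[0] = gbm_bo_get_stride (bo);
      offsets[0] = 0;
    }
  else
    {
      /* Use gbm_bo_get_handle_for_plane () and friends for all planes. */
    }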
Related to: https://gitlab.gnome.org/GNOME/mutter/issues/127
Commit 712ec30cd9 added the logic to only
choose EGL configs that match the GBM_FORMAT_XRGB8888 pixel format.
However, there won't be any EGL config satisfying such criteria for
non-GBM backends, such as EGLDevice.
This change will let us choose the first EGL config for the EGLDevice
backend, while still forcing GBM_FORMAT_XRGB8888 configs for the GBM
one.
Related to: https://gitlab.gnome.org/GNOME/mutter/issues/2
drmModeAddFB2 allows userspace to specify a real format enum on
non-ancient kernels, as an improvement over the legacy drmModeAddFB
which derives format from a fixed depth/bpp mapping.
As an optimisation, Weston used to decide at the first failure of
drmModeAddFB2 that the ioctl was unavailable: as non-existent DRM
ioctls return -EINVAL rather than -ENOSYS or similar, bad parameters are
not distinguishable from the ioctl not being present.
Mutter has also implemented the same optimisation for dumb framebuffers,
which potentially papers over errors for the gain of avoiding one ioctl
which will rapidly fail on ancient kernels. Remove the optimisation and
always use AddFB2 where possible.
Closes: #14
When using the EGLStream backend, MetaRendererNative passed a bare
GClosure to KMS, but the KMS flip callback event handler in
meta-gpu-kms.c expected a closure wrapped in a closure container,
meaning it would instead crash when using EGLStreams. Make the flip
handler get what it expects also when using EGLStreams by wrapping
the flip closure in the container before handing it over to EGL.
https://bugzilla.gnome.org/show_bug.cgi?id=790316
Before, we just set it to "none", but this was not enough since various
calls depend not just on the context being active, but also on the main
rendering surface.
Fixes https://gitlab.gnome.org/GNOME/mutter/issues/21
Make it re-enableable via a hidden "experimental feature". To enable, add
"kms-modifiers" to the org.gnome.mutter.experimental-features GSettings entry.
If we attempt GBM surface allocation with a set of modifiers but the
allocation fails, fall back to non-modifier allocations. This fixes
startup on Pineview-based Atom systems, where KMS provides us a set of
modifiers but the GBM implementation doesn't support modifier use.
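A sketch of the fallback; the format and usage flags are the common GBM
scanout defaults, assumed here rather than taken from the actual code:

  surface = gbm_surface_create_with_modifiers (gbm_device,
                                               width, height,
                                               GBM_FORMAT_XRGB8888,
                                               modifiers, n_modifiers);
  if (!surface)
    surface = gbm_surface_create (gbm_device,
                                  width, height,
                                  GBM_FORMAT_XRGB8888,
                                  GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);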
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/84
Rendering the next frame (which mostly happens as part of the flush done
in swap buffers) is a task that the GPU can complete independently of
the CPU having to wait for previous page flips. So reverse their order
to get the GPU started earlier, with the aim of greater GPU-CPU
parallelism.
We just arbitrarily chose the first EGL config matching the passed
attributes, but we then assumed we always got GBM_FORMAT_XRGB8888. That
was not a correct assumption. Instead, make sure we always pick the
format we expect.
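One way to do this is to filter the returned configs by
EGL_NATIVE_VISUAL_ID, which for GBM maps to the format code; a sketch,
not necessarily the exact implementation:

  EGLint n_configs = 0, i;
  EGLConfig *configs;
  EGLConfig chosen = NULL;

  eglChooseConfig (egl_display, attrs, NULL, 0, &n_configs);
  configs = g_new0 (EGLConfig, n_configs);
  eglChooseConfig (egl_display, attrs, configs, n_configs, &n_configs);

  for (i = 0; i < n_configs; i++)
    {
      EGLint visual_id;

      if (eglGetConfigAttrib (egl_display, configs[i],
                              EGL_NATIVE_VISUAL_ID, &visual_id) &&
          (uint32_t) visual_id == GBM_FORMAT_XRGB8888)
        {
          chosen = configs[i];
          break;
        }
    }
  g_free (configs);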
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/2
We were retrieving the supported KMS modifiers for all GPUs even
though what we really need is to intersect these sets of
modifiers:
1) KMS supported modifiers for primary GPU if the GPU is used for
scanout;
2) EGL supported modifiers for secondary GPUs (different than the
primary GPU used for rendering);
3) GBM supported modifiers when creating the surface (already
taken care of by gbm_surface_create_with_modifiers());
https://gitlab.gnome.org/GNOME/mutter/issues/18
Now that we have the list of supported modifiers from the monitor
manager (via the CRTCs to the primary planes), we can use this to inform
EGL it can use those modifiers to allocate the GBM surface with. Doing
so allows us to use tiling and compression for our scanout surfaces.
This requires the Mesa commit in:
Mesa 10.3 (08264e5dad4df448e7718e782ad9077902089a07) or
Mesa 10.2.7 (55d28925e6109a4afd61f109e845a8a51bd17652).
Otherwise Mesa closes the fd behind our back and re-importing will fail.
See FDO bug #76188 for details.
https://bugzilla.gnome.org/show_bug.cgi?id=785779
Newer versions of GBM support buffer modifiers, including multi-plane
buffers. Use this new API to explicitly pull the information from GBM,
and feed it to drmModeAddFB2WithModifiers.
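A sketch of pulling the per-plane data out of GBM and registering the
framebuffer (flags handling simplified):

  uint32_t handles[4] = { 0 }, strides[4] = { 0 }, offsets[4] = { 0 };
  uint64_t modifiers[4] = { 0 };
  int i, n_planes = gbm_bo_get_plane_count (bo);

  for (i = 0; i < n_planes; i++)
    {
      handles[i] = gbm_bo_get_handle_for_plane (bo, i).u32;
      strides[i] = gbm_bo_get_stride_for_plane (bo, i);
      offsets[i] = gbm_bo_get_offset (bo, i);
      modifiers[i] = gbm_bo_get_modifier (bo);
    }

  ret = drmModeAddFB2WithModifiers (kms_fd,
                                    gbm_bo_get_width (bo),
                                    gbm_bo_get_height (bo),
                                    gbm_bo_get_format (bo),
                                    handles, strides, offsets, modifiers,
                                    &fb_id, DRM_MODE_FB_MODIFIERS);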
https://bugzilla.gnome.org/show_bug.cgi?id=785779
Some x86 clamshell design devices use portrait tablet LCD panels while
they should use a landscape panel, resulting in a 90 degree rotated
picture.
Newer kernels detect this and rotate the fb console in software to
compensate. These kernels also export their knowledge of the LCD panel
orientation vs the casing in a "panel orientation" drm_connector property.
This commit adds support to mutter for reading the "panel orientation"
property and transparently (from a mutter consumer's pov) fixing this by
applying a (hidden) rotation transform to compensate for the panel orientation.
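Reading the property could look roughly like this (error handling and
decoding of the enum value omitted):

  drmModeObjectProperties *props;
  uint32_t i;

  props = drmModeObjectGetProperties (kms_fd, connector_id,
                                      DRM_MODE_OBJECT_CONNECTOR);
  for (i = 0; i < props->count_props; i++)
    {
      drmModePropertyPtr prop = drmModeGetProperty (kms_fd, props->props[i]);

      if (prop && strcmp (prop->name, "panel orientation") == 0)
        orientation_value = props->prop_values[i];

      drmModeFreeProperty (prop);
    }
  drmModeFreeObjectProperties (props);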
Related: https://bugs.freedesktop.org/show_bug.cgi?id=94894
https://bugzilla.gnome.org/show_bug.cgi?id=782294
Proprietary drivers such as ARM Mali export EGL_KHR_platform_gbm instead
of EGL_MESA_platform_gbm. As such, the GBM platform check should be done
for both Mesa and non-Mesa drivers.
https://bugzilla.gnome.org/show_bug.cgi?id=780668
The DRM properties container must be destroyed with
drmModeFreeObjectProperties, and the connectors must be freed on every
caller. Also make sure that gbm_device structs are destroyed with the
MetaRendererNativeGpuData that owns them.
https://bugzilla.gnome.org/show_bug.cgi?id=789984
A hybrid GPU system is a system where more than one GPU is connected to
connectors. A common configuration is having an integrated GPU (iGPU)
connected to a laptop panel, and a dedicated GPU (dGPU) connected to
one or more external connectors (such as HDMI).
This commit adds support for rendering the compositor stage using the
iGPU, then copying the framebuffer content onto a secondary framebuffer
that will be page flipped on the CRTC of the dGPU.
This can work in two different ways: GPU accelerated using OpenGL ES 3,
or CPU unaccelerated.
When supported, GPU accelerated copying works by exporting the iGPU
onscreen framebuffer as a DMA-BUF, importing it as a texture on a
separate dGPU EGL context, then using glBlitFramebuffer(), blitting it
onto a framebuffer on the dGPU that can then be page flipped on the dGPU
CRTC.
When GPU acceleration is not available, copying works by creating two
dumb buffers, and each frame doing a glReadPixels() from the iGPU EGL render
context directly into the dumb buffer. The dumb buffer is then page
flipped on the dGPU CRTC.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
When creating a renderer with a custom winsys (which is always how
mutter uses cogl) make it possible to pass user data with the winsys.
Still unused.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
Make dumb buffer creation/destruction reusable by introducing a
MetaDumbBuffer type (private to meta-renderer-native.c). This will
later be used for software based fallback paths for copying render GPU
buffers onto secondary GPUs.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
Get rid of some technical debt by removing the support in the native
backend for drawing the whole stage to one large framebuffer.
Previously the only way to disable stage views was to set the
MUTTER_STAGE_VIEWS environment variable to 0; doing that now will cause
the native backend to fail to initialize.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
When drmHandleEvent() returns an error and errno is set to EAGAIN,
instead of ending up in a busy loop, poll() the fd until there is
anything to read.
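A sketch of that behaviour (needs <poll.h> and <errno.h>):

  while (drmHandleEvent (kms_fd, &evctx) != 0)
    {
      struct pollfd pfd = { .fd = kms_fd, .events = POLLIN };

      if (errno != EAGAIN)
        break;

      poll (&pfd, 1, -1);
    }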
https://bugzilla.gnome.org/show_bug.cgi?id=785381
The prefix, if any, of a variable name often contains information about
the namespace (such as clutter_backend is the ClutterBackend, while
backend is a MetaBackend). Clean up some more inconsistencies in
meta-renderer-native.c where various variable names were egl_ prefixed
but were in fact Cogl types.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
In order to eventually support multiple GPUs with their own connectors,
split out related metadata management (i.e. outputs, CRTCs and CRTC
modes) into a new MetaGpu GObject.
The Xrandr backend assumes there is always only a single "GPU", as
the GPU is abstracted by the X server; only the native backend (aside
from the test backend) will eventually see more than one GPU.
The Xrandr backend still moves some management to MetaGpuXrandr, in
order to behave more similarly to the KMS counterparts.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
Move finding, opening and management of the KMS file descriptor to
MetaMonitorManagerKms. This means that the monitor manager creation can
now fail, both if more than one GPU with connectors is discovered, or
if finding or opening the primary GPU fails.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
Turn MetaCrtc into a GObject and move it to a separate file. This
changes the storage format, resulting in changing the API for accessing
MetaCrtcs from using an array, to using a GList.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
Turn MetaOutput into a GObject and move it to a separate file. This
changes the storage format, resulting in changing the API for accessing
MetaOutputs from using an array, to using a GList.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
The reverted commit seems to cause
https://bugzilla.gnome.org/show_bug.cgi?id=787240 for some reason. Let's
be safe and revert it for now, as the code freeze is just around the
corner.
This partly (it doesn't reintroduce a whitespace issue) reverts commit
dbc63430d8.
Due to rounding issues, we can't assume a floating point calculation
will end up on an integer, even if we got the factor from the reverse
calculation. Thus, to avoid casting away values like N.999... to N,
when they should really be N+1, round the resulting floating point
calculation before casting it to int.
This fixes an issue where using the scale ~1.739 on a 1920x1080 mode
resulted in an error when setting the mode, as the calculated size of the
framebuffer was only 1919x1080.
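For example, assuming a logical width of 1104 (1920 / 1.7391304…), the
float product 1104 * 1.7391304 can come out just below 1920, and a plain
cast truncates it to 1919; rounding first recovers 1920. A sketch of the
fix, with illustrative variable names (needs <math.h>):

  onscreen_width = (int) roundf (logical_width * scale);
  onscreen_height = (int) roundf (logical_height * scale);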
https://bugzilla.gnome.org/show_bug.cgi?id=786918
When suspending (i.e. VT switching away, the GDM gnome-shell instance
being hidden, or changing user), destroy the onscreen and offscreen
monitor framebuffers. When resuming, the stage views and framebuffers
will be recreated anyway.
https://bugzilla.gnome.org/show_bug.cgi?id=786299
To support fractional scaling, change the stage view scale to be a
float instead of an int. Also change the places where it is retrieved
and used when scaling things.
https://bugzilla.gnome.org/show_bug.cgi?id=765011
With GLVND, whenever we have both Mesa's and NVIDIA's drivers installed
in the system, initializing the GBM backend will always succeed,
regardless of what GPU you have on your system.
This is due to GBM's software rendering fallback.
It seems better to initialize the EGLDevice backend first, which will
fail to find a device match when given a non-NVIDIA GPU.
https://bugzilla.gnome.org/show_bug.cgi?id=784272
Keep track of the logical monitor transform. When a logical monitor is
transformed, all of its monitors are also transformed in the same way.
A logical monitor can either be transformed on the CRTC level, or using
an offscreen intermediate buffer. In both cases the logical monitor
will be transformed, but only in the latter will the view be
transformed.
MetaCrtc::transform currently does not represent whether the CRTC is
configured to be transformed or not; only when the backend can handle
it does it correctly correspond to the actual CRTC configuration. This
is intended to change with MetaMonitorConfigManager.
https://bugzilla.gnome.org/show_bug.cgi?id=777732
This commit adds support for rendering onto enlarged per logical
monitor framebuffers, using the scaled clutter stage views, for HiDPI
enabled logical monitors.
This works by scaling the mode of the monitors in a logical monitor by
the scale, no longer relying on scaling the window actors and window
geometry for making windows have the correct size on HiDPI monitors.
It is disabled by default, meaning automatically created configurations
will still use the old mode. This is partly because Xwayland clients
will not yet work well enough to make it feasible.
To enable, add the 'scale-monitor-framebuffer' keyword to the
org.gnome.mutter.experimental-features gsettings array.
It is still possible to specify the mode via the new D-Bus API, which
has been adapted.
The adaptations to the D-Bus API mean the caller needs to be aware of
how to position logical monitors on the stage grid. This depends on the
'layout-mode' property that is used (see the DisplayConfig D-Bus
documentation).
https://bugzilla.gnome.org/show_bug.cgi?id=777732
Expose via a new API whether the transform on a logical monitor is
handled by the backend. This was previously exposed only in the
native backend. This will be used to emulate not supporting transforms
in the backend in the nested backend.
https://bugzilla.gnome.org/show_bug.cgi?id=779745
Whenever an EGLOutput consumer is temporarily unable to handle
eglStreamConsumerAcquire() operations (e.g. during a VT-switch),
an EGL_RESOURCE_BUSY_EXT error is generated.
This change adds the appropriate error handling to flip_egl_stream() in
order to recover from such errors.
https://bugzilla.gnome.org/show_bug.cgi?id=779112
In preparation for further refactorizations, rename the MetaMonitorInfo
struct to MetaLogicalMonitor. Eventually, part of MetaLogicalMonitor
will be split into a MetaMonitor type.
https://bugzilla.gnome.org/show_bug.cgi?id=777732
We need to do swap notifications asynchronously from flip events since
these might be processed during swap buffers if we are waiting for the
previous frame's flip to continue with the current.
This means that we might have more than one swap notification queued
to be delivered when the idle handler runs. In that case we must
deliver all notifications for which we've already seen a flip event.
Failing to do so means that if a new frame, that only swaps buffers on
such a swap notification backlogged Onscreen, is started, when later
we get its flip event, we'd notify only an old frame which would hit
this MetaStageNative's frame_cb() early exit:

  if (global_frame_counter <= presented_frame_counter)
    return;

and we'd never finish the new frame, and thus clutter's master clock
would be stuck waiting forever.
https://bugzilla.gnome.org/show_bug.cgi?id=774557
When flush-swap-notify is already queued, we might end up trying to
requeue it, for example when handling flip callbacks inside
swap-buffers. Actually queuing it there is harmless, since old frames
will be discarded anyway.
https://bugzilla.gnome.org/show_bug.cgi?id=774923
We might still end up in swap-buffer without the previous flip callback
having been invoked. This can happen if there are two monitors, and we
manage to draw before having all monitor flip callbacks invoked.
https://bugzilla.gnome.org/show_bug.cgi?id=774923
This commit adds for a new type of buffer being attached to a Wayland
surface: buffers from an EGLStream. These buffers behave very
differently from regular Wayland buffers; instead of each buffer
reperesenting an actual frame, the same buffer is attached over and
over again, and EGL API is used to switch the content of the OpenGL
texture associated with the buffer attached. It more or less
side-tracks the Wayland buffer handling.
It is implemented by creating a MetaWaylandEglStream object, dealing
with the EGLStream state. The lifetime of the MetaWaylandEglStream is
tied to the texture object (CoglTexture), which is reference-counted
and owned by both the actors and the MetaWaylandBuffer.
When the buffer is reattached and committed, the EGLStream is triggered
to switch the content of the associated texture to the new content.
This means that one cannot keep old texture content around without
copying, so any feature relying on that will effectively be broken.
https://bugzilla.gnome.org/show_bug.cgi?id=773629
This commit adds support for using a EGLDevice and EGLStreams for
rendering on top of KMS instead of gbm. It is disabled by default; to
enable it pass --enable-egl-device to configure.
By default gbm is first tried, and if it fails, the EGLDevice path is
tried. If both fail, mutter will terminate just as before.
https://bugzilla.gnome.org/show_bug.cgi?id=773629
There is no way to pass any backend specific parameters to a
CoglFramebuffer until after it has been allocated by
cogl_framebuffer_allocate() (since this is where the winsys/platform
fields are initialized). This can make it hard to actually allocate
anything, if the platform depends on some backend specific data.
A proper solution would be to refactor the onscreens and framebuffers to
use a GObject based type system instead of the home baked Cogl one, but
that'll be left for another day. For now, allocate in two steps, one to
allocate the backend specific parts (MetaOnscreenNative), and one to
allocate the actual onscreen framebuffer (via
meta_onscreen_native_allocate()).
So far there is nothing that forces this separation, but in the future
there will be: for example, the EGLDevice backend needs to know about
the CRTC in order to create the EGLSurface.
https://bugzilla.gnome.org/show_bug.cgi?id=773629