On ClutterInputFocus::reset, avoid unsetting the preedit text if
none was set earlier. Unsetting it anyway seems to trick GTK clients
into focusing the cursor position again, even when we are moving away
from it.
Fixes: https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/4647
(Cherry-picked from commit 3b6f9111c7)
ClutterText implements its own get_paint_volume() with its own cache,
but was not invalidating the actor paint volume when it changed. This
could sometimes result in labels, especially quickly changing ones,
using the old paint volume, which would either cut off the label or
leave parts of the old label on screen.
Fixes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1943
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2006>
This mode is passed along by the ClutterInputMethod; the
ClutterInputFocus will preserve it and ensure it is honored
whenever the IM is reset.
This mode is immediate. The ClutterInputFocus commits the
text directly without queueing a CLUTTER_IM_COMMIT event.
This is important so that events are serialized in the right order
in the Wayland implementation (i.e. commit before wl_pointer.button).
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1940>
In line with GTK, the input method context should be reset when clicks
are handled by the ClutterInputFocus user. The reset action can then
either clear or commit the preedit text, as configured by the IM module.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1940>
Make sure that when we've recreated views we'll actually paint a new
frame for them. This was very rarely a problem, as creating views tends
to result in damage etc. being queued as a side effect of various
things, like layout, but e.g. when running certain tests this might not
happen. There is no situation where we want to create a new view that
should remain unpainted, so just make sure we initialize it to become up
to date.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1947>
This code sneaked in unconditionally, even though tracing code can be
disabled with -Dprofiler=false. Add some COGL_HAS_TRACING checks so
that this code, too, is only built when tracing is enabled.
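Roughly, the pattern is the following (an illustrative sketch, not the
actual Cogl code; composite_one_frame() and record_trace_event() are
made-up placeholders):

#include <glib.h>

#ifdef COGL_HAS_TRACING
/* Hypothetical stand-in for the real tracing sink. */
static void
record_trace_event (const char *name,
                    gint64      begin_us,
                    gint64      end_us)
{
  g_message ("%s took %" G_GINT64_FORMAT " µs", name, end_us - begin_us);
}
#endif

static void
composite_one_frame (void)
{
#ifdef COGL_HAS_TRACING
  gint64 begin_us = g_get_monotonic_time ();
#endif

  /* ... the actual, always-compiled work happens here ... */

#ifdef COGL_HAS_TRACING
  /* Tracing-only bookkeeping, compiled out with -Dprofiler=false. */
  record_trace_event ("composite", begin_us, g_get_monotonic_time ());
#endif
}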
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1951>
Will be used to trace a lot more, and in more detail, and thus may
have a larger impact on what is actually measured. This potential impact
is the reason for enabling it only when needed.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1700>
The failure to allocate was not properly handled, causing crashes later
on due to the offscreen being NULL.
#0 cogl_gl_framebuffer_bind (target=36160, gl_framebuffer=0x0)
#1 _cogl_driver_gl_flush_framebuffer_state (...)
#2 cogl_context_flush_framebuffer_state (read_buffer=0x55f48f386780, draw_buffer=0x55f48f386780, ...)
#3 cogl_framebuffer_clear4f (framebuffer=0x55f48f386780, ...)
#4 clutter_layer_node_pre_draw (...)
#5 clutter_paint_node_paint (...)
...
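The missing check amounts to roughly the following (a simplified
sketch; create_offscreen_checked() is a made-up helper and the real
call site handles the failure differently):

#include <cogl/cogl.h>

/* Allocate an offscreen framebuffer up front and bail out cleanly,
 * instead of painting later with a NULL offscreen. */
static CoglOffscreen *
create_offscreen_checked (CoglTexture *texture)
{
  CoglOffscreen *offscreen;
  GError *error = NULL;

  offscreen = cogl_offscreen_new_with_texture (texture);
  if (!cogl_framebuffer_allocate (COGL_FRAMEBUFFER (offscreen), &error))
    {
      g_warning ("Failed to allocate offscreen: %s", error->message);
      g_error_free (error);
      g_object_unref (offscreen);
      return NULL; /* callers must skip the offscreen drawing path */
    }

  return offscreen;
}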
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1942>
We only listen to it for two settings (drag threshold, double-click
time), and we already have the stock ClutterSettings object tracking
the source of these. This code is redundant.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1862>
It is not clear how the damage or redraw clip should be updated here;
at least this works properly when under a constantly redrawing window,
which is OK for debugging purposes.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1762>
Max render time shows how early the frame clock needs to be dispatched
to make it to the predicted next presentation time. Before this commit
it was set to the refresh interval minus 2 ms. This meant Mutter would
always start compositing 14.7 ms before a display refresh on a 60 Hz
screen or 4.9 ms before a display refresh on a 144 Hz screen. However,
Mutter frequently does not need as much time to finish compositing and
submit the buffer to KMS:
max render time
/------------\
---|---------------|---------------|---> presentations
D----S D--S
D - frame clock dispatch
S - buffer submission
This commit aims to automatically compute a shorter max render time to
make Mutter start compositing as late as possible (but still making it
in time for the presentation):
max render time
/-----\
---|---------------|---------------|---> presentations
D----S D--S
Why is this better? First of all, Mutter gets application contents to
draw at the time when compositing starts. If a new application buffer
arrives after the compositing has started, but before the next
presentation, it won't make it on screen:
---|---------------|---------------|---> presentations
D----S D--S
A-------------X----------->
^ doesn't make it for this presentation
A - application buffer commit
X - application buffer sampled by Mutter
Here the application committed just a few ms too late and didn't make it on
screen until the next presentation. If compositing starts later in the
frame cycle, applications can commit buffers closer to the presentation.
These buffers will be more up-to-date, thereby reducing input latency.
---|---------------|---------------|---> presentations
D----S D--S
A----X---->
^ made it!
Moreover, applications are recommended to render their frames on frame
callbacks, which Mutter sends right after compositing is done. Since
this commit delays the compositing, it also reduces the latency for
applications drawing on frame callbacks. Compare:
---|---------------|---------------|---> presentations
D----S D--S
F--A-------X----------->
\____________________/
latency
---|---------------|---------------|---> presentations
D----S D--S
F--A-------X---->
\_____________/
less latency
F - frame callback received, application starts rendering
So how do we actually estimate max render time? We want it to be as low
as possible, but still large enough so as not to miss any frames by
accident:
max render time
/-----\
---|---------------|---------------|---> presentations
D------S------------->
oops, took a little too long
For a successful presentation, the frame needs to be submitted to KMS
and the GPU work must be completed before the vblank. This deadline can
be computed by subtracting the vblank duration (calculated from display
mode) from the predicted next presentation time.
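In other words (an illustrative sketch with made-up names, not the
actual ClutterFrameClock code; all times in microseconds):

#include <stdint.h>

/* Portion of the refresh cycle spent in vertical blanking, derived
 * from the display mode's total vs. active scanlines. */
static int64_t
estimate_vblank_duration_us (int64_t refresh_interval_us,
                             int     vtotal,
                             int     vdisplay)
{
  return refresh_interval_us * (vtotal - vdisplay) / vtotal;
}

/* The buffer must reach KMS and the GPU work must be done before the
 * vblank leading into the predicted presentation. */
static int64_t
compute_deadline_us (int64_t next_presentation_time_us,
                     int64_t vblank_duration_us)
{
  return next_presentation_time_us - vblank_duration_us;
}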
We don't know how long compositing will take, and we also don't know how
long the GPU work will take, since clients can submit buffers with
unfinished GPU work. So we measure and estimate these values.
The frame clock dispatch can be split into two phases:
1. From the start of the dispatch to all GPU commands being submitted
(but not finished), i.e. until the call to eglSwapBuffers().
2. From eglSwapBuffers() to submitting the buffer to KMS and to GPU
work completing. These happen in parallel, and we want the later of
the two to be done before the vblank.
We measure these three durations and store them for the last 16 frames.
The estimate for each duration is the maximum of its last 16 values.
Usually even taking just the last frame's durations as the estimates
works well enough, but I found that screen-capturing with OBS Studio
increases duration variability enough to cause frequent missed frames
when using that method. Taking the maximum over the last 16 frames smooths
out this variability.
The durations are naturally quite variable and the estimates aren't
perfect. To take this into account, an additional constant 2 ms is added
to the max render time.
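Putting this together, the estimation boils down to something like the
following sketch (illustrative names and structure, not the actual
ClutterFrameClock implementation):

#include <stdint.h>
#include <glib.h>

#define N_SAMPLES 16
#define RENDER_TIME_CONSTANT_US 2000 /* extra 2 ms safety margin */

/* The three measured durations, stored for the last 16 frames:
 *  - dispatch start   -> eglSwapBuffers()   (CPU compositing)
 *  - eglSwapBuffers() -> buffer submitted to KMS
 *  - eglSwapBuffers() -> GPU rendering finished
 */
typedef struct
{
  int64_t dispatch_to_swap_us[N_SAMPLES];
  int64_t swap_to_flip_us[N_SAMPLES];
  int64_t swap_to_gpu_done_us[N_SAMPLES];
} RenderTimeSamples;

static int64_t
max_of (const int64_t *samples,
        int            n)
{
  int64_t max = 0;
  int i;

  for (i = 0; i < n; i++)
    max = MAX (max, samples[i]);
  return max;
}

/* Each phase is estimated by the maximum over its stored samples; the
 * KMS submission and the GPU work run in parallel, so only the longer
 * of the two matters. The constant 2 ms absorbs remaining variability. */
static int64_t
compute_max_render_time_us (const RenderTimeSamples *samples)
{
  int64_t dispatch_to_swap = max_of (samples->dispatch_to_swap_us, N_SAMPLES);
  int64_t swap_to_flip = max_of (samples->swap_to_flip_us, N_SAMPLES);
  int64_t swap_to_gpu_done = max_of (samples->swap_to_gpu_done_us, N_SAMPLES);

  return dispatch_to_swap +
         MAX (swap_to_flip, swap_to_gpu_done) +
         RENDER_TIME_CONSTANT_US;
}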
How does it perform in practice? On my desktop with 144 Hz monitors I
get a max render time of 4–5 ms instead of the default 4.9 ms (I had
1 ms manually configured in sway) and on my laptop with a 60 Hz screen I
get a max render time of 4.8–5.5 ms instead of the default 14.7 ms (I
had 5–6 ms manually configured in sway). Weston [1] went with a 7 ms
default.
The main downside is that if there's a sudden heavy batch of work in the
compositing, which would've made it in the default 14.7 ms but doesn't make
it in the reduced 6 ms, there is a delayed frame which would otherwise not
be there. Arguably, this happens rarely enough to be a good trade-off
for reduced latency. One possible solution is a "next frame is expected
to be heavy" function which manually increases max render time for the
next frame. This would avoid this single dropped frame at the start of
complex animations.
[1]: https://www.collabora.com/about-us/blog/2015/02/12/weston-repaint-scheduling/
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1762>
This fixes a warning/error:
In function 'parse_settings',
inlined from 'read_settings' at ../clutter/clutter/x11/xsettings/xsettings-client.c:398:25:
../clutter/clutter/x11/xsettings/xsettings-client.c:202:13: error: 'buffer.byte_order' may be used uninitialized [-Werror=maybe-uninitialized]
202 | if (buffer.byte_order != MSBFirst &&
| ~~~~~~^~~~~~~~~~~
This is needed to bump the CI image from F33 to F34, which includes an
upgraded compiler.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1865>
A simple wrapper around `CoglTexture`, making it easy to reuse
content without a roundtrip from GPU to CPU memory and back.
It optionally takes a clip rectangle which is implemented by
creating a `CoglSubTexture`. A limitation here is that floating
point clips are not supported.
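Conceptually, the clip is applied by carving an integer sub-region out
of the texture (a sketch; the wrapper content type itself is not shown
and make_clipped_texture() is a made-up helper):

#include <cogl/cogl.h>

/* A CoglSubTexture references the parent texture and exposes only the
 * given integer region; no pixel data is copied, so there is no
 * GPU -> CPU roundtrip. This is also why floating point clips cannot
 * be expressed. */
static CoglTexture *
make_clipped_texture (CoglContext *ctx,
                      CoglTexture *texture,
                      int          x,
                      int          y,
                      int          width,
                      int          height)
{
  return (CoglTexture *) cogl_sub_texture_new (ctx, texture,
                                               x, y, width, height);
}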
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1888>
When using `CLUTTER_PAINT=damage-region`, highlighting was conspicuously
absent during fullscreen animations like entering or leaving the
overview. That was because `queued_redraw_clip` was empty, because it
had been initialized from `redraw_clip == NULL` (full stage redraw).
Now we paint the damage region as the full view (which it is) instead
of nothing at all.
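The gist of the change, as a sketch (approximate names, not the exact
patch):

#include <clutter/clutter.h>

/* When redraw_clip is NULL the whole view is being redrawn, so report
 * the view's full layout as the damage region rather than an empty
 * region. */
static cairo_region_t *
get_queued_redraw_clip (ClutterStageView     *view,
                        const cairo_region_t *redraw_clip)
{
  cairo_rectangle_int_t layout;

  if (redraw_clip)
    return cairo_region_copy (redraw_clip);

  clutter_stage_view_get_layout (view, &layout);
  return cairo_region_create_rectangle (&layout);
}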
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1890>