This feature was configured depending on whether the Cogl backend
reported COGL_WINSYS_FEATURE_MULTIPLE_ONSCREEN or not. All cogl backends
do report this, so any code handling the 'static' case was never used.
While we only ever use one stage, it's arguably more correct to
consolidate on the single stage case, but multiple stages are something
that might be desirable for e.g. a remote lock screen, so let's keep this
logic intact.
This has the side effect of completely removing backend features, as
this was the only left-over feature detection that they handled.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2002>
This changes the setup phase of clutter to not be the result of calling an
init function that sets up a few global singletons, via global singleton
setup vfuncs.
The way it worked was that mutter first did some initial setup
(connecting to the X11 server), then set a "custom backend" setup vfunc
global, before calling clutter_init().
During the clutter_init() call, the context and backend were set up by
calling the global singleton getters, which implicitly created the
backend and context on-demand.
This has now changed to mutter explicitly creating a `ClutterContext`
(which is actually a `ClutterMainContext`, but with the name shortened to
be consistent with `CoglContext` and `MetaContext`), passing it a
backend constructor vfunc and a user data pointer.
This function now explicitly creates the backend, without having to go
via the previously set global vfunc.
This changes the behavior of some "get_default()" like functions, which
will now fail if called after mutter has shut down, as when it does so,
it now destroys the backends and contexts, not only its own, but the
clutter ones too.
The "ownership" of the clutter backend is also moved to
`ClutterContext`, and MetaBackend is changed to fetch it via the clutter
context.
This also removes the unused option parsing that existed in clutter.
In some places, NULL checks had to be added when fetching the clutter
context or backend, and when fetching the cogl context from the clutter
backend.
The reason for this is that some code that handles EGL contexts attempts
to restore the cogl EGL context tracking so that the right EGL context
is used by cogl the next time. This makes no sense to do before Cogl and
Clutter are even initialized, which was the case. It wasn't noticed
because the relevant singletons were initialized on demand via their
"getters".
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2002>
This one is a trivial wrapper around clutter_actor_get_children(), so just
use that in the two places where clutter_container_get_children() is used,
and remove clutter_container_get_children().
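The call sites change along these lines (a minimal sketch; the variable
names are illustrative, and both the old and new function return a newly
allocated list):

  GList *children;

  /* previously: clutter_container_get_children (CLUTTER_CONTAINER (actor)) */
  children = clutter_actor_get_children (actor);

  /* ... iterate over the children ... */

  g_list_free (children);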
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2057>
Right now we damage the stage even if an actor is not mapped, for
example in the overview.
Stop doing so, reducing over-paint significantly in some situations.
Clones will still do stage damage on their own.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2035>
ClutterText implements its own get_paint_volume() with its own cache,
but was not invalidating the actor paint volume when it changed.
This could sometimes result in labels, especially quickly
changing ones, using the old paint volume, which either would cut off the
label or leave parts of the old label on screen.
Fixes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1943
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2006>
This mode is passed along by the ClutterInputMethod; the
ClutterInputFocus will preserve it and ensure it is honored
whenever the IM is being reset.
This mode is immediate. The ClutterInputFocus commits the
text directly without queueing a CLUTTER_IM_COMMIT event.
This is important so events are serialized in the right order
in the wayland implementations (i.e. commit before wl_pointer.press).
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1940>
In line with GTK, the input method context should be reset when clicks
are handled by the ClutterInputFocus user. The reset action can then
either clear or commit the preedit text, as configured by the IM module.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1940>
Make sure that when we've recreated views, we'll actually paint a
new frame for them. This was very rarely a problem, as views tend to
result in damage etc. being queued as side effects of various
things, like layout, but e.g. when running certain tests, this might not
happen. There is no situation where we want to create a new view that
should remain unpainted, so just make sure we initialize it to become up
to date.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1947>
This code sneaked in unconditionally, even though we can disable
tracing code with -Dprofiler=false. Add some COGL_HAS_TRACING
checks so that this code is also optionally built.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1951>
Will be used to trace a lot more, and with more details, and thus may
have a larger impact on what is actually measured. This potential impact
is the reason for enabling only when needed.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1700>
The failure to allocate was not properly handled, causing crashes later
on due to the offscreen being NULL.
#0 cogl_gl_framebuffer_bind (target=36160, gl_framebuffer=0x0)
#1 _cogl_driver_gl_flush_framebuffer_state (...)
#2 cogl_context_flush_framebuffer_state (read_buffer=0x55f48f386780, draw_buffer=0x55f48f386780, ...)
#3 cogl_framebuffer_clear4f (framebuffer=0x55f48f386780, ...)
#4 clutter_layer_node_pre_draw (...)
#5 clutter_paint_node_paint (...)
...
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1942>
We only listen to it for 2 settings (drag threshold, double click
time), and we already have the stock ClutterSettings object tracking
the source of these. This code is redundant.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1862>
Not sure how to update the damage or redraw clip or something; at least
this works properly when under a constantly-redrawing window, which is
ok for debugging purposes.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1762>
Max render time shows how early the frame clock needs to be dispatched
to make it to the predicted next presentation time. Before this commit
it was set to refresh interval minus 2 ms. This means Mutter would
always start compositing 14.7 ms before a display refresh on a 60 Hz
screen or 4.9 ms before a display refresh on a 144 Hz screen. However,
Mutter frequently does not need as much time to finish compositing and
submit the buffer to KMS:
max render time
/------------\
---|---------------|---------------|---> presentations
D----S D--S
D - frame clock dispatch
S - buffer submission
This commit aims to automatically compute a shorter max render time to
make Mutter start compositing as late as possible (but still making it
in time for the presentation):
max render time
/-----\
---|---------------|---------------|---> presentations
D----S D--S
Why is this better? First of all, Mutter gets application contents to
draw at the time when compositing starts. If a new application buffer
arrives after compositing has started, but before the next
presentation, it won't make it on screen:
---|---------------|---------------|---> presentations
D----S D--S
A-------------X----------->
^ doesn't make it for this presentation
A - application buffer commit
X - application buffer sampled by Mutter
Here the application committed just a few ms too late and didn't make it on
screen until the next presentation. If compositing starts later in the
frame cycle, applications can commit buffers closer to the presentation.
These buffers will be more up-to-date thereby reducing input latency.
---|---------------|---------------|---> presentations
D----S D--S
A----X---->
^ made it!
Moreover, applications are recommended to render their frames on frame
callbacks, which Mutter sends right after compositing is done. Since
this commit delays the compositing, it also reduces the latency for
applications drawing on frame callbacks. Compare:
---|---------------|---------------|---> presentations
D----S D--S
F--A-------X----------->
\____________________/
latency
---|---------------|---------------|---> presentations
D----S D--S
F--A-------X---->
\_____________/
less latency
F - frame callback received, application starts rendering
So how do we actually estimate max render time? We want it to be as low
as possible, but still large enough so as not to miss any frames by
accident:
max render time
/-----\
---|---------------|---------------|---> presentations
D------S------------->
oops, took a little too long
For a successful presentation, the frame needs to be submitted to KMS
and the GPU work must be completed before the vblank. This deadline can
be computed by subtracting the vblank duration (calculated from display
mode) from the predicted next presentation time.
We don't know how long compositing will take, and we also don't know how
long the GPU work will take, since clients can submit buffers with
unfinished GPU work. So we measure and estimate these values.
The frame clock dispatch can be split into two phases:
1. From the start of the dispatch to all GPU commands being submitted (but
not finished), i.e. until the call to eglSwapBuffers().
2. From eglSwapBuffers() to submitting the buffer to KMS and to GPU
work completing. These happen in parallel, and we want the latest of
the two to be done before the vblank.
We measure these three durations and store them for the last 16 frames.
The estimate for each duration is a maximum of these last 16 durations.
Usually even taking just the last frame's durations as the estimates
works well enough, but I found that screen-capturing with OBS Studio
increases duration variability enough to cause frequent missed frames
when using that method. Taking a maximum of the last 16 frames smoothes
out this variability.
The durations are naturally quite variable and the estimates aren't
perfect. To take this into account, an additional constant 2 ms is added
to the max render time.
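Put together, the estimate amounts to roughly the following (a simplified
sketch with hypothetical names, not the actual frame clock code; durations
are in microseconds):

  #define N_SAMPLES 16

  static gint64
  max_sample (const gint64 samples[N_SAMPLES])
  {
    gint64 max = 0;

    for (int i = 0; i < N_SAMPLES; i++)
      max = MAX (max, samples[i]);

    return max;
  }

  static gint64
  estimate_max_render_time_us (const gint64 dispatch_to_swap[N_SAMPLES],
                               const gint64 swap_to_flip[N_SAMPLES],
                               const gint64 swap_to_gpu_done[N_SAMPLES])
  {
    /* phase 1: from dispatch start until eglSwapBuffers() */
    gint64 phase_1 = max_sample (dispatch_to_swap);
    /* phase 2: from eglSwapBuffers() until KMS submission and GPU
     * completion, whichever of the two finishes later */
    gint64 phase_2 = MAX (max_sample (swap_to_flip),
                          max_sample (swap_to_gpu_done));

    /* plus a constant 2 ms to absorb estimation error */
    return phase_1 + phase_2 + 2000;
  }

The frame clock then dispatches max render time (plus the vblank duration)
before the predicted presentation time.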
How does it perform in practice? On my desktop with 144 Hz monitors I
get a max render time of 4–5 ms instead of the default 4.9 ms (I had
1 ms manually configured in sway) and on my laptop with a 60 Hz screen I
get a max render time of 4.8–5.5 ms instead of the default 14.7 ms (I
had 5–6 ms manually configured in sway). Weston [1] went with a 7 ms
default.
The main downside is that if there's a sudden heavy batch of work in the
compositing, which would've made it in the default 14.7 ms, but doesn't
make it in the reduced 6 ms, there is a delayed frame which would otherwise not
be there. Arguably, this happens rarely enough to be a good trade-off
for reduced latency. One possible solution is a "next frame is expected
to be heavy" function which manually increases max render time for the
next frame. This would avoid this single dropped frame at the start of
complex animations.
[1]: https://www.collabora.com/about-us/blog/2015/02/12/weston-repaint-scheduling/
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1762>
This fixes a warning/error:
In function 'parse_settings',
inlined from 'read_settings' at ../clutter/clutter/x11/xsettings/xsettings-client.c:398:25:
../clutter/clutter/x11/xsettings/xsettings-client.c:202:13: error: 'buffer.byte_order' may be used uninitialized [-Werror=maybe-uninitialized]
202 | if (buffer.byte_order != MSBFirst &&
| ~~~~~~^~~~~~~~~~~
This is needed to bump the CI image from F33 to F34, which includes an
upgraded compiler.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1865>
A simple wrapper around `CoglTexture`, making it easy to reuse
content without a roundtrip from GPU to CPU memory and back.
It optionally takes a clip rectangle, which is implemented by
creating a `CoglSubTexture`. A limitation here is that floating
point clips are not supported.
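For the clipped case, the texture handed to the paint machinery is
presumably wrapped along these lines (a sketch; the variable names are
illustrative, and the integer sub-texture coordinates are what rules out
floating point clips):

  CoglTexture *texture;

  if (has_clip)
    texture = COGL_TEXTURE (cogl_sub_texture_new (cogl_context,
                                                  source_texture,
                                                  clip->x, clip->y,
                                                  clip->width, clip->height));
  else
    texture = cogl_object_ref (source_texture);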
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1888>
When using `CLUTTER_PAINT=damage-region` highlighting was conspicuously
absent during fullscreen animations like entering or leaving the
overview. That was because `queued_redraw_clip` was empty, because it
had been initialized from `redraw_clip == NULL` (full stage redraw).
Now we paint the damage region as the full view (which it is) instead
of nothing at all.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1890>
This commit adds scaling support to clutter_stage_capture_into, which
is currently used when screencasting monitors. This is supposed to
fix graphical issues that arise when using fractional scaling.
Fixes #1131
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1855>
All pointer a11y is a fabrication of Clutter backend-independent
code, with the help of a ClutterVirtualInputDevice and with some
UI on top.
On the other hand, MetaInputSettings is a backend implementation
detail; this has 2 gotchas:
- In the native backend, the MetaInputSettings (and pointer a11y
with it) are initialized early, before the ClutterSeat core
pointer is set up.
- Doing this from the MetaInputSettings also means another dubious
access from the input thread into main thread territory.
Move the pointer a11y into ClutterSettings, making this effectively
backend-independent business, invariably done from the main thread
and ensured to happen after seat initialization.
Fixes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1765
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1849>
Since commit d2f8a30625 we use Graphene to union paint volumes; it
turns out a quite severe issue snuck in during review of that MR though:
Unioned paint volumes (so paint volumes of any actors with children) now
have negative heights. Once projected to 2d coordinates they luckily are
correct again, which is why everything is still working.
The problem is quite obvious once you look closer: For the y coordinates
of the unioned paint volume we confused the maximum and the minimum
points and simply used the wrong coordinates to create the unioned paint
volume.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1827>
The graphene functions used by clutter for picking assume that boxes are
inclusive in both their start and end coordinates, so picking at y
coordinate 32 for an actor with a height of 32 placed at y coordinate 0
would still be considered a hit. This however is wrong, as 32 is the
first position that is not in the actor anymore.
Usually this would not be much of a problem, because motion events are
rarely ever at exactly these borders and even if they are there will be
another motion event soon after. But since actors in gnome-shell usually
are aligned with the pixel grid and on X11 enter/leave events are
generated by the X server at integer coordinates, this case is much
more likely for those.
This can cause issues with Firefox, which, when using client side
decorations, still requests MWM_DECOR_BORDER via _MOTIF_WM_HINTS to have
mutter draw a border + shadow. This means that the Firefox window, even
when using CSD, is still reparented. For such windows we receive among
others XI_RawMotion and XI_Enter events, but no XI_Motion events. And
the raw motion events are discarded after an enter event, because that
sets has_pointer_focus to TRUE in MetaSeatX11. So when moving the cursor
from the panel to a maximized Firefox window the last event clutter
receives is the enter event at exactly integer coordinates. Since the
panel is 32px tall and the generated enter event is at y position 32,
the picking code will pick a panel actor and the focus will remain on it
as long as the cursor does not leave the Firefox window.
Fix this by excluding the bottom and right border of a box when picking.
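Conceptually the containment check becomes the following (a sketch, not
the literal graphene-based picking code):

  static gboolean
  box_contains_point (const graphene_box_t *box,
                      float                 x,
                      float                 y)
  {
    graphene_point3d_t min, max;

    graphene_box_get_min (box, &min);
    graphene_box_get_max (box, &max);

    /* inclusive on the left/top edge, exclusive on the right/bottom
     * edge, so y == 32 no longer hits a 32 px tall actor at y == 0 */
    return x >= min.x && x < max.x &&
           y >= min.y && y < max.y;
  }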
Fixes https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/4041
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1842>
Turns out ClutterClones need a bit of extra handling as always: there's
currently nothing that invalidates a clone's paint volume when the source
actor's paint volume changes.
Since ClutterClone's get_paint_volume() implementation simply takes the
source actor's paint volume and returns that, we should make sure they
are kept in sync and invalidate the clone's paint volume as soon as the
source actor gets its PV invalidated.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1829>
Updating the last_paint_volume while painting has proven itself to be
quite prone to issues: First we had to make sure actors painted by
offscreen effects get their last_paint_volumes updated correctly (see
0320649a1c), and now a new issue turned up
where we don't update the paint volumes while a fullscreen unredirect is
happening.
To stop those issues from happening and to lay the foundation for using
the last_paint_volume for other things, update the last_paint_volume in
a separate step before painting instead of doing it in
clutter_actor_paint().
To save some resources, avoid introducing another traversal of the
scenegraph and add that step into the existing step of updating the
stage_views lists of actors. To properly update the paint volumes, we
need to do that after finishing the queued redraws, which is why we move
clutter_stage_maybe_finish_queue_redraws() to happen before the new
clutter_stage_finish_layout().
Fixes https://gitlab.gnome.org/GNOME/mutter/-/issues/1699
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1773>
The priv->paint_volume field of ClutterActor stores the cached paint
volume in the actor's local coordinate system. It consists of the actor's
paint volume itself and the union of all children's paint volumes.
We want to invalidate those cached paint volumes according to the
following rules:
- If an actor's transformation matrix changes, all paint volumes of the
parent-tree need to be invalidated (that's because the parent volumes
have unioned the actor's paint volume). Our own paint volume does not
need invalidation since the transformation matrix is not applied to it.
- If an actor's allocation size changes, its own paint volume and all the
volumes of the parent-tree need to be invalidated. That's because the
allocation size is used as the size of the paint volume.
- If a clip gets set or clip_to_allocation gets enabled for an actor,
its own paint volume and all the volumes of the parent-tree need to be
invalidated. That's because the clip is factored in when creating the
paint volume.
So far we did this invalidation in various places and the invalidation
up the parent-tree happened inside clutter_actor_real_queue_relayout().
We did not invalidate on changes to the actor's transformation matrices,
and the invalidation in clutter_actor_real_queue_relayout() was more
like a "big hammer" that probably invalidated unnecessarily a few times.
So introduce proper infrastructure to invalidate those cached paint
volumes of actors only in the cases where they actually need to be
invalidated. To do that, we reuse the transform_changed() function and
introduce a new function queue_update_paint_volume() that invalidates
the paint volumes up the actor tree.
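The invalidation up the parent-tree itself is a simple walk, conceptually
something like this (a sketch; paint_volume_valid is an assumed name for
the cache flag in the actor private struct):

  static void
  queue_update_paint_volume (ClutterActor *actor)
  {
    /* every ancestor has unioned this actor's volume, so all of
     * their cached volumes are stale as well */
    while (actor != NULL)
      {
        actor->priv->paint_volume_valid = FALSE;
        actor = clutter_actor_get_parent (actor);
      }
  }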
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1773>
ClutterActors can override the get_paint_volume() vfunc in case they
draw outside the allocation. That's used by a bunch of actors, for
example ClutterText or StViewport in gnome-shell.
In case of StViewport, the paint volume returned depends on the value of
the StAdjustment, which means when we start to cache paint volumes more
aggressively in ClutterActor, we'll need to add API that allows
StViewport to invalidate the paint volume. So introduce
clutter_actor_invalidate_paint_volume() to invalidate the cached paint
volume.
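With that API in place, StViewport can drop its cached paint volume
whenever the adjustment value changes, roughly like so (a sketch; the
callback wiring and names are illustrative, not the actual St code):

  static void
  on_adjustment_notify_value (GObject    *adjustment,
                              GParamSpec *pspec,
                              gpointer    user_data)
  {
    ClutterActor *viewport = CLUTTER_ACTOR (user_data);

    /* the scroll offset changed, so what is painted outside the
     * allocation changed too; the cached paint volume is stale */
    clutter_actor_invalidate_paint_volume (viewport);
  }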
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1773>
The action might not have been triggered yet, as per its trigger
threshold. This doesn't mean we shouldn't reset the point(s) accumulated
so far.
This fixes those touchpoints persisting after disable/enable, thus
making gesture recognition fail from there on.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1791>
We might want to perform distance/threshold checks in the ::prepare
vfunc, but we didn't record the last motion event yet. This used to
give a delta of 0/0 between the press and last motion coordinates,
despite the ClutterGestureAction having a trigger threshold. This
happens no longer.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1791>
The usage of clutter_actor_get_preferred_width/height() for building the
pick box can trigger Clutter's size negotiation machinery in case the
allocation of the actor is invalidated; with commit 82f3bdd1 we worked
around that by excluding actors with invalidated allocations from
picking.
There's no need to do that though, when picking we always want to
operate on the last known allocation of the actor, since that is what's
actually painted on the screen.
So instead of not picking at all when an actor's allocation is
invalidated, just use the size of the last allocation. We still have to
factor in one extra case, that is when an actor hasn't gotten any
allocation yet: In that case we want to exclude the actor from picking,
since the actor is not on the screen yet.
This fixes a regression introduced by the commit mentioned above where
picking wouldn't work on windows that have just been resized.
https://gitlab.gnome.org/GNOME/mutter/-/issues/1674
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1784>
As documented in g_once_init_enter(): "While @location has a volatile qualifier,
this is a historical artifact and the pointer passed to it should not be
volatile." And effectively this now warns with modern glib.
Drop the "volatile" qualifier from these static variables as it's expected.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1785>
Some events, such as the proximity ones, require a device to be set before
we process them, so ensure we process the event details after we've
added the device to the seat.
This may lead to handling a device-removed signal before the clutter event,
but it's anyway not different from what we did before commit 012c0a18.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1779>
The CallyStage object's lifetime is tied to the stage, so if we add a
weak pointer to it, we won't be able to remove it, as we would not try to
do so until the stage itself is being disposed, at which point removing
it fails. However, not removing it will make the stage try to clean up
the weak refs, and since it does this more or less directly after
freeing the cally stage, it ends up writing NULL to freed memory,
causing memory corruption.
Fix this by avoiding adding the weak pointer when that pointer is to the
stage.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1775>
This adds a test framework that makes it possible to compare the result
of painting a view against a reference image. Test references are stored
as PNG images in src/tests/ref-tests/.
Reference images need to be created for testing to be able to succeed.
Adding a test reference image is done using the
`MUTTER_REF_TEST_UPDATE` environment variable. See meta-ref-test.c for
details.
The image comparison code is largely based on the reference image test
framework in weston; see meta-ref-test.c for details.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1698>
Since commit 2ceac4a device-related X11 events aren't processed anymore,
causing the input settings not to handle the devices.
This is due to the fact that we may never call clutter_seat_handle_event_post()
for such events.
While this is always happening for the native backend, it doesn't happen in
X11 because the events are removed from the queue as part of
meta_x11_handle_event(), and thus no event was queued to the stage by the
backend events source.
This also makes sure that the event post handler is called after the
event is actually processed, and not before an event is queued.
Fixes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1564
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1769>
To make the double buffered shadow buffer damaged tiles detection
feasible, a new EGL extension is needed for creating FBOs backed by
a custom CPU memory buffer, instead of DMA buffers, as DMA buffers can
be very slow to read, much slower than just painting the shadow buffer
directly.
Leave the code there, since such an EGL extension is intended to be
added, but hide it behind an env var so that it isn't enabled by
accident.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1724>
Regarding the sequence = 0 fallback: in some cases (moving a cursor
plane on atomic amdgpu) we get sequence = 0 in the page flip callback.
This seems like an amdgpu bug, so work around it by assuming a sequence
delta of 1 (it is equal to 1 because of the sequence != 0 check above).
Sequence can also legitimately be 0 if we're lucky during the 32-bit
overflow, in which case assuming a delta of 1 will give more or less
reasonable values on this and next presentation, after which it'll be
back to normal.
Sequence is also 0 on mode set fallback and when running nested, in
which case assuming a delta of 1 every frame is the best we can do.
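The fallback boils down to the following (a sketch with hypothetical
variable names):

  unsigned int sequence_delta;

  if (sequence != 0)
    sequence_delta = sequence - last_sequence;
  else
    sequence_delta = 1; /* amdgpu workaround, 32-bit wrap, mode set, nested */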
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1484>
This concerns only the cases when the presentation timestamp is received
directly from the device (from KMS or from GLX). In the majority of
cases this timestamp is already MONOTONIC. When it isn't, after this
commit, the current value of the MONOTONIC clock is sampled instead.
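Roughly (a sketch; the flag stands in for however the backend knows which
clock the device timestamp came from):

  gint64 presentation_time_us;

  if (device_timestamp_is_monotonic)
    presentation_time_us = device_timestamp_us;
  else
    presentation_time_us = g_get_monotonic_time ();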
The alternative is to store the clock id alongside the timestamp, with
possible values of MONOTONIC, REALTIME (from KMS) and GETTIMEOFDAY (from
GLX; this might be the same as REALTIME, I'm not sure), and then
"convert" the timestamp to MONOTONIC when needed. An example of such a
conversion was done in compositor.c (removed in this commit). It would
also be needed for the presentation-time Wayland protocol. However, it
seems that the vast majority of up-to-date systems are using MONOTONIC
anyway, making this effort not justified.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1484>
KMS and GLX device timestamps have microsecond precision, and whenever
we sample the time ourselves it's not the real presentation time anyway,
so nanosecond precision for that case is unnecessary.
The presentation timestamp in ClutterFrameInfo is in microseconds, too,
so this commit makes them have the same precision.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1484>
A flag indicating whether the presentation timestamp was provided by
the display hardware (rather than sampled in user space).
It will be used for the presentation-time Wayland protocol.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1484>
ClutterText has a bit of a mess around its signalling of changes to the
cursor position: There are the position (deprecated) and cursor-position
properties, and there are the cursor-changed and cursor-event (deprecated)
signals. The two properties are supposed to be notified when the cursor
position changes, and the two signals are emitted when the cursor
position or size changes.
Now the property notifications and the signals get fired in two very
different places: The two properties are notified in
clutter_text_set_cursor_position(), while the signals are fired during
the paint cycle when we figured out the final cursor position. The
latter is a pretty bad idea, nobody expects such a signal to be fired
during painting, and also changes to the text that are done in the
signal handler will only be applied on the next paint.
Now StEntry listens to cursor position changes via cursor-changed and
invalidates its text shadow, but since the signal is only emitted
during the paint, the old text shadow will still get applied. To fix
this, also emit the cursor-changed signal when we notify the
cursor-position property.
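That is, something along these lines in clutter_text_set_cursor_position()
(a sketch; the exact emission mechanism is illustrative):

  g_object_notify (G_OBJECT (self), "position");
  g_object_notify (G_OBJECT (self), "cursor-position");

  /* newly added: let cursor-changed listeners (e.g. StEntry's text
   * shadow invalidation) react immediately instead of waiting for
   * the next paint */
  g_signal_emit_by_name (self, "cursor-changed");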
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1757>
This removes the responsibility of tracking these from the backend to
the base object. The backends are instead responsible for calling the
function to update the values.
For the native backend, it's important that this happens on the correct
thread, so each time either of these states may change, post an idle
callback on the main thread that sets the state that was up to date at
the time of queuing said callback. This means that things on the main
thread will always be able to get a "new enough but not too new" state
when listening on the 'notify::' signals and getting the property value
after.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1739>
In an x11 session, we don't receive motion events from X when the
pointer is above a window. Since commit 734a1859 we only do picking on
motion events though, which means when clicking the mouse to focus a
window, we don't repick and might still think the pointer is hovering
above another window or actor, ending up not focussing the window.
Fix this by always repicking on BUTTON_PRESS events. While this is not
necessary in the wayland session, button presses happen rarely compared
to motion events, so it's not a performance regression to do it in
Wayland sessions, too.
Fixes https://gitlab.gnome.org/GNOME/mutter/-/issues/1660
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1752>
ClutterText allows setting a custom PangoAttrList, and St uses that to
set the text style it's reading from CSS. One style St enforces using
this mechanism is the text color, and setting the text color should
obviously not affect the size of the layout. ClutterText does queue a
relayout in that case though because it unconditionally queues a
relayout when updating the PangoAttrList.
We can avoid this relayout by reusing an optimization ClutterText has:
clutter_text_queue_redraw_or_relayout() will only queue a relayout if
the requested size of the layout changed.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1750>