In certain scenarios, the frame clock needs to handle present feedback
long before the assumed presentation time happens. To avoid scheduling
the next frame too soon, avoid scheduling one if we were presented
within half a frame interval of the last expected presentation time.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
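A rough sketch of that heuristic, assuming microsecond timestamps; the
function and parameter names are illustrative, not the actual
implementation:

  #include <glib.h>

  /* Return TRUE if scheduling another frame right now would be too soon:
   * the reported presentation lies within half a frame interval of the
   * last expected presentation time. */
  static gboolean
  presented_close_to_expected (gint64 presentation_time_us,
                               gint64 last_expected_presentation_time_us,
                               gint64 refresh_interval_us)
  {
    gint64 deviation_us =
      ABS (presentation_time_us - last_expected_presentation_time_us);

    return deviation_us < refresh_interval_us / 2;
  }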
This adds a currently unused (apart from tests) frame clock. It simply
reschedules based on presentation time feedback, given a refresh rate.
The aim is for it to be used with a single frame listener (stage
views) that will notify when a frame is presented. It does not aim to
handle multiple frame listeners; instead, it's assumed that different
frame listeners will use their own frame clocks.
Also add a test that verifies that the basic functionality works.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
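A minimal sketch of the rescheduling, assuming microsecond timestamps
and a refresh rate in Hz (names are illustrative):

  #include <glib.h>

  static gint64
  next_update_time_us (gint64 last_presentation_time_us,
                       float  refresh_rate)
  {
    gint64 refresh_interval_us = (gint64) (G_USEC_PER_SEC / refresh_rate);

    /* Aim the next update one refresh interval after the last reported
     * presentation. */
    return last_presentation_time_us + refresh_interval_us;
  }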
When a transition is created for the allocation change, it will delay
the new allocation box getting set depending on transition details.
This, however, means that e.g. the 'needs_allocation' flag never gets
cleared if a transition is created, causing other parts of the code to
get confused thinking it didn't pass through a layout step before paint.
Fix this by calling clutter_actor_allocate_internal() with the current
allocation box if a transition was created, so that we'll properly clear
the 'needs_allocation' flag.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1345
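A rough sketch of the fix; clutter_actor_allocate_internal() is named
above, while the surrounding variable names are only illustrative:

  if (transition_created)
    {
      /* The transition will interpolate towards the new box over time,
       * but allocate the current box right away so the
       * 'needs_allocation' flag gets cleared before the next paint. */
      clutter_actor_allocate_internal (self, &priv->allocation, flags);
    }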
41130b08eb added a fix for culling subsurfaces with geometry scale.
Unfortunately it only did so for the opaque regions, not for clip and
unobscured regions, as the effect was hidden by a bug that was only
fixed by 3187fe8ebc.
Apply the same fix to clip and unobscured regions and use the chance
to move most of the slightly hackish geometry scale related code
into a single place.
We need to scale slightly differently in the two cases, indicated by
the new `ScalePerspectiveType` enum, as the scale is dependent on the
perspective - once from outside, once from inside of the scaled actor.
Closes https://gitlab.gnome.org/GNOME/mutter/-/issues/1312
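A sketch of the shape this takes; the ScalePerspectiveType name comes
from the message above, while the enumerator names and the exact
direction of the scaling in each case are assumptions for illustration:

  typedef enum
  {
    /* Scaling regions defined outside the scaled actor, e.g. clip and
     * unobscured regions handed down from the parent. */
    SCALE_PERSPECTIVE_TYPE_FROM_OUTSIDE,
    /* Scaling regions defined inside the scaled actor, e.g. the opaque
     * region of the surface itself. */
    SCALE_PERSPECTIVE_TYPE_FROM_INSIDE,
  } ScalePerspectiveType;

  static float
  effective_scale (float                geometry_scale,
                   ScalePerspectiveType perspective)
  {
    switch (perspective)
      {
      case SCALE_PERSPECTIVE_TYPE_FROM_OUTSIDE:
        return 1.0f / geometry_scale;
      case SCALE_PERSPECTIVE_TYPE_FROM_INSIDE:
        return geometry_scale;
      }

    return 1.0f;
  }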
Since we now have the necessary infrastructure to get notified about
changes to the absolute transformation matrix, we can also invalidate
the stage-views list on updates to this matrix.
So rename absolute_allocation_changed() to absolute_geometry_changed()
to make it clear this function is not only about allocations, and call
that function recursively for all children on changes to the
transformation matrix, too.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1343
If we want to invalidate the stage-views list reliably on changes to the
actors' transformation matrices, we also need to get notified about
changes to the custom transformations applied using the
apply_transform() vfunc.
So provide a new API that allows invalidating the transformation matrix
for actors implementing custom transformations, too. This in turn allows
us to cache the matrix applied using the apply_transform() vfunc by
moving responsibility of keeping track of the caching from
clutter_actor_real_apply_transform() to
_clutter_actor_apply_modelview_transform().
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1343
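A sketch of how an actor implementing a custom apply_transform() would
use such an API; the clutter_actor_invalidate_transform() name and the
MyActor type are assumptions for illustration:

  static void
  my_actor_set_rotation_angle (MyActor *self,
                               float    angle)
  {
    self->angle = angle;

    /* The cached modelview is stale now; drop it so apply_transform()
     * runs again and the stage-views list can be revalidated. */
    clutter_actor_invalidate_transform (CLUTTER_ACTOR (self));
    clutter_actor_queue_redraw (CLUTTER_ACTOR (self));
  }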
For ClutterText, the resource scale the text is drawn with affects the
size of the allocation: ClutterText will choose a font scale based on
the resource scale, and that font scale can lead to a slight difference
in size compared to the unscaled font.
We currently handle that by queuing a relayout inside the
"resource-scale-changed" signal handler. This solution is a bit
problematic though since it will take one more allocation cycle until
the allocation is actually updated after a scale-change, so the actor is
painted using the wrong allocation for one frame.
Also the current solution can lead to relayout loops in a few cases, for
example if a ClutterText is located near the edge on a 1x scaled monitor
and is moved to intersect a 2x scaled monitor: Now the resource scale
will change to 2 and a new allocation box is calculated; if this
allocation box is slightly smaller than the old one because of the new
font scale, the allocation won't intersect the 2x scaled monitor again
and the resource scale switches back to 1. Now the allocation gets
larger again and intersects the 2x scaled monitor again.
This commit introduces a way to properly support those actors: In case
an actor's resource scale might affect its allocation, it should call the
private function clutter_actor_queue_immediate_relayout(). This will
make sure the actor gets a relayout before the upcoming paint happens
after every resource scale change. Also, potential relayout loops can
be handled by the actors themselves using a "phase" argument that's
passed to implementations of the calculate_resource_scale() vfunc.
The new API is private because resource scales are not meant to be used
in a way where the scale affects the allocation. With ClutterText and
the current behavior of Pango, that can't be avoided though, so we need it
anyway.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
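A sketch of how an affected actor would use this;
clutter_actor_queue_immediate_relayout() is named above, the handler
shape is illustrative:

  static void
  my_text_on_resource_scale_changed (ClutterActor *actor)
  {
    /* The new font scale may change the preferred size, so make sure
     * the relayout happens before the upcoming paint instead of one
     * frame later. */
    clutter_actor_queue_immediate_relayout (actor);
  }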
Since we now always return a resource scale, we can remove the boolean
return value from clutter_actor_get_resource_scale() and
_clutter_actor_get_real_resource_scale(), and instead simply return the
scale.
While at it, also remove the underscore from the
_clutter_actor_get_real_resource_scale() private API.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
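In terms of the public prototype, the change roughly looks like this:

  /* Before: the scale was not always available. */
  gboolean clutter_actor_get_resource_scale (ClutterActor *self,
                                             gfloat       *resource_scale);

  /* After: a scale is always returned. */
  gfloat clutter_actor_get_resource_scale (ClutterActor *self);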
Now that ClutterActor has a convenient API for getting the stage views
an actor is presented on, we can remove a large part of the code for
resource-scale calculation and instead rely on the stage-views list.
The way this works is a bit different from the old resource scales:
clutter_actor_get_resource_scale() always returns a scale, but this
value is only guaranteed to be correct when called from a vfunc_paint()
implementation; in all other cases, the value is guessed using the scale
of the parent actor or the last valid scale. Now in case the value
previously reported by clutter_actor_get_resource_scale() turns out to
be wrong, "resource-scale-changed" will be emitted before the next paint
and the actor has a chance to update its resources.
The general idea behind this new implementation is for actors which only
need the scale during painting to continue using
clutter_actor_get_resource_scale() as they do right now, and for actors
which need the resource scale on other occasions, like during size
negotiation, to use the scale reported by
clutter_actor_get_resource_scale() but also listen to the
"resource-scale-changed" signal to eventually redo the work using the
correct scale.
The "guessing" of the scale is done with the intention of always giving
actors a scale to work with so they don't have to fall back to a scale
value the actor itself has to define, and also with the intention of
emitting the "resource-scale-changed" signal as rarely as possible, so
that when an actor is newly created, it won't have to load its resources
multiple times.
The big advantage this has over the old resource scales is that it's now
safe to call clutter_actor_get_resource_scale() from everywhere (before,
calling it from size negotiation functions would usually fail). It will
also make it a lot easier to use the resource scale for complex cases
like ClutterText without risking relayout loops.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
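A sketch of the intended usage pattern for actors that need the scale
outside of painting; reload_resources() is a hypothetical helper
standing in for whatever work depends on the scale:

  static void
  on_resource_scale_changed (ClutterActor *actor)
  {
    /* The scale reported earlier turned out to be wrong; redo the work
     * with the now-correct value before the next paint. */
    reload_resources (actor, clutter_actor_get_resource_scale (actor));
  }

  static void
  setup_actor (ClutterActor *actor)
  {
    reload_resources (actor, clutter_actor_get_resource_scale (actor));
    g_signal_connect (actor, "resource-scale-changed",
                      G_CALLBACK (on_resource_scale_changed), NULL);
  }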
Add private API to ClutterBackend to set a fallback resource scale
available to Clutter. This API will be used for "guessing" the
resource-scale of ClutterActors in case the actor is not attached to a
stage or not properly positioned yet.
We set this value from inside mutter's MetaRenderer while creating new
stage-views for each logical monitor. This makes it possible to set the
fallback scale to the scale of the primary monitor, which is the monitor
where most ClutterActors are going to be positioned.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
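A sketch of the call site in MetaRenderer; the setter name is an
assumption about the private API described above, and the scale value
is simply whatever the primary logical monitor reports:

  clutter_backend_set_fallback_resource_scale (clutter_get_default_backend (),
                                               primary_monitor_scale);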
We're going to refactor resource scales, making the notification of
changes to the resource scale a lot more important than it is right now
(we won't guarantee queried scales are correct outside the paint cycle
anymore).
Having a separate signal/vfunc for this will make the difference between
the new clutter_actor_get_resource_scale() API (which can return a
guessed value) and the notification of changes to the resource scale
(which will be guaranteed to return an up-to-date value) more obvious.
So replace the "resource-scale" property of ClutterActor with a
"resource-scale-changed" signal that's emitted when the resource scale
is recalculated.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
The portal API requires a screencast session only for absolute motion
with remote desktop; other methods, including relative motion, do not
require a screencast session.
There is no reason to be more strict than the API actually is, so check
for a screencast session only when required, like for absolute motion
events and touch events.
Tested with https://gitlab.gnome.org/snippets/1122
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1307
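Expressed as plain logic, the policy amounts to something like the
following sketch (the enum is hypothetical; the real checks live in the
remote desktop session code):

  typedef enum
  {
    EVENT_KIND_MOTION_RELATIVE,
    EVENT_KIND_MOTION_ABSOLUTE,
    EVENT_KIND_TOUCH,
    EVENT_KIND_OTHER,
  } EventKind;

  static gboolean
  needs_screen_cast_session (EventKind kind)
  {
    switch (kind)
      {
      case EVENT_KIND_MOTION_ABSOLUTE:
      case EVENT_KIND_TOUCH:
        /* These need a stream to map absolute coordinates onto. */
        return TRUE;
      default:
        return FALSE;
      }
  }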
There are a couple of places in gnome-shell where we aren't interested
in which workspace is active, but whether a given workspace is active.
Of course it's easy to use the former to determine the latter, but we
can offer a convenience property on the workspace itself almost for
free, so let's do that.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1336
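A sketch of such a convenience property in the workspace's class_init;
the "active" property name and the usual GObject boilerplate around it
are assumptions for illustration:

  g_object_class_install_property (object_class,
                                   PROP_ACTIVE,
                                   g_param_spec_boolean ("active",
                                                         "Active",
                                                         "Whether this workspace is currently active",
                                                         FALSE,
                                                         G_PARAM_READABLE));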
ClutterBoxLayout calculates the preferred size of the opposite
orientation (so for example the height if the orientation is horizontal)
by getting the preferred size of the real orientation first, and then
the preferred size of the opposite orientation, using the other size as
for_width/height when doing the request.
Right now, for non-homogeneous layouts this for_width/height does not
adjust for the spacing set on the box layout. This leads to children
being passed a slightly larger for_width/height, which in case of
ClutterText might cause the line to not wrap when it actually should.
This in turn means we can end up with an incorrect preferred size for
the opposite orientation, leading to a wrong allocation.
So fix that and adjust for the spacing just as we do for homogeneous
layouts by subtracting the total spacing from the available size that is
distributed between children.
This fixes the wrong height of the checkbox label reported in
https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/2574.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1333
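The adjustment itself is small; roughly (names illustrative):

  static float
  adjust_for_spacing (float available_size,
                      float spacing,
                      int   n_visible_children)
  {
    /* The fixed spacing between children is not usable size, so it must
     * not be part of what gets distributed between them. */
    if (n_visible_children > 1)
      available_size -= spacing * (n_visible_children - 1);

    return available_size;
  }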
These tests were written (and copy-pasted) before ClutterActor
had an actual background-color property. As a preparation for
the removal of ClutterRectangle, replace all these rectangles
with plain actors and background colors.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1332
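The replacement pattern in the tests is roughly the following, with
example color and size values:

  ClutterColor red = { 0xff, 0x00, 0x00, 0xff };
  ClutterActor *actor = clutter_actor_new ();

  clutter_actor_set_background_color (actor, &red);
  clutter_actor_set_size (actor, 100, 100);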
The property is deprecated and the current implementation simply
redirects it to ClutterActor::background-color, so remove it.
Also update the tests to set the background color directly.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1332
ClutterStage is the one and only subclass of ClutterGroup, but
it overrides basically everything specific to ClutterGroup to
mimic a ClutterActor. What a waste!
Subclass ClutterActor directly and remove all the now useless
vfunc overrides from ClutterStage. Adapt CallyStage to subclass
CallyActor as well.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1332
It is deprecated in favor of the 'z-position' property, and
the implementation itself redirects to the z-position, so
just drop it and replace all get|set_depth calls with their
z-position counterparts.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1332
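The replacement is mechanical, along these lines:

  /* Before (deprecated): */
  clutter_actor_set_depth (actor, 200.0f);

  /* After: */
  clutter_actor_set_z_position (actor, 200.0f);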
We were setting the pipeline colour to all white (1.0, 1.0, 1.0, 1.0)
and so the default layer combine function multiplied each pixel
(R, G, B, A) by all ones. Obviously multiplying by one four times per
pixel is a waste of effort so we remove the colour setting *and* set
the layer combine function to a trivial shader that will ignore whatever
the current pipeline colour is set to. So now we do **zero** multiplies
per pixel.
On an i7-7700 at UHD 3840x2160 this results in 5% faster render times
and 10% lower power usage (says intel_gpu_top). The benefit is probably
much higher for virtual machines though, as they're no longer being
asked to do CPU-based math on every pixel of a window.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1331
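A sketch of the layer combine part of the change: a trivial combine
function on the pipeline that takes the texture sample as-is instead of
multiplying it with the pipeline colour.

  cogl_pipeline_set_layer_combine (pipeline, 0,
                                   "RGBA = REPLACE (TEXTURE)",
                                   NULL);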
The previous commit removed checks for intermediate focus states which
would make tests randomly fail because of their time dependence. What
can be tested, however, is that the focus remains at 'none' after the
focused window has been closed if there is no other window available
that would accept the focus. This newly introduced test checks the
focus directly after closing the window (and syncing) and after the time
it would have taken for the queue to finish. The first check has a
similar timing issue as the removed focus checks in the other tests, but
the test will never accidentally fail, because regardless of whether the
queue has finished or not, the focus is always expected to be 'none'.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1329
While c3d13203 ensured that the test-client has actually closed the
window before testing for the focus change, it also made another timing
related issue with the tests more likely to happen. Several tests
assert that the focus is set to 'none' after the focused window has
been closed when the window below does not accept focus. This however
can never be reliably tested, because closing the window triggers a
timeout-based iteration of a queue of default focus candidate windows.
This starts after the window has been closed and might finish before the
clients have finished synchronizing. This issue is more likely to
trigger the shorter the queue is and the more test clients there are
that could delay the synchronization.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1329
This avoids some issues which could happen on some setups[0] due to
meta-native-renderer.c:dummy_power_save_page_flip →
meta_onscreen_native_swap_drm_fb implicitly turning off the primary
plane (by destroying the KMS framebuffer assigned to it):
* drmModeObjectSetProperty could return an "Invalid argument" error
between setting a non-empty cursor with drmModeSetCursor(2) and
enabling the primary plane again:
Failed to DPMS: Failed to set connector 69 property 2: Invalid argument
(This was harmless other than the error message, as we always re-set
a mode on the CRTC after setting the DPMS property to on, which
enables the primary plane and implicitly sets the DRM property to on)
* drmModeSetCursor(2) could return an "Invalid argument" error between
setting the DPMS property to on and enabling the primary plane again:
Failed to set hardware cursor (drmModeSetCursor failed: Invalid argument), using OpenGL from now on
[0] E.g. with the amdgpu DC display code.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1240
I noticed my system would fall back to the slow unclipped (and
uncullable) paint path whenever a window touched the left edge of
the screen. Turns out that was a red herring: `use_clipped_redraw` was
simply uninitialized, so clipping/culling was used randomly.
Somehow the compiler failed to notice `use_clipped_redraw` was
uninitialized. Weirdly, as soon as you fix that it starts complaining
that `buffer_age` might be uninitialized, which appears to be wrong. So
we initialize that too, to shut up the compiler warnings/errors.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1323
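In essence the fix is just giving both variables a defined starting
value (the initial values shown here are illustrative):

  gboolean use_clipped_redraw = FALSE;
  int buffer_age = 0;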
In commit 4c1fde9d MetaCullable related code was moved out of
MetaShapedTexture into MetaSurfaceActor. While generally desirable,
this removed drawing optimizations in MetaShapedTexture for partial
redraws. The common case for fully obscured actors was still supposed
to work, but it was now discovered that it actually did not.
This commit reverts parts of 4c1fde9d: it reintroduces clipping
to MetaShapedTexture but leaves all culling and actor related logic
in MetaSurfaceActor.
Thanks to Daniel van Vugt for uncovering the issue.
Fixes https://gitlab.gnome.org/GNOME/mutter/-/issues/850
Fixes https://gitlab.gnome.org/GNOME/mutter/-/issues/1295
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1326
When trying to find a default focus window, the code iterates through a
queue of candidates with a timeout between each candidate. If the window
the current timeout is waiting for gets destroyed, this process just
stops instead of trying the next window in the queue.
This issue was made more likely to be triggered with the previous change
to the closed-transient-no-input-parents-queued-default-focus-destroyed
test due to the introduction of a wait, which can add a delay between
the two destroy commands.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1325