When we pick the frame clock given the associated actor, that frame
clock in fact comes from a picked actor. In order to not end up with
stale frame clocks (which may happen on e.g. hotplugs or monitor layout
changes) or non-optimal frame clocks (which may happen when the parent
used for picking the clock moves to another view), let's listen to
'stage-views-changed' on the actor used for picking the clock too.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1327
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1404
Let's not expose that outside of mutter quite yet; it's not used in
gnome-shell, and to avoid future breakage if it starts to be used, let's
move it to clutter-mutter.h so only mutter and clutter itself can use
it.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1404
This aims to make sure a view and its resources are destroyed when they
should be. Using references might keep certain components (e.g. the
frame clock) alive for too long.
We currently don't take any long-lived references to the stage view
anywhere, so this doesn't matter in practice, but this may change, and
it will be used by a test case that will be added later.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1404
Always force-track the cursor position (so that the X11 backend can keep
it up to date), and if the cursor wasn't part of the sampled
framebuffer when reading pixels into CPU memory, draw it in an extra
pass using cairo after the fact. The cairo-based cursor painting only
happens on the X11 backend, as we otherwise inhibit the hw cursor.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1391
On X11 we won't always receive cursor positions, as some other client
might have grabbed the pointer (e.g. for implementing a popup menu). To
make screen casting show a somewhat correct cursor position, we need to
actively poll the X server about the current cursor position.
We only really want to do this when screen casting or taking a
screenshot, so add an API that forces the cursor tracker to track the
cursor position.
On the native backend this is a no-op, as we always track the cursor
position by default anyway.
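For illustration, a minimal sketch of how a capture path might use such
an API; the meta_cursor_tracker_track_position()/untrack_position()
names are assumptions based on the description above:

  #include <meta/meta-cursor-tracker.h>

  static void
  begin_capture (MetaCursorTracker *cursor_tracker)
  {
    /* Force cursor position tracking; on X11 this polls the server. */
    meta_cursor_tracker_track_position (cursor_tracker);
  }

  static void
  end_capture (MetaCursorTracker *cursor_tracker)
  {
    /* Balance the track call once the screen cast/screenshot is done. */
    meta_cursor_tracker_untrack_position (cursor_tracker);
  }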
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1391
clutter_actor_get_transformed_position() returns the position of the
top-left point of the actor, with the actor transformations applied.
That means that if the actor is rotated 180º, it'll return the "screen"
position of what is now the top-right corner.
Using this to calculate whether the actor is on the screen causes
problems when the actor is transformed.
This patch adds a new function, clutter_actor_get_transformed_extents(),
that returns the transformed bounding rect of the actor.
This new function is used in update_stage_views() so the actor gets
updated correctly; this way rotated actors will be updated if they are
on the screen.
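A rough usage sketch, assuming the graphene_rect_t out-parameter implied
above (actor_intersects_view() is just an illustrative helper):

  #include <clutter/clutter.h>

  static gboolean
  actor_intersects_view (ClutterActor          *actor,
                         const graphene_rect_t *view_rect)
  {
    graphene_rect_t extents;

    /* Transformed bounding rect in stage coordinates; stays correct
     * even when the actor is rotated. */
    clutter_actor_get_transformed_extents (actor, &extents);

    return graphene_rect_intersection (view_rect, &extents, NULL);
  }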
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1386
Make clutter_actor_allocate_preferred_size() convenient to use from
layout managers by not "automatically" honouring the fixed position of
the actor, but instead allowing callers to pass a position to allocate
the actor at.
This way we can move the handling of fixed positions to
ClutterFixedLayout, the layout manager which is responsible for
allocating actors using fixed positions.
This also makes clutter_actor_allocate_preferred_size() more similar to
clutter_actor_allocate_available_size().
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1310
It's currently a bit hard to get the fixed position of an actor. It can
be either done by using g_object_get() with the "fixed-x"/"fixed-y"
properties or by calling clutter_actor_get_position().
Calling clutter_actor_get_position() can return the fixed position, but
it might also return the allocated position if the allocation is valid.
The latter is not the best behavior when querying the fixed position
during an allocation, so introduce a new function
clutter_actor_get_fixed_position() which always gets the fixed position
and returns FALSE in case no fixed position is set.
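A short usage sketch; the float out-parameters mirror the existing
position getters and are an assumption here:

  #include <clutter/clutter.h>

  static void
  print_fixed_position (ClutterActor *actor)
  {
    float x, y;

    if (clutter_actor_get_fixed_position (actor, &x, &y))
      g_message ("fixed position: %.1f, %.1f", x, y);
    else
      g_message ("no fixed position set");
  }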
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1310
With the introduction of the shallow relayout mechanism another small
but severe regression sneaked into our layout machinery: We might
allocate an actor twice during the same allocation cycle, with one
allocation happening using the wrong parent.
This issue happens when reparenting an actor from a NO_LAYOUT parent to
a non-NO_LAYOUT parent; in particular it triggered a bug in gnome-shell
when DND reparents a child from the NO_LAYOUT uiGroup to the overview's
Workspace actor after a drag ended. The reason the issue happens is the
following chain of events:
1. child of a NO_LAYOUT parent queues a relayout, this child is added to
the priv->pending_relayouts list maintained by ClutterStage
2. child is reparented to a different parent which doesn't have the
NO_LAYOUT flag set, another relayout is queued, this time a different
actor is added to the priv->pending_relayouts list
3. the relayout happens and we go through the pending_relayouts list
backwards; that means the correct relayout queued during 2. happens
first, then the old one happens and we simply call
clutter_actor_allocate_preferred_size() on the actor; that allocation
overrides the other, correct one.
So fix that issue by adding a method to ClutterStage which removes
actors from the pending_relayouts list again and call this method as
soon as an actor with a NO_LAYOUT parent is detached from the stage.
With that in place, we can also remove the check whether an actor is
still on stage while looping through pending_relayouts. In case
something else is going wrong and the actor is not on stage,
clutter_actor_allocate() will warn anyway.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1356
When picking which frame clock to use, we traverse up in the actor
hierarchy until a suitable frame clock is found. ClutterTimeline
also listens to the 'stage-views-changed' signal to make sure it's
always attached to the correct frame clock.
However, there is one special situation where neither of them would
work: when the stage doesn't have a frame clock yet, and the actor
of the timeline is outside any stage view. When that happens, the
returned frame clock is NULL, and 'stage-views-changed' is never
emitted by the actor.
Monitor the stage for stage view changes when the frame clock is
NULL.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
An actor may be placed without being on any current stage view; in this
case, to get the ball rolling, walk up the actor tree to find the first
actor where a frame clock can be picked from.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The frame clock owner should be able to explicitly destroy (i.e. make
defunct) a frame clock, e.g. when a stage view is destructed. This is so
that other objects can keep a reference to it without it being kept
around even after it has stopped being usable.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Replace the default master clock with multiple frame clocks, each
driving its own stage view. As each stage view represents one CRTC, this
means we draw each CRTC with its own designated frame clock,
disconnected from all the others.
For example, this means that when using the native backend we will
never need to wait for one monitor to vsync before painting another, so
e.g. with a 144 Hz monitor next to a 60 Hz monitor, everything including
Wayland and X11 applications as well as shell UI will be able to render
at the corresponding monitor refresh rate.
This also changes a warning about missed frames when sending
_NET_WM_FRAME_TIMINGS messages into a debug log entry, as it's expected
that we'll start missing frames e.g. when an X11 window (via Xwayland)
is exclusively within a stage view that was not painted while another
one was, still increasing the global frame clock.
Additionally, this also requires the X11 window actor to schedule
timeouts for emitting _NET_WM_FRAME_DRAWN/_NET_WM_FRAME_TIMINGS events
if the actor wasn't on any stage views, as now we'll only get the frame
callbacks on actors when they actually were painted, while in the past
we'd invoke that vfunc when anything was painted.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/903
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/3
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
We'd emit multiple "presented" signals per frame, one for "sync" and one
for "completion". Only the latter were ever used, and removing the
differentiation eases the avoidance of cogl onscreen framebuffer frame
callback details leaking into clutter.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Right now the stage only has a signal called 'after-paint', which is
not tied to painting but to updating. Change this to offer four signals,
one for each of the four different phases:
* before-update - emitted in the beginning before the actual stage
updating
* before-paint - emitted before painting if there will be any stage
painting
* after-paint - emitted after painting if there was any stage painting
* after-update - emitted as the last step of updating, no matter
whether there was any painting or not
Currently there is only one listener, which should only really have
been called if there was any painting, so no changes to listeners are
needed.
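For example, a hypothetical listener that only cares about actual
painting could connect like this (the exact callback arguments aren't
spelled out above, so the sketch assumes only the stage and user data):

  #include <clutter/clutter.h>

  static void
  on_before_paint (ClutterStage *stage,
                   gpointer      user_data)
  {
    g_message ("stage is about to paint");
  }

  static void
  watch_stage_painting (ClutterStage *stage)
  {
    g_signal_connect (stage, "before-paint",
                      G_CALLBACK (on_before_paint), NULL);
  }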
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The mutexes were used by ClutterTexture's async upload and to match
GDK's mutexes on X11. GDK's X11 connection does not share anything with
Clutter's, we no longer have the GDK Clutter backend, and we have
already removed ClutterTexture, so let's remove these mutexes as well.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
A frame clock dispatch doesn't necessarily result in a frame drawn,
meaning we'll end up in the idle state. However, it may be the case that
something still requires another frame, and will in that case have
requested one to be scheduled. In order to not deadlock, try to
reschedule directly after dispatching if another frame was requested and
we ended up in the idle state.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The frame clock wouldn't be usable yet, but nonetheless, add API to
get the frame clock best suited for driving the actor. Currently this
translates to the fastest one, but that might change.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The frame clock is meant to eventually drive the painting of the view,
in contrast to the master frame clock painting every view on the stage.
Right now it's a useless placeholder.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The native backend had a plain counter, and the X11 backend used the
CoglOnscreen of the screen; change it into a plain counter in
ClutterStageCogl. This also moves the global frame count setting to the
frame info constructor.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
We currently have mutter set a global frame counter on the frame info in
the native backend, but in order to do this from clutter, change the
frame info construction from being done implicitly when swapping
buffers to having the caller create the frame info and pass it to
the swap buffers call.
While this commit doesn't introduce any changes other than the API, the
intention is to later let the caller pass its own state
(e.g. the global frame count) along with the frame info.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
We had time unit conversion helpers (e.g. us2ms(), ns2us(), etc) in
multiple places. Clean that up by moving them all to a common file. That
file is clutter-private.h, as it's accessible both from clutter/ and
from src/.
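As a rough illustration of the kind of helpers being consolidated (the
actual definitions in clutter-private.h may differ in form, e.g. macros
vs. inline functions):

  #include <glib.h>

  static inline gint64
  us2ms (gint64 us)
  {
    return us / 1000;
  }

  static inline gint64
  ns2us (gint64 ns)
  {
    return ns / 1000;
  }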
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Currently unused, but the intention is to use it as an initial refresh
rate for the frame clock associated with the stage view. It defaults to
60 Hz if nothing sets it, but the native backend sets it to the refresh
rate of the associated CRTC's current mode.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Without an associated actor, or explicit frame clock set, in the future
a timeline will not know how to progress, as there will be no single
frame clock to assume is the main one. Thus, deprecate the construction
of timelines without either an actor or frame clock set.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The association is inactive, as in it doesn't do anything yet, but it
will later be used to determine what frame clock should be driving the
timeline by looking at what stage view the actor is currently on.
This also adapts subtypes (ClutterPropertyTransition) to have
constructors that take an actor, just like the new ClutterTimeline
constructor.
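A minimal sketch of the actor-based construction; the
clutter_timeline_new_for_actor() name and the millisecond duration
argument are assumptions based on the description:

  #include <clutter/clutter.h>

  static ClutterTimeline *
  create_timeline_for_actor (ClutterActor *actor)
  {
    /* Associate the timeline with an actor so the frame clock can
     * later be resolved from the actor's stage view. */
    return clutter_timeline_new_for_actor (actor, 250);
  }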
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
This is so something outside of clutter-stage.c (i.e.
clutter-stage-view.c) can eventually do various things
_clutter_stage_do_update() does now while not redrawing the whole stage.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Devices are updated (repicked) as part of the stage update phase, as
their stacking, position and transform might have changed since the
last update.
The redraw clip was used to avoid unnecessary updating of devices, if
the device in question had its position outside of the redraw clip. If
the device coordinate was outside of the redraw clip, what was
underneath the device couldn't have changed.
What it failed to do, however, was to update devices if a relayout had
happened in the same update, as it checked the relayout state before the
relayout was actually attempted, effectively delaying the device update
to the next update.
This commit changes the behavior to always update the device using the
complete redraw clip caused by all relayouts of the same update that the
device update happens in.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
We'd check if there was any queued redraw on the stage, but this is
inappropriate for two reasons:
1) A monitor and area screen cast source only cares about damage on a
subset of the stage.
2) The global pending-redraw is going away when paint scheduling will be
more view centric.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
This will allow anyone to finish any queued redraws, making their
corresponding damage end up being posted to the stage views. This will
allow unit tests to check whether, so far, any updates are queued on a
particular stage view.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Add API to add and remove ClutterTimeline objects to the frame clock.
Just as with the legacy master clock, having a timeline added to the frame
clock causes the frame clock to continuously reschedule updates until
the timeline is removed.
ClutterTimeline is adapted to be able to be driven by a
ClutterFrameClock. This is done by adding a 'frame-clock' property, and
if set, the timeline will add and remove itself to the frame clock
instead of the master clock.
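A minimal sketch of constructing a timeline driven by a specific frame
clock via the new property; "duration" is the existing ClutterTimeline
property, and whether ClutterFrameClock is visible from public headers
at this point is left aside:

  #include <clutter/clutter.h>

  static ClutterTimeline *
  create_clock_driven_timeline (ClutterFrameClock *frame_clock)
  {
    /* The timeline adds/removes itself to/from this frame clock
     * instead of the master clock. */
    return g_object_new (CLUTTER_TYPE_TIMELINE,
                         "frame-clock", frame_clock,
                         "duration", 200,
                         NULL);
  }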
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The timestamp comes from the GSource, meaning it's a more accurate
representation of when the frame started to be dispatched compared to
getting the current time in any callback.
Currently unused.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
In certain scenarios, the frame clock needs to handle present feedback
long before the assumed presentation time happens. To avoid scheduling
the next frame too soon, avoid scheduling one if we were presented
within half a frame interval of the last expected presentation time.
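Purely illustrative arithmetic for the guard described above; the names
are hypothetical and not the actual ClutterFrameClock fields:

  #include <glib.h>

  static gboolean
  presented_near_expected_time (gint64 presentation_time_us,
                                gint64 last_expected_presentation_time_us,
                                gint64 refresh_interval_us)
  {
    gint64 delta_us =
      ABS (presentation_time_us - last_expected_presentation_time_us);

    /* If we were presented within half a frame interval of the
     * expected time, skip scheduling yet another frame. */
    return delta_us < refresh_interval_us / 2;
  }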
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
This adds a frame clock that is currently unused apart from tests. It
just reschedules given a refresh rate, based on presentation time
feedback.
The aim is for it to be used with a single frame listener (stage views)
that will notify when a frame is presented. It does not aim to handle
multiple frame listeners; instead, it's assumed that different frame
listeners will use their own frame clocks.
Also add a test that verifies that the basic functionality works.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
When a transition is created for the allocation change, it will delay
the new allocation box getting set depending on transition details.
This, however, means that e.g. the 'needs_allocation' flag never gets
cleared if a transition is created, causing other parts of the code to
get confused thinking it didn't pass through a layout step before paint.
Fix this by calling clutter_actor_allocate_internal() with the current
allocation box if a transition was created, so that we properly clear
the 'needs_allocation' flag.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1345
Since we now have the necessary infrastructure to get notified about
changes to the absolute transformation matrix, we can also invalidate
the stage-views list on updates to this matrix.
So rename absolute_allocation_changed() to absolute_geometry_changed()
to make it clear this function is not only about allocations, and call
that function recursively for all children on changes to the
transformation matrix, too.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1343
If we want to invalidate the stage-views list reliably on changes to
the actors' transformation matrices, we also need to get notified about
changes to the custom transformations applied using the
apply_transform() vfunc.
So provide a new API that allows invalidating the transformation matrix
for actors implementing custom transformations, too. This in turn allows
us to cache the matrix applied using the apply_transform() vfunc by
moving responsibility of keeping track of the caching from
clutter_actor_real_apply_transform() to
_clutter_actor_apply_modelview_transform().
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1343
For ClutterText, the resource scale the text is drawn with affects the
size of the allocation: ClutterText will choose a font scale based on
the resource scale, and that font scale can lead to a slight difference
in size compared to the unscaled font.
We currently handle that by queuing a relayout inside the
"resource-scale-changed" signal handler. This solution is a bit
problematic though since it will take one more allocation cycle until
the allocation is actually updated after a scale-change, so the actor is
painted using the wrong allocation for one frame.
Also the current solution can lead to relayout loops in a few cases, for
example if a ClutterText is located near the edge on a 1x scaled monitor
and is moved to intersect a 2x scaled monitor: Now the resource scale
will change to 2 and a new allocation box is calculated; if this
allocation box is slightly smaller than the old one because of the new
font scale, the allocation won't intersect the 2x scaled monitor again
and the resource scale switches back to 1. Now the allocation gets
larger again and intersects the 2x scaled monitor again.
This commit introduces a way to properly support those actors: In case
an actor's resource scale might affect its allocation, it should call
the private function clutter_actor_queue_immediate_relayout(). This will
make sure the actor gets a relayout before the upcoming paint happens
after every resource scale change. Also, potential relayout loops can
be handled by the actors themselves using a "phase" argument that's
passed to implementations of the calculate_resource_scale() vfunc.
The new API is private because resource scales are not meant to be used
in a way where the scale affects the allocation. With ClutterText and
the current behavior of Pango, that can't be avoided though, so we need
it anyway.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
Since we now always return a resource scale, we can remove the boolean
return value from clutter_actor_get_resource_scale() and
_clutter_actor_get_real_resource_scale(), and instead simply return the
scale.
While at it, also remove the underscore from the
_clutter_actor_get_real_resource_scale() private API.
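A short sketch of the simplified call; outside the paint cycle the value
may still be a guess, as the surrounding commits describe:

  #include <clutter/clutter.h>

  static void
  load_resources (ClutterActor *actor)
  {
    /* No boolean return to check anymore; a scale is always returned. */
    float resource_scale = clutter_actor_get_resource_scale (actor);

    g_message ("loading resources for scale %f", resource_scale);
  }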
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
Now that ClutterActor has a convenient API for getting the stage views
an actor is presented on, we can remove a large part of the code for
resource-scale calculation and instead rely on the stage-views list.
The way this works is a bit different from the old resource scales:
clutter_actor_get_resource_scale() always returns a scale, but this
value is only guaranteed to be correct when called from a vfunc_paint()
implementation, in all other cases the value is guessed using the scale
of the parent actor or the last valid scale. Now in case the value
previously reported by clutter_actor_get_resource_scale() turns out to
be wrong, "resource-scale-changed" will be emitted before the next paint
and the actor has a chance to update its resources.
The general idea behind this new implementation is for actors which only
need the scale during painting to continue using
clutter_actor_get_resource_scale() as they do right now, and for actors
which need the resource scale on other occasions, like during size
negotiation, to use the scale reported by
clutter_actor_get_resource_scale() but also listen to the
"resource-scale-changed" signal to eventually redo the work using the
correct scale.
The "guessing" of the scale is done with the intention of always giving
actors a scale to work with so they don't have to fall back to a scale
value the actor itself has to define, and also with the intention of
emitting the "resource-scale-changed" signal as rarely as possible, so
that when an actor is newly created, it won't have to load its resources
multiple times.
The big advantage this has over the old resource scales is that it's now
safe to call clutter_actor_get_resource_scale() from everywhere (before,
calling it from size negotiation functions would usually fail). It will
also make it a lot easier to use the resource scale for complex cases
like ClutterText without risking to get into relayout loops.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
Add private API to ClutterBackend to set a fallback resource scale
available to Clutter. This API will be used for "guessing" the
resource-scale of ClutterActors in case the actor is not attached to a
stage or not properly positioned yet.
We set this value from inside mutter's MetaRenderer while creating new
stage-views for each logical monitor. This makes it possible to set the
fallback scale to the scale of the primary monitor, which is the monitor
where most ClutterActors are going to be positioned.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
We're going to refactor resource scales, making the notification of
changes to the resource scale a lot more important than it is right now
(we won't guarantee queried scales are correct outside the paint cycle
anymore).
Having a separate signal/vfunc for this will make the difference between
the new clutter_actor_get_resource_scale() API (which can return a
guessed value) and the notification of changes to the resource scale
(which will be guaranteed to return an up-to-date value) more obvious.
So replace the "resource-scale" property of ClutterActor with a
"resource-scale-changed" signal that's emitted when the resource scale
is recalculated.
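A hedged sketch of a listener redoing its work when the scale changes,
using the simplified getter from the commits above (callback arguments
beyond the actor are an assumption):

  #include <clutter/clutter.h>

  static void
  on_resource_scale_changed (ClutterActor *actor,
                             gpointer      user_data)
  {
    float scale = clutter_actor_get_resource_scale (actor);

    g_message ("recreating resources at scale %f", scale);
  }

  static void
  watch_resource_scale (ClutterActor *actor)
  {
    g_signal_connect (actor, "resource-scale-changed",
                      G_CALLBACK (on_resource_scale_changed), NULL);
  }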
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
ClutterBoxLayout calculates the preferred size of the opposite
orientation (so for example the height if the orientation is horizontal)
by getting the preferred size of the real orientation first, and then
the preferred size of the opposite orientation, using the other size as
for_width/height when doing the request.
Right now, for non-homogeneous layouts this for_width/height does not
adjust for the spacing set on the box layout. This leads to children
being passed a slightly larger for_width/height, which in case of
ClutterText might cause the line to not wrap when it actually should.
This in turn means we can end up with an incorrect preferred size for
the opposite orientation, leading to a wrong allocation.
So fix that and adjust for the spacing just as we do for homogeneous
layouts by subtracting the total spacing from the available size that is
distributed between children.
This fixes the wrong height of the checkbox label reported in
https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/2574.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1333
The property is deprecated and the current implementation simply
redirects it to ClutterActor::background-color, so remove it.
Also update the tests to set the background color directly.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1332
ClutterStage is the one and only subclass of ClutterGroup, but
it overrides basically everything specific to ClutterGroup to
mimic a ClutterActor. What a waste!
Subclass ClutterActor directly and remove all the now useless
vfunc overrides from ClutterStage. Adapt CallyStage to subclass
CallyActor as well.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1332
It is deprecated in favor of the 'z-position' property, and
the implementation itself redirects to the z-position, so
just drop it and replace all get|set_depth calls with their
z-position counterparts.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1332
I noticed my system would fall back to the slow unclipped (and
uncullable) paint path whenever a window touched the left edge of
the screen. Turns out that was a red herring. Just that
`use_clipped_redraw` was uninitialized so clipping/culling was used
randomly.
So the compiler failed to notice `use_clipped_redraw` was uninitialized.
Weirdly, as soon as you fix that it starts complaining that `buffer_age`
might be uninitialized, which appears to be wrong. So we initialize that
too, to shut up the compiler warnings/errors.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1323
The ClutterBindConstraint will change the preferred size an actor
reports so it returns the same size as the source actor in some cases.
This behavior was introduced recently with 4f8e518d.
This can lead to infinite loops in case the source actor is a parent of
the actor the BindConstraint is attached to, that's because calling
get_preferred_size() on the source will recursively call
get_preferred_size() on the actor again.
So to avoid those loops, check if the source is a parent of the actor
we're attached to and don't update the preferred size in that case.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1282
For ClutterClones we need to apply a scale to the texture of the clone
to ensure the painted texture of the source actor actually fits the
allocation of the clone. We're doing this using the transformation
matrix instead of using the scale_x/scale_y properties of ClutterActor
to allow users to scale ClutterClones using that API independently.
Now it's quite a bad idea to get the allocation boxes for calculating
that scale using clutter_actor_get_allocation_box(), since that method
will internally do an immediate relayout of the stage in case the actor
isn't allocated. Another side effect of that approach is that it makes
it impossible to invalidate the transform (which will be needed when we
start caching those matrices) properly.
So since we eventually allocate both the source actor and the clone
ourselves anyway, we can simply use the allocation box inside
clutter_clone_allocate() (which is definitely updated and valid at that
point) to calculate the scale factor.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1181
It seems wrong to use the scale factor of the X axis on the Z axis and
it looks like this has been accidentally changed in commit 570fa3f044.
So use a factor of 1.0 instead to not scale the Z axis at all because
the layout machinery only works in X and Y coordinates.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1181
There are cases where a layout manager used by an actor also wants to
return a custom size when the actor has no children, for example in case
the layout manager requests a fixed size. This is currently impossible
because we only query the layout manager when calculating the preferred
size if the actor has children.
So fix that and also use the layout manager's size negotiation
functions in case the actor has no children.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1322
The size of the buffer the texture will be written to by
paint_to_buffer() is determined based on
meta_screen_cast_area_stream_src_get_specs() which uses roundf() to
calculate the width and height after scaling. Because the size of the
texture to be written to that buffer is calculated using ceilf(), it
might exceed the allocated buffer when using fractional scaling.
In 3.36 paint_to_buffer() is used from capture_view() which also uses
roundf() to allocate its buffer. Here this leads to a memory corruption
resulting in a crash when taking screenshots of an area.
Fixes https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/2842
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1320
The modifier state of the input device is supposed to be set to the
newest state, while the modifier state detail of the event is set to the
last state before the event (so not including the changes triggered by
the event).
So since the modifier state of the event is the last state anyway, the
state of the ClutterInputDevice is supposed to be set by the backend and
not by the stage while queuing the event, so stop setting the state
here.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1275
Make the clutter_input_device_get_actor() API public and remove
clutter_input_device_get_pointer_actor() in favour of the new function.
This allows also getting the "pointer" actor for a given touch sequence,
not only for real pointer input devices like mice.
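A minimal usage sketch; the ClutterEventSequence parameter for touch
points is implied by the description above:

  #include <clutter/clutter.h>

  static ClutterActor *
  actor_under_device (ClutterInputDevice   *device,
                      ClutterEventSequence *sequence)
  {
    /* Pass NULL as the sequence for plain pointer devices; pass a
     * touch sequence to resolve the actor under that touch point. */
    return clutter_input_device_get_actor (device, sequence);
  }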
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1275
Switch from clutter_seat_list_devices() to the new peek_devices() method
of ClutterSeat in cases where we're only looping through the returned
list without manipulating it. This way we don't have to unnecessarily
copy around the list of devices.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1275
Add a method to ClutterSeat that allows peeking the list of input
devices, allowing looping through devices a bit faster. The API is left
private so we can make use of peeking the GList internally without
having to expose any details to the outside, which would mean we'd
eventually have to stick with a GList forever to avoid breaking API.
Since we now have the peek_devices() API internally, we can implement
ClutterSeat's public list_devices() API using g_list_copy() on the list
returned by peek_devices().
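A rough sketch of iterating the peeked list (the exact return type and
constness of the private peek_devices() API are assumptions):

  #include <clutter/clutter.h>

  static void
  log_devices (ClutterSeat *seat)
  {
    const GList *l;

    /* Borrowed list; do not modify or free it. */
    for (l = clutter_seat_peek_devices (seat); l; l = l->next)
      {
        ClutterInputDevice *device = l->data;

        g_message ("device: %s",
                   clutter_input_device_get_device_name (device));
      }
  }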
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1275
While it's strongly discouraged, it is possible to queue a new relayout
of an actor in the middle of an allocation cycle, we warn about it but
don't forbid it.
With the introduction of the "shallow relayout" API, our handling of
those relayouts silently changed: Before introducing "shallow
relayouts", we'd handle them on the next stage update, but with the
priv->pending_relayouts hashtable and the
priv->pending_relayouts_version counter, we now do them immediately
during the same allocation cycle (the counter is increased by 1 when
queuing the relayout and we switch to a new GHashTableIter after
finishing the current relayout, which means we'll now do the newly
queued relayout).
This change in behavior was probably not intended and wasn't mentioned
in the commit message of 5257c6ecc2, so
switch back to the old behavior, which is more robust in preventing
allocation-loops. To do this, use a GSList instead of GHashTable for the
pending_relayouts list, and simply steal that list before doing the
relayouts in _clutter_stage_maybe_relayout().
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1267
ClutterAlignConstraint currently assumes the source actor is positioned
in the same coordinate system as the actor it's attached to and
automatically offsets the adjusted allocation by the origin of the
source actor.
This behavior is only valid though in case the source actor is a sibling
of the constraint actor. If the source actor is somewhere else in the
actor tree, the behavior gets annoying because the constraint actor is
offset by (seemingly) random positions.
To fix this, stop offsetting the constraint actor's allocation by the
position of the source.
To still make it possible to align the constraint actor's origin with
the origin of the source, no longer override the origin of the
allocation in the AlignConstraint. This allows users to align the origin
using a
BindConstraint, binding the actor position to the position of the
source, which is more flexible and also more elegant.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/737
Add a new pivot-point property to the ClutterAlignConstraint, similar to
the pivot point used by ClutterActor, defining the point in the
constraint actor around which the aligning is applied to the actor.
Just as the ClutterActor property, this property is defined using a
GraphenePoint.
By default this property remains set to (-1, -1) and the actor
will always be aligned inside the source actor, preserving the existing
behavior of ClutterAlignConstraint.
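A small sketch of setting the new property (set here via g_object_set();
a dedicated setter may exist as well):

  #include <clutter/clutter.h>

  static void
  align_around_center (ClutterAlignConstraint *align)
  {
    graphene_point_t pivot = GRAPHENE_POINT_INIT (0.5f, 0.5f);

    /* Align around the center of the constrained actor instead of
     * keeping it inside the source. */
    g_object_set (align, "pivot-point", &pivot, NULL);
  }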
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/737
Now that we have a proper way to mark our allocation as uninitialized,
make use of that and only disallow implicit transitions of the
"allocation" property if that is the case.
This fixes a bug where easing the allocation of an actor is impossible
when someone queued a relayout on it (or a child of it) before.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1290
We currently initialize the ClutterActorBox of an actor's allocation to
zero, but there's a difference between a valid zero-allocation and an
actor having never been allocated. Currently it's impossible for us to
detect the latter case in a reliable way and we use the needs_allocation
flag for this, which may also be set in other situations.
So initialize the allocation of actors to the newly added UNINITIALIZED
ClutterActorBox, which will make it easier to detect whether an actor
already got its initial allocation.
This also fixes another issue: Actors which get allocated a (valid)
zero allocation will now notify the "allocation" property in this case.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1290
Add support for an artificial UNINITIALIZED marking for ClutterActorBox,
done by setting the box's origin to Infinity and its size to -Infinity.
That is a value that's considered an invalid allocation by Clutter and
which can never be set by sane code.
This will allow setting the allocation of ClutterActors to an
UNINITIALIZED box when creating actors or when removing them from the
scenegraph and makes it possible to explicitly detect uninitialized
allocations, which is useful in a few cases.
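Illustrative only: what the marking described above amounts to in a
check (the real helpers live inside Clutter and may be named and
implemented differently):

  #include <clutter/clutter.h>
  #include <math.h>

  static gboolean
  actor_box_looks_uninitialized (const ClutterActorBox *box)
  {
    /* Origin at +Infinity and a size of -Infinity can never come from
     * sane code, so it works as an "uninitialized" marker. */
    return isinf (box->x1) && box->x1 > 0 &&
           isinf (box->x2) && box->x2 < 0;
  }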
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1290
We currently go through the whole tree of mapped actors on every paint
cycle to update the stage views actors are on. Even if no actors need
updating of their stage views, traversing the actor tree is still quite
expensive and shows up when using a profiler.
So tone down the amount of full-tree traversals we have to do on every
paint cycle and only traverse a subtree if it includes an actor which
actually needs updating of its stage views.
We do that by setting the `needs_update_stage_views` flag to TRUE
recursively for all parents up to the stage when the stage-views list of
an actor gets invalidated. This way we end up updating a few more actors
than necessary, but can avoid searching the whole actor tree for actors
which have `needs_update_stage_views` set to TRUE.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1196
Add a new signal that's emitted when the stage views an actor being
painted on have changed, "stage-views-changed". For example this signal
can be helpful when tracking whether an actor is painted on multiple
stage views or only one.
Since we must clear the stage-views list when an actor leaves the stage
(actors that aren't attached to a stage don't get notified about the
stage views being changed/replaced), we also emit the new signal when an
actor gets detached from the stage (otherwise there would be an edge
case where no signal is emitted but it really should: An actor is
visible on a stage view, then detached from the stage, and then attached
again and immediately moved outside the view).
Also skip the comparison of the old stage-views list and the new one if
nobody is listening to the signal to save some resources.
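For example, a hypothetical listener (callback arguments beyond the
actor are an assumption) could react to the change like this:

  #include <clutter/clutter.h>

  static void
  on_stage_views_changed (ClutterActor *actor,
                          gpointer      user_data)
  {
    GList *views = clutter_actor_peek_stage_views (actor);

    g_message ("now painted on %u stage view(s)", g_list_length (views));
  }

  static void
  watch_stage_views (ClutterActor *actor)
  {
    g_signal_connect (actor, "stage-views-changed",
                      G_CALLBACK (on_stage_views_changed), NULL);
  }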
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1196
There are certain rendering techniques and optimizations, for example
the unredirection of non-fullscreen windows, where information about the
output/stage-view an actor is on is needed to determine whether the
optimization can be enabled.
So add a new method to ClutterActor that allows listing the stage-views
the actor is being painted on: clutter_actor_peek_stage_views()
With the way Clutter works, the only point where we can reliably get
this information is during or right before the paint phase, when the
layout phase of the stage has been completed and no more changes to the
actors transformation matrices happen. So to get the stage views the
actor is on, introduce a new step that's done on every master clock tick
between layout and paint cycle: Traversing through the actor tree and
updating the stage-views the mapped actors are going to be painted on.
We're doing this in a separate step instead of inside
clutter_actor_paint() itself for a few reasons: It keeps the code
separate from the painting code, making profiling easier and issues
easier to track down (hopefully), it allows for a new
"stage-views-changed" signal that doesn't interfere with painting, and
finally, it will make it very easy to update the resource scales in the
same step in the future.
Currently, this list is only invalidated on allocation changes of
actors, but not on changes to the transformation matrices. That's
because there's no proper API to invalidate the transformation matrices
ClutterActor implementations can apply through the apply_transform()
vfunc.
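A minimal sketch of the kind of check the unredirection example above
needs (reliable only during or right before paint, as explained):

  #include <clutter/clutter.h>

  static gboolean
  actor_is_on_single_stage_view (ClutterActor *actor)
  {
    /* Borrowed list of ClutterStageView pointers; do not free. */
    GList *views = clutter_actor_peek_stage_views (actor);

    return views != NULL && views->next == NULL;
  }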
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1196
When the stage views the stage is shown on are changed, ClutterStage
currently provides a clutter_stage_update_resource_scales() method
that allows invalidating the resource scales of all actors. With the new
stage-views API that's going to be added to ClutterActor, we also need a
method to invalidate the stage-views lists of actors in case the stage
views are rebuilt and fortunately we can re-use the infrastructure for
invalidating resource scales for that.
So since resource scales depend on the stage views an actor is on,
rename clutter_stage_update_resource_scales() and related methods to
clutter_stage_clear_stage_views(), which also covers resource scales.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1196
While the layout manager of a ClutterActor does get properly unset when
destroying an actor, we currently forget to disconnect the
"layout-changed" signal from it.
So do that, and while at it, also switch to using the signal id for
disconnecting from the signal instead of
g_signal_handlers_disconnect_by_func(), which caused problems before
because it might traverse the signal handler list.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1281
We currently are confusing g_param_spec_enum and g_param_spec_flags for
the offscreen-redirect property of ClutterActor. Since it's actually a
flag, make it a flag everywhere.
Fun fact: This was already partly done with
d7814cf63e, but that commit missed the
setter.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1292
Just like the ClutterBindConstraint, the ClutterAlignConstraint should
listen to "queue-relayout" of its source actor, not
"notify::allocation". That's because the latter will queue a relayout
during an allocation cycle and might cause relayout loops.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1296
Hiding a compositor stage is not something that's really supported, but
will still be used by tests, to get closer to a "fresh" stage for each
test case, when the tests eventually start using the mutter provided
stage.
It'll use that stage simply because creating standalone stages isn't
supported.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1289
The script parser only included G_PARAM_CONSTRUCT_ONLY parameters when
constructing objects. This causes issues if an object requires a
parameter to be set during construction, but may also change after. Fix
this by including G_PARAM_CONSTRUCT parameters when constructing script
objects as well.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1289
Start following the convention used in ClutterFrameClock by including
the meaning as well as the time granularity in the variable name. The
constructor takes the intended duration of the constructed timeline in
milliseconds, so call the constructor argument `duration_ms`. This is
done in preparation for adding more constructors.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1289
For actors which don't have needs_allocation set to TRUE and where the
new allocation wouldn't be different from the old one, the allocate()
vfunc doesn't have to be called. We still did this in case a parent
actor was moved though (so the absolute origin changed), because we
needed to propagate the ABSOLUTE_ORIGIN_CHANGED allocation flag down to
all actors.
Since that flag is now removed and got replaced with a private property,
we can simply notify the children about the absolute allocation change
using the existing infrastructure and safely stop allocating children at
this point.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1247
With commit 0eab73dc2e we introduced an optimization of not doing
allocations for actors which are hidden. This broke the propagation of
absolute origin changes to hidden actors, so if an actor is moved while
its child is hidden, the child will not get
priv->needs_compute_resource_scale set to TRUE, which means the resource
scale won't be updated when the child gets mapped and shown again.
Since we now have priv->absolute_origin_changed, we can simply check
whether that is TRUE for our parent before bailing out of
clutter_actor_allocate() and if it is, notify the whole hidden sub-tree
about the absolute origin change.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1247
Since clutter_stage_set_viewport() is only used inside clutter-stage.c
anyway, we can make it a static method. Also we can remove the x and y
arguments from it since they're always set to 0 anyway.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1247
When getting the last allocation using
clutter_actor_get_allocation_box(), Clutter will do an immediate
relayout of the stage in case an actor has an invalid allocation. Since
the allocation is always invalid when the allocate() vfunc is called,
clutter_stage_allocate() always forces another allocation cycle.
To fix that, stop comparing the old allocation to the new one to find
out whether the viewport changed, but instead use the existing check in
_clutter_stage_set_viewport() and implement the behavior of rounding the
viewport to the nearest int using roundf() (which should behave just as
CLUTTER_NEARBYINT()) since we're passing around floats anyway.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1247
When manipulating the allocation of a ClutterActor from an allocate()
vfunc override, clutter_actor_set_allocation() is used to let Clutter
know about the changes.
If the actor's allocation or its absolute origin did not change before
that, this can also affect the actor's absolute_origin_changed property
used by the children to detect changes to their absolute position.
So fix this bug (which luckily didn't seem to affect us so far) and set
priv->absolute_origin_changed to TRUE in case the origin changes inside
clutter_actor_set_allocation_internal(). Since this function is always
called when our allocation changes, we no longer need to update
absolute_origin_changed in clutter_actor_allocate() now.
Since a change to the absolute origin always affects the resource scale,
too, we also need to move that check from clutter_actor_allocate() here
to make sure we update the resource scale.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1247
Since the introduction of the shallow relayout functionality it's
possible to start an allocation cycle at any point in the tree, not only
at the stage. Now when starting an allocation at an actor that's not the
stage, we'd still look at the absolute_origin_changed property of this
actor's parent, which might still be set to TRUE from the parent's last
allocation.
So avoid using the parent's absolute_origin_changed property from the
last allocation in case a shallow relayout is being done and always
reset the absolute_origin_changed property to FALSE after the allocation
cycle.
This broke with the removal of the ABSOLUTE_ORIGIN_CHANGED
ClutterAllocationFlag that was done in commit dc8e5c7f.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1247
This cannot be made to work reliably. Some factoids:
- Internal devices may be connected via USB.
- The ACPI spec provides the _PLD (Physical location of device) hook to
determine how a USB device is connected, with an anecdotal success
rate. Internal devices may be seen as external and vice versa; there is
also an "unknown" value that is widely used.
- There may be non-USB keyboards, the old "AT Translated Set 2 Keyboard"
interface does not change on hotplugging.
- Libinput has an internal series of quirks to classify keyboards as
internal or external, also with an "unknown" value.
These heuristics are kinda hopeless to get right by our own hand. Drop
this external keyboard detection in the hope that there will be something
more deterministic to rely on in the future (e.g. the libinput quirks
made available to us directly or indirectly).
Fixes: https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/2378
Related: https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/2353
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1277
In clutter_text_queue_redraw_or_relayout() we check whether the size
of the layout has changed and queue a relayout if it did, otherwise we
only queue a redraw and save some resources.
The current check for this also queues a redraw if the actor has no
valid allocation. That seems right at first glance since the actor
will be allocated anyway, but we actually want to call
clutter_actor_queue_relayout() again here because that also invalidates
the size cache of the actor which might have been updated and marked
valid in the meantime.
So make sure the size cache is always properly invalidated after the
size of the layout changed and also call clutter_actor_queue_relayout()
in case the actor has no allocation.
This fixes a bug where getting the preferred width of a non-allocated
ClutterText, then changing the string of the ClutterText, and then
getting the preferred width again would return the old cached width (the
width before we changed the string).
The only place where this bug is currently happening is in the overview,
where we call get_preferred_width() on the unallocated ClutterText of
the window clone title: When the window title changes while the
ClutterText is unallocated the size of the title is going to be wrong
and the text might end up ellipsized or too large.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1150
It's effectively used by mutter by abusing a ClutterTimeline to schedule
updates. Timelines are not really suited for the places where that is
done, as it is really just about getting a single new update scheduled
whenever suitable, so expose the API so we can use it directly.
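A trivial usage sketch of the now-exposed call:

  #include <clutter/clutter.h>

  static void
  request_stage_update (ClutterStage *stage)
  {
    /* Schedule a single new update directly instead of abusing a
     * ClutterTimeline for it. */
    clutter_stage_schedule_update (stage);
  }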
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1218
We could call clutter_stage_schedule_update() and it wouldn't actually
schedule anything, as the master frame clock only tries to reschedule if
1) there is an active timeline, 2) there are pending relayouts, 3) there
are pending redraws, or 4) there are pending events. Thus, a call to
clutter_stage_schedule_update() didn't have any effect if it was called
at the wrong time.
Fix this by adding a boolean state "needs_update" to the stage, set on
clutter_stage_schedule_update() and cleared on
_clutter_stage_do_update(), that will make the master clock reschedule
an update if it is TRUE.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1218
We need to use the framebuffer of the view instead of the onscreen
framebuffer when painting the damage region, otherwise the redraw clips
on rotated monitors won't be shown correctly.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1237
Compare, tile by tile, whether actual damage actually changed any
pixels. While this requires mmap():ing DMA buffers and comparing their
content, we should only ever use shadow buffers when we're using the
software renderer, meaning mmap() is cheap as it doesn't involve any
downloading.
This works by making the shadow framebuffer double buffered, while
keeping track of damage history. When we're about to swap the onscreen
buffer, we compare what part of the posted damage actually changed,
record that into a damage history, then, given the onscreen buffer age,
collect all actual damage for that age. The intersection of these tiles,
and the actual damage, is then used when blitting the shadow buffer to
the onscreen framebuffer.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1157
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1237
Move the damage history tracking to a new ClutterDamageHistory helper
type. The aim is to be able to track damage history elsewhere without
reimplementing the data structure and tracking logic.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1237
This fixes the last "copy everything" paths when clutter doesn't
directly paint onto the onscreen framebuffer. It adds a new hook into
the stage view that is called before the buffer swap, as at this point
we have the swap buffer damage regions ready, which correspond to the
regions we must blit according to the damage reported to clutter.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1237