Given that CoglMatrix is simply a typedef to graphene_matrix_t, we can
remove all the GType machinery and reuse Graphene's.
Also remove the clutter-cogl helper, and cogl_matrix_to_graphene_matrix()
which is now unused.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1439
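As an illustration (not taken from the mutter tree), a boxed property that previously used a Cogl-specific GType can rely on graphene's GObject integration directly; the property name here is hypothetical:

  #include <graphene-gobject.h>
  #include <glib-object.h>

  static GParamSpec *
  example_matrix_pspec (void)
  {
    /* GRAPHENE_TYPE_MATRIX comes from graphene-gobject.h and replaces
     * any Cogl-provided matrix GType */
    return g_param_spec_boxed ("transform",          /* hypothetical name */
                               "Transform",
                               "Transformation matrix",
                               GRAPHENE_TYPE_MATRIX,
                               G_PARAM_READWRITE);
  }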
It turns out to be quite easy to invert the transform, and doing that
on ClutterActor level means we can actually think about removing
CoglMatrix entirely and using graphene_matrix_t everywhere.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1439
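For illustration, inverting a transform with graphene is a single call; this sketch is not the actual ClutterActor code:

  #include <graphene.h>
  #include <stdbool.h>

  static bool
  untransform_point (const graphene_matrix_t *transform,
                     const graphene_point_t  *transformed,
                     graphene_point_t        *out_point)
  {
    graphene_matrix_t inverse;

    /* graphene_matrix_inverse() returns false for singular matrices */
    if (!graphene_matrix_inverse (transform, &inverse))
      return false;

    graphene_matrix_transform_point (&inverse, transformed, out_point);
    return true;
  }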
CoglMatrix doesn't have a 1:1 mapping to graphene's functions, and
sometimes it's just not worth adding wrappers for them. It is easier
to expose the internal graphene_matrix_t and let callers use it
directly.
Add new cogl_matrix_get_graphene_matrix() helper function, and
simplify Clutter's matrix progress function.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1439
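A minimal sketch of what the simplified matrix progress function boils down to once the internal graphene_matrix_t is accessible; this is illustrative rather than the exact Clutter code:

  #include <graphene.h>

  static void
  matrix_progress (const graphene_matrix_t *initial,
                   const graphene_matrix_t *final,
                   double                   progress,
                   graphene_matrix_t       *res)
  {
    /* graphene decomposes both matrices and interpolates them */
    graphene_matrix_interpolate (initial, final, progress, res);
  }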
Rename cogl_matrix_get_array() to cogl_matrix_to_float(), and
make it copy the floats to an out argument instead of returning
a pointer to the cast CoglMatrix struct.
The naming change is specifically made to match graphene's,
and ease the transition.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1439
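The copy-out pattern mirrors graphene_matrix_to_float(); the sketch below uses the graphene call directly, and the new cogl_matrix_to_float() is assumed to behave the same way:

  #include <graphene.h>

  static void
  upload_matrix (const graphene_matrix_t *matrix)
  {
    float values[16];

    /* copies the 16 floats into a caller-provided array instead of
     * handing out a pointer into the matrix struct */
    graphene_matrix_to_float (matrix, values);

    /* ... pass `values` on, e.g. as a GL uniform ... */
  }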
Graphene provides skewing as part of graphene_matrix_t API, and it'll
be easier for the transition to just expose similar API surfaces.
Move the matrix skew methods to CoglMatrix.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1439
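For reference, graphene's skew API that the CoglMatrix methods mirror looks like this (values are arbitrary, and the corresponding CoglMatrix wrappers are not shown):

  #include <graphene.h>

  static void
  apply_skew (graphene_matrix_t *matrix)
  {
    graphene_matrix_skew_xy (matrix, 0.5f); /* skew on the XY plane */
    graphene_matrix_skew_xz (matrix, 0.2f); /* skew on the XZ plane */
    graphene_matrix_skew_yz (matrix, 0.1f); /* skew on the YZ plane */
  }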
Even when a direct client buffer has a compatible format, stride and
modifier for direct scanout, drmModePageFlip() may still fail sometimes.
From testing, it has been observed that it may seemingly randomly fail
with ENOSPC, with all subsequent attempts on the same CRTC then failing
with EBUSY.
Handle this by falling back to flipping after having composited a full
frame again.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1410
We already correctly set the font-dpi based on user settings in
MetaSettings at each user change and as part of backend initialization,
so there's no point in setting it again during X11 backend post-parsing
using X11 values, as that may happen at a later point and lead to a wrong
clutter font DPI value.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1444
It'll allow subclasses to get notified of the before-paint
signal without having to connect to it. This will allow
MetaStage to have proper watches fired there without
the cost of the signal handling machinery.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1421
Just because X11/XI uses a particular terminology doesn't mean we
have to use the same terms in our own API. The replacement terms
are in line with gtk@1c856a208, which seems a better precedent
for consistency.
Follow-up to commit 17417a82a5.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1425
These are not given directly to the input focus anymore; instead they
are queued up as events. This way, all actions triggered by the input
method (commit and preedit buffer ones, but also synthesized key
events) queue up the same way, and are thus processed in the exact same
order as they are given to us.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1286
The clutter_input_focus_filter_key_event() function has been made a more
generic filter_event(). Besides its old role of letting key events go
through the IM, it will also process the IM events that are possibly
injected as a result.
Callers have been updated to these changes.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1286
Previously we only culled actors that didn't intersect the bounding box
of the redraw clip. Now we also cull those whose paint volume bounds don't
intersect the arbitrary shape of the redraw clip.
This was inspired by the activities overview where idle windows and
workspace previews were being needlessly repainted. In that particular
case this yields more than 10% reduction in render time. But it probably
helps in other situations too.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1359
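A rough sketch of the check, assuming the redraw clip is a cairo_region_t and the paint volume bounds have already been projected to a stage-space rectangle; the names are illustrative, not mutter's actual code:

  #include <cairo.h>
  #include <stdbool.h>

  static bool
  actor_is_culled (const cairo_region_t        *redraw_clip,
                   const cairo_rectangle_int_t *paint_volume_bounds)
  {
    /* culled when the bounds fall entirely outside the clip's actual
     * shape, not merely outside its bounding box */
    return cairo_region_contains_rectangle (redraw_clip, paint_volume_bounds) ==
           CAIRO_REGION_OVERLAP_OUT;
  }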
Clutter device events are special events coming from the backend when an
input device is added or removed.
When such events are processed, we should make the seat handle them by
calling a vfunc that can be implemented by each backend, eventually
emitting the appropriate signal.
If a device is removed, we can also safely dispose of it, as it can be
considered stale at this point.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1371
Add clutter device added and removed events to allow processing them as
it happens in the backends, queuing them and performing actions in order.
This ensures we don't lose any event that happens just before a device is
removed or disabled, and that events are still processed in order in the
event queue.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1371
This was from the old clutter-as-application-library days, where it had
to try to find a suitable backend. Now we already have a backend selected
(MetaBackend), and the clutter backend is already decided based on that,
so we don't need the code that auto-detects an appropriate one anymore.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1364
There is no reason to use XSettings for the X11 backend, as the value
comes from the GSettings store anyway, so move the font setting reading
to ClutterSettings and read directly from GSettings.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1364
The delete event was used for signalling that the close button was
clicked on clutter windows. Being a compositor, we should never see these
unless we're running nested. Remove the plumbing of the DELETE event and
just directly call meta_quit() when we see it, if we're running nested.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1364
We checked if we were using the X11 backend to decide when to deal with
a11y event posting. In order to make the clutter code less windowing
system dependent, make this a check of whether we're a display server or
not, in contrast to a window/compositing manager client. This is made
into a vfunc of ClutterBackendClass, implemented by
MetaClutterBackendNative and MetaClutterBackendX11.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1364
When we pick the frame clock given the associated actor, that frame
clock in fact comes from a picked actor. In order to not end up with
stale frame clocks (which may happen on e.g. hotplugs or monitor layout
changes) or non-optimal frame clocks (which may happen when the parent
used for picking the clock moves to another view), let's listen to
'stage-views-changed' on the actor used for picking the clock too.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1327
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1404
Let's not expose that outside of mutter quite yet; it's not used in
gnome-shell, and to avoid future breakage if it starts to be used, let's
move it to clutter-mutter.h so only mutter and clutter itself can use
it.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1404
This aims to make sure a view and its resources are destroyed when they
should be. Using references might keep certain components (e.g. the frame
clock) alive for too long.
We currently don't take any long lived references to the stage view
anywhere, so this doesn't matter in practice, but this may change, and it
will be used by a test case to be added.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1404
Always force-track the cursor position (so that the X11 backend can keep
it up to date), and if the cursor wasn't part of the sampled
framebuffer when reading pixels into CPU memory, draw it in an extra
pass using cairo after the fact. The cairo based cursor painting only
happens on the X11 backend, as we otherwise inhibit the hw cursor.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1391
On X11 we won't always receive cursor positions, as some other client
might have grabbed the pointer (e.g. for implementing a popup menu). To
make screen casting show a somewhat correct cursor position, we need to
actively poll the X server about the current cursor position.
We only really want to do this when screen casting or taking a
screenshot, so add an API that forces the cursor tracker to track the
cursor position.
On the native backend this is a no-op as we by default always track the
cursor position anyway.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1391
The clutter_actor_get_transformed_position() function returns the
position of the top left point of the actor, with the actor
transformations applied. That means that if the actor is rotated 180°
it'll return the "screen" position of the top right point.
Using this to calculate whether the actor is on the screen causes
problems when the actor is transformed.
This patch adds a new function, clutter_actor_get_transformed_extents(),
that returns the transformed actor bounding rect.
This new function is used in update_stage_views() so the actor gets
updated; this way rotated actors will be updated if they are on the
screen.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1386
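A hedged usage sketch; the out-parameter type (graphene_rect_t) is assumed from the description above:

  #include <clutter/clutter.h>

  static void
  update_visibility (ClutterActor *actor)
  {
    graphene_rect_t extents;

    /* the transformed bounding rect stays meaningful even for a 180°
     * rotated actor, unlike clutter_actor_get_transformed_position() */
    clutter_actor_get_transformed_extents (actor, &extents);

    /* ... intersect `extents` with the stage view layout ... */
  }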
Make clutter_actor_allocate_preferred_size() convenient to use from
layout managers by not "automatically" honouring the fixed position of
the actor, but instead allowing a position to be passed to allocate the
actor at.
This way we can move the handling of fixed positions to
ClutterFixedLayout, the layout manager which is responsible for
allocating actors using fixed positions.
This also makes clutter_actor_allocate_preferred_size() more similar to
clutter_actor_allocate_available_size().
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1310
It's currently a bit hard to get the fixed position of an actor. It can
be either done by using g_object_get() with the "fixed-x"/"fixed-y"
properties or by calling clutter_actor_get_position().
Calling clutter_actor_get_position() can return the fixed position, but
it might also return the allocated position if the allocation is valid.
The latter is not the best behavior when querying the fixed position
during an allocation, so introduce a new function
clutter_actor_get_fixed_position() which always gets the fixed position
and returns FALSE in case no fixed position is set.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1310
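A usage sketch under the assumption that the new function takes float out parameters and returns FALSE when no fixed position is set:

  #include <clutter/clutter.h>

  static void
  query_fixed_position (ClutterActor *actor)
  {
    float x, y;

    if (clutter_actor_get_fixed_position (actor, &x, &y))
      {
        /* fixed-x/fixed-y were explicitly set; use them directly */
      }
    else
      {
        /* no fixed position set; fall back to other layout information */
      }
  }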
With the introduction of the shallow relayout mechanism another small
but severe regression sneaked into our layout machinery: We might
allocate an actor twice during the same allocation cycle, with one
allocation happening using the wrong parent.
This issue happens when reparenting an actor from a NO_LAYOUT parent to
a non-NO_LAYOUT parent, in particular it triggered a bug in gnome-shell
when DND reparents a child from the NO_LAYOUT uiGroup to the overview's
Workspace actor after a drag ended. The reason the issue happens is the
following chain of events:
1. child of a NO_LAYOUT parent queues a relayout, this child is added to
the priv->pending_relayouts list maintained by ClutterStage
2. child is reparented to a different parent which doesn't have the
NO_LAYOUT flag set, another relayout is queued, this time a different
actor is added to the priv->pending_relayouts list
3. the relayout happens and we go through the pending_relayouts list
backwards; that means the correct relayout queued during step 2 happens
first, then the old one happens and we simply call
clutter_actor_allocate_preferred_size() on the actor; that allocation
overrides the other, correct one.
So fix that issue by adding a method to ClutterStage which removes
actors from the pending_relayouts list again and call this method as
soon as an actor with a NO_LAYOUT parent is detached from the stage.
With that in place, we can also remove the check whether an actor is
still on stage while looping through pending_relayouts. In case
something else is going wrong and the actor is not on stage,
clutter_actor_allocate() will warn anyway.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1356
When picking which frame clock to use, we traverse up in the actor
hierarchy until a suitable frame clock is found. ClutterTimeline
also listens to the 'stage-views-changed' signal to make sure it's always
attached to the correct frame clock.
However, there is one special situation where neither of them would
work: when the stage doesn't have a frame clock yet, and the actor
of the timeline is outside any stage view. When that happens, the
returned frame clock is NULL, and 'stage-views-changed' is never
emitted by the actor.
Monitor the stage for stage view changes when the frame clock is
NULL.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
An actor may be placed without being on any current stage view; in this
case, to get the ball rolling, walk up the actor tree to find the first
actor where a frame clock can be picked from.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The frame clock owner should be able to explicitly destroy (i.e. make
defunct) a frame clock, e.g. when a stage view is destroyed. This is so
that other objects can keep a reference to it without it being left
around even after it has stopped being usable.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Replace the default master clock with multiple frame clocks, each
driving its own stage view. As each stage view represents one CRTC, this
means we draw each CRTC with its own designated frame clock,
disconnected from all the others.
For example, this means that when using the native backend we will never
need to wait for one monitor to vsync before painting another; e.g. with
a 144 Hz monitor next to a 60 Hz monitor, everything including Wayland
and X11 applications and shell UI will be able to render at the
corresponding monitor refresh rate.
This also changes a warning about missed frames when sending
_NET_WM_FRAME_TIMINGS messages to a debug log entry, as it's expected
that we'll start missing frames e.g. when an X11 window (via Xwayland) is
exclusively within a stage view that was not painted, while another one
was, still increasing the global frame clock.
Additionally, this also requires the X11 window actor to schedule
timeouts for _NET_WM_FRAME_DRAWN/_NET_WM_FRAME_TIMINGS event emitting,
if the actor wasn't on any stage views, as now we'll only get the frame
callbacks on actors when they actually were painted, while in the past,
we'd invoke that vfunc when anything was painted.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/903
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/3
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
We'd emit multiple "presented" signals per frame, one for "sync" and one
for "completion". Only the latter were ever used, and removing the
differentiation eases the avoidance of cogl onscreen framebuffer frame
callback details leaking into clutter.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Right now the stage only has a signal called 'after-paint', which is not
tied to painting but to updating. Change this to offer 4 signals, for the
4 different stages:
* before-update - emitted at the beginning, before the actual stage
updating
* before-paint - emitted before painting, if there will be any stage
painting
* after-paint - emitted after painting, if there was any stage painting
* after-update - emitted as the last step of updating, no matter whether
there was any painting or not
Currently there is only one listener, which should only really have been
called if there was any painting, so no changes to listeners are needed.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
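A minimal sketch of connecting to the new signals; the handler signature is assumed to be a plain `(stage, user_data)` callback:

  #include <clutter/clutter.h>

  static void
  on_before_paint (ClutterStage *stage,
                   gpointer      user_data)
  {
    /* only runs when this update will actually paint the stage */
  }

  static void
  watch_stage (ClutterStage *stage)
  {
    g_signal_connect (stage, "before-paint",
                      G_CALLBACK (on_before_paint), NULL);
  }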
The mutexes were used by ClutterTexture's async upload and to match GDK's
mutexes on X11. GDK's X11 connection does not share anything with
Clutter's, we don't have the Gdk Clutter backend left, and we have
already removed ClutterTexture, so let's remove these mutexes as well.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
A frame clock dispatch doesn't necessarily result in a frame being drawn,
meaning we'll end up in the idle state. However, it may be the case that
something still requires another frame, and will in that case have
requested one to be scheduled. In order to not deadlock, try to
reschedule directly after dispatching if a frame was requested and we
ended up in the idle state.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The frame clock wouldn't be usable yet, but nonetheless, add API to
get the frame clock best suited for driving the actor. Currently this
translates to the fastest one, but that might change.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The frame clock is meant to eventually drive the painting of the view,
in contrast to the master frame clock painting every view on the stage.
Right now it's a useless placeholder.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The native backend had a plain counter, and the X11 backend used the
CoglOnscreen of the screen; change it into a plain counter in
ClutterStageCogl. This also moves the global frame count setting to the
frame info constructor.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
We currently have mutter set a global frame counter on the frame info in
the native backend, but in order to do this from clutter, change the
frame info construction from being implicitly done so when swapping
buffers to having the caller create the frame info and passing that to
the swap buffers call.
While this commit doesn't introduce any other changes than the API, the
intention is later to have the caller be able to pass its own state
(e.g. the global frame count) along with the frame info.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
We had time unit conversion helpers (e.g. us2ms(), ns2us(), etc) in
multiple places. Clean that up by moving them all to a common file. That
file is clutter-private.h, as it's accessible from both clutter/ and
src/.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
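The helpers are simple unit conversions along these lines (a sketch; the actual definitions live in clutter-private.h):

  #include <stdint.h>

  static inline int64_t
  us2ms (int64_t us)
  {
    return us / 1000;
  }

  static inline int64_t
  ns2us (int64_t ns)
  {
    return ns / 1000;
  }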
Currently unused, but the intention is to use it as an initial refresh
rate for the frame clock associated with the stage view. It defaults to
60 Hz if nothing sets it, but the native backend sets it to the refresh
rate of the associated CRTC's current mode.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Without an associated actor, or explicit frame clock set, in the future
a timeline will not know how to progress, as there will be no single
frame clock to assume is the main one. Thus, deprecate the construction
of timelines without either an actor or frame clock set.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The association is inactive, as in it doesn't do anything yet, but it
will later be used to determine what frame clock should be driving the
timeline by looking at what stage view the actor is currently on.
This also adapts sub types (ClutterPropertyTransition) to have
constructors that take an actor just as the new ClutterTimeline
constructor.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
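A hedged sketch of constructing an actor-associated timeline; the constructor name and signature are assumed from the description above:

  #include <clutter/clutter.h>

  static ClutterTimeline *
  create_timeline (ClutterActor *actor)
  {
    /* the actor association will later pick the frame clock from the
     * stage view the actor is on */
    return clutter_timeline_new_for_actor (actor, 250 /* ms */);
  }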
This is so something outside of clutter-stage.c (i.e.
clutter-stage-view.c) can eventually do various things
_clutter_stage_do_update() does now while not redrawing the whole stage.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Devices are updated (repicked) as part of the stage update phase, as
their stacking, position and transform might have changed since the
last update.
The redraw clip was used to avoid unnecessary updating of devices, if
the device in question had its position outside of the redraw clip. If
the device coordinate was outside of the redraw clip, what was
underneath the device couldn't have changed.
What it failed to do, however, was to update devices if a relayout had
happened in the same update, as it checked whether a layout had already
happened before attempting to do a relayout, effectively delaying the
device updating to the next update.
This commit changes the behavior to always update the device given the
complete redraw clip caused by all possible relayouts of the same update
as the device update happens in.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
We'd check if there was any queued redraw on the stage, but this is
inappropriate for two reasons:
1) A monitor and area screen cast source only cares about damage on a
subset of the stage.
2) The global pending-redraw is going away when paint scheduling will be
more view centric.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
This will allow anyone to finish any queued redraws, making their
corresponding damage end up being posted to the stage views. This will
allow unit tests to check whether, so far, any updates are queued on a
particular stage view.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Add API to add and remove ClutterTimeline objects to the frame clock.
Just as with the legacy master clock, having a timeline added to the frame
clock causes the frame clock to continuously reschedule updates until
the timeline is removed.
ClutterTimeline is adapted to be able to be driven by a
ClutterFrameClock. This is done by adding a 'frame-clock' property, and
if set, the timeline will add and remove itself to the frame clock
instead of the master clock.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The timestamp comes from the GSource, meaning it's a more accurate
representation of when the frame started to be dispatched compared to
getting the current time in any callback.
Currently unused.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
In certain scenarios, the frame clock needs to handle present feedback
long before the assumed presentation time happens. To avoid scheduling
the next frame too soon, avoid scheduling one if we were presented within
half a frame interval of the last expected presentation time.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
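The check amounts to something like the following sketch (state layout and names are illustrative, not the actual ClutterFrameClock code):

  #include <stdint.h>
  #include <stdbool.h>

  static bool
  presented_early (int64_t presentation_time_us,
                   int64_t last_expected_presentation_time_us,
                   int64_t frame_interval_us)
  {
    int64_t delta_us =
      last_expected_presentation_time_us - presentation_time_us;

    if (delta_us < 0)
      delta_us = -delta_us;

    /* presented within half a frame interval of the expected time:
     * scheduling another frame right away would be too soon */
    return delta_us < frame_interval_us / 2;
  }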
This adds a currently unused, apart from tests, frame clock. It just
reschedules given a refresh rate, based on presentation time feedback.
The aim is for it to be used with a single frame listener (stage views)
that will be notified when a frame is presented. It does not aim to
handle multiple frame listeners; instead, it's assumed that different
frame listeners will use their own frame clocks.
Also add a test that verifies that the basic functionality works.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
When a transition is created for the allocation change, it will delay
the new allocation box getting set depending on transition details.
This, however, means that e.g. the 'needs_allocation' flag never gets
cleared if a transition is created, causing other parts of the code to
get confused thinking it didn't pass through a layout step before paint.
Fix this by calling clutter_actor_allocate_internal() with the current
allocation box if a transition was created, so that we'll properly clear
the 'needs_allocation' flag.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1345
Since we now have the necessary infrastructure to get notified about
changes to the absolute transformation matrix, we can also invalidate
the stage-views list on updates to this matrix.
So rename absolute_allocation_changed() to absolute_geometry_changed()
to make it clear this function is not only about allocations, and call
that function recursively for all children on changes to the
transformation matrix, too.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1343
If we want to invalidate the stage-views list reliably on changes to the
actors' transformation matrices, we also need to get notified about
changes to the custom transformations applied using the
apply_transform() vfunc.
So provide a new API that allows invalidating the transformation matrix
for actors implementing custom transformations, too. This in turn allows
us to cache the matrix applied using the apply_transform() vfunc by
moving responsibility of keeping track of the caching from
clutter_actor_real_apply_transform() to
_clutter_actor_apply_modelview_transform().
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1343
For ClutterText, the resource scale the text is drawn with affects the
size of the allocation: ClutterText will choose a font scale based on
the resource scale, and that font scale can lead to a slight difference
in size compared to the unscaled font.
We currently handle that by queuing a relayout inside the
"resource-scale-changed" signal handler. This solution is a bit
problematic though since it will take one more allocation cycle until
the allocation is actually updated after a scale-change, so the actor is
painted using the wrong allocation for one frame.
Also the current solution can lead to relayout loops in a few cases, for
example if a ClutterText is located near the edge on a 1x scaled monitor
and is moved to intersect a 2x scaled monitor: Now the resource scale
will change to 2 and a new allocation box is calculated; if this
allocation box is slightly smaller than the old one because of the new
font scale, the allocation won't intersect the 2x scaled monitor again
and the resource scale switches back to 1. Now the allocation gets
larger again and intersects the 2x scaled monitor again.
This commit introduces a way to properly support those actors: In case
an actor's resource scale might affect its allocation, it should call the
private function clutter_actor_queue_immediate_relayout(). This will
make sure the actor gets a relayout before the upcoming paint happens
after every resource scale change. Also potential relayout loops can
be handled by the actors themselves using a "phase" argument that's
passed to implementations of the calculate_resource_scale() vfunc.
The new API is private because resource scales are not meant to be used
in a way where the scale affects the allocation. With ClutterText and
the current behavior of Pango, that can't be avoided though, so we need it
anyway.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
Since we now always return a resource scale, we can remove the boolean
return value from clutter_actor_get_resource_scale() and
_clutter_actor_get_real_resource_scale(), and instead simply return the
scale.
While at it, also remove the underscore from the
_clutter_actor_get_real_resource_scale() private API.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
Now that ClutterActor has a convenient API for getting the stage views
an actor is presented on, we can remove a large part of the code for
resource-scale calculation and instead rely on the stage-views list.
The way this works is a bit different from the old resource scales:
clutter_actor_get_resource_scale() always returns a scale, but this
value is only guaranteed to be correct when called from a vfunc_paint()
implementation, in all other cases the value is guessed using the scale
of the parent actor or the last valid scale. Now in case the value
previously reported by clutter_actor_get_resource_scale() turns out to
be wrong, "resource-scale-changed" will be emitted before the next paint
and the actor has a chance to update its resources.
The general idea behind this new implementation is for actors which only
need the scale during painting to continue using
clutter_actor_get_resource_scale() as they do right now, and for actors
which need the resource scale on other occasions, like during size
negotiation, to use the scale reported by
clutter_actor_get_resource_scale() but also listen to the
"resource-scale-changed" signal to eventually redo the work using the
correct scale.
The "guessing" of the scale is done with the intention of always giving
actors a scale to work with so they don't have to fall back to a scale
value the actor itself has to define, and also with the intention of
emitting the "resource-scale-changed" signal as rarely as possible, so
that when an actor is newly created, it won't have to load its resources
multiple times.
The big advantage this has over the old resource scales is that it's now
safe to call clutter_actor_get_resource_scale() from everywhere (before,
calling it from size negotiation functions would usually fail). It will
also make it a lot easier to use the resource scale for complex cases
like ClutterText without risking getting into relayout loops.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
Add private API to ClutterBackend to set a fallback resource scale
available to Clutter. This API will be used for "guessing" the
resource-scale of ClutterActors in case the actor is not attached to a
stage or not properly positioned yet.
We set this value from inside mutter's MetaRenderer while creating new
stage-views for each logical monitor. This makes it possible to set the
fallback scale to the scale of the primary monitor, which is the monitor
where most ClutterActors are going to be positioned.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
We're going to refactor resource scales, making the notification of
changes to the resource scale a lot more important than it is right now
(we won't guarantee queried scales are correct outside the paint cycle
anymore).
Having a separate signal/vfunc for this will make the difference between
the new clutter_actor_get_resource_scale() API (which can return a
guessed value) and the notification of changes to the resource scale
(which will be guaranteed to return an up-to-date value) more obvious.
So replace the "resource-scale" property of ClutterActor with a
"resource-scale-changed" signal that's emitted when the resource scale
is recalculated.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1276
ClutterBoxLayout calculates the preferred size of the opposite
orientation (so for example the height if the orientation is horizontal)
by getting the preferred size of the real orientation first, and then
the preferred size of the opposite orientation, using the other size as
for_width/height when doing the request.
Right now, for non-homogeneous layouts this for_width/height does not
adjust for the spacing set on the box layout. This leads to children
being passed a slightly larger for_width/height, which in case of
ClutterText might cause the line to not wrap when it actually should.
This in turn means we can end up with an incorrect preferred size for
the opposite orientation, leading to a wrong allocation.
So fix that and adjust for the spacing just as we do for homogeneous
layouts by subtracting the total spacing from the available size that is
distributed between children.
This fixes the wrong height of the checkbox label reported in
https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/2574.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1333
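The adjustment is simple arithmetic, sketched here with illustrative names:

  static float
  adjust_for_spacing (float for_size,
                      int   n_children,
                      float spacing)
  {
    /* the total spacing between n children is (n - 1) * spacing and is
     * not available for distribution between the children */
    if (n_children > 1)
      for_size -= spacing * (n_children - 1);

    return for_size;
  }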
The property is deprecated and the current implementation simply
redirects it to ClutterActor::background-color, so remove it.
Also update the tests to set the background color directly.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1332
ClutterStage is the one and only subclass of ClutterGroup, but
it overrides basically everything specific to ClutterGroup to
mimic a ClutterActor. What a waste!
Subclass ClutterActor directly and remove all the now useless
vfunc overrides from ClutterStage. Adapt CallyStage to subclass
CallyActor as well.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1332
It is deprecated in favor of the 'z-position' property, and
the implementation itself redirects to the z-position, so
just drop it and replace all get|set_depth calls with their
z-position counterparts.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1332
I noticed my system would fall back to the slow unclipped (and
uncullable) paint path whenever a window touched the left edge of
the screen. Turns out that was a red herring: `use_clipped_redraw` was
uninitialized, so clipping/culling was used randomly.
The compiler failed to notice `use_clipped_redraw` was uninitialized.
Weirdly, as soon as you fix that it starts complaining that `buffer_age`
might be uninitialized, which appears to be wrong. So we initialize that
too, to shut up the compiler warnings/errors.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1323
The ClutterBindConstraint will change the preferred size an actor
reports so it returns the same size as the source actor in some cases.
This behavior was introduced recently with 4f8e518d.
This can lead to infinite loops in case the source actor is a parent of
the actor the BindConstraint is attached to, that's because calling
get_preferred_size() on the source will recursively call
get_preferred_size() on the actor again.
So to avoid those loops, check if the source is a parent of the actor
we're attached to and don't update the preferred size in that case.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1282
For ClutterClones we need to apply a scale to the texture of the clone
to ensure the painted texture of the source actor actually fits the
allocation of the clone. We're doing this using the transformation
matrix instead of using the scale_x/scale_y properties of ClutterActor
to allow users to scale ClutterClones using that API independently.
Now it's quite a bad idea to get the allocation boxes for calculating
that scale using clutter_actor_get_allocation_box(), since that method
will internally do an immediate relayout of the stage in case the actor
isn't allocated. Another side effect of that approach is that it makes
it impossible to invalidate the transform (which will be needed when we
start caching those matrices) properly.
So since we eventually allocate both the source actor and the clone
ourselves anyway, we can simply use the allocation box inside
clutter_clone_allocate() (which is definitely updated and valid at that
point) to calculate the scale factor.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1181
It seems wrong to use the scale factor of the X axis on the Z axis and
it looks like this has been accidentally changed in commit 570fa3f044.
So use a factor of 1.0 instead to not scale the Z axis at all because
the layout machinery only works in X and Y coordinates.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1181
There are cases where a layout manager used by an actor also wants to
return a custom size when the actor has no children, for example in case
the layout manager requests a fixed size. This is currently impossible
because we only query the layout manager when calculating the preferred
size if the actor has children.
So fix that and also use the layout manager's size negotiation functions
in case the actor has no children.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1322
The size of the buffer the texture will be written to by
paint_to_buffer() is determined based on
meta_screen_cast_area_stream_src_get_specs() which uses roundf() to
calculate the width and height after scaling. Because the size of the
texture to be written to that buffer is calculated using ceilf(), it
might exceed the allocated buffer when using fractional scaling.
In 3.36 paint_to_buffer() is used from capture_view() which also uses
roundf() to allocate its buffer. Here this leads to a memory corruption
resulting in a crash when taking screenshots of an area.
Fixes https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/2842
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1320
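A numeric illustration of the mismatch (values picked for illustration): an area 1001 logical pixels wide at a 1.25 scale gets a buffer sized with roundf() but a texture sized with ceilf():

  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    float scaled = 1001 * 1.25f;  /* 1251.25 */

    /* buffer width: 1251, texture width: 1252 -> one pixel past the buffer */
    printf ("buffer: %d, texture: %d\n",
            (int) roundf (scaled), (int) ceilf (scaled));
    return 0;
  }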