These are not given directly to the input focus anymore, but are instead
queued up as events. This way, all actions triggered by the input
method (commit and preedit buffer ones, but also synthesized key
events) queue up the same way, and are thus processed in the exact
same order as they are given to us.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1286
The clutter_input_focus_filter_key_event() function has been made
a more generic filter_event(). Besides its old role of letting
key events go through the IM, it will also process the IM events
that may be injected as a result.
Users have been updated accordingly.
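For callers, the change amounts to something like the sketch below; the
exact name and signature of the generalized function are assumed here
rather than quoted from the merge request:

    /* Before: only key events could be routed through the input method. */
    clutter_input_focus_filter_key_event (focus, &event->key);

    /* After (assumed signature): any event is handed to the filter, which
     * lets key events go through the IM and also processes the IM events
     * that may have been queued up as a result. */
    clutter_input_focus_filter_event (focus, event);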
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1286
This was from the old clutter-as-application-library days, where it had
to try to find a suitable backend. Now we already have a backend selected
(MetaBackend), and the clutter backend is already decided based on that,
so we don't need the code that auto-detects an appropriate one anymore.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1364
There is no reason to use XSettings for the X11 backend, as the value
comes from the GSettings store anyway, so move the font setting reading
to ClutterSettings and read it directly from GSettings.
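For illustration, reading the font name directly from GSettings (the
schema and key are the standard GNOME ones; the ClutterSettings wiring
itself is not shown) could look roughly like this:

    #include <gio/gio.h>

    static char *
    get_font_name (void)
    {
      /* org.gnome.desktop.interface backs the XSETTINGS font entry, so
       * reading it directly avoids the X11 round trip entirely. */
      GSettings *interface_settings =
        g_settings_new ("org.gnome.desktop.interface");
      char *font_name =
        g_settings_get_string (interface_settings, "font-name");

      g_object_unref (interface_settings);

      return font_name;
    }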
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1364
The delete event was used for signalling that the close button was clicked
on clutter windows. Being a compositor, we should never see these unless
we're running nested. Remove the plumbing of the DELETE event and just
directly call meta_quit() when we see it while running nested.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1364
We checked whether we were using the X11 backend to decide when to
deal with a11y event posting - in order to make the clutter code less
windowing system dependent, turn this into a check of whether we're a
display server or not, as opposed to a window/compositing manager
client. This is made into a vfunc on ClutterBackendClass, implemented by
MetaClutterBackendNative and MetaClutterBackendX11.
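As a rough sketch of the pattern (the vfunc name here is assumed, not
taken from the change), the backend-type comparison turns into a class
vfunc that each backend implements:

    /* Sketch only: illustrative vfunc on the backend class. */
    struct _ClutterBackendClass
    {
      GObjectClass parent_class;

      /* ... existing vfuncs ... */

      /* TRUE when we are the display server (native backend), FALSE when
       * we run as a window/compositing manager client (X11 backend). */
      gboolean (* is_display_server) (ClutterBackend *backend);
    };

    static gboolean
    meta_clutter_backend_native_is_display_server (ClutterBackend *backend)
    {
      return TRUE;
    }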
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1364
When we pick the frame clock given the associated actor, that frame
clock in fact comes from a picked actor. In order not to end up with
stale frame clocks, which may happen on e.g. hotplugs or monitor layout
changes, or with non-optimal frame clocks, which may happen when the
parent used for picking the clock moves to another view, let's listen to
'stage-views-changed' on the actor used for picking the clock too.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1327
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1404
Let's not expose that outside of mutter quite yet; it's not used in
gnome-shell, and to avoid future breakage if it starts to be used, let's
move it to clutter-mutter.h so only mutter and clutter itself can use
it.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1404
This aims to make sure a view and its resources are destroyed when they
should be. Using references might keep certain components (e.g. the frame
clock) alive for too long.
We currently don't take any long-lived references to the stage view
anywhere, so this doesn't matter in practice, but this may change, and it
will be used by a test case to be added.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1404
Always force-track the cursor position (so that the X11 backend can keep
it up to date), and if the cursor wasn't part of the sampled
framebuffer when reading pixels into CPU memory, draw it in an extra
pass using cairo after the fact. The cairo-based cursor painting only
happens on the X11 backend, as we otherwise inhibit the HW cursor.
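Conceptually, the extra pass is just compositing the cursor sprite onto
the pixels that were read back; this is an illustrative cairo sketch of
that idea, not the actual mutter code:

    /* Composite a cursor sprite onto a frame that was read back into CPU
     * memory. 'frame' and 'cursor' are cairo image surfaces; x/y is the
     * cursor position (already adjusted for the hotspot) translated into
     * frame coordinates. */
    static void
    paint_cursor_sprite (cairo_surface_t *frame,
                         cairo_surface_t *cursor,
                         double           x,
                         double           y)
    {
      cairo_t *cr = cairo_create (frame);

      cairo_set_source_surface (cr, cursor, x, y);
      cairo_paint (cr);
      cairo_destroy (cr);
    }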
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1391
On X11 we won't always receive cursor positions, as some other client
might have grabbed the pointer (e.g. for implementing a popup menu). To
make screen casting show a somewhat correct cursor position, we need to
actively poll the X server for the current cursor position.
We only really want to do this when screen casting or taking a
screenshot, so add an API that forces the cursor tracker to track the
cursor position.
On the native backend this is a no-op, as we always track the cursor
position by default anyway.
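The polling itself boils down to asking the X server where the pointer
currently is; a minimal standalone Xlib sketch (the cursor tracker
plumbing is omitted):

    #include <X11/Xlib.h>

    /* Ask the server for the current pointer position; needed on X11
     * because motion events stop arriving while another client holds a
     * pointer grab. */
    static Bool
    query_pointer_position (Display *xdisplay,
                            Window   root,
                            int     *x,
                            int     *y)
    {
      Window root_ret, child_ret;
      int win_x, win_y;
      unsigned int mask;

      return XQueryPointer (xdisplay, root,
                            &root_ret, &child_ret,
                            x, y, &win_x, &win_y, &mask);
    }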
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1391
clutter_actor_get_transformed_position() returns the position of the
top left point of the actor, with the actor's transformations applied.
That means that if the actor is rotated 180° it will return the "screen"
position of the top right corner.
Using this to calculate whether the actor is on the screen causes
problems when the actor is transformed.
This patch adds a new function, clutter_actor_get_transformed_extents(),
which returns the bounding rect of the transformed actor.
This new function is used in update_stage_views() so that the actor gets
updated; this way rotated actors will be updated if they are on the
screen.
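In terms of usage, the difference is roughly the following sketch (the
extents variant fills in an axis-aligned bounding rectangle of the
transformed actor rather than a single transformed point):

    float x, y;
    graphene_rect_t extents;

    /* Old approach: only the transformed top-left corner, which is
     * misleading for e.g. a 180° rotated actor. */
    clutter_actor_get_transformed_position (actor, &x, &y);

    /* New approach: the bounding rect of the transformed actor, suitable
     * for checking whether it overlaps a stage view. */
    clutter_actor_get_transformed_extents (actor, &extents);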
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1386
Make clutter_actor_allocate_preferred_size() convenient to use from
layout managers by not "automatically" honouring the fixed position of
the actor, but instead allowing callers to pass a position to allocate
the actor at.
This way we can move the handling of fixed positions to
ClutterFixedLayout, the layout manager which is responsible for
allocating actors using fixed positions.
This also makes clutter_actor_allocate_preferred_size() more similar to
clutter_actor_allocate_available_size().
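In rough terms (the exact parameters are assumed here, not quoted from
the merge request), a layout manager now passes the position in
explicitly:

    /* Sketch: the layout manager decides where the child goes and hands
     * the position to the allocation call, instead of the call implicitly
     * applying the actor's fixed position. */
    float x = 0.f, y = 0.f;

    /* ... compute or look up the child's position ... */

    clutter_actor_allocate_preferred_size (child, x, y);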
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1310
It's currently a bit hard to get the fixed position of an actor. It can
be done either by using g_object_get() with the "fixed-x"/"fixed-y"
properties or by calling clutter_actor_get_position().
Calling clutter_actor_get_position() can return the fixed position, but
it might also return the allocated position if the allocation is valid.
The latter is not the best behavior when querying the fixed position
during an allocation, so introduce a new function
clutter_actor_get_fixed_position() which always gets the fixed position
and returns FALSE in case no fixed position is set.
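A short usage sketch of the new getter (signature assumed from the
description above):

    float fixed_x, fixed_y;

    if (clutter_actor_get_fixed_position (actor, &fixed_x, &fixed_y))
      {
        /* The actor has an explicit fixed position; this is safe to rely
         * on even in the middle of an allocation cycle. */
      }
    else
      {
        /* No fixed position is set; fall back to the layout policy. */
      }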
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1310
With the introduction of the shallow relayout mechanism another small
but severe regression sneaked into our layout machinery: We might
allocate an actor twice during the same allocation cycle, with one
allocation happening using the wrong parent.
This issue happens when reparenting an actor from a NO_LAYOUT parent to
a non-NO_LAYOUT parent; in particular it triggered a bug in gnome-shell
where DND reparents a child from the NO_LAYOUT uiGroup to the overview's
Workspace actor after a drag ended. The reason the issue happens is the
following chain of events:
1. A child of a NO_LAYOUT parent queues a relayout; this child is added
to the priv->pending_relayouts list maintained by ClutterStage.
2. The child is reparented to a different parent which doesn't have the
NO_LAYOUT flag set; another relayout is queued, and this time a
different actor is added to the priv->pending_relayouts list.
3. The relayout happens and we go through the pending_relayouts list
backwards. That means the correct relayout queued during 2. happens
first, then the old one happens and we simply call
clutter_actor_allocate_preferred_size() on the actor; that allocation
overrides the other, correct one.
So fix that issue by adding a method to ClutterStage which removes
actors from the pending_relayouts list again, and by calling this method
as soon as an actor with a NO_LAYOUT parent is detached from the stage.
With that in place, we can also remove the check of whether an actor is
still on stage while looping through pending_relayouts. In case
something else goes wrong and the actor is not on stage,
clutter_actor_allocate() will warn anyway.
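The shape of the fix, sketched with a hypothetical method name (the real
one may differ), is to drop the stale entry as soon as the
NO_LAYOUT-parented actor leaves the stage:

    /* Hypothetical name for the new ClutterStage method described above. */
    static void
    on_no_layout_child_detached (ClutterActor *actor,
                                 ClutterStage *stage)
    {
      /* Remove the actor from priv->pending_relayouts so the stale
       * clutter_actor_allocate_preferred_size() call can no longer
       * override the correct allocation queued by the new parent. */
      clutter_stage_dequeue_actor_relayout (stage, actor);
    }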
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1356
When picking which frame clock to use, we traverse up the actor
hierarchy until a suitable frame clock is found. ClutterTimeline
also listens to the 'stage-views-changed' signal to make sure it's
always attached to the correct frame clock.
However, there is one special situation where neither of these would
work: when the stage doesn't have a frame clock yet, and the actor
of the timeline is outside any stage view. When that happens, the
returned frame clock is NULL, and 'stage-views-changed' is never
emitted by the actor.
Monitor the stage for stage view changes when the frame clock is
NULL.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
An actor may be placed without being on any current stage view; in this
case, to get the ball rolling, walk up the actor tree to find the first
actor from which a frame clock can be picked.
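Conceptually the fallback is a simple walk towards the root until an
ancestor that is on a stage view can provide a frame clock; a rough
sketch, where the per-actor lookup helper is assumed for illustration:

    /* Stand-in for the real per-actor frame clock lookup. */
    static ClutterFrameClock * lookup_frame_clock (ClutterActor *actor);

    /* Walk up from 'actor' until an ancestor can provide a frame clock. */
    static ClutterFrameClock *
    pick_frame_clock_from_ancestors (ClutterActor *actor)
    {
      ClutterActor *iter;

      for (iter = actor; iter; iter = clutter_actor_get_parent (iter))
        {
          ClutterFrameClock *frame_clock = lookup_frame_clock (iter);

          if (frame_clock)
            return frame_clock;
        }

      return NULL;
    }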
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The frame clock owner should be able to explicitly destroy (i.e. make
defunct) a frame clock, e.g. when a stage view is destroyed. This is so
that other objects can keep a reference to it without it being kept
around even after it has stopped being usable.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Replace the default master clock with multiple frame clocks, each
driving its own stage view. As each stage view represents one CRTC, this
means we draw each CRTC with its own designated frame clock,
disconnected from all the others.
For example, this means that when using the native backend we will never
need to wait for one monitor to vsync before painting another, so with
e.g. a 144 Hz monitor next to a 60 Hz monitor, everything including
Wayland and X11 applications as well as shell UI will be able to render
at the refresh rate of the corresponding monitor.
This also changes a warning about missed frames when sending
_NET_WM_FRAME_TIMINGS messages to a debug log entry, as it's expected
that we'll start missing frames e.g. when an X11 window (via Xwayland) is
exclusively within a stage view that was not painted while another one
was, which still advances the global frame clock.
Additionally, this also requires the X11 window actor to schedule
timeouts for _NET_WM_FRAME_DRAWN/_NET_WM_FRAME_TIMINGS event emission
if the actor wasn't on any stage views, as now we'll only get the frame
callbacks on actors when they actually were painted, while in the past
we'd invoke that vfunc when anything was painted.
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/903
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/3
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
We'd emit multiple "presented" signals per frame, one for "sync" and one
for "completion". Only the latter was ever used, and removing the
differentiation makes it easier to avoid leaking cogl onscreen
framebuffer frame callback details into clutter.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Right now the stage only has a signal called 'after-paint', which is not
tied to painting but to updating. Change this to offer 4 signals, for
the 4 different phases:
* before-update - emitted at the beginning, before the actual stage
update
* before-paint - emitted before painting, if there will be any stage
painting
* after-paint - emitted after painting, if there was any stage painting
* after-update - emitted as the last step of updating, no matter whether
there was any painting or not
Currently there is only one listener, which should only really be called
if there was any painting, so no changes to listeners are needed.
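For a listener, this amounts to connecting to the relevant phase; a
minimal sketch (handler signatures assumed to just receive the stage):

    static void
    on_before_paint (ClutterStage *stage,
                     gpointer      user_data)
    {
      /* Runs only when this update will actually paint something. */
    }

    static void
    on_after_update (ClutterStage *stage,
                     gpointer      user_data)
    {
      /* Runs at the end of every update, painted or not. */
    }

    static void
    connect_stage_phases (ClutterStage *stage)
    {
      g_signal_connect (stage, "before-paint",
                        G_CALLBACK (on_before_paint), NULL);
      g_signal_connect (stage, "after-update",
                        G_CALLBACK (on_after_update), NULL);
    }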
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
The mutexes were used by ClutterTexture's async upload and to match GDK's
mutexes on X11. GDK's X11 connection does not share anything with
Clutter's, we no longer have the GDK Clutter backend, and we have
already removed ClutterTexture, so let's remove these mutexes as well.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285