We ask Xlib for the next request serial number before performing other
actions triggered by meta_x11_display_set_input_focus_internal() that
don't use the request serial anyway. So, just request it right before
updating the focus window, as that's the operation that needs it.
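Roughly, the intended ordering looks like this (the wrapper function and
its arguments are illustrative, not the actual mutter code):

  #include <X11/Xlib.h>

  /* Fetch the serial right before the request that actually needs it. */
  static unsigned long
  set_input_focus_with_serial (Display *xdisplay,
                               Window   xwindow,
                               Time     timestamp)
  {
    /* XNextRequest() returns the serial that the next request will use,
     * so it must not be fetched before unrelated work triggered by the
     * focus change, only right before XSetInputFocus(). */
    unsigned long serial = XNextRequest (xdisplay);

    XSetInputFocus (xdisplay, xwindow, RevertToPointerRoot, timestamp);

    return serial;
  }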
https://gitlab.gnome.org/GNOME/mutter/merge_requests/909
When using the DesktopIcons extension and clicking on an icon,
gnome-shell enters an infinite loop: the first focus change may trigger
an X11 focus in/out event that leads to stage activation/deactivation,
which never ends.
This happens because, as part of meta_x11_display_set_input_focus_xwindow()
focusing the X11 stage window, we unset the display focus; but this also
ends up asking the X11 display to unset its focus, since we take a detour
through meta_x11_display_set_input_focus() with no window. That leads to
focusing the no_focus_window, and then to a focus-in / focus-out dance
that the shell amplifies in order to give the focus back to the stage.
In order to fix this, mimic what meta_display_set_input_focus() does, but
without updating the X11 display, and thus without implicitly calling
meta_x11_display_set_input_focus(), stopping the said detour and properly
focusing the requested xwindow.
Also ensure that we don't do this when the request carries an older
timestamp, since that check is no longer performed for us.
Fixes https://gitlab.gnome.org/GNOME/mutter/issues/896
Fixes https://gitlab.gnome.org/GNOME/mutter/issues/899
https://gitlab.gnome.org/GNOME/mutter/merge_requests/909
This function already checks that the focus surface client matches the
requestor. The type check was slightly bogus though, as failing it would
mean a screwup in our own code, so make it an assert instead.
Also, move the check for the client having the focus into the caller,
so this and the wl_data_device.set_selection code become more alike.
https://gitlab.gnome.org/GNOME/mutter/issues/878
We have an abstract MetaWaylandDataSource and 2 subclasses for the
clipboard and primary selection data sources. Since the abstraction
provided by the additional sublevel is of arguable benefit, push the
wl_resource field up, and leave us with just 2 objects to think about,
both of them containing a wl_resource.
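A rough sketch of the resulting layout (only the wl_resource field is the
point; the rest is illustrative):

  #include <glib-object.h>
  #include <wayland-server.h>

  struct _MetaWaylandDataSource
  {
    GObject parent;

    /* Previously duplicated in the subclasses; now shared by both the
     * clipboard and primary selection data sources. */
    struct wl_resource *resource;
  };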
https://gitlab.gnome.org/GNOME/mutter/issues/878
Otherwise we'll end up trying to access the out-of-date state later.
Fixes the following test failure backtrace:
#0 _g_log_abort ()
#1 g_logv ()
#2 g_log ()
#3 meta_monitor_manager_get_logical_monitor_from_number ()
#4 meta_window_get_work_area_for_monitor ()
#5 meta_window_get_tile_area ()
#6 constrain_maximization ()
#7 do_all_constraints ()
#8 meta_window_constrain ()
#9 meta_window_move_resize_internal ()
#10 meta_window_tile ()
https://gitlab.gnome.org/GNOME/mutter/merge_requests/912
`meta_surface_actor_is_obscured` implies that the actor got successfully
culled out and nothing of it will get painted - including that there are
no clones, no effects, etc. In this case we don't want to send frame
callbacks, thus avoiding unnecessary client work.
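Sketched, the check looks something like this (only
meta_surface_actor_is_obscured() is real; the wrapper and the callback
helper are hypothetical):

  static void
  maybe_send_frame_callbacks (MetaSurfaceActor   *surface_actor,
                              MetaWaylandSurface *surface,
                              uint32_t            frame_time_ms)
  {
    /* Fully culled out: nothing gets painted, no clones, no effects. */
    if (meta_surface_actor_is_obscured (surface_actor))
      return;

    send_frame_callbacks (surface, frame_time_ms);  /* hypothetical helper */
  }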
https://gitlab.gnome.org/GNOME/mutter/merge_requests/918
When setting the root node as a child of a clip or transform node, we add
a new reference to it without releasing the one we previously took when
getting it from the actor node (and which won't ever be dropped by the
auto-pointer, since root_node gets re-assigned).
So, once we add the root node as a child and re-assign the variable,
unref it.
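A sketch of the pattern (the wrapper-building details are simplified;
only the add-child/unref dance is the point):

  #include <clutter/clutter.h>

  static ClutterPaintNode *
  wrap_root_node (ClutterPaintNode *root_node,  /* reference we own */
                  ClutterPaintNode *wrapper)    /* clip or transform node */
  {
    /* clutter_paint_node_add_child() takes its own reference... */
    clutter_paint_node_add_child (wrapper, root_node);
    /* ...so drop the one we were holding, now that root_node gets
     * re-assigned to the wrapper. */
    clutter_paint_node_unref (root_node);

    return wrapper;
  }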
Fixes https://gitlab.gnome.org/GNOME/mutter/issues/908
Create the intermediate shadow framebuffer for use exclusively when a
shadowfb is required.
Keep the previous offscreen framebuffer as an intermediate framebuffer
for transformations only.
This way, we can apply transformations between in-memory framebuffers
prior to blitting the result to the screen, and achieve acceptable
performance even with software rendering on a discrete GPU.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/877
Previously, we would use a single offscreen framebuffer both for
transformations and whenever a shadow framebuffer was needed, but that
can be dreadfully slow when using software rendering with a discrete GPU
due to bandwidth limitations.
Keep the offscreen framebuffer for transformations only and add another
intermediate shadow framebuffer used as a copy of the onscreen
framebuffer.
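Conceptually, the per-view setup ends up like this (the struct and names
are illustrative, not the actual renderer code):

  #include <cogl/cogl.h>

  typedef struct
  {
    CoglOffscreen   *transform_fb; /* only when the view is rotated/scaled */
    CoglFramebuffer *shadow_fb;    /* in-memory copy of the onscreen fb */
    CoglOnscreen    *onscreen;     /* the real scanout framebuffer */
  } ViewFramebuffers;

  /* Paint order: scene -> transform_fb (if any) -> shadow_fb -> onscreen,
   * so transformations happen between in-memory buffers and only the
   * final result is blitted to the slow-to-access onscreen framebuffer. */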
https://gitlab.gnome.org/GNOME/mutter/merge_requests/877
A compositor is notably opaque (there is usually nothing it would be
painted on top of!).
gnome-shell sets this hint, but there's no reason why we wouldn't want
it by default.
Also, the color buffer being cleared messes with stencil clips, as the
clear operation by definition ignores the stencil buffer. We want to
use these more extensively in the future, so just drop this API.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/911
This is a workaround for X11 games which use randr to change the resolution
in combination with NET_WM_STATE_FULLSCREEN when going fullscreen.
Newer versions of Xwayland support the randr part of this by emulating
randr resolution changes, combined with using a viewport (wp_viewport) to
scale the app's window (at the emulated resolution) to fill the entire
monitor.
Apps using randr in combination with NET_WM_STATE_FULLSCREEN expect the
fullscreen window to have the size of the emulated randr resolution,
since when running on regular Xorg the resolution will actually be
changed, and going fullscreen through NET_WM_STATE_FULLSCREEN afterwards
will size the window to match the new resolution.
We need to emulate this behavior for these games to work correctly.
Xwayland's emulated resolution is a per-X11-client setting, and Xwayland
sets a special _XWAYLAND_RANDR_EMU_MONITOR_RECTS property on the toplevel
windows of a client (and only those of that client) which has changed the
(emulated) resolution through a randr call.
This commit checks for that property and, if it is set, adjusts the
fullscreen monitor rect for this window to match the emulated resolution.
Here is a step-by-step of such an app going fullscreen:
1. App changes monitor resolution with randr.
2. Xwayland sets the _XWAYLAND_RANDR_EMU_MONITOR_RECTS property on all the
app's current and future windows. This property contains the origin of the
monitor for which the emulated resolution is set and the emulated
resolution.
3. App sets _NET_WM_STATE_FULLSCREEN.
4. We check the property and adjust the app's fullscreen size to match
the emulated resolution.
5. Xwayland sees a Window at monitor origin fully covering the emulated
monitor resolution. Xwayland sets a viewport making the emulated
resolution sized window cover the full actual monitor resolution.
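A rough sketch of the property check in step 4 (the helper and the
4-cardinals-per-monitor layout follow the description above; this is not
the literal mutter code):

  #include <X11/Xlib.h>
  #include <X11/Xatom.h>

  /* Look up the emulated size Xwayland advertises for the monitor at
   * (monitor_x, monitor_y); returns True and fills width/height if found. */
  static Bool
  get_xwayland_emu_size (Display *xdisplay, Window xwindow,
                         long monitor_x, long monitor_y,
                         unsigned long *width, unsigned long *height)
  {
    Atom prop = XInternAtom (xdisplay,
                             "_XWAYLAND_RANDR_EMU_MONITOR_RECTS", True);
    Atom type;
    int format;
    unsigned long n_items, bytes_after, i;
    unsigned char *data = NULL;
    Bool found = False;

    if (prop == None)
      return False;

    if (XGetWindowProperty (xdisplay, xwindow, prop, 0, 64, False,
                            XA_CARDINAL, &type, &format, &n_items,
                            &bytes_after, &data) != Success)
      return False;

    if (type == XA_CARDINAL && format == 32)
      {
        long *rects = (long *) data;

        /* Assumed layout: monitor origin x, y followed by the emulated
         * width and height, repeated per monitor. */
        for (i = 0; i + 3 < n_items; i += 4)
          {
            if (rects[i] == monitor_x && rects[i + 1] == monitor_y)
              {
                *width = rects[i + 2];
                *height = rects[i + 3];
                found = True;
                break;
              }
          }
      }

    if (data)
      XFree (data);

    return found;
  }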
https://gitlab.gnome.org/GNOME/mutter/merge_requests/739
Add an adjust_fullscreen_monitor_rect virtual method to MetaWindowClass
and call this from setup_constraint_info() if the window is fullscreen.
This allows MetaWindowClass to adjust the monitor-rectangle used to size
the window when going fullscreen, which will be used in further commits
for a workaround related to fullscreen games under Xwayland.
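A sketch of the shape of the hook (the struct member and the call site
are simplified relative to the real MetaWindowClass / constraints code):

  /* New vfunc in MetaWindowClass: */
  void (* adjust_fullscreen_monitor_rect) (MetaWindow    *window,
                                           MetaRectangle *monitor_rect);

  /* Call site, roughly what setup_constraint_info() does: */
  static void
  maybe_adjust_fullscreen_monitor_rect (MetaWindow    *window,
                                        MetaRectangle *monitor_rect)
  {
    MetaWindowClass *klass = META_WINDOW_GET_CLASS (window);

    if (meta_window_is_fullscreen (window) &&
        klass->adjust_fullscreen_monitor_rect)
      klass->adjust_fullscreen_monitor_rect (window, monitor_rect);
  }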
https://gitlab.gnome.org/GNOME/mutter/merge_requests/739
This reverts commit 4918893326.
This commit prevented cogl_stage_cogl_redraw_view() from skipping
swap buffers entirely if the invalidation region ended up empty.
This meant we were actually swapping buffers when we didn't need to.
The source of the glitches was fixed more properly, so this just adds
extra work.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/898
At some unknown point, we started deferring the eglMakeCurrent() for the
current framebuffer until we start painting on it, which means we
prepare to render a view without any guarantee that the framebuffer we
will paint to is the current drawing surface of the EGL context.
A fairly common case where that assumption breaks is multi-monitor
setups: there, we will be preparing to paint a view while the current
draw surface is still that of the previously rendered view.
Mesa will in this case return EGL_BAD_SURFACE when querying the buffer
age, since the surface is not yet the current draw surface. This makes
us give up on buffer-age checks and paint the whole view. Since the
problem repeats when painting the next view, we are effectively doing
full-screen redraws on all monitors.
Since cogl usually works implicitly, and querying the buffer age is
meaningless if you're not meant to paint on a surface, make the surface
the current draw surface implicitly before querying the buffer age.
This brings us glorious partial invalidations back when several views
have to be repainted.
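The idea, sketched with plain EGL calls (not cogl's actual winsys code):

  #include <EGL/egl.h>
  #include <EGL/eglext.h>

  /* Make the view's surface the current draw surface before asking for
   * its buffer age; otherwise Mesa reports EGL_BAD_SURFACE and the age
   * information is lost. */
  static EGLint
  query_buffer_age (EGLDisplay dpy,
                    EGLSurface surface,
                    EGLContext ctx)
  {
    EGLint age = 0;

    if (!eglMakeCurrent (dpy, surface, surface, ctx))
      return 0;

    if (!eglQuerySurface (dpy, surface, EGL_BUFFER_AGE_EXT, &age))
      return 0;  /* unknown age: fall back to a full redraw */

    return age;
  }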
https://gitlab.gnome.org/GNOME/mutter/merge_requests/906
In order to avoid log spamming, warn just once when it starts to happen,
not on every frame for every onscreen. Just knowing that this is
happening is enough of a hint that something's going wrong.
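A minimal sketch of the warn-once guard (the message and the way the
error reason is passed in are illustrative):

  #include <glib.h>

  static void
  warn_buffer_age_query_failed_once (const char *reason)
  {
    static gboolean warned_once = FALSE;

    if (warned_once)
      return;

    warned_once = TRUE;
    g_warning ("Failed to query buffer age: %s", reason);
  }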
https://gitlab.gnome.org/GNOME/mutter/merge_requests/906
Say you're using intel gen3, you poor soul. Your big-GL maxes out at 1.5
unless you use dirty tricks, but you do have GLES2. We try to fall back
to GLES in this case, but we only ever say eglBindAPI(EGL_OPENGL_API).
So when we go to do CreateContext, even though we think we've requested
GLES 2.0, the driver will compare that "2.0" against the maximum big-GL
version, and things will fail.
Fix this by binding EGL_OPENGL_ES_API before trying a GLES context.
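A minimal sketch of the fix (config and attribute choices are
illustrative):

  #include <EGL/egl.h>

  static EGLContext
  create_gles2_context (EGLDisplay dpy, EGLConfig config)
  {
    static const EGLint attribs[] = {
      EGL_CONTEXT_CLIENT_VERSION, 2,
      EGL_NONE
    };

    /* Without this, the requested "2.0" gets compared against the maximum
     * big-GL version instead of the GLES one. */
    eglBindAPI (EGL_OPENGL_ES_API);

    return eglCreateContext (dpy, config, EGL_NO_CONTEXT, attribs);
  }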
https://gitlab.gnome.org/GNOME/mutter/issues/635
In find_onscreen_for_xid() we want to loop over the framebuffers and
skip any that are not onscreen.
The code today does this by negating the framebuffer type variable and
skipping if that equals COGL_FRAMEBUFFER_TYPE_ONSCREEN. This actually
works, as the enum used happens to function as a boolean:
typedef enum _CoglFramebufferType {
  COGL_FRAMEBUFFER_TYPE_ONSCREEN,
  COGL_FRAMEBUFFER_TYPE_OFFSCREEN
} CoglFramebufferType;
But it is a bit of weird logic and fragile if more types are ever added
(not that I can think of any other type...).
To simplify this, and to silence a warning in clang, this patch just
changes it to a != test.
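The loop body then reads (sketched; the list iteration and field access
are simplified):

  for (l = framebuffers; l; l = l->next)
    {
      CoglFramebuffer *framebuffer = l->data;

      /* Was: if (!framebuffer->type == COGL_FRAMEBUFFER_TYPE_ONSCREEN) -
       * only correct because ONSCREEN happens to be 0 and OFFSCREEN 1. */
      if (framebuffer->type != COGL_FRAMEBUFFER_TYPE_ONSCREEN)
        continue;

      /* ... compare this onscreen's X window against the xid ... */
    }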
https://gitlab.gnome.org/GNOME/mutter/merge_requests/905
There are still environment variables for these controls, but having
them in a config file doesn't really make sense for mutter. Even if it
did we probably don't want to be parsing the same file as some
standalone version of cogl.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/902
Since 3.34, the gnome-shell package was cleaned up to only depend
on gnome-control-center-filesystem at build-time. However one of
the gnome-shell tests needs the gettext ITS file for keybindings
provided by the main gnome-control-center package (in fact, the
COPR package is stripped down to just that file), so install that
explicitly.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/901
This allows xdg_popup.grab() to work with styli. Without this check
we would bail out and emit xdg_popup.popup_done, leaving stylus users
unable to interact with popup menus, comboboxes, etc.
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/886
When a touch sequence was rejected, the emulated pointer events would be
replayed with old timestamps. This caused issues with grabs, as they
would be ignored for being too old. This was mitigated by making sure
device event timestamps never travelled back in time, by tampering with
any event that had a timestamp seemingly in the past.
This failed when the most recent timestamp received was much older than
the timestamp of the new event. This could for example happen when a
session was left without interaction for 40+ days or so: when interacted
with again, any new timestamp would, according to
XSERVER_TIME_IS_BEFORE(), still be in the past compared to the "most
recent" one. The effect is that we'd always use the `latest_evtime` for
all new device events without ever updating it.
The end result of this was that passive grabs would become active when
interacted with, but would then never be released, as the timestamps
passed to XIAllowEvents() would be out of date, resulting in the desktop
effectively freezing, since the Shell would hold an active pointer grab.
To avoid the situation where we get stuck with an old `latest_evtime`
timestamp, limit the tampering with device event timestamp to 1) only
pointer events, and 2) only during the replay sequence. The second part
is implemented by sending an asynchronous message via the X server after
rejecting a touch sequence, only potentially tampering with the device
event timestamps until the reply. This should avoid the stuck timestamp,
as in those situations we'll always have a relatively up-to-date
`latest_evtime`, meaning XSERVER_TIME_IS_BEFORE() will not get confused.
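For reference, a sketch of a wraparound-aware comparison in the spirit of
XSERVER_TIME_IS_BEFORE() (simplified relative to the actual macro):

  #include <glib.h>
  #include <X11/X.h>

  /* X server timestamps are 32-bit milliseconds and wrap roughly every
   * 49.7 days, so "before" has to be decided within half the 32-bit
   * range rather than with a plain "<". */
  static gboolean
  xserver_time_is_before (Time t1, Time t2)
  {
    return ((t1 < t2 && t2 - t1 < G_MAXUINT32 / 2) ||
            (t1 > t2 && t1 - t2 > G_MAXUINT32 / 2));
  }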
https://gitlab.gnome.org/GNOME/mutter/merge_requests/886
This way, we can simply pop up the Looking Glass and run:
>>> Meta.add_clutter_debug_flags(Clutter.DebugFlag.PICK, 0, 0)
And measure specific actions or events on GNOME Shell.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/862
As we do not prevent the SwapBuffers call from happening, those also
do count. This results in the clip area calculations being right for
monitors that previously did not get invalidated.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/888