MetaGestureTracker has been keeping the "did I handle this event?" and
"should the event be filtered out?" questions separate; merge these and make
handle_event() answer "should the event be handled exclusively by me?".
If a sequence hasn't been accepted by the gesture tracker yet, the event will
go through (i.e. it is not handled exclusively by the gesture tracker) and
will still be processed by Clutter, triggering gesture actions and possibly
changing the sequence to another state.
https://bugzilla.gnome.org/show_bug.cgi?id=733631
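As a rough illustration of the merged semantics (the dispatch glue below is
hypothetical and the function name is derived from the description; only the
return-value interpretation follows the text above):

    /* Hypothetical dispatch glue; only the "exclusive or pass-through"
     * interpretation of the return value is the point here. */
    static void
    dispatch_event (MetaGestureTracker *tracker,
                    const ClutterEvent *event)
    {
      /* TRUE: the event is only for the gesture tracker, swallow it. */
      if (meta_gesture_tracker_handle_event (tracker, event))
        return;

      /* Not (yet) accepted: let Clutter process it too, possibly
       * triggering gesture actions and moving the sequence to
       * another state. */
      clutter_do_event ((ClutterEvent *) event);
    }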
On X11 this works because only emulated pointer events are listened for. On
Wayland, the single-touch behavior must be enforced on the touch events
themselves, ignoring every other sequence.
https://bugzilla.gnome.org/show_bug.cgi?id=733631
On X11 this function just states the obvious; on Wayland it implements a
similar mechanism to determine the "pointer emulating" sequence, i.e. the
one to stick with when implementing single-touch behavior.
https://bugzilla.gnome.org/show_bug.cgi?id=733631
Because the MetaGestureTracker processes every touch event, this will report
the current number of touches happening on the stage as closely to Clutter's
view as possible.
That said, this is subject to windowing behavior: on X11, rejected touches
will soon be followed by an XI_TouchEnd event, so the compositor stops seeing
touch sequences that are still operating on clients. On Wayland, touch
sequences are processed by the compositor throughout their whole lifetime, so
these will stay in the MetaGestureTracker with the META_SEQUENCE_PENDING_END
state, yet still be tracked.
https://bugzilla.gnome.org/show_bug.cgi?id=733631
On Wayland, touches are initially both handled by the compositor and sent to
clients; they are only cancelled on clients after the compositor claims the
sequence for itself. Implement this cancellation through
MetaGestureTracker::state-changed.
https://bugzilla.gnome.org/show_bug.cgi?id=733631
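A sketch of how the Wayland side might implement that cancellation; the signal
signature, the META_SEQUENCE_ACCEPTED value and the MetaWaylandTouch plumbing
are assumptions, only wl_touch_send_cancel() is the stock wayland-server API:

    static void
    on_state_changed (MetaGestureTracker   *tracker,
                      ClutterEventSequence *sequence,
                      MetaSequenceState     state,     /* assumed signature */
                      MetaWaylandTouch     *touch)     /* assumed plumbing */
    {
      if (state == META_SEQUENCE_ACCEPTED)
        {
          /* The compositor claimed the sequence for itself: tell the
           * client to drop whatever it has seen of it so far. */
          wl_touch_send_cancel (touch->resource);
        }
    }

    /* During Wayland touch setup: */
    g_signal_connect (tracker, "state-changed",
                      G_CALLBACK (on_state_changed), touch);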
This reverts commit 3b85e4b2b9.
That commit broke touch support; on the other hand, reverting it breaks
Wayland again (which is what the patch tried to fix), so we should find a
better solution that works on both.
The current GNOME Shell Alt-F2 restart looks very messy and also
provides no indication to the user of what is going on. We need to
restart the compositor to switch in and out of stereo mode, so
add a framework for doing this more cleanly:
Additions:
meta_restart(): restarts the compositor with a message
MetaDisplay::show-restart-message: signal the embedding
shell to show a message
MetaDisplay::restart: signal the embedding shell to restart
itself.
meta_is_restart(): indicates whether the current instance is a
restart so we can suppress login animations.
A helper program meta-restart-helper holds the composite overlay
window up during the restart to avoid visual artifacts.
https://bugzilla.gnome.org/show_bug.cgi?id=733026
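A sketch of how an embedding shell could hook into this framework; the handler
bodies, helper names and exact signal signatures are assumptions based on the
list above:

    static void
    on_show_restart_message (MetaDisplay *display,
                             const char  *message,
                             gpointer     user_data)
    {
      shell_show_message (message);       /* hypothetical shell helper */
    }

    static void
    on_restart (MetaDisplay *display,
                gpointer     user_data)
    {
      shell_reexec ();                    /* hypothetical shell helper */
    }

    static void
    setup_restart (MetaDisplay *display)
    {
      g_signal_connect (display, "show-restart-message",
                        G_CALLBACK (on_show_restart_message), NULL);
      g_signal_connect (display, "restart",
                        G_CALLBACK (on_restart), NULL);

      /* Suppress login animations when coming back from a restart. */
      if (meta_is_restart ())
        shell_skip_startup_animations (); /* hypothetical shell helper */
    }

A stereo mode switch would then just call meta_restart() with the message to
display (assuming it takes the user-visible message as its argument).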
When a Wayland window acks our configure and we don't really have
anything to modify, we pass META_IS_WAYLAND_RESIZE as the sole flag
to meta_window_move_resize_internal, along with a garbage rect. The
existing code that calculates the new rectangle couldn't really handle
this case, and so the garbage rectangle accidentally got stored. Revamp
the flag checks to be clearer about this.
This fixes the weird positioning issues that sometimes appear when
resizing weston-terminal among others.
We really can't do this unless the backend X server is the same as the
frontend X server, as we pass a frontend XID to the backend, which is
only the case when we're not a Wayland compositor.
This code was supposed to refresh our default icons when the theme
changed, but it actually was a no-op, since the default icons are cached
in a static variable in MetaUI.
I'm not sure the fact that the fallback icons don't update when the
theme changes is an important enough use case to keep working, but I'm
keeping the skeleton function there in case somebody wants to actually
fix it properly.
Now that we have two connections to the X server, the idea of a
ref-counted server grab that might be held across extended portions
of code is very dangerous since we might try to use the backend
connection while the frontend connection is grabbed.
Replace the only usage (which was local) with direct
XGrabServer/XUngrabServer usage and remove the meta_display_grab()
API.
https://bugzilla.gnome.org/show_bug.cgi?id=733068
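The replacement is just plain Xlib around the (formerly local) critical
section, roughly:

    XGrabServer (xdisplay);

    /* ... query and act on the result atomically ... */

    XUngrabServer (xdisplay);
    XFlush (xdisplay);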
There's no obvious reason for grabbing the X server when unmanaging
a screen - the only race conditions a server grab solves are those
related to querying and then acting on the results of the query.
Our shutdown sequence is correctly ordered according to the ICCCM -
we first unselect on the root window, and then we destroy the
window owning WM_S<n> so removing the grab should not cause any
problems when we are being replaced with another window manager.
https://bugzilla.gnome.org/show_bug.cgi?id=733068
The only case we have is the one where the two X11 connections are the
same. When on Wayland, the XSync is costly, and we should minimize it.
Commit 8100cefd4c fixed a crash during workspace initialization by
tweaking the startup sequence; as a result, the plugin (like gnome-shell)
is now started before workspaces are fully initialized, which breaks
some reasonable assumptions (like always having an active workspace).
This is particularly problematic considering that the code making those
assumptions is not necessarily our own (extensions!), so go back to
fully initializing workspaces before the compositor.
At the same time, make sure to only call meta_workspace_activate()
once during initialization to avoid reintroducing the crash.
https://bugzilla.gnome.org/show_bug.cgi?id=732695
This makes sure that we see them for Wayland clients as well, and don't
time out and crash when we're accessing an invalid window / surface.
Spotted-by: Rui Matos <tiagomatos@gmail.com>
If a sequence moves past a certain distance without being used by a
gesture, reject it so clients may see and react to it ASAP. This means
gestures must be begun by initially quasi-static touchpoints, in addition
to quasi-simultaneous ones.
This way touch events will be caught first by the compositor; whenever
the MetaGestureTracker notifies about the accepted/rejected state of a
sequence, XIAllowTouchEvents() will be called on it accordingly, so the
sequence is either handled exclusively by the compositor or punted to
clients.
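For reference, the XInput 2.2 call in question looks roughly like this; the
surrounding variables come from the XI_TouchBegin event and are assumed here:

    XIAllowTouchEvents (xdisplay,
                        device_id,
                        touch_id,
                        root_xwindow,
                        accepted ? XIAcceptTouch : XIRejectTouch);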
This object tracks both the touch sequences happening on the stage and
the gestures attached to the stage actor. When a gesture emits
::gesture-begin, all triggering sequences and future ones will be
marked as "accepted" by the compositor, and events will be listened
for while there are active gestures.
If a sequence goes unclaimed for a short time, it will be
automatically "denied" and punted to the client or shell element
below.
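A sketch of that auto-denial, with hypothetical names and timeout value (only
META_SEQUENCE_PENDING_END earlier is a real state name; the rest is
illustrative):

    #define AUTODENY_TIMEOUT_MS 150   /* assumed value */

    static gboolean
    autodeny_sequence_cb (gpointer user_data)
    {
      MetaSequenceInfo *info = user_data;  /* hypothetical per-sequence data */

      /* Nothing claimed the sequence in time: deny it so it is punted
       * to the client or shell element below. */
      if (info->state == META_SEQUENCE_NONE)
        meta_gesture_tracker_set_sequence_state (info->tracker,
                                                 info->sequence,
                                                 META_SEQUENCE_REJECTED);

      info->autodeny_timeout_id = 0;
      return G_SOURCE_REMOVE;
    }

    /* When a touch sequence begins: */
    info->autodeny_timeout_id = g_timeout_add (AUTODENY_TIMEOUT_MS,
                                               autodeny_sequence_cb, info);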
Touch events are largely ignored in GdkEvent emulation, so make frames
receive only pointer events; that way only the pointer-emulating touch
will be reported, and any further touches will be ignored, which is
about the behavior we want. This makes window dragging on touch
possible again.
Since Wayland configures are more of a hint to the client than anything,
we don't want to save the unconstrained rect when we're just hinting to
the client that it should resize, since it could ignore us. This would
get us stuck in a loop, since meta_window_move_resize_now would use the
unconstrained_rect to resize, and we don't remove the resize from the
queue if we have an outstanding request like that.
This fixes a bunch of traffic / CPU usage when trying to resize
weston-terminal.
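The gist of the fix, paraphrased from the message above (the names here are
illustrative, not the actual code):

    /* Only remember the new unconstrained rect when the change is
     * authoritative; a configure that merely hints the client to resize
     * can be ignored by it, so storing the rect would start the loop
     * described above. */
    if (!resize_is_only_a_hint_to_the_client)
      window->unconstrained_rect = new_unconstrained_rect;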
For XWayland, we need to make sure to send out mouse events on O-R
windows, otherwise they won't get motion or button events.
The comment mentions events being eaten by the compositor, but we already
bypass the compositor for all events that have a window. The return
value just controls whether we pass them to Wayland.
The output_id is more of an opaque identifier for the monitor, based on
its underlying ID from the windowing system. Since we also use the term
"output_id" for the output's index, rename our use of the opaque cookie
"output_id" to "winsys_id".
The GDK and hence GNOME standard is that keys which begin with XF86
according to libxkbcommon are not prefixed with XF86, though
gdk_keyval_from_name() strips XF86 if provided. If libxkbcommon doesn't
recognize the accelerator name without XF86, try again with XF86 prepended.
This restores compatibility with gnome-settings-daemon, schemas, and existing
user configuration.
https://bugzilla.gnome.org/show_bug.cgi?id=727993
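A sketch of that fallback, assuming a small helper around libxkbcommon (the
helper name and flag choice are illustrative):

    #include <glib.h>
    #include <xkbcommon/xkbcommon.h>

    static xkb_keysym_t
    keysym_from_accelerator_name (const char *name)
    {
      xkb_keysym_t keysym;

      keysym = xkb_keysym_from_name (name, XKB_KEYSYM_NO_FLAGS);
      if (keysym == XKB_KEY_NoSymbol)
        {
          /* GNOME drops the XF86 prefix; try again with it prepended. */
          char *xf86_name = g_strconcat ("XF86", name, NULL);

          keysym = xkb_keysym_from_name (xf86_name, XKB_KEYSYM_NO_FLAGS);
          g_free (xf86_name);
        }

      return keysym;
    }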
It just gets in the way of gnome-shell's log handler (which optionally
includes gjs backtraces), it requires people to understand what 8 or 16
mean as log levels, and it loses the log domain.
Some plugins and extensions want to be able to know when the sticky
field of a window changes, so add a property for it and allow them
to connect to the notify::on-all-workspaces signal.
When workspaces-only-on-primary is set and a window is moved back to the
primary, we also move it to the active workspace to avoid the confusion
of a visible window suddenly disappearing when crossing the monitor border.
However when the window is not actually moved by the user, preserving the
workspace makes more sense - we already do this in some cases (e.g. when
moving between primary monitors), but miss others (unplugging the previous
monitor); just add an explicit user_op parameter as used elsewhere to cover
all exceptions.
https://bugzilla.gnome.org/show_bug.cgi?id=731760
Remember the last monitor a window was moved to by user action and
try to move it back on monitor changes; this should match user
expectations much better when a monitor is unplugged temporarily.
https://bugzilla.gnome.org/show_bug.cgi?id=731760
When workspaces-only-on-primary is set, a window can be on all
workspaces either because it is on a non-primary monitor, or
because it was explicitly made sticky. Only the latter is reflected
in _NET_WM_STATE, but both will result in a "magic" _NET_WM_DESKTOP,
which we (and probably other WMs) use to set the initial sticky state.
So to avoid confusing other WMs (or ourselves), make sure to only
have _NET_WM_STATE_STICKY reflected in _NET_WM_DESKTOP when unmanaging.
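In other words, something along these lines when writing the property on
unmanage (the helper names are assumptions; 0xFFFFFFFF is the EWMH "all
desktops" value):

    unsigned long desktop;

    if (window_is_explicitly_sticky (window))       /* assumed helper */
      desktop = 0xFFFFFFFF;                         /* EWMH: all desktops */
    else
      desktop = workspace_index_for (window);       /* assumed helper */

    XChangeProperty (xdisplay, xwindow,
                     atom__NET_WM_DESKTOP, XA_CARDINAL, 32,
                     PropModeReplace, (unsigned char *) &desktop, 1);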
Window state like maximization and minimization should be preserved
over restarts - in a patch review, this would qualify as "needs-work",
so revert the cleanup until the issues are fixed.
This reverts commit dc6decefb5.
Rather than calculate it speculatively with the current properties
which may be too new or too out of date, make sure it always fits
with the proper definition. We update it when we update the toplevel
window for X11, and when a Wayland surface is committed with a newly
attached buffer.
With get_input_region existing, get_input_rect is a misnomer. Really,
it's about the geometry of the output surface, and it's only used that
way in the compositor code.
Way back when in GNOME 3.2, get_input_rect was added when we added
invisible borders. get_outer_rect was always synonymous with server-side
geometry of the toplevel. get_outer_rect was used for both user-side
policy (the "frame rect") and to get the geometry of the window.
Invisible borders were meant to extend the input region of the frame
window silently. Since most users of get_outer_rect cared about the
frame rect, we kept that the same and added a new method, get_input_rect,
to get the full rect of the framed window with all the invisible input
borders included.
As time went on and CSD and Wayland became a reality, the relationship
between the server-side geometry and the "frame rect" became more
complicated, as can be evidenced by the recent commits. Since clients
don't tend to be framed anymore, they set their own input region.
get_buffer_rect is also sort of a poor name, since X11 doesn't really
have buffers, but we don't really have many other alternatives.
This doesn't change any of the code, nor the meaning. It will always
refer to the rectangle where the toplevel should be placed.
All of the users of get_input_rect don't actually want a synthesized
input rect based off of the current margins. What they really want is
the last-configured size of the toplevel window.
Since we don't properly track this anymore in the generic MetaWindow,
use XGetWindowAttributes to fetch a server-side rectangle. This is a
bad layer violation, but since the window geometry code will have to
be rewritten anyway for the Wayland set_window_geometry, let's just
push a hacky fix for now.
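The hacky fix amounts to something like this (the field names on the receiving
side are illustrative):

    XWindowAttributes attrs;

    /* Ask the server for the toplevel's last-configured geometry instead
     * of synthesizing it from possibly stale margins. */
    if (XGetWindowAttributes (xdisplay, xwindow, &attrs))
      {
        rect.x = attrs.x;
        rect.y = attrs.y;
        rect.width = attrs.width;
        rect.height = attrs.height;
      }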
Struts are defined in terms of screen edges, so expand the rectangles
we get via set_builtin_struts() accordingly. However we do want to
allow chrome on edges between monitors, in which case the expansion
would render an entire monitor unusable - don't expand the rectangles
in that case, which means we will only use them for constraining
windows but ignore them for the client-visible _NET_WORKAREA property.
https://bugzilla.gnome.org/show_bug.cgi?id=730527
Like the _NET_WM_STRUT/_NET_WM_STRUT_PARTIAL client properties,
_NET_WORKAREA is defined in terms of screen geometry rather than
taking individual monitors into account. However we do want to
allow system chrome to be attached to a monitor edge rather than
a screen edge under some circumstances. As not all clients can
be assumed to deal gracefully with the resulting workarea, use
those "struts" only internally for constraining windows, but
ignore them when exporting _NET_WORKAREA.
https://bugzilla.gnome.org/show_bug.cgi?id=730527
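A sketch of the expansion rule for one edge, with a stand-in rectangle type
(the real code uses mutter's own rectangle and monitor data):

    typedef struct { int x, y, width, height; } Rect;

    /* Stretch a builtin strut on the left edge of its monitor out to the
     * left screen edge, unless another monitor lies in the way - in that
     * case leave it alone, so it is only used for constraining windows
     * and not reflected in the exported _NET_WORKAREA. */
    static Rect
    expand_strut_left (Rect strut, Rect screen, int other_monitor_in_the_way)
    {
      if (other_monitor_in_the_way)
        return strut;

      strut.width += strut.x - screen.x;
      strut.x = screen.x;
      return strut;
    }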
Since commit 8b2b65246a, we assume that the compositor always
exists. Alas, the assumption is wrong - the compositor is currently
initialized after the screen, but meta_screen_new() itself may
call a compositor function if initialization involves a workspace
switch (which will happen when meta_workspace_activate() is called
more than once and for different workspaces - or in other words,
when _NET_CURRENT_DESKTOP is set and not 0).
So carefully split out the offending bits and only call them after
the compositor has been initialized.
https://bugzilla.gnome.org/show_bug.cgi?id=731332
If we have a tree of a window, a non-attached dialog, and then an
attached dialog, we want to move the second window, not the attached
dialog or the topmost. In other words, we want to move the first
non-attached window, or the first "freefloating window".
This happens in Firefox, whose Preferences dialog is freefloating,
but suboptions of those are modal dialogs.
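The lookup boils down to walking up the transient chain until the first
non-attached window (getter names as in mutter's public API, to the best of
my knowledge):

    static MetaWindow *
    find_freefloating_window (MetaWindow *window)
    {
      /* E.g. Firefox: toplevel -> freefloating Preferences dialog ->
       * attached modal dialog; starting from the modal dialog this
       * returns the Preferences dialog. */
      while (meta_window_is_attached_dialog (window) &&
             meta_window_get_transient_for (window) != NULL)
        window = meta_window_get_transient_for (window);

      return window;
    }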
Stupid apps fullscreen themselves by resizing the client window to
monitor size. A monitor-sized frame rect on the other hand is perfectly
normal on monitors without struts - stop force-fullscreening those
and catch the real baddies instead.
https://bugzilla.gnome.org/show_bug.cgi?id=730681
When opening the window menu without an associated control - e.g.
by right-clicking the titlebar or by keyboard - using coordinates
for the menu position is appropriate. However when the menu is
associated with a window button, the expected behavior in the
shell can be implemented much easier with the full button geometry:
the menu will point to the center of the button's bottom edge
rather than align to the left/right side of the titlebar as it
does now, and the clickable area where a release event does not
dismiss the menu will match the actual clickable area in mutter.
So add an additional show_window_menu_for_rect() function and
use it when opening the menu from a button.
https://bugzilla.gnome.org/show_bug.cgi?id=731058
This can happen since we select for events on the root window, and
clients themselves might not select for input, meaning the X server
will bubble the event up to us. Just do nothing and ignore it in this case.
This should hopefully fix some of the
Window manager warning: Log level 8: meta_window_raise: assertion '!window->override_redirect' failed
Window manager warning: Log level 8: meta_window_focus: assertion '!window->override_redirect' failed
spam that people have been seeing.