Compositors haven't been able to manage more than one screen for
quite a while. Merge MetaCompScreen into MetaCompositor, and update
the API to match.
We still keep MetaScreen in the public compositor API for compatibility
purposes.
We previously separated out MetaDisplay and MetaScreen. mutter
would only manage one screen, but we still kept a list of screens
for simplicity.
With Wayland support, we no longer care about the ability to
manage more than one screen at a time. Remove this by killing
the list of screens, in favor of having just one MetaScreen
in MetaDisplay.
We also kill off active_screen at the same time, since it's
not necessary anymore.
A future cleanup should merge MetaDisplay and MetaScreen. To avoid
breaking API, we should probably keep MetaScreen around as a dummy
type.
display.c is getting a bit crowded. Move most of the event handling
out to a new file, events.c.
The long-term goal is to have generic event handling here, with
backend-specific handling for the types of windows and such.
If we have a CLICKING grab op, we still need to send events to xwayland
so that we get them back for gtk+ to process; thus, we can't steer
wayland input focus away from it.
https://bugzilla.gnome.org/show_bug.cgi?id=726123
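Roughly, the idea is the following sketch; grab_op_is_clicking() and
pick_pointer_focus() are illustrative names, and only a subset of the
META_GRAB_OP_CLICKING_* values is shown:

  static gboolean
  grab_op_is_clicking (MetaGrabOp op)
  {
    switch (op)
      {
      case META_GRAB_OP_CLICKING_MINIMIZE:
      case META_GRAB_OP_CLICKING_MAXIMIZE:
      case META_GRAB_OP_CLICKING_UNMAXIMIZE:
      case META_GRAB_OP_CLICKING_DELETE:
      case META_GRAB_OP_CLICKING_MENU:
        /* ...and the remaining CLICKING ops... */
        return TRUE;
      default:
        return FALSE;
      }
  }

  static MetaWaylandSurface *
  pick_pointer_focus (MetaDisplay        *display,
                      MetaWaylandSurface *under_pointer)
  {
    /* During a CLICKING grab op the frame (an X11 client running under
     * xwayland) must keep receiving events so that gtk+ sees the
     * release, so don't steer the wayland input focus away from it. */
    if (display->grab_op == META_GRAB_OP_NONE ||
        grab_op_is_clicking (display->grab_op))
      return under_pointer;

    return NULL; /* any other grab op: steer focus away */
  }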
This ensures that we send the proper leave and enter events to wayland
clients.
In particular, this solves a bug in SSD xwayland windows where clicking
and dragging on the title bar to move the window only works every other
time (unless the pointer moves away from the title bar between
tries). This happens because xwayland gets a button press but doesn't
see the release, so when the next button press arrives it discards it,
because its pointer button tracking logic says the button is already
pressed. Sending the proper wayland pointer leave event fixes this,
since wayland clients must forget about button state at that point.
https://bugzilla.gnome.org/show_bug.cgi?id=726123
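The core of the fix is to send wl_pointer.leave before the next
wl_pointer.enter whenever pointer focus moves between surfaces. A
rough sketch using the stock wayland-server API; set_pointer_focus()
and its single-resource simplification are illustrative (the real code
tracks one pointer resource per client):

  #include <wayland-server.h>

  static void
  set_pointer_focus (struct wl_display  *display,
                     struct wl_resource *pointer_resource,
                     struct wl_resource *old_surface,
                     struct wl_resource *new_surface,
                     wl_fixed_t          sx,
                     wl_fixed_t          sy)
  {
    uint32_t serial = wl_display_next_serial (display);

    /* Clients must forget their button state on leave, so the next
     * press on old_surface isn't discarded as "already pressed". */
    if (old_surface != NULL)
      wl_pointer_send_leave (pointer_resource, serial, old_surface);

    if (new_surface != NULL)
      wl_pointer_send_enter (pointer_resource, serial, new_surface, sx, sy);
  }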
At one point, running mutter without a compositor was supported, but
we don't allow that any longer. A lot of code already assumes
display->compositor exists and doesn't check for a NULL pointer,
so just kill the rest of the checks.
Make it a compile-time flag rather than a run-time flag, because
practically any time you're going to be debugging event spewing,
you're going to have to recompile anyway. Remove the WITH_VERBOSE_MODE
checks, too.
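The intent is roughly the following; the MUTTER_DEBUG_EVENT_SPEW macro
and the describe_event() helper are made-up names for illustration,
not the actual symbols:

  #include <glib.h>

  /* Compiled out entirely unless the flag is defined at build time. */
  #ifdef MUTTER_DEBUG_EVENT_SPEW
  #define spew_event(display, event) \
    g_print ("EVENT: %s\n", describe_event (display, event))
  #else
  #define spew_event(display, event) G_STMT_START { } G_STMT_END
  #endif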
This is used for Wayland popup grabs.
The issue here is that we don't want the code that raises or focuses
windows based on mouse ops to run while a client has a grab.
We still keep the "old" grab infrastructure in place for now, but
ideally we'd eventually replace it with a better grab-op infrastructure.
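As a sketch of the intended behaviour (the
meta_display_windows_are_interactable() helper is a hypothetical name
for the check; meta_window_raise() and meta_window_focus() are the
existing window entry points):

  static void
  maybe_raise_and_focus (MetaDisplay *display,
                         MetaWindow  *window,
                         guint32      timestamp)
  {
    /* While a client (e.g. a Wayland popup) holds a grab, mouse ops
     * must not raise or focus other windows out from under it. */
    if (!meta_display_windows_are_interactable (display))
      return;

    meta_window_raise (window);
    meta_window_focus (window, timestamp);
  }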
The only events we handle as XIEvents are FocusIn/Out, Enter and
Leave. Motion, ButtonPress/Release, and KeyPress/Release are handled
through clutter instead.
Among other things, this means we don't need to fake motion compression
by peeking at the gdk event queue...
Mouse event handling was duplicated, resulting in weird interactions
if clutter was allowed to see certain events (for example under
wayland, where it gets all events). Because clutter now sees all
X events, even when running as an x11 compositor, we can handle
everything using the clutter variants.
At the same time, slightly rewrite the passive button grab code to
make it clear what is being matched on what and why.
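A sketch of the resulting split; handle_crossing_event() and
handle_focus_event() are illustrative stand-ins for the real handlers:

  #include <X11/extensions/XInput2.h>

  static void handle_crossing_event (MetaDisplay *display, XIEnterEvent *event);
  static void handle_focus_event    (MetaDisplay *display, XIEnterEvent *event);

  static gboolean
  handle_input_xevent (MetaDisplay *display,
                       XIEvent     *input_event)
  {
    switch (input_event->evtype)
      {
      case XI_Enter:
      case XI_Leave:
        handle_crossing_event (display, (XIEnterEvent *) input_event);
        return TRUE;

      case XI_FocusIn:
      case XI_FocusOut:
        handle_focus_event (display, (XIEnterEvent *) input_event);
        return TRUE;

      default:
        /* Motion, ButtonPress/Release and KeyPress/Release fall
         * through and are delivered as ClutterEvents instead. */
        return FALSE;
      }
  }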
The rendering logic before was somewhat complex. We had three independent
cases to take into account when doing rendering:
* X11 compositor. In this case, we're a traditional X11 compositor,
not a Wayland compositor. We use XCompositeNameWindowPixmap to get
the backing pixmap for the window, and deal with the COMPOSITE
extension messiness.
In this case, meta_is_wayland_compositor() is FALSE.
* Wayland clients. In this case, we're a Wayland compositor managing
Wayland surfaces. The rendering for this is fairly straightforward,
as Cogl handles most of the complexity with EGL and SHM buffers...
Wayland clients give us the input and opaque regions through
wl_surface.
In this case, meta_is_wayland_compositor() is TRUE and
priv->window->client_type == META_WINDOW_CLIENT_TYPE_WAYLAND.
* XWayland clients. In this case, we're a Wayland compositor, like
above, and XWayland hands us Wayland surfaces. XWayland handles
the COMPOSITE extension messiness for us, and hands us a buffer
like any other Wayland client. We have to fetch the input and
opaque regions from the X11 window ourselves.
In this case, meta_is_wayland_compositor() is TRUE and
priv->window->client_type == META_WINDOW_CLIENT_TYPE_X11.
We now split the rendering logic into two subclasses, which are:
* MetaSurfaceActorX11, which handles the X11 compositor case, in that
it uses XCompositeNameWindowPixmap to get the backing pixmap and deals
with all the COMPOSITE extension messiness.
* MetaSurfaceActorWayland, which handles the Wayland compositor case
for both native Wayland clients and XWayland clients. XWayland handles
COMPOSITE for us, and handles pushing a surface over through the
xf86-video-wayland DDX.
Frame sync is still in MetaWindowActor, as it needs to work for both the
X11 compositor and XWayland client cases. When Wayland's video display
protocol lands, this will need to be significantly overhauled, as it would
have to work for any wl_surface, including subsurfaces, so we would need
surface-level discretion.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
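A sketch of how the subclass gets chosen; the create_surface_actor()
wrapper and the constructor names are assumptions, and window->surface
is assumed to point at the window's MetaWaylandSurface:

  static MetaSurfaceActor *
  create_surface_actor (MetaWindow *window)
  {
    /* Traditional X11 compositor: redirect the window ourselves. */
    if (!meta_is_wayland_compositor ())
      return meta_surface_actor_x11_new (window);

    /* Wayland compositor: both native Wayland clients and XWayland
     * clients hand us a wl_surface, so they share one subclass. */
    return meta_surface_actor_wayland_new (window->surface);
  }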
The input region was set on the shaped texture, but the shaped texture
was never picked properly, as it was never set to be reactive. Move the
pick implementation and reactivity to the MetaSurfaceActor, and update
the code everywhere else to expect a MetaSurfaceActor.
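A rough sketch of the shape of that change, assuming the clutter 1.x
pick model; the body below picks the whole allocation for brevity,
whereas the real implementation restricts picking to the input region:

  static void
  meta_surface_actor_pick (ClutterActor       *actor,
                           const ClutterColor *color)
  {
    ClutterActorBox box;

    if (!clutter_actor_should_pick_paint (actor))
      return;

    clutter_actor_get_allocation_box (actor, &box);

    /* Paint in the pick color so clicks land on this actor. */
    cogl_set_source_color4ub (color->red, color->green,
                              color->blue, color->alpha);
    cogl_rectangle (0, 0, box.x2 - box.x1, box.y2 - box.y1);
  }

  static void
  meta_surface_actor_class_init (MetaSurfaceActorClass *klass)
  {
    CLUTTER_ACTOR_CLASS (klass)->pick = meta_surface_actor_pick;
  }

  static void
  meta_surface_actor_init (MetaSurfaceActor *self)
  {
    /* Without this the actor is never considered for picking at all. */
    clutter_actor_set_reactive (CLUTTER_ACTOR (self), TRUE);
  }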
I implemented pinging, but never actually enabled the feature properly
on Wayland surfaces: the net_wm_ping hint was never set to TRUE, so the
fallback path was always hit.
Rename net_wm_ping to can_ping so it doesn't take on an
implementation-specific meaning, and set it for all Wayland windows.
This was a bad idea, as ping/pong has moved to a client-specific
request/event pair, rather than a surface-specific one. Revert
the changes we made here and correct the code to make up for it.
This reverts commit aa3643cdde.
Traditionally, WMs unmap windows when minimizing them, and map them
when restoring them or wanting to show them for other reasons, like
upon creation.
However, as metacity morphed into mutter, we added the option of
keeping windows mapped for the lifetime of the window under the user
setting "live-window-previews", which keeps windows mapped so we can
show window previews for minimized windows in other places, like
Alt-Tab and Expose.
I removed this preference two years ago mechanically, by removing all
the if statements, but never went through and cleaned up the code so
that windows are simply mapped for the lifetime of the window -- the
"architecture" of the old code that maps and unmaps on show/hide was
still there.
Remove this now.
The one case we still need to be careful of is shaded windows, in which
we do still unmap the client window. In the future, we might want to
show previews of shaded windows in the overview and Alt-Tab. In that
case, we'd also keep shaded windows mapped and could remove all unmap
logic,
but we'd need a more complex method of showing the shaded titlebar, such
as using a different actor.
At the same time, simplify the compositor interface by removing
meta_compositor_window_[un]mapped API, and instead adding/removing the
window on-demand.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
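On the compositor side this boils down to calling the add/remove entry
points from the window lifecycle paths; the wrapper functions below are
illustrative call sites, assuming meta_compositor_add_window() /
meta_compositor_remove_window() is the remaining interface:

  static void
  on_window_managed (MetaWindow *window)
  {
    meta_compositor_add_window (window->display->compositor, window);
  }

  static void
  on_window_unmanaging (MetaWindow *window)
  {
    meta_compositor_remove_window (window->display->compositor, window);
  }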
The goal here is to make MetaWindow represent a toplevel, managed
window, regardless of whether it's X11 or Wayland, and to build an
abstraction layer on top. Right now, most of the X11 code is in core/
and the wayland code in wayland/, but in the future, I want to move a
lot of the X11 code to a new toplevel directory, x11/.
X11 window frames use special UI grab ops, like META_GRAB_OP_CLICKING_MAXIMIZE,
in order to work properly. As the frames in this case are X11 clients,
we need to pass X events through to them. So, similar to how
handle_xevent works, use two variables, bypass_clutter and
bypass_wayland, and set them when we
handle specific events.
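A sketch of that pattern; the handler below is illustrative, only
META_GRAB_OP_CLICKING_MAXIMIZE and the two booleans come from the
change itself, and the exact values set per op are an assumption:

  static gboolean
  handle_frame_button_event (MetaDisplay        *display,
                             const ClutterEvent *event)
  {
    gboolean bypass_clutter = FALSE; /* keep clutter from also acting on it */
    gboolean bypass_wayland = FALSE; /* hide the event from (X)Wayland clients */

    switch (display->grab_op)
      {
      case META_GRAB_OP_CLICKING_MAXIMIZE:
        /* The frame is an X11 client, so the event has to reach it
         * (via xwayland): don't bypass wayland, but swallow the event
         * before clutter processes it a second time. */
        bypass_clutter = TRUE;
        bypass_wayland = FALSE;
        break;

      default:
        break;
      }

    if (!bypass_wayland)
      forward_event_to_wayland (display, event); /* illustrative helper */

    return bypass_clutter;
  }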