This will allow us to have a MetaCursorReference 'subclass' that's
lazily loaded. We currently always load all the images.
The long-term plan is to have a subclass for each "backend" and only
have CoglTexture as a common denominator. For the nested X11 backend,
we use XDefineCursor on our stage window. For the Wayland backend, we
would use set_cursor on our stage surface. For the native backend, we
would use the GBM code that's there right now.
The CoglTexture is there to be a "shared fallback" between all devices,
and also for the get_sprite API.
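As a rough illustration of that plan (the struct layout and the backend-specific
fields below are assumptions, not existing mutter code; only MetaCursorReference,
CoglTexture, XDefineCursor and GBM come from the description above):

#include <cogl/cogl.h>
#include <X11/Xlib.h>
#include <gbm.h>

/* Hypothetical sketch: a per-backend "subclass" of MetaCursorReference,
 * with the CoglTexture kept as the shared, lazily-loaded fallback. */
typedef struct
{
  int ref_count;

  /* Common denominator for all backends; also backs get_sprite.
   * Loaded on demand instead of up front. */
  CoglTexture *texture;
} MetaCursorReference;

typedef struct
{
  MetaCursorReference parent;
  Cursor xcursor;        /* set on the stage window with XDefineCursor */
} MetaCursorReferenceX11;

typedef struct
{
  MetaCursorReference parent;
  struct gbm_bo *bo;     /* handed to the cursor plane by the native backend */
} MetaCursorReferenceNative;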
The odd man out is the X11 compositor case. For that, we need to move
the responsibility of setting the final cursor image out of
MetaCursorTracker, and simply have it track the sprite image in use and
the pointer position.
We want to make this private, and have MetaCursorReference be
backend-defined, with the texture possibly loaded on demand.
We can't make the definition of MetaCursorReference truly private yet
because of the XFixes cursor. A victim of MetaCursorTracker trying to
do too many things at once...
I want the MetaCursorTracker to mostly be about retrieving cursor
information. Start moving the code that loads cursor images to a
new file, MetaCursor. Eventually, MetaCursorTracker's APIs will
all take MetaCursorReferences, and we can have a clean backend
split here.
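Something along these lines is what I have in mind for the new file (the exact
prototypes are a sketch, not settled API; only the MetaCursorReference and
MetaCursorTracker names come from the plan above):

#include <meta/common.h>   /* for the MetaCursor enum */

typedef struct _MetaCursorReference MetaCursorReference;
typedef struct _MetaCursorTracker   MetaCursorTracker;

/* meta-cursor.h: loading and lifetime of cursor images. */
MetaCursorReference * meta_cursor_reference_from_theme (MetaCursor           cursor);
MetaCursorReference * meta_cursor_reference_ref        (MetaCursorReference *self);
void                  meta_cursor_reference_unref      (MetaCursorReference *self);

/* MetaCursorTracker would then only deal in references, e.g.: */
void meta_cursor_tracker_set_window_cursor (MetaCursorTracker   *tracker,
                                            MetaCursorReference *cursor);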
For whatever reason, this hash table was in the generic
implementation section instead of the XSync implementation,
even though it's only used by the XSync implementation.
Use it as a first pass of things to move over.
The reason we don't simply use gdk_window_add_filter directly is
because of some twisted idea that any GDK symbol being used from
core/ is a layer violation. While we certainly want to keep any
serious GDK code out of core/, event handling is quite important
to have in core/, so simply use a GDK event filter directly.
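For reference, the shape of that filter (gdk_window_add_filter and
GdkFilterFunc are the real GDK entry points; the function names here are
just illustrative):

#include <gdk/gdk.h>
#include <X11/Xlib.h>

static GdkFilterReturn
xevent_filter (GdkXEvent *xevent,
               GdkEvent  *event,
               gpointer   data)
{
  XEvent *xev = (XEvent *) xevent;

  /* ... hand xev to the core event handling ... */

  return GDK_FILTER_CONTINUE;
}

static void
install_xevent_filter (gpointer user_data)
{
  /* Passing NULL installs the filter for all windows GDK knows about. */
  gdk_window_add_filter (NULL, xevent_filter, user_data);
}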
The code here before was completely wrong. Not only did it mix up
coordinate spaces of "client rect" vs. "frame rect", but it used
meta_frame_get_frame_bounds, which is specifically for the *visible*
bounds of a window!
In the case that we don't have a bounding or input shape region at
all on the client window, the input shape that we should apply is
the surface's natural shape. So, set the region to NULL to get the
natural rect picking semantics.
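In code, the decision boils down to something like this (the helper below is
purely illustrative; the point is the NULL case):

#include <cairo.h>
#include <glib.h>

/* Hypothetical helper: pick the input region to hand to the surface actor.
 * Returning NULL means "no region set", so picking falls back to the
 * surface's natural rectangle. */
static cairo_region_t *
choose_input_region (gboolean        has_bounding_shape,
                     gboolean        has_input_shape,
                     cairo_region_t *client_shape /* client coordinates */)
{
  if (!has_bounding_shape && !has_input_shape)
    return NULL;   /* natural rect picking semantics */

  return cairo_region_reference (client_shape);
}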
Really, visible_to_compositor means that the window is shown, i.e.
not minimized. We need to be using a boolean tracking whether we've
called meta_compositor_add_window / meta_compositor_remove_window.
This fixes a jump during window placement when a window appears.
visible_to_compositor should always be in sync with show_window /
hide_window calls, even when unmanaging.
This fixes a crash where we call sync_window_state while the window
is unmanaging, since we use visible_to_compositor to decide whether it
is safe to call into the compositor at all.
This is actually wrong; we should be using the knowledge about
whether we have called add_window / remove_window. We'll introduce
this with a new boolean next time.
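Roughly, the idea is the following (the field and function names are
hypothetical; meta_compositor_add_window / meta_compositor_remove_window are
what the boolean would track):

#include <glib.h>

typedef struct
{
  gboolean visible_to_compositor;  /* shown vs. hidden (e.g. minimized) */
  gboolean known_to_compositor;    /* add_window called, remove_window not yet */
} WindowCompositorState;

static void
sync_window_state (WindowCompositorState *state)
{
  /* Guard on whether the compositor knows about the window at all,
   * not on whether the window is currently shown. */
  if (!state->known_to_compositor)
    return;

  /* ... safe to call into the compositor here ... */
}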
We guarantee ourselves that a valid pixmap will appear any time
that the window is painted. The window actor will be scheduled
for a repaint if it's added to or removed from the scene graph (such as
during construction), if the size changes, or if we receive damage,
which are the existing cases where this function is called.
So, I can't see any reason that we queue a redraw in here.
With the split into surface actors, we don't have an easy place
we can use to queue a redraw, and since it's unnecessary, we can
just drop it on the floor.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
Compositors haven't been able to manage more than one screen for
quite a while. Merge MetaCompScreen into MetaCompositor, and update
the API to match.
We still keep MetaScreen in the public compositor API for compatibility
purposes.
We previously separated out MetaDisplay and MetaScreen. mutter
would only manage one screen, but we still kept a list of screens
for simplicity.
With Wayland support, we no longer care about the ability to
manage more than one screen at a time. Remove this by killing
the list of screens, in favor of having just one MetaScreen
in MetaDisplay.
We also kill off active_screen at the same time, since it's
not necessary anymore.
A future cleanup should merge MetaDisplay and MetaScreen. To avoid
breaking API, we should probably keep MetaScreen around as a dummy
type.
I'm a bit tired of hearing about this when I launch mutter-wayland
nested. Ideally, this would be part of display server integration,
not GNOME integration, so we could simply not make the call when
nested, but oh well.
cogl_texture_get_components can be used on both X11 and Wayland
backends. Technically, the detection is different: the old code checks
the actual RENDER format, while Cogl simply assumes that any pixmap
with a depth >= 32 is ARGB32. Since Cogl
already seems to be working with its internal checks, it makes
more sense to use Cogl's check rather than keeping our own.
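Concretely, the check amounts to this (the helper name is illustrative;
cogl_texture_get_components and COGL_TEXTURE_COMPONENTS_RGBA are the Cogl API
referred to above):

#include <cogl/cogl.h>

/* Treat the pixmap as ARGB32 iff Cogl reports an alpha-bearing format. */
static gboolean
texture_is_argb32 (CoglTexture *texture)
{
  return cogl_texture_get_components (texture) == COGL_TEXTURE_COMPONENTS_RGBA;
}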