Struts are defined in terms of screen edges, so expand the rectangles
we get via set_builtin_struts() accordingly. However we do want to
allow chrome on edges between monitors, in which case the expansion
would render an entire monitor unusable - don't expand the rectangles
in that case, which means we will only use them for constraining
windows but ignore them for the client-visible _NET_WORKAREA property.
https://bugzilla.gnome.org/show_bug.cgi?id=730527
Like the _NET_WM_STRUT/_NET_WM_STRUT_PARTIAL client properties,
_NET_WORKAREA is defined in terms of screen geometry rather than
taking individual monitors into account. However we do want to
allow system chrome to be attached to a monitor edge rather than
a screen edge under some circumstances. As not all clients can
be assumed to deal gracefully with the resulting workarea, use
those "struts" only internally for constraining windows, but
ignore them when exporting _NET_WORKAREA.
https://bugzilla.gnome.org/show_bug.cgi?id=730527
Since commit 8b2b65246a, we assume that the compositor always
exists. Alas, the assumption is wrong - the compositor is currently
initialized after the screen, but meta_screen_new() itself may
call a compositor function if initialization involves a workspace
switch (which will happen when meta_workspace_activate() is called
more than once and for different workspaces - or in other words,
when _NET_CURRENT_DESKTOP is set and not 0).
So carefully split out the offending bits and only call them after
the compositor has been initialized.
https://bugzilla.gnome.org/show_bug.cgi?id=731332
If we have a tree of a window, a non-attached dialog, and then an
attached dialog, we want to move the second window, not the attached
dialog or the topmost window. In other words, we want to move the
first non-attached, or "freefloating", window.
This happens in Firefox, whose Preferences dialog is freefloating,
but whose sub-dialogs are modal.
There is no way this value will ever be read: we are setting
cursor_surface to NULL, the value is only ever set together with
cursor_surface, and it is only read when cursor_surface is non-NULL.
The smallest possible spread corresponds to an unblurred shadow, which
neither grows nor shrinks - thus the spread should be zero not negative
as returned by our current calculation.
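A minimal sketch of the intended behavior (the function name and the
blur-radius approximation are illustrative, not the actual shadow
factory code):

    /* Spread: how far the blurred shadow extends past the original
     * pixels.  An unblurred shadow (radius 0) must yield exactly 0. */
    static int
    get_shadow_spread (int blur_radius)
    {
      if (blur_radius == 0)
        return 0;   /* previously this case could come out negative */

      return 3 * (blur_radius / 2);   /* assumed box-filter approximation */
    }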
https://bugzilla.gnome.org/show_bug.cgi?id=731353
Stupid apps fullscreen themselves by resizing the client window to
monitor size. A monitor-sized frame rect on the other hand is perfectly
normal on monitors without struts - stop force-fullscreening those
and catch the real baddies instead.
https://bugzilla.gnome.org/show_bug.cgi?id=730681
Avoid populating *_VERSION constants through cflags in the pkg-config
file, which could be overridden by the project using it. Properly prefix the
defines with META_ to make gi-scanner happy.
Clutter touch events are translated into events being sent down
the interface resource, with the exception of FRAME/CANCEL events,
which are handled directly via an evdev event filter.
The seat now invariably announces the WL_SEAT_CAPABILITY_TOUCH
capability; this should eventually be updated as devices come and
go.
The creation of MetaWaylandTouchSurface structs is dynamic, attached
to the lifetime of first/last touch on the client surface, and only
if the surface requests the wl_touch interface. MetaWaylandTouchInfo
structs are created to track individual touches, and are locked to
a single MetaWaylandTouchSurface (the implicit grab surface) determined
on CLUTTER_TOUCH_BEGIN.
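An illustrative sketch of the bookkeeping involved (the field names
are assumptions, not the actual struct layouts):

    typedef struct
    {
      MetaWaylandSurface *surface;       /* surface receiving the touches */
      struct wl_list      resource_list; /* the client's wl_touch resources */
      guint               touch_count;   /* destroyed when this drops to 0 */
    } MetaWaylandTouchSurface;

    typedef struct
    {
      MetaWaylandTouchSurface *touch_surface; /* locked on CLUTTER_TOUCH_BEGIN */
      int32_t                  id;            /* id used in wl_touch events */
      wl_fixed_t               x, y;          /* latest surface-local position */
    } MetaWaylandTouchInfo;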
https://bugzilla.gnome.org/show_bug.cgi?id=724442
There's a race here. If an OR window hides itself, moves, and then shows
itself, we will send a ConfigureNotify for the old size of the window
and might receive it after the client moves itself, causing us to show
the window at the wrong location.
Simply not sending the ConfigureNotify is the easiest thing to do.
Before we unmanage, we send a ConfigureNotify to clients to let them
know if their frame is destroyed. We do this for OR windows too, even if
we really probably shouldn't.
This is based on the client_rect. Since we listen to ConfigureNotify
on OR windows, we'll receive the event. If we don't ever update the
client_rect when moving or resizing OR windows, then we'll send
ourselves a ConfigureNotify for a 0x0 size and then think that the
client chose a new size for itself. Since our get_paint_volume is based
on that rectangle, but the TFP code inside Cogl uses XGetGeometry
itself, we get weird flickering artifacts.
prelit_control is used for both prelight and pressed states, so the early
return in update_prelit_control() misses the case where prelit_control
already matches the control we are updating, but its state is PRESSED
rather than PRELIGHT. Fix the condition to not have pressed controls
linger around erroneously.
https://bugzilla.gnome.org/show_bug.cgi?id=731058
Since window menus have been moved to the compositor, the pressed
state of the corresponding window buttons is messed up, as it is
reset immediately when getting a LeaveNotify event due to the
compositor taking a grab. Fix this by ignoring that particular
event.
https://bugzilla.gnome.org/show_bug.cgi?id=731058
When opening the window menu without an associated control - e.g.
by right-clicking the titlebar or by keyboard - using coordinates
for the menu position is appropriate. However when the menu is
associated with a window button, the expected behavior in the
shell can be implemented much more easily with the full button geometry:
the menu will point to the center of the button's bottom edge
rather than align to the left/right side of the titlebar as it
does now, and the clickable area where a release event does not
dismiss the menu will match the actual clickable area in mutter.
So add an additional show_window_menu_for_rect() function and
use it when opening the menu from a button.
https://bugzilla.gnome.org/show_bug.cgi?id=731058
This can happen since we select for events on the root window, and
clients themselves might not select for input, meaning the X server
will bubble the event up to us. Just do nothing and ignore the event
in this case.
This should hopefully fix some of the
Window manager warning: Log level 8: meta_window_raise: assertion '!window->override_redirect' failed
Window manager warning: Log level 8: meta_window_focus: assertion '!window->override_redirect' failed
spam that people have been seeing.
Smooth scroll event vectors from clutter have the same dimensions as the
ones from XI2, i.e. where 1.0 is one discrete scroll step. To scale
these to the coordinate space used by wl_pointer.axis
vertical/horizontal scroll events, multiply the vector by 10.
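A sketch of the conversion (resource and dy are assumed to come from
the surrounding pointer code):

    /* Clutter: 1.0 == one discrete scroll step.
     * wl_pointer.axis: roughly 10 units per step, so scale by 10. */
    #define WL_SCROLL_UNITS_PER_STEP 10.0

    wl_pointer_send_axis (resource,
                          clutter_event_get_time (event),
                          WL_POINTER_AXIS_VERTICAL_SCROLL,
                          wl_fixed_from_double (dy * WL_SCROLL_UNITS_PER_STEP));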
https://bugzilla.gnome.org/show_bug.cgi?id=729601
Since we often call meta_window_move_resize_now immediately after
mapping a window, we need to make sure that the placed coordinates
are saved in the unconstrained_rect. Ideally, placement positions
wouldn't be part of the constraints system, but instead are just
done inside meta_window_move_resize_internal as part of a special
path.
We're still working out the kinks of one large-scale refactor, so
it's best not to do another one while the first is going on. This
would be a great future cleanup, though: untangling constraints
and placement, alongside the force_placement state machine and
friends.
For Wayland, we want to have everything possible in terms of the frame
rect, or "window geometry" as the Wayland protocol calls it, in order
to properly eliminate some flashing when changing states to fullscreen
or similar.
For this, we need to heavily refactor how the code is structured:
make meta_window_move_resize_internal operate in the frame rect
coordinate space, and transform all entry points to
meta_window_move_resize_internal accordingly.
This is a big commit that's hard to tear apart. I tried to split it
as best I can, but there's still just a large amount of changes that
need to happen at once.
Expect some regressions from this. Sorry for any temporary regression
that this might cause.
We have two different coordinate spaces here. One is the rectangle
returned by meta_window_get_frame_rect, which is called the "frame
rect" or "the window geometry", which includes visible frame borders
but not invisible frame borders. The other is "frame->rect" which
corresponds to the frame's server geometry. That is, it includes
both visible and invisible frame borders.
These two were of course the same until we introduced invisible
frame borders, and an executive decision was made to make
meta_window_get_frame_rect return the rectangle bounding the
visible portions of the frame.
As time went on, the "frame rect" turned out to be the more useful one
to base decisions on, since the user usually doesn't think of the
invisible frame borders as part of the window.
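As an illustrative sketch of the relationship between the two spaces
(not the actual mutter helpers), assuming MetaFrameBorders-style
invisible border values:

    /* frame->rect (server geometry, visible + invisible borders)
     *   -> "frame rect" / window geometry (visible bounds only) */
    static MetaRectangle
    frame_rect_from_server_rect (MetaRectangle    server_rect,
                                 MetaFrameBorders borders)
    {
      MetaRectangle frame_rect = server_rect;

      frame_rect.x      += borders.invisible.left;
      frame_rect.y      += borders.invisible.top;
      frame_rect.width  -= borders.invisible.left + borders.invisible.right;
      frame_rect.height -= borders.invisible.top  + borders.invisible.bottom;

      return frame_rect;
    }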
We already calculate what amounts to the "frame rect" in the theme
code, so just change META_CORE_GET_FRAME_RECT to consume that
directly.
Since we're going to be calling meta_window_get_frame_rect in here
soon, I'd rather it be one method call, rather than two. We can't
put it at the toplevel, since that might cause infinite recursion
(e.g. meta_core_get calls meta_window_get_frame_rect calls
meta_ui_get_frame_borders calls meta_core_get, ...)
Now that meta_window_move_resize and friends act in frame rect
coordinates, we need to convert the initial grab_anchor_window_pos
storage to be in frame rect coordinates as well.
This makes Alt+F7 / Alt+F8 (move and resize, respectively) work
under X11 and in nested mode.
For the native backend implementation, we'll need a special Clutter
function, so don't implement that for now.
The last commit added support for the "appmenu" button in decorations,
but didn't actually implement it. Add a new MetaWindowMenuType parameter
to the show_window_menu () functions and use it to ask the compositor
to display the app menu when the new button is activated.
https://bugzilla.gnome.org/show_bug.cgi?id=730752
We want to synchronize the button layouts of our server side
decorations and GTK+'s client side ones. However each currently
may contain buttons not supported by the other, which makes this
unnecessarily tricky.
So add support for a new "appmenu" button in the layout, to display
the fallback app menu in the decorations.
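With this, a button layout like the following becomes valid (using the
usual button-layout syntax; shown as an example, not a new default):

    appmenu:minimize,maximize,close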
https://bugzilla.gnome.org/show_bug.cgi?id=730752
meta_window_get_position() returns the client rect position, which
we then pass to meta_window_move_frame. Just use the existing frame
rect coordinates.
The requested_rect is a strange name for it, because it's not always
the rect that the user or client actually requested: in the case of
a simple move or a simple resize, we calculate some of the fields
ourselves.
To the MetaWindow subclass implementations, it just means "the rect
before we constrained it", so just use the name unconstrained_rect.
This also makes it match the name of the MetaWindow field.
It looks weird to have Alt+Space pop up under the cursor instead
of the top-left corner of the window, and the Wayland request will
pass through the coordinates as well.
Add it to the compositor interface, and extend the
_GTK_SHOW_WINDOW_MENU ClientMessage to support it as well.
On X, basing the check whether the pointer is on the window on
Clutter events does not work, as the relevant events are handled
by GDK instead.
So add an X-specific window_has_pointer() implementation to also
fix mouse mode when running as X compositor.
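A sketch of what such an X-specific check could look like (illustrative
only; the real implementation may use XInput2 and mutter's own window
fields):

    static gboolean
    window_has_pointer_x11 (Display *xdisplay, Window frame_xwindow)
    {
      Window       root, child;
      int          root_x, root_y, win_x, win_y;
      unsigned int mask;

      XQueryPointer (xdisplay, DefaultRootWindow (xdisplay),
                     &root, &child, &root_x, &root_y, &win_x, &win_y, &mask);

      /* The pointer is on the window if the root's child under the
       * pointer is the window's frame window. */
      return child == frame_xwindow;
    }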
https://bugzilla.gnome.org/show_bug.cgi?id=730541
Using clutter_actor_has_pointer() to test whether the pointer is
on the window makes for clean and nice-looking code, but does not
work in practice - ClutterActor:has-pointer is not recursive, so
we miss when the pointer is on the associated surface actor rather
than the actor itself.
Instead, check whether the window actor contains the core pointer's
pointer actor, which actually works.
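Roughly, the check becomes (a sketch; window_actor is assumed to be
the MetaWindowActor in question):

    ClutterDeviceManager *manager = clutter_device_manager_get_default ();
    ClutterInputDevice *pointer =
      clutter_device_manager_get_core_device (manager, CLUTTER_POINTER_DEVICE);
    ClutterActor *pointer_actor =
      clutter_input_device_get_pointer_actor (pointer);

    /* TRUE if the pointer is anywhere in the window actor's subtree,
     * including the surface actor -- unlike ClutterActor:has-pointer. */
    gboolean has_pointer = pointer_actor != NULL &&
      clutter_actor_contains (CLUTTER_ACTOR (window_actor), pointer_actor);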
https://bugzilla.gnome.org/show_bug.cgi?id=730541
We already do it in the theme code, but not the actual WM code. Since
we include the left/right borders, it only seems fair to include the
bottom border.
This effectively makes it so that shading a window means that the
client window "slot" has 0 height.
Otherwise, the X server might read the backend's connection before
GTK+'s, meaning that it sees the XIGrabKeycode requests before the
CreateWindow.
This fixes keybindings on windows not working immediately.
Thanks to Rui Matos <tiagomatos@gmail.com> and
Julien Cristau <jcristau@debian.org> for helping track down the issue.
Realistically, the user rect contains the unconstrained window
rectangle coordinates that we want to be displaying, in case
something in the constraints changes.
Rename it to the "unconstrained_rect", and change the code to always
save it, regardless of current state.
When metacity was originally being built, the purpose of the user
rect was a lot less clear. The code only saved it on user actions,
with various other calls to save_user_window_placement() and a force
mechanism sprinkled in to avoid windows being snapped back to odd
places when constraints changed.
This could lead to odd bugs. For instance, if the user uses some
extension which automatically tiles windows and didn't pass
user_action=TRUE, and then the struts changed, the window would be
placed back at the last place a user moved it to, rather than where
the window was tiled to.
The META_IS_USER_ACTION flag is still used in the constraints code
to determine whether we should allow shoving windows offscreen, so
we can't remove it completely, but we should think about splitting
out the constrainment policies it commands for a bit more
fine-grained control.
https://bugzilla.gnome.org/show_bug.cgi?id=726714
This uses David Herrmann's new logind sessions interface to retrieve
fds for input devices, rather than using a custom setuid helper to do
the management. This vastly simplifies the interface.
This requires systemd v210, at least.
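The core of the logind interface is a single D-Bus call on the session
object; roughly (a sketch using GDBus, with session_proxy, major and
minor assumed and error handling omitted):

    GVariant *ret;
    GUnixFDList *fd_list = NULL;
    gint32 fd_index;
    gboolean paused;
    int fd;

    /* org.freedesktop.login1.Session.TakeDevice(u major, u minor) -> (h fd, b paused) */
    ret = g_dbus_proxy_call_with_unix_fd_list_sync (session_proxy, "TakeDevice",
                                                    g_variant_new ("(uu)", major, minor),
                                                    G_DBUS_CALL_FLAGS_NONE, -1,
                                                    NULL, &fd_list, NULL, NULL);
    g_variant_get (ret, "(hb)", &fd_index, &paused);
    fd = g_unix_fd_list_get (fd_list, fd_index, NULL);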
https://bugzilla.gnome.org/show_bug.cgi?id=724604
When switching from the stage cursor to the native cursor, we
forgot to repaint the stage to get rid of the old cursor. Fix
this by having the abstract cursor renderer class track whether
we're using the backend, rather than doing chain-up shenanigans.
The default focus interface uses the button count to determine
whether we should update the pointer focused surface. When releasing
an implicit grab, we need to send the button release events to the
implicitly grabbed surface, so we can't reset the focus surface too
soon. We already explicitly set the focus at the end of implicit
grabs, so counting the buttons after is perfectly fine.
Now that we don't have to regrab to change the cursor, since it's
simply the cursor on the root window, all we have to do is update
the cursor on the screen.
We expect that meta_screen_set_cursor while grabbed will properly
set the cursor on the root window. Make sure this works by simply
always using the root cursor when we have an active grab.
If we send out a configure notify for a window and then have some
other kind of state change, we need to make sure that we continue
to send out that new size, rather than the last size the client
sent us a buffer for.
In particular, a client might give us a 250x250 buffer and then
immediately request fullscreen. We send out a configure for the
monitor size and a state that tells it it's full-screen, but then
it takes focus, and since the client hasn't sent us a buffer for
the new size, we tell it it's fullscreen at 250x250.
Fix this.
It isn't necessary. As an X11 compositor, we'll only see the event
if we have the grab on the window, anyway.
This was causing issues moving windows as a Wayland compositor.
When we're a Wayland compositor, we get all the events, no exceptions,
so we don't need to grab.
This was masking focusing and raising issues under nested that showed
up under native.
If we apply a prediction immediately instead of queueing, we should
also free the operation immediately.
If we discard the prediction queue because we resync fully, we
need to free each operation too.
https://bugzilla.gnome.org/show_bug.cgi?id=729732
If we exit early as not handled, then the normal process_event
handler will fire, and trigger the overlay-key binding. As that's
a special binding that doesn't have a handler, trying to trigger
that handler will crash mutter.
Instead of returning early, just check for xdisplay every time
we try to drive the X grab state machine. We really need a better
solution for this on the Wayland side.
We can only apply a configuration if its outputs match the connected
ones, so discard the current configuration if the set of outputs changes
(for example for hotplug), otherwise we will crash trying to apply
the bogus previous configuration.
https://bugzilla.gnome.org/show_bug.cgi?id=725637
If we attach to a MetaWindow that disappears before the idle fires,
we'll notice that we can't associate the window properly again and
try to access data on the MetaWindow struct, which might crash.
Install a weak ref that ties the lifetime of the idle to the lifetime
of the MetaWindow.
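A minimal sketch of the pattern (the idle data struct and callback
names are hypothetical):

    typedef struct
    {
      MetaWindow *window;
      guint       idle_id;
    } AssociateWindowData;

    /* GWeakNotify: the window was finalized before the idle fired, so
     * remove the idle and forget the stale pointer. */
    static void
    associated_window_destroyed (gpointer data, GObject *where_the_object_was)
    {
      AssociateWindowData *assoc = data;

      assoc->window = NULL;
      g_source_remove (assoc->idle_id);
    }

    /* Registered when queuing the idle, dropped again once it has run:
     *   g_object_weak_ref (G_OBJECT (window), associated_window_destroyed, assoc);
     *   g_object_weak_unref (G_OBJECT (window), associated_window_destroyed, assoc);
     */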
It seems every GTK+ app does this for some reason at startup. This
is really unfortunate, since we'll have to create and destroy a new
MetaWindow really quickly.
Scale surfaces based on the output scale and the buffer scale set by them.
We pick the scale factor of the monitor they are mostly on.
We only handle native clients yet, i.e. not xwayland / legacy clients.
https://bugzilla.gnome.org/show_bug.cgi?id=728902
Advertise the scale factor on the output, and transform pointer and damage
events as well as input and opaque regions for clients that scale up by
themselves, i.e. use set_buffer_scale.
We do not scale any 'legacy' apps yet.
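For the pointer case the transform is just a division by the output
scale, roughly (a sketch; surface_x/surface_y and output_scale are
assumptions from the surrounding code):

    /* A surface at buffer scale N covers 1/N surface-local units per
     * output pixel, so divide before sending coordinates to the client. */
    wl_fixed_t sx = wl_fixed_from_double ((x - surface_x) / output_scale);
    wl_fixed_t sy = wl_fixed_from_double ((y - surface_y) / output_scale);

    wl_pointer_send_motion (resource, time_ms, sx, sy);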
https://bugzilla.gnome.org/show_bug.cgi?id=728902
Since commit 6e8d1d79d, move operations are always performed on
the (toplevel) parent of a transient window, which is just plain silly
if the dialog is not actually attached to its parent (either because
the dialog is not modal or the setting is disabled).
We need the old rect for two purposes: to find the x/y in a resize-only
action, and to pass into the constraints code for nefarious purposes.
The constraints code takes a frame rectangle, so we convert the original
client rect into a frame rect, but never convert it back since it's
unused for the rest of the function.
Instead of playing games with the variables, just have two,
separately-scoped variables. One is the client rect, the other is the
frame rect.
Ugh. So in the fullscreen case, we need to make sure to specify that
it's a MOVE_ACTION so that we move to the saved position, but we
can't do that in the resizing case since we need to use the resized
rectangle.
The flags are really hurting us here. Perhaps we should make it the
client's responsibility to specify a complete rectangle which we
could resize to; then the weird-o logic would be self-contained in
each front-end.
I'm not convinced this covers all cases, especially when we could have
a dangling weird state pointer, but it fixes our existing two testcases.
For gravity-based resizing, we need to make sure that the requested
rectangle has the proper x/y position given by the gravity resize,
rather than the bogus root_x_nw / root_y_nw parameter.
Make the test for this more explicit.
Restoring the position in our move_resize_internal implementation
is too late. We need to do it at ack-time, before we hand off the
new position to the constraints code.
With our surface_mapped strategy, implement_showing might not
change whether the window has been shown or not, and thus we
might end up clearing pending_compositor_effect before the window
is mapped.
Only clear pending_compositor_effect when the effect has actually
been used.
For the server-initiated resize case, like unmaximize or some forms
of tiling, we dropped the x/y of the server-assigned rectangle on the
floor, which meant the surface didn't move to where it needed to be in
that case. Now, save it internally, and combine it with the dx/dy passed
in during attaches to figure out where we actually need to be.
Make sure to only use it when we send out a configure notify. We
should use the passed in rectangle for other scenarios, like a
client-initiated resize.
This fixes incorrect surface placement after unmaximization.
Remove extend_by_frame and unextend_by_frame. Use a dumb hack in
window.c to translate into window geometry and back. We'll soon track
all rectangles in MetaWindow in terms of the window geometry.
Talking it over with Owen, we weren't sure why this was here.
At one point, we were creating a foreign stage window, so potentially
Clutter didn't select for its own events, but now we're using a standard
stage window, so this seems weird.
Why we did it on the COW, nobody knows. Maybe copy/paste bugginess?
Grab operations are now always taken on the backend connection, and
this breaks GTK+'s event handling.
Instead of taking a grab op, just do the handling ourselves. The
GTK+ connection will get an implicit grab, which means pointer /
keyboard events won't be sent to the rest of mutter, which is good.
When we click on a window with a passive grab, the event_x
and event_y will be relative to that window, instead of relative to
the stage, which means that picking will be wrong.
Forcibly using root_x / root_y breaks nested mode. Nested mode is
a testing mode that should be replaced by a DRI3-enabled Xephyr,
though. It's getting too hairy to support properly.
Now that we grab devices on the X11 connection, we can run into
cross-connection issues. Since GTK+ frames are on the UI connection,
they'll get the passive grab when we click on them. Forcibly ungrab
on GTK+'s connection before attempting to take a grab on the backend
connection ourselves.
It's been long enough. We can mandate support for these, at least
at build-time. The code doesn't actually compile without either
of these, so just consider that unsupported.
Now that we have a global MetaScreen, we can simply have a global
MetaCursorTracker as well. Keep the get_for_screen() API around for
compatibility, though.
The Alt+F7 and Alt+F8 keybinds for moving and resizing windows allow you
to move and resize the window off the screen, so allow the same for the
menu items as well, since they're marked with the same accelerator.
https://bugzilla.gnome.org/show_bug.cgi?id=728617
This doesn't particularly matter, since we fall through into a default
case that does nothing right below, but this matches the other paths
and it prevents us from falling into a trap if we add other event types
below.
If we start a grab op from a keybind / menu, we'll handle the
ButtonPress and drop the grab then, never giving the window a chance
to handle what it needs to do before the grab is dropped.
This means that if you use Alt+F7 to move a window around, move it
to a side-tiling or maximization area, and then left-click, it will
just hang there in the sky.
The entire point of it was to check whether the window was on the
right screen. Since we don't handle multiple screens anymore, we
don't need to check anything anymore.
Looking at the code paths where is_mouse / is_keyboard are used,
all of them should never be run when dealing with a COMPOSITOR
grab op, since they're filtered out above or the method is just
never run during that time.
It's confusing that COMPOSITOR is in here, and requires us to
be funny with other places in code, so just take it out.
pointer->current needs to always be the surface under the pointer,
even when we have a grab. We do need to make sure we keep the focus
surface the same even when we have a grab, though, so add logic
for that.
In order to correctly fix the issue to make sure we only set the
focused surface to NULL during a grab, but not the current surface,
we need to merge update_current_surface back into repick_for_event
so we have more control over the behavior here.
... not when we do an update.
We only repick when we handle events, not when we update. Perhaps
this is a mistake.
Since update runs before handle_event, this means that when we
drop a grab, update will notice the NULL surface, since we haven't
repicked after the event, and then we'll repick the correct surface.
The end result is that you see a root cursor after a grab ends,
rather than the correct window cursor.
This doesn't fix it, since the current surface becomes NULL when
we start the grab. But it does make the code here more correct when
we fix that bug.
I was talking with other people and they became confused at the
term "double-buffered", since we were also talking about
double-buffering in general, e.g. swapping between two buffers.
Instead, we'll adopt the "pending state" nomenclature that we
already use for the field / variable names.
If we have a focused surface, we need to eat up key events, not
just if we have a non-empty focus resource list. The latter would
happen if we have a focused client but it never called get_keyboard.
The latest Xorg / Xwayland has support for -displayfd being used
in conjunction with an explicit display number. Use that to know
when the X server is ready, rather than UNIX signals, because
they're UNIX signals.
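The -displayfd handshake is roughly (a sketch; error handling and the
rest of the argv omitted):

    int displayfd[2];
    char fd_arg[16], buf[16];
    ssize_t n;

    pipe (displayfd);
    snprintf (fd_arg, sizeof fd_arg, "%d", displayfd[1]);

    /* argv contains e.g.: "Xwayland", ":2", "-displayfd", fd_arg, ... */

    /* Once the server is ready to accept connections, it writes the
     * display number plus a newline to that fd -- no SIGUSR1 games. */
    n = read (displayfd[0], buf, sizeof buf - 1);
    if (n > 0)
      buf[n] = '\0';   /* e.g. "2\n" */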
We track changes to windows fullscreen state and stacking order
to determine a monitor's in-fullscreen state, but missed the
obvious case of moving a fullscreen window between monitors.
https://bugzilla.gnome.org/show_bug.cgi?id=728395
Commit 585fdd781c not only removed the tabpopup, but set invalid
handlers (a.k.a. NULL) for those shortcuts; add back basic handling
of those shortcuts by switching instantly without any popups.
https://bugzilla.gnome.org/show_bug.cgi?id=728423
If we're sending out a configure event, we can't immediately move the
window; we need to instead wait to apply the new position when the
client sends a new buffer.
dx/dy should be against the regular window's rect, and need to
be ignored when we're resizing. Instead, we use gravity to anchor
the window's new rectangle when resizing.
Sophisticated clients, like those using ClutterGtk, will have more
than one focused resource per client, as both Clutter and GDK will
ask for a wl_pointer / wl_keyboard. Support this naturally using
the same "hack" as Weston: multiple resource lists, where we move
elements from one to the other.
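The bookkeeping boils down to moving wl_resources between two per-device
lists on focus changes, e.g. (a sketch; the list field names follow
Weston's convention and are assumptions here):

    struct wl_resource *resource, *next;

    /* Focus-in: move every resource owned by the newly focused client
     * from the inactive list to the focus list, so all of its
     * wl_pointer objects receive events. */
    wl_resource_for_each_safe (resource, next, &pointer->resource_list)
      {
        if (wl_resource_get_client (resource) == focus_client)
          {
            wl_list_remove (wl_resource_get_link (resource));
            wl_list_insert (&pointer->focus_resource_list,
                            wl_resource_get_link (resource));
          }
      }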
In order to support multiple pointers for the same client, we're
going to need to kill it.
This will cause crashes for now, but will be fixed by the next
commit.
Doing this synchronously means that zenity tries to initialize GTK+.
Under Wayland, that will try to connect back to mutter as a display
server. We're waiting on zenity to exit, and zenity is waiting for
a connection response. Deadlock.
Simply assume that zenity will support all the options we feed it,
since it should be the correct version. Perhaps we should replace
our use of zenity with a simple helper binary that we know will
have all the right options if this still isn't good enough.
Our focus stealing prevention is still mostly inherited from metacity;
in particular, a (non-transient) window that is not on the current
workspace will not be given focus. This behavior made sense in the
GNOME 2 days, where workspaces were separated much more strictly.
However this is no longer the case in GNOME 3 - activating a launcher
will switch workspaces if necessary, and so will the app switcher.
There is no good reason to not do the same for other user actions
like clicking a URL or activating a search result, so allow activation
of windows on non-active workspaces if a proper timestamp is supplied,
assuming that this is a strong enough indication that we are dealing
with a legitimate user action.
https://bugzilla.gnome.org/show_bug.cgi?id=728018
Effectively we have been accepting CurrentTime timestamps for years,
but still complained about "stupid pagers" when encountering them;
just accept that we will never limit treating 0 timestamps as current
time to pagers.
https://bugzilla.gnome.org/show_bug.cgi?id=728018
The code that restacks X11 windows at the end first tracks any
old windows we know about, and then handles any windows created.
It starts where it ended, and then walks forwards and then
back looking for the first X11 window it doesn't know about.
However, when there aren't any X11 windows, it flies off the end
of the array and starts looking through random memory.
When it finds the X11 window, it then goes through and then tries
to restack the remaining windows according to how we've sorted
them.
Unfortunately, META_WINDOW_CLIENT_TYPE_X11 is 0, which is quite
common in random memory we have lying around, so we enter that
path and then just crash.
Fix the buffer overrun by adding the proper bounds check to the
search.
You can easily reproduce this by opening a menu while bloatpad
is full-screen. Why it only crashes when full-screen and not
when a standard window, I have no idea.
default_grab_focus tries to add implicit grab semantics where
focus won't take effect if there's a pointer button down. This
is not what we want for popup grabs at all, as it's perfectly
valid to want to drag on a menu while there's a button down.
The idea here is that while we take a WM-side grab, like a compositor
grab or a resizing grab, we need to remove the focus from the Wayland
client.
We make a special exception for CLICKING operations, because these
are really an internal state machine while you're pressing on a button
inside a frame, and in this case, we need to not kill the focus.
Really, it is a special case. When the subsurface is synchronous,
the meaning of commit changes from being applied immediately to being
queued up for replay later. Handle this explicit special case
with an explicit special case in the code.
This means that in all other paths, we can unconditionally
apply the actor immediately.
Even when it doesn't have a role.
This fixes cursors not quite working right, as they're a "detached"
surface without a role since nobody called set_cursor on them yet.
Instead of using commit_attached_buffer / actor_surface_commit.
We want to kill the return values of these methods because we
really should always be calling them, even if the surface doesn't
have a role.
This is also something that we did upstream. Since we want to
introduce an explicit "xdg_transient" window type for tooltips
and popovers, and since "transient_for" is a confusing dumb
80s term lifted from the ICCCM spec, just rename it.
This was changed upstream a little while ago for C++ compatibility.
It's also the more common term for the operation: you close a window,
you don't delete one. In fact, a delete event might seem like it
would be about resource management instead.
Except while reading _NET_WM_WINDOW_OPACITY, opacity is between 0 and 255. With
guint8, we'll get compiler warnings if arbitrary int values are passed.
https://bugzilla.gnome.org/show_bug.cgi?id=727874
This has one regression: the basic touch support added by
Carlos Garnacho in 991c85f is now partially reverted, since
we ported to Clutter events for this. We'll need to either
port his changes to Clutter, or restructure event handling
in mutter directly.
We can't really support the GTK+ automatic scaling, as too much
code relies on the GdkWindow and XWindow sizes, etc. to match.
In order to keep working we just disable the scaling, meaning
we will pick up the larger fonts, but nothing else. It's not
ideal, but it works for now.
https://bugzilla.gnome.org/show_bug.cgi?id=706388
When _GTK_FRAME_EXTENTS changes, we need to redo constraints on
the window - this matters in particular if the toolkit removes
invisible borders when a window is maximized, since otherwise
the maximized window will be positioned as if it still has
invisible borders.
https://bugzilla.gnome.org/show_bug.cgi?id=714707
A gulong is not enough to get 64 bits on all architectures, so we must
cast it, or we can corrupt the stack.
This was downstream bug bugzilla.redhat.com/show_bug.cgi?id=1002055
https://bugzilla.gnome.org/show_bug.cgi?id=707267
Apparently some connector technologies don't distinguish between
on and off, and there might be valid use cases for running without
any connected monitor.
In that case, just avoid any configuration at all.
https://bugzilla.gnome.org/show_bug.cgi?id=709009
Each level in the tower is initialized by binding the texture for that
level to an offscreen framebuffer and rendering the previous level as a
textured rectangle. The problem was that we were blending the previous
level with undefined data so argb32 windows with transparencies would
result in artefacts. This makes sure to disable blending when drawing
the textured rectangle.
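Disabling blending with Cogl's blend strings looks roughly like this
(a sketch; pipeline and framebuffer setup omitted):

    /* Write the source color straight into the offscreen framebuffer
     * instead of blending it with undefined destination contents. */
    cogl_pipeline_set_blend (pipeline, "RGBA = ADD (SRC_COLOR, 0)", NULL);

    cogl_framebuffer_draw_textured_rectangle (level_fb, pipeline,
                                              0, 0, level_width, level_height,
                                              0.0, 0.0, 1.0, 1.0);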
After reading the atom, scale the value from 0xffffffff to 0xff. Not doing
so causes Clutter to truncate the opacity value and only read the last two
hex digits; hence, 0x7fffffff (50%) becomes 0xff (100%).
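The scaling itself is a one-liner, roughly:

    /* _NET_WM_WINDOW_OPACITY is 32-bit; Clutter wants 0-255. */
    guint8 opacity = (guint8) ((guint64) value * 0xff / 0xffffffff);
    /* e.g. 0x7fffffff -> 0x7f (50%), instead of truncating to 0xff */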
https://bugzilla.gnome.org/show_bug.cgi?id=727874
A careful analysis of mutter's codebase shows that nothing actually
passes anything but 0 to this. gnome-shell has one instance, but it's
most likely a mistake.
Remove the grab_mask field and the one place in keybindings.c that uses it.
The parameter to begin_grab_op is left in for API compatibility reasons.
Since we get the ClientMessage after the surface is created, there's
no good way to synchronize the two streams. In this case, what we
need to do is delay the surface commit until after we get the
ClientMessage. Ideally, we'd be using a better surface system overall
where committing the surface didn't depend on what type it is, but
oh well, this is a good short-term hack for now.
This is effectively the same, but since we lose the xserver.xml protocol
in the new XWayland DDX, we have to use SIGUSR1 anyway, so might as well
switch over now.
The make_toplevel / window_unmanaging interface has never made
a lot of sense to me. Replace it with set_window, which does
effectively the same thing.
It's still not perfect in the case of XWayland, but I don't think
XWayland will ever make me happy.
It should be META_TYPE_WAYLAND_STAGE, not META_WAYLAND_TYPE_STAGE.
Well, actually, it *should* be META_TYPE_NATIVE_STAGE, because it's
not related to Wayland at all. But that comes later :)
Right now this just has all of the files in one directory. We'll
be introducing more structure to this in the future, and build
a proper backend system.
This will allow us to have a MetaCursorReference 'subclass' that's
lazily loaded. We currently always load all the images.
The long-term plan is to have a subclass for each "backend" and only
have CoglTexture as a common denominator. For the nested X11 backend,
we use XDefineCursor on our stage window. For the Wayland backend, we
would use set_cursor on our stage surface. For the native backend, we
would use the GBM code that's there right now.
The CoglTexture is there to be a "shared fallback" between all devices,
and also for the get_sprite API.
The odd man out is the X11 compositor case. For that, we need to move
the responsibility of setting the final cursor image out of
MetaCursorTracker, and simply have it be about tracking the used sprite
image and pointer position.
We want to make this private, and have MetaCursorReference be
backend-defined, with the texture possibly loaded on demand.
We can't make the definition of MetaCursorReference truly private yet
because of the XFixes cursor. A victim of MetaCursorTracker trying to
do too many things at once...
I want the MetaCursorTracker to mostly be about retrieving cursor
information. Start moving the code that loads cursor images to a
new file, MetaCursor. Eventually, MetaCursorTracker's APIs will
all take MetaCursorReferences, and we can have a clean backend
split here.
For whatever reason, this hash table was in the generic
implementation section instead of the XSync implementation,
even though it's only used by the XSync implementation.
Use it as a first pass of things to move over.
The reason we don't simply use gdk_window_add_filter directly is
because of some twisted idea that any GDK symbol being used from
core/ is a layer violation. While we certainly want to keep any
serious GDK code out of ui/, event handling is quite important
to have in core/, so simply use a GDK event filter directly.
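For reference, the shape of such a filter (a sketch; the mutter-side
handle_xevent() helper is hypothetical and stands in for whatever
core/ provides):

    static GdkFilterReturn
    xevent_filter (GdkXEvent *xevent, GdkEvent *event, gpointer data)
    {
      MetaDisplay *display = data;

      /* Hand the raw XEvent to mutter's core event handling. */
      if (handle_xevent (display, (XEvent *) xevent))
        return GDK_FILTER_REMOVE;

      return GDK_FILTER_CONTINUE;
    }

    /* NULL window: filter events for all windows on the display. */
    gdk_window_add_filter (NULL, xevent_filter, display);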
The code here before was completely wrong. Not only did it mix up
coordinate spaces of "client rect" vs. "frame rect", but it used
meta_frame_get_frame_bounds, which is specifically for the *visible*
bounds of a window!
In the case that we don't have a bounding or input shape region at
all on the client window, the input shape that we should apply is
the surface's natural shape. So, set the region to NULL to get the
natural rect picking semantics.
Really, visible_to_compositor means that the window is shown, e.g.
not minimized. We need to be using a boolean tracking whether we've
called meta_compositor_add_window / meta_compositor_remove_window.
This fixes a jump during window placement when a window appears.
visible_to_compositor should always be in sync with show_window /
hide_window calls, even when unmanaging.
This fixes a crash where we call sync_window_state when the window
is unmanaging, since we use visible_to_compositor to decide whether
to call into the compositor.
This is actually wrong; we should be using the knowledge about
whether we have called add_window / remove_window. We'll introduce
this with a new boolean next time.
We guarantee ourselves that a valid pixmap will appear any time
that the window is painted. The window actor will be scheduled
for a repaint if it's added / removed from the scene graph, like
during construction, if the size changes, or if we receive damage,
which are the existing use cases where this function is called.
So, I can't see any reason that we queue a redraw in here.
With the split into surface actors, we don't have an easy place
we can use to queue a redraw, and since it's unnecessary, we can
just drop it on the floor.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
Compositors haven't been able to manage more than one screen for
quite a while. Merge MetaCompScreen into MetaCompositor, and update
the API to match.
We still keep MetaScreen in the public compositor API for compatibility
purposes.
We previously separated out MetaDisplay and MetaScreen. mutter
would only manage one screen, but we still kept a list of screens
for simplicity.
With Wayland support, we no longer care about the ability to
manage more than one screen at a time. Remove this by killing
the list of screens, in favor of having just one MetaScreen
in MetaDisplay.
We also kill off active_screen at the same time, since it's
not necessary anymore.
A future cleanup should merge MetaDisplay and MetaScreen. To avoid
breaking API, we should probably keep MetaScreen around as a dummy
type.
I'm a bit tired of hearing about this when I launch mutter-wayland
nested. Ideally, this would be part of display server integration,
not GNOME integration, so we could simply not make the call when
nested, but oh well.
cogl_texture_get_components can be used on both X11 and Wayland
backends. Technically, the detection is different: we actually
check the actual RENDER format in the old code, while Cogl simply
assumes that any pixmap with a depth >= 32 is ARGB32. Since Cogl
already seems to be working with its internal checks, it makes
more sense to use Cogl's check rather than keeping our own.
is_argb32 can be called at any time, including times when we don't
have a texture. In that case, just assume we're ARGB32. The value
really shouldn't be important though.
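In practice the check amounts to (a sketch; the surrounding actor type
and texture field are assumptions):

    static gboolean
    surface_is_argb32 (CoglTexture *texture)
    {
      /* No texture yet: assume ARGB32; the value shouldn't matter then. */
      if (texture == NULL)
        return TRUE;

      return cogl_texture_get_components (texture) == COGL_TEXTURE_COMPONENTS_RGBA;
    }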
When I refactored this out into a vfunc, I forgot to change the
code that interprets the result flags to actually respect the
new FRAME_SHAPE_CHANGED result flag.
Since we weren't ever clearing the frame bounds, this meant that
the "shadow clip" wasn't ever updated as a result. Since right now
all Wayland surfaces are considered ARGB32, we always clip shadows
under frames, and thus shadows had this weird "punch-out" from the
first frame shape.
While the ICCCM mandates the use of this, it's not necessary under
a composited environment from my understanding, and it's a flat
out no-op under XWayland.
Looking at the other rootless servers like Xwin/Xquartz, it seems
that they contain code for colormap emulation, but they're actually
never used -- a bug prevents the code from ever being called. Given
that it's been this way since 2003, I'm going to hazard a guess that
not many apps are using colormaps. Kill them off.
display.c is getting a bit crowded. Move most of the handling
out to another file, events.c.
The long-term goal is to have generic event handling here, with
backend-specific handling for the types of windows and such.
Previously, a sequence like this would crash a client:
=> surface.attach(buffer)
=> buffer.destroy()
The correct behavior is to wait until we release the buffer before
destroying it.
=> surface.attach(buffer)
=> surface.attach(buffer2)
<= buffer.release()
=> buffer.destroy()
The protocol upstream says that "the surface contents are undefined"
in a case like this. Personally, I think that this is broken behavior
and no client should ever do it, so I explicitly killed any client
that tried to do this.
But unfortunately, as we're all well aware, XWayland does this.
Rather than wait for XWayland to be fixed, let's just allow this.
Technically, since we always copy SHM buffers into GL textures, we
could release the buffer as soon as the Cogl texture is made.
Since we do this copy, the semantics we apply are that the texture is
"frozen" in time until another newer buffer is attached. For simple
clients that simply abort on exit and don't wait for the buffer event
anyhow, this has the added bonus that we'll get nice destroy animations.
If we have a CLICKING grab op, we still need to send events to xwayland
so that we get them back for gtk+ to process; thus we can't steer the
wayland input focus away from it.
https://bugzilla.gnome.org/show_bug.cgi?id=726123
meta_wayland_seat_repick() can be called in various cases while mutter
has a GRAB_OP ongoing which means we could be sending wrong pointer
enter/leave events.
https://bugzilla.gnome.org/show_bug.cgi?id=726123
This ensures that we send the proper leave and enter events to wayland
clients.
Particularly, this solves a bug in SSD xwayland windows where clicking
and dragging on the title bar to move the window only works on the odd
turn (unless the pointer moves away from the title bar between
tries). This happens because xwayland gets a button press but doesn't
see the release so when it gets the next button press it discards it
because its pointer button tracking logic says that the button is
already pressed. Sending the proper wayland pointer leave event fixes
it since wayland clients must forget about button state at that point.
https://bugzilla.gnome.org/show_bug.cgi?id=726123
At one point, it was supported to run mutter without a compositor,
but we don't allow that any longer. A lot of code already assumes
display->compositor exists and doesn't check for a NULL pointer,
so just kill the rest of the checks.