The code that prevents the creation of multiple MonitorInfos for clones
wasn't working because it used the wrong index when looking up the
already-created info, so fix it to use the correct one.
https://bugzilla.gnome.org/show_bug.cgi?id=710610
Use our new "surface_mapped" field to delay showing XWayland clients
until we have associated the window's XID with its Wayland surface ID.
This ensures that when we show the window to the compositor, it will
properly use the Wayland surface for rendering, rather than trying to
use COMPOSITE and crashing.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
The goal here is to make MetaWindow represent a toplevel, managed window,
regardless of whether it's X11 or Wayland, and to build up an abstraction
layer. Right now, most of the X11 code is in core/ and the Wayland code in
wayland/, but in the future I want to move a lot of the X11 code to a new
toplevel directory, x11/.
Moving the mouse over weston-terminal exposes several issues:
* it often updates late, or not at all
* the attachment of the pointer sprite is wrong
These happen because we call seat_update_sprite haphazardly all over
the place, often in the wrong spots. Add a set_pointer_surface helper
method that does the right thing for us in all cases, and call it on
transitions, as sketched below.
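A minimal sketch of that helper, assuming a hypothetical 'sprite' field
for the current cursor surface (only seat_update_sprite is named in the
existing code):

static void
set_pointer_surface (MetaWaylandSeat    *seat,
                     MetaWaylandSurface *surface)
{
  /* 'sprite' is a hypothetical field name. */
  if (seat->sprite == surface)
    return;                    /* not a transition; nothing to do */

  seat->sprite = surface;

  /* Update exactly once per transition instead of on every event. */
  seat_update_sprite (seat);
}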
Currently, set_cursor doesn't work properly. Often, the requests look
like this:
cursor_surface = wl_compositor.create_surface()
cursor_buffer = create_cursor_buffer()
cursor_surface.attach(cursor_buffer, 0, 0)
cursor_surface.commit()
wl_pointer.set_cursor(cursor_surface, 7, 14)
But since the surface doesn't "have a role" when the commit comes in,
we ignore it, and don't do anything with the pending buffer. When
set_cursor is called, however, we don't immediately update anything
since it doesn't have a buffer.
This effectively means that set_cursor has unpredictable side effects.
Weston's toy toolkit reuses the same surface for every buffer, so it
only fails the first time. In clients that use a new surface for every
cursor sprite, the cursor is effectively invisible.
To solve this, change the code to always set the buffer for a surface,
even if it doesn't have any real role.
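Roughly, the commit path then always latches the buffer (the field names
here are illustrative, not the actual MetaWaylandSurface layout):

static void
wl_surface_commit (struct wl_client   *client,
                   struct wl_resource *resource)
{
  MetaWaylandSurface *surface = wl_resource_get_user_data (resource);

  /* Latch the pending buffer unconditionally, so that a later
   * set_cursor finds it even if the surface has no role yet. */
  surface->buffer = surface->pending.buffer;
  surface->pending.buffer = NULL;

  /* Role-specific commit handling (toplevel, cursor, ...) follows. */
}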
Refactor our commit() implementation out of wl_surface_commit(),
and pass the double-buffered state around to all our implementation
methods. This allows us to drop a few lines destroying and
reinitializing the list of frame callbacks. Mostly, though, this is
a cleanup in preparation for the next commit.
X11 window frames use special UI grab ops, like META_GRAB_OP_CLICKING_MAXIMIZE,
in order to work properly. As the frames in this case are X11 clients, we need
to pass X events through to them. So, similar to how handle_xevent works,
use two variables, bypass_clutter and bypass_wayland, and set them when we
handle specific events, as sketched below.
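The shape of the check, with an illustrative subset of the UI grab ops:

gboolean bypass_clutter = FALSE;
gboolean bypass_wayland = FALSE;

switch (display->grab_op)
  {
  case META_GRAB_OP_CLICKING_MAXIMIZE:
  case META_GRAB_OP_CLICKING_MINIMIZE:
    /* The frame is an X11 client, so the event must reach X;
     * Wayland clients must not see it. */
    bypass_wayland = TRUE;
    break;
  default:
    break;
  }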
To prevent corruption, our focus listener needs to be removed even when
the surface is destroyed. So, bailing out when pointer->focus->resource
is NULL just isn't good enough.
Ever since the change to create the output window synchronously at startup,
there hasn't been any time when somebody could set a stage region before the
output window was ready, so this was effectively dead code.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
I know it's confusing with the triple negative, but unredirected is how
we track it elsewhere: we have an 'unredirected' flag, and 'should_unredirect'.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
Using a full InputOutput window causes us to make a full Wayland surface
for it, and to go through the X server. As the goal of the guard window is
to give us a window to stack minimized windows under, so we can prevent them
from getting input, it makes sense to use an InputOnly window here.
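For reference, creating an InputOnly window looks roughly like this in
plain Xlib (the real call site uses mutter's own display and sizes):

XSetWindowAttributes attrs;
Window guard_window;

attrs.event_mask = NoEventMask;
attrs.override_redirect = True;

/* InputOnly windows have depth 0 and no pixel contents, so the
 * compositor never has to create a surface for them. */
guard_window = XCreateWindow (xdisplay,
                              DefaultRootWindow (xdisplay),
                              0, 0, width, height,
                              0,              /* border width */
                              0,              /* depth: 0 for InputOnly */
                              InputOnly,
                              CopyFromParent, /* visual */
                              CWEventMask | CWOverrideRedirect,
                              &attrs);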
We currently ignore the unobscured region when we have mapped clones in
meta_window_actor_process_damage and meta_window_actor_damage_all but
use it unconditionally when computing the paint volume.
This is wrong. We should ignore it there as well, or we will end up with
empty clones if the cloned window is completely obscured
(like the tray icons in gnome-shell).
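A sketch of the consistent check, using Clutter's real
clutter_actor_has_mapped_clones() as the predicate (the clipping helper
is hypothetical):

/* Only clip to the unobscured region when no mapped clone exists;
 * otherwise a fully obscured window would paint nothing into its
 * clones. */
if (priv->unobscured_region &&
    !clutter_actor_has_mapped_clones (CLUTTER_ACTOR (self)))
  clip_to_region (volume, priv->unobscured_region); /* hypothetical */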
https://bugzilla.gnome.org/show_bug.cgi?id=721596
Implement support for synchronous subsurface commits. This means that
the client can, by calling wl_subsurface.set_sync, cause its surface
state not to be committed until its parent commits.
This means there can potentially be one more surface state
(regions, buffer) at the same time: the active surface state, the mutable
pending surface state, and the immutable surface state that was pending
at the last surface commit.
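Conceptually (the field names are hypothetical, not the actual struct
layout):

struct _MetaWaylandSurface
{
  /* ... the active state lives on the surface itself ... */

  MetaWaylandDoubleBufferedState pending; /* mutable until wl_surface.commit */

  struct
  {
    gboolean synchronous;                   /* set by wl_subsurface.set_sync */
    MetaWaylandDoubleBufferedState pending; /* frozen at wl_surface.commit,
                                               applied when the parent
                                               commits */
  } sub;
};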
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=705502
The placement set by either wl_subsurface.place_above or
wl_subsurface.place_below should be applied when the parent surface
invokes wl_surface.commit.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=705502
The position set by wl_subsurface.set_position should be applied when
the parent surface invokes wl_surface.commit.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=705502
Don't allow a client to stack a subsurface next to a subsurface with a
different parent, or next to a surface that is neither a sibling
subsurface nor the parent.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=705502
We need to do this for XWayland windows, since we only get the event
telling us they're an XWayland window after the compositor knows about
the window.
I know it's confusing with the triple negative, but unredirected is how
we track it elsewhere: we have an 'unredirected' flag, and 'should_unredirect'.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
Ever since the change to create the output window synchronously at startup,
there hasn't been any time when somebody could set a stage region before the
output window was ready, so this was effectively dead code.
We no longer unmap the toplevel windows during normal operation. The
toplevel state is tied to the window's lifetime.
Call meta_compositor_add_window / meta_compositor_remove_window instead...
Traditionally, WMs unmap windows when minimizing them, and map them
when restoring them or wanting to show them for other reasons, like
upon creation.
However, as metacity morphed into mutter, we optionally chose to keep
windows mapped for the lifetime of the window under the user option
"live-window-previews", which makes the code keep windows mapped so it
can show window previews for minimized windows in other places, like
Alt-Tab and Expose.
I removed this preference two years ago mechanically, by removing all
the if statements, but never went through and cleaned up the code so
that windows are simply mapped for the lifetime of the window -- the
"architecture" of the old code that maps and unmaps on show/hide was
still there.
Remove this now.
The one case we still need to be careful of is shaded windows, for which
we do still unmap the client window. Theoretically, we might want to
show previews of shaded windows in the overview and Alt-Tab, so we'll
remove the complex unmap tracking for this case later.
This reverts commit ebe6e3180e.
This is wrong, as mutter's controlling TTY may not be the same
as the active VT, and in fact won't be in the case of systemd
spawning us.
The "correct" API for this is to use David Herrmann's
"Session Positions" system to switch to another VT:
http://lists.freedesktop.org/archives/systemd-devel/2013-December/014956.html
For x defined below, x == -INT32_MAX, assuming that the arithmetic
expression actually uses the FPU:
float f = 1.0f;
int32_t x = INT32_MAX * f;
(INT32_MAX is not exactly representable as a float, so it rounds up to
2^31, which overflows when converted back to int32_t.)
This would result in the calculated clip width/height being -INT_MAX
if the damage width/height is INT_MAX. To solve this, use a double
variable instead.
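Sketched, the difference is:

float  f = 1.0f;
double d = 1.0;

int32_t bad = INT32_MAX * f; /* 2147483648.0f overflows int32_t */
int32_t ok  = INT32_MAX * d; /* INT32_MAX is exact in double:
                                ok == INT32_MAX */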
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=705502
cogl_texture_get_format() has been deprecated, so rather than using
it to figure out beforehand whether the buffer format is supported,
just rely on the import failing if it isn't.
https://bugzilla.gnome.org/show_bug.cgi?id=722347
We need to set the number of components on the CoglTextureRectangle to
prevent wasting too much GPU memory. As we need to do this before we call
cogl_texture_set_region, just remove the meta_texture_rectangle_new wrapper,
and make callers call cogl_texture_rectangle_new_with_size directly.
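The call sites then look something like this (both calls are real Cogl
API; the variables are illustrative):

CoglTextureRectangle *texture =
  cogl_texture_rectangle_new_with_size (ctx, width, height);

/* Request only the components we need before the first
 * cogl_texture_set_region() allocates the storage. */
cogl_texture_set_components (COGL_TEXTURE (texture),
                             COGL_TEXTURE_COMPONENTS_RGB);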
Under some circumstances, for example when the display controller driver
doesn't report back the correct EDID, or under VirtualBox, Mutter
returns suboptimal strings for an output display name, leading to funny
labels like 'Unknown 0"', or '(null) 0"' in the Settings panel.
This commit improves our heuristic in three ways:
- we now avoid putting inches in the display name if either dimension is
zero
- we use the vendor name in case we're not able to look up its PnP id
from the database. Previously we would have passed over '(null)'
- as a special edge-case, when neither inches nor vendor are known, we
use the string 'Unknown Display'
Finally, we make the combined vendor + inches string translatable, as
different languages might want to move the size part of the string to a
position different from the end.
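An illustrative sketch of the resulting heuristic (the function and the
diagonal helper are hypothetical; only the behaviour is from this
commit):

static char *
make_display_name (const char *vendor,    /* may be NULL */
                   int         width_mm,
                   int         height_mm)
{
  gboolean has_size = width_mm > 0 && height_mm > 0;

  if (vendor == NULL && !has_size)
    return g_strdup (_("Unknown Display"));

  if (!has_size)
    return g_strdup (vendor);

  /* Translators: the first %s is the vendor, %d the diagonal in
   * inches; reorder as your language requires. */
  return g_strdup_printf (_("%s %d\""), vendor,
                          diagonal_in_inches (width_mm, height_mm));
}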
https://bugzilla.gnome.org/show_bug.cgi?id=721674
When GDK sends an unmaximize _NET_WM_STATE ClientMessage, it tells us to remove
the _NET_WM_STATE_MAXIMIZED_HORZ and _NET_WM_STATE_MAXIMIZED_VERT states. Before
this change, we would independently call:
meta_window_unmaximize (window, META_MAXIMIZE_HORIZONTAL);
meta_window_unmaximize (window, META_MAXIMIZE_VERTICAL);
which, besides being foolishly inefficient, would also mess up our saved_rect
tracking, causing the window to only look like it was unmaximized vertically.
Make this code more intelligent, so that we unmaximize in one call.
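Since MetaMaximizeFlags is a flags type, both directions can presumably
be combined into a single call:

meta_window_unmaximize (window,
                        META_MAXIMIZE_HORIZONTAL | META_MAXIMIZE_VERTICAL);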
https://bugzilla.gnome.org/show_bug.cgi?id=722108
This grab was added in commit caf43a123f
(https://bugzilla.gnome.org/show_bug.cgi?id=381127)
to minimize window flickering when switching workspaces.
While this grab is held, some signals are emitted to the shell,
which can lead to deadlocks (reproduced under Mali binary OpenGLESv2
drivers).
Now that we are a compositing window manager, we do not have to
worry about flickers, so this grab should no longer be necessary.
https://bugzilla.gnome.org/show_bug.cgi?id=721709
Remove some obvious server grabs from the window creation codepath,
including ones that are taken at startup.
During startup, there is no need to grab: we install the event handlers
before querying for the already-existing windows, so there is no danger
that we will 'lose' some window. We might try to create a window twice
(if it comes back in the original query and then we get an event for it)
but the code is already protected against such conditions.
When windows are created later, we also do not need grabs; we just need
appropriate error checking, as the window may be destroyed at any time
(or it may have already been destroyed).
The stack tracker is unaffected here - as it listens to CreateNotify and
DestroyNotify events and responds directly, the internal stack
representation will always be consistent even if the window goes away while
we are processing MapRequest or similar.
Now that there are no grabs we don't have to worry about explicitly calling
display_notify_window after grabs have been dropped. Fold that into
meta_window_new_shared().
https://bugzilla.gnome.org/show_bug.cgi?id=721345
The return code of XGetWindowAttributes() indicates whether an error
was encountered or not. There is no need to specifically check the error
trap.
The trap around XAddToSaveSet() was superfluous. We have a global error
trap to ignore any errors here, and there is no need to XSync() as GDK
will later ignore the error asynchronously if one is raised.
Also move common error exit path to an error label.
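The resulting shape, roughly (xdisplay/xwindow stand in for the real
variables):

XWindowAttributes attrs;

if (!XGetWindowAttributes (xdisplay, xwindow, &attrs))
  goto error;  /* a zero return already means the window is gone */

/* No trap or XSync() needed here: the global error trap swallows
 * any error XAddToSaveSet() might raise, asynchronously. */
XAddToSaveSet (xdisplay, xwindow);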
https://bugzilla.gnome.org/show_bug.cgi?id=721345
In meta_screen_manage_all_windows() we can use our own stack
tracker to get the list of windows - no need to query X again.
A copy is needed because the stack gets modified as part of the loop.
Specifically, meta_stack_tracker_get_stack() at this time returns the
predicted stack, and meta_window_new() performs a few operations
(e.g. framing) which cause immediate changes to the predicted stack.
https://bugzilla.gnome.org/show_bug.cgi?id=721345
meta_window_ensure_frame() creates its own grab and has a comment
claiming that it must be called under a grab too.
But the reasoning given in the comment does not seem relevant here.
We only frame non-override-redirect windows, so we are creating
the frame in response to MapRequest. There is no way that the child
could receive a MapNotify at this point, since that only happens
much later, once we go through the CALC_SHOWING queue and call
XMapWindow() from meta_window_show().
Remove the unnecessary grab.
https://bugzilla.gnome.org/show_bug.cgi?id=721345
Server grabs are not as evil as you might expect, but there is agreement
that their usage should be limited.
Server grabs can cause things to go rather wrong when mutter emits
a signal while it has grabbed the server. If the receiver of that signal
waits for a synchronous action performed by another client, then you
have a deadlock. This happens with Mali binary GLESv2 drivers :(
https://bugzilla.gnome.org/show_bug.cgi?id=721345
The compositor code used to handle X windows that didn't have a
corresponding MetaWindow (see commit d538690b), which is why the
attribute query is separated.
As that doesn't happen any more, we can clean up. No functional changes.
Suggested by Owen Taylor.
https://bugzilla.gnome.org/show_bug.cgi?id=721345
As logind can give us a new FD at any time when it resumes, this is
still technically wrong: the MetaCursorTracker holds onto it.
We'll fix this after we port to logind.
When we move focus elsewhere when unmanaging a window, we *need* to move
the focus, so if the target is globally active, move the focus to the
no-focus-window in anticipation that the focus will normally get moved
to the right window when the target window responds to WM_TAKE_FOCUS.
If the window doesn't respond to WM_TAKE_FOCUS, then focus will be left
on the no-focus-window, but there's no way to distinguish whether the
app will respond or not.
https://bugzilla.gnome.org/show_bug.cgi?id=711618
When a client spontaneously focuses their window, perhaps in response
to WM_TAKE_FOCUS, we'll get a FocusOut/FocusIn pair with the same serial.
Updating display->focus_serial in response to FocusOut then was causing
us to ignore FocusIn and think that the focus was not on any window.
We need to distinguish this spontaneous case from the case where we
set the focus ourselves - when we set the focus ourselves, we're careful
to combine the SetFocus with a property change so that we know definitively
what focus events we have already accounted for.
https://bugzilla.gnome.org/show_bug.cgi?id=720558
Initial placement during meta_window_constrain() can result in changes
to the borders, so we need to recompute our border sizes after
constraining. This fixes incorrect window borders on
initially maximized windows.
https://bugzilla.gnome.org/show_bug.cgi?id=720417
Currently the only way to move a window to another monitor via
keyboard is to start a move operation and move it manually using
arrow keys. We do have all the bits of a dedicated keybinding in
place already, so offer it as a more comfortable alternative.
https://bugzilla.gnome.org/show_bug.cgi?id=671054
The shadow is added in the paint step, not as a separate actor,
so the raise is a no-op. Removing it also gets rid of an annoying
misspelling that's driving me crazy.