Some plugins and extensions want to be able to know when the sticky
field of a window changes, so add a property for it and allow them
to connect to the notify::on-all-workspaces signal.
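As a rough illustration (not part of the patch itself; the helper and
callback names are made up), a plugin can now do something like:

    #include <meta/window.h>

    /* Called whenever the window's sticky (on-all-workspaces) state changes. */
    static void
    on_all_workspaces_changed (MetaWindow *window,
                               GParamSpec *pspec,
                               gpointer    user_data)
    {
      g_message ("sticky is now %d",
                 meta_window_is_on_all_workspaces (window));
    }

    static void
    watch_sticky_state (MetaWindow *window)
    {
      g_signal_connect (window, "notify::on-all-workspaces",
                        G_CALLBACK (on_all_workspaces_changed), NULL);
    }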
CLUTTER_ENTER/LEAVE events might be processed too, leading to accounting of
the NULL sequence (i.e. the pointer) in the gesture, and fooling the gesture
with a static extra point that wouldn't go away.
https://bugzilla.gnome.org/show_bug.cgi?id=732235
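Roughly, the filtering this implies looks like the sketch below, using
Clutter's event accessors (the helper is hypothetical, not the actual
gesture code):

    #include <clutter/clutter.h>

    /* Only events carrying a touch sequence should feed the gesture's
     * point accounting; crossing events and plain pointer events have a
     * NULL sequence and would otherwise leave a stale extra point behind. */
    static gboolean
    event_belongs_to_gesture (const ClutterEvent *event)
    {
      ClutterEventType type = clutter_event_type (event);

      if (type == CLUTTER_ENTER || type == CLUTTER_LEAVE)
        return FALSE;

      /* a NULL sequence means "the pointer", not a touch point */
      return clutter_event_get_event_sequence (event) != NULL;
    }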
Until now, touch events sort of relied on XI_Enter/XI_Leave events
accompanying the pointer-emulating touch in order to have a stage set on the
device. These events won't happen, though, if the touch that happens on the
stage is not a pointer-emulating one, causing touch events to be ignored.
Fix this by ensuring that the input device has a stage set on XI_TouchBegin
itself, but only if one isn't already set, so we don't steal touch events
from a stage that is already being interacted with.
https://bugzilla.gnome.org/show_bug.cgi?id=732234
On X11 the pointer will follow a "pointer emulating" touch sequence, so the
pointer will be effectively left inside the stage after that touch is lifted,
even though the master device's stage is unset. This makes pointer events get
ignored until the pointer leaves and re-enters the stage.
https://bugzilla.gnome.org/show_bug.cgi?id=732234
The 'state' field should be used for pointer events without button
information. Pointer events that have button information should use
the 'button' field.
https://bugzilla.gnome.org/show_bug.cgi?id=732143
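A hedged sketch of the convention with Clutter's accessors (not the
patched code itself):

    #include <clutter/clutter.h>

    static void
    describe_pointer_event (const ClutterEvent *event)
    {
      ClutterModifierType state = clutter_event_get_state (event);

      switch (clutter_event_type (event))
        {
        case CLUTTER_BUTTON_PRESS:
        case CLUTTER_BUTTON_RELEASE:
          /* button events: the button number lives in 'button',
           * the modifiers still live in 'state' */
          g_message ("button %u, modifiers 0x%x",
                     clutter_event_get_button (event), (unsigned int) state);
          break;
        default:
          /* motion, crossing, scroll: only 'state' is meaningful */
          g_message ("modifiers 0x%x", (unsigned int) state);
          break;
        }
    }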
When workspaces-only-on-primary is set and a window is moved back to the
primary, we also move it to the active workspace to avoid the confusion
of a visible window suddenly disappearing when crossing the monitor border.
However, when the window is not actually moved by the user, preserving the
workspace makes more sense - we already do this in some cases (e.g. when
moving between primary monitors), but miss others (unplugging the previous
monitor); just add an explicit user_op parameter as used elsewhere to cover
all exceptions.
https://bugzilla.gnome.org/show_bug.cgi?id=731760
Remember the last monitor a window was moved to by user action and
try to move it back on monitor changes; this should match user
expectations much better when a monitor is unplugged temporarily.
https://bugzilla.gnome.org/show_bug.cgi?id=731760
When workspaces-only-on-primary is set, a window can be on all
workspaces either because it is on a non-primary monitor, or
because it was explicitly made sticky. Only the latter is reflected
in _NET_WM_STATE, but both will result in a "magic" _NET_WM_DESKTOP,
which we (and probably other WMs) use to set the initial sticky state.
So to avoid confusing other WMs (or ourselves), make sure to only
have _NET_WM_STATE_STICKY reflected in _NET_WM_DESKTOP when unmanaging.
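A sketch of the distinction when writing the property on unmanage
(variable names are hypothetical; 0xFFFFFFFF is the EWMH "all desktops"
value):

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    /* Only a window that was explicitly made sticky should export the
     * "all desktops" magic value; a window that is on all workspaces
     * merely because it sits on a non-primary monitor keeps its real
     * workspace index. */
    static void
    export_net_wm_desktop (Display      *xdisplay,
                           Window        xwindow,
                           Atom          net_wm_desktop,
                           Bool          explicitly_sticky,
                           unsigned long workspace_index)
    {
      unsigned long desktop = explicitly_sticky ? 0xFFFFFFFF : workspace_index;

      XChangeProperty (xdisplay, xwindow, net_wm_desktop,
                       XA_CARDINAL, 32, PropModeReplace,
                       (unsigned char *) &desktop, 1);
    }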
Window state like maximization and minimization should be preserved
over restarts - in a patch review, this would qualify as "needs-work",
so revert the cleanup until the issues are fixed.
This reverts commit dc6decefb5cbd14d5ab059a59952fd96e86dc763.
Since GTK+ already clips to the extended region for us, there's no need
to combine the two. This does lose the fast-path, but I don't actually
expect this to fire, as when we're composited, we really won't ever get
partial exposes.
mutter is quite bad at using GTK+ correctly, relying on dumb things
like the single-buffering stuff. Hack up a temporary fix for the
newer GTK+ rendering changes.
Rather than calculate it speculatively with the current properties,
which may be too new or too out of date, make sure it always fits
with the proper definition. We update it when we update the toplevel
window for X11, and when a Wayland surface is committed with a newly
attached buffer.
With get_input_region existing, get_input_rect is a misnomer. Really,
it's about the geometry of the output surface, and it's only used that
way in the compositor code.
Way back when in GNOME 3.2, get_input_rect was added when we added
invisible borders. get_outer_rect was always synonymous with server-side
geometry of the toplevel. get_outer_rect was used both for user-side
policy (the "frame rect") and for getting the geometry of the window.
Invisible borders were meant to extend the input region of the frame
window silently. Since most users of get_outer_rect cared about the
frame rect, we kept that the same and added a new method, get_input_rect,
to get the full rect of the framed window including all the invisible
borders used for input.
As time went on and CSD and Wayland became a reality, the relationship
between the server-side geometry and the "frame rect" became more
complicated, as evidenced by the recent commits. Since clients
don't tend to be framed anymore, they set their own input region.
get_buffer_rect is also sort of a poor name, since X11 doesn't really
have buffers, but we don't really have many other alternatives.
This doesn't change any of the code, nor the meaning. It will always
refer to the rectangle where the toplevel should be placed.
The surfaceless context extension can be used to bind a context
without a surface. We can use this to avoid creating a dummy surface
when the CoglContext is first created. Otherwise we have to have the
dummy surface so that we can bind it before the first onscreen is
created.
The main awkward part of this patch is that theoretically according to
the GL and GLES spec if you first bind a context without a surface
then the default state for glDrawBuffers is GL_NONE instead of
GL_BACK. In practice Mesa doesn't seem to do this but we need to be
robust against a GL implementation that does. Therefore we track when
the CoglContext is first used with a CoglOnscreen and force the
glDrawBuffers state to be GL_BACK.
There is a further awkward part in that GLES2 doesn't actually have a
glDrawBuffers state but GLES3 does. GLES3 also defaults to GL_NONE in
this case so if GLES3 is available then we have to be sure to set the
state to GL_BACK. As far as I can tell that actually makes GLES3
incompatible with GLES2 because in theory if the application is not
aware of GLES3 then it should be able to assume the draw buffer is
always GL_BACK.
Reviewed-by: Robert Bragg <robert.bragg@intel.com>
(cherry picked from commit e5f28f1e75db9bdc4f2688f420a74f908f96cf76)
Conflicts:
cogl/winsys/cogl-winsys-egl-kms.c
cogl/winsys/cogl-winsys-egl-x11.c
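For the GLES3 side, the forced reset amounts to something like the
sketch below (assuming the GLES3 headers; on desktop GL the equivalent
call is glDrawBuffer (GL_BACK), and plain GLES2 has no draw-buffer state
to reset):

    #include <GLES3/gl3.h>

    /* Run once, the first time the context draws to an onscreen
     * framebuffer: a context that was first bound without a surface may
     * have its draw buffer defaulted to GL_NONE, so force it back to
     * GL_BACK. */
    static void
    reset_default_draw_buffer (void)
    {
      const GLenum back = GL_BACK;

      glDrawBuffers (1, &back);
    }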
Some features that were previously available as an extension in GLES2
are now in core in GLES3 so we should be able to specify that with the
gles_availability mask of COGL_EXT_BEGIN so that GL implementations
advertising GLES3 don't have to additionally advertise the extension
for us to take advantage of it.
Reviewed-by: Robert Bragg <robert.bragg@intel.com>
(cherry picked from commit 4d892cd97558da61ba526f947ac0555ebab632d2)
None of the users of get_input_rect actually want a synthesized
input rect based on the current margins. What they really want is
the last-configured size of the toplevel window.
Since we don't properly track this anymore in the generic MetaWindow,
use XGetWindowAttributes to fetch a server-side rectangle. This is a
bad layer violation, but since the window geometry code will have to
be rewritten anyway for the Wayland set_window_geometry, let's just
push a hacky fix for now.
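The hack boils down to roughly this (a sketch; mutter's own bookkeeping
and error handling are left out):

    #include <X11/Xlib.h>

    /* Ask the X server directly for the toplevel's current geometry
     * instead of re-deriving it from possibly stale margins. */
    static void
    fetch_server_side_rect (Display *xdisplay,
                            Window   xwindow,
                            int     *x, int *y,
                            int     *width, int *height)
    {
      XWindowAttributes attrs;

      if (XGetWindowAttributes (xdisplay, xwindow, &attrs) != 0)
        {
          *x = attrs.x;
          *y = attrs.y;
          *width = attrs.width;
          *height = attrs.height;
        }
    }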
While the comment claims that we may want to keep this around
for optimization purposes, the operations are raw bitmap operations
that would be done more cleanly in cairo.
https://bugzilla.gnome.org/show_bug.cgi?id=662962
Struts are defined in terms of screen edges, so expand the rectangles
we get via set_builtin_struts() accordingly. However, we do want to
allow chrome on edges between monitors, in which case the expansion
would render an entire monitor unusable - don't expand the rectangles
in that case, which means we will only use them for constraining
windows but ignore them for the client-visible _NET_WORKAREA property.
https://bugzilla.gnome.org/show_bug.cgi?id=730527
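The expansion itself is simple geometry, sketched below with simplified
stand-ins for mutter's rectangle and side types (the "strut only touches
a monitor edge, don't expand" check is omitted):

    typedef struct { int x, y, width, height; } Rect;
    typedef enum { SIDE_LEFT, SIDE_RIGHT, SIDE_TOP, SIDE_BOTTOM } Side;

    /* Extend a strut rectangle from the monitor edge it is attached to
     * all the way to the corresponding screen edge. */
    static void
    expand_to_screen_edge (Rect *strut, Side side, const Rect *screen)
    {
      switch (side)
        {
        case SIDE_LEFT:
          strut->width += strut->x - screen->x;
          strut->x = screen->x;
          break;
        case SIDE_RIGHT:
          strut->width = screen->x + screen->width - strut->x;
          break;
        case SIDE_TOP:
          strut->height += strut->y - screen->y;
          strut->y = screen->y;
          break;
        case SIDE_BOTTOM:
          strut->height = screen->y + screen->height - strut->y;
          break;
        }
    }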
Like the _NET_WM_STRUT/_NET_WM_STRUT_PARTIAL client properties,
_NET_WORKAREA is defined in terms of screen geometry rather than
taking individual monitors into account. However, we do want to
allow system chrome to be attached to a monitor edge rather than
a screen edge under some circumstances. As not all clients can
be assumed to deal gracefully with the resulting workarea, use
those "struts" only internally for constraining windows, but
ignore them when exporting _NET_WORKAREA.
https://bugzilla.gnome.org/show_bug.cgi?id=730527
Since commit 8b2b65246a86, we assume that the compositor always
exists. Alas, the assumption is wrong - the compositor is currently
initialized after the screen, but meta_screen_new() itself may
call a compositor function if initialization involves a workspace
switch (which will happen when meta_workspace_activate() is called
more than once and for different workspaces - or in other words,
when _NET_CURRENT_DESKTOP is set and not 0).
So carefully split out the offending bits and only call them after
the compositor has been initialized.
https://bugzilla.gnome.org/show_bug.cgi?id=731332