This reverts commit 47e339b46e. The
approach that was used to reduce the amount of work we do on RR events
to the necessary minimum is flawed. It assumes that, by the time we
see the first event whose retrieved XRRScreenResources.timestamp is
bigger than the previous one, we already have all the data we need to
rebuild our view of the world.
That isn't true, however, because the X server sends
RRScreenChangeNotify events for every step of the configuration
change, i.e. it lacks an atomic reconfiguration API. In particular, if
the X screen size is one of the changes, it might still be the
previous size when we rebuild our state and emit monitors-changed;
and since we then stop updating ourselves until another
reconfiguration happens (detected via XRRScreenResources.timestamp),
we end up with the wrong idea of the X screen size.
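Roughly, the reverted optimization amounted to a check like this (an
illustrative sketch, not the exact reverted code; last_timestamp and
the rebuild helper are placeholder names):

    XRRScreenResources *resources =
      XRRGetScreenResourcesCurrent (xdisplay, xroot);

    /* Flawed: later events for the same reconfiguration are dropped,
     * even though e.g. the X screen size may still be stale. */
    if (resources->timestamp <= manager->last_timestamp)
      return;

    manager->last_timestamp = resources->timestamp;
    rebuild_state_and_emit_monitors_changed (manager);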
https://bugzilla.gnome.org/show_bug.cgi?id=738630
This optimization breaks our use of XRRScreenResources' timestamps to
detect hotplugs in case one of the outputs is disconnected and the
remaining ones don't need any mode, position or transform adjustments.
In that scenario, when applying the new configuration, we resize the X
screen but never call XRRSetCrtcConfig(). Since XRRSetScreenSize()
doesn't take a timestamp and the X server doesn't update its last-set
timestamp, when we next get an RRScreenChangeNotify and update
ourselves, XRRScreenResources.timestamp will still be smaller than
XRRScreenResources.configTimestamp, which makes us think we're seeing
a new hotplug. We only avoid an endless loop because the screen size
we keep applying is always the same, so the X server short-circuits
and stops sending us RRScreenChangeNotifys.
Always calling XRRSetCrtcConfig() ensures that the last-set timestamp
will be bigger than configTimestamp in the next event, thus making us
trigger the monitors-changed signal properly.
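For illustration, the fix amounts to issuing the CRTC request
unconditionally, so the server bumps its last-set timestamp even when
nothing about the CRTC changes (a sketch; the crtc/mode/output
variables are placeholders):

    XRRSetCrtcConfig (xdisplay,
                      resources,
                      crtc->crtc_id,
                      CurrentTime,
                      x, y,
                      mode->mode_id,
                      rotation,
                      output_ids, n_output_ids);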
Note that the X server already does basically the same checks that
we're removing here, so doing this shouldn't be a significant
efficiency loss. See
http://cgit.freedesktop.org/xorg/xserver/tree/randr/rrcrtc.c?h=server-1.16-branch#n539
If the app finished multiple frames before we sent _NET_WM_FRAME_DRAWN,
we could add the send_frame_messages_timer multiple times. In the rare
case that the app immediately closed the window, the older timeout
could then run on the freed actor.
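A minimal sketch of the guard, using the send_frame_messages_timer
source id as an "already queued" flag (names follow the surrounding
code; the interval is a placeholder):

    static void
    queue_send_frame_messages_timeout (MetaWindowActor *self)
    {
      /* Don't add the timeout twice; a nonzero id means it's queued. */
      if (self->priv->send_frame_messages_timer != 0)
        return;

      self->priv->send_frame_messages_timer =
        g_timeout_add (interval_ms, send_frame_messages_timeout, self);
    }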
https://bugzilla.gnome.org/show_bug.cgi?id=738686
* Use -1 rather than 0 as a flag for pending queue entries, since 0 is
  a valid frame_counter value from Cogl (see the sketch after this
  list).
* Consistently handle the fact we can have more than one pending
entry. It's app misbehavior to submit a new frame before
_NET_WM_FRAME_DRAWN is received; but we accept such frame messages,
so we can't just leak them.
* If we remove send_frame_messages_timer, assign the current frame
  counter to pending entries.
* To try to avoid regressing on this, when sending _NET_WM_FRAME_TIMINGS
  messages, if we have stale messages, or messages with no frame-drawn
  time, warn and remove them from the queue rather than letting them
  accumulate.
* Improve commenting.
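A sketch of the sentinel handling described above (struct and field
names are illustrative):

    typedef struct
    {
      guint64 frame_drawn_time;
      gint64  frame_counter;   /* -1 until a counter is assigned */
    } FrameData;

    static void
    assign_frame_counter_to_pending_frames (MetaWindowActor *self,
                                            gint64           counter)
    {
      GList *l;

      for (l = self->priv->frames; l; l = l->next)
        {
          FrameData *frame = l->data;

          if (frame->frame_counter == -1)
            frame->frame_counter = counter;
        }
    }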
https://bugzilla.gnome.org/show_bug.cgi?id=738686
It doesn't make sense to load cursor textures that we might not ever
use. Since the code here also uses CoglTexture2D, and cursors tend
to be NPOT textures, this also means we won't crash users of cards
without NPOT support. At least until they open the magnifier. :)
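A sketch of the lazy-loading pattern, with hypothetical field and
loader names:

    static CoglTexture *
    cursor_get_texture (MetaCursorReference *cursor)
    {
      /* Create the texture only on first use. */
      if (cursor->texture == NULL)
        cursor->texture = load_cursor_texture (cursor);

      return cursor->texture;
    }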
Whenever the compositor takes a grab, we're supposed to send leave/enter
events to the current surface, which makes sense, as the compositor
has stolen the pointer from the client.
I forget why I added the special case in the first place, but it's
likely a bug that's since been fixed.
This actually fixes a bug: it prevents the need to double-click on
X11 application titlebars when grabbing them.
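In wayland-server terms, the leave/enter round-trip described above
corresponds to something like the following when the grab begins and
ends (a sketch; the surrounding focus bookkeeping is omitted):

    /* Grab begins: the client loses the pointer. */
    wl_pointer_send_leave (pointer_resource, serial, surface_resource);

    /* Grab ends: the client gets the pointer back. */
    wl_pointer_send_enter (pointer_resource, serial, surface_resource,
                           wl_fixed_from_double (x),
                           wl_fixed_from_double (y));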
Windows that set empty input shapes get n_rects of 0 when querying them
later, which makes sense, but the code that interpreted the result
translated it into a NULL input shape, which meant it was the same as
the bounding region. As such, an empty input shape would actually get
interpreted as a full input shape!
We set an empty input shape on tray icon windows in gnome-shell
ourselves, since we handle the picking there. This meant that
we'd actually get the MetaSurfaceActorX11 when hovering over the tray
icon, instead of the ShellGTKEmbed that we capture events on and react
to.
This fixes weird tray icon behavior in gnome-shell.
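The distinction boils down to something like this when querying the
shape (a simplified sketch using XShape and cairo regions;
region_from_xrectangles() is a hypothetical helper):

    int n_rects, ordering;
    XRectangle *rects =
      XShapeGetRectangles (xdisplay, xwindow, ShapeInput,
                           &n_rects, &ordering);

    /* An empty rectangle list must become an *empty* region, not
     * NULL: NULL means "same as the bounding region". */
    if (n_rects == 0)
      input_region = cairo_region_create ();
    else
      input_region = region_from_xrectangles (rects, n_rects);

    if (rects != NULL)
      XFree (rects);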
The parent pick() implementation in ClutterActor only recurses if the
vfunc is untouched, which means it's up to the MetaWaylandSurface
implementation to actually recurse, just as it would if an input mask
were applied.
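A sketch of that recursion in the Clutter 1.x pick model, where
painting a child during a pick picks it (the chain-up placement is
illustrative):

    static void
    surface_actor_pick (ClutterActor       *actor,
                        const ClutterColor *color)
    {
      ClutterActorIter iter;
      ClutterActor *child;

      /* ... pick our own input shape here ... */

      clutter_actor_iter_init (&iter, actor);
      while (clutter_actor_iter_next (&iter, &child))
        clutter_actor_paint (child);
    }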
https://bugzilla.gnome.org/show_bug.cgi?id=738890
This reverts commit ec8ed1dbb0.
1) It turns out to add a momentary flicker during the transition
   between the login screen and the user session
2) It actually isn't needed anymore since bug 733026
https://bugzilla.gnome.org/show_bug.cgi?id=740377
Refactor make_default_config() to always sanity-check the configuration to
ensure that it fits within the framebuffer. Previously, this was only done
for the default linear configuration.
Recent versions of the QXL driver may set "suggested X|Y" connector
properties. These properties are used to indicate the position at which
multiple displays should be aligned. If all outputs have a suggested
position, the displays are arranged according to these positions;
otherwise, we fall back to the default configuration.
At the moment, we trust that the driver has chosen sane values for the
suggested position.
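A sketch of the all-or-nothing decision, with placeholder fields for
wherever the suggested coordinates end up being stored:

    static gboolean
    all_outputs_have_suggested_position (MetaOutput   *outputs,
                                         unsigned int  n_outputs)
    {
      unsigned int i;

      for (i = 0; i < n_outputs; i++)
        {
          /* suggested_x/y < 0 meaning "unset" is an assumption here */
          if (outputs[i].suggested_x < 0 || outputs[i].suggested_y < 0)
            return FALSE;
        }

      return TRUE;
    }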
When the output device has hotplug_mode_update (e.g. the qxl driver used
in VMs), the displays can be dynamically resized, so the current display
configuration often does not match a stored configuration. When a new
monitor is added, make_default_config() tries to create a new display
configuration by choosing a stored configuration with N-1 monitors, and then
adding a new monitor to the end of the layout. Because the stored config
doesn't match the current outputs, apply_configuration() will routinely
fail, leaving the additional display unconfigured. In this case, it's more
useful to just fall back to creating a new default configuration from
scratch so that all outputs get configured to their preferred mode.
Move logic for creating different types of configurations into separate
functions. This keeps things a bit cleaner and allows us to add alternate
configuration types more easily.
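The resulting structure looks roughly like this (a sketch; the helper
names and parameters are illustrative):

    static MetaConfiguration *
    make_default_config (MetaMonitorConfig *self,
                         MetaOutput        *outputs,
                         unsigned int       n_outputs,
                         int                max_width,
                         int                max_height,
                         gboolean           use_stored_config)
    {
      MetaConfiguration *ret = NULL;

      /* 1. Honour driver-suggested positions when every output has one. */
      if (all_outputs_have_suggested_position (outputs, n_outputs))
        ret = make_suggested_config (self, outputs, n_outputs,
                                     max_width, max_height);

      /* 2. Otherwise try extending a stored N-1 config, but only when
       *    stored configs are trustworthy (no hotplug_mode_update). */
      if (!ret && use_stored_config)
        ret = extend_stored_config (self, outputs, n_outputs,
                                    max_width, max_height);

      /* 3. Fall back to a fresh linear layout at preferred modes. */
      if (!ret)
        ret = make_linear_config (self, outputs, n_outputs,
                                  max_width, max_height);

      return ret;
    }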
WindowActors can outlive their corresponding window to animate unmap.
Unredirecting the actor does not make sense in that case, so make
sure not to request it.
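A minimal sketch of the guard, assuming the unredirect decision lives
in a should_unredirect()-style helper:

    static gboolean
    should_unredirect (MetaWindowActor *self)
    {
      /* The actor can outlive its window while animating unmap. */
      if (self->priv->window == NULL)
        return FALSE;

      /* ... the existing unredirection heuristics ... */
      return TRUE;
    }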
https://bugzilla.gnome.org/show_bug.cgi?id=740133
When a laptop's lid is closed we try to build and apply a temporary
configuration that disables the laptop's display if we have other
outputs.
This isn't enough though: we must also check that at least one of these
other outputs is enabled, otherwise we'll try to resize the screen to
0x0, which (rightfully) hits an assertion.
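A sketch of that check (field names are illustrative):

    gboolean any_enabled = FALSE;
    unsigned int i;

    for (i = 0; i < config->n_outputs; i++)
      {
        /* Skip the laptop panel we're about to turn off. */
        if (config->outputs[i].is_laptop_panel)
          continue;

        if (config->outputs[i].enabled)
          any_enabled = TRUE;
      }

    if (any_enabled)
      apply_configuration (manager, config);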
https://bugzilla.gnome.org/show_bug.cgi?id=739450
The input region currently only gets scaled by the surface scale while
ignoring the output scale, which causes input events not to be
delivered correctly for clients on hidpi screens. So take the output
scale into account as well.
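A sketch of the intended math; scale_region() stands in for whatever
helper multiplies the region's rectangles by a factor:

    int scale = surface_scale * output_scale;
    cairo_region_t *scaled_input = scale_region (input_region, scale);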
https://bugzilla.gnome.org/show_bug.cgi?id=739161
The reverted commit is wrong: it assumes the scale in question is only
the one set by the client, but it isn't;
meta_surface_actor_wayland_scale_texture also handles the output scale.
Revert the commit to fix hidpi for Wayland clients like weston-terminal.
This reverts commit 0364ea9140.
https://bugzilla.gnome.org/show_bug.cgi?id=739161
Since GTK+ commit 3a337156d11a86c7, save()/restore() may only be
used for subelements; in this particular case, the change broke
the backdrop state in decorations. Luckily we don't actually need
the save()/restore() pair anyway, as we only touch the context's
state and always set it explicitly.
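For illustration, the drawing path can set the state directly each
time instead of wrapping it in save()/restore() (a sketch; the
variables are placeholders):

    /* Set the state explicitly; no save()/restore() needed, since we
     * always set it before drawing. */
    gtk_style_context_set_state (context, state);
    gtk_render_background (context, cr, x, y, width, height);
    gtk_render_frame (context, cr, x, y, width, height);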