In order to pick up all theme information from GTK+, a single style
context is not enough; we will soon need a hierarchy of style contexts
that closely matches the widget hierarchy used by GTK+'s client-side
decorations.
https://bugzilla.gnome.org/show_bug.cgi?id=741917
Our current use of style contexts is fairly limited - we don't
use them for much more than picking up some color information.
We will soon start to make more elaborate use of GTK+ style
information, at which point a single context will no longer be enough
to draw a frame.
To prepare for this, add a simple ref-counted type to wrap
style information.
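As a rough illustration, such a ref-counted wrapper could look like the
sketch below; the type name, fields and helpers are assumptions for
illustration, not the actual API:

    #include <gtk/gtk.h>

    /* Hypothetical ref-counted wrapper around the style contexts needed
     * to draw a frame; the type name and fields are illustrative only. */
    typedef struct {
      int              ref_count;
      GtkStyleContext *frame_style;
      GtkStyleContext *title_style;
      GtkStyleContext *button_style;
    } MetaStyleInfo;

    static MetaStyleInfo *
    meta_style_info_ref (MetaStyleInfo *info)
    {
      g_atomic_int_inc (&info->ref_count);
      return info;
    }

    static void
    meta_style_info_unref (MetaStyleInfo *info)
    {
      if (g_atomic_int_dec_and_test (&info->ref_count))
        {
          g_object_unref (info->frame_style);
          g_object_unref (info->title_style);
          g_object_unref (info->button_style);
          g_free (info);
        }
    }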
https://bugzilla.gnome.org/show_bug.cgi?id=741917
Rather than defining the space to the left and right of buttons, add a
simple spacing property that defines the space between buttons, which is
what GTK+ does for client-side decorations (e.g. GtkButtons in a GtkBox).
Unfortunately the value is hardcoded in GTK+; if it is exposed in the
theme in the future, we should pick it up from there, but for now we
just use the same value as GTK+.
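Purely for illustration (the helper is made up), the layout arithmetic
with a single spacing value becomes:

    /* Illustrative only: with one inter-button spacing value, n buttons
     * occupy n * button_width + (n - 1) * spacing, instead of
     * n * (left_space + button_width + right_space). */
    static int
    buttons_total_width (int n_buttons, int button_width, int spacing)
    {
      if (n_buttons <= 0)
        return 0;

      return n_buttons * button_width + (n_buttons - 1) * spacing;
    }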
https://bugzilla.gnome.org/show_bug.cgi?id=741917
It's confusing to have "button_rect" be a function while all the other
foo_rect names are GdkRectangles - renaming it to get_button_rect()
frees the name up for a generically named "rect" once buttons are the
only movable pieces in the frame.
https://bugzilla.gnome.org/show_bug.cgi?id=741917
This reverts commit 47e339b46e. The
approach that was used to reduce the amount of work we do on RR events
to the necessary minimum is flawed. It assumes that, when the first
event we see where the retrieved XRRScreenResources.timestamp is
bigger than the previous, we already have all the data we need to
rebuild our view of the world.
That isn't true, however, because the X server sends
RRScreenChangeNotify events for every step of the configuration
change, i.e. it lacks an atomic reconfiguration API. In particular, if
the X screen size is one of the changes, it might still be the
previous size when we rebuild our state and emit monitors-changed; and
since we then stop updating ourselves until another reconfiguration
happens (detected by looking at XRRScreenResources.timestamp), we end
up with the wrong idea of the X screen size.
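For reference, the reverted check boiled down to something like the
sketch below; the XRRScreenResources fields are real, but the helper and
the surrounding logic are a paraphrase, not the actual code:

    #include <glib.h>
    #include <X11/extensions/Xrandr.h>

    /* Sketch of the reverted idea: only rebuild our view of the world
     * when XRRScreenResources.timestamp has advanced. Later events with
     * the same timestamp - which may carry the final X screen size -
     * are ignored, which is exactly the flaw described above. */
    static gboolean
    should_rebuild (XRRScreenResources *resources,
                    Time               *last_timestamp)
    {
      if (resources->timestamp <= *last_timestamp)
        return FALSE;

      *last_timestamp = resources->timestamp;
      return TRUE;
    }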
https://bugzilla.gnome.org/show_bug.cgi?id=738630
This optimization breaks our use of XRRScreenResources' timestamps to
detect hotplugs in case one of the outputs is disconnected and the
remaining ones don't need any mode, position or transform adjustments.
In that scenario, when applying the new configuration, we resize the X
screen but never call XRRSetCrtcConfig(). Since XRRSetScreenSize()
doesn't take a timestamp and the X server therefore doesn't update its
last-set timestamp, when we next get an RRScreenChangeNotify and update
ourselves, XRRScreenResources.timestamp will still be smaller than
XRRScreenResources.configTimestamp, which makes us think we're seeing a
new hotplug. We only avoid an endless loop because the screen size we
keep applying is always the same, so the X server short-circuits and
stops sending us RRScreenChangeNotifys.
Always calling XRRSetCrtcConfig() ensures that the last-set timestamp
will be bigger than configTimestamp in the next event, which makes us
trigger the monitors-changed signal properly.
Note that the X server already does basically the same checks that
we're removing here, so doing this shouldn't be a significant
efficiency loss. See
http://cgit.freedesktop.org/xorg/xserver/tree/randr/rrcrtc.c?h=server-1.16-branch#n539
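As a rough sketch of what "always calling XRRSetCrtcConfig()" amounts to
(the XRRSetCrtcConfig() call is real, but the helper, variable names and
the use of CurrentTime are illustrative assumptions):

    #include <X11/extensions/Xrandr.h>

    /* Re-apply a CRTC's existing configuration even when nothing
     * changed, so that the X server bumps its last-set timestamp -
     * something XRRSetScreenSize() alone does not do. */
    static void
    reapply_crtc_config (Display            *xdisplay,
                         XRRScreenResources *resources,
                         RRCrtc              crtc,
                         XRRCrtcInfo        *crtc_info)
    {
      XRRSetCrtcConfig (xdisplay, resources, crtc,
                        CurrentTime,
                        crtc_info->x, crtc_info->y,
                        crtc_info->mode, crtc_info->rotation,
                        crtc_info->outputs, crtc_info->noutput);
    }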
If the app finished multiple frames before we sent _NET_WM_FRAME_DRAWN,
we could add the send_frame_messages_timer multiple times. In the rare
case that the app immediately closed the window, the older timeout
could then run on the freed actor.
https://bugzilla.gnome.org/show_bug.cgi?id=738686
* Use -1 rather than 0 as a flag for pending queue entries; 0 is
a valid frame_counter value from Cogl.
* Consistently handle the fact that we can have more than one pending
entry. It's app misbehavior to submit a new frame before
_NET_WM_FRAME_DRAWN is received, but we accept such frame messages,
so we can't just leak them.
* If we remove the send_frame_messages_timer, assign the current frame
counter to pending entries (see the sketch after this list).
* To try to avoid regressing on this, when sending _NET_WM_FRAME_TIMINGS
messages, warn about stale messages or messages with no frame-drawn time
and remove them from the queue rather than letting them accumulate.
* Improve commenting.
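The sentinel handling referred to above could look roughly like this
(struct and helper names are made up for illustration):

    #include <glib.h>

    /* Illustrative sketch: -1 marks a pending entry that has not been
     * assigned a Cogl frame counter yet, since 0 is a valid counter. */
    typedef struct {
      gint64 frame_counter;    /* -1 while the entry is still pending */
      gint64 frame_drawn_time;
    } FrameDataSketch;

    static void
    assign_counter_to_pending (GList *frames,
                               gint64 current_counter)
    {
      GList *l;

      for (l = frames; l; l = l->next)
        {
          FrameDataSketch *frame = l->data;

          if (frame->frame_counter == -1)
            frame->frame_counter = current_counter;
        }
    }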
https://bugzilla.gnome.org/show_bug.cgi?id=738686
It doesn't make sense to load cursor textures that we might never use.
Since the code here also uses CoglTexture2D, and cursors tend to be
NPOT textures, this means we won't crash users of cards without NPOT
support. At least until they open the magnifier. :)
Whenever the compositor takes a grab, we're supposed to send leave/enter
events to the current surface, which makes sense, as the compositor
has stolen the pointer from the client.
I forget why I added the special case in the first place, but it's
likely a bug that's since been fixed.
This actually fixes a bug: it prevents the need to double-click on
X11 application titlebars when grabbing them.
Windows that set empty input shapes get an n_rects of 0 when we query
them later, which makes sense, but the code that interpreted the result
translated it into a NULL input shape, which meant it was the same as
the bounding region. As such, an empty input shape would actually get
interpreted as a full input shape!
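A minimal sketch of the corrected interpretation (the helper name and
parameters are made up; XRectangle and cairo_region_t are the real
types):

    #include <glib.h>
    #include <cairo.h>
    #include <X11/Xlib.h>

    /* An input shape with zero rectangles is an *empty* region; only a
     * missing shape should map to NULL ("same as the bounding region"). */
    static cairo_region_t *
    input_region_from_rects (XRectangle *rects,
                             int         n_rects,
                             gboolean    shape_set)
    {
      cairo_region_t *region;
      int i;

      if (!shape_set)
        return NULL;

      region = cairo_region_create ();  /* empty when n_rects == 0 */

      for (i = 0; i < n_rects; i++)
        {
          cairo_rectangle_int_t rect = {
            rects[i].x, rects[i].y, rects[i].width, rects[i].height
          };

          cairo_region_union_rectangle (region, &rect);
        }

      return region;
    }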
We, ourselves, set an empty input shape on tray icon windows in
gnome-shell since we would handle the picking ourselves. This meant that
we'd actually get the MetaSurfaceActorX11 when hovering over the tray
icon, instead of the ShellGTKEmbed that we capture events on and react
to.
This fixes weird tray icon behavior in gnome-shell.
The parent pick() implementation in ClutterActor only recurses if the
vfunc is untouched, which means it's up to the MetaWaylandSurface
implementation to actually recurse, just the same as when an input mask
is applied.
https://bugzilla.gnome.org/show_bug.cgi?id=738890
This reverts commit ec8ed1dbb0.
1) It turns out to add a momentary flicker during the transition
between the login screen and the user session.
2) It actually isn't needed anymore since bug 733026.
https://bugzilla.gnome.org/show_bug.cgi?id=740377
Refactor make_default_config() to always sanity-check the configuration to
ensure that it fits within the framebuffer. Previously, this was only done
for the default linear configuration.
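For illustration, the kind of sanity check involved amounts to something
like this (names and parameters are assumptions):

    #include <glib.h>

    /* Illustrative check: a configured output fits the framebuffer if
     * its rectangle lies entirely within the maximum framebuffer size. */
    static gboolean
    output_fits_framebuffer (int x, int y, int width, int height,
                             int max_width, int max_height)
    {
      return x >= 0 && y >= 0 &&
             x + width <= max_width &&
             y + height <= max_height;
    }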
Recent versions of the QXL driver may set "suggested X|Y" connector
properties. These properties are used to indicate the position at which
multiple displays should be aligned. If all outputs have a suggested
position, the displays are arranged according to these positions;
otherwise we fall back to the default configuration.
At the moment, we trust that the driver has chosen sane values for the
suggested position.
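The resulting decision logic is roughly the following (struct and helper
names are illustrative assumptions):

    #include <glib.h>

    /* Illustrative: only lay out displays by driver-suggested positions
     * when every output provides one; otherwise fall back to the
     * default configuration. */
    typedef struct {
      gboolean has_suggested_position;
      int      suggested_x;
      int      suggested_y;
    } OutputSketch;

    static gboolean
    all_outputs_have_suggested_position (OutputSketch *outputs,
                                         int           n_outputs)
    {
      int i;

      for (i = 0; i < n_outputs; i++)
        {
          if (!outputs[i].has_suggested_position)
            return FALSE;
        }

      return TRUE;
    }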
When the output device has hotplug_mode_update (e.g. the qxl driver used
in VMs), the displays can be dynamically resized, so the current display
configuration rarely matches a stored configuration. When a new
monitor is added, make_default_config() tries to create a new display
configuration by choosing a stored configuration with N-1 monitors, and then
adding a new monitor to the end of the layout. Because the stored config
doesn't match the current outputs, apply_configuration() will routinely
fail, leaving the additional display unconfigured. In this case, it's more
useful to just fall back to creating a new default configuration from
scratch so that all outputs get configured to their preferred mode.
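In sketch form (all type and function names below are hypothetical
stand-ins, not the real mutter API):

    #include <glib.h>

    /* Hypothetical stand-in types and helpers, purely for illustration. */
    typedef struct _Config  Config;
    typedef struct _Manager Manager;

    Config *extend_stored_config      (Manager *manager);
    Config *make_fresh_default_config (Manager *manager);

    static Config *
    make_default_config_sketch (Manager  *manager,
                                gboolean  hotplug_mode_update)
    {
      /* With hotplug_mode_update devices (e.g. qxl), stored
       * configurations rarely match the current outputs, so don't try
       * to extend one. */
      if (!hotplug_mode_update)
        {
          Config *config = extend_stored_config (manager);

          if (config != NULL)
            return config;
        }

      /* Build a fresh default configuration from scratch, with every
       * output at its preferred mode. */
      return make_fresh_default_config (manager);
    }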
Move logic for creating different types of configurations into separate
functions. This keeps things a bit cleaner and allows us to add alternate
configuration types more easily.
WindowActors can outlive their corresponding window to animate unmap.
Unredirecting the actor does not make sense in that case, so make
sure to not request it.
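A minimal sketch of the guard, with hypothetical type and helper names:

    #include <glib.h>

    /* Hypothetical stand-in types and helper, purely for illustration. */
    typedef struct _SketchWindow SketchWindow;

    typedef struct {
      SketchWindow *window;   /* NULL once the window is destroyed */
    } SketchWindowActor;

    gboolean sketch_window_wants_unredirect (SketchWindow *window);

    static gboolean
    sketch_should_unredirect (SketchWindowActor *actor)
    {
      /* An actor kept alive only to animate unmap has no window left,
       * so it must never request unredirection. */
      if (actor->window == NULL)
        return FALSE;

      return sketch_window_wants_unredirect (actor->window);
    }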
https://bugzilla.gnome.org/show_bug.cgi?id=740133