EDID parsing has been refactored to a common meta_output_parse_edid()
function, which ensures that the extracted information is the same on both
the KMS and X11 backends, so it can be used consistently, e.g. in settings
values.
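For illustration, the kind of extraction such a shared parser performs looks
roughly like this (a sketch only, not mutter's actual parser; names are
assumptions):

    #include <stdint.h>

    /* The EDID manufacturer ID lives in bytes 8-9: three 5-bit letters
     * packed big-endian, with 1 meaning 'A'.  Both backends hand the raw
     * EDID blob (from the DRM connector or the RandR "EDID" property) to
     * the same code, so e.g. the vendor string matched against settings
     * is identical. */
    static void
    parse_edid_vendor (const uint8_t *edid, char vendor[4])
    {
      uint16_t id = (uint16_t) ((edid[8] << 8) | edid[9]);

      vendor[0] = '@' + ((id >> 10) & 0x1f);
      vendor[1] = '@' + ((id >> 5) & 0x1f);
      vendor[2] = '@' + (id & 0x1f);
      vendor[3] = '\0';
    }
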
https://bugzilla.gnome.org/show_bug.cgi?id=742882
This reverts commit 47e339b46e. The
approach that was used to reduce the amount of work we do on RR events
to the necessary minimum is flawed. It assumes that, on the first
event for which the retrieved XRRScreenResources.timestamp is bigger
than the previous one, we already have all the data we need to
rebuild our view of the world.
That isn't true, however, because the X server sends
RRScreenChangeNotify events for every step of the configuration
change, i.e. it lacks an atomic reconfiguration API. In particular, if
the X screen size is one of the changes, then when we rebuild our state
and emit monitors-changed, the X screen size might still be the previous
one; and since we stop updating ourselves until another reconfiguration
happens (detected by looking at XRRScreenResources.timestamp), we end up
with the wrong idea of the X screen size.
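Concretely, the reverted approach amounted to a guard of this shape in the
RRScreenChangeNotify handler (a rough sketch with assumed names, not the
exact code being reverted):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    static Time last_timestamp;

    static void
    handle_screen_change (Display *xdisplay, Window root)
    {
      XRRScreenResources *resources =
        XRRGetScreenResourcesCurrent (xdisplay, root);

      if (resources->timestamp <= last_timestamp)
        {
          /* Later events of the same burst end up here and get
           * discarded, even though the X screen size may only reach
           * its final value in one of them. */
          XRRFreeScreenResources (resources);
          return;
        }

      /* Flawed assumption: this first "newer" event already carries
       * everything needed to rebuild our view of the world. */
      last_timestamp = resources->timestamp;
      /* ... rebuild state, emit monitors-changed ... */
      XRRFreeScreenResources (resources);
    }
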
https://bugzilla.gnome.org/show_bug.cgi?id=738630
This optimization breaks our use of XRRScreenResources' timestamps to
detect hotplugs in case one of the outputs is disconnected and the
remaining ones don't need any mode, position or transform adjustments.
In that scenario, when applying the new configuration, we resize the X
screen but never call XRRSetCrtcConfig(). Since XRRSetScreenSize()
doesn't take a timestamp and the X server doesn't update its last set
timestamp, when we next get an RRScreenChangeNotify and update
ourselves, XRRScreenResources.timestamp will still be smaller than
XRRScreenResources.configTimestamp, which makes us think we're seeing a
new hotplug. We only avoid an endless loop because the screen size that
we keep applying is always the same, so the X server short-circuits and
stops sending us RRScreenChangeNotify events.
Always calling XRRSetCrtcConfig() ensures that the last set timestamp
will be bigger than configTimestamp in the next event, thus making us
trigger the monitors-changed signal properly.
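In other words, applying a configuration now always goes through the CRTC
call, roughly like this (a simplified sketch, not the actual mutter code):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    /* XRRSetScreenSize() carries no timestamp, so on its own it never
     * bumps the server's last set timestamp.  Setting the CRTC even when
     * nothing about it changed does bump it, so the next
     * RRScreenChangeNotify has timestamp > configTimestamp and isn't
     * mistaken for a hotplug. */
    static void
    apply_crtc (Display *xdisplay, XRRScreenResources *resources,
                RRCrtc crtc, int x, int y, RRMode mode,
                Rotation rotation, RROutput *outputs, int n_outputs)
    {
      XRRSetCrtcConfig (xdisplay, resources, crtc, CurrentTime,
                        x, y, mode, rotation, outputs, n_outputs);
    }
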
Note that the X server already does basically the same checks that
we're removing here, so doing this shouldn't be a significant
efficiency loss. See
http://cgit.freedesktop.org/xorg/xserver/tree/randr/rrcrtc.c?h=server-1.16-branch#n539
Recent versions of the QXL driver may set "suggested X|Y" connector
properties. These properties indicate the position at which multiple
displays should be aligned. If all outputs have a suggested position,
the displays are arranged according to these positions; otherwise, we
fall back to the default configuration.
At the moment, we trust that the driver has chosen sane values for the
suggested position.
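For reference, reading one of those properties looks roughly like this (a
sketch; the exact type checks and error handling in mutter differ):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    /* Returns True and fills *value if the output carries the named
     * property ("suggested X" or "suggested Y"). */
    static Bool
    output_get_suggested_value (Display *xdisplay, RROutput output,
                                const char *name, int *value)
    {
      Atom atom = XInternAtom (xdisplay, name, True);
      Atom actual_type;
      int actual_format;
      unsigned long n_items, bytes_after;
      unsigned char *buffer = NULL;

      if (atom == None)
        return False;

      XRRGetOutputProperty (xdisplay, output, atom,
                            0, 1, False, False, AnyPropertyType,
                            &actual_type, &actual_format,
                            &n_items, &bytes_after, &buffer);

      if (actual_format != 32 || n_items < 1)
        {
          if (buffer)
            XFree (buffer);
          return False;
        }

      *value = (int) ((long *) buffer)[0];
      XFree (buffer);
      return True;
    }
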
The X server sends several RRScreenChangeNotify events in a burst when
something happens. Currently, this causes us to rebuild our view of
the world just as many times and to notify the upper layers about it
each time, which results in a lot of bogus repeated work, such as
rebuilding background actors.
We can avoid this extra work by looking at the timestamp in the
XRRScreenResources struct, which records when an X client (including
us!) last changed something, and comparing it with the previous
timestamp.
https://bugzilla.gnome.org/show_bug.cgi?id=738630
meta_monitor_config_match_current() only matches the number of outputs
and whether each output's connector, vendor, product and serial match.
In the X backend, this means that we can't use it to bypass doing any
work, because it won't detect cases where we actually want to update
ourselves, such as an output being turned off either by us or by
another X client (e.g. xrandr).
In the native backend, unlike the xrandr backend, we only get called
on real hotplug events, and thus should always trigger the common
hotplug code to (possibly) apply a new mode, so the check is pointless
anyway.
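For reference, the match boils down to a per-output comparison of this
shape (a simplified sketch based on the description above; field names are
assumptions):

    #include <glib.h>

    typedef struct {
      char *connector;  /* e.g. "HDMI-1" */
      char *vendor;
      char *product;
      char *serial;
    } OutputInfo;

    /* Only identifying strings are compared; whether the output is
     * enabled, and its mode, position or transform, play no part, which
     * is why e.g. an output being turned off goes undetected. */
    static gboolean
    outputs_match (const OutputInfo *a, const OutputInfo *b)
    {
      return g_strcmp0 (a->connector, b->connector) == 0 &&
             g_strcmp0 (a->vendor, b->vendor) == 0 &&
             g_strcmp0 (a->product, b->product) == 0 &&
             g_strcmp0 (a->serial, b->serial) == 0;
    }
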
https://bugzilla.gnome.org/show_bug.cgi?id=738630
In randr events, configTimestamp can be considered the hotplug time,
i.e. whenever the server notices hardware changes, this value will be
updated.
With that in mind, we can rework the logic to make it clearer.
There are no semantic changes.
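In code, the distinction amounts to roughly this (sketch):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    static void
    handle_change (Display *xdisplay, Window root)
    {
      XRRScreenResources *resources =
        XRRGetScreenResourcesCurrent (xdisplay, root);

      /* configTimestamp moves forward when the server notices hardware
       * changes; timestamp moves forward when an X client (possibly us)
       * reconfigures something. */
      Bool hotplug = resources->timestamp < resources->configTimestamp;

      if (hotplug)
        {
          /* New hardware: (possibly) apply a new configuration. */
        }
      else
        {
          /* A client changed the configuration: just update our state
           * and notify the upper layers. */
        }

      XRRFreeScreenResources (resources);
    }
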
The code here was a bit messy with the addition of
hotplug_mode_update, and the comments were a bit confusing and
inaccurate. Clean it up and comment it a bit better to make the flow
and intention clearer.
RandR's QueryOutputProperty request makes the incredible decision of
throwing a BadName if you pass a property that doesn't exist, which
means that trying to check if a property exists is a royal pain when
using Xlib.
XCB's interface is much friendlier about errors, global state and the
like, so use it instead to query our backlight property.
If the property doesn't exist, a BadName error will be generated. This
is a terrible API, but it's what we're stuck with. Use
RRGetOutputProperty instead.
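With XCB, that lookup can be done along these lines (a sketch; the
Backlight atom is assumed to have been interned already, and error
handling is trimmed to the essentials):

    #include <stdint.h>
    #include <stdlib.h>
    #include <xcb/randr.h>

    /* Query the "Backlight" output property; a missing property shows up
     * as an error reply we can simply inspect, rather than an Xlib error
     * handler firing. */
    static long
    get_backlight (xcb_connection_t *conn, xcb_randr_output_t output,
                   xcb_atom_t backlight_atom)
    {
      xcb_generic_error_t *error = NULL;
      xcb_randr_get_output_property_cookie_t cookie =
        xcb_randr_get_output_property (conn, output, backlight_atom,
                                       XCB_ATOM_NONE, 0, 1, 0, 0);
      xcb_randr_get_output_property_reply_t *reply =
        xcb_randr_get_output_property_reply (conn, cookie, &error);
      long value = -1;

      if (error != NULL)
        {
          free (error);
          return -1;
        }

      if (reply != NULL && reply->format == 32 && reply->num_items == 1)
        value = *((int32_t *) xcb_randr_get_output_property_data (reply));

      free (reply);
      return value;
    }
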
The output_id is more of an opaque identifier for the monitor, based on
its underlying ID from the windowing system. Since we also use the term
"output_id" for the output's index, rename our use of the opaque cookie
"output_id" to "winsys_id".