Normal textures should be bound with GL_TEXTURE_2D, but binding an
external texture with GL_TEXTURE_2D results in an error. Reading the
specification for GL_TEXTURE_EXTERNAL_OES, it is unclear whether
getting pixel data from such a texture is possible at all, and tests
show it doesn't return any data; still, in case it eventually starts
working, at least bind the correct target for now.
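A minimal sketch of the idea, with is_external and tex_id as
illustrative placeholders rather than mutter's actual variables
(GL_TEXTURE_EXTERNAL_OES comes from GLES2/gl2ext.h):

  /* Bind the target matching how the texture was created, so a future
   * read-back path at least binds the right thing. */
  GLenum target = is_external ? GL_TEXTURE_EXTERNAL_OES : GL_TEXTURE_2D;

  glBindTexture (target, tex_id);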
https://gitlab.gnome.org/GNOME/mutter/merge_requests/362
Don't just set the internal format to the dummy format "any", as that
causes code intended to be unreachable to be reached. It's not possible
to actually know the internal format of an external texture, however,
so the format we do set might not correspond to the real one.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/362
DRM_EVENT_CONTEXT_VERSION is the latest context version supported by
whatever version of libdrm is present. Mutter was blindly asserting it
supported whatever version that may be, even if it actually didn't.
With libdrm 2.4.78, setting a context version higher than 2 will
attempt to call the page_flip_handler2 vfunc if it is non-NULL, which,
being a random chunk of stack memory, it might well have been.
Set the version to 2; it should only be bumped along with the
appropriate version checks.
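For illustration, a sketch of the safe pattern (the handler names are
made up for the example, not mutter's actual functions):

  #include <xf86drm.h>

  /* Request event context version 2 explicitly rather than
   * DRM_EVENT_CONTEXT_VERSION, so libdrm never dereferences vfuncs
   * this struct doesn't initialize. */
  drmEventContext evctx = {
    .version = 2,
    .vblank_handler = on_vblank,
    .page_flip_handler = on_page_flip,
  };

  drmHandleEvent (drm_fd, &evctx);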
https://bugzilla.gnome.org/show_bug.cgi?id=781034
This makes the build less verbose, as all .gir generation except
Clutter's didn't pass --quiet to g-ir-scanner, making it output long
linking commands. Do this by adding a common introspection_args
variable.
While at it, put -U_GNU_SOURCE in there too, since it was already
passed everywhere anyway; without it the scanner logs warnings.
This is the last remaining feature necessary to achieve
parity with the Autotools build.
A few changes were made to the install locations of the
tests, in order to better accommodate them in Meson:
* Tests are now installed under a versioned folder (e.g.
/usr/share/installed-tests/mutter-4)
* The mutter-cogl.test file is now generated from an .in
file, instead of a series of $(echo)s from within the Makefile.
Notice that those tests need very controlled environments
to run correctly. Mutter's installed tests, for example, will
fail when running under a regular session due to D-Bus
failing to acquire the ScreenCast and/or RemoteScreen names.
When running installed tests, the working directory for Cogl
tests is /usr/libexec/installed-tests/mutter-cogl-4/conform,
which isn't writable by normal users.
To avoid adding stray hidden files to the current directory,
adapt the runner script to fall back to $(mktemp) - which is
available on all platforms we care about - so it doesn't leave
hidden files everywhere.
Presumably glReadPixels itself can be more performant with pixel format
conversions than doing a fix-up conversion on the CPU afterwards. Hence,
pick required_format based on the destination rather than the source, so
that it has a better chance of avoiding the fix-up conversion.
With CoglOnscreen objects, CoglFramebuffer::internal_format (the source
format) is also wrong. It is left at a default value and never set to
reflect reality. In other words, read-pixels had an arbitrary
intermediate pixel format that was used in glReadPixels, and then a
fix-up conversion made it work for the destination.
The render buffers (GBM surface) are allocated as DRM_FORMAT_XRGB8888.
If the destination buffer is allocated as the same format, the Cogl
read-pixels first converts with glReadPixels XRGB -> ABGR because of the
above default format, and then the fix-up conversion does ABGR -> XRGB.
This case was observed with DisplayLink outputs, where the native
renderer must use the CPU copy path to fill the "secondary GPU"
framebuffers.
This patch stops using internal_format and uses the desired destination
format instead.
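As a rough illustration in plain GL (not the actual Cogl code path):
if the destination buffer is laid out as XRGB/BGRA, ask glReadPixels
for that layout directly rather than reading back RGBA and converting
on the CPU afterwards:

  /* Illustrative only: let the driver convert to the destination's
   * layout during read-back instead of a CPU fix-up pass. */
  glReadPixels (0, 0, width, height,
                GL_BGRA, GL_UNSIGNED_BYTE, dst_pixels);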
_cogl_framebuffer_gl_read_pixels_into_bitmap() will still use
internal_format to determine alpha premultiplication state and multiply
or un-multiply as needed. Luckily all the formats involved in the
DisplayLink use case are always _PRE and so is the default
internal_format too, so things work in practice.
Furthermore, the GL texture_swizzle extension can never apply to
glReadPixels. Not even with FBOs, as found in this discussion:
https://gitlab.gnome.org/GNOME/mutter/issues/72
Therefore the target_format argument is hardcoded to something that can
never match anything, which will prevent the swizzle from being assumed.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/313
This function gets hit even today on relatively modern Intel systems (I
have a Haswell Desktop with Mesa 18.2.4) if the pixel format is right.
Presumably it makes things slower for a reason that no longer exists.
According to cb146dc515, this
functionality was refactored into a workaround path in 2012. The commit
message mentions the problem existing before Mesa 8.0.2. The number
refers to https://bugs.freedesktop.org/show_bug.cgi?id=46631 .
The use case where I hit this is when improving support for DisplayLink
video outputs. These are used through a "secondary GPU", and since
DisplayLink does not have a GPU, Mutter uses the CPU copy path with Cogl
read-pixels[1]. If the DisplayLink framebuffer was allocated as
DRM_FORMAT_XRGB8888 (the only format it currently handles correctly),
mesa_46631_slow_read_pixels_workaround would get hit. The render buffer is
the same format as the framebuffer, yet doing the copy XRGB -> XRGB ends
up being slower than XRGB -> XBGR, which makes no sense.
This patch is not sufficient to fix the XRGB -> XRGB copy performance,
but it is required.
This patch reverts CoglGpuInfoDriverBug to what it was before
cb146dc515.
[1] This is not actually true until
https://gitlab.gnome.org/GNOME/mutter/merge_requests/278 is
merged.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/313
The actor-shader-effect test actors are 50px wide, but we check the 51st
pixel. This went undetected until "clutter: Avoid rounding
compensation when invalidating 2D actors" because the paint volumes were
made slightly bigger and the shaders paint all over them (I guess nobody
noticed those actors actually being ~52px wide).
Update the test to check the middle of the opposite edge, so we keep neatly
rounded numbers.
The test does a clutter_actor_set_scale_full() call that only updates
the scale center (i.e. no changes to scale-x/y), but expects to receive
notifications of actor scale changes.
Since "Revert "Revert "ClutterActor: Optimize away idempotent
scale/position updates"" these are optimized away, so just drop the
assumption.
The depth buffer is marked as invalid when 1) the framebuffer is first
created, and 2) whenever GL_DEPTH_TEST is enabled on it. This ensures
the framebuffer's attached depth buffer (if any) is properly cleared
before it's actually used, while avoiding needless clears as long as
depth testing is disabled (the default).
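A rough sketch of the scheme, with illustrative names rather than the
actual Cogl ones:

  typedef struct
  {
    gboolean depth_buffer_clear_needed;
    /* ... */
  } Framebuffer;

  static void
  framebuffer_enable_depth_test (Framebuffer *fb)
  {
    glEnable (GL_DEPTH_TEST);
    /* Depth contents may be stale; clear before the next draw. */
    fb->depth_buffer_clear_needed = TRUE;
  }

  static void
  framebuffer_prepare_draw (Framebuffer *fb)
  {
    if (fb->depth_buffer_clear_needed)
      {
        glClear (GL_DEPTH_BUFFER_BIT);
        fb->depth_buffer_clear_needed = FALSE;
      }
  }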
https://bugzilla.gnome.org/show_bug.cgi?id=782344
This allows the redraw clip to be more constrained, so MetaCullable doesn't
end up rendering portions of window shadows, frame and background when a
window invalidates (part of) its contents.
https://bugzilla.gnome.org/show_bug.cgi?id=782344
The Wacom Xorg driver assigns a serial number of 1 to any pad that doesn't
have a serial. libinput assigns 0. Just treat 1 as 0 here; there are no pens
with a real serial of 1 anyway.
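In code this boils down to something like the following (the variable
name is illustrative):

  /* The wacom X driver reports serial 1 for serial-less pads,
   * libinput reports 0; normalize to 0. */
  if (tool_serial == 1)
    tool_serial = 0;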
Fixes https://gitlab.gnome.org/GNOME/mutter/issues/414
Implements the `MetaScreenCastWindow` interface for screen-cast
`RecordWindow` mode.
The `meta_window_actor_capture_into()` implementation is still pretty
crude and doesn't take subsurfaces and O-R windows into account, so
menus, popups and tooltips won't show up in the capture.
This is left as a future improvement for now.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/306
Typically, to stream the content of a window, we need a way to copy the
content of its window-actor into a buffer, a way to transform relative
input coordinates into a relative position within the window-actor, and
a means to get the window bounds within the buffer.
For this purpose, add a new GType interface `MetaScreenCastWindow` with
the methods needed for screen-cast window mode:
* meta_screen_cast_window_get_buffer_bounds()
* meta_screen_cast_window_get_frame_bounds()
* meta_screen_cast_window_transform_relative_position()
* meta_screen_cast_window_capture_into()
This interface is meant to be implemented by `MetaWindowActor`, which
has access to all the bits necessary to implement these methods.
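As a rough sketch, the interface shape could look like this (the
prototypes are illustrative, not copied from mutter):

  struct _MetaScreenCastWindowInterface
  {
    GTypeInterface parent_iface;

    void     (* get_buffer_bounds)           (MetaScreenCastWindow *window,
                                              MetaRectangle        *bounds);
    void     (* get_frame_bounds)            (MetaScreenCastWindow *window,
                                              MetaRectangle        *bounds);
    void     (* transform_relative_position) (MetaScreenCastWindow *window,
                                              double                x,
                                              double                y,
                                              double               *x_out,
                                              double               *y_out);
    gboolean (* capture_into)                (MetaScreenCastWindow *window,
                                              uint8_t              *data);
  };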
https://gitlab.gnome.org/GNOME/mutter/merge_requests/306
To be able to cast windows, which by definition can change in size
dynamically, we need a way to specify the video crop meta to adjust to
the window size whenever it changes.
Add VideoCrop support with a new optional hook `get_videocrop()` in the
`ScreenCastStreamSrcClass` which, if implemented, lets the subclass
specify a rectangle for the video cropping area.
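A sketch of the optional-hook pattern (names and prototype are
illustrative, not the exact mutter ones):

  struct _ScreenCastStreamSrcClass
  {
    GObjectClass parent_class;

    /* Optional: return TRUE and fill crop_rect to crop the video. */
    gboolean (* get_videocrop) (ScreenCastStreamSrc *src,
                                MetaRectangle       *crop_rect);
  };

  /* At frame time, only apply a crop if the subclass implements it. */
  if (klass->get_videocrop &&
      klass->get_videocrop (src, &crop_rect))
    {
      /* ... update the stream's video crop meta from crop_rect ... */
    }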
https://gitlab.gnome.org/GNOME/mutter/merge_requests/306
Switch configs are only meant to be used in certain circumstances (see
meta_monitor_manager_can_switch_config()). So, when ensuring
configuration and attempting to create a linear configuration, use the
linear configuration constructor function directly instead of going via
the switch-config method; otherwise we might incorrectly fall back to
the fallback configuration (which only enables the primary monitor).
This is a regression introduced by 6267732bec.
Fixes: https://gitlab.gnome.org/GNOME/mutter/issues/342
This eliminates the 1px jitter that was visible when dragging windows,
as well as the flickering that was visible when pushing the cursor
against the right/bottom edges of the screen.
The shader used for computing a vignette currently has two
problems:
* The math is wrong such that the vignette isn't stretched
across the whole actor and so ends abruptly
* There is noticeable banding in its gradient
This commit corrects both problems by fixing the computation
and introducing noise dithering.
If a display device (touchscreen, or tablet with libwacom integration
flags) does not receive a monitor through settings, delegate to the
MetaInputMapper so it receives a mapping through heuristics.
This object takes care of mapping absolute devices to monitors.
To do so, it uses three heuristics, in this order of preference:
- If a device is known to be builtin, it's assigned to the
builtin monitor.
- If the input device and monitor sizes match (with an error margin
of 5%; see the sketch below)
- If input device name and monitor vendor/product in EDID match
somehow (from "full", through "partial", to just "vendor")
The most favorable outputs are then assigned to each device, making
sure not to assign two devices of the same kind to the same output.
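A rough sketch of the size-match heuristic, with illustrative names
(the real logic lives in MetaInputMapper):

  #include <math.h>

  #define SIZE_MATCH_MARGIN 0.05  /* 5% error margin */

  static gboolean
  device_size_matches_monitor (double device_width_mm,
                               double device_height_mm,
                               double monitor_width_mm,
                               double monitor_height_mm)
  {
    double w_err = fabs (device_width_mm - monitor_width_mm) / monitor_width_mm;
    double h_err = fabs (device_height_mm - monitor_height_mm) / monitor_height_mm;

    return w_err <= SIZE_MATCH_MARGIN && h_err <= SIZE_MATCH_MARGIN;
  }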
This object replaces (and is mostly 1:1 with) GsdDeviceMapper in
g-s-d. That object would perform these same heuristics, and let
mutter indirectly know through settings changes. This object allows
doing the same in-process.
Since we no longer set the swap throttled value based
on sync-to-vblank, we can effectively remove it from
Cogl. Throttling swap buffers in Cogl is as much a
historical artifact as sync-to-vblank. Furthermore,
it doesn't make sense to disable it in a compositor,
which is the case with the embedded Cogl.
In addition to that, the winsys vfunc for updating
whenever swap throttling changes could also be removed,
since swap throttling is always enabled now.
Removing it means less code, fewer branches at runtime,
and one less config option to deal with.
This also removes the micro-perf test, since it doesn't
make sense for the case where Cogl is embedded into the
compositor.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/191
Externally setting the sync-to-vblank setting was a feature
added as a workaround for old Intel and ATI graphics cards, and
is not needed anymore. Furthermore, it doesn't make sense to
change it in a compositor whatsoever.
This commit removes all the ways to externally change this
setting, as well as the now unused API.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/191
The xf86-input-wacom driver exports a property with the tool type as known by
the driver. This is a more reliable choice than guessing based on the device
name.
In the touchscreen case, we simply use is_touch_device() to guess which one of
the two options it is. Note that this code should never be hit anyway as we
would've succeeded earlier with a previous is_touch_device() call.
If we are lucky enough and the parent actor has the CLUTTER_ACTOR_NO_LAYOUT
flag set, we skip the relayout, but still redraw the parent actor in its
entirety.
In these cases, we can at least redraw only the area affected by the actor
being shown or hidden.