Presumably glReadPixels itself can do pixel format conversions faster
than a fix-up conversion on the CPU afterwards. Hence, pick
required_format based on the destination rather than the source, so
that it has a better chance of avoiding the fix-up conversion.
With CoglOnscreen objects, CoglFramebuffer::internal_format (the source
format) is also wrong. It is left at a default value and never set to
reflect reality. In other words, read-pixels used an arbitrary
intermediate pixel format in glReadPixels and then a fix-up conversion
made the result match the destination.
The render buffers (GBM surface) are allocated as DRM_FORMAT_XRGB8888.
If the destination buffer is allocated as the same format, the Cogl
read-pixels first converts with glReadPixels XRGB -> ABGR because of the
above default format, and then the fix-up conversion does ABGR -> XRGB.
This case was observed with DisplayLink outputs, where the native
renderer must use the CPU copy path to fill the "secondary GPU"
framebuffers.
This patch stops using internal_format and uses the desired destination
format instead.
_cogl_framebuffer_gl_read_pixels_into_bitmap() will still use
internal_format to determine alpha premultiplication state and multiply
or un-multiply as needed. Luckily, all the formats involved in the
DisplayLink use case are _PRE, and so is the default internal_format,
so things work in practice.
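As a rough sketch (not the actual Cogl code, and the helper name is
made up), the change boils down to deriving the glReadPixels format
from the destination bitmap rather than from internal_format, which
stays in use only for the premultiplication decision:

    #include <cogl/cogl.h>

    /* Sketch only: pick the format handed to glReadPixels from the
     * destination bitmap, so the driver converts straight into what the
     * caller asked for and the CPU fix-up pass can usually be skipped. */
    static CoglPixelFormat
    pick_required_format (CoglBitmap *destination)
    {
      /* Previously something like the framebuffer's internal_format was
       * used here, which for CoglOnscreen was just the default value. */
      return cogl_bitmap_get_format (destination);
    }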
Furthermore, the GL texture_swizzle extension can never apply to
glReadPixels. Not even with FBOs, as found in this discussion:
https://gitlab.gnome.org/GNOME/mutter/issues/72
Therefore the target_format argument is hardcoded to something that can
never match anything, which will prevent the swizzle from being assumed.
This function gets hit even today on relatively modern Intel systems (I
have a Haswell Desktop with Mesa 18.2.4) if the pixel format is right.
Presumably it makes things slower for a reason that no longer exists.
According to cb146dc515, this
functionality was refactored into a workaround path in 2012. The commit
message mentions the problem existing before Mesa 8.0.2. The number
refers to https://bugs.freedesktop.org/show_bug.cgi?id=46631 .
The use case where I hit this is when improving support for DisplayLink
video outputs. These are used through a "secondary GPU", and since
DisplayLink does not have a GPU, Mutter uses the CPU copy path with Cogl
read-pixels[1]. If the DisplayLink framebuffer was allocated as
DRM_FORMAT_XRGB8888 (the only format it currently handles correctly),
mesa_46631_slow_read_pixels_workaround would get hit. The render buffer is
the same format as the framebuffer, yet doing the copy XRGB -> XRGB ends
up being slower than XRGB -> XBGR, which makes no sense.
This patch is not sufficient to fix the XRGB -> XRGB copy performance,
but it is required.
This patch reverts CoglGpuInfoDriverBug into what it was before
cb146dc515.
[1] This is not actually true until
https://gitlab.gnome.org/GNOME/mutter/merge_requests/278 is
merged.
This macro was introduced to allow building Cogl without GLib. However,
that option was removed long ago, and in Mutter we depend on GLib
anyway, so let's get rid of the macro in favor of more consistency.
This is based on `g_clear_object`, so it is a bit more consistent to
write (and prevents the headaches that come from accidentally
forgetting a NULL check).
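A minimal sketch of such a helper, modeled on g_clear_object (the
actual name and definition in Cogl may differ):

    #include <glib.h>
    #include <cogl/cogl.h>

    /* Unref the object and NULL the pointer in one step; does nothing
     * if the pointer is already NULL, so callers don't need a check. */
    #define cogl_clear_object(object_ptr) \
      g_clear_pointer ((object_ptr), cogl_object_unref)

With that, a call like cogl_clear_object (&priv->texture) replaces the
usual if-non-NULL / unref / set-to-NULL dance.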
If texture allocation fails (e.g. on an old GPU with size limit 2048)
then `cogl_texture_new_with_size` was trying to use the same CoglError
twice. The second time was after it had already been freed.
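To illustrate the pattern (this is a sketch, not the literal Cogl
code): the error from the failed first attempt has to be freed and
reset before the same pointer is handed to any fallback attempt:

    #include <cogl/cogl.h>

    static CoglTexture *
    texture_new_with_size_sketch (CoglContext *ctx, int width, int height)
    {
      CoglError *error = NULL;
      CoglTexture2D *tex = cogl_texture_2d_new_with_size (ctx, width, height);

      if (!cogl_texture_allocate (COGL_TEXTURE (tex), &error))
        {
          cogl_error_free (error);
          error = NULL; /* without this, a retry would reuse freed memory */

          cogl_object_unref (tex);
          /* fall back to e.g. a sliced texture here, passing &error again */
          return NULL;
        }

      return COGL_TEXTURE (tex);
    }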
Bug reported and fix provided by Gert van de Kraats.
https://launchpad.net/bugs/1790525
This will allow CoglFramebuffer and its implementations to be exposed
to GJS and other language bindings. This is a necessary part of the
bigger work to make framebuffer management explicit.
CoglOffscreen is effectively a CoglFramebuffer, but it isn't being marked as
such by the GType machinery. This makes it impossible for introspection to
correctly set this class up.
Fix that by adding COGL_GTYPE_IMPLEMENT_INTERFACE() to the declaration
of CoglOffscreen. This does not have any functional changes, though.
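For reference, here is the stock-GObject shape of the same idea (using
plain GObject macros rather than the Cogl-specific COGL_GTYPE_* ones,
with made-up type names): the interface implementation is declared as
part of the type definition, which is what lets GType and introspection
know about the relationship:

    #include <glib-object.h>

    #define MY_TYPE_FRAMEBUFFER (my_framebuffer_get_type ())
    G_DECLARE_INTERFACE (MyFramebuffer, my_framebuffer, MY, FRAMEBUFFER, GObject)

    struct _MyFramebufferInterface
    {
      GTypeInterface g_iface;
    };

    G_DEFINE_INTERFACE (MyFramebuffer, my_framebuffer, G_TYPE_OBJECT)

    static void
    my_framebuffer_default_init (MyFramebufferInterface *iface)
    {
    }

    #define MY_TYPE_OFFSCREEN (my_offscreen_get_type ())
    G_DECLARE_FINAL_TYPE (MyOffscreen, my_offscreen, MY, OFFSCREEN, GObject)

    struct _MyOffscreen
    {
      GObject parent_instance;
    };

    static void
    my_framebuffer_iface_init (MyFramebufferInterface *iface)
    {
    }

    /* The G_IMPLEMENT_INTERFACE() part is the analogue of what the fix
     * adds for CoglOffscreen: without it the type still works, but is
     * not registered as implementing the framebuffer interface. */
    G_DEFINE_TYPE_WITH_CODE (MyOffscreen, my_offscreen, G_TYPE_OBJECT,
                             G_IMPLEMENT_INTERFACE (MY_TYPE_FRAMEBUFFER,
                                                    my_framebuffer_iface_init))

    static void
    my_offscreen_class_init (MyOffscreenClass *klass)
    {
    }

    static void
    my_offscreen_init (MyOffscreen *self)
    {
    }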
Meson uses the 'dependencies' field to determine build ordering and to
parallelize build steps, but 'link_with' does not carry the same
ordering information; this can cause a race condition where header
files are still being generated while the sources that include them
are already being built.
Fix that by only using 'dependencies' instead of 'link_with'.
This commit adds meson build support to mutter. It takes a step away
from the three separate code bases with three different autotools
setups towards a single meson build system. There are still places that
can be
unified better, for example by removing various "config.h" style files
from cogl and clutter, centralizing debug C flags and other configurable
macros, and similar artifacts that are there only because they were once
separate code bases.
There are some differences between the autotools setup and the new
meson one. Here are a few:
The meson setup doesn't generate wrapper scripts for various cogl and
clutter test cases. What these tests did was more or less generate a
tiny script that called an executable with a test name as the argument.
To run particular tests, just run the test executable with the name of
the test as the argument.
The meson setup doesn't install test files anymore. The autotools test
suite was designed to work with installed tests, but it didn't really
do so anymore; with meson, nothing gets installed at all, and the test
suite instead runs against the uninstalled input files, binaries and
libraries. Installable tests may come later.
Tests from cogl, clutter and mutter are run on 'meson test'. In
autotools, only cogl and clutter tests were run on 'make check'.
Install include files in
$prefix/include/mutter-$apiversion/[clutter,cogl,...,meta]/, and
data files in /usr/share/mutter-$apiversion/.... We would still
conflict in some ways, e.g. given that our gettext name is "mutter" and
how keybindings are installed, but it's a step in the right direction.
There are different unit-tests files generated, containing the lists of
tests that test-runner.sh should run. run-tests.sh reads the unit-tests
file in the current directory, which is inconvenient to do when using
meson.
The docs previously suggested that `cogl_frame_info_get_frame_counter`
returned a timestamp of an unknown clock ID. That's not correct. The
cogl source code shows that it does and must use the same clock as
`cogl_get_clock_time`.
Related to https://gitlab.gnome.org/GNOME/mutter/issues/131
Before, we just set it to "none", but this was not enough, since
various calls depend not just on the context being active, but also on
the main rendering surface.
Fixes https://gitlab.gnome.org/GNOME/mutter/issues/21
By the looks of it, commit 95e9fa10ef was papering over an Intel DRI
bug that would make it return post-swizzling pixel data on
glReadPixels(). There have been reports over time of that commit
resulting in wrong colors on other drivers, and lately Mesa >17.3
started showing the same symptoms on Intel.
But texture swizzling works by changing parameters before fragment
shaders run, and reading pixels from an already drawn FBO/texture
doesn't involve those. This should thus use
pixel_format_to_gl_with_target(), which will result in correctly
requesting the same pixel format as the underlying texture, while still
considering it BGRA for the upper layers in the swizzling case.
https://gitlab.gnome.org/GNOME/mutter/issues/72
Closes: #72
We just arbitrarily chose the first EGL config matching the passed
attributes, but we then assumed we always got GBM_FORMAT_XRGB8888. That
was not a correct assumption. Instead, make sure we always pick the
format we expect.
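A minimal sketch of the kind of check meant here (helper name and
surrounding plumbing are made up; the EGL calls are the standard ones):
instead of taking the first matching config, query EGL_NATIVE_VISUAL_ID
and only accept a config whose native format is the GBM format we will
actually allocate:

    #include <glib.h>
    #include <stdint.h>
    #include <EGL/egl.h>

    static gboolean
    choose_config_with_format (EGLDisplay    display,
                               const EGLint *attrs,
                               uint32_t      gbm_format,
                               EGLConfig    *config_out)
    {
      EGLConfig configs[64];
      EGLint n_configs, i;

      if (!eglChooseConfig (display, attrs, configs,
                            G_N_ELEMENTS (configs), &n_configs))
        return FALSE;

      for (i = 0; i < n_configs; i++)
        {
          EGLint visual_id;

          /* On GBM platforms the native visual ID corresponds to the
           * GBM/DRM format, so this filters out e.g. ARGB configs when
           * an XRGB config is expected. */
          if (eglGetConfigAttrib (display, configs[i],
                                  EGL_NATIVE_VISUAL_ID, &visual_id) &&
              (uint32_t) visual_id == gbm_format)
            {
              *config_out = configs[i];
              return TRUE;
            }
        }

      return FALSE;
    }

Called with GBM_FORMAT_XRGB8888, this guarantees the chosen config
matches the render buffers instead of assuming the first config does.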
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/2
On drivers that do not support glGetTexImage2D (i.e. on GLES),
cogl_texture_get_data() has a "feature" that allows it to download
texture data by rendering the texture on an intermediate framebuffer
object and then reading back the data from there. However, this
feature requires the user to have previously set an "active"
framebuffer object in the context, which makes this very tricky
because it is not clear to the developer that they need to do that
in order for some code to work on GLES (of course it works on
desktop GL, so nobody notices...) and additionally the code actually
crashes if an active fbo is not set!
This patch basically removes this feature in order to prevent
the crash and is in line with how this code has evolved in cogl-2.0:
https://git.gnome.org/browse/cogl/commit/?id=6d6a277b8e9a63a8268046e5258877ba94a1da5b
https://bugzilla.gnome.org/show_bug.cgi?id=789961
When creating a renderer with a custom winsys (which is always how
mutter uses cogl), make it possible to pass user data along with the
winsys. Still unused.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
The GL_BGRA definition is not available in GLES2 contexts, which use
the EXT_texture_format_BGRA8888 extension instead, causing a build
failure when trying to use it in those contexts.
Fortunately, this hack is only relevant for GL, so let's guard it to
prevent the failure in GLES2, where that extension is used instead.
https://bugzilla.gnome.org/show_bug.cgi?id=786568
Those buffers are cached and reused across runs, which Mesa indeed does
not consider "static". Properly marking them as dynamic is more
accurate, and brings a slight performance benefit just by avoiding the
resulting (and later silenced) Mesa warning.
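For context, the underlying GL mechanism, shown here as a generic
sketch (using libepoxy's GL header; any GL header works) rather than
the actual Cogl call site: the usage hint passed to glBufferData is
what Mesa looks at when deciding whether the data is "static":

    #include <stddef.h>
    #include <epoxy/gl.h>

    /* The hint is advisory, but GL_STATIC_DRAW on data that is in fact
     * re-uploaded regularly makes Mesa warn and can lead to suboptimal
     * buffer placement; GL_DYNAMIC_DRAW describes that access pattern. */
    static void
    upload_dynamic_vertices (GLuint       buffer,
                             const float *vertices,
                             size_t       size)
    {
      glBindBuffer (GL_ARRAY_BUFFER, buffer);
      glBufferData (GL_ARRAY_BUFFER, size, vertices, GL_DYNAMIC_DRAW);
    }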
https://bugzilla.gnome.org/show_bug.cgi?id=782344
Fixes cogl_texture_get_data() resorting to the wrong conversions when
extracting the texture data. This notably resulted in RGB/RGBA buffers
being copied as-is into BGRA buffers, for instance for the fullscreen
animation,
or single-window screenshots of such buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
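For reference, a usage sketch of the call in question (buffer sizing
simplified, helper name made up): the format argument names the layout
the caller wants, and the conversion from the texture's internal format
has to honor it rather than copy bytes through unchanged:

    #include <stdint.h>
    #include <glib.h>
    #include <cogl/cogl.h>

    static uint8_t *
    get_texture_as_bgra (CoglTexture *texture)
    {
      int width = cogl_texture_get_width (texture);
      int height = cogl_texture_get_height (texture);
      int rowstride = width * 4;
      uint8_t *data = g_malloc (rowstride * height);

      /* Even if the texture is internally RGBA, the data returned here
       * must come out converted to the requested BGRA layout. */
      cogl_texture_get_data (texture,
                             COGL_PIXEL_FORMAT_BGRA_8888_PRE,
                             rowstride,
                             data);

      return data;
    }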