When there's a direct scanout set in the stage view, we
have to use it instead of the view's regular onscreen
framebuffer.
Use the new CoglScanout API to implement blitting to the
stream framebuffer.
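Roughly, the blit path could look like the sketch below. The helper names
and signatures (clutter_stage_view_peek_scanout(),
cogl_scanout_blit_to_framebuffer(), cogl_blit_framebuffer()) are
assumptions here and may not match the tree exactly:

    static gboolean
    blit_view_to_stream_fb (ClutterStageView *view,
                            CoglFramebuffer  *stream_fb,
                            GError          **error)
    {
      CoglScanout *scanout;

      /* With direct scanout, the view's onscreen framebuffer does not
       * contain the frame; blit from the scanout buffer instead. */
      scanout = clutter_stage_view_peek_scanout (view);
      if (scanout)
        return cogl_scanout_blit_to_framebuffer (scanout, stream_fb,
                                                 0, 0, error);

      /* Otherwise blit from the view's regular framebuffer. */
      return cogl_blit_framebuffer (clutter_stage_view_get_framebuffer (view),
                                    stream_fb,
                                    0, 0, 0, 0,
                                    cogl_framebuffer_get_width (stream_fb),
                                    cogl_framebuffer_get_height (stream_fb),
                                    error);
    }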
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1421
PipeWire reuses buffers, and buffer metadata, when streaming. When
the cursor is moved to outside the stream, the cursor meta also needs
to be updated, otherwise it'll use the cursor position of whatever is
in the buffer.
Don't bail out when the cursor is outside the stream, and make sure to
record a metadata-only frame. This only applies to metadata streams.
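As a rough sketch (the helper name and call site are illustrative, not
necessarily how the stream source is actually structured), the cursor
meta has to be rewritten on every move, even when the new position falls
outside the stream:

    static void
    update_cursor_metadata (struct spa_buffer *spa_buffer,
                            int                stream_x,
                            int                stream_y)
    {
      struct spa_meta_cursor *spa_meta_cursor;

      spa_meta_cursor = spa_buffer_find_meta_data (spa_buffer,
                                                   SPA_META_Cursor,
                                                   sizeof (*spa_meta_cursor));
      if (!spa_meta_cursor)
        return;

      /* The buffer and its metadata get reused, so the position must be
       * rewritten every time, or the consumer keeps seeing a stale value. */
      spa_meta_cursor->id = 1;
      spa_meta_cursor->position.x = stream_x;
      spa_meta_cursor->position.y = stream_y;
    }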
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1417
Always force-track the cursor position (so that the X11 backend can keep
it up to date), and if the cursor wasn't part of the sampled
framebuffer when reading pixels into CPU memory, draw it in an extra
pass using cairo after the fact. The cairo-based cursor painting only
happens on the X11 backend, as we otherwise inhibit the HW cursor.
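The extra pass amounts to compositing the cursor sprite onto the
read-back image with cairo, along these lines (helper and parameter
names are illustrative):

    static void
    paint_cursor_with_cairo (uint8_t         *pixels,
                             int              width,
                             int              height,
                             int              stride,
                             cairo_surface_t *cursor_sprite,
                             double           cursor_x,
                             double           cursor_y)
    {
      cairo_surface_t *frame;
      cairo_t *cr;

      /* Wrap the CPU copy of the frame in a cairo image surface and paint
       * the cursor sprite at its stream-relative position. */
      frame = cairo_image_surface_create_for_data (pixels,
                                                   CAIRO_FORMAT_ARGB32,
                                                   width, height, stride);
      cr = cairo_create (frame);
      cairo_set_source_surface (cr, cursor_sprite, cursor_x, cursor_y);
      cairo_paint (cr);
      cairo_destroy (cr);
      cairo_surface_destroy (frame);
    }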
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1391
During animations or other situations that cause multiple frames in a row
to be painted, we might skip recording frames if the max framerate is
reached.
Doing so means we might end up skipping the last frame in a series, so
that the last frame we actually sent was not the final one, making things
appear to get stuck sometimes.
Handle this by creating a timeout whenever we throttle, and when the
timeout callback is triggered, make sure we eventually send an up-to-date
frame.
This is handled differently depending on the source type. A monitor
source type reports 1x1 pixel damage on each view its monitor overlaps,
while a window source type simply records a frame from the surface
directly, except without recording a timestamp, so that timestamps
always refer to when damage actually happened.
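The follow-up mechanism can be sketched roughly as below (the StreamSrc
type, its field, and the helper names are made up for illustration):

    static gboolean
    follow_up_frame_cb (gpointer user_data)
    {
      StreamSrc *src = user_data;

      src->follow_up_frame_source_id = 0;
      record_follow_up_frame (src);

      return G_SOURCE_REMOVE;
    }

    static void
    maybe_schedule_follow_up_frame (StreamSrc    *src,
                                    unsigned int  timeout_ms)
    {
      /* Only arm one follow-up timeout at a time; if more frames get
       * throttled before it fires, the pending timeout already covers
       * them. */
      if (src->follow_up_frame_source_id)
        return;

      src->follow_up_frame_source_id =
        g_timeout_add (timeout_ms, follow_up_frame_cb, src);
    }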
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1361
Now that we don't use the record function to bail out early depending on
implicit state (e.g. not recording pixels if only the cursor moved),
let it simply report an error when it fails, as we should no longer ever
return without pixels unless something actually failed.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1361
Both do more or less the same thing, just with different methods: one puts
pixels into a buffer using the CPU, the other puts pixels into a buffer
using the GPU.
However, they behave slightly differently, which they shouldn't. Let's
first address the misleading disconnect in naming, and later we'll make
them behave more similarly.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1361
We'd check if there was any queued redraw on the stage, but this is
inappropriate for two reasons:
1) A monitor or area screen cast source only cares about damage on a
subset of the stage.
2) The global pending-redraw state is going away as paint scheduling
becomes more view centric.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
If there is a paint context available (i.e. for the phases that are
during the actual stage paint), pass it along to the callbacks, so that
the callback implementations can change their operation depending on the
paint context state.
This also means we can get the current view from the paint context,
instead of the temporarily used field in the instance struct.
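For example, a callback invoked during the paint phases could look up
the view roughly like this (the callback signature is illustrative):

    static void
    on_after_paint (MetaScreenCastStreamSrc *src,
                    ClutterPaintContext     *paint_context)
    {
      ClutterStageView *view = NULL;

      /* A paint context is only available for the phases that happen
       * during the actual stage paint. */
      if (paint_context)
        view = clutter_paint_context_get_stage_view (paint_context);

      if (!view)
        return;

      /* ... only record if this view overlaps the streamed area ... */
    }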
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1207
Even though cogl_framebuffer_flush() was supposed to be enough,
it ends up creating streams with odd visual glitches that look
very much like unfinished frames.
Switch back to cogl_framebuffer_finish(), which is admittedly
overkill, but it's what works for now. There is anecdotal
evidence that it doesn't incur worse performance.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/1086
Prior to this commit the stage was drawn separately for each logical
monitor. This made it possible to draw different parts of the stage with
different transformations, e.g. with a different viewport to implement
HiDPI support.
Go even further and have one view per CRTC. This causes the stage to
e.g. draw two mirrored monitors twice, instead of using the same
framebuffer on both. This enables us to do two things: one is to support
tiled monitors and monitor mirroring using the EGLStreams backend; the
other is that it'll enable us to tie rendering directly to the CRTC it
will render for. It is also a requirement for rendering being affected
by CRTC state, such as gamma.
It'll be possible to still inhibit re-drawing of the same content
twice, but it should be implemented differently, so that it will still
be possible to implement features requiring the CRTC split.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/1042
This is inspired by 98892391d7 where the usage of
`g_signal_handler_disconnect()` without resetting the corresponding
handler id later resulted in a bug. Using `g_clear_signal_handler()`
makes sure we avoid similar bugs and is almost always the better
alternative. We already use it for new code; let's clean up the old code
to also use it.
A further benefit is that it can get called even if the passed id is
0, allowing us to remove a lot of now unnecessary checks. Additionally,
`g_clear_signal_handler()` checks for the right type size, forcing us
to clean up all places where we used `guint` instead of `gulong`.
No functional changes intended here and all changes should be trivial,
thus bundled in one big commit.
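In practice the pattern change looks like this (the handler field name
is just an example):

    /* Before: manual disconnect; forgetting to reset the id caused bugs. */
    if (priv->monitors_changed_handler_id)
      {
        g_signal_handler_disconnect (monitor_manager,
                                     priv->monitors_changed_handler_id);
        priv->monitors_changed_handler_id = 0;
      }

    /* After: one call that also accepts an id of 0 and resets it; the id
     * field must be a gulong for the size check to pass. */
    g_clear_signal_handler (&priv->monitors_changed_handler_id,
                            monitor_manager);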
https://gitlab.gnome.org/GNOME/mutter/merge_requests/940
Make the monitor implementation do things strictly related to its own
source type, leaving the Spa related logic and cursor read back in the
generic layer, later to be reused by the window source type
implementation.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/413
It scaled the logical monitor rect with the scale to get the stream
dimensions, but that is only valid when
'scale-monitor-framebuffers' is enabled. Even when it was, it didn't work
properly, as clutter_stage_capture_into() doesn't yet handle scaled
monitor framebuffers correctly.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/415
The 'cursor-mode', which currently is limited to RecordMonitor(), allows
the user to either do screen casts where the cursor is hidden, embedded
in the framebuffer, or sent as PipeWire stream metadata.
The latter allows the user to get cursor updates sent, including the
cursor sprite, without requiring a stage paint each frame. Currently
this is done by using the cursor sprite texture and either reading
directly from it or, in case the texture is scaled, drawing to an
offscreen framebuffer which is read from instead.
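A sketch of the read back (simplified; the actual code paths, pixel
formats and helper names differ):

    static void
    read_cursor_sprite (CoglContext *ctx,
                        CoglTexture *sprite,
                        int          dst_width,
                        int          dst_height,
                        uint8_t     *data,
                        int          rowstride)
    {
      if (cogl_texture_get_width (sprite) == dst_width &&
          cogl_texture_get_height (sprite) == dst_height)
        {
          /* Unscaled: read the texel data directly from the sprite. */
          cogl_texture_get_data (sprite, COGL_PIXEL_FORMAT_ARGB_8888,
                                 rowstride, data);
        }
      else
        {
          /* Scaled: draw the sprite into an offscreen framebuffer at the
           * destination size and read the pixels back from there. */
          CoglTexture2D *tex;
          CoglOffscreen *offscreen;
          CoglFramebuffer *fb;
          CoglPipeline *pipeline;

          tex = cogl_texture_2d_new_with_size (ctx, dst_width, dst_height);
          offscreen = cogl_offscreen_new_with_texture (COGL_TEXTURE (tex));
          fb = COGL_FRAMEBUFFER (offscreen);

          pipeline = cogl_pipeline_new (ctx);
          cogl_pipeline_set_layer_texture (pipeline, 0, sprite);
          cogl_framebuffer_orthographic (fb, 0, 0, dst_width, dst_height,
                                         -1, 1);
          cogl_framebuffer_draw_rectangle (fb, pipeline,
                                           0, 0, dst_width, dst_height);
          cogl_framebuffer_read_pixels (fb, 0, 0, dst_width, dst_height,
                                        COGL_PIXEL_FORMAT_ARGB_8888, data);

          cogl_object_unref (pipeline);
          cogl_object_unref (offscreen);
          cogl_object_unref (tex);
        }
    }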
https://gitlab.gnome.org/GNOME/mutter/merge_requests/357
To get a consistent behaviour no matter whether HW cursors are in use or
not, make sure to copy the framebuffer content before the stage overlays
(cursor sprite textures) are painted.
https://gitlab.gnome.org/GNOME/mutter/merge_requests/357
The order and way include macros were structured was chaotic, with no
real common thread between files. Try to tidy up the mess with some
common scheme, to make things look less messy.
As of commit 5f5ef3de2cdc816dab82cb7eb5d7171bee0ad2c5 in pipewire the
stream creator can find out the node ID of the stream it created.
So instead of using a special purpose entry in the info property box,
and having the application discover the stream by monitoring added nodes
for that special purpose entry, just pass the node ID directly.
https://bugzilla.gnome.org/show_bug.cgi?id=784199
This commit adds basic screen casting and remote desktop
functionality. This works by exposing two D-Bus API services:
org.gnome.Mutter.ScreenCast and org.gnome.Mutter.RemoteDesktop.
The remote desktop API is used to create remote desktop sessions. For
each session, a D-Bus object is created, and an application can manage
the session by sending messages to the session object. A remote desktop
session allows the user to emit input events using the D-Bus methods on the
session object. To get framebuffer content, the application should
create an associated screen cast session.
The screen cast API is used to create screen cast sessions. One can so
far either create stand-alone screen cast sessions, or a screen cast
session associated with a remote desktop session. A remote desktop
associated screen cast session is managed by the remote desktop session.
So far only remote desktop managed screen cast sessions are implemented.
Each screen cast session may have one or more streams. A screen cast
stream is a stream of buffers of some part of the compositor content.
So far API exists for creating streams of monitors and windows, but
only monitor streams are implemented.
When a screen cast session is started, one PipeWire stream is
created for each screen cast stream created for the session. When this
has happened, a PipeWireStreamAdded signal is emitted on the stream
object, passing a unique identifier. The application may use this
identifier to find the associated stream being advertised by the
PipeWire daemon.
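As an illustration only (the method signatures are simplified here and
may not match the introspection XML exactly), a client would follow this
rough call sequence:

    #include <gio/gio.h>

    int
    main (void)
    {
      GDBusConnection *bus;
      GVariant *ret;
      const char *session_path;

      bus = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, NULL);

      /* Ask org.gnome.Mutter.ScreenCast to create a session object. */
      ret = g_dbus_connection_call_sync (bus,
                                         "org.gnome.Mutter.ScreenCast",
                                         "/org/gnome/Mutter/ScreenCast",
                                         "org.gnome.Mutter.ScreenCast",
                                         "CreateSession",
                                         g_variant_new ("(a{sv})", NULL),
                                         G_VARIANT_TYPE ("(o)"),
                                         G_DBUS_CALL_FLAGS_NONE,
                                         -1, NULL, NULL);
      g_variant_get (ret, "(&o)", &session_path);

      /* Next, call RecordMonitor() on the session object to get a stream
       * object, subscribe to its PipeWireStreamAdded signal, then call
       * Start(); the signal delivers the identifier used to find the
       * stream advertised by the PipeWire daemon. */

      g_variant_unref (ret);
      g_object_unref (bus);

      return 0;
    }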
The remote desktop and screen cast functionality must be explicitly
enabled at ./configure time by passing --enable-remote-desktop to
./configure. Doing this will build both screen cast and remote desktop
support.
To actually enable the screen casting and remote desktop, the user must
enable the experimental feature. See
org.gnome.mutter.experimental-features.
https://bugzilla.gnome.org/show_bug.cgi?id=784199