Virtual input devices make it possible to inject input events as if
they came from real hardware. This is useful for things such as
remote control, for example via a remote desktop session.
The API so far only consists of stubs.
https://bugzilla.gnome.org/show_bug.cgi?id=765009
Depending on clutter_input_device_get_mapping(), or on whether the
current tool is a cursor or lens (those don't make any sense in
absolute mode), relative motions will be reported.
This function only applies to tablets, and thus will error out
unless it's called on a CLUTTER_TABLET_DEVICE. This allows switching
between absolute and relative mapping on those devices on the fly,
as the mapping is optional.
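A minimal sketch of how a caller might use this, assuming a setter
named clutter_input_device_set_mapping_mode() and an enum value
CLUTTER_INPUT_DEVICE_MAPPING_RELATIVE (both names are assumptions
here; only the getter above is given):

    /* Only tablet devices accept a mapping mode. */
    if (clutter_input_device_get_device_type (device) == CLUTTER_TABLET_DEVICE)
      clutter_input_device_set_mapping_mode (device,
                                             CLUTTER_INPUT_DEVICE_MAPPING_RELATIVE);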
This will only be practical for pads (and maybe generic button sets
in the future?); we just need to know the button number, as the
events will also contain a number as the identifier.
Those map closely to what we get from libinput. Button events have
been given their own separate struct; their semantics fall somewhere
in between ClutterButtonEvent and ClutterKeyEvent, so they are
better emitted as their own set.
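A rough, hypothetical sketch of what such a struct could look like
(name and fields are for illustration only, not the actual definition):

    struct _ClutterPadButtonEvent /* hypothetical shape */
    {
      ClutterEventType type;      /* press or release, like a button event */
      guint32 time;
      guint32 button;             /* pad button number, as libinput gives it */
      ClutterInputDevice *device; /* the pad device that emitted the event */
    };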
Currently the setter doesn't take ownership of the value, but dispose()
will unref it (and thus release someone else's reference). Fix this by
taking ownership of the property value in the setter.
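A minimal sketch of the resulting pattern, using g_set_object() so the
setter owns its own reference (type and field names are illustrative):

    #include <glib-object.h>

    typedef struct { GObject parent; GObject *bar; } Foo;

    static void
    foo_set_bar (Foo *self, GObject *bar)
    {
      /* g_set_object() refs the new value and unrefs the old one, so
       * the object owns the reference that dispose() later drops. */
      g_set_object (&self->bar, bar);
    }

    static void
    foo_dispose (GObject *object)
    {
      Foo *self = (Foo *) object;

      /* Safe now: we only drop a reference we actually own.
       * (Chaining up to the parent dispose is omitted here.) */
      g_clear_object (&self->bar);
    }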
There's no need for a cast for printing an object's type or address,
so we can remove variables that are unused when not building with
CLUTTER_ENABLE_DEBUG.
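For example, the type name and address can be printed straight from
the object pointer (a generic illustration, not the exact call sites):

    CLUTTER_NOTE (ACTOR, "Mapping actor '%s' [%p]",
                  G_OBJECT_TYPE_NAME (actor), actor);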
Mutter (and libmutter users) are the only users of this version of
cogl, and will more or less only use the cogl-1.0, cogl-2.0 and cogl
experimental API variants. Having different versions of the same API
depending on what file includes it is error prone and confusing, so
let's just remove that possibility.
https://bugzilla.gnome.org/show_bug.cgi?id=768977
We bypass our build configuration to fetch API from a version which
isn't the one we actually use. Stop bypassing and just admit that the
1.0 API is still there, but still deprecated.
https://bugzilla.gnome.org/show_bug.cgi?id=768977
We didn't include clutter-build-config.h, meaning we included a
different cogl API than the rest of clutter. That API contains the
function cogl_sqrti, which was the only thing used. Let's include
the build config file and stop depending on the API that is no
longer exposed to us.
https://bugzilla.gnome.org/show_bug.cgi?id=768977
CoglFrameInfo is a frame info container associated with a single
onscreen framebuffer. The clutter stage will eventually support
drawing a stage frame with multiple onscreen framebuffers, and thus
needs its own frame info container.
This patch introduces a new stage signal 'presented' and an
accompanying ClutterFrameInfo, and adapts the stage windows and
previous onscreen frame callback users to use the signal and the new
info container.
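A compositor could then listen for frame presentation roughly like
this (the exact handler signature is an assumption based on the
description above):

    static void
    on_stage_presented (ClutterStage     *stage,
                        ClutterFrameInfo *frame_info,
                        gpointer          user_data)
    {
      /* frame_info carries per-frame data such as the frame counter
       * and the presentation time of the frame that just hit the
       * screen. */
    }

    /* e.g. in stage setup code: */
    g_signal_connect (stage, "presented",
                      G_CALLBACK (on_stage_presented), NULL);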
https://bugzilla.gnome.org/show_bug.cgi?id=768976
Add support for drawing a stage using multiple framebuffers, each
making up one part of the stage. This works by the stage backend
(ClutterStageWindow) providing a list of views, which are used for
splitting up the stage into different regions.
A view layout, for now, is a set of rectangles. The stage window
(i.e. the stage "backend") will use this information when drawing a
frame, using one framebuffer for each view. The scene graph is
adapted to explicitly take a view when painting the stage. It will
use this view, its assigned framebuffer and layout to offset and
clip the drawing accordingly.
This effectively removes any notion of "stage framebuffer", since each
stage now may consist of multiple framebuffers. Therefore, API
involving this has been deprecated and made a no-op, namely
clutter_stage_ensure_context(). Callers are now assumed to either
always use a framebuffer reference explicitly, or push/pop the
framebuffer of a given view where the code has not yet changed to use
the explicit-buffer-using cogl API.
Currently only the nested X11 backend supports this mode fully, and the
per view framebuffers are all offscreen. Upon frame completion, it'll
blit each view's framebuffer onto the onscreen framebuffer before
swapping.
Other backends (X11 CM and native/KMS) are adapted to manage a
full-stage view. The X11 CM backend will continue to use this
method, while the native/KMS backend will later be adapted to use
multiple view drawing.
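A simplified sketch of what per-view painting implies; the accessor
names (clutter_stage_view_get_framebuffer(),
clutter_stage_view_get_layout(), _clutter_stage_window_get_views())
and the paint helper are assumptions used for illustration:

    GList *l;

    for (l = _clutter_stage_window_get_views (stage_window); l; l = l->next)
      {
        ClutterStageView *view = l->data;
        CoglFramebuffer *framebuffer;
        cairo_rectangle_int_t view_layout;

        framebuffer = clutter_stage_view_get_framebuffer (view);
        clutter_stage_view_get_layout (view, &view_layout);

        /* Offset and clip painting to this view's region of the stage,
         * then paint that region into the view's own framebuffer. */
        paint_stage_region (framebuffer, &view_layout);
      }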
https://bugzilla.gnome.org/show_bug.cgi?id=768976
Instead of assuming there is a single onscreen framebuffer, use the
helper functions for setting the frame callback and getting the frame
counter.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
In preparation for allowing drawing onto multiple onscreen
framebuffers, move the onscreen framebuffer handling to the
corresponding winsys dependent backends.
Currently the onscreen framebuffer is still accessed, but, as can be
seen from the use of "legacy" in the accessor name, it should be
considered the legacy method. Eventually only the X11 Compositing
Manager backend will make use of the legacy single onscreen
framebuffer API.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
Split the stage window implementations into three separate objects: one
for X11 as a compositing manager, one for X11 running as a nested
Wayland compositor, and one for running with the native backend.
The new stage window implementations are only thin shells; this is
in preparation for making the stage windows diverge more in their
behavior.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
Instead of passing around the KMS file descriptor via clutter to cogl,
just make our own clutter backend create the cogl renderer and set the
KMS fd.
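Roughly, the backend can now do something like the following when
creating its renderer (sketch, error handling omitted; kms_fd is the
DRM device file descriptor the backend already holds):

    CoglRenderer *renderer = cogl_renderer_new ();

    cogl_renderer_set_winsys_id (renderer, COGL_WINSYS_ID_EGL_KMS);
    cogl_kms_renderer_set_kms_fd (renderer, kms_fd);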
https://bugzilla.gnome.org/show_bug.cgi?id=768976
The actor-offscreen-redirect test didn't initialize its state
properly, so it could potentially end up with the "was_painted"
state being TRUE from the start, effectively skipping the whole
test.
Fixing it so that the test even runs resulted in the test getting
stuck in a deadlock, because the verification that a frame was drawn
was done from a paint callback. A paint callback holds the mutex, so
when the test case tried to run the main loop, the next paint
callback code path would try to re-lock the same mutex, thus
deadlocking.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
In cogl use cogl-config.h and in clutter use clutter-build-config.h.
We can't use clutter-config.h in clutter because it's already used
and installed.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
Introduce two new clutter backends: MetaClutterBackendX11 and
MetaClutterBackendNative. So far they only wrap ClutterBackendX11
and ClutterBackendEglNative respectively, but the aim is to move
things from the original clutter backends when needed.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
Since the check for backend->cogl_context was accidentally moved
to clutter_backend_do_real_create_context(), the GLib source that
is created at the end of clutter_backend_do_create_context() is
created and added each time create_context() is called, though
create_context() is supposed to be idempotent.
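Conceptually, the guard belongs in clutter_backend_do_create_context()
itself, something like (sketch, not the literal diff):

    if (backend->cogl_context != NULL)
      return TRUE; /* context and its GSource are already set up */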
https://bugzilla.gnome.org/show_bug.cgi?id=768243
The 'select-all' action is currently only bound to <ctrl>a, which makes
it awkward to use when caps-lock is active, and is inconsistent with GTK+.
Just accept both upper- and lower-case variants.
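With ClutterBindingPool this amounts to installing the action for both
keyvals, along these lines (the callback name is illustrative):

    clutter_binding_pool_install_action (pool, "select-all",
                                         CLUTTER_KEY_a, CLUTTER_CONTROL_MASK,
                                         G_CALLBACK (real_select_all),
                                         NULL, NULL);
    clutter_binding_pool_install_action (pool, "select-all",
                                         CLUTTER_KEY_A, CLUTTER_CONTROL_MASK,
                                         G_CALLBACK (real_select_all),
                                         NULL, NULL);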
https://bugzilla.gnome.org/show_bug.cgi?id=766326
The device name is something more natural, similar to what's seen
in X11; the sysname is rather the event node basename, which may
also vary depending on device insertion/detection time.
Out of tree, Clutter depends on Cogl's required dependencies via
the pkg-config file. Since Clutter and Cogl moved in tree, those
dependencies need to be checked by Clutter itself, otherwise
headers and libraries won't be found.
ClutterActor should warn if a user tries to add or remove an actor to,
and from, itself on the scene graph.
Clutter will likely crash, or warn way down the line, but if we can make
debugging simpler then we should.
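The kind of early warning this implies is a plain precondition check
in add_child()/remove_child(), e.g.:

    g_return_if_fail (CLUTTER_IS_ACTOR (child));
    g_return_if_fail (child != self);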
For the GDK backend we're using the GdkDeviceManager API, which
maps to Clutter's own device manager API. GDK has now moved to a
per-seat device management model, and deprecated the device manager
singleton one.
In order to avoid the deprecation warnings, we'd have to implement
a model similar to the GDK one inside the generic Clutter API, but
that would also require moving all the other backends to it, which
is pretty pointless.
Instead, we can disable deprecation warnings for the
ClutterDeviceManager implementation inside the GDK backend.
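In practice that means bracketing the GdkDeviceManager usage with
GLib's deprecation guards, e.g.:

    G_GNUC_BEGIN_IGNORE_DEPRECATIONS
    device_manager = gdk_display_get_device_manager (display);
    G_GNUC_END_IGNORE_DEPRECATIONS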
This updates config.h.win32.in to be in-sync with the entries that are in
the config.h.in that is generated by the autotools builds. In particular,
for Visual Studio builds, we default to enable all available drivers ("*").
The function should return true not only if the actor is being painted
by a ClutterClone, but also if it's inside a sub-graph being painted by
a ClutterClone.
https://bugzilla.gnome.org/show_bug.cgi?id=756371
The constrain callback cannot rely on the pointer position of the
corresponding ClutterInputDevice to get the actual delta of the motion
event that is to be constrained, since it is only updated when an
event is dispatched. So change the API to pass the previous pointer
position when constraining.
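The constrain callback then receives the previous position alongside
the new one, roughly a signature like this (the typedef name and
parameter names are an assumption based on the description above):

    typedef void (* ClutterPointerConstrainCallback) (ClutterInputDevice *device,
                                                      guint32             time,
                                                      float               prev_x,
                                                      float               prev_y,
                                                      float              *new_x,
                                                      float              *new_y,
                                                      gpointer            user_data);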
https://bugzilla.gnome.org/show_bug.cgi?id=752752
Compositors need more detailed information about motion events. Make it
possible to retrieve this information when running the evdev backend by
adding the information to the backend specific event struct.
https://bugzilla.gnome.org/show_bug.cgi?id=752752
Since commit 6183eb3632 we disabled swap
throttling in favour of being driven by the GDK frame clock (and thus by
the compositor).
Compositors may decide to unredirect full screen windows to avoid the
performance penalty of the additional copy, especially on X11, which
means that a Clutter application marked as full screen is not going to
be driven by the compositor, and it's not going to be throttled by the
underlying GL machinery. This has a performance impact on constrained
platforms.
For this reason, we should re-enable swap throttling when the window is
full screen.
As the change was introduced especially because of Wayland, we
should check whether we're running as a client under a Wayland
compositor; if we are, we always keep swap throttling disabled, as
the compositor will always manage our output, even when full
screen.