Instead of listing every public symbol inside an ancillary file, we can
use compiler annotations. This scheme is also used by GLib and GTK+.
The symbols file is left in tree until the Visual Studio rules are
fixed, but it's not used any more during distcheck.
I double-checked that the exposed ABI is the same before and after this
change, except for symbols that were never meant to be public in the
first place, and that escaped our attention when we generated the first
version of the symbols file.
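As an illustration of the scheme (the macro and function names below are made up, not the actual Clutter symbols; the annotation is paired with building under -fvisibility=hidden):

    #ifdef __GNUC__
    #define EXAMPLE_EXTERN __attribute__ ((visibility ("default"))) extern
    #else
    #define EXAMPLE_EXTERN extern
    #endif

    /* annotated: exported, part of the public ABI */
    EXAMPLE_EXTERN void example_public_function (void);

    /* not annotated: hidden when the library is built with
     * -fvisibility=hidden, so it never leaks into the ABI */
    void example_private_function (void);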
All backends follow the same pattern of queueing events first in
ClutterMainContext, then copying them to a ClutterStage queue and
immediately freeing them. Instead, we can just pass ownership of events
directly to ClutterStage, thus avoiding the allocation and copy in
between.
https://bugzilla.gnome.org/show_bug.cgi?id=711857
Keep track of the button modifier mask state in
ClutterInputDeviceWayland and push its state to new button events going
out.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=708781
We do not strictly require the 'swap-region' Cogl feature in order to use
clipped redraws: they work equally well with just the 'buffer-age' Cogl
feature.
https://bugzilla.gnome.org/show_bug.cgi?id=726313
_clutter_stage_window_can_clip_redraws is used to check for clipped redraws
support but can_clip_redraws is not implemented by clutter-stage-wayland so
it always returns FALSE causing full screen redraws.
Fix that by implementing can_clip_redraws in clutter-stage-wayland.
https://bugzilla.gnome.org/show_bug.cgi?id=726315
There could be times when we don't see a device appear at
initialization time, for instance when we're VT-switched away while we
initialize, and thus we can't ever rely on a main seat appearing.
Always create a main seat with logical pointer/keyboard devices, and
tie the first physical seat that comes in to the main seat.
https://bugzilla.gnome.org/show_bug.cgi?id=726199
We're going to create the main seat at an earlier time, when
we don't have the physical libinput_seat yet, so we need to
do the association later.
https://bugzilla.gnome.org/show_bug.cgi?id=726199
The GetSystemMetrics() function returns wrong values for SM_CXSIZEFRAME,
SM_CYSIZEFRAME, SM_CXFIXEDFRAME and SM_CYFIXEDFRAME when built with Visual
Studio 2012 and 2013 (unless the XP compatibility setting for the
PlatformToolset entry is turned on), causing the windows of Clutter programs
to automatically shrink to the point where they become unusable.
This patch uses AdjustWindowRectEx() for builds using Visual Studio 2012
and later, which deduces the required height and width of the window
properly. Unfortunately we can't use this for the VS 2008/2010 builds, as
there it causes the window to continually expand as the program runs.
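For reference, a minimal sketch of the AdjustWindowRectEx() approach; the style flags are illustrative and must match the ones passed to CreateWindowEx():

    #include <windows.h>

    /* Compute the outer window size needed for a given client-area size,
     * instead of adding GetSystemMetrics() frame metrics by hand. */
    static void
    get_full_window_size (int client_width, int client_height,
                          int *width, int *height)
    {
      RECT rect = { 0, 0, client_width, client_height };
      DWORD style = WS_OVERLAPPEDWINDOW;   /* illustrative */
      DWORD ex_style = 0;                  /* illustrative */

      /* FALSE: the window has no menu bar */
      AdjustWindowRectEx (&rect, style, FALSE, ex_style);

      *width = rect.right - rect.left;
      *height = rect.bottom - rect.top;
    }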
https://bugzilla.gnome.org/show_bug.cgi?id=725873
We should use the Xkb API to query the direction of the key map,
depending on the group. To get a valid result we need to go over
the Unicode equivalents of the key symbols for each group, so we
should cache the result.
The code used to query and cache the key map direction is taken
from GDK.
https://bugzilla.gnome.org/show_bug.cgi?id=705779
We should set the direction on the PangoContext when creating a
PangoLayout, using a best effort based on the contents of the text
itself, and falling back to the text direction of the widget in case
that fails.
https://bugzilla.gnome.org/show_bug.cgi?id=705779
Currently clutter_device_manager_xi2_get_core_device always
does a round trip to the X server to query the client pointer.
Avoid that by caching the client pointer and only updating it when the
XI devices change.
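A sketch of the caching pattern, with illustrative names (the real lookup and invalidation happen inside the XI2 device manager):

    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    /* cached device id; -1 means "unknown, ask the server again" */
    static int cached_client_pointer = -1;

    static int
    get_client_pointer_id (Display *xdisplay)
    {
      if (cached_client_pointer == -1)
        XIGetClientPointer (xdisplay, None, &cached_client_pointer);

      return cached_client_pointer;
    }

    /* called when an XIHierarchyEvent signals that devices changed */
    static void
    invalidate_client_pointer (void)
    {
      cached_client_pointer = -1;
    }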
https://bugzilla.gnome.org/show_bug.cgi?id=725561
ClutterInputDevice's default initial coordinates are (-1, -1), and since
they're updated from events in a relative way it means that the
pointer can go outside the stage right from the first event.
We usually leave this up to higher layers to fix through the pointer
constraint callback, but that doesn't work if the first event doesn't
put the pointer immediately inside the stage.
https://bugzilla.gnome.org/show_bug.cgi?id=725103
Doing so is unlikely to work reliably. Instead, switching the keymap
should be done at a time when no key is currently pressed down, but
let's leave that task to higher level code.
This allows us to remove key state tracking at yet another level in
the stack since higher level code likely already tracks this for other
purposes.
https://bugzilla.gnome.org/show_bug.cgi?id=725102
The kernel keyboard repeat functionality isn't configurable and
libinput rightfully ignores it.
This implements keyboard repeat in userspace, allowing consumers to
set the initial delay and the repeat interval.
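A minimal sketch of such a userspace repeat timer using GLib timeouts; the emit_key_press() helper and the hard-coded delay/interval values are only illustrative:

    #include <glib.h>

    typedef struct {
      guint32 key;        /* key currently held down */
      guint   source_id;  /* repeat timer, 0 when nothing is repeating */
    } RepeatState;

    /* illustrative stand-in for queueing a synthetic key press event */
    static void
    emit_key_press (guint32 key)
    {
      g_message ("repeat key %u", key);
    }

    static gboolean
    repeat_key (gpointer data)
    {
      RepeatState *state = data;

      emit_key_press (state->key);
      return G_SOURCE_CONTINUE;   /* keep firing at the repeat interval */
    }

    static gboolean
    initial_delay_elapsed (gpointer data)
    {
      RepeatState *state = data;

      emit_key_press (state->key);
      /* switch from the initial delay to the repeat interval */
      state->source_id = g_timeout_add (33, repeat_key, state);
      return G_SOURCE_REMOVE;
    }

    static void
    on_key_press (RepeatState *state, guint32 key)
    {
      if (state->source_id != 0)
        g_source_remove (state->source_id);

      state->key = key;
      state->source_id = g_timeout_add (250, initial_delay_elapsed, state);
    }

    static void
    on_key_release (RepeatState *state)
    {
      if (state->source_id != 0)
        g_source_remove (state->source_id);
      state->source_id = 0;
    }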
https://bugzilla.gnome.org/show_bug.cgi?id=725102
The evdev backend has always been excluded from Clutter's API
stability guarantee though in an informal way. This commit makes it
explicit by forcing users to define CLUTTER_ENABLE_COMPOSITOR_API.
https://bugzilla.gnome.org/show_bug.cgi?id=725102
Instead of having its own evdev input device processing implementation,
make clutter's evdev backend use libinput to do input device processing
for it.
Two GObject parameters of ClutterInputDeviceEvdev (sysfs-path and
device-path) are removed as they are not used any more.
Before, ClutterDeviceManagerEvdev had one virtual core keyboard and one
virtual core pointer device. These are now separated into seats, each of
which has one virtual core keyboard and one virtual core pointer device.
The 'global' core keyboard and pointer device are the core keyboard and
pointer device of the first seat that is created.
A ClutterInputDeviceEvdev can, as before, represent either a real physical
device or a virtual device, but is now created via
_clutter_input_device_evdev_new() for real devices, or
_clutter_input_device_new_virtual() for virtual devices.
XKB state and button state are moved to the seat structure and are thus
separated per seat. Seats are not a concept exposed outside of clutter's
evdev backend.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=720566
Due to the way add_device() invariably adds to the master/slave device
lists, while keeping ClutterInputDevices 1:1 with device IDs, it may
leave invalid pointers in the list if add_device() is called multiple
times for the same device ID. There are two situations where this may
happen:
1) If devices are disabled and later enabled: devices are added invariably
to the master/slave lists on constructed(), but then on XIDeviceEnabled
they'd get added yet again.
2) Racy cases where the ClutterDeviceManager is created around the same time
XIHierarchyEvents are sent. When getting the XIDeviceInfo on constructed(),
these devices may already appear as enabled, even though XIDeviceEnabled
is seen through XIHierarchyEvents processed in the event loop shortly after.
This last case can be seen when starting gnome-shell on a different tty
and then switching to the one it's been spawned on: clutter initialization
happens around the same time devices are added back because of the tty
switch, and multiple extra ClutterInputDevices are created.
https://bugzilla.gnome.org/show_bug.cgi?id=724971
Currently we were checking whether the damage_history list contains
at least buffer_age entries. This is wrong because we prepend
our current clip to the list just before the check.
Fix that to check whether we have strictly more entries instead.
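In other words (the list and variable names below are illustrative):

    #include <glib.h>

    /* The current clip has already been prepended to damage_history, so it
     * must not count towards the usable history: require strictly more
     * entries than buffer_age, not "more or equal". */
    static gboolean
    have_enough_damage_history (GSList *damage_history, int buffer_age)
    {
      return g_slist_length (damage_history) > (guint) buffer_age;
    }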
https://bugzilla.gnome.org/show_bug.cgi?id=724788
The documentation for the s and l components is incorrect; these have to
be percentage values and must have a '%' character right after the
number.
Based on a patch by: Pablo Pissanetzky <pablo@trickplay.com>
https://bugzilla.gnome.org/show_bug.cgi?id=662818
The internal delete_text() implementation takes a start and an end
position, whereas the public delete_chars() method takes a number of
characters to delete starting from the current cursor position.
The :unscaled-font-dpi property is used to override the existing
:font-dpi value when running on high DPI density displays; since it's a
write-only property we don't need to have a separate storage, nor do we
need to choose between :font-dpi and :unscaled-font-dpi depending on
whether or not either has been set. If we select which one to use
between :font-dpi and :unscaled-font-dpi when computing the font
resolution, we end up breaking the code that relies on changing
:font-dpi directly on a per-Settings basis.
Like we do for the windowing surfaces, we should have a run time knob
(in the form of an environment variable) to allow changing the scaling
factor of the font resolution.
We need to provide an escape hatch to ClutterCanvas so that it's
possible to override the window-scaling-factor ClutterSetting. This is
going to be useful in the future in case the user has better knowledge
of the window scaling factor that is going to be used with a specific
set of ClutterCanvas contents (e.g. on different outputs or stages).
https://bugzilla.gnome.org/show_bug.cgi?id=705915
ClutterCanvas is a ClutterContent interface implementation; this means
that it can be created and modified regardless of whether it is
associated to a specific actor or a stage. For this reason, we cannot
walk the hierarchy and get the window scaling factor for high DPI
density displays out of the ClutterStage when we create the Cairo
surface that we will use to draw the canvas contents on.
We can use ClutterSettings:window-scaling-factor instead, since it's
what each ClutterStage will use anyway.
This will get slightly more complicated when we support per-output
window scaling factors (like on Wayland), but that will require changes
in the entire settings architecture anyway.
https://bugzilla.gnome.org/show_bug.cgi?id=705915
If we get a change in the window scaling factor we want to resize the
backing store of each stage, so we use the notification on the
ClutterSettings:window-scaling-factor property to do so.
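A sketch of that notification hookup; the callback body is illustrative, the real handler resizes the stage's backing store:

    #include <clutter/clutter.h>

    static void
    on_window_scaling_factor_changed (GObject    *settings,
                                      GParamSpec *pspec,
                                      gpointer    user_data)
    {
      gint factor;

      g_object_get (settings, "window-scaling-factor", &factor, NULL);
      g_message ("window scaling factor is now %d", factor);
    }

    static void
    watch_window_scaling_factor (void)
    {
      g_signal_connect (clutter_settings_get_default (),
                        "notify::window-scaling-factor",
                        G_CALLBACK (on_window_scaling_factor_changed),
                        NULL);
    }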
https://bugzilla.gnome.org/show_bug.cgi?id=705915
We want the settings object to handle setting and getting the
window scaling factor value, both through backend-specific settings and
through the CLUTTER_SCALE environment variable. This means turning the
ClutterSettings:window-scaling-factor property into a readwrite one,
instead of write-only, so that ClutterStage implementations will be able
to query the window scaling factor on construction.
https://bugzilla.gnome.org/show_bug.cgi?id=705915
We do some argument validation inside _clutter_stage_do_pick(), which is
the internal version of clutter_stage_get_actor_at_pos(), but we don't
do coordinate space validation, and instead we rely on call sites doing
the right thing.
We should, instead, remove the argument validation from the internal
function, which is pointless and against the coding practices, but do
coordinate space validation internally.
https://bugzilla.gnome.org/show_bug.cgi?id=722322
The current conformance test suite is suboptimal in many ways.
All tests are built into the same binary, which makes adding new tests,
building tests, and running groups of tests much more awkward than it
needs to be. The first issue, especially, raises the bar of contribution
in a significant way, while the other two take their toll on the
maintainer. All of these changes were introduced back when we had both
Clutter and Cogl tests in tree, and because we were building the test
suite for every single change; since then, Cogl moved out of tree with
all its tests, and we build the conformance test suite only when running
the `check` make target.
This admittedly large-ish commit changes the way the conformance test
suite works, taking advantage of the changes in the GTest API and test
harness.
First of all, all tests are now built separately, using their own test
suite as defined by each separate file. All tests run under the TAP
harness provided by GTest and Automake, to gather a proper report using
the Test Anything Protocol without using the `gtester` harness and the
`gtester-report` script. We also use the Makefile rules provided by GLib
to vastly simplify the build environment for the conformance test suite.
On top of the changes for the build and harness, we also provide new API
for creating and running test suites for Clutter. The API is public,
because the test suite has to use it, but it's minimal and mostly
provides convenience wrappers around GTest that make writing test units
for Clutter easier.
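A minimal unit written directly against GTest looks roughly like this; the Clutter convenience wrappers mentioned above differ in naming, so treat this as a plain-GTest sketch:

    #include <glib.h>
    #include <clutter/clutter.h>

    static void
    actor_basic (void)
    {
      ClutterActor *actor = clutter_actor_new ();

      clutter_actor_set_size (actor, 100, 100);
      g_assert_cmpfloat (clutter_actor_get_width (actor), ==, 100);

      clutter_actor_destroy (actor);
    }

    int
    main (int argc, char *argv[])
    {
      g_test_init (&argc, &argv, NULL);

      if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
        return 1;

      g_test_add_func ("/actor/basic", actor_basic);

      return g_test_run ();
    }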
This commit disables all tests in the conformance test suite, as well as
moving the data files outside of the tests/data directory; the next few
commits will re-establish the conformance test suite separately so we
can check that everything works in a reliable way.
When the threshold-trigger-edge property was introduced in
GestureAction, it was late in the cycle and I elected to keep it
private, given the fact that nobody was subclassing GestureAction
outside of Clutter itself.
These days, people are experimenting more with the GestureAction API, so
they will need access to the various knobs that control the class
default behaviour.
https://bugzilla.gnome.org/show_bug.cgi?id=710227
Use G_GNUC_INTERNAL instead of the leading underscore, as we may make
the accessor functions public at some point. Also, clean up the
documentation.
https://bugzilla.gnome.org/show_bug.cgi?id=710227
Our clip coordinates are relative to the stage, not model-view
transformed. cogl_framebuffer_push_rectangle_clip() was accidentally
used instead of cogl_framebuffer_push_scissor_clip() when porting
to the framebuffer clip API.
https://bugzilla.gnome.org/show_bug.cgi?id=719900
Currently, if an actor with an empty paint volume is queued for redraw, it
will union in the box +0+0x1x1 to the stage clip bounds - avoid that
by special casing empty paint volumes.
https://bugzilla.gnome.org/show_bug.cgi?id=719747
The PaintNode hierarchy should have the ability to retrieve the
current active framebuffer by itself, instead of asking Cogl using the
global state API.
In order to do this, we ask the root node of a PaintNode graph for the
active framebuffer. In the current, 1.x-compatibility mode we have two
potential root node types: ClutterRootNode, used by ClutterStage; and
ClutterDummyNode, used as a local root for each actor. The former takes a
framebuffer as part of its construction; the latter takes the actor that
acts as the local top-level during the actor's paint sequence, which
means we can get the active framebuffer from the stage associated to the
actor.
By keeping track of the active framebuffer on the nodes themselves we can
drop the usage of cogl_get_draw_framebuffer() in their implementation.
The text-cache conformance test breaks because ClutterText gets a paint
without an active framebuffer associated to the ClutterStage. Keep a
fallback while we investigate the issue.
Cogl 1.18 deprecated the global clipping API in favour of the
per-framebuffer one, but since we're using the 2.0 API internally we
don't have access to the deprecated symbols any more.
This is pretty much a mechanical port for all the places where we're
still using the old 1.x API.
Instead of asking every internal user to get the stage and get the
active framebuffer from it, we can wrap it up ourselves, and do some
sanity checks as well.
Dispose() may be called more than once, so calling g_free directly
on the device name is unsafe. Instead, use g_clear_pointer() to
make sure we don't attempt to free the memory again.
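The pattern in isolation, with illustrative names:

    #include <glib.h>

    typedef struct {
      char *device_name;   /* heap-allocated, owned by the object */
    } ExampleDevicePrivate;

    static void
    example_device_clear_name (ExampleDevicePrivate *priv)
    {
      /* dispose can run more than once: g_clear_pointer() frees the string
       * the first time and resets the pointer to NULL, so any later run is
       * a harmless no-op, unlike a bare g_free (priv->device_name). */
      g_clear_pointer (&priv->device_name, g_free);
    }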
https://bugzilla.gnome.org/show_bug.cgi?id=719563
When support for implicit animation of actor position was added,
the optimization for not queueing when allocating an actor back
to the same location was lost. This optimization is important
since when we are hierarchically allocating down from the top of
the stage we constantly reallocate the actors at the top of the
hierarchy back to the same place.
https://bugzilla.gnome.org/show_bug.cgi?id=719368
When the source actor potentially changes size, that shouldn't
necessarily result in the target actor being redrawn - it should
be like when a child of a container is reallocated due to changes
in its siblings or parent - it should redraw only to the extent
that it is moved and resized. Privately export an internal function
from clutter-actor.c to allow getting this right.
https://bugzilla.gnome.org/show_bug.cgi?id=719367
Since the journal is flushed on context switches, trying to use a cached
buffer means that we will use glReadPixels when picking, which isn't what
we want. Instead, always use a clipped draw, and remove the logic for
caching the pick buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=712563
The table layout manager has various issues:
• no support for RTL flipping
• most of the layout API is legacy, and has been replaced by the
alignment and expansion flags on ClutterActor
• the animation API is legacy, and has been replaced by the
implicitly animatable allocation
• the spanning cells handling is a bit awkward, as is its API
On top of that, we imported the grid layout management policy from GTK+
into ClutterGridLayout, which provides all the required features in a
more well-designed API.
Instead of wasting time and resources updating TableLayout, we should
deprecate it and point developers to GridLayout.
This adds clutter_event_add/remove_filter which adds a callback
function which will receive all Clutter events just before the event
signal is emitted for them. The event filter will be invoked
regardless of any grabs or captures. This will be used by Mutter, which
wants to access the events at a lower level than the event bubbling
mechanism. It needs to see all mouse motion events even if there is a
grab in place.
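A sketch of how a consumer could use such a filter; the registration prototype shown in the trailing comment is an assumption and may not match the final API exactly:

    #include <clutter/clutter.h>

    /* Assumed callback shape: return CLUTTER_EVENT_PROPAGATE to let the
     * event continue towards the normal signal emission, CLUTTER_EVENT_STOP
     * to swallow it before anything else sees it. */
    static gboolean
    my_event_filter (const ClutterEvent *event,
                     gpointer            user_data)
    {
      if (clutter_event_type (event) == CLUTTER_MOTION)
        g_message ("motion event at time %u", clutter_event_get_time (event));

      return CLUTTER_EVENT_PROPAGATE;
    }

    /* assumed registration call:
     * clutter_event_add_filter (stage, my_event_filter, NULL, NULL);
     */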
https://bugzilla.gnome.org/show_bug.cgi?id=707560
The state that the X server sends for button events, by specification,
contains the button state before the event. We need to synthesize in
the result of the event in order to determine what the current button
state is.
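Conceptually, with illustrative function and parameter names:

    #include <clutter/clutter.h>

    /* The X server reports the button state *before* the event, so fold the
     * event's own button into the modifier mask we hand out. */
    static ClutterModifierType
    effective_button_state (ClutterModifierType state_before,
                            guint               button,
                            gboolean            is_press)
    {
      ClutterModifierType mask;

      switch (button)
        {
        case 1: mask = CLUTTER_BUTTON1_MASK; break;
        case 2: mask = CLUTTER_BUTTON2_MASK; break;
        case 3: mask = CLUTTER_BUTTON3_MASK; break;
        default: return state_before;
        }

      return is_press ? (state_before | mask) : (state_before & ~mask);
    }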
https://bugzilla.gnome.org/show_bug.cgi?id=712322
XFixesShowCursor / XFixesHideCursor do not actually take the supplied
window argument into account -- the effect is actually global. Use
XDefineCursor instead.
https://bugzilla.gnome.org/show_bug.cgi?id=707071
Destroying an actor is supposed to destroy all of its children, so
it makes sense to reason that destroying the stage should destroy all
of its children, too.
Unfortunately, it seems that the stage removed all of its children
without destroying them before chaining up to what would destroy all
of its children, for whatever reason. Change this to a destroy so
resources get cleaned up.
The TextureNode premultiplies the blend color passed to the node
constructor, so we need to document the fact properly to avoid
causing premultiplication twice.
We can also allow passing NULL for a color, and use a fully opaque
white, to make the code slightly more friendly.
ClutterTextureNode will do that for us when converting the ClutterColor
to a CoglColor, so we can simply pass a white color with the correct
alpha channel.
The calculation (n - 1) * spacing to compute the total spacing is
only correct for n >= 1 - if there are no visible rows/cols, the
required spacing is 0 rather than negative.
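The fix amounts to (names are illustrative):

    /* total spacing between n_visible children; the old (n - 1) * spacing
     * formula would go negative when there are no visible rows/columns */
    static float
    total_spacing (int n_visible, float spacing)
    {
      return n_visible > 0 ? (n_visible - 1) * spacing : 0.0f;
    }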
https://bugzilla.gnome.org/show_bug.cgi?id=709434
For some XI2 events we do not have a Stage associated with the event window.
Original patch by: Giovanni Campagna <scampa.giovanni@gmail.com>
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>
https://bugzilla.gnome.org/show_bug.cgi?id=708439
On high DPI density displays we create surfaces with a size scaled up by
a certain factor. Even if the contents stay at the same relative size
and position, we need to compensate for the scaling both when changing the
surface size, and when dealing with input.
https://bugzilla.gnome.org/show_bug.cgi?id=705915
In order to transparently support high DPI density displays, we must
maintain all coordinates and sizes exactly as they are now — but draw
them on a surface that is scaled up by a certain factor. In order to
do that we have to change the viewport and initial transformation
matrix so that they are scaled up by the same factor.
https://bugzilla.gnome.org/show_bug.cgi?id=705915
The added support is very very basic (single touch, motion only,
no acceleration, no pressure recognition), but anything more
complex requires a state machine that will be hopefully provided
by libinputcommon in the future.
And at least, with this patch the pointer moves, which will be
useful for people testing wayland in 3.10 without a physical mouse.
https://bugzilla.gnome.org/show_bug.cgi?id=706494
In situations when the default backend would fail (for example
when compiled with X11 support but run without DISPLAY), or
when the application is using backend specific code, it makes
sense to let the application choose the backend explicitly.
https://bugzilla.gnome.org/show_bug.cgi?id=707869
It was a bad idea to add it, because clutter events are batched,
so by the time the application processes one, the keyboard state
internally tracked by clutter could already be different.
Instead, apps should use clutter_event_get_state_full().
https://bugzilla.gnome.org/show_bug.cgi?id=706494
We can't dispatch a motion event for EV_REL (because we don't
yet have the other half of the event), but we also can't queue
them at the end of processing (because we may lose some history
or have buttons/keys intermixed).
Instead, we use EV_SYN, which means "one logical event was
completed", and let the winsys-independent code do the actual
motion compression.
https://bugzilla.gnome.org/show_bug.cgi?id=706494
When we release a device, we lose all the events after that point,
so our state can become stale. Similarly, we need to sync the
state with the effectively pressed keys when we reclaim.
This ensures that modifier keys don't get stuck when switching
VTs using a keybinding.
https://bugzilla.gnome.org/show_bug.cgi?id=706494
When talking to other applications or serializing the modifier
state (and in particular when implementing a wayland compositor),
the effective modifier state alone is not sufficient: one needs
to know the base, latched and locked modifiers.
Previously one could make do with backend-specific functionality
such as clutter_device_manager_evdev_get_xkb_state(), but the
problem is that the internal data structures are updated as
soon as the events are fetched from the upstream source, while
the events are reported to the application some time later,
and thus the two can get out of sync.
This way, on the other hand, the information is cached in the
event, and provided to the application with the value that
was current when the event was generated.
https://bugzilla.gnome.org/show_bug.cgi?id=706494
These two devices are logically tied together, and their state
should always be the same. Also, we need to update them after
the event is queued, as the current modifier state (as opposed to the
modifier mask in the event) should also include the effect of the last
key press/release.
https://bugzilla.gnome.org/show_bug.cgi?id=706494
Add a new callback that is called prior to emitting pointer
motion events and that can modify the new pointer position.
The main purpose is allowing multiscreen apps to prevent the
pointer from entering the dead area that exists when the screens
are not the same size, but it could also be used to implement
pointer barriers.
A callback is needed to make sure that the hook is called early
enough and the Clutter state is always consistent.
https://bugzilla.gnome.org/show_bug.cgi?id=706652
If we don't get passed a CoglFramebuffer when creating the root paint
node then we ask Cogl to give us the current draw buffer.
This allows the text-cache conformance test to pass, but it'll require
further investigation.
Currently the default values according to their param spec don't
match the actually used defaults, so update the former to reflect
the actual behavior.
https://bugzilla.gnome.org/show_bug.cgi?id=703809
Whether a child should receive extra space should be determined
by the expand property, not [xy]_fill (which just determine how
additional space should be used). The behavior is already correct
when using the ClutterActor:[xy]_expand properties, but needs
fixing for the corresponding ClutterBoxLayoutChild property.
https://bugzilla.gnome.org/show_bug.cgi?id=703809
Just as BoxLayout, BinLayout uses an odd interpretation of the box
passed into allocate(): to define a child area of (w x h) starting at
(x, y), callers need to pass a box of (x, 2 * x + w, y, 2 * y + h).
This behavior is just confusing, change it to use the full box for
child allocations.
https://bugzilla.gnome.org/show_bug.cgi?id=703809
Currently, BoxLayout interprets the box passed into allocate() in
a fairly peculiar way:
- in the direction of the box, all space between [xy]1 and [xy]2
is distributed among children (e.g. children occupy the entire
width/height of the box, offset by [xy]1)
- in the opposite direction, expanded children receive space
between [xy]1 and the height/width of the box (e.g. children
occupy the width/height of the box minus [xy]1, offset by [xy]1)
The second behavior doesn't make much sense, so adjust it to interpret
the box parameter in the same way as the first one.
https://bugzilla.gnome.org/show_bug.cgi?id=703809
The implicitly created transitions are removed when complete by the
implicit transition machinery. The remove-on-complete hint is for
user-provided transitions.
https://bugzilla.gnome.org/show_bug.cgi?id=705739
ClutterTransition:remove-on-complete uses the ClutterTimeline::stopped
signal, as it's the signal that tells us that the timeline's duration
has fully elapsed.
Mouse wheel events come as EV_REL/REL_WHEEL, and we can convert
them to clutter events on the assumption that scrolling with
the wheel is always vertical.
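The mapping is essentially this (a sketch; the event plumbing around it is omitted):

    #include <glib.h>
    #include <linux/input.h>
    #include <clutter/clutter.h>

    /* positive wheel values scroll up, negative ones scroll down */
    static ClutterScrollDirection
    wheel_to_scroll_direction (const struct input_event *e)
    {
      g_assert (e->type == EV_REL && e->code == REL_WHEEL);

      return e->value > 0 ? CLUTTER_SCROLL_UP : CLUTTER_SCROLL_DOWN;
    }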
https://bugzilla.gnome.org/show_bug.cgi?id=705710
xkb_state_update_key() needs to be called only on state transitions,
otherwise the state tracking gets confused and locks certain modifiers
forever.
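A sketch of the guard, assuming we track the pressed/released state of each key ourselves:

    #include <glib.h>
    #include <xkbcommon/xkbcommon.h>

    static void
    update_xkb_state (struct xkb_state *state,
                      xkb_keycode_t     keycode,
                      gboolean          is_down,
                      gboolean          was_down)
    {
      /* autorepeat delivers "down" events for keys that are already down;
       * feeding those to xkbcommon would lock modifiers forever */
      if (is_down == was_down)
        return;

      xkb_state_update_key (state, keycode,
                            is_down ? XKB_KEY_DOWN : XKB_KEY_UP);
    }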
https://bugzilla.gnome.org/show_bug.cgi?id=705710
A wayland compositor needs to have more keyboard state than
ClutterModifierState exposes, so it makes sense for it to use
xkb_state directly. Also, it makes sense for it to provide
its own keymap, to ensure a consistent view between the compositor
and the wayland clients.
https://bugzilla.gnome.org/show_bug.cgi?id=705710
All evdev devices are slave devices, which means that xkb state
and pointer position must be shared by emulating a core keyboard
and a core pointer. Also, we must make sure to add all modifier
state (keyboard and button) to our events.
https://bugzilla.gnome.org/show_bug.cgi?id=705710
Hardware keycodes in Clutter events are x11 keycodes, which are
the same as evdev + 8, but we need to reverse the translation when
explicitly asked for an evdev keycode.
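That is, with illustrative helper names:

    /* X11 hardware keycodes are evdev keycodes offset by 8 */
    #define EVDEV_KEYCODE_OFFSET 8

    static unsigned int
    hardware_keycode_to_evdev (unsigned int hardware_keycode)
    {
      return hardware_keycode - EVDEV_KEYCODE_OFFSET;
    }

    static unsigned int
    evdev_to_hardware_keycode (unsigned int evdev_code)
    {
      return evdev_code + EVDEV_KEYCODE_OFFSET;
    }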
https://bugzilla.gnome.org/show_bug.cgi?id=705710
In some cases, applications (or actually, wayland compositors)
don't have the required permissions to access evdev directly, but
can do so with an external helper like weston-launch.
Allow them to do so with a custom callback that replaces the regular
open() path.
https://bugzilla.gnome.org/show_bug.cgi?id=704269
This is necessary to avoid a deadlock with the compositor. When setting
a stage size before the stage was shown this would trigger a redraw
inside clutter_stage_wayland_resize. This redraw would result
in a call into eglSwapBuffers which would attach a buffer to the surface
and commit. Unfortunately this would happen before the role for the
surface was set. This would result in the compositor not relaying to the
client that the desired frame was shown.
With this change the call to wl_shell_surface_set_toplevel is always
made before the first redraw.
https://bugzilla.gnome.org/show_bug.cgi?id=704457
We should not create a shell surface and set the role for that shell
surface if the surface was a foreign one provided through
clutter_wayland_set_wl_surface().
https://bugzilla.gnome.org/show_bug.cgi?id=699578
This adds support for optionally providing a foreign Wayland surface
to a ClutterStage before it is first shown. Setting a foreign surface
prevents Cogl from allocating a surface and shell surface for the stage
automatically.
v2: add CLUTTER_AVAILABLE_IN_1_16 annotation and API reference docs
(review from Emmanuele Bassi)
v3: set a boolean to indicate that this stage is using a foreign surface
(Rob Bradford)
https://bugzilla.gnome.org/show_bug.cgi?id=699578
This allows the integration of Clutter with another library, like GTK+,
that is dispatching the events itself. This is implemented by calling
cogl_wayland_renderer_set_event_dispatch_enabled(); since that function
must be called on the newly created renderer, the newly added
clutter_wayland_disable_event_retrieval() must be called before
clutter_init().
https://bugzilla.gnome.org/show_bug.cgi?id=704279
Since commit 4543ed6ac3 in Cogl, Cogl will now try to consume
Windows messages itself. This doesn't really cause any problems because
both message loops just call DispatchMessage which will cause the
message to be routed through Clutter's window procedure either way.
However, it's not great to have two sources listening for messages so
this patch disables Cogl's message retrieval.
https://bugzilla.gnome.org/show_bug.cgi?id=701356
This removes a bit of work that we have to do for every device, and makes it
easy for mutter to patch out parts of the event mask it doesn't want.
https://bugzilla.gnome.org/show_bug.cgi?id=703969
The Wayland 1.0 API allows orthogonal components of an application to
query the shell and compositor themselves by querying their own
wl_registry. The corresponding API in Cogl has been removed so Clutter
shouldn't call it anymore.
https://bugzilla.gnome.org/show_bug.cgi?id=703878
The Wayland server API has changed so that wl_shm_buffer is no longer
a type of wl_buffer and wl_buffer will become an opaque type. This
changes ClutterWaylandSurface to accept resources for a wl_buffer
instead of directly taking the wl_buffer so that it can do different
things depending on whether the resource points to an SHM buffer or a
normal buffer. This matches similar changes to Cogl:
https://git.gnome.org/browse/cogl/commit/?id=9b35e1651ad0e46ed48989
https://bugzilla.gnome.org/show_bug.cgi?id=703608
Cogl 1.16 has deprecated a lot of API which it will be difficult for
Clutter to catch up with. For the time being the warnings are just
being disabled to keep the build output clean.
https://bugzilla.gnome.org/show_bug.cgi?id=703877
There is no reasonable use case for having the functions, the virtual
functions, and the signals for realization and unrealization; the
concept belongs to an older era, when we thought it would have been
possible to migrate actors across different GL contexts, or in case a GL
context would not have been available until the main loop started
spinning. That is most definitely not possible today, and too much code
would utterly break if we ever supported that.
The majority of Clutter input events require a time so that the
upper levels of abstraction can identify the ordering of events and also
work out a click count.
Although some Wayland events have microsecond timestamps, not all of the
events that Clutter expects do. Therefore we would need to create some
fake times for those events. Instead we always calculate our own time
using the monotonic clock.
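Something along these lines (a sketch):

    #include <glib.h>

    /* derive a millisecond event timestamp from the monotonic clock instead
     * of trusting (or faking) per-event Wayland timestamps */
    static guint32
    event_time_ms (void)
    {
      return (guint32) (g_get_monotonic_time () / 1000);
    }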
https://bugzilla.gnome.org/show_bug.cgi?id=697285
If we allow content repeats on the texture nodes, then we need to use
the "automatic" wrap mode for the texture layer in the pipeline, instead
of the clamp-to-edge one.
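Roughly speaking, per layer (a sketch using the Cogl pipeline wrap-mode API):

    #include <glib.h>
    #include <cogl/cogl.h>

    static void
    setup_wrap_modes (CoglPipeline *pipeline,
                      gboolean      repeat_x,
                      gboolean      repeat_y)
    {
      cogl_pipeline_set_layer_wrap_mode_s (pipeline, 0,
                                           repeat_x
                                             ? COGL_PIPELINE_WRAP_MODE_AUTOMATIC
                                             : COGL_PIPELINE_WRAP_MODE_CLAMP_TO_EDGE);
      cogl_pipeline_set_layer_wrap_mode_t (pipeline, 0,
                                           repeat_y
                                             ? COGL_PIPELINE_WRAP_MODE_AUTOMATIC
                                             : COGL_PIPELINE_WRAP_MODE_CLAMP_TO_EDGE);
    }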
Reported-by: Matthew Watson <matthew@endlessm.com>
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>
This reverts commit 6dd9da05c7.
Windowing system features we need are not mapped on cogl_has_feature().
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>
Cogl (as of 0b2b46ce) now only sets the shell surface as toplevel when
the CoglOnscreen is shown.
Without calling wl_shell_surface_set_toplevel the compositor will not
know what role to give to the surface and thus the stage will not
appear.
When we look to support multiple roles / foreign surfaces we will need
to revisit this call and ensure we only call it when we are working in
the default case.
https://bugzilla.gnome.org/show_bug.cgi?id=703188
Since Cogl also polls on this file descriptor we can get into situations
where our event source is woken up to handle events but those events
have instead been handled by Cogl resulting in the source sitting in
poll().
We can safely rely on Cogl to handle the polling on the event source and
to dispatch those events.
https://bugzilla.gnome.org/show_bug.cgi?id=702202
When the cursor visibility changes, we have to relayout the ClutterText
actor instead of just redrawing it - as the cursor changes the
PangoLayout size, a size request cycle is needed.
https://bugzilla.gnome.org/show_bug.cgi?id=702610
While we still don't want to perform implicit transitions on unmapped
actors, we can relax the requirement on having been painted once; the
was_painted flag was introduced to avoid performing implicit transitions
on the :allocation property, but for that we can use the
needs_allocation flag instead, as needs_allocation will be set to FALSE
when we have been painted as well.
Thus, we retain our original goal of not having actors "flying" into
position on their first allocation, without the side effect of
preventing animations when emitting the ::show signal.
When setting the font using clutter_text_set_font_description(), the
font settings on a ClutterText actor can be reset when there is a DPI
change signaled by the backend.
https://bugzilla.gnome.org/show_bug.cgi?id=702016
1ddef9576d87c98fafbcefe3108f04866630c2cd had its logic the
wrong way round: a gesture should begin as soon as the requested number
of touchpoints is reached. Correcting this fixes tap events.
https://bugzilla.gnome.org/show_bug.cgi?id=700980
When we changed the MetaGroup to handle internal effects, we updated
has_effects(), but forgot to fix the equivalent has_constraints() and
has_actions() methods.
Now, if we clear the constraints or the actions on an actor, and we
call has_constraints() or has_actions(), we get a false positive.
When using a ClutterOffscreenEffect, the size of the offscreen buffer
allocated to perform the effect is currently computed using the paint
volume of the actor it's attached to; in case the paint volume
cannot be computed, the effect falls back to using the stage's size.
If you scale an actor enough so its paint volume is much bigger than
the size of the stage, you can end up running out of memory (which
leads to your application crashing).
https://bugzilla.gnome.org/show_bug.cgi?id=699675
The "should this implicit transition be skipped" check should live into
its own function, where we can actually explain what it does and which
conditions should be respected.
Instead of just blindly skipping actors that are unmapped, or haven't
been painted yet, we should add a couple of escape hatches.
First of all, we don't want :allocation to be implicitly animated until
we have been painted (thus allocated) once; this avoids actors "flying
in" into their allocation.
We also want to allow implicit transitions on the opacity even if we
haven't been painted yet; the internal optimization that we employ in
clutter_actor_paint() and skips painting fully transparent actors is
exactly that: an internal optimization. Caller code should not be aware
of this change, and it should not influence code outside of ClutterActor
itself.
The rest of the conditions are the same: if the easing state's duration
is zero, or if the actor is both unmapped and not in a cloned branch of
the scene graph, then implicit transitions are pointless, as they won't
be painted.
https://bugzilla.gnome.org/show_bug.cgi?id=698766
This should actually ensure that the calculation of the Z translation
for the projection matrix is resolved to "variable * CONSTANT". The
original factors are left in code so it's trivial to revert to the
trigonometric operations if need be, even without reverting this commit.
The ClutterActor::paint signal is deprecated, and connecting to it even
to get notifications will disable clipped redraws because of violations
of the paint volume.
The only actual valid use case for notifications of a successful frame
is on the ClutterStage, so we should add new (experimental) API for it,
so that users can actually subscribe to it — at least if you're writing
a compositor.
Shoving a signal in a performance critical path is not an option, and
I'm not sure I want to commit to an API like this yet. I reserve the
right to revisit this decision in the future.
https://bugzilla.gnome.org/show_bug.cgi?id=698783
Currently, clutter_canvas_set_size() causes invalidation of the canvas
contents only if the newly set size is different. There are cases when
we want to invalidate the content regardless of the size set, but we
cannot do that right now without possibly causing two invalidations,
for instance:
  clutter_canvas_set_size (canvas, new_width, new_height);
  clutter_content_invalidate (canvas);
will cause two invalidations if the newly set size is different than
the existing one. One way to work around it is to check the current
size of the canvas and either call set_size() or invalidate() depending
on whether the size differs or not, respectively:
  g_object_get (canvas, "width", &width, "height", &height, NULL);
  if (width != new_width || height != new_height)
    clutter_canvas_set_size (canvas, new_width, new_height);
  else
    clutter_content_invalidate (canvas);
this, however, implies knowledge of the internals of ClutterCanvas,
and of its optimizations — and encodes a certain behaviour in third
party code, which makes changes further down the line harder.
We could remove the optimization, and just issue an invalidation
regardless of the surface size, but it's not something I'd be happy to
do. Instead, we can add a new function specifically for ClutterCanvas
that causes a forced invalidation regardless of the size change. If we
ever decide to remove the optimization further down the road, we can
simply deprecate the function, and make it an alias of invalidate()
or set_size().
Since we are trying to eliminate the ClutterGeometry type, we should
replace the only entry point still using it: the ::cursor-event signal
of ClutterText.
Instead of passing the cursor geometry, we should add an accessor
function.
The combination of signal and getter for the cursor geometry means that
we can deprecate ClutterText::cursor-event, and mark it for removal in
Clutter 2.0.
https://bugzilla.gnome.org/show_bug.cgi?id=682789
On the other backends we will get some sort of expose event after
showing the stage's window which will queue a redraw. These expose
events don't exist on Wayland so nothing will cause Clutter to queue a
redraw. Weston doesn't bother displaying anything for the stage's
surface until the first buffer is sent, which of course it will never
receive if Clutter doesn't paint anything. This patch just makes it
explicitly queue a redraw after the stage is shown so that we will
always pass at least one frame to the compositor.
The bug can be seen by running test-stage-sizing. That example doesn't
have any animations so it won't try to queue any redraws until
something interacts with it. On the other hand something like
test-actors works fine without the patch because it constantly queues
redraws anyway in order to display the animation.
https://bugzilla.gnome.org/show_bug.cgi?id=696791
If clutter is built with both X11 and Wayland support, prefer the
(more complete for now) X11 backend. This matches GTK+'s current
ordering.
This allows distributions to ship a clutter version with both backends
built, and to use an envvar to switch to the wayland backend for testing
applications.
In the future, applications would be able to choose which backend
they prefer and in which order.
https://bugzilla.gnome.org/show_bug.cgi?id=695838
If an actor has not been painted yet, or it's not going to be painted,
we can ignore transitions queued on it.
By ignoring transitions on actors that have not been painted yet, we can
avoid doing work during the set up phase of the scene graph, as well as
avoiding actors "flying in" from nowhere.
Obviously, we have to take into account potential clones, so we need to
check that the actor is not part of a cloned branch of the scene graph,
as well as checking if the actor has mapped clones.
If an actor is unmapped then it won't be painted, so we can safely
short-circuit out of _clutter_actor_queue_redraw_full() if the mapped
flag is not set.
We need, on the other hand, to make an exception for Clones, otherwise
they won't receive notification that the source actor has changed
and they won't be painted.
This allows us to ignore redraws queued on children of invisible
parents, and avoid traversing the scene graph.
Instead of using signal notifications, we should be able to keep track
of the clones of an actor from within ClutterActor itself, using private
API. There's no point in pretending that people can actually create a
Clone class out of tree, given the amount of invariants we have to punch
through in order to implement a proper replicator node of the scene
graph, so we can just skip the signal emissions and just do the right
thing at the right time.
More comments are warranted: these functions are pretty much full of
potential side effects, and I'd really like to avoid keeping everything
in my head forever.
Along with the comments and the type casting reduction, I sneaked in a
one line change that is clearly correct after reading the flow of the
whole thing: we queue only a relayout after three potential redraws have
been queued. If we manage to miss a redraw and yet still get a relayout
then it means that most of our assumptions are fundamentally wrong, and
that we ought to dump this whole business of computer programming, and
just go back to being a hunter-gatherer species.
Since XIQueryVersion, the bad API that it is, chooses the first client
version that it gets, we need to ensure that we pass XIQueryVersion the
new XI2.3 version, knowing full well that Clutter won't be confused
by the new features.
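The version negotiation boils down to (a sketch):

    #include <glib.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    static gboolean
    check_xinput_version (Display *xdisplay)
    {
      /* always announce the newest version we know about; the server
       * answers with the version it actually supports instead of failing */
      int major = 2, minor = 3;

      if (XIQueryVersion (xdisplay, &major, &minor) != Success)
        return FALSE;

      /* major/minor now hold the server's version; require at least XI 2.0 */
      return major >= 2;
    }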
https://bugzilla.gnome.org/show_bug.cgi?id=692466
The X server should fill in the minor version that it supports in the
case where it only supports the older version. We should not get a
BadRequest or fail the version check if we pass something higher.
https://bugzilla.gnome.org/show_bug.cgi?id=692466
Text exposed by the AtkText methods should be the text
displayed to the user (like the internal method
clutter_text_get_display_text does), so it should use the password-char
if it is being used.
This is also a security concern.
The original code inside ClutterActor that dealt with Transitions
stopping was written for the ::completed signal, thus the code was
correctly handling the lifetime of the instances; when we moved to the
::stopped signal, we assumed that it worked in the same way - with fewer
conditions to be checked, obviously, but fundamentally similar to the
::completed signal. Sadly, I screwed up the signal definition, and the
signal ended up calling our handlers, but not the default one that did
the cleanup and released references on the Animatable instance.
After fixing the Timeline::stopped signal, we can go back to the
previous code.
Thanks to Craig Hughes for the help in tracking down this mess.
https://bugzilla.gnome.org/show_bug.cgi?id=695158
A copy and paste thinko: the ::stopped signal is using the
ClutterTimelineClass.completed slot instead of the .stopped one,
thus preventing sub-classes of ClutterTimeline from overriding the
signal's default closure.
When stopping the transition we need to release the reference we
maintain while removing the Transition from the hash table inside an
actor. If we fail to do so, the Transition is never released, which
means we leak the Animatable instance we tied to it.
https://bugzilla.gnome.org/show_bug.cgi?id=695158
The ClutterOffscreenEffect.get_target_size() method has been deprecated,
and replaced by the get_target_rect() one. We can easily switch to the
latter, and avoid the deprecation warning.
https://bugzilla.gnome.org/show_bug.cgi?id=670004
The target size is not always enough; there are cases where the offset
used to paint the target must also be available for developers
implementing an OffscreenEffect.
The get_target_rect() method returns the rectangle used to paint the
target, with the offsets in the ClutterRect:origin and the texture size
in the ClutterRect:size fields, respectively.
The get_target_size() method should be deprecated, given that its
replacement is generally more useful.
https://bugzilla.gnome.org/show_bug.cgi?id=670004
If we pass TRUE for x_align and FALSE for y_align, the full available
width should be passed to clutter_get_preferred_height, and the same
should be true in the other dimension.
https://bugzilla.gnome.org/show_bug.cgi?id=694237
Instead of directly accessing the instance fields. This removes a
compiler warning after the constification of g_get_prgname(), and it
seems to me to be generally more correct.
Instead of using a custom apply_transform(), paint(), and pick()
implementations, we can simply apply a transformation to the children of
a ScrollActor.
https://bugzilla.gnome.org/show_bug.cgi?id=686225
As wayland-client.h and wayland-server.h can't be included together,
split the Wayland backend file into clutter-backend-wayland.h, which
only defines the types, and clutter-backend-wayland-priv.h, which
actually uses the Wayland client types.
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
https://bugzilla.gnome.org/show_bug.cgi?id=692851
The definition of wl_display differs between Wayland clients and
servers, and it's unsafe to include both wayland-client.h and
wayland-server.h at the same time. Fudge around this by making the
compositor public API use void * rather than struct wl_display *.
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
https://bugzilla.gnome.org/show_bug.cgi?id=692851
This function is deprecated and has been replaced by set_display() on
the renderer. This is done in the get_renderer() vfunc of both the x11
and gdk backends already.
Actually cogl_xlib_set_display() is now a no-op and can be safely removed.
https://bugzilla.gnome.org/show_bug.cgi?id=687652
Being able to set a marker at a normalized point on a timeline, instead
of using a specific time, is a nice fit with the current Timeline class
API.
https://bugzilla.gnome.org/show_bug.cgi?id=694319
If anything in the system changes the config for fontconfig then an
XSetting will be set to record the last timestamp of the config file.
This is presumably so that applications can be notified that it has
changed and can reload the configuration. However once this setting is
set it will remain set for the lifetime of the X server. This causes
Clutter to handle the setting during the initialisation of the
backend. Previously this would cause problems because Clutter would
end up creating the default PangoFontMap before the backend has
created the CoglContext. The PangoFontMap would in turn cause the
default CoglContext to be created. Clutter will then later create its
own CoglContext which means there will be two and the first one will
be leaked. Cogl currently can't really cope with multiple contexts
being created so it falls apart.
This patch fixes it to skip reloading the config for fontconfig if
there isn't a default font map yet. The config will presumably
naturally be read with the latest values when it is finally created
anyway so it doesn't need to be read immediately.
https://bugzilla.gnome.org/show_bug.cgi?id=693696
New experimental API is added to allow changing the way that redraws
are timed for a stage to include a "sync delay" - a period after
the vertical blanking period where Clutter simply waits for updates.
In detail, the algorithm is that when the master clock is restarted
after drawing a frame (in the case where there are timelines running)
or started fresh in response to a queued redraw or relayout, the
start is scheduled at the next sync point (sync_delay ms after the
predicted vblank period) rather than done immediately.
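Schematically, the scheduling rule is (names and units are illustrative):

    #include <glib.h>

    /* pick the next "sync point": sync_delay after the predicted vblank,
     * advanced by whole frames until it lies in the future */
    static gint64
    next_update_time_us (gint64 last_presentation_us,
                         gint64 refresh_interval_us,
                         gint64 sync_delay_us,
                         gint64 now_us)
    {
      gint64 next_sync = last_presentation_us + sync_delay_us;

      while (next_sync <= now_us)
        next_sync += refresh_interval_us;

      return next_sync;
    }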
https://bugzilla.gnome.org/show_bug.cgi?id=692901
In commit 8f4e39b6d7 the Wayland code was updated to use the new
xkbcommon API. This involved changing the common XKB code shared with
the evdev input backend. However the evdev input backend was not
modified so it wouldn't compile. This patch just makes a minor change
to update it.
https://bugzilla.gnome.org/show_bug.cgi?id=693348
Use the buffer_age extension when available to recycle backbuffer contents
instead of blitting from the back to front buffer when doing clipped redraws.
The picking is now done in a pixel that is going to be repaired during the next
redraw cycle for non-static scenes.
This should improve performance and avoid tearing.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=669122
This allows us to report to the backend that the stage's back buffer has been trashed
while handling picking. If the backend is keeping track of the contents of back buffers
so it can minimize how much of the stage is redrawn then it needs to know when we do pick
renders so it can invalidate the back buffer.
Based on patch from Robert Bragg <robert@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=669122
The behaviour imitates GtkEntry and ignores attributes from markup because Pango
barfs on invalid markup. Also add an example to the text-field interactive test.
https://bugzilla.gnome.org/show_bug.cgi?id=686477
As x11 considers num lock and scroll lock to be modifiers, code that
checks for an exact modifier combination will fail if naively done when
num lock or scroll lock are turned on. Applications that want to ignore
these modifiers will need to use XKB to manually mask out the modifier
state.
As it is very unlikely that applications will want to care about the
state of num lock or scroll lock for key press/key release events, mask
out the num lock and scroll lock keys automatically.
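The masking itself is simple once the modifier mapping is known; a sketch using the XKB lookup (the real code caches the masks):

    #include <X11/Xlib.h>
    #include <X11/XKBlib.h>
    #include <X11/keysym.h>

    static unsigned int
    strip_lock_modifiers (Display *xdisplay, unsigned int state)
    {
      /* find which modifier bits Num Lock and Scroll Lock occupy... */
      unsigned int num_lock_mask    = XkbKeysymToModifiers (xdisplay, XK_Num_Lock);
      unsigned int scroll_lock_mask = XkbKeysymToModifiers (xdisplay, XK_Scroll_Lock);

      /* ...and drop them from the state reported on key events */
      return state & ~(num_lock_mask | scroll_lock_mask);
    }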
https://bugzilla.gnome.org/show_bug.cgi?id=690664
This has been disabled since February 2008, on the grounds that XFixes
didn't work reliably for hiding cursors. This has almost certainly been
fixed since then and seems to work entirely reliably across a number of X
servers released in the past few years, and is definitely better than a
1x1 black dot for a cursor.
Helpfully though, where the spec states that the cursor will be hidden
when inside the specified window or one of its children, it actually
only uses the window to look up the Screen, and hides the cursor across
the entire Screen. So, when using this, we also need to track crossing
events.
If it's still broken, this needs to be fixed in the X server.
https://bugzilla.gnome.org/show_bug.cgi?id=690497
Signed-off-by: Daniel Stone <daniel@fooishbar.org>