Semantic changes to Wayland mean that we cannot rely on the compositor
setting a pointer buffer for us if we set it to nil. The first part of
fixing this is to create an shm buffer containing the bytes for our
cursor. The best way to do this currently is to load the cursor from the
well-known location where Weston installs its cursor images. The code to
implement this was derived from the Wayland backend in GTK+.
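A rough sketch of the buffer creation follows; the helper name is
illustrative and error handling is omitted, so treat it as an outline of
the approach rather than the actual backend code:

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <wayland-client.h>

  /* wrap cursor pixel data in a wl_shm buffer */
  static struct wl_buffer *
  create_cursor_buffer (struct wl_shm *shm,
                        const uint8_t *pixels,
                        int            width,
                        int            height)
  {
    int stride = width * 4;          /* ARGB32: 4 bytes per pixel */
    int size = stride * height;
    char tmp[] = "/tmp/clutter-cursor-XXXXXX";
    int fd = mkstemp (tmp);
    struct wl_shm_pool *pool;
    struct wl_buffer *buffer;
    void *map;

    unlink (tmp);
    ftruncate (fd, size);

    map = mmap (NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    memcpy (map, pixels, size);
    munmap (map, size);

    pool = wl_shm_create_pool (shm, fd, size);
    buffer = wl_shm_pool_create_buffer (pool, 0, width, height,
                                        stride, WL_SHM_FORMAT_ARGB8888);
    wl_shm_pool_destroy (pool);
    close (fd);

    return buffer;
  }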
Currently, we're emitting the ClutterActor::destroy at the end of the
dispose implementation - right before we chain up to the parent
implementation.
The point of emission means that ::destroy signal handlers are only able
to use the actor pointer itself - as the actor state will have been
mostly cleared by the time application code can run. This (undocumented)
behaviour severely
limits the amount of things you can do inside a ::destroy signal
handler, thus making the ::destroy signal just a weird weak reference,
instead of a proper way to break application reference cycles.
Given that it relaxes some of the conditions, this change
should be safe - obviously, if anything happens, we'll back it out; the
conformance and interactive tests confirm that, for common patterns of
usage, this change does not break existing code.
GLib has a nice, atomic object clearing function that allows us to drop
code looking like:
  if (priv->object != NULL)
    {
      g_object_unref (priv->object);
      priv->object = NULL;
    }
from the ::dispose implementation.
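That function is g_clear_object(), available since GLib 2.28; with it,
the block above collapses to a single line:

  g_clear_object (&priv->object);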
I always have to think twice before returning a value from an event
signal handler, and I've been writing them for the past 10 years, so
it's conceivable that application developers who are starting out with
Clutter will find them confusing as well.
Simple symbolic names should be easier to use.
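For instance, a handler can now say what it means; this is a sketch, and
the handler itself is hypothetical:

  static gboolean
  on_button_press (ClutterActor *actor,
                   ClutterEvent *event,
                   gpointer      data)
  {
    /* stop the event emission here, instead of returning an opaque TRUE */
    return CLUTTER_EVENT_STOP;
  }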
The depth cueing through GL fog has been broken for a long while now.
The fog-related API in Clutter is ridiculously limited, and harks back
to simpler times; the ClutterFog structure is not enough to express all
of the GL fog machinery, and requires application code to hook into the
Stage's paint implementation and drop into Cogl directly.
Additionally, the fixed pipeline fog machinery in GL simply does not
work with premultiplied alpha, unless you use a shader - and in that
case it would only work for textures. Let's deprecate it, and simply do
nothing if somebody has the brilliant idea of setting the :use-fog
property to TRUE.
Sadly, we need to remove the G_GNUC_NULL_TERMINATED annotation from
ClutterBox packing API; the compiler will otherwise emit a warning
for perfectly legal statements like:
clutter_box_pack (box, child, NULL);
because of the missing sentinel.
See also: g_object_new().
GLib has a "diagnostic mode" switch that can be checked to enable debug
messages on deprecated properties and signals, as these are purely
run-time constructs, and as such cannot be caught by compiler warnings.
The diagnostic mode is toggled by a simple environment variable, and
can be used to ease porting of application code.
We can use something similar to mark deprecated virtual functions and
other run-time constructs; to avoid collisions, we should use our own
environment variable, CLUTTER_ENABLE_DIAGNOSTIC.
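A minimal sketch of what the run-time check could look like; the helper
name is hypothetical:

  static gboolean
  clutter_diagnostic_enabled (void)
  {
    static int enabled = -1;

    if (enabled < 0)
      {
        const char *env = g_getenv ("CLUTTER_ENABLE_DIAGNOSTIC");

        enabled = env != NULL && env[0] != '0';
      }

    return enabled;
  }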
Instead of using a PaintVolume for a 2D region, and an internal
function, use the newly added queue_redraw_with_clip() method.
This removes the last bit of internal API usage in the
ClutterX11TexturePixmap actor.
https://bugzilla.gnome.org/show_bug.cgi?id=660997
Add a public version of the clipped queue redraw, using a 2D clip. This
allows implementing actors with trackable 2D clipped regions, like the
ClutterX11TexturePixmap, outside of Clutter itself.
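Usage would look something like this (illustrative values):

  cairo_rectangle_int_t clip = { 0, 0, 64, 64 };

  clutter_actor_queue_redraw_with_clip (actor, &clip);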
https://bugzilla.gnome.org/show_bug.cgi?id=660997
The Wayland protocol now has events that represent when a pointer enters
the surface and when it leaves it again.
For leave events the surface is not set in the event; for enter events
it is.
We simply use this to determine whether to emit CLUTTER_ENTER or
CLUTTER_LEAVE.
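The dispatch rule is then simply the following; this is a sketch, and
the variable names are illustrative:

  /* enter events carry a surface; leave events do not */
  event->type = (surface != NULL) ? CLUTTER_ENTER : CLUTTER_LEAVE;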
Previously the wl_shell object held the methods that allowed a client to
request changes to the shell's state associated with a surface. These methods
have now been moved to a wl_shell_surface object.
This change allows configure events to be handled inside the stage rather than
the backend.
This makes the option_xkb_* symbols declared for the evdev device manager
and the wayland device manager private so we don't get symbol collisions
if both of these backends are enabled.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This fixes how clutter-device-manager-evdev.h is included, resolving a
build problem caused by the header not being found.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This updates the evdev input backend code to compile and also updates
the code to not refer to the default stage and instead check for a
stage to be associated with the input device. If no stage is currently
associated with a device generating events then the events are dropped
on the floor.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds internal API to be able to query the stage currently
associated with a given input device so input backends shouldn't need to
refer to the default stage.
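A sketch of how an input backend would use it (the internal name here is
illustrative):

  ClutterStage *stage = _clutter_input_device_get_stage (device);

  if (stage == NULL)
    return; /* no stage associated: drop the event on the floor */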
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds an --enable-wayland-compositor configure option which adds
support for a ClutterWaylandSurface actor that can be used to aid in
writing Wayland compositors using Clutter, by providing a ClutterActor
to represent Wayland client surfaces.
Notably this configure option isn't tied into any particular backend
since conceptually the compositor support can be used in conjunction
with any clutter backend that has corresponding Cogl support.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This updates Wayland support in line with upstream changes to the Wayland
API and protocol.
This update means we no longer use the Cogl stub winsys, so a lot of
code that had to manually interact with EGL and implement a swap_buffers
mechanism could be removed; we now depend on Cogl to handle those things
for us.
This update also adds an input device manager consistent with other
clutter backends.
Note: to use the client side "wayland" clutter backend you need to have
built Cogl with --enable-wayland-egl-platform. If Cogl has been built
with support for multiple winsys backends then you should run
applications with COGL_RENDERER=EGL in the environment.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Generate a .bat file that creates the clutter-enum-types.[ch] files for
use during the Visual C++ build process, which greatly simplifies the
maintenance of the VS build files as public headers are added or removed
during development.
ClutterInputDeviceX11 has been made private, so we cannot access it from
outside of clutter-input-device-core-x11.c. We should have simple
accessors for the min/max keycodes, which are the only details we use.
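Something along these lines (the accessor names are illustrative):

  int min_keycode = _clutter_input_device_x11_get_min_keycode (device);
  int max_keycode = _clutter_input_device_x11_get_max_keycode (device);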
Currently the warning reads:

  Clutter-WARNING **: Unable to compile the GLSL
  shader: Fragment shader failed to compile with the following errors:
The attached patch (against current git) prints out more information,
which makes it easier to answer user feedback.
https://bugzilla.gnome.org/show_bug.cgi?id=664252
While working through the Python3/pygobject bindings, I came across a
missing (allow-none) annotation in clutter_state_set_key(). Adding it
allows callers to pass None as the source state.
https://bugzilla.gnome.org/show_bug.cgi?id=664996
The builtin effects ClutterColorizeEffect, ClutterDesaturateEffect and
ClutterShaderEffect all have properties which only affect the
rendering of the final texture, not its contents. When these properties
are updated we should queue a repaint of the effect, not the actor, so
that we don't waste time repainting the contents of the offscreen
buffer.
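A property setter on such an effect would then look like this; a sketch,
with MyEffect being hypothetical:

  static void
  my_effect_set_tint (MyEffect           *effect,
                      const ClutterColor *tint)
  {
    effect->tint = *tint;

    /* repaint the effect, not the actor's contents */
    clutter_effect_queue_repaint (CLUTTER_EFFECT (effect));
  }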
https://bugzilla.gnome.org/show_bug.cgi?id=665052
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
There was an #ifdef'd section of code for profiling that was using the
wrong variable name, so it would not build.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Previously the offscreen effect was keeping track of the size of the
texture so that it could detect when a different size is requested and
create a new texture. However this breaks if a subclass overrides
create_texture to make the texture bigger because in that case the
size of the texture will always be different from the calculated size
of the actor. This patch makes it also track the size of the FBO that
was requested before being passed through create_texture(), and uses
that instead to detect when a new FBO is needed.
https://bugzilla.gnome.org/show_bug.cgi?id=665040
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
It should be possible to define markers in ClutterScript when
describing a ClutterTimeline.
The syntax is trivial:
  "markers" : [
    { "name" : "<marker-name>", "time" : <msecs> }
  ]
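A complete timeline definition would then look like this (values are
illustrative):

  {
    "type" : "ClutterTimeline",
    "id" : "timeline-1",
    "duration" : 2000,
    "markers" : [
      { "name" : "half-way", "time" : 1000 }
    ]
  }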
While at it, we should document it inside the API reference, as well
as fleshing out the ClutterTimeline description.
To allow language bindings to properly override Script.connect_signals()
they'll need access to Script.connect_signals_full().
Thanks to Jeremy Moles for reporting.
Since we have a _clutter_debug_message() function compiled in
unconditionally we have no further need for the equivalent conditional
version defined in clutter-profile.[ch]: we can simply do the work in
one function.
We still ship clutter_get_show_fps() and clutter_get_debug_enabled() as
public entry points. Yet another case of missing API review prior to the
1.0 release, so really the buck stops around my desk.
Let's deprecate these two useless functions, and reduce the API
footprint of Clutter.
This function should have never been made public in the first place; its
output depends on a configuration option of Clutter, and it's basically
useful only for internal debugging.
Make it consistent across the various build options (with or without
profiling enabled), and add a timestamp using the monotonic clock to
every debug message.
The clutter_get_timestamp() output depends on whether Clutter was
compiled with debugging support — it's meant to be used only by the
debugging notes, and it should not be used for anything else.
Instead of calling cogl_set_depth_test_enabled and
cogl_set_backface_culling_enabled, ClutterDeformEffect now uses the
experimental CoglPipeline API. Those global state functions will soon
be deprecated in Cogl and they are implemented by flushing a temporary
override pipeline which isn't ideal.
Using the new culling API we can also avoid having a separate buffer
of indices for the back of the texture by just changing the culling
mode to cull front faces instead of the back.
https://bugzilla.gnome.org/show_bug.cgi?id=663636
This changes ClutterDeformEffect to use a CoglAttributeBuffer with a
CoglPrimitive instead of the old CoglVertexBuffer. The old vertex
buffer code is now implemented in terms of the attribute buffer code
and it will eventually be deprecated. Using CoglPrimitives should be
slightly more efficient.
This also changes the struct we store the vertices in to
CoglVertexP3T2C4 instead of CoglTextureVertex. The latter is technically
compatible with neither vertex buffers nor attribute buffers, because it
contains a CoglColor whose internal members are private, so it is not
valid to assume it contains 4 bytes and use it as an attribute. It also
contains padding, so it ends up redundantly creating a larger buffer.
CoglTextureVertex is in the public API for the deform_vertex virtual, so
we still have to maintain it. Instead of directly manipulating the array
to upload, the application is now passed a stack-allocated temporary
struct which gets converted to a CoglVertexP3T2C4. This also means that
we can map the buffer as write-only and still let the application read
and write the vertex.
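The conversion amounts to something like the following; a sketch, with a
hypothetical helper name:

  static void
  store_vertex (const CoglTextureVertex *in,
                CoglVertexP3T2C4        *out)
  {
    out->x = in->x;
    out->y = in->y;
    out->z = in->z;
    out->s = in->tx;
    out->t = in->ty;

    /* the members of CoglColor are private, so go through the accessors */
    out->r = cogl_color_get_red_byte (&in->color);
    out->g = cogl_color_get_green_byte (&in->color);
    out->b = cogl_color_get_blue_byte (&in->color);
    out->a = cogl_color_get_alpha_byte (&in->color);
  }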
The paint debug code to draw line strips for the deform mesh was
previously trying to set a red source material. However this wasn't
working because the material color was being overwritten by the color
attribute in the vertex buffer. This patch fixes that by creating a
separate primitive for the lines and not adding the color
attribute. The lines code was also drawing both the front and back
indices. I don't think that entirely makes sense so I've just changed
it to draw only the front indices. Maybe painting both would make more
sense if backface culling was still enabled.
https://bugzilla.gnome.org/show_bug.cgi?id=663636
When invalidating the deform effect, we are invalidating the vertices
shaping the deformation of an actor. Therefore, there is no need to
trigger a redraw of the associated actor; we can just repaint the
effect.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=663720
* deprecate-default-stage:
evdev: do not associate device with stage
evdev: don't even process events without a default stage
docs: Note default stage deprecation in README
docs: Remove clutter_stage_get_default()
stage: Deprecate the default stage
script: Do not use clutter_stage_get_default()
cally/actor: Do not use the default stage as a fallback
Try to mop up the default stage mess
performance/*: Do not use clutter_stage_get_default()
interactive/*: Do not use clutter_stage_get_default()
Merge with a11y
micro-bench/*: Do not use clutter_stage_get_default()
accessibility/*: Do not use clutter_stage_get_default()
conform/*: Do not use clutter_stage_get_default()
The VBLANK environment variable is now handled universally in
clutter-main.c, as of commits e8562089 (main: Add a sync-to-vblank
global flag) and db211a21 (Remove per-backend CLUTTER_VBLANK envvar), so
remove these things here as well.
https://bugzilla.gnome.org/show_bug.cgi?id=663999
-Make the contents of config.h.win32.in more like config.h.in
-Define CLUTTER_INPUT_WIN32 accordingly (no GDK3 defines yet, until
GDK3 on Windows is more stable)
The evdev system is a bit different from other input systems in
Clutter because it's completely decoupled from anything graphical.
In the case of embedded devices with no proper windowing system, you
might not want to implicitly create a default stage when the first input
event is received.
This patch changes this behavior by not forwarding any events if there
is no default stage.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=651718
A lot of the example code in the cookbook and the API reference still
uses the default stage — sometimes as if it were a non-default one,
which once again demonstrates how the default stage was a flawed concept
that just confused people.
Using the default stage as a fallback is wrong in all circumstances.
In this specific case, if an actor is not associated to a stage then it
cannot possibly be the key focus.
The default stage was a neat concept when we started Clutter out,
somewhere in the Jurassic era; a singleton instance that gets created at
initialization time, and remains the same for the entire duration of the
process.
Worked well enough when Clutter was a small library meant to be used to
write fullscreen media browsers, but since the introduction of multiple
stages, and Clutter being used to create all sorts of applications, the
default stage is just a vestigial remainder of that past, like an
appendix; something that complicates the layout of the code and
introduces weird behaviour, so that you notice its existence only when
something goes wrong.
Some platforms we do support, though, only have one framebuffer, so it
makes sense for them to have only one stage.
At this point, the only sane thing to do is to go through the same code
paths on all platforms, and that code path is the stage instance
creation and initialization — i.e. clutter_stage_new() (or
g_object_new() with CLUTTER_TYPE_STAGE).
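In other words, the one portable pattern is:

  ClutterActor *stage = clutter_stage_new ();
  /* equivalent: g_object_new (CLUTTER_TYPE_STAGE, NULL) */

  clutter_actor_show (stage);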
For platforms that support multiple stages, nothing has changed: the stage
created by clutter_stage_get_default() will be set as the default one;
if nobody calls it, the default stage is never created, and it just
lives on as a meaningless check.
For platforms that only support one stage, clutter_stage_new() and
clutter_stage_get_default() will behave exactly the same the first time
they are called: both will create a stage, and set it as the default.
Calling clutter_stage_new() a second time is treated as a programmer
error, and will result in Clutter aborting. This is a behavioural change
because the existing behaviour of creating a new ClutterStage instance
with the same ClutterStageWindow private implementation is, simply put,
utterly braindamaged, and I should have *never* written it; I apologize
for it. In my defence, I didn't know any better at the time.
This is the first step towards the complete deprecation of
clutter_stage_get_default() and clutter_stage_is_default(), which will
come later.
Instead of implementing create_stage() and a constructor for
ClutterStageOSX, we can use the default implementations in
ClutterBackend, and spare us some code duplication.
Create the device manager during the event initialization, where it
makes sense.
This allows us to get rid of the per-backend get_device_manager()
virtual function, and just store the DeviceManager pointer into the
ClutterBackend structure.
All StageWindow implementations already have back pointers, but we need a
unified API to actually set them from the generic code path; we can use
properties on the StageWindow interface — though this requires fixing
all backends at the same time, to avoid GObject complaining.
Instead of piggybacking on the EGL backend, let's create a small
ClutterBackend for the CEx100 platforms. This allows us to handle the
CEx100-specific details in a much cleaner way.
All the functionality that ClutterBackendCogl provided has been moved
into ClutterBackend itself, so there is no need to have this class
around in the source.
Cogl-based backends can derive directly from ClutterBackend.
Don't replace create_context(): given that the X11 backend already uses
Cogl for the context creation, we can just provide the right data
structures ourselves.
Since we use Cogl for the context creation we can now provide a default
context creation that should just work, plus a couple of hooks to allow
plugging into the creation sequence for platforms supported by Cogl that
require special handling — like foreign displays or alpha-enabled swap
chains.
The various backends have now two choices: either replace the
create_context() in its entirety, or plug themselves into the default
context creation.
Input backends are, in some cases, independent from the windowing system
backends; we can initialize input handling using a model similar to what
we use for windowing backends, including an environment variable and
compile-/run-time checks.
This model allows us to remove the backend-specific init_events(), and
use a generic implementation directly inside the base ClutterBackend
class, thus further reducing the backend-specific code that every
platform has to implement.
This requires some minor surgery to every single backend, to make sure
that the function exposed to initialize the event loop is similar and
performs roughly the same operations.
Right now, we pass unrecognized hexadecimal formats through to Pango
when parsing colors from strings. Since we parse all possible formats
ourselves, we can do validation ourselves as well, and avoid the Pango
path.
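For example (illustrative):

  ClutterColor color;

  /* parsed and validated by Clutter itself, without the Pango fallback */
  if (!clutter_color_from_string (&color, "#ff0000ff"))
    g_warning ("invalid color");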
Previously, if there was whitespace between "l" and the comma before the
alpha value, parsing would fail. This patch allows that whitespace,
making it consistent with whitespace being allowed everywhere else.
https://bugzilla.gnome.org/show_bug.cgi?id=663594
These methods are shorthands for accessing the position and size,
which are already shorthands for accessing the various dimensional
and positional attributes. Plus, they use ClutterGeometry, which is a
fairly bad data type for a rectangle.
We still use ClutterGeometry internally in a couple of places, but we
should really move away from that flawed rectangle data type, and use
the Cairo one.
Sadly, we still have some public API that we cannot remove yet.
Setting the default frame rate does not do anything even remotely
useful, unless synchronization to the vertical refresh rate is also
disabled - which can only be done through an environment variable or
the configuration file. Having a programmatic way to change the
default frame rate is, thus, completely pointless.
Changing the default frame rate through environment variable and
configuration file is still allowed.
A GDK backend for Clutter was added in commits 610a9c17
(Add A new GDK backend) and f14cbf5b (gdk: Allow disabling event retrieval)
These are not included in the Win32 builds for the moment as GDK3 is
quite unstable on Windows at this time
-Update clutter-version.h.win32.in to reflect the state of
clutter-version.h.in commit 21a24c86 (updated in commit 8249e488)
-Also add clutter_check_windowing_backend to clutter.symbols as a result.
The clutter-stage.h header still has a bunch of macros that have, for
reasons unknown[*], survived the 1.0 API cut and have long since been
deprecated. Let's hide them under the deprecated/ carpet and let us
never speak of them ever again.
[*] pretty sure alcohol or other psychotropic substances were involved
but I take the 5th on that.
Instead of defining new symbols for the windowing systems enabled at
configure time, we can reuse the same symbols for both the compile time
and run time checks, e.g.:
#ifdef CLUTTER_WINDOWING_X11
  if (clutter_check_windowing_backend (CLUTTER_WINDOWING_X11))
    {
      /* use the clutter_x11_* API */
    }
  else
#endif
#ifdef CLUTTER_WINDOWING_WIN32
  if (clutter_check_windowing_backend (CLUTTER_WINDOWING_WIN32))
    {
      /* use the clutter_win32_* API */
    }
#endif
This scheme allows us to ensure that the input system namespace is free
for us to use and select at run time in later versions of Clutter.
We need debugging notes to see what's happening when handling events.
We need to queue a (clipped) redraw when receiving a GDK_EXPOSE event.
We need to check the device (both master and source) of the event using
the GdkEvent API, and pass them to the ClutterEvent using the
corresponding Clutter API.