ClutterBox functionality has been implemented by ClutterActor, and
proxied by the Box subclass; with the removal of the abstract bit on
ClutterActor, we can safely deprecate ClutterBox.
ClutterActor now has all the API and capabilities for being a concrete
class:
- layout management, through delegation
- container implementation and API
- background color
This means that a simple scene can be built straight out of actors
without using subclasses except for the Stage.
This is the first step towards the deprecation of most of the Actor
subclasses provided by Clutter.
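As a minimal sketch of what this enables, assuming the actor API named
above (clutter_actor_new(), clutter_actor_add_child(), and
clutter_actor_set_layout_manager()), a simple scene can now look like:
  static ClutterActor *
  build_scene (void)
  {
    /* only the Stage is a subclass; everything else is a plain actor */
    ClutterActor *stage = clutter_stage_new ();
    ClutterActor *box = clutter_actor_new ();
    /* layout management, through delegation */
    clutter_actor_set_layout_manager (box, clutter_box_layout_new ());
    /* container API implemented directly by ClutterActor */
    clutter_actor_add_child (box, clutter_actor_new ());
    clutter_actor_add_child (stage, box);
    return stage;
  }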
ClutterActor can do better by default than just giving up immediately.
An actor can check for the clip region, and for its children's paint
volume, for instance.
Just these two should give us a better default implementation for newly
written code.
The minimum preferred size of a Flow layout manager is the size of a
column or a row, as the whole point of the layout policy enforced by
the Flow layout manager is to reflow when needed.
ClutterBox's color and color-set properties can be implemented as
proxies for the ClutterActor's newly added background-color and
background-color-set properties, respectively.
This also allows us to get rid of the paint() implementation inside
ClutterBox altogether.
Each actor should have a background color property, disabled by default.
This property allows us to cover 99% of the use cases for
ClutterRectangle, and brings us one step closer to being able to
instantiate ClutterActor directly.
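As an illustrative sketch of the use case being covered (the setter name
clutter_actor_set_background_color() is assumed from the property named
above):
  /* what used to require a ClutterRectangle */
  ClutterColor red = { 0xff, 0x00, 0x00, 0xff };
  ClutterActor *rect = clutter_actor_new ();
  clutter_actor_set_background_color (rect, &red);
  clutter_actor_set_size (rect, 200, 100);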
Make sure that overriding Container and calling
clutter_actor_add_child() results in the same sequence of operations
as the current set_parent() + queue_relayout() + signal_emit pattern.
Existing containers can continue using:
  clutter_actor_set_parent (child, CLUTTER_ACTOR (container));
  clutter_actor_queue_relayout (CLUTTER_ACTOR (container));
  g_signal_emit_by_name (container, "actor-added", child);
and newly written containers overriding Container.add() can simply call:
  clutter_actor_add_child (CLUTTER_ACTOR (container), child);
instead.
We need to queue a relayout when removing a visible child from a visible
parent.
We also need to insert the child at the right position (depending on the
depth) so that newly added actors will be painted on top.
Remove four more floats from ClutterActorPrivate.
The fixed minimum and natural sizes should be stored inside the
ClutterLayoutInfo structure, along with the fixed position.
Add a failsafe against a NULL parent, to avoid a segfault when calling
clutter_actor_allocate() on the Stage.
We also need to deal with floating point values: straight comparison is
not going to cut it.
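An illustrative sketch of the kind of tolerance-based comparison meant
here (the helper below is hypothetical, not actual Clutter API):
  #include <glib.h>
  #include <math.h>
  /* compare two allocation coordinates with a small tolerance,
   * instead of relying on exact float equality */
  static inline gboolean
  float_near (float a, float b)
  {
    return fabsf (a - b) < 0.0001f;
  }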
ClutterActor has various properties controlling the allocation:
- x-align, y-align
- margin-top, margin-bottom, margin-left, margin-right
These properties should adjust the ClutterActorBox passed from the
parent actor to its children when calling clutter_actor_allocate(),
so that the child can just allocate its children at the right origin
with the right available size.
The actor class should be able to hold the margin offsets like it does
for expand and alignment flags.
Instead of filling the private data structure with data, we should be
able to use an ancillary data structure, given that all this data is
optional and might never be set in the first place.
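A sketch of how these properties are meant to be used from application
code; g_object_set() keeps the example neutral about the exact setter
functions, and CLUTTER_ACTOR_ALIGN_CENTER is assumed as one of the
alignment values:
  /* the parent adjusts the allocation it hands to this child using
   * the margins and alignment set on the child itself */
  g_object_set (child,
                "margin-top", 12.0,
                "margin-left", 12.0,
                "x-align", CLUTTER_ACTOR_ALIGN_CENTER,
                NULL);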
In case no layout manager was set during construction, we fall back to a
FixedLayout. The FixedLayout has the property of making the fixed
positioning and sizing API, as well as the various Constraints, work
out of the box.
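A short sketch of what "out of the box" means here, assuming the current
fixed positioning and sizing API:
  /* no layout manager set: the implicit FixedLayout honours the
   * fixed position and size; 'parent' is any plain actor */
  ClutterActor *child = clutter_actor_new ();
  clutter_actor_set_size (child, 100, 100);
  clutter_actor_set_position (child, 50, 50);
  clutter_actor_add_child (parent, child);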
Now that ClutterActor implements the Container contract we can actually
defer the size negotiation to a ClutterLayoutManager directly from the
default implementation of the Actor's virtual functions.
We can provide most of the ClutterContainer implementation directly
within ClutterActor — basically removing the need of having the
Container interface in the first place. For backward compatibility
reasons we can keep the interface, but let Actor implement it directly.
Let's try and move away from the reverse implicit scene graph build API,
which we borrowed from GTK+, towards a more traditional node/child API.
The set_parent()/unparent() API is confusing, unless you know the
history; having an add_child()/remove_child() method pair makes it more
explicit.
We can easily implement the old set_parent()/unparent() pair in terms of
the newly added add_child()/remove_child() one.
Enclose the check inside an #ifdef CLUTTER_ENABLE_DEBUG ... #endif, so
that we can compile it out; also, use g_string_append() instead of the
g_string_append_printf() function, given that we're just concatenating
strings.
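For illustration, the resulting pattern looks roughly like this (not the
actual Clutter code):
  #ifdef CLUTTER_ENABLE_DEBUG
    /* only literal strings are concatenated, so the plain append
     * is enough; no formatting is needed */
    g_string_append (buf, "in-destruction ");
  #endif /* CLUTTER_ENABLE_DEBUG */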
The ::redraw virtual function was a throwback from olden times, and has
been thoroughly replaced by the equivalent vfunc on the StageWindow
interface. We can safely remove it, now, and simplify the flow of the
redraw code inside ClutterStage.
Semantic changes to Wayland mean that we cannot rely on the compositor
setting a pointer buffer for us if we set it to nil. The first part of
fixing this is to create an SHM buffer containing the bytes for our
cursor.
The best way to do this currently is to load the cursor from the
well-known location where Weston installs its cursor images. The code to
implement this was derived from the Wayland backend in GTK+.
Currently, we're emitting the ClutterActor::destroy signal at the end of
the dispose implementation - right before we chain up to the parent
implementation.
The point of emission means that ::destroy signal handlers can only use
the actor pointer itself - as the actor's state will have been mostly
cleared by the time application code can run. This (undocumented)
behaviour severely
limits the amount of things you can do inside a ::destroy signal
handler, thus making the ::destroy signal just a weird weak reference,
instead of a proper way to break application reference cycles.
Given that it only relaxes some of the conditions, this change should be
safe - obviously, if anything happens, we'll back it out; the
conformance and interactive tests confirm that, for common usage
patterns, it does not break existing code.
GLib has a nice, atomic object clearing function that allows us to drop
code looking like:
  if (priv->object != NULL)
    {
      g_object_unref (priv->object);
      priv->object = NULL;
    }
from the ::dispose implementation.
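The function in question is presumably g_clear_object(), which collapses
that block into a single call:
  /* unrefs priv->object, if set, and resets the pointer to NULL */
  g_clear_object (&priv->object);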
I always have to think twice before returning a value from an event
signal handler, and I've been writing them for the past 10 years, so
it's conceivable that application developers who start with Clutter
will find them confusing as well.
Simple symbolic names should be easier to use.
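A sketch of an event handler using the symbolic names, assuming they are
CLUTTER_EVENT_STOP and CLUTTER_EVENT_PROPAGATE:
  static gboolean
  on_button_press (ClutterActor *actor,
                   ClutterEvent *event,
                   gpointer      user_data)
  {
    if (clutter_event_get_button (event) == 1)
      return CLUTTER_EVENT_STOP;        /* handled: stop the emission */
    return CLUTTER_EVENT_PROPAGATE;     /* let the event keep bubbling */
  }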
The depth cueing through GL fog has been broken for a long while, now.
The fog-related API in Clutter is ridiculously limited, and harks back
to simpler times; the ClutterFog structure is not enough to express all
the GL fog machinery, and required application code to connect to the
Stage's paint implementation and drop into Cogl directly.
Additionally, the fixed pipeline fog machinery in GL simply does not
work with premultiplied alpha, unless you use a shader - and in that
case it would only work for textures. Let's deprecate it, and simply
do nothing if somebody has the brilliant idea of setting the
:use-fog property to TRUE.
Sadly, we need to remove the G_GNUC_NULL_TERMINATED annotation from
the ClutterBox packing API; the compiler will otherwise emit a warning
for perfectly legal statements like:
clutter_box_pack (box, child, NULL);
because of the missing sentinel.
See also: g_object_new().
GLib has a "diagnostic mode" switch that can be checked to enable debug
messages on deprecated properties and signals, as these are purely
run-time constructs, and as such cannot be caught by compiler warnings.
The diagnostic mode is toggled by a simple environment variable, and
can be used to ease porting of application code.
We can use something similar to mark deprecated virtual functions and
other run-time constructs; to avoid collisions, we should use our own
environment variable, CLUTTER_ENABLE_DIAGNOSTIC.
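A hypothetical sketch of the kind of run-time check this enables; the
message and the place it is emitted from are made up for illustration:
  if (g_getenv ("CLUTTER_ENABLE_DIAGNOSTIC") != NULL)
    g_warning ("A deprecated virtual function has been overridden; "
               "please port the code to the replacement API");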
Instead of using a PaintVolume for a 2D region, and an internal
function, use the newly added queue_redraw_with_clip() method.
This removes the last bit of internal API usage in the
ClutterX11TexturePixmap actor.
https://bugzilla.gnome.org/show_bug.cgi?id=660997
Add a public version of the clipped queue redraw, using a 2D clip. This
allows implementing actors with trackable 2D clipped regions, like the
ClutterX11TexturePixmap, outside of Clutter itself.
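A sketch of calling the new public entry point, assuming it takes a 2D
integer rectangle (cairo_rectangle_int_t):
  /* only a 64x64 region at the actor's origin is invalidated */
  cairo_rectangle_int_t clip = { 0, 0, 64, 64 };
  clutter_actor_queue_redraw_with_clip (actor, &clip);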
https://bugzilla.gnome.org/show_bug.cgi?id=660997
The Wayland protocol now has events that represent when a pointer enters
a surface and when it leaves again.
For leave events the surface is not set in the event; for enter events it
is. Simply use this to determine whether to emit CLUTTER_ENTER or
CLUTTER_LEAVE.
Previously the wl_shell object held the methods that allowed a client to
request changes to the shell's state associated with a surface. These methods
have now been moved to a wl_shell_surface object.
This change allows configure events to be handled inside the stage rather than
the backend.
This makes the option_xkb_* symbols declared for the evdev device manager
and the wayland device manager private so we don't get symbol collisions
if both of these backends are enabled.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This changes how clutter-device-manager-evdev.h is included, fixing a
build problem caused by the header not being found.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This updates the evdev input backend code to compile and also updates
the code to not refer to the default stage and instead check for a
stage to be associated with the input device. If no stage is currently
associated with a device generating events then the events are dropped
on the floor.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds internal API to query the stage currently associated with a
given input device, so that input backends don't need to refer to the
default stage.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a --enable-wayland-compositor configure option which will add
support for a ClutterWaylandSurface actor which can be used to aid in
writing Wayland compositors using Clutter by providing a ClutterActor to
represent Wayland client surfaces.
Notably this configure option isn't tied into any particular backend
since conceptually the compositor support can be used in conjunction
with any clutter backend that has corresponding Cogl support.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This updates Wayland support in line with upstream changes to the Wayland
API and protocol.
This update means we no longer use the Cogl stub winsys so a lot of code
that had to manually interact with EGL and implement a swap_buffers
mechanism could be removed and instead we now depend on Cogl to handle
those things for us.
This update also adds an input device manager consistent with other
clutter backends.
Note: to use the client side "wayland" clutter backend you need to have
built Cogl with --enable-wayland-egl-platform. If Cogl has been built
with support for multiple winsys backends then you should run
applications with COGL_RENDERER=EGL in the environment.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Generate a .bat file that creates clutter-enum-types.[ch] for use
during the Visual C++ build process; this will greatly simplify the
maintenance of the VS build files as public headers are added or removed
during development.
ClutterInputDeviceX11 has been made private, so we cannot access it from
outside of clutter-input-device-core-x11.c. We should have simple
accessors for the min/max keycode, which is the only detail that we use.
Clutter-WARNING **: Unable to compile the GLSL
shader: Fragment shader failed to compile with the following errors:
The attached patch (against current git) prints out more information,
which makes it easier to answer user feedback.
https://bugzilla.gnome.org/show_bug.cgi?id=664252
While working through the Python3/pygobject bindings, I came across a missing
(allow-none) in clutter_state_set_key(). Adding it allows None to be
passed as the source_target.
https://bugzilla.gnome.org/show_bug.cgi?id=664996
The builtin effects ClutterColorizeEffect, ClutterDesaturateEffect and
ClutterShaderEffect all have properties which only affect the
rendering of the final texture, not its contents. When these
properties are updated we should queue a repaint of the effect, not
the actor, so that we don't waste time repainting the contents of the
offscreen buffer.
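A sketch of the pattern, in a property setter of one of these effects;
clutter_effect_queue_repaint() is existing Clutter API, while the setter
itself is purely illustrative:
  static void
  my_effect_set_factor (ClutterShaderEffect *effect,
                        gdouble              factor)
  {
    /* ... store the factor used when painting the target texture ... */
    /* only the final rendering changes, not the offscreen contents:
     * repaint the effect, not the actor */
    clutter_effect_queue_repaint (CLUTTER_EFFECT (effect));
  }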
https://bugzilla.gnome.org/show_bug.cgi?id=665052
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
There was an #ifdef'd section of code for profiling that was using the
wrong variable name so it would not build.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Previously the offscreen effect was keeping track of the size of the
texture so that it could detect when a different size is requested and
create a new texture. However this breaks if a subclass overrides
create_texture to make the texture bigger because in that case the
size of the texture will always be different from the calculated size
of the actor. This patch makes it also track the size of the FBO that
was requested before being passed through create_texture(), and uses
that instead to detect when a new FBO is needed.
https://bugzilla.gnome.org/show_bug.cgi?id=665040
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
It should be possible to define markers in ClutterScript when
describing a ClutterTimeline.
The syntax is trivial:
  "markers" : [
    { "name" : "<marker-name>", "time" : <msecs> }
  ]
While at it, we should document the syntax inside the API reference, as
well as flesh out the ClutterTimeline description.
To allow language bindings to properly override Script.connect_signals()
they'll need access to Script.connect_signals_full().
Thanks to Jeremy Moles for reporting.
Since we have a _clutter_debug_message() function compiled in
unconditionally we have no further need for the equivalent conditional
version defined in clutter-profile.[ch]: we can simply do the work in
one function.
We still ship clutter_get_show_fps() and clutter_get_debug_enabled() as
public entry points. Yet another case of missing API review prior to the
1.0 release, so really the buck stops at my desk.
Let's deprecate these two useless functions, and reduce the API
footprint of Clutter.
This function should have never been made public in the first place; its
output depends on a configuration option of Clutter, and it's basically
useful only for internal debugging.
Make it consistent across the various build options (with or without
profiling enabled), and add a timestamp using the monotonic clock to
every debug message.
The clutter_get_timestamp() output depends on whether Clutter was
compiled with debugging support — it's meant to be used only by the
debugging notes, and it should not be used for anything else.
Instead of calling cogl_set_depth_test_enabled() and
cogl_set_backface_culling_enabled(), ClutterDeformEffect now uses the
experimental CoglPipeline API. Those global state functions will soon
be deprecated in Cogl, and they are implemented by flushing a temporary
override pipeline, which isn't ideal.
Using the new culling API we can also avoid having a separate buffer
of indices for the back of the texture by just changing the culling
mode to cull front faces instead of the back.
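A hedged sketch of the culling change, with the function and enumeration
names assumed from the experimental CoglPipeline API mentioned above:
  /* when painting the back of the texture, cull the front faces and
   * reuse the same indices instead of a second index buffer */
  cogl_pipeline_set_cull_face_mode (pipeline,
                                    COGL_PIPELINE_CULL_FACE_MODE_FRONT);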
https://bugzilla.gnome.org/show_bug.cgi?id=663636
This changes ClutterDeformEffect to use a CoglAttributeBuffer with a
CoglPrimitive instead of the old CoglVertexBuffer. The old vertex
buffer code is now implemented in terms of the attribute buffer code
and it will eventually be deprecated. Using CoglPrimitives should be
slightly more efficient.
This also changes the struct we store the vertices in to
CoglVertexP3T2C4 instead of CoglTextureVertex. The latter is
technically not compatible with either vertex buffers or attribute
buffers because it contains a CoglColor and the internal members of
that are private so it is not valid to assume it contains 4 bytes and
use that as an attribute. Also it contains padding so it ends up
redundantly creating a larger buffer. CoglTextureVertex is in the
public API for the deform_vertex virtual so we still have to maintain
that. Instead of directly manipulating the array to upload, the
application is now passed a stack allocated temporary struct which
gets converted to a CoglVertexP3T2C4. This also means that we can map
the buffer as write only and still let the application read-write the
vertex.
The paint debug code to draw line strips for the deform mesh was
previously trying to set a red source material. However this wasn't
working because the material color was being overwritten by the color
attribute in the vertex buffer. This patch fixes that by creating a
separate primitive for the lines and not adding the color
attribute. The lines code was also drawing both the front and back
indices. I don't think that entirely makes sense so I've just changed
it to draw only the front indices. Maybe painting both would make more
sense if backface culling was still enabled.
https://bugzilla.gnome.org/show_bug.cgi?id=663636
When invalidating the deform effect, we are invalidating the vertices
shaping the deformation of an actor. Therefore, there is no need to
trigger a redraw of the associated actor; we can just repaint the
effect.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=663720
* deprecate-default-stage:
evdev: do not associate device with stage
evdev: don't even process events without a default stage
docs: Note default stage deprecation in README
docs: Remove clutter_stage_get_default()
stage: Deprecate the default stage
script: Do not use clutter_stage_get_default()
cally/actor: Do not use the default stage as a fallback
Try to mop up the default stage mess
performance/*: Do not use clutter_stage_get_default()
interactive/*: Do not use clutter_stage_get_default()
Merge with a11y
micro-bench/*: Do not use clutter_stage_get_default()
accessibility/*: Do not use clutter_stage_get_default()
conform/*: Do not use clutter_stage_get_default()
The VBLANK environment variable is now handled universally in clutter-main.c
as in commits e8562089 (main: Add a sync-to-vblank global flag) and
db211a21 (Remove per-backend CLUTTER_VBLANK envvar), so remove these things
here as well.
https://bugzilla.gnome.org/show_bug.cgi?id=663999
The VBLANK environment variable is now handled universally in clutter-main.c
as in commits e8562089 (main: Add a sync-to-vblank global flag) and
db211a21 (Remove per-backend CLUTTER_VBLANK envvar), so remove these things
here as well.
- Make the contents of config.h.win32.in more like config.h.in
- Define CLUTTER_INPUT_WIN32 accordingly (no GDK3 defines yet, until
  GDK3 on Windows is more stable)
The evdev system is a bit different from other input systems in
Clutter because it's completely decoupled from anything graphical.
In the case of embedded devices with no proper windowing system, you
might want to not implicitly create a default stage when you're
receiving the first input event.
This patch changes this behavior by not forwarding any event if you
don't have a default stage.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=651718
A lot of the example code in the cookbook and the API reference still
uses the default stage — sometimes as if it were a non-default one,
which once again demonstrates how the default stage was a flawed concept
that just confused people.
Using the default stage as a fallback is wrong in all circumstances.
In this specific case, if an actor is not associated to a stage then it
cannot possibly be the key focus.
The default stage was a neat concept when we started Clutter out,
somewhere in the Jurassic era; a singleton instance that gets created at
initialization time, and remains the same for the entire duration of the
process.
It worked well enough when Clutter was a small library meant to be used to
write fullscreen media browsers, but since the introduction of multiple
stages, and Clutter being used to create all sorts of applications, the
default stage is just a vestigial remainder of that past, like an
appendix; something that complicates the layout of the code and
introduces weird behaviour, so that you notice its existence only when
something goes wrong.
Some platforms we do support, though, only have one framebuffer, so it
makes sense for them to have only one stage.
At this point, the only sane thing to do is to go through the same code
paths on all platforms, and that code path is the stage instance
creation and initialization — i.e. clutter_stage_new() (or
g_object_new() with CLUTTER_TYPE_STAGE).
For platforms that support multiple stages, nothing has changed: the stage
created by clutter_stage_get_default() will be set as the default one;
if nobody calls it, the default stage is never created, and it just
lives on as a meaningless check.
For platforms that only support one stage, clutter_stage_new() and
clutter_stage_get_default() will behave exactly the same the first time
they are called: both will create a stage, and set it as the default.
Calling clutter_stage_new() a second time is treated as a programmer
error, and will result in Clutter aborting. This is a behavioural change
because the existing behaviour of creating a new ClutterStage instance
with the same ClutterStageWindow private implementation is, simply put,
utterly braindamaged; I should *never* have written it, and I
apologize for it. In my defence, I didn't know any better at the time.
This is the first step towards the complete deprecation of
clutter_stage_get_default() and clutter_stage_is_default(), which will
come later.
Instead of implementing create_stage() and a constructor for
ClutterStageOSX, we can use the default implementations in
ClutterBackend, and spare us some code duplication.
Create the device manager during the event initialization, where it
makes sense.
This allows us to get rid of the per-backend get_device_manager()
virtual function, and just store the DeviceManager pointer into the
ClutterBackend structure.
All StageWindow implementations already have back pointers, but we need a
unified API to actually set them from the generic code path; we can use
properties on the StageWindow interface — though this requires fixing
all backends at the same time, to avoid GObject complaining.
Instead of piggybacking on the EGL backend, let's create a small
ClutterBackend for the CEx100 platforms. This allows us to handle the
CEx100-specific details in a much cleaner way.
All the functionality that ClutterBackendCogl provided has been moved
into ClutterBackend itself, so there is no need to have this class
around in the source.
Cogl-based backends can derive directly from ClutterBackend.
Don't replace create_context(): given that the X11 backend already uses
Cogl for the context creation, we can just provide the right data
structures ourselves.
Since we use Cogl for the context creation we can now provide a default
context creation that should just work, plus a couple of hooks to allow
plugging into the creation sequence for platforms supported by Cogl that
require special handling — like foreign displays or alpha-enabled swap
chains.
The various backends now have two choices: either replace
create_context() in its entirety, or plug themselves into the default
context creation.