The comment says that we're going to load textures in a loop for as long
as we still have work to do, or until one iteration takes more than 5
milliseconds, to avoid blowing our frame budget; the check, however, is
for 5 seconds, which is hardly a sensible value.
We remove 2 pixels from the height of the cursor, but we should also
remove the same amount from the position on the y axis, so that the
cursor caret appears centered in the allocated height.
https://bugzilla.gnome.org/show_bug.cgi?id=655491
ClutterScript should be able to automatically call gettext() and friends
on strings loaded from a UI definition, prior to passing the string to
the object it is constructing.
The basic implementation is trivial:
- set a translation domain on the ClutterScript instance
- mark the translatable strings inside the JSON data, like:
    "property" : {
      "translatable" : true,
      "string" : "a translatable string"
    }
The hard part is now getting the tools we use to extract the
translatable strings to understand the JSON format we use inside
ClutterScript.
Let's keep a budget of 16.6 milliseconds per frame, and reduce it by the
amount of time spent in each phase of the frame processing. If any phase
goes over the allocated budget then we use the diagnostic mode
facilities to warn the app developer.
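A minimal sketch of the idea, assuming the budget is tracked with the
monotonic clock (the phase hook and the warning text are purely
illustrative):

  gint64 budget = 16667; /* microseconds per frame */
  gint64 start = g_get_monotonic_time ();

  run_layout_phase (stage); /* hypothetical phase hook */

  budget -= g_get_monotonic_time () - start;
  if (budget < 0)
    g_warning ("The layout phase went over the frame budget");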
It is sometimes useful to have better control over when a repaint
function is called. Currently, all repaint functions are called prior to
the stages' update phase of the frame processing.
We can introduce flags to represent the point in the frame update
process at which we wish Clutter to call the repaint function.
As a bonus, we can also add a flag that causes adding a repaint function
to spin the master clock.
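A sketch of what this could look like from the application's point of
view; the flag names and the _full() variant are assumptions, not final
API:

  /* call my_repaint_func() after the paint cycle, and spin the
   * master clock as soon as the function is added */
  clutter_threads_add_repaint_func_full (CLUTTER_REPAINT_FLAGS_POST_PAINT |
                                         CLUTTER_REPAINT_FLAGS_QUEUE_REDRAW_ON_ADD,
                                         my_repaint_func,
                                         my_data, NULL);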
In theory, handlers connected to the ::allocation-changed signal may be
able to modify the actor's real allocation and allocation flags,
especially now that we use STATIC_SCOPE; let's avoid this, so that we
don't regret it later.
The show and hide implementation inside ClutterStage ended up being
recursive, and the hide implementation would actually show the children
of the stage unconditionally.
Whoopsie.
The ActorBox passed to the ::allocation-changed signal should be
annotated as STATIC_SCOPE, given that it's a pointer to a structure
inside ClutterActorPrivate - hence there's no risk of it actually being
freed from a signal handler. This allows the GSignal machinery to avoid
a costly copy/free for each signal emission.
If the redraw entry is not cleared, queueing a redraw from a signal
handler could reinsert the same object in the stage redraw list,
causing the segfault later (as the object is immediately freed)
https://bugzilla.gnome.org/show_bug.cgi?id=671173
This just adds some padding pointers so that we can later add more
virtual functions without breaking ABI.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Clutter applications using evdev are typically fullscreen applications
associated with a single virtual terminal. When switching away from
the application's associated tty, Clutter should stop managing all
evdev devices, and re-probe for devices when the application regains
focus by switching back to the tty. To facilitate this, this patch
adds the clutter_evdev_release_devices() and clutter_evdev_reclaim_devices()
functions.
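A sketch of the intended usage from an application's VT switching
handlers (the handlers themselves are hypothetical):

  static void
  on_vt_leave (void)
  {
    /* stop managing evdev devices while another VT owns the input */
    clutter_evdev_release_devices ();
  }

  static void
  on_vt_enter (void)
  {
    /* re-probe the devices and start managing them again */
    clutter_evdev_reclaim_devices ();
  }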
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
When using the Wayland backend this sets a constraint that the
CoglRenderer selects the Wayland EGL winsys.
When a Wayland compositor display is set it now also sets a constraint
that the renderer should use EGL, because only EGL renderers will set up
the required wl_drm global object.
The X11 backend now sets the X11 constraint.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
The core input devices when XInput doesn't work were being created as
generic ClutterInputDevices instead of ClutterInputDeviceX11s. This
meant the keycode_to_evdev virtual wouldn't work.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a virtual function to ClutterInputDevice to translate a
keycode from the hardware_keycode member of ClutterKeyEvent to an
evdev keycode. The function can fail so that input backends that don't
have a sensible way to translate to evdev keycodes can return FALSE.
There are implementations for evdev, wayland and X. The X
implementation assumes that the X server is using an evdev driver in
which case the hardware keycodes are the evdev codes plus 8.
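A sketch of how a caller could use it, assuming the virtual is exposed
through a clutter_input_device_keycode_to_evdev() wrapper:

  guint evdev_code;

  if (clutter_input_device_keycode_to_evdev (event->device,
                                             event->hardware_keycode,
                                             &evdev_code))
    handle_evdev_keycode (evdev_code); /* hypothetical consumer */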
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Because evdev isn't associated with the display system, it doesn't
have any easy way to associate an input device with a stage.
Previously Clutter would never set a stage for an input device and
leave it up to the application to set it. To make it easier for
applications which just have a single fullscreen stage (which is
probably the most common use case for evdev) the device manager now
associates all input devices with the first stage that is created
unless something has already set a stage.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
The shm buffer format enum values were renamed and the explicitly
premultiplied format was dropped since it's now assumed if the buffer
has an alpha component then it's premultiplied.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
The pick method doesn't do anything special over the default pick
method provided by ClutterActor so there's no need to implement it.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a signal that's emitted whenever a wayland surface is damaged
that allows sub-classes to override the default handler to change
how clipped redraws are queued if the sub-class doesn't simply draw
a rectangle. The signal can also be used just to track damage.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a "cogl-texture" gobject property so that a compositor may
listen for notifications of changes to the texture used to paint.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This patch renames the width/height properties to
surface-width/surface-height so that they won't override the
width/height properties of ClutterActor which have different
semantics.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a NEEDS_XKB_UTILS automake conditional that's set to true if
either the wayland backend or the evdev input backend is enabled, since
they both depend on clutter-xkb-utils.c. We need to avoid listing the
file twice, since that leads to duplicate symbols and the build fails.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
When a new buffer is attached and we update the width and height
properties for the surface, we now also call clutter_actor_set_size().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a clutter_wayland_surface_get_surface() function for querying
the struct wl_surface * associated with a ClutterWaylandSurface.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This exposes a clutter_wayland_surface_set_surface() function. The
implementation ignores requests to re-set the same surface, and now has
code to clean up old surface state before setting the new surface.
(Previously the surface was construct-only, so this wasn't necessary.)
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
When disposing a ClutterWaylandSurface we now make sure to unref any
pipeline we created and unref any surface buffer textures we created.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
There was a GArray member named damage that wasn't being used which this
patch removes.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
If wayland compositor support has been enabled then we make sure to
install the corresponding public headers and a
clutter-wayland-compositor.pc pkgconfig file.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
We currently check for the IN_DESTRUCTION flag inside the
add_child_internal() function.
This check disallows calling methods that change the stacking order
within the destruction sequence, by triggering a critical warning first,
and leaving the actor in an undefined state, which then ends up being
caught by an assertion.
The reproducible sequence is:
- actor gets destroyed;
- another actor, linked to the first, will try to change the
stacking order of the first actor;
- changing the stacking order is a composite operation composed of
the following steps:
1. ref() the child;
2. remove_child_internal(), which removes the reference;
3. add_child_internal(), which adds a reference;
- the state of the actor is not changed between (2) and (3), as
it could be an expensive recomputation;
- if (3) bails out, then the actor is in an undefined state, but
still alive;
- the destruction sequence terminates, but the actor is unparented
while its state indicates being parented instead.
- assertion failure.
The obvious fix would be to decompose each set_child_*_sibling() method
into proper remove_child()/add_child(), with state validation; this may
cause excessive work, though, and trigger a cascade of other bugs in
code that assumes that a change in the stacking order is an atomic
operation.
Another potential fix is to just remove this check here, and let code
doing stacking order changes inside the destruction sequence of an actor
continue doing the work.
The third fix is to silently bail out early from every
set_child_*_sibling() and set_child_at_index() method, and avoid doing
work.
I have a preference for the second solution, since it involves the least
amount of work, and the least amount of code duplication.
See bug: https://bugzilla.gnome.org/show_bug.cgi?id=670647
This solves a crash in GNOME Shell: computing the extents for some
StWidgets could lead to calling st_widget_get_theme_node(), and it is a
fatal error to call this on a widget that has not been added to a stage.
Make it like the clutter-version.h.in template. Since we don't have
Windows-specific items in here (such as CLUTTER_FLAVOUR), perhaps we
could put the dllexport stuff in clutter-version.h.in, where it can be
used when necessary, and this file could go away.
GLib introduced macros that allow defining the lower and upper bounds
of the API to be used by application code.
The lower bound defines the minimum version that will trigger
deprecation warnings; the upper bound defines the maximum version that
will trigger compiler warnings for unavailable symbols.
This scheme allows gradually porting application code to a new version
of the API, especially in case of resynchronization after multiple
development cycles.
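For illustration, this is how the GLib macros are meant to be used; the
Clutter equivalents would follow the same pattern:

  /* warn when using symbols deprecated in or before 2.30, and when
   * using symbols introduced after 2.30 */
  #define GLIB_VERSION_MIN_REQUIRED GLIB_VERSION_2_30
  #define GLIB_VERSION_MAX_ALLOWED  GLIB_VERSION_2_30

  #include <glib.h>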
Now that ClutterActor has a default paint volume, subclasses may wish
to retrieve it without chaining up to the parent's implementation of
the get_paint_volume() function.
The get_default_paint_volume() returns a ClutterPaintVolume pointer
to the paint volume as computed by the default implementation of the
get_paint_volume() virtual function; it can only be used immediately,
as it's not guaranteed to survive across multiple frames.
Creating PaintVolume instances is not possible, and it's not recommended
anyway. It is, though, sometimes necessary to union paint volumes,
especially with 2D boxes.
Clutter should provide a simple convenience function that allows
unioning volumes to boxes in a moderately efficient way.
https://bugzilla.gnome.org/show_bug.cgi?id=670021
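A tentative sketch, assuming the convenience function ends up looking
like clutter_paint_volume_union_box(), with volume and child coming from
the caller:

  ClutterActorBox box;

  clutter_actor_get_allocation_box (child, &box);

  /* union the child's 2D box into the volume we are building,
   * without creating a temporary ClutterPaintVolume */
  clutter_paint_volume_union_box (volume, &box);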
It should be possible to adapt the abicheck.sh script so that it
actually tests the ABI of libclutter-1.0.so taking into account
the backends that were compiled into Clutter, and avoid expected
failures if Clutter was not built with a specific backend.
https://bugzilla.gnome.org/show_bug.cgi?id=670680
We cannot deprecate ClutterAlpha yet. Nor can we implement
ClutterAlpha in terms of ClutterTimeline, because multiple Alpha
instances can be attached to the same Timeline. So we can start
with a "soft" deprecation: just a warning in the documentation
stating that ClutterAlpha will be deprecated, and removed, in the
future, and that newly-written code should use ClutterTimeline
instead.
We can use ClutterTimeline and its progress mode inside
ClutterAnimation; obviously, we have to maintain the invariants because
of the ClutterAnimation:alpha property, but if all you set is the :mode
property using one of the Clutter animation modes then we can skip the
ClutterAlpha entirely.
Instead of having the easing functions be dependent on ClutterAlpha, and
static to the clutter-alpha.c source file, we should make them generic
and move them to their own internal header and source files. This will
allow us to re-use them in the near future.
Since Cogl has started restricting what cogl 1.x api is exposed when
COGL_ENABLE_EXPERIMENTAL_2_0_API is defined and since we build all
Clutter internals with COGL_ENABLE_EXPERIMENTAL_2_0_API defined this
patch makes a first pass at reducing our internal use of the Cogl 1.x
api.
The most notable api that's no longer exposed to us internally is
the cogl_material_ api so this switches all Clutter internals to use the
cogl_pipeline_ api instead. This patch also makes quite a bit of
progress removing internal uses of CoglHandle although there is still
more to go.
The experimental cogl_texture_pixmap_x11_new() api was recently changed
to take an explicit context argument and return a GError on failures.
This updates Clutter's use of the api accordingly.
We were only exposing clutter_backend_get_cogl_context() if
COGL_ENABLE_EXPERIMENTAL_2_0_API had been defined but the CoglContext
api is also available if COGL_ENABLE_EXPERIMENTAL_API has been defined.
As it was, it meant that code opting into the experimental Cogl api
but not limiting itself to the 2.0-only api would have to #define
COGL_ENABLE_EXPERIMENTAL_2_0_API before including clutter.h, but make
sure it wasn't defined when including cogl.h, which was particularly
awkward.
Recently the cogl_framebuffer_swap_* apis were moved into the
cogl_onscreen_* namespace since only CoglOnscreen framebuffers can be
double buffered. This renames all uses of the cogl_framebuffer_swap_*
apis in Clutter.
The experimental cogl_pipeline_new() api was recently changed so it
explicitly takes a CoglContext. This updates all calls to
cogl_pipeline_new() in clutter accordingly.
ATK_ROLE_CANVAS is not a suitable role, as the user (in general) can't
draw on the Stage. CallyStage implements AtkWindow, so the proper role
is ATK_ROLE_WINDOW
Removing atkcomponent, focus_tracker, etc., and emitting the focus state
change from the stage. Now things are simpler, and we stop using some
of the soon-to-be-deprecated signals in ATK.
It should be possible to do:
clutter_stage_set_fullscreen (stage, TRUE);
clutter_actor_show (stage);
and have the stage be full screen as soon as it is shown.
Currently, we need to call clutter_actor_realize() prior to calling
set_fullscreen(), otherwise the backing X window will not be set,
and ClutterStageX11 will silently discard the change.
If set_fullscreen() was called prior to realization, ClutterStageX11
should delay setting the fullscreen hint until the realize() chain
has been successfully executed.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2515
If you execute the following sequence:
stage = clutter_stage_new ();
clutter_actor_set_size (stage, 1280, 800);
clutter_actor_realize (stage);
Then you end up creating an onscreen buffer of size 1280x800, but
ClutterStageX11 stores the stage size as 640x480.
This patch resyncs the two implementations by using the ClutterStage's
size in both classes when realizing.
Signed-off-by: Lionel Landwerlin <llandwerlin@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=667540
Unconditionally creating CoglPipeline and CoglSnippets inside the class
initialization functions does not seem to be enough when dealing with
headless builds.
Our last resort is to lazily create the base pipeline the first time we
try to copy it, during the instance initialization.
The class initialization function may be called when Clutter hasn't been
fully initialized — for instance, when scanning the source with gtk-doc
or with the introspection scanner.
The allocation code for BoxLayout contains a sequence of brain farts
that have made it barely work since the synchronization of the layout
algorithm with the one in GtkBox.
The origin of the layout is inverted, and it doesn't take into
consideration a modified allocation origin (for actors that provide
padding or margin).
The pack-start property is broken, and it only works because we walk the
children list backwards; this horribly breaks when a child changes
visibility. Plus, we count invisible children, which leads to
allocations getting insane origins (either close to -MAX_FLOAT or
MAX_FLOAT).
Finally, the allocation is applied twice even for non-animated cases.
https://bugzilla.gnome.org/show_bug.cgi?id=669291
* clutter_wayland_input_device_get_wl_input_device for the input device
* clutter_wayland_stage_get_wl_surface for the Wayland surface
* clutter_wayland_stage_get_wl_shell_surface for the shell surface
This converts the blur, colorize and desaturate effects to use
snippets instead of CoglPrograms. Cogl can handle the snippets much
more efficiently than programs so this should be a performance win. It
also fixes the problem that Cogl would end up recompiling the program
for every instance of the effects because Clutter was not reusing the
same program.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
The blur effect needs to pass a uniform to the GLSL shader so that it
can know the texture coordinate offset from one texel to another. To
calculate this the blur effect was previously using the allocation
size of the actor rounded up to the next power of two. Presumably the
assumption was that Cogl would round up the size of the texture to the
next power of two when allocating the texture. However this is not be
true if the driver supports NPOT textures. Also it doesn't take into
account the paint volume of the actor which may cause the texture to
be a completely different size. This patch just changes to directly
use the size of the texture.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Sometimes a subclass of ClutterOffscreenEffect wants to paint with a
completely custom material. In that case it is awkward to modify the
material owned by ClutterOffscreenEffect, so it makes more sense to
just get the texture and manage a custom material directly.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
All of the pipelines used for ClutterTexture actors share a common
pipeline ancestor created with cogl_pipeline_copy. Previously this
ancestor had a dummy 1x1 texture attached to it so that it would end
up with the same state as the child pipelines that will render with a
texture. Cogl now has a mechanism to specify that a texture will be
used with a pipeline layer without having to create an actual texture.
This patch makes it use that to avoid having an unused texture.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we have N children and the user passes N (or a number beyond N) to
clutter_actor_insert_child_at_index, we should respond by adding the
child at the end, not silently doing nothing.
This should avoid trying to fix the origin of a paint volume set from
the allocation's origin, and thus breaking everything.
A PaintVolume for an actor is defined to be relative to the actor's
modelview unless specifically modified by internal functions; the origin
of an actor's allocation is, on the other hand, parent-relative.
There are times when we don't want to remove all children and count on
the reference count dropping to 0 to ensure destruction; there are cases,
such as managed environments, where it's preferable to ensure that the
children of an actor actually get destroyed.
ClutterActor has a background-color property now; we should use it for
the Stage, re-implement the color property in terms of background-color,
and deprecate the Stage property.
Being able to easily set the number of repeats has been a request for
the animation framework for some time now. The usual way to implement
this is: connect to the ::completed signal, use a static counter, and
stop the timeline when the counter hits a specific spot.
In the same light as the :auto-reverse property, we can make it easier
to implement this common functionality by adding a :repeat-count
property that, when set, limits the amount of loops that a Timeline can
perform before stopping itself.
In fact, we can implement the :loop property in terms of the
:repeat-count property just by using a sentinel value mapping to
"infinity", and map loop=FALSE to repeat-count=0, and loop=TRUE to
repeat-count=-1.
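A sketch of the resulting API (the setter name is tentative):

  /* run the timeline three times, then stop */
  clutter_timeline_set_repeat_count (timeline, 3);

  /* equivalent to the old loop=TRUE */
  clutter_timeline_set_repeat_count (timeline, -1);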
The clutter_timeline_clone() method was a pretty dumb idea when it was
introduced, back when we still had the ClutterEffectTemplate and the
clutter_effect_* animation API. It has since become an API wart: we
cannot change or add new properties to be cloned without the risk of
breaking existing code. All in all, cloning a GObject is just a matter
of calling g_object_new() with the wanted properties.
Let's deprecate this throwback of the Olden Days™, so that we can remove
it for good once we break for 2.0.
When the ClutterTextBuffer support inside ClutterText was merged, it
introduced a regression that was identified and fixed in bug 659116.
The optimization to not paint empty ClutterText actors is only valid
if the actor is not editable, or if the cursor is not visible.
Courtesy of GLib and GTK+. The abicheck.sh is a simple, Linux-only
script to check that we're not leaking private symbols, and to catch
cases where the clutter.symbols file hasn't been updated.
In theory, it should go inside the distcheck phase.
A bunch of private symbols have escaped into the SO; let's rectify this
situation by using the '_' private prefix, or making them static as they
should have been.
Cogl now requires that all applications integrate their main loop with
Cogl so that it can listen for events from winsys. This patch just
adds Cogl's GSource to the main loop.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Some of Cogl's experimental apis have changed so that the buffer apis
now need to be passed a context argument and some drawing apis have been
replaced with cogl_framebuffer_ drawing apis that take explicit
framebuffer and pipeline arguments.
These changes were made as part of Cogl moving towards a more stateless
api that doesn't rely on a global context.
This patch updates Clutter to work with the latest Cogl api and bumps
the required Cogl version to 1.9.5.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Similar to the clutter_actor_iter_remove(), but it'll call destroy()
instead of remove_child().
We can also reimplement the ::destroy default handler using it, and make
it more compact.
There is a typo in the check for a negative index: the index variable
should be index_, not index - unfortunately, the latter can still be
resolved to index(3), so compiler and linker are perfectly happy.
https://bugzilla.gnome.org/show_bug.cgi?id=669730
An editable ClutterText will reset the selection and cursor whenever the
contents are changed — even if those contents are the same. As this may
confuse the user, we should check if we're setting the exact same string,
and bail out if necessary.
The reverse of position_to_coords().
While providing documentation on how to implement it using the
PangoLayout API, I realized that the verbosity of it all, plus the usage
of the Pango API, was not worth it, and decided to expose the method we
are using internally.
GValueArray is on its way to deprecation in GLib; as far as the
ListModel class is concerned, a plain C array of GValue is a perfectly
suitable replacement for the GValueArray usage. It actually is an
improvement, given that it's going to take less memory.
ClutterActor stopped requiring subclasses to override the map and unmap
virtual functions some time ago.
Now that ClutterActor implements the Container interface, overriding map
and unmap to control the MAPPED state of the children is pretty much
going to be a source of bugs and misunderstandings.
Plus, the ordering of the unmap, destroy, dispose, and finalize calls
should be documented properly.
The documentation should clarify all that.
When calling clutter_actor_destroy(), ClutterActor calls
update_map_state() on itself to unset the REALIZED and MAPPED states,
prior to running the dispose() implementation.
The default dispose() will call remove_child() (either directly or
through the Container implementation), which will check for the MAPPED
state and then run update_map_state() again. We use the previously set
MAPPED state to decide whether or not the parent should queue for a
relayout/redraw when removing a visible child.
If the MAPPED flag was cleared prior to remove_child(), though, it'll
always be unset by the time we get to remove_child(), and this will
cause missing redraws/relayouts; we were ignoring this prior to the
post-First Apocalypse changes because we were doing:
if (was_mapped)
clutter_actor_queue_relayout (parent);
clutter_actor_queue_redraw (parent);
which is obviously wrong. Once I removed that glaring brain damage from
the remove_child() implementation, bugs started appearing — bugs that
were probably the reason why we introduced that brain damage in the
first place, instead of checking the source of those bugs.
The obvious fix is to avoid clearing up the actor's state on destroy()
until we remove the actor from its parent. This also reduces the amount
of work we do, and the code paths that can potentially go wrong.
Since the code dealing with ClutterShader is pretty self-contained, now,
we can safely move it outside of the main ClutterActor source file and
into its own. This will allow us to just drop a bunch of files when
branching for 2.0.
The YUV support depends on driver support, and not only do few
drivers support YUV natively: the supported colorspaces are pretty much
useless.
The proper way to do YUV to RGB colorspace conversion on the GPU is to
use a fragment shader; for that, ClutterTexture and Cogl provide enough
API to achieve a good result - see the Clutter-GStreamer implementation,
for instance.
ClutterFixedLayout is the default layout manager for ClutterActor.
Existing subclasses of ClutterActor will get a fixed layout manager
regardless of whether they are going to use it, but since it sets the
CLUTTER_ACTOR_NO_LAYOUT flag, it will introduce regressions on actors
that perform their own layout management.
The CLUTTER_ACTOR_NO_LAYOUT flag was a bit of a mistake in the first
place, as it was introduced as a last minute workaround in the 1.0
process to deal with broken stuff in Moblin. It's going to be a target
for deprecation towards a removal when we start the 2.0 process.
Iterating over children and ancestors of an actor is a relatively common
operation. Currently, your only option is a for() loop: get the first
child of the actor and advance to the next sibling to walk the list of
children; or advance to the parent of the actor to walk up the ancestors.
These operations can be easily done through the ClutterActor API, but
they all require going through the public API, and performing multiple
type checks on the arguments.
Along with the DOM API, it would be nice to have an ancillary, utility
API that uses an iterator structure to hold the state, and can be
advanced in a loop.
https://bugzilla.gnome.org/show_bug.cgi?id=668669
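A sketch of what the iteration could look like, assuming a stack-allocated
iterator with an init/next pair:

  ClutterActorIter iter;
  ClutterActor *child;

  clutter_actor_iter_init (&iter, container);
  while (clutter_actor_iter_next (&iter, &child))
    {
      /* operate on each child without allocating an
       * intermediate list or re-checking the arguments */
    }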
During the gutting of ClutterBox, the destroy and dispose implementations
were removed. The former, especially, destroyed all children - which
usually meant that the redraw queue entries for the children were cleared
as well. The removal introduced crashes when a Box was destroyed while its
children were still queueing redraws.
Symbolic names are better than magic numbers, even if they are
well-established and won't likely change.
This maps to a commit in GTK+ that introduced the same names; it
was decided to go for PRIMARY, MIDDLE, and SECONDARY because of
the confusion that may arise when the button order gets flipped
in left-handed configurations - the "left" button (i.e. 1) becomes
the right-most button, and the "right" button (i.e. 3) becomes
the left-most button.
https://bugzilla.gnome.org/show_bug.cgi?id=668692
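For instance, assuming the names mirror the GTK+ ones:

  if (event->button == CLUTTER_BUTTON_PRIMARY)
    activate_item (item);       /* hypothetical handlers */
  else if (event->button == CLUTTER_BUTTON_SECONDARY)
    show_context_menu (item);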
Also update the code that sets the size of the stage to use the size of
the output. In future versions of the Wayland protocol we'll get a
configure message advising us of the size we can be to achieve fullscreen.
* stage-state:
docs: Update ClutterStageState flags
wayland: Use the Stage state tracking
gdk: Use the Stage state tracking
win32: Use the Stage state tracking
x11: Use the Stage state tracking
osx: Use the Stage state tracking
stage: Add state tracking
State changes on the Stage are currently deferred to the windowing
system backends, but the code is generally the same, and it should
be abstracted neatly inside the Stage class itself.
There's also the extra caveat for backends that state changes on a
Stage must also emit a ClutterEvent of type CLUTTER_STAGE_STATE, a
requirement that needlessly complicates the backend code.
GLib has gained support for compiling ancillary data files into the same
binary blob as a library or as an executable.
We should add this feature to ClutterScript, so that it's possible to
bundle UI definitions with an application.
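A sketch of what loading a bundled definition could look like, assuming
a resource-aware entry point along the lines of
clutter_script_load_from_resource() (the resource path is hypothetical):

  ClutterScript *script = clutter_script_new ();
  GError *error = NULL;

  /* ui.json is compiled into the binary with glib-compile-resources */
  clutter_script_load_from_resource (script, "/org/example/app/ui.json", &error);
  if (error != NULL)
    g_error ("Unable to load the UI definition: %s", error->message);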
The only actor that results in a mix of the old Container API and the
new Actor API is ClutterStage. By inheritance, a Stage is a Group, but
we don't want it to behave like a Group - as it already overrides most
of the Actor API, and the reason why it was made as a Group in the
first place was convenience for adding/removing children.
Given that touching Group to make it aware of the new Actor API has
rapidly devolved into a struggle between a Demiurge that tries to
avoid breakage and a Chaos that finds new and interesting ways to
break ClutterGroup, let's declare API bankruptcy here and now.
ClutterStage should override ClutterContainer methods, and use the
layout management of ClutterFixedLayout as the proper class that it
was meant to be ages ago. Let ClutterGroup rot in pieces.
Now that we reinstated Group to its "former glory", we need to ensure
that applications using the deprecated containers with the new DOM API
in ClutterActor can actually work - or, at least, not break horribly.
This actually means making sure that ClutterStage and ClutterGroup can
cope with the DOM, while retaining their old implementations, as well as
their bizarre idiosyncrasies and their utter, utter brokenness.
Making Group just a proxy to Actor broke some behaviour that application
and toolkit code was relying on. Let's keep Group around to fight
another day.
This commit fixes gnome-shell as far as I can test it.
A Group is just a ClutterActor with the layout-manager property set at
instance initialization time. It doesn't need anything else from
ClutterActor's vtable, except the slightly custom show_all/hide_all
implementation, and a simplified get_paint_volume.
Instead of chaining up, given that we want to bypass chaining up and
just set the allocation. This also allows us to bail out of the
overridden allocate vfunc check, given that we want the default Actor
behaviour to apply - including eventual layout manager delegates.
The usual way to implement a container actor is to override the
allocate() virtual function, chain up, and then allocate the actor's
children.
Clutter now has the ability to delegate layout management to
ClutterLayoutManager directly; in the allocation, this is done by
checking whether the actor has children, and then calling
clutter_layout_manager_allocate() from within the default implementation
of the ClutterActor::allocate() vfunc. The same vfunc that everyone has
been chaining up to.
Whoopsie.
Well, we can check if there's a layout manager, and if it's NULL, we
bail out. Except that there's a default layout manager, and it's the
fixed layout manager, so that classes like Group and Stage work by
default.
Double whoopsie.
The fix for this scenario is a bit nasty; we have to check if the actor
class has overridden the allocate() vfunc or not, before actually
looking at the layout manager. This means that classes that override the
allocate() vfunc are expected to do everything that ClutterActor's
default implementation does - which I think is a fair requirement to
have.
For newly written code, though, it would probably be best if we just
provided a function that does the right thing by default, and that
you're supposed to be calling from within the allocate() vfunc
implementation, if you ever chose to override it. This new function,
clutter_actor_set_allocation(), should come with a warning the size of
Texas, to avoid people thinking it's a way to override the whole "call
allocate() on each child" mechanism. Plus, it should check if we're
inside an allocation sequence, and bail out if not.
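A sketch of the intended pattern for code that does override allocate(),
assuming the new entry point is clutter_actor_set_allocation():

  static void
  my_actor_allocate (ClutterActor           *actor,
                     const ClutterActorBox  *box,
                     ClutterAllocationFlags  flags)
  {
    /* store the allocation without chaining up... */
    clutter_actor_set_allocation (actor, box, flags);

    /* ...then allocate each child explicitly */
    my_actor_allocate_children (actor, box, flags); /* hypothetical helper */
  }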
If we want to set a default layout manager, we need to do so inside
init(), as it's not guaranteed that people subclassing Actor and
overriding ::constructed will actually chain up as they should.
The default pick() behaviour does not take into consideration the
children of a ClutterActor because the existing container actors
usually override pick(), chain up, and then paint their children.
With ClutterActor now a concrete class, though, we need a way to pick
its children without requiring a sub-class; we could simply iterate over
the children inside the default pick() implementation, but this would
lead to double painting, which is not acceptable.
A moderately gross hack is to check if the Actor instance did override
the pick() implementation, and if it is not the case, paint the children
in pick mode.
The hide_all() method is pretty much pointless, as hiding an actor will
automatically prevent its children from being painted. The show_all()
method would only be marginally useful, if actors weren't set to be
visible by default when added to another actor - which was the case when
we introduced show_all() and hide_all().
* Abstracts the buffer for text in ClutterText
* Allows implementation of undo/redo.
* Allows use of non-pageable memory for text
in the case of sensitive passwords.
* Implement a test with two ClutterText using the same
buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=652653
When casting GArray.data to a pointer to a C structure you need a double
cast: from guint8* to void*, and then from void* to the actual type. This
avoids compiler warnings, especially when using clang on OSX.
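For instance (FooVertex stands in for whatever structure the array
actually holds):

  /* the intermediate void* cast avoids the cast-align warning */
  FooVertex *verts = (FooVertex *) (void *) array->data;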
The concept of "internal child" only meant anything when we had a
separate API for containers and actors. Now that we plugged that
particular hole, we can drop all the hacks we used to have in place
to work around its design limitations.
It can be convenient to be able to set, or get, all the components of an
actor's margin at the same time; since we already have a boxed type for
storing a margin, an accessors pair based on it is not a complicated
addition to the API.
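A sketch of the accessors pair, assuming the boxed type is ClutterMargin
with top/right/bottom/left members:

  ClutterMargin margin;

  margin.top = margin.bottom = 6.0f;
  margin.left = margin.right = 12.0f;

  /* set all four components in one go... */
  clutter_actor_set_margin (actor, &margin);

  /* ...and read them back the same way */
  clutter_actor_get_margin (actor, &margin);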
Inside the set_child_[above|below]_sibling() and set_child_at_index() we
should be using the internal API for mutating the children list, instead
of the delegate functions. This ensures that we go through a single,
well-defined code path for all operations on the list of children of
an actor.
We have a replacement in ClutterActor, now.
The old ClutterContainer API needs to be deprecated, and the raise() and
lower() virtual functions need a default implementation, so we can check
for implementations overriding them, by using the diagnostic mode like
we do for add(), remove(), and foreach().
The sort_depth_order() virtual function just doesn't do anything, as it
should have been made ages ago.
The Actor wrappers for the Container methods also need to be deprecated.
ClutterActor provides four methods for changing the paint sequence order
of its children:
raise_top()
raise()
lower()
lower_bottom()
The first and last ones are just wrappers around raise() and lower(),
respectively. These methods have various issues: they omit the parent,
preferring to retrieve it from the actor passed as the first argument;
this does not match the new style of API introduced to operate on the
list of children of an actor.
Additionally, the raise() and lower() methods of ClutterActor call into
the Container interface, and are not really aptly named (raise() in
particular collides with the completely unrelated 'raise' keyword in
Python, and usually needs to be wrapped in order to be used at all).
Furthermore, we need public methods that Container can call from its
default implementation, as well as methods to port current Container
implementations.
Finally, since we have insert_child_at_index(), we should also have an
equivalent set_child_at_index() as well.
The internal versions of add_child() and remove_child() currently use
boolean arguments to control things like the ChildMeta instances and
the emissions of signals; using more than one boolean argument is an
indication that you need flags to avoid readability issues, as well as
providing a way to add new behaviours without a combinatorial explosion
of arguments, later on.
I don't feel comfortable with this feature, and its implementation
still has too many rough edges. We can safely punt it for now, and
introduce it at a later point, as it doesn't block existing features
or API.
This virtual function will let layout managers with legacy expansion
flags be able to influence the lazy computation of the expansion flags
on ClutterActor.
We need to paint the background color in the default class handler for
two reasons: it's logically appropriate, and we don't want actor
subclasses overriding the ::paint class handler to change behaviour only
because somebody decided to set the background color.
The old add(), remove(), and foreach() virtual functions are deprecated;
ClutterContainer should warn if the public API detects that the vfuncs
have been overridden.
Strictly speaking, it's still legal to override those vfuncs: you can
chain up to the default vtable, or you could just provide an equivalent
implementation. The goal is to avoid having to override the Container
interface, until we can safely deprecate it and remove it in Clutter
2.0.
Instead of making ClutterActor implement the basic add/remove/foreach
virtual functions of ClutterContainer, we can simply do that from
within the ClutterContainer implementation.
The get_children(), foreach(), and foreach_with_internals() methods and
virtual functions are superseded by the Actor API, and should not be
used in newly written code.
Given the size and scope of the changes in ClutterActor, we ought to
rewrite the overall description of what an actor is, what it does, and
how are you supposed to use it and subclass it.
This will make things interesting.
We have better replacements in ClutterActor, that do The Right Thing™
instead of deferring control and requiring reimplementation in every
single container actor.
The correct sequence of actions should be remove(old) → insert(new), not
insert(new) → remove(old). We can implement a simple delegate insertion
functions to insert the new child between the previous and next siblings
of the old child.
While we're at it, let's also add a unit test for replace_child().
Providing a default get_paint_volume() that takes into account the
children of an actor was a goal of the whole First Apocalypse; if we
make all the containers rely on it, and yet return a FALSE value
(meaning: we don't have a valid paint volume) even when we do have one,
then we are going to break the whole machinery.
Cally is doing a bunch of list traversals through the list returned by
ClutterContainer.get_children(); this means a traversal already, plus
a bunch of allocations. We can do better than that, now that we have
a proper graph iteration API inside ClutterActor.
The insert_child_at_index, insert_below and insert_above messed up the
first and last child pointers in various cases. This commit fixes all
the instances of first and last child pointers being stale or set to
NULL.
Instead of getting the list of children to iterate over it, let's use
the newly added child iteration API; this should save us a bunch of
allocations, as well as indirections.
Ported: ClutterBinLayout and ClutterBoxLayout.
Instead of requiring every consumer of the ClutterActor API that wishes
to iterate over the children of an actor to use the get_children()
method, we should provide an iteration API directly inside ClutterActor
itself.
Instead of storing the list of children, let's turn Actor into a
proper node of a tree.
This change adds the following members to the Actor private data
structure:
first_child
last_child
prev_sibling
next_sibling
and removes the "children" GList from it; iteration is performed through
the direct pointers to the siblings and children.
This change makes removal, insertion before a sibling, and insertion
after a sibling constant time operations, but it still retains the
feature of ClutterActor.add_child() to build the list of children while
taking into account the depth set on the newly added child, to allow the
default painter's algorithm we employ inside the paint() implementation
to at least try and cope with the :depth property (albeit in a fairly
naïve way). Changes in the :depth property will not change the paint
sequence any more: this functionality will be restored later.
ClutterTransformInfo is a (private) ancillary data structure that
contains all the decomposed transformation data, i.e. rotation angles
and centers, scale factors and centers, and anchor point. This data
structure is stored in the GData of the actor instance instead of the
actor's private data. This change gives us:
• a smaller, cleaner private data structure;
• no size penalty for untransformed actors;
• static constant storage for the defaults, shared across all
instances;
• cache locality for all the decomposed transformation data,
given that the structure size is smaller.
At the end of the day, the only authoritative piece of information for
actor transformation is the CoglMatrix that we initialize in
apply_transform() from all the decomposed parameters, and that can stay
inside the private data structure of ClutterActor.
There are only two kinds of actors that allow underallocations,
according to the API contract:
• ClutterStage, as it is a viewport and it doesn't have an implicit
minimum size;
• Actors using the CLUTTER_ACTOR_NO_LAYOUT escape hatch, which allows
them to bail out from our layout management policies.
The warning about underallocations should take these two exceptions
under consideration.
The Group functionality is now provided by ClutterActor.
Sadly, we need to keep the ClutterGroup structure definition in the
non-deprecated header because ClutterStage inherits from Group - an API
wart that was never fixed during the 0.x cycles, and that we'll have to
keep around until we can break API.
ClutterBox functionality has been implemented by ClutterActor, and
proxied by the Box subclass; with the removal of the abstract bit on
ClutterActor, we can safely deprecate ClutterBox.
ClutterActor now has all the API and capabilities for being a concrete
class:
- layout management, through delegation
- container implementation and API
- background color
This means that a simple scene can be built straight out of actors
without using subclasses except for the Stage.
This is the first step towards the deprecation of most of the Actor
subclasses provided by Clutter.
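A sketch of what such a scene could look like, built out of plain actors
(the stage variable and the actual values are illustrative):

  const ClutterColor bg = { 0x33, 0x33, 0x33, 0xff };
  ClutterActor *box = clutter_actor_new ();

  clutter_actor_set_layout_manager (box, clutter_box_layout_new ());
  clutter_actor_set_background_color (box, &bg);
  clutter_actor_add_child (box, clutter_text_new_with_text (NULL, "Hello"));

  clutter_actor_add_child (stage, box);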
ClutterActor can do better by default than just giving up immediately.
An actor can check for the clip region, and for its children's paint
volume, for instance.
Just these two should give us a better default implementation for newly
written code.
The minimum preferred size of a Flow layout manager is the size of a
column or a row, as the whole point of the layout policy enforced by
the Flow layout manager is to reflow when needed.
ClutterBox's color and color-set properties can be implemented as
proxies for the ClutterActor's newly added background-color and
background-color-set properties, respectively.
This also allows us to get rid of the paint() implementation inside
ClutterBox altogether.
Each actor should have a background color property, disabled by default.
This property allows us to cover 99% of the use cases for
ClutterRectangle, and brings us one step closer to being able to
instantiate ClutterActor directly.
And make sure that overriding Container and calling
clutter_actor_add_child() will result in the same sequence of operations
as the current set_parent()+queue_relayout()+signal_emit pattern.
Existing containers can continue using:
clutter_actor_set_parent (child, CLUTTER_ACTOR (container));
clutter_actor_queue_relayout (CLUTTER_ACTOR (container));
g_signal_emit_by_name (container, "actor-added", child);
and newly written containers overriding Container.add() can simply call:
clutter_actor_add_child (CLUTTER_ACTOR (container), child);
instead.
We need to queue a relayout when removing a visible child from a visible
parent.
We also need to insert the child at the right position (depending on the
depth) so that newly added actors will be painted on top.
Remove four more floats from ClutterActorPrivate.
The fixed minimum and natural sizes should be stored inside the
ClutterLayoutInfo structure, along with the fixed position.
Add a failsafe against a NULL parent, to avoid a segfault when calling
clutter_actor_allocate() on the Stage.
We also need to deal with floating point values: straight comparison is
not going to cut it.
ClutterActor has various properties controlling the allocation:
- x-align, y-align
- margin-top, margin-bottom, margin-left, margin-right
These properties should adjust the ClutterActorBox passed from the
parent actor to its children when calling clutter_actor_allocate(),
so that the child can just allocate its children at the right origin
with the right available size.
The actor class should be able to hold the margin offsets like it does
for expand and alignment flags.
Instead of filling the private data structure with data, we should be
able to use an ancillary data structure, given that all this data is
optional and might never be set in the first place.
In case no layout manager was set during construction, we fall back to a
FixedLayout. The FixedLayout has the property of making the fixed
positioning and sizing API, as well as the various Constraints, work
out of the box.
Now that ClutterActor implements the Container contract we can actually
defer the size negotiation to a ClutterLayoutManager directly from the
default implementation of the Actor's virtual functions.
We can provide most of the ClutterContainer implementation directly
within ClutterActor — basically removing the need of having the
Container interface in the first place. For backward compatibility
reasons we can keep the interface, but let Actor implement it directly.
Let's try and move away from the reverse implicit scene graph build API,
which we borrowed from GTK+, towards a more traditional node/child API.
The set_parent()/unparent() API is confusing, unless you know the
history; having a add_child()/remove_child() methods pair makes it more
explicit.
We can easily implement the old set_parent()/unparent() pair in terms of
the newly added add_child()/remove_child() one.
Enclose the check inside a #ifdef CLUTTER_ENABLE_DEBUG ... #endif, so
that we can compile it out; also, use g_string_append() instead of the
g_string_append_printf() function, given that we're just concatenating
strings.
The ::redraw virtual function was a throwback from olden times, and has
been thoroughly replaced by the equivalent vfunc on the StageWindow
interface. We can safely remove it, now, and simplify the flow of the
redraw code inside ClutterStage.
Semantic changes to Wayland mean that we cannot rely on the compositor
setting a pointer buffer for us if we set it to nil. The first part of
fixing this is to create an shm buffer containing the bytes for our cursor.
The best way to do this currently is to load the cursor from the well known
location where weston installs its cursor images. The code to implement this
was lifted from the Wayland backend in GTK+.
Currently, we're emitting the ClutterActor::destroy at the end of the
dispose implementation - right before we chain up to the parent
implementation.
The point of emission means that ::destroy signal handlers can only use
the actor pointer itself - as the actor state will have been mostly
cleared by the time application code can run. This (undocumented) behaviour severely
limits the amount of things you can do inside a ::destroy signal
handler, thus making the ::destroy signal just a weird weak reference,
instead of a proper way to break application reference cycles.
Given that it relaxes some of the conditions, this change
should be safe - obviously, if anything happens, we'll back it out; the
conformance and interactive tests confirm that, for common patterns of
usage, this change does not break existing code.
GLib has a nice, atomic object clearing function that allows us to drop
code looking like:
if (priv->object != NULL)
  {
    g_object_unref (priv->object);
    priv->object = NULL;
  }
from the ::dispose implementation.
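With g_clear_object() the same dispose code becomes a one-liner:

  g_clear_object (&priv->object);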
I always have to think twice before returning a value from an event
signal handler, and I've been writing them for the past 10 years, so
it's conceivable that application developers that start with Clutter
will find them confusing as well.
Simple symbolic names should be easier to use.
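For instance, assuming the names end up being CLUTTER_EVENT_STOP and
CLUTTER_EVENT_PROPAGATE, an event handler would read:

  static gboolean
  on_event (ClutterActor *actor,
            ClutterEvent *event,
            gpointer      data)
  {
    /* we handled the event: don't let it propagate further */
    return CLUTTER_EVENT_STOP;
  }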
The depth cueing through GL fog has been broken for a long while, now.
The fog-related API in Clutter is ridiculously limited, and harks back
to simpler times; the ClutterFog structure is not enough to express all
the GL fog machinery, and required application code to connect to the
Stage's paint implementation and drop into Cogl directly.
Additionally, the fixed pipeline fog machinery in GL simply does not
work with premultiplied alpha, unless you use a shader - and in that
case it would only work for textures. Let's deprecate it, and just
don't do anything if somebody has the brilliant idea of setting the
:use-fog property to TRUE.
Sadly, we need to remove the G_GNUC_NULL_TERMINATED annotation from
ClutterBox packing API; the compiler will otherwise emit a warning
for perfectly legal statements like:
clutter_box_pack (box, child, NULL);
because of the missing sentinel.
See also: g_object_new().
GLib has a "diagnostic mode" switch that can be checked to enable debug
messages on deprecated properties and signals, as these are purely
run-time constructs, and as such cannot be caught by compiler warnings.
The diagnostic mode is toggled by a simple environment variable, and
can be used to ease porting of application code.
We can use something similar to mark deprecated virtual functions and
other run-time constructs; to avoid collisions, we should use our own
environment variable, CLUTTER_ENABLE_DIAGNOSTIC.
Instead of using a PaintVolume for a 2D region, and an internal
function, use the newly added queue_redraw_with_clip() method.
This removes the last bit of internal API usage in the
ClutterX11TexturePixmap actor.
https://bugzilla.gnome.org/show_bug.cgi?id=660997
Add a public version of the clipped queue redraw, using a 2D clip. This
allows implementing actors with trackable 2D clipped regions, like the
ClutterX11TexturePixmap, outside of Clutter itself.
https://bugzilla.gnome.org/show_bug.cgi?id=660997
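A sketch of the public entry point in use, assuming it takes the 2D clip
as a cairo_rectangle_int_t (the damage_* values come from the caller):

  cairo_rectangle_int_t clip = {
    .x      = damage_x,
    .y      = damage_y,
    .width  = damage_width,
    .height = damage_height
  };

  /* only the damaged region will be repainted */
  clutter_actor_queue_redraw_with_clip (actor, &clip);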
The Wayland protocol now has events that represent when a pointer enters
the surface and when it leaves again.
For leaves, the surface is not set in the event; for enters, it is.
Simply use this to determine whether to emit CLUTTER_ENTER or CLUTTER_LEAVE.
Previously the wl_shell object held the methods that allowed a client to
request changes to the shell's state associated with a surface. These methods
have now been moved to a wl_shell_surface object.
This change allows configure events to be handled inside the stage rather than
the backend.
This makes the option_xkb_* symbols declared for the evdev device manager
and the wayland device manager private so we don't get symbol collisions
if both of these backends are enabled.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This changes how clutter-device-manager-evdev.h is included, to fix a
build problem caused by not being able to find the header.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This updates the evdev input backend code to compile and also updates
the code to not refer to the default stage and instead check for a
stage to be associated with the input device. If no stage is currently
associated with a device generating events then the events are dropped
on the floor.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds internal api to be able to query the stage currently
associated with a given input device so input backends shouldn't need to
refer to the default stage.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This adds a --enable-wayland-compositor configure option which will add
support for a ClutterWaylandSurface actor which can be used to aid in
writing Wayland compositors using Clutter by providing a ClutterActor to
represent Wayland client surfaces.
Notably this configure option isn't tied into any particular backend
since conceptually the compositor support can be used in conjunction
with any clutter backend that has corresponding Cogl support.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
This updates Wayland support in line with upstream changes to the Wayland
API and protocol.
This update means we no longer use the Cogl stub winsys so a lot of code
that had to manually interact with EGL and implement a swap_buffers
mechanism could be removed and instead we now depend on Cogl to handle
those things for us.
This update also adds an input device manager consistent with other
clutter backends.
Note: to use the client side "wayland" clutter backend you need to have
built Cogl with --enable-wayland-egl-platform. If Cogl has been built
with support for multiple winsys backends then you should run
applications with COGL_RENDERER=EGL in the environment.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Generate a .bat file that generates the clutter-enum-types.[ch] for use
during the Visual C++ build process, which will greatly simplify the
maintenance of the VS build files as public headers are added or removed
during the development process.
ClutterInputDeviceX11 has been made private, so we cannot access it from
outside of clutter-input-device-core-x11.c. We should have simple
accessors for the min/max keycode, which is the only detail that we use.
Clutter-WARNING **: Unable to compile the GLSL
shader: Fragment shader failed to compile with the following errors:
The attached patch (against current git) should print out more
information, which makes it easier to answer user feedback.
https://bugzilla.gnome.org/show_bug.cgi?id=664252
While working through the Python3/pygobject bindings, I came across a missing
(allow-none) in clutter_state_set_key(). This allows API users to pass
None as the source_target.
https://bugzilla.gnome.org/show_bug.cgi?id=664996
The builtin effects ClutterColorizeEffect, ClutterDesaturateEffect and
ClutterShaderEffect all have properties which only affect the
rendering of the final texture, not its contents. When these
properties are updated we should queue a repaint of the effect not
the actor so that we don't waste time repainting the contents of the
offscreen buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=665052
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
There was an #ifdef'd section of code for profiling that was using the
wrong variable name so it would not build.
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
Previously the offscreen effect was keeping track of the size of the
texture so that it could detect when a different size is requested and
create a new texture. However this breaks if a subclass overrides
create_texture to make the texture bigger because in that case the
size of the texture will always be different from the calculated size
of the actor. This patch makes it also track the size of the fbo that
was requested before being passed through create_texture(), and uses
that instead to detect when a new FBO is needed.
https://bugzilla.gnome.org/show_bug.cgi?id=665040
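For example, a subclass that wants a padded texture would override the
create_texture() virtual function roughly like this; the padding amount and
the function name are made up for the sketch:

  /* Sketch: create a texture larger than the actor, e.g. to leave room
   * for a drop shadow.  The FBO size tracked by ClutterOffscreenEffect
   * is the width/height passed in here, not the size we allocate. */
  static CoglHandle
  example_effect_create_texture (ClutterOffscreenEffect *effect,
                                 gfloat                  width,
                                 gfloat                  height)
  {
    return cogl_texture_new_with_size ((unsigned int) (width + 32),
                                       (unsigned int) (height + 32),
                                       COGL_TEXTURE_NO_SLICING,
                                       COGL_PIXEL_FORMAT_RGBA_8888_PRE);
  }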
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
It should be possible to define markers in ClutterScript when
describing a ClutterTimeline.
The syntax is trivial:
"markers" : [
  { "name" : "<marker-name>", "time" : <msecs> }
]
While at it, we should document it inside the API reference, as well as
flesh out the ClutterTimeline description.
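A sketch of how such a definition could be used from C; the object id
("timeline") and the marker name ("half-way") are invented for the example:

  #include <clutter/clutter.h>

  static void
  on_half_way (ClutterTimeline *timeline,
               const gchar     *marker_name,
               gint             msecs,
               gpointer         user_data)
  {
    g_print ("reached marker '%s' at %d ms\n", marker_name, msecs);
  }

  static ClutterTimeline *
  get_timeline (ClutterScript *script)
  {
    /* "timeline" is the id used in the JSON definition that contains the
     * "markers" member described above */
    ClutterTimeline *timeline =
      CLUTTER_TIMELINE (clutter_script_get_object (script, "timeline"));

    g_signal_connect (timeline, "marker-reached::half-way",
                      G_CALLBACK (on_half_way), NULL);

    return timeline;
  }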
To allow language bindings to properly override Script.connect_signals()
they'll need access to Script.connect_signals_full().
Thanks to Jeremy Moles for reporting.
Since we have a _clutter_debug_message() function compiled in
unconditionally we have no further need for the equivalent conditional
version defined in clutter-profile.[ch]: we can simply do the work in
one function.
We still ship clutter_get_show_fps() and clutter_get_debug_enabled() as
public entry points. Yet another case of missing API review prior to the
1.0 release, so really the buck stops at my desk.
Let's deprecate these two useless functions, and reduce the API
footprint of Clutter.
This function should have never been made public in the first place; its
output depends on a configuration option of Clutter, and it's basically
useful only for internal debugging.
Make it consistent across the various build options (with or without
profiling enabled), and add a timestamp using the monotonic clock to
every debug message.
The clutter_get_timestamp() output depends on whether Clutter was
compiled with debugging support — it's meant to be used only by the
debugging notes, and it should not be used for anything else.
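A minimal sketch of the kind of timestamped output described above, using
GLib's monotonic clock; this is illustrative, not the actual
_clutter_debug_message() implementation:

  #include <glib.h>
  #include <stdarg.h>

  static void
  example_debug_message (const char *format, ...)
  {
    gint64 now = g_get_monotonic_time ();   /* microseconds */
    va_list args;
    gchar *message;

    va_start (args, format);
    message = g_strdup_vprintf (format, args);
    va_end (args);

    g_printerr ("[+%" G_GINT64_FORMAT ".%06" G_GINT64_FORMAT "] %s\n",
                now / G_USEC_PER_SEC, now % G_USEC_PER_SEC, message);

    g_free (message);
  }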
Instead of calling cogl_set_depth_test_enabled and
cogl_set_backface_culling_enabled ClutterDeformEffect now uses the
experimental CoglPipeline API. Those global state functions will soon
be deprecated in Cogl and they are implemented by flushing a temporary
override pipeline, which isn't ideal.
Using the new culling API we can also avoid having a separate buffer
of indices for the back of the texture by just changing the culling
mode to cull front faces instead of the back.
https://bugzilla.gnome.org/show_bug.cgi?id=663636
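Roughly, the per-pipeline state replacing those global calls looks like the
following; a sketch against the experimental CoglPipeline API, not the
actual ClutterDeformEffect code:

  #define COGL_ENABLE_EXPERIMENTAL_API
  #include <cogl/cogl.h>

  static void
  setup_deform_pipeline (CoglPipeline *pipeline)
  {
    CoglDepthState depth_state;

    /* per-pipeline depth testing, replacing cogl_set_depth_test_enabled() */
    cogl_depth_state_init (&depth_state);
    cogl_depth_state_set_test_enabled (&depth_state, TRUE);
    cogl_pipeline_set_depth_state (pipeline, &depth_state, NULL);

    /* culling front faces lets the same indices be reused when painting
     * the back of the texture */
    cogl_pipeline_set_cull_face_mode (pipeline,
                                      COGL_PIPELINE_CULL_FACE_MODE_FRONT);
  }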
This changes ClutterDeformEffect to use a CoglAttributeBuffer with a
CoglPrimitive instead of the old CoglVertexBuffer. The old vertex
buffer code is now implemented in terms of the attribute buffer code
and it will eventually be deprecated. Using CoglPrimitives should be
slightly more efficient.
This also changes the struct in which we store the vertices to be
CoglVertexP3T2C4 instead of CoglTextureVertex. The latter is technically
compatible with neither vertex buffers nor attribute
buffers because it contains a CoglColor and the internal members of
that are private so it is not valid to assume it contains 4 bytes and
use that as an attribute. Also it contains padding so it ends up
redundantly creating a larger buffer. CoglTextureVertex is in the
public API for the deform_vertex virtual so we still have to maintain
that. Instead of directly manipulating the array to upload, the
application is now passed a stack allocated temporary struct which
gets converted to a CoglVertexP3T2C4. This also means that we can map
the buffer as write only and still let the application read-write the
vertex.
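A sketch of the conversion described above, going through the CoglColor
accessors instead of poking at its private members; this is illustrative,
not the effect's actual packing code:

  #define COGL_ENABLE_EXPERIMENTAL_API
  #include <cogl/cogl.h>

  static void
  pack_vertex (const CoglTextureVertex *in,
               CoglVertexP3T2C4        *out)
  {
    out->x = in->x;
    out->y = in->y;
    out->z = in->z;
    out->s = in->tx;
    out->t = in->ty;

    /* CoglColor's members are private, so use the accessors rather than
     * assuming it is four packed bytes */
    out->r = cogl_color_get_red_byte (&in->color);
    out->g = cogl_color_get_green_byte (&in->color);
    out->b = cogl_color_get_blue_byte (&in->color);
    out->a = cogl_color_get_alpha_byte (&in->color);
  }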
The paint debug code to draw line strips for the deform mesh was
previously trying to set a red source material. However this wasn't
working because the material color was being overwritten by the color
attribute in the vertex buffer. This patch fixes that by creating a
separate primitive for the lines and not adding the color
attribute. The lines code was also drawing both the front and back
indices. I don't think that entirely makes sense so I've just changed
it to draw only the front indices. Maybe painting both would make more
sense if backface culling was still enabled.
https://bugzilla.gnome.org/show_bug.cgi?id=663636
When invalidating the deform effect, we are invalidating the vertices
shaping the deformation of an actor. Therefore, there is no need to
trigger a redraw of the associated actor; we can just repaint the
effect.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=663720
* deprecate-default-stage:
evdev: do not associate device with stage
evdev: don't even process events without a default stage
docs: Note default stage deprecation in README
docs: Remove clutter_stage_get_default()
stage: Deprecate the default stage
script: Do not use clutter_stage_get_default()
cally/actor: Do not use the default stage as a fallback
Try to mop up the default stage mess
performance/*: Do not use clutter_stage_get_default()
interactive/*: Do not use clutter_stage_get_default()
Merge with a11y
micro-bench/*: Do not use clutter_stage_get_default()
accessibility/*: Do not use clutter_stage_get_default()
conform/*: Do not use clutter_stage_get_default()
The VBLANK environment variable is handled universally in clutter-main.c,
as in commits e8562089 (main: Add a sync-to-vblank global flag) and
db211a21 (Remove per-backend CLUTTER_VBLANK envvar), so remove them
here as well.
https://bugzilla.gnome.org/show_bug.cgi?id=663999
-Make the contents of config.h.win32.in more like config.h.in
-Define CLUTTER_INPUT_WIN32 accordingly (no GDK3 defines yet, until
GDK3 on Windows is more stable)
The evdev system is a bit different from other input systems in
Clutter because it is completely decoupled from anything graphical.
On embedded devices with no proper windowing system, you may not want
to implicitly create a default stage when the first input event is
received.
This patch changes the behaviour by not forwarding any events when
there is no default stage.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=651718
A lot of the example code in the cookbook and the API reference still
uses the default stage — sometimes as if it were a non-default one,
which once again demonstrates how the default stage was a flawed concept
that just confused people.
Using the default stage as a fallback is wrong in all circumstances.
In this specific case, if an actor is not associated to a stage then it
cannot possibly be the key focus.
The default stage was a neat concept when we started Clutter out,
somewhere in the Jurassic era; a singleton instance that gets created at
initialization time, and remains the same for the entire duration of the
process.
Worked well enough when Clutter was a small library meant to be used to
write fullscreen media browsers, but since the introduction of multiple
stages, and Clutter being used to create all sorts of applications, the
default stage is just a vestigial remainder of that past, like an
appendix; something that complicates the layout of the code and
introduces weird behaviour, so that you notice its existence only when
something goes wrong.
Some platforms we do support, though, only have one framebuffer, so it
makes sense for them to have only one stage.
At this point, the only sane thing to do is to go through the same code
paths on all platforms, and that code path is the stage instance
creation and initialization — i.e. clutter_stage_new() (or
g_object_new() with CLUTTER_TYPE_STAGE).
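For new code this means something along the lines of the following minimal
sketch:

  #include <clutter/clutter.h>

  int
  main (int argc, char *argv[])
  {
    ClutterActor *stage;

    if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
      return 1;

    /* create a stage explicitly, instead of clutter_stage_get_default() */
    stage = clutter_stage_new ();
    clutter_stage_set_title (CLUTTER_STAGE (stage), "Example");
    g_signal_connect (stage, "destroy", G_CALLBACK (clutter_main_quit), NULL);
    clutter_actor_show (stage);

    clutter_main ();

    return 0;
  }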
For platforms that support multiple stages, nothing has changed: the stage
created by clutter_stage_get_default() will be set as the default one;
if nobody calls it, the default stage is never created, and it just
lives on as a meaningless check.
For platforms that only support one stage, clutter_stage_new() and
clutter_stage_get_default() will behave exactly the same the first time
they are called: both will create a stage, and set it as the default.
Calling clutter_stage_new() a second time is treated as a programmer
error, and will result in Clutter aborting. This is a behavioural change
because the existing behaviour of creating a new ClutterStage instance
with the same ClutterStageWindow private implementation is, simply put,
utterly braindamaged, and I should never have written it; I apologize
for it. In my defence, I didn't know any better at the time.
This is the first step towards the complete deprecation of
clutter_stage_get_default() and clutter_stage_is_default(), which will
come later.
Instead of implementing create_stage() and a constructor for
ClutterStageOSX, we can use the default implementations in
ClutterBackend, and spare us some code duplication.
Create the device manager during the event initialization, where it
makes sense.
This allows us to get rid of the per-backend get_device_manager()
virtual function, and just store the DeviceManager pointer into the
ClutterBackend structure.
All StageWindow implementations already have back pointers, but we need a
unified API to actually set them from the generic code path; we can use
properties on the StageWindow interface — though this requires fixing
all backends at the same time, to avoid GObject complaining.
Instead of piggybacking on the EGL backend, let's create a small
ClutterBackend for the CEx100 platforms. This allows us to handle the
CEx100-specific details in a much cleaner way.
All the functionality that ClutterBackendCogl provided has been moved
into ClutterBackend itself, so there is no need to have this class
around in the source.
Cogl-based backends can derive directly from ClutterBackend.
Don't replace create_context(): given that the X11 backend already uses
Cogl for the context creation, we can just provide the right data
structures ourselves.
Since we use Cogl for the context creation we can now provide a default
context creation that should just work, plus a couple of hooks to allow
plugging into the creation sequence for platforms supported by Cogl that
require special handling — like foreign displays or alpha-enabled swap
chains.
The various backends have now two choices: either replace the
create_context() in its entirety, or plug themselves into the default
context creation.
Input backends are, in some cases, independent from the windowing system
backends; we can initialize input handling using a model similar to what
we use for windowing backends, including an environment variable and
compile-/run-time checks.
This model allows us to remove the backend-specific init_events(), and
use a generic implementation directly inside the base ClutterBackend
class, thus further reducing the backend-specific code that every
platform has to implement.
This requires some minor surgery to every single backend, to make sure
that the function exposed to initialize the event loop is similar and
performs roughly the same operations.
Right now, we pass unrecognized hexadecimal formats through to Pango
when parsing colors from strings. Since we parse all possible formats
ourselves, we can do validation ourselves as well, and avoid the Pango
path.
Previously, if there was whitespace between "l" and the comma before the
alpha value, parsing would fail. This patch allows that whitespace,
making it consistent with the whitespace allowed everywhere else.
https://bugzilla.gnome.org/show_bug.cgi?id=663594
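A hedged usage sketch of clutter_color_from_string(); the exact strings
accepted are defined by the Clutter documentation, and the whitespace
placement in the example string is only illustrative:

  #include <clutter/clutter.h>

  static void
  parse_example_color (void)
  {
    ClutterColor color;

    /* whitespace around the values, including before the comma that
     * precedes the alpha value, should now be accepted */
    if (clutter_color_from_string (&color, "hsla( 60, 100%, 50% , 0.5 )"))
      g_print ("parsed: %d %d %d %d\n",
               color.red, color.green, color.blue, color.alpha);
  }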
These methods are shorthands for accessing the position and size,
which are already shorthands for accessing the various dimensional
and positional attributes. Plus, they use ClutterGeometry, which is a
fairly bad data type for a rectangle.
We still use ClutterGeometry internally in a couple of places, but we
should really move away from that flawed rectangle data type, and use
the Cairo one.
Sadly, we still have some public API that we cannot remove yet.
Setting the default frame rate does not do anything even remotely
useful, unless synchronization to the vertical refresh rate is also
disabled, which can only be done through an environment variable or
the configuration file. Having a programmatic way to change the
default frame rate is, thus, completely pointless.
Changing the default frame rate through the environment variable and
the configuration file is still allowed.
A GDK backend for Clutter was added in commits 610a9c17
(Add A new GDK backend) and f14cbf5b (gdk: Allow disabling event retrieval).
These are not included in the Win32 builds for the moment, as GDK3 is
quite unstable on Windows at this time.
-Update clutter-version.h.win32.in to reflect the state of
clutter-version.h.in as of commit 21a24c86 (updated in commit 8249e488)
-Also add clutter_check_windowing_backend to clutter.symbols as a result.
The clutter-stage.h header still has a bunch of macros that have, for
reasons unknown[*], survived the 1.0 API cut and have long since been
deprecated. Let's hide them under the deprecated/ carpet and let us
never speak of them ever again.
[*] pretty sure alcohol or other psychotropic substances were involved
but I take the 5th on that.
Instead of defining new symbols for the windowing systems enabled at
configure time, we can reuse the same symbols for both the compile time
and run time checks, e.g.:
#ifdef CLUTTER_WINDOWING_X11
  if (clutter_check_windowing_backend (CLUTTER_WINDOWING_X11))
    {
      /* use the clutter_x11_* API */
    }
  else
#endif
#ifdef CLUTTER_WINDOWING_WIN32
  if (clutter_check_windowing_backend (CLUTTER_WINDOWING_WIN32))
    {
      /* use the clutter_win32_* API */
    }
  else
#endif
    {
      /* fall back to backend-agnostic code */
    }
This scheme allows us to ensure that the input system namespace is free
for us to use and select at run time in later versions of Clutter.
We need debugging notes, to see what's happening when handling events.
We need to queue a (clipped) redraw when receiving a GDK_EXPOSE event.
We need to check the device (both master and source) of the event using
the GdkEvent API, and pass them to the ClutterEvent using the
corresponding Clutter API.
The code is generally wrong, and does not work. We need to skip the
GdkWindow creation when we have a foreign window, but we still need to
create the Cogl onscreen buffer and connect it to the GdkWindow's native
resource.
Just like with the other backends, it should be possible to disable the
internal event handling and use clutter_<backend>_handle_event() to do
the native → Clutter event translation.
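The expected pattern for such applications is presumably along these lines;
a sketch only, so check the clutter-gdk documentation for the exact header
path and entry points:

  #include <clutter/clutter.h>
  #include <clutter/gdk/clutter-gdk.h>

  /* forward every GdkEvent to Clutter for translation */
  static void
  forward_gdk_event (GdkEvent *event, gpointer user_data)
  {
    clutter_gdk_handle_event (event);
  }

  static void
  setup_events (int *argc, char ***argv)
  {
    /* disable Clutter's own event retrieval before initialization */
    clutter_gdk_disable_event_retrieval ();

    clutter_init (argc, argv);

    gdk_event_handler_set (forward_gdk_event, NULL, NULL);
  }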