Instead of using the allocation-changed signal, use the queue-relayout
signal on the source to queue a relayout on the actor to which the
BindConstraint has been attached.
The ::allocation-changed signal is not always enough, given that a
BindConstraint can use the position as well as the size of an actor to
drive the allocation of another; in this regard, it's very similar
to a ClutterClone, which requires a notification on every change, even
potential ones and not just real ones, given the short-circuiting done
inside ClutterActor.
Instead of delegating the check for the ActorMeta:enabled property to
the sub-classes of ClutterActorMeta, ClutterActor can do the check prior
to using the ClutterActorMeta instances.
The interpolate() method does what it says on the tin: it interpolates
between two colors using the given factor.
ClutterColor uses it to register a progress function for Intervals.
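A minimal usage sketch, assuming the public clutter_color_interpolate()
signature:

    ClutterColor start = { 0x00, 0x00, 0x00, 0xff };
    ClutterColor end   = { 0xff, 0xff, 0xff, 0xff };
    ClutterColor mid;

    /* a factor of 0.5 yields the half-way color between start and end */
    clutter_color_interpolate (&start, &end, 0.5, &mid);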
When picking a size for the last slice in a texture, Cogl would always
pick the biggest power of two size that doesn't create too much
waste and is less than or equal to the previous slice size. However
this can end up creating a texture that is bigger than needed when a
smaller power of two would suffice.
For example, if the maximum waste is 127 (the current default) and we
try to create a texture that is 257 pixels wide it will decide that
the next power of two (512) is too much waste (255) so it will create
the first slice at 256 pixels wide. Then we only have 1 pixel left to
allocate but Cogl would pick the next smaller size that has a small
enough waste which is 128. But of course 1 is already a power of two
so that's redundantly oversized by 127.
This patch fixes it so that whenever it finds a size that would be big
enough, instead of using exactly that it picks the next power of two
up from the size we need to fill.
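A minimal sketch of the fixed decision, with hypothetical names; only the
rounding strategy comes from the description above:

    /* once a candidate power-of-two size is big enough to cover what is
     * left, round the *remaining* size up to a power of two instead of
     * using the candidate itself */
    static int
    pick_last_slice_size (int size_remaining, int candidate_size)
    {
      if (candidate_size >= size_remaining)
        {
          int p2 = 1;

          while (p2 < size_remaining)
            p2 <<= 1;

          return p2; /* e.g. 1 pixel left -> 1, not 128 */
        }

      return candidate_size;
    }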
http://bugzilla.clutter-project.org/show_bug.cgi?id=2355
A Clone:source property might be NULL, and we should not penalize
performance when we can just bail out early, because that would kind of
defeat the point.
Whenever the allocation is changed on a child of a ClutterTableLayout
and animations are not in effect then it would store a copy of the
allocation in the child meta data. However it was not freeing the old
copy of the allocation so it would end up with a small leak.
Instead of just changing it to free the old value this patch makes it
store the allocation inline in the meta data struct because it seems
that the size of an actor box is already quite small compared to the
size of the meta data struct so it is probably not worth having a
separate allocation for it. To detect the case when there has not yet
been an allocation a separate boolean is used instead of storing NULL.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2358
All the nifty things you discover when translating strings not exposed
to anyone. First, the Clutter-wide record for the number of typos in one
string. Second, ClutterTexture happened to have the only property blurbs
ending with a '.'; remove the trailing periods.
the "position" property of ClutterText is really the position of the
cursor. Rename the nick accordingly not to confuse it with the position
of the actor itself and be consistent with all the other cursor-related
properties.
The descriptions for the 'y-align' and 'x-align' properties talk about a
layer and a layer manager. It seems that these properties are the
alignment factors relative to the BinLayout, so document them
accordingly.
There are ordering issues in the pixmap destruction with current and
past X11 server, Mesa and dri2. Under some circumstances, an X pixmap
might be destroyed with the GLX pixmap still referencing it, and thus
the X server will decide to destroy the GLX pixmap as well; then, when
Cogl tries to destroy the GLX pixmap, it gets BadDrawable errors.
Clutter 1.2 used to trap + sync all calls to glXDestroyPixmap(), but
then we assumed that the ordering issue had been solved. So, we're back
to square 1.
I left a Big Fat Comment™ right above the glXDestroyPixmap() call
referencing the bug and the reasoning behind the trap, so that we don't
go and remove it in the future without checking that the issue has been
in fact solved.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2324
After commit 8dd8fbdb some errors appear if you try to work directly
against cally:
* cally.pc.in removed some elements. After installing clutter, running
    pkg-config --cflags cally-1.0
  fails due to the missing winsys
* cally headers were moved from clutter-1.0/cally to
  clutter-1.0/clutter/cally. Applications using it (yes I know,
  nobody is officially using it) would need to:
  * Change their includes.
  * Add a direct dependency on cally, in order to use the cally.pc
    file with the correct include directory.
Note: Take into account that accessibility support still works (ie:
clutter_get_accessibility_enabled). This bug only prevents
applications from working directly against cally (ie: creating a
CallyActor subclass)
http://bugzilla.clutter-project.org/show_bug.cgi?id=2353
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Landing the paint-box branch accidentally added two slots to the
ClutterEffectClass vtable, plus the get_paint_volume() function
pointer. This is an ABI break from 1.4.
Like we do for the Quartz backend, we should turn on the -xobjective-c
compiler flag for the Fruity backend.
This does not mean that the backend actually works.
The marshaller was defined as OBJECT,OBJECT,PARAM but the signal
definition used only two arguments. Since the signal never worked
and we never got any report about it, nobody could possibly be
using the ::child-notify signal.
Since we added child properties to the Container interface we made a
guarantee that the ::child-notify signal would be emitted whenever a
property was set using clutter_container_child_set*().
We were lying.
The child_notify virtual function was not implemented, and the signal
was never emitted.
We also used a G_LIKELY() macro while checking for non-NULL on a
function pointer that was by default set to NULL, thus making the
setting of child properties far less efficient than needed.
The clutter stage has a list of entries of actors waiting to be redrawn.
Each entry has a "clip" ClutterPaintVolume member which represents how
much of the actor needs to get redrawn. It's possible for there to
be no clip associated with the entry; in this case, the clip member is
invalid and the has_clip member should be set to FALSE.
This commit fixes a bug where the has_clip member was not being
explicitly initialized to FALSE for new entries, and was not being
explicitly reset to FALSE when the clip associated with an entry
is freed.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2350
Signed-off-by: Robert Bragg <robert@linux.intel.com>
In all the changes made recently to how we handle redraws and adding
support for paint-volumes we stopped looking at explicit clip regions
passed to _clutter_actor_queue_redraw_with_clip.
In _clutter_actor_finish_queue_redraw we had started always trying to
clip the redraw to the paint-volume of the actor, but forgot to consider
that the user may have already determined the clip region for us!
Now we first check if the given clip != NULL and if so we don't need to
calculate the paint-volume of the actor.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2349
One of the later changes made on the paint volume branch before merging
with master was to make paint volumes opt in only since we couldn't make
any safe assumptions about how custom actors may constrain their
painting. We added very conservative implementations for the existing
Clutter actors - including for ClutterTexture which
ClutterX11TexturePixmap is a sub-class of - but we were conservative to
the extent of explicitly checking the GType of the actor so we would
avoid making any assumptions about sub-classes. The upshot was that we
neglected to implement the get_paint_volume vfunc for
ClutterX11TexturePixmap.
This patch provides an implementation that simply reports the actor's
allocation as its paint volume. Also unlike for other core actors it
doesn't explicitly check the GType so we are assuming that all existing
sub-classes of ClutterX11TexturePixmap constrain their drawing to the
actor's transformed allocation. If anyone does want to draw outside the
allocation in future sub-classes, then they should also provide an
updated get_paint_volume implementation.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2349
When using the debug function _cogl_debug_dump_materials_dot_file to
write a dot file representing the sparse graph of material state we now
only show a link between materials and layers when the material directly
owns that layer reference (i.e. just those referenced in
material->layer_differences). This makes it possible to see when
ancestors of a material are being deferred to for layer state.
For example when looking at the graph, if you see that a material has an
n_layers of 3 but there is only a link to 2 layers, then you know you
need to look at its ancestors to find the last layer.
In 4ee05f8e21 the namespace for the clutter keysym macros was
changed to CLUTTER_KEY_* but the win32 events backend was still
referring to the old names.
GObject ≥ 2.26.0 added a nice convenience call for installing properties
from an array of GParamSpec. Since we're already storing all GParamSpec
in an array in order to use them with g_object_notify_by_pspec(), this
turns out nicely for us.
Since we do not depend on GLib 2.26 (yet), we need to provide a simple
private wrapper that implements the fall back to the default
g_object_class_install_property() call.
ClutterDragAction has been converted as a proof of concept.
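A minimal sketch of such a fallback wrapper; the wrapper name is
hypothetical, only g_object_class_install_properties() and
g_object_class_install_property() are real GObject API:

    static void
    clutter_object_class_install_properties (GObjectClass *klass,
                                             guint         n_pspecs,
                                             GParamSpec  **pspecs)
    {
    #if GLIB_CHECK_VERSION (2, 26, 0)
      g_object_class_install_properties (klass, n_pspecs, pspecs);
    #else
      guint i;

      /* pspecs[0] is unused: property IDs start from 1, matching the
       * array layout used with g_object_notify_by_pspec() */
      for (i = 1; i < n_pspecs; i++)
        g_object_class_install_property (klass, i, pspecs[i]);
    #endif
    }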
During destruction, the StageWindow implementation associated with a Stage
might be NULL. We need to add more checks for a) the IN_DESTRUCTION flag
being set and b) the StageWindow pointer being NULL. Otherwise, we will
get warnings during the destruction of the Stage.
Both of the cogl_texture_2d_sliced_new functions called the
slices_create function which creates the underlying GL
textures. However this was also called by init_base so the textures
would end up being created twice. This would make it leak the GL
textures and the arrays which point to them.
The internal copy of JSON-GLib was meant to go away right after the 1.0
release, given that JSON-GLib was still young and relatively unknown.
Nowadays, many projects started depending on this little library, and
distributions ship it and keep it up to date.
Keeping a copy of JSON-GLib means keeping it up to date; unfortunately,
this would also imply updating the code not just for the API but for the
internal implementations.
Starting with the 1.2 release, Clutter preferred the system copy;
with the 1.4 release we stopped falling back automatically to the
internal one.
The 1.6 cycle finally removes the internal copy and requires a copy of
JSON-GLib installed on the target system in order to compile Clutter.
Since re-working how redraws are queued it is no longer necessary to
dirty the pick buffer in _clutter_actor_real_queue_redraw since this
should now reliably be handled in _clutter_stage_queue_actor_redraw.
This adds two internal functions relating to explicit traversal of the
scenegraph:
_clutter_actor_foreach_child
_clutter_actor_traverse
_clutter_actor_foreach_child just iterates the immediate children of an
actor, and with a new ClutterForeachCallback type it allows the
callbacks to break iteration early.
_clutter_actor_traverse traverses the given actor and all of its
descendants. Again traversal can be stopped early if a callback returns
FALSE.
The first intended use for _clutter_actor_traverse is to maintain a
cache pointer to the stage for all actors. In this case we will need to
update the pointer for all descendants of an actor when an actor is
reparented in any way.
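A minimal sketch of how the traversal could be used for that, with a
hypothetical _clutter_actor_set_stage_cached() helper and assuming the
ClutterForeachCallback shape described above:

    static gboolean
    update_stage_cache_cb (ClutterActor *actor,
                           gpointer      user_data)
    {
      ClutterActor *stage = user_data;

      _clutter_actor_set_stage_cached (actor, stage);

      return TRUE; /* returning FALSE would stop the traversal early */
    }

    /* on reparenting, refresh the cached stage for the whole subtree */
    _clutter_actor_traverse (actor, update_stage_cache_cb, stage);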
This adds a private getter to query the number of children an actor has.
One use planned for this API is to avoid calling get_paint_volume on
actors with children. (It's not clear what the best semantics for
get_paint_volume are for actors with children, so we are considering
leaving the semantics undefined for the initial clutter 1.4 release.)
We now explicitly track the list of children each actor has in a private
GList. This gives us a reliable way to know how many children an actor
has - even for composite actors that don't implement the container
interface. This also will allow us to directly traverse the scenegraph
in a more generalized fashion. Previously the scenegraph was
more-or-less represented implicitly according to the implementation of
paint methods.
When using the CLUTTER_PAINT=paint-volumes debug option we try and show
when a paint volume couldn't be determined by drawing a blue outline of
the allocation instead. There was a typo though, and we were drawing an
outline the size of the stage rather than one for the given actor.
This fixes that and removes a FIXME comment relating to the blue outline
that is now implemented.
To allow Clutter to queue clipped redraws when a clone actor changes we
need to be able to report a paint volume for clone actors. This patch
makes ClutterClones query the paint volume of their source actor and
masquerade it as their own volume.
This reverts commit ca44c6a7d8abe9f2c548bee817559ea8adaa7a80.
In reality there are probably lots of actors that depend on the exact
semantics as they are documented so this change isn't really acceptable.
For example when the font changes in ClutterText we only queue a
relayout, and since it's possible that the font will have the same size
and the actor won't get a new allocation it wouldn't otherwise queue a
redraw.
Since queue_redraw requests now get deferred until just before a paint
run it is actually no longer a problem to queue the redraw here.
Instead of immediately, recursively emitting the "queue-redraw" signal
when clutter_actor_queue_redraw is called we now defer this process
until all stage updates are complete. This allows us to aggregate
repeated _queue_redraw requests for the same actor avoiding redundant
paint volume transformations. By deferring we also increase the
likelihood that the actor will have a valid paint volume since it will
have an up to date allocation; this in turn means we will more often be
able to automatically queue clipped redraws which can have a big impact
on performance.
Here's an outline of the actor queue redraw mechanism:
The process starts in clutter_actor_queue_redraw or
_clutter_actor_queue_redraw_with_clip.
These functions queue an entry in a list associated with the stage which
is a list of actors that queued a redraw while updating the timelines,
performing layouting and processing other mainloop sources before the
next paint starts.
We aim to minimize the processing done at this point because there is a
good chance other events will happen while updating the scenegraph that
would invalidate any expensive work we might otherwise try to do here.
For example we don't try to resolve the screen space bounding box of an
actor at this stage, which we would do to minimize how much of the screen
gets redrawn, because it's possible something else will happen which will
force a full redraw anyway.
When all updates are complete and we come to paint the stage (see
_clutter_stage_do_update) then we iterate this list and actually emit
the "queue-redraw" signals for each of the listed actors which will
bubble up to the stage for each actor and at that point we will
transform the actor's paint volume into screen coordinates to determine
the clip region for what needs to be redrawn in the next paint.
Note: actors are allowed to queue a redraw in response to a
queue-redraw signal so we repeat the processing of the list until it
remains empty. An example of when this happens is for Clone actors or
clutter_texture_new_from_actor actors which need to queue a redraw if
their source queues a redraw.
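A minimal sketch of the deferred emission step, assuming a hypothetical
pending list on the stage and a hypothetical helper that emits
"queue-redraw" for one entry; the real data structures may differ:

    /* in _clutter_stage_do_update(), before painting */
    while (priv->pending_queue_redraws != NULL)
      {
        GList *l = priv->pending_queue_redraws;

        /* detach the list first: emitting "queue-redraw" may queue new
         * entries, e.g. a ClutterClone reacting to its source */
        priv->pending_queue_redraws = NULL;

        for (; l != NULL; l = l->next)
          _clutter_stage_finish_queue_redraw_entry (l->data);

        g_list_free (l);
      }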
For Clone actors we will need a way to report the volume of the source
actor as the volume of the clone actor. To make this work though we need
to be able to replace the reference to the source actor with a reference
to the clone actor instead. This adds a private
_clutter_paint_volume_set_reference_actor function to do that.
This adds a way to initialize a paint volume from another source paint
volume. This lets us for instance pass the contents of one paint volume
back through the out param of a get_paint_volume implementation.
This makes clutter_actor_queue_redraw simply bail out early if the actor
isn't a descendant of a ClutterStage since the request isn't meaningful
and it avoids a crash when trying to queue a clipped redraw against the
stage to clear the actor's old location.
This splits out all the clutter_paint_volume code from clutter-actor.c
into clutter-paint-volume.c. Since clutter-actor.c and
clutter-paint-volume.c both needed the functionality of
_fully_transform_vertices, this function has now been moved to
clutter-utils.c as _clutter_util_fully_transform_vertices.
There are too many examples where the default assumption that an actor
paints inside its allocation isn't true, so we now return FALSE in the
base implementation instead. This means that by default we are saying
"we don't know the paint volume of the actor", so developers need to
implement the get_paint_volume virtual to take advantage of culling and
clipped redraws with their actors.
This patch provides very conservative get_paint_volume implementations
for ClutterTexture, ClutterCairoTexture, ClutterRectangle and
ClutterText which all explicitly check the actor's object type to avoid
making any assumptions about subclasses.
We were always explicitly checking priv->needs_allocation in
_clutter_actor_queue_redraw_with_clip, but we only need to do that if
the CLUTTER_REDRAW_CLIPPED_TO_ALLOCATION flag is used.
This initializes priv->last_paint_box with a degenerate box, so a newly
allocated actor added to the scenegraph and made visible only needs to
trigger a redraw of its initial position. If we don't have a valid
last_paint_box though we would instead trigger a full stage redraw.
To make comparing the performance with culling/clipped redraws
enabled/disabled fairer we now avoid querying the paint box when they
are disabled, so that results should reflect how the cost of
transforming paint volumes into screen space etc gets offset against the
benefit of culling.
In clutter_stage_allocate at the end we were always querying the latest
allocation set and using the geometry to assert the viewport and then
kicking a full redraw. These only need to be done when the allocation
really changes, so we now read the previous allocation at the start of
the function and compare at the end. This was stopping clipped redraws
from being used in a lot of cases.
Considering that we've seen a number of drivers that can struggle to get
going and may produce a bad first frame, we now force the first 2 frames
to be full redraws.
to be full redraws. This became a serious issue after we started using
clipped redraws more aggressively because we assumed that after the
first frame the full framebuffer was valid and we only redraw the
content that changes. With buggy drivers though, applications would be
left with junk covering a lot of the stage until some event triggered a
full redraw.
This is a workaround for a race condition when resizing windows while
there are in-flight glXCopySubBuffer blits happening.
The problem stems from the fact that rectangles for the blits are
described relative to the bottom left of the window and because we can't
guarantee control over the X window gravity used when resizing so the
gravity is typically NorthWest not SouthWest.
This means if you grow a window vertically the server will make sure to
place the old contents of the window at the top-left/north-west of your
new larger window, but that may happen asynchronously to GLX preparing to
do a blit specified relative to the bottom-left/south-west of the window
(based on the old smaller window geometry).
When the GLX issued blit finally happens relative to the new bottom of
your window, the destination will have shifted relative to the top-left
where all the pixels you care about are so it will result in a nasty
artefact making resizing look very ugly!
We can't currently fix this completely, in-part because the window
manager tends to trample any gravity we might set. This workaround
instead simply disables blits for a while if we are notified of any
resizes happening so if the user is resizing a window via the window
manager then they may see an artefact for one frame but then we will
fall back to redrawing the full stage until the cooling off period is
over.
Instead of triggering a full stage redraw for Expose events we use the
geometry of the exposed region given in the event to queue a clipped
redraw of the stage.
Clutter has now taken responsibility for managing its viewport,
projection matrix and view transform as part of ClutterStage so
_cogl_setup_viewport is no longer used by anything, and since it's quite
an obscure API anyway we've taken the opportunity to remove the
function.
Since clutter_actor_queue_redraw now automatically clips redraws
according to the paint volume of the actor we have to be careful to
ensure we really force a full redraw when the stage is allocated a new
size or the stage viewport changes.
We have bent the originally documented semantics a bit so now where we
say "Queueing a new layout automatically queues a redraw as well" it
might be clearer to say "Queuing a new layout implicitly queues a redraw
as well if anything in the layout changes".
This should be close enough to the original semantics to not cause any
problems.
Without this change we fail to take advantage of clipped redraws
in a lot of cases because queuing a redraw with priv->needs_allocation
== TRUE will automatically be promoted to a full stage redraw since it's
not possible to determine a valid paint-volume.
Also queuing a redraw here will end up registering a redundant clipped
redraw for the current location, doing quite a lot of redundant
transforms, and then later when re-allocated during layouting another
queue redraw would happen with the correct paint-volume.
This uses actor paint volumes to perform culling during
clutter_actor_paint.
When performing a clipped redraw (because only a few localized actors
changed) then as we traverse the scenegraph painting the actors we can
now ignore actors that don't intersect the clip region. Early testing
shows this can have a big performance benefit; e.g. a 100% fps improvement
for test-state with culling enabled, and we hope that there are even
more compelling examples than that in the real world.
Most Clutter applications are 2Dish interfaces and have quite a lot of
actors that get continuously painted when anything is animated. The
dynamic actors are often localized to an area of user focus though so
with culling we can completely avoid painting any of the static actors
outside the current clip region.
Obviously the cost of culling has to be offset against the cost of
painting to determine if it's a win, but our (limited) testing suggests
it should be a win for most applications.
Note: we hope we will be able to also bring another performance bump
from culling with another iteration - hopefully in the 1.6 cycle - to
avoid doing the culling in screen space and instead do it in the stage's
model space. This will hopefully let us minimize the cost of
transforming the actor volumes for culling.
This makes clutter_actor_queue_redraw transparently use an actor's paint
volume to queue a clipped redraw.
We save the actor's paint box each time it is painted so that when
clutter_actor_queue_redraw is called we can determine the old and new
location of the actor so we know the full bounds of what must be redrawn
to clear its old view and show the new.
This makes _clutter_actor_transform_and_project_box a static function
and removes the prototype from clutter-private.h since it is no longer
used outside clutter-actor.c
The base implementation for the actor queue_relayout method was queuing
an implicit redraw, but there shouldn't be anything about the mere
process of queuing a relayout that forces us to queue a redraw.
If actors are moved as a part of relayouting later then they will queue
a redraw. Also clutter_actor_queue_relayout() still also explicitly
queues a redraw so I think this may have been doubly redundant.
If clutter_actor_allocate finds it necessary to update an actor's
allocation then it now also queues a redraw of that actor. Currently we
queue redraws for actors very early on when queuing a relayout instead
of waiting to determine the final outcome of relayouting to determine if
a redraw is really required. With this in place we can move away from
preemptive queuing of redraws.
clutter_actor_queue_relayout currently queues a relayout and a redraw,
but the plan is to change it to only queue a relayout and honour the
documentation by assuming that the process of relayouting will
result in queuing redraws for any actors whose allocation changes.
This doesn't make that change; it just adds an internal
_clutter_actor_queue_only_relayout function which
clutter_actor_queue_relayout now uses as well as calling
clutter_actor_queue_redraw.
This adds a private ->relayout_pending boolean similar in spirit to
redraw_pending. This will allow us to queue a relayout without
implicitly queueing a redraw; instead we can depend on the actions
of a relayout to queue any necessary redraw.
When clutter_texture_new_from_actor is used we need to track when the
source actor queues a redraw or a relayout so we can also queue a redraw
or relayout for the texture actor.
There is an internal _clutter_actor_queue_redraw_with_clip API that gets
used for texture-from-pixmap to minimize what we redraw in response to
Damage events. It was previously working in terms of a ClutterActorBox
but it has now been changed so an actor can queue a redraw of a volume
instead.
The plan is that clutter_actor_queue_redraw will start to transparently
use _clutter_actor_queue_redraw_with_clip when it can determine a paint
volume for the actor.
For the blur effect we use a BLUR_PADDING constant to pad out the volume
of the source actor on the x and y axis. Previously we were offsetting
the origin negatively using BLUR_PADDING and then adding BLUR_PADDING
to the width and height, but we should have been adding 2*BLUR_PADDING
instead.
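A minimal sketch of the corrected arithmetic; only BLUR_PADDING comes from
the description above, the rest of the names are hypothetical:

    static void
    pad_volume_for_blur (float *origin_x, float *origin_y,
                         float *width, float *height)
    {
      *origin_x -= BLUR_PADDING;
      *origin_y -= BLUR_PADDING;

      /* the padding applies to both sides of each axis */
      *width  += 2 * BLUR_PADDING;
      *height += 2 * BLUR_PADDING;
    }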
This ensures that clipped redraws are disabled when using
CLUTTER_PAINT=redraws. This may seem unintuitive given that this option
is for debugging clipped redraws, but we can't draw an outline outside
the clip region and anything we draw inside the clip region is liable to
leave a trailing mess on the screen since it won't be cleared up by
later clipped redraws.
This adds a debug option to visualize the paint volumes of all actors.
When CLUTTER_PAINT=paint-volumes is exported in the environment before
running a Clutter application then all actors will have their bounding
volume drawn in green with a label corresponding to the actor's type.
This is a fairly extensive second pass at exposing paint volumes for
actors.
The API has changed to allow clutter_actor_get_paint_volume to fail
since there are times - such as when an actor isn't a descendant of the
stage - when the volume can't be determined. Another example is when
something has connected to the "paint" signal of the actor and we simply
have no way of knowing what might be drawn in that handler.
The API has also been changed to return a const ClutterPaintVolume pointer
(transfer none) so we can avoid having to dynamically allocate the
volumes in the most common/performance critical code paths. Profiling was
showing the slice allocation of volumes taking about 1% of an app's time,
for some fairly basic tests. Most volumes can now simply be allocated on
the stack; for clutter_actor_get_paint_volume we return a pointer to
&priv->paint_volume and if we need a more dynamic allocation there is
now a _clutter_stage_paint_volume_stack_allocate() mechanism which lets
us allocate data which expires at the start of the next frame.
The API has been extended to make it easier to implement
get_paint_volume for containers by using
clutter_actor_get_transformed_paint_volume and
clutter_paint_volume_union. The first allows you to query the paint
volume of a child but transformed into parent actor coordinates. The
second lets you combine volumes together so you can union all the
volumes for a container's children and report that as the container's
own volume.
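A minimal sketch of a container's get_paint_volume using these helpers;
the signatures follow the public API and "my_group" is a hypothetical
container type:

    static gboolean
    my_group_get_paint_volume (ClutterActor       *self,
                               ClutterPaintVolume *volume)
    {
      GList *children, *l;
      gboolean res = TRUE;

      children = clutter_container_get_children (CLUTTER_CONTAINER (self));

      for (l = children; l != NULL; l = l->next)
        {
          const ClutterPaintVolume *child_volume;

          /* the child's volume, transformed into our coordinate space */
          child_volume =
            clutter_actor_get_transformed_paint_volume (l->data, self);

          if (child_volume == NULL)
            {
              /* a child has no computable volume: neither do we */
              res = FALSE;
              break;
            }

          clutter_paint_volume_union (volume, child_volume);
        }

      g_list_free (children);

      return res;
    }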
The representation of paint volumes has been updated to consider that
2D actors are the most common.
The effect apis, clutter-texture and clutter-group have been updated
accordingly.
Previously we used the transformed allocation but that doesn't take
into account actors with depth which may be projected outside the
area covered by the transformed allocation.
The blur effect will sample pixels on the edges of the offscreen buffer,
so we want to add a padding to avoid clamping the blur.
We do this by creating a larger target texture, and updating the paint
volume of the actor during paint to take that padding into account.
We should be using the real, on-screen, transformed size of the actor to
size and position the offscreen buffer we use to paint the actor for an
effect.
An Effect implementation might override the paint volume of the actor to
which it is applied. The get_paint_volume() virtual function should
be added to the Effect class vtable so that any effect can get the
current paint volume and update it.
The clutter_actor_get_paint_volume() function becomes context aware, and
does the right thing if called from within a ClutterEffect pre_paint()
or post_paint() implementation, by allowing all effects in the chain up
to the caller to modify the paint volume.
An actor has an implicit "paint volume", that is the volume in 3D space
occupied when painting itself.
The paint volume is defined as a cuboid with the origin placed at the
top-left corner of the actor; the size of the cuboid is given by three
vectors: width, height and depth.
ClutterActor provides API to convert the paint volume into a 2D box in
screen coordinates, to compute the on-screen area that an actor will
occupy when painted.
Actors can override the default implementation of the get_paint_volume()
virtual function to provide a different volume.
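A minimal sketch of an actor overriding get_paint_volume to report such a
cuboid, assuming the public ClutterPaintVolume setters:

    static gboolean
    my_actor_get_paint_volume (ClutterActor       *self,
                               ClutterPaintVolume *volume)
    {
      ClutterVertex origin = { 0.f, 0.f, 0.f };
      gfloat width, height;

      clutter_actor_get_size (self, &width, &height);

      /* a flat cuboid anchored at the actor's top-left corner */
      clutter_paint_volume_set_origin (volume, &origin);
      clutter_paint_volume_set_width (volume, width);
      clutter_paint_volume_set_height (volume, height);
      clutter_paint_volume_set_depth (volume, 0.f);

      return TRUE;
    }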
*** WARNING: THIS COMMIT CHANGES THE BUILD ***
Do not recurse into the backend directories to build private, internal
libraries.
We only recurse from clutter/ into the cogl sub-directory; from there,
we don't recurse any further. All the backend-specific code in Cogl and
Clutter is compiled conditionally depending on the macros defined by the
configure script.
We still recurse from the top-level directory into doc, clutter and
tests, because gtk-doc and tests do not deal nicely with non-recursive
layouts.
This change makes Clutter compile slightly faster, and cleans up the
build system, especially when dealing with introspection data.
Ideally, we also want to make Cogl part of the top-level build, so that
we can finally drop the sed trick that changes the shared library
referenced by the GIR before compiling it.
Currently disabled:
‣ OSX backend
‣ Fruity backend
Currently enabled but untested:
‣ EGL backend
‣ Windows backend
ClutterAnimator currently has a number of bugs related to its
referencing of its internal timeline.
1) The default timeline created in _init is not unreffed (it appears the
programmer wrongly thought that ClutterTimeline has a floating reference,
based on the use of g_object_ref_sink in _set_timeline)
2) The timeline and slave_timeline vars are unreffed in finalize instead
of dispose
3) The signal handlers set up in _set_timeline are not disconnected when
the animator is disposed
http://bugzilla.clutter-project.org/show_bug.cgi?id=2347
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This reorganizes the loop for clutter_actor_contains so that it is a
for loop rather than a while loop. Although this is mostly just
nitpicking, I think this change could make the loop slightly faster if
not optimized because it doesn't perform the self == descendant check
twice and it is clearer.
The documentation for clutter_actor_contains didn't specify what
happens when self==descendant. A strict reading of it might lead you
to think that it would return FALSE because in that case the
descendant isn't an immediate child or a deeper descendant. The code
actually would return TRUE. I think this is more useful so this patch
fixes the docs rather than the code.
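A minimal sketch of the behaviour described in the two notes above: the
loop walks the parent chain of the descendant, so self == descendant
returns TRUE on the first iteration:

    gboolean
    clutter_actor_contains (ClutterActor *self,
                            ClutterActor *descendant)
    {
      ClutterActor *actor;

      for (actor = descendant;
           actor != NULL;
           actor = clutter_actor_get_parent (actor))
        {
          if (actor == self)
            return TRUE;
        }

      return FALSE;
    }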
When removing all keys in a ClutterAnimator, the hash table with
object/property name pairs went out of sync. This change makes
the animator always clear this hash table upon key-removal; and
refreshing it if the animator's timeline is running.
Fixes bug #2335
Each time a material property changes we look to see if any of its
ancestry has become redundant and if so we prune that redundant
ancestry.
There was a problem with the logic that handles this though because we
weren't considering that a material which is a layer state authority may
still defer to ancestors to define the state of individual layers.
For example a material that derives from a parent with 5 layers can
become a STATE_LAYERS authority by simply changing its ->n_layers count
to 4 and in that case it can still defer to its ancestors to define the
state of those 4 layers.
This patch checks first if a material is a layer state authority and if
so only tries to prune its ancestry if it also *owns* all the individual
layers it depends on. (I.e. if g_list_length
(material->layer_differences) != material->n_layers then it's not safe
to try pruning its ancestry!)
http://bugzilla-attachments.gnome.org/attachment.cgi?id=170907
glRenderbufferStorage() raises a GL_INVALID_ENUM error for
GL_DEPTH_STENCIL when called with the OpenGL ES backend, so enable this
only for the OpenGL backend.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
This is all internal, so we shouldn't need it; unfortunately, it seems
we're passing invalid data internally, so for the time being catching
inconsistencies should at least emit a warning for us to backtrace.
This adds a check in clutter_actor_real_queue_redraw after calling
_clutter_actor_get_stage_internal to check in case the actor doesn't yet
have an associated stage so we can avoid passing a NULL stage pointer to
_clutter_stage_set_pick_buffer_valid which could cause a crash.
*** This is an API change ***
The general pattern for axis-aligned arguments is:
x argument
y argument
If we consider columns an x-aligned argument, and row a y-aligned
argument, then we need to update the TableLayout functions to be:
column
row
and not:
row
column
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There was a problem though in that we were tracking this information in
the ClutterMainContext, but conceptually this doesn't really make sense
because the pick buffer is associated with a stage framebuffer and there
can be multiple stages for one context.
This patch moves the state tracking to ClutterStage.
This reverts commit d7e86e2696.
This was a half baked patch that was pushed a bit early since it broke
test-texture-pick-with-alpha + the commit message refers to a change on
the wip/paint-box branch that hasn't happened yet.
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There were two problems with how we were tracking this state though.
Firstly we were tracking this information in the ClutterMainContext, but
conceptually this doesn't really make sense because the pick buffer is
associated with a stage framebuffer and there can be multiple stages for
one context. Secondly - since the change to how redraws are queued - we
weren't marking the pick buffer as invalid when queuing a redraw, we
were only marking the buffer invalid when signaling/finishing the
queue-redraw process, which is now deferred until just before a paint.
This meant using clutter_stage_get_actor_at_pos after a scenegraph
change could give a wrong result if it just read from an existing (but
technically invalid) pick buffer.
This patch moves the state tracking to ClutterStage, and ensures the
buffer is invalidated in _clutter_stage_queue_actor_redraw.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2283
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The request mode set by the box layout was previously width-for-height
in a vertical layout and height-for-width in a horizontal layout which
seems to be wrong. For example, if width-for-height is used in a
vertical layout then the width request will come second with the
for_height parameter set. However a vertical layout doesn't pass the
for_height parameter on to its children so doing the requests in that
order doesn't help. If the layout contains a ClutterText then both the
width and height request for it will have -1 for the for_width and
for_height parameters so the text would end up allocated too small.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2328
If set_cogl_texture() is called after unsetting the Texture's material
then we really want to make a copy of the template.
Also, we should assert more often if the internal state goes horribly
wrong: at least, we'll have a backtrace.
The order of the row_span and column_span arguments was different in
the declaration from that in the definition. This was causing the
gtk-doc to also have the wrong order.
If COGL_OBJECT_DEBUG is defined then cogl-object-private.h will call
COGL_NOTE in the ref and unref macros. For this to work the debug
header needs to also be included or COGL_NOTE won't necessarily be
defined.
If the FlowLayout layout manager wasn't allocated the same size it
requested then it should blow its caches and recompute the layout
with the given allocation size.
Instead of using the fixed position and size API, use the newly added
update_allocation() virtual function in ClutterConstraint to change the
allocation of a ClutterActor. This allows using constraints inside
layout managers, and also allows Constraints to react to changes in the
size of an actor without causing relayout cycles.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2319
Constraints should plug directly into the allocation mechanism and
modify the allocation of the actor to which they are applied. This is
similar to the mechanism used by the Effect class to modify the paint
sequence of an actor.
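A minimal sketch of a Constraint plugging into the allocation mechanism
via update_allocation(); MY_MAX_WIDTH and the constraint type are
hypothetical:

    static void
    my_constraint_update_allocation (ClutterConstraint *constraint,
                                     ClutterActor      *actor,
                                     ClutterActorBox   *allocation)
    {
      /* clamp the actor's allocated width without touching its
       * fixed position or size */
      if (clutter_actor_box_get_width (allocation) > MY_MAX_WIDTH)
        allocation->x2 = allocation->x1 + MY_MAX_WIDTH;
    }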
In line with the changes made in f5f066df9c to clean up how Clutter
deals with transformations of actors this patch updates the code in
clutter-offscreen-effect.c. We now query the projection matrix from the
stage instead of the perspective and instead of duplicating the logic to
setup the stage view transform we now use
_clutter_actor_apply_modelview_transform for the stage instead.
cogl_util_next_p2 is declared in cogl-util.h which is a private header
so it shouldn't be possible for an application to use it. It's
probably not a function we'd like to export from Cogl so it seems
better to keep it private. This patch renames it to _cogl_util_next_p2
so that it won't be exported from the shared library.
The documentation for the function is also slightly wrong because it
stated that the function returned the next power greater than
'a'. However the code would actually return 'a' if it's already a
power of two. I think the actual behaviour is more useful so this
patch changes the documentation rather than the code.
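A minimal sketch of the documented behaviour (the real implementation may
differ): the smallest power of two greater than or equal to 'a' is
returned, so 'a' itself when it is already a power of two:

    static int
    _cogl_util_next_p2 (int a)
    {
      int rval = 1;

      while (rval < a)
        rval <<= 1;

      return rval; /* next_p2 (256) == 256, next_p2 (257) == 512 */
    }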
Previously CoglVertexBuffer would always set the flush options flags
to at least contain COGL_MATERIAL_FLUSH_FALLBACK_MASK. The code then
later checks whether any flags are set before deciding whether to copy
the material to implement the overrides. This means that it would
always end up copying the material even if there are no fallback
layers. This patch changes it so that it only sets
COGL_MATERIAL_FLUSH_FALLBACK_MASK if fallback_layers != 0.
If a single arbfp program is being shared between multiple CoglMaterials
then we need to make sure we update all program.local params when
switching between materials. Previously we had a dirty flag to track
when combine_constant params were changed but didn't take into account
that different materials sharing the same program may have different
combine constants.
Previously the backend private state was used to either link to an
authority material or provide authoritative program state. The mechanism
seemed overly complex and felt very fragile. I made a recent comment
which added a lot of documentation to make it easier to understand but
still it didn't feel very elegant.
This patch takes a slightly different approach; we now have a
ref-counted ArbfpProgramState object which encapsulates a single ARBfp
program and the backend private state now just has a single member which
is a pointer to one of these arbfp_program_state objects. We no longer
need to cache pointers to our arbfp-authority and so we can get rid of
a lot of awkward code that ensured these pointers were
updated/invalidated at the right times. The program state objects are
not tightly bound to a material so it will also allow us to later
implement a cache mechanism that lets us share state outside a materials
ancestry. This may help to optimize code not following the
recommendations of deriving materials from templates, avoiding one-shot
materials and not repeatedly modifying materials because even if a
material's ancestry doesn't naturally lead us to shareable state we can
fallback to searching for shareable state using central hash tables.
This adds a way to iterate the layer indices of the given material since
cogl_material_get_layers has been deprecated. The user provides a
callback to be called once for each layer.
Because modification of layers in the callback may potentially
invalidate any number of the internal CoglMaterialLayer structures and
invalidate the material's layer cache this should be more robust than
cogl_material_get_layers() which used to return a const GList *
pointing directly to internal state.
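A minimal usage sketch, assuming the callback receives the material and
the layer index and returns a boolean to continue or stop:

    static gboolean
    dump_layer_cb (CoglMaterial *material,
                   int           layer_index,
                   void         *user_data)
    {
      g_print ("layer index: %d\n", layer_index);

      return TRUE; /* keep iterating; FALSE stops early */
    }

    /* ... */
    cogl_material_foreach_layer (material, dump_layer_cb, NULL);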
This fixes the material backends to declare their constant vtable in the
c file with a corresponding extern declaration in the header. This
should fix complaints about duplicate symbols seen on OSX.
Instead of lazily incorporating combine constants as arbfp PARAM
constants in the source directly we now use program.local parameters
instead so we can avoid repeating codegen if a material's combine
constant is updated. This should be a big win for applications animating
a constant used for example in an animated interpolation, such as
gnome-shell.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2280
This makes it so we don't consider LAYER_STATE_TEXTURE changes to affect
the arbfp code. This should avoid a lot of unneeded passes of
code generation for applications modifying the texture for a layer.
This makes it so we only notify backends of either a single material
change or a single layer change. Previously all material STATE_LAYERS
changes would be followed by a more detailed layer change.
For backends that perform code generation for fragment processing they
typically need to understand the details of how layers get changed to
determine if they need to repeat codegen. It doesn't help them to report
a material STATE_LAYERS change for all layer changes since it's so
broad; they really need to wait for the layer change to be notified.
What does help though is to report a STATE_LAYERS change for a change in
material->n_layers because they typically do need to repeat codegen in
that case.
This fixes a number of issues relating to how we track the arbfp private
state associated with CoglMaterials. At the same time it adds much more
extensive code documentation to try and make it a bit more approachable.
When notifying a backend about a layer being modified we now pass the
layer's current owner for reference. NB: Although a layer can indirectly
be referenced by multiple materials, a layer is considered immutable once
it has dependants, so there is only ever one material associated with a
layer being modified. Passing the material pointer to the backends
layer_pre_change callback can be useful for backends that associate
their private state with materials and may need to update that state in
response to layer changes.
This renames the get_arbfp_authority function to
get_arbfp_authority_no_check to clarify that the function doesn't
validate that the authority cache is still valid by looking at the age
of the referenced material. The function should only be used when we
*know* the cache has already been checked.
We now pass a boolean to _cogl_material_pre_change_notify to know when
a material change is as a result of a layer change. We plan to use this
information to avoid notifying the backends about material changes if
they are as a result of layer changes. This will simplify the handling
of state changes in the backends because they can assume that layer and
material changes are mutually exclusive.
This adds an internal _cogl_material_get_layer_combine_constant function
so we can query the current layer combine constant back. We should
probably make this a public property getter, but for now we just need
this so we can read the constant in the arbfp backend.
We are going to start tracking more per-texture unit state with arbfp
private state so this adds an internal UnitState type and we allocate an
array of these when setting up a new private state structure. The first
thing that has been moved into this is the sampled boolean to know when
a particular texture unit gets sampled from in the generated arbfp code.
This avoids the use of gcc constructor and destructor attributes to
initialize the cogl uprof context and optionally print a cogl uprof
report at app exit. We now initialize the uprof context in
cogl_context_create instead.
When building with --enable-profile we now depend on the uprof-0.3
developer release which brings a few improvements:
» It lets us "fix" how we initialize uprof so that instead of using a shared
object constructor/destructor (which was a hack used when first adding
uprof support to Clutter) we can now initialize as part of clutter's
normal initialization code. As a side note though, I found that the way
Clutter initializes has some quite serious problems whenever it
involves GOptionGroups. It is not able to guarantee the initialization
of dependencies like uprof and Cogl. For this reason we still use the
constructor/destructor approach to initialize uprof in Cogl.
» uprof-0.3 provides a better API for adding custom columns when reporting
timer and counter statistics which lets us remove quite a lot of manual
report generation code in clutter-profile.c.
» uprof-0.3 provides a shared context for tracking mainloop timer
statistics. This means any mainloop based library following the same
"Mainloop" timer naming convention can use the shared context and no
matter who ends up owning the final mainloop the statistics will always
be in the same place. This allows profiling of Clutter with an
external mainloop such as with the Mutter compositor.
» uprof-0.3 can export statistics over dbus and comes with an ncurses
based ui to visualize timer and counter stats live.
The latest version of uprof can be cloned from:
git://github.com/rib/UProf.git
When try_creating_fbo fails it deletes any intermediate render buffers
that were created. However it doesn't clear the list so I think if it
failed a second time it would try to delete the render buffers
again. This could potentially cause problems if a subsequent fbo is
created because the destructor for the original might delete the
renderbuffers of the new fbo.
Since a ClutterClone may be allocated a different size than its source
actor we need to apply a scale factor before painting the source actor.
We were manually using cogl_scale to do this in clutter_clone_paint but
really this kind of thing is best handled in an implementation of the
apply_transform virtual so Clutter can be aware of the transform for
other purposes, such as input transformations. Also we want to provide
an implementation of the get_paint_volume virtual where Clutter will
also expect to be able to use the apply_transform virtual to transform
the volume into its parent's coordinate space.
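A minimal sketch of such an apply_transform override; the private
clone_source field name is an assumption:

    static void
    clutter_clone_apply_transform (ClutterActor *self,
                                   CoglMatrix   *matrix)
    {
      ClutterClonePrivate *priv = CLUTTER_CLONE (self)->priv;
      ClutterActorBox box, source_box;
      gfloat x_scale, y_scale;

      /* apply the standard actor transformations first */
      CLUTTER_ACTOR_CLASS (clutter_clone_parent_class)->apply_transform (self,
                                                                         matrix);

      clutter_actor_get_allocation_box (self, &box);
      clutter_actor_get_allocation_box (priv->clone_source, &source_box);

      /* scale the source's coordinate space to fit our allocation */
      x_scale = clutter_actor_box_get_width (&box)
              / clutter_actor_box_get_width (&source_box);
      y_scale = clutter_actor_box_get_height (&box)
              / clutter_actor_box_get_height (&source_box);

      cogl_matrix_scale (matrix, x_scale, y_scale, 1.f);
    }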
If a NULL clip is passed to clutter_stage_glx_add_redraw_clip then we
update the redraw clip to have width of 0, but we weren't setting
stage_glx->initialized_redraw_clip = TRUE. This could result in a full,
unclipped stage redraw being reduced to a clipped redraw.
This adds a verbose warning that will be displayed if
clutter_actor_allocate is passed an actor that isn't a descendent of a
ClutterStage. Layouting should always bubble up from a stage so this
condition is likely to indicate a buggy container that is allocating a
child that it has already unparented.
When building actor relative transforms, instead of using the matrix
stack to combine transformations and making assumptions about what is
currently on the stack we now just explicitly initialize an identity
matrix and apply transforms to that.
This removes the full_vertex_t typedef for internal transformation code
and we just use ClutterVertex.
ClutterStage now implements apply_transform like any other actor, and
the code we had in _cogl_setup_viewport has been moved to the
stage's apply_transform instead.
ClutterStage now tracks an explicit projection matrix and viewport
geometry. The projection matrix is derived from the perspective whenever
that changes, and the viewport is updated when the stage gets a new
allocation. The SYNC_MATRICES mechanism has been removed in favour of
_clutter_stage_dirty_viewport/projection() APIs that get used when
switching between multiple stages to ensure cogl has the latest
information about the onscreen framebuffer.
This adds _clutter_actor_get_stage_internal to clutter-private.h since
we plan to use it in clutter-offscreen-effect when preparing to
redirect an actor offscreen.
Instead of doing the shlib trick, build ClutterJson (if needed) inside
the top-level clutter/ directory - similar to a non-recursive layout.
Hopefully, one day, we'll be able to do this with a real non-recursive
layout.
Let's try to keep Cogl's build as non-recursive as possible, in the hope
that one day we'll be able to make it fully non-recursive along with the
rest of Clutter.
The keysyms defines in clutter-keysyms.h are generated from the X11 key
symbols headers by doing the equivalent of a pass of sed from XK_* to
CLUTTER_*. This might lead to namespace collisions, down the road.
Instead, we should use the CLUTTER_KEY_* namespace.
This commit includes the script, taken from GDK, that parses the X11
key symbols and generates two headers:
- clutter-keysyms.h: the default included header, with CLUTTER_KEY_*
- clutter-keysyms-compat.h: the compatibility header, with CLUTTER_*
The compat.h header file is included if CLUTTER_DISABLE_DEPRECATED is
not defined - essentially deprecating all the old key symbols.
This does not change any ABI and, assuming that an application or
library is not compiling with CLUTTER_DISABLE_DEPRECATED, the source
compatibility is still guaranteed.
Make sure we don't use deprecated API internally by adding
CLUTTER_DISABLE_DEPRECATED to the AM_CPPFLAGS.
This requires adding -UCLUTTER_DISABLE_DEPRECATED to the introspection
scanner CFLAGS, otherwise the deprecated API will never be introspected
and the data generated will not be compatible.
When animating an actor through clutter_actor_animate() and friends we
might want to forcibly detach the animation instance from the actor in
order to start a new one - for instance, in response to user
interaction.
Currently, there is no way to do that except in a very convoluted way,
by emitting the ::completed signal and adding a special case in the
signal handlers; this is due to the fact that clutter_actor_animate()
adds more logic than the one added by clutter_animation_set_object(),
so calling set_object(NULL) or unreferencing the animation instance
itself won't be enough.
The right way to approach this is to add a new method to Clutter.Actor
that detaches any eventual Animation currently referencing it.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2051
If we're depending on an uninstalled .gir, use --include-uninstalled.
We need to explicitly specify Cogl to let the scanner know it's also
uninstalled.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Flushing the framebuffer state can cause some drawing to occur if the
framebuffer has a clip stack which needs the stencil buffer. This was
causing the array pointers set up by enable_state_for_drawing_buffer
to get mangled so it would crash when it hits glDrawArrays. This patch
moves the framebuffer state flush to before it sets up the array
pointers.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2297
I think this is what commit 2cf1405506 intended to do since it
specifically mentioned cleaning up the trap in
clutter_x11_texture_pixmap_set_pixmap, but although it moved the untrap
to only be done in the case where Pixmap != None it left the position of
the trap itself unchanged. This meant the error trapping wouldn't be
balanced if pixmap == None since the untrap wouldn't be done. We now
only trap and untrap around the XGetGeometry call done when pixmap !=
None.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2303
With currently distributed versions of Mesa, calling XFreePixmap()
before glxDestroyPixmap() will cause an X error from DRI. So, we
need to make sure that we get rid of the CoglTexturePixmapX11 before
we XFreePixmap().
clutter_x11_texture_pixmap_dispose(): Call
clutter_x11_texture_pixmap_set_pixmap() instead of using XFreePixmap
directly so that we leverage the texture-clearing hack and destroy
things in the right order.
clutter_x11_texture_pixmap_set_pixmap(): Don't do a pointless roundtrip
and trap a pointless error when setting pixmap to None.
clutter_x11_texture_pixmap_set_pixmap(): Free damage resources when
we are setting Pixmap to None.
clutter_x11_texture_pixmap_set_window(): When setting a new window
or setting the window to None, immediately call
clutter_x11_texture_pixmap_set_pixmap(). This means that set_window(None)
immediately will free any referenced resources related to the window.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2303
Comprehensively add (out) annotations to function parameters
returning int/float/double.
Not handled here: structure out returns like ClutterColor or
ClutterPerspective or GValue that should get (out caller-allocates).
Not handled here: Cogl
http://bugzilla.clutter-project.org/show_bug.cgi?id=2302
(element-type) should have a full name like Clutter.Actor rather than
a non-namespaced name like Actor. gobject-introspection has become
more strict about this with the recent scanner rewrite.
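As an illustration (hypothetical function, real annotation syntax), the
element-type now has to be fully namespaced:

    /**
     * my_container_get_children:
     *
     * Return value: (transfer container) (element-type Clutter.Actor):
     *   the children of the container
     */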
http://bugzilla.clutter-project.org/show_bug.cgi?id=2301
*** This is an API change ***
Replaced the original drag-threshold property with two separate
horizontal (x-drag-threshold) and vertical (y-drag-threshold)
thresholds.
It is sometimes necessary to have different drag thresholds for the
horizontal and vertical axes. For example, when a draggable actor is
inside a horizontal scrolling area, only vertical movement must begin
dragging. That can be achieved by setting the x-drag-threshold to
G_MAXUINT while y-drag-threshold is something usual, say, 20 pixels.
This is different than drag axis, because after the threshold
has been cleared by the pointer, the draggable actor can be dragged
along both axes (if allowed by the drag-axis property).
http://bugzilla.clutter-project.org/show_bug.cgi?id=2291
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Creating new materials for every Texture instance results in a lot of
ARBfp programs being generated/compiled. Since most textures will just
be similar we should create a template material for all of them, and
then copy it in every instance. Cogl will try to optimize the generation
of the program and, hopefully, will reuse the same program most of the
time.
With this change, a simple test shows that loading 48 textures will
result in just two programs being compiled - with and without batching
enabled.
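A minimal sketch of the template approach; the variable names are
hypothetical:

    static CoglHandle texture_template_material = COGL_INVALID_HANDLE;

    /* created once, shared by every ClutterTexture instance */
    if (texture_template_material == COGL_INVALID_HANDLE)
      texture_template_material = cogl_material_new ();

    /* each instance paints with a cheap copy of the template, letting
     * Cogl reuse the same generated program across textures */
    priv->material = cogl_material_copy (texture_template_material);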
http://bugzilla.clutter-project.org/show_bug.cgi?id=2295
When disposing a material layer of type 'texture' we should check that
the texture handle is still valid before calling cogl_handle_unref().
This avoids an assertion failure when disposing a ClutterTexture.
This patch merges in substantial work from
Emmanuele Bassi <ebassi@linux.intel.com>
* Use new introspection --include-uninstalled API since we don't want
to try to find the clutter-1.0.pc file before it's installed.
* Use --pkg-export for Clutter-1.0.gir, since we want the .gir file to
contain the associated pkg-config file.
* Drop the use of --pkg for dependencies; those come from the associated
.gir files. (Actually, --pkg is almost never needed)
* Add --quiet
http://bugzilla.clutter-project.org/show_bug.cgi?id=2292
Intel CE3100 and CE4100 have several planes (framebuffers) and a
hardware blender to blend the planes together to produce the final
image.
clutter_cex100_set_plane() lets you configure which framebuffer clutter
will use for its rendering.
• Use the public COGL_HAS_GLES[12] define instead of the HAVE_COGL_*
ones which are private and defined in config.h,
• Install clutter-egl-headers.h which is needed by clutter-egl.h,
• Remove clutter-stage.h as it's unneeded and does not work since the
single clutter.h include policy,
• Install the egl headers into their own egl directory as the x11 and
glx backends do. The include should then be <clutter/egl/clutter-egl.h>,
so document it. It does not really break anything as nobody could
have used those broken headers.
Intel CE3100 and CE4100 SoCs are designed for TVs. They have separate
framebuffers that are blended together by a piece of hardware to make
the final output. The library that allows you to initialize and
configure those planes is called GDL. An EGL GDL winsys can then be
used with those planes as NativeWindowType to select which plane to use.
This patch adds a new ClutterBackendCex100 backend that can be
selected at compile time with the new --with-flavour=cex100 option.
Some minor fixes here and there: missing include, wrongly placed #endif,
unused variable warning fixes, missing #ifdef.
Make ClutterStageEGL a subclass of either ClutterStageX11 or GObject
depending if you compile with X11 support (EGLX) or not (native).
*** This is an API change ***
The create_target() virtual function should return a CoglHandle to a
texture; clutter_offscreen_effect_get_target(), instead, returns a
CoglMaterial to be painted in the implementation of the paint_target()
virtual function.
Instead of equating textures with materials, and confusing the user of
the API, we should mark the difference more prominently.
First of all, we should return a CoglMaterial* (now that we have that
as a public type) in get_target(); having handles all over the place
does not make it easier to distinguish the semantics of the virtual
functions.
Then we should rename create_target() to create_texture(), to make it
clear that what should be returned is a texture that is used as the
backing for the offscreen framebuffer.
Commit eae4561929 tried to clean up how it checks for the private actor
flags. However the check for the 'IN_DESTRUCTION' flag in the Win32
backend got inverted so it would always clear the current
context. This was causing _cogl_check_driver_valid to fail later and
then the realize would get stuck in an infinite loop.
When we free a state because there are no more keys with it as a target, use a
goto to re-initialize temporary variables that have become invalid.
Fixing bug #2273
In 965907deb3 the picking was changed to render the full stage
instead of a single pixel whenever picking is performed more than once
between paints. However the condition in the if-statement was
backwards so it would end up always doing a full stage render.
The glx and egl(x) backends export some internal symbols. Hide these
symbols (using '_' prefix) to reduce ABI differentiation between the
glx and eglx flavours.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2267
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
It can be useful to be able to forcibly break the grab set up by the
ClickAction. The newly added release() method provides a mechanism to
release the grab and unset the :held state of the ClickAction.
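A minimal sketch, assuming the method takes just the action as described
above:
#include <clutter/clutter.h>

static void
cancel_pending_click (ClutterClickAction *click)
{
  /* Break the grab set up by the ClickAction and unset its :held state */
  clutter_click_action_release (click);
}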
This clarifies the documentation for clutter_actor_queue_redraw to
explain that custom actors should call this whenever some private state
changes that affects painting *or* picking.
The expectation is that actors should call clutter_actor_queue_redraw
whenever some private state changes that affects painting *or* picking.
ClutterTexture was not doing this for pick_with_alpha property changes.
The idea is that if we see multiple picks per frame then that implies
the visible scene has become static. In this case we can promote the
next pick render to be unclipped so we have valid pick values for the
entire stage. Now we can continue to read from this cached buffer until
the stage contents do visibly change.
Thanks to Luca Bruno on #clutter for this idea!
Weak materials are ones that don't take a reference on their parent and
they are associated with a callback that notifies when the material is
destroyed, because its parent was freed or modified.
More details can be found at:
http://wiki.clutter-project.org/wiki/CoglDesign/CoglMaterial
For now the concept is internal only but the plan is to make this public
at some point once we have tested the design internally.
Following the commits:
c03544da - clutter-shader: use cogl_program_set_uniform_xyz API
a26119b5 - tests: Remove use of cogl_program_use
Remove the users of cogl_program_uniform_* and cogl_program_use() in the
shader-based effects.
In the case where there is no error log for arbfp we were returning a
"" string literal. The other paths were using g_strdup to return a
string that could be freed with g_free. This makes the arbfp path return
g_strdup ("") instead.
There are quite a few if {} else {} blocks for dealing with arbfp versus
glsl, and the first block is guarded with #ifdef HAVE_COGL_GL. In this
case though the #endif was before the else so it wouldn't compile for
gles.
We need to include cogl-shader-private.h to have the
COGL_SHADER_TYPE_GLSL define. When building for opengl this wasn't
noticed probably because some other header indirectly includes this
file. It was a problem when building for gles2 though.
Instead of using the deprecated cogl_program_uniform_xyz functions we
now use the cogl_program_set_uniform methods. It looks like this should
also fix a problem with clutter-shader, in that previously we weren't
calling cogl_program_use before cogl_program_uniform_xyz, so setting
uniforms would only work while the shader is enabled.
Instead of exposing an API that provides an OpenGL state machine style
where you first have to bind the program to the context using
cogl_program_use() followed by updating uniforms using
cogl_program_uniform_xyz we now have uniform setter methods that take an
explicit CoglHandle for the program.
This deprecates cogl_program_use and all the cogl_program_uniform
variants and provides the following replacements:
cogl_program_set_uniform_1i
cogl_program_set_uniform_1f
cogl_program_set_uniform_int
cogl_program_set_uniform_float
cogl_program_set_uniform_matrix
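A hedged sketch of the change in use (uniform locations looked up with
cogl_program_get_uniform_location; the function and uniform names here
are illustrative):
static void
set_alpha_uniform (CoglHandle program, float alpha)
{
  int location = cogl_program_get_uniform_location (program, "alpha");

  /* Previously this required wrapping the call in cogl_program_use():
   *
   *   cogl_program_use (program);
   *   cogl_program_uniform_1f (location, alpha);
   *   cogl_program_use (COGL_INVALID_HANDLE);
   */
  cogl_program_set_uniform_1f (program, location, alpha);
}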
--quiet has been added to g-ir-scanner in the 0.9.1 cycle. We really
want to be able to compile clutter with 0.6.14 to be able to reuse
gir files that are distributed in current distributions.
Use the INTROSPECTION_SCANNER_ARGS (previously unused) variable to
convey --quiet when necessary.
Fixes: http://bugzilla.clutter-project.org/show_bug.cgi?id=2265
CoglAtlas chooses a fairly large default initial size of either
512x512 or 1024x1024 depending on the texture format. There is a
chance that this size will not be supported on some platforms which
would be catastrophic for the glyph cache because it would mean that
it would always fail to put any glyphs in the cache so text wouldn't
work. To fix this the atlas code now checks whether the chosen initial
size is supported by the texture driver and if not it will get halved
until it is supported.
Previously when creating a new rectangle map it would try increasingly
larger texture sizes until GL_MAX_TEXTURE_SIZE is reached. This is bad
because it queries state which should really be owned by the texture
driver. Also GL_MAX_TEXTURE_SIZE is often a conservative estimate so
larger texture sizes can be used if the proxy texture is queried
instead.
Previously each node in the rectangle map tree would store the total
remaining space in all of its children to use as an optimization when
adding nodes. With this it could skip an entire branch of the tree if
it knew there could never be enough space for the new node in the
branch. This modifies that slightly to instead store the largest
single gap. This allows it to skip a branch earlier because often
there would be a lot of small gaps which would add up to enough space
for the new rectangle, but the space can't be used unless it is
in a single node.
The rectangle map still needs to keep track of the total remaining
space for the whole map for the debugging output so this has been
added back in to the CoglRectangleMap struct. There is a separate
debugging function to verify this value.
Previously when the atlas needs to be migrated it would start by
trying with the same size as the existing atlas if there is enough
space for the new texture. However even if the atlas is completely
sorted there will always be some amount of waste so when the atlas
needs to grow it would usually end up redundantly trying the same size
when it is very unlikely to fit. This patch changes it so that there
must be at least 6% waste available after the new texture is added
otherwise it will start with the next atlas size.
When iterating over the rectangle map a stack is used to implement a
recursive algorithm. Previously this was slice allocating a linked
list. Now it uses a GArray which is retained with the rectangle map to
avoid frequent allocations which is a little bit faster.
Previously the remaining space was managed as part of the
CoglRectangleMap struct. Now it is stored per node so that at any
point in the hierarchy we can quickly determine how much space is
remaining in all of the node's children. That way when adding a
rectangle we can miss out entire branches more quickly if we know that
there is no way the new rectangle would fit in that branch.
This also adds a function to recursively verify the cached state in
the nodes such as the remaining space and the number of
rectangles. This function is only called when the dump-atlas-image
debug flag is set because it is potentially quite slow.
The glyph cache is now stored in a CoglAtlas structure instead of the
custom atlasing code. This has the advantage that it can share code
with the main texture atlas and that it supports reorganizing the
atlas when it becomes full. Unlike the texture atlas, the glyph cache
can use multiple atlases which would be necessary if the maximum
texture size is reached and we need to create a second
texture. Whenever a display list is created it now has to register a
callback with the glyph cache so that the display list can be
recreated whenever any of the atlases are reorganized. This is needed
because the display list directly stores texture coordinates within
the atlas texture and they would become invalid when the texture is
moved.
The ensure_glyphs_for_layout function now works in two steps. First it reserves
space in the atlas for all of the glyphs. The atlas is created with
the DISABLE_MIGRATION flag so that it won't actually copy any textures
if any rearranging is needed. Whenever the position is updated for a
glyph then it is marked as dirty. After space for all of the glyphs
has been reserved it will iterate over all dirty glyphs and redraw
them using Cairo. The rendered glyph is then stored in the texture
with a sub texture update.
The glyphs need to all be set at the right location before starting to
create the display list because the display list stores the texture
coordinates of the glyph. If any of the glyphs were moved around then
the parts of the display list that were already created would become
invalid. To make this work, ensure_glyphs_for_layout is now always
called before rendering a layout or a layout line.
_cogl_atlas_new now has two extra parameters to specify the format of
the textures it creates as well as a set of flags to modify the
behaviour of the atlas. One of the flags causes the new textures to be
cleared and the other causes migration to avoid actually copying the
textures. This is needed to use CoglAtlas from the pango glyph cache
because it needs to use COGL_PIXEL_A_8 and to clear the textures as it
does not fill in the gaps between glyphs. It needs to avoid copying
the textures so that it can work on GL implementations without FBO
support.
Instead of storing a pointer to the CoglRectangleMap and a handle to
the atlas texture in the context, there is now a separate data
structure called a CoglAtlas to manage these two. The context just
contains a pointer to this. The code to reorganise the atlas has been
moved from cogl-atlas-texture.c to cogl-atlas.c
This adds an internal CoglCallbackList type which is just a GSList of
function pointers along with a data pointer to form a
closure. There are functions to add and remove items and to invoke the
list of functions. This could be used in a number of places in Cogl.
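A hedged sketch of that shape (the real internal names and prototypes
may differ):
#include <glib.h>

typedef void (* CoglCallbackListFunc) (void *user_data);

typedef struct
{
  CoglCallbackListFunc func;
  void *user_data;            /* forms a closure together with func */
} CoglCallbackListClosure;

typedef struct
{
  GSList *funcs;              /* list of CoglCallbackListClosure */
} CoglCallbackList;

static void
_cogl_callback_list_add (CoglCallbackList *list,
                         CoglCallbackListFunc func,
                         void *user_data)
{
  CoglCallbackListClosure *closure = g_slice_new (CoglCallbackListClosure);

  closure->func = func;
  closure->user_data = user_data;
  list->funcs = g_slist_prepend (list->funcs, closure);
}

static void
_cogl_callback_list_invoke (CoglCallbackList *list)
{
  GSList *l;

  for (l = list->funcs; l != NULL; l = l->next)
    {
      CoglCallbackListClosure *closure = l->data;

      closure->func (closure->user_data);
    }
}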
This simply renames CoglAtlas to CoglRectangleMap without making any
functional changes. The old 'CoglAtlas' is just a data structure for
managing unused areas of a rectangle and it doesn't necessarily have
to be used for an atlas so it wasn't a very good name.
Textures within a layer were compared for equality by comparing their
texture handle. However this means that sub textures and atlas
textures which may be internally using the same GL handle would not be
batched together. Instead it now tries to determine the underlying GL
handle using either the slice override or _cogl_texture_get_gl_texture
and then compares those.
When filtering on allowed formats for atlas textures, it now masks out
the BGR and AFIRST bits in addition to the premult bit. That way it
will accept RGB and RGBA formats in any component order.
In theory it could also accept luminance and alpha-only textures but I
haven't added this because presumably if the application has requested
these formats then it has some reason not to use a full RGB or RGBA
texture and we should respect that.
See commits:
7daeb217 blur-effect: Do not inherit from ShaderEffect
1ec57743 desaturate-effect: Do not inherit from ShaderEffect
We might avoid using shaders at all in the future for simple effects.
Since BlurEffect and DesaturateEffect are using the shader API
implicitly and not using ClutterShaderEffect, we need to check if the
underlying GL implementation supports the GLSL shading language and warn
if not.
Hide the fact that we're using a fragment shader, in case we're able in
the future to use a material layer combine function when painting the
offscreen target texture.
We might want to switch the BlurEffect from a box-blur to a
super-sampling of the texture target, in order to make it cheap(er).
If we inherit from ShaderEffect, though, we're setting in stone the
fact that we are going to use a fragment shader for blurring.
Since there is no parametrization of the blur, the code necessary
to implement the effect is pretty small, and we can use the Cogl API
directly.
Instead of calling cogl_program_use() around the paint_target()
chain-up, we can use the newly added API in CoglMaterial to attach
user-defined shaders to the offscreen target material.
* wip/table-layout:
Add ClutterTableLayout, a layout showing children in rows and columns
box-layout: Use allocate_align_fill()
bin-layout: Migrate to allocate_align_fill()
actor: Add allocate_align_fill()
test-flow-layout: Use BindConstraints
A TableLayout is a layout manager that allocates its children in rows
and columns. Each child is assigned to a cell (or more if a cell span
is set).
The supported child properties are:
• x-expand and y-expand: if this cell will try to allocate the
available extra space for the table.
• x-fill and y-fill: if the child will get all the space available in
the cell.
• x-align and y-align: if the child does not fill the cell, then
where the child will be aligned inside the cell.
• row-span and col-span: number of cells the child will allocate for
itself.
Also, the TableLayout has row-spacing and col-spacing for specifying
the space in pixels between rows and between columns.
We also include a simple test of the layout manager, and the
documentation updates.
The TableLayout was implemented starting from MxTable and
ClutterBoxLayout.
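A hedged usage sketch (the child property names come from the list
above; the packing call and child-property setter are assumed to follow
the usual Clutter layout manager conventions):
#include <clutter/clutter.h>

static ClutterActor *
make_table (ClutterActor *child1, ClutterActor *child2)
{
  ClutterLayoutManager *table = clutter_table_layout_new ();
  ClutterActor *box = clutter_box_new (table);

  /* Spacing in pixels between rows and between columns */
  g_object_set (table,
                "row-spacing", 6,
                "col-spacing", 6,
                NULL);

  /* Assign each child to a cell: (column, row) */
  clutter_table_layout_pack (CLUTTER_TABLE_LAYOUT (table), child1, 0, 0);
  clutter_table_layout_pack (CLUTTER_TABLE_LAYOUT (table), child2, 1, 0);

  /* Per-child layout properties: let child2 take the extra space and
   * fill its cell horizontally */
  clutter_layout_manager_child_set (table, CLUTTER_CONTAINER (box), child2,
                                    "x-expand", TRUE,
                                    "x-fill", TRUE,
                                    NULL);

  return box;
}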
http://bugzilla.clutter-project.org/show_bug.cgi?id=2038
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Layout managers are using the same code to allocate a child while taking
into consideration:
• horizontal and vertical alignment
• horizontal and vertical fill
• the preferred minimum and natural size, depending
on the :request-mode property
• the text direction for the horizontal alignment
• an offset given by the fixed position properties
Given the amount of code involved, and the amount of details that can go
horribly wrong while copy and pasting such code in various classes - let
alone various projects - Clutter should provide an allocate() variant
that does the right thing in the right way. This way, we have a single
point of failure.
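A hedged sketch of how a layout manager might call the new helper,
assuming a signature along these lines (allocation box, alignment
factors, fill flags, allocation flags):
#include <clutter/clutter.h>

static void
allocate_child (ClutterActor           *child,
                const ClutterActorBox  *cell,
                ClutterAllocationFlags  flags)
{
  /* Centre the child horizontally, align it to the top, and let it
   * fill the cell vertically; preferred size, request mode and text
   * direction are handled inside the helper */
  clutter_actor_allocate_align_fill (child, cell,
                                     0.5,   /* x align */
                                     0.0,   /* y align */
                                     FALSE, /* x fill  */
                                     TRUE,  /* y fill  */
                                     flags);
}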
This adds a wrapper macro to clutter-private that will use
g_object_notify_by_pspec if it's compiled against a version of GLib
that is sufficiently new. Otherwise it will notify by the property
name as before by extracting the name from the pspec. The objects can
then store a static array of GParamSpecs and notify using those as
suggested in the documentation for g_object_notify_by_pspec.
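A hedged sketch of such a wrapper (the macro name here is made up; the
real one lives in clutter-private.h):
#include <glib-object.h>

#if GLIB_CHECK_VERSION (2, 26, 0)
/* Notify using the GParamSpec directly, skipping the name lookup */
#define CLUTTER_OBJECT_NOTIFY_PSPEC(obj, pspec) \
  g_object_notify_by_pspec ((GObject *) (obj), (pspec))
#else
/* Older GLib: fall back to notifying by name, taken from the pspec */
#define CLUTTER_OBJECT_NOTIFY_PSPEC(obj, pspec) \
  g_object_notify ((GObject *) (obj), (pspec)->name)
#endif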
Note that the name of the variable used for storing the array of
GParamSpecs is obj_props instead of properties as used in the
documentation because some places in Clutter uses 'properties' as the
name of a local variable.
Most of the classes in Clutter have been converted using the script in
the bug report. Some classes have not been modified even though the
script picked them up as described here:
json-generator:
We probably don't want to modify the internal copy of JSON
behaviour-depth:
rectangle:
score:
stage-manager:
These aren't using the separate GParamSpec* variable style.
blur-effect:
win32/device-manager:
Don't actually define any properties even though it has the enum.
box-layout:
flow-layout:
Have some per-child properties that don't work automatically with
the script.
clutter-model:
The script gets confused with ClutterModelIter
stage:
Script gets confused because PROP_USER_RESIZE doesn't match
"user-resizable"
test-layout:
Don't really want to modify the tests
http://bugzilla.clutter-project.org/show_bug.cgi?id=2150
The special handling for texture unit 1 caught the case where unit
1 was changed for transient purposes, but didn't properly handle
the case where the actual non-transient texture was different between
two materials with no transient binding in between.
If the actual texture has changed when flushing, mark unit 1 as dirty
and needing a rebind.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2261
This makes CoglProgram/Shader automatically detect when the user has
given an ARBfp program by checking for "!!ARBfp1.0" at the beginning of
the user's source.
ARBfp local parameters can be set with cogl_program_uniform_float
assuming you pass a @size of 4 (all ARBfp program.local parameters
are vectors of 4 floats).
This doesn't expose ARBfp environment parameters or double precision
local parameters.
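A hedged sketch of the detection in use (the "!!ARBfp1.0" prefix and the
size-of-4 convention come from this change; the surrounding calls are
the existing cogl_shader/cogl_program API):
static CoglHandle
make_arbfp_program (void)
{
  static const char arbfp_source[] =
    "!!ARBfp1.0\n"
    "MOV result.color, fragment.color;\n"
    "END\n";
  CoglHandle shader = cogl_create_shader (COGL_SHADER_TYPE_FRAGMENT);
  CoglHandle program = cogl_create_program ();
  float local_param[4] = { 1.0f, 0.0f, 0.0f, 1.0f };

  /* The "!!ARBfp1.0" prefix makes Cogl treat the source as ARBfp */
  cogl_shader_source (shader, arbfp_source);
  cogl_shader_compile (shader);
  cogl_program_attach_shader (program, shader);
  cogl_program_link (program);

  /* ARBfp program.local parameters are vec4s, hence a @size of 4 */
  cogl_program_use (program);
  cogl_program_uniform_float (0, 4, 1, local_param);
  cogl_program_use (COGL_INVALID_HANDLE);

  return program;
}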
Previously we had an internal only _cogl_material_set_user_program to
redirect legacy usage of cogl_program_use() through CoglMaterial. This
instead makes the API public because until we implement our planned
"snippet" framework we need a stop-gap solution for using shaders in
Cogl.
The plan is to also support ARBfp with the cogl_program/shader API so
this API will also allow clutter-gst to stop using direct OpenGL calls
that conflict with Cogl's state tracking.
A change to a layer is also going to be a change to its owning material
so we have to chain up in _cogl_material_layer_pre_change_notify and
call _cogl_material_pre_change_notify. Previously we were only
considering if the owning material was referenced in the journal but
that ignores that it might also have dependants. We no longer need to
flush the journal directly in layer_pre_change_notify.
In _cogl_material_layer_pre_change_notify when we see that a layer has
dependants and it can't be modified directly then we allocate a new
layer. In this case we also have to link the new layer to its required
owner. If the immutable layer we copied had the same owner, though, we
weren't unlinking that old layer.
In _cogl_material_pre_change_notify we need to identify if it's a sparse
property being changed and if so initialize the state group if the given
material isn't currently the authority for it.
Previously we were unconditionally calling
_cogl_material_initialize_state which would e.g. NULL the layer
differences list of a material each time a layer change was notified.
It would also call _cogl_material_initialize_state for non-sparse
properties which should always be valid at this point so the function
has been renamed to _cogl_material_initialize_sparse_state to make this
clearer with a corresponding g_return_if_fail check.
This fixes how we copy layer differences in
_cogl_material_copy_layer_differences.
We were making a redundant g_list_copy of the src differences and then
iterating the src list calling _cogl_material_add_layer_difference for
each entry which would double the list length, but the initial copy
directly referenced the original layers which wasn't correct.
Also, we were initializing dest->n_layers before copying the layer
differences, but the act of copying the differences re-initializes
n_layers to 0 when adding the first layer_difference: it triggers a
layer_pre_change_notify, and since the dest material isn't yet a
STATE_LAYERS authority the state group is initialized before allowing
the change.
In _cogl_material_texture_storage_change_notify we were potentially
dereferencing layer->texture without checking first that it is the
authority of texture state. We now use
_cogl_material_layer_get_texture() instead.
This improves the dot file output available when calling
_cogl_debug_dump_materials_dot_file. The material graph now directly
points into the layer graph and the layers now show the texture unit
index.
DRM is available on more platforms than Linux (e.g. kFreeBSD), but
Clutter currently FTBFS there because there is no alternative to the
__linux__ code (where it should be HAVE_DRM).
Instead of copying the DRM data structures, we should use libdrm when
falling back to directly requesting to wait for the vblank.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2225
Based on a patch by: Emilio Pozuelo Monfort <pochu27@gmail.com>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When setting :use-markup we always pass the contents of the Text actor
to clutter_text_set_markup_internal(); if the string contains any markup,
this ends up being parsed and used - even when :use-markup is set to
FALSE.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2239
When the texture is set on a layer so that it is back to the parent's
texture it would clear the texture change flag but it wouldn't unref
the texture. The free function for a material layer does not unref the
texture if the change flag is cleared so the texture would end up
leaking. This happens for ClutterTexture because it disposes the
texture by setting layer 0 of the material to COGL_INVALID_HANDLE
which ends up the same as the default material.
In _cogl_material_layer_pre_paint we were mistakenly dereferencing the
layer->texture member for the passed layer instead of dereferencing the
texture state authority which was causing crashes in some cases.
LayoutMeta instances are created lazily. If an actor is added to a
container with a layout manager then the first time the layout manager
might be creating the LayoutMeta instance could be during the allocation
cycle caused by calling clutter_actor_show(). When a LayoutMeta is
instantiated for the first time, a list of properties can be set - and
this might lead to the emission of the ::layout-changed signal. The
signal is, most typically, going to cause a relayout to be queued, and a
warning will be printed on the terminal.
We should freeze the emission of the ::layout-changed signal during the
creation of the LayoutMeta instances, and thaw it after that.
If a Texture has been set to:
• keep its size synchronized with the image data
• maintain the aspect ratio of the image data
then it should also change its request mode depending on the orientation
of the image data, so that layout managers have a fighting chance of
sizing it correctly.
Added initialization of the minimum window size property on the Cocoa
side. This property takes effect when the user changes the window size
by dragging with the mouse; when the size is changed through
clutter_actor_set_size() this property does not help, so another check
was added in clutter_stage_osx_resize. Also, osx_get_geometry was
refactored because it returned incorrect values in many cases; the size
is now saved in [Window reshape] in requisition_width/height and that
value is always returned to the frontend.
This is an important initialization step because all the feature calls
from the font rendering libraries are based on this parameter. By
default it equals -1, and the test-text-cache test crashes in this case.
The trick of hiding the view while showing the stage affects the
responder chain: the main view ceases to be the first responder, so we
have to set the first responder manually.
The problem was an incorrect application initialization:
[NSApplication sharedApplication] should be called in the backend init
(not in the stage init). It doesn't require any data and only makes a
connection to the window server.
Clean up the clutter_backend_osx_post_parse function and move context
initialization to clutter_backend_osx_create_context. The OpenGL pixel
format attributes were taken as is. Also move bringing the application
to the foreground into clutter_stage_osx_realize; it seems to be the
best place for it.
The viewport wasn't initialized before OpenGL drawing, causing a crash
on an assert, so viewport initialization was added to
clutter_stage_osx_realize. Also, showing the stage triggers the drawing
function, but other parts of the system (in particular the conformance
tests) don't expect it and aren't ready at this point.
Mention the XFixes extension for compositors using input regions to let
events "pass through" the stage.
Thanks to: Robert Bragg <robert@linux.intel.com>
When we disable the event retrieval, we now just disable the X11 event
source, not the event selection. We need to make that clear to
applications, especially compositors, which might expect complete
control over the selection.
Currently, we select input events and GLX events conditionally,
depending on whether the user has disabled event retrieval.
We should, instead, unconditionally select input events even with event
retrieval disabled because we need to guarantee that the Clutter
internal state is maintained when calling clutter_x11_handle_event()
without requiring applications or embedding toolkits to select events
themselves. If we did that, we'd have to document the events to be
selected, and also update applications and embedding toolkits each time
we added a new mask, or a new class of events - something that's clearly
not possible.
See:
http://bugzilla.clutter-project.org/show_bug.cgi?id=998
for the rationale of why we did conditional selection. It is now clear
that a compositor should clear out the input region, since it cannot
assume a perfectly clean slate coming from us.
See:
http://bugzilla.clutter-project.org/show_bug.cgi?id=2228
for an example of things that break if we do conditional event
selection on GLX events. In that specific case, the X11 server ≤ 1.8
always pushed GLX events on the queue, even without selecting them; this
has been fixed in the X11 server ≥ 1.9, which means that applications
like Mutter or toolkit integration libraries like Clutter-GTK would stop
working on recent Intel drivers providing the GLX_INTEL_swap_event
extension.
This change has been tested with Mutter and Clutter-GTK.
This makes the gles2 cogl_program_use consistent with the GL version by
not binding the program immediately and instead leaving it to
cogl-material.c to bind the program when actually drawing something.
Previously custom uniforms were tracked in _CoglGles2Wrapper but as part
of a process to consolidate the gl/gles2 shader code it seems to make
sense for this state to be tracked in the CoglProgram object instead.
http://bugzilla.o-hand.com/show_bug.cgi?id=2179
Instead of having to query GL and translate the GL enum into a
CoglShaderType each time cogl_shader_get_type is called we now keep
track of the type in CoglShader.
The Animatable interface was created specifically for the Animation
class. It turns out that it might be fairly useful to others - such as
ClutterAnimator and ClutterState.
The newly-added API in this cycle for querying and accessing custom
properties should not require that we pass a ClutterAnimation to the
implementations: the Animatable itself should be enough.
This is necessary to allow language bindings to wrap
clutter_actor_animate() correctly and do type validation and
demarshalling between native values and GValues; an Animation instance
is not available until the animate() call returns, and validation must
be performed before that happens.
There is nothing we can do about the animate_property() virtual
function - but in that case we might want to be able to access the
animation from an Animatable implementation to get the Interval for
the property, just like ClutterActor does in order to animate
ClutterActorMeta objects.
XGetGeometry is a great piece of API, since it gets a lot of stuff that
is moderately *not* geometry related - the root window, and the depth
being two.
Since we have multiple conditions depending on the result of that call
we should split them up depending on the actual error - and each of them
should have a separate error message. This makes debugging simpler.
It's possible - though not recommended - that user code causes the
destruction of an actor in one of the notification handlers for
flag-based properties. We should protect the multiple notification
emission with g_object_ref/unref.
Nothing was storing the shader type when a shader was created so it
would get confused about whether it was a custom vertex or fragment
shader.
Also the 'type' member of CoglShader was a GLenum but the only place
that read it was treating it as if it was CoglShaderType. This changes
it to be CoglShaderType.
When loading an RGB image GdkPixbuf will pad the rowstride so that the
beginning of each row is aligned to 4 bytes. This was causing us to
fallback to the code that copies the buffer. It is probably safe to
avoid copying the buffer if we can detect that the rowstride is simply
an alignment of the packed rowstride.
This also changes the copying fallback code so that it uses the
aligned rowstride. However it is now extremely unlikely that the
fallback code would ever be used.
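A hedged sketch of the check described above (GdkPixbuf aligns each row
start to 4 bytes; the function name is illustrative):
static gboolean
rowstride_is_just_alignment (int width, int bpp, int rowstride)
{
  int packed = width * bpp;
  int aligned = (packed + 3) & ~3;   /* packed rowstride rounded up to 4 */

  return rowstride == aligned;
}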
In commit b780413e5a the GdkPixbuf loading code was changed so that
if it needs to copy the pixbuf then it would tightly pack it. However
it was still using the rowstride from the pixbuf so the image would
end up skewed. This fixes it to use the real rowstride.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2235
In OpenGL the 'shininess' lighting parameter is floating point value
limited to the range 0.0→128.0. This number is used to affect the size
of the specular highlight. Cogl materials used to only accept a number
between 0.0 and 1.0 which then gets multiplied by 128.0 before being
sent to GL. I think the assumption was that this is just a weird GL quirk
so we don't expose it. However the value is used as an exponent to
raise the attenuation to a power so there is no conceptual limit to
the value.
This removes the mapping and changes some of the documentation.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2222
When flushing a fixed-function or arbfp material it would always call
disable_glsl to try to get rid of the previous GLSL shader. This is
needed even if current_use_program_type is not GLSL because if an
application calls cogl_program_uniform then Cogl will have to bind the
program to set the uniform. If this happens then it won't update
current_use_program_type presumably because the enabled state of arbfp
is still valid.
The problem was that disable_glsl would only select program zero when
the current_use_program_type is set to GLSL which wouldn't be the case
if cogl_program_uniform was called. This patch changes it to just
directly call _cogl_gl_use_program_wrapper(0) instead of having a
separate disable_glsl function. The current program is cached in the
cogl context anyway so it shouldn't cause any extra unnecessary GL
calls.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2232
g_ascii_dtostr was being used in four separate arguments to
g_string_append_printf but all invocations of it were using the same
buffer. This would end up with all of the arguments having the same
value which would depend on whichever order the compiler evaluates
them in. This patch changes it to use a multi-dimensional array and
a loop to fill in the separate buffers.
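A hedged sketch of the fixed pattern (the function and variable names
here are illustrative):
#include <glib.h>

static void
append_vec4 (GString *source, double r, double g, double b, double a)
{
  char bufs[4][G_ASCII_DTOSTR_BUF_SIZE];
  const double values[4] = { r, g, b, a };
  int i;

  /* One buffer per value, filled before the printf-style call, so the
   * unspecified evaluation order of the arguments can no longer make
   * every conversion overwrite the same buffer */
  for (i = 0; i < 4; i++)
    g_ascii_dtostr (bufs[i], G_ASCII_DTOSTR_BUF_SIZE, values[i]);

  g_string_append_printf (source, "vec4 (%s, %s, %s, %s)",
                          bufs[0], bufs[1], bufs[2], bufs[3]);
}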
http://bugzilla.clutter-project.org/show_bug.cgi?id=2219
The ARBfp programs are created with a printf() wrapper, which usually
fails in non-en locales as soon as you start throwing things like
floating point values in the mix.
We should use the g_ascii_dtostr() function which places a double into a
string buffer in a locale-independent way.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2219
This function creates a CoglBitmap which internally references a
CoglBuffer. The map and unmap functions will divert to mapping the
buffer. There are also now bind and unbind functions which should be
used instead of map and unmap whenever the data doesn't need to be
read from the CPU but will instead be passed to GL for packing or
unpacking. For bitmaps created from buffers this just binds the
bitmap.
cogl_texture_new_from_buffer now just uses this function to wrap the
buffer in a bitmap rather than trying to bind the buffer
immediately. This means that the buffer will be bound only at the
point right before the texture data is uploaded.
This approach means that using a pixel array will take the fastest
upload route if possible, but can still fallback to copying the data
by mapping the buffer if some conversion is needed. Previously it
would just crash in this case because the texture functions were all
passed a NULL pointer.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2112
The docs for GdkPixbuf say that the last row of the image won't
necessarily be allocated to the size of the full rowstride. The rest
of Cogl and possibly GL assumes that we can copy the bitmap with
memcpy(height*rowstride) so we previously would copy the pixbuf data
to ensure this. However if the rowstride is the same as bpp*width then
there is no way for the last row to be under-allocated so in this case
we can just directly upload from the gdk pixbuf. Now that CoglBitmap
can be created with a destroy function we can make it keep a reference
to the pixbuf and unref it during its destroy callback. GdkPixbuf
seems to always pack the image with no padding between rows even if it
is RGB so this should end up always avoiding the memcpy.
The fallback code for when we do have to copy the pixbuf is now
simplified so that it copies all of the rows in a single loop. We only
copy the useful region of each row so this should be safe. The
rowstride of the CoglBitmap is now always allocated to bpp*width
regardless of the rowstride of the pixbuf.
The CoglBitmap struct is now only defined within cogl-bitmap.c so that
all of its members can now only be accessed with accessor
functions. To get to the data pointer for the bitmap image you must
first call _cogl_bitmap_map and later call _cogl_bitmap_unmap. The map
function takes the same arguments as cogl_pixel_array_map so that
eventually we can make a bitmap optionally internally divert to a
pixel array.
There is a _cogl_bitmap_new_from_data function which constructs a new
bitmap object and takes ownership of the data pointer. The function
gets passed a destroy callback which gets called when the bitmap is
freed. This is similar to how gdk_pixbuf_new_from_data
works. Alternatively NULL can be passed for the destroy function which
means that the caller will manage the life of the pointer (but must
guarantee that it stays alive at least until the bitmap is
freed). This mechanism is used instead of the old approach of creating
a CoglBitmap struct on the stack and manually filling in the
members. It could also later be used to create a CoglBitmap that owns
a GdkPixbuf ref so that we don't necessarily have to copy the
GdkPixbuf data when converting to a bitmap.
There is also _cogl_bitmap_new_shared. This creates a bitmap using a
reference to another CoglBitmap for the data. This is a bit of a hack
but it is needed by the atlas texture backend which wants to divert
the set_region virtual to another texture but it needs to override the
format of the bitmap to ignore the premult flag.
The 'format' member of CoglTexture2DSliced is returned by
cogl_texture_get_format. All of the other backends return the internal
format of the GL texture in this case. However the sliced backend was
returning the format of the image data used to create the texture. It
doesn't make any sense to retain this information because it doesn't
necessarily indicate the format of the actual texture. This patch
changes it to store the internal format instead.
The P_() macro adds a context for the property nick and blurb. In order
to make xgettext recognize it, we need to drop glib-gettextize inside the
autogen.sh script and ship a modified Makefile.in.in with Clutter.
Moves the preprocessor #ifdef __linux__ above the else statement, avoiding the
lack of an else block if __linux__ is not defined.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2212
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The introspection scanner does not include '.' by default, so it was
always using the installed copy of Clutter-1.0.gir. Which obviously
wouldn't work if we didn't have one.
In ddb9016be4 the GL texture driver backend was changed to include
cogl-material-opengl-private.h instead of cogl-material-private.h.
However the gles texture backend was missed from this so it was giving
a compiler warning about using an undeclared function.
glTexSubImage3D was being called directly in cogl-texture-3d.c but the
function is only available since GL version 1.2 so on Windows it won't
be possible to directly link to it. Also under GLES it is only
available conditionally in an extension.
In ddb9016be4 the texture backends were changed to include
cogl-material-opengl-private.h instead of cogl-material-private.h.
However the 3D texture backend was missed from this so it was giving a
compiler warning about using an undeclared function.
This moves the code supporting _cogl_material_flush_gl_state into
cogl-material-opengl.c as part of an effort to reduce the size of
cogl-material.c to keep it manageable.
In general cogl-material.c has become far too large to manage in one
source file. As one of the ways to try and break it down, this patch
starts to move some of the lower level texture unit state management out
into cogl-material-opengl.c. The naming is such because the plan is to
follow up and migrate the very GL specific state flushing code into the
same file.
When the support for redirecting the legacy fog state through cogl
material was added in 9b9e764dc, the code to handle copying the fog
state in _cogl_material_copy_differences was missed.
The CoglTexture2DSliced backend has a fallback for when the
framebuffer extension is missing so it's not possible to use
glGenerateMipmap. This involves keeping a copy of the upper-left pixel
of the tex image so that we can temporarily enable GL_GENERATE_MIPMAP
on the texture object and do a sub texture update by reuploading the
contents of the first pixel. This patch copies that mechanism to the
2D and 3D backends. The CoglTexturePixel structure which was
previously internal to the sliced backend has been moved to
cogl-texture-private.h so that it can be shared.
* wip/xkb-support:
x11: Use XKB to translate keycodes into key symbols
x11: Use XKB to track the Locks state
x11: Use XKB detectable auto-repeat
x11: Add a Keymap ancillary object
x11: Store the group inside the event platform data
events: Add platform-data to allocated Events
build: Check for the XKB extension
Some apps or some use cases don't need to clear the stage on immediate
rendering GPUs. A media player playing a fullscreen video or a
tile-based game, for instance.
These apps are redrawing the whole screen, so we can avoid clearing the
color buffer when preparing to paint the stage, since there is no
blending with the stage color being performed.
We can add an private set of hints to ClutterStage, and expose accessors
for each potential hint; the first hint is the 'no-clear' one.
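A minimal sketch, assuming the accessor is a boolean setter on
ClutterStage:
#include <clutter/clutter.h>

static void
disable_stage_clear (ClutterStage *stage)
{
  /* A fullscreen video player or tile-based game repaints the whole
   * stage anyway, so skip clearing the color buffer before painting */
  clutter_stage_set_no_clear_hint (stage, TRUE);
}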
http://bugzilla.clutter-project.org/show_bug.cgi?id=2058
Using 'r' to name the third component is problematic because that is
commonly used to represent the red component of a vector representing
a color. Under GLSL this is awkward because the texture swizzling for
a vector uses a single letter for each component and the names for
colors, textures and positions are synonymous. GLSL works around this
by naming the components of the texture s, t, p and q. Cogl already
effectively exposes this naming because it exposes GLSL, so it
makes sense to use that naming consistently. Another alternative could
be u, v and w. This is what Blender and Direct3D use. However the w
component conflicts with the w component of a position vertex.
This adds a publicly exposed experimental API for a 3D texture
backend. There is a feature flag which can be checked for whether 3D
textures are supported. Although we require OpenGL 1.2 which has 3D
textures in core, GLES only provides them through an extension so the
feature can be used to detect that.
The textures can be created with one of two new API functions :-
cogl_texture_3d_new_with_size
and
cogl_texture_3d_new_from_data
There is also internally a new_from_bitmap function. new_from_data is
implemented in terms of this function.
The two constructors are effectively the only way to upload data to a
3D texture. It does not work to call glTexImage2D with the
GL_TEXTURE_3D target so the virtual for cogl_texture_set_region does
nothing. It would be possible to make cogl_texture_get_data do
something sensible like returning all of the images as a single long
image but this is not currently implemented and instead the virtual
just always fails. We may want to add API specific to the 3D texture
backend to get and set a sub region of the texture.
All of those three functions can throw a GError. This will happen if
the GPU does not support 3D textures or it does not support NPOTs and
an NPOT size is requested. It will also fail if the FBO extension is
not supported and the COGL_TEXTURE_NO_AUTO_MIPMAP flag is not
given. This could be avoided by copying the code for the
GL_GENERATE_MIPMAP TexParameter fallback, but in the interests of
keeping the code simple this is not yet done.
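A hedged sketch, assuming a prototype along the lines of the existing 2D
constructors; the feature flag name is also an assumption:
static CoglHandle
make_volume_texture (void)
{
  GError *error = NULL;
  CoglHandle tex;

  /* Feature flag name assumed; the text above only says such a flag
   * exists */
  if (!cogl_features_available (COGL_FEATURE_TEXTURE_3D))
    return COGL_INVALID_HANDLE;

  tex = cogl_texture_3d_new_with_size (64, 64, 64,
                                       COGL_TEXTURE_NO_AUTO_MIPMAP,
                                       COGL_PIXEL_FORMAT_RGBA_8888,
                                       &error);
  if (tex == COGL_INVALID_HANDLE)
    {
      g_warning ("3D texture creation failed: %s", error->message);
      g_error_free (error);
    }

  return tex;
}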
This adds a couple of functions to cogl-texture-driver for uploading
3D data and querying the 3D proxy
texture. prep_gl_for_pixels_upload_full now also sets the
GL_UNPACK_IMAGE_HEIGHT parameter so that 3D textures can have padding
between the images. Whenever a 3D texture is uploaded, both the height
of the images and the height of all of the data is specified (either
explicitly or implicitly from the CoglBitmap) so that the image height
can be deduced by dividing by the depth.
Under big GL, glext.h is included automatically by gl.h. However under
GLES this doesn't appear to happen so it has to be included explicitly
to get the defines for extensions. This patch changes the
clutter_gl_header to be called cogl_gl_headers and it can now take a
space-separated list of multiple headers. This is then later converted
to a list of #include lines which ends up in cogl-defines.h. The gles2
and gles1 backends now add their respective ext header to this list.
There are many places in the texture backend that need to do
conversion using the CoglBitmap code. Currently none of these
functions can throw an error but they do return a value to indicate
failure. In future it would make sense if new texture functions could
throw an error and in that case they would want to use a CoglBitmap
error if the failure was due to the conversion. This moves the
internal CoglBitmap error from the quartz backend to be public in
cogl-bitmap.h so that it can be used in this way.
We can use this error in more unsupported situations than just when we
have a Cogl feature flag for the error. For example if a non-sliced
texture is created with dimensions that are too large then we could
throw this error. Therefore it seems good to rename to something more
general.
Previously when comparing whether the settings for a layer are equal
it would only check if one of them was enabled. If so then it would
assume the other one was enabled and continue to compare the texture
environment. Now it also checks whether the enabled state differs.
If we have XKB support then we should be using it to turn on the
detectable auto-repeat; this allows avoiding the peeking trick
that emulates it inside the event handling code.
Now that we have private, per-event platform data, we can start putting
it to good use. The first, most simple use is to store the key group
given the event's modifiers. Since we assume a modern X11, we use XKB
to retrieve it, or we simply fall back to 0 by default.
The data is exposed as a ClutterX11-specific function, within the
sanctioned clutter_x11_* namespace.
Events allocated by Clutter should have a pointer to platform-specific
data; this would allow backends to add separate structures for holding
ancillary data, whilst retaining the ClutterEvent structure for use on
the stack.
In theory, for Clutter 2.x we might just want to drop Event and use an
opaque structure, or a typed data structure inheriting from
GTypeInstance instead.
This adds a COGL_OBJECT_INTERNAL_DEFINE macro and friends that are the
same as COGL_OBJECT_DEFINE except that they prefix the cogl_is_*
function with an underscore so that it doesn't get exported in the
shared library.
Previously COGL_OBJECT_DEFINE would always define deprecated
cogl_$type_{ref,unref} functions even if the type is new or if the
type is entirely internal. An application would still find it
difficult to use these because they wouldn't be in the headers, but it
still looks bad that they are exported from the shared library. This
patch changes it so that the deprecated ref counting functions are
defined using a separate macro and only the types that have these
functions in the headers call this macro.
Since 365605cf42, materials and layers are represented in a tree
structure that allows traversing up through parents and iterating down
through children. This re-works the related typedefs and reparenting
code so that they can be shared.
Up until now, the "behaviours" member of an actor definition was parsed
by the ClutterScript parser itself - even though it's not strictly
necessary.
In an effort to minimize the ad hoc code in the Script parser, we should
let ClutterActor handle all the special cases that involve
actor-specific members.
Under big GL, _cogl_texture_driver_size_supported uses the proxy
texture to check whether the given texture size is supported. Proxy
textures aren't available under GLES so previously this would just
return TRUE to assume all texture sizes are supported. This patch
makes it use glGetIntegerv with GL_MAX_TEXTURE_SIZE to give a second
best guess.
This fixes the sliced texture backend so that it will use slices when
the texture is too big.
When an intermediate buffer is used for downloading texture data it
was using the wrong byte length for a row so the copy back to the
user's buffer would fail.
The fallback for when glGetTexImage is not available renders the
texture to the framebuffer to read the data using glReadPixels. This
patch just sets the COGL_MATERIAL_FILTER_NEAREST filter mode on the
material before rendering to avoid linear filtering which would alter
the texture data.
The fallback for when glGetTexImage is not available draws parts of
the texture to the framebuffer and uses glReadPixels to extract the
data. However it was using cogl_rectangle to draw and then immediately
using raw glReadPixels to fetch the data. This won't cause a journal
flush so the rectangle won't necessarily have hit the framebuffer
yet. Instead it now uses cogl_read_pixels which does flush the
journal.
There were a few problems flushing texture overrides so that sliced
textures would not work:
* In _cogl_material_set_layer_texture it ignored the 'overriden'
parameter and always set texture_overridden to FALSE.
* cogl_texture_get_gl_texture wasn't being called correctly in
override_layer_texture_cb. It returns a gboolean to indicate the
error status but this boolean was being assigned to gl_target.
* _cogl_material_layer_texture_equal did not take into account the
override.
* _cogl_material_layer_get_texture_info did not return the overridden
texture so it would always use the first texture slice.
There was a lot of common code that was copied to all of the backends
to convert the data to a suitable format and wrap it into a CoglBitmap
so that it can be passed to _cogl_texture_driver_upload_subregion_to_gl.
This patch moves the common code to cogl-texture.c so that the virtual
just takes a CoglBitmap that is already in the right format.
Previously cogl_texture_get_data would pretty much directly pass on to
the get_data texture virtual function. This ended up with a lot of
common code that was copied to all of the backends. For example, the
method is expected to return the required data size if the data
pointer is NULL and to calculate its own rowstride if the rowstride is
0. Also it needs to convert the downloaded data if GL can't support
that format directly.
This patch moves the common code to cogl-texture.c so the virtual is
always called with a format that can be downloaded directly by GL and
with a valid rowstride. If the download fails then the virtual can
return FALSE in which case cogl-texture will use the draw and read
fallback.
For point sprites you are usually drawing the whole texture so you
most often want GL_CLAMP_TO_EDGE. This patch removes the override for
COGL_MATERIAL_WRAP_MODE_AUTOMATIC when point sprites are enabled for a
layer so that it will clamp to edge.
This adds a new API call to enable point sprite coordinate generation
for a material layer:
void
cogl_material_set_layer_point_sprite_coords_enabled (CoglHandle material,
int layer_index,
gboolean enable);
There is also a corresponding get function.
Enabling point sprite coords simply sets the GL_COORD_REPLACE of the
GL_POINT_SPRITE glTexEnv when flushing the material. There is no
separate application control for glEnable(GL_POINT_SPRITE). Instead it
is left permanently enabled under the assumption that it has no effect
unless GL_COORD_REPLACE is enabled for a texture unit.
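A minimal usage sketch based on the prototype quoted above (the texture
handle and function name are illustrative):
static CoglHandle
make_point_sprite_material (CoglHandle texture)
{
  CoglHandle material = cogl_material_new ();

  cogl_material_set_layer (material, 0, texture);

  /* Generate per-point texture coordinates for layer 0 */
  cogl_material_set_layer_point_sprite_coords_enabled (material, 0, TRUE);

  return material;
}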
http://bugzilla.openedhand.com/show_bug.cgi?id=2047
Recently I added a _cogl_debug_dump_materials_dot_file function for
debugging the sparse material state. This extends the state dumped to
include the graph of layer state also.
We were mistakenly only initializing layer->layer_index for new layers
associated with texture units > 0. This had gone unnoticed because
normally layers associated with texture unit 0 have a layer index of 0
too. Mutter was hitting this issue because it was initializing layer 1
before layer 0 for one of its materials so layer 1 was temporarily
associated with texture unit 0.
* cally-merge:
cally: Add introspection generation
cally: Improving cally doc
cally: Cleaning CallyText
cally: Refactoring "window:create" and "window:destroy" emission code
cally: Use proper backend information on CallyActor
cally: Check HAVE_CONFIG_H on cally-util.c
docs: Fix Cally documentation
cally: Clean up the headers
Add binaries of the Cally examples to the ignore file
docs: Add Cally API reference
Avoid to load cally module on a11y examples
Add accessibility tests
Initialize accessibility support on clutter_init
Rename some methods and includes to avoid -Wshadow warnings
Cally initialization code
Add Cally
Toolkits and applications not written in C might still need access to
the Cally API to write accessibility extensions based on it for their
own native elements.
We might want pieces higher in the stack (like Mx) to handle XSettings
events as well, and swallowing them by removing them from the events
queue would make it impossible.
Previously "window:create" and "window:destroy" were emitted on
CallyUtil. Although it works, and CallyUtil already has callbacks for the
stage_added/removed signals, I think that it is more tidy/clear to do
that on CallyRoot:
* CallyRoot already has code to manage ClutterStage addition/removal
* In fact, we can see CallyRoot as the object exposing the a11y
information from ClutterStageManager, so it fits better here.
* The CallyUtil callbacks for these signals are related to key event
listeners (key snooper simulation). One of the main CallyUtil
responsibilities is managing events (connecting, emitting), so I
would prefer not to start adding/mixing more functionalities here.
Ideally it would be better to emit all the CallyStage signals from
CallyStage, but it is clear that "create" and "destroy" are easier
to emit from an external object.
Previously cogl_set_fog would cause a flush of the Cogl journal and
would directly bang the GL state machine to setup fogging. As part of
the ongoing effort to track most state in CoglMaterial to support
renderlists this now adds an indirection so that cogl_set_fog now just
updates ctx->legacy_fog_state. The fogging state then gets enabled as a
legacy override similar to how the old depth testing API is handled.
This is a blind patch because I don't know enough about the osx backend
and the osx backend probably doesn't even work these days anyway but
since people have filed bugs specifically on OSX that imply they don't
have a depth or stencil buffer this tries to fix that.
Maybe someone will eventually pick up the osx backend again and verify
if this helps.
http://bugzilla.clutter-project.org/show_bug.cgi?id=1394
Since we'll want to share the fallback logic with CoglVertexArray this
moves the malloc based fallback (for when OpenGL doesn't support vertex
or pixel buffer objects) into cogl-buffer.c.
Explicitly warn if we detect that a CoglBuffer is being freed while it
is still mapped. Previously we silently unmapped the buffer, but it's
not something we want to encourage.
This makes CoglBuffer track the last used bind target as a private
property. This is later used when binding a buffer to map instead of
always using the PIXEL_UNPACK target.
This also adds some additional sanity checks that code doesn't try to
nest binds to the same target or bind a buffer to multiple targets at
the same time.
This adds three new feature flags COGL_FEATURE_TEXTURE_NPOT_BASIC,
COGL_FEATURE_TEXTURE_NPOT_MIPMAP and COGL_FEATURE_TEXTURE_NPOT_REPEAT
that can tell you if your hardware supports non power of two textures,
npot textures + mipmaps and npot textures + wrap modes other than
CLAMP_TO_EDGE.
The pre-existing COGL_FEATURE_TEXTURE_NPOT feature implies all of the
above.
By default GLES 2 core supports npot textures but mipmaps and repeat
modes can only be used with power of two textures. This patch also makes
GLES check for the GL_OES_texture_npot extension to determine if mipmaps
and repeating are supported with npot textures.
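A hedged sketch of checking the new flags with the existing
cogl_features_available() call:
static gboolean
can_repeat_npot_texture (void)
{
  /* The full NPOT feature implies the three new sub-features, so accept
   * either flag */
  return cogl_features_available (COGL_FEATURE_TEXTURE_NPOT_REPEAT) ||
         cogl_features_available (COGL_FEATURE_TEXTURE_NPOT);
}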
glDisableVertexAttribArray was defined to glEnableVertexAttribArray so
it would probably cause crashes if it was ever used. Presumably
nothing is using these yet because the generic attributes are not yet
tied to shader attributes in a predictable way.
For testing purposes, either to identify bugs in Cogl or the driver or
simulate lack of PBO support COGL_DEBUG=disable-pbos can be used to
fallback to malloc instead.
The pango renderer was causing lots of override materials to be allocated
because the vertex_buffer API converts AUTOMATIC mode into REPEAT for
backwards compatibility. By explicitly setting the wrap mode to
CLAMP_TO_EDGE when creating the glyph_material then the vertex_buffer
API will leave it untouched.
This allows you to tell Cogl that you are planning to replace all the
buffer's data once it is mapped with cogl_buffer_map. This means that if
the buffer is currently being accessed by the GPU, the driver doesn't
have to stall and wait for it to finish before the buffer can be
accessed from the CPU; it can instead potentially allocate a new buffer
with undefined data and map that.
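A rough usage sketch, assuming the hint is exposed as
COGL_BUFFER_MAP_HINT_DISCARD (exact types and names may differ in this
revision; the buffer variable is hypothetical):

    /* Sketch: map the buffer for writing and declare that all of its
     * current contents will be discarded and rewritten. */
    guint8 *data = cogl_buffer_map (buffer,
                                    COGL_BUFFER_ACCESS_WRITE,
                                    COGL_BUFFER_MAP_HINT_DISCARD);
    if (data != NULL)
      {
        /* ... write fresh data over the whole buffer ... */
        cogl_buffer_unmap (buffer);
      }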
Make Cally follow the single-include header file policy of Clutter and
Cogl; this means making cally.h the single include header, and requires
a new cally-main.h file for the functions defined by cally.h.
Also:
• clean up the licensing notice and remove the FSF address;
• document the object structures (instance and class);
• G_GNUC_CONST-ify the get_type() functions;
• reduce the padding for CallyActor sub-classes;
• reduce the amount of headers included.
Initialize the accessibility support by calling cally_accessibility_init.
Take into account that this is required at least to be sure that the
CallyUtil class is available.
It also modifies cally_accessibility_module_init so that it returns
whether the initialization succeeded (and renames it, removing the
"module" word).
It also removes the GNOME accessibility hooks, as this is no longer
module code.
Solves CB#2098
This commit includes a method to initialize the a11y support, with two
main purposes:
* Register the different Atk factories.
* Ensure that an AtkUtil implementation class is available.
Part of CB#2097
The Clutter Accessibility Library is an implementation of the ATK,
the Accessibility Toolkit, which exposes Clutter actors to accessibility
tools. This not only allows writing accessible user interfaces, but also
lets testing and verification frameworks based on A11Y technologies
inspect and test a Clutter scene graph.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2097
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This changes the cogl_is_XYZ function prototypes generated when using
the COGL_OBJECT_DEFINE macro to take a void * argument instead of a
CoglHandle argument.
This removes cogl_pixel_array_new which just took a size in bytes.
Without the image size and pixel format the driver often doesn't
have enough information to allocate optimal GPU memory that can be
textured from directly. This is because GPUs often have ways to
spatially alter the layout of a texture to improve cache access
patterns, which may require special alignment and padding dependent on
the image's width, height and bpp.
Although currently we are limited by OpenGL because it doesn't let us
pass on the width and height when allocating a PBO, the hope is that we
can define a better extension at some point.
The usage hint should be implied by the CoglBuffer subclass type so the
public getter and setter APIs for manually changing the usage hint of a
CoglBuffer have now been removed.
Instead of having to extend cogl_is_buffer with new buffer types
manually this now adds a new COGL_BUFFER_DEFINE macro to be used instead
of COGL_OBJECT_DEFINE for CoglBuffer subclasses. This macro will
automatically register the new type with ctx->buffer_types, which is
then iterated by cogl_is_buffer. This is the same coding pattern used for
CoglTexture.
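Conceptually, the resulting check looks something like the sketch below
(internal names such as the klass pointer are assumptions rather than
the exact Cogl internals):

    /* Conceptual sketch of cogl_is_buffer(): walk the list of buffer
     * types registered by COGL_BUFFER_DEFINE and compare each entry
     * against the object's class pointer. */
    gboolean
    cogl_is_buffer (const void *object)
    {
      const CoglHandleObject *obj = object;
      GSList *l;

      _COGL_GET_CONTEXT (ctx, FALSE);

      if (object == NULL)
        return FALSE;

      for (l = ctx->buffer_types; l != NULL; l = l->next)
        if (l->data == obj->klass)
          return TRUE;

      return FALSE;
    }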
This adds a _cogl_debug_dump_materials_dot_file function that can be
used to dump all the descendants of the default material to a file in
the dot format, which can then be converted to an image for
visualization.
In _cogl_material_pre_change_notify if a material with descendants is
modified then we create a new material that is a copy of the one being
modified and reparent those descendants to the new material.
This patch ensures we drop the reference we get from cogl_material_copy
since we can rely on the descendants to keep the new material alive.
The commit to split the fragment processing backends out from
cogl-material.c (3e1323a636) broke the GLES 1 and 2 builds; the
fix was to guard the code in each backend according to the
COGL_MATERIAL_BACKEND_XYZ defines, which are set up in
cogl-material-private.h.
The documentation for cogl_vertex_buffer_indices_get_for_quads was
using ugly ASCII art to draw the diagrams. These have now been
replaced with PNG figures.
CoglMaterialWrapMode was missing from the cogl-sections.txt file so it
wasn't getting displayed. There were also no documented return values
from the getters.
The tesselator code uses some defines that it expects to be in the GL
headers such as GLAPI and GLAPIENTRY. These are used to mark the entry
points as exportable on each platform. We don't really want the
tesselator code to use these but we also don't want to modify the C
files so instead they are #defined to be empty in the stub glu.h. That
header is only included internally when building the tesselator/ files
so it shouldn't affect the rest of Cogl.
GLES also doesn't have a GLdouble type so we just #define this to be a
regular double.
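In practice the stub header boils down to something like this (a
sketch; the GLES guard name is an assumption):

    /* Stub glu.h sketch: neutralize the export/calling-convention
     * macros so the tesselator sources build unmodified. */
    #define GLAPI
    #define GLAPIENTRY

    /* GLES has no GLdouble, so map it to a plain double. */
    #ifdef HAVE_COGL_GLES
    #define GLdouble double
    #endif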
cogl_material_copy was taking a reference on the original texture when
making a copy. However it then calls _cogl_material_set_parent on the
material which also takes a reference on the parent. The second
reference is cleaned up whenever _cogl_material_unparent is called and
this is also called by _cogl_material_free. However, it seems that
nothing was cleaning up the first reference. I think the reference is
entirely unnecessary so this patch removes it.
The AlignConstraint update is using only the width/height of the source,
but it should also take into account the position.
Also, instead of using the ::notify signal, it should follow the
BindConstraint, and switch to the ::allocation-changed signal, since
it's less expensive (one emission instead of four notifications, one for
each property we use).
We had several different ways of exposing experimental API: in one case
the symbols had no special suffix, while in two other cases the symbols
were given an _EXP suffix, but in different ways.
This makes all experimental API have an _EXP suffix which is handled
using #defines in the header so the prototypes in the .c and .h files
don't have the suffix.
The documented reason for the suffix is so that anyone watching Cogl for
ABI changes who sees symbols disappear will hopefully understand what's
going on.
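The pattern in a public header looks roughly like this (cogl_foo_bar is
a purely hypothetical symbol used to illustrate it):

    /* The prototype is written without the suffix; the #define maps the
     * name applications compile against onto the suffixed ABI symbol. */
    #define cogl_foo_bar cogl_foo_bar_EXP

    void
    cogl_foo_bar (void);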