The fixed function fogging provided by OpenGL only works with unmultiplied
colors (or if the color has an alpha of 1.0). Since we now premultiply
textures and colors by default, a note to this effect has been added to
clutter_stage_set_fog and cogl_set_fog.
test-depth.c no longer uses clutter_stage_set_fog for this reason.
In the future when we can depend on fragment shaders we should also be
able to support fogging of premultiplied primitives.
This test assumes that the textures will be stored internally with exactly
the color given so that specific texture combining arithmetic can be
tested. Using COGL_PIXEL_FORMAT_ANY allows Cogl to internally premultiply
the textures, so we have to explicitly request an unmultiplied format.
Many operations, like mixing two textures together or alpha-blending
onto a destination with alpha, are done most logically if texture data
is in premultiplied form. We also have many sources of premultiplied
texture data, like X pixmaps, FBOs, cairo surfaces. Rather than trying
to work with two different types of texture data, simplify things by
always premultiplying texture data before uploading to GL.
Because the default blend function is changed to accommodate this,
uses of pure-color CoglMaterial need to be adapted to add
premultiplication.
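As a rough illustration of what this means (a sketch, not the actual Cogl
code path): each color channel is scaled by the alpha before being handed
to GL, and the source blend factor becomes ONE instead of SRC_ALPHA.
  /* Illustrative helper: premultiplying an 8-bit RGBA color. */
  typedef struct { unsigned char r, g, b, a; } Color8;

  static Color8
  premultiply (Color8 c)
  {
    Color8 p;

    p.r = (c.r * c.a + 127) / 255;
    p.g = (c.g * c.a + 127) / 255;
    p.b = (c.b * c.a + 127) / 255;
    p.a = c.a;

    return p;
  }

  /* With premultiplied sources the blend equation becomes
   *   dest = src * 1 + dest * (1 - src_alpha)
   * i.e. (ONE, ONE_MINUS_SRC_ALPHA) instead of
   * (SRC_ALPHA, ONE_MINUS_SRC_ALPHA). */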
gl/cogl-texture.c gles/cogl-texture.c: Always premultiply
non-premultiplied texture data before uploading to GL.
cogl-material.c cogl-material.h: Switch the default blend functions
to ONE, ONE_MINUS_SRC_ALPHA so they work correctly with premultiplied
data.
cogl.c: Make cogl_set_source_color() premultiply the color.
cogl.h.in color-material.h: Add some documentation about
premultiplication and its interaction with color values.
cogl-pango-render.c clutter-texture.c tests/interactive/test-cogl-offscreen.c:
Use premultiplied colors.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Merge branch 'master-clock-updates'
* master-clock-updates: (22 commits)
Change the paint forcing on the Text cache text
[timelines] Improve marker hit check and don't fudge the delta
Revert "[timeline] Don't clamp the elapsed time when a looping tl reaches the end"
[tests] Don't add a newline to the end of g_test_message calls
[test-timeline] Add a marker at the beginning of the timeline
[timeline] Don't clamp the elapsed time when a looping tl reaches the end
[master-clock] Throttle if no redraw was performed
[docs] Update Clutter's API reference
Force a paint instead of calling clutter_redraw()
Fix clutter_redraw() to match the redraw cycle
Run the repaint functions inside the redraw cycle
Remove useless manual timeline ticking
Move elapsed-time calculations into ClutterTimeline
Limit the frame rate when not syncing to VBLANK
Decrease the main-loop priority of the frame cycle
Avoid motion-compression in test-picking test
Compress events as part of the frame cycle
Remove stage update idle and do updates from the master clock
Call g_main_context_wakeup() when we start running timelines
Remove unused msecs_delta member
...
The master clock and the repaint cycle have been changed, and this
broke the way the test for the Text actor cache of PangoLayouts
forces a redraw.
We have to call clutter_actor_paint() on the Stage embedding the
Text actor we want to test; this is kinda fugly because if the
Layout has changed it will end up causing a reallocation cycle
in the middle of the Text actor paint. Since it's a test case,
and since forcing redraws is a bit of a hack as well, we can
turn a blind eye to that.
This reverts commit 9c5663d671.
The patch was causing problems for applications that expect the
elapsed_time to be at either end of the timeline when the completed
signal is fired. For example test-behave swaps the direction of the
timeline in the completed handler but if the time has overflowed the
end then the timeline would only take a short time to get back to the
beginning. This caused the animation to just vibrate around the
beginning.
The new-frame signal of a timeline was previously guaranteed to be
emitted with the elapsed_time set to the end before it emits the
completed signal. This doesn't necessarily make sense for looping
timelines because it would cause the elapsed time to be clamped to a
slightly off value whenever the timeline restarts. This patch makes it
perform the wrap around before emitting the new-frame signal so that
the elapsed time always corresponds to the time elapsed since the
timeline was started.
Additionally it no longer fudges the msecs_delta property to make the
marker check work so clutter_timeline_get_delta will always return the
wall clock time since the last frame.
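A sketch of the new ordering (illustrative names, not the real
ClutterTimeline internals; GLib types assumed):
  typedef struct {
    gint64   elapsed_time; /* ms since the timeline was started */
    gint64   duration;     /* ms */
    gboolean is_looping;
  } TimelineState;

  static void
  timeline_do_frame (TimelineState *tl, gint64 msecs_delta)
  {
    tl->elapsed_time += msecs_delta;

    if (tl->is_looping && tl->elapsed_time > tl->duration)
      {
        /* wrap *before* emitting new-frame, so handlers never see a
         * value clamped to the duration when the timeline restarts */
        tl->elapsed_time %= tl->duration;
      }

    /* emit ::new-frame with tl->elapsed_time here; msecs_delta is
     * passed through untouched for clutter_timeline_get_delta() */
  }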
The master clock now works fine whether or not there are any stages,
so the timeline conformance tests don't need to set up their
own timing.
Set CLUTTER_VBLANK=none for the conformance tests, which, in addition
to removing a test-environment dependency, will result in the ticking
for timeline tests being throttled to the default frame rate.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of calculating a delta in the master clock, and passing that
into each timeline, make each timeline individually responsible for
remembering the last time and computing the delta.
This:
- Fixes a problem where we could spin infinitely processing
timeline-only frames with < 1msec differences.
- Makes timelines consistently start timing on the first frame;
instead of doing different things for the first started timeline
and other timelines.
- Improves accuracy of elapsed time computations by avoiding
accumulating microsecond => millisecond truncation errors.
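A sketch of the per-timeline bookkeeping this implies (names are
illustrative; the real code lives in clutter-timeline.c):
  typedef struct {
    gint64 last_frame_time; /* microseconds, 0 = not started yet */
  } TimelineClock;

  static gint64
  timeline_get_delta_ms (TimelineClock *clock, gint64 now_us)
  {
    gint64 delta_us;

    if (clock->last_frame_time == 0)
      {
        /* first frame after starting: no time has passed yet */
        clock->last_frame_time = now_us;
        return 0;
      }

    delta_us = now_us - clock->last_frame_time;

    /* only consume whole milliseconds and keep the sub-millisecond
     * remainder for the next frame, so truncation errors don't
     * accumulate over the life of the timeline */
    clock->last_frame_time += (delta_us / 1000) * 1000;

    return delta_us / 1000;
  }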
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
clutter-master-clock.c clutter-master-clock.h: When the
SYNC_TO_VBLANK feature is not available, wait for 1/frame_rate
seconds since the start of the last frame before drawing the next
frame. Add _clutter_master_clock_start_running() to abstract
the usage of g_main_context_wakeup()
clutter-stage.c: Add _clutter_master_clock_start_running()
clutter-main.c: Update the docs for clutter_set_default_frame_rate()
and clutter_get_default_frame_rate() to no longer talk about timeline
frame rates.
test-text-perf.c test-text.c: Set a frame rate of 1000fps so that
frame-rate limiting doesn't affect the result.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Using clutter_stage_get_actor_at_pos() rather than synthesizing
events; the synthesized events were being compressed, so we were
only testing one pick per frame.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The clutter_stage_fullscreen() and clutter_stage_unfullscreen() functions
are a GDK-ism. The underlying implementation is already using an accessor
with a boolean parameter.
This should bring the number of collisions between properties, methods
and signals down to zero.
The :fullscreen property is very much confusing as it is implemented.
It can be written to a value, but the whole process might fail. If we
set:
g_object_set (stage, "fullscreen", TRUE, NULL);
and the fullscreen process fails or it is not implemented, the value
will be reset to FALSE (if we're lucky) or left TRUE (most of the
time).
The writability is just a shorthand for invoking clutter_stage_fullscreen()
or clutter_stage_unfullscreen() depending on a boolean value without
using an if.
The :fullscreen property also greatly confuses high level languages,
since the same symbol is used:
- for a method name (Clutter.Stage.fullscreen())
- for a property name (Clutter.Stage.fullscreen)
- for a signal (Clutter.Stage::fullscreen)
For these reasons, the :fullscreen should be renamed to :fullscreen-set
and be read-only. Implementations of the Stage should only emit the
StageState event to change from normal to fullscreen, and the Stage
will automatically update the value of the property and emit a notify
signal for it.
ClutterEvent is not really gobject-introspection friendly because
of the whole discriminated union thing. In particular, if you get
a ClutterEvent in a signal handler, you probably can't access the
event-type-specific fields, and you probably can't call methods
like clutter_key_event_symbol() either, because you can't cast the
ClutterEvent to a ClutterKeyEvent.
The cleanest solution is to turn every accessor into ClutterEvent
methods, accepting a ClutterEvent* and internally checking the event
type.
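For example, a key-symbol accessor in this style looks roughly like the
following (a sketch of the pattern, not the exact Clutter implementation):
  guint
  clutter_event_get_key_symbol (const ClutterEvent *event)
  {
    g_return_val_if_fail (event != NULL, 0);
    /* the type check that callers previously had to do via a cast */
    g_return_val_if_fail (event->type == CLUTTER_KEY_PRESS ||
                          event->type == CLUTTER_KEY_RELEASE, 0);

    return event->key.keyval;
  }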
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1585
The test was quitting after the second frame, but it should quit after the
third frame because the test doesn't actually check results until the third
frame, due to the workaround for drivers with broken glReadPixels.
(When first written the test would have been verified with the
clutter_main_quit() commented out, which gives visual feedback of what the
test does, so the off-by-one would have snuck in just before uncommenting
and pushing.)
The load-finished signal has a GError* argument which is meant to
signify whether the loading was successful. However, many of the
places in ClutterTexture that emit this signal directly pass their
'error' variable, which is a GError** and will be NULL or not
completely independently of whether there was an error. If the
argument were dereferenced it would probably crash.
The test-texture-async interactive test case should also verify
that the ::load-finished signal is correctly emitted.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1622
The texture filters are now a property of the material layer rather
than the texture object. Whenever a texture is painted with a material
it sets the filters on all of the GL textures in the Cogl texture. The
filter is cached so that it won't be changed unnecessarily.
The automatic mipmap generation has changed so that the mipmaps are
only generated when the texture is painted instead of every time the
data changes. Changing the texture sets a flag to mark that the
mipmaps are dirty. This works better if the FBO extension is available
because we can use glGenerateMipmap. If the extension is not available
it will temporarily enable automatic mipmap generation and reupload
the first pixel of each slice. This requires tracking the data for the
first pixel.
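Schematically, the lazy generation on the FBO path works like this (a
sketch with illustrative names, assuming GL_EXT_framebuffer_object):
  typedef struct {
    GLuint   gl_texture;
    gboolean mipmaps_dirty;
  } TextureSlice;

  static void
  slice_data_changed (TextureSlice *slice)
  {
    /* don't regenerate now, just remember that we will have to */
    slice->mipmaps_dirty = TRUE;
  }

  static void
  slice_ensure_mipmaps (TextureSlice *slice)
  {
    if (!slice->mipmaps_dirty)
      return;

    glBindTexture (GL_TEXTURE_2D, slice->gl_texture);
    glGenerateMipmapEXT (GL_TEXTURE_2D); /* from the FBO extension */

    slice->mipmaps_dirty = FALSE;
  }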
The COGL_TEXTURE_AUTO_MIPMAP flag has been replaced with
COGL_TEXTURE_NO_AUTO_MIPMAP so that it will default to
auto-mipmapping. The mipmap generation is now effectively free if you
are not using a mipmap filter mode so you would only want to disable
it if you had some special reason to generate your own mipmaps.
ClutterTexture no longer has to store its own copy of the filter
mode. Instead it stores it in the material and the property is
directly set and read from that. This fixes problems with the filters
getting out of sync when a cogl handle is set on the texture
directly. It also avoids the mess of having to rerealize the texture
if the filter quality changes to HIGH, because Cogl will take care of
generating the mipmaps if needed.
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask of flags. This gives us room for expansion
without breaking API/ABI, and allows encoding more information in
the allocation process instead of just changes of the absolute origin.
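A sketch of how a subclass would then use the flags (MyActor and
my_actor_parent_class are assumed to come from the usual G_DEFINE_TYPE
boilerplate; the flag name matches what ended up in Clutter 1.0):
  static void
  my_actor_allocate (ClutterActor           *self,
                     const ClutterActorBox  *box,
                     ClutterAllocationFlags  flags)
  {
    /* chain up to store the allocation, forwarding the flags */
    CLUTTER_ACTOR_CLASS (my_actor_parent_class)->allocate (self, box, flags);

    if (flags & CLUTTER_ABSOLUTE_ORIGIN_CHANGED)
      {
        /* the actor moved on screen even if its own box is unchanged */
      }
  }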
Units as they have been implemented since Clutter 0.4 have always been
misdefined as a "logical distance unit", while they were just pixels with
fractional bits.
Units should be reworked into an opaque structure holding a value and
its unit type, which can then be converted into pixels when Clutter needs
to paint, compute size requisitions, or perform allocations.
The previous API should be completely removed to avoid collisions, and
a new type:
ClutterUnits
should be added; the ability to install GObject properties using
ClutterUnits should be maintained.
Timelines no longer work in terms of a frame rate and a number of
frames but instead just have a duration in milliseconds. This better
matches the working of the master clock where if any timelines are
running it will redraw as fast as possible rather than limiting to the
lowest rated timeline.
Most applications will just create animations and expect them to
finish in a certain amount of time without caring about how many
frames are drawn. If a frame is going to be drawn it might as well
update all of the animations to some fraction of the total animation
rather than rounding to the nearest whole frame.
The 'frame_num' parameter of the new-frame signal is now 'msecs' which
is a number of milliseconds progressed along the
timeline. Applications should use clutter_timeline_get_progress
instead of the frame number.
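For example, a new-frame handler now looks like this (a sketch; 'actor'
is assumed to be passed as the user data):
  static void
  on_new_frame (ClutterTimeline *timeline,
                gint             msecs,
                gpointer         user_data)
  {
    ClutterActor *actor = user_data;
    gdouble progress = clutter_timeline_get_progress (timeline);

    /* fade over the duration instead of counting frames */
    clutter_actor_set_opacity (actor, (guint8) (255 * progress));
  }
The handler is connected to the timeline's "new-frame" signal as before.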
Markers can now only be attached at a time value. The position is
stored in milliseconds rather than at a frame number.
test-timeline-smoothness and test-timeline-dup-frames have been
removed because they no longer make sense.
All the underlying implementation and the public entry points have
been switched to floats; the only missing bits are the Actor properties
that deal with positioning and sizing.
This usually means a major pain when dealing with GValues and varargs
functions. While GValue will warn you when dealing with the wrong
conversions, varargs will simply die a horrible (and hard to debug)
death via segfault. Nothing much to do here, except warn people in the
release notes and hope for the best.
If it doesn't queue a redraw and allow the backend to clear and swap
the buffers then the results will be skewed because it is not
predictable when the driver will actually render the scene.
Previously indices were tightly bound to a particular Cogl vertex buffer
but we would like to be able to share indices so now we have
cogl_vertex_buffer_indices_new () which returns a CoglHandle.
In particular we would like to have a shared set of indices for drawing
lists of quads that can be shared between the pango renderer and the
Cogl journal.
cogl_enable_depth_test and cogl_enable_backface_culling have been renamed
and now have corresponding getters, the new functions are:
cogl_set_depth_test_enabled
cogl_get_depth_test_enabled
cogl_set_backface_culling_enabled
cogl_get_backface_culling_enabled
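A small usage sketch showing why the getters are handy for saving and
restoring state around a draw:
  /* inside some drawing function: */
  gboolean depth_was_enabled = cogl_get_depth_test_enabled ();
  gboolean culling_was_enabled = cogl_get_backface_culling_enabled ();

  cogl_set_depth_test_enabled (TRUE);
  cogl_set_backface_culling_enabled (TRUE);

  /* ... draw geometry that needs depth testing and culling ... */

  cogl_set_depth_test_enabled (depth_was_enabled);
  cogl_set_backface_culling_enabled (culling_was_enabled);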
Originally cogl_vertex_buffer_add_indices let the user pass in their own unique
ID for the indices; now the ID is generated internally and returned to the
caller.
It's now possible to add arrays of indices to a Cogl vertex buffer and
they will be put into an OpenGL vertex buffer object. Since it's quite
common for index arrays to be static it saves the OpenGL driver from
having to validate them repeatedly.
This changes the cogl_vertex_buffer_draw_elements API: It's no longer
possible to provide a pointer to an index array at draw time. So
cogl_vertex_buffer_draw_elements now takes an indices identifier that
should correspond to an identifier returned when calling
cogl_vertex_buffer_add_indices ()
Setting up layer combine functions and blend modes is very awkward to do
programmatically. This adds a parser for string based descriptions which are
more concise and readable.
E.g. a material layer combine function could now be given as:
"RGBA = ADD (TEXTURE[A], PREVIOUS[RGB])"
or
"RGB = REPLACE (PREVIOUS)"
"A = MODULATE (PREVIOUS, TEXTURE)"
The simple syntax and grammar are only designed to expose standard fixed
function hardware, more advanced combining must be done with shaders.
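For instance, a premultiplied "over" blend could be requested with
something like the following (the exact blend string and signature are
assumed here; error handling omitted):
  /* e.g. somewhere in material setup: */
  CoglHandle material = cogl_material_new ();

  /* blend string equivalent of (ONE, ONE_MINUS_SRC_ALPHA) */
  cogl_material_set_blend (material,
                           "RGBA = ADD (SRC_COLOR, DST_COLOR * (1 - SRC_COLOR[A]))",
                           NULL);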
This includes standalone documentation of blend strings covering the aspects
that are common to blending and texture combining, and adds documentation
with examples specific to the new cogl_material_set_blend() and
cogl_material_layer_set_combine() functions.
Note: The hope is to remove the now redundant bits of the material API
before 1.0
The CoglTexture constructors expose the "max-waste" argument for
controlling the maximum amount of wasted areas for slicing or,
if set to -1, disables slicing.
Slicing is really relevant only for large images that are never
repeated, so it's a useful feature only in controlled use cases.
Specifying the amount of wasted area is, on the other hand, just
a way to mess up this feature; 99% of the time you either pull this
number out of thin air, hoping it's right, or you try to do the
right thing and you choose the wrong number anyway.
Instead, we can use the CoglTextureFlags to control whether the
texture should not be sliced (useful for Clutter-GST and for the
texture-from-pixmap actors) and provide a reasonable value for
enabling the slicing ourselves. At some point, we might even
provide a way to change the default at compile time or at run time,
for particular platforms.
Since max_waste is gone, the :tile-waste property of ClutterTexture
becomes read-only, and it proxies the cogl_texture_get_max_waste()
function.
Inside Clutter, the only cases where the max_waste argument was
not set to -1 are in the Pango glyph cache (which is a POT texture
anyway) and inside the test cases where we want to force slicing;
for the latter we can create larger textures that will be bigger than
the threshold we set.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Signed-off-by: Neil Roberts <neil@linux.intel.com>
* master:
[cogl-vertex-buffer] Ensure the clip state before rendering
[test-text-perf] Small fix-ups
Add a test for text performance
[build] Ensure that cogl-debug is disabled by default
[build] The cogl GE macro wasn't passing an int according to the format string
Use the right internal format for GL_ARB_texture_rectangle
[actor_paint] Ensure painting is a NOP for actors with opacity = 0
Make backface culling work with vertex buffers
- Fix a typo in a for loop in create_label which left 'i'
uninitialised and caused a crash if you're unlucky.
- Set the CLUTTER_VBLANK=none environment variable.
- Don't quit on a keypress.
- Trailing whitespace tidy up.
Print out the cursor and selection positions in order to verify
the behaviour of the Text actor.
This is a likely candidate for a conformance test unit as well.
The picking test has two configurables: the number of actors and
the number of events per frame. It makes sense to have them as
command line options to test with multiple configurations without
having to change defines and recompile.
COGLenum, COGLint and COGLuint which were simply typedefs for GL{enum,int,uint}
have been removed from the API and replaced with specialised enum typedefs, int
and unsigned int. These were causing problems for generating bindings and were
also considered poor style.
The cogl texture filter defines CGL_NEAREST and CGL_LINEAR etc are now replaced
by a namespaced typedef 'CoglTextureFilter' so they should be replaced with
COGL_TEXTURE_FILTER_NEAREST and COGL_TEXTURE_FILTER_LINEAR etc.
The shader type defines CGL_VERTEX_SHADER and CGL_FRAGMENT_SHADER are handled by
a CoglShaderType typedef and should be replaced with COGL_SHADER_TYPE_VERTEX and
COGL_SHADER_TYPE_FRAGMENT.
cogl_shader_get_parameteriv has been replaced by cogl_shader_get_type and
cogl_shader_is_compiled. More getters can be added later if desired.
With the recent change to internal floating point values, ClutterUnit
has become a redundant type, defined to be a float. All integer entry
points are being internally converted to floating point values to be
passed to the GL pipeline with the least amount of conversion.
ClutterUnit is thus exposed as just a "pixel with fractionary bits",
and not -- as users might think -- as generic, resolution and device
independent units. not that it was the case, but a definitive amount
of people was convinced it did provide this "feature", and was flummoxed
about the mere existence of this type.
So, having ClutterUnit exposed in the public API doubles the entry
points and has the following disadvantages:
- we have to maintain twice the amount of entry points in ClutterActor
- we still do an integer-to-float implicit conversion
- we introduce a weird impedance between pixels and "pixels with
fractional bits"
- language bindings will have to choose what to bind, and resort
to manually overriding the API
+ *except* for language bindings based on GObject-Introspection, as
they cannot do manual overrides, thus will replicate the entire
set of entry points
For these reasons, we should coalesce every Actor entry point for
pixels and for ClutterUnit into a single entry point taking a float,
like:
void clutter_actor_set_x (ClutterActor *self,
gfloat x);
void clutter_actor_get_size (ClutterActor *self,
gfloat *width,
gfloat *height);
gfloat clutter_actor_get_height (ClutterActor *self);
etc.
The issues I have identified are:
- we'll have two cases of compiler warnings:
- printf() format of the return values from %d to %f
- clutter_actor_get_size() taking floats instead of unsigned ints
- we'll have a problem with varargs when passing an integer instead
of a floating point value, except on 64bit platforms where the
size of a float is the same as the size of an int
To be clear: the *intent* of the API should not change -- we still use
pixels everywhere -- but:
- we remove ambiguity in the API with regard to pixels and units
- we remove entry points we get to maintain for the whole 1.0
version of the API
- we make things simpler to bind for both manual language bindings
and automatic (gobject-introspection based) ones
- we have the simplest API possible while still exposing the
capabilities of the underlying GL implementation
There were several functions I believe no one is currently using that were
only implemented in the GL backend (cogl_offscreen_blit_region and
cogl_offscreen_blit) that have simply been removed so we have a chance to
think about design later with a real use case.
There was one nonsense function (cogl_offscreen_new_multisample) that
sounded exciting but in all cases just returned COGL_INVALID_HANDLE
(though at least for GL it checked for multisampling support first!?);
it has also been removed.
The MASK draw buffer type has been removed. If we want to expose color
masking later then I think it at least would be nicer to have the mask be a
property that can be set on any draw buffer.
The cogl_draw_buffer and cogl_{push,pop}_draw_buffer function prototypes
have been moved up into cogl.h since they are for managing global Cogl state
and not for modifying or creating the actual offscreen buffers.
This also documents the API, so for example deciphering the semantics of
cogl_offscreen_new_to_texture() should be a bit easier now.
The units in the Timeline test suite just rely on the timeline
being a timeout automatically advanced by the main loop. This
is not the case anymore, since the merge of the master-clock.
To make the test units work again we need to "emulate" the master
clock without effectively having a stage to redraw; we do this
by creating a frame source and manually advancing the timelines
we create for test purposes, using the advance_msecs() "protected"
method.
With the change in commit 87e4e2 painting of hidden source actors
in ClutterClone was fixed. This commit changes the test-actor-clone
to visually verify this.
Setting the wrap mode on the PangoLayout seems to have disappeared
during the text-actor-layout-height branch merge so this brings it
back. The test for this in test-text-cache no longer needs to be
disabled.
We also shouldn't set the width on the layout if there is no wrapping
or ellipsizing because otherwise it implicitly enables wrapping. This
only matters if the actor gets allocated smaller than its natural
size.
Bug 1138 - No trackable "mapped" state
* Add a VISIBLE flag tracking application programmer's
expected showing-state for the actor, allowing us to
always ensure we keep what the app wants while tracking
internal implementation state separately.
* Make MAPPED reflect whether the actor will be painted;
add notification on a ClutterActor::mapped property.
Keep MAPPED state updated as the actor is shown,
ancestors are shown, actor is reparented, etc.
* Require a stage and realized parents to realize; this means
at realization time the correct window system and GL resources
are known. But unparented actors can no longer be realized.
* Allow children to be unrealized even if parent is realized.
Otherwise in effect either all actors or no actors are realized,
i.e. it becomes a stage-global flag.
* Allow clutter_actor_realize() to "fail" if not inside a toplevel
* Rework clutter_actor_unrealize() so internally we have
a flavor that does not mess with visibility flag
* Add _clutter_actor_rerealize() to encapsulate a somewhat
tricky operation we were doing in a couple of places
* Do not realize/unrealize children in ClutterGroup,
ClutterActor already does it
* Do not realize impl by hand in clutter_stage_show(),
since showing impl already does that
* Do not unrealize in various dispose() methods, since
ClutterActor dispose implementation already does it
and chaining up is mandatory
* ClutterTexture uses COGL while unrealizable (before it's
added to a stage). Previously this breakage was affecting
ClutterActor because we had to allow realize outside
a stage. Move the breakage to ClutterTexture, by making
ClutterTexture just use COGL while not realized.
* Unrealize before we set parent to NULL in clutter_actor_unparent().
This means unrealize() implementations can get to the stage.
Because actors need the stage in order to detach from stage.
* Update clutter-actor-invariants.txt to reflect latest changes
* Remove explicit hide/unrealize from ClutterActor::dispose since
unparent already forces those
Instead just assert that unparent() occurred and did the right thing.
* Check whether parent implements unrealize before chaining up
Needed because ClutterGroup no longer has to implement unrealize.
* Perform unrealize in the default handler for the signal.
This allows non-containers that have children to work properly,
and allows containers to override how it's done.
* Add map/unmap virtual methods and set MAPPED flag on self and
children in there. This allows subclasses to hook map/unmap.
These are not signals, because notify::mapped is better for
anything it's legitimate for a non-subclass to do.
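A sketch of how a subclass can hook these (MyGroup, MyGroupClass and
my_group_parent_class are assumed to come from the usual G_DEFINE_TYPE
boilerplate):
  static void
  my_group_map (ClutterActor *self)
  {
    /* chain up: the default handler sets MAPPED on self and children */
    CLUTTER_ACTOR_CLASS (my_group_parent_class)->map (self);

    /* work that should only happen while the actor is paintable */
  }

  static void
  my_group_class_init (MyGroupClass *klass)
  {
    ClutterActorClass *actor_class = CLUTTER_ACTOR_CLASS (klass);

    actor_class->map = my_group_map;
  }

  /* non-subclass code should just watch the property instead:
   *   g_signal_connect (actor, "notify::mapped",
   *                     G_CALLBACK (on_mapped_changed), NULL);
   */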
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Bug 1513 - Allow passing in ClutterPickMode to
clutter_stage_get_actor_at_pos()
At the moment, clutter_stage_get_actor_at_pos() uses CLUTTER_PICK_ALL
internally to find an actor. It would be useful to allow passing in
ClutterPickMode to clutter_stage_get_actor_at_pos(), so that the caller
can specify CLUTTER_PICK_REACTIVE as a criterion.
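The call would then look something like this (a sketch of the proposed
signature):
  /* e.g. in an event handler, with stage, x and y in scope:
   * only consider reactive actors when picking at (x, y) */
  ClutterActor *hit =
    clutter_stage_get_actor_at_pos (CLUTTER_STAGE (stage),
                                    CLUTTER_PICK_REACTIVE,
                                    x, y);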
Three tests are now performed on the picked squares. First there is no
covering actor which is the same as the original test. Then there is a
hidden covering actor which should not affect the results. Finally
there is a covering actor with a clip set on it so that only actors
at the borders of the stage should be pickable.
The wrap mode sub-test inside the ClutterText layout cache test
unit has been broken by the recent changes inside the Text actor.
The sub-test itself might require tweaking.
The cogl_is_* functions were showing up quite high on profiles due to
iterating through arrays of cogl handles.
This does away with all the handle arrays and implements a simple struct
inheritance scheme. All cogl objects now add a CoglHandleObject _parent;
member to their main structures. The base object currently includes two
members: a ref_count and a klass pointer. The klass in turn gives you a type
and a virtual function for freeing objects of that type.
Each handle type has a _cogl_##handle_type##_get_type () function
automatically defined which returns a GQuark of the handle type, so now
implementing the cogl_is_* funcs is just a case of comparing with
obj->klass->type.
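Schematically (member and helper names are illustrative, following the
description above):
  typedef struct _CoglObjectClass
  {
    GQuark  type;
    void  (* virt_free) (void *object);
  } CoglObjectClass;

  typedef struct _CoglHandleObject
  {
    unsigned int     ref_count;
    CoglObjectClass *klass;
  } CoglHandleObject;

  typedef struct _CoglTexture
  {
    CoglHandleObject _parent; /* must be the first member */
    /* texture specific state ... */
  } CoglTexture;

  gboolean
  cogl_is_texture (CoglHandle handle)
  {
    CoglHandleObject *obj = (CoglHandleObject *) handle;

    if (handle == COGL_INVALID_HANDLE)
      return FALSE;

    /* just compare against the quark stored in the klass */
    return obj->klass->type == _cogl_texture_get_type ();
  }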
Another outcome of the re-work is that cogl_handle_{ref,unref} are also much
more efficient, and no longer need extending for each handle type added to
cogl. The cogl_##handle_type##_{ref,unref} functions are now deprecated and
are no longer used internally to Clutter or Cogl. Potentially we can remove
them completely before 1.0.
There is no need for a custom hsl to rgb converter since Clutter implements
this logic; originally it wasn't quite as optimal, but that has now been
fixed.
Using test-cogl-vertex-buffer, which is CPU bound due to HLS -> RGB
conversions, as a test case, the alternative algorithm looked to be ~10%
faster when tested on a Lenovo X61s.
The test is a sanity check that dynamic updating of vertex data via the cogl
vertex buffer api works and has reasonable performance (though it can't be
considered a well-designed benchmark since it wastes a fair amount of CPU
time simply choosing pretty colors).
The code also aims to demonstrate one way of creating, updating and efficiently
drawing a quad mesh structure via the vertex buffer api which could be applied
to lots of different use cases.
Only have load-data-async and load-async properties; both are construct-only,
and the latter adds the former load-size-async behavior on top of
load-data-async.
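Usage would then look something like this (a sketch; the ::load-finished
handler signature follows the GError* argument described elsewhere in
this log):
  static void
  on_load_finished (ClutterTexture *texture,
                    const GError   *error,
                    gpointer        user_data)
  {
    if (error != NULL)
      g_warning ("async load failed: %s", error->message);
  }

  static ClutterActor *
  create_async_texture (const gchar *path)
  {
    /* load-async is construct-only, so it must be set at creation time */
    ClutterActor *texture = g_object_new (CLUTTER_TYPE_TEXTURE,
                                          "load-async", TRUE,
                                          NULL);

    g_signal_connect (texture, "load-finished",
                      G_CALLBACK (on_load_finished), NULL);

    clutter_texture_set_from_file (CLUTTER_TEXTURE (texture), path, NULL);

    return texture;
  }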
The fog and perspective API is currently split in two parts:
- the floating point version, using values
- the fixed point version, using structures
The relative properties are using the structure types, since they
are meant to set multiple values at the same time. Instead of
using bare values, the whole API should be coalesced into two
simple calls using structures to match the GObject properties.
Thus:
clutter_stage_set_fog (ClutterStage*, const ClutterFog*)
clutter_stage_get_fog (ClutterStage*, ClutterFog*)
clutter_stage_set_perspective (ClutterStage*, const ClutterPerspective*)
clutter_stage_get_perspective (ClutterStage*, ClutterPerspective*)
Which supersedes the fixed point and floating point variants.
More importantly, both ClutterFog and ClutterPerspective should
use floating point values, since that's what gets passed to
COGL anyway.
ClutterFog should also drop the "density" member, since ClutterStage
only allows linear fog; non-linear fog distribution can be achieved
using a signal handler and calling cogl_set_fog() directly; this keeps
the API compact yet extensible.
Finally, there is no ClutterStage:fog so it should be added.
The ClutterColor API has some inconsistencies:
- the string deserialization function does not match the rest of
the conversion function naming policy; the naming should be:
clutter_color_parse() -> clutter_color_from_string()
and the first parameter should be the ClutterColor that will
be set from the string, not the string itself (a GDK-ism).
- the fixed point API should not be exposed, especially in the
form of ClutterFixed values
- the non-fixed point HLS conversion functions do not make any
sense. The values returned should be:
hue := range [ 0, 360 ]
luminance := range [ 0, 1 ]
saturation := range [ 0, 1 ]
like the current fixed point API does. Returning a value in
the [ 0, 255 ] range is completely useless
- clutter_color_equal() should be converted for use inside
a GHashTable; a clutter_color_hash() should be added as well
(see the sketch below)
- the second parameter of the clutter_color_shade() function should
be the shading factor, not the result (another GDK-ism). this way
the function call can be translated from this:
color.shade(out result, factor)
to the more natural:
color.shade(factor, out result)
This somewhat large commit fixes all these issues and updates the
internal users of the API.
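A sketch of how the reworked API would be used (assuming the naming
proposed above, plus the existing clutter_color_copy()/clutter_color_free()
boxed-type helpers):
  /* e.g. inside some function: */
  ClutterColor color, darker;

  /* first parameter is the color to set, not the string */
  if (clutter_color_from_string (&color, "#ff8800ff"))
    {
      /* factor first, result out-parameter last */
      clutter_color_shade (&color, 0.5, &darker);
    }

  /* a set of unique colors, keyed by the color value itself */
  GHashTable *seen =
    g_hash_table_new_full (clutter_color_hash, clutter_color_equal,
                           (GDestroyNotify) clutter_color_free, NULL);

  g_hash_table_insert (seen, clutter_color_copy (&color),
                       GINT_TO_POINTER (1));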
The libdisable-npots library is just used as a helper as part of
make test so it should not be installed.
If noinst_* is used then automake will generate a static library but
this won't work with LD_PRELOAD so we then need an extra custom rule
to link that into a shared library. The custom rule uses the $(LINK)
Makefile var which gets put in the Makefile because of the static
library. We pass libtool a stub -rpath option which causes it to
generate a shared library.
The test now explicitly reads back from the framebuffer to sanity check that
texturing is happening as expected, and it now uses a fixed 2x2 texture instead
of redhand.png since redhand.png doesn't have a power of two size which can
cause the vertex buffer code to complain on hardware not supporting npot
textures.
The TEST_CONFORM_TODO macro is a simple placeholder macro that
adds the test function to the "/todo" namespace and skips the
test.
It can be used for tests that are known to fail because of bugs
that haven't been fixed yet, or because of features not yet
implemented.
test-vertex-buffer-contiguous now needs redhand.png so it should be
copied in to the build directory. This is copied from similar code in
the tests/interactive Makefile.
ClutterModel has an interactive test but lacks a conformance
unit for automatic testing.
This is the beginning of that unit, which covers the population
and iteration over a ListModel.
Sometimes a test unit should not be executed depending on a
condition. It would be good to have a macro doing this, along
with TEST_CONFORM_SIMPLE().
Additionally, the skipped unit should be added to a specific
namespace, so that any coverage report will be able to catch it.
For this reason, here's TEST_CONFORM_SKIP() which follows the
syntax:
TEST_CONFORM_SKIP (condition, namespace, function);
If condition evaluates to FALSE the test is skipped and the
unit added to the "/skipped" namespace.
The test no longer requires an XID argument to run; instead it creates its
own X Window. The test now also aims to demonstrate whether mipmapping is
working, and clearly informs you if fallbacks are being used for GLX tfp.
Fixes some blending issues when using color arrays since we were
conflicting with the cogl_enable state + fixes a texture layer
validation bug.
Adds a basic textured triangle to test-vertex-buffer-contiguous.
The test simply creates an odd sized texture with different colors at
each of the four corners. It then renders the texture and verifies
that the colors are the expected values. This should help ensure that
the sliced texture rendering code is working properly.
The :alignment property is prone to generate confusion: developers
will set it thinking that the contents of a ClutterText will
automagically align themselves.
Instead of using the generic term :alignment, and following the
GTK+ convention, we should use a more specific term, conveying the
actual effect of the property: alignment of the lines with respect
to each other, and not to the overall allocated area.
See bug 1428:
http://bugzilla.openedhand.com/show_bug.cgi?id=1428
This makes it consistent with cogl_rectangle_with_{multi,}texture_coords.
Notably the reason cogl_rectangle_with_{multi,}texture_coords wasn't changed
instead is that the former approach lets you describe back facing rectangles.
(though technically you could pass negative width/height values to achieve
this; it doesn't seem as neat.)
Bug 1349 - Using the anchor point to set the scale center is messy
The branch adds an extra center point for scaling which can be used
for example to set a scale about the center without affecting the
position of the actor.
The scale center can be specified as a unit offset from the origin or
as a gravity. If specified as a gravity it will be stored as a
fraction of the actor's size so that the position will track when the
actor changes size.
The anchor point and rotation centers have been modified so they can
be set with a gravity in the same way. However, only the Z rotation
exposes a property to set using a gravity because the other two
require a Z coordinate which doesn't make sense to interpret as a
fraction of the actor's width or height.
Conflicts:
clutter/clutter-actor.c
During the upgrade to cogl material, test-backface-culling was
switched to use cogl_rectangle instead of cogl_texture_rectangle to
draw the textures. However, cogl_rectangle takes a width and height
instead of the top-left and bottom-right vertices, so the
rectangles were being drawn in the wrong place.
* generic-actor-clone:
Remove CloneTexture from the API
[tests] Clean up the Clone interactive test
Rename ActorClone to Clone/2
Rename ActorClone to Clone/1
Improves the unit test to verify more awkward scaling and some corresponding fixes
Implements a generic ClutterActorClone that doesn't need fbos.