When building with --enable-profile we now depend on the uprof-0.3
developer release which brings a few improvements:
» It lets us "fix" how we initialize uprof so that instead of using a shared
object constructor/destructor (which was a hack used when first adding
uprof support to Clutter) we can now initialize as part of clutter's
normal initialization code. As a side note though, I found that the way
Clutter initializes has some quite serious problems whenever it
involves GOptionGroups. It is not able to guarantee the initialization
of dependencies like uprof and Cogl. For this reason we still use the
constructor/destructor approach to initialize uprof in Cogl.
» uprof-0.3 provides a better API for adding custom columns when reporting
timer and counter statistics which lets us remove quite a lot of manual
report generation code in clutter-profile.c.
» uprof-0.3 provides a shared context for tracking mainloop timer
statistics. This means any mainloop based library following the same
"Mainloop" timer naming convention can use the shared context and no
matter who ends up owning the final mainloop the statistics will always
be in the same place. This allows profiling of Clutter with an
external mainloop such as with the Mutter compositor.
» uprof-0.3 can export statistics over dbus and comes with an ncurses
based UI to visualize timer and counter stats live.
The latest version of uprof can be cloned from:
git://github.com/rib/UProf.git
When building actor relative transforms, instead of using the matrix
stack to combine transformations and making assumptions about what is
currently on the stack we now just explicitly initialize an identity
matrix and apply transforms to that.
This removes the full_vertex_t typedef for internal transformation code
and we just use ClutterVertex.
ClutterStage now implements apply_transform like any other actor,
and the code we had in _cogl_setup_viewport has been moved to the
stage's apply_transform instead.
ClutterStage now tracks an explicit projection matrix and viewport
geometry. The projection matrix is derived from the perspective whenever
that changes, and the viewport is updated when the stage gets a new
allocation. The SYNC_MATRICES mechanism has been removed in favour of
_clutter_stage_dirty_viewport/projection() APIs that get used when
switching between multiple stages to ensure cogl has the latest
information about the onscreen framebuffer.
In 965907deb3 the picking was changed to render the full stage
instead of a single pixel whenever picking is performed more than once
between paints. However the condition in the if-statement was
backwards so it would end up always doing a full stage render.
The idea is that if we see multiple picks per frame then that implies
the visible scene has become static. In this case we can promote the
next pick render to be unclipped so we have valid pick values for the
entire stage. Now we can continue to read from this cached buffer until
the stage contents do visibly change.
Thanks to Luca Bruno on #clutter for this idea!
The P_() macro adds a context for the property nick and blurb. In order
to make xgettext recognize it, we need to drop glib-gettextize inside the
autogen.sh script and ship a modified Makefile.in.in with Clutter.
Initialize the accessibility support by calling cally_accessibility_init.
Take into account that this is required at least to be sure that the
CallyUtil class is available.
It also modifies cally_accessibility_module_init in order to return
whether the initialization was fine (and changes the name, removing the
"module" word).
It also removes the GNOME accessibility hooks, as this is no longer
module code.
Solves CB#2098
Every time we request a CoglPangoFontMap, either internally or
externally, we should have one available.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
In commit c0a553163b I changed the format used to read the picking
pixel to COGL_PIXEL_FORMAT_RGB_888 because it was convenient to avoid
the premult conversion. However this broke picking on GLES on some
platforms because for that glReadPixels is only guaranteed to support
GL_RGBA with GL_UNSIGNED_BYTE. Since the last commit cogl_read_pixels
will always use that format but it will end up with a conversion back
to RGB_888. This patch avoids that conversion and avoids the premult
conversion by reading in RGBA_8888_PRE.
http://bugzilla.openedhand.com/show_bug.cgi?id=2057
We kind of assume that, if we fail to create a GL context, stuff will
break well before we get past the ClutterBackend::create_context()
implementation. We do, however, have error reporting in place inside the
Backend API to catch those cases. Unfortunately, since we switched to
lazy initialization of the Stage, there can be a case of GL context
creation failure that still leads to a successful initialization - and a
segmentation fault later on. This is clearly Not Good™.
Let's try to catch a failure in all the places calling create_context()
and report back to the user the error in a meaningful way, before
crashing and burning.
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» Queuing multiple clipped redraws will result in the bounding box of
  all the clip rectangles being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» Note: no empirical data was used to come up with this threshold so
we may need to tune this.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted you now also
need to override the signal handler and queue your own redraw.
Technically this is a semantic break, but it's assumed that no one
is currently doing this.
This still leaves a few unsolved issues with regards to optimizing sub
stage redraws that need to be addressed in further work so this can only
be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. It is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor because actors have no way to report a "paint box". (remember
actors can draw outside their allocation and actors with depth may
also be projected outside of their allocation)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
cogl_read_pixels() no longer asserts that the format passed in is
RGBA_8888 but instead accepts any format. The appropriate GL enums for
the format are passed to glReadPixels so OpenGL should perform a
conversion if necessary.
It currently assumes glReadPixels will always give us premultiplied
data. This will usually be correct because the result of the default
blending operations for Cogl ends up with premultiplied data in the
framebuffer. However it is possible for the framebuffer to be in
whatever format depending on what CoglMaterial is used to render to
it. Eventually we may want to add a way for an application to inform
Cogl that the framebuffer is not premultiplied in case it is being
used for some special purpose.
If the requested format is not premultiplied then Cogl will convert
it. The tests have been changed to read the data as premultiplied so
that they won't be affected by the conversion. Picking in Clutter has
been changed to use COGL_PIXEL_FORMAT_RGB_888 because it doesn't need
the alpha component. clutter_stage_read_pixels is left unchanged
because the application can't specify a format for that so it seems to
make most sense to store unpremultiplied values.
http://bugzilla.openedhand.com/show_bug.cgi?id=1959
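As a rough illustration of the new behaviour (a sketch, not code from this
commit), requesting a non-premultiplied format now just works, with Cogl
converting from the premultiplied data glReadPixels returns:

  #include <cogl/cogl.h>
  #include <glib.h>

  /* Sketch: read one pixel back as plain (unpremultiplied) RGBA;
   * Cogl performs the unpremultiply conversion if needed. */
  static void
  read_one_pixel (int x, int y, guint8 pixel[4])
  {
    cogl_read_pixels (x, y, 1, 1,
                      COGL_READ_PIXELS_COLOR_BUFFER,
                      COGL_PIXEL_FORMAT_RGBA_8888,
                      pixel);
  }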
Since using addresses that might change is something that finally
the FSF acknowledges as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI
for getting the license in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
Some of the ClutterDebugFlags are not meant as a logging facility: they
actually change Clutter's behaviour at run-time.
It would be useful to have this distinction ratified, and thus split
ClutterDebugFlags into two: one DebugFlags for logging facilities and
another set of flags for behavioural changes.
This split is warranted because:
• it should be possible to do "CLUTTER_DEBUG=all" and only have
log messages on the output
• it should be possible to use behavioural modifiers even on a
Clutter that has been compiled without debugging messages
support
The commit adds two new debugging flags:
ClutterPickDebugFlags - controlled by the CLUTTER_PICK environment
variable
ClutterPaintDebugFlags - controlled by the CLUTTER_PAINT environment
variable
The PickDebugFlags are:
nop-picking
dump-pick-buffers
While the PaintDebugFlags are:
disable-swap-events
The mechanism is equivalent to the CLUTTER_DEBUG environment variable,
but it does not depend on the debug level selected when configuring and
compiling Clutter. The picking and painting debugging flags are
initialized at clutter_init() time.
http://bugzilla.openedhand.com/show_bug.cgi?id=1991
• Remove unused variables.
• Do not pre-initialize ClutterActor's GType; pre-emptive optimizations
like these are more black magic than real optimization.
Instead of creating the default stage during initialization we can
now safely create it whenever clutter_stage_get_default() is called.
To maintain the invariant, the default stage is immediately realized
by Clutter itself.
Since we must guarantee that Cogl has a GL context to query, it is too
late to use the "dummy Window" trick from within the get_features()
virtual function implementation.
Instead, we can create a dummy Window from create_context() itself and
leave it around - basically trading a default stage with a dummy X
window.
We need to have the dummy X window around all the time so that the
GLX context can be selected and made current.
When we disable per-actor event delivery Clutter replicates the X11
implicit soft grab for motion events with off-stage coordinates. The
implicit grab
is done whenever the pointer of a device leaves a window with a button
still pressed; with the implicit grab in place the window still receives
motion events even after the LeaveNotify - until the button is released.
The implicit grab is not honoured in the per-actor event delivery case,
though, so we have a mismatch between two in-theory equivalent cases.
Luckily, the fix is pretty trivial: when we check for a motion event
with a stage set but without an actor set, and that has off-stage
coordinates, we arbitrarily set the source to be the stage of the event
and emit the pointer event.
The LEAVE/ENTER event pairs should be queued during the InputDevice
update process, when we change the actor under the device pointer.
This commit cleans up the event emission code inside clutter-main.c
and the logic of the event processing.
The InputDevice objects stores pointer coordinates, state, stage and
the actor under the cursor, so if the current backend provides us with
one attached to the Event structure then we want the InputDevice itself
to update its state and give us the ClutterActor underneath the
pointer's cursor.
Using the ::event signal to match the CLUTTER_DELETE event type (and
block the stage destruction) can be costly, since it means checking
every single event.
The ::delete-event signal is similar in spirit to any other specialized
signal handler dealing with events, and retains the same semantics.
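A minimal sketch of what using the specialized signal looks like; the
handler follows the usual ClutterEvent signal convention, and returning
TRUE stops the default handling so the stage is not destroyed:

  #include <clutter/clutter.h>

  static gboolean
  on_delete_event (ClutterStage *stage,
                   ClutterEvent *event,
                   gpointer      data)
  {
    /* returning TRUE blocks the default handler, so the stage
     * survives the user closing the window */
    return TRUE;
  }

  static void
  install_delete_handler (ClutterStage *stage)
  {
    g_signal_connect (stage, "delete-event",
                      G_CALLBACK (on_delete_event), NULL);
  }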
This gives Cogl a dedicated UProf context which will be linked together
with Clutter's context during clutter_init_real().
Initial timers cover _cogl_journal_flush and _cogl_journal_log_quad
You can explicitly ask for a report of Cogl statistics by exporting
COGL_PROFILE_OUTPUT_REPORT=1 but since the context is linked with Clutter's
the statistics will also be shown in the automatic Clutter reports.
This suspends and resumes all uprof timers and counters except while dealing
with picking, so as to give more focused statistics.
Be aware that there are still some issues with this profile option since
there are a few special case counters and timers that shouldn't be
suspended; notably the frame counters are incorrect so the per frame stats
can't be trusted.
As we have for debugging, this adds the ability to control profiling flags
either via the command line or an environment variable.
The first option added is CLUTTER_PROFILE=disable-report
This also changes the reporting to be opt-out so you don't need to export
CLUTTER_PROFILE_OUTPUT_REPORT=1 to see a report but you can use
CLUTTER_PROFILE=disable-report to disable it if desired.
UProf is a small library that aims to help applications/libraries provide
domain specific reports about performance. It currently provides high
precision timer primitives (rdtsc on x86) and simple counters, the ability
to link statistics between optional components at runtime and makes report
generation easy.
This adds initial accounting for:
- Total mainloop time
- Painting
- Picking
- Layouting
- Idle time
The timing done by uprof is of wall clock time. It's not based on stochastic
samples; we simply sample a counter at the start and end. When dealing with
the complexities of GPU drivers and with various kinds of IO this form of
profiling can be quite enlightening as it will be able to show where
your application is blocking, unlike tools such as sysprof.
To enable uprof accounting you must configure Clutter with --enable-profile
and have uprof-0.2 installed from git://git.moblin.org/uprof
If you want to see a report of statistics when Clutter applications exit you
should export CLUTTER_PROFILE_OUTPUT_REPORT=1 before running them.
Just a final word of caution: this stuff is new, and the manual nature of
adding uprof instrumentation means it is prone to errors when modifying
code. This just means that when you question strange results, don't rule out
a mistake in the instrumentation. Obviously though we hope the benefits
outweigh the costs, e.g. by focusing on very key stats and by having
automatic reporting.
Apparently, calling g_set_prgname() multiple times is not allowed
anymore, and hence clutter_init_* calls should not do that. Though this
is really GLib's fault - and a massive nuisance for us - we should
probably comply to avoid the test suite dying on us.
When getting signals from higher level toolkits, occasionally
one wants access to the underlying event; say for a Button
widget's "clicked" signal, to get the keyboard state.
Rather than having all of the high-level widgets emit
ClutterEvent just for the more unusual use cases,
add a global function to access the event state.
http://bugzilla.openedhand.com/show_bug.cgi?id=1888
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
* text-direction:
docs: Add text-direction accessors
Set the default language on the Pango context
actor: Set text direction on parenting
tests: Display the index inside text-box-layout
box-layout: Honour :text-direction
text: Dirty layout cache on text direction changes
actor: Add :text-direction property
Use the newly added ClutterTextDirection enumeration
Add ClutterTextDirection enumeration
The colour test for the stage in _clutter_do_pick checks for white to
determine whether the stage was picked but since 47db7af4d we were
setting the colour to black. This usually worked because the id of the
default stage ends up being 0 which equates to black. However if a
second stage is created then it will always end up picking the first
stage.
The stage's pick id can be written to the framebuffer when we call
cogl_clear so there's no need for the stage to also chain up in its pick
function, resulting in clutter-actor.c also emitting a rectangle for the
stage.
Instead of using PangoDirection directly we should use the
ClutterTextDirection enumeration.
We also need a pair of accessor functions for setting and
getting the default text direction.
cogl_clip_push and cogl_clip_push_window_rect, which are now deprecated, were
used in various places internally, so this just switches to using the
replacement functions.
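For reference, a sketch of the migration, assuming the replacements are
cogl_clip_push_rectangle() (model-space corner coordinates) and
cogl_clip_push_window_rectangle() (window-space origin and size):

  #include <cogl/cogl.h>

  static void
  clip_migration_example (void)
  {
    /* was: cogl_clip_push (x, y, width, height); */
    cogl_clip_push_rectangle (0.0f, 0.0f, 100.0f, 100.0f);
    /* ... draw clipped geometry ... */
    cogl_clip_pop ();

    /* was: cogl_clip_push_window_rect (x, y, width, height); */
    cogl_clip_push_window_rectangle (0, 0, 100, 100);
    /* ... draw clipped geometry ... */
    cogl_clip_pop ();
  }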
The debugging function read_pixels_to_file() and _clutter_do_pick were both
directly calling glReadPixels, but we don't want Clutter making direct
OpenGL calls and Cogl provides a suitable alternative. It also means
read_pixels_to_file() doesn't need to manually flip the data read due to
differences in Clutter/Cogl coordinate systems.
Cogl's support for offscreen rendering was originally written just to support
the clutter_texture_new_from_actor API and due to lack of documentation and
several confusing - non-orthogonal - side effects of using the API, it wasn't
really possible to use directly.
This commit does a number of things:
- It removes {gl,gles}/cogl-fbo.{c,h} and adds shared cogl-draw-buffer.{c,h}
files instead which should be easier to maintain.
- internally CoglFbo objects are now called CoglDrawBuffers. A
CoglDrawBuffer is an abstract base class that is inherited from to
implement CoglOnscreen and CoglOffscreen draw buffers. CoglOffscreen draw
buffers will initially be used to support the
cogl_offscreen_new_to_texture API, and CoglOnscreen draw buffers will
start to be used internally to represent windows as we aim to migrate some
of Clutter's backend code to Cogl.
- It makes draw buffer objects the owners of the following state:
- viewport
- projection matrix stack
- modelview matrix stack
- clip state
(This means when you switch between draw buffers you will automatically be
switching to their associated viewport, matrix and clip state)
Aside from hopefully making cogl_offscreen_new_to_texture be more useful
short term by having simpler and well defined semantics for
cogl_set_draw_buffer, as mentioned above this is the first step for a couple
of other things:
- It's a step toward moving ownership of windows down from Clutter backends
into Cogl, by (internally at least) introducing the CoglOnscreen draw
buffer. Note: the plan is that cogl_set_draw_buffer will accept onscreen or
offscreen draw buffer handles, and the "target" argument will become
redundant since we will instead query the type of the given draw buffer
handle.
- Because we have a common type for on and offscreen framebuffers we can
provide a unified API for framebuffer management. Things like:
- blitting between buffers
- managing ancillary buffers (e.g. attaching depth and stencil buffers)
- size requisition
- clearing
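To make the intended usage concrete, rendering to a texture through an
offscreen draw buffer looks roughly like the sketch below (exact entry
points may differ in detail; switching draw buffers also switches the
associated viewport, matrix stacks and clip state):

  #include <cogl/cogl.h>

  static CoglHandle
  render_to_texture (void)
  {
    CoglHandle tex = cogl_texture_new_with_size (256, 256,
                                                 COGL_TEXTURE_NONE,
                                                 COGL_PIXEL_FORMAT_RGBA_8888);
    CoglHandle offscreen = cogl_offscreen_new_to_texture (tex);

    cogl_set_draw_buffer (COGL_OFFSCREEN_BUFFER, offscreen);
    /* ... draw the offscreen scene here ... */
    cogl_set_draw_buffer (COGL_WINDOW_BUFFER, COGL_INVALID_HANDLE);

    cogl_handle_unref (offscreen);

    return tex; /* can now be used as a source texture */
  }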
Just like CLUTTER_CHECK_VERSION does version checking at compile
time, we need a way to verify the version of the library that we
are linking against. This is mostly needed for language bindings
and for run-time loadable modules -- when we'll get those.
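Assuming the run-time counterpart ends up being clutter_check_version(),
as released Clutter provides, usage would look like:

  #include <clutter/clutter.h>

  static void
  check_clutter_version (void)
  {
  #if CLUTTER_CHECK_VERSION (1, 0, 0)
    /* compile-time: building against at least Clutter 1.0.0 */
  #endif

    /* run-time: verify the library we are actually linked against */
    if (!clutter_check_version (1, 0, 0))
      g_warning ("linked against a Clutter older than 1.0.0");
  }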
gdk is an optional clutter dependency, so the pick buffer debugging option
needs some guards so we don't break, for example, the OSX builds. This also
adds a comment for the bit fiddling done on the pick colors used to ensure
the pick colors are more distinguished while debugging. (we swap the
nibbles of each color component so that pick buffers don't just look black.)
Now if you export CLUTTER_DEBUG=dump-pick-buffers clutter will write out a
png, e.g. pick-buffer-00000.png, each time _clutter_do_pick() is called.
It's a rather crude way to debug the picking (realtime visualization in a
second stage would probably be nicer) but we've used this approach
successfully numerous times when debugging Clutter picking issues so it
makes sense to have a debug option for it.
The race we were experiencing in the X11 backends is apparently
back after the fix in commit 00a3c698.
This time, just delaying the setting of the SYNC_MATRICES flag
is not enough, so we can resume the use of a STAGE_IN_RESIZE
private flag.
This should also fix bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1668
This function should only need to be called in exceptional circumstances
since Cogl can normally determine internally when a flush is necessary.
As an optimization Cogl drawing functions may batch up primitives
internally, so if you are trying to use raw GL outside of Cogl you stand a
better chance of being successful if you ask Cogl to flush any batched
geometry before making your state changes.
cogl_flush() ensures that the underlying driver is issued all the commands
necessary to draw the batched primitives. It provides no guarantees about
when the driver will complete the rendering.
This provides no guarantees about the GL state upon returning and to avoid
confusing Cogl you should aim to restore any changes you make before
resuming use of Cogl.
If you are making state changes with the intention of affecting Cogl drawing
primitives you are 100% on your own since you stand a good chance of
conflicting with Cogl internals. For example clutter-gst which currently
uses direct GL calls to bind ARBfp programs will very likely break when Cogl
starts to use ARBfp programs internally for the material API, but for now it
can use cogl_flush() to at least ensure that the ARBfp program isn't applied
to additional primitives.
This does not provide a robust generalized solution supporting safe use of
raw GL, its use is very much discouraged.
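A hedged sketch of the intended usage pattern (the raw GL calls themselves
are elided):

  #include <cogl/cogl.h>

  static void
  draw_then_use_raw_gl (void)
  {
    cogl_set_source_color4ub (255, 0, 0, 255);
    cogl_rectangle (0, 0, 100, 100);

    /* make sure the batched primitives reach the driver before we
     * touch GL state behind Cogl's back */
    cogl_flush ();

    /* ... raw GL calls go here ... */
    /* ... restore any GL state changed above before using Cogl again ... */
  }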
Previously the journal was always flushed at the end of
_cogl_rectangles_with_multitexture_coords, (i.e. the end of any
cogl_rectangle* calls) but now we have broadened the potential for batching
geometry. In ideal circumstances we will only flush once per scene.
In summary the journal works like this:
When you use any of the cogl_rectangle* APIs then nothing is emitted to the
GPU at this point, we just log one or more quads into the journal. A
journal entry consists of the quad coordinates, an associated material
reference, and a modelview matrix. Ideally the journal only gets flushed
once at the end of a scene, but in fact there are things to consider that
may cause unwanted flushing, including:
- modifying materials mid-scene
This is because each quad in the journal has an associated material
reference (i.e. not copy), so if you try and modify a material that is
already referenced in the journal we force a flush first.
NOTE: For now this means you should avoid using cogl_set_source_color()
since that currently uses a single shared material. Later we
should change it to use a pool of materials that is recycled
when the journal is flushed.
- modifying any state that isn't currently logged, such as depth, fog and
backface culling enables.
The first thing that happens when flushing, is to upload all the vertex data
associated with the journal into a single VBO.
We then go through a process of splitting up the journal into batches that
have compatible state so they can be emitted to the GPU together. This is
currently broken up into 3 levels so we can stagger the state changes:
1) we break the journal up according to changes in the number of material layers
associated with logged quads. The number of layers in a material determines
the stride of the associated vertices, so we have to update our vertex
array offsets at this level. (i.e. calling gl{Vertex,Color},Pointer etc)
2) we further split batches up according to material compatibility. (e.g.
materials with different textures) We flush material state at this level.
3) Finally we split batches up according to modelview changes. At this level
we update the modelview matrix and emit the actual draw command.
This commit is largely about putting the initial design in-place; this will be
followed by other changes that take advantage of the extended batching.
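As a sketch of the workaround mentioned in the NOTE above, giving each quad
its own material (instead of mutating the shared material behind
cogl_set_source_color()) keeps the journal from being flushed mid-scene:

  #include <cogl/cogl.h>

  static void
  log_solid_quad (guint8 red, guint8 green, guint8 blue, guint8 alpha)
  {
    CoglHandle material = cogl_material_new ();

    cogl_material_set_color4ub (material, red, green, blue, alpha);
    cogl_set_source (material);
    cogl_rectangle (0, 0, 100, 100);

    /* the journal holds its own reference for as long as it needs it */
    cogl_handle_unref (material);
  }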
I've found this is something I do quite often when debugging rendering
problems since it's a simple way to wipe out lots of geometry and removes a
lot of unpredictable noise when logging geometry passing through the Cogl
journal.
The _clutter_context_get_default() function will automatically
create the main Clutter context; if we just want to check whether
Clutter has been initialized this will complicate matters, by
requiring a call to g_type_init() inside the client code.
Instead, we should simply provide an internal API that checks
whether the main Clutter context exists and if it has been
initialized, without any side effect.
The input device API is split halfway through the backends in a very
weird way. The data structures are private, as they should be, but most
of the information should be available in the main API since it's
generic enough.
The device type enumeration, for instance, should be common across
every backend; the accessors for device type and id should live in the
core API. The internal API should always use ClutterInputDevice and
not the private X11 implementation when dealing with public structures
like ClutterEvent.
By adding accessors for the device type and id, and by moving the
device type enumeration into the core API we can cut down the amount
of symbols private and/or visible only to the X11 backends; this way
when other backends start implementing multi-pointer support we can
share the same API across the code.
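With the accessors in the core API, backend-agnostic code can inspect the
device attached to an event. A small sketch, assuming the accessor names
used by released Clutter (clutter_event_get_device(),
clutter_input_device_get_device_id(),
clutter_input_device_get_device_type()):

  #include <clutter/clutter.h>

  static void
  report_event_device (const ClutterEvent *event)
  {
    ClutterInputDevice *device = clutter_event_get_device (event);

    if (device == NULL)
      return;

    g_print ("event from device %d (type %d)\n",
             clutter_input_device_get_device_id (device),
             clutter_input_device_get_device_type (device));
  }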
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.
The clutter_redraw() function is used by embedding toolkits to
force a redraw on a stage. Since everything is performed by
toggling a flag inside the Stage itself and then letting the
master clock advance, we need a ClutterStage method to ensure
that we start the master clock and redraw.
clutter-master-clock.c clutter-master-clock.h: When the
SYNC_TO_VBLANK feature is not available, wait for 1/frame_rate
seconds since the start of the last frame before drawing the next
frame. Add _clutter_master_clock_start_running() to abstract
the usage of g_main_context_wakeup()
clutter-stage.c: Add _clutter_master_clock_start_running()
clutter-main.c: Update docs for clutter_set_default_frame_rate()
clutter_get_default_frame_rate() to no longer talk about timeline
frame rates.
test-text-perf.c test-text.c: Set a frame rate of 1000fps so that
frame-rate limiting doesn't affect the result.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of trying to guess about which motion events are
extraneous, queue up all events until we process a frame.
This allows us to look ahead and reliably compress consecutive
sequences of motion events.
clutter-main.c: Feed received events to the stage for queueing.
Remove old compression code. Remove clutter_get_motion_events_frequency()
clutter_set_motion_events_frequency()
clutter-stage.c: Keep a queue of pending events.
clutter-master-clock.c: Add processing of queued events to the
clock source dispatch function.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Remove code to advance the master clock after drawing a frame; if
there are any running timelines the master clock will do another
frame by itself, and the clock will be advanced before running
that frame.
With this change, there is no point in queueing an extra frame
redraw after completing a timeline, since we are always advancing
the timeline *before* redrawing, so remove that code as well.
(This does mean that calling clutter_timeline_stop() won't implicitly
cause the stage to be redrawn; this doesn't seem like something
an app should rely on in any case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The clutter_redraw() function is used by libraries embedding
Clutter inside another toolkit, instead of queueing a redraw
on the embedded stage. This means that clutter_redraw() should
perform the same sequence of actions done by the redraw idle
callback.
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask and flags. This gives us room for expansion
without breaking API/ABI, and allows to encode more information to
the allocation process instead of just changes of absolute origin.
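For subclasses, the new vfunc shape looks like the following sketch
(MyActor and my_actor_parent_class are hypothetical names, assumed to come
from the usual G_DEFINE_TYPE boilerplate); the flags are simply forwarded
when allocating children:

  #include <clutter/clutter.h>

  /* assumes G_DEFINE_TYPE (MyActor, my_actor, CLUTTER_TYPE_ACTOR) */
  static void
  my_actor_allocate (ClutterActor           *self,
                     const ClutterActorBox  *box,
                     ClutterAllocationFlags  flags)
  {
    /* chain up to store the allocation */
    CLUTTER_ACTOR_CLASS (my_actor_parent_class)->allocate (self, box, flags);

    /* ... compute child_box for each child, then:
     *   clutter_actor_allocate (child, &child_box, flags);
     */
  }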
The setup_viewport() function should only be used by Clutter and
not by application code.
It can be emulated by changing the Stage size and perspective and
requeueing a redraw after calling clutter_stage_ensure_viewport().
Sometimes it is necessary for third party code to have a
function called during the redraw process, so that you can
update the scenegraph before it is painted.
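Assuming the entry point introduced for this is
clutter_threads_add_repaint_func(), as exposed by released Clutter, usage
would look roughly like:

  #include <clutter/clutter.h>

  /* runs before each stage paint; returning TRUE keeps it installed */
  static gboolean
  update_scene (gpointer data)
  {
    /* ... move actors, update layouts, etc. ... */
    return TRUE;
  }

  static guint
  install_repaint_hook (void)
  {
    return clutter_threads_add_repaint_func (update_scene, NULL, NULL);
  }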
cogl_clip_push_window_rect is implemented using GPU scissoring which allows
the GPU to cull anything that falls outside a given rectangle. Since in the
case of picking we only ever care about a single pixel we can get the GPU to
ignore all geometry that doesn't intersect that pixel and only rasterize for
one pixel.
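Conceptually the pick render then looks like this sketch (Clutter's actual
picking code differs in detail, and the read-back format follows the
picking commits discussed above):

  #include <cogl/cogl.h>
  #include <glib.h>

  static void
  pick_single_pixel (int pick_x, int pick_y, guint8 pixel[4])
  {
    /* scissor the pick render down to the one pixel we care about */
    cogl_clip_push_window_rect (pick_x, pick_y, 1, 1);
    /* ... paint the scene with pick colours ... */
    cogl_clip_pop ();

    /* read back the pixel under the pointer */
    cogl_read_pixels (pick_x, pick_y, 1, 1,
                      COGL_READ_PIXELS_COLOR_BUFFER,
                      COGL_PIXEL_FORMAT_RGBA_8888_PRE,
                      pixel);
  }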
The stencil buffer is always cleared the first time a clip is used
that needs it and the stencil test is disabled otherwise so there is
no need to clear before a paint.
Calling glReadPixels is bad enough in forcing us to synchronize the CPU with
the GPU, but glFinish has even stronger synchonization semantics than
glReadPixels which may negate some driver optimizations possible in
glReadPixels.
The master clock is currently advanced using a frame source driven
by the default frame rate. This breaks the sync to vblank because
the vblanking rate could be different than 60 Hz -- or it might be
completely disabled (e.g. with CLUTTER_VBLANK=none).
We should be using the main loop to check if we have timelines
playing, and if so queue a redraw on the stages we own.
We should also prepare the subsequent frame at the end of the redraw
process, so if there are new redraws we will have the scene already
in place.
This makes Clutter redraw at the maximum frame rate, which is
limited by the vblanking frequency.
With the recent change to internal floating point values, ClutterUnit
has become a redundant type, defined to be a float. All integer entry
points are being internally converted to floating point values to be
passed to the GL pipeline with the least amount of conversion.
ClutterUnit is thus exposed as just a "pixel with fractional bits",
and not -- as users might think -- as a generic, resolution- and
device-independent unit. Not that it was ever the case, but a definite
number of people were convinced it did provide this "feature", and were
flummoxed about the mere existence of this type.
So, having ClutterUnit exposed in the public API doubles the entry
points and has the following disadvantages:
- we have to maintain twice the amount of entry points in ClutterActor
- we still do an integer-to-float implicit conversion
- we introduce a weird impedance between pixels and "pixels with
fractional bits"
- language bindings will have to choose what to bind, and resort
to manually overriding the API
+ *except* for language bindings based on GObject-Introspection, as
they cannot do manual overrides, thus will replicate the entire
set of entry points
For these reasons, we should coalesce every Actor entry point for
pixels and for ClutterUnit into a single entry point taking a float,
like:
void clutter_actor_set_x (ClutterActor *self,
                          gfloat        x);

void clutter_actor_get_size (ClutterActor *self,
                             gfloat       *width,
                             gfloat       *height);

gfloat clutter_actor_get_height (ClutterActor *self);
etc.
The issues I have identified are:
- we'll have two cases of compiler warnings:
- printf() format of the return values from %d to %f
- clutter_actor_get_size() taking floats instead of unsigned ints
- we'll have a problem with varargs when passing an integer instead
of a floating point value, except on 64bit platforms where the
size of a float is the same as the size of an int
To be clear: the *intent* of the API should not change -- we still use
pixels everywhere -- but:
- we remove ambiguity in the API with regard to pixels and units
- we remove entry points we get to maintain for the whole 1.0
version of the API
- we make things simpler to bind for both manual language bindings
and automatic (gobject-introspection based) ones
- we have the simplest API possible while still exposing the
capabilities of the underlying GL implementation
Redundant clearing of depth and stencil buffers every render can be very
expensive, so cogl now gives control over which auxiliary buffers are
cleared.
Note: For now clutter continues to clear the color, depth and stencil buffer
each paint.
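A sketch of what the finer-grained control allows (a scene that never
touches the depth or stencil buffer can skip clearing them):

  #include <cogl/cogl.h>

  static void
  clear_colour_only (void)
  {
    CoglColor black;

    cogl_color_set_from_4ub (&black, 0, 0, 0, 255);

    /* only the colour buffer is cleared; add COGL_BUFFER_BIT_DEPTH and
     * COGL_BUFFER_BIT_STENCIL when the scene actually needs them */
    cogl_clear (&black, COGL_BUFFER_BIT_COLOR);
  }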
Currently, all timelines install a timeout inside the TimeoutPool
they share. Every time the main loop spins, all the timeouts are
updated. This, in turn, will usually lead to redraws being queued
on the stages.
This behaviour leads to the potential starvation of timelines and
to excessive redraws.
One lesson learned from game developers is that the scenegraph
should be prepared in its entirety before the GL paint sequence is
initiated. This means making sure that every ::new-frame signal
handler is called before clutter_redraw() is invoked.
In order to do so a TimeoutPool is not enough: we need a master
clock. The clock will be responsible for advancing all the active
timelines created inside a scene, but only when the stage is
being redrawn.
The sequence is:
+ queue_redraw() is invoked on an actor and bubbles up
to the stage
+ if no redraw() has already been scheduled, install an
idle handler with a known priority
+ inside the idle handler:
- advance the master clock, which will in turn advance
every playing timeline by the amount of milliseconds
elapsed since the last redraw; this will make every
playing timeline emit the ::new-frame signal
- queue a relayout
- call the redraw() method of the backend
This way we trade multiple timeouts with a single frame source
that only runs if a timeline is playing and queues redraws on
the various stages.
If we need to check that the layout sequence is correct in
terms of order of execution and with respect to caching, then
having a CLUTTER_DEBUG_LAYOUT debug flag would make things
easier.
Bug 1495 - Timelines run 4% short
Previously the timelines were timed by calculating the interval
between each frame stored as an integer number of milliseconds so some
precision is lost. For example, requesting 60 frames per second gets
converted to 16 ms per frame which is actually 62.5 frames per
second. This makes the timeline shorter by 4%.
This patch merges the common code for timing from the timeout pools
and frame sources into an internal clutter-timeout-interval file. This
stores the interval directly as the FPS and counts the number of
frames that have been reached instead of the elapsed time.
The fog and perspective API is currently split in two parts:
- the floating point version, using values
- the fixed point version, using structures
The relative properties are using the structure types, since they
are meant to set multiple values at the same time. Instead of
using bare values, the whole API should be coalesced into two
simple calls using structures to match the GObject properties.
Thus:
clutter_stage_set_fog (ClutterStage*, const ClutterFog*)
clutter_stage_get_fog (ClutterStage*, ClutterFog*)
clutter_stage_set_perspective (ClutterStage*, const ClutterPerspective*)
clutter_stage_get_perspective (ClutterStage*, ClutterPerspective*)
Which supersedes the fixed point and floating point variants.
More importantly, both ClutterFog and ClutterPerspective should
use floating point values, since that's what gets passed to
COGL anyway.
ClutterFog should also drop the "density" member, since ClutterStage
only allows linear fog; non-linear fog distribution can be achieved
using a signal handler and calling cogl_set_fog() directly; this keeps
the API compact yet extensible.
Finally, there is no ClutterStage:fog property, so it should be added.
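Using the coalesced calls would then look like this sketch (field names
assumed to follow the ClutterPerspective and ClutterFog structs):

  #include <clutter/clutter.h>

  static void
  setup_stage_projection (ClutterStage *stage)
  {
    ClutterPerspective perspective = { .fovy   = 60.0f,
                                       .aspect = 16.0f / 9.0f,
                                       .z_near = 0.1f,
                                       .z_far  = 100.0f };
    ClutterFog fog = { .z_near = 10.0f, .z_far = 50.0f };

    clutter_stage_set_perspective (stage, &perspective);
    clutter_stage_set_fog (stage, &fog);
  }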
Grabs are an entirely evil way to override the whole event delivery
machinery that Clutter has in place.
A pointer grab can be effectively replaced by a much more reliable
::captured-event signal handler, for instance.
Sometimes, grabs are a necessary evil -- and that is why Clutter
exposes them in the API; that should not fool anyone into thinking
that they should be used unless strictly necessary.
When event delivery is invoked for synthetic events through
clutter_do_event() from inside an event handler, Clutter used to
silently ignore them; this warning will hopefully help resolve
some issues.
The font options accessors in ClutterBackend only deal with const
cairo_font_options_t values, since:
- set_font_options() will copy the font options
- get_font_options() will return a pointer to the internal
font options
Not using const in these cases makes the API confusing and might lead
to erroneous calls to cairo_font_options_destroy().
Instead of using a fixed size array for storing the scenegraph sub-nodes
during event delivery we should use a GPtrArray. The benefits are:
- a smaller allocation
- no undocumented (yet binding) constraint on the scenegraph size
The environment variable to disable mipmapping should also be
a command line switch, and be handled like the rest of Clutter's
environment variables/command line switches.
Clutter is able to show debug messages written using the CLUTTER_NOTE()
macro at runtime, either by using an environment variable:
CLUTTER_DEBUG=...
or by using a command line switch:
--clutter-debug=...
--clutter-no-debug=...
Both are parsed during the initialization process by using the
GOption API.
COGL would benefit from having the same support.
In order to do this, we need a cogl_get_option_group() function in
COGL that sets up a GOptionGroup for COGL and adds a pre-parse hook
that will check the COGL_DEBUG environment variable. The OptionGroup
will also install two command line switches:
--cogl-debug
--cogl-no-debug
With the same semantics of the Clutter ones.
During Clutter initialization, the COGL option group will be attached
to the GOptionContext used to parse the command line options passed
to a Clutter application.
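For code that parses its own command line, attaching the group would look
something like this sketch (Clutter applications get the equivalent for
free via clutter_init()):

  #include <cogl/cogl.h>
  #include <glib.h>

  static gboolean
  parse_arguments (int *argc, char ***argv, GError **error)
  {
    GOptionContext *context = g_option_context_new (NULL);
    gboolean res;

    /* installs --cogl-debug/--cogl-no-debug and the COGL_DEBUG
     * pre-parse hook */
    g_option_context_add_group (context, cogl_get_option_group ());

    res = g_option_context_parse (context, argc, argv, error);
    g_option_context_free (context);

    return res;
  }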
Every debug message written using:
COGL_NOTE (SECTION, "message format", arguments);
will then be printed only if SECTION was enabled at runtime.
This whole machinery, like the equivalent one in Clutter, depends on
a compile time switch, COGL_ENABLE_DEBUG, which is enabled at the same
time as CLUTTER_ENABLE_DEBUG. Having two different symbols allows
greater granularity.