Some apps or use cases don't need to clear the stage on immediate-mode
rendering GPUs: a media player playing a fullscreen video, or a
tile-based game, for instance.
These apps are redrawing the whole screen, so we can avoid clearing the
color buffer when preparing to paint the stage, since there is no
blending with the stage color being performed.
We can add a private set of hints to ClutterStage, and expose accessors
for each potential hint; the first hint is the 'no-clear' one.
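As a rough sketch of how an application would use it (the accessor name
here is an assumption based on the 'no-clear' hint described above):

  /* a fullscreen video player that repaints the whole stage every
   * frame asks the backend to skip clearing the color buffer */
  clutter_stage_set_no_clear_hint (CLUTTER_STAGE (stage), TRUE);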
http://bugzilla.clutter-project.org/show_bug.cgi?id=2058
The marshallers we use for the signals are declared in a private header,
and it stands to reason that they should also be hidden in the shared
object by using the common '_' prefix. We are also using some
g_cclosure_marshal_* symbols directly from GLib, instead of consistently
using the clutter_marshal_* symbols.
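A minimal sketch of what that looks like at a signal definition site
(the signal name and class offset here are only illustrative):

  /* the marshaller comes from the private, underscore-prefixed
   * generated header instead of the public g_cclosure_marshal_*
   * symbols exported by GLib */
  stage_signals[ACTIVATE] =
    g_signal_new ("activate",
                  G_TYPE_FROM_CLASS (klass),
                  G_SIGNAL_RUN_LAST,
                  G_STRUCT_OFFSET (ClutterStageClass, activate),
                  NULL, NULL,
                  _clutter_marshal_VOID__VOID,
                  G_TYPE_NONE, 0);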
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
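A rough sketch of the intended usage from an actor implementation (the
clip type and the exact signature of the internal function are
assumptions for illustration):

  /* queue a redraw limited to the damaged sub-rectangle of the
   * actor, expressed in actor coordinates */
  static void
  my_actor_damage (ClutterActor *actor,
                   int x, int y, int width, int height)
  {
    ClutterGeometry clip = { x, y, width, height };

    _clutter_actor_queue_clipped_redraw (actor, &clip);
  }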
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» queuing multiple clipped redraws will result in the bounding box of
all the clip rectangles being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» Note: no empirical data was used to come up with this threshold so
we may need to tune this.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted you now also
need to override the signal handler and queue your own redraw.
Technically this is a semantic break, but it's assumed that no one
is currently doing this.
This still leaves a few unsolved issues with regard to optimizing
sub-stage redraws that need to be addressed in further work, so this can
only be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified, any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. It is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor, because actors have no way to report a "paint box". (Remember
that actors can draw outside their allocation, and actors with depth may
also be projected outside of their allocation.)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
* stage-min-size-rework:
docs: Update minimum size accessors
actor: Use the TOPLEVEL flag instead of a type check
[stage] Use min-width/height props for min size
Since using addresses that might change is something that the FSF
finally acknowledges as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI
for getting the license in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
Instead of shadowing these properties with different properties with the
same names on stage, actually use them. Behaviour should be identical,
except the minimum stage size can now be enforced by setting the
min-width/height properties as well as using the set_minimum_size
function.
The motion event compression should take into account the device field
of the event; that is, we should only compress motion events coming from
the same device.
The introduction of the StageManager in 0.8 implied that the first Stage
instance to be created was automatically assigned the status of "default
stage". This was all well and good, since the default stage was created
behind the curtains by the initialization sequence.
Now that the initialization sequence does not create a default stage any
longer, it means that the first stage created using clutter_stage_new()
gets to be the default, and all special and warm and fuzzy - which also
means that the first stage created by clutter_stage_new() cannot be
destroyed or handled as any other stage. Whoopsie.
Let's go back to the old semantics: the stage created by the first
invocation of clutter_stage_get_default() is the default stage, and
nothing else can be set as default. One day we'll be able to break the
API and the whole default stage business will be a thing of the past.
Setting/unsetting fullscreen on a mapped or unmapped window now works
correctly.
If you unfullscreen a window that was initially full-screened, it will
unset the fullscreen hint and the WM will likely push the size down to
the largest valid size.
If the window was previously un-fullscreened, Clutter will restore the
previous size.
Fullscreening also now works if the WM switches the hint without the
application's knowledge (as happens when you resize a window to the size
of the screen, for example, with stock metacity).
When we resize, we relied on the stage's allocate to re-initialise the
GL viewport. Unfortunately, if we resized within Clutter, the new size
was cached before the window was actually resized, so glViewport wasn't
being called after resizing (some of the time; it's a race condition).
Change the way resizing works slightly so that we only resize when the
geometry size doesn't match our preferred size, and queue a relayout on
ConfigureNotify so the glViewport gets called.
Also change window creation slightly so that setting the size of a
window before it's realized works correctly.
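A minimal sketch of the ConfigureNotify path (struct and field names are
assumed here, not taken from the actual Clutter source):

  static void
  handle_configure_notify (ClutterStageX11 *stage_x11,
                           ClutterStage    *stage,
                           XConfigureEvent *xce)
  {
    if (xce->width != stage_x11->xwin_width ||
        xce->height != stage_x11->xwin_height)
      {
        stage_x11->xwin_width  = xce->width;
        stage_x11->xwin_height = xce->height;

        /* make allocate() run again so glViewport() is called with
         * the real window size instead of the cached one */
        clutter_actor_queue_relayout (CLUTTER_ACTOR (stage));
      }
  }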
The master clock might have a Stage during its destruction phase,
without a StageWindow attached to it. If this happens and we try
to dereference the StageWindow to get its class and call a virtual
function we might experience some slight turbulence and... then...
explode.
http://bugzilla.openedhand.com/show_bug.cgi?id=1987
This replaces code like this:

  if (CLUTTER_ACTOR_IS_VISIBLE (self))
    clutter_actor_queue_redraw (self);

with:

  clutter_actor_queue_redraw (self);
clutter_actor_queue_redraw internally knows what can be optimized when
the actor is not visible, but it also knows that the queue_redraw signal
must always be sent in case a ClutterClone is cloning a hidden actor.
If your OpenGL driver supports GLX_INTEL_swap_event, then when
glXSwapBuffers is called it returns immediately and an XEvent is sent
when the actual swap has finished.
Clutter can use the events that notify swap completion as a means to
throttle rendering in the master clock without blocking the CPU and so it
should help improve the performance of CPU bound applications.
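A sketch of the idea, assuming the driver advertises GLX_INTEL_swap_event
and that swap events have been selected on the window:

  /* glXSwapBuffers() returns immediately; the driver later delivers
   * an X event once the swap has actually happened */
  static gboolean
  is_swap_complete (XEvent *xevent, int glx_event_base)
  {
    if (xevent->type == glx_event_base + GLX_BufferSwapComplete)
      {
        /* the previous frame is on screen, so the master clock can
         * schedule the next redraw without having blocked the CPU */
        return TRUE;
      }

    return FALSE;
  }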
We want to set the default size without triggering the layout machinery,
so change the window creation process slightly so we start with a
640x480 window.
Due to the way the new sizing works, ClutterStage must set its size in
init (to maintain old behaviour) and the properties on the X11 stage
must be initialised to 1x1 so that it actually goes ahead with the
resize.
Fixes stages that aren't user resizable and have no size set from
appearing at 1x1.
We want the actual window geometry in clutter_stage_set_minimum_size,
not the set size. Now that the geometry function has been changed to do
what it says, use it.
Get the current size of the stage correctly in
clutter_stage_set_minimum_size. The get_geometry StageWindow function is
not equivalent to the current size; use clutter_actor_get_size() instead.
Instead of creating the default stage during initialization we can
now safely create it whenever clutter_stage_get_default() is called.
To maintain the invariant, the default stage is immediately realized
by Clutter itself.
* device-manager: (37 commits)
x11: Re-enable XI1 extension keyboards
x11: Always handle core device events before XI events
docs: Documentation fixes for DeviceManager
device-manager: Fix the signals definition
docs: Add sections for InputDevice and DeviceManager
docs: Add clutter_input_device_get_device_name()
tests: Print out the device details on motion
Always register core devices
device: Remove unused is_default member
win32: Experimental implementation of device support
tests: Print the device name, as well as its Id
x11: Fill out the :name property of the InputDevices
device: Add the :name property to InputDevice
x11: Store core devices on the X11 Backend singleton
device: Unset the cursor actor when leaving the stage
device: Add pointer actor getter
x11: Discard the LeaveNotify for off-stage ButtonRelease
device: Do not overwrite the stage for an InputDevice
event: Off-stage button releases have a click count of 1
event: Scroll events do not have click count
...
ClutterStage has both set_key_focus() and get_key_focus() methods, but
there is no :key-focus property. This means that it is not possible to
get notifications when the key-focus has changed except by connecting to
both the ::key-focus-in and ::key-focus-out signals and doing additional
bookkeeping.
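With such a property in place, a sketch of the simpler notification path
would be (handler names are illustrative):

  static void
  on_key_focus_changed (GObject    *gobject,
                        GParamSpec *pspec,
                        gpointer    user_data)
  {
    ClutterActor *focus =
      clutter_stage_get_key_focus (CLUTTER_STAGE (gobject));

    /* the stage itself is returned when no actor has key focus */
    g_print ("key focus is now on: %s\n", G_OBJECT_TYPE_NAME (focus));
  }

  static void
  watch_key_focus (ClutterStage *stage)
  {
    g_signal_connect (stage, "notify::key-focus",
                      G_CALLBACK (on_key_focus_changed), NULL);
  }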
http://bugzilla.openedhand.com/show_bug.cgi?id=1956
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The Stage field of an InputDevice is set by the backend, whenever the
pointer enters or leaves the Stage. The Stage should not overwrite the
stage field for every event it processes.
The InputDevice object stores pointer coordinates, state, stage and
the actor under the cursor, so if the current backend provides us with
one attached to the Event structure then we want the InputDevice itself
to update its state and give us the ClutterActor underneath the
pointer's cursor.
Using the ::event signal to match the CLUTTER_DELETE event type (and
block the stage destruction) can be costly, since it means checking
every single event.
The ::delete-event signal is similar in spirit to any other specialized
signal handler dealing with events, and retains the same semantics.
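A minimal sketch of the intended usage, assuming the handler follows the
usual event signal signature and that returning TRUE stops the default
handler:

  static gboolean
  on_delete_event (ClutterStage *stage,
                   ClutterEvent *event,
                   gpointer      user_data)
  {
    /* block the default handler so the stage is not destroyed when
     * the window manager asks to close the window */
    return TRUE;
  }

  static void
  block_stage_destruction (ClutterStage *stage)
  {
    g_signal_connect (stage, "delete-event",
                      G_CALLBACK (on_delete_event), NULL);
  }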
UProf is a small library that aims to help applications/libraries provide
domain specific reports about performance. It currently provides high
precision timer primitives (rdtsc on x86) and simple counters, the ability
to link statistics between optional components at runtime and makes report
generation easy.
This adds initial accounting for:
- Total mainloop time
- Painting
- Picking
- Layouting
- Idle time
The timing done by uprof is of wall clock time. It's not based on
stochastic samples; we simply sample a counter at the start and end. When
dealing with the complexities of GPU drivers and with various kinds of IO,
this form of profiling can be quite enlightening, as it is able to
represent where your application is blocking, unlike tools such as sysprof.
To enable uprof accounting you must configure Clutter with --enable-profile
and have uprof-0.2 installed from git://git.moblin.org/uprof
If you want to see a report of statistics when Clutter applications exit you
should export CLUTTER_PROFILE_OUTPUT_REPORT=1 before running them.
Just a final word of caution: this stuff is new, and the manual nature of
adding uprof instrumentation means it is prone to some errors when
modifying code. This just means that when you question strange results,
don't rule out a mistake in the instrumentation. Obviously, though, we
hope the benefits outweigh the risks, e.g. by focusing on a few key stats
and by having automatic reporting.
The ClutterStage:use-alpha property is used to let a stage know that it
should honour the alpha component of the ClutterStage:color property.
If :use-alpha is set to FALSE the stage always uses the full opacity
when clearing itself before a paint(); otherwise, the alpha value is
used.
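As a sketch, a composited application could then clear the stage with a
translucent color (accessor names assume the usual property-to-setter
mapping):

  static void
  make_stage_translucent (ClutterActor *stage)
  {
    ClutterColor translucent_black = { 0x00, 0x00, 0x00, 0x80 };

    /* honour the alpha channel of :color when clearing the stage */
    clutter_stage_set_use_alpha (CLUTTER_STAGE (stage), TRUE);
    clutter_stage_set_color (CLUTTER_STAGE (stage), &translucent_black);
  }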
The stage's pick id can be written to the framebuffer when we call
cogl_clear, so there's no need for the stage to also chain up in its pick
function, which would result in clutter-actor.c also emitting a rectangle
for the stage.
There is a new internal Cogl function called _cogl_check_driver_valid
which looks at the value of the GL_VERSION string to determine whether
the driver is supported. Clutter now calls this after the stage is
realized. If it fails then the stage is marked as unrealized and a
warning is shown.
_cogl_features_init now also checks the version number before getting
the function pointers for glBlendFuncSeparate and
glBlendEquationSeparate. It is not safe to just check for the presence
of the functions because some drivers may define the function without
fully implementing the spec.
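Roughly, the check looks something like this (the context field and the
cast are assumptions for illustration):

  /* only resolve the separate blend entry points when the driver
   * advertises at least GL 1.4; some drivers export the symbol
   * without fully implementing it */
  if (gl_major > 1 || (gl_major == 1 && gl_minor >= 4))
    ctx->pf_glBlendFuncSeparate =
      (void (*) (GLenum, GLenum, GLenum, GLenum))
      cogl_get_proc_address ("glBlendFuncSeparate");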
The GLES version of _cogl_check_driver_valid just always returns TRUE
because there are no version requirements yet.
Eventually the function could also check for mandatory extensions if
there were any.
http://bugzilla.openedhand.com/show_bug.cgi?id=1875
We cannot process events for a stage that has been destroyed, so we
should make sure that the events for the stage are removed from the
global event queue during dispose.
http://bugzilla.openedhand.com/show_bug.cgi?id=1882
Because Cogl defines the origin of viewport and window coordinates to be
top-left, it always needs to know the size of the current window so that Cogl
window/viewport coordinates can be transformed into OpenGL coordinates.
This also fixes cogl_read_pixels to use the current draw buffer height
instead of the viewport height to determine the OpenGL y coordinate to use
for glReadPixels.
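A sketch of the flip involved (not the actual Cogl source):

  /* convert a Cogl y coordinate (top-left origin) into the GL window
   * coordinate (bottom-left origin) that glReadPixels expects, using
   * the draw buffer height rather than the viewport height */
  static void
  read_pixels_flipped (int x, int y, int width, int height,
                       int draw_buffer_height, guint8 *pixels)
  {
    int gl_y = draw_buffer_height - (y + height);

    glReadPixels (x, gl_y, width, height,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels);
  }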
The only backend that tried to implement offscreen stages was the GLX
backend, and even this has apparently been broken for some time without
anyone noticing. The property still remains, and since the property
already clearly states that it may not work, I don't expect anyone to
notice.
This simplifies quite a bit of the GLX code, which is very desirable from
the POV that we want to start migrating window system code down to Cogl,
and the simpler the code is the more straightforward this work will be.
In the future, when Cogl has a nicely designed API for framebuffer
objects, re-implementing offscreen stages cleanly for *all* backends
should be quite straightforward.
Instead of using ClutterActor for the base class of the Stage
implementation we should extend the StageWindow interface with
the required bits (geometry, realization) and use a simple object
class.
This requires a wee bit of changes across Backend, Stage and
StageWindow, even though it's mostly re-shuffling.
First of all, StageWindow should get new virtual functions:
* geometry:
- resize()
- get_geometry()
* realization
- realize()
- unrealize()
This covers all the bits that we use from ClutterActor currently
inside the stage implementations.
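A sketch of just those bits of the extended interface (the member layout
is assumed from the list above, not the full ClutterStageWindowIface):

  struct _ClutterStageWindowIface
  {
    GTypeInterface parent_iface;

    /* geometry */
    void     (* resize)       (ClutterStageWindow *window,
                               gint                width,
                               gint                height);
    void     (* get_geometry) (ClutterStageWindow *window,
                               ClutterGeometry    *geometry);

    /* realization */
    gboolean (* realize)      (ClutterStageWindow *window);
    void     (* unrealize)    (ClutterStageWindow *window);
  };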
The ClutterBackend::create_stage() virtual function should create
a StageWindow, and not an Actor (it should always have been; the
fact that it returned an Actor was a leak of the black magic going
on underneath). Since we never guaranteed ABI compatibility for
the Backend class, this is not a problem.
Internally to ClutterStage we can finally drop the shenanigans of
setting/unsetting actor flags on the implementation: if the realization
succeeds, for instance, we set the REALIZED flag on the Stage and
we're done.
As an initial proof of concept, the X11 and GLX stage implementations
have been ported to the New World Order(tm) and show no regressions.
The Stage:offscreen property hasn't been tested for ages, and it
should really just use an FBO, not indirect rendering on an X Pixmap
(and only on X11). There are better ways to get the current
contents of a ClutterStage as a buffer anyway.
We might remove it at a later date, or actually make it work
properly.
It might be desirable for some applications and/or platforms to get
every motion event that was delivered to Clutter from the windowing
backend. By adding a per-stage flag we can bypass the throttling
done when processing the events.
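As a sketch of the opt-out (assuming a
clutter_stage_set_throttle_motion_events() style accessor for the flag):

  /* deliver every motion event from the windowing backend to the
   * stage instead of compressing them per frame */
  clutter_stage_set_throttle_motion_events (CLUTTER_STAGE (stage), FALSE);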
http://bugzilla.openedhand.com/show_bug.cgi?id=1665
A lot of applications change the size of the stage from the default
before the stage is initially shown. The size change won't take effect
until the first allocation run. However we want the window to be at
the correct size when we first map it so we should force an allocation
run before showing the stage.
There was an explicit call to XResizeWindow in
clutter_stage_x11_show. This is not needed anymore because
XResizeWindow will already have been called by the allocate method.
To allow for flushing of batched geometry within Cogl we can't support users
directly calling glReadPixels. glReadPixels is also awkward, not least
because it returns upside down image data.
All the unit tests have been switched over and clutter_stage_read_pixels now
sits on top of this too.
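A short usage sketch (the pixel format chosen here is just illustrative):

  /* read back a region of the stage through Cogl; batched geometry
   * is flushed first and the data comes back with a top-left origin
   * rather than upside down */
  guint8 *pixels = g_malloc (width * height * 4);

  cogl_read_pixels (x, y, width, height,
                    COGL_READ_PIXELS_COLOR_BUFFER,
                    COGL_PIXEL_FORMAT_RGBA_8888,
                    pixels);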
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.
Merge branch 'premultiplication'
[cogl-texture docs] Improves the documentation of the internal_format args
[test-premult] Adds a unit test for texture upload premultiplication semantics
[fog] Document that fogging only works with opaque or unmultipled colors
[test-blend-strings] Explicitly request RGBA_888 tex format for test textures
[premultiplication] Be more conservative with what data gets premultiplied
[bitmap] Fixes _cogl_bitmap_fallback_unpremult
[cogl-bitmap] Fix minor copy and paste error in _cogl_bitmap_fallback_premult
Avoid unnecessary unpremultiplication when saving to local data
Don't unpremultiply Cairo data
Default to a blend function that expects premultiplied colors
Implement premultiplication for CoglBitmap
Use correct texture format for pixmap textures and FBO's
Add cogl_color_premultiply()
The fixed function fogging provided by OpenGL only works with unmultiplied
colors (or if the color has an alpha of 1.0) so since we now premultiply
textures and colors by default, a note to this effect has been added to
clutter_stage_set_fog and cogl_set_fog.
test-depth.c no longer uses clutter_stage_set_fog for this reason.
In the future when we can depend on fragment shaders we should also be
able to support fogging of premultiplied primitives.