Since we use Cogl for context creation, we can now provide a default
implementation that should just work, plus a couple of hooks to allow
plugging into the creation sequence for platforms supported by Cogl
that require special handling, like foreign displays or alpha-enabled
swap chains.
The various backends now have two choices: either replace
create_context() in its entirety, or plug into the default context
creation.
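As an illustration of how the pieces fit together, the default
sequence in terms of the experimental Cogl 2.0 API looks roughly like
the sketch below; the hook points are marked with comments, and their
exact placement is an assumption rather than the literal Clutter code.

  #define COGL_ENABLE_EXPERIMENTAL_2_0_API
  #include <cogl/cogl.h>

  /* Sketch of a default creation sequence using the 1.x GError-based
   * signatures; the comments mark where a backend-specific hook could
   * plug in. This illustrates the idea, not the actual ClutterBackend
   * code.
   */
  static CoglContext *
  create_context_default (GError **error)
  {
    CoglRenderer *renderer;
    CoglSwapChain *swap_chain;
    CoglOnscreenTemplate *onscreen_template;
    CoglDisplay *display;

    renderer = cogl_renderer_new ();
    /* hook: an X11 backend could call
     * cogl_xlib_renderer_set_foreign_display() here */
    if (!cogl_renderer_connect (renderer, error))
      return NULL;

    swap_chain = cogl_swap_chain_new ();
    /* hook: a backend could call cogl_swap_chain_set_has_alpha() here
     * to request an alpha-enabled swap chain */

    onscreen_template = cogl_onscreen_template_new (swap_chain);
    display = cogl_display_new (renderer, onscreen_template);
    if (!cogl_display_setup (display, error))
      return NULL;

    return cogl_context_new (display, error);
  }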
Input backends are, in some cases, independent from the windowing system
backends; we can initialize input handling using a model similar to what
we use for windowing backends, including an environment variable and
compile-/run-time checks.
This model allows us to remove the backend-specific init_events(), and
use a generic implementation directly inside the base ClutterBackend
class, thus further reducing the backend-specific code that every
platform has to implement.
This requires some minor surgery to every single backend, to make sure
that the function exposed to initialize the event loop is similar and
performs roughly the same operations.
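For illustration only, the kind of selection logic this enables looks
like the sketch below; the environment variable name and the backend
names are assumptions used for the example, not necessarily what the
code ships.

  #include <glib.h>

  /* Hypothetical run-time input backend selection shared by the base
   * ClutterBackend class: honour an environment variable first, then
   * fall back to whatever was enabled at compile time.
   */
  static const gchar *
  pick_input_backend (void)
  {
    const gchar *backend = g_getenv ("CLUTTER_INPUT_BACKEND");

    if (backend != NULL && *backend != '\0')
      return backend;

  #ifdef HAVE_EVDEV
    return "evdev";
  #elif defined (HAVE_TSLIB)
    return "tslib";
  #else
    return "null";
  #endif
  }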
This adds experimental API to be able to get the CoglContext associated
with the ClutterBackend. The CoglContext is required to use some of the
experimental 2.0 Cogl API.
Note: Since CoglContext is itself experimental API, this API should be
considered experimental too. This patch introduces a
CLUTTER_ENABLE_EXPERIMENTAL_API #ifdef guard which anyone wanting to
use this API must define, so it's explicitly clear to developers that
they are playing with experimental API.
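To make the usage concrete, a consumer would do something along these
lines (a sketch; the accessor name follows what shipped in the 1.8
series, and the extra Cogl define may or may not be needed depending
on the Cogl version):

  /* Define the guards before including clutter.h to opt in to the
   * experimental API; COGL_ENABLE_EXPERIMENTAL_API may be needed for
   * the CoglContext type itself.
   */
  #define CLUTTER_ENABLE_EXPERIMENTAL_API
  #define COGL_ENABLE_EXPERIMENTAL_API
  #include <clutter/clutter.h>

  static CoglContext *
  get_backend_cogl_context (void)
  {
    ClutterBackend *backend = clutter_get_default_backend ();

    return clutter_backend_get_cogl_context (backend);
  }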
Note: This API is not yet supported on OSX because OSX still uses the
stub Cogl winsys and the Clutter backend doesn't explicitly create a
CoglContext.
Note: even though this is experimental API, we still promise that it
won't be changed during a stable release cycle. This means, for
example, that you can depend on this for the lifetime of the
clutter-1.8 stable release cycle.
The G_CONST_RETURN define in GLib is, and has always been, a bit fuzzy.
We always used it to conform to the platform, at least for public-facing
API.
At first I assumed it had something to do with brain-damaged compilers
or with weird platforms where const was not really supported; sadly,
it's something much, much worse: it's a define that can be toggled at
compile-time to remove const from the signature of public API. This is
a truly terrifying feature that I assume was added in the past century,
and whose inception clearly had something to do with massive doses of
absinthe and opium, because any other explanation would make the
existence of such a feature even worse than assuming drugs had anything
to do with it.
Anyway, and pleasing the gods, this dubious feature is being
removed/deprecated in GLib; see bug:
https://bugzilla.gnome.org/show_bug.cgi?id=644611
Before deprecation, though, we should just remove its usage from the
whole API. We should especially remove its usage from Cally's internals,
since there it never made sense in the first place.
This makes it possible to build Clutter against a standalone build of
Cogl instead of having the Clutter build traverse into the clutter/cogl
subdirectory.
This migrates all the GLX window system code down from the Clutter
backend code into a Cogl winsys. Moving OpenGL window system binding
code down from Clutter into Cogl is the biggest blocker to having Cogl
become a standalone 3D graphics library, so this is an important step in
that direction.
This gives us a way to clearly track the internal Cogl API that Clutter
depends on. The aim is to split Cogl out from Clutter into a standalone
3D graphics API, and eventually we want to get rid of any private
interfaces for Clutter, so it's useful to have a handle on that task.
Actually, it's not as bad as I was expecting, though.
The GQueue that stores the global events queue is handled all over the
place:
• the structure is created in _clutter_backend_init_events();
• the queue is handled in clutter-event.c, clutter-stage.c and
clutter-backend.c;
• ClutterStage::dispose cleans up the events associated with
the stage being destroyed;
• the queue is destroyed in ClutterBackend::dispose.
Since we need access to it in different places we cannot put it
inside ClutterBackendPrivate, hence it should stay in ClutterMainContext;
but we should still manage it from just one place, preferably through
the ClutterEvent API only.
In the future, we want event translators to be the way to handle events
in backends. For this reason, they should be a part of the base abstract
ClutterBackend class, and not an X11-only concept.
Instead of asking all backends to do that for us, we can call
ClutterStageWindow::redraw ourselves by default.
This changeset fixes all backends to actually do the right thing, and
moves the stage implementation redraw inside the ClutterStageWindow
implementation itself.
This is a lump commit that is fairly difficult to break down without
either breaking bisecting or breaking the test cases.
The new design for handling X11 event translation works this way:
- ClutterBackend::translate_event() has been added as the central
point used by a ClutterBackend implementation to translate a
native event into a ClutterEvent;
- ClutterEventTranslator is a private interface that should be
implemented by backend-specific objects, like stage
implementations and ClutterDeviceManager sub-classes, and
allows dealing with class-specific event translation;
- ClutterStageX11 implements EventTranslator, and deals with the
stage-relative X11 events coming from the X11 event source;
- ClutterStageGLX overrides EventTranslator, in order to
deal with the INTEL_GLX_swap_event extension, and it chains up
to the X11 default implementation;
- ClutterDeviceManagerX11 has been split into two separate classes,
one that deals with core and (optionally) XI1 events, and the
other that deals with XI2 events; the selection is done at run-time,
since the core+XI1 and XI2 mechanisms are mutually exclusive.
All the other backends we officially support still use their own
custom event source and translation function, but the end goal is to
migrate them to the translate_event() virtual function, and have the
event source be a shared part of Clutter core.
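As a rough sketch of the dispatch this enables, using stand-in names
rather than the real private interface:

  #include <clutter/clutter.h>

  /* Illustrative only: the enum values and the function-pointer type
   * below are stand-ins for the private ClutterEventTranslator
   * interface, not its real declarations.
   */
  typedef enum
  {
    TRANSLATE_CONTINUE, /* not handled, try the next translator */
    TRANSLATE_REMOVE,   /* handled, but nothing to queue */
    TRANSLATE_QUEUE     /* handled, queue the translated event */
  } TranslateReturn;

  typedef TranslateReturn (* TranslateEventFunc) (gpointer      native_event,
                                                  ClutterEvent *event);

  /* Sketch of the dispatch done by ClutterBackend::translate_event():
   * walk the registered translators until one claims the native event.
   */
  static gboolean
  translate_native_event (GList        *translators,
                          gpointer      native_event,
                          ClutterEvent *event)
  {
    GList *l;

    for (l = translators; l != NULL; l = l->next)
      {
        TranslateEventFunc translate = (TranslateEventFunc) l->data;

        switch (translate (native_event, event))
          {
          case TRANSLATE_QUEUE:
            return TRUE;  /* the caller puts the event on the queue */

          case TRANSLATE_REMOVE:
            return FALSE; /* swallow the native event */

          case TRANSLATE_CONTINUE:
            break;        /* try the next translator */
          }
      }

    return FALSE;
  }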
Move the private Backend API to a separate header.
This also allows us to finally move the class vtable and instance
structure to a separate file and plug the visibility hole that left
the Backend class bare for everyone to poke into.
When building actor relative transforms, instead of using the matrix
stack to combine transformations and making assumptions about what is
currently on the stack we now just explicitly initialize an identity
matrix and apply transforms to that.
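In other words, the relative transform is now built along these lines
(a simplified sketch using the public CoglMatrix API, not the exact
Clutter code):

  #include <cogl/cogl.h>

  /* Simplified sketch: start from the identity and accumulate the
   * actor's transformations explicitly, instead of relying on whatever
   * happens to be on the matrix stack.
   */
  static void
  get_relative_transform (CoglMatrix *matrix,
                          float       tx,
                          float       ty,
                          float       angle_z,
                          float       scale_x,
                          float       scale_y)
  {
    cogl_matrix_init_identity (matrix);

    cogl_matrix_translate (matrix, tx, ty, 0.0f);
    cogl_matrix_rotate (matrix, angle_z, 0.0f, 0.0f, 1.0f);
    cogl_matrix_scale (matrix, scale_x, scale_y, 1.0f);
  }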
This removes the full_vertex_t typedef for internal transformation code
and we just use ClutterVertex.
ClutterStage now implements apply_transform like any other actor, and
the code we had in _cogl_setup_viewport has been moved into the
stage's apply_transform instead.
ClutterStage now tracks an explicit projection matrix and viewport
geometry. The projection matrix is derived from the perspective whenever
that changes, and the viewport is updated when the stage gets a new
allocation. The SYNC_MATRICES mechanism has been removed in favour of
_clutter_stage_dirty_viewport/projection() APIs that get used when
switching between multiple stages to ensure cogl has the latest
information about the onscreen framebuffer.
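Deriving the projection from the perspective then boils down to
something like this (a sketch of the idea, not the exact stage code):

  #include <clutter/clutter.h>

  /* Sketch: rebuild the stage projection matrix whenever the
   * perspective changes, and keep it around so it can simply be
   * re-uploaded when switching between stages.
   */
  static void
  update_projection (const ClutterPerspective *perspective,
                     CoglMatrix               *projection)
  {
    cogl_matrix_init_identity (projection);
    cogl_matrix_perspective (projection,
                             perspective->fovy,
                             perspective->aspect,
                             perspective->z_near,
                             perspective->z_far);
  }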
Events allocated by Clutter should have a pointer to platform-specific
data; this would allow backends to add separate structures for holding
ancillary data, whilst retaining the ClutterEvent structure for use on
the stack.
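Something like the following is the shape of the split; the structure
and field names are purely illustrative:

  #include <glib.h>

  /* Illustrative only: the public event stays a plain struct that can
   * live on the stack, while backend-specific data hangs off a private
   * pointer that only the owning backend knows how to interpret.
   */
  typedef struct _ExampleEventX11
  {
    unsigned long serial;   /* e.g. the XEvent serial */
    guint32       keycode;  /* hardware keycode, before translation */
  } ExampleEventX11;

  typedef struct _ExampleEvent
  {
    guint32  time;
    gfloat   x, y;
    gpointer platform_data; /* points to an ExampleEventX11, or NULL */
  } ExampleEvent;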
In theory, for Clutter 2.x we might just want to drop Event and use an
opaque structure, or a typed data structure inheriting from
GTypeInstance instead.
If the backend was disposed, priv->font_name would be freed but not
set to NULL, so if clutter_backend_get_font_name() was then called it
would double-free priv->font_name.
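The fix is the usual dispose pattern of clearing the pointer after
freeing it, roughly (a sketch with a stand-in struct, not the literal
patch):

  #include <glib.h>

  typedef struct
  {
    gchar *font_name;
  } BackendPrivateExample; /* stand-in for the real private struct */

  /* Sketch of the dispose fix: clear the pointer after freeing it so a
   * later getter call sees NULL and recreates the default font name
   * instead of freeing the same memory twice.
   */
  static void
  dispose_font_name (BackendPrivateExample *priv)
  {
    g_free (priv->font_name);
    priv->font_name = NULL;
  }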
Since the Settings:font-dpi property is exposed as 1024 * the real DPI,
in order to make the setting as neutral as possible (and allow XSETTINGS
to use it natively), we need a simple API returning the DPI as a
floating point value.
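Since the stored value is 1024 times the real DPI, the conversion the
new API has to perform is just a division, along these lines (a sketch;
the unset fallback of 96.0 is an assumption here):

  #include <glib.h>

  /* Sketch of the conversion: the Settings:font-dpi property stores
   * 1024 * DPI (so that XSETTINGS can feed it directly), while callers
   * want a plain floating point DPI value.
   */
  static gdouble
  font_dpi_to_resolution (gint font_dpi)
  {
    if (font_dpi < 0)
      return 96.0;  /* assumed default when the setting is unset */

    return font_dpi / 1024.0;
  }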
The marshallers we use for the signals are declared in a private header,
and it stands to reason that they should also be hidden in the shared
object by using the common '_' prefix. We are also using some direct
g_cclosure_marshal_* symbols from GLib, instead of consistently using
the clutter_marshal_* symbols.
While this is totally fine (0 in a pointer context will be converted
to the right internal NULL representation, which could be a value with
some bits set to 1), I believe it's clearer to use NULL in a pointer
context.
It seems that, in most cases, it's more an oversight than a deliberate
choice to use FALSE/0 as NULL, e.g. copying a _COGL_GET_CONTEXT (ctx, 0)
or a g_return_val_if_fail (cond, 0) from a function returning a
gboolean.
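The kind of change involved is trivial, e.g.:

  #include <glib.h>

  /* Before: relies on 0 being converted to the null pointer constant,
   * which is valid C but reads like a boolean or integer return.
   */
  static gpointer
  get_data_before (GHashTable *table, const gchar *key)
  {
    g_return_val_if_fail (table != NULL, 0);

    return g_hash_table_lookup (table, key);
  }

  /* After: same behaviour, but the pointer context is explicit. */
  static gpointer
  get_data_after (GHashTable *table, const gchar *key)
  {
    g_return_val_if_fail (table != NULL, NULL);

    return g_hash_table_lookup (table, key);
  }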
We kind of assume that things will break early on, during the
ClutterBackend::create_context() implementation, if we fail to create a
GL context. We do, however, have error reporting in place inside the
Backend API to catch those cases. Unfortunately, since we switched to
lazy initialization of the Stage, there can be a case where GL context
creation fails but initialization still appears to succeed - followed
by a segmentation fault later on. This is clearly Not Good™.
Let's try to catch a failure in all the places calling create_context()
and report the error back to the user in a meaningful way, before
crashing and burning.
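In practice that means every call site checks the result and reports
it, along these lines (a sketch; the reporting style is an assumption):

  #include <glib.h>

  /* Sketch: a caller of the backend's create_context() implementation
   * checks the returned error and reports it, instead of silently
   * continuing with no GL context and crashing later.
   */
  static gboolean
  ensure_context (gboolean (* create_context) (GError **error))
  {
    GError *error = NULL;

    if (!create_context (&error))
      {
        if (error != NULL)
          {
            g_critical ("Unable to create the GL context: %s",
                        error->message);
            g_error_free (error);
          }
        else
          g_critical ("Unable to create the GL context: unknown error");

        return FALSE;
      }

    return TRUE;
  }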
Since using addresses that might change is something that the FSF
finally acknowledges as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI
for getting the license, in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
Commit d2bdd3cb62 fixed some compiler warnings but also broke the
ability to create a stage. Although not having warnings from the
compiler is nice, it is also nice to be able to create a stage, so
let's not invert the meaning of the error check.
UProf is a small library that aims to help applications/libraries
provide domain-specific reports about performance. It currently provides
high-precision timer primitives (rdtsc on x86) and simple counters, the
ability to link statistics between optional components at runtime, and
makes report generation easy.
This adds initial accounting for:
- Total mainloop time
- Painting
- Picking
- Layouting
- Idle time
The timing done by UProf is of wall clock time. It's not based on
stochastic samples; we simply sample a counter at the start and end.
When dealing with the complexities of GPU drivers and with various kinds
of IO, this form of profiling can be quite enlightening, as it is able
to show where your application is blocking, unlike tools such as sysprof.
To enable uprof accounting you must configure Clutter with --enable-profile
and have uprof-0.2 installed from git://git.moblin.org/uprof
If you want to see a report of statistics when Clutter applications exit you
should export CLUTTER_PROFILE_OUTPUT_REPORT=1 before running them.
Just a final word of caution: this stuff is new, and the manual nature of
adding UProf instrumentation means it is prone to errors when code is
modified. This just means that when you question strange results, don't
rule out a mistake in the instrumentation. Obviously, though, we hope the
benefits outweigh the risks, e.g. by focusing on a few very key stats and
by having automatic reporting.
There is a new internal Cogl function called _cogl_check_driver_valid
which looks at the value of the GL_VERSION string to determine whether
the driver is supported. Clutter now calls this after the stage is
realized. If it fails then the stage is marked as unrealized and a
warning is shown.
_cogl_features_init now also checks the version number before getting
the function pointers for glBlendFuncSeparate and
glBlendEquationSeparate. It is not safe to just check for the presence
of the functions because some drivers may define the function without
fully implementing the spec.
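The version check itself is simple string parsing, roughly as sketched
below; the minimum versions mentioned in the comment are the ones at
which the entry points became core GL, which is my reading rather than
a quote from the patch:

  #include <stdio.h>
  #include <GL/gl.h>

  /* Sketch: parse the "major.minor" prefix of GL_VERSION and only
   * trust an entry point if the driver advertises a version that is
   * required to implement it fully.
   */
  static int
  parse_gl_version (int *major, int *minor)
  {
    const char *version = (const char *) glGetString (GL_VERSION);

    if (version == NULL)
      return 0;

    return sscanf (version, "%d.%d", major, minor) == 2;
  }

  static int
  gl_version_at_least (int req_major, int req_minor)
  {
    int major = 0, minor = 0;

    if (!parse_gl_version (&major, &minor))
      return 0;

    return major > req_major ||
           (major == req_major && minor >= req_minor);
  }

  /* e.g. glBlendFuncSeparate is core since GL 1.4 and
   * glBlendEquationSeparate since GL 2.0, so only look those entry
   * points up when the reported version is high enough.
   */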
The GLES version of _cogl_check_driver_valid just always returns TRUE
because there are no version requirements yet.
Eventually the function could also check for mandatory extensions if
there were any.
http://bugzilla.openedhand.com/show_bug.cgi?id=1875
Because Cogl defines the origin of viewport and window coordinates to be
the top-left, it always needs to know the size of the current window so
that Cogl window/viewport coordinates can be transformed into OpenGL
coordinates.
This also fixes cogl_read_pixels to use the current draw buffer height
instead of the viewport height to determine the OpenGL y coordinate to use
for glReadPixels.
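The y flip for glReadPixels amounts to the following (a sketch):

  /* Sketch: Cogl window coordinates have their origin at the top-left,
   * while glReadPixels expects the origin at the bottom-left of the
   * draw buffer, so the y coordinate is flipped against the height of
   * the current draw buffer rather than the viewport.
   */
  static int
  cogl_y_to_gl_y (int cogl_y, int rect_height, int framebuffer_height)
  {
    return framebuffer_height - (cogl_y + rect_height);
  }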
The only backend that tried to implement offscreen stages was the GLX
backend, and even this has apparently been broken for some time without
anyone noticing. The property still remains, and since the property
already clearly states that it may not work, I don't expect anyone to
notice.
This simplifies quite a bit of the GLX code, which is very desirable from
the POV that we want to start migrating window system code down to Cogl,
and the simpler the code is the more straightforward this work will be.
In the future when Cogl has a nicely designed API for framebuffer objects then
re-implementing offscreen stages cleanly for *all* backends should be quite
straightforward.
When computing the pixels value of a ClutterUnits value we should
be caching the result to avoid recomputing it for every call of
clutter_units_to_pixels(). We already have a flag telling us to
return the cached value, but we lack the mechanism to evict the
cache whenever the Backend settings affecting the conversion - that
is, the default font and the resolution - change.
In order to implement the eviction we can use a "serial"; the
Backend will have an internal serial field which we retrieve and
put inside the ClutterUnits structure (we split one of the two
64 bit padding fields into two 32 bit fields to maintain ABI); every
time we call clutter_units_to_pixels() we compare the units serial
with that of the Backend; if they match and pixels_set is set to
TRUE then we just return the stored pixels value. If the serials
do not match then we unset the pixels_set flag and recompute the
pixels value.
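The caching logic then reduces to something like this sketch, with
stand-in types in place of the real ClutterUnits layout:

  #include <glib.h>

  typedef struct
  {
    gfloat   value;       /* the value in its original unit */
    gfloat   pixels;      /* cached conversion result */
    guint    pixels_set : 1;
    guint32  serial;      /* backend serial at the time of caching */
  } UnitsExample; /* stand-in for the real ClutterUnits layout */

  /* Sketch of the eviction logic: the cached pixel value is only valid
   * while the backend serial has not moved, i.e. while neither the
   * default font nor the resolution have changed.
   */
  static gfloat
  units_to_pixels (UnitsExample *units,
                   guint32       backend_serial,
                   gfloat      (* convert) (gfloat value))
  {
    if (units->pixels_set && units->serial == backend_serial)
      return units->pixels;

    units->pixels = convert (units->value);
    units->pixels_set = TRUE;
    units->serial = backend_serial;

    return units->pixels;
  }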
We can verify this by adding a simple test unit checking that
by changing the resolution of ClutterBackend we get different
pixel values for 1 em.
http://bugzilla.openedhand.com/show_bug.cgi?id=1843
Instead of using ClutterActor for the base class of the Stage
implementation we should extend the StageWindow interface with
the required bits (geometry, realization) and use a simple object
class.
This requires a wee bit of changes across Backend, Stage and
StageWindow, even though it's mostly re-shuffling.
First of all, StageWindow should get new virtual functions:
* geometry:
- resize()
- get_geometry()
* realization:
- realize()
- unrealize()
This covers all the bits that we use from ClutterActor currently
inside the stage implementations.
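In terms of the interface vtable the additions look roughly like the
sketch below; the exact signatures in the real ClutterStageWindowIface
may differ:

  #include <clutter/clutter.h>

  /* Illustrative vtable layout for the extended interface: geometry
   * and realization move from ClutterActor into ClutterStageWindow.
   */
  typedef struct
  {
    GTypeInterface parent_iface;

    /* geometry */
    void     (* resize)       (gpointer         stage_window,
                               gint             width,
                               gint             height);
    void     (* get_geometry) (gpointer         stage_window,
                               ClutterGeometry *geometry);

    /* realization */
    gboolean (* realize)      (gpointer         stage_window);
    void     (* unrealize)    (gpointer         stage_window);
  } ExampleStageWindowIface;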
The ClutterBackend::create_stage() virtual function should create
a StageWindow, and not an Actor (it always should have; the fact
that it returned an Actor was a leak of the black magic going on
underneath). Since we never guaranteed ABI compatibility for the
Backend class, this is not a problem.
Internally to ClutterStage we can finally drop the shenanigans of
setting/unsetting actor flags on the implementation: if the realization
succeeds, for instance, we set the REALIZED flag on the Stage and
we're done.
As an initial proof of concept, the X11 and GLX stage implementations
have been ported to the New World Order(tm) and show no regressions.
The StageManager singleton instance is already kept around
by the clutter_stage_manager_get_default() function; there is
no need to have it inside the main Clutter context as well.
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.