Fix Clutter initialisation when ARGB visuals are enabled by setting a border
color when creating the dummy window. This should avoid a BadMatch error
when the depth of the root window visual is not the same as the depth of
the ARGB visual.
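As a sketch of the fix (illustrative only, not the actual Clutter code): when
the window's visual depth differs from the parent's, an explicit border pixel
and a matching colormap must be supplied to XCreateWindow, otherwise the
server returns BadMatch.

  #include <X11/Xlib.h>
  #include <X11/Xutil.h>

  /* Illustrative sketch: create a 1x1 dummy window using the (possibly
   * 32-bit ARGB) visual, supplying border_pixel and a matching colormap
   * so the depth mismatch with the root window doesn't cause BadMatch. */
  static Window
  create_dummy_window (Display *xdpy, XVisualInfo *xvisinfo)
  {
    Window root = DefaultRootWindow (xdpy);
    XSetWindowAttributes attrs;

    attrs.colormap = XCreateColormap (xdpy, root, xvisinfo->visual, AllocNone);
    attrs.border_pixel = 0;
    attrs.background_pixel = 0;

    return XCreateWindow (xdpy, root,
                          -100, -100, 1, 1,  /* off-screen, 1x1 */
                          0,                 /* border width */
                          xvisinfo->depth,
                          InputOutput,
                          xvisinfo->visual,
                          CWColormap | CWBorderPixel | CWBackPixel,
                          &attrs);
  }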
http://bugzilla.openedhand.com/show_bug.cgi?id=2011
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This changes clutter_glx_texture_pixmap_update_area so it defers the
call to glXBindTexImageEXT until our pre "paint" signal handler which
makes clutter_glx_texture_pixmap_update_area cheap to call.
The hope is that mutter can switch to reporting raw damage updates to
ClutterGLXTexturePixmap and we can use these to queue clipped redraws.
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» Queueing multiple clipped redraws will result in the bounding box of
all the clip rectangles being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» Note: no empirical data was used to come up with this threshold so
we may need to tune this.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted you now also
need to override the signal handler and queue your own redraw (a sketch
follows these notes).
Technically this is a semantic break, but it's assumed that no one
is currently doing this.
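As a rough sketch of what a subclass that paints something other than a plain
rectangle might do (the exact signal parameters are an assumption here, not
copied from the headers):

  /* Sketch only: the (x, y, width, height) parameters of the
   * "queue-damage-redraw" signal are assumed for illustration. */
  static void
  my_queue_damage_redraw (ClutterX11TexturePixmap *texture,
                          gint x, gint y,
                          gint width, gint height,
                          gpointer user_data)
  {
    /* We paint the pixmap with a custom shape/transform, so the default
     * clipped redraw isn't enough; queue a full redraw of the actor. */
    clutter_actor_queue_redraw (CLUTTER_ACTOR (texture));
  }

  /* ... */
  g_signal_connect (texture, "queue-damage-redraw",
                    G_CALLBACK (my_queue_damage_redraw), NULL);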
This still leaves a few unsolved issues with regards to optimizing sub
stage redraws that need to be addressed in further work so this can only
be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. It is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor because actors have no way to report a "paint box". (remember
actors can draw outside their allocation and actors with depth may
also be projected outside of their allocation)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
Since using addresses that might change is something that the FSF
finally acknowledges as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI
for getting the license in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
The code has gotten really complicated to follow.
As soon as we have a sync-to-vblank mechanism we should just bail out.
Also, __GL_SYNC_TO_VBLANK (which is used by nVidia) should be assumed
equivalent to a CLUTTER_VBLANK_GLX_SWAP.
Some of the ClutterDebugFlags are not meant as a logging facility: they
actually change Clutter's behaviour at run-time.
It would be useful to have this distinction ratified, and thus split
ClutterDebugFlags into two: one DebugFlags for logging facilities and
another set of flags for behavioural changes.
This split is warranted because:
• it should be possible to do "CLUTTER_DEBUG=all" and only have
log messages on the output
• it should be possible to use behavioural modifiers even on a
Clutter that has been compiled without debugging messages
support
The commit adds two new debugging flags:
ClutterPickDebugFlags - controlled by the CLUTTER_PICK environment
variable
ClutterPaintDebugFlags - controlled by the CLUTTER_PAINT environment
variable
The PickDebugFlags are:
nop-picking
dump-pick-buffers
While the PaintDebugFlags is:
disable-swap-events
The mechanism is equivalent to the CLUTTER_DEBUG environment variable,
but it does not depend on the debug level selected when configuring and
compiling Clutter. The picking and painting debugging flags are
initialized at clutter_init() time.
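These are normally set in the environment before launching an application; as
an illustration, the same flags can also be set programmatically, as long as
it happens before clutter_init():

  #include <clutter/clutter.h>

  int
  main (int argc, char **argv)
  {
    /* Must happen before clutter_init(), which reads the variables. */
    g_setenv ("CLUTTER_PICK", "nop-picking", TRUE);
    g_setenv ("CLUTTER_PAINT", "disable-swap-events", TRUE);

    if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
      return 1;

    /* ... */
    return 0;
  }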
http://bugzilla.openedhand.com/show_bug.cgi?id=1991
When we resize, we relied on the stage's allocate to re-initialise the
GL viewport. Unfortunately, if we resized within Clutter, the new size
was cached before the window was actually resized, so glViewport wasn't
being called after resizing (some of the time; it's a race condition).
Change the way resizing works slightly so that we only resize when the
geometry size doesn't match our preferred size, and queue a relayout on
ConfigureNotify so the glViewport gets called.
Also change window creation slightly so that setting the size of a
window before it's realized works correctly.
This uses the G_GNUC_DEPRECATED macros to mark the
cogl_{texture,vertex_buffer,shader}_ref and unref APIs as deprecated.
Since this flagged that cogl-pango-display-list.c and
clutter-glx-texture-pixmap.c were still using deprecated _ref/_unref
APIs they have now been changed to use the cogl_handle_ref/unref API
instead.
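For reference, deprecating a function this way just means appending the macro
to its declaration; callers then get a compile-time warning on compilers that
support it (illustrative declarations, not the literal header contents):

  #include <cogl/cogl.h>

  /* Illustrative: how the deprecated wrappers are marked in the headers. */
  CoglHandle cogl_texture_ref   (CoglHandle handle) G_GNUC_DEPRECATED;
  void       cogl_texture_unref (CoglHandle handle) G_GNUC_DEPRECATED;

  /* Callers should use the generic handle API instead:
   *   handle = cogl_handle_ref (handle);
   *   cogl_handle_unref (handle);
   */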
OpenGL is an implementation detail for Cogl so it's not appropriate to
expose OpenGL extensions through the Cogl API.
Note: Clutter is currently still using this API, because it is still
doing raw GL calls in ClutterGLXTexturePixmap, so this introduces a
couple of (legitimate) build warnings while compiling Clutter.
This fixes some backwards logic for asserting that we have a GLX major
version == 1 and a minor version >= 2. (NB: Although we technically
depend on GLX 1.3 features, we still have to support drivers that report
GLX 1.2 because there are a lot of Mesa drivers out there that incorrectly
report GLX 1.2 even though they export extensions that depend on GLX
1.3)
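The corrected check is essentially of this form (sketch, not the literal
backend code):

  #include <GL/glx.h>
  #include <glib.h>

  static gboolean
  check_glx_version (Display *xdpy)
  {
    int glx_major = 0, glx_minor = 0;

    if (!glXQueryVersion (xdpy, &glx_major, &glx_minor) ||
        !(glx_major == 1 && glx_minor >= 2))
      {
        g_warning ("GLX 1.2 or later is required (found %d.%d)",
                   glx_major, glx_minor);
        return FALSE;
      }

    return TRUE;
  }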
If your OpenGL driver supports GLX_INTEL_swap_event, that means when
glXSwapBuffers is called it returns immediately and an XEvent is sent when
the actual swap has finished.
Clutter can use the events that notify swap completion as a means to
throttle rendering in the master clock without blocking the CPU and so it
should help improve the performance of CPU bound applications.
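As a rough illustration of the mechanism (the names come from the
GLX_INTEL_swap_event extension spec; this is not the Clutter implementation):

  /* Ask for swap-complete events on the stage's GLX drawable... */
  glXSelectEvent (xdpy, glx_drawable, GLX_BUFFER_SWAP_COMPLETE_INTEL_MASK);

  /* ...glXSwapBuffers() then returns immediately, and later an X event
   * with type glx_event_base + GLX_BufferSwapComplete arrives. */
  int glx_error_base, glx_event_base;
  XEvent xevent;

  glXQueryExtension (xdpy, &glx_error_base, &glx_event_base);
  XNextEvent (xdpy, &xevent);
  if (xevent.type == glx_event_base + GLX_BufferSwapComplete)
    {
      /* the previous swap has completed: safe to draw the next frame */
    }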
Some extensions only support GLX versions > 1.3 and may not support
old style X Windows as GLXDrawables, so we now create GLXWindows for
stages when possible.
We want to set the default size without triggering the layout machinery,
so change the window creation process slightly so that we start with a
640x480 window.
The reason why we have a dummy, offscreen Window when we create the
GLX context is that GLX does not like it when you ask the context for
features if it's not made current to a Drawable. Maybe in the future
it will allow us to do so, but right now we have to make do with what
GLX offers us.
Instead of using g_critical() inside the create_context() implementation
of the ClutterBackendGLX we should use the passed GError, so that the
error message can bubble up to the caller.
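Roughly the change looks like this (sketch; the exact vfunc signature and
error domain are assumptions):

  static gboolean
  clutter_backend_glx_create_context (ClutterBackend  *backend,
                                      GError         **error)
  {
    Display     *xdpy = clutter_x11_get_default_display ();
    GLXFBConfig  fbconfig;  /* chosen earlier via glXChooseFBConfig() */
    GLXContext   context;

    /* ... */
    context = glXCreateNewContext (xdpy, fbconfig, GLX_RGBA_TYPE, NULL, True);
    if (context == NULL)
      {
        /* was: g_critical ("Unable to create suitable GL context"); */
        g_set_error (error, CLUTTER_INIT_ERROR, CLUTTER_INIT_ERROR_BACKEND,
                     "Unable to create suitable GL context");
        return FALSE;
      }
    /* ... */
    return TRUE;
  }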
Since we must guarantee that Cogl has a GL context to query, it is too
late to use the "dummy Window" trick from within the get_features()
virtual function implementation.
Instead, we can create a dummy Window from create_context() itself and
leave it around - basically trading a default stage for a dummy X
window.
We need to have the dummy X window around all the time so that the
GLX context can be selected and made current.
These macros used to define Cogl wrappers for the GLenum values. There are
now Cogl enums everywhere in the API where these were required so we
shouldn't need them anymore. They were in the public headers but as
they are not necessary and were not in the API docs for Clutter 1.0
it should be safe to remove them.
UProf is a small library that aims to help applications/libraries provide
domain specific reports about performance. It currently provides high
precision timer primitives (rdtsc on x86) and simple counters, the ability
to link statistics between optional components at runtime and makes report
generation easy.
This adds initial accounting for:
- Total mainloop time
- Painting
- Picking
- Layouting
- Idle time
The timing done by UProf is of wall clock time. It's not based on stochastic
samples; we simply sample a counter at the start and end. When dealing with
the complexities of GPU drivers and with various kinds of IO this form of
profiling can be quite enlightening as it will be able to show where
your application is blocking, unlike tools such as sysprof.
To enable uprof accounting you must configure Clutter with --enable-profile
and have uprof-0.2 installed from git://git.moblin.org/uprof
If you want to see a report of statistics when Clutter applications exit you
should export CLUTTER_PROFILE_OUTPUT_REPORT=1 before running them.
Just a final word of caution: this stuff is new, and the manual nature of
adding UProf instrumentation means it is prone to errors when modifying
code. This just means that when you question strange results you shouldn't
rule out a mistake in the instrumentation. Obviously, though, we hope the
benefits outweigh the costs, e.g. by focusing on very key stats and by having
automatic reporting.
Since asking for ARGB by default is still somewhat experimental on X11,
and some toolkits and complex widgets (like WebKit) still do not like
dealing with ARGB visuals, we should switch back to RGB by default - now
that at least we know ARGB works.
For applications (and toolkit integration libraries) that want to enable
the ClutterStage:use-alpha property there is a new function:
void clutter_x11_set_use_argb_visual (gboolean use_argb);
which needs to be called before clutter_init().
The CLUTTER_DISABLE_ARGB_VISUAL environment variable can still be used
to force this value off at run-time.
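For example, an application that wants a translucent stage under a compositor
would do something like this (sketch):

  #include <clutter/clutter.h>
  #include <clutter/x11/clutter-x11.h>

  int
  main (int argc, char **argv)
  {
    ClutterActor *stage;

    /* Must be called before clutter_init() */
    clutter_x11_set_use_argb_visual (TRUE);

    if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
      return 1;

    stage = clutter_stage_get_default ();
    g_object_set (stage, "use-alpha", TRUE, NULL);
    clutter_actor_set_opacity (stage, 128);
    clutter_actor_show (stage);

    clutter_main ();
    return 0;
  }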
When Clutter tried to pick an ARGB visual it tried to set the
GLX_TRANSPARENT_TYPE attribute of the FBConfig to
GLX_TRANSPARENT_RGB. However, the code to do this was broken so that it
was actually trying to set the non-existent attribute number 0x8008
instead. Mesa silently ignored this so it appeared as if it was
working, but the Nvidia drivers do not like it.
It appears that the TRANSPARENT_TYPE attribute is not necessary for
getting an ARGB visual anyway; instead it is intended to support
color-key transparency. Therefore we can just remove it and get all of
the FBConfigs. Then if we need an ARGB visual we can just walk the
list to look for one with depth == 32.
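The walk is essentially this (sketch):

  /* Sketch: find an FBConfig backed by a 32-bit (ARGB) visual. */
  int          n_configs = 0;
  GLXFBConfig *configs = glXGetFBConfigs (xdpy, DefaultScreen (xdpy),
                                          &n_configs);
  GLXFBConfig  argb_config = NULL;
  int          i;

  for (i = 0; i < n_configs; i++)
    {
      XVisualInfo *vinfo = glXGetVisualFromFBConfig (xdpy, configs[i]);

      if (vinfo != NULL)
        {
          if (vinfo->depth == 32 && argb_config == NULL)
            argb_config = configs[i];
          XFree (vinfo);
        }
    }

  XFree (configs);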
The fbconfig is now stored in a single variable instead of having a
separate variable for the rgb and rgba configs because the old code
only ever retrieved one of them anyway.
When requesting the GLXFBConfig for creating the GLX context, we should
always request one that links to an ARGB visual instead of a plain RGB
one.
By using an ARGB visual we allow the ClutterStage:use-alpha property to
work as intended when running Clutter under a compositing manager.
The default behaviour of requesting an ARGB visual can be disabled by
using the CLUTTER_DISABLE_ARGB_VISUAL environment variable.
As part of an incremental process to have Cogl be a standalone project we
want to re-consider how we organise the Cogl source code.
Currently this is the structure I'm aiming for:
cogl/
cogl/
<put common source here>
winsys/
cogl-glx.c
cogl-wgl.c
driver/
gl/
gles/
os/ ?
utils/
cogl-fixed
cogl-matrix-stack?
cogl-journal?
cogl-primitives?
pango/
The new winsys component is a starting point for migrating window system
code (i.e. x11,glx,wgl,osx,egl etc) from Clutter to Cogl.
The utils/ and pango/ directories aren't added by this commit, but they are
noted because I plan to add them soon.
Overview of the planned structure:
* The winsys/ API is the API that binds OpenGL to a specific window system,
be that X11 or win32 etc. Examples are glx, wgl and egl. Much of the logic
under clutter/{glx,osx,win32 etc} should migrate here.
* Note there is also the idea of a winsys-base that may represent a window
system for which there are multiple winsys APIs. An example of this is
x11, since glx and egl may both be used with x11. (currently only Clutter
has the idea of a winsys-base)
* The driver/ represents a specific variant of OpenGL. Currently we have "gl"
representing OpenGL 1.4-2.1 (mostly fixed function) and "gles" representing
GLES 1.1 (fixed function) and 2.0 (fully shader based).
* Everything under cogl/ should fundamentally be supporting access to the
GPU. Essentially Cogl's most basic requirement is to provide a nice GPU
Graphics API and drawing a line between this and the utility functionality
we add to support Clutter should help keep this lean and maintainable.
* Code under utils/ as suggested builds on cogl/ adding more convenient
APIs or mechanisms to optimize special cases. Broadly speaking you can
compare cogl/ to OpenGL and utils/ to GLU.
* clutter/pango will be moved to clutter/cogl/pango
How some of the internal configure.ac/pkg-config terminology has changed:
backendextra -> CLUTTER_WINSYS_BASE # e.g. "x11"
backendextralib -> CLUTTER_WINSYS_BASE_LIB # e.g. "x11/libclutter-x11.la"
clutterbackend -> {CLUTTER,COGL}_WINSYS # e.g. "glx"
CLUTTER_FLAVOUR -> {CLUTTER,COGL}_WINSYS
clutterbackendlib -> CLUTTER_WINSYS_LIB
CLUTTER_COGL -> COGL_DRIVER # e.g. "gl"
Note: The CLUTTER_FLAVOUR and CLUTTER_COGL defines are kept for apps
As the first thing to take advantage of the new winsys component in Cogl;
cogl_get_proc_address() has been moved from cogl/{gl,gles}/cogl.c into
cogl/common/cogl.c and this common implementation first tries
_cogl_winsys_get_proc_address() but if that fails then it falls back to
gmodule.
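The common implementation is now roughly (sketch; _cogl_winsys_get_proc_address
is the new internal entry point mentioned above):

  #include <gmodule.h>

  CoglFuncPtr
  cogl_get_proc_address (const gchar *name)
  {
    CoglFuncPtr address = _cogl_winsys_get_proc_address (name);

    if (address == NULL)
      {
        /* Fall back to looking the symbol up in the process itself */
        GModule *module = g_module_open (NULL, 0);

        if (module != NULL)
          {
            gpointer symbol = NULL;

            if (g_module_symbol (module, name, &symbol))
              address = (CoglFuncPtr) symbol;
          }
      }

    return address;
  }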
This replaces calls to the old (GLX 1.2) functions glXChooseVisual,
glXCreateContext and glXMakeCurrent with the 1.3+ FBConfig variants
glXChooseFBConfig, glXCreateNewContext and glXMakeContextCurrent.
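In outline the new path looks like this (sketch):

  /* Sketch of the GLX 1.3 code path replacing glXChooseVisual & co. */
  static const int attribs[] = {
    GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
    GLX_DOUBLEBUFFER,  True,
    GLX_DEPTH_SIZE,    1,
    GLX_ALPHA_SIZE,    1,
    None
  };
  int          n_configs = 0;
  GLXFBConfig *configs = glXChooseFBConfig (xdpy, DefaultScreen (xdpy),
                                            attribs, &n_configs);
  GLXContext   context = glXCreateNewContext (xdpy, configs[0],
                                              GLX_RGBA_TYPE, NULL, True);
  GLXWindow    glx_win = glXCreateWindow (xdpy, configs[0], xwindow, NULL);

  glXMakeContextCurrent (xdpy, glx_win, glx_win, context);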
The only backend that tried to implement offscreen stages was the GLX backend,
and even this has apparently been broken for some time without anyone noticing.
The property still remains and, since the property already clearly states that
it may not work, I don't expect anyone to notice.
This simplifies quite a bit of the GLX code, which is very desirable from the
POV that we want to start migrating window system code down to Cogl, and the
simpler the code is the more straightforward this work will be.
In the future when Cogl has a nicely designed API for framebuffer objects then
re-implementing offscreen stages cleanly for *all* backends should be quite
straightforward.
Instead of using ClutterActor for the base class of the Stage
implementation we should extend the StageWindow interface with
the required bits (geometry, realization) and use a simple object
class.
This requires a wee bit of change across Backend, Stage and
StageWindow, even though it's mostly re-shuffling.
First of all, StageWindow should get new virtual functions:
* geometry:
- resize()
- get_geometry()
* realization
- realize()
- unrealize()
This covers all the bits that we use from ClutterActor currently
inside the stage implementations.
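Sketch of what the extended interface vtable could look like (member names
here are illustrative, not the final header contents):

  struct _ClutterStageWindowIface
  {
    GTypeInterface parent_iface;

    /* geometry */
    void     (* resize)       (ClutterStageWindow *stage_window,
                               gint                width,
                               gint                height);
    void     (* get_geometry) (ClutterStageWindow *stage_window,
                               ClutterGeometry    *geometry);

    /* realization */
    gboolean (* realize)      (ClutterStageWindow *stage_window);
    void     (* unrealize)    (ClutterStageWindow *stage_window);
  };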
The ClutterBackend::create_stage() virtual function should create
a StageWindow, and not an Actor (it should always have been; the
fact that it returned an Actor was a leak of the black magic going
on underneath). Since we never guaranteed ABI compatibility for
the Backend class, this is not a problem.
Internally to ClutterStage we can finally drop the shenanigans of
setting/unsetting actor flags on the implementation: if the realization
succeeds, for instance, we set the REALIZED flag on the Stage and
we're done.
As an initial proof of concept, the X11 and GLX stage implementations
have been ported to the New World Order(tm) and show no regressions.
Right now we just check for a NULL stage before calling glXMakeCurrent().
We can, though, get a valid stage without an implementation attached to
it while we are disposing a stage after a CLUTTER_DELETE event, since
event processing is performed on a vblank-locked basis.
When requesting a GLX visual from the X server we should explicitly
set the GLX_DEPTH_SIZE and GLX_ALPHA_SIZE bits, otherwise some
functionality might just not work, or work unreliably.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1723
If we manually wait for the VBLANK with:
- SGI_video_sync
- Direct usage of the DRM ioctl
Then we should call glFinish() first, or otherwise the swap-buffers
may be delayed by pending drawing and cause a tear.
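Sketch of the intended ordering (using the SGI_video_sync entry points; in
practice the function pointers are resolved at runtime with
glXGetProcAddress()):

  unsigned int retrace_count = 0;

  /* Make sure all pending drawing has hit the GPU before waiting,
   * otherwise the swap may slip past the vblank and tear. */
  glFinish ();

  glXGetVideoSyncSGI (&retrace_count);
  glXWaitVideoSyncSGI (2, (retrace_count + 1) % 2, &retrace_count);

  glXSwapBuffers (xdpy, glx_drawable);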
http://bugzilla.openedhand.com/show_bug.cgi?id=1636
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This function should only need to be called in exceptional circumstances
since Cogl can normally determine internally when a flush is necessary.
As an optimization Cogl drawing functions may batch up primitives
internally, so if you are trying to use raw GL outside of Cogl you stand a
better chance of being successful if you ask Cogl to flush any batched
geometry before making your state changes.
cogl_flush() ensures that the underlying driver is issued all the commands
necessary to draw the batched primitives. It provides no guarantees about
when the driver will complete the rendering.
This provides no guarantees about the GL state upon returning and to avoid
confusing Cogl you should aim to restore any changes you make before
resuming use of Cogl.
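For example, mixing a few raw GL calls into otherwise Cogl-based drawing
would look something like this (sketch):

  /* Make sure Cogl's batched primitives reach the driver before we
   * touch GL state directly... */
  cogl_flush ();

  glEnable (GL_STENCIL_TEST);
  /* ... raw GL drawing ... */
  glDisable (GL_STENCIL_TEST);  /* restore what we changed */

  /* ...then resume using Cogl as normal. */
  cogl_rectangle (0, 0, 100, 100);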
If you are making state changes with the intention of affecting Cogl drawing
primitives you are 100% on your own since you stand a good chance of
conflicting with Cogl internals. For example, clutter-gst, which currently
uses direct GL calls to bind ARBfp programs, will very likely break when Cogl
starts to use ARBfp programs internally for the material API, but for now it
can use cogl_flush() to at least ensure that the ARBfp program isn't applied
to additional primitives.
This does not provide a robust, generalized solution supporting safe use of
raw GL; its use is very much discouraged.
Previously the journal was always flushed at the end of
_cogl_rectangles_with_multitexture_coords (i.e. the end of any
cogl_rectangle* calls), but now we have broadened the potential for batching
geometry. In ideal circumstances we will only flush once per scene.
In summary the journal works like this:
When you use any of the cogl_rectangle* APIs then nothing is emitted to the
GPU at this point, we just log one or more quads into the journal. A
journal entry consists of the quad coordinates, an associated material
reference, and a modelview matrix. Ideally the journal only gets flushed
once at the end of a scene, but in fact there are things to consider that
may cause unwanted flushing, including:
- modifying materials mid-scene
This is because each quad in the journal has an associated material
reference (i.e. a reference, not a copy), so if you try and modify a material
that is already referenced in the journal we force a flush first (see the
sketch after this list).
NOTE: For now this means you should avoid using cogl_set_source_color()
since that currently uses a single shared material. Later we
should change it to use a pool of materials that is recycled
when the journal is flushed.
- modifying any state that isn't currently logged, such as depth, fog and
backface culling enables.
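The material case can be illustrated like this (sketch):

  CoglHandle material = cogl_material_new ();

  cogl_set_source (material);
  cogl_rectangle (0, 0, 64, 64);    /* logged in the journal, not yet drawn */

  /* This touches a material the journal still references, so the logged
   * quad has to be flushed to the GPU before the change takes effect. */
  cogl_material_set_color4ub (material, 0xff, 0x00, 0x00, 0xff);

  cogl_rectangle (64, 0, 128, 64);  /* starts a new batch */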
The first thing that happens when flushing, is to upload all the vertex data
associated with the journal into a single VBO.
We then go through a process of splitting up the journal into batches that
have compatible state so they can be emitted to the GPU together. This is
currently broken up into 3 levels so we can stagger the state changes:
1) we break the journal up according to changes in the number of material layers
associated with logged quads. The number of layers in a material determines
the stride of the associated vertices, so we have to update our vertex
array offsets at this level. (i.e. calling gl{Vertex,Color}Pointer etc)
2) we further split batches up according to material compatibility. (e.g.
materials with different textures) We flush material state at this level.
3) Finally we split batches up according to modelview changes. At this level
we update the modelview matrix and emit the actual draw command.
This commit is largely about putting the initial design in-place; this will be
followed by other changes that take advantage of the extended batching.
If we have a non-visible texture pixmap, we need to:
- Still update it if it is realized, since it won't be
updated when shown. And it might also be cloned.
- Queue a redraw even if it is not visible, since it
might be cloned.
http://bugzilla.openedhand.com/show_bug.cgi?id=1647
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
RGBA data in X pixmaps and in FBOs is already premultiplied; use
the right format when creating cogl textures.
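In other words, textures wrapping such data should be created with a
premultiplied source format, roughly (sketch; the real call sites differ):

  /* Data coming from an X pixmap or FBO is already premultiplied, so say
   * so rather than letting Cogl assume unpremultiplied RGBA. */
  CoglHandle tex =
    cogl_texture_new_from_data (width, height,
                                COGL_TEXTURE_NONE,
                                COGL_PIXEL_FORMAT_RGBA_8888_PRE, /* source */
                                COGL_PIXEL_FORMAT_ANY,           /* internal */
                                rowstride,
                                data);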
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
According to the clutter_texture_set_cogl_texture() documentation you should
unref the handle, as the texture takes its own reference.
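i.e. the idiomatic pattern is:

  CoglHandle handle = cogl_texture_new_from_file ("image.png",
                                                  COGL_TEXTURE_NONE,
                                                  COGL_PIXEL_FORMAT_ANY,
                                                  NULL);

  clutter_texture_set_cogl_texture (CLUTTER_TEXTURE (texture), handle);

  /* the ClutterTexture now holds its own reference */
  cogl_handle_unref (handle);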
Signed-off-by: Robert Bragg <robert@linux.intel.com>
The GLX spec states that if you create a pixmap using glXCreatePixmap you
should use glXDestroyPixmap to destroy it.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Setting the pixmap for an unrealized ClutterGLXTexturePixmap should
not cause it to be realized, and certainly shouldn't cause the
REALIZED flag to be set without using clutter_actor_realize().
This patch uses the simple approach that:
- pixmap changes on an unrealized ClutterGLXTexturePixmap
are ignored
- when the ClutterGLXTexturePixmap is realized, we then create
the GLXPixmap and the corresponding texture.
The call to clutter_glx_texture_pixmap_update_area() is moved
from create_cogl_texture() to
clutter_glx_texture_pixmap_create_glx_pixmap() since
create_cogl_texture() is only called from one place, and updating
the area is really something we do *after* creating the texture,
not part of creating the texture.
clutter_glx_texture_pixmap_create_glx_pixmap() is reorganized a
bit to avoid debug-logging confusingly if it's called before a pixmap
has been set, and for readability.
http://bugzilla.openedhand.com/show_bug.cgi?id=1635
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
An implementation of realize() never needs to set the
CLUTTER_ACTOR_REALIZED flag, though it can unset the flag if
things fail unexpectedly. (Previously, stage backend implementations
had to do this since clutter_actor_realize() wasn't used; this
is no longer the case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1634
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The texture filters are now a property of the material layer rather
than the texture object. Whenever a texture is painted with a material
it sets the filters on all of the GL textures in the Cogl texture. The
filter is cached so that it won't be changed unnecessarily.
The automatic mipmap generation has changed so that the mipmaps are
only generated when the texture is painted instead of every time the
data changes. Changing the texture sets a flag to mark that the
mipmaps are dirty. This works better if the FBO extension is available
because we can use glGenerateMipmap. If the extension is not available
it will temporarily enable automatic mipmap generation and reupload
the first pixel of each slice. This requires tracking the data for the
first pixel.
The COGL_TEXTURE_AUTO_MIPMAP flag has been replaced with
COGL_TEXTURE_NO_AUTO_MIPMAP so that it will default to
auto-mipmapping. The mipmap generation is now effectively free if you
are not using a mipmap filter mode so you would only want to disable
it if you had some special reason to generate your own mipmaps.
ClutterTexture no longer has to store its own copy of the filter
mode. Instead it stores it in the material and the property is
directly set and read from that. This fixes problems with the filters
getting out of sync when a cogl handle is set on the texture
directly. It also avoids the mess of having to rerealize the texture
if the filter quality changes to HIGH, because Cogl will take care of
generating the mipmaps if needed.