For compatibility with the way we build Cogl as part of Clutter we now
substitute an empty MAINTAINER_CFLAGS variable. When building Cogl
standalone all our extra CFLAGS go through COGL_EXTRA_CFLAGS so the
separate MAINTAINER_CFLAGS aren't used, but automake will get confused
if a substitution isn't made.
This fixes the gdk-pixbuf check to not mistakenly check for the "xi"
package instead of gdk-pixbuf, and removes a spurious listing of "gl" in
COGL_PKG_REQUIRES which should only be there when we are using
opengl, not when we are using gles.
When building on Windows, for example, we need to ensure we pass
-no-undefined to the linker. Although we were substituting a
COGL_EXTRA_LDFLAGS variable from our configure.ac we forgot to
reference that when linking cogl.
Until Cogl gains native win32/OSX support this removes the osx and win32
winsys files and instead we'll just rely on the stub-winsys.c to handle
these platforms. Since the only thing the platform specific files were
providing anyway was a get_proc_address function, it was trivial to
simply update the clutter backend code to handle this directly for now.
This is a workaround for a bug on OSX for some radeon hardware that
we can't verify and the referenced bug link is no longer valid.
If this is really still a problem then a new bug should be opened and we
can look at putting the fix in some more appropriate place than
cogl-gl.c.
We want to be able to split Cogl out as a standalone project but there
are still some window systems that aren't natively supported by Cogl.
This allows Clutter to support those window systems directly but still
work with a standalone Cogl library.
This also ensures we set the SUPPORT_STUB conditional in clutter's
configure.ac when building for win32/osx and wayland.
For now we are going for the semantics that when a CoglOnscreen is first
allocated then it will automatically be mapped. This is for convenience
and if you don't want that behaviour then it is possible to instead
create an Onscreen from a foreign X window and in that case it won't be
mapped automatically.
This approach means that Cogl doesn't need onscreen_map/unmap functions
but it's possible we'll decide later that we can't avoid adding such
functions and we'll have to change these semantics.
Instead of using AC_DEFINE for the various COGL_HAS_PLATFORM defines
this now adds them to the COGL_DEFINES_SYMBOLS variable which gets
substituted into the public cogl-defines.h header.
This adds a simple standalone Cogl application that can be used to
smoke test a standalone build of Cogl without Clutter.
This also adds an x11-foreign app that shows how a toolkit can ask Cogl
to draw to an X Window that it owns instead of Cogl being responsible
for automatically creating and mapping an X Window for CoglOnscreen.
This allows more detailed control over the driver and winsys features
that Cogl should have. Cogl is designed so it can support multiple
window systems simultaneously so we have enable/disable options for
the drivers (gl vs gles1 vs gles2) and options for the individual window
systems; currently GLX and EGL. EGL is broken down into an option
for each platform.
The GDL API is used for example on Intel CE4100 (aka Sodaville) based
systems as a way to allocate memory that can be composited using the
platform's overlay hardware. This updates the Cogl EGL winsys and the
support in Clutter so we can continue to support these platforms.
So that we can dynamically select what winsys backend to use at runtime
we need to have some indirection to how code accesses the winsys instead
of simply calling _cogl_winsys* functions that would collide if we
wanted to compile more than one backend into Cogl.
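A minimal sketch of that kind of indirection (the struct and function
names here are illustrative, not the actual Cogl symbols): each backend
fills in a table of function pointers and the rest of Cogl dispatches
through whichever table was selected at runtime.

  #include <glib.h>

  typedef struct
  {
    const char *name;
    gboolean (* renderer_connect) (void *renderer, GError **error);
    void     (* onscreen_swap_buffers) (void *onscreen);
  } WinsysVtable;

  static const WinsysVtable *current_winsys;

  static void
  swap_buffers (void *onscreen)
  {
    /* dispatch through the winsys table chosen at runtime */
    current_winsys->onscreen_swap_buffers (onscreen);
  }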
This moves the GLX specific code from cogl-texture-pixmap-x11.c into
cogl-winsys-glx.c. If we want the winsys components to be dynamically
loadable then we can't have GLX code scattered outside of
cogl-winsys-glx.c. This also sets us up for supporting the
EGL_texture_from_pixmap extension which is almost identical to the
GLX_texture_from_pixmap extension.
As was recently done for the GLX window system code, this commit moves
the EGL window system code down from the Clutter backend code into a
Cogl winsys.
Note: currently the cogl/configure.ac is hard coded to only build the GLX
winsys so currently this is only available when building Cogl as part
of Clutter.
The "DRM_SURFACELESS" EGL platform was invented when we were adding the
wayland backend to Clutter but in the end we added a dedicated backend
instead of extending the EGL backend so actually the platform name isn't
used.
Commit b061f737 moved _cogl_winsys_has_feature to the common winsys
code so there's no need to define it in the stub winsys any more. This
was breaking builds for backends using the stub winsys.
The comparison for finding onscreen framebuffers in
find_onscreen_for_xid had a small thinko so that it would ignore
framebuffers when the negation of the type is onscreen. This ends up
doing the right thing anyway because the onscreen type has the value 0
and the offscreen type has the value 1 but presumably it would fail if
we ever added any other framebuffer types.
The dispose function may be called multiple times during destruction
so it needs to be resilient against destroying any resources
twice. This wasn't the case for the reference to the Cogl context.
The code for _cogl_winsys_has_feature will be identical in all of the
winsys backends for the time being, so it seems to make sense to have
it in the common cogl-winsys.c file.
Previously the mask of available winsys features was stored in a
CoglBitmask. That isn't the ideal type to use for this because it is
intended for a growable array of bits so it can allocate extra memory
if there are more than 31 flags set. For the winsys feature flags the
highest used bit is known at compile time so it makes sense to
allocate a fixed array instead. This is conceptually similar to the
CoglDebugFlags which are stored in an array of integers with macros to
test a bit in the array. This moves the macros used for CoglDebugFlags
to cogl-flags.h and makes them more generic so they can be shared with
CoglContext.
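For illustration, a minimal sketch of the fixed-array approach (the
macro and enum names here are illustrative, not the actual cogl-flags.h
contents):

  #include <limits.h>
  #include <stdio.h>

  #define BITS_PER_WORD (sizeof (unsigned int) * CHAR_BIT)
  #define FLAGS_N_WORDS_FOR_SIZE(size) \
    (((size) + BITS_PER_WORD - 1) / BITS_PER_WORD)
  #define FLAGS_GET(array, bit) \
    (!!((array)[(bit) / BITS_PER_WORD] & (1u << ((bit) % BITS_PER_WORD))))
  #define FLAGS_SET(array, bit) \
    ((array)[(bit) / BITS_PER_WORD] |= (1u << ((bit) % BITS_PER_WORD)))

  /* The highest feature flag is known at compile time, so the storage
   * can be a fixed array rather than a growable bitmask. */
  enum { FEATURE_SWAP_THROTTLE, FEATURE_SWAP_EVENT, N_FEATURES };

  int
  main (void)
  {
    unsigned int features[FLAGS_N_WORDS_FOR_SIZE (N_FEATURES)] = { 0 };

    FLAGS_SET (features, FEATURE_SWAP_EVENT);
    printf ("swap-event: %d\n", FLAGS_GET (features, FEATURE_SWAP_EVENT));
    return 0;
  }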
Instead of having cogl_renderer_xlib_add_filter and friends there is
now cogl_renderer_add_native_filter which can be used regardless of
the backend. The callback function for the filter now just takes a
void pointer instead of an XEvent pointer which should be interpreted
differently depending on the backend. For example, on Xlib it would
still be an XEvent but on Windows it could be a MSG. This simplifies
the code somewhat because _cogl_xlib_add_filter no longer needs to
maintain its own filter list for when a stub renderer is used, since
there is always a renderer available.
cogl_renderer_xlib_handle_event has also been renamed to
cogl_renderer_handle_native_event. This just forwards the event on to
all of the listeners. The backend renderer is expected to register its
own event filter if it wants to process the events in some way.
ClutterAnimation uses the weak ref machinery of GObject when associated
to ClutterActor by clutter_actor_animate() and friends - all the while
taking a reference on the actor itself. In order to trigger the weak ref
callback, external code would need to unref the Actor at least twice,
which has slim chance of happening. Plus, the way to destroy an Actor is
to call destroy(), not call unref().
The destruction sequence of ClutterActor emits the ::destroy signal, which
should be used by classes to release external references they might be
holding. My oh my, this sounds *exactly* like the case!
So, let's switch to using the ::destroy signal for clutter_actor_animate()
and friends, since we know that the object bound to the Animation is
an Actor, and has a ::destroy signal.
This change has the added benefit of allowing destroying an actor as the
result of the Animation::completed signal without getting a segfault or
other bad things to happen.
Obviously, the change does not affect other GObject classes, or Animation
instances created using clutter_animation_new(); for those, the current
"let's take a reference on the object to avoid it going away in-flight"
mechanism should still suffice.
Side note: it would be interesting if GObject had an interface for
"destructible" objects, so that we could do a safe type check. I guess
it's a Rainy Day Project(tm)...
Do not use the generic GType class name: we have a :name property on
ClutterActor that is generally used for debugging purposes — so we
should use it when creating debugging spew in a consistent way.
The Cogl rework removed the Window creation from realize and its
relative destruction from unrealize; the two vfuncs also managed
the mapping between Window and Stage implementation that we use
when dealing with event handling. Sadly, the missing unrealization
left entries in the mapping dangling.
Since ClutterStageX11 already provides a ::realize implementation
that sub-classes are supposed to chain up to, and the Window ↔ Stage
mapping is private to clutter-stage-x11.c, it seems only fair that
the ClutterStageX11 should also provide an ::unrealize implementation
matching the ::realize.
This implementation just removes the StageX11 pointer from the X11
Window ↔ ClutterStageX11 mapping we set up in ::realize, since the
X11 Window is managed by Cogl, now.
Older drivers for PowerVR SGX hardware have the vendor-specific
GL_IMG_texture_npot extension instead of the
functionally-equivalent GL_OES_texture_npot extension.
We need to guard the usage of symbols related to the
GLX_INTEL_swap_event extension, to avoid breaking on platforms and/or
versions of Mesa that do not expose that extension.
It's generally useful to be able to query the width and height of a
framebuffer and we expect to need this in Clutter when we move the
eglnative backend code into Cogl since Clutter will need to read back
the fixed size of the framebuffer when realizing the stage.
This backend hasn't been used for years now and so because it is
untested code and almost certainly doesn't work any more it would be a
burdon to continue trying to maintain it. Considering that we are now
looking at moving OpenGL window system integration code down from
Clutter backends into Cogl that will be easier if we don't have to
consider this backend.
This makes it possible to build Clutter against a standalone build of
Cogl instead of having the Clutter build traverse into the clutter/cogl
subdirectory.
This adds an autogen.sh, configure.ac and build/autotool files etc under
clutter/cogl and makes some corresponding Makefile.am changes that make
it possible to build and install Cogl as a standalone library.
Some notable things about this are:
A standalone installation of Cogl installs 3 pkg-config files;
cogl-1.0.pc, cogl-gl-1.0.pc and cogl-2.0.pc. The second is only for
compatibility with what clutter installed though I'm not sure that
anything uses it so maybe we could remove it. cogl-1.0.pc is what
Clutter would use if it were updated to build against a standalone cogl
library. cogl-2.0.pc is what you would use if you were writing a
standalone Cogl application.
A standalone installation results in two libraries currently, libcogl.so
and libcogl-pango.so. Notably we don't include a major number in the
sonames because libcogl supports two major API versions; 1.x as used by
Clutter and the experimental 2.x API for standalone applications.
Parallel installation of later versions e.g. 3.x and beyond will be
supportable either with new sonames or if we can maintain ABI then we'll
continue to share libcogl.so.
The headers are similarly not installed into a directory with a major
version number since the same headers are shared to export the 1.x and
2.x APIs (The only difference is that cogl-2.0.pc ensures that
-DCOGL_ENABLE_EXPERIMENTAL_2_0_API is used). Parallel installation of
later versions is not precluded though since we can either continue
sharing or later add a major version suffix.
This migrates all the GLX window system code down from the Clutter
backend code into a Cogl winsys. Moving OpenGL window system binding
code down from Clutter into Cogl is the biggest blocker to having Cogl
become a standalone 3D graphics library, so this is an important step in
that direction.
As part of the process of splitting Cogl out as a standalone graphics
API we need to introduce some API concepts that will allow us to
initialize a new CoglContext when Clutter isn't there to handle that for
us...
The new objects roughly in the order that they are (optionally) involved
in constructing a context are: CoglRenderer, CoglOnscreenTemplate,
CoglSwapChain and CoglDisplay.
Conceptually a CoglRenderer represents a means for rendering. Cogl
supports rendering via OpenGL or OpenGL ES 1/2.0 and those APIs are
accessed through a number of different windowing APIs such as GLX, EGL,
SDL or WGL and more. Potentially in the future Cogl could render using
D3D or even by using libdrm and directly banging the hardware. All these
choices are wrapped up in the configuration of a CoglRenderer.
Conceptually a CoglDisplay represents a display pipeline for a renderer.
Although Cogl doesn't aim to provide a detailed abstraction of display
hardware, on some platforms we can give control over multiple display
planes (On TV platforms for instance video content may be on one plane
and 3D would be on another so a CoglDisplay lets you select the plane
up-front.)
Another aspect of CoglDisplay is that it lets us negotiate a display
pipeline that best supports the type of CoglOnscreen framebuffers we are
planning to create. For instance if you want transparent CoglOnscreen
framebuffers then we have to be sure the display pipeline won't discard
the alpha component of your framebuffers. Or if you want to use
double/triple buffering that requires support from the display
pipeline.
CoglOnscreenTemplate and CoglSwapChain are how we describe our default
CoglOnscreen framebuffer configuration which can affect the
configuration of the display pipeline.
The default/simple way we expect most CoglContexts to be constructed
will be via something like:
  if (!cogl_context_new (NULL, &error))
    g_error ("Failed to construct a CoglContext: %s", error->message);
Where that NULL is for an optional "display" parameter and NULL says to
Cogl "please just try to do something sensible".
If you want some more control though you can manually construct a
CoglDisplay something like:
  display = cogl_display_new (NULL, NULL);
  cogl_gdl_display_set_plane (display, plane);
  if (!cogl_display_setup (display, &error))
    g_error ("Failed to setup a CoglDisplay: %s", error->message);
And in a similar fashion to cogl_context_new() you can optionally pass
a NULL "renderer" and/or a NULL "onscreen template" so Cogl will try to
just do something sensible.
If you need to change the CoglOnscreen defaults you can provide a
template something like:
  chain = cogl_swap_chain_new ();
  cogl_swap_chain_set_has_alpha (chain, TRUE);
  cogl_swap_chain_set_length (chain, 3);
  onscreen_template = cogl_onscreen_template_new (chain);
  cogl_onscreen_template_set_pixel_format (onscreen_template,
                                           COGL_PIXEL_FORMAT_RGB565);
  display = cogl_display_new (NULL, onscreen_template);
  if (!cogl_display_setup (display, &error))
    g_error ("Failed to setup a CoglDisplay: %s", error->message);
This tries to make the naming style of files under cogl/winsys/
consistent with other cogl source files. In particular private header
files didn't have a '-private' infix.
This gives us a way to clearly track the internal Cogl API that Clutter
depends on. The aim is to split Cogl out from Clutter into a standalone
3D graphics API and eventually we want to get rid of any private
interfaces for Clutter so it's useful to have a handle on that task.
Actually it's not as bad as I was expecting though.
This extends visualization for CLUTTER_PAINT=redraws so it now also
draws outlines for actors to show how they are being culled. Actors get
a green outline if they are fully inside the clip region, blue if fully
outside and greeny-blue if only partially inside.
This adds an internal _clutter_stage_get_active_framebuffer function
that can be used to get a pointer to the current CoglFramebuffer pointer
that is in use, in association with a given stage.
The "active" infix in the function name is there because we shouldn't
assume that a stage will always correspond to only a single framebuffer
so we aren't getting a pointer to a sole framebuffer, we are getting
a pointer to the framebuffer that is currently in use/being painted.
This API is now used for culling purposes where we need to check if we
are currently painting an actor to a framebuffer that is offscreen, that
doesn't correspond to the stage.
This renames the two internal functions _cogl_get_draw/read_buffer
as cogl_get_draw_framebuffer and _cogl_get_read_framebuffer. The
former is now also exposed as experimental API.
The long term goal with the Cogl API is that we will get rid of the
default global context. As a step towards this, this patch tracks a
reference back to the context in each CoglFramebuffer so in a lot of
cases we can avoid using the _COGL_GET_CONTEXT macro.
There is no corresponding implementation of _cogl_features_init any more
so it was simply an oversight that the prototype wasn't removed when the
implementation was removed.
Recently _cogl_swap_buffers_notify was added (in 142b229c5c) so that
Cogl would be notified when Clutter performs a swap buffers request for
a given onscreen framebuffer. It was expected this would be required for
the recent cogl_read_pixel optimization that was implemented (ref
1bdb0e6e98) but in the end it wasn't used.
Since it wasn't used in the end this patch removes the API.
This moves the functionality of _cogl_create_context_driver from
driver/{gl,gles}/cogl-context-driver-{gl,gles}.c into
driver/{gl,gles}/cogl-{gl,gles}.c as a static function called
initialize_context_driver.
cogl-context-driver-{gl,gles}.[ch] have now been removed.
This adds a new experimental function (you need to define
COGL_ENABLE_EXPERIMENTAL_API to access it) which takes us towards being
able to have a standalone Cogl API. This is really a minor aesthetic
change for now since all the GL context creation code still lives in
Clutter but it's a step forward none the less.
Since our current designs introduce a CoglDisplay object as something
that would be passed to the context constructor this provides a stub
cogl-display.h with CoglDisplay typedef.
_cogl_context_get_default() which Clutter uses to access the Cogl
context has been modified to use cogl_context_new() to initialize
the default context.
There is one rather nasty hack used in this patch which is that the
implementation of cogl_context_new() has to forcibly make the allocated
context become the default context because currently all the code in
Cogl assumes it can access the context using _COGL_GET_CONTEXT including
code used to initialize the context.
This moves the implementation of _clutter_do_pick to clutter-stage.c and
renames it _clutter_stage_do_pick. This function can be compared to
_clutter_stage_do_update/redraw in that it prepares for and starts a
traversal of a scenegraph descending from a given stage. Since it is
desirable that this function should have access to the private state of
the stage it is awkward to maintain outside of clutter-stage.c.
Besides moving _clutter_do_pick this patch is also able to remove the
following private state accessors from clutter-stage-private.h:
_clutter_stage_set_pick_buffer_valid,
_clutter_stage_get_pick_buffer_valid,
_clutter_stage_increment_picks_per_frame_counter,
_clutter_stage_reset_picks_per_frame_counter and
_clutter_stage_get_picks_per_frame_counter.
This updates the inner loops of the cull function so now the vertices of
the polygon being culled are iterated in the inner loop instead of the
clip planes and we count how many vertices are outside the current
plane so we can bail out immediately if all the vertices are outside of
any plane and so we can correctly track partial intersections with the
clip region.
The previous approach could catch some partial intersections but for
example a rectangle that was larger than the clip region centred over
the clip region with all corners outside would be reported as outside,
not as a partial intersection.
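Roughly, the restructured loops look like this sketch (the names are
illustrative, not the actual Clutter code):

  typedef enum { CULL_INSIDE, CULL_OUTSIDE, CULL_PARTIAL } CullResult;

  typedef struct { float x, y, z; } Vertex3;
  typedef struct { float a, b, c, d; } Plane; /* ax + by + cz + d >= 0 means inside */

  static CullResult
  cull_polygon (const Plane *planes, int n_planes,
                const Vertex3 *verts, int n_verts)
  {
    int p, v, partial = 0;

    for (p = 0; p < n_planes; p++)
      {
        int n_out = 0;

        for (v = 0; v < n_verts; v++)
          {
            float d = planes[p].a * verts[v].x +
                      planes[p].b * verts[v].y +
                      planes[p].c * verts[v].z +
                      planes[p].d;
            if (d < 0.0f)
              n_out++;
          }

        if (n_out == n_verts)
          return CULL_OUTSIDE;  /* every vertex is outside this plane */
        if (n_out > 0)
          partial = 1;          /* the polygon straddles this plane */
      }

    return partial ? CULL_PARTIAL : CULL_INSIDE;
  }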
In 047227fb cogl_atlas_new was changed so that it can take a flags
parameter to specify whether to clear the new atlases and whether to
copy images to the new atlas after reorganisation. This was done so
that the atlas code could be shared with the glyph cache. At some
point during the development of this patch the flag was just a single
boolean and that is accidentally still how it is used from the glyph
cache. The glyph cache therefore passes 'TRUE' as the set of flags
which means it will only get the 'clear' flag and not the
'disable-migration' flag. When the glyph cache gets full it will
therefore try to copy the texture to the new atlas as well as
redrawing them with cairo. This causes problems because the glyph
cache needs to work in situations where there is no FBO support.
In _cogl_pipeline_prune_empty_layer_difference if the layer's parent
has no owner then it just takes ownership of it. However this could
theoretically end up taking ownership of the root layer because
according to the comment above in the same function that should never
have an owner. This patch just adds an extra check to ensure that the
unowned layer has a parent.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2588
In _cogl_pipeline_prune_empty_layer_difference if we are reverting to
the immediate parent of an empty/redundant layer then it is not enough
to simply add a reference to the pipeline's ->layer_differences list
without also updating parent_layer->owner to point back to its new
owner.
This oversight was leading us to break the invariant that all layers
referenced in layer_differences have an owner and was also causing us to
break another invariant whereby after calling
_cogl_pipeline_layer_pre_change_notify the returned layer must always be
owned by the given 'required_owner'.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2588
When we come to presenting the result of a clipped redraw to the front
buffer with a blit we need to ensure that all the rendering is done,
otherwise redraw operations that are slower than the framerate can queue
up in the pipeline during a heavy animation, causing a larger and larger
backlog of rendering visible as lag to the user.
Note: Since calling glFinish() and synchronizing the CPU with the GPU is
far from ideal, we hope that this is only a short term solution.
One idea is to use sync objects to track render completion so we can
throttle the backlog (ideally with an additional extension that lets us
get notifications in our mainloop instead of having to busy wait for the
completion.)
Another option is to support clipped redraws by reusing the contents of
old back buffers such that we can flip instead of using a blit and then
we can use GLX_INTEL_swap_events to throttle. For this though we would
still probably want an additional extension so we can report the limited
region of the window damage to X/compositors.
Thanks to Owen Taylor and Alexander Larsson for reporting the problem.
clutter_clone_get_paint_volume was being exported from the shared
library because the function wasn't declared static. This function
shouldn't be exposed because it should be accessed through
clutter_actor_get_paint_volume.
The texture containing the image for the redirected actor will always
be painted at a 1:1 texel:pixel ratio so there's no need to use linear
filtering. This should also counteract some of the effects of rounding
errors when calculating the geometry for the quad.
If cross-compiling Clutter with MinGW using an out-of-tree build
directory, a prerequisite for creating the resources.o file
containing the transparent cursor is for the win32 directory itself to
be created at $(top_builddir)/clutter/win32.
glib already has a data type to manage a list of callbacks called a
GHookList so we might as well use it instead of maintaining Cogl's own
type. The glib version may be slightly more efficient because it
avoids using a GList and instead encodes the prev and next pointers
directly in the GHook structure. It also has more features than
CoglCallbackList.
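To illustrate (a minimal sketch with made-up callback names, using
GLib's documented GHook API), the same pattern with a GHookList looks
roughly like:

  #include <glib.h>

  static void
  on_swap_complete (gpointer data)
  {
    g_print ("swap complete: %s\n", (const char *) data);
  }

  static void
  example (void)
  {
    GHookList callbacks;
    GHook *hook;

    /* The prev/next pointers live inside each GHook, so no separate
     * GList nodes are allocated. */
    g_hook_list_init (&callbacks, sizeof (GHook));

    hook = g_hook_alloc (&callbacks);
    hook->func = (gpointer) on_swap_complete;
    hook->data = (gpointer) "frame 1";
    g_hook_append (&callbacks, hook);

    /* Invoke every hook in order, passing each hook's data */
    g_hook_list_invoke (&callbacks, FALSE);

    g_hook_list_clear (&callbacks);
  }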
Previously we were applying the culling optimization to any actor
painted without considering that we may be painting to an offscreen
framebuffer where the stage clip isn't applicable.
For now we simply expose a getter for the current draw framebuffer
and we can assume that a return value of NULL corresponds to the
stage.
Note: This will need to be updated as stages start to be backed by real
CoglFramebuffer objects and so we won't get NULL in those cases.
To give quick visibility to the things going on relating to clipping and
culling this adds some more CLIPPING debug notes to clutter-actor.c and
clutter-stage.c
As documented in cogl-pipeline-private.h, there is a precedence to the
ClutterPaintVolume bitfields that should be considered whenever we
implement code that manipulates PaintVolumes...
Firstly if ->is_empty == TRUE then the values for ->is_complete and
->is_2d are undefined, so we should typically check ->is_empty as the
first priority.
This fixes a bug in _clutter_paint_volume_cull() whereby we were
checking pv->is_complete before checking pv->is_empty which was
resulting in assertions for actors with no size.
Drawing and clipping to paths is generally quite expensive because the
geometry has to be tessellated into triangles in a single VBO which
breaks up the journal batching. If we can detect when the path
contains just a single rectangle then we can instead divert to calling
cogl_rectangle which will take advantage of the journal, or by pushing
a rectangle clip which usually ends up just using the scissor.
This patch adds a boolean to each path to mark when it is a
rectangle. It gets cleared whenever a node is added or gets set to
TRUE whenever cogl2_path_rectangle is called. This doesn't try to
catch cases where a rectangle is composed by cogl_path_line_to and
cogl_path_move_to commands.
ClutterDragAction should be able to use the newly added ClutterSettings
property exposing the system's drag threshold.
Currently, the x-drag-threshold and the y-drag-threshold properties (and
relative accessors) use an unsigned integer for their values; we should
be able to safely expand the range to include -1 as the minimum value,
and use this new value to tell the ClutterDragAction that it should query
the ClutterSettings object for the drag threshold.
The storage of the properties has been changed, albeit in a compatible
way, as GObject installs a uint ↔ int transformation function for GValue
automatically.
The setter for the drag thresholds has been changed to use a signed
integer, but the getter has been updated to always Do The Right Thing™:
it never returns -1 but, instead, will return the valid drag threshold,
either from the value set or from the Settings singleton.
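As a rough illustration (the property name "dnd-drag-threshold" and the
surrounding names are assumptions for this sketch, not taken from the
Clutter source), the getter logic boils down to something like:

  #include <clutter/clutter.h>

  static gint
  get_x_drag_threshold (gint x_threshold_set)
  {
    gint threshold = x_threshold_set;

    /* -1 means "use the system setting" (assumption for this sketch) */
    if (threshold < 0)
      g_object_get (clutter_settings_get_default (),
                    "dnd-drag-threshold", &threshold,
                    NULL);

    return threshold;
  }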
This change is ABI compatible.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2583
ClutterState is missing some documentation on how to define transitions
using ClutterScript definitions; it is also missing some example code
for both the C API and the ClutterScript syntax.
The allocation of the ClutterBox is not enough to be used as the paint
volume, because children might decide to paint outside that area.
Instead, we should use the allocation if the Box has a background color
and then do what Group does, and union all the paint volumes of the
children.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2600
In 9ff04e8a99 the builtin uniforms were moved to the common shader
boilerplate. However the common boilerplate is positioned before the
default precision specifier on GLES2 so it would fail to compile
because the uniforms end up with no precision in the fragment
shader. This patch just moves the precision specifier to above the
common boilerplate.
The CLUTTER_ENTER and CLUTTER_LEAVE event types were mistakenly ignored
by clutter_event_get_device(), when returning the device from a
non-allocated ClutterEvent.
There's an optimisation in clutter-actor.c to avoid calculating the
last known paint volume whenever culling and clipped redraws are both
disabled. However there was a small thinko in the logic so that it
would also avoid calculating the paint volume whenever only one of the
debug flags is enabled. This fixes it to explicitly check that the two
flags are not both enabled before skipping the paint volume
calculation.
Instead of unconditionally combining the modelview and projection
matrices and then iterating each of the vertices to call
cogl_matrix_transform_point for each one in turn we now only combine the
matrices if there are more than 4 vertices (with fewer than 4 vertices
it's less work to transform them separately) and we use the new
cogl_vertex_{transform,project}_points APIs which can hopefully
vectorize the transformations.
Finally the perspective divide and viewport scale is done in a separate
loop at the end and we don't do the spurious perspective divide and
viewport scale for the z component.
Previously each time we needed to retrieve the model transform for a
given actor we would call the apply_transform vfunc which would build up
a transformation matrix based on the actor's current anchor point, its
scale, its allocation and rotation. The apply_transform implementation
would repeatedly call API like cogl_matrix_rotate, cogl_matrix_translate
and cogl_matrix_scale.
All these micro matrix manipulation calls were starting to show up in the
profiles of dynamic applications so this adds priv->transform matrix
cache which maintains the combined result of the actors scale, rotation
and anchor point etc. Whenever something like the rotation changes then
the matrix is marked as dirty, but so long as the matrix isn't
dirty then the apply_transform vfunc now just calls cogl_matrix_multiply
with the cached transform matrix.
This implements a variation of frustum culling whereby we convert screen
space clip rectangles into eye space mini-frustums so that we don't have
to repeatedly transform actor paint-volumes all the way into screen
coordinates to perform culling, we just have to apply the modelview
transform and then determine each points distance from the planes that
make up the clip frustum.
By avoiding the projective transform, perspective divide and viewport
scale for each point culled this makes culling much cheaper.
This simplifies the implementation of the ClutterStage apply_transform
vfunc by using the new cogl_matrix_view_2d_in_perspective utility API
which can setup up a view transform for a given perspective so that a
cross section of the view frustum at an arbitrary depth can be mapped
directly to 2D stage coordinates with (0,0) at the top left.
This adds two new experimental functions to cogl-matrix.c:
cogl_matrix_view_2d_in_perspective and cogl_matrix_view_2d_in_frustum
which can be used to setup a view transform that maps a 2D coordinate
system (0,0) top left and (width,height) bottom right to the current
viewport.
Toolkits such as Clutter that want to mix 2D and 3D drawing can use
these APIs to position a 2D coordinate system at an arbitrary depth
inside a 3D perspective projected viewing frustum.
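For example, a toolkit might set up its view matrix with something like
the following sketch (the parameter list is an assumption based on the
description above, not a verified prototype):

  #include <cogl/cogl.h>

  static void
  setup_2d_in_3d_view (CoglMatrix *view,
                       float fovy_degrees, float aspect,
                       float z_near, float z_2d,
                       float width, float height)
  {
    cogl_matrix_init_identity (view);

    /* Map a z_2d-deep cross section of the perspective frustum to a
     * width x height coordinate space with (0,0) at the top left */
    cogl_matrix_view_2d_in_perspective (view,
                                        fovy_degrees, aspect,
                                        z_near, z_2d,
                                        width, height);
  }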
Firstly, Clutter shouldn't be using OpenGL directly, so this needed
changing; but also, conceptually it doesn't make sense for
clutter_stage_read_pixels to validate the requested area to read against
the viewport: it would make more sense to compare against the window
size. Finally, checking that the width of the area is less than the
viewport or window width without considering the x position isn't enough
to know if the area extends outside the window's bounds (the same applies
to the height).
This patch removes the validation of the read area from
clutter_stage_read_pixels and instead we now simply rely on the
semantics of cogl_read_pixels for reading areas outside the window
bounds.
OpenGL < 4.0 only supports integer based viewports and internally we
have a mixture of code using floats and integers for viewports. This
patch switches all viewports throughout clutter and cogl to be
represented using floats considering that in the future we may want to
take advantage of floating point viewports with modern hardware/drivers.
This makes a change to the original point_in_poly algorithm from:
http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html
The aim was to tune the test so that tests against screen aligned
rectangles are more resilient to some imprecision in how we transformed
that rectangle into screen coordinates. In particular gnome-shell was
finding that for some stage sizes then row 0 of the stage would become a
dead zone when going through the software picking fast-path and this was
because the y position of screen aligned rectangles could end up as
something like 0.00024 and the way the algorithm works it doesn't have
any epsilon/fuzz factor to account for that imprecision.
We've avoided introducing an epsilon factor to the comparisons since we
feel there's a risk of changing some semantics in ways that might not be
desirable. One of those is that if you transform two polygons which
share an edge and test a point close to that edge then this algorithm
will currently give a positive result for only one polygon.
Another concern is the way this algorithm resolves the corner case where
the horizontal ray being cast to count edge crossings may cross directly
through a vertex. The solution is based on the "idea of Simulation of
Simplicity" and "pretends to shift the ray infinitesimally down so that
it either clearly intersects, or clearly doesn't touch". I'm not
familiar with the idea myself so I expect a misplaced epsilon is likely
to break that aspect of the algorithm.
The simple solution this patch applies is to pixel align the polygon
vertices which should eradicate most noise due to imprecision.
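For reference, this is a sketch of the point_in_poly crossing test in
question (adapted from the linked pnpoly page, not copied from the
Clutter source); with this patch the vertices passed in would first be
rounded to pixel boundaries so that near-zero coordinates such as
0.00024 no longer create a dead zone along row 0:

  static int
  point_in_poly (float x, float y, const float *poly, int n_verts)
  {
    int i, j, inside = 0;

    for (i = 0, j = n_verts - 1; i < n_verts; j = i++)
      {
        float xi = poly[i * 2], yi = poly[i * 2 + 1];
        float xj = poly[j * 2], yj = poly[j * 2 + 1];

        /* count how many polygon edges a horizontal ray from (x, y) crosses */
        if ((yi > y) != (yj > y) &&
            x < (xj - xi) * (y - yi) / (yj - yi) + xi)
          inside = !inside;
      }

    return inside;
  }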
https://bugzilla.gnome.org/show_bug.cgi?id=641197
Anything that is not CLUTTER_INIT_SUCCESS is to be considered an error.
This fixes the Clutter initialization sequence to actually error out
on pre-conditions and backend initialization failures.
clutter_offscreen_effect_pre_paint was using the uninitialized value of
the ‘box’ variable whenever the actor doesn't have a paint
volume. This patch makes it just set the offset to 0,0 instead.
When removing the opacity override in the post_paint implementation,
ClutterOffscreenEffect would always set the override back to -1. This
ends up cancelling out the effect of any overrides from outer effects
which means that if any actor has multiple effects attached then it
would apply the opacity multiple times.
To fix this, the effect now preserves the old value of the opacity
override and restores that instead of setting -1.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2541
This is needed if an effect wants to temporarily override the paint
opacity. It needs to be able to restore the old opacity override in
the post_paint handler otherwise it would replace the effect of the
opacity override from any outer effects.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2541
The OffscreenEffect class needs to expose a way for sub-classes to
track the size of FBO it creates, in case it has to do some geometry
deformations like the DeformEffect sub-classes.
Let's move the private symbol we used internally in 1.6 to fix
DeformEffect to the list of public symbols of OffscreenEffect.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2570
The table we use for converting between keysyms and Unicode should be
static and constified, so that it can live in the .rodata section of
the ELF shared object, and be shared among processes.
This change moves the table to a source file, instead of an header; the
change also requires the clutter_keysym_to_unicode() function to be
moved from clutter-event.c into this new source file. The declaration is
still in clutter-event.h, so we don't need to do anything special.
Creating a synthetic event requires direct access to the ClutterEvent
union members; this access does not map in bindings to high-level
languages, especially run-time bindings using GObject-Introspection.
It's also mildly annoying from C, as it unnecessarily exposes the guts of
ClutterEvent - something we might want to fix in the future.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2575
Many people expect clutter_init to work the same way as gtk_init which
exits the program on init failure. clutter_init however returns a
status code on failure which applications need to handle because if
the init fails then any further Clutter calls are likely to crash. In
Clutter 2.0 we may want to change this to be more like GTK+.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2574
When using a pipeline and the journal to blit images between
framebuffers, it should disable blending. Otherwise it will end up
blending the source texture with uninitialised garbage in the
destination texture.
Converting from Pango units to pixels by using the C conventions might
cause us to lose a pixel; since we're doing the same for the height, we
should use ceilf() to round up the width and the line height.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2573
The ClutterDeformEffect sub-classes are effectively deforming the
texture target of an FBO, not the actor itself. Thus, we need to
use the FBO's size, and not the actor's allocated size, given that
the actor might be transformed prior to applying an effect.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2571
Since the FBO target might have a different size than the mere paint box
of the actor, we need API to get it out of the ClutterOffscreenEffect
private data structure and on to sub-classes.
Since we cannot add new API in a stable cycle, we need a private
function; we'll leave it there even when opening 1.7, since it's useful
for internal purposes.
Once upon a time, the land of Clutter had a stage singleton. It was
created automatically at initialization time and stayed around even
after the main loop was terminated. The singleton was content in
being all there was. There also was a global API to handle the
configuration of the stage singleton that would affect the behaviour
on other classes, signals and properties.
Then, an evil wizard came along and locked the stage singleton in his
black tower, and twisted it until it was possible to create new stages.
These new stages were pesky, and didn't have the same semantics of the
singleton: they didn't stay around when closed, or terminate the main
loop on delete events.
The evil wizard also started moving all the stage-related API from the
global context into class-specific methods.
Finally, the evil wizard cast a spell, and the stage singleton was
demoted to creation on demand - and until somebody called the
clutter_stage_get_default() function, the singleton remained in a limbo
of NULL pointers and undefined memory areas.
There was a last bit - literally - of information still held by the
global API; a tiny, little flag that disabled per-actor motion events.
The evil wizard added private accessors for it, and stored it inside the
stage private structure, in preparation for a deprecation that would
come in a future development cycle.
The evil wizard looked down upon the land of Clutter from the height of
his black tower; the lay of the land had been reshaped into a crucible
of potential, and the last dregs of the original force of creation were
either molded into new, useful shapes, or blasted away by the sheer fury
of his will.
All was good.
The clutter-id-pool.h header is private and not installed; yet, all the
clutter_id_pool_* symbols are public. Let's correct this oversight we've
been stringing along since forever.
Only allow access to the ClutterMainContext through the private
_clutter_context_get_default() function, so we can easily grep
it and remove the unwanted usage of the global context.
The shader stack held by ClutterMainContext should only be accessed
using functions, and not directly.
Since it's a stack, we can use stack-like operations: push, pop and
peek.
The _clutter_do_redraw() function should really be moved inside
ClutterStage, since all it does is calling private stage and
backend functions. This also allows us to change a long-standing
issue with a global fps counter for all stages, instead of a
per-stage one.
Let's try and start reducing the size of ClutterActorPrivate by moving
some optional, out-of-band data from it to GObject data.
The ShaderData structure is a prime candidate for this migration: it
does not need to be inspected by the actor, and its relationship with an
actor is transient and optional.
By attaching it to the actor's instance through g_object_set_data() we
neatly tie its lifetime to the instance, and we don't have to care
cleaning it up in the finalize()/dispose() implementation of
ClutterActor itself.
If an atlas texture's last reference is held by the journal or by the
last flushed pipeline then if an atlas migration is started it can
cause a crash. This is because the atlas migration will cause a
journal flush and can sometimes change the current pipeline which
means that the texture would be destroyed during migration.
This patch adds an extra 'post_reorganize' callback to the existing
'reorganize' callback (which is now renamed to 'pre_reorganize'). The
pre_reorganize callback is now called before the atlas grabs a list of
the current textures instead of after so that it doesn't matter if the
journal flush destroys some of those textures. The pre_reorganize
callback for CoglAtlasTexture grabs a reference to all of the textures
so that they can not be destroyed when the migration changes the
pipeline. In the post_reorganize callback the reference is removed
again.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2538
In _clutter_actor_queue_redraw_with_clip it has a local variable to
mark when a new paint volume for the clip is created so that it can be
freed when the function returns. However the actual code to free the
paint volume went missing in 3b789490d2 so the variable did
nothing. This patch just adds the free back in.
When Cogl debugging is disabled then the 'waste' variable is not used
so it throws a compiler warning. This patch removes the variable and
the value is calculated directly as the parameter to COGL_NOTE.
Some code was doing pointer arithmetic on the return value from
cogl_buffer_map which is void* pointer. This is a GCC extension so we
should try to avoid it. This patch adds casts to guint8* where
appropriate.
Based on a patch by Fan, Chun-wei.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2561
Among other assorted boneheadedness, the GType for GParamSpec is
called 'GParam'. Why? Who knows. I assume alcohol was involved,
but I honestly don't want to know.
This removes the last g-ir-scanner warning in Clutter.
This time, in Clutter core.
The ObjC standard library provides a type called 'id', which obviously
requires any library to either drop the useful shadowed variable warning
or stop using 'id' as a variable name.
Yes, it's almost unbearably stupid. Well, at least it's not 'index' in
string.h, or 'y2' in math.h.
Instead of directly banging GL to migrate textures the atlas now uses
the CoglFramebuffer API. It will use one of four approaches; it can
set up two FBOs and use _cogl_blit_framebuffer to copy between them;
it can use a single target fbo and then render the source texture to
the FBO using a Cogl draw call; it can use a single FBO and call
glCopyTexSubImage2D; or it can fallback to reading all of the texture
data back to system memory and uploading it again with a sub texture
update.
Previously GL calls were used directly because Cogl wasn't able to
create a framebuffer without a stencil and depth buffer. However there
is now an internal version of cogl_offscreen_new_to_texture which
takes a set of flags to disable the two buffers.
The code for blitting has now been moved into a separate file called
cogl-blit.c because it has become quite long and it may be useful
outside of the atlas at some point.
The 4 different methods have a fixed order of preference which is:
* Texture render between two FBOs
* glBlitFramebuffer
* glCopyTexSubImage2D
* glGetTexImage + glTexSubImage2D
Once a method is successfully used it is tried first for all subsequent
blits. The default can be overridden by setting the
environment variable COGL_ATLAS_DEFAULT_BLIT_MODE to one of the
following values:
* texture-render
* framebuffer
* copy-tex-sub-image
* get-tex-data
This adds a declaration for _cogl_is_texture_2d to the private header
so that it can be used in cogl-blit.c to determine if the target
texture is a simple 2D texture.
This adds a function called _cogl_texture_2d_copy_from_framebuffer
which is a simple wrapper around glCopyTexSubImage2D. It is currently
specific to the texture 2D backend.
This adds the _cogl_blit_framebuffer internal function which is a
wrapper around glBlitFramebuffer. The API is changed from the GL
version of the function to reflect the limitations provided by the
GL_ANGLE_framebuffer_blit extension (e.g., no scaling or mirroring).
This extension is the GLES equivalent of the GL_EXT_framebuffer_blit
extension except that it has some extra restrictions. We need to check
for some extension that provides glBlitFramebuffer so that we can
unconditionally use ctx->drv.pf_glBlitFramebuffer in both GL and GLES
code. Even with the restrictions, the extension provides enough
features for what Cogl needs.
Previously when _cogl_atlas_texture_migrate_out_of_atlas is called it
would unreference the atlas texture's sub-texture before calling
_cogl_atlas_copy_rectangle. This would leave the atlas texture in an
inconsistent state during the copy. This doesn't normally matter but
if the copy ends up doing a render then the atlas texture may end up
being referenced. In particular it would cause problems if the texture
is left in a texture unit because then Cogl may try to call
get_gl_texture even though the texture isn't actually being used for
rendering. To fix this the sub texture is now unrefed after the copy
call instead.
The current framebuffer is now internally separated so that there can
be a different draw and read buffer. This is required to use the
GL_EXT_framebuffer_blit extension. The current draw and read buffers
are stored as a pair in a single stack so that pushing the draw and
read buffer is done simultaneously with the new
_cogl_push_framebuffers internal function. Calling
cogl_pop_framebuffer will restore both the draw and read buffer to the
previous state. The public cogl_push_framebuffer function is layered
on top of the new function so that it just pushes the same buffer for
both drawing and reading.
When flushing the framebuffer state, the cogl_framebuffer_flush_state
function now takes a pointer to both the draw and the read
buffer. Anywhere that was just flushing the state for the current
framebuffer with _cogl_get_framebuffer now needs to call both
_cogl_get_draw_buffer and _cogl_get_read_buffer.
As noted in commit ce3f55292a an explicit glFlush is needed for
both glBlitFramebuffer and glXCopySubBuffer.
_clutter_backend_glx_blit_sub_buffer was already doing an explicit
flush when using glBlitFramebuffer, so just do it unconditionally
and remove the call from clutter_stage_glx_redraw.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2558
Since we realize on creation we need to unrealize on destruction. This
makes sure that the ClutterStageWindow implementation can tear down any
resource set up during the realization phase.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2559
* nobled/wayland-fixes2:
wayland: fix shm buffers
wayland: set renderable type on dummy surface
wayland: check for egl extensions explicitly
wayland: fall back to shm buffers if drm fails
wayland: add shm buffer code
wayland: make buffer handling generic
wayland: really fix buffer format selection
wayland: fix pixel format
wayland: clean up buffer creation code
wayland: don't require the surfaceless extensions
wayland: check for API-specific surfaceless extension
wayland: fix GLES context creation
wayland: use EGL_NO_SURFACE
wayland: update to new api
wayland: fix connecting to default socket
fix ClutterContainer docs
The 'in_clone_paint' parameter of the private function
_clutter_actor_set_in_clone_paint() shadowed the private function
in_clone_paint(). Rename this parameter to 'is_in_clone_paint' to remove
a compiler warning.
If an actor was partially off of the stage, it would be clipped because
of the stage viewport. This produces problems if you use an offscreen
effect that relies on the entire actor being rendered (e.g. shadows).
Expand the viewport in this scenario so that the offscreen-rendering isn't
clipped.
This fixes http://bugzilla.clutter-project.org/show_bug.cgi?id=2550
Replace the opacity_parent with an opacity_override variable, to allow
direct overriding of the paint opacity and simplify this mechanism
somewhat.
This also required a new private flag, in_clone_paint, to maintain the
functionality of the public function clutter_actor_is_in_clone_paint().
When pushing a framebuffer it would previously push
COGL_INVALID_HANDLE to the top of the framebuffer stack so that when
it later calls cogl_set_framebuffer it will recognise that the
framebuffer is different and replace the top with the new
pointer. This isn't ideal because it breaks the code to flush the
journal because _cogl_framebuffer_flush_journal is called with the
value of the old pointer which is NULL. That function was checking for
a NULL pointer so it wouldn't actually flush. It also would mean that
if you pushed the same framebuffer twice we would end up dirtying
state unnecessarily. To fix this cogl_push_framebuffer now pushes a
reference to the current framebuffer instead.
After a dependent framebuffer is added to a framebuffer it was never
getting removed. Once the journal for a framebuffer is flushed we no
longer depend on any framebuffers so the list should be cleared. This
was causing leaks of offscreens and textures.
Unlike glXSwapBuffers, glXCopySubBuffer and glBlitFramebuffer don't
issue an implicit glFlush() so we have to flush ourselves if we want the
request to complete in finite amount of time since otherwise the driver
can batch the command indefinitely.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2551
This adds a note to clarify that cogl_matrix_multiply allows you to
multiply the @a matrix in-place, so @a can equal @result but @b can't
equal @result.
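For example (a small illustrative snippet):

  CoglMatrix transform, rotation;

  cogl_matrix_init_identity (&transform);
  cogl_matrix_init_identity (&rotation);
  cogl_matrix_rotate (&rotation, 45.0f, 0.0f, 0.0f, 1.0f);

  /* Allowed: @a aliases @result, so the multiply happens in-place */
  cogl_matrix_multiply (&transform, &transform, &rotation);

  /* Not allowed: @b must not alias @result */
  /* cogl_matrix_multiply (&transform, &rotation, &transform); */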
When uploading the layer matrix to GL it wasn't first calling
glActiveTexture to set the right texture unit for the
layer. This would end up setting the texture matrix on whatever layer
happened to be previously active. This happened to work for
test-cogl-multitexture presumably because it was coincidentally
setting the layer matrix on the last used layer.
As the prelude to deprecation of the function in 1.8, let's move the
implementation to an internal function, and use that instead of the
public facing one.
The GQueue that stores the global events queue is handled all over the
place:
• the structure is created in _clutter_backend_init_events();
• the queue is handled in clutter-event.c, clutter-stage.c and
clutter-backend.c;
• ClutterStage::dispose cleans up the events associated with
the stage being destroyed;
• the queue is destroyed in ClutterBackend::dispose.
Since we need to have access to it in different places we cannot put it
inside ClutterBackendPrivate, hence it should stay in ClutterMainContext;
but we should still manage it from just one place - preferably by the
ClutterEvent API only.
In the future, we want event translators to be the way to handle events
in backends. For this reason, they should be a part of the base abstract
ClutterBackend class, and not an X11-only concept.
Instead of asking all backends to do that for us, we can call
ClutterStageWindow::redraw ourselves by default.
This changeset fixes all backends to actually do the right thing, and
move the stage implementation redraw inside the ClutterStageWindow
implementation itself.
We need to *write* to the shared memory, not read from it.
cogl_texture_from_data() is read-only, it doesn't keep
the data in sync with the texture.
Instead, we have to call cogl_texture_get_data() ourselves
to sync manually.
eglGetProcAddress() returns non-null function pointers
whether or not they're actually supported by the driver,
since it can be used before any driver gets loaded. So
we have to check if the extensions are advertised first,
which requires having an initialized display, so we split
the display creation code into its own function.
The exception to extension-checking is EGL_MESA_drm_display,
since by definition it's needed before any display is even
created.
Use both the MappingNotify event and the XKB XkbMapNotify event, if
we're compiled with XKB support.
This change is also useful for making ClutterKeymapX11 an event
translator and let it deal with XKB events internally like we do for
stage and input events.
Based on a patch by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
http://bugzilla.clutter-project.org/show_bug.cgi?id=2525
The redraw function might be called during destruction phase, when the
Stage state has not entirely been torn down. We need to be slightly more
resilient to that scenario.
The pipeline private data is accessed both from the private data set
on a CoglPipeline and the destroy notify function of a weak material
that the vertex buffer creates when it needs to override the wrap
mode. However when a CoglPipeline is destroyed, the CoglObject code
first removes all of the private data set on the object and then the
CoglPipeline code gets invoked to destroy all of the weak children. At
this point the vertex buffer's weak override destroy notify function
will get invoked and try to use the private data which has already
been freed causing a crash.
This patch instead adds a reference count to the pipeline private data
struct so that we can avoid freeing it until both the private data on
the pipeline has been destroyed and all of the weak materials are
destroyed.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2544
In cogl_pipeline_set_layer_combine_constant it was comparing whether
the new color is the same as the old color using a memcmp on the
constant_color parameter. However the combine constant is stored in
the layer data as an array of four floats but the passed in color is a
CoglColor (which is currently an array of four guint8s). This was
causing valgrind errors and presumably also the check for setting the
same color twice would always fail.
This patch makes it do the conversion to a float array upfront before
the comparison.
Instead of just setting the input device pointer in the private event
data, it should also set the field in the event sub-types, so that
direct access to the structures still works.
cogl_matrix_project_points and cogl_matrix_transform_points had an
optimization for the common case where the stride parameters exactly
match the size of the corresponding structures. The code for both, when
generated by gcc with -O2 on x86-64, uses two registers to hold the
addresses of the input and output arrays. In the strided version these
pointers are incremented by adding the value of a register and in the
packed version they are incremented by adding an immediate value. I
think the difference in cost here would be negligible and it may even
be faster to add a register.
Also GCC appears to retain the loop counter in a register for the
strided version but in the packed version it can optimize it out and
directly use the input pointer as the counter. I think it would be
possible to reorder the code a bit to explicitly use the input pointer
as the counter if this were a problem.
Getting rid of the packed versions tidies up the code a bit and it
could potentially be faster if the code differences are small and we
get to avoid an extra conditional in cogl_matrix_transform_points.
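For the packed case the strided entry point is simply called with the
structure size as the stride; a sketch (the prototype here is an
assumption based on the description above, not a verified header):

  #include <cogl/cogl.h>

  typedef struct { float x, y, z; } Point3;

  static void
  transform_packed_points (const CoglMatrix *matrix,
                           const Point3 *points_in,
                           Point3 *points_out,
                           int n_points)
  {
    cogl_matrix_transform_points (matrix,
                                  3,               /* components per point */
                                  sizeof (Point3), /* input stride */
                                  points_in,
                                  sizeof (Point3), /* output stride */
                                  points_out,
                                  n_points);
  }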
Use a DeviceManager sub-class similar to the Win32 backend one, which
creates two InputDevices: a core pointer and a core keyboard.
The event translation code then uses these two devices to fill out the
.device field of the events.
Throw in enter/leave tracking, given that we need to update the device's
state.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2490
Implementation of event loop which works with GLib events, native OS X
events and Clutter events.
The event loop source code comes from the equivalent code in the Quartz
GDK backend from GTK+ 2.22.1, which is LGPL v2.1+ and thus compatible
with Clutter's licensing terms.
The code has been tested with libsoup, which previously did not work
together with Clutter.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
http://bugzilla.clutter-project.org/show_bug.cgi?id=2490
Wayland visuals refer to a pixel's bytes in order from
most significant to least significant, while the
one-byte-per-component Cogl formats refer to the order
of increasing memory addresses, so converting between
the two depends on the system's endianness.
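For example, a premultiplied ARGB visual described most-significant-byte
first maps to different byte-oriented Cogl formats depending on the host
endianness (illustrative sketch only):
#include <glib.h>
#include <cogl/cogl.h>

static CoglPixelFormat
argb_visual_to_cogl_format (void)
{
#if G_BYTE_ORDER == G_LITTLE_ENDIAN
  return COGL_PIXEL_FORMAT_BGRA_8888_PRE; /* bytes in memory: B, G, R, A */
#else
  return COGL_PIXEL_FORMAT_ARGB_8888_PRE; /* bytes in memory: A, R, G, B */
#endif
}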
The height was being set from the ClutterGeometry in some parts
and from the stage in others. Since both callers of this
function pass &stage_wayland->allocation as the geometry anyway,
the stage argument isn't really needed.
Since we need to find the stage from the X11 Window, it's better to use
a static hashmap that gets updated every time the ClutterStageX11:xwin
member is changed, instead of iterating over every stage handled by the
global ClutterStageManager singleton.
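A minimal sketch of the idea, assuming a static GHashTable keyed by the X
Window (the real helpers in clutter-stage-x11.c may be named differently):
static GHashTable *stages_by_xwin = NULL; /* Window -> ClutterStageX11 */

static void
stage_x11_set_xwin (ClutterStageX11 *stage_x11, Window xwin)
{
  if (stages_by_xwin == NULL)
    stages_by_xwin = g_hash_table_new (NULL, NULL);

  if (stage_x11->xwin != None)
    g_hash_table_remove (stages_by_xwin, GUINT_TO_POINTER (stage_x11->xwin));

  stage_x11->xwin = xwin;
  g_hash_table_insert (stages_by_xwin, GUINT_TO_POINTER (xwin), stage_x11);
}

static ClutterStageX11 *
stage_x11_from_xwin (Window xwin)
{
  if (stages_by_xwin == NULL)
    return NULL;

  return g_hash_table_lookup (stages_by_xwin, GUINT_TO_POINTER (xwin));
}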
Clutter should just require that the windowing system used by a backend
adds a device to the stage when the device enters, and removes it from
the stage when the device leaves; with this information, we can
synthesize every crossing event and update the device state without
other intervention from the backend-specific code.
The generation of additional crossing events for actors that are
covering the stage at the coordinates of the crossing event should be
delegated to the event processing code.
The x11 and win32 backends need to be modified to relay the enter and
leave events from the windowing system.
When synthesizing events coming from input devices it should be
possible to just call a setter function, to avoid a huge switch
on the type of the event.
Clutter should also store the device pointer inside the private
data, for faster access of the pointer in allocated events.
Finally, the get_device_id() and get_device_type() accessors should
just be wrappers around clutter_event_get_device(), to reduce the
amount of code duplication.
Since we access it in order to get the X11 Display pointer, it makes
sense to have the ClutterBackendX11 already available inside the
ClutterStageX11 structure, and avoid the pattern:
ClutterBackend *backend = clutter_get_default_backend ();
ClutterBackendX11 *backend_x11 = CLUTTER_BACKEND_X11 (backend);
which costs us a function call, a type cast and an unused variable.
Adapt to changes from this Wayland commit:
"Update surface.attach and change surface.map to surface.map_toplevel"
(82da52b15b49da3f3c7b4bd85d334ddfaa375ebc)
When we receive a ConfigureNotify event that doesn't affect the size
of the window (only the position) then we were still calling
clutter_stage_ensure_viewport which ends up queueing a full stage
redraw. This patch makes it only ensure the viewport when the size
changes, as it already did to avoid queueing a relayout.
It now also avoids setting the clipped-redraws cool-off period when
the window only moves, under the assumption that it's only necessary
for size changes.
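The intent is roughly the following (simplified; the field names are
approximate, not the exact ones in clutter-stage-x11.c):
static void
handle_configure_notify (ClutterStageX11 *stage_x11, XConfigureEvent *xev)
{
  if (xev->width != stage_x11->xwin_width ||
      xev->height != stage_x11->xwin_height)
    {
      stage_x11->xwin_width = xev->width;
      stage_x11->xwin_height = xev->height;

      /* only a real resize needs a relayout, a viewport update and
         the clipped-redraws cool-off period */
      clutter_actor_queue_relayout (CLUTTER_ACTOR (stage_x11->wrapper));
      clutter_stage_ensure_viewport (stage_x11->wrapper);
    }
  /* a pure move falls through without queueing any redraw */
}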
Since the XI2 device manager code is going to be compiled only on
POSIX compliant systems, we can safely assume the presence of stdint.h
and include it unconditionally.
CLUTTER_BIND_POSITION and CLUTTER_BIND_SIZE are two convenience
enumeration values for binding x and y, and width and height
respectively, using a single ClutterBindConstraint.
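For example, to make one actor follow both the position and the size of
another using the ClutterBindConstraint API (the actor names here are just
illustrative):
static void
bind_geometry (ClutterActor *overlay, ClutterActor *source)
{
  clutter_actor_add_constraint (overlay,
      clutter_bind_constraint_new (source, CLUTTER_BIND_POSITION, 0.0));
  clutter_actor_add_constraint (overlay,
      clutter_bind_constraint_new (source, CLUTTER_BIND_SIZE, 0.0));
}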
When copying COMBINE state in
_cogl_pipeline_layer_init_multi_property_sparse_state we would read some
state from the destination layer (invalid data potentially), then
redundantly set the value back on the destination. This was picked up by
valgrind, and the code is now more careful about how it references the
src layer vs the destination layer.
There is currently a problem with per-framebuffer journals in that it's
possible to create a framebuffer from a texture which then gets rendered
to, but the framebuffer (and corresponding journal) can be freed before
the texture gets used to draw with.
Conceptually we want to make sure when freeing a framebuffer that - if
it is associated with a texture - we flush the journal as the last thing
before really freeing the framebuffer's meta data. Technically though
this is awkward to implement since the obvious mechanism for us to be
notified about the framebuffer's destruction (by setting some user data
internally with a callback) notifies when the framebuffer has a
ref-count of 0. This means we'd have to be careful what we do with the
framebuffer to consider e.g. recursive destruction; anything that would
set more user data on the framebuffer while it is being destroyed and
ensuring nothing else gets notified of the framebuffer's destruction
before the journal has been flushed.
For simplicity, for now, this patch provides another solution which is
to flush framebuffer journals whenever we switch away from a given
framebuffer via cogl_set_framebuffer or cogl_push/pop_framebuffer. The
disadvantage of this approach is that we can't batch all the geometry of
a scene that involves intermediate renders to offscreen framebuffers.
Clutter is doing this more and more with applications that use the
ClutterEffect APIs so this is a shame. Hopefully this will only be a
stop-gap solution while we consider how to reliably support journal
logging across framebuffer changes.
When flushing a clip stack that contains more than one rectangle which
needs to use the stencil buffer the code takes a different path so
that it can combine the new rectangle with the existing contents of
the stencil buffer. However it was not correctly flushing the
modelview and projection matrices so that rectangle would be in the
wrong place.
This adds a COGL_DEBUG=clipping option that reports how the clip is
being flushed. This is needed to determine whether the scissor,
stencil, clip planes or software clipping is being used.
The CoglDebugFlags are now stored in an array of unsigned ints rather
than a single variable. The flags are accessed using macros instead of
directly peeking at the cogl_debug_flags variable. The index values
are stored in the enum rather than the actual mask values so that the
enum doesn't need to be more than 32 bits wide. The hope is that the
code to determine the index into the array can be optimized out by the
compiler so it should have exactly the same performance as the old
code.
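A rough sketch of the scheme (the actual macro and variable names in
cogl-debug.h differ; COGL_DEBUG_N_FLAGS stands for a count at the end of
the enum):
#define COGL_DEBUG_N_INTS ((COGL_DEBUG_N_FLAGS + 31) / 32)

extern unsigned int cogl_debug_flags[COGL_DEBUG_N_INTS];

/* with a constant flag the index and mask fold to constants, so this
   should cost the same as testing a single variable */
#define COGL_DEBUG_ENABLED(flag) \
  (cogl_debug_flags[(flag) / 32] & (1U << ((flag) % 32)))

#define COGL_DEBUG_SET_FLAG(flag) \
  (cogl_debug_flags[(flag) / 32] |= (1U << ((flag) % 32)))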
The lighting parameters such as the diffuse and ambient colors were
previously only flushed in the fixed vertend. This meant that if a
vertex shader was used then they would not be set. The lighting
parameters are uniforms which are just as useful in a fragment shader
so it doesn't really make sense to set them in the vertend. They are
now flushed in the common cogl-pipeline-opengl code but the code is
#ifdef'd for GLES2 because they need to be part of the progend in that
case.
The uniforms for the alpha test reference value and point size on
GLES2 are updated using similar code. This generalizes the code so
that there is a static array of predefined builtin uniforms which
contains the uniform name, a pointer to a function to get the value
from the pipeline, a pointer to a function to update the uniform and a
flag representing which CoglPipelineState change affects the
uniform. The uniforms are then updated in a loop. This should simplify
adding more builtin uniforms.
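Schematically, the table and the update loop look something like this
(names, signatures and state flags are illustrative, not the exact
internal ones):
typedef struct
{
  const char *uniform_name;
  float (* get_value) (CoglPipeline *pipeline);        /* read from the pipeline */
  void (* update_uniform) (int location, float value); /* write the GL uniform */
  unsigned long change_flag;  /* which pipeline state change dirties it */
} BuiltinUniform;

static float get_alpha_reference (CoglPipeline *pipeline);     /* hypothetical */
static float get_point_size (CoglPipeline *pipeline);          /* hypothetical */
static void  update_float_uniform (int location, float value); /* hypothetical */

static const BuiltinUniform builtin_uniforms[] =
{
  { "_cogl_alpha_reference", get_alpha_reference, update_float_uniform,
    COGL_PIPELINE_STATE_ALPHA_FUNC },
  { "_cogl_point_size", get_point_size, update_float_uniform,
    COGL_PIPELINE_STATE_POINT_SIZE },
};

static void
update_builtin_uniforms (CoglPipeline *pipeline,
                         unsigned long changed_state,
                         const int *uniform_locations)
{
  unsigned int i;

  /* only touch uniforms whose corresponding state actually changed */
  for (i = 0; i < G_N_ELEMENTS (builtin_uniforms); i++)
    if (changed_state & builtin_uniforms[i].change_flag)
      builtin_uniforms[i].update_uniform (uniform_locations[i],
                                          builtin_uniforms[i].get_value (pipeline));
}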
The builtin uniforms are accessible from either the vertex shader or
the fragment shader so we should define them in the common
section. This doesn't really matter for the current list of uniforms
because it's pretty unlikely that you'd want to access the matrices
from the fragment shader, but for other builtins such as the lighting
material properties it makes sense.
Between Clutter 0.8 and 1.0, the new-frame signal of ClutterTimeline
changed the second parameter to be an elapsed time in milliseconds
rather than the frame number. However a few places in clutter were
still calling the parameter 'frame_num' which is a bit
misleading. Notably the signature for the signal class closure in the
header was using the wrong name. This changes them to use 'msecs'.
ClutterTimeline has special handling for the first time do_tick is
called which was not emitting a new-frame signal. This meant that an
application which directly uses the timeline would have to manually
setup the initial state of an animation after starting a timeline to
avoid painting a single frame with the wrong state. It seems to make
more sense to instead emit the new-frame signal so that the
application always sees a new-frame when the progress changes before a
paint.
This adds a custom "rows" property that allows defining the rows of a
ClutterModel. A single row can be either an array of all columns or an
object with column-name : column-value pairs.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2528
Allow 'abusing' the clutter_script_parse_node function by calling it
with an initialized GValue instead of a valid GParamSpec argument to
obtain the correct typed value from the json node.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2528
* xi2: (41 commits)
test-devices: Actually print the axis data
device-manager/xi2: Sync the stage of source devices
event: Clean up clutter_event_copy()
device: unset the axes array pointer when resetting
device-manager/xi2: Fix device hotplugging
glx: Clean up GLX implementation
device/x11: Store min/max keycode in the XI device class
x11: Hide all private symbols
docs: More documentation fixes for InputDevice
*/event: Never manipulate the event queue directly
win32: Update DeviceManager device creation
device: Allow enabling/disabling non-master devices
backend/eglx: Add newly created stages to the translators
device: Add more doc annotations
device: Use a double for translate_axis() argument
test-devices: Clean up and show axes data
event: Fix up clutter_event_copy()
device/xi2: Translate the axis data after setting devices
device: Add more accessors for properties
docs: Update API reference
...
When we added the texture->framebuffers member a _cogl_texture_init
function was added to initialize the list of framebuffers associated
with a texture to NULL. All the backends were updated except the
x11 tfp backend. This was causing crashes in test-pixmap.
This is part of a broader cleanup of some of the experimental Cogl API.
One of the reasons for this particular rename is to reduce the verbosity
of using the API. Another reason is that CoglVertexArray is going to be
renamed CoglAttributeBuffer and we want to help emphasize the
relationship between CoglAttributes and CoglAttributeBuffers.
We have a bunch of experimental convenience functions like
cogl_primitive_p2/p2t2 that have corresponding vertex structures but it
seemed a bit odd to have the vertex annotation e.g. "P2T2" be an infix
of the type like CoglP2T2Vertex instead of be a postfix like
CoglVertexP2T2. This switches them all to follow the postfix naming
style.
COGL_DEBUG=disable-fast-read-pixel can be used to disable the
optimization for reading a single pixel colour back by looking at the
geometry in the journal and not involving the GPU. With this disabled we
will always flush the journal, render to the framebuffer and then use
glReadPixels to get the result.
This adds a transparent optimization to cogl_read_pixels for when a
single pixel is being read back and it happens that all the geometry of
the current frame is still available in the framebuffer's associated
journal.
The intention is to indirectly optimize Clutter's render based picking
mechanism in such a way that the 99% of cases where scenes are comprised
of trivial quad primitives that can easily be intersected we can avoid
the latency of kicking a GPU render and blocking for the result when we
know we can calculate the result manually on the CPU probably faster
than we could even kick a render.
A nice property of this solution is that it maintains all the
flexibility of the render based picking provided by Clutter and it can
gracefully fall back to GPU rendering if actors are drawn using anything
more complex than a quad for their geometry.
It seems worth noting that there is a limitation to the extensibility of
this approach in that it can only optimize picking against geometry
that passes through Cogl's journal which isn't something Clutter
directly controls. For now though this really doesn't matter since
basically all apps should end up hitting this fast-path. The current
idea to address this longer term would be a pick2 vfunc for ClutterActor
that can support geometry and render based input regions of actors and
move this optimization up into Clutter instead.
Note: currently we don't have a primitive count threshold to consider
that there could be scenes with enough geometry for us to compensate for
the cost of kicking a render and determine a result more efficiently by
utilizing the GPU. We don't currently expect this to be common though.
Note: in the future it could still be interesting to revive something
like the wip/async-pbo-picking branch to provide an asynchronous
read-pixels based optimization for Clutter picking in cases where more
complex input regions that necessitate rendering are in use or if we do
add a threshold for rendering as mentioned above.
Both cogl_matrix_transform_points and _project_points take points_in and
points_out arguments and explicitly allow pointing to the same array
(i.e. to transform in-place). The implementation of the various internal
transform functions, though, was not handling this possibility, so it
was possible to reference partially transformed vertex values as if
they were original input values, leading to incorrect results. This patch
ensures we take a temporary copy of the current input point when
transforming.
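Simplified, the fix amounts to copying each input point before writing the
result, so that in-place transforms stay correct (the real internal helpers
also handle strides and other input sizes; this only shows the aliasing fix):
typedef struct { float x, y, z; } Point3f; /* simplified stand-in */

static void
transform_points_xyz (const CoglMatrix *m,
                      const Point3f *points_in,
                      Point3f *points_out,
                      int n_points)
{
  int i;

  for (i = 0; i < n_points; i++)
    {
      Point3f p = points_in[i]; /* copy: the input may alias the output */

      points_out[i].x = m->xx * p.x + m->xy * p.y + m->xz * p.z + m->xw;
      points_out[i].y = m->yx * p.x + m->yy * p.y + m->yz * p.z + m->yw;
      points_out[i].z = m->zx * p.x + m->zy * p.y + m->zz * p.z + m->zw;
    }
}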
This adds a utility function that can determine if a given point
intersects an arbitrary polygon, by counting how many edges a
"semi-infinite" horizontal ray crosses from that point. The plan is to
use this for a software based read-pixel fast path that avoids using the
GPU to rasterize journaled primitives and can instead intersect a point
being read with quads in the journal to determine the correct color.
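The test itself is the usual even-odd ray-crossing rule; roughly (the real
helper works on the journal's vertex layout rather than a flat float array):
static gboolean
point_in_polygon (float x, float y,
                  const float *poly, /* x0,y0, x1,y1, ... */
                  int n_vertices)
{
  gboolean inside = FALSE;
  int i, j;

  for (i = 0, j = n_vertices - 1; i < n_vertices; j = i++)
    {
      float xi = poly[i * 2 + 0], yi = poly[i * 2 + 1];
      float xj = poly[j * 2 + 0], yj = poly[j * 2 + 1];

      /* does edge (j -> i) straddle the ray's y, and does the crossing
         point lie to the right of the point being tested? */
      if (((yi > y) != (yj > y)) &&
          (x < (xj - xi) * (y - yi) / (yj - yi) + xi))
        inside = !inside;
    }

  return inside;
}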
This adds a stop-gap mechanism for Cogl to know when the window system
is requested to present the current backbuffer to the frontbuffer by
adding a _cogl_swap_buffers_notify function that backends are now
expected to call right after issuing the equivalent request to OpenGL
via the platform's OpenGL binding layer. This (blindly) updates all the
backends to call this new function.
For now Cogl doesn't do anything with the notification but the intention
is to use it as part of a planned read-pixel optimization which will
need to reset some state at the start of each new frame.
Instead of having _cogl_get/set_clip stack which reference the global
CoglContext this instead makes those into CoglClipState method functions
named _cogl_clip_state_get/set_stack that take an explicit pointer to a
CoglClipState.
This also adds _cogl_framebuffer_get/set_clip_stack convenience
functions that avoid having to first get the ClipState from a
framebuffer then the stack from that - so we can maintain the
convenience of _cogl_get_clip_stack.
This adds an internal function to be able to query the screen space
bounding box of the current clip entries contained in a given
CoglClipStack.
This bounding box which is cheap to determine can be useful to know the
largest extents that might be updated while drawing with this clip
stack.
For example the plan is to use this as part of an optimized read-pixel
path handled on the CPU which will need to track the currently valid
extents of the last call to cogl_clear().
Instead of having a single journal per context, we now have a
CoglJournal object for each CoglFramebuffer. This means we now don't
have to flush the journal when switching/pushing/popping between
different framebuffers so for example a Clutter scene that involves some
ClutterEffect actors that transiently redirect to an FBO can still be
batched.
This also allows us to track state in the journal that relates to the
current frame of its associated framebuffer which we'll need for our
optimization for using the CPU to handle reading a single pixel back
from a framebuffer when we know the whole scene is currently comprised
of simple rectangles in a journal.
This adds an internal alternative to cogl_object_set_user_data that also
passes an instance pointer to destroy notify callbacks.
When setting private data on a CoglObject it's often desirable to know
the instance being destroyed when we are being notified to free the
private data due to the object being freed. The typical solution to this
is to track a pointer to the instance in the private data itself so it
can be identified but that usually requires an extra micro allocation
for the private data that could have been avoided if only the callback
were given an instance pointer.
The new internal _cogl_object_set_user_data passes the instance pointer
as a second argument which means it is ABI compatible for us to layer
the public version on top of this internal function.
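In other words, the internal variant looks roughly like this (exact names
and types may differ slightly):
/* the destroy notify also receives the instance being destroyed */
typedef void (* CoglUserDataDestroyInternalCallback) (void *user_data,
                                                      void *instance);

void
_cogl_object_set_user_data (CoglObject *object,
                            CoglUserDataKey *key,
                            void *user_data,
                            CoglUserDataDestroyInternalCallback destroy);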
This moves the implementation of cogl_clear into cogl-framebuffer.c as
two new internal functions _cogl_framebuffer_clear and
_cogl_framebuffer_clear4f. It's not clear if this is what the API will
look like as we make more of the CoglFramebuffer API public due to the
limitations of using flags to identify buffers when framebuffers may
contain any number of ancillary buffers but conceptually it makes some
sense to tie the operation of clearing a color buffer to a framebuffer.
The short term intention is to enable tracking the current clear color
as a property of the framebuffer as part of an optimization for reading
back single pixels when the geometry is simple enough that we can
compute the result quickly on the CPU. (If the point doesn't intersect
any geometry we'll need to return the last clear color.)
Hierarchy and Device changed events come through with the X window set
to be the root window, not the stage window. We need to whitelist them
so that we can actually support hotplugging and device changes.
The x11 backend exposes a lot of symbols that are meant to only be used
when implementing a subclassed backend, like the glx and eglx ones.
The uninstalled headers are also filled with cruft declarations of
functions long since removed.
Let's try to clean up this mess.
Slave and floating devices should always be disabled, and not deliver
events to the scene. It is up to the user to enable non-master devices
and handle events coming from them.
ClutterInputDevice gets a new :enabled property, defaulting to FALSE;
when a device manager creates a new device it has to set it to TRUE if
the :device-mode property is set to CLUTTER_INPUT_MODE_MASTER.
The main event queue entry point, _clutter_event_push(), will
automatically discard events coming from disabled devices.
CLUTTER_BUTTON_* and CLUTTER_MOTION event types have axes data attached
to them, so we want to expose a common ClutterEvent method for
extracting that data.
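For example (assuming the new clutter_event_get_axes() accessor):
static void
print_axes (const ClutterEvent *event)
{
  guint n_axes, i;
  gdouble *axes = clutter_event_get_axes (event, &n_axes);

  for (i = 0; i < n_axes; i++)
    g_print ("axis %u: %g\n", i, axes[i]);
}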
The ClutterStageX11 implementation does most of the heavy lifting, so
subclasses like ClutterStageGLX and ClutterStageEGL do not need to
handle things like creating the stage Window and selecting events; just
chaining up and using the internal API will suffice.
Undeprecate the XInput-related X11 API: since we don't enable XI support
by default we still need to ask for it, and see if we have it after the
backend initialization sequence.
Event translation is now done where it belongs: we don't need a massive
switch in a file with direct access to private structure members.
So long, event_translate(); and thanks for all the fish.
We ask XI2 to get the client pointer for CLUTTER_POINTER_DEVICE, and
we use the attached keyboard device for CLUTTER_KEYBOARD_DEVICE. For
everything else, we return NULL.
We keep the symbol in the public header, but the definition is now
private. You could not sub-class InputDevice anyway, without the
instance structure, and the lack of padding in the class made actually
implementing devices in backends really hard.
This is a lump commit that is fairly difficult to break down without
either breaking bisecting or breaking the test cases.
The new design for handling X11 event translation works this way:
- ClutterBackend::translate_event() has been added as the central
point used by a ClutterBackend implementation to translate a
native event into a ClutterEvent;
- ClutterEventTranslator is a private interface that should be
implemented by backend-specific objects, like stage
implementations and ClutterDeviceManager sub-classes, and
allows dealing with class-specific event translation;
- ClutterStageX11 implements EventTranslator, and deals with the
stage-relative X11 events coming from the X11 event source;
- ClutterStageGLX overrides EventTranslator, in order to
deal with the INTEL_GLX_swap_event extension, and it chains up
to the X11 default implementation;
- ClutterDeviceManagerX11 has been split into two separate classes,
one that deals with core and (optionally) XI1 events, and the
other that deals with XI2 events; the selection is done at run-time,
since the core+XI1 and XI2 mechanisms are mutually exclusive.
All the other backends we officially support still use their own
custom event source and translation function, but the end goal is to
migrate them to the translate_event() virtual function, and have the
event source be a shared part of Clutter core.