This function should only need to be called in exceptional circumstances
since Cogl can normally determine internally when a flush is necessary.
As an optimization Cogl drawing functions may batch up primitives
internally, so if you are trying to use raw GL outside of Cogl you stand a
better chance of being successful if you ask Cogl to flush any batched
geometry before making your state changes.
cogl_flush() ensures that the underlying driver is issued all the commands
necessary to draw the batched primitives. It provides no guarantees about
when the driver will complete the rendering.
It also provides no guarantees about the GL state upon returning; to avoid
confusing Cogl you should aim to restore any state you change before
resuming use of Cogl.
If you are making state changes with the intention of affecting Cogl drawing
primitives you are 100% on your own since you stand a good chance of
conflicting with Cogl internals. For example clutter-gst, which currently
uses direct GL calls to bind ARBfp programs, will very likely break when Cogl
starts to use ARBfp programs internally for the material API, but for now it
can use cogl_flush() to at least ensure that its ARBfp program isn't applied
to additional primitives.
This does not provide a robust, generalized solution for safe use of raw GL;
mixing raw GL with Cogl remains very much discouraged.
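To make the intended pattern concrete, here is a minimal sketch; the
particular GL enable touched here is purely illustrative, not something the
API requires:

  cogl_flush ();                       /* emit any batched Cogl primitives */

  glEnable (GL_POLYGON_OFFSET_FILL);   /* your raw GL state changes        */
  /* ... your raw GL drawing ... */
  glDisable (GL_POLYGON_OFFSET_FILL);  /* restore state before Cogl is used again */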
Previously the journal was always flushed at the end of
_cogl_rectangles_with_multitexture_coords (i.e. at the end of any
cogl_rectangle* call), but now we have broadened the potential for batching
geometry. In ideal circumstances we will only flush once per scene.
In summary the journal works like this:
When you use any of the cogl_rectangle* APIs, nothing is emitted to the
GPU at that point; we just log one or more quads into the journal. A
journal entry consists of the quad coordinates, an associated material
reference, and a modelview matrix (see the sketch after the list below).
Ideally the journal only gets flushed once at the end of a scene, but in
fact there are things to consider that may cause unwanted flushing,
including:
- modifying materials mid-scene
This is because each quad in the journal has an associated material
reference (i.e. not a copy), so if you try to modify a material that is
already referenced in the journal we force a flush first.
NOTE: For now this means you should avoid using cogl_set_source_color()
since that currently uses a single shared material. Later we
should change it to use a pool of materials that is recycled
when the journal is flushed.
- modifying any state that isn't currently logged, such as depth, fog and
backface culling enables.
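For reference, a journal entry can be pictured roughly like this. This is a
hypothetical sketch: the struct name, field names and layout are illustrative
and do not match the real cogl-journal.c code, though CoglHandle and
CoglMatrix are the actual Cogl types involved:

  typedef struct
  {
    float      x_1, y_1, x_2, y_2;  /* the logged quad coordinates           */
    CoglHandle material;            /* referenced, not copied                */
    CoglMatrix modelview;           /* the modelview at the time of logging  */
  } JournalEntry;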
The first thing that happens when flushing is to upload all the vertex data
associated with the journal into a single VBO.
We then go through a process of splitting up the journal into batches that
have compatible state so they can be emitted to the GPU together. This is
currently broken up into 3 levels so we can stagger the state changes:
1) we break the journal up according to changes in the number of material layers
associated with logged quads. The number of layers in a material determines
the stride of the associated vertices, so we have to update our vertex
array offsets at this level (i.e. calling gl{Vertex,Color}Pointer etc.).
2) we further split batches up according to material compatibility (e.g.
materials with different textures). We flush material state at this level.
3) Finally we split batches up according to modelview changes. At this level
we update the modelview matrix and actually emit the draw command.
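In sketch form, the flush pass described above looks roughly like the
following. All of the names here (the Batch struct and the helper functions)
are hypothetical and only stand in for the real code in cogl-journal.c:

  typedef struct _Batch
  {
    struct _Batch *children;   /* sub-batches at the next level */
    int            n_children;
  } Batch;

  /* hypothetical helpers, not the real internal API */
  void upload_journal_vbo       (void);
  void setup_vertex_pointers    (Batch *layer_batch);   /* gl{Vertex,Color}Pointer etc. */
  void flush_material_state     (Batch *material_batch);
  void flush_modelview_and_draw (Batch *modelview_batch);

  static void
  journal_flush (Batch *layer_batches, int n_layer_batches)
  {
    int i, j, k;

    upload_journal_vbo ();  /* one VBO holding every logged vertex */

    for (i = 0; i < n_layer_batches; i++)            /* 1) n_layers / vertex stride */
      {
        Batch *lb = &layer_batches[i];

        setup_vertex_pointers (lb);

        for (j = 0; j < lb->n_children; j++)         /* 2) material state */
          {
            Batch *mb = &lb->children[j];

            flush_material_state (mb);

            for (k = 0; k < mb->n_children; k++)     /* 3) modelview + draw call */
              flush_modelview_and_draw (&mb->children[k]);
          }
      }
  }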
This commit is largely about putting the initial design in place; this will be
followed by other changes that take advantage of the extended batching.
An implementation of realize() never needs to set the
CLUTTER_ACTOR_REALIZED flag, though it can unset the flag if
things fail unexpectedly. (Previously, stage backend implementations
had to do this since clutter_actor_realize() wasn't used; this
is no longer the case.)
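For example, a backend's realize() might now look roughly like this; a hedged
sketch in which create_native_window() is a hypothetical helper standing in
for the backend-specific work:

  static void
  example_stage_realize (ClutterActor *actor)
  {
    /* create_native_window() is hypothetical: whatever backend-specific
     * work actually realizes the stage goes here */
    if (!create_native_window (actor))
      {
        /* only on unexpected failure do we touch the flag */
        CLUTTER_ACTOR_UNSET_FLAGS (actor, CLUTTER_ACTOR_REALIZED);
        return;
      }

    /* on success we deliberately do NOT set CLUTTER_ACTOR_REALIZED;
     * clutter_actor_realize() takes care of that for us */
  }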
http://bugzilla.openedhand.com/show_bug.cgi?id=1634
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask of flags. This gives us room for expansion
without breaking API/ABI, and allows us to pass more information to
the allocation process than just changes of absolute origin.
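For illustration, the new virtual function could take a shape along these
lines; the enum and flag names are indicative only, not a final API:

  typedef enum
  {
    CLUTTER_ALLOCATION_NONE         = 0,
    CLUTTER_ABSOLUTE_ORIGIN_CHANGED = 1 << 1
    /* further flags can be added here without breaking API/ABI */
  } ClutterAllocationFlags;

  void (* allocate) (ClutterActor           *actor,
                     const ClutterActorBox  *box,
                     ClutterAllocationFlags  flags);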
With the recent change to internal floating point values, ClutterUnit
has become a redundant type, defined to be a float. All integer entry
points are being internally converted to floating point values to be
passed to the GL pipeline with the least amount of conversion.
ClutterUnit is thus exposed as just a "pixel with fractional bits",
and not -- as users might think -- as a generic, resolution and device
independent unit. Not that it ever was the case, but a significant number
of people were convinced it did provide this "feature", and were flummoxed
by the mere existence of this type.
So, having ClutterUnit exposed in the public API doubles the entry
points and has the following disadvantages:
- we have to maintain twice the amount of entry points in ClutterActor
- we still do an integer-to-float implicit conversion
- we introduce a weird impedance mismatch between pixels and "pixels with
fractional bits"
- language bindings will have to choose what to bind, and resort
to manually overriding the API
+ *except* for language bindings based on GObject-Introspection, as
they cannot do manual overrides and thus will replicate the entire
set of entry points
For these reasons, we should coalesce every Actor entry point for
pixels and for ClutterUnit into a single entry point taking a float,
like:
  void   clutter_actor_set_x      (ClutterActor *self,
                                   gfloat        x);

  void   clutter_actor_get_size   (ClutterActor *self,
                                   gfloat       *width,
                                   gfloat       *height);

  gfloat clutter_actor_get_height (ClutterActor *self);
etc.
The issues I have identified are:
- we'll have two cases of compiler warnings:
- printf() format of the return values from %d to %f
- clutter_actor_get_size() taking floats instead of unsigned ints
- we'll have a problem with varargs when passing an integer instead
of a floating point value, except on 64-bit platforms where the
size of a float is the same as the size of an int (see the sketch
below)
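A hedged illustration of the varargs pitfall, assuming "x" ends up as a
float property on ClutterActor:

  /* wrong: the integer 100 is collected as a double by the varargs
   * machinery, so the property value ends up as garbage */
  g_object_set (actor, "x", 100, NULL);

  /* right: pass a floating point literal (or cast explicitly) */
  g_object_set (actor, "x", 100.0, NULL);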
To be clear: the *intent* of the API should not change -- we still use
pixels everywhere -- but:
- we remove ambiguity in the API with regard to pixels and units
- we remove entry points we get to maintain for the whole 1.0
version of the API
- we make things simpler to bind for both manual language bindings
and automatic (gobject-introspection based) ones
- we have the simplest API possible while still exposing the
capabilities of the underlying GL implementation
* clutter/osx/clutter-osx.h (_clutter_event_osx_put)
* clutter/osx/clutter-event-osx.c (clutter_event_osx_translate,
NSEvent:clutterStage:)
* clutter/osx/clutter-stage-osx.c (EVENT_HANDLER): Since events are
delivered to ClutterGLView, pass the associated ClutterStage directly
to event translation. Avoids relying on being embedded in
ClutterGLWindow, which makes it easier to implement clutter-gtk.
Bug #911 - OSX: add multistage support
* clutter/osx/clutter-backend-osx.{c,h}
(clutter_backend_osx_init_stage, clutter_backend_osx_get_stage,
clutter_backend_osx_redraw, clutter_backend_osx_create_stage,
clutter_backend_osx_ensure_context, clutter_backend_osx_class_init,
clutter_backend_osx_dispose, ClutterGLView:drawRect:):
* clutter/osx/clutter-stage-osx.{c,h} (clutter_stage_osx_realize,
ClutterGLWindow:setFrameSize:):
Adapt to the new multistage backend API. Don't keep a pointer to
the default stage. Derive from ClutterActor instead of ClutterStage.
Implement ClutterStageWindow interface. Paint, resize and
otherwise manipulate the wrapper rather than self when necessary.
(clutter_backend_post_parse): Create our singleton GL context
here. We could probably create the context when the default
stage is created, but I think this is cleaner.
* clutter/osx/clutter-event-osx.c (clutter_event_osx_translate)
* clutter/osx/clutter-stage-osx.c (clutter_stage_osx_state_update,
ClutterGLWindow:windowShouldClose:):
* clutter/osx/clutter-stage-osx.h: Export ClutterGLWindow interface
for clutter-event-osx.c to easily get the stage for NSWindow.
Fill in ClutterEventAny::stage on our events.
Consistently use 'stage_osx' and 'wrapper' as variable names
when referring to ClutterStageOSX and ClutterStage objects
respectively.
* clutter/clutter-actor.c:
(clutter_actor_real_show),
(clutter_actor_real_hide): Do not set the MAPPED flag on the actor
if it is a top-level one (like ClutterStage); the backends are
responsible for setting that flag, as it might be the result of an
asynchronous operation (e.g. on X11).
* clutter/eglnative/clutter-stage-egl.c:
(clutter_stage_egl_show),
(clutter_stage_egl_hide): Set/unset the CLUTTER_ACTOR_MAPPED flag
on show and hide respectively.
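In rough sketch form, the eglnative variant amounts to something like the
following; a simplified sketch of the idea, not the literal implementation:

  static void
  clutter_stage_egl_show (ClutterActor *actor)
  {
    CLUTTER_ACTOR_SET_FLAGS (actor, CLUTTER_ACTOR_MAPPED);
  }

  static void
  clutter_stage_egl_hide (ClutterActor *actor)
  {
    CLUTTER_ACTOR_UNSET_FLAGS (actor, CLUTTER_ACTOR_MAPPED);
  }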
* clutter/osx/clutter-stage-osx.c:
(clutter_stage_osx_show),
(clutter_stage_osx_hide): Ditto as above.
* clutter/sdl/clutter-stage-sdl.c:
(clutter_stage_sdl_show),
(clutter_stage_sdl_hide): Ditto as above, plus chain up to the
parent class show/hide virtual functions.
* clutter/x11/clutter-event-x11.c (event_translate): Use the MapNotify
and UnmapNotify events to call the X11 stage map/unmap functions.
* clutter/x11/clutter-stage-x11.[ch]:
(clutter_stage_x11_set_fullscreen): Set the fullscreen_on_map flag
with the fullscreen value.
(clutter_stage_x11_map), (clutter_stage_x11_unmap): Set the MAPPED
flag on the stage actor and redraw; also, if the fullscreen_on_map
flag was set, call clutter_stage_fullscreen() as well. (#648)
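Roughly, the X11 map path now works as sketched below; this is a simplified
sketch in which the field names (wrapper, fullscreen_on_map) are assumed, and
the real code handles more than this:

  /* called from event_translate() when a MapNotify arrives */
  static void
  clutter_stage_x11_map (ClutterStageX11 *stage_x11)
  {
    CLUTTER_ACTOR_SET_FLAGS (stage_x11->wrapper, CLUTTER_ACTOR_MAPPED);

    if (stage_x11->fullscreen_on_map)
      clutter_stage_fullscreen (CLUTTER_STAGE (stage_x11->wrapper));

    clutter_actor_queue_redraw (CLUTTER_ACTOR (stage_x11->wrapper));
  }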
* tests/Makefile.am:
* tests/test-fullscreen.c: Add a fullscreen test case for checking
whether fullscreen works on every backend/platform.