ClutterAnimation currently inherits the initial floating reference
semantics from GInitiallyUnowned. An Animation, though, is meant to
be used as a top-level object, like a Timeline or a Behaviour, and
not "owned" by another object. For this reason, the initial floating
reference does not make any sense.
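As a rough caller-side sketch (not part of the original patch) of what the
change means for ownership:

    /* with a floating reference the creator would have had to sink it;
     * as a plain top-level GObject the returned reference is simply
     * owned by the caller */
    ClutterAnimation *animation = clutter_animation_new ();

    /* previously: g_object_ref_sink (animation); */

    /* ... use the animation ... */

    g_object_unref (animation);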
Document that repeated calls to clutter_cairo_texture_create()
continue drawing on the same cairo_surface_t. Add
clutter_cairo_texture_clear() for when you don't want that behavior.
http://bugzilla.openedhand.com/show_bug.cgi?id=1599
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
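A rough usage sketch of the documented behaviour and the new clear() call
(not part of the patch itself):

    ClutterActor *texture = clutter_cairo_texture_new (200, 200);
    cairo_t *cr;

    cr = clutter_cairo_texture_create (CLUTTER_CAIRO_TEXTURE (texture));
    /* ... draw ... */
    cairo_destroy (cr);

    /* a second create() keeps drawing on the same cairo_surface_t ... */
    cr = clutter_cairo_texture_create (CLUTTER_CAIRO_TEXTURE (texture));
    cairo_destroy (cr);

    /* ... unless the backing surface is explicitly cleared first */
    clutter_cairo_texture_clear (CLUTTER_CAIRO_TEXTURE (texture));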
The cursor x position is already translated, so we do not need to take the
actor's allocation into account when calculating scrolling.
Additionally, we need to update the text_x value before running
clutter_text_ensure_cursor_position.
Add any scrolling offset to the x value when in single line mode.
Now that the offset is taken into account in the position_to_coords
function, we do not need to adjust the cursor x manually in
clutter_text_paint.
Print out the cursor and selection positions in order to verify
the behaviour of the Text actor.
This is a likely candidate for a conformance test unit as well.
If the cursor is already at the end of the Text contents then we
need to maintain its position when deleting the previous character
using the relative key binding.
The required "fake" libclutter-cogl.la upon with the main clutter
shared object depends is only built with introspection enabled
instead of being built unconditionally.
Passing:
--library=clutter-@CLUTTER_FLAVOUR@-@CLUTTER_API_VERSION@
to g-ir-scanner when building Cogl caused g-ir-scanner to link the
introspection program against the installed clutter library if it
existed, or to fail otherwise. Instead, copy the handling from the
json/ directory, where we link against the convenience library to
scan, and do the generation of the typelib later in the main
clutter/ directory.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1594
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If text is set, ClutterText should never return less than the layout
height for minimum and preferred heights.
This holds unless ellipsize and wrap are enabled, in which case the
minimum height should be the height of the first line -- that is,
the height needed to show, at the very least, the ellipsization.
Based on a patch by: Thomas Wood <thomas@openedhand.com>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1598
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
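A minimal sketch of the intended height logic (illustrative only; the
ellipsize, wrap and min_height names are placeholders, not the actual patch):

    PangoLayout *layout = clutter_text_get_layout (CLUTTER_TEXT (text));
    int width, height;

    pango_layout_get_pixel_size (layout, &width, &height);

    if (ellipsize || wrap)
      {
        /* first line only: just enough to show the ellipsization */
        PangoLayoutLine *line = pango_layout_get_line_readonly (layout, 0);
        PangoRectangle logical;

        pango_layout_line_get_pixel_extents (line, NULL, &logical);
        min_height = logical.height;
      }
    else
      min_height = height; /* never less than the full layout height */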
This is another step towards abstracting the backend operations
that are currently spread all across the board back into the
backend implementations where they belong.
The GL context creation, for instance, is delegated to the stage
realization, which makes it a critical path for every operation
that is GL-context bound. This usually does not make any difference
since we realize the default stage, but at some point we might
start looking into avoiding the default stage realization in order
to make the Clutter startup faster.
It also makes the code more maintainable, because every part is
self-contained and can be reworked with the minimum amount of pain.
The master clock is using the redraw priority to create the source
that will be used to spin the paint sequence if something is being
animated using a timeline.
Unfortunately, the priority is too high and this causes starvation
when embedding into other toolkits -- like gtk+.
Thanks to Havoc Pennington for catching this.
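An illustrative sketch of the kind of change involved (the priority value
and the callback/data names are placeholders, not the actual code):

    GSource *source = g_timeout_source_new (1000 / 60);

    /* run below the host toolkit's redraw handlers so embedding in
     * e.g. gtk+ does not starve its own painting */
    g_source_set_priority (source, G_PRIORITY_DEFAULT + 10);
    g_source_set_callback (source, master_clock_tick, master_clock, NULL);
    g_source_attach (source, NULL);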
The XVisualInfo for GL is created when a stage is being realized.
When embedding Clutter inside another toolkit we might not want to
realize a stage to extract the XVisualInfo, then set the stage
window using a foreign X Window -- which will cause a re-realization.
Instead, we should abstract as much as possible into the X11 backend.
Unfortunately, the XVisualInfo for GL is requested using GLX API; for
this reason we have to create a ClutterBackendX11 method that we
override inside the ClutterBackendGLX implementation.
This also allows us to move a little bit of complexity out of
the stage realization, which is currently a very delicate and hard
to debug section.
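A hypothetical sketch of the shape of the override (the vfunc and function
names here are illustrative, not the actual symbols):

    struct _ClutterBackendX11Class
    {
      ClutterBackendClass parent_class;

      /* plain X11 fallback; the GLX backend overrides this */
      XVisualInfo *(* get_visual_info) (ClutterBackendX11 *backend);
    };

    static void
    clutter_backend_glx_class_init (ClutterBackendGLXClass *klass)
    {
      ClutterBackendX11Class *x11_class = CLUTTER_BACKEND_X11_CLASS (klass);

      /* GLX knows how to pick a GL-capable visual */
      x11_class->get_visual_info = clutter_backend_glx_get_visual_info;
    }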
I was seeing clutter_text_get_selection trying to malloc up to 4 GB due to
unexpected negative arithmetic for the start/end offsets which resulted
in a crash.
This just tests for positions of -1 before deciding if the start/end
positions need to be swapped. The conversion from position to byte offset
already works with -1.
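A sketch of the guard (the start_pos/end_pos names are illustrative):

    /* only swap real positions; -1 means "end of the text" and the
     * position-to-offset conversion already copes with it */
    if (start_pos != -1 && end_pos != -1 && start_pos > end_pos)
      {
        gint tmp = start_pos;

        start_pos = end_pos;
        end_pos = tmp;
      }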
The picking test has two configurables: the number of actors and
the number of events per frame. It makes sense to have them as
command line options to test with multiple configurations without
having to change defines and recompile.
cogl_clip_push_window_rect is implemented using GPU scissoring which allows
the GPU to cull anything that falls outside a given rectangle. Since in the
case of picking we only ever care about a single pixel we can get the GPU to
ignore all geometry that doesn't intersect that pixel and only rasterize for
one pixel.
Previously clipping could only be specified in object coordinates, now
rectangles can also be pushed in window coordinates.
Internally rectangles pushed this way are intersected and then clipped using
scissoring. We also transparently try to convert rectangles pushed in
object coordinates into window coordinates as we anticipate the scissoring
path will be faster than the clip planes and undoubtedly it will be faster
than using the stencil buffer.
The stencil buffer is always cleared the first time a clip is used
that needs it and the stencil test is disabled otherwise so there is
no need to clear before a paint.
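A usage sketch for the picking case (pick_x/pick_y are placeholders):

    /* scissor everything except the picked pixel so the GPU can reject
     * geometry that cannot contribute to it */
    cogl_clip_push_window_rect (pick_x, pick_y, 1, 1);

    /* ... render the scene in pick mode ... */

    cogl_clip_pop ();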
COGLenum, COGLint and COGLuint, which were simply typedefs for GL{enum,int,uint},
have been removed from the API and replaced with specialised enum typedefs, int
and unsigned int. These were causing problems for generating bindings and were
also considered poor style.
The cogl texture filter defines CGL_NEAREST, CGL_LINEAR, etc. are now covered
by a namespaced 'CoglTextureFilter' typedef, so they should be replaced with
COGL_TEXTURE_FILTER_NEAREST, COGL_TEXTURE_FILTER_LINEAR, etc.
The shader type defines CGL_VERTEX_SHADER and CGL_FRAGMENT_SHADER are handled by
a CoglShaderType typedef and should be replaced with COGL_SHADER_TYPE_VERTEX and
COGL_SHADER_TYPE_FRAGMENT.
cogl_shader_get_parameteriv has been replaced by cogl_shader_get_type and
cogl_shader_is_compiled. More getters can be added later if desired.
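A rough migration sketch for the renamed symbols (the texture handle is a
placeholder):

    /* CGL_LINEAR/CGL_NEAREST -> CoglTextureFilter values */
    cogl_texture_set_filters (texture,
                              COGL_TEXTURE_FILTER_LINEAR,
                              COGL_TEXTURE_FILTER_LINEAR);

    /* CGL_VERTEX_SHADER -> COGL_SHADER_TYPE_VERTEX */
    CoglHandle shader = cogl_create_shader (COGL_SHADER_TYPE_VERTEX);

    /* cogl_shader_get_parameteriv() -> dedicated getters */
    if (cogl_shader_is_compiled (shader))
      g_assert (cogl_shader_get_type (shader) == COGL_SHADER_TYPE_VERTEX);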
Calling glReadPixels is bad enough in forcing us to synchronize the CPU with
the GPU, but glFinish has even stronger synchronization semantics than
glReadPixels which may negate some driver optimizations possible in
glReadPixels.
Commit 43fa38fcf5 broke out-of-tree builds by removing some of the
builddir directories from the include path. builddir/clutter/cogl and
builddir/clutter are needed because cogl.h and cogl-defines-gl.h are
automatically generated by the configure script. The main clutter
headers are in the srcdir so this needs to be in the path too.
When the stage state changes between activated and deactivated, send out
the key-focus-in/key-focus-out signals for the currently key-focused
actor on the stage.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1503
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
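For reference, a minimal sketch of listening for those signals (the actor
and callback names are placeholders):

    g_signal_connect (actor, "key-focus-in",
                      G_CALLBACK (on_key_focus_in), NULL);
    g_signal_connect (actor, "key-focus-out",
                      G_CALLBACK (on_key_focus_out), NULL);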
Setting the stage window using the set_stage_foreign() method will
lead to a re-realization. We need to make sure that the Drawable
currently associated to the GL context is set to None, to avoid a
BadDrawable error or, if we're unlucky, a segfault in the X server.
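A sketch of the detach involved (the display pointer is assumed):

    /* unbind the GL context from the drawable that is about to go away */
    glXMakeCurrent (xdisplay, None, NULL);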
This commit reverts part of commit 5bcde25c - specifically the
part that forced a realization of the stage if we are ensuring
the GL context with it. This makes Clutter behave like it did
prior to commit 5bcde25c: if we are asked to ensure the GL context
with an unrealized stage we simply pass NULL to the backend
implementation.
When showing a Stage for the first time we end up realizing the stage
implementation before realizing the wrapper. This leads to segmentation
faults or errors coming from the backend because we're fumbling the
state and realization sequence.
Since we are destroying any previously set XVisualInfo we keep, we know
for sure that stage->xvisinfo is going to be None; hence, there is no
reason to check this condition.
The verify_map_state() internal method is conditionally compiled
if we have CLUTTER_ENABLE_DEBUG set; for this reason, all calls to
that method should be made conditional.
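A sketch of the resulting call-site pattern (argument name illustrative):

    #ifdef CLUTTER_ENABLE_DEBUG
      verify_map_state (self);
    #endif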
When building Clutter with introspection enabled everything stops
at Cogl GIR generation because it depends on the installed library
to work. Since we still require some changes in the API to be able
to build the GIR and the typelib for Cogl we should disable the
generation of the GIR as well.
The fix for bug 1138 broke multi-stage support on GLX, causing
X11 to segfault with the following stack trace:
Backtrace:
0: /usr/X11R6/bin/X(xf86SigHandler+0x7e) [0x80c91fe]
1: [0xb7eea400]
2: /usr/lib/xorg/modules/extensions//libglx.so [0xb7ae880c]
3: /usr/lib/xorg/modules/extensions//libglx.so [0xb7aec0d6]
4: /usr/X11R6/bin/X [0x8154c24]
5: /usr/X11R6/bin/X(Dispatch+0x314) [0x808de54]
6: /usr/X11R6/bin/X(main+0x4b5) [0x8074795]
7: /lib/i686/cmov/libc.so.6(__libc_start_main+0xe5) [0xb7c75775]
8: /usr/X11R6/bin/X(FontFileCompleteXLFD+0x21d) [0x8073a81]
which I can only track down to clutter_backend_glx_ensure_current()
being passed a NULL stage -- something that happens when a stage
is not correctly realized. That should lead to a glXMakeCurrent(None)
and not to a segmentation fault, though.
When destroying a top-level actor we can actually relax the verification
of the map state, since it might be fully asynchronous and we might not
re-enter inside the mainloop in time to receive the unmap notification.
If the filter means that there should be no rows left in the model,
clutter_model_get_iter_at_row (model, 0) should return NULL.
However, the current implementation misbehaves and returns a bad iterator.
This change resolves the issue by tracking whether we actually found any
non-filtered rows in our pass through the sequence.
OH Bugzilla: 1591
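A usage sketch of the expected behaviour (the model variable is a placeholder):

    ClutterModelIter *iter = clutter_model_get_iter_at_row (model, 0);

    if (iter == NULL)
      {
        /* the filter left no visible rows */
      }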
The stage is chaining up to the ClutterGroup::paint instead of
the ClutterGroup::pick method. This works anyway because we
detect the stage by default, but it's not a reliable solution
in case we decide to change the picking further on.
The master clock is currently advanced using a frame source driven
by the default frame rate. This breaks the sync to vblank because
the vblanking rate could be different than 60 Hz -- or it might be
completely disabled (e.g. with CLUTTER_VBLANK=none).
We should be using the main loop to check if we have timelines
playing, and if so queue a redraw on the stages we own.
We should also prepare the subsequent frame at the end of the redraw
process, so if there are new redraws we will have the scene already
in place.
This makes Clutter redraw at the maximum frame rate, which is
limited by the vblanking frequency.
Currently, picking in ClutterGroup pollutes the CLUTTER_DEBUG=paint
logs since it just calls the paint function. Reimplementing the pick
doesn't make us lose anything -- it might even be slightly faster
since we don't have to do a (typed) cast and a class dereference.
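A hypothetical sketch of the shape of such a pick implementation (the
private field names are illustrative, not the actual patch):

    static void
    clutter_group_real_pick (ClutterActor       *actor,
                             const ClutterColor *pick)
    {
      ClutterGroupPrivate *priv = CLUTTER_GROUP (actor)->priv;
      GList *l;

      /* chain up so the group's own area gets the pick color */
      CLUTTER_ACTOR_CLASS (clutter_group_parent_class)->pick (actor, pick);

      /* clutter_actor_paint() on a child during a pick run emits the
       * child's pick, without casting to the paint class */
      for (l = priv->children; l != NULL; l = l->next)
        clutter_actor_paint (l->data);
    }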
The timeline created when calling set_timeline(NULL) is referenced
even though we implicitly own it. When the Animation is destroyed,
the timeline is then leaked.
Thanks to: Richard Heatley <richard.heatley@starleaf.com>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1548
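A sketch of the leaky pattern being fixed (illustrative field names, not
the actual patch):

    /* the newly created timeline is already owned by the Animation */
    priv->timeline = clutter_timeline_new (duration);

    /* taking an extra reference here would never be released when the
     * Animation is destroyed, leaking the timeline:
     *   g_object_ref (priv->timeline);
     */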