This is an important step of the initialization because the calls into
the font rendering libraries are based on this parameter. By default it
is set to -1, and the test-text-cache test crashes in that case.
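A minimal sketch of the guard this implies, assuming the resolution is
exposed through clutter_backend_get_resolution()/clutter_backend_set_resolution();
the 96 DPI fallback is illustrative:

  #include <clutter/clutter.h>

  ClutterBackend *backend = clutter_get_default_backend ();

  /* An unset resolution of -1 breaks the font rendering paths, so fall
   * back to a sane default (96 DPI assumed here). */
  if (clutter_backend_get_resolution (backend) < 0)
    clutter_backend_set_resolution (backend, 96.0);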
The trick of hiding the view while showing the stage affects the
responder chain: the main view ceases to be the first responder, so we
have to set the first responder manually.
The problem was incorrect application initialization:
[NSApplication sharedApplication] should be called in the backend init
(not in the stage init). It doesn't require any data and only
establishes a connection to the window server.
Clean up the clutter_backend_osx_post_parse function and move the
context initialization into clutter_backend_osx_create_context. The
OpenGL pixel format attributes were taken as-is. Also move bringing the
application to the foreground into clutter_stage_osx_realize, which
seems to be the best place for it.
The viewport was not initialized before OpenGL drawing, which caused a
crash on an assert, so viewport initialization has been added to
clutter_stage_osx_realize. Also, showing the stage triggers the drawing
function, but other parts of the system (in particular the conformance
tests) don't expect that and aren't ready at that point.
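A minimal sketch of the kind of initialization this adds, assuming the
realize path can query the stage size and set the viewport through
cogl_set_viewport(); the helper name is illustrative:

  #include <clutter/clutter.h>
  #include <cogl/cogl.h>

  static void
  init_stage_viewport (ClutterActor *stage)
  {
    gfloat width, height;

    clutter_actor_get_size (stage, &width, &height);

    /* Make sure a viewport is set before the first OpenGL draw,
     * otherwise the assert in the drawing path fires. */
    cogl_set_viewport (0, 0, (int) width, (int) height);
  }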
Mention the XFixes extension for compositors using input regions to let
events "pass through" the stage.
Thanks to: Robert Bragg <robert@linux.intel.com>
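For reference, the kind of input-region setup such a compositor would
use; a minimal sketch, with the display and stage window handles
assumed to come from the compositor:

  #include <X11/extensions/Xfixes.h>
  #include <X11/extensions/shape.h>

  static void
  let_events_pass_through (Display *xdisplay, Window stage_xwindow)
  {
    /* An empty input region makes all events "pass through" the stage
     * window to whatever lies underneath it. */
    XserverRegion region = XFixesCreateRegion (xdisplay, NULL, 0);

    XFixesSetWindowShapeRegion (xdisplay, stage_xwindow,
                                ShapeInput, 0, 0, region);
    XFixesDestroyRegion (xdisplay, region);
  }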
When we disable the event retrieval, we now just disable the X11 event
source, not the event selection. We need to make that clear to
applications, especially compositors, which might expect complete
control over the selection.
Currently, we select input events and GLX events conditionally,
depending on whether the user has disabled event retrieval.
We should, instead, unconditionally select input events even with event
retrieval disabled because we need to guarantee that the Clutter
internal state is maintained when calling clutter_x11_handle_event()
without requiring applications or embedding toolkits to select events
themselves. Requiring that would mean documenting the events to be
selected, and updating applications and embedding toolkits each time we
added a new event mask or a new class of events, which is clearly not
possible.
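A sketch of the pattern this guarantees support for: the application
owns the X event loop and forwards events to Clutter, relying on
Clutter's own (now unconditional) event selection:

  #include <clutter/clutter.h>
  #include <clutter/x11/clutter-x11.h>

  /* Called from the application's own X event loop. */
  static gboolean
  forward_xevent_to_clutter (XEvent *xevent)
  {
    switch (clutter_x11_handle_event (xevent))
      {
      case CLUTTER_X11_FILTER_REMOVE:
      case CLUTTER_X11_FILTER_TRANSLATE:
        return TRUE;   /* consumed by Clutter */

      case CLUTTER_X11_FILTER_CONTINUE:
      default:
        return FALSE;  /* not handled, process it ourselves */
      }
  }

clutter_x11_disable_event_retrieval() still has to be called before
clutter_init() for Clutter's own event source to be skipped.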
See:
http://bugzilla.clutter-project.org/show_bug.cgi?id=998
for the rationale of why we did conditional selection. It is now clear
that a compositor should clear out the input region, since it cannot
assume a perfectly clean slate coming from us.
See:
http://bugzilla.clutter-project.org/show_bug.cgi?id=2228
for an example of things that break if we do conditional event
selection on GLX events. In that specific case, the X11 server ≤ 1.8
always pushed GLX events on the queue, even without selecting them; this
has been fixed in the X11 server ≥ 1.9, which means that applications
like Mutter or toolkit integration libraries like Clutter-GTK would stop
working on recent Intel drivers providing the GLX_INTEL_swap_event
extension.
This change has been tested with Mutter and Clutter-GTK.
* elliot/cookbook-animations-rotating:
cookbook: Added recipe for animated rotation of an actor
cookbook: Add explanation about including code samples
cookbook: Make filename used in video example consistent
cookbook: Add example code for animated rotation
This makes the gles2 cogl_program_use consistent with the GL version by
not binding the program immediately and instead leaving it to
cogl-material.c to bind the program when actually drawing something.
Previously custom uniforms were tracked in _CoglGles2Wrapper but as part
of a process to consolidate the gl/gles2 shader code it seems to make
sense for this state to be tracked in the CoglProgram object instead.
http://bugzilla.o-hand.com/show_bug.cgi?id=2179
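For context, the public pattern that exercises this state (the shader
handle and the "brightness" uniform are assumptions for the example):

  CoglHandle program = cogl_create_program ();
  int uniform_no;

  cogl_program_attach_shader (program, shader);
  cogl_program_link (program);

  /* The uniform value is now stored in the CoglProgram itself and only
   * flushed to GL when the material using the program is drawn. */
  uniform_no = cogl_program_get_uniform_location (program, "brightness");
  cogl_program_use (program);
  cogl_program_uniform_1f (uniform_no, 0.5f);
  cogl_program_use (COGL_INVALID_HANDLE);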
Instead of having to query GL and translate the GL enum into a
CoglShaderType each time cogl_shader_get_type is called, we now keep
track of the type in CoglShader.
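A short usage sketch; with the type stored at creation time the query
at the end no longer has to round-trip through GL:

  CoglHandle shader = cogl_create_shader (COGL_SHADER_TYPE_FRAGMENT);

  cogl_shader_source (shader,
                      "void main () { gl_FragColor = vec4 (1.0); }");
  cogl_shader_compile (shader);

  if (cogl_shader_get_type (shader) == COGL_SHADER_TYPE_FRAGMENT)
    {
      /* the type now comes from the cached CoglShader state */
    }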
The Animatable interface was created specifically for the Animation
class. It turns out that it might be fairly useful to others - such as
ClutterAnimator and ClutterState.
The newly-added API in this cycle for querying and accessing custom
properties should not require that we pass a ClutterAnimation to the
implementations: the Animatable itself should be enough.
This is necessary to allow language bindings to wrap
clutter_actor_animate() correctly and do type validation and
demarshalling between native values and GValues; an Animation instance
is not available until the animate() call returns, and validation must
be performed before that happens.
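For illustration, the call the bindings need to wrap; the actor,
property names and values are arbitrary:

  ClutterAnimation *animation;

  /* The ClutterAnimation only exists once animate() returns, so any
   * validation of "x" and "opacity" against the Animatable has to
   * happen before this call completes. */
  animation = clutter_actor_animate (actor, CLUTTER_EASE_OUT_CUBIC, 500,
                                     "x", 200.0,
                                     "opacity", 128,
                                     NULL);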
There is nothing we can do about the animate_property() virtual
function - but in that case we might want to be able to access the
animation from an Animatable implementation to get the Interval for
the property, just like ClutterActor does in order to animate
ClutterActorMeta objects.
XGetGeometry is a great piece of API, since it returns a lot of
information that is moderately *not* geometry related, the root window
and the depth being two examples.
Since we have multiple conditions depending on the result of that call
we should split them up depending on the actual error - and each of them
should have a separate error message. This makes debugging simpler.
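A sketch of the call and its many out parameters, using a hypothetical
helper:

  #include <glib.h>
  #include <X11/Xlib.h>

  static gboolean
  query_root_and_depth (Display      *xdisplay,
                        Window        xwindow,
                        Window       *root_return,
                        unsigned int *depth_return)
  {
    int x, y;
    unsigned int width, height, border_width;

    /* XGetGeometry also hands back the root window and the depth,
     * which is the "not geometry" information we actually need. */
    if (!XGetGeometry (xdisplay, xwindow,
                       root_return,
                       &x, &y, &width, &height,
                       &border_width, depth_return))
      {
        g_warning ("Unable to get the geometry of window 0x%lx", xwindow);
        return FALSE;
      }

    return TRUE;
  }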
It's possible - though not recommended - that user code causes the
destruction of an actor in one of the notification handlers for
flag-based properties. We should protect the multiple notification
emission with g_object_ref/unref.
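A minimal sketch of the pattern (the property names are illustrative):

  /* A notify handler might destroy the actor, so keep a reference
   * alive across the whole run of notifications. */
  g_object_ref (self);

  g_object_notify (G_OBJECT (self), "visible");
  g_object_notify (G_OBJECT (self), "reactive");

  g_object_unref (self);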
New recipe covering how to animate rotation of
an actor (in all axes).
Covers various factors affecting rotation animation
(like orientation of axes, parent rotation/orientation),
as well as trying to make rotations easier to visualise
(e.g. describing how rotation direction is affected by
those factors, how a rotation can be expected to look
when animated). Uses implicit animations for code examples.
Also refers to a full code example which uses ClutterState.
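A flavour of the implicit-animation approach used in the recipe; the
axis, easing mode and duration are just examples:

  /* Spin the actor once around the Y axis over two seconds. */
  clutter_actor_set_rotation (actor, CLUTTER_Y_AXIS, 0.0, 0, 0, 0);
  clutter_actor_animate (actor, CLUTTER_EASE_IN_OUT_CUBIC, 2000,
                         "rotation-angle-y", 360.0,
                         NULL);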
Nothing was storing the shader type when a shader was created, so it
would get confused about whether it was a custom vertex or fragment
shader.
Also, the 'type' member of CoglShader was a GLenum, but the only place
that read it was treating it as if it were a CoglShaderType. This
changes it to be a CoglShaderType.
In 7fae8ac051 the two cogl-defines.h files from GLES and GL were
unified. However, this missed the COGL_HAS_GLES[12] defines from GLES.
The configure.ac still made an AC_SUBST for the right version, but the
replacement was never put into any headers. This fixes it so that,
instead of directly calling AC_SUBST, the value is now put into a
variable which later gets added to COGL_DEFINES, so that it ends up in
cogl-defines.h.
There was an initializer for the COGL_DEFINES variable which set it to
the empty value before it was filled in. The name of the variable
wasn't spelled correctly, so it wouldn't work properly. This doesn't
really matter because the variable would default to empty anyway.
Since the GLES2 wrapper grew support for multi-texturing, the
tex_coord varying variable defined in the vertex shader is actually an
array of texture coordinates, so it ought to match in the fragment
shader in test-shader. This seemed to work anyway under Mesa/Intel, but
under NVidia it does not, so I don't think it's safe to assume that
linking a non-array varying with an array will work.
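A sketch of the matching declaration as it appears in the fragment
shader source (the array size here is illustrative):

  static const char *frag_source =
    /* The vertex shader now declares tex_coord as an array, so the
     * fragment shader has to declare and index it the same way. */
    "varying vec2 tex_coord[1];\n"
    "uniform sampler2D tex;\n"
    "void main (void)\n"
    "{\n"
    "  gl_FragColor = texture2D (tex, tex_coord[0]);\n"
    "}\n";

  CoglHandle shader = cogl_create_shader (COGL_SHADER_TYPE_FRAGMENT);

  cogl_shader_source (shader, frag_source);
  cogl_shader_compile (shader);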
When loading an RGB image GdkPixbuf will pad the rowstride so that the
beginning of each row is aligned to 4 bytes. This was causing us to
fall back to the code that copies the buffer. It is probably safe to
avoid copying the buffer if we can detect that the rowstride is simply
an alignment of the packed rowstride.
This also changes the copying fallback code so that it uses the
aligned rowstride. However it is now extremely unlikely that the
fallback code would ever be used.
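A sketch of the alignment check, assuming 8 bits per sample so that the
number of channels equals the bytes per pixel:

  #include <gdk-pixbuf/gdk-pixbuf.h>

  static gboolean
  rowstride_is_packed_or_aligned (GdkPixbuf *pixbuf)
  {
    int width     = gdk_pixbuf_get_width (pixbuf);
    int bpp       = gdk_pixbuf_get_n_channels (pixbuf); /* bytes per pixel */
    int rowstride = gdk_pixbuf_get_rowstride (pixbuf);
    int packed    = width * bpp;
    int aligned   = (packed + 3) & ~3;  /* packed rowstride rounded to 4 */

    /* If the rowstride is only padding for 4-byte alignment we can
     * hand the pixbuf data to GL directly instead of copying it. */
    return rowstride == packed || rowstride == aligned;
  }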
In commit b780413e5a the GdkPixbuf loading code was changed so that
if it needed to copy the pixbuf it would tightly pack it. However it
was still using the rowstride from the pixbuf, so the image would end
up skewed. This fixes it to use the real rowstride.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2235
In OpenGL the 'shininess' lighting parameter is a floating point value
limited to the range 0.0→128.0. This number is used to affect the size
of the specular highlight. Cogl materials used to only accept a number
between 0.0 and 1.0, which was then multiplied by 128.0 before being
sent to GL. I think the assumption was that this is just a weird GL
quirk, so we didn't expose it. However the value is used as an exponent
to raise the attenuation to a power, so there is no conceptual limit to
the value.
This removes the mapping and changes some of the documentation.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2222
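Usage after the change, with the value passed straight through to GL as
the specular exponent (the material handle and the value are
illustrative):

  /* Anything in the GL range 0.0 to 128.0 is now accepted directly,
   * rather than a 0.0 to 1.0 value that was scaled internally. */
  cogl_material_set_shininess (material, 100.0f);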
When flushing a fixed-function or arbfp material it would always call
disable_glsl to try to get rid of the previous GLSL shader. This is
needed even if current_use_program_type is not GLSL because if an
application calls cogl_program_uniform then Cogl will have to bind the
program to set the uniform. If this happens then it won't update
current_use_program_type presumably because the enabled state of arbfp
is still valid.
The problem was that disable_glsl would only select program zero when
the current_use_program_type is set to GLSL which wouldn't be the case
if cogl_program_uniform was called. This patch changes it to just
directly call _cogl_gl_use_program_wrapper(0) instead of having a
separate disable_glsl function. The current program is cached in the
cogl context anyway, so it shouldn't cause any unnecessary GL calls.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2232
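In code terms the change amounts to the following sketch, using only
the names mentioned above:

  /* Before: a separate helper that only selected program zero when the
   * cached state said a GLSL program was in use, which is not the case
   * after cogl_program_uniform has bound the program behind our back. */
  disable_glsl ();

  /* After: unconditionally drop back to program zero; the current
   * program is cached in the cogl context, so no redundant GL calls
   * are issued. */
  _cogl_gl_use_program_wrapper (0);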