This adds an event backend that can be enabled for EGL (for now).
It uses udev (gudev) to query input devices on a Linux system, listens
for keyboard events from those devices and uses xkbcommon to translate
raw key codes into keysyms.
This commit only supports key events; more to follow.
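As a rough illustration of the gudev side (the helper below is made up
and error handling is omitted), querying the "input" subsystem looks
something like:

    #define G_UDEV_API_IS_SUBJECT_TO_CHANGE
    #include <gudev/gudev.h>

    static void
    enumerate_input_devices (void)
    {
      const gchar *subsystems[] = { "input", NULL };
      GUdevClient *client = g_udev_client_new (subsystems);
      GList *devices, *l;

      devices = g_udev_client_query_by_subsystem (client, "input");
      for (l = devices; l != NULL; l = l->next)
        {
          GUdevDevice *device = l->data;

          /* The device node (e.g. /dev/input/eventN) is what gets
           * opened to read key events from */
          g_print ("%s\n", g_udev_device_get_device_file (device));
          g_object_unref (device);
        }

      g_list_free (devices);
      g_object_unref (client);
    }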
Looking at what the X11 backend does: the unicode value is translated
to the Unicode codepoint of the symbol when possible. Let's do the same
here.
Before that, key events for, say, KEY_Right (0xff53) had their
unicode_value set to the keysym, which meant "this key event is actually
printable and its Unicode codepoint is 0xff53", which led to interesting
results.
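A sketch of the intended behaviour, assuming clutter_keysym_to_unicode()
is the helper doing the mapping (it should return 0 for non-printable
symbols such as KEY_Right):

    /* Only printable symbols end up with a real Unicode codepoint;
     * everything else gets 0 instead of leaking the raw keysym value */
    event->key.keyval = keysym;
    event->key.unicode_value = clutter_keysym_to_unicode (keysym);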
The Wayland client code already supports translating raw Linux input
device key codes coming from the Wayland compositor into key symbols,
thanks to libxkbcommon.
A backend listening directly to Linux input devices (called evdev, just
like the Xorg one) could use exactly the same code for the translation,
so abstract it a bit into a separate file.
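A rough sketch of the shared helper this enables, written against the
modern libxkbcommon API (the function name here is made up and the exact
xkb entry points in use may differ):

    #include <xkbcommon/xkbcommon.h>

    /* Translate a raw linux input device (evdev) key code into a keysym
     * using an already-initialized xkb state; XKB key codes are evdev
     * codes offset by 8 */
    static xkb_keysym_t
    translate_evdev_key (struct xkb_state *state,
                         uint32_t          evdev_code)
    {
      return xkb_state_key_get_one_sym (state, evdev_code + 8);
    }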
In 6246c2bd6 I moved the code that adds the boilerplate to a shader
into a separate function and also made it so that the common boilerplate
is added as a separate string to glShaderSource. However, I didn't
notice that the #define for the vertex and fragment shaders already
included the common part, so it was being added twice. Mesa seems to
accept this, but it was causing problems on the IMG driver because
COGL_VERSION was defined twice.
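A rough illustration of the problem (boilerplate and user_source are
placeholder names):

    /* The common boilerplate is now handed to the GL as its own
     * string... */
    const char *strings[2] = { boilerplate, user_source };
    glShaderSource (shader, 2, strings, NULL);

    /* ...so if user_source was built from a #define that already pastes
     * in the boilerplate, things like COGL_VERSION end up defined
     * twice. */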
* elliot/cookbook-animations-scaling:
cookbook: Add recipe for animated scaling of an actor
cookbook: Add example of scaling a texture
cookbook: Added "animated scaling" recipe skeleton
cookbook: Added animated scaling example
Don't calculate an unnecessary extra layout in
clutter_text_get_preferred_height for single-line strings. There's no
need to set the width of a layout in single-line mode, as wrapping will
not happen.
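A sketch of the idea (the private member name is an assumption):

    /* In single-line mode wrapping can never happen, so don't constrain
     * the layout width just to measure its height */
    if (!priv->single_line_mode)
      pango_layout_set_width (layout, width);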
Previously, when the shader effect was used with a new actor it would
end up throwing away the old program. I don't think this is necessary,
and it means that if you use an effect to temporarily bind to an actor
then it will recompile the shader whenever it is applied.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2454
When a new actor was set for ClutterOffscreenEffect it would throw away
the old material. I don't think there is anything specifically tied to
the actor in the material, so throwing it away just loses Cogl's cached
state about the material. This ends up relinking the shader every time
a new actor is set in ClutterShaderEffect.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2454
Do not use the compiler to zero the first field of the GValue member,
since it's apparently non-portable. As we're allocating memory anyway,
we can let the slice allocator do the zeroing for us.
Mentioned in: http://bugzilla.clutter-project.org/show_bug.cgi?id=2455
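Roughly the shape of the change (the struct here is made up for
illustration):

    #include <glib-object.h>

    typedef struct { GValue value; gdouble progress; } KeyFrame;

    KeyFrame *frame;

    /* Before: rely on a partial initializer to zero just the GValue
     * member, which is apparently non-portable */
    frame = g_slice_new (KeyFrame);
    frame->value = (GValue) { 0, };

    /* After: let the slice allocator return zeroed memory instead */
    frame = g_slice_new0 (KeyFrame);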
Before commit 49898d43, CoglPipeline would determine whether a pipeline
layer's texture is equal to another by fetching and comparing the
underlying GL handles. I changed that so that it would only compare the
CoglHandles, because that commit removed the GL handle texture overrides
and sliced textures now log the underlying primitive texture instead.
However, I forgot that the primitive textures don't always go through
_cogl_texture_foreach_sub_texture_in_region when the quad fits within a
single texture, so in that case no texture override is used. This means
that atlas textures and sub textures still get logged with the atlas
handle, so the comparison still needs to be done using the GL handles.
It might be nice to add a CoglTexture virtual to get the underlying
primitive texture instead, to avoid having the pipeline poke around with
GL handles.
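A sketch of the comparison this implies, going back through the public
cogl_texture_get_gl_texture() (the helper name is made up):

    static gboolean
    layer_textures_equal (CoglHandle texture0, CoglHandle texture1)
    {
      GLuint gl_handle0 = 0, gl_handle1 = 0;
      GLenum gl_target0 = 0, gl_target1 = 0;

      /* Atlas and sub textures may be logged with their parent's
       * handle, so comparing the CoglHandles alone isn't enough */
      cogl_texture_get_gl_texture (texture0, &gl_handle0, &gl_target0);
      cogl_texture_get_gl_texture (texture1, &gl_handle1, &gl_target1);

      return gl_handle0 == gl_handle1 && gl_target0 == gl_target1;
    }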
This is mostly a stub: a starting point for a one-stop Cogl
micro-benchmarking tool. The first test it adds is one that tries to
hammer the Cogl journal, but no doubt there are lots of other things we
should be regularly testing. Currently the aim is that the tool will
also be able to generate reports which we can collect to keep track of
performance changes over time.
If we have to make override changes to the user's source material to
handle cogl_polygon then we need to make sure we unref the override
material at the end.
Previously we used the layers->backend_priv[] members to determine when
to notify backends about layer changes, but it is entirely up to the
backends whether they want to associate private state with layers, even
though they may still be interested in layer change notifications (they
may associate layer-related state with the owner pipeline).
We now make the observation that in
_cogl_pipeline_backend_layer_change_notify we should be able to assume
there can only be one backend currently associated with the layer
because we wouldn't allow changes to a layer with multiple dependants.
This means we can determine the backend to notify by looking at the
owner pipeline instead.
Previously, whenever the size of the FBO changed it would create a new
material and attach the texture to it. This is not good for Cogl
because it throws away any cached state for the material. In
test-rotate the size of the FBO changes constantly so it effectively
uses a new material every paint. For shader effects this also ends up
relinking the shader every paint because the linked programs are part
of the material state.
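A sketch of the alternative this points towards (the member names are
assumptions): keep the material alive and only update its texture layer
when the FBO is resized, so Cogl's cached state survives:

    cogl_material_set_layer (priv->target_material, 0,
                             priv->offscreen_texture);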
The features_cached member of CoglContext is intended to mark when
we've calculated the features so that we know whether they are ready in
cogl_get_features. However, we always initialize the features while
creating the context, so features_cached will never be FALSE and isn't
actually useful. We also had the odd behaviour that the COGL_DEBUG
feature overrides were only applied in the first call to
cogl_get_features, yet there are other functions that use the feature
flags, such as cogl_features_available, which don't go through
cogl_get_features, so in some cases the feature flags would be
interpreted before the overrides were applied. This patch makes it
always initialize the features and apply the overrides immediately
while creating the context. This fixes a problem with
COGL_DEBUG=disable-arbfp where the first material flush happens before
any call to cogl_get_features, so it could still use ARBfp.
Now that the GLSL backend can generate code, it can effectively handle
any pipeline unless there is an ARBfp program. However, with current
open source GL drivers the ARBfp compiler is more stable, so it makes
sense to prefer ARBfp when possible. The GLSL backend is also ranked
below the fixed function backend, on the assumption that any driver
that supports GLSL will also support ARBfp, so it's quicker to try the
fixed function backend next.
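Roughly the resulting preference order, expressed as a priority list
(the symbol names are approximations of the Cogl internals):

    static const CoglPipelineBackend *backends[] =
      {
        &_cogl_pipeline_arbfp_backend, /* most stable on current open
                                          source drivers */
        &_cogl_pipeline_fixed_backend, /* cheap to try next */
        &_cogl_pipeline_glsl_backend   /* can handle anything, tried
                                          last */
      };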
This adds COGL_DEBUG=disable-fixed to disable the fixed function
pipeline backend. This is needed to test the GLSL shader generation
because otherwise the fixed function backend would always override it.
We don't want to use gl_PointCoord to implement point sprites on big
GL because in that case we already use glTexEnv(GL_COORD_REPLACE) to
replace the texture coords with the point sprite coords. Although GL
also supports the gl_PointCoord variable, it requires GLSL 1.2 which
would mean we would have to declare the GLSL version and check for
it. We continue to use gl_PointCoord for GLES2 because it has no
glTexEnv function.
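A sketch of the two paths (the #ifdef guard is illustrative):

    #ifdef HAVE_COGL_GL
      /* Big GL: have the fixed function pipeline substitute the point
       * sprite coords for the texture coords of the active unit */
      glTexEnvi (GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
    #else
      /* GLES2: there is no glTexEnv, so the generated fragment shader
       * reads gl_PointCoord for its texture coordinates instead */
    #endif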
This creates a material which uses a layer to override the color of
the rectangle. A simple vertex shader is then created which just
emulates the fixed function pipeline. No fragment shader is
added. This demonstrates a bug where the layer state is getting
ignored when a vertex shader is in use.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2221
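A minimal sketch of the kind of material setup the test uses (the exact
combine string and color are illustrative):

    CoglHandle material = cogl_material_new ();
    CoglColor red;

    cogl_color_init_from_4ub (&red, 0xff, 0x00, 0x00, 0xff);

    /* Make layer 0 ignore its texture and output the constant color */
    cogl_material_set_layer_combine (material, 0,
                                     "RGBA = REPLACE (CONSTANT)",
                                     NULL);
    cogl_material_set_layer_combine_constant (material, 0, &red);

    /* ...a simple pass-through vertex shader is then attached via a
     * CoglProgram, with no fragment shader */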
The GLES2 wrapper no longer needs to generate any fragment shader
state because the GLSL pipeline backend will always give the wrapper a
custom fragment shader. This simplifies a lot of the state comparison
done by the wrapper. The fog generation is also removed, even though
it's actually part of the vertex shader, because only the fixed function
pipeline backend actually calls the fog functions, so it would be
disabled when using any of the other backends anyway. We can fix
this when the two shader backends also start generating vertex
shaders.
GLES2 has no glAlphaFunc function so we need to simulate the behaviour
in the fragment shader. The alpha test function is simulated with an
if-statement and a discard statement. The reference value is stored as
a uniform.
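A rough sketch of the kind of code this generates for, e.g., GL_GREATER
(the GString and variable names here are made up):

    /* The reference value lives in a uniform so it can be updated
     * without recompiling the program */
    g_string_append (uniforms_source,
                     "uniform float alpha_test_ref;\n");

    /* GL_GREATER passes fragments whose alpha is greater than the
     * reference, so discard everything else */
    g_string_append (main_source,
                     "  if (frag_color.a <= alpha_test_ref)\n"
                     "    discard;\n");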
Previously, the flags marking differences in the alpha test function
and in the reference value were conflated into one. However, this is
awkward when generating shader code to simulate the alpha testing for
GLES 2, because in that case changing the function needs a different
program but changing the reference value just requires updating a
uniform. This patch gives the function and the reference value their
own state flags.
The GLSL shader generation supports layer combine constants so there's
no need to disable it for GLES2. It looks like there was also code for
it in the GLES2 wrapper so I'm not sure why it was disabled in the
first place.
The GLSL pipeline backend can now generate code to represent the
pipeline state in a similar way to the ARBfp backend. Most of the code
for this is taken from the GLES 2 wrapper.
_cogl_shader_compile_real had some code to create a set of strings to
combine the boilerplate code with a shader before calling
glShaderSource. This has now been moved to its own internal function
so that it could be used from the GLSL pipeline backend as well.
need_texture_combine_separate is moved to cogl-pipeline.c and renamed
to _cogl_pipeline_need_texture_combine_separate. The function is
needed by both the ARBfp and GLSL codegen backends so it makes sense to
share it.
The code for finding the arbfp authority for a pipeline should be the
same as finding the GLSL authority. So that the code can be shared the
function has been moved to cogl-pipeline.c and renamed to
_cogl_pipeline_find_codegen_authority.
Only one of the material backends can be generating code at a time, so
it seems to make sense to share the same source buffer between
arbfp and glsl. The new name is fragment_source_buffer in case we
later want to create a new buffer for the vertex shader. That probably
couldn't share the same buffer because it will likely need to be
generated at the same time.
Use the internal child list for the default map/unmap vfuncs. This removes
the requirement for non-container composite actors to implement their own
map/unmap functions.
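A sketch of what the default map vfunc can now look like (the private
member name is an assumption):

    static void
    clutter_actor_real_map (ClutterActor *self)
    {
      GList *l;

      CLUTTER_ACTOR_SET_FLAGS (self, CLUTTER_ACTOR_MAPPED);

      /* Walk the internal child list, so composite actors that aren't
       * containers no longer need their own map implementation */
      for (l = self->priv->children; l != NULL; l = l->next)
        clutter_actor_map (l->data);
    }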
Unrealizing an actor is a recursive process that needs to traverse the
children of an actor to ensure they are also unrealized. This maintains
the invariant that if any given actor is marked as unrealized then you
know that all its children have also been unrealized.
The previous implementation would use the container interface's
foreach_with_internals vfunc to explicitly traverse the children of
container actors but this didn't consider composite actors that aren't
containers.
Since clutter-actor now maintains an explicit list of children we can
also handle composite actors that aren't containers using
_clutter_actor_traverse.
This makes it possible to choose the traversal order: either depth
first or breadth first. When visiting actors in depth first order there
is now a callback called before the children are traversed and one
called after. Some tasks, such as unrealizing actors, need to explicitly
control the traversal order to maintain the invariant that all children
of an actor are unrealized before we actually mark the parent as
unrealized.
The callbacks are now passed the relative depth in the graph of the
actor being visited, and instead of only being able to return a boolean
to bail out of further traversal they can now return one of: continue,
skip_children or break. To implement something like unrealize it's
desirable to skip children that you find have already been unrealized.
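Approximately the shape of the new internal API (names may not match
the code exactly):

    typedef enum
    {
      CLUTTER_ACTOR_TRAVERSE_VISIT_CONTINUE,
      CLUTTER_ACTOR_TRAVERSE_VISIT_SKIP_CHILDREN,
      CLUTTER_ACTOR_TRAVERSE_VISIT_BREAK
    } ClutterActorTraverseVisitFlags;

    typedef ClutterActorTraverseVisitFlags
    (*ClutterTraverseCallback) (ClutterActor *actor,
                                gint          depth,
                                gpointer      user_data);

    /* e.g. unrealize passes a before-children callback that returns
     * SKIP_CHILDREN for actors that are already unrealized */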