The vertex attribute API assumes that if there is a color array
enabled then we can't determine if the colors are opaque so we have to
enable blending. The journal always uses a color array to avoid
switching color state between rectangles. Since the journal switched
to using vertex attributes this means we effectively always enable
blending from the journal. To fix this there is now a new flag for
_cogl_draw_vertex_attributes to specify that the color array is known
to contain only opaque colors, which lets the draw function skip
copying the pipeline. The journal passes this flag whenever the
pipeline has blending disabled.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2481
There is an internal version of cogl_draw_vertex_attributes_array
which previously just bypassed the framebuffer flushing, journal
flushing and pipeline validation so that it could be used to draw the
journal. This patch generalises the function so that it takes a set of
flags to specify which parts to flush. The public version of the
function now just calls the internal version with the flags set to
0. The '_real' version of the function has now been merged into the
internal version of the function because it was only called in one
place. This simplifies the code somewhat. The common code which
flushed the various state has been moved to a separate function. The
indexed versions of the functions have had a similar treatment.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2481
Cogl no longer has any code that assumes the buffer in a CoglBitmap is
allocated to the full size of height*rowstride. We should comment that
this is the case so that we remember to keep it that way. This is
important for cogl_texture_new_from_data because the application may
have created the data from a sub-region of a larger image and in that
case it's not safe to read the full rowstride of the last row when the
sub region contains the last row of the larger image.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2491
When uploading data for GLES we need to deal with cases where the
rowstride is too large to be described only by GL_UNPACK_ALIGNMENT
because there is no GL_UNPACK_ROW_LENGTH. Previously for the
sub-region uploading code it would always copy the bitmap and for the
code to upload the whole image it would copy the bitmap unless the
rowstride == bpp*width. Neither path took into account that we don't
need to copy if the rowstride is just an alignment of bpp*width. This
moves the bitmap copying code to a separate function that is used by
both upload methods. It only copies the bitmap if the rowstride is not
just an alignment of bpp*width.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2491
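A minimal sketch of the kind of check described above, assuming the
standard GL unpack alignments of 1, 2, 4 or 8 (the helper name is
hypothetical, not Cogl's real code):

#include <glib.h>

/* Returns TRUE if the rowstride can be described to GL purely with
 * GL_UNPACK_ALIGNMENT, i.e. it is just bpp * width rounded up to an
 * alignment of 1, 2, 4 or 8, so no copy of the bitmap is needed. */
static gboolean
rowstride_is_just_an_alignment (int width, int bpp, int rowstride)
{
  int alignment;

  for (alignment = 1; alignment <= 8; alignment *= 2)
    {
      int aligned_width = (width * bpp + alignment - 1) & ~(alignment - 1);

      if (aligned_width == rowstride)
        return TRUE;
    }

  return FALSE;
}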
The ffs function comes from POSIX rather than standard C so if we want
to use it in Cogl we need to provide a fallback for MSVC. This adds a
configure check for
the function and then a fallback using a while loop if it is not
available.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2491
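As a rough illustration (the function name is an assumption, not the
real configure-checked fallback), a while-loop version could look like:

/* Fallback for ffs(): returns the 1-based index of the least
 * significant set bit, or 0 if no bits are set. */
static int
fallback_ffs (int num)
{
  unsigned int v = (unsigned int) num;
  int i = 1;

  if (v == 0)
    return 0;

  while ((v & 1) == 0)
    {
      v >>= 1;
      i++;
    }

  return i;
}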
If we have to copy the bitmap to do the premultiplication then we were
previously using the rowstride of the source image as the rowstride
for the new image. This is wasteful if the source image is a subregion
of a larger image which would make it use a large rowstride. If we
have to copy the data anyway we might as well compact it to the
smallest rowstride. This also prevents the copy from reading past the
end of the last row of pixels.
An internal function called _cogl_bitmap_copy has been added to do the
copy. It creates a new bitmap with the smallest possible rowstride
rounded up to the nearest multiple of 4 bytes. There may be other places
in Cogl that are currently assuming we can read height*rowstride of
the source buffer so they may want to take advantage of this function
too.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2491
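A sketch of the copy described above, assuming the destination
rowstride is simply the packed row size rounded up to a multiple of 4
and that only width * bpp bytes of each source row are read:

#include <glib.h>
#include <string.h>

static void
copy_bitmap_rows (const guint8 *src, int src_rowstride,
                  guint8 *dst, int width, int height, int bpp)
{
  /* smallest possible rowstride, rounded up to a multiple of 4 bytes */
  int dst_rowstride = (width * bpp + 3) & ~3;
  int y;

  for (y = 0; y < height; y++)
    memcpy (dst + y * dst_rowstride,
            src + y * src_rowstride,
            /* never read the full source rowstride of the last row */
            width * bpp);
}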
The builtin vertex attribute for the normals was incorrectly checked
for as 'cogl_normal', however it is defined as 'cogl_normal_in' in the
shader boilerplate and for the name generated by CoglVertexBuffer.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2499
Implement the ClutterStageWindow::set_accept_focus() virtual function in
the win32 backend.
If accept_focus is set to TRUE then we call SetForegroundWindow()
after calling ShowWindow(). This is similar to what GDK does when
dealing with the same situation.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2500
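A sketch of the show path (the wrapper function is hypothetical;
ShowWindow() and SetForegroundWindow() are the real Win32 calls):

#include <windows.h>

static void
show_stage_window (HWND hwnd, BOOL accept_focus)
{
  ShowWindow (hwnd, SW_SHOW);

  /* mirror GDK: only raise the window to the foreground when the
   * stage is meant to take key focus on map */
  if (accept_focus)
    SetForegroundWindow (hwnd);
}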
Allow the developer to set whether the Stage should receive key focus
when mapped. The implementation is fully backend-dependent. The default
value is TRUE because that's what we've been expecting so far.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2500
If an actor (unfortunately) queues a relayout from within relayout, you
end up with both (ClutterActor*)stage->needs_allocation and
stage->relayout_pending set to TRUE. But if, in the same cycle, an
actor calls clutter_actor_get_allocation_box, that will trigger another
(recursive) _clutter_stage_maybe_relayout, which will wrongly reset the
relayout pending to FALSE, while not actually performing a new relayout
because of the re-entrancy protection.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2503
Other frameworks expose the same functionality as "auto-reverse",
probably to match the cassette tape player. It actually makes sense
for Clutter to follow suit.
The stage has dirty flags to record whenever the viewport and
projection matrices need to be flushed. However after flushing these
the flags were never cleared so it would always redundantly update the
state.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2480
The ARBfp fragend was bypassing generating a shader if the pipeline
contains a user program. However it shouldn't do this if the pipeline
only contains a vertex shader. This was breaking
test-cogl-just-vertex-shader.
Adding an action should allow passing a user data pointer, and have a
notification action that gets called when removing the action. This
allows introspection and language bindings to attach custom data to the
action - for instance, the real callable object that should be invoked.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2479
Previously, ClutterText took keyboard focus on mouse-down, regardless
of whether it was editable or selectable. Now it checks these properties,
and behaves like other actors if it can't do anything useful with
the focus.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2462
Previously Cogl would only ever use one atlas for textures and if it
reached the maximum texture size then all other new textures would get
their own GL texture. This patch makes it so that we create as many
atlases as needed. This should avoid breaking up some batches and it
will be particularly good if we switch to always using multi-texturing
with a default shader that selects between multiple atlases using a
vertex attribute.
Whenever a new atlas is created it is stored in a GSList on the
context. A weak reference is taken on the atlas using
cogl_object_set_user_data so that it can be removed from the list when
the atlas is destroyed. The atlas textures themselves take a reference
to the atlas and this is the only thing that keeps the atlas
alive. This means that once the atlas becomes empty it will
automatically be destroyed.
All of the COGL_NOTEs pertaining to atlases are now prefixed with the
atlas pointer to make it clearer which atlas is changing.
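The weak-reference bookkeeping might look roughly like this; the
context field, atlas type and callback are assumptions about internal
code, but cogl_object_set_user_data() is the public API named above:

typedef struct
{
  CoglContext *ctx;
  CoglAtlas *atlas;
} AtlasListEntry;

static CoglUserDataKey atlas_list_key;

/* Called when the atlas' last reference (held by its textures) goes
 * away; drops the stale pointer from the context's list. */
static void
atlas_destroyed_cb (void *user_data)
{
  AtlasListEntry *entry = user_data;

  entry->ctx->atlases = g_slist_remove (entry->ctx->atlases, entry->atlas);
  g_free (entry);
}

static void
track_atlas (CoglContext *ctx, CoglAtlas *atlas)
{
  AtlasListEntry *entry = g_new0 (AtlasListEntry, 1);

  entry->ctx = ctx;
  entry->atlas = atlas;
  ctx->atlases = g_slist_prepend (ctx->atlases, atlas);

  /* a weak reference: no cogl_object_ref() is taken, so the atlas is
   * still destroyed once all of its textures drop their references */
  cogl_object_set_user_data ((CoglObject *) atlas, &atlas_list_key,
                             entry, atlas_destroyed_cb);
}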
All of the drawing needed in _cogl_add_path_to_stencil_buffer is done
with the vertex attribute API so there should be no need to flush the
enable flags to enable the vertex array. This was causing problems on
GLES2 where the vertex array isn't available.
The GLES2 wrapper is no longer needed because the shader generation is
done within the GLSL fragend and vertend and any functions that are
different for GLES2 are now guarded by #ifdefs.
Once the GLES2 wrapper is removed then we won't have the GLenums
needed for setting up the layer combine state. This adds Cogl enums
instead which have the same values as the corresponding GLenums. The
enums are:
CoglPipelineCombineFunc
CoglPipelineCombineSource
and
CoglPipelineCombineOp
Once the GLES2 wrapper is removed we won't be able to upload the
matrices with the fixed function API any more. The fixed function API
gives a global state for setting the matrix but if a custom shader
uniform is used for the matrices then the state is per
program. _cogl_matrix_stack_flush_to_gl is called in a few places and
it is assumed the current pipeline doesn't need to be flushed before
it is called. To allow these semantics to continue to work, on GLES2
the matrix flush now just stores a reference to the matrix stack in
the CoglContext. A pre_paint virtual is added to the progend which is
called whenever a pipeline is flushed, even if the same pipeline was
flushed already. This gives the GLSL progend a chance to upload the
matrices to the uniforms. The combined modelview/projection matrix is
only calculated if it is used. The generated programs end up never
using the modelview or projection matrix so it usually only has to
upload the combined matrix. When a matrix stack is flushed a reference
is taken to it by the pipeline progend and the age is stored so that
if the same state is used with the same program again then we don't
need to reupload the uniform.
Sometimes it would be useful if we could efficiently track when a matrix
stack has been modified. For example on GLES2 we have to upload the
modelview as a uniform to our glsl programs but because the modelview
state is part of the framebuffer state it becomes a bit more tricky to
know when to re-sync the value of the uniform with the framebuffer
state. This adds an "age" counter to CoglMatrixStack which is
incremented for any operation that effectively modifies the top of the
stack so now we can save the age of the stack inside the pipeline
whenever we update modelview uniform and later compare that with the
stack to determine if it has changed.
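A self-contained sketch of the age-counter pattern (not the real
CoglMatrixStack structures, just the idea):

#include <stdbool.h>

typedef struct
{
  float top[16];      /* current top-of-stack matrix */
  unsigned int age;   /* bumped by every operation that modifies the top */
} MatrixStack;

typedef struct
{
  const MatrixStack *flushed_modelview;
  unsigned int flushed_age;
} ProgramState;

static void
matrix_stack_translate (MatrixStack *stack, float x, float y, float z)
{
  /* ... update stack->top with the translation ... */
  (void) x; (void) y; (void) z;
  stack->age++;
}

/* Only re-upload the modelview uniform if the stack or its age differ
 * from what was last flushed for this program. */
static bool
modelview_uniform_is_dirty (const ProgramState *state,
                            const MatrixStack *stack)
{
  return state->flushed_modelview != stack ||
         state->flushed_age != stack->age;
}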
This returns the layer matrix given a pipeline and a layer index. The
API is kept as internal because it directly returns a pointer into the
layer private data to avoid a copy into an out-param. We might also
want to add a public function which does the copy.
When the GLES2 wrapper is removed we can't use the fixed function API
such as glColorPointer to set the builtin attributes. Instead the GLSL
progend now maintains a cache of attribute locations that are queried
with glGetAttribLocation. The code that previously maintained a cache
of the enabled texture coord arrays has been modified to also cache
the enabled vertex attributes under GLES2. The vertex attribute API is
now the only place that is using this cache so it has been moved into
cogl-vertex-attribute.c
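A sketch of caching one builtin attribute location per linked program;
glGetAttribLocation() is the real GL call, while the cache structure
is a simplified assumption:

#include <GLES2/gl2.h>
#include <glib.h>

typedef struct
{
  GLuint program;
  gboolean color_location_cached;
  GLint color_location;
} AttribLocationCache;

static GLint
get_color_attrib_location (AttribLocationCache *cache)
{
  if (!cache->color_location_cached)
    {
      /* "cogl_color_in" is the builtin attribute name used by the
       * shader boilerplate */
      cache->color_location =
        glGetAttribLocation (cache->program, "cogl_color_in");
      cache->color_location_cached = TRUE;
    }

  return cache->color_location;
}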
Previously when stroking a path it was flushing a pipeline and then
directly calling glDrawArrays to draw the line strip from the path
nodes array. This patch changes it to build a CoglVertexArray and a
series of attributes to paint with instead. The vertex array and
attributes are attached to the CoglPath so it can be reused later. The
old vertex array for filling has been renamed to fill_vbo.
The code to display the source when the show-source debug option is
given has been moved to _cogl_shader_set_source_with_boilerplate so
that it will show both user shaders and generated shaders. It also
shows the code with the full boilerplate. To make it the same for
ARBfp, _cogl_shader_compile_real now also dumps user ARBfp shaders.
The GLSL vertend is mostly only useful for GLES2. The fixed function
vertend is kept at higher priority than the GLSL vertend so it is
unlikely to be used in any other circumstances.
Due to Mesa bug 28585 calling glVertexAttrib with attrib location 0
doesn't appear to work. This patch just reorders the vertex and color
attributes in the shader in the hope that Mesa will assign the color
attribute to a different location.
Some builtin attributes such as the matrix uniforms and some varyings
were missing from the boilerplate for GLES2. This also moves the
texture matrix and texture coord attribute declarations to
cogl-shader.c so that they can be dynamically defined depending on the
number of texture coord arrays enabled.
The vertends are intended to flush state that would be represented in
a vertex program. Code to handle the layer matrix, lighting and
point size has now been moved from the common cogl-pipeline-opengl
backend to the fixed vertend.
'progend' is short for 'program backend'. The progend is intended to
operate on combined state from a fragment backend and a vertex
backend. The progend has an 'end' function which is run whenever the
pipeline is flushed and the two pipeline change notification
functions. All of the progends are run whenever the pipeline is
flushed instead of selecting a single one because it is possible that
multiple progends may be in use for example if the vertends and
fragends are different. The GLSL progend will take the shaders
generated by the fragend and vertend and link them into a single
program. The fragend code has been changed to only generate the shader
and not the program. The idea is that pipelines can share fragment
shader objects even if their vertex state is different. The authority
for the progend needs to be the combined authority on the vertend and
fragend state.
This adds two internal functions:
gboolean
_cogl_program_has_fragment_shader (CoglHandle handle);
gboolean
_cogl_program_has_vertex_shader (CoglHandle handle);
They just check whether any of the contained shaders are of that type.
The pipeline function _cogl_pipeline_find_codegen_authority has been
renamed to _cogl_pipeline_find_equivalent_parent and it now takes a
set of flags for the pipeline and layer state that affects the
authority. This is needed so that we can reuse the same code in the
vertend and progends.
Previously enabling and disabling textures was done regardless of the
backend in cogl-pipeline-opengl. However enabling and disabling
texture targets only has any meaning if no fragment shaders are being
used so this patch moves the code to cogl-pipeline-fragend-fixed.
The GLES2 wrapper has also been changed to ignore enabledness when
deciding whether to update texture coordinate attribute pointers.
The current Cogl pipeline backends are entirely concerned with the
fragment processing state. We also want to eventually have separate
backends to generate shaders for the vertex processing state so we
need to rename the fragment backends. 'Fragend' is a somewhat weird
name but we wanted to avoid ending up with illegible symbols like
CoglPipelineFragmentBackendGlslPrivate.
While we do check for compatibility and transformability of a GValue
with the GParamSpec value type, we are actually failing really badly
at it.
First of all, we bail out on the wrong conditions.
Then we use the type of the value passed instead of using the type
of the property itself.
This makes it impossible to actually use transformation functions for
GValue types - even those that have been registered by GLib itself -
when using the Animation API directly, instead of going through the
clutter_actor_animate() wrappers.
The ParamSpec sub-classes we define are meant to be used only from the C
API, as high-level languages completely ignore them.
The ClutterStageWindow interface is an internal type that escaped into
the public headers; all its methods are private, but we cannot remove
the type until we break for 2.0.
Currently clutter_list_model_get_iter_at_row() always returns an
iterator to the last non-filtered row when asking for row [1-N].
Patch makes the function return an iterator to the Nth non-filtered
row or NULL.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2460
Adapt to changes from these wayland commits:
35fd2a8cc68c42d90756330535de04cbbb4d2613
2bb3ebe1e437acf836449f0a63f3264ad29566f2
f8fc08f77187f6a5723281dab66841e5f3c24320
http://bugzilla.clutter-project.org/show_bug.cgi?id=2474
We are currently using a pipeline as a key into our arbfp program cache
but because we weren't making a copy of the pipelines used as keys there
were times when doing a lookup in the cache would end up trying to
compare a lookup key with an entry key that would point to invalid
memory.
Note: the current approach isn't ideal in that the key pipeline may
reference arbitrarily large user textures which will now be kept alive
indefinitely. The plan to improve on this is that we will
have a mechanism to create a special "key pipeline" which will derive
from the default Cogl pipeline (to avoid affecting the lifetime of
other pipelines) and only copy state from the original pipeline that
affects the arbfp program and will reference small dummy textures
instead of potentially large user textures.
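A sketch of the fix, assuming a GHashTable keyed on pipelines
(cogl_pipeline_copy() is the public API; the rest of the names are
hypothetical):

#include <cogl/cogl.h>
#include <glib.h>

static void
cache_arbfp_program (GHashTable *program_cache,
                     CoglPipeline *pipeline,
                     gpointer program_state)
{
  /* take our own copy of the key pipeline so the cache entry never
   * points at memory owned (and possibly freed) by someone else */
  CoglPipeline *key = cogl_pipeline_copy (pipeline);

  g_hash_table_insert (program_cache, key, program_state);
}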
In the arbfp backend there is a sequential approach to finding a suitable
arbfp program to use for a given pipeline; first we see if there's
already a program associated with the pipeline, 2nd we try to find a
program associated with the "arbfp-authority", 3rd we try to look up a
program in a cache and finally we resort to starting code-generation for
a new program. This patch slightly reworks the code of these steps to
hopefully make them a bit clearer.
_cogl_pipeline_needs_blending_enabled tries to determine whether each
layer is using the default combine state. However it was using
argument 0 for both checks so the if-statement would never be true.
There are a set of "EvalFlags" that get passed to _cogl_pipeline_hash
that can tweak the semantics of what state is evaluated for hashing but
these flags weren't getting passed via the HashState state structure
so it was undefined whether you would get the correct semantics.
According to 9cc9033347 the Windows headers #define 'near' as nothing,
and presumably the same is true for 'far' too. Apparently this define is
to improve compatibility with code written for Windows 3.1, so it's good
that people will be able to incorporate such code into their Clutter
applications.
Since c6493885c3 when building the EGL backend for eglx there was
no fallback in the init_events implementation so the X11 backend init
function would never get called. This was stopping it from receiving
any X events so a lot of things broke. It now just chains up.
When clutter_score_append_at_marker is called instead of
clutter_score_append the complete_id field of ClutterScoreEntry was
being left uninitialised. When the entry is eventually freed it would
sometimes try to disconnect an invalid signal id. This was causing
conformance test failures for me on GLES2.
We were trying to declare and initialize an arbfp program cache for
GLES but since the prototypes for the _hash and _equal functions were
only available for GL this broke the GLES builds. By #ifdefing the code
to conditionally declare/initialize for GL only this should hopefully
fix GLES builds.
The constant 'True' is defined by Xlib which isn't used for all clutter
builds so this replaces occurrences of True with TRUE which is defined
by glib. This should hopefully fix the win32 builds.
This adds a cache (a GHashTable) of ARBfp programs and before ever
starting to code-generate a new program we will always first try and
find an existing program in the cache. This uses _cogl_pipeline_hash and
_cogl_pipeline_equal to hash and compare the keys for the cache.
There is a new COGL_DEBUG=disable-program-caches option that can disable
the cache for debugging purposes.
This allows us to get a hash for a set of state groups for a given
pipeline. This can be used for example to get a hash of the fragment
processing state of a pipeline so we can implement a cache for compiled
arbfp/glsl programs.
_cogl_pipeline_equal now accepts a mask of pipeline differences and layer
differences to constrain what state will be compared. In addition a set
of flags are passed that can tweak the comparison semantics for some
state groups. For example when comparing layer textures we sometimes
only need to compare the texture target and can ignore the data itself.
In updating the code this patch also changes it so all required pipeline
authorities are resolved in one step up-front instead of resolving the
authority for each state group in turn and repeatedly having to traverse
the pipeline's ancestry. This adds two new functions
_cogl_pipeline_resolve_authorities and
_cogl_pipeline_layer_resolve_authorities to handle resolving a set of
authorities.
This removes the unused array of per-backend priv data pointers
associated with every CoglPipelineLayer. This reduces the size of all
layer allocations and avoids having to zero an array for each
_cogl_pipeline_layer_copy.
A non-static function named cogl_object_get_type was inadvertently added
during the addition of the CoglObject base type, but there is no public
prototype in the headers and it's only referenced inside cogl-object.c
to implement cogl_handle_get_type() for compatibility. This removes the
function since we don't want to commit to CoglObject always simply being
a boxed type. In the future we may want to register hierarchical
GTypeInstance based types.
To allow us to have gobject properties that accept a CoglMatrix value we
need to register a GType. This adds a cogl_gtype_matrix_get_type function
that will register a static boxed type called "CoglMatrix".
This adds a new section to the reference manual for GType integration
functions.
As a pre-requisite for being able to register a boxed GType for
CoglMatrix (enabling us to define gobject properties that accept a
CoglMatrix) this adds cogl_matrix_copy and _free functions.
In _cogl_pipeline_needs_blending_enabled after first checking whether
the property most recently changed requires blending we would then
resort to checking all other properties too in case some other state
also requires blending. We now avoid checking all the other properties
when blending was previously disabled and the property that recently
changed doesn't require blending.
Note: the plan is to improve this further by explicitly keeping track
of the properties that currently cause blending to be enabled, so that
instead of having to check all the other properties we can constrain
the checks to those masked properties.
This moves _cogl_pipeline_get_parent and _cogl_pipeline_get_authority
into cogl-pipeline-private.h so they can be inlined since they have been
seen to get quite high in profiles. Given that they both contain such
small amounts of code the function call overhead is significant.
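The moved functions are roughly of this shape (field and macro names
here are assumptions about the private headers, not the exact code):

static inline CoglPipeline *
_cogl_pipeline_get_parent (CoglPipeline *pipeline)
{
  CoglPipelineNode *parent_node = COGL_PIPELINE_NODE (pipeline)->parent;

  return COGL_PIPELINE (parent_node);
}

static inline CoglPipeline *
_cogl_pipeline_get_authority (CoglPipeline *pipeline,
                              unsigned long difference)
{
  CoglPipeline *authority = pipeline;

  /* walk towards the root until we find the pipeline that owns the
   * given state group; the root owns every group so this terminates */
  while (!(authority->differences & difference))
    authority = _cogl_pipeline_get_parent (authority);

  return authority;
}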
This adds a debug option called disable-software-clipping which causes
the journal to always log the clip stack state rather than trying to
manually clip rectangles.
Before flushing the journal there is now a separate iteration that
will try to determine if the matrix of the clip stack and the matrix
of the rectangle in each entry are on the same plane. If they are it
can completely avoid the clip stack and instead manually modify the
vertex and texture coordinates to implement the clip. This has the
advantage that it won't break up batching if a single clipped
rectangle is used in a scene.
The software clip is only used if there is no user program and no
texture matrices. There is a threshold to the size of the batch where
it is assumed that it is worth the cost to break up a batch and
program the GPU to do the clipping. Currently this is set to 8
although this figure is plucked out of thin air.
To check whether the two matrices are on the same plane it tries to
determine if one of the matrices is just a simple translation of the
other. In the process of this it also works out what the translation
would be. These values can be used to translate the clip rectangle
into the coordinate space of the rectangle to be logged. Then we can
do the clip directly in the rectangle's coordinate space.
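A sketch of that test, relying on the CoglMatrix fields being visible
in the public header (the helper name and the exact set of components
compared are assumptions):

#include <cogl/cogl.h>
#include <glib.h>

/* Returns TRUE if matrix b only differs from matrix a by a translation
 * in x and y, and returns that translation so a clip rectangle can be
 * shifted into the rectangle's own coordinate space. */
static gboolean
matrices_differ_by_xy_translation (const CoglMatrix *a,
                                   const CoglMatrix *b,
                                   float *tx,
                                   float *ty)
{
  if (a->xx != b->xx || a->xy != b->xy || a->xz != b->xz ||
      a->yx != b->yx || a->yy != b->yy || a->yz != b->yz ||
      a->zx != b->zx || a->zy != b->zy || a->zz != b->zz ||
      a->wx != b->wx || a->wy != b->wy || a->wz != b->wz ||
      a->zw != b->zw || a->ww != b->ww)
    return FALSE;

  *tx = b->xw - a->xw;
  *ty = b->yw - a->yw;

  return TRUE;
}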
Previously in cogl-clip-state.c when it detected that the current
modelview matrix is screen-aligned it would convert the clip entry to
a window clip. Instead of doing this cogl-clip-stack.c now contains
the detection and keeps the entry as a rectangle clip but marks that
it is entirely described by its scissor rect. When flushing the clip
stack it doesn't do anything extra for entries that have this mark
(because the clip will already have been set up by the scissor). This is
needed so that we can still track the original rectangle coordinates
and modelview matrix to help detect when it would be faster to modify
the rectangle when adding it to the journal rather than having to
break up the batch to set the clip state.
When logging a quad we now only store the 2 vertices representing the
top left and bottom right of the quad. The color is only stored once
per entry. Once we come to upload the data we expand the 2 vertices
into four and copy the color to each vertex. We do this by mapping the
buffer and directly expanding into it. We have to copy the data before
we can render it anyway so it doesn't make much sense to expand the
vertices before uploading and this way should save some space in the
size of the journal. It also makes it slightly easier if we later want
to do pre-processing on the journal entries before uploading such as
doing software clipping.
The modelview matrix is now always copied to the journal entry whereas
before it would only be copied if we aren't doing software
transform. The journal entry struct always has the space for the
modelview matrix so hopefully it's only a small cost to copy the
matrix.
The transform for the four vertices is now done using
cogl_matrix_transform_points which may be slightly faster than
transforming them each individually with a call to
cogl_matrix_transform_point.
This reverts commit 4cfe90bde2.
GLSL 1.00 on GLES doesn't support unsized arrays so the whole idea
can't work.
Conflicts:
clutter/cogl/cogl/cogl-pipeline-glsl.c
The check for whether we can reuse a program we've already generated
was only being done if the pipeline already had a
glsl_program_state. When there is no glsl_program_state it then looks
for the nearest ancestor it can share the program with. It then
wasn't checking whether that ancestor already had a GL program so it
would start generating the source again. It wouldn't however compile
that source again because _cogl_pipeline_backend_glsl_end does check
whether there is already a program. This patch moves the check until
after it has found the glsl_program_state, whether it was found
from an ancestor or as its own state.
Under GLES2 we were defining the cogl_tex_coord_in varying as an array
with a size determined by the number of texture coordinate arrays
enabled whenever the program is used. This meant that we may have to
regenerate the shader with a different size if the shader is used with
more texture coord arrays later. However in OpenGL the equivalent
builtin varying gl_TexCoord is simply defined as:
varying vec4 gl_TexCoord[]; /* <-- no size */
GLSL is documented that if you declare an array with no size then you
can only access it with a constant index and the size of the array
will be determined by the highest index used. If you want to access it
with a non-constant expression you need to redeclare the array
yourself with a size.
We can replicate the same behaviour in our Cogl shaders by instead
declaring the cogl_tex_coord_in with no size. That way we don't have
to pass around the number of tex coord attributes enabled when we
flush a material. It also means that CoglShader can go back to
directly uploading the source string to GL when cogl_shader_source is
called so that we don't have to keep a copy of it around.
If the user wants to access cogl_tex_coord_in with a non-constant
index then they can simply redeclare the array themselves. Hopefully
developers will expect to have to do this if they are accustomed to
the gl_TexCoord array.
When compiling for GLES2, the codegen is affected by state other than
the layers. That means when we find an authority for the codegen state
we can't directly look at authority->n_layers to determine the number
of layers because it isn't necessarily the layer state authority. This
patch changes it to use cogl_pipeline_get_n_layers instead. Once we
have two authorities that differ in codegen state we then compare all
of the layers to decide if they would affect codegen. However it was
ignoring the fact that the authorities might also differ by the other
codegen state. This patch also adds an extra check for whether
_cogl_pipeline_compare_differences contains any codegen bits other
than COGL_PIPELINE_STATE_LAYERS.
When determining if a layer would require a different shader to be
generated it needs to check a certain set of state changes and it
needs to check whether the texture target is different. However it was
checking whether the texture target was different only if the other
state was also different, which doesn't make any sense. It also only
checked the texture difference if that was the only state change, which
meant that effectively the code was impossible to reach. Now it does
the texture target check independently of the other state changes.
The fixed pipeline backend wasn't correctly flushing the combine
constant because it was using the wrong flag to determine if the
combine constant has changed since the last flushed material.
When enabling a unit that was disabled from a previous flush pipeline
it was forgetting to rebind the right texture unit so it wouldn't
work. This was causing the redhand to disappear when using the fixed
function backend in test-cogl-multitexture if anything else is added
to the scene.
For shader generation backends we don't need to worry about changes to
the texture object and changing the user matrix. The missing user
matrix flag was causing test-cogl-multitexture to regenerate the
shader every frame.
Having ctx here produces a warning on GLES. However it's needed for Big
GL as we have at the top of the file:
#ifdef HAVE_COGL_GL
#define glClientActiveTexture ctx->drv.pf_glClientActiveTexture
#endif
This reverts commit 27a3a2056a.
That's what happens when you test things only with 2 configure options
instead of 3. The 2 tested compile, the third one breaks. Another good
catch for the eglx bot!
With glib 2.28, we'll be able to have one GSource per device manager
with child sources for each device. Make a note to update the code
in a few months.
An array is used to translate the button to its mask. Clutter defines
the masks for buttons 1 to 5 but we report BTN_LEFT..BTN_TASK, i.e.
0x110..0x117. We need to pad the array so that the translation does not
access random data for buttons between 0x115 and 0x117.
Discarding the event without any warning when the device has no
associated stage makes it hard to find the bug for people implementing
new event backends. We should really warn for that abnormal condition in
_clutter_input_device_update().
We now support EV_REL events coming from evdev devices. This addition
is pretty straightforward: it adds an x,y position per GSource listening
to an evdev device, updates it from EV_REL (relative) events and crafts
new ClutterMotionEvents. As for buttons, BTN_LEFT..BTN_TASK are
translated to ClutterButtonEvents with 1..8 as the button number.
Even with udev, the read fails before udev has a chance to signal the
change. Hence (and to handle errors gracefully anyway), let's remove the
device from the device manager in case of a read() error.
The device manager now fully owns the GSources corresponding to the
devices it manages. This will allow not only to remove the source when
udev signals a device removal but also handle read() errors gracefully
by removing the faulty device from the manager.
Just connect to the GUdevClient "uevent" signal and deal with the
"add"/"remove" actions. This drives the installation/removal of the
GSource listening to the device.
Let's use the sysfs path of the device to make sure we only load evdev
devices, not legacy mousedev ones for instance. We rely on the sysfs
API/ABI guarantees and look for devices ending in /input%d/event%d.
This backend is an event backend that can be enabled for EGL (for now).
It uses udev (gudev) to query input devices on a linux system, listens
to keyboard events from input devices and uses xkbcommon to translate
raw key codes into keysyms.
This commit only supports key events, more to follow.
Looking at what the X11 backend does: the unicode value is being
translated to the unicode codepoint of the symbol if possible. Let's do
the same then.
Before that, key events for say KEY_Right (0xff53) had the unicode_value
set to the keysym, which meant "This key event is actually printable and
its Unicode codepoint is 0xff53", which led to interesting results.
The wayland client code has support for translating raw linux input
device key codes coming from the wayland compositor into key symbols
thanks to libxkbcommon.
A backend directly listening to linux input devices (called evdev, just
like the Xorg one) could use exactly the same code for the translation,
so abstract it a bit in a separate file.
In 6246c2bd6 I moved the code to add the boilerplate to a shader to a
separate function and also made it so that the common boilerplate is
added as a separate string to glShaderSource. However I didn't notice
that the #define for the vertex and fragment shaders already includes
the common part so it was being added twice. Mesa seems to accept this
but it was causing problems on the IMG driver because COGL_VERSION was
defined twice.
Don't calculate an extra layout in clutter_text_get_preferred_height for
single-line strings, when it's unnecessary. There's no need to set the
width of a layout when in single-line mode, as wrapping will not happen.
Previously when the shader effect is used with a new actor it would
end up throwing away the old program. I don't think this is necessary
and it means if you use an effect to temporarily bind to an actor then
it will recompile the shader whenever it is applied.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2454
When a new actor is set for ClutterOffscreenEffect it would throw away
the old material. I don't think there is anything specifically tied to
the actor in the material so throwing away just loses Cogl's cached
state about the material. This ends up relinking the shader every time
a new actor is set in ClutterShaderEffect.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2454
Do not use the compiler to zero the first field of the GValue member,
since it's apparently non-portable. As we're allocating memory anyway we
can let the slice allocator do the zero-ing for us.
Mentioned in: http://bugzilla.clutter-project.org/show_bug.cgi?id=2455
Before commit 49898d43 CoglPipeline would compare whether a pipeline
layer's texture is equal by fetching the underlying GL handle. I
changed that so that it would only compare the CoglHandles because
that commit removes the GL handle texture overrides and sliced
textures instead log the underlying primitive texture. However I
forgot that the primitives don't always use
_cogl_texture_foreach_sub_texture_in_region when the quad fits within
the single texture so it won't use a texture override. This meant that
atlas textures and sub textures get logged with the atlas handle so
the comparison still needs to be done using the GL handles. It might
be nice to add a CoglTexture virtual to get the underlying primitive
texture instead to avoid having the pipeline poke around with GL
handles.
If we have to make override changes to the user's source material to
handle cogl_polygon then we need to make sure we unref the override
material at the end.
Previously we used the layers->backend_priv[] members to determine when
to notify backends about layer changes, but it is entirely up to the
backends if they want to associate private state with layers, even
though they may still be interested in layer change notifications (they
may associate layer related state with the owner pipeline).
We now make the observation that in
_cogl_pipeline_backend_layer_change_notify we should be able to assume
there can only be one backend currently associated with the layer
because we wouldn't allow changes to a layer with multiple dependants.
This means we can determine the backend to notify by looking at the
owner pipeline instead.
Previously whenever the size of the FBO changes it would create a new
material and attach the texture to it. This is not good for Cogl
because it throws away any cached state for the material. In
test-rotate the size of the FBO changes constantly so it effectively
uses a new material every paint. For shader effects this also ends up
relinking the shader every paint because the linked programs are part
of the material state.
The features_cached member of CoglContext is intended to mark when
we've calculated the features so that we know if they are ready in
cogl_get_features. However we always intialize the features while
creating the context so features_cached will never be FALSE so it's
not useful. We also had the odd behaviour that the COGL_DEBUG feature
overrides were only applied in the first call to
cogl_get_features. However there are other functions that use the
feature flags such as cogl_features_available that don't use this
function so in some cases the feature flags will be interpreted before
the overrides are applied. This patch makes it always initialize the
features and apply the overrides immediately while creating the
context. This fixes a problem with COGL_DEBUG=disable-arbfp where the
first material flushed is done before any call to cogl_get_features so
it may still use ARBfp.
Now that the GLSL backend can generate code it can effectively handle
any pipeline unless there is an ARBfp program. However with current
open source GL drivers the ARBfp compiler is more stable so it makes
sense to prefer ARBfp when possible. The GLSL backend is also lower
than the fixed function backend on the assumption that any driver that
supports GLSL will also support ARBfp so it's quicker to try the fixed
function backend next.
This adds COGL_DEBUG=disable-fixed to disable the fixed function
pipeline backend. This is needed to test the GLSL shader generation
because otherwise the fixed function backend would always override it.
We don't want to use gl_PointCoord to implement point sprites on big
GL because in that case we already use glTexEnv(GL_COORD_REPLACE) to
replace the texture coords with the point sprite coords. Although GL
also supports the gl_PointCoord variable, it requires GLSL 1.2 which
would mean we would have to declare the GLSL version and check for
it. We continue to use gl_PointCoord for GLES2 because it has no
glTexEnv function.
The GLES2 wrapper no longer needs to generate any fragment shader
state because the GLSL pipeline backend will always give the wrapper a
custom fragment shader. This simplifies a lot of the state comparison
done by the wrapper. The fog generation is also removed even though
it's actually part of the vertex shader because only the fixed
function pipeline backend actually calls the fog functions so it would
be disabled when using any of the other backends anyway. We can fix
this when the two shader backends also start generating vertex
shaders.
GLES2 has no glAlphaFunc function so we need to simulate the behaviour
in the fragment shader. The alpha test function is simulated with an
if-statement and a discard statement. The reference value is stored as
a uniform.
Previously the flag to mark the differences for the alpha test
function and reference value were conflated into one. However this is
awkward when generating shader code to simulate the alpha testing for
GLES 2 because in that case changing the function would need a
different program but changing the reference value just requires
updating a uniform. This patch makes the function and reference have
their own state flags.
The GLSL shader generation supports layer combine constants so there's
no need to disable it for GLES2. It looks like there was also code for
it in the GLES2 wrapper so I'm not sure why it was disabled in the
first place.
The GLSL pipeline backend can now generate code to represent the
pipeline state in a similar way to the ARBfp backend. Most of the code
for this is taken from the GLES 2 wrapper.
_cogl_shader_compile_real had some code to create a set of strings to
combine the boilerplate code with a shader before calling
glShaderSource. This has now been moved to its own internal function
so that it could be used from the GLSL pipeline backend as well.
need_texture_combine_separate is moved to cogl-pipeline.c and renamed
to _cogl_pipeline_need_texture_combine_separate. The function is
needed by both the ARBfp and GLSL codegen backends so it makes sense to
share it.
The code for finding the arbfp authority for a pipeline should be the
same as finding the GLSL authority. So that the code can be shared the
function has been moved to cogl-pipeline.c and renamed to
_cogl_pipeline_find_codegen_authority.
Only one of the material backends can be generating code at the same
time so it seems to make sense to share the same source buffer between
arbfp and glsl. The new name is fragment_source_buffer in case we
later want to create a new buffer for the vertex shader. That probably
couldn't share the same buffer because it will likely need to be
generated at the same time.
Use the internal child list for the default map/unmap vfuncs. This removes
the requirement for non-container composite actors to implement their own
map/unmap functions.
Unrealizing an actor is a recursive process that needs to traverse the
children of an actor to ensure they are also unrealized. This maintains
the invariant that if any given actor is marked as unrealized then you
know that all its children have also been unrealized.
The previous implementation would use the container interface's
foreach_with_internals vfunc to explicitly traverse the children of
container actors but this didn't consider composite actors that aren't
containers.
Since clutter-actor now maintains an explicit list of children we can
also handle composite actors that aren't containers using
_clutter_actor_traverse.
This makes it possible to choose the traversal order; either depth first
or breadth first and when visiting actors in a depth first order there
is now a callback called before children are traversed and one called
after. Some tasks such as unrealizing actors need to explicitly control
the traversal order to maintain the invariant that all children of an
actor are unrealized before we actually mark the parent as unrealized.
The callbacks are now passed the relative depth in the graph of the
actor being visited and instead of only being able to return a boolean
to bail out of further traversal it can now do one of: continue,
skip_children or break. To implement something like unrealize it's
desirable to skip children that you find have already been unrealized.
ClutterX11TexturePixmap watches for configure events to tell when it
needs to name a new pixmap for the window. However, ConfigureEvents
occur on moves in addition to resizes, and doing round trips and
naming new pixmaps every time a window is moved is a real performance
killer.
Add clutter_x11_texture_pixmap_sync_window_internal() that takes the
size/position of the window as arguments rather than always calling
XGetWindowAttributes. This allows us to bypass all work other than
notifying the window-x/window-y properties when we get a ConfigureEvent
for a move.
The last received width/height is saved to allow us to also omit
XGetWindowAttributes on MapNotify events.
The public clutter_x11_texture_pixmap_sync_window() becomes a bit less
efficient since we no longer combine the roundtrips for
XGetWindowAttributes() and XCompositeNameWindowPixmap(), but it appears
to have no callers in current publicly available code.
Several FIXME's are added for areas where there are still weird things
going on in the code or improvements could be made.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2356
* cogl_texture_get_data() is converted to use
_cogl_texture_foreach_sub_texture_in_region() to iterate
through the underlying textures.
* When we need to read only a portion of the underlying
texture, we set up a FBO and use _cogl_read_pixels()
to read the portion we need. This is enormously more
efficient for reading a small portion of a large atlas
texture.
* The CoglAtlasTexture, CoglSubTexture, and CoglTexture2dSliced
implementation of get_texture() are removed.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2414
Previously in cogl_read_pixels we assume the format of the framebuffer
is always premultiplied because that is the most likely format with
the default Cogl blend mode. However when the framebuffer is bound to
a texture we should be able to make a better guess at the format
because we know the texture keeps track of the premult status. This
patch adds an internal format member to CoglFramebuffer. For onscreen
framebuffers we still assume it is RGBA_8888_PRE but for offscreen to
textures we copy the texture format. cogl_read_pixels uses this to
determine whether the data returned by glReadPixels will be
premultiplied.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2414
When converting the data in cogl_read_pixels it was using bmp_format
instead of the format passed in to the function. bmp_format is the
same as the passed in format except that it always has the premult bit
set. Therefore the conversion would not handle premultiply correctly.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2414
This is the same as _cogl_read_pixels except that it takes a rowstride
parameter for the destination buffer. Under OpenGL, setting the
rowstride will end up setting GL_ROW_LENGTH so that the buffer
region can be directly written to. Under GLES GL_ROW_LENGTH is not
supported so it will use an intermediate buffer as it does if the
format is not GL_RGBA.
cogl_read_pixels now just calls the full version of the function with
the rowstride set to width*bpp.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2414
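On OpenGL the rowstride handling amounts to something like the
following sketch (it assumes the rowstride is a whole multiple of the
bytes per pixel; the wrapper function is hypothetical):

#include <GL/gl.h>

static void
read_pixels_with_rowstride (int x, int y, int width, int height,
                            int rowstride, int bpp, void *buffer)
{
  /* GL_PACK_ROW_LENGTH is specified in pixels, not bytes */
  glPixelStorei (GL_PACK_ROW_LENGTH, rowstride / bpp);

  glReadPixels (x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

  /* restore the default packing state */
  glPixelStorei (GL_PACK_ROW_LENGTH, 0);
}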
This function is the same as cogl_offscreen_new_to_texture but it
takes a level parameter and a set of flags so that FBOs can be used to
render to higher mipmap levels and to disable the depth and stencil
buffers. cogl_offscreen_new_to_texture now just calls the new function
with the level set to zero. This function could be useful in a few
places in Cogl where we want to use FBOs as an implementation detail
such as when copying between textures.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2414
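The relationship described above might look roughly like this (the
exact name, flags type and argument order of the new function are
assumptions):

#include <cogl/cogl.h>

/* assumed internal prototype */
CoglHandle
_cogl_offscreen_new_to_texture_full (CoglHandle texture,
                                     unsigned int create_flags,
                                     unsigned int level);

CoglHandle
cogl_offscreen_new_to_texture (CoglHandle texture)
{
  /* mipmap level 0 and no flags: behaves exactly as before */
  return _cogl_offscreen_new_to_texture_full (texture, 0, 0);
}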
In clutter_stage_real_queue_redraw we were checking to see if the
backend will ignore any subsequent redraw_clip so we can avoid the cost
of projecting the paint-volume of an actor into stage coordinates, but
we weren't ensuring that a full redraw would be queued; instead we just
bailed out immediately. This makes sure to call
_clutter_stage_window_add_redraw_clip (stage_window, NULL) in this case
to make sure the backend will do an un-clipped redraw.
This tweaks the semantics of the has_redraw_clips vfunc so we can assume
that at the start of a new frame there is an implied, initial,
redraw_clip that clips everything (i.e. nothing would be redrawn) so in
that case we would expect the has_redraw_clips vfunc to return True at
the start of a new frame for backends that support clipping.
Previously there was an ambiguity when this function returned False
since it could either mean a full screen redraw had been queued or it
could mean that the clip state wasn't yet initialized for that frame.
This would result in _clutter_stage_has_full_redraw_queued() returning
True at the start of a new frame even before any actors have been
updated, which in turn meant we would incorrectly ignore queue_redraw
requests for actors, believing them to be redundant.
Previously we were leaving it up to the default implementation of
get_paint_volume in ClutterGroup to handle the stage by determining the
bounding box of all contained children. This isn't the true bounding box
of the stage though since the stage is responsible for clearing the
entire framebuffer at the start of the frame. This adds a
get_paint_volume implementation for ClutterStage which simply returns
False which means Clutter has to assume it covers everything.
When we handle Expose events we try and queue a clipped redraw of the
stage, but for some reason we were also redundantly calling
clutter_actor_queue_redraw for the stage which would negate the request
to queue a clipped redraw.
When uploading a 3D texture with an awkward rowstride, on GLES Cogl
will copy the images to an intermediate buffer to pass to GL. However
it was using the wrong height when copying the data so it would end up
overflowing the buffer and crashing.
Since we're using CoglPipelineWrapModeInternal in the internal API
anyway, and the compiler complains loudly when comparing two enumeration
types without casting, the PipelineLayer struct should store the
wrap modes using the internal enumeration.
The last_paint_box for an actor represents its "normal" position - we
shouldn't update it or use it to cull drawing if we are painting
a clone of the actor. Tracking whether we are painting a clone is
done by adding _clutter_actor_push/pop_clone_paint() and a global
"clone paint level".
http://bugzilla.clutter-project.org/show_bug.cgi?id=2396
When using clip planes and we have to project some vertices into
screen coordinates we used to transform those by the modelview and then
the projection matrix separately. Now we combine the modelview and
projection matrix and then use that to transform the vertices in one
step instead.
When logging quads in the journal it used to be possible to specify a
mask of fallback layers (layers where a default white texture should be
used in-place of the corresponding texture in the current source
pipeline). Since we now handle fallbacks for cogl_rectangle* primitives
when validating the pipeline up-front before logging in the journal we
no longer need the ability for the journal to apply fallbacks too.
When transforming a paint-volume or transforming allocation vertices we
are transforming more than one point at a time so we can batch those
together with cogl_matrix_transform_points instead of
cogl_matrix_transform_point. Also in both of these cases we don't need
to do a projective transform so using cogl_matrix_transform_points also
lets us reduce the per-vertex computation.
This adds two new functions that allow us to transform or project an
array of points instead of only transforming one point at a time. Recent
benchmarking has shown cogl_matrix_transform_point to be a bottleneck
sometimes, so this should allow us to reduce the overhead when
transforming lots of vertices at the same time, and also reduce the cost
of 3 component, non-projective transforms.
For now they are marked as experimental (you have to define
COGL_ENABLE_EXPERIMENTAL_API) because there is some concern that it
introduces some inconsistent naming. cogl_matrix_transform_point would
have to be renamed cogl_matrix_project_point to be consistent, but that
would be an API break.
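A usage sketch of the batched variant for the common 3-component,
non-projective case (the signature shown matches the experimental API
as described above, but treat the exact argument order as an
assumption):

#define COGL_ENABLE_EXPERIMENTAL_API
#include <cogl/cogl.h>

typedef struct { float x, y, z; } Point3f;

static void
transform_allocation_vertices (const CoglMatrix *modelview,
                               const Point3f *verts_in,
                               Point3f *verts_out,
                               int n_verts)
{
  /* one call transforms the whole array instead of n_verts separate
   * cogl_matrix_transform_point() calls */
  cogl_matrix_transform_points (modelview,
                                3,                 /* components per point */
                                sizeof (Point3f),  /* stride in */
                                verts_in,
                                sizeof (Point3f),  /* stride out */
                                verts_out,
                                n_verts);
}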
Switch _cogl_rectangles_with_multitexture_coords to using
_cogl_pipeline_foreach_layer to iterate the layers of a pipeline when
validating instead of iterating the pipelines internal list, which is
risky since any modifications to pipelines (even to an override pipeline
derived from the original), could potentially corrupt the list as it is
being iterated.
This removes the possibility to specify wrap mode overrides within a
CoglPipelineFlushOptions struct since the right way to handle these
overrides is by copying the user's material and making the changes to
that copy before flushing. All primitives code has already switched away
from using these wrap mode overrides so this patch just removes unused
code and types. It also removes the wrap_mode_overrides argument for
_cogl_journal_log_quad.
The CSS Color Module 3, available at:
http://www.w3.org/TR/css3-color/
allows defining colors as:
rgb ( r, g, b )
rgba ( r, g, b, a)
along with the usual hexadecimal and named notations.
The r, g, and b channels can be:
• integers between 0 and 255
• percentages, between 0% and 100%
The alpha channel, if included using the rgba() modifier, can be a
floating point value between 0.0 and 1.0.
The ClutterColor parser should support this notation.
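Once supported, all of these notations would be accepted by
clutter_color_from_string() (a usage sketch):

#include <clutter/clutter.h>

static void
parse_some_colors (void)
{
  ClutterColor color;

  /* existing hexadecimal and named notations */
  clutter_color_from_string (&color, "#ff0000ff");
  clutter_color_from_string (&color, "red");

  /* CSS3 functional notation, integer or percentage channels */
  clutter_color_from_string (&color, "rgb(255, 0, 0)");
  clutter_color_from_string (&color, "rgba(100%, 0%, 0%, 0.5)");
}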
With the refactoring to centralize code into CoglBuffer,
_cogl_buffer_fini() was never actually implemented, so all GL
vertex and index buffer objects were leaked.
The duplicate call to glDeleteBuffers() in CoglPixelArray is
removed (it wasn't paying attention to whether the buffer had been
allocated as a PBO or not.)
http://bugzilla.clutter-project.org/show_bug.cgi?id=2423
This adds egl backend support for handling clipped redraws. This uses
the EGL_NOK_swap_region extension to enable the EGL backend to present a
subregion from the back buffer to the front so we don't always have to
redraw the entire stage for small updates.
This adds a COGL_DEBUG=wireframe option to visualize the underlying
geometry of the primitives being drawn via Cogl. This works for triangle
list, triangle fan, triangle strip and quad (internal only) primitives.
It also works for indexed vertex arrays.
In cogl_vertex_buffer_indices_get_for_quads() we sometimes have to
extend the length of an existing array, but when we came to unref the
previous array we didn't first check that it wasn't simply NULL.
This adds an optional data argument for cogl_vertex_array_new() since it
seems that mostly every case where we use this API we follow up with a
cogl_buffer_set_data() matching the size of the new array. This
simplifies all those cases and whenever we want to delay uploading of
data then NULL can simply be passed.
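A sketch of the simplification, assuming the new optional data argument
(the previous two-step upload is shown for contrast; treat the exact
signatures as assumptions):

#define COGL_ENABLE_EXPERIMENTAL_API
#include <cogl/cogl.h>

static CoglVertexArray *
make_array (void)
{
  static const float verts[] = {
    0.0f,   0.0f,
    100.0f, 0.0f,
    100.0f, 100.0f
  };

  /* before: allocate, then upload in a separate step */
  CoglVertexArray *array = cogl_vertex_array_new (sizeof (verts), NULL);
  cogl_buffer_set_data ((CoglBuffer *) array, 0, verts, sizeof (verts));
  cogl_object_unref (array);

  /* after: pass the data directly; NULL still delays the upload */
  return cogl_vertex_array_new (sizeof (verts), verts);
}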
The Behaviour class and its implementations have been replaced by the
new animation framework API and by the constraints for layout-related
animations.
Currently, we need to make tests build, so we undef DISABLE_DEPRECATED
in specific test cases while they get ported.
The paint volume structure is cached in the Actor it references, and
this causes a reference cycle.
The paint volume is going to be used when painting, so the actor must
still be valid - otherwise Clutter will bail out far before than
accessing the actor pointer in ClutterPaintVolume.
Otherwise, we could have used dispose() to check for a valid actor and
remove a reference if the actor field is !NULL; it feels less clean,
though, since we're effectively managing an extra reference on
ourselves.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2431
Starting from the 2.27 cycle, GLib is exposing a monotonic clock with
microseconds granularity throughout the time-based API. We can start
using it, given that the old, non-monotonic version is going to be
deprecated by the same cycle.
GLib 2.28 will deprecate GTimeVal and related API in favour of
standardizing on microseconds granularity for all time-based API.
Clutter should switch too.
All of the current users of GTimeVal convert to milliseconds when
doing time operations, and use GTimeVal only as storage. This can
effectively be replaced by a gint64.
The Master Clock uses a microsecond resolution, except when interacting
with the main loop itself, since the main loop has a millisecond
resolution - at least until Ryan Lortie manages to switch that too to
microseconds on Linux.
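In practice the storage change looks like this (a sketch contrasting
the old and new patterns; g_get_monotonic_time() is the GLib 2.28 API):

#include <glib.h>

static void
time_storage_example (void)
{
  GTimeVal tv;
  gulong msecs;
  gint64 now;

  /* before: GTimeVal storage plus a conversion to milliseconds */
  g_get_current_time (&tv);
  msecs = tv.tv_sec * 1000 + tv.tv_usec / 1000;

  /* after: a single gint64 holding monotonic microseconds */
  now = g_get_monotonic_time ();

  (void) msecs;
  (void) now;
}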
The clutter_timeline_do_tick() function was erroneously not privatized,
but it was still assumed to be private; we should just remove it from
the public symbols.
For internal usage, writing:
clutter_actor_get_name (actor) != NULL
? clutter_actor_get_name (actor)
: G_OBJECT_TYPE_NAME (actor)
is overly verbose and does two type checks. A simple, internal method
for getting the same result without type checks would be much more
appreciated.
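A minimal sketch of such a helper (the name and the private field
access are assumptions about internal code):

static inline const gchar *
_clutter_actor_get_debug_name (ClutterActor *actor)
{
  ClutterActorPrivate *priv = actor->priv;

  /* no type checks: this is only meant for internal debugging output */
  return priv->name != NULL ? priv->name : G_OBJECT_TYPE_NAME (actor);
}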
The "watch" functionality in xsettings-client.c is designed
for setups like GDK where filters are per-window. If we are going
to pass all events to _clutter_xsettings_client_process_event()
anyway, we can just pass in NULL for watch.
This avoids a nasty infinite loop where an event would get processed
triggering removing a filter and adding a new filter, which would
immediately run and remove a filter and add another and so on
ad-infinitum.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2415
The same behavior can be achieved by capturing events on the stage while
the button is pressed. This fixes a problem when using click and drag
actions on the same actor as there are no grabs involved.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2409
There's no longer any need to use the GL handle in the callback for
_cogl_texture_foreach_sub_texture_in_region because it can now work in
terms of primitive cogl textures so it has now been removed. This
would be helpful if we ever want to make the foreach function public
so that apps could implement their own primitives using sliced
textures.
Since d5634e37 the sliced texture backend now works in terms of
CoglTexture2Ds so there's no need to have special casing for
overriding the texture of a pipeline layer with a GL handle. Instead
we can just use cogl_pipeline_set_layer_texture with the
CoglHandle. The special _cogl_pipeline_set_layer_gl_texture_slice
function has now been removed and parts of the code for comparing
materials have been simplified.
The cogl_texture_foreach_sub_texture_in_region virtual for the sliced
texture backend was previously passing the CoglHandle of the sliced
texture to the callback. Since d5634e37 the slice texture backend now
works in terms of 2D textures so it's possible to pass the underlying
slice texture as a handle too. This makes all of the foreach callbacks
consistent in that they pass a CoglHandle of the primitive texture
type that matches the GL handle.
When COGL_ENABLE_EXPERIMENTAL_2_0_API is defined cogl.h will now include
cogl2-path.h which changes cogl_path_new() so it can directly return a
CoglPath pointer; it no longer exposes a prototype for
cogl_{get,set}_path and all the remaining cogl_path_ functions now take
an explicit path as their first argument.
The idea is that we want to encourage developers to retain path objects
for as long as possible so they can take advantage of us uploading the
path geometry to the GPU. Currently although it is possible to start a
new path and query the current path, it is not convenient.
The other thing is that we want to get Cogl to the point where nothing
depends on a global, current context variable. This will allow us to one
day define a sensible threading model if/when that is ever desired.
For now this new define is simply an alias for
COGL_ENABLE_EXPERIMENTAL_API but the intention is that we will also use
it to start experimenting with changes that need to break the existing
Cogl API in incompatible ways.
Since EGA colors are apparently all the rage in other toolkits, Clutter
should not be left out. On top of the usual CGA/EGA palette the static
colors also include the Tango Icon palette, which at least is more
pleasant to the eye.
Static colors are accessed through an enumeration by using
clutter_color_get_static(), or using the short-hand pre-processor
macros.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2066
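A short usage sketch; the exact enumeration value name is an assumption
here, the real names live in clutter-color.h:
  static ClutterActor *
  make_white_rectangle (void)
  {
    /* look up one of the static colors by its enumeration value
     * (CLUTTER_COLOR_WHITE is assumed here for illustration) */
    const ClutterColor *white =
      clutter_color_get_static (CLUTTER_COLOR_WHITE);

    return clutter_rectangle_new_with_color (white);
  }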
We now prepend a set of defines to any given GLSL shader so that we can
define builtin uniforms/attributes within the "cogl" namespace that we
can use to provide compatibility across a range of the earlier versions
of GLSL.
This updates test-cogl-shader-glsl.c and test-shader.c so they no longer
need to special-case GLES vs GL when splicing together their shaders,
as well as the blur, colorize and desaturate effects.
To get a feel for the new, portable uniform/attribute names here are the
defines for OpenGL vertex shaders:
#define cogl_position_in gl_Vertex
#define cogl_color_in gl_Color
#define cogl_tex_coord_in gl_MultiTexCoord0
#define cogl_tex_coord0_in gl_MultiTexCoord0
#define cogl_tex_coord1_in gl_MultiTexCoord1
#define cogl_tex_coord2_in gl_MultiTexCoord2
#define cogl_tex_coord3_in gl_MultiTexCoord3
#define cogl_tex_coord4_in gl_MultiTexCoord4
#define cogl_tex_coord5_in gl_MultiTexCoord5
#define cogl_tex_coord6_in gl_MultiTexCoord6
#define cogl_tex_coord7_in gl_MultiTexCoord7
#define cogl_normal_in gl_Normal
#define cogl_position_out gl_Position
#define cogl_point_size_out gl_PointSize
#define cogl_color_out gl_FrontColor
#define cogl_tex_coord_out gl_TexCoord
#define cogl_modelview_matrix gl_ModelViewMatrix
#define cogl_modelview_projection_matrix gl_ModelViewProjectionMatrix
#define cogl_projection_matrix gl_ProjectionMatrix
#define cogl_texture_matrix gl_TextureMatrix
And for fragment shaders we have:
#define cogl_color_in gl_Color
#define cogl_tex_coord_in gl_TexCoord
#define cogl_color_out gl_FragColor
#define cogl_depth_out gl_FragDepth
#define cogl_front_facing gl_FrontFacing
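For example, a trivial vertex shader written against the portable names
(embedded as a C string, e.g. for passing to cogl_shader_source()) no
longer needs any GL vs GLES special casing:
  /* on OpenGL the prepended defines map these names straight onto the
   * gl_* builtins listed above */
  static const char *example_vertex_shader =
    "void\n"
    "main (void)\n"
    "{\n"
    "  cogl_position_out = cogl_modelview_projection_matrix *\n"
    "                      cogl_position_in;\n"
    "  cogl_color_out = cogl_color_in;\n"
    "  cogl_tex_coord_out[0] = cogl_tex_coord0_in;\n"
    "}\n";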
The profiling support was broken - probably during the restructuring of
the build environment, but I'm too lazy to bisect that.
The fix is trivial, and everything works as it should.
When converting the virtual coordinates of the underlying texture for
a slice to virtual coordinates for the whole texture it was using the
size and offset of the intersection as the size of the child
texture. This would be incorrect if the texture contains waste or the
texture coordinates are not the default. Instead the sliced foreach
function now passes the CoglSpan to the callback instead of the
intersection.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2398
Previously in the tests/tools directory we built a disable-npots
library which was used as an LD_PRELOAD to trick Cogl into thinking
there is no NPOT texture extension. This is a little awkward to use, so
it seems much simpler to just define a COGL_DEBUG option to disable
npot textures.
Instead of waiting until clutter_actor_paint to check if there are any
handlers connected to the "paint" signal, we now do the check whenever
the paint-volume is requested in _actor_get_paint_volume_mutable().
Previously we checked in clutter_actor_paint(), but at that time we
may already be using a stage clip that could be derived from an
invalid paint-volume. We used to try to handle that by queuing a
follow-up, unclipped redraw, but there was an additional problem with
the previous approach: the checking wasn't enough to always catch
invalid volumes involved in culling (considering that containers may
derive their volume from children that haven't yet been painted).
By moving the check to _get_paint_volume time, not only do we now
correctly check children in cases where a container derives its volume
from its children's volumes, but we no longer need to queue follow-up
redraws to cover up artefacts.
Since we now never queue follow-up redraws, this in turn means we
should no longer clobber redraws queued with an explicit clip, which
was affecting gnome-shell since it connects a handler to the paint
signal of the stage.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2388
In some micro-benchmarks testing journal throughput the list
manipulation jumps pretty high in the profile. This replaces the
GSList usage with a GArray, which is effectively a grow-only
allocation, meaning we avoid ongoing allocations while manipulating
the stack mid-scene.
During _cogl_pipeline_needs_blending_enabled we were always checking
the current lighting properties (ambient, diffuse, specular, emission)
which had a notable impact during micro-benchmarks that exercise
journal throughput of simple colored rectangles. This #if 0's the
offending code, considering that Cogl doesn't actually support
lighting currently; when it does, we will be able to optimize this by
avoiding the checks when lighting is disabled.
When using cogl_set_source_color4ub there is a notable difference
between colors that require blending and those that don't. When trying
to modify the color of a pipeline referenced by the journal we don't
force a flush of the journal unless the color change will also change
the blending state. By using two separate pipeline objects for
handling opaque or transparent colors we can avoid ever flushing the
journal when repeatedly using cogl_set_source_color and jumping
between opaque and transparent colors.
This reworks _cogl_texture_quad_multiple_primitives so that instead of
using the CoglPipelineWrapModeOverrides mechanism to force the
clamp-to-edge repeat mode we now derive an override pipeline using
cogl_pipeline_copy. This avoids a relatively large, unconditional
memset.
This avoids using the wrap mode overrides mechanism to implement
_cogl_multitexture_quad_single_primitive which requires memsetting a
fairly large array. This updates it to use cogl_pipeline_foreach_layer()
and we now derive an override_material to handle changes to the wrap
modes instead of using the CoglPipelineWrapModeOverrides.
Previously there was a check to avoid filling the path if there are
zero nodes. However the tessellator also won't generate any triangles
if there are less than 3 nodes so we might as well bail out in that
case too. If we don't emit any triangles then we would end up trying
to create an empty VBO. Although I don't think this should necessarily
be a problem, this seems to cause Mesa to segfault in version 7.8.1
when calling glBufferSubData (although not in
master). test-cogl-primitives tries to fill a path with only two
points so it's convenient to be able to avoid the crash in this case.
When adding a new entry to the journal a reference is now taken on the
current clip stack. Modifying the current clip state no longer causes
a journal flush. The journal flushing code now has an extra stage to
compare the clip state of each entry. The comparison can simply be
done by comparing the pointers. Although different clip states will
still end up with multiple draw calls, this at least allows a scene
comprising multiple different clips to be uploaded with one VBO. It
also lays the groundwork to do certain tricks when drawing clipped
rectangles such as modifying the geometry instead of setting a clip
state.
This adds a flag to avoid flushing the clip state when flushing the
framebuffer state. This will be used by the journal to manage its own
clip state flushing.
Flushing the clip state no longer does anything that would cause the
journal to flush. The clip state is only flushed when flushing the
framebuffer state and in all cases this ends up flushing the journal
in one way or another anyway. Avoiding flushing the journal will make
it easier to log the clip state in the journal.
Previously when trying to set up a rectangle clip that can't be
scissored or when using a path clip the code would use cogl_rectangle
as part of the process to fill the stencil buffer. This is now changed
to use a new internal _cogl_rectangle_immediate function which
directly uses the vertex array API to draw a triangle strip without
affecting the journal. This should be just as efficient as the
previous journalled code because these places would end up flushing
the journal immediately before and after submitting the single
rectangle anyway, and flushing the journal always creates a new VBO, so
it would effectively do the same thing.
Similarly there is also a new internal _cogl_clear function that does
not flush the journal.
Previously we tracked whether the clip stack needs flushing as part of
the CoglClipState which is part of the CoglFramebuffer state. This is
a bit odd because most of the clipping state (such as the clip planes
and the scissor) are part of the GL context's state rather than the
framebuffer. We were marking the clip state on the framebuffer dirty
every time we changed the framebuffer anyway, so it seems to make more
sense to have the dirtiness be part of the global context.
Instead of just a single boolean to record whether the state needs
flushing, the CoglContext now holds a reference to the clip stack that
was flushed. That way we can flush arbitrary stack states and if it
happens to be the same as the state already flushed then Cogl will do
nothing. This will be useful if we log the clip stack in the journal
because then we will need to flush unrelated clip stack states for
each batch.
Instead of having a separate CoglHandle for CoglClipStack the code is
now expected to directly hold a pointer to the top entry on the
stack. The empty stack is then the NULL pointer. This saves an
allocation when we want to copy the stack because we can just take a
reference on a stack entry. The idea is that this will make it
possible to store the clip stack in the journal without any extra
allocations.
The _cogl_get_clip_stack and set functions now take a CoglClipStack
pointer instead of a handle so it would no longer make sense to make
them public. However I think the only reason we would have wanted that
in the first place would be to save the clip state between switching
FBOs and that is no longer necessary.
CoglVertexAttribute has an internal draw function that is used by the
CoglJournal to avoid the call to cogl_journal_flush which would
otherwise end up recursively flushing the journal forever. The
enable_gl_state function called by this was previously also calling
_cogl_flush_framebuffer_state. However the journal code tries to
handle this function specially by calling it with a flag to disable
flushing the modelview matrix. This is useful because the journal
handles flushing the modelview itself. Without this patch the journal
state ends up getting flushed twice. This isn't a particularly big
problem currently because the matrix stack has caching to recognise
when it would push the same state twice and bails out. However if we
later want to use the framebuffer flush flags to override a particular
state of the framebuffer (such as the clip state) then we need to make
sure the flush isn't called twice.
Unless the CoglBuffer is being used for texture data, it's
relatively unlikely that the data will contain an array of bytes. For
example if it's used as a vertex array then it's more likely to be
floats or some vertex struct. In that case it's much more convenient
if set_data and map use void* pointers so that we can avoid a cast.
The convenience constructors for the builtin vertex structs were
creating the primitive and then immediately destroying it and
returning the pointer. I think the intention was to unref the
attributes instead. This adds an internal wrapper around the
new_with_attributes_array constructor which unrefs the attributes
instead of the primitive. The convenience constructors now use that.
The GLES2 wrapper was referring to COGL_MATERIAL_PROGRAM_TYPE_GLSL but
this has since been renamed to COGL_PIPELINE_PROGRAM_TYPE_GLSL so the
GLES2 backend wouldn't compile.
The gles2 wrapper functions don't know about the CoglBuffer API so
they don't support attributes stored in a CoglVertexArray. Instead of
teaching the backend about buffers we are going to wait until we have
overhauled the GLES 2 backend. We are currently making progress
consolidating the GLES 2 backend with a new GLSL backend for
CoglMaterial. This will hugely simplify the GLES 2 support and share
code with the OpenGL backend. In the end it's hoped that this problem
will simply go away so it doesn't make much sense to solve it with the
current design.
This applies an API naming change that's been deliberated over for a
while now which is to rename CoglMaterial to CoglPipeline.
For now the new pipeline API is marked as experimental and public
headers continue to talk about materials not pipelines. The CoglMaterial
API is now maintained in terms of the cogl_pipeline API internally.
Currently this API is targeting Cogl 2.0 so we will have time to
integrate it properly with other upcoming Cogl 2.0 work.
The basic reasons for the rename are:
- That the term "material" implies to many people that they are
constrained to fragment processing; perhaps as some kind of high-level
texture abstraction.
- In Clutter they get exposed by ClutterTexture actors which may be
reinforcing this misconception.
- When comparing how other frameworks use the term material, a material
sometimes describes a multi-pass fragment processing technique which
isn't the case in Cogl.
- In code, "CoglPipeline" will hopefully be a much more self documenting
summary of what these objects represent; a full GPU pipeline
configuration including, for example, vertex processing, fragment
processing and blending.
- When considering the API documentation story, at some point we need a
document introducing developers to how the "GPU pipeline" works so it
should become intuitive that CoglPipeline maps back to that
description of the GPU pipeline.
- This is consistent in terminology and concept to OpenGL 4's new
pipeline object which is a container for program objects.
Note: The cogl-material.[ch] files have been renamed to
cogl-material-compat.[ch] because otherwise git doesn't seem to treat
the change as moving the old cogl-material.c -> cogl-pipeline.c and so
we lose all our git-blame history.
Instead of using the CoglHandle type for material variables this updates
the pango code to use CoglMaterial * instead. CoglHandle is the old
typename which is being phased out of the API.
The pango-display-list code was calling cogl_set_source in numerous
places and it didn't appear to be saving the user's source to restore
later. This could result in the user inadvertently drawing a primitive
with one of these internally managed materials instead of one that
they chose. To rectify this the code now uses cogl_{push,pop}_source
to save and restore the user's source.
This updates the implementation of cogl_polygon so it sits on the new
CoglVertexArray and CoglVertexAttribute apis. This lets us minimize the
number of different drawing paths we have to maintain in Cogl.
Since the sliced texture support for cogl_polygon has been broken for a
long time now and no one has complained this patch also greatly
simplifies the code by not doing any special material validation so
cogl_polygon will be restricted in the same way as
cogl_draw_vertex_attributes. (i.e. sliced textures not supported).
Instead of using raw OpenGL in the journal we now use the vertex
attributes API instead. This is part of an ongoing effort to reduce the
number of drawing paths we maintain in Cogl.
The functionality of cogl_vertex_buffer_indices_get_for_quads is now
provided by cogl_get_rectangle_indices so this reworks the former to now
work in terms of the latter so we don't have duplicated logic.
As part of an ongoing effort to reduce the number of draw paths we have
in Cogl this re-works CoglVertexBuffer to use the CoglVertexAttribute
and CoglPrimitive APIs instead of using raw GL.
This adds a way to mark that a primitive is in use so that modifications
will generate a warning. The plan is to use this mechanism when batching
primitives in the journal to warn users that mid-scene modifications of
primitives are not allowed.
This adds convenience primitive constructors named like:
cogl_primitive_new_p3 or
cogl_primitive_new_p3c4 or
cogl_primitive_new_p3t2c4
where the letters correspond to the interleaved vertex attribute layouts
such as CoglP3Vertex which is a struct with 3 float x,y,z members for
the [p]osition, or CoglP3T2C4Vertex which is a struct with 3 float x,y,z
members for the [p]osition, 2 float s,t members for the [t]exture
coordinates and 4 unsigned byte r,g,b,a members for the [c]olor.
The hope is that people will find these convenient enough to replace
cogl_polygon.
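A hedged usage sketch, assuming the constructors take a vertices mode,
a vertex count and the interleaved vertex data:
  static CoglPrimitive *
  make_triangle (void)
  {
    /* the constructor signature here is assumed from the description */
    static const CoglP3Vertex verts[] = {
      {  0.0f,  1.0f, 0.0f },
      { -1.0f, -1.0f, 0.0f },
      {  1.0f, -1.0f, 0.0f }
    };

    return cogl_primitive_new_p3 (COGL_VERTICES_MODE_TRIANGLES,
                                  3, /* n_vertices */
                                  verts);
  }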
A CoglPrimitive is a retainable object for drawing a single primitive,
such as a triangle strip, fan or list.
CoglPrimitives build on CoglVertexAttributes and CoglIndices which
themselves build on CoglVertexArrays and CoglIndexArrays respectively.
A CoglPrimitive encapsulates enough information such that it can be
retained in a queue (e.g. the Cogl Journal, or renderlists in the
future) and drawn at some later time.
A CoglVertexAttribute defines a single attribute contained in a
CoglVertexArray. I.e. a CoglVertexArray is simply a buffer of N bytes
intended for containing a collection of attributes (position, color,
normals etc) and a CoglVertexAttribute defines one such attribute by
specifying its start offset in the array, its type, the number of
components and the stride etc.
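A rough sketch of describing one such attribute, assuming a constructor
along the lines of cogl_vertex_attribute_new (array, name, stride,
offset, n_components, type); the constructor, enum and struct names
here are illustrative:
  typedef struct { float x, y, z; guint8 r, g, b, a; } MyVertex;

  static CoglVertexAttribute *
  describe_position (CoglVertexArray *array)
  {
    /* names assumed for illustration: the position is three float
     * components at offset 0 of each MyVertex, with a whole MyVertex
     * of stride between consecutive entries */
    return cogl_vertex_attribute_new (array,
                                      "cogl_position_in",
                                      sizeof (MyVertex),
                                      G_STRUCT_OFFSET (MyVertex, x),
                                      3,
                                      COGL_VERTEX_ATTRIBUTE_TYPE_FLOAT);
  }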
CoglIndices define a range of indices inside a CoglIndexArray. I.e. a
CoglIndexArray is simply a buffer of N bytes and you can then
instantiate multiple CoglIndices collections that define a sub-region of
a CoglIndexArray by specifying a start offset and an index data type.
This adds a new CoglVertexArray object which is a subclass of CoglBuffer
used to hold vertex attributes. A later commit will add a
CoglVertexAttribute API which will be used to describe the attributes
inside a CoglVertexArray.
A CoglIndexArray is a subclass of CoglBuffer and will be used to hold
vertex indices. A later commit will add a CoglIndices API which will
allow describing a range of indices inside a CoglIndexArray.
This adds an internal mechanism to mark that a buffer is in-use so that
a warning can be generated if the user attempts to modify the buffer.
The plan is for the journal to use this mechanism so that we can warn
users about mid-scene modifications of buffers.
We now make _cogl_buffer_bind return a base pointer for the bound buffer
which can be used with OpenGL. The pointer will be NULL for GPU based
buffers or may point to a malloc'd buffer. Since OpenGL expects an
offset instead of a pointer when dealing with buffer objects this means
we can handle fallback malloc buffers and GPU buffers in a consistent
way.
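An illustrative sketch of why this helps; the internal function names
and the bind-target enum value here are assumptions based on the
description above:
  static void
  setup_position_pointer (CoglBuffer *buffer,
                          GLuint      attrib_location,
                          GLsizei     stride,
                          gsize       offset)
  {
    /* assumed internal API */
    guint8 *base =
      _cogl_buffer_bind (buffer, COGL_BUFFER_BIND_TARGET_VERTEX_ARRAY);

    /* for a GPU buffer object base is NULL, so base + offset is simply
     * the byte offset OpenGL expects; for a malloc'd fallback buffer it
     * is a valid client-side pointer instead */
    glVertexAttribPointer (attrib_location, 3, GL_FLOAT, GL_FALSE,
                           stride, base + offset);

    _cogl_buffer_unbind (buffer);
  }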
This allows _cogl_material_flush_gl_state to bail out faster if
repeatedly asked to flush the same material and we can see the material
hasn't changed.
Since we can rely on the material age incrementing when any material
property changes or any associated layer property changes, we can
track the age of the material after flushing so it can be compared with
the age of the material if it is subsequently re-flushed. If the age is
the same we only have to re-assert the texture object state.
MaterialNodes are used for the sparse graph of material state and layer
state. In the case of materials there is the idea of weak materials that
don't take a reference on their parent and in that case we need to be
careful not to unref our parent during
_cogl_material_node_unparent_real. This adds a has_parent_reference
member to the CoglMaterialNode struct so we now know when to skip the
unref.
If there is private data associated with a CoglObject then there may be
a user_data_array that needs to be freed. The code was mistakenly
freeing the array inside the loop that was actually iterating over the
user data array notifying the objects destruction instead of waiting
until all the data entries had been destroyed.
Once an actor had _clutter_stage_queue_redraw_entry_invalidate()
called on it once, then priv->queue_redraw_entry would point to
an entry with entry->actor NULL. _clutter_stage_queue_actor_redraw()
doesn't handle this case and no further redraws would be queued.
To fix this, NULL out priv->queue_redraw_entry and then make sure
we free the invalidated entry in
_clutter_stage_maybe_finish_queue_redraws() just as we do for
still valid entries.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2389
Previously when trying to destroy all of the stages in the backend
dispose function it would poke directly in the ClutterStageManager
struct to get the list. In 8613013ab0 the definition of
ClutterStageManager moved to a different header which isn't included
by the Win32 backend so it wouldn't compile. In that commit the X11
backend was changed to unref the stage manager instead of poking in
the internals so we should do the same for the win32 backend.
One of the ideas behind _internal() functions is to be able to have a
version of the original one without checks (among other things). As
these functions are either static or private to the library, we control
the arguments given to them, and thus there is no need to check them
again here.
Telling the user about files not found when loading a ClutterScript with
ClutterTextures in it is very useful and can save a few minutes (or
hours) of frustration because it "does not work".
This merges the two implementations of CoglProgram for the GLES2 and
GL backends into one. The implementation is more like the GLES2
version which would track the uniform values and delay sending them to
GL. CoglProgram is now effectively just a GList of CoglShaders along
with an array of stored uniform values. CoglProgram never actually
creates a GL program, instead this is left up to the GLSL material
backend. This is necessary on GLES2 where we may need to relink the
user's program with different generated shaders depending on the other
emulated fixed function state. It will also be necessary in the future
GLSL backends for regular OpenGL. The GLSL and ARBfp material backends
are now the ones that create and link the GL program from the list of
shaders. The linked program is attached to the private material state
so that it can be reused if the CoglProgram is used again with the
same material. This does mean the program will get relinked if the
shader is used with multiple materials. This will be particularly bad
if the legacy cogl_program_use function is used because that
effectively always makes one-shot materials. This problem will
hopefully be alleviated if we make a hash table with a cache of
generated programs. The cogl program would then need to become part of
the hash lookup.
Each CoglProgram now has an age counter which is incremented every
time a shader is added. This is used by the material backends to
detect when we need to create a new GL program for the user program.
The internal _cogl_use_program function now takes a GL program handle
rather than a CoglProgram. It no longer needs any special differences
for GLES2. The GLES2 wrapper function now also uses this function to
bind its generated shaders.
The ARBfp shaders no longer store a copy of the program source but
instead just directly create a program object when cogl_shader_source
is called. This avoids having to reupload the source if the same
shader is used in multiple materials.
There are currently a few gross hacks to get the GLES2 backend to work
with this. The problem is that the GLSL material backend is now
generating a complete GL program but the GLES2 wrapper still needs to
add its fixed function emulation shaders if the program doesn't
provide either a vertex or fragment shader. There is a new function in
the GLES2 wrapper called _cogl_gles2_use_program which replaces the
previous cogl_program_use implementation. It extracts the GL shaders
from the GL program object and creates a new GL program containing all
of the shaders plus its fixed function emulation. This new program is
returned to the GLSL material backend so that it can still flush the
custom uniforms using it. The user_program is attached to the GLES2
settings struct as before but it's stored as a GL program handle
rather than a CoglProgram pointer. This hack will go away once the
GLSL material backend replaces the GLES2 wrapper by generating the
code itself.
Under Mesa this currently generates some GL errors when glClear is
called in test-cogl-shader-glsl. I think this is due to a bug in Mesa
however. When the user program on the material is changed the GLSL
backend gets notified and deletes the GL program that it linked from
the user shaders. The program will still be bound in GL
however. Leaving a deleted shader bound exposes a bug in Mesa's
glClear implementation. More details are here:
https://bugs.freedesktop.org/show_bug.cgi?id=31194
Previously cogl_set_source_color and cogl_set_source_texture modified
a single global material. If an application then mixes using
cogl_set_source_color and texture then the material will constantly
need a new ARBfp program because the numbers of layers alternates
between 0 and 1. This patch just adds a second global material that is
only used for cogl_set_source_texture. I think it would still end up
flushing the journal if cogl_set_source_texture is used with multiple
different textures but at least it should avoid a recompile unless the
texture target also changes. It might be nice to somehow attach a
material to the CoglTexture for use with cogl_set_source_texture but
it would be difficult to implement this without creating a circular
reference.
This moves the CoglIndicesType and CoglVerticesMode typedefs from
cogl-vertex-buffer.h to cogl-types.h so they can be shared with the
anticipated cogl vertex attribute API.
This renames the BufferBindTarget + BufferUsageHint enums to match the
anticipated new APIs for "index arrays" and "vertex arrays" as opposed
to using the terms "vertices" or "indices".
Previously we would silently bail out if the given offset + data size
would overflow the buffer size. Now we use g_return_val_if_fail so we
get a warning if we hit this case.
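The guard is roughly of this shape (the buffer member name is
simplified for illustration):
  /* at the top of cogl_buffer_set_data(): warn, and return FALSE,
   * instead of silently ignoring a write that would overflow the
   * buffer's store */
  g_return_val_if_fail (offset + size <= buffer->size, FALSE);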
This adds a store_created bit field to CoglBuffer so we know if the
underlying buffer has been allocated yet. Previously the code was trying
to do something really wrong by accidentally using the
COGL_PIXEL_ARRAY_FLAG_IS_SET macro (note "PIXEL_ARRAY"), and what was
even odder was the declaration of a CoglPixelArray *pixel_array in
cogl-buffer.c which the buffer was being cast to before using the
macro. Probably this was the fall-out of some previous code
re-factoring.
All the macros get used for is to |= (a new flag bit), &= ~(a flag
bit), or use the & operator to test if a flag bit is set. I haven't
found the code more readable with these macros, but several times now
I've felt the need to double-check whether these macros do anything
else under the hood, or I've forgotten what flags are available and
had to go to the macro definition to see what the full enum names are
for the flags (the macros use symbol concatenation) so I could search
for the definition of all the flags. It turns out they are defined
next to the macro so you don't have to search far, but without the
macro that wouldn't have been necessary.
The more common use of the _IS_SET macro is actually more concise when
expanded, and imho, since it doesn't hide anything in a separate
header file, the code is more readable without the macro.
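As an illustration of the argument (the flag and macro names here are
only indicative):
  /* with the macros: */
  if (!COGL_BUFFER_FLAG_IS_SET (buffer, STORE_CREATED))
    COGL_BUFFER_SET_FLAG (buffer, STORE_CREATED);

  /* expanded, which arguably reads just as well and hides nothing: */
  if (!(buffer->flags & COGL_BUFFER_FLAG_STORE_CREATED))
    buffer->flags |= COGL_BUFFER_FLAG_STORE_CREATED;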
This is a counter part for _cogl_material_layer_get_texture which takes
a layer index instead of a direct CoglMaterialLayer pointer. The aim is
to phase out code that directly iterates the internal layer pointers of
a material since the layer pointers can change if any property of any
layer is changed making direct layer pointers very fragile.
This adds internal _cogl_material_get_layer_filters and
_cogl_material_get_layer_{min,mag}_filter functions which can be used to
query the filters associated with a layer using a layer_index, as
opposed to a layer pointer. Accessing layer pointers is considered
deprecated so we need to provide layer_index based replacements.
When we come to submitting the users given attributes we sort them into
different types of buffers. Previously we had three types; strided,
unstrided and multi-pack. Really though unstrided was just a limited
form of multi-pack buffer and didn't imply any kind of special
optimization, so this patch consolidates some code by reducing to just
two types; strided and multi-pack.
This is a counter part for _cogl_material_layer_pre_paint which takes a
layer index instead of a direct CoglMaterialLayer pointer. The aim is to
phase out code that directly iterates the internal layer pointers of a
material since the layer pointers can change if any property of any
layer is changed making direct layer pointers very fragile.
This exposes the idea of a stack of source materials instead of just
having a single current material. This allows the writing of orthogonal
code that can change the current source material and restore it to its
previous state. It also allows the implementation of new composite
primitives that may want to validate the current source material and
possibly make override changes in a derived material.
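A short sketch of the intended pattern:
  static void
  draw_with_override (CoglHandle override_material)
  {
    /* temporarily replace the caller's source material... */
    cogl_push_source (override_material);

    cogl_rectangle (0, 0, 100, 100);

    /* ...and restore whatever source was in effect before */
    cogl_pop_source ();
  }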
* private-cleanup:
Add copyright notices
Clean up clutter-private.h/6
Clean up clutter-private.h/5
Clean up clutter-private.h/4
Clean up clutter-private.h/3
Clean up clutter-private.h/2
Clean up clutter-private.h/1
Correct the argument order and replace all occurrences of
clutter_state_change() with the appropriate clutter_state_set_state() or
clutter_state_warp_to_state().
If you warp to a state, it should be immediately set. Check if the
animation is in progress when warping to a state and don't short-circuit
in the already-set check if we're not animating.
Add special behaviour when you set the key of the current target state:
- If the state is transitioning, add/modify the interval so that the new
key transitions from the current time (taking into account pre-delay) to
its target final property
- If the state is set but has already finished animating/was warped to,
set the property immediately
If ClutterState is in the middle of a transition and you remove all the
keys from the target state, the target state will be destroyed without
stopping the animation/unsetting the target state. This caused an invalid
memory access.
Allow setting a %NULL state. This has the effect of unsetting the current
state and stopping all animation. This allows you to, for example, start
a state transition, set the state to NULL, alter the state transition
and then resume it again, by just setting it.
* wip/path-constraint:
docs: Add PathConstraint
tests: Add a PathConstraint interactive test
Add ClutterPathConstraint
actor-box: Add setters for origin and size
ClutterPathConstraint is a simple Constraint implementation that
modifies the allocation of the Actor to which it has been applied using
a progress value and a ClutterPath.
There was previously a flag that got set when this function was
called but nothing checked it, so the function effectively did
nothing. Also, the flag was a member of the backend struct but this
can't be used because the function should be called before
clutter_init, so the backend is not ready yet. This patch makes the
event disabling work more like the X11 backend and sets a global
variable instead.
This function handles a single Windows message. The idea is that it
could be used by clutter-gtk to forward on events from a
GdkEventFilter. The function replaces the old message_translate()
function. That function didn't translate the event anymore anyway and
instead it could generate multiple events so
clutter_win32_handle_event seems like a more appropriate name. The
function returns TRUE or FALSE depending on whether the event was
completely handled instead of setting call_window_proc.
When handling an allocation on the stage, Clutter uses the opportunity
to inform Cogl of the new size of the framebuffer so that it can
handle the viewport correctly. It queries the size of the window
implementation using a backend virtual function. However it was doing
this before letting the backend handle the allocation so on Win32 it
would end up using the previous framebuffer size. This wasn't
affecting the X11 backend because in that case the resizes are
asynchronous so setting the stage size causes one allocation which
ends up sending a window size request. Eventually a ConfigureNotify is
received which causes the size of the stage to be set again and
another allocation is fired meaning the framebuffer size will be set
again this time with the correct size. In Win32 the resizes are
synchronous so we don't have this second allocation.
When compiling for non-glx platforms the winsys feature data array
ends up empty. Empty arrays cause problems for MSVC so this patch adds
a stub entry so that the array always has at least one entry.
Based on a patch by Ole André Vadla Ravnås
There was an array whose length was defined by a static const int
variable. GCC seems to consider this a variable-length array so it
will cause warnings now that -Wvla is enabled. We might as well make
this constant a #define instead to avoid the warning.
Instead of directly manipulating GL textures itself,
CoglTexture2DSliced now works in terms of CoglHandles. It creates the
texture slices using cogl_texture_new_with_size which should always
end up creating a CoglTexture2D because the size should fit. This
allows us to avoid replicating some code such as the first pixel
mipmap tracking and it better enforces the separation that each
texture backend is the only place that contains code dealing with each
texture target.
This adds two new internal functions to create a foreign texture for
the texture 2d and rectangle backends. cogl_texture_new_from_foreign
will now use one of these backends directly if there is no waste
instead of always using the sliced texture backend.
Move the private Backend API to a separate header.
This also allows us to finally move the class vtable and instance
structure to a separate file and plug the visibility hole that left
the Backend class bare for everyone to poke into.
Since we allow compiling Clutter without the XComposite extension
available, we need to protect the calls to the XComposite API with
the guards provided by the configure script.
Currently, the memory management in ClutterScript is overly complicated.
The basic design tenet should be:
- ClutterScript owns a reference on every object it creates
This allows the Script instance to reliably handle the lifetime of the
instances from creation to disposal.
In case of unmerge, the Script instance should destroy any Actor
instance, except for the Stage, and release the reference it owns. The
Stage is special because it's really owned by Clutter itself, and it
should be destroyed explicitly.
When disposing the Script itself, it should just release the reference;
any parented actor, or any InitiallyUnowned instance, will then be
managed by the parent object, as they should, while every GObject
instance will go away, as documented.
This commit is based on a patch by:
Henrik Hedberg <hhedberg@innologies.fi>
http://bugzilla.clutter-project.org/show_bug.cgi?id=2316
By using a new signal, ::create-surface (width, height), it should be
possible for third party code and sub-classes to override the default
surface creation code in CairoSurface.
This commit takes a bit of the patch from:
http://bugzilla.clutter-project.org/show_bug.cgi?id=1878
which cleans up CairoTexture; the idea, borrowed from that bug, is
that the CairoTexture actor checks whether the surface it has is an
image one, and in that case it uses a Cogl texture as the backing
store. In
case the surface is not an image one we assume that the surface itself
has some way of updating the GL state and flush the surface.
Always use pageflipping, but avoid full repaint by copying back dirty
regions from front to back. Additionally, we delay copying back until
we're ready to paint the new frame, so we can avoid copying areas that
will be repainted anyway.
This is the least amount of copying per frame we can get away with at all
and at the same time we don't have to worry about stalling the GPU on
synchronized blits since we always pageflip.
When we don't use a window system drawable, we can't query the color
masks at context initialization time. Do it lazily so we're sure to have
a current context with a valid framebuffer.
We need to make sure that redraws queued for actors on a stage are for
actors actually in the stage. So in clutter_actor_unparent() descend
through the children and remove redraws. Just removing the actor itself
isn't good enough since an entire hierarchy can be removed from the
stage without breaking it up into individual actors.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2359
This is based on an original patch from Owen Taylor who debugged the
root cause of this bug; thanks.
In the case that an unclipped redraw of an actor is queued after a
clipped one, we should update any existing ClutterStageQueueRedrawEntry
so entry->has_clip = FALSE and free the previous clip.
Instead of using the allocation-changed signal, use the queue-relayout
signal on the source to queue a relayout on the actor to which the
BindConstraint has been attached.
The ::allocation-changed signal is not always enough, given that a
BindConstraint can use the position as well as the size of an actor to
drive the allocation of another; in this regard, it's very similar
to a ClutterClone, which requires a notification on every change, even
potential ones, and not just real ones, given the short-circuiting done
inside ClutterActor.
Instead of delegating the check for the ActorMeta:enabled property to
the sub-classes of ClutterActorMeta, ClutterActor can do the check prior
to using the ClutterActorMeta instances.
The interpolate() method does what it says on the tin: it interpolates
between two colors using the given factor.
ClutterColor uses it to register a progress function for Intervals.
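A minimal usage sketch, assuming the argument order is (initial, final,
progress, result):
  static void
  example_interpolate (void)
  {
    ClutterColor black = { 0x00, 0x00, 0x00, 0xff };
    ClutterColor white = { 0xff, 0xff, 0xff, 0xff };
    ClutterColor mid;

    /* the half-way point between black and white: an opaque 50% grey */
    clutter_color_interpolate (&black, &white, 0.5, &mid);
  }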
When picking a size for the last slice in a texture, Cogl would always
pick the biggest power of two size that doesn't create too much
waste and is less than or equal to the previous slice size. However
this can end up creating a texture that is bigger than needed if there
is a smaller power of two.
For example, if the maximum waste is 127 (the current default) and we
try to create a texture that is 257 pixels wide it will decide that
the next power of two (512) is too much waste (255) so it will create
the first slice at 256 pixels wide. Then we only have 1 pixel left to
allocate but Cogl would pick the next smaller size that has a small
enough waste which is 128. But of course 1 is already a power of two
so that's redundantly oversized by 127.
This patch fixes it so that whenever it finds a size that would be big
enough, instead of using exactly that size it picks the next power of
two up from the size we still need to fill.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2355
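In other words, for the 257-pixel example above the remaining 1 pixel
now gets a 1-pixel slice rather than a 128-pixel one; the sizing boils
down to a helper like this (illustrative, not the exact Cogl code):
  /* smallest power of two that covers the size we still need to fill */
  static int
  next_pot (int size)
  {
    int pot = 1;

    while (pot < size)
      pot <<= 1;

    return pot;
  }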
A Clone:source property might be NULL, and we should not penalize
performance when we can just bail out early, because that would kind of
defeat the point.
Whenever the allocation is changed on a child of a ClutterTableLayout
and animations are not in effect then it would store a copy of the
allocation in the child meta data. However it was not freeing the old
copy of the allocation so it would end up with a small leak.
Instead of just changing it to free the old value this patch makes it
store the allocation inline in the meta data struct because it seems
that the size of an actor box is already quite small compared to the
size of the meta data struct so it is probably not worth having a
separate allocation for it. To detect the case when there has not yet
been an allocation a separate boolean is used instead of storing NULL.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2358
All the nifty things you discover when translating strings not exposed
to anyone. First, the clutter-wide record for the number of typos in
one string. Second, ClutterTexture happened to have the only property
blurbs ending with a '.'; remove them.
The "position" property of ClutterText is really the position of the
cursor. Rename the nick accordingly so as not to confuse it with the
position
of the actor itself and be consistent with all the other cursor-related
properties.
The descriptions for the 'y-align' and 'x-align' properties talk about a
layer and a layer manager. It seems that these properties are the
alignment factors relative to the BinLayout, so document them
accordingly.
There are ordering issues in the pixmap destruction with current and
past X11 server, Mesa and dri2. Under some circumstances, an X pixmap
might be destroyed with the GLX pixmap still referencing it, and thus
the X server will decide to destroy the GLX pixmap as well; then, when
Cogl tries to destroy the GLX pixmap, it gets BadDrawable errors.
Clutter 1.2 used to trap + sync all calls to glXDestroyPixmap(), but
then we assumed that the ordering issue had been solved. So, we're back
to square 1.
I left a Big Fat Comment™ right above the glXDestroyPixmap() call
referencing the bug and the reasoning behind the trap, so that we don't
go and remove it in the future without checking that the issue has been
in fact solved.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2324
After commit 8dd8fbdb some errors appear if you try to work directly
against cally:
* cally.pc.in removed some elements. After installing clutter, running
pkg-config --cflags cally-1.0
fails due to a missing winsys
* cally headers were moved from clutter-1.0/cally to
clutter-1.0/clutter/cally. Applications using it (yes I know,
nobody is officially using it) would need to:
* Change their includes.
* Add a direct dependency on cally, in order to use the cally.pc
file with the correct include directory.
Note: Take into account that accessibility support still works (ie:
clutter_get_accessibility_enabled). This bug only prevents
applications from working directly against cally (ie: creating a
CallyActor subclass)
http://bugzilla.clutter-project.org/show_bug.cgi?id=2353
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Landing the paint-box branch accidentally added two slots to the
ClutterEffectClass vtable, plus the get_paint_volume() function
pointer. This is an ABI break from 1.4.
Like we do for the Quartz backend, we should turn on the -xobjective-c
compiler flag for the Fruity backend.
This does not mean that the backend actually works.
The marshaller was defined as OBJECT,OBJECT,PARAM but the signal
definition used only two arguments. Since the signal never worked
and we never got any report about it, nobody could possibly be
using the ::child-notify signal.
Since we added child properties to the Container interface we made a
guarantee that the ::child-notify signal would be emitted whenever a
property was set using clutter_container_child_set*().
We were lying.
The child_notify virtual function was not implemented, and the signal
was never emitted.
We also used a G_LIKELY() macro while checking for non-NULL on a
function pointer that was by default set to NULL, thus making the
setting of child properties far less efficient than needed.
The clutter stage has a list of entries of actors waiting to be redrawn.
Each entry has a "clip" ClutterPaintVolume member which represents
how much of the actor needs to get redrawn. It's possible for there to
be no clip associated with the entry; in this case the clip member is
invalid and the has_clip member should be set to false.
This commit fixes a bug where the has_clip member was not explicitly
initialized to false for new entries, and was not explicitly reset to
false when the clip associated with the entry is freed.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2350
Signed-off-by: Robert Bragg <robert@linux.intel.com>
In all the changes made recently to how we handle redraws and adding
support for paint-volumes we stopped looking at explicit clip regions
passed to _clutter_actor_queue_redraw_with_clip.
In _clutter_actor_finish_queue_redraw we had started always trying to
clip the redraw to the paint-volume of the actor, but forgot to consider
that the user may have already determined the clip region for us!
Now we first check if the given clip != NULL and if so we don't need to
calculate the paint-volume of the actor.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2349
One of the later changes made on the paint volume branch before merging
with master was to make paint volumes opt in only since we couldn't make
any safe assumptions about how custom actors may constrain their
painting. We added very conservative implementations for the existing
Clutter actors - including for ClutterTexture which
ClutterX11TexturePixmap is a sub-class of - but we were conservative to
the extent of explicitly checking the GType of the actor so we would
avoid making any assumptions about sub-classes. The upshot was that we
neglected to implement the get_paint_volume vfunc for
ClutterX11TexturePixmap.
This patch provides an implementation that simply reports the actor's
allocation as its paint volume. Also unlike for other core actors it
doesn't explicitly check the GType so we are assuming that all existing
sub-classes of ClutterX11TexturePixmap constrain their drawing to the
actor's transformed allocation. If anyone does want to draw outside the
allocation in future sub-classes, then they should also provide an
updated get_paint_volume implementation.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2349
When using the debug function _cogl_debug_dump_materials_dot_file to
write a dot file representing the sparse graph of material state we now
only show a link between materials and layers when the material directly
owns that layer reference (i.e. just those referenced in
material->layer_differences). This makes it possible to see when
ancestors of a material are being deferred to for layer state.
For example, when looking at the graph, if you see that a material has
an n_layers of 3 but there is only a link to 2 layers, then you know
you need to look at its ancestors to find the last layer.
In 4ee05f8e21 the namespace for the clutter keysym macros was
changed to CLUTTER_KEY_* but the win32 events backend was still
referring to the old names.
GObject ≥ 2.26.0 added a nice convenience call for installing properties
from an array of GParamSpec. Since we're already storing all GParamSpec
in an array in order to use them with g_object_notify_by_pspec(), this
turns out nicely for us.
Since we do not depend on GLib 2.26 (yet), we need to provide a simple
private wrapper that implements the fall back to the default
g_object_class_install_property() call.
ClutterDragAction has been converted as a proof of concept.
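A plausible shape for such a wrapper (the wrapper name is illustrative;
the GLib call it falls back from is the real one):
  static inline void
  clutter_gobject_class_install_properties (GObjectClass *klass,
                                            guint         n_pspecs,
                                            GParamSpec  **pspecs)
  {
  #if GLIB_CHECK_VERSION (2, 26, 0)
    g_object_class_install_properties (klass, n_pspecs, pspecs);
  #else
    guint i;

    /* property id 0 is reserved by GObject, so pspecs[0] is left
     * unused and installation starts from 1 */
    for (i = 1; i < n_pspecs; i++)
      g_object_class_install_property (klass, i, pspecs[i]);
  #endif
  }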
During destruction, the StageWindow implementation associated to a Stage
might be NULL. We need to add more checks for a) the IN_DESTRUCTION flag
being set and b) the StageWindow pointer being NULL. Otherwise, we will
get warnings during the destruction of the Stage.
Both of the cogl_texture_2d_sliced_new functions called the
slices_create function which creates the underlying GL
textures. However this was also called by init_base so the textures
would end up being created twice. This would make it leak the GL
textures and the arrays which point to them.
The internal copy of JSON-GLib was meant to go away right after the 1.0
release, given that JSON-GLib was still young and relatively unknown.
Nowadays, many projects started depending on this little library, and
distributions ship it and keep it up to date.
Keeping a copy of JSON-GLib means keeping it up to date; unfortunately,
this would also imply updating the code not just for the API but for the
internal implementations.
Starting with the 1.2 release, Clutter preferentially depended on the
system copy; with the 1.4 release we stopped falling back automatically.
The 1.6 cycle finally removes the internal copy and requires a copy of
JSON-GLib installed on the target system in order to compile Clutter.
Since re-working how redraws are queued it is no longer necessary to
dirty the pick buffer in _clutter_actor_real_queue_redraw since this
should now reliably be handled in _clutter_stage_queue_actor_redraw.
This adds two internal functions relating to explicit traversal of the
scenegraph:
_clutter_actor_foreach_child
_clutter_actor_traverse
_clutter_actor_foreach_child just iterates the immediate children of an
actor, and with a new ClutterForeachCallback type it allows the
callbacks to break iteration early.
_clutter_actor_traverse traverses the given actor and all of its
descendants. Again, traversal can be stopped early if a callback returns
FALSE.
The first intended use for _clutter_actor_traverse is to maintain a
cache pointer to the stage for all actors. In this case we will need to
update the pointer for all descendants of an actor when an actor is
reparented in any way.
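A sketch of how a traversal might look, assuming a callback signature
of (actor, user_data) where returning FALSE stops the traversal, and a
traverse entry point of the form _clutter_actor_traverse (actor,
callback, user_data):
  static gboolean
  count_descendants_cb (ClutterActor *actor,
                        gpointer      user_data)
  {
    int *count = user_data;

    (*count)++;

    return TRUE; /* keep traversing */
  }

  static int
  count_descendants (ClutterActor *actor)
  {
    int count = 0;

    /* assumed signature; see the note above */
    _clutter_actor_traverse (actor, count_descendants_cb, &count);

    return count;
  }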
This adds a private getter to query the number of children an actor has.
One use planned for this API is to avoid calling get_paint_volume on
such actors. (It's not clear what the best semantics for
get_paint_volume are for actors with children, so we are considering
leaving the semantics undefined for the initial clutter 1.4 release)
We now explicitly track the list of children each actor has in a private
GList. This gives us a reliable way to know how many children an actor
has - even for composite actors that don't implement the container
interface. This also will allow us to directly traverse the scenegraph
in a more generalized fashion. Previously the scenegraph was
more-or-less represented implicitly according to the implementation of
paint methods.
When using the CLUTTER_PAINT=paint-volumes debug option we try and show
when a paint volume couldn't be determined by drawing a blue outline of
the allocation instead. There was a typo, though, and we were drawing
an outline the size of the stage rather than the size of the given
actor.
This fixes that and removes a FIXME comment relating to the blue outline
that is now implemented.
To allow Clutter to queue clipped redraws when a clone actor changes we
need to be able to report a paint volume for clone actors. This patch
makes ClutterClones query the paint volume of their source actor and
masquerade it as their own volume.
This reverts commit ca44c6a7d8abe9f2c548bee817559ea8adaa7a80.
In reality there are probably lots of actors that depend on the exact
semantics as they are documented so this change isn't really acceptable.
For example when the font changes in ClutterText we only queue a
relayout, and since it's possible that the font will have the same size
and the actor won't get a new allocation it wouldn't otherwise queue a
redraw.
Since queue_redraw requests now get deferred until just before a paint
run it is actually no longer a problem to queue the redraw here.
Instead of immediately, recursively emitting the "queue-redraw" signal
when clutter_actor_queue_redraw is called we now defer this process
until all stage updates are complete. This allows us to aggregate
repeated _queue_redraw requests for the same actor avoiding redundant
paint volume transformations. By deferring we also increase the
likelihood that the actor will have a valid paint volume since it will
have an up to date allocation; this in turn means we will more often be
able to automatically queue clipped redraws which can have a big impact
on performance.
Here's an outline of the actor queue redraw mechanism:
The process starts in clutter_actor_queue_redraw or
_clutter_actor_queue_redraw_with_clip.
These functions queue an entry in a list associated with the stage which
is a list of actors that queued a redraw while updating the timelines,
performing layouting and processing other mainloop sources before the
next paint starts.
We aim to minimize the processing done at this point because there is a
good chance other events will happen while updating the scenegraph that
would invalidate any expensive work we might otherwise try to do here.
For example, we don't try to resolve the screen-space bounding box of
an actor at this stage (which is used to minimize how much of the
screen we redraw), because it's possible something else will happen
which will force a full redraw anyway.
When all updates are complete and we come to paint the stage (see
_clutter_stage_do_update) then we iterate this list and actually emit
the "queue-redraw" signals for each of the listed actors which will
bubble up to the stage for each actor and at that point we will
transform the actors paint volume into screen coordinates to determine
the clip region for what needs to be redrawn in the next paint.
Note: actors are allowed to queue a redraw in response to a
queue-redraw signal so we repeat the processing of the list until it
remains empty. An example of when this happens is for Clone actors or
clutter_texture_new_from_actor actors which need to queue a redraw if
their source queues a redraw.
For Clone actors we will need a way to report the volume of the source
actor as the volume of the clone actor. To make this work though we need
to be able to replace the reference to the source actor with a reference
to the clone actor instead. This adds a private
_clutter_paint_volume_set_reference_actor function to do that.
This adds a way to initialize a paint volume from another source paint
volume. This lets us for instance pass the contents of one paint volume
back through the out param of a get_paint_volume implementation.
This makes clutter_actor_queue_redraw simply bail out early if the actor
isn't a descendant of a ClutterStage since the request isn't meaningful
and it avoids a crash when trying to queue a clipped redraw against the
stage to clear the actors old location.
This splits out all the clutter_paint_volume code from clutter-actor.c
into clutter-paint-volume.c. Since clutter-actor.c and
clutter-paint-volume.c both needed the functionality of
_fully_transform_vertices, this function has now been moved to
clutter-utils.c as _clutter_util_fully_transform_vertices.
There are too many examples where the default assumption that an actor
paints inside its allocation isn't true, so we now return FALSE in the
base implementation instead. This means that by default we are saying
"we don't know the paint volume of the actor", so developers need to
implement the get_paint_volume virtual to take advantage of culling and
clipped redraws with their actors.
This patch provides very conservative get_paint_volume implementations
for ClutterTexture, ClutterCairoTexture, ClutterRectangle and
ClutterText which all explicitly check the actor's object type to avoid
making any assumptions about subclasses.
We were always explicitly checking priv->needs_allocation in
_clutter_actor_queue_redraw_with_clip, but we only need to do that if
the CLUTTER_REDRAW_CLIPPED_TO_ALLOCATION flag is used.
This initializes priv->last_paint_box with a degenerate box, so a newly
allocated actor added to the scenegraph and made visible only needs to
trigger a redraw of its initial position; without a valid
last_paint_box we would instead have to trigger a full stage redraw.
To make comparing the performance with culling/clipped redraws
enabled/disabled fairer we now avoid querying the paint box when they
are disabled, so that results should reflect how the cost of
transforming paint volumes into screen space etc gets offset against the
benefit of culling.
In clutter_stage_allocate at the end we were always querying the latest
allocation set and using the geometry to assert the viewport and then
kicking a full redraw. These only need to be done when the allocation
really changes, so we now read the previous allocation at the start of
the function and compare at the end. This was stopping clipped redraws
from being used in a lot of cases.
Because we've seen a number of drivers that can struggle to get going
and may produce a bad first frame, we now force the first two frames to
be full redraws. This became a serious issue after we started using
clipped redraws more aggressively because we assumed that after the
first frame the full framebuffer was valid and we only redraw the
content that changes. With buggy drivers though, applications would be
left with junk covering a lot of the stage until some event triggered a
full redraw.
This is a workaround for a race condition when resizing windows while
there are in-flight glXCopySubBuffer blits happening.
The problem stems from the fact that rectangles for the blits are
described relative to the bottom left of the window and because we can't
guarantee control over the X window gravity used when resizing so the
gravity is typically NorthWest not SouthWest.
This means if you grow a window vertically the server will make sure to
place the old contents of the window at the top-left/north-west of your
new larger window, but that may happen asynchronously with GLX preparing to
do a blit specified relative to the bottom-left/south-west of the window
(based on the old smaller window geometry).
When the GLX issued blit finally happens relative to the new bottom of
your window, the destination will have shifted relative to the top-left
where all the pixels you care about are so it will result in a nasty
artefact making resizing look very ugly!
We can't currently fix this completely, in-part because the window
manager tends to trample any gravity we might set. This workaround
instead simply disables blits for a while if we are notified of any
resizes happening so if the user is resizing a window via the window
manager then they may see an artefact for one frame but then we will
fall back to redrawing the full stage until the cooling-off period is
over.
Instead of triggering a full stage redraw for Expose events we use the
geometry of the exposed region given in the event to queue a clipped
redraw of the stage.
Clutter has now taken responsibility for managing its viewport,
projection matrix and view transform as part of ClutterStage so
_cogl_setup_viewport is no longer used by anything, and since it's quite
an obscure API anyway we've taken the opportunity to remove the
function.
Since clutter_actor_queue_redraw now automatically clips redraws
according to the paint volume of the actor we have to be careful to
ensure we really force a full redraw when the stage is allocated a new
size or the stage viewport changes.
We have bent the originally documented semantics a bit so now where we
say "Queueing a new layout automatically queues a redraw as well" it
might be clearer to say "Queuing a new layout implicitly queues a redraw
as well if anything in the layout changes".
This should be close enough to the original semantics to not cause any
problems.
Without this change we fail to take advantage of clipped redraws
in a lot of cases because queuing a redraw with priv->needs_allocation
== TRUE will automatically be promoted to a full stage redraw since it's
not possible to determine a valid paint-volume.
Also queuing a redraw here will end up registering a redundant clipped
redraw for the current location, doing quite a lot of redundant
transforms, and then later when re-allocated during layouting another
queue redraw would happen with the correct paint-volume.
This uses actor paint volumes to perform culling during
clutter_actor_paint.
When performing a clipped redraw (because only a few localized actors
changed), as we traverse the scenegraph painting the actors we can now
ignore actors that don't intersect the clip region. Early testing shows
this can have a big performance benefit; e.g. a 100% fps improvement for
test-state with culling enabled, and we hope there are even more
compelling examples than that in the real world.
Most Clutter applications are 2Dish interfaces and have quite a lot of
actors that get continuously painted when anything is animated. The
dynamic actors are often localized to an area of user focus though so
with culling we can completely avoid painting any of the static actors
outside the current clip region.
Obviously the cost of culling has to be offset against the cost of
painting to determine if it's a win, but our (limited) testing suggests
it should be a win for most applications.
Note: we hope we will be able to also bring another performance bump
from culling with another iteration - hopefully in the 1.6 cycle - to
avoid doing the culling in screen space and instead do it in the stage's
model space. This will hopefully let us minimize the cost of
transforming the actor volumes for culling.
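Conceptually, the per-actor check during a clipped redraw looks
something like this sketch (the clip is assumed to be a screen-space
ClutterGeometry; the names are illustrative):

  static gboolean
  actor_is_culled (ClutterActor          *actor,
                   const ClutterGeometry *clip)
  {
    ClutterActorBox box;

    /* If we can't determine a screen-space paint box we must assume
     * the actor is visible and paint it. */
    if (!clutter_actor_get_paint_box (actor, &box))
      return FALSE;

    /* Cull the actor when its paint box doesn't intersect the clip
     * region of this redraw. */
    return box.x2 < clip->x ||
           box.x1 > clip->x + clip->width ||
           box.y2 < clip->y ||
           box.y1 > clip->y + clip->height;
  }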
This makes clutter_actor_queue_redraw transparently use an actor's paint
volume to queue a clipped redraw.
We save the actor's paint box each time it is painted so that when
clutter_actor_queue_redraw is called we can determine the old and new
location of the actor so we know the full bounds of what must be redrawn
to clear its old view and show the new.
This makes _clutter_actor_transform_and_project_box a static function
and removes the prototype from clutter-private.h since it is no longer
used outside clutter-actor.c
The base implementation for the actor queue_relayout method was queuing
an implicit redraw, but nothing about the mere process of queuing a
relayout should force us to queue a redraw. If actors are moved as part
of relayouting later then they will queue a redraw at that point. Also,
clutter_actor_queue_relayout() still explicitly queues a redraw, so I
think this may have been doubly redundant.
If clutter_actor_allocate finds it necessary to update an actor's
allocation then it now also queues a redraw of that actor. Currently we
queue redraws for actors very early on when queuing a relayout instead
of waiting to determine the final outcome of relayouting to determine if
a redraw is really required. With this in place we can move away from
preemptive queuing of redraws.
clutter_actor_queue_relayout currently queues a relayout and a redraw,
but the plan is to change it to only queue a relayout and honour the
documentation by assuming that the process of relayouting will result
in queuing redraws for any actors whose allocation changes.
This doesn't make that change it just adds an internal
_clutter_actor_queue_only_relayout function which
clutter_actor_queue_relayout now uses as well as calling
clutter_actor_queue_redraw.
This adds a private ->relayout_pending boolean similar in spirit to
redraw_pending. This will allow us to queue a relayout without
implicitly queueing a redraw; instead we can depend on the actions
of a relayout to queue any necessary redraw.
When clutter_texture_new_from_actor is used we need to track when the
source actor queues a redraw or a relayout so we can also queue a redraw
or relayout for the texture actor.
There is an internal _clutter_actor_queue_redraw_with_clip API that gets
used for texture-from-pixmap to minimize what we redraw in response to
Damage events. It was previously working in terms of a ClutterActorBox
but it has now been changed so an actor can queue a redraw of a volume
instead.
The plan is that clutter_actor_queue_redraw will start to transparently
use _clutter_actor_queue_redraw_with_clip when it can determine a paint
volume for the actor.
For the blur effect we use a BLUR_PADDING constant to pad out the volume
of the source actor on the x and y axis. Previously we were offsetting
the origin negatively using BLUR_PADDING and then adding BLUR_PADDING
to the width and height, but we should have been adding 2*BLUR_PADDING
instead.
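In terms of the public paint volume API the corrected padding looks
roughly like this (a sketch; BLUR_PADDING is the effect's constant):

  static void
  pad_volume_for_blur (ClutterPaintVolume *volume)
  {
    ClutterVertex origin;

    /* shift the origin back by the padding... */
    clutter_paint_volume_get_origin (volume, &origin);
    origin.x -= BLUR_PADDING;
    origin.y -= BLUR_PADDING;
    clutter_paint_volume_set_origin (volume, &origin);

    /* ...and grow the volume by the padding on both sides */
    clutter_paint_volume_set_width (volume,
        clutter_paint_volume_get_width (volume) + 2 * BLUR_PADDING);
    clutter_paint_volume_set_height (volume,
        clutter_paint_volume_get_height (volume) + 2 * BLUR_PADDING);
  }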
This ensures that clipped redraws are disabled when using
CLUTTER_PAINT=redraws. This may seem unintuitive given that this option
is for debugging clipped redraws, but we can't draw an outline outside
the clip region and anything we draw inside the clip region is liable to
leave a trailing mess on the screen since it won't be cleared up by
later clipped redraws.
This adds a debug option to visualize the paint volumes of all actors.
When CLUTTER_PAINT=paint-volumes is exported in the environment before
running a Clutter application then all actors will have their bounding
volume drawn in green with a label corresponding to the actor's type.
This is a fairly extensive second pass at exposing paint volumes for
actors.
The API has changed to allow clutter_actor_get_paint_volume to fail
since there are times - such as when an actor isn't a descendant of the
stage - when the volume can't be determined. Another example is when
something has connected to the "paint" signal of the actor and we simply
have no way of knowing what might be drawn in that handler.
The API has also been changed to return a const ClutterPaintVolume pointer
(transfer none) so we can avoid having to dynamically allocate the
volumes in the most common/performance critical code paths. Profiling was
showing the slice allocation of volumes taking about 1% of an app's time,
for some fairly basic tests. Most volumes can now simply be allocated on
the stack; for clutter_actor_get_paint_volume we return a pointer to
&priv->paint_volume and if we need a more dynamic allocation there is
now a _clutter_stage_paint_volume_stack_allocate() mechanism which lets
us allocate data which expires at the start of the next frame.
The API has been extended to make it easier to implement
get_paint_volume for containers by using
clutter_actor_get_transformed_paint_volume and
clutter_paint_volume_union. The first allows you to query the paint
volume of a child but transformed into parent actor coordinates. The
second lets you combine volumes together so you can union all the
volumes for a container's children and report that as the container's
own volume.
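For example, a container's implementation might combine its children's
volumes along these lines (a sketch; "my_container" is hypothetical):

  static gboolean
  my_container_get_paint_volume (ClutterActor       *self,
                                 ClutterPaintVolume *volume)
  {
    GList *children, *l;
    gboolean ret = TRUE;

    children = clutter_container_get_children (CLUTTER_CONTAINER (self));

    for (l = children; l != NULL; l = l->next)
      {
        const ClutterPaintVolume *child_volume;

        /* query the child's volume transformed into this container's
         * coordinate space */
        child_volume =
          clutter_actor_get_transformed_paint_volume (l->data, self);
        if (child_volume == NULL)
          {
            /* if a child's volume can't be determined then neither
             * can ours */
            ret = FALSE;
            break;
          }

        clutter_paint_volume_union (volume, child_volume);
      }

    g_list_free (children);

    return ret;
  }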
The representation of paint volumes has been updated to consider that
2D actors are the most common.
The effect APIs, clutter-texture and clutter-group have been updated
accordingly.
Previously we used the transformed allocation but that doesn't take
into account actors with depth which may be projected outside the
area covered by the transformed allocation.
The blur effect will sample pixels on the edges of the offscreen buffer,
so we want to add a padding to avoid clamping the blur.
We do this by creating a larger target texture, and updating the paint
volume of the actor during paint to take that padding into account.
We should be using the real, on-screen, transformed size of the actor to
size and position the offscreen buffer we use to paint the actor for an
effect.
An Effect implementation might override the paint volume of the actor
to which it is applied. The get_paint_volume() virtual function should
be added to the Effect class vtable so that any effect can get the
current paint volume and update it.
The clutter_actor_get_paint_volume() function becomes context aware, and
does the right thing if called from within a ClutterEffect pre_paint()
or post_paint() implementation, by allowing all effects in the chain up
to the caller to modify the paint volume.
An actor has an implicit "paint volume", that is the volume in 3D space
occupied when painting itself.
The paint volume is defined as a cuboid with the origin placed at the
top-left corner of the actor; the size of the cuboid is given by three
vectors: width, height and depth.
ClutterActor provides API to convert the paint volume into a 2D box in
screen coordinates, to compute the on-screen area that an actor will
occupy when painted.
Actors can override the default implementation of the get_paint_volume()
virtual function to provide a different volume.
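For example, the on-screen area can be queried with the public API
along these lines (a sketch):

  static void
  print_paint_box (ClutterActor *actor)
  {
    ClutterActorBox box;

    /* projects the actor's paint volume into 2D screen coordinates;
     * this can fail, e.g. if the actor isn't on a stage yet */
    if (clutter_actor_get_paint_box (actor, &box))
      g_print ("paint box: %.0f,%.0f %.0fx%.0f\n",
               box.x1, box.y1,
               box.x2 - box.x1, box.y2 - box.y1);
  }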
*** WARNING: THIS COMMIT CHANGES THE BUILD ***
Do not recurse into the backend directories to build private, internal
libraries.
We only recurse from clutter/ into the cogl sub-directory; from there,
we don't recurse any further. All the backend-specific code in Cogl and
Clutter is compiled conditionally depending on the macros defined by the
configure script.
We still recurse from the top-level directory into doc, clutter and
tests, because gtk-doc and tests do not deal nicely with non-recursive
layouts.
This change makes Clutter compile slightly faster, and cleans up the
build system, especially when dealing with introspection data.
Ideally, we also want to make Cogl part of the top-level build, so that
we can finally drop the sed trick to change the shared library from the
GIR before compiling it.
Currently disabled:
‣ OSX backend
‣ Fruity backend
Currently enabled but untested:
‣ EGL backend
‣ Windows backend
ClutterAnimator currently has a number of bugs related to its
referencing of its internal timeline.
1) The default timeline created in _init is not unreffed (it appears the
programmer has wrongly thought ClutterTimeline has a floating reference
based on the use of g_object_ref_sink in _set_timeline)
2) The timeline and slave_timeline vars are unreffed in finalize instead
of dispose
3) The signal handlers set up in _set_timeline are not disconnected when
the animator is disposed
http://bugzilla.clutter-project.org/show_bug.cgi?id=2347
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This reorganizes the loop for clutter_actor_contains so that it is a
for loop rather than a while loop. Although this is mostly just
nitpicking, I think this change could make the loop slightly faster
when not optimized, because it doesn't perform the self == descendant
check twice, and it is clearer.
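The reorganized loop looks roughly like this sketch:

  gboolean
  clutter_actor_contains (ClutterActor *self,
                          ClutterActor *descendant)
  {
    ClutterActor *actor;

    /* walking up from the descendant itself means the
     * self == descendant case naturally returns TRUE */
    for (actor = descendant;
         actor != NULL;
         actor = clutter_actor_get_parent (actor))
      {
        if (actor == self)
          return TRUE;
      }

    return FALSE;
  }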
The documentation for clutter_actor_contains didn't specify what
happens when self==descendant. A strict reading of it might lead you
to think that it would return FALSE because in that case the
descendant isn't an immediate child or a deeper descendant. The code
actually would return TRUE. I think this is more useful so this patch
fixes the docs rather than the code.
When removing all keys in a ClutterAnimator, the hash table with
object/property name pairs went out of sync. This change makes
the animator always clear this hash table upon key-removal; and
refreshing it if the animator's timeline is running.
Fixes bug #2335
Each time a material property changes we look to see if any of its
ancestry has become redundant and if so we prune that redundant
ancestry.
There was a problem with the logic that handles this though because we
weren't considering that a material which is a layer state authority may
still defer to ancestors to define the state of individual layers.
For example a material that derives from a parent with 5 layers can
become a STATE_LAYERS authority by simply changing its ->n_layers count
to 4 and in that case it can still defer to its ancestors to define the
state of those 4 layers.
This patch checks first if a material is a layer state authority and if
so only tries to prune its ancestry if it also *owns* all the individual
layers it depends on. (I.e. if g_list_length
(material->layer_differences) != material->n_layers then it's not safe
to try pruning its ancestry!)
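The guard boils down to something like this sketch (using the fields
named above):

  static gboolean
  owns_all_dependent_layers (CoglMaterial *material)
  {
    /* Being a STATE_LAYERS authority isn't enough on its own; the
     * material must also own every layer it depends on, otherwise it
     * still defers to its ancestors for individual layer state and
     * its ancestry must not be pruned. */
    return g_list_length (material->layer_differences) ==
           material->n_layers;
  }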
http://bugzilla-attachments.gnome.org/attachment.cgi?id=170907
glRenderbufferStorage() raises a GL_INVALID_ENUM error for
GL_DEPTH_STENCIL with the OpenGL ES backend, so enable this only for the
OpenGL backend.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
This is all internal, so we shouldn't need it; unfortunately, it seems
we're passing invalid data internally, so for the time being catching
inconsistencies should at least emit a warning for us to backtrace.
This adds a check in clutter_actor_real_queue_redraw after calling
_clutter_actor_get_stage_internal to check in case the actor doesn't yet
have an associated stage so we can avoid passing a NULL stage pointer to
_clutter_stage_set_pick_buffer_valid which could cause a crash.
*** This is an API change ***
The general pattern for axis-aligned arguments is:
x argument
y argument
If we consider a column an x-aligned argument, and a row a y-aligned
argument, then we need to update the TableLayout functions to be:
column
row
and not:
row
column
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There was a problem though in that we were tracking this information in
the ClutterMainContext, but conceptually this doesn't really make sense
because the pick buffer is associated with a stage framebuffer and there
can be multiple stages for one context.
This patch moves the state tracking to ClutterStage.
This reverts commit d7e86e2696.
This was a half-baked patch that was pushed a bit early: it broke
test-texture-pick-with-alpha, and the commit message refers to a change
on the wip/paint-box branch that hasn't happened yet.
We have an optimization to track when there are multiple picks per
frames so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There were two problems with how we were tracking this state though.
Firstly we were tracking this information in the ClutterMainContext, but
conceptually this doesn't really make sense because the pick buffer is
associated with a stage framebuffer and there can be multiple stages for
one context. Secondly - since the change to how redraws are queued - we
weren't marking the pick buffer as invalid when queuing a redraw, we
were only marking the buffer invalid when signaling/finishing the
queue-redraw process, which is now deferred until just before a paint.
This meant using clutter_stage_get_actor_at_pos after a scenegraph
change could give a wrong result if it just read from an existing (but
technically invalid) pick buffer.
This patch moves the state tracking to ClutterStage, and ensures the
buffer is invalidated in _clutter_stage_queue_actor_redraw.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2283
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The request mode set by the box layout was previously width-for-height
in a vertical layout and height-for-width in a horizontal layout which
seems to be wrong. For example, if width-for-height is used in a
vertical layout then the width request will come second with the
for_height parameter set. However a vertical layout doesn't pass the
for_height parameter on to its children so doing the requests in that
order doesn't help. If the layout contains a ClutterText then both the
width and height request for it will have -1 for the for_width and
for_height parameters so the text would end up allocated too small.
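With the fix the mapping is effectively as follows (a sketch of the
intent, not the actual function):

  static ClutterRequestMode
  box_layout_request_mode (ClutterBoxLayout *layout)
  {
    /* a vertical layout requests widths first and then
     * height-for-width; a horizontal layout does the opposite */
    return clutter_box_layout_get_vertical (layout)
      ? CLUTTER_REQUEST_HEIGHT_FOR_WIDTH
      : CLUTTER_REQUEST_WIDTH_FOR_HEIGHT;
  }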
http://bugzilla.clutter-project.org/show_bug.cgi?id=2328
If set_cogl_texture() is called after unsetting the Texture's material
then we really want to make a copy of the template.
Also, we should assert more often if the internal state goes horribly
wrong: at least, we'll have a backtrace.
The order of the row_span and column_span arguments was different in
the declaration from that in the definition. This was causing the
gtk-doc to also have the wrong order.
If COGL_OBJECT_DEBUG is defined then cogl-object-private.h will call
COGL_NOTE in the ref and unref macros. For this to work the debug
header needs to also be included or COGL_NOTE won't necessarily be
defined.
If the FlowLayout layout manager wasn't allocated the same size it
requested then it should blow its caches and recompute the layout
with the given allocation size.
Instead of using the fixed position and size API, use the newly added
update_allocation() virtual function in ClutterConstraint to change the
allocation of a ClutterActor. This allows using constraints inside
layout managers, and also allows Constraints to react to changes in the
size of an actor without causing relayout cycles.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2319
The Constraint should plug directly into the allocation mechanism, and
modify the allocation of the actor to which they are applied. This is
similar to the mechanism used by the Effect class to modify the paint
sequence of an actor.
In line with the changes made in f5f066df9c to clean up how Clutter
deals with transformations of actors this patch updates the code in
clutter-offscreen-effect.c. We now query the projection matrix from the
stage instead of the perspective and instead of duplicating the logic to
setup the stage view transform we now use
_clutter_actor_apply_modelview_transform for the stage instead.
cogl_util_next_p2 is declared in cogl-util.h which is a private header
so it shouldn't be possible for an application to use it. It's
probably not a function we'd like to export from Cogl so it seems
better to keep it private. This patch renames it to _cogl_util_next_p2
so that it won't be exported from the shared library.
The documentation for the function is also slightly wrong because it
stated that the function returned the next power of two greater than
'a'. However the code would actually return 'a' if it is already a
power of two. I think the actual behaviour is more useful so this
patch changes the documentation rather than the code.
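The documented behaviour can be summarised with this sketch (note it
returns 'a' unchanged when 'a' is already a power of two):

  static unsigned int
  _cogl_util_next_p2 (unsigned int a)
  {
    unsigned int rval = 1;

    /* keep doubling until we reach or pass 'a' */
    while (rval < a)
      rval <<= 1;

    return rval;
  }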
Previously CoglVertexBuffer would always set the flush options flags
to at least contain COGL_MATERIAL_FLUSH_FALLBACK_MASK. The code then
later checks whether any flags are set before deciding whether to copy
the material to implement the overrides. This means that it would
always end up copying the material even if there are no fallback
layers. This patch changes it so that it only sets
COGL_MATERIAL_FLUSH_FALLBACK_MASK if fallback_layers != 0.
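The change amounts to something like this sketch (struct and flag names
as used in the text; the surrounding flush code is omitted):

  static void
  setup_flush_options (CoglMaterialFlushOptions *options,
                       guint32                   fallback_layers)
  {
    options->flags = 0;

    /* only request the fallback override when there really are
     * fallback layers, so that an unmodified material doesn't need
     * to be copied at flush time */
    if (fallback_layers != 0)
      {
        options->flags |= COGL_MATERIAL_FLUSH_FALLBACK_MASK;
        options->fallback_layers = fallback_layers;
      }
  }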
If a single arbfp program is being shared between multiple CoglMaterials
then we need to make sure we update all program.local params when
switching between materials. Previously we had a dirty flag to track
when combine_constant params were changed but didn't take into account
that different materials sharing the same program may have different
combine constants.
Previously the backend private state was used to either link to an
authority material or provide authoritative program state. The mechanism
seemed overly complex and felt very fragile. A recent commit added a
lot of documentation to make it easier to understand, but
still it didn't feel very elegant.
This patch takes a slightly different approach; we now have a
ref-counted ArbfpProgramState object which encapsulates a single ARBfp
program and the backend private state now just has a single member which
is a pointer to one of these arbfp_program_state objects. We no longer
need to cache pointers to our arbfp-authority and so we can get rid of
a lot of awkward code that ensured these pointers were
updated/invalidated at the right times. The program state objects are
not tightly bound to a material so it will also allow us to later
implement a cache mechanism that lets us share state outside a material's
ancestry. This may help to optimize code not following the
recommendations of deriving materials from templates, avoiding one-shot
materials and not repeatedly modifying materials because even if a
material's ancestry doesn't naturally lead us to shareable state we can
fallback to searching for shareable state using central hash tables.
This adds a way to iterate the layer indices of the given material since
cogl_material_get_layers has been deprecated. The user provides a
callback to be called once for each layer.
Because modification of layers in the callback may potentially
invalidate any number of the internal CoglMaterialLayer structures and
invalidate the material's layer cache this should be more robust than
cogl_material_get_layers() which used to return a const GList *
pointing directly to internal state.
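Usage looks roughly like this (a sketch assuming a callback that
receives the material and a layer index, and can return FALSE to stop
iterating early):

  static gboolean
  print_layer_cb (CoglMaterial *material,
                  int           layer_index,
                  void         *user_data)
  {
    g_print ("layer index: %d\n", layer_index);
    return TRUE; /* keep iterating */
  }

  static void
  dump_layer_indices (CoglMaterial *material)
  {
    cogl_material_foreach_layer (material, print_layer_cb, NULL);
  }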
This fixes the material backends to declare their constant vtable in the
c file with a corresponding extern declaration in the header. This
should fix complaints about duplicate symbols seen on OSX.
Instead of lazily incorporating combine constants as arbfp PARAM
constants in the source directly we now use program.local parameters
instead so we can avoid repeating codegen if a material's combine
constant is updated. This should be a big win for applications animating
a constant used for example in an animated interpolation, such as
gnome-shell.
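The idea, sketched with raw GL_ARB_fragment_program calls (Cogl reaches
these entry points through its own function pointers):

  static void
  update_combine_constant (GLuint      gl_program,
                           int         param_index,
                           const float constant[4])
  {
    /* updating a program.local parameter avoids regenerating and
     * recompiling the program source every time the combine constant
     * changes */
    glBindProgramARB (GL_FRAGMENT_PROGRAM_ARB, gl_program);
    glProgramLocalParameter4fvARB (GL_FRAGMENT_PROGRAM_ARB,
                                   param_index, constant);
  }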
http://bugzilla.clutter-project.org/show_bug.cgi?id=2280