This adds a note to clarify that cogl_matrix_multiply allows you to
multiply the @a matrix in-place, so @a can equal @result but @b can't
equal @result.
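A minimal sketch of the two cases this note is about (using the public
CoglMatrix API; the matrices here are just illustrative):

    CoglMatrix transform, rotation;

    cogl_matrix_init_identity (&transform);
    cogl_matrix_translate (&transform, 10.0f, 0.0f, 0.0f);
    cogl_matrix_init_identity (&rotation);
    cogl_matrix_rotate (&rotation, 45.0f, 0.0f, 0.0f, 1.0f);

    /* OK: @result may alias @a, so this multiplies in-place */
    cogl_matrix_multiply (&transform, &transform, &rotation);

    /* Not OK: @result must not alias @b */
    /* cogl_matrix_multiply (&rotation, &transform, &rotation); */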
When uploading the layer matrix to GL it wasn't first calling
glActiveTexture to select the right texture unit for the
layer. This would end up setting the texture matrix on whatever layer
happened to be previously active. This happened to work for
test-cogl-multitexture presumably because it was coincidentally
setting the layer matrix on the last used layer.
The pipeline private data is accessed both from the private data set
on a CoglPipeline and the destroy notify function of a weak material
that the vertex buffer creates when it needs to override the wrap
mode. However when a CoglPipeline is destroyed, the CoglObject code
first removes all of the private data set on the object and then the
CoglPipeline code gets invoked to destroy all of the weak children. At
this point the vertex buffer's weak override destroy notify function
will get invoked and try to use the private data which has already
been freed causing a crash.
This patch instead adds a reference count to the pipeline private data
struct so that we can avoid freeing it until both the private data on
the pipeline has been destroyed and all of the weak materials are
destroyed.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2544
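A rough sketch of the ref-counting approach (the struct and function
names here are illustrative, not the exact ones from the patch):

    #include <glib.h>

    typedef struct
    {
      unsigned int ref_count;
      /* ... the actual per-pipeline private data ... */
    } PipelinePrivData;

    static PipelinePrivData *
    priv_data_ref (PipelinePrivData *priv)
    {
      priv->ref_count++;
      return priv;
    }

    static void
    priv_data_unref (PipelinePrivData *priv)
    {
      /* Both the CoglPipeline's private data destroy notify and the
         weak material's destroy notify call this; whichever runs last
         frees the data so neither can see freed memory */
      if (--priv->ref_count == 0)
        g_free (priv);
    }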
In cogl_pipeline_set_layer_combine_constant it was comparing whether
the new color is the same as the old color using a memcmp on the
constant_color parameter. However the combine constant is stored in
the layer data as an array of four floats but the passed in color is a
CoglColor (which is currently an array of four guint8s). This was
causing valgrind errors, and presumably the check for setting the
same color twice would also always fail.
This patch makes it do the conversion to a float array upfront before
the comparison.
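The shape of the fix is roughly this (an illustrative helper, not the
actual patch; the stored layer constant is passed in as a plain float
array here):

    #include <string.h>

    static gboolean
    combine_constant_equal (const CoglColor *constant_color,
                            const float *layer_constant)
    {
      float as_floats[4];

      /* Convert the CoglColor to floats up front so both sides of the
         memcmp really are four floats */
      as_floats[0] = cogl_color_get_red_float (constant_color);
      as_floats[1] = cogl_color_get_green_float (constant_color);
      as_floats[2] = cogl_color_get_blue_float (constant_color);
      as_floats[3] = cogl_color_get_alpha_float (constant_color);

      return memcmp (as_floats, layer_constant,
                     sizeof (float) * 4) == 0;
    }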
cogl_matrix_project_points and cogl_matrix_transform_points had an
optimization for the common case where the stride parameters exactly
match the size of the corresponding structures. The code for both, when
generated by gcc with -O2 on x86-64, uses two registers to hold the
addresses of the input and output arrays. In the strided version these
pointers are incremented by adding the value of a register and in the
packed version they are incremented by adding an immediate value. I
think the difference in cost here would be negligible and it may even
be faster to add a register.
Also GCC appears to retain the loop counter in a register for the
strided version but in the packed version it can optimize it out and
directly use the input pointer as the counter. I think it would be
possible to reorder the code a bit to explicitly use the input pointer
as the counter if this were a problem.
Getting rid of the packed versions tidies up the code a bit and it
could potentially be faster if the code differences are small and we
get to avoid an extra conditional in cogl_matrix_transform_points.
* elliot/cookbook-actors-composite:
docs: Add reference to useful GObject tutorial
docs: Explain why destroy() is implemented
docs: Implement destroy() rather than dispose()
docs: Don't use clutter_stage_get_default()
docs: Change text on button
docs: Add a note about other state variables
docs: Complete composite actor recipe
docs: Change order of functions in example to match docs
docs: Add more comments on how allocate() works
docs: Include code examples in the recipe
docs: Explain enums for properties and signals
docs: Don't set explicit size on button
docs: Add example of preferred_height() and preferred_width()
docs: Add recipe for creating a custom ClutterActor with composition
docs: Add more comments on code example for composite actor
docs: Improve example code formatting
docs: Add some gtk-doc annotations to example
docs: Add custom ClutterActor example which uses composition
When copying COMBINE state in
_cogl_pipeline_layer_init_multi_property_sparse_state we would read some
state from the destination layer (potentially invalid data), then
redundantly set the value back on the destination. This was picked up by
valgrind, and the code is now more careful about how it references the
src layer vs the destination layer.
There is currently a problem with per-framebuffer journals in that it's
possible to create a framebuffer from a texture which then gets rendered
to, but the framebuffer (and corresponding journal) can be freed before
the texture gets used to draw with.
Conceptually we want to make sure when freeing a framebuffer that - if
it is associated with a texture - we flush the journal as the last thing
before really freeing the framebuffer's meta data. Technically though
this is awkward to implement since the obvious mechanism for us to be
notified about the framebuffer's destruction (by setting some user data
internally with a callback) notifies when the framebuffer has a
ref-count of 0. This means we'd have to be careful what we do with the
framebuffer at that point: we'd need to consider e.g. recursive
destruction, avoid anything that would set more user data on the
framebuffer while it is being destroyed, and ensure nothing else gets
notified of the framebuffer's destruction before the journal has been
flushed.
For simplicity, for now, this patch provides another solution which is
to flush framebuffer journals whenever we switch away from a given
framebuffer via cogl_set_framebuffer or cogl_push/pop_framebuffer. The
disadvantage of this approach is that we can't batch all the geometry of
a scene that involves intermediate renders to offscreen framebuffers.
Clutter is doing this more and more with applications that use the
ClutterEffect APIs so this is a shame. Hopefully this will only be a
stop-gap solution while we consider how to reliably support journal
logging across framebuffer changes.
When flushing a clip stack that contains more than one rectangle which
needs to use the stencil buffer the code takes a different path so
that it can combine the new rectangle with the existing contents of
the stencil buffer. However it was not correctly flushing the
modelview and projection matrices, so the rectangle would end up in the
wrong place.
This adds a COGL_DEBUG=clipping option that reports how the clip is
being flushed. This is needed to determine whether the scissor,
stencil, clip planes or software clipping is being used.
The CoglDebugFlags are now stored in an array of unsigned ints rather
than a single variable. The flags are accessed using macros instead of
directly peeking at the cogl_debug_flags variable. The index values
are stored in the enum rather than the actual mask values so that the
enum doesn't need to be more than 32 bits wide. The hope is that the
code to determine the index into the array can be optimized out by the
compiler so it should have exactly the same performance as the old
code.
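A minimal sketch of the scheme (illustrative; the real names live in
cogl-debug.h and may differ in detail):

    /* The enum holds array/bit *indices*, not mask values, so it never
       needs to be wider than an int even with more than 32 flags */
    typedef enum
    {
      COGL_DEBUG_SLICING,
      COGL_DEBUG_CLIPPING,
      /* ... */
      COGL_DEBUG_N_FLAGS
    } CoglDebugFlags;

    extern unsigned int _cogl_debug_flags[(COGL_DEBUG_N_FLAGS + 31) / 32];

    /* With a compile-time constant flag the compiler can fold the array
       index and bit mask, so this costs the same as the old single
       variable test */
    #define COGL_DEBUG_ENABLED(flag) \
      (!!(_cogl_debug_flags[(flag) / 32] & (1U << ((flag) % 32))))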
The lighting parameters such as the diffuse and ambient colors were
previously only flushed in the fixed vertend. This meant that if a
vertex shader was used then they would not be set. The lighting
parameters are uniforms which are just as useful in a fragment shader
so it doesn't really make sense to set them in the vertend. They are
now flushed in the common cogl-pipeline-opengl code but the code is
#ifdef'd for GLES2 because they need to be part of the progend in that
case.
The uniforms for the alpha test reference value and point size on
GLES2 are updated using similar code. This generalizes the code so
that there is a static array of predefined builtin uniforms which
contains the uniform name, a pointer to a function to get the value
from the pipeline, a pointer to a function to update the uniform and a
flag representing which CoglPipelineState change affects the
uniform. The uniforms are then updated in a loop. This should simplify
adding more builtin uniforms.
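The table is roughly of this shape (a sketch; the struct and function
names here are hypothetical and the real definitions may differ):

    typedef struct
    {
      const char *uniform_name;       /* e.g. the point size uniform */
      /* Reads the current value out of the pipeline */
      float (* get_value) (CoglPipeline *pipeline);
      /* Uploads the value to the program's uniform location */
      void (* update_func) (int uniform_location, float value);
      /* The CoglPipelineState change that dirties this uniform */
      CoglPipelineState dirty_state;
    } BuiltinUniformData;

    /* Updating the builtins then becomes one loop over the table,
       checking each entry's dirty_state against the changes seen since
       the last flush */
    static void
    update_builtin_uniforms (CoglPipeline *pipeline,
                             const BuiltinUniformData *uniforms,
                             const int *locations,
                             int n_uniforms,
                             CoglPipelineState changes)
    {
      int i;

      for (i = 0; i < n_uniforms; i++)
        if (changes & uniforms[i].dirty_state)
          uniforms[i].update_func (locations[i],
                                   uniforms[i].get_value (pipeline));
    }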
The builtin uniforms are accessible from either the vertex shader or
the fragment shader so we should define them in the common
section. This doesn't really matter for the current list of uniforms
because it's pretty unlikely that you'd want to access the matrices
from the fragment shader, but for other builtins such as the lighting
material properties it makes sense.
* xi2: (41 commits)
test-devices: Actually print the axis data
device-manager/xi2: Sync the stage of source devices
event: Clean up clutter_event_copy()
device: unset the axes array pointer when resetting
device-manager/xi2: Fix device hotplugging
glx: Clean up GLX implementation
device/x11: Store min/max keycode in the XI device class
x11: Hide all private symbols
docs: More documentation fixes for InputDevice
*/event: Never manipulate the event queue directly
win32: Update DeviceManager device creation
device: Allow enabling/disabling non-master devices
backend/eglx: Add newly created stages to the translators
device: Add more doc annotations
device: Use a double for translate_axis() argument
test-devices: Clean up and show axes data
event: Fix up clutter_event_copy()
device/xi2: Translate the axis data after setting devices
device: Add more accessors for properties
docs: Update API reference
...
When we added the texture->framebuffers member a _cogl_texture_init
function was added to initialize the list of framebuffers associated
with a texture to NULL. All the backends were updated except the
x11 tfp backend. This was causing crashes in test-pixmap.
This is part of a broader cleanup of some of the experimental Cogl API.
One of the reasons for this particular rename is to reduce the verbosity
of using the API. Another reason is that CoglVertexArray is going to be
renamed CoglAttributeBuffer and we want to help emphasize the
relationship between CoglAttributes and CoglAttributeBuffers.
We have a bunch of experimental convenience functions like
cogl_primitive_p2/p2t2 that have corresponding vertex structures but it
seemed a bit odd to have the vertex annotation e.g. "P2T2" be an infix
of the type like CoglP2T2Vertex instead of be a postfix like
CoglVertexP2T2. This switches them all to follow the postfix naming
style.
COGL_DEBUG=disable-fast-read-pixel can be used to disable the
optimization for reading a single pixel colour back by looking at the
geometry in the journal and not involving the GPU. With this disabled we
will always flush the journal, rendering to the framebuffer and then use
glReadPixels to get the result.
This adds a transparent optimization to cogl_read_pixels for when a
single pixel is being read back and it happens that all the geometry of
the current frame is still available in the framebuffer's associated
journal.
The intention is to indirectly optimize Clutter's render based picking
mechanism in such a way that, for the 99% of cases where scenes are
comprised of trivial quad primitives that can easily be intersected, we
can avoid the latency of kicking a GPU render and blocking for the
result when we know we can calculate the result manually on the CPU
probably faster than we could even kick a render.
A nice property of this solution is that it maintains all the
flexibility of the render based picking provided by Clutter and it can
gracefully fall back to GPU rendering if actors are drawn using anything
more complex than a quad for their geometry.
It seems worth noting that there is a limitation to the extensibility of
this approach in that it can only optimize picking against geometry
that passes through Cogl's journal which isn't something Clutter
directly controls. For now though this really doesn't matter since
basically all apps should end up hitting this fast-path. The current
idea to address this longer term would be a pick2 vfunc for ClutterActor
that can support geometry and render based input regions of actors and
move this optimization up into Clutter instead.
Note: currently we don't have a primitive count threshold above which
we would fall back to the GPU; there could be scenes with enough
geometry that kicking a render and letting the GPU determine the result
would compensate for its cost and be more efficient than the CPU path.
We don't currently expect this to be common though.
Note: in the future it could still be interesting to revive something
like the wip/async-pbo-picking branch to provide an asynchronous
read-pixels based optimization for Clutter picking in cases where more
complex input regions that necessitate rendering are in use or if we do
add a threshold for rendering as mentioned above.
Both cogl_matrix_transform_points and _project_points take points_in and
points_out arguments and explicitly allow pointing to the same array
(i.e. to transform in-place). The implementations of the various internal
transform functions though were not handling this possibility and so it
was possible to reference partially transformed vertex values as if
they were original input values, leading to incorrect results. This patch
ensures we take a temporary copy of the current input point when
transforming.
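In essence each point is copied into locals before any output is
written, along these lines (a simplified sketch for the 2D float case;
the real code handles the strided 2/3/4 component variants):

    typedef struct { float x, y; } Point2f;

    static void
    transform_point_2f (const CoglMatrix *matrix,
                        const Point2f *point_in,
                        Point2f *point_out)
    {
      /* Copy first: point_in and point_out may alias, so writing
         point_out->x before reading point_in->y would corrupt the
         result */
      float x = point_in->x;
      float y = point_in->y;

      point_out->x = matrix->xx * x + matrix->xy * y + matrix->xw;
      point_out->y = matrix->yx * x + matrix->yy * y + matrix->yw;
    }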
This adds a utility function that can determine if a given point
intersects an arbitrary polygon, by counting how many edges a
"semi-infinite" horizontal ray crosses from that point. The plan is to
use this for a software based read-pixel fast path that avoids using the
GPU to rasterize journaled primitives and can instead intersect a point
being read with quads in the journal to determine the correct color.
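A sketch of the edge-crossing test (illustrative; the internal helper's
exact name, vertex layout and edge-case handling may differ):

    #include <stdbool.h>

    /* Count how many polygon edges a semi-infinite horizontal ray cast
       to the right of (x, y) crosses; an odd count means the point is
       inside the polygon */
    static bool
    point_in_polygon (float x, float y, const float *verts, int n_verts)
    {
      bool inside = false;
      int i, j;

      for (i = 0, j = n_verts - 1; i < n_verts; j = i++)
        {
          float xi = verts[i * 2], yi = verts[i * 2 + 1];
          float xj = verts[j * 2], yj = verts[j * 2 + 1];

          /* Does the edge (j -> i) straddle the ray's y, and is the
             crossing point to the right of (x, y)? */
          if ((yi > y) != (yj > y) &&
              x < (xj - xi) * (y - yi) / (yj - yi) + xi)
            inside = !inside;
        }

      return inside;
    }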
This adds a stop-gap mechanism for Cogl to know when the window system
is requested to present the current backbuffer to the frontbuffer by
adding a _cogl_swap_buffers_notify function that backends are now
expected to call right after issuing the equivalent request to OpenGL
via the platform's OpenGL binding layer. This (blindly) updates all the
backends to call this new function.
For now Cogl doesn't do anything with the notification but the intention
is to use it as part of a planned read-pixel optimization which will
need to reset some state at the start of each new frame.
Instead of having _cogl_get/set_clip_stack functions which reference the global
CoglContext this instead makes those into CoglClipState method functions
named _cogl_clip_state_get/set_stack that take an explicit pointer to a
CoglClipState.
This also adds _cogl_framebuffer_get/set_clip_stack convenience
functions that avoid having to first get the ClipState from a
framebuffer then the stack from that - so we can maintain the
convenience of _cogl_get_clip_stack.
This adds an internal function to be able to query the screen space
bounding box of the current clip entries contained in a given
CoglClipStack.
This bounding box, which is cheap to determine, is useful for knowing the
largest extents that might be updated while drawing with this clip
stack.
For example the plan is to use this as part of an optimized read-pixel
path handled on the CPU which will need to track the currently valid
extents of the last call to cogl_clear().
Instead of having a single journal per context, we now have a
CoglJournal object for each CoglFramebuffer. This means we now don't
have to flush the journal when switching/pushing/popping between
different framebuffers so for example a Clutter scene that involves some
ClutterEffect actors that transiently redirect to an FBO can still be
batched.
This also allows us to track state in the journal that relates to the
current frame of its associated framebuffer which we'll need for our
optimization for using the CPU to handle reading a single pixel back
from a framebuffer when we know the whole scene is currently comprised
of simple rectangles in a journal.
This adds an internal alternative to cogl_object_set_user_data that also
passes an instance pointer to destroy notify callbacks.
When setting private data on a CoglObject it's often desirable to know
the instance being destroyed when we are being notified to free the
private data due to the object being freed. The typical solution to this
is to track a pointer to the instance in the private data itself so it
can be identified but that usually requires an extra micro allocation
for the private data that could have been avoided if only the callback
were given an instance pointer.
The new internal _cogl_object_set_user_data passes the instance pointer
as a second argument which means it is ABI compatible for us to layer
the public version on top of this internal function.
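The internal variant essentially changes the destroy notify to also
take the instance, along these lines (a sketch of the idea rather than
the exact prototypes):

    /* Public: the destroy notify only sees the user data, so private
       data usually has to store a back-pointer to the instance */
    typedef void (* CoglUserDataDestroyCallback) (void *user_data);

    /* Internal: the destroy notify is also handed the instance being
       destroyed, so no back-pointer (and no extra allocation to hold
       it) is needed */
    typedef void
    (* CoglUserDataDestroyInternalCallback) (void *user_data,
                                             void *instance);

    void
    _cogl_object_set_user_data (CoglObject *object,
                                CoglUserDataKey *key,
                                void *user_data,
                                CoglUserDataDestroyInternalCallback destroy);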
This moves the implementation of cogl_clear into cogl-framebuffer.c as
two new internal functions _cogl_framebuffer_clear and
_cogl_framebuffer_clear4f. It's not clear if this is what the API will
look like as we make more of the CoglFramebuffer API public due to the
limitations of using flags to identify buffers when framebuffers may
contain any number of ancillary buffers, but conceptually it makes some
sense to tie the operation of clearing a color buffer to a framebuffer.
The short term intention is to enable tracking the current clear color
as a property of the framebuffer as part of an optimization for reading
back single pixels when the geometry is simple enough that we can
compute the result quickly on the CPU. (If the point doesn't intersect
any geometry we'll need to return the last clear color.)
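Conceptually a clear now looks something like this, given a
CoglFramebuffer *framebuffer (a sketch; the internal signature may
differ slightly):

    /* Clear just the color buffer of this framebuffer; the framebuffer
       can then remember the clear color for the read-pixel fast path */
    _cogl_framebuffer_clear4f (framebuffer,
                               COGL_BUFFER_BIT_COLOR,
                               0.0f, 0.0f, 0.0f, 1.0f);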
Previously most of the code for cogl-program and cogl-shader was
ifdef'd out for GLES 1.1 and alternate stub definitions were
defined. This patch removes those and instead puts #ifdef's directly
in the functions that need it. This should make it a little bit easier
to maintain.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2516
When determining whether to hash the combine constant Cogl checks the
arguments to the combine funcs to determine whether the combine
constant is used. However it was using the GLenums GL_CONSTANT_COLOR
and GL_CONSTANT_ALPHA but these are not valid values for the
CoglPipelineCombineSource enum so presumably the constant would never
get hashed. This patch makes it use Cogl's enum of
COGL_PIPELINE_COMBINE_SOURCE_CONSTANT instead.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2516
GLES has an extension called GL_OES_mapbuffer to support mapping
buffer objects but only for writing. Cogl now has two new feature
flags to advertise whether mapping for reading and writing is
supported. Under OpenGL, these features are always set if the VBO
extension is advertised and under GLES only the write flag is set if
the GL_OES_mapbuffer extension is advertised.
In the journal code and when generating the stroke path the vertices
are generated on the fly and stored in a CoglBuffer using
cogl_buffer_map. However cogl_buffer_map is allowed to fail and the code
wasn't checking for a NULL return value. In particular on GLES it will
always fail because glMapBuffer is only provided by an extension. This
adds a new pair of internal functions called
_cogl_buffer_{un,}map_for_fill_or_fallback which wrap
cogl_buffer_map. If the map fails then it will instead return a
pointer into a GByteArray attached to the context. When the buffer is
unmapped the array is copied into the buffer using
cogl_buffer_set_data.
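The wrapper behaves roughly like this (simplified sketch; the real code
keeps the fallback array on the CoglContext and also remembers which
buffer is currently using it):

    #include <glib.h>

    static GByteArray *fallback_array; /* really kept on the context */

    void *
    _cogl_buffer_map_for_fill_or_fallback (CoglBuffer *buffer)
    {
      void *data = cogl_buffer_map (buffer,
                                    COGL_BUFFER_ACCESS_WRITE,
                                    COGL_BUFFER_MAP_HINT_DISCARD);

      if (data == NULL)
        {
          /* Mapping failed (e.g. no glMapBuffer on GLES): hand back a
             pointer into a byte array instead; the unmap counterpart
             copies it into the buffer with cogl_buffer_set_data */
          if (fallback_array == NULL)
            fallback_array = g_byte_array_new ();
          g_byte_array_set_size (fallback_array,
                                 cogl_buffer_get_size (buffer));
          data = fallback_array->data;
        }

      return data;
    }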
On GLES2 there's no builtin mechanism to replace texture coordinates
with point sprite coordinates so calling glEnable(GL_POINT_SPRITE)
isn't valid. Instead the point sprite coords are implemented by using
a special builtin varying variable in GLSL.
There are several places where we need to compare the texture state of a
pipeline and sometimes we need to take into consideration if the
underlying texture has changed but other times we may only care to know
if the texture target has changed.
For example the fragends typically generate programs that they want to
share with all pipelines with equivalent fragment processing state, and
in this case when comparing pipelines we only care about the texture
targets since changes to the underlying texture won't affect the
programs generated.
Prior to this we had tried to handle this by passing around some special
flags to various functions that evaluate pipeline state to say when we
do/don't care about the texture data, but this wasn't working in all
cases and was more awkward to manage than the new approach.
Now we simply have two state bits,
COGL_PIPELINE_LAYER_STATE_TEXTURE_TARGET and
COGL_PIPELINE_LAYER_STATE_TEXTURE_DATA, and CoglPipelineLayer has an
additional target member. Since all the appropriate code takes masks of
these state bits to determine what to evaluate we don't need any extra
magic flags.
When notifying that a pipeline property is going to change, at
times a pipeline will take over being the authority of the corresponding
state group. Some state groups can contain multiple properties and so to
maintain the integrity of all of the properties we have to initialize
all the property values in the new authority. For state groups with only
one property we don't have to initialize anything during the
pre_change_notify() because we can assume the value will be initialized
as part of the change being notified.
This patch optimizes how we handle this initialization of state groups
in a couple of ways; firstly we no longer do anything to initialize
state-groups with only one property, secondly we no longer use
_cogl_pipeline_copy_differences - (we have a new
_cogl_pipeline_init_multi_property_sparse_state() func) so we can avoid
lots of calls to handle_automatic_blend_enable(), which is sometimes seen
high in sysprof profiles.
Previously atlasing would be disabled if the GL driver did not
support reading back texture data. This meant that atlasing would not
happen on GLES. However we also require that the driver support FBOs
and the texture data is only read back as a fallback if the FBO
fails. Therefore the atlas should be ok on GLES 2 which has FBO
support in core.
We try and bail out of flushing pipeline state asap if we can see the
pipeline has already been flushed and hasn't changed but we weren't
checking to see if the skip_gl_color flag is the same as when it was
last flushed too, and so we'd sometimes bail out without updating the
glColor correctly.
When an item is added to the journal the current pipeline immediately
gets the legacy state applied to it and the modified pipeline is
logged instead of the original. However the actual drawing from the
journal is done using the vertex attribute API which was also applying
the legacy state. This meant that the legacy state used would be a
combination of the state set when the journal entry was added as well
as the state set when the journal is flushed. To fix this there is now
an extra CoglDrawFlag to avoid applying the legacy state when setting
up the GL state for the vertex attributes. The journal uses this flag
when flushing.
The vertex attribute API assumes that if there is a color array
enabled then we can't determine if the colors are opaque so we have to
enable blending. The journal always uses a color array to avoid
switching color state between rectangles. Since the journal switched
to using vertex attributes this means we effectively always enable
blending from the journal. To fix this there is now a new flag for
_cogl_draw_vertex_attributes to specify that the color array is known
to only contain opaque colors which causes the draw function not to
copy the pipeline. If the pipeline has blending disabled then the
journal passes this flag.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2481
There is an internal version of cogl_draw_vertex_attributes_array
which previously just bypassed the framebuffer flushing, journal
flushing and pipeline validation so that it could be used to draw the
journal. This patch generalises the function so that it takes a set of
flags to specify which parts to flush. The public version of the
function now just calls the internal version with the flags set to
0. The '_real' version of the function has now been merged into the
internal version because it was only called in one place. This
simplifies the code somewhat. The common code which
flushed the various state has been moved to a separate function. The
indexed versions of the functions have had a similar treatment.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2481
Cogl no longer has any code that assumes the buffer in a CoglBitmap is
allocated to the full size of height*rowstride. We should comment that
this is the case so that we remember to keep it that way. This is
important for cogl_texture_new_from_data because the application may
have created the data from a sub-region of a larger image and in that
case it's not safe to read the full rowstride of the last row when the
sub region contains the last row of the larger image.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2491
When uploading data for GLES we need to deal with cases where the
rowstride is too large to be described only by GL_UNPACK_ALIGNMENT
because there is no GL_UNPACK_ROW_LENGTH. Previously for the
sub-region uploading code it would always copy the bitmap and for the
code to upload the whole image it would copy the bitmap unless the
rowstride == bpp*width. Neither path took into account that we don't
need to copy if the rowstride is just an alignment of bpp*width. This
moves the bitmap copying code to a separate function that is used by
both upload methods. It only copies the bitmap if the rowstride is not
just an alignment of bpp*width.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2491
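The test for "the rowstride is just an alignment of bpp*width" amounts
to something like this (an illustrative sketch, not the exact code):

    #include <glib.h>

    static gboolean
    rowstride_is_just_an_alignment (int width, int bpp, int rowstride)
    {
      int alignment;

      /* GL_UNPACK_ALIGNMENT can only be 1, 2, 4 or 8, so the bitmap
         only needs copying if its rowstride is not bpp * width rounded
         up to one of those values */
      for (alignment = 1; alignment <= 8; alignment *= 2)
        if (rowstride == ((width * bpp + alignment - 1) & ~(alignment - 1)))
          return TRUE;

      return FALSE;
    }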
The ffs function comes from POSIX rather than standard C, so if we want to use it in Cogl we
need to provide a fallback for MSVC. This adds a configure check for
the function and then a fallback using a while loop if it is not
available.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2491
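A minimal fallback of the sort described (illustrative; the configure
machinery and the name used in Cogl may differ):

    /* Return the 1-based index of the least significant set bit, or 0
       if no bits are set, matching the usual ffs() semantics */
    static int
    fallback_ffs (int num)
    {
      unsigned int bits = (unsigned int) num;
      int i = 1;

      if (bits == 0)
        return 0;

      while ((bits & 1) == 0)
        {
          bits >>= 1;
          i++;
        }

      return i;
    }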
If we have to copy the bitmap to do the premultiplication then we were
previously using the rowstride of the source image as the rowstride
for the new image. This is wasteful if the source image is a subregion
of a larger image which would make it use a large rowstride. If we
have to copy the data anyway we might as well compact it to the
smallest rowstride. This also prevents the copy from reading past the
end of the last row of pixels.
An internal function called _cogl_bitmap_copy has been added to do the
copy. It creates a new bitmap with the smallest possible rowstride
rounded up to the nearest multiple of 4 bytes. There may be other places
in Cogl that are currently assuming we can read height*rowstride of
the source buffer so they may want to take advantage of this function
too.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2491
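The rowstride of the copy is chosen roughly like this (sketch, with bpp
being the bytes per pixel of the bitmap's format):

    /* Smallest rowstride that can hold one row of pixels, rounded up
       to the nearest multiple of 4 bytes */
    int new_rowstride = (width * bpp + 3) & ~3;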
The builtin vertex attribute for the normals was incorrectly checked
for as 'cogl_normal' however it is defined as cogl_normal_in in the
shader boilerplate and for the name generated by CoglVertexBuffer.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2499