There were a few problems with flushing texture overrides which meant
that sliced textures would not work:
* In _cogl_material_set_layer_texture it ignored the 'overriden'
parameter and always set texture_overridden to FALSE.
* cogl_texture_get_gl_texture wasn't being called correctly in
override_layer_texture_cb. It returns a gboolean to indicate the
error status but this boolean was being assigned to gl_target.
* _cogl_material_layer_texture_equal did not take into account the
override.
* _cogl_material_layer_get_texture_info did not return the overridden
texture so it would always use the first texture slice.
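For reference, a minimal sketch of the intended call pattern for
cogl_texture_get_gl_texture, which the second fix above restores; the
surrounding function is illustrative and the GL typedefs are assumed to
come in via cogl.h:

  #include <cogl/cogl.h>

  /* Sketch only: 'texture' is assumed to be a valid CoglHandle */
  static void
  query_gl_texture (CoglHandle texture)
  {
    GLuint gl_handle = 0;
    GLenum gl_target = 0;

    /* The return value is an error status, not the GL target */
    if (cogl_texture_get_gl_texture (texture, &gl_handle, &gl_target))
      {
        /* gl_handle and gl_target (e.g. GL_TEXTURE_2D) are now valid */
      }
  }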
There was a lot of common code duplicated across all of the backends to
convert the data to a suitable format and wrap it in a CoglBitmap so
that it could be passed to _cogl_texture_driver_upload_subregion_to_gl.
This patch moves the common code to cogl-texture.c so that the virtual
just takes a CoglBitmap that is already in the right format.
This verifies that calling cogl_texture_get_data returns the same data
that was uploaded to the texture. The bottom quarter of the texture is
replaced using cogl_texture_set_region. It tries creating the texture
with different sizes and flags in the hope of exercising the different
texture backends.
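Roughly, the round trip the test performs looks like the following
sketch; the sizes, flags and reference pattern here are illustrative
rather than the test's actual values:

  #include <cogl/cogl.h>
  #include <glib.h>
  #include <string.h>

  static gboolean
  check_get_set_data (unsigned int size, CoglTextureFlags flags)
  {
    unsigned int rowstride = size * 4;
    guint8 *data = g_malloc (size * rowstride);
    guint8 *readback = g_malloc (size * rowstride);
    CoglHandle tex;
    unsigned int i;
    gboolean equal;

    /* Fill the source image with an arbitrary known pattern */
    for (i = 0; i < size * rowstride; i++)
      data[i] = i & 0xff;

    tex = cogl_texture_new_from_data (size, size, flags,
                                      COGL_PIXEL_FORMAT_RGBA_8888,
                                      COGL_PIXEL_FORMAT_RGBA_8888,
                                      rowstride, data);

    /* Replace the bottom quarter of the texture with the top quarter
     * of the source image... */
    cogl_texture_set_region (tex,
                             0, 0,            /* src_x, src_y */
                             0, size * 3 / 4, /* dst_x, dst_y */
                             size, size / 4,  /* dst_width, dst_height */
                             size, size,      /* source buffer size */
                             COGL_PIXEL_FORMAT_RGBA_8888,
                             rowstride, data);
    /* ...and update the reference copy to match */
    memcpy (data + rowstride * (size * 3 / 4), data,
            rowstride * (size / 4));

    /* Download everything again and compare with the reference */
    cogl_texture_get_data (tex, COGL_PIXEL_FORMAT_RGBA_8888,
                           rowstride, readback);
    equal = memcmp (data, readback, size * rowstride) == 0;

    cogl_handle_unref (tex);
    g_free (data);
    g_free (readback);

    return equal;
  }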
Previously cogl_texture_get_data would pretty much directly pass on to
the get_data texture virtual function. This resulted in a lot of common
code duplicated across all of the backends. For example, the method is
expected to return the required data size if the data pointer is NULL
and to calculate its own rowstride if the rowstride is 0. It also needs
to convert the downloaded data if GL can't support that format
directly.
This patch moves the common code to cogl-texture.c so the virtual is
always called with a format that can be downloaded directly by GL and
with a valid rowstride. If the download fails then the virtual can
return FALSE in which case cogl-texture will use the draw and read
fallback.
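For reference, the public contract that the common code now implements
in one place is the usual two-step call sketched below; the behaviour
of the NULL data pointer and the 0 rowstride is the documented
cogl_texture_get_data semantics:

  #include <cogl/cogl.h>
  #include <glib.h>

  /* Sketch: download a texture using the documented two-step pattern */
  static guint8 *
  download_texture (CoglHandle texture, CoglPixelFormat format)
  {
    /* With a NULL data pointer (and a rowstride of 0, meaning "compute
     * it from the width") the call just returns the required size */
    int size = cogl_texture_get_data (texture, format, 0, NULL);
    guint8 *data = g_malloc (size);

    cogl_texture_get_data (texture, format, 0, data);

    return data;
  }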
This greatly speeds up running all the conformance tests by no longer
delaying many of the tests while waiting for a number of dummy frames
to be painted.
We used to skip frames because we thought there was a problem with the
driver's glReadPixels implementation. Although we have seen driver
issues at times, the real reason the delay was needed was that resizing
the stage usually happens asynchronously (because
clutter_stage_set_size() uses a non-synchronous X request). We now
force all X requests to be synchronized for the conformance tests, so
this is no longer a problem and we can avoid these hacks.
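A minimal sketch of the kind of synchronization this relies on; exactly
where the conformance framework does it is not shown here, and
clutter_x11_get_default_display()/XSynchronize() are simply the obvious
way to express it:

  #include <clutter/clutter.h>
  #include <clutter/x11/clutter-x11.h>
  #include <X11/Xlib.h>

  /* Sketch: make every Xlib request complete before returning, so that
   * e.g. clutter_stage_set_size() has taken effect by the time a test
   * paints its single frame. */
  static void
  make_x_synchronous (void)
  {
    Display *xdisplay = clutter_x11_get_default_display ();

    XSynchronize (xdisplay, True);
  }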
For point sprites you are usually drawing the whole texture so you
most often want GL_CLAMP_TO_EDGE. This patch removes the override for
COGL_MATERIAL_WRAP_MODE_AUTOMATIC when point sprites are enabled for a
layer so that it will clamp to edge.
This adds a new API call to enable point sprite coordinate generation
for a material layer:
void
cogl_material_set_layer_point_sprite_coords_enabled (CoglHandle material,
                                                     int        layer_index,
                                                     gboolean   enable);
There is also a corresponding get function.
Enabling point sprite coords simply sets GL_COORD_REPLACE for the
GL_POINT_SPRITE glTexEnv target when flushing the material. There is no
separate application control for glEnable(GL_POINT_SPRITE). Instead it
is left permanently enabled under the assumption that it has no effect
unless GL_COORD_REPLACE is enabled for a texture unit.
http://bugzilla.openedhand.com/show_bug.cgi?id=2047
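A minimal usage sketch, using the call exactly as declared above; the
material setup around it is illustrative:

  #include <cogl/cogl.h>

  /* Sketch: enable point sprite texture coordinate generation on layer
   * 0 so that drawing points with this material samples the whole
   * texture across each point; 'texture' is assumed to be a valid
   * CoglHandle. */
  static CoglHandle
  make_point_sprite_material (CoglHandle texture)
  {
    CoglHandle material = cogl_material_new ();

    cogl_material_set_layer (material, 0, texture);
    cogl_material_set_layer_point_sprite_coords_enabled (material, 0,
                                                         TRUE);

    return material;
  }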
The required .so file was named using @CLUTTER_WINSYS@ but since
bf9d5f3949 the .so should be named with @CLUTTER_SONAME_INFIX@. This
was breaking the build on eglx.
Recently I added a _cogl_debug_dump_materials_dot_file function for
debugging the sparse material state. This extends the dumped state to
also include the graph of layer state.
We were mistakenly only initializing layer->layer_index for new layers
associated with texture units > 0. This had gone unnoticed because
normally layers associated with texture unit 0 have a layer index of 0
too. Mutter was hitting this issue because it was initializing layer 1
before layer 0 for one of its materials, so layer 1 was temporarily
associated with texture unit 0.
* cally-merge:
cally: Add introspection generation
cally: Improving cally doc
cally: Cleaning CallyText
cally: Refactoring "window:create" and "window:destroy" emission code
cally: Use proper backend information on CallyActor
cally: Check HAVE_CONFIG_H on cally-util.c
docs: Fix Cally documentation
cally: Clean up the headers
Add binaries of the Cally examples to the ignore file
docs: Add Cally API reference
Avoid to load cally module on a11y examples
Add accessibility tests
Initialize accessibility support on clutter_init
Rename some methods and includes to avoid -Wshadow warnings
Cally initialization code
Add Cally
This makes test-timeline get the default stage so there is at least one
stage instantiated. Without any stages the master clock will never run,
which was causing this test to fail.
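A sketch of the kind of change this implies in the test's setup; where
exactly test-timeline does it is not shown here:

  #include <clutter/clutter.h>

  /* Sketch: merely instantiating the default stage is enough for the
   * master clock to run timelines; the stage does not need to be
   * shown. */
  static void
  ensure_master_clock_can_run (void)
  {
    ClutterActor *stage = clutter_stage_get_default ();

    (void) stage; /* only needed so at least one stage exists */
  }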
Toolkits and applications not written in C might still need access to
the Cally API to write accessibility extensions based on it for their
own native elements.
The report generation was broken by the split of the various test
units; also, we were using GTest in a way that's not really sanctioned
by upstream.
This commit tries to re-use the targets from GLib's Makefile rules
while compensating for our own setup.
We might want pieces higher in the stack (like Mx) to handle XSettings
events as well, and swallowing them by removing them from the event
queue would make that impossible.
Previously "window:create" and "window:destroy" were emitted on
CallyUtil. Although it works, and CallyUtil already have callbacks to
stage_added/removed signals, I think that it is more tidy/clear to do
that on CallyRoot:
* CallyRoot already has code to manage ClutterStage addition/removal
* In fact, we can see CallyRoot as the object exposing the a11y
information from ClutterStageManager, so it fits better here.
* CallyUtil callbacks these signals are related to key event
listeners (key snooper simulation). One of the main CallyUtil
responsabilities is managing event (connecting, emitting), so I
would prefer to not start to add/mix more functionalities here.
Ideally it would be better to emit all CallyStage methods from
CallyStage, but it is clear that "create" and "destroy" are more easy
to emit from a external object
Previously cogl_set_fog would cause a flush of the Cogl journal and
would directly bang the GL state machine to set up fogging. As part of
the ongoing effort to track most state in CoglMaterial to support
renderlists, this adds an indirection so that cogl_set_fog now just
updates ctx->legacy_fog_state. The fogging state then gets enabled as a
legacy override, similar to how the old depth testing API is handled.
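For context, a minimal sketch of the unchanged public entry point whose
implementation now only updates ctx->legacy_fog_state; the values are
illustrative:

  #include <cogl/cogl.h>

  /* Sketch: the public API is untouched; only its implementation now
   * goes through the material-based legacy override instead of banging
   * the GL state machine directly. */
  static void
  setup_scene_fog (void)
  {
    CoglColor fog_color;

    cogl_color_init_from_4ub (&fog_color, 0x80, 0x80, 0x80, 0xff);
    cogl_set_fog (&fog_color,
                  COGL_FOG_MODE_LINEAR,
                  1.0f,    /* density, only used by exponential modes */
                  10.0f,   /* z_near */
                  100.0f); /* z_far */
  }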
This is a blind patch: I don't know enough about the osx backend, and
it probably doesn't even work these days anyway, but since people have
filed bugs specifically on OSX that imply they don't have a depth or
stencil buffer, this tries to fix that.
Maybe someone will eventually pick up the osx backend again and verify
whether this helps.
http://bugzilla.clutter-project.org/show_bug.cgi?id=1394
Since we'll want to share the fallback logic with CoglVertexArray, this
moves the malloc-based fallback (for when OpenGL doesn't support vertex
or pixel buffer objects) into cogl-buffer.c.
Explicitly warn if we detect that a CoglBuffer is being freed while it
is still mapped. Previously we silently unmapped the buffer, but it's
not something we want to encourage.
This makes CoglBuffer track the last used bind target as a private
property. This is later used when binding a buffer for mapping, instead
of always using the PIXEL_UNPACK target.
This also adds some additional sanity checks that code doesn't try to
nest binds to the same target or bind a buffer to multiple targets at
the same time.
Normally the asynchronous nature of X means that setting the Clutter
stage size may only really take effect an indefinite amount of time
later, but since the tests are so short-lived and may only render a
single frame, this is not an acceptable semantic.
This way we should be able to remove all the hacky sleeps and
frame-count delays from our tests.
Since we now run every test in a separate process there is no need to
try to avoid state leakage between tests. This removes the code to
clean up all children of the stage and disconnect handlers from the
stage paint signal. We now explicitly print a warning if the user tries
to run multiple tests in one process.
This adds three new feature flags, COGL_FEATURE_TEXTURE_NPOT_BASIC,
COGL_FEATURE_TEXTURE_NPOT_MIPMAP and COGL_FEATURE_TEXTURE_NPOT_REPEAT,
that can tell you if your hardware supports non-power-of-two (npot)
textures, npot textures + mipmaps, and npot textures + wrap modes other
than CLAMP_TO_EDGE.
The pre-existing COGL_FEATURE_TEXTURE_NPOT feature implies all of the
above.
By default GLES 2 core supports npot textures, but mipmaps and repeat
modes can only be used with power-of-two textures. This patch also
makes GLES check for the GL_OES_texture_npot extension to determine if
mipmaps and repeating are supported with npot textures.
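A minimal sketch of how the new flags can be queried through the
existing cogl_features_available() entry point:

  #include <cogl/cogl.h>

  /* Sketch: check whether an npot texture may use repeat wrap modes,
   * also accepting the old all-or-nothing flag which implies all three
   * of the new ones. */
  static gboolean
  can_repeat_npot_textures (void)
  {
    return cogl_features_available (COGL_FEATURE_TEXTURE_NPOT_REPEAT) ||
           cogl_features_available (COGL_FEATURE_TEXTURE_NPOT);
  }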
glDisableVertexAttribArray was defined as glEnableVertexAttribArray so
it would probably cause crashes if it was ever used. Presumably nothing
is using these yet because the generic attributes are not yet tied to
shader attributes in a predictable way.
We now always aim to use pkg-config based configuration when possible,
but when that's not possible configure.ac now knows the difference
between GLES_CM libraries that contain EGL symbols (i.e. a separate EGL
library doesn't need to be found) and GLESv1_CM libraries that don't
contain EGL symbols.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2160
In the .json file used for the test there is no null -> "base"
transition defined, only a "clicked" -> "base" one, so when the
"clicked" state is removed the "base" state will also disappear.
I was fed up with having to cd into the tests/conform or
tests/interactive directories to launch a specific test. Now, with the
power of the abs_ variants of builddir and srcdir, we can run a
specific test from any directory.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2159
For testing purposes, either to identify bugs in Cogl or the driver, or
to simulate lack of PBO support, COGL_DEBUG=disable-pbos can be used to
fall back to malloc instead.
The pango renderer was causing lots of override materials to be allocated
because the vertex_buffer API converts AUTOMATIC mode into REPEAT for
backwards compatibility. By explicitly setting the wrap mode to
CLAMP_TO_EDGE when creating the glyph_material, the vertex_buffer API
will leave it untouched.
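A sketch of the kind of explicit wrap-mode setup described; the
material creation around the call is illustrative:

  #include <cogl/cogl.h>

  /* Sketch: with the wrap mode set explicitly the vertex_buffer API's
   * backwards-compatible AUTOMATIC -> REPEAT conversion no longer
   * applies, so no override materials have to be allocated. */
  static CoglHandle
  make_glyph_material (CoglHandle glyph_texture)
  {
    CoglHandle material = cogl_material_new ();

    cogl_material_set_layer (material, 0, glyph_texture);
    cogl_material_set_layer_wrap_mode (material, 0,
                                       COGL_MATERIAL_WRAP_MODE_CLAMP_TO_EDGE);

    return material;
  }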
This allows you to tell Cogl that you are planning to replace all the
buffer's data once it is mapped with cogl_buffer_map. This means that
if the buffer is currently being accessed by the GPU, the driver
doesn't have to stall and wait for it to finish before the buffer can
be accessed from the CPU, and can instead potentially allocate a new
buffer with undefined data and map that.
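A sketch of the intended usage, assuming the hint is passed at map time
alongside the existing access flags; the COGL_BUFFER_MAP_HINT_DISCARD
name and the experimental CoglBuffer map signature used here are
assumptions:

  #include <cogl/cogl.h>
  #include <string.h>

  /* Sketch only: 'buffer' is assumed to be an existing CoglBuffer
   * handle holding at least 'size' bytes. */
  static void
  replace_buffer_contents (CoglHandle buffer,
                           const guint8 *new_data,
                           size_t size)
  {
    /* The discard hint tells the driver the old contents are not
     * needed, so it can hand back fresh storage instead of stalling on
     * the GPU */
    void *mapped = cogl_buffer_map (buffer,
                                    COGL_BUFFER_ACCESS_WRITE,
                                    COGL_BUFFER_MAP_HINT_DISCARD);

    if (mapped)
      {
        memcpy (mapped, new_data, size);
        cogl_buffer_unmap (buffer);
      }
  }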
There was a missing '* 4' and '* i' in the for() loops that initialized
the first test buffer, so it contained uninitialized data, causing the
test to fail.
Make Cally follow the single-include header file policy of Clutter and
Cogl; this means making cally.h the single include header, and requires
a new cally-main.h file for the functions defined by cally.h.
Also:
• clean up the licensing notice and remove the FSF address;
• document the object structures (instance and class);
• G_GNUC_CONST-ify the get_type() functions;
• reduce the padding for CallyActor sub-classes;
• reduce the number of headers included.
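A minimal sketch of what the policy means for code using the library;
the install path of the header is assumed:

  /* Applications and toolkits now include the single header... */
  #include <cally/cally.h>

  /* ...instead of picking individual headers such as cally-actor.h */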