We want any run-time warnings to cause the conformance tests to fail.
We are currently setting G_DEBUG in test_utils_init and this would
previously cause the fatal-warnings debug option to be set. However
since commit 47444dac of glib this no longer works because the
environment variable is read in a magic constructor of libglib so it
is too late to try to set it there. This patch therefore also sets it
in run-tests.sh to avoid the problem.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 95a6d962f5bc2f21bfcdb2f0bc6b55cfa28792b3)
The free function for atlas textures was previously always assuming
that there would be a valid sub_texture pointer, but this might not be
the case if the texture was never successfully allocated. This was
causing Cogl to crash if the application tried to make a texture that
could not fit in the atlas using the automagic texture API.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit a7a8b7aefc8cb03fe8b716bee06b3449a7dba85f)
It looks like Cogl doesn't currently clean up an unallocated atlas
texture properly, so if the allocation fails it will then crash when
the texture is freed. This adds a conformance test to verify that all of
the various texture types can be freed without allocating them.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit bfada22d9a7f60baad5ae5615e45f10327e23e36)
Previously the unit tests were using libdl without directly linking to
it. It looks like this ends up working because one of Cogl's
dependencies ends up pulling in -ldl via libtool. However in some
configurations it looks like this wasn't happening.
To avoid this problem we can just use GModule to resolve the symbols.
g_module_open is documented to return a handle to the ‘main program’
when NULL is passed as the filename and looking at the code it seems
that this ends up using RTLD_DEFAULT so it will have the same effect.
The in-tree copy of glib already has the code for gmodule so this
shouldn't cause problems for --disable-glib.
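A minimal sketch of the approach (the wrapper function is hypothetical;
only the GModule calls matter here):

  #include <gmodule.h>

  /* Passing NULL to g_module_open() returns a handle to the main
   * program, much like looking a symbol up with RTLD_DEFAULT. */
  static void *
  resolve_symbol (const char *name)
  {
    GModule *module = g_module_open (NULL, 0);
    void *symbol = NULL;

    if (module == NULL || !g_module_symbol (module, name, &symbol))
      return NULL;

    return symbol;
  }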
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit b14ece116ed3e4b18d59b645e77b3449fac51137)
This adds a new function to enable per-vertex point size on a
pipeline. This can be set with
cogl_pipeline_set_per_vertex_point_size(). Once enabled the point size
can be set either by drawing with an attribute named
'cogl_point_size_in' or by writing to the 'cogl_point_size_out'
builtin from a snippet.
There is a feature flag which must be checked for before using
per-vertex point sizes. This will only be set on GL >= 2.0 or on GLES
2.0. GL will only let you set a per-vertex point size from GLSL by
writing to gl_PointSize. This is only available in GL2 and not in the
older GLSL extensions.
The per-vertex point size has its own pipeline state flag so that it
can be part of the state that affects vertex shader generation.
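As a hedged sketch, usage of the new api might look something like this
(the feature ID, function signatures and the draw call are assumptions
based on the description above):

  #include <cogl/cogl.h>

  static void
  draw_two_points (CoglContext *ctx, CoglFramebuffer *fb)
  {
    static const float positions[] = { 10, 10, 50, 50 };
    static const float point_sizes[] = { 4.0f, 16.0f };
    CoglPipeline *pipeline;
    CoglAttributeBuffer *position_buffer, *size_buffer;
    CoglAttribute *attributes[2];
    CoglPrimitive *primitive;

    if (!cogl_has_feature (ctx, COGL_FEATURE_ID_PER_VERTEX_POINT_SIZE))
      return;

    pipeline = cogl_pipeline_new (ctx);
    /* Enable per-vertex point sizes for this pipeline */
    cogl_pipeline_set_per_vertex_point_size (pipeline, TRUE, NULL);

    position_buffer =
      cogl_attribute_buffer_new (ctx, sizeof (positions), positions);
    size_buffer =
      cogl_attribute_buffer_new (ctx, sizeof (point_sizes), point_sizes);

    attributes[0] = cogl_attribute_new (position_buffer, "cogl_position_in",
                                        2 * sizeof (float), 0,
                                        2, COGL_ATTRIBUTE_TYPE_FLOAT);
    /* Each point's size comes from the cogl_point_size_in attribute */
    attributes[1] = cogl_attribute_new (size_buffer, "cogl_point_size_in",
                                        sizeof (float), 0,
                                        1, COGL_ATTRIBUTE_TYPE_FLOAT);

    primitive =
      cogl_primitive_new_with_attributes (COGL_VERTICES_MODE_POINTS,
                                          2, attributes, 2);
    cogl_framebuffer_draw_primitive (fb, pipeline, primitive);
  }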
Having to enable the per-vertex point size with a separate function is
a bit awkward. Ideally it would work like the color attribute where
you can just set it for every vertex in your primitive with
cogl_pipeline_set_color or set it per-vertex by just using the
attribute. This is harder to get working with the point size because
we need to generate a different vertex shader depending on what
attributes are bound. I think if we wanted to make this work
transparently we would still want to internally have a pipeline
property describing whether the shader was generated with per-vertex
support so that it would work with the shader cache correctly.
Potentially we could make the per-vertex property internal and
automatically make a weak pipeline whenever the attribute is bound.
However we would then also need to automatically detect when an
application is writing to cogl_point_size_out from a snippet.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8495d9c1c15ce389885a9356d965eabd97758115)
Conflicts:
cogl/cogl-context.c
cogl/cogl-pipeline-private.h
cogl/cogl-pipeline.c
cogl/cogl-private.h
cogl/driver/gl/cogl-pipeline-progend-fixed.c
cogl/driver/gl/gl/cogl-pipeline-progend-fixed-arbfp.c
This moves the code in test-bitmask into a UNIT_TEST() directly in
cogl-bitmask.c which will now be run as a tests/unit/ test.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 693c85e0cde8a1ffbffc03a5f8fcc1f92e8d0ac7)
Includes a fix to build the conform tests with -I$(top_builddir)/cogl so
that cogl-gl-header.h can be found.
This adds a white-box unit test that verifies that GL_BLEND is disabled
when drawing an opaque rectangle, enabled when drawing a transparent
rectangle and then disabled again when drawing a transparent rectangle
but with a blend string that effectively disables blending.
This shares the test utilities and launcher infrastructure we are using
for conformance tests so we get consistent reporting and so unit tests
will be run against a range of different drivers.
This adds a --enable-unit-tests configure option which is enabled by
default. If it is disabled then all UNIT_TEST() entries become static
inline functions that we expect the compiler to discard since they
won't be referenced by anything.
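As a rough sketch (names, flag arguments and the state check are
assumptions, not taken from the patch), a white-box unit test looks
something like this:

  UNIT_TEST (check_gl_blend_enable,
             0, /* requirements */
             0) /* known failure requirements */
  {
    CoglPipeline *pipeline = cogl_pipeline_new (test_ctx);

    /* Drawing a fully opaque rectangle shouldn't need GL_BLEND */
    cogl_pipeline_set_color4f (pipeline, 1, 1, 1, 1);
    cogl_framebuffer_draw_rectangle (test_fb, pipeline, 0, 0, 10, 10);
    cogl_framebuffer_finish (test_fb);

    /* ...the real test then checks the driver's cached GL_BLEND
     * enable state here... */

    cogl_object_unref (pipeline);
  }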
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 9047cce06bbf9051ec77e622be2fdbb96ed767a8)
This adds a test to make sure that if the same pipeline is used to draw
two primitives, one which doesn't need blending because it has an opaque
color associated with it and another using a color attribute that
requires blending, then Cogl recognizes that it needs to enable blending
for the second primitive even though the pipeline hasn't changed.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit c15d91f1c6293bebd4494d1e20586121483cddef)
This enables basic Emscripten support in Cogl via the SDL winsys.
Assuming you have set up an emscripten toolchain you can configure Cogl
like this:
emconfigure ./configure --enable-debug --enable-emscripten
Building the examples will build .html files that can be loaded directly
by a WebGL enabled browser.
Note: at this point the emscripten support has just barely been smoke
tested so it's expected that as we continue to build on this we will
learn about more things we need to change in Cogl to fully support this
environment.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit a3bc2e7539391b074e697839dfae60b69c37cf10)
cogl_framebuffer_add_fence creates a synchronisation fence, which will
invoke a user-specified callback when the GPU has finished executing all
commands provided to it up to that point in time.
Support is currently provided for GL 3.x's GL_ARB_sync extension, and
EGL's EGL_KHR_fence_sync (when used with OpenGL ES).
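A hedged sketch of the intended usage (the callback signature is an
assumption; only the function name comes from this description):

  /* Invoked once the GPU has finished all of the work submitted
   * before the fence was added */
  static void
  on_frame_finished (CoglFence *fence, void *user_data)
  {
    CoglBool *done = user_data;
    *done = TRUE;
  }

  /* ...after submitting a frame's drawing commands: */
  cogl_framebuffer_add_fence (fb, on_frame_finished, &done);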
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=691752
(cherry picked from commit e6d37470da9294adc1554c0a8c91aa2af560ed9f)
This adds compiler symbol deprecation declarations for old Cogl APIs so
that users can easily see via compiler warnings when they are using these
symbols, and also see a hint about what the apis should be replaced with.
So that users of Cogl can manage when to show these warnings this
introduces a scheme borrowed from glib whereby you can declare what
version of the Cogl api you are using:
COGL_VERSION_MIN_REQUIRED can be defined to indicate the oldest Cogl api
that the application wants to use. Cogl will only warn about
deprecations for symbols that were deprecated earlier than this required
version. If this is left undefined then by default Cogl will warn about
all deprecations.
COGL_VERSION_MAX_ALLOWED can be defined to indicate the newest api
that the application uses. If the application uses symbols newer than
this then Cogl will give a warning about that.
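For example, an application might declare the range of Cogl api it
targets before including the Cogl headers (the version constant names
are assumptions):

  /* The application is written against the 1.10 api and must not use
   * anything newer than 1.14 */
  #define COGL_VERSION_MIN_REQUIRED COGL_VERSION_1_10
  #define COGL_VERSION_MAX_ALLOWED COGL_VERSION_1_14

  #include <cogl/cogl.h>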
This patch removes the need to maintain the COGL_DISABLE_DEPRECATED
guards around deprecated symbols.
This patch also fixes a few uses of deprecated symbols under examples/.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Some of the examples and tests are using functions from -lm. With some
linkers, if we don't explicitly link against it an error will be
reported. This patch adds the library to all of the examples even
though not all of them use math functions because I don't think it
will do any harm and it will save us having to remember to add it if
an example later starts using some math functions.
https://bugzilla.gnome.org/show_bug.cgi?id=697330
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 66a1aeaee1f7cdcdd505f5745277723a43d714b6)
Conflicts:
examples/Makefile.am
tests/conform/Makefile.am
tests/micro-perf/Makefile.am
When a pipeline is added to the cache, a normal copy would previously be
made to use as the key in the hash table. This copy keeps a reference
to the real pipeline which means all of the resources it contains are
retained forever, even if they aren't necessary to generate the hash.
This patch changes it to create a trimmed down copy that only has the
state necessary to generate the hash. A new function called
_cogl_pipeline_deep_copy is added which makes a new pipeline that is
directly a child of the root pipeline. It then copies over the
pertinent state from the original pipeline. The pipeline state is
copied using the existing _cogl_pipeline_copy_differences function.
There was no equivalent function for the layer state so I have added
one.
That way the pipeline key doesn't have the texture data state and it
doesn't hold a reference to the original pipeline so it should be much
cheaper to keep around.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e27e01c1215e7e7c7c0183ded11dd769bb112c5c)
Currently when a unique pipeline is created and added to the pipeline
cache that pipeline will live forever which includes keeping a
reference to any large resources that the pipeline has such as
textures. These textures don't actually need to be kept for the
pipeline to be used as a key in the hash table so ideally we wouldn't
do this. This test case tries rendering with a pipeline that has
textures and then checks whether the textures are successfully
destroyed after the pipeline is unreffed. The test is currently marked
as a known failure because the pipeline cache will prevent them from
being destroyed.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 0056f805dadd46b966d57f66b02890b8c29971ac)
Adding a layer difference may mean the pipeline overrides all of the
layers of its parent which might make the parent redundant so we
should try to prune the hierarchy.
This is particularly important for CoglGst because whenever a new
frame is ready it tries to make a copy of the pipeline it last used
and then replace all of the textures in the layers. Without this patch
the new pipeline would keep the parent pipeline alive which means also
keeping the old textures alive so all of the frames of the video would
effectively be leaked.
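The usage pattern in question looks roughly like this (a sketch; the
variable names are illustrative):

  /* For each new video frame: copy the last pipeline, swap in the new
   * frame's texture and drop the old pipeline. Without the pruning the
   * old pipeline would live on as an ancestor of the copy and keep its
   * texture alive too. */
  CoglPipeline *new_pipeline = cogl_pipeline_copy (old_pipeline);
  cogl_pipeline_set_layer_texture (new_pipeline, 0, new_frame_texture);
  cogl_object_unref (old_pipeline);
  old_pipeline = new_pipeline;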
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 576c7b55aa835448c977f1d79d128dffd40e7cd8)
The current recommendation for pipelines is that once they have been
used for painting then they should be considered immutable. If you
want to modify a pipeline you should instead make a copy and unref the
original pipeline. Internally we try to check whether the modified
copy replaces all of the properties of the parent and prune a
redundant ancestor hierarchy. Pruning the hierarchy is particularly
important if the pipelines contain textures because otherwise the
textures may be leaked when the parent pipeline keeps a reference to
them.
This test verifies that usage pattern by creating a chain of pipeline
copies each with their own replacement texture. Some user data is then
set on the textures with a callback so that we can verify that once
the original pipelines are destroyed then the textures are also
destroyed.
The test is currently failing because Cogl doesn't correctly prune
ancestry for layer state authority.
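A sketch of how the test can notice the textures being destroyed (the
counter and key are illustrative; cogl_object_set_user_data takes a
destroy callback that fires when the texture is freed):

  static int n_textures_destroyed = 0;
  static CoglUserDataKey destroyed_key;

  static void
  texture_destroyed_cb (void *user_data)
  {
    n_textures_destroyed++;
  }

  /* ...for each replacement texture in the chain: */
  cogl_object_set_user_data (COGL_OBJECT (texture), &destroyed_key,
                             GINT_TO_POINTER (1), texture_destroyed_cb);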
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 3fbec92acb90008492eb125025f92b42d6e07930)
This adds hook points to add global function and variable declarations
to either the fragment or vertex shader. The declarations can then be
used by subsequent snippets. Only the ‘declarations’ string of the
snippet is used and the code is directly put in the global scope near
the top of the shader.
The reason this is necessary rather than just adding a normal snippet
with the declarations is that for the other hooks Cogl assumes that
the snippets are independent of each other. That means if a snippet
has a replace string then it will assume that it doesn't even need to
generate the code for earlier hooks which means the global
declarations would be lost.
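A hedged example, assuming the new hooks are exposed as
COGL_SNIPPET_HOOK_VERTEX_GLOBALS and COGL_SNIPPET_HOOK_FRAGMENT_GLOBALS:

  /* Declare a uniform and a helper function in the fragment shader's
   * global scope; only the declarations string is used for this hook */
  CoglSnippet *globals =
    cogl_snippet_new (COGL_SNIPPET_HOOK_FRAGMENT_GLOBALS,
                      "uniform float brightness;\n"
                      "vec4\n"
                      "brighten (vec4 color)\n"
                      "{\n"
                      "  return vec4 (color.rgb * brightness, color.a);\n"
                      "}\n",
                      NULL /* post string: ignored for this hook */);
  cogl_pipeline_add_snippet (pipeline, globals);
  cogl_object_unref (globals);

A later fragment snippet on the same pipeline can then call brighten()
from its own code.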
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ebb82d5b0bc30487b7101dc66b769160b40f92ca)
This fixes some minor errors and warnings that were preventing Cogl
building with mingw32:
• cogl-framebuffer-gl.c was not including cogl-texture-private.h.
Presumably something else ends up including that when building for
GLX.
• The WGL winsys was not including cogl-error-private.h
• A call to strsplit in the WGL winsys was wrong.
• For some reason the test-wrap-rectangle-textures test was trying to
include the GDKPixbuf header.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5380343399f834d9f96ca3b137d49c9c2193900a)
The journal manually flushes its own modelview matrix state so it
needs to mark the state as dirty so that if a primitive is drawn with
the same matrix state as the last primitive it will correctly reflush
it.
https://bugzilla.gnome.org/show_bug.cgi?id=693612
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c7290c994c742456ff0977cb394c289afb377049)
This adds a conformance test which draws a rectangle using the journal
in-between two rectangles drawn with primitives without changing any
other state. Currently this is failing because the modelview matrix
state is not correctly flushed.
The journal also flushes its own clip state so the test additionally
puts everything in a clip and verifies that that worked. This is not
currently broken but we might as well test it.
https://bugzilla.gnome.org/show_bug.cgi?id=693612
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit b703f9a1a98894a12021cbdd632e1d59214e612f)
For the boolean environment variables that affect the running of the
conformance tests we now explicitly check the value of those variables
so that "0", "off" and "false" (upper or lower case) will be considered
as FALSE instead of just interpreting set as TRUE and unset as FALSE. If
the value is set to something entirely spurious then we abort with a
warning message. Thanks to Artie Eoff for suggesting this change.
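Roughly the kind of check now performed (the helper name is
hypothetical):

  #include <glib.h>

  static gboolean
  test_utils_getenv_boolean (const char *name, gboolean default_value)
  {
    const char *value = g_getenv (name);

    if (value == NULL)
      return default_value;

    if (g_ascii_strcasecmp (value, "1") == 0 ||
        g_ascii_strcasecmp (value, "on") == 0 ||
        g_ascii_strcasecmp (value, "true") == 0)
      return TRUE;

    if (g_ascii_strcasecmp (value, "0") == 0 ||
        g_ascii_strcasecmp (value, "off") == 0 ||
        g_ascii_strcasecmp (value, "false") == 0)
      return FALSE;

    g_error ("Spurious value \"%s\" for %s", value, name);

    return default_value;
  }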
https://bugzilla.gnome.org/show_bug.cgi?id=693894
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 25a8cf3607a482ca390eb9841295d1b365cbe53b)
This updates the examples and test-journal, which now use the _FRAME_SYNC
events to throttle rendering, so that they no longer install a redundant
idle handler to paint when they get a _FRAME_SYNC event and instead they
paint directly when the event is received.
(cherry picked from commit 579eb75e6ac6f50d7a9cfe5093435126158b3c96)
This updates test-journal to use the new
cogl_onscreen_add_frame_callback() api to use _SYNC events for
throttling.
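A hedged sketch of the pattern (the application state and paint
function are placeholders):

  static void
  frame_event_cb (CoglOnscreen *onscreen,
                  CoglFrameEvent event,
                  CoglFrameInfo *info,
                  void *user_data)
  {
    Data *data = user_data;

    /* Paint directly from the sync event instead of queueing an idle */
    if (event == COGL_FRAME_EVENT_SYNC)
      paint (data);
  }

  /* ...during setup: */
  cogl_onscreen_add_frame_callback (onscreen, frame_event_cb, &data,
                                    NULL /* destroy notify */);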
(cherry picked from commit 24e6a5376ed3982f5647e2c2df0618272c381636)
The GLES2 spec only guarantees calling glReadPixels with GL_RGBA and
an implementation-specific format. Mesa seems to now reject reading
with GL_RGB so the test had started failing. This fixes it to just
read using GL_RGBA.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 745cbcbdee0f5b4df4c6a735b03709248a551884)
test_utils_compare_pixel makes the error report slightly easier to
read because it displays the values for the whole pixel instead of
reporting that there was a difference somewhere.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit a094ac9688a5e6b53be1cd379dc045973aab2fae)
The rounding used when storing 10-bit per component data into an 8-bit
per component texture seems to have changed in recent versions of Mesa
which was causing this test to fail. I've also noticed this failing on
the NVidia binary driver. This patch adds some fuzziness to the
comparison so that it will pass. There is a new test_utils function
called test_utils_compare_pixel_and_alpha which is the same as
test_utils_compare_pixel except that it also compares the alpha
component.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ce626fb3939b0f200d85ccdf32809608b879212d)
It looks like it's not meant to be valid to create a framebuffer with
an alpha-only texture as a render target on GLES. Since the following
Mesa commit, this requirement is now enforced so that the
test_framebuffer_get_bits test fails:
http://cgit.freedesktop.org/mesa/mesa/commit/?id=cf300eaa
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit cfb0859cab843b000f4baa3ca155a245730edcfa)
This makes autogen.sh look for automake-1.13 and also updates all
Makefile.am files to no longer use the INCLUDES variable, which automake
1.13 warns is deprecated in favour of AM_CPPFLAGS.
https://bugzilla.gnome.org/show_bug.cgi?id=690891
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 5de5569e960102afe979a5f2f0403e1defebca62)
This marks that test-backface-culling is currently known to fail without
NPOT texture support. This allows us to do a 1.13 snapshot release before
we find a fix for this.
The GL framebuffer driver now makes sure to bind the framebuffer
before counting the number of bits. Previously it would just query the
number of bits for whatever framebuffer happened to be used last.
In addition the virtual for querying the framebuffer bits has been
modified to take a pointer to a structure instead of a separate
pointer to each component. This should make it slightly more efficient
and easier to maintain.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e9c58b2ba23a7cebcd4e633ea7c3191f02056fb5)
This adds a test which creates two offscreen framebuffers, one with
just an alpha component texture and the other with a full RGBA
texture. The bit sizes of both framebuffers are then checked to verify
that they do or don't have bits for the RGB components as expected.
The test currently fails because the framebuffer functions don't bind
the framebuffer before querying so they just query whichever
framebuffer happened to be used last.
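The checks boil down to something like this sketch, assuming the
cogl_framebuffer_get_*_bits getters (rgba_fb and alpha_fb being the two
offscreen framebuffers):

  /* The RGBA framebuffer should have bits for every component... */
  g_assert_cmpint (cogl_framebuffer_get_red_bits (rgba_fb), >, 0);
  g_assert_cmpint (cogl_framebuffer_get_alpha_bits (rgba_fb), >, 0);

  /* ...whereas the alpha-only framebuffer should have no RGB bits */
  g_assert_cmpint (cogl_framebuffer_get_red_bits (alpha_fb), ==, 0);
  g_assert_cmpint (cogl_framebuffer_get_green_bits (alpha_fb), ==, 0);
  g_assert_cmpint (cogl_framebuffer_get_blue_bits (alpha_fb), ==, 0);
  g_assert_cmpint (cogl_framebuffer_get_alpha_bits (alpha_fb), >, 0);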
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 7ca01373efe908efc9f18f2cb7f4a51c274ef677)
This adds a conformance test which renders a rectangle texture using
the two wrap modes clamp-to-edge and repeat. It then verifies that the
correct region of the texture is drawn for the texture coordinates
that are > 1.0.
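A sketch of the draw that the test performs (rect_tex being the
rectangle texture; only the repeat case is shown here):

  CoglPipeline *pipeline = cogl_pipeline_new (test_ctx);

  cogl_pipeline_set_layer_texture (pipeline, 0, rect_tex);
  cogl_pipeline_set_layer_wrap_mode (pipeline, 0,
                                     COGL_PIPELINE_WRAP_MODE_REPEAT);

  /* Texture coordinates are documented to be normalized even for
   * rectangle textures, so s = 0..2 should repeat the texture twice */
  cogl_framebuffer_draw_textured_rectangle (test_fb, pipeline,
                                            0, 0, 100, 50,
                                            0, 0, 2, 1);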
The test currently always fails. The cogl_framebuffer_draw_rectangle
function is documented to always take normalized texture coordinates
regardless of the coordinate system of the texture. This works
correctly if all of the texture coordinates are in the range [0.0,1.0]
because cogl-primitives uses a different code path in that case.
However if the multiple-quad code path is taken then the coordinates
actually need to be un-normalized for it to work.
There is a comment in cogl_meta_texture_foreach_in_region() which
implies that the incoming coordinates should always be normalized.
The documentation for the callback says that the resulting sub-texture
coordinates will always be in the coordinate system of the low-level
texture. However it doesn't work out like this because the meta
texture function uses the span iterating function which always returns
normalized coordinates. It looks like there needs to be some more
conversions going on somewhere.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d2059bb32b8015060e10f41dbbb68d4230b47ddb)
_cogl_texture_spans_foreach_in_region first swaps over the texture
coordinates if they are flipped so that it can always iterate in a
positive direction. It sets a flag so that it will remember that the
coordinates are flipped. Before invoking the callback it is meant to
reflip the coordinates so that the callee doesn't need to be aware of
the flipping. However it was only flipping the sub-texture coordinates
and not the virtual coordinates. This was causing sliced textures to
draw their slice rectangles with the wrong geometry.
test-backface-culling was failing because of this.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e7338a1e09cb22151374aefa6f0bb58485af9189)
There were a few problems with the sub texture iterating code of
sliced textures which were causing some conformance tests to fail when
NPOT textures are disabled:
• The spans are stored in un-normalized coordinates and the
coordinates passed to the foreach function are normalized. The
function was trying to un-normalize them before passing them to the
span iterator code but it was using the wrong factor which was
causing it to actually doubly normalize them.
• The shim function to renormalize the coordinates before passing them
to the callback was renormalizing the sub-texture coordinates
instead of the virtual coordinates. The sub-texture coordinates are
already in the right scale for whatever is the underlying texture so
we don't need to touch them. Instead we need to normalize the
virtual coordinates because these are coming from the un-normalized
coordinates that we passed to the span iterating code.
• The normalize factors passed to the span iterating code were always 1.
The code uses this normalizing factor to round the incoming
coordinates to the nearest multiple of a full texture. It divides
the coordinates by the factor rather than multiplying so it looks
like we should be passing the virtual texture size here.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c9773566b0ec0a17b34c440090529de8cff9609e)
Consistent with how we lazily allocate framebuffers this patch allows us
to instantiate textures but still specify constraints and requirements
before allocating storage so that we can be sure to allocate the most
appropriate/efficient storage.
This adds a cogl_texture_allocate() function that is analogous to
cogl_framebuffer_allocate() which can optionally be called to explicitly
allocate storage and catch any errors. If this function isn't used
explicitly then Cogl will implicitly ensure textures are allocated
before the storage is needed.
It is generally recommended to rely on lazy storage allocation or at
least perform explicit allocation as late as possible so Cogl can be
fully informed about the best way to allocate storage.
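A sketch of the explicit path (tex being any not-yet-allocated
CoglTexture):

  CoglError *error = NULL;

  /* Constraints and requirements can be set on tex here, while no
   * storage exists yet, and then any allocation error caught: */
  if (!cogl_texture_allocate (tex, &error))
    {
      g_warning ("Failed to allocate texture: %s", error->message);
      cogl_error_free (error);
    }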
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 1fa7c0f10a8a03043e3c75cb079a49625df098b7)
Note: This reverts the cogl_texture_rectangle_new_with_size API change
that dropped the CoglError argument and keeps the semantics of
allocating the texture immediately. This is because Mutter currently
uses this API so we will probably look at updating this later once
we have a corresponding Mutter patch prepared. The other API changes
were kept since they only affected experimental api.
Neither of the texture drivers was handling errors correctly when a
CoglPixelBuffer was used to set the contents of an entire texture.
This was causing it to hit an assertion failure in the pixel buffer
tests.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 888733d3c3b24080d2f136cedb3876a41312e4cf)
test-pixel-buffer previously had two tests, one to check filling the
pixel buffer by mapping it and another to fill it by just setting the
data. These tests were set up in a kind of confusing way where they
would try to paint both steps and then validate them together using
colors looked up from a table. This patch separates out the two tests
and gets rid of the tables which hopefully makes them a bit easier to
follow.
The contents of the bitmap are now set to an image which has a
different colour for each of its four quadrants instead of just a
single colour in the hope that this will be a bit more of an extensive
test.
The old code had a third test that was commented out. This test has
been removed.
The textures are now created using cogl_texture_2d_new_* which means
they won't be in the atlas. This exposes a bug where setting the
entire contents of the texture won't handle errors properly and it
will hit an assertion. The previous code using the atlas would end up
only setting a sub-region of the larger atlas texture so the bug
wouldn't be hit. To make sure we still test this code path there is
now a third test which explicitly sets a sub-region of the texture
using the bitmap.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8beb3a4cc20f539a50645166485b95e8e5b25779)
This ports the test-texture-get-set-data clutter test to be a standalone
Cogl test.
(cherry picked from commit 40defa3dbd355754d0f7611d3c50de35db514e4a)
This allows apps to catch out-of-memory errors when allocating textures.
Textures can be pretty huge at times and so it's quite possible for an
application to try and allocate more memory than is available. It's also
very possible that the application can take some action in response to
reduce memory pressure (such as freeing up texture caches perhaps) so
we shouldn't just automatically abort like we do for trivial heap
allocations.
These public functions now take a CoglError argument so applications can
catch out of memory errors:
cogl_buffer_map
cogl_buffer_map_range
cogl_buffer_set_data
cogl_framebuffer_read_pixels_into_bitmap
cogl_pixel_buffer_new
cogl_texture_new_from_data
cogl_texture_new_from_bitmap
Note: we've been quite conservative with how many apis we let throw OOM
CoglErrors since we don't really want to put a burden on developers to
be checking for errors with every cogl api call. So long as there is
some lower level api for apps to use that lets them catch OOM errors
for everything necessary, that's enough, and we don't have to make the
more convenient apis more awkward to use.
The main focus is on bitmaps and texture allocations since they
can be particularly large and prone to failing.
A new cogl_attribute_buffer_new_with_size() function has been added in
case developers need to catch OOM errors when allocating attribute
buffers: they can first use _buffer_new_with_size() (which doesn't take a
CoglError) followed by cogl_buffer_set_data(), which will lazily allocate
the buffer storage and report OOM errors.
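A hedged sketch of that pattern (assuming cogl_buffer_set_data now
returns a boolean and takes a CoglError):

  CoglAttributeBuffer *buffer =
    cogl_attribute_buffer_new_with_size (test_ctx, sizeof (vertices));
  CoglError *error = NULL;

  /* The storage is only allocated here, so an out-of-memory error can
   * be caught at this point and the application can react instead of
   * aborting */
  if (!cogl_buffer_set_data (COGL_BUFFER (buffer), 0,
                             vertices, sizeof (vertices), &error))
    {
      g_warning ("Buffer allocation failed: %s", error->message);
      cogl_error_free (error);
    }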
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f7735e141ad537a253b02afa2a8238f96340b978)
Note: since we can't break the API for Cogl 1.x, the main purpose of
cherry-picking this patch is to keep in line with changes on the master
branch so that we can easily cherry-pick patches. All the api changes
relating to stable apis released on the 1.12 branch have been reverted
as part of cherry-picking this patch, so this mostly just applies the
internal plumbing changes that enable us to correctly propagate OOM
errors.
This adds a conformance test with an alpha-component texture. The
texture is rendered using a pipeline with the same layer combine mode
as cogl-pango.
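A sketch of the pipeline setup (the combine string mirrors the one
cogl-pango uses for glyph textures):

  CoglPipeline *pipeline = cogl_pipeline_new (test_ctx);

  cogl_pipeline_set_layer_texture (pipeline, 0, alpha_texture);
  /* Modulate the pipeline color by the texture's alpha component */
  cogl_pipeline_set_layer_combine (pipeline, 0,
                                   "RGBA = MODULATE (PREVIOUS, TEXTURE[A])",
                                   NULL /* error */);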
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 05190519bad4519e66cbdb5326943c832d15a841)
Previously when make test was run it would say ‘fail’ in lower case
letters both for tests that are known bugs we need to fix and for tests
that the driver can't run. This makes it too easy to lose track
of bugs.
To fix this, the ADD_TEST macro has now been changed to take two sets
of flags instead of just one. The first specifies the requirements for
the test to run at all. The second specifies the set of flags required
to run without any known failures. The table in the test report now
says ‘n/a’ instead of ‘fail’ for tests that don't match the feature
requirements.
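A test entry then looks something like this sketch (the exact flag
names are illustrative):

  ADD_TEST (test_backface_culling,
            0, /* requirements to run at all */
            TEST_REQUIREMENT_NPOT); /* known to fail without NPOT support */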
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 723f8d4402e7b2ef3a71f51bb29b10d1c0ec8d81)
The tests that were using GLSL or 3D textures were directly printing
“Skipped” and then reporting success. Instead of doing this they now
just try to continue without checking for the feature but the
appropriate test requirement flag is now set in test-conform-main so
the table of results will correctly display that it is a failure.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit b8f918e44b243a5fa36d5f382a90bebb0de0728f)
This renames the global ctx and fb variables to test_ctx and test_fb
respectively in line with the names used on the master branch. This is to
make it easier to cherry pick patches from master.
This fixes some problems which were stopping --disable-glib from
working properly:
• A lot of the public headers were including glib.h. This shouldn't be
necessary because the API doesn't expose any glib types. Otherwise
any apps would require glib in order to get the header.
• The public headers were using G_BEGIN_DECLS. There is now a
replacement macro called COGL_BEGIN_DECLS which is defined in
cogl-types.h.
• A similar fix has been done for G_GNUC_NULL_TERMINATED and
G_GNUC_DEPRECATED.
• The CFLAGS were not including $(builddir)/deps/glib which was
preventing it finding the generated glibconfig.h when building out
of tree.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 4138b3141c2f39cddaea3d72bfc04342ed5092d0)
This updates the npot-texture test to use Cogl directly instead of
relying on Clutter.
Note that this currently fails when Cogl ends up using sliced
textures. It looks like there is some bug with the meta texture
function to iterate the primitive textures. This happens when
COGL_DEBUG=disable-npot-textures is set.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 32f0e1e8fff56be3123dc4571f07bb5a314f6818)
This adds a small test case which maps a sub region of a small
attribute buffer and replaces the texture coordinates of one of the
vertices. The vertices are then drawn and the correct colours are
checked.
There is now a new test requirement for the
COGL_FEATURE_ID_MAP_BUFFER_FOR_WRITE feature which this test requires.
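A hedged sketch of the mapping step (the offset and the exact
map_range signature on this branch may differ):

  /* Map just the texture coordinates of one vertex and overwrite
   * them in place */
  float *tex_coords =
    cogl_buffer_map_range (COGL_BUFFER (attribute_buffer),
                           tex_coord_offset,
                           2 * sizeof (float),
                           COGL_BUFFER_ACCESS_WRITE,
                           COGL_BUFFER_MAP_HINT_DISCARD_RANGE,
                           NULL /* error */);

  tex_coords[0] = 1.0f;
  tex_coords[1] = 0.0f;

  cogl_buffer_unmap (COGL_BUFFER (attribute_buffer));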
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 22183265b021dd038338b4398056c0a1eae77edb)
This adds a simple test which sets an alpha test on a pipeline and
then renders a texture. It then verifies that the transparent parts of
the texture aren't drawn. This is currently failing with the GL3
driver because GL3 requires the alpha test to be implemented in GLSL
but the generated alpha test uniform is only updated for the GLES2
driver.
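A sketch of the pipeline setup for the test (the reference value is
arbitrary):

  CoglPipeline *pipeline = cogl_pipeline_new (test_ctx);

  cogl_pipeline_set_layer_texture (pipeline, 0, tex);
  /* Discard any fragment whose alpha is below 0.5 */
  cogl_pipeline_set_alpha_test_function (pipeline,
                                         COGL_PIPELINE_ALPHA_FUNC_GEQUAL,
                                         0.5f);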
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 4ec04507bfaf2d61707dccfb59ac7326962ee741)
This adds a new CoglDriver for GL 3 called COGL_DRIVER_GL3. When
requested, the GLX, EGL and SDL2 winsyss will set the necessary
attributes to request a forward-compatible core profile 3.1 context.
That means it will have no deprecated features.
To simplify the explosion of checks for specific combinations of
context->driver, many of these conditionals have now been replaced
with private feature flags that are checked instead. The GL and GLES
drivers now initialise these private feature flags depending on which
driver is used.
The fixed function backends now explicitly check whether the fixed
function private feature is available which means the GL3 driver will
fall back to always using the GLSL progend. Since Rob's latest patches
the GLSL progend no longer uses any fixed function API anyway so it
should just work.
The driver is currently lower priority than COGL_DRIVER_GL so it will
not be used unless it is specifically requested. We may want to change
this priority at some point because apparently Mesa can make some
memory savings if a core profile context is used.
In GL 3, getting the combined extensions string with glGetString is
deprecated so this patch changes it to use glGetStringi to build up an
array of extensions instead. _cogl_context_get_gl_extensions now
returns this array instead of trying to return a const string. The
caller is expected to free the array.
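The core-profile query looks roughly like this (raw GL calls shown;
error handling omitted):

  GLint n_extensions, i;
  char **extensions;

  glGetIntegerv (GL_NUM_EXTENSIONS, &n_extensions);

  extensions = g_malloc ((n_extensions + 1) * sizeof (char *));
  for (i = 0; i < n_extensions; i++)
    extensions[i] =
      g_strdup ((const char *) glGetStringi (GL_EXTENSIONS, i));
  extensions[n_extensions] = NULL;

  /* the caller later frees the array, e.g. with g_strfreev() */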
Some issues with this patch:
• GL 3 does not support GL_ALPHA format textures. We should probably
make this a feature flag or something. Cogl uses this to render text
which currently just throws a GL error and breaks so it's pretty
important to do something about this before considering the GL3
driver to be stable.
• GL 3 doesn't support client side vertex buffers. This probably
doesn't matter because CoglBuffer won't normally use malloc'd
buffers if VBOs are available, but it might be worth making
malloc'd buffers a private feature and forcing it not to use them.
• GL 3 doesn't support the default vertex array object. This patch
just makes it create and bind a single non-default vertex array
object which gets used just like the normal default object. Ideally
it would be good to use vertex array objects properly and attach
them to a CoglPrimitive to cache the state.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 66c9db993595b3a22e63f4c201ea468bc9b88cb6)
If we're using the system glib library then we need to make sure not to
include headers under deps/glib otherwise we end up with
incompatible typedefs that break the build.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 5d5fc97b59951ec56a4193b7ee7909ebd3cfbb94)
This commit pushes --disable-glib to the extreme of embedding the part of
glib cogl depends on in tree to be able to generate a DSO that does not
depend on an external glib.
To do so, it:
- keeps a lot of glib's configure.ac in as-glibconfig.m4
- pulls the code cogl depends on and the necessary dependencies
Reviewed-by: Robert Bragg <robert@linux.intel.com>