Outline of test categories:

The conform/ tests should be non-interactive unit tests that verify a single
feature behaves as documented. Use the GLib and Clutter test API and macros
to write the test units. The conformance test suites are meant to be used with
continuous integration builds.

The performance/ tests measure performance, ranging from focused tests that
track a single metric to larger, more general tests. These tests are used to
report one or more performance markers for the build of Clutter. Each
performance marker is picked up from the standard output of the test, from
strings of the form "\n@ marker-name: 42.23", where 'marker-name' and '42.23'
are the key and value of a single metric. Each test can provide multiple
key/value pairs. Note that if framerate is the feedback metric, the test
should forcibly enable FPS debugging itself. The file test-common.h contains
utility functions that help with FPS reporting.
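
As an illustration of the output format, a sketch of how a test might emit
such a marker follows; the marker name and the measured workload are
hypothetical:

    /* A minimal sketch of metric reporting in a performance test; the
     * marker name and the workload being timed are purely illustrative. */
    #include <stdio.h>
    #include <glib.h>

    int
    main (int argc, char *argv[])
    {
      gint64 start = g_get_monotonic_time ();

      /* ... run the workload being measured ... */

      gint64 elapsed = g_get_monotonic_time () - start;

      /* Emit the metric on its own line as "@ key: value" so it can be
       * picked up from the test's standard output. */
      printf ("\n@ workload-time: %.2f\n",
              elapsed / (double) G_USEC_PER_SEC);

      return 0;
    }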

The interactive/ tests are any tests whose status cannot be determined without
a user looking at some visual output or providing some manual input. This
covers most of the original Clutter tests. Ideally some of these tests will be
migrated into the conform/ directory.

The accessibility/ tests exercise the accessibility support of Clutter,
covering some of the ATK interfaces.

Other notes:

• All tests should ideally include a detailed description in the source
explaining exactly what the test is for, how the test was designed to work,
and possibly a rationale for the approach taken for testing. Tests for specific
bugs should reference the bug report URL or number.

• When running tests under Valgrind, you should follow the instructions
available here:

        https://wiki.gnome.org/Valgrind

and also use the suppression file (clutter-1.0.suppressions) shipped in this
directory.
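
For example, an invocation along these lines (the test binary path is a
placeholder, and the GLib environment variables follow the wiki's general
advice for running GLib-based code under Valgrind):

        G_SLICE=always-malloc G_DEBUG=gc-friendly \
        valgrind --suppressions=./clutter-1.0.suppressions ./conform/<test-binary>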