Outline of test categories:

The conform/ tests should be non-interactive unit tests that verify a single
feature behaves as documented. See conform/ADDING_NEW_TESTS for more
details.

The performance/ tests are performance tests, both focused tests that measure
a single metric and larger tests. These tests are used to report one or more
performance markers for the build of Clutter. Each performance marker is
picked up from the standard output of the test, from strings of the form
"\n@ marker-name: 42.23", where 'marker-name' and '42.23' are the key/value
pair of a single metric. Each test can provide multiple key/value pairs. Note
that if framerate is the feedback metric, the test should forcibly enable FPS
debugging itself. The file test-common.h contains utility functions that help
with FPS reporting.
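
For illustration, here is a minimal sketch of how a test might emit such a
marker (the helper and marker names below are made up for this example; real
tests should prefer the helpers provided by test-common.h):

    #include <stdio.h>

    /* Print a performance marker in the "\n@ marker-name: value" form
     * that the performance framework scans for on standard output. */
    static void
    report_marker (const char *name, double value)
    {
      printf ("\n@ %s: %g\n", name, value);
    }

    int
    main (void)
    {
      /* ... run the workload being measured ... */
      report_marker ("example-paint-time", 42.23);
      return 0;
    }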

The interactive/ tests are any tests whose status cannot be determined without
a user looking at some visual output or providing some manual input. This
covers most of the original Clutter tests. Ideally some of these tests will be
migrated into the conform/ directory so they can be used in automated
nightly tests.

The accessibility/ tests exercise the accessibility support of Clutter,
testing some of the ATK interfaces.

The data/ directory contains optional data (like images and ClutterScript
definitions) that can be referenced by a test.

Other notes:

• All tests should ideally include a detailed description in the source
explaining exactly what the test is for, how the test was designed to work,
and possibly a rationale for the approach taken for testing.

• When running tests under Valgrind, you should follow the instructions
available here:

        http://live.gnome.org/Valgrind

and also use the suppression file available inside the data/ directory.
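
  For example, an invocation might look roughly like the following (the
  suppression file and test binary names here are only placeholders):

        valgrind --suppressions=./data/<suppression-file> ./<test-binary>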