31da46c799
This adds an autogen.sh, configure.ac and build/autotool files etc. under clutter/cogl, and makes some corresponding Makefile.am changes, so that it is possible to build and install Cogl as a standalone library.

Some notable things about this are:

A standalone installation of Cogl installs 3 pkg-config files: cogl-1.0.pc, cogl-gl-1.0.pc and cogl-2.0.pc. The second exists only for compatibility with what Clutter installed, though I'm not sure that anything uses it, so maybe we could remove it. cogl-1.0.pc is what Clutter would use if it were updated to build against a standalone Cogl library. cogl-2.0.pc is what you would use if you were writing a standalone Cogl application.

A standalone installation currently results in two libraries, libcogl.so and libcogl-pango.so. Notably we don't include a major number in the sonames, because libcogl supports two major API versions: 1.x as used by Clutter, and the experimental 2.x API for standalone applications. Parallel installation of later versions, e.g. 3.x and beyond, will be supportable either with new sonames or, if we can maintain ABI, by continuing to share libcogl.so.

The headers are similarly not installed into a directory with a major version number, since the same headers are shared to export the 1.x and 2.x APIs. (The only difference is that cogl-2.0.pc ensures that -DCOGL_ENABLE_EXPERIMENTAL_2_0_API is used.) Parallel installation of later versions is not precluded, though, since we can either continue sharing or later add a major version suffix.
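As a rough illustration of the difference between the two application-facing pkg-config files: a standalone program would build against cogl-2.0.pc, while Clutter (or any code sticking to the 1.x API) would use cogl-1.0.pc. The .pc names and the -DCOGL_ENABLE_EXPERIMENTAL_2_0_API define come from the description above; the compiler invocation below is only a sketch and the source file name is made up:

    # Inspect the flags each pkg-config file provides (cogl-2.0 should
    # include -DCOGL_ENABLE_EXPERIMENTAL_2_0_API, cogl-1.0 should not):
    pkg-config --cflags --libs cogl-1.0
    pkg-config --cflags --libs cogl-2.0

    # Build a standalone application against the experimental 2.x API
    # (my-cogl-app.c is a hypothetical source file):
    cc my-cogl-app.c -o my-cogl-app $(pkg-config --cflags --libs cogl-2.0)

In an autotools-based project the equivalent check would normally be done with PKG_CHECK_MODULES in configure.ac rather than by calling pkg-config by hand.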
Directory listing:

    accessibility/
    conform/
    data/
    interactive/
    micro-bench/
    Makefile.am
    README
Outline of test categories:

The conform/ tests should be non-interactive unit tests that verify a single feature is behaving as documented. See conform/ADDING_NEW_TESTS for more details.

The micro-bench/ tests should be focused performance tests, ideally testing a single metric. Please never forget that these tests are synthetic, and if you are using them then you understand what metric is being tested. They probably don't reflect any real world application loads, and the intention is that you use these tests once you have already determined the crux of your problem and need focused feedback that your changes are indeed improving matters. There are no exit status requirements for these tests, but they should give clear feedback as to their performance. If the framerate is the feedback metric, then the test should forcibly enable FPS debugging.

The interactive/ tests are any tests whose status cannot be determined without a user looking at some visual output, or providing some manual input etc. This covers most of the original Clutter tests. Ideally some of these tests will be migrated into the conform/ directory so they can be used in automated nightly tests.

The accessibility/ tests are tests created to test the accessibility support of Clutter, testing some of the ATK interfaces.

The data/ directory contains optional data (like images and ClutterScript definitions) that can be referenced by a test.

Other notes:

• All tests should ideally include a detailed description in the source explaining exactly what the test is for, how the test was designed to work, and possibly a rationale for the approach taken for testing.

• When running tests under Valgrind, you should follow the instructions available here: http://live.gnome.org/Valgrind and also use the suppression file available inside the data/ directory.
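A minimal sketch of the Valgrind workflow described in the last note, assuming the suppression file in data/ is named clutter.suppressions and that a conformance test binary called test-conformance is being run (both names are assumptions; check the actual contents of data/ and conform/):

    # GLib settings commonly recommended by the GNOME Valgrind
    # instructions linked above, plus the suppression file from data/
    # (file and binary names here are assumptions):
    G_SLICE=always-malloc G_DEBUG=gc-friendly \
    valgrind --leak-check=full \
             --suppressions=./data/clutter.suppressions \
             ./conform/test-conformance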