mirror of https://github.com/brl/mutter.git
commit 71498a6376
The CoglTexture constructors expose the "max-waste" argument for controlling the maximum amount of wasted area when slicing or, if set to -1, disabling slicing altogether. Slicing is really only relevant for large images that are never repeated, so it is a useful feature only in controlled use cases. Specifying the amount of wasted area, on the other hand, is just a way to mess up this feature: 99% of the time you either pull the number out of thin air, hoping it's right, or you try to do the right thing and choose the wrong number anyway.

Instead, we can use the CoglTextureFlags to control whether the texture should not be sliced (useful for Clutter-GST and for the texture-from-pixmap actors) and provide a reasonable value for enabling slicing ourselves. At some point we might even provide a way to change the default at compile time or at run time, for particular platforms.

Since max_waste is gone, the :tile-waste property of ClutterTexture becomes read-only, and it proxies the cogl_texture_get_max_waste() function.

Inside Clutter, the only cases where the max_waste argument was not set to -1 are the Pango glyph cache (which is a POT texture anyway) and the test cases where we want to force slicing; for the latter we can create textures larger than the threshold we set.

Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Signed-off-by: Neil Roberts <neil@linux.intel.com>
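A minimal sketch of what a caller looks like after this change, assuming the Cogl 1.x-era cogl_texture_new_from_file() signature and the COGL_TEXTURE_NO_SLICING flag described above; the helper names load_unsliced_texture() and report_tile_waste() are made up for illustration:

    #include <glib.h>
    #include <cogl/cogl.h>

    /* Instead of passing max_waste == -1 to disable slicing, the caller
     * now requests it through CoglTextureFlags and lets Cogl pick a
     * sensible waste threshold whenever slicing is allowed. */
    static CoglHandle
    load_unsliced_texture (const char *filename, GError **error)
    {
      /* COGL_TEXTURE_NO_SLICING replaces the old max_waste == -1 idiom */
      return cogl_texture_new_from_file (filename,
                                         COGL_TEXTURE_NO_SLICING,
                                         COGL_PIXEL_FORMAT_ANY,
                                         error);
    }

    static void
    report_tile_waste (CoglHandle texture)
    {
      /* the waste threshold is now read back rather than passed in;
       * ClutterTexture:tile-waste proxies this same value */
      g_print ("max waste: %d\n", cogl_texture_get_max_waste (texture));
    }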
Directory listing:

conform/
data/
interactive/
micro-bench/
tools/
.gitignore
Makefile.am
README
Outline of test categories:

The conform/ tests should be non-interactive unit tests that verify a single feature is behaving as documented. See conform/ADDING_NEW_TESTS for more details.

The micro-bench/ tests should be focused performance tests, ideally testing a single metric. Never forget that these tests are synthetic; if you are using them, you should understand what metric is being tested. They probably don't reflect any real-world application load, and the intention is that you use them once you have already determined the crux of your problem and need focused feedback that your changes are indeed improving matters. There are no exit-status requirements for these tests, but they should give clear feedback as to their performance. If framerate is the feedback metric, the test should forcibly enable FPS debugging (see the sketch after these notes).

The interactive/ tests are any tests whose status cannot be determined without a user looking at some visual output or providing some manual input. This covers most of the original Clutter tests. Ideally some of these tests will be migrated into the conform/ directory so they can be used in automated nightly tests.

Other notes:

All tests should ideally include a detailed description in the source explaining exactly what the test is for, how the test was designed to work, and possibly a rationale for the approach taken.
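Where framerate is the feedback metric, one way a micro-bench test could forcibly enable FPS debugging is by exporting the relevant setting before initialising Clutter. This is a minimal sketch, assuming the CLUTTER_SHOW_FPS environment variable as the mechanism; the actual tests may use a different helper:

    #include <stdlib.h>
    #include <clutter/clutter.h>

    int
    main (int argc, char **argv)
    {
      /* force FPS reporting on so the benchmark always prints its
       * metric, regardless of how the user launched it */
      g_setenv ("CLUTTER_SHOW_FPS", "1", TRUE);

      if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
        return EXIT_FAILURE;

      ClutterActor *stage = clutter_stage_get_default ();
      clutter_actor_show (stage);

      /* ... set up the scene being measured here ... */

      clutter_main ();

      return EXIT_SUCCESS;
    }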