Mirror of https://github.com/brl/mutter.git
Commit 9c7afe0c5b

Timelines no longer work in terms of a frame rate and a number of frames; instead they just have a duration in milliseconds. This better matches the working of the master clock: if any timelines are running, it will redraw as fast as possible rather than limiting itself to the lowest-rated timeline. Most applications just create animations and expect them to finish in a certain amount of time, without caring about how many frames are drawn. If a frame is going to be drawn, it might as well update all of the animations to some fraction of the total animation rather than rounding to the nearest whole frame.

The 'frame_num' parameter of the new-frame signal is now 'msecs', which is the number of milliseconds progressed along the timeline. Applications should use clutter_timeline_get_progress instead of the frame number.

Markers can now only be attached at a time value. The position is stored in milliseconds rather than as a frame number.

test-timeline-smoothness and test-timeline-dup-frames have been removed because they no longer make sense.
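To make the change concrete, here is a minimal sketch of how client code might use the duration-based API described above. It assumes the post-change constructor clutter_timeline_new() takes the duration in milliseconds and that markers are added with clutter_timeline_add_marker_at_time(); the actor, handler, and the 2000 ms duration are illustrative, not taken from the commit.

```c
#include <clutter/clutter.h>

/* "new-frame" now reports elapsed time in milliseconds, not a frame number. */
static void
on_new_frame (ClutterTimeline *timeline,
              gint             msecs,
              gpointer         user_data)
{
  ClutterActor *actor = CLUTTER_ACTOR (user_data);

  /* Prefer the normalised progress (0.0 .. 1.0) over the raw msecs value. */
  gdouble progress = clutter_timeline_get_progress (timeline);

  clutter_actor_set_opacity (actor, (guint8) (255 * progress));
}

static void
animate_actor (ClutterActor *actor)
{
  /* Duration in milliseconds; no frame rate or frame count is specified. */
  ClutterTimeline *timeline = clutter_timeline_new (2000);

  /* Markers are attached at a time value, also in milliseconds. */
  clutter_timeline_add_marker_at_time (timeline, "half-way", 1000);

  g_signal_connect (timeline, "new-frame",
                    G_CALLBACK (on_new_frame), actor);
  clutter_timeline_start (timeline);
}
```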
Directory listing:

    conform/
    data/
    interactive/
    micro-bench/
    tools/
    .gitignore
    Makefile.am
    README
Outline of test categories:

The conform/ tests should be non-interactive unit tests that verify a single feature is behaving as documented. See conform/ADDING_NEW_TESTS for more details.

The micro-bench/ tests should be focused performance tests, ideally testing a single metric. Please never forget that these tests are synthetic; if you are using them, make sure you understand what metric is being tested. They probably don't reflect any real-world application load, and the intention is that you use these tests once you have already determined the crux of your problem and need focused feedback that your changes are indeed improving matters. There are no exit-status requirements for these tests, but they should give clear feedback as to their performance. If the framerate is the feedback metric, then the test should forcibly enable FPS debugging, as sketched below.

The interactive/ tests are any tests whose status cannot be determined without a user looking at some visual output or providing some manual input. This covers most of the original Clutter tests. Ideally some of these tests will be migrated into the conform/ directory so they can be used in automated nightly tests.

Other notes: all tests should ideally include a detailed description in the source explaining exactly what the test is for, how the test was designed to work, and possibly a rationale for the approach taken for testing.
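As a rough illustration of the FPS-feedback point above, the sketch below forces Clutter's FPS reporting on before initialisation. It assumes the CLUTTER_SHOW_FPS environment variable is the switch Clutter checks for per-frame FPS output; treat the variable name and the skeletal test body as placeholders rather than as the project's required pattern.

```c
#include <stdlib.h>
#include <clutter/clutter.h>

int
main (int argc, char **argv)
{
  /* Force FPS reporting regardless of the caller's environment
   * (CLUTTER_SHOW_FPS is assumed to be the relevant switch). */
  g_setenv ("CLUTTER_SHOW_FPS", "1", TRUE);

  if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
    return EXIT_FAILURE;

  /* ... set up the scene being benchmarked here ... */

  clutter_main ();

  return EXIT_SUCCESS;
}
```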