Outline of test categories:

The conform/ tests:
These should be non-interactive unit tests that verify a single feature is
behaving as documented. See conform/ADDING_NEW_TESTS for more details.

The micro-bench/ tests:
These should be focused performance tests, ideally testing a single metric.
Please never forget that these tests are synthetic: if you are using them,
make sure you understand what metric is being tested. They probably don't
reflect any real-world application load; the intention is that you use these
tests once you have already determined the crux of your problem and need
focused feedback that your changes are indeed improving matters.

There are no exit status requirements for these tests, but they should give
clear feedback as to their performance. If the framerate is the feedback
metric, the test should forcibly enable FPS debugging (see the illustrative
sketch at the end of this file).

The interactive/ tests:
These are any tests whose status cannot be determined without a user looking
at some visual output, providing some manual input, etc. This covers most of
the original Clutter tests. Ideally some of these tests will be migrated into
the conform/ directory so they can be used in automated nightly tests.

Other notes:
All tests should ideally include a detailed description in the source
explaining exactly what the test is for, how the test was designed to work,
and possibly a rationale for the approach taken for testing.
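
For illustration only (this is a sketch, not an existing test in this tree),
a micro-bench style skeleton that forcibly enables FPS reporting might look
like the following. It assumes the CLUTTER_SHOW_FPS environment variable is
honoured by clutter_init(), which is the usual way Clutter's FPS debugging
is turned on:

    /* Illustrative micro-bench skeleton: forces FPS reporting on so the
     * framerate is always printed, regardless of the user's environment. */
    #include <stdlib.h>
    #include <clutter/clutter.h>

    int
    main (int argc, char *argv[])
    {
      ClutterActor *stage;

      /* CLUTTER_SHOW_FPS is read during clutter_init(), so it must be
       * set (and overridden) before initialisation. */
      g_setenv ("CLUTTER_SHOW_FPS", "1", TRUE);

      if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
        return EXIT_FAILURE;

      stage = clutter_stage_get_default ();
      clutter_actor_show (stage);

      /* ... set up the scene being benchmarked here ... */

      clutter_main ();

      return EXIT_SUCCESS;
    }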