commit 6d51a18e7c
This adds a new function, cogl_pipeline_set_per_vertex_point_size(), to
enable per-vertex point size on a pipeline. Once enabled, the point size
can be set either by drawing with an attribute named 'cogl_point_size_in'
or by writing to the 'cogl_point_size_out' builtin from a snippet.

There is a feature flag which must be checked before using per-vertex
point sizes. This will only be set on GL >= 2.0 or on GLES 2.0. GL only
lets you set a per-vertex point size from GLSL by writing to gl_PointSize,
which is only available in GL 2 and not in the older GLSL extensions.

The per-vertex point size has its own pipeline state flag so that it can
be part of the state that affects vertex shader generation.

Having to enable the per-vertex point size with a separate function is a
bit awkward. Ideally it would work like the color attribute, where you
can either set it for every vertex in your primitive with
cogl_pipeline_set_color or set it per-vertex by using the attribute. This
is harder to get working with the point size because we need to generate
a different vertex shader depending on what attributes are bound. I think
if we wanted to make this work transparently we would still want to
internally have a pipeline property describing whether the shader was
generated with per-vertex support so that it would work with the shader
cache correctly. Potentially we could make the per-vertex property
internal and automatically make a weak pipeline whenever the attribute is
bound. However we would then also need to automatically detect when an
application is writing to cogl_point_size_out from a snippet.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 8495d9c1c15ce389885a9356d965eabd97758115)

Conflicts:
	cogl/cogl-context.c
	cogl/cogl-pipeline-private.h
	cogl/cogl-pipeline.c
	cogl/cogl-private.h
	cogl/driver/gl/cogl-pipeline-progend-fixed.c
	cogl/driver/gl/gl/cogl-pipeline-progend-fixed-arbfp.c
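To make the new API concrete, here is a minimal sketch of the
attribute-based path, written against the Cogl 2.0 experimental API. The
CoglError parameter of cogl_pipeline_set_per_vertex_point_size() and the
feature-flag name COGL_FEATURE_ID_PER_VERTEX_POINT_SIZE are assumptions
based on Cogl headers of the same era, not details stated in this commit
message.

#include <stddef.h>
#include <cogl/cogl.h>

typedef struct
{
  float x, y;        /* fed to the 'cogl_position_in' attribute */
  float point_size;  /* fed to the 'cogl_point_size_in' attribute */
} PointVertex;

static void
draw_sized_points (CoglContext *ctx, CoglFramebuffer *fb)
{
  static const PointVertex vertices[] =
    {
      { -0.5f, 0.0f,  4.0f },
      {  0.5f, 0.0f, 16.0f }
    };
  CoglPipeline *pipeline = cogl_pipeline_new (ctx);
  CoglError *error = NULL;

  /* Per-vertex point sizes are only available on GL >= 2.0 or GLES 2.0,
   * so the feature flag must be checked before enabling the state. */
  if (!cogl_has_feature (ctx, COGL_FEATURE_ID_PER_VERTEX_POINT_SIZE) ||
      !cogl_pipeline_set_per_vertex_point_size (pipeline, TRUE, &error))
    {
      /* Fall back to a fixed size via cogl_pipeline_set_point_size() */
      cogl_object_unref (pipeline);
      return;
    }

  CoglAttributeBuffer *buffer =
    cogl_attribute_buffer_new (ctx, sizeof (vertices), vertices);
  CoglAttribute *attributes[] =
    {
      cogl_attribute_new (buffer, "cogl_position_in",
                          sizeof (PointVertex),
                          offsetof (PointVertex, x),
                          2, COGL_ATTRIBUTE_TYPE_FLOAT),
      /* Binding an attribute with this name is what supplies the
       * per-vertex size to the generated vertex shader. */
      cogl_attribute_new (buffer, "cogl_point_size_in",
                          sizeof (PointVertex),
                          offsetof (PointVertex, point_size),
                          1, COGL_ATTRIBUTE_TYPE_FLOAT)
    };
  CoglPrimitive *prim =
    cogl_primitive_new_with_attributes (COGL_VERTICES_MODE_POINTS,
                                        2, attributes, 2);

  cogl_primitive_draw (prim, fb, pipeline);

  /* Alternatively, a snippet can write to the 'cogl_point_size_out'
   * builtin instead of binding the attribute:
   *
   *   CoglSnippet *snippet =
   *     cogl_snippet_new (COGL_SNIPPET_HOOK_VERTEX, NULL,
   *                       "cogl_point_size_out = 4.0;\n");
   *   cogl_pipeline_add_snippet (pipeline, snippet);
   *
   * (cleanup of the CoglObjects is omitted for brevity) */
}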
Directory contents:

	conform/
	data/
	micro-perf/
	unit/
	config.env.in
	Makefile.am
	README
	run-tests.sh
	test-launcher.sh
Outline of test categories:

The conform/ tests:
-------------------
These tests should be non-interactive unit-tests that verify a single
feature is behaving as documented. See conform/ADDING_NEW_TESTS for more
details.

Although it may seem a bit awkward, all the tests are built into a single
binary because this makes building them *much* faster by avoiding lots of
linking. A wrapper script is generated for each test, though, so running
individual tests is still convenient. Running the wrapper script will
also print, for convenience, how you could run the test under gdb or
valgrind, for example:

NOTE: For debugging purposes, you can run this single test as follows:
$ libtool --mode=execute \
          gdb --eval-command="b test_cogl_depth_test" \
          --args ./test-conformance -p /conform/cogl/test_cogl_depth_test
or:
$ env G_SLICE=always-malloc \
      libtool --mode=execute \
      valgrind ./test-conformance -p /conform/cogl/test_cogl_depth_test

By default the conformance tests are run offscreen. This makes the tests
run much faster and they also don't interfere with other work you may
want to do by constantly stealing focus. CoglOnscreen framebuffers
obviously don't get tested this way, so it's important that the tests
also get run onscreen every once in a while, especially if changes are
being made to CoglFramebuffer related code. Onscreen testing can be
enabled by setting COGL_TEST_ONSCREEN=1 in your environment (see the
example at the end of this file).

The micro-bench/ tests:
-----------------------
These should be focused performance tests, ideally testing a single
metric. Please never forget that these tests are synthetic; if you are
using them, make sure you understand exactly what metric is being tested.
They probably don't reflect any real world application loads, and the
intention is that you use these tests once you have already determined
the crux of your problem and need focused feedback that your changes are
indeed improving matters.

There are no exit-status requirements for these tests, but they should
give clear feedback as to their performance. If the framerate is the
feedback metric, then the test should forcibly enable FPS debugging.

The data/ directory:
--------------------
This contains optional data (like images) that can be referenced by a
test.

Misc notes:
-----------
• All tests should ideally include a detailed description in the source
  explaining exactly what the test is for, how the test was designed to
  work, and possibly a rationale for the approach taken for testing.

• When running tests under Valgrind, you should follow the instructions
  available here: http://live.gnome.org/Valgrind and also use the
  suppression file available inside the data/ directory.
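For example, to re-run the single conformance test from the gdb example
above with onscreen rendering enabled (a sketch; depending on your build
you may need the libtool wrapper shown earlier):

$ env COGL_TEST_ONSCREEN=1 \
      ./test-conformance -p /conform/cogl/test_cogl_depth_test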