With test-clip it's possible to draw three different shapes depending
on what mouse button is used: a rectangle, an ellipse or a path
containing multiple shapes. However, the ellipse is also a path, so it
doesn't really test anything beyond what the third option covers. This
replaces the ellipse with a rectangle that is first rotated by the
modelview matrix. The rotated rectangle can't be clipped with the
scissor, so it can be used to test stencil and clip-plane clipping.
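As a rough sketch of the drawing side (the helper name and values here
are illustrative, not the test's actual code), a rectangle rotated by
the modelview matrix might look like this:

#include <cogl/cogl.h>

/* Illustrative helper: rotating the modelview before drawing means the
 * resulting clip is no longer screen-aligned, so it cannot be handled
 * by the GL scissor and must fall back to the stencil buffer or clip
 * planes. */
static void
draw_rotated_rectangle (float cx, float cy)
{
  cogl_push_matrix ();
  cogl_translate (cx, cy, 0);
  cogl_rotate (10, 0, 0, 1);          /* rotate around the z axis */
  cogl_rectangle (-50, -50, 50, 50);  /* rectangle centred on cx,cy */
  cogl_pop_matrix ();
}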
Between Clutter 0.8 and 1.0, the new-frame signal of ClutterTimeline
changed the second parameter to be an elapsed time in milliseconds
rather than the frame number. However, a few places in Clutter were
still calling the parameter 'frame_num', which is misleading. Notably,
the signature for the signal class closure in the header was using the
wrong name. This changes them to use 'msecs'.
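For reference, a handler for the signal now receives the elapsed time,
along these lines (the handler name is illustrative):

#include <clutter/clutter.h>

/* The second argument is the elapsed time in milliseconds, not a frame
 * number. */
static void
on_new_frame (ClutterTimeline *timeline,
              gint             msecs,
              gpointer         user_data)
{
  g_print ("elapsed: %d ms, progress: %.2f\n",
           msecs,
           clutter_timeline_get_progress (timeline));
}

/* g_signal_connect (timeline, "new-frame", G_CALLBACK (on_new_frame), NULL); */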
This adds a custom "rows" property that allows defining the rows of a
ClutterModel. A single row can be either an array of all columns or an
object with column-name : column-value pairs.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2528
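A minimal sketch of what a definition using the property could look
like; the column definition syntax and object names are only
illustrative, while the two row forms follow the description above:

#include <clutter/clutter.h>

/* One row given as an array of all columns, one as an object with
 * column-name : column-value pairs. */
static const char script_data[] =
  "[ {"
  "  \"type\" : \"ClutterListModel\","
  "  \"id\" : \"model\","
  "  \"columns\" : [ [ \"name\", \"gchararray\" ], [ \"score\", \"gint\" ] ],"
  "  \"rows\" : ["
  "    [ \"alice\", 42 ],"
  "    { \"name\" : \"bob\", \"score\" : 23 }"
  "  ]"
  "} ]";

static ClutterModel *
load_model (void)
{
  ClutterScript *script = clutter_script_new ();

  clutter_script_load_from_data (script, script_data, -1, NULL);

  return CLUTTER_MODEL (clutter_script_get_object (script, "model"));
}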
* xi2: (41 commits)
test-devices: Actually print the axis data
device-manager/xi2: Sync the stage of source devices
event: Clean up clutter_event_copy()
device: unset the axes array pointer when resetting
device-manager/xi2: Fix device hotplugging
glx: Clean up GLX implementation
device/x11: Store min/max keycode in the XI device class
x11: Hide all private symbols
docs: More documentation fixes for InputDevice
event: Never manipulate the event queue directly
win32: Update DeviceManager device creation
device: Allow enabling/disabling non-master devices
backend/eglx: Add newly created stages to the translators
device: Add more doc annotations
device: Use a double for translate_axis() argument
test-devices: Clean up and show axes data
event: Fix up clutter_event_copy()
device/xi2: Translate the axis data after setting devices
device: Add more accessors for properties
docs: Update API reference
...
We have a bunch of experimental convenience functions like
cogl_primitive_p2/p2t2 that have corresponding vertex structures but it
seemed a bit odd to have the vertex annotation, e.g. "P2T2", be an
infix of the type name, as in CoglP2T2Vertex, instead of a postfix, as
in CoglVertexP2T2. This switches them all to follow the postfix naming
style.
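For example, a textured quad described with the renamed type now reads
like this (a sketch against the experimental API, so details may
change):

#include <cogl/cogl.h>

/* The vertex components come after "Vertex": CoglVertexP2T2 rather
 * than CoglP2T2Vertex. Each entry is an x,y position plus s,t texture
 * coordinates, e.g. for cogl_primitive_new_p2t2(). */
static const CoglVertexP2T2 quad[] =
  {
    /*   x,    y,   s,   t */
    {    0,    0,   0,   0 },
    {    0,  100,   0,   1 },
    {  100,  100,   1,   1 },
    {  100,    0,   1,   0 }
  };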
Instead of using an idle handler that synchronously paints the stage
by manually calling clutter_actor_paint (stage), the test now uses an
idle handler that repeatedly queues a redraw on the stage. do_events is
now called from a "paint" signal handler for the stage, and instead of
manually determining the FPS the test now relies on CLUTTER_SHOW_FPS=1.
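The pattern, roughly (queue_redraw and setup are illustrative names;
do_events is the test's existing function):

#include <clutter/clutter.h>

static void do_events (ClutterActor *stage, gpointer data); /* defined by the test */

/* Keep the stage redrawing: queue a redraw from an idle handler
 * instead of painting synchronously. */
static gboolean
queue_redraw (gpointer stage)
{
  clutter_actor_queue_redraw (CLUTTER_ACTOR (stage));

  return TRUE; /* keep the idle source installed */
}

static void
setup (ClutterActor *stage)
{
  g_idle_add (queue_redraw, stage);
  g_signal_connect (stage, "paint", G_CALLBACK (do_events), NULL);
}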
Hierarchy and Device changed events come through with the X window set
to be the root window, not the stage window. We need to whitelist them
so that we can actually support hotplugging and device changes.
Slave and floating devices should always be disabled, and not deliver
events to the scene. It is up to the user to enable non-master devices
and handle events coming from them.
ClutterInputDevice gets a new :enabled property, defaulting to FALSE;
when a device manager creates a new device it has to set it to TRUE if
the :device-mode property is set to CLUTTER_INPUT_MODE_MASTER.
The main event queue entry point, _clutter_event_push(), will
automatically discard events coming from disabled devices.
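In practice a device can be flipped on with the accessors that go along
with the new property, roughly like this (sketch):

#include <clutter/clutter.h>

/* Master devices are enabled by the device manager; for slave and
 * floating devices the application has to opt in explicitly. */
static void
maybe_enable_device (ClutterInputDevice *device)
{
  if (clutter_input_device_get_device_mode (device) == CLUTTER_INPUT_MODE_MASTER)
    return; /* already enabled by the device manager */

  /* deliver events from this non-master device to the scene */
  clutter_input_device_set_enabled (device, TRUE);
}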
This is a lump commit that is fairly difficult to break down without
either breaking bisecting or breaking the test cases.
The new design for handling X11 event translation works this way:
- ClutterBackend::translate_event() has been added as the central
point used by a ClutterBackend implementation to translate a
native event into a ClutterEvent;
- ClutterEventTranslator is a private interface that should be
implemented by backend-specific objects, like stage
implementations and ClutterDeviceManager sub-classes, and
allows dealing with class-specific event translation;
- ClutterStageX11 implements EventTranslator, and deals with the
stage-relative X11 events coming from the X11 event source;
- ClutterStageGLX overrides EventTranslator, in order to
deal with the INTEL_GLX_swap_event extension, and it chains up
to the X11 default implementation;
- ClutterDeviceManagerX11 has been split into two separate classes,
one that deals with core and (optionally) XI1 events, and the
other that deals with XI2 events; the selection is done at run-time,
since the core+XI1 and XI2 mechanisms are mutually exclusive.
All the other backends we officially support still use their own
custom event source and translation function, but the end goal is to
migrate them to the translate_event() virtual function, and have the
event source be a shared part of Clutter core.
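A rough sketch of the translate_event() hook described above; this is
private API, so the exact names and return values here are an
approximation:

#include <clutter/clutter.h>

/* Approximate shape of ClutterBackend::translate_event(): it receives
 * the backend-specific native event (e.g. an XEvent) and, if it can
 * translate it, fills in a ClutterEvent to be queued. */
typedef enum
{
  CLUTTER_TRANSLATE_CONTINUE, /* not handled; let other translators try */
  CLUTTER_TRANSLATE_REMOVE,   /* handled, but no ClutterEvent produced */
  CLUTTER_TRANSLATE_QUEUE     /* handled; queue the translated event */
} ClutterTranslateReturn;

typedef ClutterTranslateReturn (* ClutterTranslateEventFunc) (ClutterBackend *backend,
                                                              gpointer        native,
                                                              ClutterEvent   *event);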
The ClutterGLXTexturePixmap actor is just a wrapper around
ClutterX11TexturePixmap, since the relevant texture-from-pixmap code has
been moved down to Cogl.
The config.h header should be considered a Clutter internal header, and
the test cases (especially the interactive test cases) should strive to
never rely on internal headers.
Atlasing needs to be disabled for the hand texture so that the test
can work out the step value needed to fetch a neighbouring pixel in the blur
shader. If the texture ends up in the atlas then the test can't know
the actual size of the texture so it looks wrong.
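One way to keep the texture out of the atlas looks like this; the flag
used here is an assumption about how the test does it, not a quote of
its code:

#include <cogl/cogl.h>

/* Load the hand image with atlasing disabled so its real width/height
 * are known and the blur shader's one-texel step can be computed from
 * them. */
static CoglHandle
load_hand_texture (const char *filename)
{
  return cogl_texture_new_from_file (filename,
                                     COGL_TEXTURE_NO_ATLAS,
                                     COGL_PIXEL_FORMAT_ANY,
                                     NULL);
}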
Other frameworks expose the same functionality as "auto-reverse",
probably to match the cassette tape player. It actually makes sense
for Clutter to follow suit.
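Minimal usage sketch of the new behaviour:

#include <clutter/clutter.h>

/* With auto-reverse enabled a looping timeline plays forwards and then
 * backwards, like a tape deck. */
static ClutterTimeline *
make_bouncing_timeline (void)
{
  ClutterTimeline *timeline = clutter_timeline_new (1000 /* ms */);

  clutter_timeline_set_loop (timeline, TRUE);
  clutter_timeline_set_auto_reverse (timeline, TRUE);

  return timeline;
}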
The test-viewport interactive test is exercising the clip code - a job
better done by the conformance test suite and by the test-clip
interactive test.
The test-project test case was an old test that was barely working after
landing the size allocation API in Clutter 0.8. It has never been fixed,
and it has been of little real use ever since.
The test-offscreen interactive test was a dummy test for the
ClutterStage:offscreen property, which has been deprecated and
not implemented since Clutter 1.0, and never really worked except
briefly in Clutter 0.2 or something.
When determining the maximum number of layers we also need to take
into account GL_MAX_VERTEX_ATTRIBS on GLES2. Cogl needs one vertex
attrib for each texture unit plus two for the position and color.
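Roughly, the calculation becomes the following (a sketch of the
reasoning above, not Cogl's actual code):

#include <GLES2/gl2.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Each layer needs one vertex attribute for its texture coordinates,
 * and two more attributes are reserved for the position and color. */
static int
get_max_layers (void)
{
  GLint max_image_units = 0, max_vertex_attribs = 0;

  glGetIntegerv (GL_MAX_TEXTURE_IMAGE_UNITS, &max_image_units);
  glGetIntegerv (GL_MAX_VERTEX_ATTRIBS, &max_vertex_attribs);

  return MIN (max_image_units, max_vertex_attribs - 2);
}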
We now support EV_REL events coming from evdev devices. This addition
is pretty straightforward: it adds an x,y position per GSource listening
to an evdev device, updates it from EV_REL (relative) events and crafts
new ClutterMotionEvents. As for buttons, BTN_LEFT..BTN_TASK are
translated to ClutterButtonEvents with 1..8 as the button number.
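The button translation amounts to something like this (sketch, function
name is illustrative):

#include <linux/input.h>

/* BTN_LEFT..BTN_TASK are eight consecutive evdev codes; map them onto
 * Clutter button numbers 1..8. */
static int
evdev_button_to_clutter (int code)
{
  if (code >= BTN_LEFT && code <= BTN_TASK)
    return code - BTN_LEFT + 1;

  return 0; /* not a button we translate */
}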
This is mostly a stub: a starting point for a one-stop Cogl
micro-benchmarking tool. The first test it adds is one that tries to
hammer the Cogl journal, but no doubt there are lots of other things we
should be regularly testing. Currently the aim is that the tool will be
able to also generate reports which we can collect to keep track of
performance changes over time.
This creates a material which uses a layer to override the color of
the rectangle. A simple vertex shader is then created which just
emulates the fixed function pipeline. No fragment shader is
added. This demonstrates a bug where the layer state is getting
ignored when a vertex shader is in use.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2221
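A sketch of the setup described above; the constant color and shader
source are illustrative, not the test's exact code:

#include <cogl/cogl.h>

static CoglHandle
make_test_material (void)
{
  CoglHandle material = cogl_material_new ();
  CoglHandle shader, program;
  CoglColor constant;

  /* let layer 0 replace the fragment color with a constant */
  cogl_color_init_from_4ub (&constant, 0xff, 0x00, 0x00, 0xff);
  cogl_material_set_layer_combine (material, 0,
                                   "RGBA = REPLACE (CONSTANT)", NULL);
  cogl_material_set_layer_combine_constant (material, 0, &constant);

  /* a trivial vertex shader mimicking the fixed-function pipeline;
   * note that no fragment shader is attached */
  shader = cogl_create_shader (COGL_SHADER_TYPE_VERTEX);
  cogl_shader_source (shader,
                      "void main ()\n"
                      "{\n"
                      "  cogl_position_out = cogl_modelview_projection_matrix\n"
                      "                      * cogl_position_in;\n"
                      "  cogl_color_out = cogl_color_in;\n"
                      "}\n");
  cogl_shader_compile (shader);

  program = cogl_create_program ();
  cogl_program_attach_shader (program, shader);
  cogl_program_link (program);
  cogl_material_set_user_program (material, program);

  return material;
}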
Previously the alpha component of the test texture data was always set
to 255 and the data was read back as RGB so that the alpha component
was ignored. Now the alpha component is set to a generated value and
the data is read back a second time as RGBA to verify that Cogl is not
doing any premult conversions when the internal texture and target
data is the same.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2414
The CSS Color Module 3, available at:
http://www.w3.org/TR/css3-color/
allows defining colors as:
rgb ( r, g, b )
rgba ( r, g, b, a )
along with the usual hexadecimal and named notations.
The r, g, and b channels can be:
• integers between 0 and 255
• percentages, between 0% and 100%
The alpha channel, if included using the rgba() modifier, can be a
floating point value between 0.0 and 1.0.
The ClutterColor parser should support this notation.
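With that in place, parsing would accept strings such as the following
(a sketch of the intended behaviour):

#include <clutter/clutter.h>

/* The functional notation alongside the existing hexadecimal and named
 * forms. */
static void
parse_examples (void)
{
  ClutterColor color;

  clutter_color_from_string (&color, "#ff0000ff");
  clutter_color_from_string (&color, "rgb(255, 0, 0)");
  clutter_color_from_string (&color, "rgba(100%, 0%, 0%, 0.5)");
}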
The Behaviour class and its implementations have been replaced by the
new animation framework API and by the constraints for layout-related
animations.
Currently, we need to make tests build, so we undef DISABLE_DEPRECATED
in specific test cases while they get ported.
When COGL_ENABLE_EXPERIMENTAL_2_0_API is defined cogl.h will now include
cogl2-path.h which changes cogl_path_new() so it can directly return a
CoglPath pointer; it no longer exposes a prototype for
cogl_{get,set}_path and all the remaining cogl_path_ functions now take
an explicit path as their first argument.
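Under the experimental API the usage then looks roughly like this
(sketch; function name is illustrative):

#define COGL_ENABLE_EXPERIMENTAL_2_0_API
#include <cogl/cogl.h>

/* The path is an explicit object passed to every cogl_path_* call,
 * instead of an implicit "current path". */
static void
fill_square (void)
{
  CoglPath *path = cogl_path_new ();

  cogl_path_rectangle (path, 0, 0, 100, 100);
  cogl_path_fill (path);

  cogl_object_unref (path);
}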
The idea is that we want to encourage developers to retain path objects
for as long as possible so they can take advantage of us uploading the
path geometry to the GPU. Currently although it is possible to start a
new path and query the current path, it is not convenient.
The other thing is that we want to get Cogl to the point where nothing
depends on a global, current context variable. This will allow us to one
day define a sensible threading model if/when that is ever desired.
We now prepend a set of defines to any given GLSL shader so that we can
define builtin uniforms/attributes within the "cogl" namespace that we
can use to provide compatibility across a range of the earlier versions
of GLSL.
This updates test-cogl-shader-glsl.c and test-shader.c so they no longer
need to special-case GLES vs GL when splicing together their shaders, as
well as the blur, colorize and desaturate effects.
To get a feel for the new, portable uniform/attribute names here are the
defines for OpenGL vertex shaders:
#define cogl_position_in gl_Vertex
#define cogl_color_in gl_Color
#define cogl_tex_coord_in gl_MultiTexCoord0
#define cogl_tex_coord0_in gl_MultiTexCoord0
#define cogl_tex_coord1_in gl_MultiTexCoord1
#define cogl_tex_coord2_in gl_MultiTexCoord2
#define cogl_tex_coord3_in gl_MultiTexCoord3
#define cogl_tex_coord4_in gl_MultiTexCoord4
#define cogl_tex_coord5_in gl_MultiTexCoord5
#define cogl_tex_coord6_in gl_MultiTexCoord6
#define cogl_tex_coord7_in gl_MultiTexCoord7
#define cogl_normal_in gl_Normal
#define cogl_position_out gl_Position
#define cogl_point_size_out gl_PointSize
#define cogl_color_out gl_FrontColor
#define cogl_tex_coord_out gl_TexCoord
#define cogl_modelview_matrix gl_ModelViewMatrix
#define cogl_modelview_projection_matrix gl_ModelViewProjectionMatrix
#define cogl_projection_matrix gl_ProjectionMatrix
#define cogl_texture_matrix gl_TextureMatrix
And for fragment shaders we have:
#define cogl_color_in gl_Color
#define cogl_tex_coord_in gl_TexCoord
#define cogl_color_out gl_FragColor
#define cogl_depth_out gl_FragDepth
#define cogl_front_facing gl_FrontFacing
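For example, a single vertex shader written against these names can be
compiled on both GL and GLES without any splicing (illustrative source,
stored as a C string the way the tests do):

static const char vertex_source[] =
  "void main (void)\n"
  "{\n"
  "  cogl_position_out = cogl_modelview_projection_matrix\n"
  "                      * cogl_position_in;\n"
  "  cogl_color_out = cogl_color_in;\n"
  "  cogl_tex_coord_out[0] = cogl_tex_coord0_in;\n"
  "}\n";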
The texture used for test-cogl-npot-texture only had 1 pixel of waste
and was scaled down, so the test would quite likely still pass if only
the top-left slice was rendered. It also didn't test using non-default
texture coordinates. These problems made it fail to pick up bug 2398.
The texture now uses the maximum amount of waste and is rendered in
four parts at 1:1 scale.
Previously in the tests/tools directory we built a disable-npots
library which was used as an LD_PRELOAD to trick Cogl into thinking
there is no NPOT texture extension. This is a little awkward to use, so
it seems much simpler to just define a COGL_DEBUG option to disable
NPOT textures.