On GLES2, we need to specify an array size for the texture coord
varying array. Previously this size would be decided in one of the
following ways:
- For generated vertex shaders it is always the number of layers in
the pipeline.
- For generated fragment shaders it is the highest sampled texture
unit in the pipeline or the number of attributes supplied by the
primitive, whichever is higher.
- For user shaders it is usually the number of attributes supplied by
the primitive. However, if the application tries to compile the
shader and query the result before using it, it will always be at
least 4.
These shaders can quite easily end up with different values for the
declaration, which makes the program fail to link. This patch changes
it so that
all of the shaders are generated with the maximum of the number of
texture attributes supplied by the primitive and the number of layers
in the pipeline. If this value changes then the shaders are
regenerated, including user shaders. That way all of the shaders will
always have the same value.
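As a rough sketch of the idea (the variable and varying names here are
illustrative rather than the exact strings Cogl emits), every
generated and user shader now ends up with the same declaration:
  /* Both stages use the same size so the shaders always link */
  int tex_coord_array_size = MAX (n_tex_coord_attribs, n_layers);
  g_string_append_printf (shader_source,
                          "varying vec4 _cogl_tex_coord[%d];\n",
                          tex_coord_array_size);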
https://bugzilla.gnome.org/show_bug.cgi?id=662184
Reviewed-by: Robert Bragg <robert@linux.intel.com>
It seems that cogl-context-private.h needs to be included before including
any of the pipeline-related stuff to avoid build errors on C89 compilers.
This appears to be a side effect of the recent cogl-pipeline
decoupling.
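For example (the second header is just a representative
pipeline-related header):
  #include "cogl-context-private.h" /* must come first */
  #include "cogl-pipeline-private.h"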
Reviewed-by: Robert Bragg <robert@linux.intel.com>
The GL or GLES library is now dynamically loaded by the CoglRenderer
so that it can choose between GL, GLES1 and GLES2 at runtime. The
library is loaded by the renderer because it needs to be done before
calling eglInitialize. There is a new environment variable called
COGL_DRIVER to choose between gl, gles1 or gles2.
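A minimal sketch of the selection, using illustrative field and
library names; the real logic lives in CoglRenderer and runs before
eglInitialize:
  const char *driver = g_getenv ("COGL_DRIVER");
  const char *lib = "libGL.so.1";
  if (g_strcmp0 (driver, "gles2") == 0)
    lib = "libGLESv2.so";
  else if (g_strcmp0 (driver, "gles1") == 0)
    lib = "libGLESv1_CM.so";
  /* dlopen the chosen library via GModule */
  renderer->libgl_module = g_module_open (lib, G_MODULE_BIND_LAZY);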
The #ifdefs for HAVE_COGL_GL, HAVE_COGL_GLES and HAVE_COGL_GLES2 have
been changed so that they no longer assume the defines are mutually
exclusive. They haven't been removed entirely so that it's possible to
compile the GLES backends without the enums from the GL headers.
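In other words, code that previously relied on only one of the
defines being set is now guarded per backend, along these lines:
  #ifdef HAVE_COGL_GL
  /* code and enums that need the full GL headers */
  #endif
  #ifdef HAVE_COGL_GLES2
  /* GLES2-specific code */
  #endif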
When using GLX the winsys additionally dynamically loads libGL because
that also contains the GLX API. It can't be linked in directly because
that would probably conflict with the GLES API if EGL is selected.
When compiling with EGL support, Cogl links directly to libEGL because
that library doesn't contain any GL API, so it shouldn't cause any
conflicts.
When building for WGL or OSX, Cogl still links directly against the GL
API, so there is a #define in config.h telling Cogl not to try to
dlopen the library.
Cogl-pango previously had an #ifdef to detect when the GL backend was
in use so that it could sneakily pass GL_QUADS to
cogl_vertex_buffer_draw. This is now changed so that it queries the
CoglContext for the backend. However, to get this to work Cogl now
needs to export the _cogl_context_get_default symbol and cogl-pango
needs some extra -I flags so that it can include
cogl-context-private.h.
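The check now looks roughly like this sketch (the driver enum value is
an assumption):
  CoglContext *ctx = _cogl_context_get_default ();
  /* Only the full GL driver supports GL_QUADS */
  gboolean use_quads = (ctx->driver == COGL_DRIVER_GL);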
cogl-ext-functions.h now contains definitions for all of the core GL
and GLES functions that we would normally link to directly. All of the
code has been changed to access them through the cogl context pointer.
The
GE macro now takes an extra parameter to specify the context because
the macro itself needs to make GL calls but various points in the Cogl
source use different names for the context variable.
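Call sites therefore pass whatever the local context variable happens
to be called at that point, for example:
  GE (ctx, glBindTexture (GL_TEXTURE_2D, gl_handle));
  GE (context, glDisable (GL_BLEND));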
When uploading the layer matrix to GL it wasn't first calling
glActiveTexture to set the right texture unit for the
layer. This would end up setting the texture matrix on whatever layer
happened to be previously active. This happened to work for
test-cogl-multitexture presumably because it was coincidentally
setting the layer matrix on the last used layer.
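A sketch of the corrected order, with illustrative variable names:
  /* Select the layer's texture unit first so the matrix ends up on
     the right layer */
  GE (ctx, glActiveTexture (GL_TEXTURE0 + unit_index));
  GE (ctx, glMatrixMode (GL_TEXTURE));
  GE (ctx, glLoadMatrixf (layer_matrix_floats));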
The CoglDebugFlags are now stored in an array of unsigned ints rather
than a single variable. The flags are accessed using macros instead of
directly peeking at the cogl_debug_flags variable. The index values
are stored in the enum rather than the actual mask values so that the
enum doesn't need to be more than 32 bits wide. The hope is that the
code to determine the index into the array can be optimized out by the
compiler so it should have exactly the same performance as the old
code.
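A sketch of the scheme with illustrative names; since the flag is
normally a compile-time constant, the index arithmetic should fold
away:
  extern unsigned int _cogl_debug_flags[];
  /* The enum stores bit indices, not masks, so it stays within 32
     bits; the macro finds the right unsigned int and bit */
  #define COGL_DEBUG_ENABLED(flag) \
    (!!(_cogl_debug_flags[(flag) / (sizeof (unsigned int) * 8)] & \
        (1U << ((flag) % (sizeof (unsigned int) * 8)))))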
The lighting parameters such as the diffuse and ambient colors were
previously only flushed in the fixed vertend. This meant that if a
vertex shader was used then they would not be set. The lighting
parameters are uniforms which are just as useful in a fragment shader
so it doesn't really make sense to set them in the vertend. They are
now flushed in the common cogl-pipeline-opengl code but the code is
#ifdef'd for GLES2 because they need to be part of the progend in that
case.
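Roughly, with illustrative names and ignoring the exact #ifdef
structure:
  /* The fixed-function flush is skipped on GLES2, where the progend
     uploads the values as uniforms instead */
  if (ctx->driver != COGL_DRIVER_GLES2)
    GE (ctx, glMaterialfv (GL_FRONT_AND_BACK, GL_DIFFUSE,
                           lighting_state->diffuse));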
The GLSL vertend is really only useful for GLES2. The fixed function
vertend is kept at a higher priority than the GLSL vertend so the
latter is unlikely to be used in any other circumstances.
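A sketch of the ordering, with illustrative names:
  /* Vertends are tried in priority order, so the fixed function
     vertend wins whenever it can handle the pipeline and the GLSL
     vertend is effectively only reached on GLES2 */
  static const CoglPipelineVertend *vertends[] =
    {
      &_cogl_pipeline_fixed_vertend,
      &_cogl_pipeline_glsl_vertend
    };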
The vertends are intended to flush state that would be represented in
a vertex program. Code to handle the layer matrix, lighting and
point size has now been moved from the common cogl-pipeline-opengl
backend to the fixed vertend.
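For example, the point size flush now lives in the fixed vertend,
roughly like this (field names are illustrative):
  if (pipeline->point_size != ctx->point_size_cache)
    {
      GE (ctx, glPointSize (pipeline->point_size));
      ctx->point_size_cache = pipeline->point_size;
    }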