Commit Graph

239 Commits

Author SHA1 Message Date
Emmanuele Bassi
b21de497d1 Fix compiler warning
Return a boolean value, not NULL.
2015-09-23 13:57:11 +01:00
Daniel Stone
5253264f2f GLES: Support glMapBufferRange from ES3
ES3 provides glMapBufferRange as core, with the added bonus that it also
supports read mappings. Use this where possible.

Signed-off-by: Daniel Stone <daniels@collabora.com>

https://bugzilla.gnome.org/show_bug.cgi?id=728355
2015-09-23 13:57:11 +01:00
Emmanuele Bassi
ca5226513e Initialize out variables
Avoids a compiler warning with strict compilation flags.
2015-09-03 08:58:44 +01:00
Emmanuele Bassi
f20cc24292 gl: Do not use deprecated constants with the GL3 driver
glGetIntegerv (GL_DEPTH_BITS, ...) and friends are deprecated in GL3; we
have to use glGetFramebufferAttachmentParameteriv() instead, like we do
for offscreen framebuffers.

Based on a patch by: Adel Gadllah <adel.gadllah@gmail.com>
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>

https://bugzilla.gnome.org/show_bug.cgi?id=753295
2015-08-06 16:16:30 +01:00
Emmanuele Bassi
15b952e03e Fix compiler warnings
Simple enumeration checks.
2015-06-10 14:10:34 +01:00
Jasper St. Pierre
f8cce5f6cb cogl-framebuffer-gl: Work again on GLESv2 2015-04-20 12:09:27 -07:00
Owen W. Taylor
d16df5a5aa Add support for setting up stereo CoglOnscreens
If we want to show quad-buffer stereo with Cogl, we need to pick an
appropriate fbconfig for creating the CoglOnscreen objects. Add
cogl_onscreen_template_set_stereo_enabled() to indicate whether
stereo support is needed.

Add cogl_framebuffer_get_is_stereo() to see if a framebuffer was
created with stereo support.

Add cogl_framebuffer_set_stereo_mode() to pick whether to draw to
the left, right, or both buffers.

Reviewed-by: Robert Bragg <robert.bragg@intel.com>
2014-07-17 19:27:05 -04:00
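
A rough usage sketch of the stereo API described above; the COGL_STEREO_LEFT/
COGL_STEREO_RIGHT values and exact signatures are assumptions taken from the
commit message rather than checked against a header:

  #include <cogl/cogl.h>

  /* Before creating the onscreen, ask for a stereo-capable fbconfig. */
  static void
  configure_stereo (CoglOnscreenTemplate *tmpl)
  {
    cogl_onscreen_template_set_stereo_enabled (tmpl, TRUE);
  }

  /* At draw time, select which back buffer(s) receive the geometry. */
  static void
  draw_stereo_frame (CoglFramebuffer *fb)
  {
    cogl_framebuffer_set_stereo_mode (fb, COGL_STEREO_LEFT);
    /* ... draw the left-eye view ... */

    cogl_framebuffer_set_stereo_mode (fb, COGL_STEREO_RIGHT);
    /* ... draw the right-eye view ... */

    cogl_framebuffer_set_stereo_mode (fb, COGL_STEREO_BOTH);
  }
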
Neil Roberts
52a646abc5 Use the EGL_KHR_surfaceless_context extension
The surfaceless context extension can be used to bind a context
without a surface. We can use this to avoid creating a dummy surface
when the CoglContext is first created. Otherwise we have to have the
dummy surface so that we can bind it before the first onscreen is
created.

The main awkward part of this patch is that theoretically according to
the GL and GLES spec if you first bind a context without a surface
then the default state for glDrawBuffers is GL_NONE instead of
GL_BACK. In practice Mesa doesn't seem to do this but we need to be
robust against a GL implementation that does. Therefore we track when
the CoglContext is first used with a CoglOnscreen and force the
glDrawBuffers state to be GL_BACK.

There is a further awkward part in that GLES2 doesn't actually have a
glDrawBuffers state but GLES3 does. GLES3 also defaults to GL_NONE in
this case so if GLES3 is available then we have to be sure to set the
state to GL_BACK. As far as I can tell that actually makes GLES3
incompatible with GLES2 because in theory if the application is not
aware of GLES3 then it should be able to assume the draw buffer is
always GL_BACK.

Reviewed-by: Robert Bragg <robert.bragg@intel.com>
(cherry picked from commit e5f28f1e75db9bdc4f2688f420a74f908f96cf76)

Conflicts:
	cogl/winsys/cogl-winsys-egl-kms.c
	cogl/winsys/cogl-winsys-egl-x11.c
2014-06-17 13:31:03 +01:00
Neil Roberts
eb7c37812a Add a COGL_EXT_IN_GLES3 option to specify extensions that are in GLES3
Some features that were previously available as an extension in GLES2
are now in core in GLES3 so we should be able to specify that with the
gles_availability mask of COGL_EXT_BEGIN so that GL implementations
advertising GLES3 don't have to additionally advertise the extension
for us to take advantage of it.

Reviewed-by: Robert Bragg <robert.bragg@intel.com>
(cherry picked from commit 4d892cd97558da61ba526f947ac0555ebab632d2)
2014-06-17 13:26:20 +01:00
Robert Bragg
a0441778ad This re-licenses Cogl 1.18 under the MIT license
Since the Cogl 1.18 branch is actively maintained in parallel with the
master branch; this is a counterpart to commit 1b83ef938fc16b which
re-licensed the master branch to use the MIT license.

This re-licensing is a follow-up to the proposal that was sent to the
Cogl mailing list:
http://lists.freedesktop.org/archives/cogl/2013-December/001465.html

Note: there was a copyright assignment policy in place for Clutter (and
therefore Cogl which was part of Clutter at the time) until the 11th of
June 2010 and so we only checked the details after that point (commit
0bbf50f905)

For each file, authors were identified via this Git command:
$ git blame -p -C -C -C20 -M -M10  0bbf50f905..HEAD

We received blanket approvals for re-licensing all Red Hat and Collabora
contributions which reduced how many people needed to be contacted
individually:
- http://lists.freedesktop.org/archives/cogl/2013-December/001470.html
- http://lists.freedesktop.org/archives/cogl/2014-January/001536.html

Individual approval requests were sent to all the other identified authors
who all confirmed the re-licensing on the Cogl mailing list:
http://lists.freedesktop.org/archives/cogl/2014-January

As well as updating the copyright header in all sources files, the
COPYING file has been updated to reflect the license change and also
document the other licenses used in Cogl such as the SGI Free Software
License B, version 2.0 and the 3-clause BSD license.

This patch was not simply cherry-picked from master, but the same
methodology was used to check the source files.
2014-02-22 02:02:53 +00:00
Neil Roberts
7bf0fe9df8 Don't dereference an uninitialised pointer in _cogl_container_of
The previous implementation was dereferencing the sample pointer in
order to get the offset to subtract from the member pointer. The
resulting value is then only used to get a pointer to the member in
order to calculate the offset so it doesn't actually read from the
memory location and shouldn't cause any problems. However this is
probably technically invalid and could have undefined behaviour. It
looks like clang takes advantage of this undefined behaviour and
doesn't actually offset the pointer. It also generates a warning when
it does this.

This patch splits the _cogl_container_of macro into two
implementations. Previously the macro was always used in the list
iterator macros like this:

SomeType *sample = _cogl_container_of(list_node, sample, link)

Instead of doing that there is now a new macro called
_cogl_list_set_iterator which explicitly assigns to the sample pointer
with an initial value before assigning to it again with the real
offset. This redundant initialisation gets optimised out by the compiler.

The second macro is still called _cogl_container_of but instead of
taking a sample pointer it just directly takes the type name. That way
it can use the standard offsetof macro.

https://bugzilla.gnome.org/show_bug.cgi?id=723530

Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1efed1e0a2bce706eb4901979ed4e717bb13e4e2)
2014-02-20 13:38:43 +00:00
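
The offsetof-based form described above is the standard container_of pattern
in C; a generic sketch (not the actual Cogl macro):

  #include <stddef.h>

  /* Recover a pointer to the enclosing struct from a pointer to one of its
   * members.  offsetof() computes the member offset from the type alone, so
   * no sample pointer is ever dereferenced or offset. */
  #define container_of(ptr, type, member) \
    ((type *) ((char *) (ptr) - offsetof (type, member)))

  typedef struct _ListNode { struct _ListNode *prev, *next; } ListNode;

  typedef struct {
    int value;
    ListNode link;    /* embedded list node */
  } Item;

  static Item *
  item_from_link (ListNode *node)
  {
    return container_of (node, Item, link);
  }
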
Neil Roberts
eb7ef457cb Add support for RG textures
This adds COGL_PIXEL_FORMAT_RG_88 and COGL_TEXTURE_COMPONENTS_RG in
order to support two-component textures. RG components for a
texture are only supported if COGL_FEATURE_ID_TEXTURE_RG is advertised.
This is only available on GL 3, GL 2 with the GL_ARB_texture_rg
extension or GLES with the GL_EXT_texture_rg extension. The RG pixel
format is always supported for images because Cogl can easily do the
conversion if an application uses this format to upload to a texture
with a different format.

If an application tries to create an RG texture when the feature isn't
supported then it will raise an error when the texture is allocated.

https://bugzilla.gnome.org/show_bug.cgi?id=712830

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 568677ab3bcb62ababad1623be0d6b9b117d0a26)

Conflicts:
	cogl/cogl-bitmap-packing.h
	cogl/cogl-types.h
	cogl/driver/gl/gl/cogl-driver-gl.c
	tests/conform/test-read-texture-formats.c
	tests/conform/test-write-texture-formats.c
2014-01-20 14:40:45 +00:00
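
A hedged sketch of requesting a two-component texture with the new enums; the
constructor without an internal_format argument follows the lazy-allocation
commits further down this log, so treat the exact calls as assumptions:

  /* Only ask for RG storage when the renderer advertises support. */
  if (cogl_has_feature (ctx, COGL_FEATURE_ID_TEXTURE_RG))
    {
      CoglTexture2D *tex = cogl_texture_2d_new_with_size (ctx, 256, 256);
      CoglError *error = NULL;

      cogl_texture_set_components (COGL_TEXTURE (tex),
                                   COGL_TEXTURE_COMPONENTS_RG);

      /* An unsupported RG request is reported at allocation time. */
      if (!cogl_texture_allocate (COGL_TEXTURE (tex), &error))
        g_warning ("RG texture allocation failed: %s", error->message);
    }
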
Neil Roberts
5085919acc pipeline-cache: Prune old unused pipelines when the cache gets too big
Previously when a pipeline is added to the cache it would never be
removed. If the application is generating a lot of unique pipelines
this can end up effectively leaking a large number of resources
including the GL program objects. Arguably this isn't really a problem
because if the application is generating that many unique pipelines
then it is doing something wrong anyway. It also implies that it will
be recompiling shaders very often so the cache leaking will likely be
the least of the problems.

This patch makes it keep track of which pipelines in the cache are in
use. The cache now returns a struct representing the entry instead of
directly returning the pipeline. This entry contains a usage counter
which the pipeline backends can use to mark when there is a pipeline
alive that is using the cache entry. When the hash table decides that
it's a good time to prune some entries, it will make a list of all of
the pipelines that are not in use and then remove the least recently
used half of the pipelines. That way it is less likely to remove
pipelines that the application is actually regenerating often even if
they aren't in use all of the time.

When the cache is pruned the hash table makes a note of how small the
cache could be if it removed all of the unused pipelines. The hash
table starts pruning when there are more entries than twice this
minimum expected size. The idea is that if that case is hit then the
hash table is more than half full of useless pipelines so the
application is generating lots of redundant pipelines and it is a good
time to remove them.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit c21aac22992bb7fef5a8d0913130b8245e67f2eb)

Conflicts:
	cogl/driver/gl/cogl-pipeline-fragend-glsl.c
	cogl/driver/gl/cogl-pipeline-progend-glsl.c
	cogl/driver/gl/cogl-pipeline-vertend-glsl.c
	cogl/driver/gl/gl/cogl-pipeline-fragend-arbfp.c
2014-01-14 12:05:17 +00:00
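
The eviction policy described above (drop the least recently used half of the
entries that are not currently in use) might look roughly like this; the data
structures here are illustrative, not the actual cogl-pipeline-cache code:

  #include <glib.h>

  typedef struct
  {
    void *pipeline;       /* hash table key for this entry */
    int usage_count;      /* > 0 while a backend still uses the entry */
    gint64 last_used;     /* frame counter or monotonic timestamp */
  } CacheEntry;

  static gint
  compare_last_used (gconstpointer a, gconstpointer b)
  {
    const CacheEntry *ea = *(CacheEntry *const *) a;
    const CacheEntry *eb = *(CacheEntry *const *) b;
    return (ea->last_used > eb->last_used) - (ea->last_used < eb->last_used);
  }

  /* Collect the unused entries, sort them oldest-first and evict half. */
  static void
  prune_unused_entries (GHashTable *cache)
  {
    GPtrArray *unused = g_ptr_array_new ();
    GHashTableIter iter;
    gpointer key, value;
    guint i;

    g_hash_table_iter_init (&iter, cache);
    while (g_hash_table_iter_next (&iter, &key, &value))
      {
        CacheEntry *entry = value;
        if (entry->usage_count == 0)
          g_ptr_array_add (unused, entry);
      }

    g_ptr_array_sort (unused, compare_last_used);

    for (i = 0; i < unused->len / 2; i++)
      {
        CacheEntry *entry = g_ptr_array_index (unused, i);
        g_hash_table_remove (cache, entry->pipeline);
      }

    g_ptr_array_free (unused, TRUE);
  }
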
Robert Bragg
af7398788a remove internal_format and redundant error arguments
Texture allocation is now consistently handled lazily such that the
internal format can now be controlled using
cogl_texture_set_components() and cogl_texture_set_premultiplied()
before allocating the texture with cogl_texture_allocate(). This means
that the internal_format arguments to texture constructors are now
redundant, and since most of the texture constructors now can't ever fail,
the error arguments are also redundant. This now means we no longer
use CoglPixelFormat in the public api for describing the internal format
of textures, which had been a bad solution originally due to how specific
CoglPixelFormat is, which is misleading when we don't support such
explicit control over the internal format.

Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 99a53c82e9ab0a1e5ee35941bf83dc334b1fbe87)

Note: there are numerous API changes for functions currently marked
as 'unstable' which we don't think are in use by anyone depending on
a stable 1.x api. Compared to the original patch though this avoids
changing the cogl_texture_rectangle_new_with_size() api which we know
is used by Mutter.
2014-01-09 15:49:47 +00:00
Robert Bragg
cbd6951134 introduce texture loaders to make allocations lazy
This introduces the internal idea of texture loaders that track the
state for loading and allocating a texture. This defers a lot more work
until the texture is allocated.

There are several intentions to this change:

- provides a means for extending how textures are allocated without
  requiring all the parameters to be supplied in a single _texture_new()
  function call.

- allow us to remove the internal_format argument from all
_texture_new() apis since using CoglPixelFormat is a bad way of
  expressing the internal format constraints because it is too specific.

  For now the internal_format arguments haven't actually been removed
  but this patch does introduce replacement apis for controlling the
  internal format:

    cogl_texture_set_components() lets you specify what components your
    texture needs when it is allocated.
    cogl_texture_set_premultiplied() lets you specify whether a texture
    data should be interpreted as premultiplied or not.

- Enable us to support asynchronous texture loading + allocation in the
  future.

Of note, the _new_from_data() texture constructors all continue to
allocate textures immediately so that existing code doesn't need to be
adapted to manage the lifetime of the data being uploaded.

Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6a83de9ef4210f380a31f410797447b365a8d02c)

Note: Compared to the original patch, the ->premultiplied state for
textures isn't forced to be %TRUE in _cogl_texture_init since that
effectively ignores the user's explicitly given internal_format, which was
a mistake and on master that change should have been made in the patch
that followed. The gtk-doc comments for cogl_texture_set_premultiplied()
and cogl_texture_set_components() have also been updated in-line with
this fix.
2014-01-09 15:49:46 +00:00
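
A small sketch of the replacement flow: configure the components and
premultiplication on an unallocated texture, then allocate explicitly
(signatures assumed from the description above):

  /* Nothing is uploaded or allocated by the constructor itself... */
  CoglTexture2D *tex = cogl_texture_2d_new_with_size (ctx, width, height);

  /* ...so the internal format can still be influenced here. */
  cogl_texture_set_components (COGL_TEXTURE (tex),
                               COGL_TEXTURE_COMPONENTS_RGBA);
  cogl_texture_set_premultiplied (COGL_TEXTURE (tex), TRUE);

  /* Allocation happens explicitly (or implicitly on first use). */
  cogl_texture_allocate (COGL_TEXTURE (tex), NULL);
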
Robert Bragg
99cdcc9e3c texture: make cogl_texture_get_format api private
CoglPixelFormat is not a good way of describing the internal
format of a texture because it's too specific given that we don't
actually have exact knowledge of the internal format used by the driver.

This makes cogl_texture_get_format private and in the future we'll
provide a better way of querying the channels and their precision.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit ffde82981f22bd0185a7f33e1e6e1479f4c295b8)

Note: Since we can't break API compatibility on the 1.x branch this adds
a cogl/deprecated/cogl-texture-deprecated.c file with a
cogl_texture_get_format() wrapper around the private api. This also
moves the cogl_texture_get_rowstride() and cogl_texture_ref/unref()
functions that were previously deprecated into cogl-texture-deprecated.c
2014-01-09 15:49:35 +00:00
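
The compatibility shim mentioned in the note would be only a few lines; a
sketch (the private function name is a guess):

  /* deprecated/cogl-texture-deprecated.c, sketch: keep the 1.x symbol as a
   * thin wrapper around the now-private query. */
  CoglPixelFormat
  cogl_texture_get_format (CoglTexture *texture)
  {
    return _cogl_texture_get_format (texture);
  }
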
Robert Bragg
96ed01cc18 framebuffer: defer calculating level size until allocation
The plan is to defer more of the work for creating a texture until
allocation time, but that means we won't be able to always assume
we can query the size of a texture when creating an offscreen
framebuffer from a texture (consider for example using
_texture_new_from_file() where the size isn't known until the file has
been loaded). This defers needing to know the size of the texture
underlying an offscreen framebuffer until calling
cogl_framebuffer_allocate().

Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 9688e7dc1eeae3144729dfd4a4bf409620346bf4)
2014-01-09 15:29:30 +00:00
Robert Bragg
2e61318914 framebuffer: make format internal
This removes cogl_framebuffer_get_color_format() since the actual
internal format isn't strictly controlled by us. CoglFramebuffer::format
has been renamed to ::internal_format to make it clearer that it only
really represents the premultiplication status.

The plan is to make most of the work involved in creating a texture
happen lazily when allocating so this patch also changes
_cogl_framebuffer_init() to not take a format argument anymore since we
won't know the format of offscreen framebuffers until the framebuffer is
allocated, after the corresponding texture has been allocated. In the
case of offscreen framebuffers we now update the framebuffer
internal_format during allocation.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 8cc9e1c8bd2fac8b2a95087249c23c952d5e379f)

Note: Since we can't break API compatibility on the 1.x branch this
actually keeps the cogl_framebuffer_get_color_format() api but moves it
into a new deprecated/cogl-framebuffer-deprecated.c file and it now
returns the newly name ::internal_format.
2014-01-09 15:29:30 +00:00
Neil Roberts
24412798dc Revert "Revert "Remove the framebuffer's stack of clip stacks""
This reverts commit bc41489336.

The reason this was causing problems for Clutter is that it defines
COGL_ENABLE_EXPERIMENTAL_2_0_API which is meant to cause the Cogl
headers not to declare the deprecated API. The reverted patch moved
some additional clipping API to a deprecated header which was
previously being used by Clutter. Clutter was still successfully
compiling but with some warnings for the missing function
declarations. However when the binary is run the clipping would get
completely messed up because it would assume all of the arguments to
the functions are integers instead of floats and the wrong values
would be passed.

Clutter now has a commit to make it use the 2.0 API instead of the
deprecated functions so the revert is no longer necessary.

https://git.gnome.org/browse/clutter/commit?id=705640367a5c2ae21405806bfa

Reviewed-by: Robert Bragg <robert@linux.intel.com>
2013-12-04 17:22:01 +00:00
Jasper St. Pierre
bc41489336 Revert "Remove the framebuffer's stack of clip stacks"
This reverts commit ae9cd7ca01.

Pushing this for now so we can get gnome-shell working again without
memory corruption. Let's push a proper fix later for everybody.
2013-12-02 23:32:48 -05:00
Neil Roberts
ae9cd7ca01 Remove the framebuffer's stack of clip stacks
There used to be a function called cogl_clip_stack_save in the public
API which was used when temporarily switching to an offscreen buffer
to save the clip state. This is no longer necessary because each
framebuffer has its own clip stack anyway so the function was removed
in master. However the code to maintain the stack of stacks was
retained. This patch removes it in an effort to simplify the code.

On the 1.18 branch this function is deprecated and the documentation
says that it does nothing. However that is incorrect because it does
actually push the clip stack. I think it would be safe to backport
this patch to the 1.18 branch and actually make it do nothing like it
is documented to do.

https://bugzilla.gnome.org/show_bug.cgi?id=719546
(cherry picked from commit 8655027fdcf03b02fcbbb02d179a0a88ed79c5b3)

This patch has some extra changes while backporting to the 1.18
branch. Here the cogl-clip-state file still contained some deprecated
functions. Instead of deleting the file completely it has been moved
to the deprecated folder. The declarations for these functions have
been moved from cogl1-context.h to a new deprecated/cogl-clip-state.h
header.

Conflicts:
	cogl/Makefile.am
	cogl/cogl-clip-state.c

Reviewed-by: Robert Bragg <robert@linux.intel.com>
2013-11-29 16:35:58 +00:00
Neil Roberts
c5644723f8 Use COGL_FLAGS_* for the context's private feature flags
Previously the private feature flags were stored in an enum and we
already had 31 flags. Adding the 32nd flag would presumably make it
add -2³¹ as one of the values which might cause problems. To avoid
this we'll just use an fixed-size array of longs and use indices for
the enum values like we do for the public features.

A slight complication with this is in the CoglDriverDescription where
we were previously using a statically initialised value to describe the set
of features that the driver supports. We can't easily do this with the
flags array so instead the features are stored in a fixed-size array
of indices.

Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d94cb984e3c93630f3c2e6e3be9d189672aa20f3)

Conflicts:
	cogl/cogl-context-private.h
	cogl/cogl-context.c
	cogl/cogl-private.h
	cogl/cogl-renderer.c
	cogl/driver/gl/cogl-pipeline-opengl.c
	cogl/driver/gl/gl/cogl-driver-gl.c
	cogl/driver/gl/gl/cogl-pipeline-progend-fixed-arbfp.c
	cogl/driver/gl/gles/cogl-driver-gles.c
	cogl/driver/nop/cogl-driver-nop.c
2013-11-28 18:12:22 +00:00
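
The index-based flags array is the usual "bit array of unsigned longs"
pattern; a generic sketch (macro names are illustrative, not necessarily the
real COGL_FLAGS_* ones):

  #include <limits.h>

  #define BITS_PER_LONG (sizeof (unsigned long) * CHAR_BIT)

  /* Number of longs needed to hold one bit per feature index. */
  #define FLAGS_N_LONGS_FOR_SIZE(size) \
    (((size) + BITS_PER_LONG - 1) / BITS_PER_LONG)

  #define FLAGS_SET(array, flag) \
    ((array)[(flag) / BITS_PER_LONG] |= (1UL << ((flag) % BITS_PER_LONG)))

  #define FLAGS_GET(array, flag) \
    (!!((array)[(flag) / BITS_PER_LONG] & (1UL << ((flag) % BITS_PER_LONG))))

  /* e.g.:
   *   unsigned long features[FLAGS_N_LONGS_FOR_SIZE (N_PRIVATE_FEATURES)];
   *   FLAGS_SET (features, PRIVATE_FEATURE_FOO);
   */
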
Hans Petter Jansson
a8e04a7d6b Add API to control per-framebuffer depth writing
Add framebuffer methods cogl_framebuffer_[gs]et_depth_write_enabled()
and backend bits to pass the state on to glDepthMask().

This allows us to enable or disable depth writing per-framebuffer, which
if disabled saves us some work in glClear(). When rendering, the flag
is combined with the pipeline's depth writing flag using a logical AND.

Depth writing is enabled by default.

https://bugzilla.gnome.org/show_bug.cgi?id=709827

Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 71406438c5357eb4e0ef03e940c5456a536602a0)
2013-10-28 16:34:58 +00:00
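
Usage from the application side is a single call per framebuffer; a sketch
assuming the signatures implied by the commit message:

  /* Skip depth writes on this framebuffer; clearing the depth buffer and
   * draws that don't need depth output become cheaper. */
  cogl_framebuffer_set_depth_write_enabled (fb, FALSE);

  /* The effective GL depth mask at draw time is the logical AND of this
   * flag and the pipeline's own depth-writing flag. */
  if (!cogl_framebuffer_get_depth_write_enabled (fb))
    {
      /* ... draw passes that don't touch the depth buffer ... */
    }
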
Neil Roberts
4bc7377727 Add the cogl_point_coord snippet builtin
This adds a #define for gl_PointCoord to all shaders so that it can be
accessed with a name in the Cogl namespace.

Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c28fc054788e88627bcc2346f4c4c368870ff777)
2013-09-02 16:22:08 +01:00
Neil Roberts
cb178b7a3a Always add the #version pragma to shaders
Previously we would only add the #version pragma to shaders when
point sprite texture coordinates are enabled for a layer so that we
can access the gl_PointCoord builtin. However I don't think there's
any good reason not to just always request GLSL version 1.2 if it's
available. That way applications can always use gl_PointCoord without
having to enable point sprite texture coordinates.

This adds a glsl_version_to_use member to CoglContext which is used to
generate the #version pragma as part of the shader boilerplate. On
desktop GL this is set to 120 if version 1.2 is available, otherwise
it is left at 110. On GLES it is always left as 100.

Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e4dfe8b07e8af111ecbcb0da20ff2a2875a2b5d0)

Conflicts:
	cogl/driver/gl/gl/cogl-driver-gl.c
2013-09-02 16:22:01 +01:00
Robert Bragg
29f08ef124 webgl: use DEPTH_STENCIL_ATTACHMENT
WebGL doesn't allow you to separately attach buffers to the
STENCIL_ATTACHMENT and DEPTH_ATTACHMENT framebuffer attachment points
and instead requires you to use the DEPTH_STENCIL_ATTACHMENT whenever
you want a depth and stencil buffer.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit ec7b6360c9c4e45e0b113f9dca7bb1502e7e93be)
2013-08-23 15:25:44 +01:00
Robert Bragg
34658ea057 generalize driver description and selection
This adds a table of driver descriptions to cogl-renderer.c in order of
preference and when choosing what driver to use we now iterate the table
instead of repeating boilerplate checks. For handling the "default driver"
that can be specified when building cogl and handling driver overrides
there is a foreach_driver_description() that will make sure to iterate
the default driver first or if an override has been set then nothing but
the override will be considered.

This patch introduces some driver flags that let us broadly categorize
what kind of GL driver we are currently running on. Since there are
numerous OpenGL apis with different broad feature sets and new apis
may be introduced in the future by Khronos then we should tend to
avoid using the driver id to do runtime feature checking. These flags
provide a more stable quantity for broad feature checks.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit e07d0fc7441dddc3f0a2bc33a6a37d62ddc3efc0)
2013-08-23 14:51:43 +01:00
Chun-wei Fan
45288c5d6d cogl-texture-gl.c: Don't include strings.h unconditionally
Use the HAVE_STRINGS_H check before we include strings.h, as it is not
universally available.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit ff65144c84a16f9470d3f3931dc91cc9a6ef5938)
2013-08-19 22:44:45 +01:00
Robert Bragg
04d943579b gl: bind position attribute to location 0
Full GL treats the position attribute specially and requires that it
be bound to generic attribute location 0, unlike GLES 2.0 or
GL 3.2 core. We now make sure to unconditionally bind the
cogl_position_in attribute to location 0 before linking any glsl program
in cogl.

For reference the relevant part of the GL 3.0 spec that covers these
semantics is Section 2.7 "Vertex Specification" pg 27

After this change there was one remaining problem in
test-custom-attributes where the test_short_verts() test was using its
own "pos" attribute instead of cogl_position_in and so cogl wasn't able
to ensure it would be bound to location 0.

This updates the test to use cogl_position_in but to work around the
fact that glVertexPointer doesn't support UNSIGNED_SHORT components we
force the test to use the glsl backend by setting a shader snippet on
the pipeline.

https://bugs.freedesktop.org/show_bug.cgi?id=67548

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 992ef7b3b49ebb56adde2133bb36330c04133a3f)
2013-08-19 22:44:45 +01:00
Robert Bragg
7365c3aa77 Separate out CoglPath api into sub-library
This splits out the cogl_path_ api into a separate cogl-path sub-library
like cogl-pango and cogl-gst. This enables developers to build Cogl with
this sub-library disabled if they don't need it which can be useful when
its important to keep the size of an application and its dependencies
down to a minimum. The functions cogl_framebuffer_{fill,stroke}_path
have been renamed to cogl_path_{fill,stroke}.

There were a few places in core cogl and cogl-gst that referenced the
CoglPath api and these have been decoupled by using the CoglPrimitive
api instead. In the case of cogl_framebuffer_push_path_clip() the core
clip stack no longer accepts path clips directly but it's now possible
to get a CoglPrimitive for the fill of a path and so the implementation
of cogl_framebuffer_push_path_clip() now lives in cogl-path and works as
a shim that first gets a CoglPrimitive and uses
cogl_framebuffer_push_primitive_clip instead.

We may want to consider renaming cogl_framebuffer_push_path_clip to
put it in the cogl_path_ namespace.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 8aadfd829239534fb4ec8255cdea813d698c5a3f)

So as to avoid breaking the 1.x API or even the ABI, since we are quite
late in the 1.16 development cycle, the patch was modified to build
cogl-path as a noinst_LTLIBRARY before building cogl and link the code
directly into libcogl.so as it was previously. This way we can wait
until the start of the 1.18 cycle before splitting the code into a
separate libcogl-path.so.

This also adds shims for cogl_framebuffer_fill/stroke_path() to avoid
breaking the 1.x API/ABI.
2013-08-19 22:44:35 +01:00
Robert Bragg
e9f721216e Add _primitive_draw to replace _framebuffer_draw_primitive
When splitting out the CoglPath api we saw that we would be left with
inconsistent drawing apis if the drawing apis in core Cogl were lumped
into the cogl_framebuffer_ api, considering that other Cogl sub-libraries,
or others wanting to create higher level drawing apis outside of
Cogl, can't use the same namespace.

So that we can aim for a more consistent style this adds a
cogl_primitive_draw() api, comparable to cogl_path_fill() or
cogl_pango_show_layout(), that's intended to replace
cogl_framebuffer_draw_primitive().

Note: the attribute and rectangle drawing apis are still in the
cogl_framebuffer_ namespace and this might potentially change but in
these cases there is no single object representing the thing being drawn
so it seems more reasonable that they live in the framebuffer
namespace for now.

Note: the cogl_framebuffer_draw_primitive() api isn't removed by this
patch so it can more conveniently be cherry picked to the 1.16 branch so
we can mark it deprecated for a short while. Even though it's marked as
experimental api we know that there are people using the api so we'd
like to give them a chance to switch to the new api.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 418912b93ff81a47f9b38114d05335ab76277c48)

Conflicts:
	cogl-pango/cogl-pango-display-list.c
	cogl/Makefile.am
	cogl/cogl-framebuffer.c
	cogl/cogl-pipeline-layer-state.h
	cogl/cogl2-path.c
	cogl/driver/gl/cogl-clip-stack-gl.c
2013-07-29 18:31:36 +01:00
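
In code the change reads as a before/after; the cogl_primitive_draw()
argument order below is an assumption:

  /* Before: framebuffer-centric drawing call that this commit deprecates. */
  cogl_framebuffer_draw_primitive (fb, pipeline, primitive);

  /* After: the drawn object owns the verb, matching cogl_path_fill() and
   * cogl_pango_show_layout(). */
  cogl_primitive_draw (primitive, fb, pipeline);
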
Robert Bragg
6f480c7530 texture: remove _cogl_texture_prepare_for_upload
This removes the gl centric _cogl_texture_prepare_for_upload api from
cogl-texture.c and instead adds a _cogl_bitmap_convert_for_upload() api
which everything now uses instead. GL specific code that needed the gl
internal/format/type enums returned by _cogl_texture_prepare_for_upload
now uses ->pixel_format_to_gl directly.

Since there was a special case optimization in
cogl_texture_new_from_file that aimed to avoid copying the temporary
bitmap that's created for the given file and allow conversions to
happen in-place the new _cogl_bitmap_convert_for_upload() api supports
converting in place depending on a 'can_convert_in_place' argument.

This ability to convert bitmaps in-place has been integrated across the
different components as appropriate.

In updating cogl-texture-2d-sliced.c it was also possible to remove a
number of other GL specific parts of how spans are set up.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit e190dd23c655da34b9c5c263a9f6006dcc0413b0)

Conflicts:
	cogl/cogl-auto-texture.c
	cogl/cogl.symbols
2013-07-29 16:31:44 +01:00
Neil Roberts
2f4d66f950 Don't create a layer when enabling texture coordinate attributes
When a primitive is drawn with an attribute that contains texture
coordinates Cogl will fetch the corresponding layer in order to
determine the unit number. However if the pipeline didn't actually
have a layer it would end up redundantly creating it. It's probably
not a good idea to be modifying the pipeline while flushing the
attributes state so this patch makes it pass the no-create flag to the
get_layer function and then skips enabling the attribute if the
layer didn't already exist.

https://bugzilla.gnome.org/show_bug.cgi?id=702570

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 7507ad1a55a2aeb5beb8c0e3343e1e1f2805ddde)
2013-07-01 13:36:56 +01:00
Neil Roberts
2933423ca7 Don't generate GLSL for the point size for default pipelines
Previously on GLES2, where there is no builtin point size uniform,
we would always add a line to the vertex shader to write to the
builtin point size output because when generating the shader it is not
possible to determine if the pipeline will be used to draw points or
not. This patch changes it so that the default point size is 0.0f
which is documented to have undefined results when drawing points.
That way we can avoid adding the point size code to the shader in that
case. The assumption is that any application that is drawing points
will probably have explicitly set the point size on the pipeline
anyway so it is not a big deal to change the default size from 1.0f.

This adds a new pipeline state flag to track whether the point size is
non-zero. This needs to be its own state because altering it needs to
cause a different shader to be added to the pipeline cache. The state
flags that affect the vertex shader have been changed from a constant
to a runtime function because they will be different depending on
whether there is a builtin point size uniform.

There is also a unit test to ensure that changing the point size does
or doesn't generate a new shader depending on the values.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit b2eba06e16b587acbf5c57944a70ceccecb4f175)

Conflicts:
	cogl/cogl-pipeline-private.h
	cogl/cogl-pipeline-state-private.h
	cogl/cogl-pipeline-state.c
	cogl/cogl-pipeline.c
2013-06-21 14:18:37 +01:00
Neil Roberts
534e535a28 Use the Wayland embedded linked list implementation instead of BSD's
This removes cogl-queue.h and adds a copy of Wayland's embedded list
implementation. The advantage of the Wayland model is that it is much
simpler and so it is easier to follow. It also doesn't require
defining a typedef for every list type.

The downside is that there is only one list type which is a
doubly-linked list where the head has a pointer to both the beginning
and the end. The BSD implementation has many more combinations, some of
which we were taking advantage of to reduce the size of critical
structs where we didn't need a pointer to the end of the list.

The corresponding changes to uses of cogl-queue.h are:

• COGL_STAILQ_* was used for onscreen the list of events and dirty
  notifications. This makes the size of the CoglContext grow by one
  pointer.

• COGL_TAILQ_* was used for fences.

• COGL_LIST_* for CoglClosures. In this case the list head now has an
  extra pointer which means CoglOnscreen will grow by the size of
  three pointers, but this doesn't seem like a particularly important
  struct to optimise for size anyway.

• COGL_LIST_* was used for the list of foreign GLES2 offscreens.

• COGL_TAILQ_* was used for the list of sub stacks in a
  CoglMemoryStack.

• COGL_LIST_* was used to track the list of layers that haven't had
  code generated yet while generating a fragment shader for a
  pipeline.

• COGL_LIST_* was used to track the pipeline hierarchy in CoglNode.

The last part is a bit more controversial because it increases the
size of CoglPipeline and CoglPipelineLayer by one pointer in order to
have the redundant tail pointer for the list head. Normally we try to
be very careful about the size of the CoglPipeline struct. Because
CoglPipeline is slice-allocated, this effectively ends up adding two
pointers to the size because GSlice rounds up to the size of two
pointers.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 13abf613b15f571ba1fcf6d2eb831ffc6fa31324)

Conflicts:
	cogl/cogl-context-private.h
	cogl/cogl-context.c
	cogl/driver/gl/cogl-pipeline-fragend-glsl.c
	doc/reference/cogl-2.0-experimental/Makefile.am
2013-06-13 13:45:47 +01:00
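
The Wayland model is a doubly-linked ring with the node embedded in each
element; a generic sketch of the pattern (not the imported cogl-list code
itself):

  /* One node type serves as both list head and element link; the head's
   * prev/next pointers give O(1) access to both ends. */
  typedef struct _Link
  {
    struct _Link *prev;
    struct _Link *next;
  } Link;

  static void
  link_init (Link *head)
  {
    head->prev = head;
    head->next = head;
  }

  static void
  link_insert_before (Link *pos, Link *elem)
  {
    elem->prev = pos->prev;
    elem->next = pos;
    pos->prev->next = elem;
    pos->prev = elem;
  }

  static void
  link_remove (Link *elem)
  {
    elem->prev->next = elem->next;
    elem->next->prev = elem->prev;
    elem->prev = elem->next = NULL;
  }
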
Neil Roberts
ed510dbe6d Use a GList instead of a BSD list for CoglPipelineSnippetList
Previously CoglPipelineSnippetList was using the BSD embedded list
type with a mini struct to combine the list node with a pointer to the
snippet. This is effectively equivalent to just using a GList so we
might as well do that. This will help if we eventually want to get rid
of cogl-queue.h

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 54a168f3c7829c427d54ab517533bb9f7384d022)
2013-06-13 13:45:46 +01:00
Neil Roberts
6d51a18e7c Add support for per-vertex point sizes
This adds a new function to enable per-vertex point size on a
pipeline. This can be set with
cogl_pipeline_set_per_vertex_point_size(). Once enabled the point size
can be set either by drawing with an attribute named
'cogl_point_size_in' or by writing to the 'cogl_point_size_out'
builtin from a snippet.

There is a feature flag which must be checked for before using
per-vertex point sizes. This will only be set on GL >= 2.0 or on GLES
2.0. GL will only let you set a per-vertex point size from GLSL by
writing to gl_PointSize. This is only available in GL2 and not in the
older GLSL extensions.

The per-vertex point size has its own pipeline state flag so that it
can be part of the state that affects vertex shader generation.

Having to enable the per vertex point size with a separate function is
a bit awkward. Ideally it would work like the color attribute where
you can just set it for every vertex in your primitive with
cogl_pipeline_set_color or set it per-vertex by just using the
attribute. This is harder to get working with the point size because
we need to generate a different vertex shader depending on what
attributes are bound. I think if we wanted to make this work
transparently we would still want to internally have a pipeline
property describing whether the shader was generated with per-vertex
support so that it would work with the shader cache correctly.
Potentially we could make the per-vertex property internal and
automatically make a weak pipeline whenever the attribute is bound.
However we would then also need to automatically detect when an
application is writing to cogl_point_size_out from a snippet.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 8495d9c1c15ce389885a9356d965eabd97758115)

Conflicts:
	cogl/cogl-context.c
	cogl/cogl-pipeline-private.h
	cogl/cogl-pipeline.c
	cogl/cogl-private.h
	cogl/driver/gl/cogl-pipeline-progend-fixed.c
	cogl/driver/gl/gl/cogl-pipeline-progend-fixed-arbfp.c
2013-06-07 16:53:29 +01:00
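
A hedged sketch of the per-vertex path described above; the attribute and
function names come from the commit message, while the feature ID and the
cogl_attribute_new() layout are assumptions:

  if (cogl_has_feature (ctx, COGL_FEATURE_ID_PER_VERTEX_POINT_SIZE))
    {
      CoglError *error = NULL;
      CoglAttribute *point_size;

      /* Opt the pipeline in to reading the size from an attribute. */
      cogl_pipeline_set_per_vertex_point_size (pipeline, TRUE, &error);

      /* One float per vertex, packed tightly in its own buffer. */
      point_size = cogl_attribute_new (attribute_buffer,
                                       "cogl_point_size_in",
                                       sizeof (float), /* stride */
                                       0,              /* offset */
                                       1,              /* n_components */
                                       COGL_ATTRIBUTE_TYPE_FLOAT);
      /* ... attach to a primitive and draw with COGL_VERTICES_MODE_POINTS ... */
    }
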
Robert Bragg
eb7fafe700 tests: Adds our first white-box unit test
This adds a white-box unit test that verifies that GL_BLEND is disabled
when drawing an opaque rectangle, enabled when drawing a transparent
rectangle and then disabled again when drawing a transparent rectangle
but with a blend string that effectively disables blending.

This shares the test utilities and launcher infrastructure we are using
for conformance tests so we get consistent reporting and so unit tests
will be run against a range of different drivers.

This adds a --enable-unit-tests configure option which is enabled by
default but if disabled will make all UNIT_TESTS() into static inline
functions that we should expect the compiler to discard since they won't
be referenced by anything.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 9047cce06bbf9051ec77e622be2fdbb96ed767a8)
2013-06-06 21:27:16 +01:00
Robert Bragg
8f9151303d pipeline: improve real_blend_enable checks
Since _cogl_pipeline_update_blend_enable() can sometimes show up quite
high in profiles; instead of calling
_cogl_pipeline_update_blend_enable() whenever we change pipeline state
that may affect blending we now just set a dirty flag and when we flush
a pipeline we check this dirty flag and lazily calculate whether blender
really needs to be enabled if it's set.

Since it turns out we were too optimistic in assuming most GL drivers
would recognize blending with ADD(src,0) is equivalent to disabling
GL_BLEND we now check this case ourselves so we can always explicitly
disable GL_BLEND if we know we don't need blending.

This introduces the idea of an 'unknown_color_alpha' boolean to the
pipeline flush code which is set whenever we can't guarantee that the
color attribute is opaque. For example this is set whenever a user
specifies a color attribute with 4 components when drawing a primitive.
This boolean needs to be cached along with every pipeline because
pipeline::real_blend_enabled depends on this and so we need to also call
_cogl_pipeline_update_blend_enable() if the status of this changes.

Incidentally with this patch we now no longer ever use
_cogl_pipeline_set_blend_enable() internally. For now the internal api
hasn't been removed though since we might want to consider re-purposing
it as a public api since it will now not conflict with our own internal
state tracking and could provide a more convenient way to disable
blending than setting a blend string.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit ab2ae18f3207514c91fa6fd9f2d3f2ed93a86497)
2013-06-06 21:27:09 +01:00
Daniel Stone
ea7d3b8476 Add fence API
cogl_framebuffer_add_fence creates a synchronisation fence, which will
invoke a user-specified callback when the GPU has finished executing all
commands provided to it up to that point in time.

Support is currently provided for GL 3.x's GL_ARB_sync extension, and
EGL's EGL_KHR_fence_sync (when used with OpenGL ES).

Signed-off-by: Daniel Stone <daniel@fooishbar.org>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>

https://bugzilla.gnome.org/show_bug.cgi?id=691752

(cherry picked from commit e6d37470da9294adc1554c0a8c91aa2af560ed9f)
2013-05-28 21:36:03 +01:00
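
A usage sketch for the fence API; the callback type and exact signature are
assumptions based on the description above:

  /* Invoked once the GPU has executed everything submitted to this
   * framebuffer before the fence was added. */
  static void
  on_fence_complete (CoglFence *fence, void *user_data)
  {
    g_message ("GPU caught up with submitted commands");
  }

  /* ... */
  cogl_framebuffer_add_fence (fb, on_fence_complete, NULL);
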
Robert Bragg
2fa7b5573d gl: ensure depth isn't masked during clear
If a pipeline has been flushed that disables depth writing and then we
try to clear the framebuffer with cogl_framebuffer_clear4f, passing
COGL_BUFFER_BIT_DEPTH then we need to make sure that depth writing is
re-enabled before issuing the glClear call. We also need to make sure
that when the next primitive is flushed that we re-check what state the
depth mask should be in.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 3cf497042897d1aa6918bc55b71a36ff67e560b9)
2013-03-06 16:45:31 +00:00
Neil Roberts
a131b697d9 Add the layer's sampler and uniform declarations at the start
Previously the sampler uniform declarations such as cogl_sampler0 were
generated by walking the list of layers in the shader state. This had
two problems. Firstly it would only generate the declarations for
layers that have been referenced. If a layer has a combine mode of
replace then the samplers from previous layers couldn't be used by
custom snippets. Secondly it meant that the samplers couldn't be
referenced by functions in the declarations sections because the
samplers are declared too late.

This patch fixes it to generate the layer declarations in the backend
start function using all of the layers on the pipeline instead. In
addition it adds the sampler declarations to the vertex shader as they
were previously missing.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 1824df902bbb9995cae6ffb7a413913f2df35eef)

Conflicts:
	cogl/driver/gl/cogl-pipeline-fragend-glsl.c
	cogl/driver/gl/cogl-pipeline-vertend-glsl.c
2013-02-27 15:53:43 +00:00
Neil Roberts
956d39ac30 Add fragment and vertex snippet hooks for global declarations
This adds hook points to add global function and variable declarations
to either the fragment or vertex shader. The declarations can then be
used by subsequent snippets. Only the ‘declarations’ string of the
snippet is used and the code is directly put in the global scope near
the top of the shader.

The reason this is necessary rather than just adding a normal snippet
with the declarations is that for the other hooks Cogl assumes that
the snippets are independent of each other. That means if a snippet
has a replace string then it will assume that it doesn't even need to
generate the code for earlier hooks which means the global
declarations would be lost.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit ebb82d5b0bc30487b7101dc66b769160b40f92ca)
2013-02-27 14:43:55 +00:00
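
A sketch of using the new hook to declare a shared GLSL helper; the hook name
COGL_SNIPPET_HOOK_FRAGMENT_GLOBALS is assumed from the description:

  /* Declare a helper once, in the global scope of the fragment shader, so
   * later snippets can call it even if they use replace strings. */
  CoglSnippet *globals =
    cogl_snippet_new (COGL_SNIPPET_HOOK_FRAGMENT_GLOBALS,
                      /* declarations: placed near the top of the shader */
                      "float\n"
                      "brightness (vec4 color)\n"
                      "{\n"
                      "  return dot (color.rgb, vec3 (0.299, 0.587, 0.114));\n"
                      "}\n",
                      /* post string: unused for this hook */
                      NULL);

  cogl_pipeline_add_snippet (pipeline, globals);
  cogl_object_unref (globals);
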
Neil Roberts
fc86d0e12e win32: Minor build fixes for building for win32
This fixes some minor errors and warnings that were preventing Cogl
building with mingw32:

• cogl-framebuffer-gl.c was not including cogl-texture-private.h.
  Presumably something else ends up including that when building for
  GLX.

• The WGL winsys was not including cogl-error-private.h

• A call to strsplit in the WGL winsys was wrong.

• For some reason the test-wrap-rectangle-textures test was trying to
  include the GDKPixbuf header.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 5380343399f834d9f96ca3b137d49c9c2193900a)
2013-02-21 15:20:55 +00:00
Neil Roberts
29983a7e2c buffer: Don't set the invalidate hint when requesting read access
glMapBufferRange is documented to fail with GL_INVALID_OPERATION if
GL_MAP_INVALIDATE_BUFFER_BIT is set as well as GL_MAP_READ_BIT. I
guess this makes sense when only read access is requested because
there would be no point in reading back uninitialised data. However,
Clutter requests read/write access with the discard hint when
rendering to a CoglBitmap with Cairo. The data is new so the discard
hint makes sense but it also needs read access so that it can read
back the data it just wrote for blending.

This patch works around the GL restriction by skipping the discard
hints if read access is requested. If the buffer discard hint is set
along with read access it will recreate the buffer store as an
alternative way to discard the buffer as it does in the case where the
GL_ARB_map_buffer_range extension is not supported.

https://bugzilla.gnome.org/show_bug.cgi?id=694164

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 986675d6043e8701f2d65415cf72ffc91734debd)
2013-02-19 15:03:30 +00:00
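
From the application side this is the read/write-plus-discard combination
described above; a sketch (constant and function names are recalled from the
CoglBuffer API and should be treated as assumptions):

  /* Map a CoglBitmap's backing buffer for read/write while hinting that the
   * previous contents can be thrown away; the workaround above keeps this
   * valid even where GL forbids combining GL_MAP_READ_BIT with the
   * invalidate flag. */
  CoglBuffer *buffer = COGL_BUFFER (cogl_bitmap_get_buffer (bitmap));
  void *data = cogl_buffer_map (buffer,
                                COGL_BUFFER_ACCESS_READ_WRITE,
                                COGL_BUFFER_MAP_HINT_DISCARD);

  if (data != NULL)
    {
      /* ... draw with Cairo into 'data', blending reads included ... */
      cogl_buffer_unmap (buffer);
    }
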
Neil Roberts
f79e78f648 Don't #ifdef the call to glDiscardFramebuffer
When Cogl is compiled with support for both the GL and GLES drivers it
only includes the GL header and not the GLES header. That means in
that case it would not compile in the code for the
GL_EXT_discard_framebuffer extension even though it could be used on
the GLES driver. This patch makes it use the standard names for the
GL_COLOR, GL_STENCIL etc names instead of the _EXT suffixed names and
manually defines them if we are using the GLES headers. That way the
discard code can be used unconditionally.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 59c30292d0f3c28d6e0e08bc5bf3b4b10545d856)
2013-02-13 12:05:26 +00:00
Neil Roberts
7d9e075917 Bind the framebuffer before calling glDiscardFramebuffer
This patch just adds a call to _cogl_framebuffer_flush_state to ensure
the correct framebuffer is bound before discarding its buffers.
Previously it would presumably just discard the buffers of whatever
framebuffer happened to be used last.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 37c390a5b33d4f65ff6c834e9be2f8de716635ee)
2013-02-13 12:05:22 +00:00
Neil Roberts
1e39819c49 Fix a clear of an array allocated with alloca which had the wrong size
The array allocated for storing the difference flags for each layer in
cogl-pipeline-opengl.c was being cleared with the size of a pointer
instead of the size actually allocated for the array. Presumably this
would mean that if there is more than one layer it wouldn't clear the
array properly.

Also the size of the array was slightly wrong because it was allocating
the size of a pointer for each layer instead of the size of an
unsigned long.

This was originally reported by Jasper St. Pierre on #clutter.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 1e134dd7cd5317651be158a483c7cb2723ce8869)
2013-02-08 12:20:32 +00:00
Neil Roberts
da7971f6be Don't set GL_TEXTURE_MAX_LEVEL on GLES
GL_TEXTURE_MAX_LEVEL is not supported on GLES so we can't set it. It
looks like Mesa was letting us get away with this but on other drivers
it may cause errors. The enum is not defined in the GLES headers so it
was failing to compile unless the GL driver is also enabled.

The test-texture-mipmap-get-set test is now marked as n/a on GLES2
because it can't support limiting the sampled mipmaps.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit ba51c393818582b058f5f1e66cf8d13835ad10e5)

Conflicts:
	tests/conform/test-conform-main.c
2013-01-25 18:21:09 +00:00
Neil Roberts
5d6160c751 Replace some #if HAVE_COGL_GL lines with #ifdef
This was generating warnings when the GL driver is disabled.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit f26682dcc04642fed9db959c63d6c6e4261d2148)

Conflicts:
	cogl/cogl-auto-texture.c
2013-01-25 18:21:09 +00:00