...for all of the copying of the pre-configured headers for Cogl. This
makes it much easier for people using the projects to build Cogl to
clean up the "generated" files; this commit is the last part of that
work. Also clean up the property sheets as a result.
Also fix the Cogl project so that it looks for headers in
cogl/deprecated, which fixes the build.
Similar updates to the Visual Studio 2010 Projects will follow.
Since the Cogl 1.18 branch is actively maintained in parallel with the
master branch, this is a counterpart to commit 1b83ef938fc16b which
re-licensed the master branch to use the MIT license.
This re-licensing is a follow up to the proposal that was sent to the
Cogl mailing list:
http://lists.freedesktop.org/archives/cogl/2013-December/001465.html
Note: there was a copyright assignment policy in place for Clutter (and
therefore Cogl, which was part of Clutter at the time) until the 11th
of June 2010, and so we only checked the details after that point
(commit 0bbf50f905).
For each file, authors were identified via this Git command:
$ git blame -p -C -C -C20 -M -M10 0bbf50f905..HEAD
We received blanket approvals for re-licensing all Red Hat and Collabora
contributions which reduced how many people needed to be contacted
individually:
- http://lists.freedesktop.org/archives/cogl/2013-December/001470.html
- http://lists.freedesktop.org/archives/cogl/2014-January/001536.html
Individual approval requests were sent to all the other identified
authors, who all confirmed the re-licensing on the Cogl mailing list:
http://lists.freedesktop.org/archives/cogl/2014-January
As well as updating the copyright header in all sources files, the
COPYING file has been updated to reflect the license change and also
document the other licenses used in Cogl such as the SGI Free Software
License B, version 2.0 and the 3-clause BSD license.
This patch was not simply cherry-picked from master, but the same
methodology was used to check the source files.
Not doing so leads to the following error, if stddef.h is not included
indirectly through EGL headers:
| libdrm/drm.h:132:2: error: unknown type name 'size_t'
| size_t name_len; /**< Length of name buffer */
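A minimal sketch of the kind of fix this implies, assuming the
relevant winsys source simply includes <stddef.h> before the EGL
headers (the exact file and include order here are assumptions, not
the actual patch):

  #include <stddef.h>  /* provides size_t before libdrm/drm.h needs it */
  #include <EGL/egl.h>
  #include <EGL/eglext.h>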
Signed-off-by: Andreas Oberritter <obi@saftware.de>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 55c82476a93366a3e7d1a2537fccc3a7aab87c66)
The _cogl_egl_texture_2d_new_from_image function has a CoglError
argument which implies that it is unlike the other texture
constructors and returns errors immediately rather than having a
delayed-allocation mechanism. cogl_wayland_texture_2d_new_from_buffer,
which calls it, is also like this. We can't rely on delayed-allocation
semantics for this without changing the applications, because the
texture needs to be allocated before the corresponding EGLImage is
destroyed. This patch just makes it allocate immediately.
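As a hedged illustration of what that means for a caller (the variable
names and the error handling are assumptions, not code from the
patch):

  CoglError *error = NULL;
  CoglTexture2D *tex =
    cogl_wayland_texture_2d_new_from_buffer (ctx, buffer, &error);

  /* With this patch the texture is already allocated at this point, so
   * the corresponding EGLImage/wl_buffer can safely go away without
   * losing the texture contents. */
  if (!tex)
    handle_error (error); /* hypothetical error handler */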
A better patch might be to remove the error argument to make it
obvious that there are delayed-allocation semantics and then fix all
of the applications.
This was breaking Cogland and Mutter.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 0206c03d54823b2f6cbb2aa420d07a4db9bcd8a3)
The previous implementation was dereferencing the sample pointer in
order to get the offset to subtract from the member pointer. The
resulting value is then only used to get a pointer to the member in
order to calculate the offset, so it doesn't actually read from the
memory location and shouldn't cause any problems. However, this is
probably technically invalid and could have undefined behaviour. It
looks like clang takes advantage of this undefined behaviour and
doesn't actually offset the pointer. It also generates a warning when
it does this.
This patch splits the _cogl_container_of macro into two
implementations. Previously the macro was always used in the list
iterator macros like this:
SomeType *sample = _cogl_container_of(list_node, sample, link)
Instead of doing that there is now a new macro called
_cogl_list_set_iterator which explicitly assigns to the sample pointer
with an initial value before assigning to it again with the real
offset. This redundant initialisation gets optimised out by the
compiler.
The second macro is still called _cogl_container_of but instead of
taking a sample pointer it just directly takes the type name. That way
it can use the standard offsetof macro.
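To make the shape of the change concrete, here is a rough sketch of
the two macros as described above; the actual definitions in
cogl/cogl-list.h may differ in detail:

  /* Takes the type name directly and uses the standard offsetof macro,
   * so no sample pointer is involved at all. */
  #define _cogl_container_of(ptr, type, member) \
    ((type *) ((char *) (ptr) - offsetof (type, member)))

  /* Gives the iterator a harmless initial value first so that the
   * offset calculation in the second assignment is done on a pointer
   * that points at real memory; the redundant store is optimised out
   * by the compiler. */
  #define _cogl_list_set_iterator(pos, iterator, member) \
    ((iterator) = (void *) (pos), \
     (iterator) = (void *) ((char *) (iterator) - \
                            ((char *) &(iterator)->member - \
                             (char *) (iterator))))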
https://bugzilla.gnome.org/show_bug.cgi?id=723530
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1efed1e0a2bce706eb4901979ed4e717bb13e4e2)
The declaration of INTEL_swap_event was treating winsys features as
if they were a bitfield, but they aren't. The end result was that
instead of reporting two features when INTEL_swap_event is present,
we reported none.
https://bugzilla.gnome.org/show_bug.cgi?id=719741
Reviewed-by: Robert Bragg <robert@linux.intel.com>
If the URI argument given on the command line doesn't look like a URI
then the basic video player will now try to parse it as a GStreamer
pipeline description. The pipeline must contain a coglsink element
which the player will look for. This can be used to test Cogl GST. For
example, to test NV12 this can be used:
./cogl-basic-video-player 'videotestsrc !
video/x-raw,format=(string)NV12 !
coglsink'
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 49e47c8329d50774e365fc7c3a7504b5fc005dc7)
Previously, if no context was set on the CoglGstVideoSink then it
would call gst_caps_ref with a NULL pointer. This patch makes it just
return
NULL instead. I think that is a valid thing to do because that is what
gst_base_sink_default_get_caps does. If we don't do this then it's not
possible to use CoglGstVideoSink with GstParse because that tries to
link the pipeline after parsing the string. That was previously
causing a critical error because the freshly parsed sink doesn't have
a CoglContext.
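A hedged sketch of the resulting get_caps behaviour (the function and
field names below are illustrative rather than copied from the
source):

  static GstCaps *
  cogl_gst_video_sink_get_caps (GstBaseSink *bsink, GstCaps *filter)
  {
    CoglGstVideoSink *sink = COGL_GST_VIDEO_SINK (bsink);

    if (sink->priv->ctx == NULL)
      return NULL; /* same as gst_base_sink_default_get_caps */

    return gst_caps_ref (sink->priv->caps);
  }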
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Reviewed-by: Lionel Landwerlin <llandwerlin@gmail.com>
(cherry picked from commit cf26da2964e372c9fe5bd6da060a57006a83af38)
This adds a cogl-gst renderer to decode NV12 data. NV12 is split into
two buffers, one for the luma component and another for the two
chrominance components at a quarter of the resolution. The second
buffer is uploaded to a two-component RG texture. RG-component
textures are only supported if COGL_FEATURE_ID_TEXTURE_RG is
advertised by Cogl so the NV12 cap is also only advertised when that
is available.
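As a hedged illustration of the second-plane upload (variable names,
sizes and error handling are assumptions, not the renderer's actual
code):

  if (cogl_has_feature (ctx, COGL_FEATURE_ID_TEXTURE_RG))
    {
      /* The chroma plane of an NV12 frame is half width/half height
       * with interleaved U and V, so it maps onto an RG texture. */
      CoglTexture2D *chroma_tex =
        cogl_texture_2d_new_from_data (ctx,
                                       frame_width / 2, frame_height / 2,
                                       COGL_PIXEL_FORMAT_RG_88,
                                       chroma_rowstride,
                                       chroma_plane_data,
                                       &error);
      if (!chroma_tex)
        handle_error (error); /* hypothetical */
    }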
Based on a patch by Lionel Landwerlin which was in turn based on a
patch from Edward Hervey and Matthieu Bouron for Clutter-Gst:
https://bugzilla.gnome.org/show_bug.cgi?id=712832
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Reviewed-by: Lionel Landwerlin <llandwerlin@gmail.com>
(cherry picked from commit 2c619216964b46aab313be3ef1c405dfc720d258)
Commit 99a53c82e9ab0a1e5 removed the internal format argument when
uploading a video frame to a texture so that the format will just be
determined automatically from the image format. However, this also
leaves the premultiplied state at the default, which is TRUE.
That means that when we upload RGBA data Cogl will do a premultiplied
conversion on the CPU. We probably don't want to be putting a CPU
conversion in the way of video frames so this patch changes it to set
the premultiplied state to FALSE on the textures and then do the
premultiplied conversion in the shader.
This is particularly important for AYUV which uses the alpha channel
for the V component so doing a premultiplied conversion on the CPU
just creates garbage and messes up the image.
The RGB and RGBA renderers have each been split into two: one that
uses GLSL and one that uses a regular pipeline. The RGBA pipeline
without GLSL is then changed to use 2 layers so that we can do the
premultiplied conversion in the second layer with a special layer
combine string.
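A hedged sketch of the two pieces involved (the combine string shown
is only illustrative, not the one used by the renderer):

  /* Upload the frame as straight (non-premultiplied) alpha so that
   * Cogl doesn't convert it on the CPU... */
  cogl_texture_set_premultiplied (COGL_TEXTURE (frame_tex), FALSE);

  /* ...and premultiply on the GPU in a second layer instead. */
  cogl_pipeline_set_layer_combine (pipeline, 1,
                                   "RGB = MODULATE (PREVIOUS, PREVIOUS[A]) "
                                   "A = REPLACE (PREVIOUS[A])",
                                   NULL);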
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 07a57f26596c72507035369c90ed6d62568330b5)
The Cogl 1.x API exports cogl_set_path() and cogl_get_path(), which
means that the regular expression needs to catch those two symbols
as well.
https://bugzilla.gnome.org/show_bug.cgi?id=722765
Reviewed-by: Neil Roberts <neil@linux.intel.com>
The -L option makes curl follow redirections. This is needed for
downloading glext.h because khronos.org is using a redirect.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 85baaef4a4f4fd3a03c7c9f05002eae483ddd6b3)
Since 248a76f5eac7e5ae4fb45208577f9a55360812a7 cogl.h can no longer be
included in internal source files, so the WGL winsys was no longer
compiling.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 91af97a2a27ab5ad3e7eaabebd03503b685d4d42)
This adds COGL_PIXEL_FORMAT_RG_88 and COGL_TEXTURE_COMPONENTS_RG in
order to support two-component textures. RG components for a texture
are only supported if COGL_FEATURE_ID_TEXTURE_RG is advertised. This is
only available on GL 3, GL 2 with the GL_ARB_texture_rg extension, or
GLES with the GL_EXT_texture_rg extension. The RG pixel
format is always supported for images because Cogl can easily do the
conversion if an application uses this format to upload to a texture
with a different format.
If an application tries to create an RG texture when the feature isn't
supported then it will raise an error when the texture is allocated.
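For example, a hedged sketch of requesting RG storage and catching the
allocation failure (variable names and the error handling are
assumptions):

  CoglError *error = NULL;
  CoglTexture2D *tex = cogl_texture_2d_new_with_size (ctx, 256, 256);

  cogl_texture_set_components (COGL_TEXTURE (tex),
                               COGL_TEXTURE_COMPONENTS_RG);

  if (!cogl_texture_allocate (COGL_TEXTURE (tex), &error))
    {
      /* Reached when COGL_FEATURE_ID_TEXTURE_RG isn't available */
      g_warning ("RG texture allocation failed: %s", error->message);
      cogl_error_free (error);
    }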
https://bugzilla.gnome.org/show_bug.cgi?id=712830
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 568677ab3bcb62ababad1623be0d6b9b117d0a26)
Conflicts:
cogl/cogl-bitmap-packing.h
cogl/cogl-types.h
cogl/driver/gl/gl/cogl-driver-gl.c
tests/conform/test-read-texture-formats.c
tests/conform/test-write-texture-formats.c
The following changes are made to the documentation for CoglTexture:
• The description of the default value for the components property is
changed to say that it is always RGBA for textures created with the
‘_with_size’ constructors. Previously it said that the default is based
on the pixel format used the first time data is set on the texture,
but this is only true if the data is set using a constructor.
• Added documentation for the CoglTextureComponents enum.
• Changed it to say that it _specifies_ what components are required
for sampling rather than determinging [sic] them.
• Added ‘Since: 1.18’ to
cogl_texture_{set,get}_{components,premultiplied}
• Changed the since tag for CoglTextureError from 2.0 to 1.8.
• Added documentation for COGL_TEXTURE_ERROR_{FORMAT,TYPE}.
• Added the following to the cogl2-sections.txt file:
COGL_TEXTURE_ERROR
CoglTextureError
cogl_texture_allocate
cogl_texture_set_components
cogl_texture_get_components
cogl_texture_set_premultiplied
cogl_texture_get_premultiplied
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 2f12c4329c519fa14b927b2dcd708dddcc903c32)
Previously, when a pipeline was added to the cache it would never be
removed. If the application is generating a lot of unique pipelines
this can end up effectively leaking a large number of resources,
including the GL program objects. Arguably this isn't really a problem
because if the application is generating that many unique pipelines
then it is doing something wrong anyway. It also implies that it will
be recompiling shaders very often so the cache leaking will likely be
the least of the problems.
This patch makes it keep track of which pipelines in the cache are in
use. The cache now returns a struct representing the entry instead of
directly returning the pipeline. This entry contains a usage counter
which the pipeline backends can use to mark when there is a pipeline
alive that is using the cache entry. When the hash table decides that
it's a good time to prune some entries, it will make a list of all of
the pipelines that are not in use and then remove the least recently
used half of the pipelines. That way it is less likely to remove
pipelines that the application is actually regenerating often even if
they aren't in use all of the time.
When the cache is pruned the hash table makes a note of how small the
cache could be if it removed all of the unused pipelines. The hash
table starts pruning when there are more entries than twice this
minimum expected size. The idea is that if that case is hit then the
hash table is more than half full of useless pipelines so the
application is generating lots of redundant pipelines and it is a good
time to remove them.
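A rough sketch of the entry idea described above (the field and type
names here are guesses at the shape, not copied from the pipeline
cache source):

  typedef struct
  {
    CoglPipeline *pipeline;

    /* Incremented by the pipeline backends while a live pipeline is
     * using this entry; only entries whose count is zero are
     * candidates when the cache decides to prune. */
    unsigned int usage_count;
  } CoglPipelineCacheEntry;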
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c21aac22992bb7fef5a8d0913130b8245e67f2eb)
Conflicts:
cogl/driver/gl/cogl-pipeline-fragend-glsl.c
cogl/driver/gl/cogl-pipeline-progend-glsl.c
cogl/driver/gl/cogl-pipeline-vertend-glsl.c
cogl/driver/gl/gl/cogl-pipeline-fragend-arbfp.c
This fixes the cogl_texture_get_components() prototype to have a return
type of CoglTextureComponents instead of CoglBool, which was probably
a copy-and-paste error.
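In other words, the prototype now reads:

  CoglTextureComponents
  cogl_texture_get_components (CoglTexture *texture);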
(cherry picked from commit 55b09f8a939db71ee5ff41afa0ed08cbe937a4ec)
This updates the cogl_texture_rectangle_new_with_size() api in line with
master to be consistent with other texture constructors. This removes
the internal_format and error arguments and allows the texture to be
allocated lazily, which means the texture can be configured with apis
like cogl_texture_set_components() and cogl_texture_set_premultiplied()
before it is allocated.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
test-path, for example, uses COGL_ENABLE_EXPERIMENTAL_2_0_API to
access the 2.0 CoglPath api but also uses deprecated framebuffer apis
such as cogl_push/pop_framebuffer. Similarly, clutter-backend.c builds
with
COGL_ENABLE_EXPERIMENTAL_2_0_API and uses cogl_set_framebuffer().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This moves the framebuffer stack apis out of cogl-framebuffer.c into
cogl/deprecated/cogl-framebuffer-deprecated.c so cogl-framebuffer.c is
more in-line with the master branch to ease cherry picking patches.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This moves all of the automagic texture constructor prototypes from
cogl-texture.h into a new deprecated/cogl-auto-texture.h file. This also
moves cogl_texture_new_from_sub_texture() into
deprecated/cogl-auto-texture.c
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This makes sure we install a cogl-gst-1.0.pc and
cogl-gst-2.0-experimental.pc file, consistent with other sub-libraries,
so that cogl-1.x packages can be parallel-installed with cogl master.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This makes sure we install a cogl-gles2-1.0.pc and
cogl-gles2-2.0-experimental.pc file, consistent with other
sub-libraries, so that cogl-1.x packages can be parallel-installed with
cogl master.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Texture allocation is now consistently handled lazily such that the
internal format can now be controlled using
cogl_texture_set_components() and cogl_texture_set_premultiplied()
before allocating the texture with cogl_texture_allocate(). This means
that the internal_format arguments to texture constructors are now
redundant, and since most of the texture constructors now can't ever
fail, the error arguments are also redundant. This also means we no
longer use CoglPixelFormat in the public api for describing the
internal format of textures, which had been a bad solution originally
because CoglPixelFormat is too specific and is therefore misleading
when we don't support such explicit control over the internal format.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 99a53c82e9ab0a1e5ee35941bf83dc334b1fbe87)
Note: there are numerous API changes for functions currently marked
as 'unstable', which we don't think are in use by anyone depending on
a stable 1.x api. Compared to the original patch, though, this avoids
changing the cogl_texture_rectangle_new_with_size() api, which we know
is used by Mutter.
This introduces the internal idea of texture loaders that track the
state for loading and allocating a texture. This defers a lot more work
until the texture is allocated.
There are several intentions to this change:
- provides a means for extending how textures are allocated without
requiring all the parameters to be supplied in a single _texture_new()
function call.
- allows us to remove the internal_format argument from all
_texture_new() apis since using CoglPixelFormat is a bad way of
expressing the internal format constraints because it is too specific.
For now the internal_format arguments haven't actually been removed
but this patch does introduce replacement apis for controlling the
internal format:
cogl_texture_set_components() lets you specify what components your
texture needs when it is allocated.
cogl_texture_set_premultiplied() lets you specify whether a texture's
data should be interpreted as premultiplied or not.
- Enable us to support asynchronous texture loading + allocation in the
future.
Of note, the _new_from_data() texture constructors all continue to
allocate textures immediately so that existing code doesn't need to be
adapted to manage the lifetime of the data being uploaded.
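As a hedged usage sketch of the two new calls together (the
constructor, size and components chosen here are arbitrary examples):

  /* Nothing is allocated yet; just record what we want. */
  CoglTexture2D *tex = cogl_texture_2d_new_with_size (ctx, 64, 64);

  cogl_texture_set_components (COGL_TEXTURE (tex),
                               COGL_TEXTURE_COMPONENTS_A);
  cogl_texture_set_premultiplied (COGL_TEXTURE (tex), FALSE);

  /* Allocation can be done explicitly, or it will happen implicitly
   * the first time the texture is needed. */
  cogl_texture_allocate (COGL_TEXTURE (tex), NULL);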
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6a83de9ef4210f380a31f410797447b365a8d02c)
Note: Compared to the original patch, the ->premultipled state for
textures isn't forced to be %TRUE in _cogl_texture_init, since that
effectively ignores the user's explicitly given internal_format, which
was a mistake; on master that change should have been made in the patch
that followed. The gtk-doc comments for cogl_texture_set_premultiplied()
and cogl_texture_set_components() have also been updated in-line with
this fix.
When reading a texture back by first wrapping it as an offscreen
framebuffer and using _read_pixels_into_bitmap() we now make sure the
offscreen framebuffer has an internal format that matches the
meta-texture being read, not that of the current sub-texture being
iterated. In the case of atlas textures the sub-texture is a shared
texture whose format doesn't reflect the premultiplied alpha status of
individual atlas-textures, nor whether the alpha component is valid.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6ee425d4f10acd8b008a2c17e5c701fc1d850f59)
CoglPixelFormat is not a good way of describing the internal
format of a texture because it's too specific, given that we don't
actually have exact knowledge of the internal format used by the driver.
This makes cogl_texture_get_format private and in the future we'll
provide a better way of querying the channels and their precision.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit ffde82981f22bd0185a7f33e1e6e1479f4c295b8)
Note: Since we can't break API compatibility on the 1.x branch this adds
a cogl/deprecated/cogl-texture-deprecated.c file with a
cogl_texture_get_format() wrapper around the private api. This also
moves the cogl_texture_get_rowstride() and cogl_texture_ref/unref()
functions that were previously deprecated into cogl-texture-deprecated.c
This defers checking the internal format and whether accelerated
migration is supported until allocating the texture.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 83b05cbe3969789bc3ec78480c0937a6722efbf1)
Instead of throwing a CoglError exception if an application tries to
allocate a zero-size atlas texture, this makes that a programmer
error.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit d3eaeedc86d408669b81d6c43ef2b0ab9d859c85)
The plan is to defer a lot more work in creating a texture until
allocation time. This means that for some texture backends we might not
know until after allocation whether the texture is sliced or can support
hardware repeating. This makes sure we trigger an allocation if either
of these is queried.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 4868582812dbcd5125495b312d858f751fc31e9d)
The plan is to defer a lot more work in creating a texture until
allocation time. This means we won't be able to assume that all textures
being used to render must have already been allocated when data was
specified.
The latest point at which we will generally require a texture to be
allocated will be when we need to know the underlying GL handle for a
texture and so this updates cogl_texture_get_gl_texture() to ensure the
texture is allocated.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 59f6fefc37524f492512a71b831760a218d9bb95)
The plan is to defer more of the work for creating a texture until
allocation time, but that means we won't be able to always assume
we can query the size of a texture when creating an offscreen
framebuffer from a texture (consider for example using
_texture_new_from_file() where the size isn't known until the file has
been loaded). This defers needing to know the size of the texture
underlying an offscreen framebuffer until calling
cogl_framebuffer_allocate().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 9688e7dc1eeae3144729dfd4a4bf409620346bf4)
This ensures framebuffers are implicitly allocated when querying the
width, height or viewport width/height if the framebuffer's size is
currently unknown. The plan is to allow texture backends to defer
calculating the size of textures until they are allocated which in turn
means we won't know the size of offscreen framebuffers until the texture
has been allocated. Potentially we could be more specific about this in
the future and only ensure the texture is allocated, but for now it will
be simplest to just ensure the framebuffer is allocated.
Note: in the case of onscreen buffers, which are always initialized
with a requested size, we are careful to avoid triggering an allocation
when this is queried, otherwise we would see recursion when the winsys
code queries the requested size during allocation.
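A hedged sketch of the getter pattern this implies (the field access
and the size sentinel below are assumptions, not the actual source):

  int
  cogl_framebuffer_get_width (CoglFramebuffer *framebuffer)
  {
    /* Offscreen framebuffers may not know their size until the
     * underlying texture has been allocated; onscreen framebuffers are
     * created with a requested size, so they never take this path and
     * we avoid recursing into the winsys during allocation. */
    if (framebuffer->width < 0)
      cogl_framebuffer_allocate (framebuffer, NULL);

    return framebuffer->width;
  }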
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f4b612f1b75e043f1852b9a32368cc37ab89308b)
Since we are planning on deferring more texture allocation work this
makes sure we don't query whether a texture is sliced until we know it
has been allocated.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b742638a1ce581f5a2c9f15907361c3b0c1b178c)
This removes cogl_framebuffer_get_color_format() since the actual
internal format isn't strictly controlled by us. CoglFramebuffer::format
has been renamed to ::internal_format to make it clearer that it only
really represents the premultiplication status.
The plan is to make most of the work involved in creating a texture
happen lazily when allocating so this patch also changes
_cogl_framebuffer_init() to not take a format argument anymore since we
won't know the format of offscreen framebuffers until the framebuffer is
allocated, after the corresponding texture has been allocated. In the
case of offscreen framebuffers we now update the framebuffer
internal_format during allocation.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 8cc9e1c8bd2fac8b2a95087249c23c952d5e379f)
Note: Since we can't break API compatibility on the 1.x branch this
actually keeps the cogl_framebuffer_get_color_format() api but moves it
into a new deprecated/cogl-framebuffer-deprecated.c file, and it now
returns the newly named ::internal_format.
The bug that prevented MESA_copy_sub_buffer from working for swrast /
llvmpipe was fixed in mesa 10.1 git, so enable it for mesa 10.1+.
https://bugzilla.gnome.org/show_bug.cgi?id=721450
When landing the patch, it was tweaked to #include "cogl-version.h" to
avoid a compiler warning about COGL_VERSION_ENCODE being implicitly
defined. -- Robert Bragg
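Roughly, the gate looks like this (how the Mesa version is obtained is
not shown and the variable is an assumption):

  #include "cogl-version.h"

  /* Only advertise the swap-region feature for swrast/llvmpipe when
   * the parsed Mesa version is at least 10.1. */
  if (mesa_version >= COGL_VERSION_ENCODE (10, 1, 0))
    {
      /* ...mark COGL_WINSYS_FEATURE_SWAP_REGION as supported... */
    }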
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e7e216b1d3d151acf3fed619bd759692a989b4b4)