The declaration of INTEL_swap_event was treating winsys features as
if they were a bitfield, but they aren't. The end result was that
instead of reporting two features when INTEL_swap_event is present,
we reported none.
https://bugzilla.gnome.org/show_bug.cgi?id=719741
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Since 248a76f5eac7e5ae4fb45208577f9a55360812a7 cogl.h can no longer be
included in internal source files so the WGL winsys was no longer
compiling.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 91af97a2a27ab5ad3e7eaabebd03503b685d4d42)
This adds COGL_PIXEL_FORMAT_RG_88 and COGL_TEXTURE_COMPONENTS_RG in
order to support two-component textures. The RG components for a
texture are only supported if COGL_FEATURE_ID_TEXTURE_RG is advertised.
This is only available on GL 3, GL 2 with the GL_ARB_texture_rg
extension or GLES with the GL_EXT_texture_rg extension. The RG pixel
format is always supported for images because Cogl can easily do the
conversion if an application uses this format to upload to a texture
with a different format.
If an application tries to create an RG texture when the feature isn't
supported then it will raise an error when the texture is allocated.
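For illustration, a minimal sketch of how an application might request a
two-component texture; the cogl_texture_2d_new_with_size() call, the ctx
variable and the GLib warning are assumptions for the example, not part of
this patch:

  CoglError *error = NULL;
  CoglTexture *tex;

  if (cogl_has_feature (ctx, COGL_FEATURE_ID_TEXTURE_RG))
    {
      tex = COGL_TEXTURE (cogl_texture_2d_new_with_size (ctx, 256, 256));
      cogl_texture_set_components (tex, COGL_TEXTURE_COMPONENTS_RG);

      /* Missing RG support is reported at allocation time */
      if (!cogl_texture_allocate (tex, &error))
        g_warning ("RG texture allocation failed: %s", error->message);
    }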
https://bugzilla.gnome.org/show_bug.cgi?id=712830
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 568677ab3bcb62ababad1623be0d6b9b117d0a26)
Conflicts:
cogl/cogl-bitmap-packing.h
cogl/cogl-types.h
cogl/driver/gl/gl/cogl-driver-gl.c
tests/conform/test-read-texture-formats.c
tests/conform/test-write-texture-formats.c
The following changes are made to the documentation for CoglTexture:
• The description of the default value for the components property is
changed to say that it is always RGBA for textures created with the
‘_with_size’ constructors. Previously it said that the default is based
on the pixel format used the first time data is set on the texture,
but this is only true if the data is set using a constructor.
• Added documentation for the CoglTextureComponents enum.
• Changed it to say that it _specifies_ what components are required
for sampling rather than determining them.
• Added ‘Since: 1.18’ to
cogl_texture_{set,get}_{components,premultiplied}
• Changed the since tag for CoglTextureError from 2.0 to 1.8.
• Added documentation for COGL_TEXTURE_ERROR_{FORMAT,TYPE}.
• Added the following to the cogl2-sections.txt file:
COGL_TEXTURE_ERROR
CoglTextureError
cogl_texture_allocate
cogl_texture_set_components
cogl_texture_get_components
cogl_texture_set_premultiplied
cogl_texture_get_premultiplied
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 2f12c4329c519fa14b927b2dcd708dddcc903c32)
Previously, when a pipeline was added to the cache it would never be
removed. If the application is generating a lot of unique pipelines
this can end up effectively leaking a large number of resources
including the GL program objects. Arguably this isn't really a problem
because if the application is generating that many unique pipelines
then it is doing something wrong anyway. It also implies that it will
be recompiling shaders very often so the cache leaking will likely be
the least of the problems.
This patch makes it keep track of which pipelines in the cache are in
use. The cache now returns a struct representing the entry instead of
directly returning the pipeline. This entry contains a usage counter
which the pipeline backends can use to mark when there is a pipeline
alive that is using the cache entry. When the hash table decides that
it's a good time to prune some entries, it will make a list of all of
the pipelines that are not in use and then remove the least recently
used half of the pipelines. That way it is less likely to remove
pipelines that the application is actually regenerating often even if
they aren't in use all of the time.
When the cache is pruned the hash table makes a note of how small the
cache could be if it removed all of the unused pipelines. The hash
table starts pruning when there are more entries than twice this
minimum expected size. The idea is that if that point is hit then the
hash table is more than half full of useless pipelines so the
application is generating lots of redundant pipelines and it is a good
time to remove them.
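Roughly, the cache entry idea looks like the following sketch; the names
are illustrative only, not the exact internal declarations:

  typedef struct
  {
    CoglPipeline *pipeline;

    /* Incremented/decremented by the pipeline backends while a live
     * pipeline is using this cache entry; entries whose count is zero
     * are the only candidates for pruning. */
    int usage_count;
  } CoglPipelineCacheEntry;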
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c21aac22992bb7fef5a8d0913130b8245e67f2eb)
Conflicts:
cogl/driver/gl/cogl-pipeline-fragend-glsl.c
cogl/driver/gl/cogl-pipeline-progend-glsl.c
cogl/driver/gl/cogl-pipeline-vertend-glsl.c
cogl/driver/gl/gl/cogl-pipeline-fragend-arbfp.c
This fixes the cogl_texture_get_components() prototype to have a return
type of CoglTextureComponents instead of CoglBool which was probably a
copy and paste error.
(cherry picked from commit 55b09f8a939db71ee5ff41afa0ed08cbe937a4ec)
This updates the cogl_texture_rectangle_new_with_size() api in line with
master to be consistent with other texture constructors. This removes
the internal_format and error arguments and allows the texture to be
allocated lazily which means the texture can be configured with apis
like cogl_texture_set_components() and cogl_texture_set_premultiplied()
before it is allocated.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
test-path for example uses COGL_ENABLE_EXPERIMENTAL_2_0_API to access
the 2.0 CoglPath api but also uses deprecated framebuffer api such as
cogl_push/pop_framebuffer. Similarly clutter-backend.c builds with
COGL_ENABLE_EXPERIMENTAL_2_0_API and uses cogl_set_framebuffer().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This moves the framebuffer stack apis out of cogl-framebuffer.c into
cogl/deprecated/cogl-framebuffer-deprecated.c so cogl-framebuffer.c is
more in-line with the master branch to ease cherry picking patches.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This moves all of the automagic texture constructor prototypes from
cogl-texture.h into a new deprecated/cogl-auto-texture.h file. This also
moves cogl_texture_new_from_sub_texture() into
deprecated/cogl-auto-texture.c
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Texture allocation is now consistently handled lazily such that the
internal format can now be controlled using
cogl_texture_set_components() and cogl_texture_set_premultiplied()
before allocating the texture with cogl_texture_allocate(). This means
that the internal_format arguments to texture constructors are now
redundant, and since most of the texture constructors now can't ever fail
the error arguments are also redundant. This means we no longer
use CoglPixelFormat in the public api for describing the internal format
of textures, which had been a bad solution originally because
CoglPixelFormat is too specific and is misleading when we don't support
such explicit control over the internal format.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 99a53c82e9ab0a1e5ee35941bf83dc334b1fbe87)
Note: there are numerous API changes for functions currently marked
as 'unstable' which we don't think are in use by anyone depending on
a stable 1.x api. Compared to the original patch though this avoids
changing the cogl_texture_rectangle_new_with_size() api which we know
is used by Mutter.
This introduces the internal idea of texture loaders that track the
state for loading and allocating a texture. This defers a lot more work
until the texture is allocated.
There are several intentions behind this change:
- It provides a means for extending how textures are allocated without
requiring all the parameters to be supplied in a single _texture_new()
function call.
- It allows us to remove the internal_format argument from all
_texture_new() apis, since using CoglPixelFormat is a bad way of
expressing the internal format constraints because it is too specific.
For now the internal_format arguments haven't actually been removed
but this patch does introduce replacement apis for controlling the
internal format (see the sketch below):
cogl_texture_set_components() lets you specify what components your
texture needs when it is allocated.
cogl_texture_set_premultiplied() lets you specify whether the texture
data should be interpreted as premultiplied or not.
- It enables us to support asynchronous texture loading + allocation in
the future.
Of note, the _new_from_data() texture constructors all continue to
allocate textures immediately so that existing code doesn't need to be
adapted to manage the lifetime of the data being uploaded.
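As a rough usage sketch of the replacement apis; the constructor shown
here without an internal_format argument, and the ctx/width/height
variables, are assumptions for the example:

  CoglError *error = NULL;
  CoglTexture2D *tex = cogl_texture_2d_new_with_size (ctx, width, height);

  /* Configure the internal format constraints before allocation... */
  cogl_texture_set_components (COGL_TEXTURE (tex),
                               COGL_TEXTURE_COMPONENTS_RGBA);
  cogl_texture_set_premultiplied (COGL_TEXTURE (tex), TRUE);

  /* ...and any constraint failures are now reported when allocating */
  if (!cogl_texture_allocate (COGL_TEXTURE (tex), &error))
    g_warning ("Texture allocation failed: %s", error->message);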
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6a83de9ef4210f380a31f410797447b365a8d02c)
Note: Compared to the original patch, the ->premultiplied state for
textures isn't forced to be %TRUE in _cogl_texture_init since that
effectively ignores the user's explicitly given internal_format, which was
a mistake and on master that change should have been made in the patch
that followed. The gtk-doc comments for cogl_texture_set_premultiplied()
and cogl_texture_set_components() have also been updated in-line with
this fix.
When reading a texture back by first wrapping it as an offscreen
framebuffer and using _read_pixels_into_bitmap() we now make sure the
offscreen framebuffer has an internal format that matches the
meta-texture being read, not that of the current sub-texture being
iterated. In the case of atlas textures the sub-texture is a shared
texture whose format doesn't reflect the premultiplied alpha status of
individual atlas textures, nor whether the alpha component is valid.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6ee425d4f10acd8b008a2c17e5c701fc1d850f59)
CoglPixelFormat is not a good way of describing the internal
format of a texture because it's too specific given that we don't
actually have exact knowledge of the internal format used by the driver.
This makes cogl_texture_get_format private and in the future we'll
provide a better way of querying the channels and their precision.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit ffde82981f22bd0185a7f33e1e6e1479f4c295b8)
Note: Since we can't break API compatibility on the 1.x branch this adds
a cogl/deprecated/cogl-texture-deprecated.c file with a
cogl_texture_get_format() wrapper around the private api. This also
moves the cogl_texture_get_rowstride() and cogl_texture_ref/unref()
functions that were previously deprecated into cogl-texture-deprecated.c
This defers checking the internal format and whether accelerated
migration is supported until allocating the texture.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 83b05cbe3969789bc3ec78480c0937a6722efbf1)
Instead of throwing a CoglError exception if an application tries to
allocate a zero-size atlas texture, this makes that a programmer error.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit d3eaeedc86d408669b81d6c43ef2b0ab9d859c85)
The plan is to defer a lot more work in creating a texture until
allocation time. This means that for some texture backends we might not
know until after allocation whether the texture is sliced or can support
hardware repeating. This makes sure we trigger an allocation if either
of these are queried.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 4868582812dbcd5125495b312d858f751fc31e9d)
The plan is to defer a lot more work in creating a texture until
allocation time. This means we won't be able to assume that all textures
being used to render must have already been allocated when data was
specified.
The latest point at which we will generally require a texture to be
allocated will be when we need to know the underlying GL handle for a
texture and so this updates cogl_texture_get_gl_texture() to ensure the
texture is allocated.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 59f6fefc37524f492512a71b831760a218d9bb95)
The plan is to defer more of the work for creating a texture until
allocation time, but that means we won't be able to always assume
we can query the size of a texture when creating an offscreen
framebuffer from a texture (consider for example using
_texture_new_from_file() where the size isn't known until the file has
been loaded). This defers needing to know the size of the texture
underlying an offscreen framebuffer until calling
cogl_framebuffer_allocate().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 9688e7dc1eeae3144729dfd4a4bf409620346bf4)
This ensures framebuffers are implicitly allocated when querying the
width, height or viewport width/height if the framebuffer's size is
currently unknown. The plan is to allow texture backends to defer
calculating the size of textures until they are allocated which in turn
means we won't know the size of offscreen framebuffers until the texture
has been allocated. Potentially we could be more specific about this in
the future and only ensure the texture is allocated, but for now it will
be simplest to just ensure the framebuffer is allocated.
Note: in the case of onscreen buffers which are always initialized with
a requested size we are careful to avoid triggering an allocation when
this is queried otherwise we will see recursion when the winsys code
queries the requested size during allocation.
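A minimal sketch of the guard described above; using a negative width to
mean "size unknown" is an assumption about internal state, not the
literal implementation:

  int
  cogl_framebuffer_get_width (CoglFramebuffer *framebuffer)
  {
    /* Onscreen framebuffers are created with a requested size, so this
     * only fires for offscreen framebuffers whose size isn't known yet,
     * avoiding recursion from the winsys code during allocation. */
    if (framebuffer->width < 0)
      cogl_framebuffer_allocate (framebuffer, NULL);

    return framebuffer->width;
  }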
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f4b612f1b75e043f1852b9a32368cc37ab89308b)
Since we are planning on deferring more texture allocation work this
makes sure we don't query whether a texture is sliced until we know it
has been allocated.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b742638a1ce581f5a2c9f15907361c3b0c1b178c)
This removes cogl_framebuffer_get_color_format() since the actual
internal format isn't strictly controlled by us. CoglFramebuffer::format
has been renamed to ::internal_format to make it clearer that it only
really represents the premultiplication status.
The plan is to make most of the work involved in creating a texture
happen lazily when allocating so this patch also changes
_cogl_framebuffer_init() to not take a format argument anymore since we
won't know the format of offscreen framebuffers until the framebuffer is
allocated, after the corresponding texture has been allocated. In the
case of offscreen framebuffers we now update the framebuffer
internal_format during allocation.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 8cc9e1c8bd2fac8b2a95087249c23c952d5e379f)
Note: Since we can't break API compatibility on the 1.x branch this
actually keeps the cogl_framebuffer_get_color_format() api but moves it
into a new deprecated/cogl-framebuffer-deprecated.c file and it now
returns the newly named ::internal_format.
The bug that prevented MESA_copy_sub_buffer from working for swrast /
llvmpipe was fixed in Mesa 10.1 git, so enable it for Mesa 10.1+.
https://bugzilla.gnome.org/show_bug.cgi?id=721450
When landing the patch, it was tweaked to #include "cogl-version.h" to
avoid a compiler warning about COGL_VERSION_ENCODE being implicitly
defined. -- Robert Bragg
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e7e216b1d3d151acf3fed619bd759692a989b4b4)
Since we now have more time to ensure that Clutter is updated to check
for the now-separate cogl-path package as part of its build
configuration, we are making the package split, in line with Cogl
master.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This makes it so that cogl-sdl.h is a top-level header no longer
automatically included by cogl.h. This avoids lots of warnings building
the conformance tests and examples due to SDL.h warning when
__STRICT_ANSI__ isn't defined.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f7536985437dc85c26b33d1bbe1b7f3d4b32476a)
This means that we can't cache the journal read_pixels optimization.
https://bugzilla.gnome.org/show_bug.cgi?id=719582
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 550bae22d20c8d6d7cf1d090faa9c91619594077)
This reverts commit bc41489336.
The reason this was causing problems for Clutter is that it defines
COGL_ENABLE_EXPERIMENTAL_2_0_API which is meant to cause the Cogl
headers not to declare the deprecated API. The reverted patch moved
some additional clipping API to a deprecated header which was
previously being used by Clutter. Clutter was still successfully
compiling but with some warnings for the missing function
declarations. However when the binary was run the clipping would get
completely messed up because it would assume all of the arguments to
the functions are integers instead of floats and the wrong values
would be passed.
Clutter now has a commit to make it use the 2.0 API instead of the
deprecated functions so the revert is no longer necessary.
https://git.gnome.org/browse/clutter/commit?id=705640367a5c2ae21405806bfa
Reviewed-by: Robert Bragg <robert@linux.intel.com>
When projecting the bounding rectangle of a primitive it was using the
modelview matrix twice instead of the modelview and projection
matrices so it was coming out with garbage.
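The intended transform order is sketched below; the variable names and
the fb handle are just for illustration:

  CoglMatrix modelview, projection, modelview_projection;

  cogl_framebuffer_get_modelview_matrix (fb, &modelview);
  cogl_framebuffer_get_projection_matrix (fb, &projection);

  /* Combine projection * modelview before projecting the bounds,
   * instead of applying the modelview twice. */
  cogl_matrix_multiply (&modelview_projection, &projection, &modelview);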
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 7e1f05c84013bb91248d691091df00f4f634c6cf)
This reverts commit ae9cd7ca01.
Pushing this for now so we can get gnome-shell working again without
memory corruption. Let's push a proper fix later for everybody.
This was added in 361bd516f3 late during the 1.10 cycle to
contain experimental functions that we should never have made public.
The plan was to remove them once we started working on 1.12 but it
looks like we never got around to doing that. Better late than never!
The header for the file was already removed in 7365c3aa77.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
There used to be a function called cogl_clip_stack_save in the public
API which was used when temporarily switching to an offscreen buffer
to save the clip state. This is no longer necessary because each
framebuffer has its own clip stack anyway so the function was removed
in master. However the code to maintain the stack of stacks was
retained. This patch removes it in an effort to simplify the code.
On the 1.18 branch this function is deprecated and the documentation
says that it does nothing. However that is incorrect because it does
actually push the clip stack. I think it would be safe to backport
this patch to the 1.18 branch and actually make it do nothing like it
is documented to do.
https://bugzilla.gnome.org/show_bug.cgi?id=719546
(cherry picked from commit 8655027fdcf03b02fcbbb02d179a0a88ed79c5b3)
This patch has some extra changes while backporting to the 1.18
branch. Here the cogl-clip-state file still contained some deprecated
functions. Instead of deleting the file completely it has been moved
to the deprecated folder. The declarations for these functions have
been moved from cogl1-context.h to a new deprecated/cogl-clip-state.h
header.
Conflicts:
cogl/Makefile.am
cogl/cogl-clip-state.c
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Adds cogl_wayland_texture_set_region_from_shm_buffer, a convenience
wrapper around cogl_texture_set_region that uses the correct format to
copy the data from a Wayland SHM buffer. This will
typically be used by compositors to update the texture for a surface
when an SHM buffer is attached. The ordering of the arguments is based
on cogl_texture_set_region_from_bitmap.
Based on a patch by Jasper St. Pierre.
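A hedged sketch of how a compositor might call it when an SHM buffer is
attached; the texture, shm_buffer and x/y/width/height variables are
assumed, and the exact argument order should be checked against
cogl-wayland-server.h:

  CoglError *error = NULL;

  if (!cogl_wayland_texture_set_region_from_shm_buffer (texture,
                                                        x, y, /* src */
                                                        width, height,
                                                        shm_buffer,
                                                        x, y, /* dst */
                                                        0,    /* level */
                                                        &error))
    g_warning ("Failed to update surface texture: %s", error->message);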
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c76c1d136d2cac7f3d1331a4d1dc0dd0f06e812c)
Conflicts:
examples/cogland.c
This patch was accidentally added before it had any review and without
first going through master. Master now has a replacement patch with
some modifications. That will be cherry-picked to the 1.18 branch in a
subsequent commit.
This reverts commit af480a2b8b.
Previously the private feature flags were stored in an enum and we
already had 31 flags. Adding the 32nd flag would presumably make it
add -2³¹ as one of the values which might cause problems. To avoid
this we'll just use a fixed-size array of longs and use indices for
the enum values like we do for the public features.
A slight complication with this is in the CoglDriverDescription where
we were previously using a statically initialised value to describe the set
of features that the driver supports. We can't easily do this with the
flags array so instead the features are stored in a fixed-size array
of indices.
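Sketched out, the approach looks roughly like this; the COGL_FLAGS_*
macro names are based on Cogl's cogl-flags.h, while
COGL_PRIVATE_FEATURE_FOO and use_the_feature() are placeholders:

  unsigned long private_features
    [COGL_FLAGS_N_LONGS_FOR_SIZE (COGL_N_PRIVATE_FEATURES)];

  /* Features are set and tested by index instead of OR-ing enum bits: */
  COGL_FLAGS_SET (private_features, COGL_PRIVATE_FEATURE_FOO, TRUE);

  if (COGL_FLAGS_GET (private_features, COGL_PRIVATE_FEATURE_FOO))
    use_the_feature ();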
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d94cb984e3c93630f3c2e6e3be9d189672aa20f3)
Conflicts:
cogl/cogl-context-private.h
cogl/cogl-context.c
cogl/cogl-private.h
cogl/cogl-renderer.c
cogl/driver/gl/cogl-pipeline-opengl.c
cogl/driver/gl/gl/cogl-driver-gl.c
cogl/driver/gl/gl/cogl-pipeline-progend-fixed-arbfp.c
cogl/driver/gl/gles/cogl-driver-gles.c
cogl/driver/nop/cogl-driver-nop.c
This fixes the build with --enable-introspection. I'm not sure why
g-ir-scanner seems to parse all public headers in isolation instead of
being able to take a more limited list of top-level public headers and
automatically parse all necessary #include directives but this means we
have to special case how we define and undefine __COGL_H_INSIDE__ to
subvert the guards we have in place for detecting misuse of the headers.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit e0b2255876c1cf11d124d5ae37cbe9a6e43777f1)
This declares the interface types CoglFramebuffer, CoglBuffer,
CoglTexture, CoglMetaTexture and CoglPrimitiveTexture as void when
including the public cogl.h header so that users don't have to use lots
of C type casts between instance types and interface types.
This also removes all of the COGL_XYZ() type cast macros since they do
nothing more than compile-time type casting, and they make code less
readable for anyone who hasn't seen that coding pattern before.
Unlike with gobject based apis that use per-type macros for casting and
performing runtime type checking we instead prefer to do our runtime
type checking internally within the front-end public apis when objects
are passed into Cogl. This greatly reduces the verbosity for users of
the api and may help reduce the chance of excessive runtime type
checking that can sometimes be a problem.
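For illustration, the practical difference for callers; onscreen here is
an assumed CoglOnscreen used where a CoglFramebuffer is expected:

  /* Before: an interface cast (or COGL_XYZ() macro) was needed: */
  cogl_framebuffer_clear4f (COGL_FRAMEBUFFER (onscreen),
                            COGL_BUFFER_BIT_COLOR, 0, 0, 0, 1);

  /* After: with CoglFramebuffer declared as void in cogl.h the object
   * can be passed directly and is type-checked at runtime inside the
   * front-end api: */
  cogl_framebuffer_clear4f (onscreen, COGL_BUFFER_BIT_COLOR, 0, 0, 0, 1);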
(cherry picked from commit 248a76f5eac7e5ae4fb45208577f9a55360812a7)
Since we can't break the 1.x api this version of the patch actually
defines compatible NOP macros within deprecated/cogl-type-casts.h
This patch doesn't look right because now nothing will ever set
clear_clip_dirty = TRUE. Presumably that would mean that if a
rectangle is drawn and then the journal is flushed before the
framebuffer is read, then it would think it could return the clear
color even though it shouldn't.
Perhaps a better approach would be to make a second version of
_cogl_framebuffer_mark_mid_scene that doesn't set the clear_clip_dirty
flag and call that from the journal instead. As this patch was pushed
without review and without first going into the master branch I think
it makes sense to just revert it and apply a new version to master.
This reverts commit 3eb63f67a3.
Leaving the clip bounds untouched means that it will retain the stale value
of whatever it was when we last had a clip; reset it so that it contains the
full framebuffer contents instead.
https://bugzilla.gnome.org/show_bug.cgi?id=712562
Add framebuffer methods cogl_framebuffer_[gs]et_depth_write_enabled()
and backend bits to pass the state on to glDepthMask().
This allows us to enable or disable depth writing per-framebuffer, which
if disabled saves us some work in glClear(). When rendering, the flag
is combined with the pipeline's depth writing flag using a logical AND.
Depth writing is enabled by default.
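A short usage sketch; fb is an assumed, already-created CoglFramebuffer:

  /* Depth writes are enabled by default, so disable them when the depth
   * buffer contents aren't needed; when rendering, this flag is ANDed
   * with the pipeline's own depth-write flag. */
  cogl_framebuffer_set_depth_write_enabled (fb, FALSE);

  if (!cogl_framebuffer_get_depth_write_enabled (fb))
    g_debug ("depth writes disabled for this framebuffer");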
https://bugzilla.gnome.org/show_bug.cgi?id=709827
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 71406438c5357eb4e0ef03e940c5456a536602a0)
Depending on what version of Mesa you have, eglQueryWaylandBuffer
may take a wl_buffer or wl_resource argument and the EGL header will
only forward declare the corresponding type.
The use of wl_buffer has been deprecated and so internally we assume
that eglQueryWaylandBuffer takes a wl_resource but for compatibility we
forward declare wl_resource in case we are building with EGL headers
that still use wl_buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=710926
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 9bd1ee544667cfe7ecae479ec7f778446dd8f326)
cogl_is_atlas_texture is supposed to be exported from the DLL/.so, so
update the cogl.symbols file to ensure this.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 13e037f096de5742db769500b4c0018249d8f8e4)
This makes cogl_framebuffer_set_color_mask immediately bail out if the
given mask equals the framebuffer's current mask, since the cost of
flushing the journal and flushing the gl state will hugely outweigh the
cost of the check.
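Minimal sketch of the early-out; the color_mask field name is an
assumption about the internal framebuffer state:

  void
  cogl_framebuffer_set_color_mask (CoglFramebuffer *framebuffer,
                                   CoglColorMask color_mask)
  {
    if (framebuffer->color_mask == color_mask)
      return;

    /* ...otherwise flush the journal, store the new mask and dirty the
     * GL state as before... */
  }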
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 925174d99df7f1f4b11098e748bcc23eaa396a21)
This updates the definition of _COGL_STATIC_ASSERT to just use
_Static_assert if available or be a NOP if not. We no longer worry about
supporting static assertions with older compilers. This fixes some
verbose warnings that newer compilers were giving with the old typedef
based static assertion method.
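The simplified macro looks roughly like this; the HAVE_STATIC_ASSERT
guard is an assumption and the actual configure-time detection may
differ:

  #ifdef HAVE_STATIC_ASSERT
  #define _COGL_STATIC_ASSERT(EXPRESSION, MESSAGE) \
    _Static_assert (EXPRESSION, MESSAGE)
  #else
  #define _COGL_STATIC_ASSERT(EXPRESSION, MESSAGE)
  #endif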
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 645e3607ea7f210d6dcb9d217204790051de7c82)
When a pipeline is notified of a change we now make sure to notify all
progends of that change not just the progend directly associated with
that pipeline. A pipeline can have private state associated with it from
multiple progends because descendants will always try to cache state on
ancestors to maximize the chance that the state can later be re-used.
Descendants may be using different progends than the ancestors that they
cache state with.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 873939a18934185fb3c9c84c373cb86d1278add7)