Since the Cogl 1.18 branch is actively maintained in parallel with the
master branch, this is a counterpart to commit 1b83ef938fc16b, which
re-licensed the master branch to use the MIT license.
This re-licensing is a follow-up to the proposal that was sent to the
Cogl mailing list:
http://lists.freedesktop.org/archives/cogl/2013-December/001465.html
Note: there was a copyright assignment policy in place for Clutter (and
therefore Cogl, which was part of Clutter at the time) until the 11th of
June 2010, and so we only checked the details after that point (commit
0bbf50f905).
For each file, authors were identified via this Git command:
$ git blame -p -C -C -C20 -M -M10 0bbf50f905..HEAD
We received blanket approvals for re-licensing all Red Hat and Collabora
contributions, which reduced the number of people who needed to be
contacted individually:
- http://lists.freedesktop.org/archives/cogl/2013-December/001470.html
- http://lists.freedesktop.org/archives/cogl/2014-January/001536.html
Individual approval requests were sent to all the other identified authors,
who all confirmed the re-licensing on the Cogl mailing list:
http://lists.freedesktop.org/archives/cogl/2014-January
As well as updating the copyright header in all source files, the
COPYING file has been updated to reflect the license change and also to
document the other licenses used in Cogl, such as the SGI Free Software
License B, version 2.0, and the 3-clause BSD license.
This patch was not simply cherry-picked from master, but the same
methodology was used to check the source files.
This moves the framebuffer stack apis out of cogl-framebuffer.c into
cogl/deprecated/cogl-framebuffer-deprecated.c so cogl-framebuffer.c is
more in line with the master branch, to ease cherry-picking patches.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
When reading a texture back by first wrapping it as an offscreen
framebuffer and using _read_pixels_into_bitmap() we now make sure the
offscreen framebuffer has an internal format that matches the
meta-texture being read, not that of the current sub-texture being
iterated. In the case of atlas textures the sub-texture is a shared
texture whose format doesn't reflect the premultiplied alpha status of
individual atlas textures, nor whether the alpha component is valid.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6ee425d4f10acd8b008a2c17e5c701fc1d850f59)
CoglPixelFormat is not a good way of describing the internal
format of a texture because it's too specific given that we don't
actually have exact knowledge of the internal format used by the driver.
This makes cogl_texture_get_format private and in the future we'll
provide a better way of querying the channels and their precision.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit ffde82981f22bd0185a7f33e1e6e1479f4c295b8)
Note: Since we can't break API compatibility on the 1.x branch, this adds
a cogl/deprecated/cogl-texture-deprecated.c file with a
cogl_texture_get_format() wrapper around the private api. This also
moves the cogl_texture_get_rowstride() and cogl_texture_ref/unref()
functions that were previously deprecated into cogl-texture-deprecated.c
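As a sketch of what the deprecated wrapper amounts to (the private entry
point name shown here is assumed for illustration, not taken from the
branch):

  /* deprecated/cogl-texture-deprecated.c (illustrative sketch) */
  CoglPixelFormat
  cogl_texture_get_format (CoglTexture *texture)
  {
    /* Thin shim forwarding to the now-private query; the name
     * _cogl_texture_get_format is an assumption in this sketch. */
    return _cogl_texture_get_format (texture);
  }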
The plan is to defer more of the work for creating a texture until
allocation time, but that means we won't always be able to assume
we can query the size of a texture when creating an offscreen
framebuffer from a texture (consider, for example, using
_texture_new_from_file() where the size isn't known until the file has
been loaded). This defers needing to know the size of the texture
underlying an offscreen framebuffer until calling
cogl_framebuffer_allocate().
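As a rough usage sketch (using the cogl_offscreen_new_with_texture()
spelling introduced later in this series; error handling kept minimal):

  static CoglFramebuffer *
  make_offscreen_for_file (const char *filename, CoglError **error)
  {
    CoglTexture *texture =
      cogl_texture_new_from_file (filename, COGL_TEXTURE_NONE,
                                  COGL_PIXEL_FORMAT_ANY, error);
    CoglOffscreen *offscreen;

    if (!texture)
      return NULL;

    offscreen = cogl_offscreen_new_with_texture (texture);

    /* The texture size only needs to be known here, at allocation time,
     * not when the offscreen object was created above: */
    if (!cogl_framebuffer_allocate (COGL_FRAMEBUFFER (offscreen), error))
      {
        cogl_object_unref (offscreen);
        return NULL;
      }

    return COGL_FRAMEBUFFER (offscreen);
  }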
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 9688e7dc1eeae3144729dfd4a4bf409620346bf4)
This ensures framebuffers are implicitly allocated when querying the
width, height or viewport width/height if the framebuffer's size is
currently unknown. The plan is to allow texture backends to defer
calculating the size of textures until they are allocated which in turn
means we won't know the size of offscreen framebuffers until the texture
has been allocated. Potentially we could be more specific about this in
the future and only ensure the texture is allocated, but for now it will
be simplest to just ensure the framebuffer is allocated.
Note: in the case of onscreen buffers, which are always initialized with
a requested size, we are careful to avoid triggering an allocation when
this is queried; otherwise we would see recursion when the winsys code
queries the requested size during allocation.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f4b612f1b75e043f1852b9a32368cc37ab89308b)
Since we are planning on deferring more texture allocation work this
makes sure we don't query whether a texture is sliced until we know it
has been allocated.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b742638a1ce581f5a2c9f15907361c3b0c1b178c)
This removes cogl_framebuffer_get_color_format() since the actual
internal format isn't strictly controlled by us. CoglFramebuffer::format
has been renamed to ::internal_format to make it clearer that it only
really represents the premultiplication status.
The plan is to make most of the work involved in creating a texture
happen lazily when allocating so this patch also changes
_cogl_framebuffer_init() to not take a format argument anymore since we
won't know the format of offscreen framebuffers until the framebuffer is
allocated, after the corresponding texture has been allocated. In the
case of offscreen framebuffers we now update the framebuffer
internal_format during allocation.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 8cc9e1c8bd2fac8b2a95087249c23c952d5e379f)
Note: Since we can't break API compatibility on the 1.x branch this
actually keeps the cogl_framebuffer_get_color_format() api but moves it
into a new deprecated/cogl-framebuffer-deprecated.c file and it now
returns the newly named ::internal_format.
This means that we can't cache the journal read_pixels optimization.
https://bugzilla.gnome.org/show_bug.cgi?id=719582
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 550bae22d20c8d6d7cf1d090faa9c91619594077)
This reverts commit bc41489336.
The reason this was causing problems for Clutter is that it defines
COGL_ENABLE_EXPERIMENTAL_2_0_API which is meant to cause the Cogl
headers not to declare the deprecated API. The reverted patch moved
some additional clipping API to a deprecated header which was
previously being used by Clutter. Clutter was still successfully
compiling but with some warnings for the missing function
declarations. However, when the binary is run, the clipping would get
completely messed up because the compiler would assume all of the
arguments to the functions are integers instead of floats and the wrong
values would be passed.
Clutter now has a commit to make it use the 2.0 API instead of the
deprecated functions, so the revert is no longer necessary.
https://git.gnome.org/browse/clutter/commit?id=705640367a5c2ae21405806bfa
Reviewed-by: Robert Bragg <robert@linux.intel.com>
This reverts commit ae9cd7ca01.
Pushing this for now so we can get gnome-shell working again without
memory corruption. Let's push a proper fix later for everybody.
There used to be a function called cogl_clip_stack_save in the public
API which was used when temporarily switching to an offscreen buffer
to save the clip state. This is no longer necessary because each
framebuffer has its own clip stack anyway so the function was removed
in master. However the code to maintain the stack of stacks was
retained. This patch removes it in an effort to simplify the code.
On the 1.18 branch this function is deprecated and the documentation
says that it does nothing. However, that is incorrect because it does
actually push the clip stack. I think it would be safe to backport
this patch to the 1.18 branch and actually make it do nothing, as it
is documented to do.
https://bugzilla.gnome.org/show_bug.cgi?id=719546
(cherry picked from commit 8655027fdcf03b02fcbbb02d179a0a88ed79c5b3)
This patch has some extra changes while backporting to the 1.18
branch. Here the cogl-clip-state file still contained some deprecated
functions. Instead of deleting the file completely it has been moved
to the deprecated folder. The declarations for these functions have
been moved from cogl1-context.h to a new deprecated/cogl-clip-state.h
header.
Conflicts:
cogl/Makefile.am
cogl/cogl-clip-state.c
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Previously the private feature flags were stored in an enum and we
already had 31 flags. Adding the 32nd flag would presumably make the
enum include -2³¹ as one of its values, which might cause problems. To
avoid this we'll just use a fixed-size array of longs and use indices
for the enum values, like we do for the public features.
A slight complication with this is in the CoglDriverDescription, where
we were previously using a statically initialised value to describe the
set of features that the driver supports. We can't easily do this with
the flags array so instead the features are stored in a fixed-size
array of indices.
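For illustration only, the general pattern looks like this (the macro
and enum names below are made up for the sketch rather than taken from
the Cogl headers):

  #define N_LONGS_FOR_BITS(n) \
    (((n) + (sizeof (unsigned long) * 8) - 1) / (sizeof (unsigned long) * 8))
  #define FLAG_SET(array, bit) \
    ((array)[(bit) / (sizeof (unsigned long) * 8)] |= \
     (1UL << ((bit) % (sizeof (unsigned long) * 8))))
  #define FLAG_IS_SET(array, bit) \
    (!!((array)[(bit) / (sizeof (unsigned long) * 8)] & \
        (1UL << ((bit) % (sizeof (unsigned long) * 8)))))

  typedef enum
  {
    PRIVATE_FEATURE_FOO,
    PRIVATE_FEATURE_BAR,
    N_PRIVATE_FEATURES
  } PrivateFeature;

  /* The 32nd, 33rd, ... features simply grow the array instead of
   * overflowing a single enum of bitmask values: */
  unsigned long private_features[N_LONGS_FOR_BITS (N_PRIVATE_FEATURES)];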
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d94cb984e3c93630f3c2e6e3be9d189672aa20f3)
Conflicts:
cogl/cogl-context-private.h
cogl/cogl-context.c
cogl/cogl-private.h
cogl/cogl-renderer.c
cogl/driver/gl/cogl-pipeline-opengl.c
cogl/driver/gl/gl/cogl-driver-gl.c
cogl/driver/gl/gl/cogl-pipeline-progend-fixed-arbfp.c
cogl/driver/gl/gles/cogl-driver-gles.c
cogl/driver/nop/cogl-driver-nop.c
This patch doesn't look right because now nothing will ever set
clear_clip_dirty = TRUE. Presumably that would mean that if a
rectangle is drawn and then the journal is flushed before the
framebuffer is read, then it would think it could return the clear
color even though it shouldn't.
Perhaps a better approach would be to make a second version of
_cogl_framebuffer_mark_mid_scene that doesn't set the clear_clip_dirty
flag and call that from the journal instead. As this patch was pushed
without review and without first going into the master branch I think
it makes sense to just revert it and apply a new version to master.
This reverts commit 3eb63f67a3.
Leaving the clip bounds untouched means that it will retain the stale value
of whatever it was when we last had a clip; reset it so that it contains the
full framebuffer contents instead.
https://bugzilla.gnome.org/show_bug.cgi?id=712562
Add framebuffer methods cogl_framebuffer_[gs]et_depth_write_enabled()
and backend bits to pass the state on to glDepthMask().
This allows us to enable or disable depth writing per framebuffer,
which, if disabled, saves us some work in glClear(). When rendering, the
flag is combined with the pipeline's depth writing flag using a logical
AND.
Depth writing is enabled by default.
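Example usage (a minimal sketch):

  static void
  draw_no_depth_pass (CoglFramebuffer *fb)
  {
    /* Skip depth writes for a pass that doesn't need them; at draw time
     * this is ANDed with each pipeline's own depth-write flag. */
    cogl_framebuffer_set_depth_write_enabled (fb, FALSE);
    /* ... draw the pass ... */
    cogl_framebuffer_set_depth_write_enabled (fb, TRUE);
  }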
https://bugzilla.gnome.org/show_bug.cgi?id=709827
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 71406438c5357eb4e0ef03e940c5456a536602a0)
This makes cogl_framebuffer_set_color_mask immediately bail out if the
given mask equals the framebuffer's current mask, since the cost of
flushing the journal and flushing the gl state will hugely outweigh the
cost of the check.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 925174d99df7f1f4b11098e748bcc23eaa396a21)
This renames cogl_offscreen_new_to_texture to
cogl_offscreen_new_with_texture. The intention is to then cherry-pick
this back to the cogl-1.16 branch so we can maintain a parallel
cogl_offscreen_new_to_texture() function which keeps the synchronous
allocation semantics that some clutter applications are currently
relying on.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit ecc6d2f64481626992b2fe6cdfa7b999270b28f5)
Note: Since we can't break the 1.x api on this branch this keeps a
thin shim around cogl_offscreen_new_with_texture to implement
cogl_offscreen_new_to_texture with its synchronous allocation
semantics.
The API says that it should return NULL on failure but it does not do that
due to the lazy allocation.
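Taken together with the shim noted above, the deprecated entry point
amounts to roughly the following (a sketch, not the exact code on the
branch):

  CoglOffscreen *
  cogl_offscreen_new_to_texture (CoglTexture *texture)
  {
    CoglOffscreen *offscreen = cogl_offscreen_new_with_texture (texture);
    CoglError *error = NULL;

    /* Preserve the old synchronous allocation semantics by forcing the
     * allocation (and reporting any failure) right away: */
    if (!cogl_framebuffer_allocate (COGL_FRAMEBUFFER (offscreen), &error))
      {
        cogl_error_free (error);
        cogl_object_unref (offscreen);
        return NULL;
      }

    return offscreen;
  }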
Reviewed-by: Robert Bragg <robert@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=703174
This splits out the cogl_path_ api into a separate cogl-path sub-library
like cogl-pango and cogl-gst. This enables developers to build Cogl with
this sub-library disabled if they don't need it, which can be useful when
it's important to keep the size of an application and its dependencies
down to a minimum. The functions cogl_framebuffer_{fill,stroke}_path
have been renamed to cogl_path_{fill,stroke}.
There were a few places in core cogl and cogl-gst that referenced the
CoglPath api and these have been decoupled by using the CoglPrimitive
api instead. In the case of cogl_framebuffer_push_path_clip() the core
clip stack no longer accepts path clips directly, but it's now possible
to get a CoglPrimitive for the fill of a path, so the implementation
of cogl_framebuffer_push_path_clip() now lives in cogl-path and works as
a shim that first gets a CoglPrimitive and then uses
cogl_framebuffer_push_primitive_clip() instead.
We may want to consider renaming cogl_framebuffer_push_path_clip to
put it in the cogl_path_ namespace.
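For reference, a hedged sketch of the renamed calls (the argument order
for cogl_path_fill() shown here is an assumption based on the old
framebuffer-namespaced variants):

  static void
  fill_square (CoglFramebuffer *fb, CoglPipeline *pipeline)
  {
    CoglPath *path = cogl_path_new ();

    cogl_path_rectangle (path, 0, 0, 100, 100);

    /* Previously: cogl_framebuffer_fill_path (fb, pipeline, path); */
    cogl_path_fill (path, fb, pipeline);

    cogl_object_unref (path);
  }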
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 8aadfd829239534fb4ec8255cdea813d698c5a3f)
So as to avoid breaking the 1.x API or even the ABI since we are quite
late in the 1.16 development cycle the patch was modified to build
cogl-path as a noinst_LTLIBRARY before building cogl and link the code
directly into libcogl.so as it was previously. This way we can wait
until the start of the 1.18 cycle before splitting the code into a
separate libcogl-path.so.
This also adds shims for cogl_framebuffer_fill/stroke_path() to avoid
breaking the 1.x API/ABI.
Almost nothing draws attributes directly, and for those things that do
it's trivial to adapt them to draw via the cogl_primitive api instead.
This simplifies the Cogl api a bit.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 7395925bcc01aad6c695fd0d9af78b784b3c64d4)
Conflicts:
cogl/cogl-framebuffer.c
cogl/cogl-framebuffer.h
When splitting out the CoglPath api we saw that we would be left with
inconsistent drawing apis if the drawing apis in core Cogl were lumped
into the cogl_framebuffer_ api, considering that other Cogl
sub-libraries, and others wanting to create higher-level drawing apis
outside of Cogl, can't use the same namespace.
So that we can aim for a more consistent style this adds a
cogl_primitive_draw() api, comparable to cogl_path_fill() or
cogl_pango_show_layout(), that's intended to replace
cogl_framebuffer_draw_primitive().
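For example (a sketch; ctx, fb and pipeline stand in for an existing
context, framebuffer and pipeline):

  static void
  draw_triangle (CoglContext *ctx, CoglFramebuffer *fb, CoglPipeline *pipeline)
  {
    CoglVertexP2 verts[] = { { 0, 0 }, { 0, 100 }, { 100, 100 } };
    CoglPrimitive *prim =
      cogl_primitive_new_p2 (ctx, COGL_VERTICES_MODE_TRIANGLES, 3, verts);

    /* Previously: cogl_framebuffer_draw_primitive (fb, pipeline, prim); */
    cogl_primitive_draw (prim, fb, pipeline);

    cogl_object_unref (prim);
  }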
Note: the attribute and rectangle drawing apis are still in the
cogl_framebuffer_ namespace and this might potentially change, but in
these cases there is no single object representing the thing being drawn
so it seems more reasonable that they live in the framebuffer
namespace for now.
Note: the cogl_framebuffer_draw_primitive() api isn't removed by this
patch so it can more conveniently be cherry picked to the 1.16 branch so
we can mark it deprecated for a short while. Even though it's marked as
experimental api we know that there are people using the api so we'd
like to give them a chance to switch to the new api.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 418912b93ff81a47f9b38114d05335ab76277c48)
Conflicts:
cogl-pango/cogl-pango-display-list.c
cogl/Makefile.am
cogl/cogl-framebuffer.c
cogl/cogl-pipeline-layer-state.h
cogl/cogl2-path.c
cogl/driver/gl/cogl-clip-stack-gl.c
This adds a callback that can be registered with
cogl_onscreen_add_dirty_callback, which will get called whenever the
window system determines that the contents of the window are dirty and
need to be redrawn. Under the two X-based winsys's this is reported
off the back of Expose events, under SDL it is reported from
SDL_VIDEOEXPOSE or SDL_WINDOWEVENT_EXPOSED, and under Windows from
WM_PAINT messages. The Wayland winsys doesn't really have the concept
of dirtying the buffer but in order to allow applications to work the
same way on all platforms it will emit the event when the surface is
first shown and whenever it is resized.
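A sketch of how an application might hook this up (the closure and
info-struct details follow the shipped headers but should be treated as
illustrative):

  static void
  on_dirty (CoglOnscreen *onscreen,
            const CoglOnscreenDirtyInfo *info,
            void *user_data)
  {
    /* Redraw at least the info->x/y/width/height region and swap */
  }

  static void
  setup_redraw_handling (CoglOnscreen *onscreen)
  {
    cogl_onscreen_add_dirty_callback (onscreen, on_dirty,
                                      NULL, /* user_data */
                                      NULL /* destroy notify */);
  }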
There is a private feature flag to specify whether dirty events are
supported. If the winsys does not set this then Cogl will simulate
dirty events by emitting one when the window is first allocated and
when it is resized. The only winsys's that don't set this flag are
things like KMS or the EGL null winsys where there is no windowing
system and showing and hiding the onscreen doesn't really make any
sense. In that case Cogl can assume the buffer will only become dirty
once when it is first allocated.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 85c5a9ba419b2247bd768284c79ee69164a0c098)
Conflicts:
cogl/cogl-private.h
After discussing with Kristian Høgsberg it seems that the semantics of
wl_egl_window_resize are meant to be that if nothing has been drawn to the
the framebuffer since the last swap then the resize will take effect
immediately. Cogl was previously always delaying the call to
wl_egl_window_resize until the next swap. That meant that if you
wanted to resize the surface you would have to call
cogl_wayland_onscreen_resize and then redundantly draw a frame at the
old size so that you can swap to get the resize to occur before
drawing again at the right size. Typically an application would decide
to resize at the start of its paint sequence so it should be able to
just resize immediately.
In current Mesa master it seems that there is a bug which means that
it won't actually delay a resize that is done mid-scene and instead it
will just discard what came before. To get consistent behaviour in
Cogl, the code to delay the call to wl_egl_window_resize is still used
if it determines that the buffer is dirty. There is an existing
_cogl_framebuffer_mark_mid_scene call which was being used to track
when the framebuffer becomes dirty since the last clear. This function
is now also used to set a new flag tracking whether something has
been drawn since the last swap. It is called ‘mid_scene’ under the
assumption that this may also be useful for other things later.
cogl_framebuffer_clear has been slightly altered to always call
_cogl_framebuffer_mark_mid_scene even if it determines that it doesn't
need to clear because the framebuffer should still be considered to be
in the middle of a scene. Adding a quad to the journal now also begins
the scene.
This also fixes a potential bug where it looks like pending_dx/dy were
never cleared so they would always be accumulated even after the
resize is flushed.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 945689a62903990a20abb87a85d2c96eb3985fe7)
In some later patches we want to be able to use the term ‘dirty’ as a
public facing concept which represents expose events from the window
system. In that case the internal concept of dirtying the framebuffer
is confusing, so this patch changes the name to instead mean that
we've done something which causes the framebuffer to be in the middle
of a frame.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 88eed85b52c29f66659ea112038f3522c9bd864e)
cogl_framebuffer_add_fence creates a synchronisation fence, which will
invoke a user-specified callback when the GPU has finished executing all
commands provided to it up to that point in time.
Support is currently provided for GL 3.x's GL_ARB_sync extension, and
EGL's EGL_KHR_fence_sync (when used with OpenGL ES).
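For illustration, registering a fence callback could look like the
following (the callback-registering entry point and its signature are
taken from later Cogl headers and should be treated as an assumption
here):

  static void
  on_gpu_idle (CoglFence *fence, void *user_data)
  {
    /* The GPU has finished all work submitted before the fence */
  }

  static void
  end_frame (CoglOnscreen *onscreen)
  {
    cogl_framebuffer_add_fence_callback (COGL_FRAMEBUFFER (onscreen),
                                         on_gpu_idle, NULL);
    cogl_onscreen_swap_buffers (onscreen);
  }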
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=691752
(cherry picked from commit e6d37470da9294adc1554c0a8c91aa2af560ed9f)
This makes sure that a viewport change when comparing between separate
framebuffers also implies a clip change when we are applying the Intel
gen6 workaround for broken viewport clipping. Without this, switching
between different-size framebuffers could leave a scissor
matching the size of a previous framebuffer.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f23f2129c58550f819cff783f47039d7bd91391e)
We have a workaround in Cogl to fix viewport clipping with Mesa Intel
Gen 6 drivers but this was breaking the semantics of
cogl_framebuffer_clear() which should not be affected by viewport
clipping. This makes sure we disable and restore the workaround when
clearing the framebuffer. This fixes Clutter's test-cogl-viewport
conformance test.
Previously when the context was initialised Cogl would query the
number of stencil bits and set a private feature flag to mark that it
can use the buffer for clipping if there were at least 3. The problem
with this is that the number of stencil bits returned by
GL_STENCIL_BITS depends on the currently bound framebuffer. This patch
adds an internal function to query the number of stencil bits in a
framebuffer and makes it use that instead when determining whether it
can push the clip using the stencil buffer.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e928d21516a6c07798655341f4f0f8e3c1d1686c)
Cogl publicly exposes the depth buffer state so we might as well have
a function to query the number of depth bits of a framebuffer.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 853143eb10387f50f8d32cf09af31b8829dc1e01)
The GL framebuffer driver now makes sure to bind the framebuffer
before counting the number of bits. Previously it would just query the
number of bits for whatever framebuffer happened to be used last.
In addition the virtual for querying the framebuffer bits has been
modified to take a pointer to a structure instead of a separate
pointer to each component. This should make it slightly more efficient
and easier to maintain.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit e9c58b2ba23a7cebcd4e633ea7c3191f02056fb5)
Consistent with how we lazily allocate framebuffers this patch allows us
to instantiate textures but still specify constraints and requirements
before allocating storage so that we can be sure to allocate the most
appropriate/efficient storage.
This adds a cogl_texture_allocate() function that is analogous to
cogl_framebuffer_allocate() which can optionally be called to explicitly
allocate storage and catch any errors. If this function isn't used
explicitly then Cogl will implicitly ensure textures are allocated
before the storage is needed.
It is generally recommended to rely on lazy storage allocation or at
least perform explicit allocation as late as possible so Cogl can be
fully informed about the best way to allocate storage.
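For example, something along these lines (a sketch; the exact texture
constructor signature on this branch may differ):

  static CoglTexture *
  create_scratch_texture (CoglContext *ctx, int width, int height)
  {
    CoglTexture2D *tex = cogl_texture_2d_new_with_size (ctx, width, height);
    CoglError *error = NULL;

    /* Constraints/requirements could still be specified here, before
     * any storage exists. */

    if (!cogl_texture_allocate (COGL_TEXTURE (tex), &error))
      {
        /* e.g. report error->message, free caches and retry, ... */
        cogl_error_free (error);
        cogl_object_unref (tex);
        return NULL;
      }

    return COGL_TEXTURE (tex);
  }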
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 1fa7c0f10a8a03043e3c75cb079a49625df098b7)
Note: This reverts the cogl_texture_rectangle_new_with_size API change
that dropped the CoglError argument and keeps the semantics of
allocating the texture immediately. This is because Mutter currently
uses this API so we will probably look at updating this later once
we have a corresponding Mutter patch prepared. The other API changes
were kept since they only affected experimental api.
This moves the direct use of GL in cogl-framebuffer.c for handling
cogl_framebuffer_read_pixels_into_bitmap() into
driver/gl/cogl-framebuffer-gl.c and adds a
->framebuffer_read_pixels_into_bitmap vfunc to CoglDriverVtable.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 2f893054d6754e6bc7983f061b27c7858f1a593c)
This removes cogl-internal.h in favour of using cogl-private.h. Some
things in cogl-internal.h were moved to driver/gl/cogl-util-gl-private.h,
and the _cogl_gl_error_to_string function, whose prototype was moved from
cogl-internal.h to cogl-util-gl-private.h, has had its implementation
moved from cogl.c to cogl-util-gl.c.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 01cc82ece091aa3bec4c07fdd6bc9e5135fca573)
We have found several times now when writing code using Cogl that it
would really help if Cogl's matrix stack api were public as a utility
api. In Rig, for example, we want to avoid redundant arithmetic when
deriving the matrices of entities used for rendering, and we aren't able
to simply use the framebuffer's matrix stack to achieve this. Also when
implementing cairo-cogl we found that it would be really useful if we
could have a matrix stack utility api.
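To give a feel for the utility api, a rough sketch (entry point names
follow the experimental 2.0 headers; exact signatures should be treated
as assumptions):

  static void
  set_entity_transform (CoglContext *ctx, CoglFramebuffer *fb)
  {
    CoglMatrixStack *stack = cogl_matrix_stack_new (ctx);
    CoglMatrix scratch;
    CoglMatrix *transform;

    cogl_matrix_stack_push (stack);
    cogl_matrix_stack_translate (stack, 10, 20, 0);
    cogl_matrix_stack_rotate (stack, 45, 0, 0, 1);

    /* Resolve the current top of the stack into a usable matrix */
    transform = cogl_matrix_stack_get (stack, &scratch);
    cogl_framebuffer_set_modelview_matrix (fb, transform);

    cogl_matrix_stack_pop (stack);
    cogl_object_unref (stack);
  }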
(cherry picked from commit d17a01fd935d88fab96fe6cc0b906c84026c0067)
‘Propagate’ was misspelled as ‘propogate’.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5fb4a6178c3e64371c01510690d9de1e8a740bde)
This makes _cogl_framebuffer_blit take explicit src and dest framebuffer
pointers and updates all the texture blitting strategies in cogl-blit.c
to avoid pushing/popping to/from the framebuffer stack.
This removes the last user of the framebuffer stack, which we've been
aiming to remove before Cogl 2.0.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 598ca33950a93dd7a201045c4abccda2a855e936)
_cogl_bitmap_new_with_malloc_buffer() now takes a CoglError for throwing
exceptional errors and all callers have been updated to pass through
any application error pointer as appropriate.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 67cad9c0eb5e2650b75aff16abde49f23aabd0cc)
This allows apps to catch out-of-memory errors when allocating textures.
Textures can be pretty huge at times and so it's quite possible for an
application to try and allocate more memory than is available. It's also
very possible that the application can take some action in response to
reduce memory pressure (such as freeing up texture caches perhaps) so
we shouldn't just automatically abort like we do for trivial heap
allocations.
These public functions now take a CoglError argument so applications can
catch out of memory errors:
cogl_buffer_map
cogl_buffer_map_range
cogl_buffer_set_data
cogl_framebuffer_read_pixels_into_bitmap
cogl_pixel_buffer_new
cogl_texture_new_from_data
cogl_texture_new_from_bitmap
Note: we've been quite conservative with how many apis we let throw OOM
CoglErrors since we don't really want to put a burden on developers to
be checking for errors with every cogl api call. So long as there is
some lower-level api for apps to use that lets them catch OOM errors
for everything necessary, that's enough, and we don't have to make the
more convenient apis more awkward to use.
The main focus is on bitmaps and texture allocations since they
can be particularly large and prone to failing.
A new cogl_attribute_buffer_new_with_size() function has been added in
case developers need to catch OOM errors when allocating attribute
buffers: they can first use _buffer_new_with_size() (which doesn't take a
CoglError) followed by cogl_buffer_set_data(), which will lazily allocate
the buffer storage and report OOM errors.
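For instance (a sketch; error handling trimmed to the essentials):

  static CoglAttributeBuffer *
  upload_vertices (CoglContext *ctx, const void *data, size_t size)
  {
    CoglAttributeBuffer *buffer =
      cogl_attribute_buffer_new_with_size (ctx, size);
    CoglError *error = NULL;

    /* The storage is allocated lazily here, so an out-of-memory
     * condition is reported through the CoglError: */
    if (!cogl_buffer_set_data (COGL_BUFFER (buffer), 0, data, size, &error))
      {
        /* e.g. report error->message and free some caches */
        cogl_error_free (error);
        cogl_object_unref (buffer);
        return NULL;
      }

    return buffer;
  }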
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f7735e141ad537a253b02afa2a8238f96340b978)
Note: since we can't break the API for Cogl 1.x, the main purpose of
cherry-picking this patch is to keep in line with changes on the master
branch so that we can easily cherry-pick patches.
All the api changes relating to stable apis released on the 1.12 branch
have been reverted as part of cherry-picking this patch, so this mostly
just applies the internal plumbing changes that enable us to correctly
propagate OOM errors.
There were two problems with the stencil viewport clip workaround
introduced in afc5daab8:
• When the viewport is changed the current clip state is not marked as
dirty. That means that when the framebuffer state is next flushed it
would continue to use the stencil from the previous viewport.
• When the viewport is automatically updated due to the window being
resized the viewport age was not incremented so the clip state
wouldn't be flushed.
I noticed the bugs by running cogl-sdl2-hello.
This patch makes it so that the clip state is dirtied in
cogl_framebuffer_set_viewport if the workaround is enabled.
The automatic viewport changing code now just calls
cogl_framebuffer_set_viewport instead of directly prodding the
viewport values. This has the side-effect that it will also cause the
journal to be flushed. This seems like the right thing to do anyway
and presumably there would have been a bug before where it wouldn't
have flushed the journal, although presumably this is extremely
unlikely because it would have to have done a resize in the middle of
painting the scene.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 0dca99ddf728c8d4e3003861a03e8a2beccf282d)
The Intel Mesa gen6 driver doesn't currently handle scissoring offset
viewports correctly, so this implements a workaround to intersect the
current viewport bounds with the scissor rectangle.
(cherry picked from commit afc5daab85e5faca99d6d6866658cb82c3954830)
This adds a new CoglDriver for GL 3 called COGL_DRIVER_GL3. When
requested, the GLX, EGL and SDL2 winsys's will set the necessary
attributes to request a forward-compatible core profile 3.1 context.
That means it will have no deprecated features.
To simplify the explosion of checks for specific combinations of
context->driver, many of these conditionals have now been replaced
with private feature flags that are checked instead. The GL and GLES
drivers now initialise these private feature flags depending on which
driver is used.
The fixed function backends now explicitly check whether the fixed
function private feature is available which means the GL3 driver will
fall back to always using the GLSL progend. Since Rob's latest patches
the GLSL progend no longer uses any fixed function API anyway so it
should just work.
The driver is currently lower priority than COGL_DRIVER_GL so it will
not be used unless it is specifically requested. We may want to change
this priority at some point because apparently Mesa can make some
memory savings if a core profile context is used.
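Assuming the COGL_DRIVER environment variable override behaves as on
master, explicitly requesting the driver for testing would look
something like this (the application name is just a placeholder):

  $ COGL_DRIVER=gl3 ./your-cogl-app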
In GL 3, getting the combined extensions string with glGetString is
deprecated so this patch changes it to use glGetStringi to build up an
array of extensions instead. _cogl_context_get_gl_extensions now
returns this array instead of trying to return a const string. The
caller is expected to free the array.
Some issues with this patch:
• GL 3 does not support GL_ALPHA format textures. We should probably
make this a feature flag or something. Cogl uses this to render text
which currently just throws a GL error and breaks so it's pretty
important to do something about this before considering the GL3
driver to be stable.
• GL 3 doesn't support client side vertex buffers. This probably
doesn't matter because CoglBuffer won't normally use malloc'd
buffers if VBOs are available, but it might be worth making
malloc'd buffers a private feature and forcing it not to use them.
• GL 3 doesn't support the default vertex array object. This patch
just makes it create and bind a single non-default vertex array
object which gets used just like the normal default object. Ideally
it would be good to use vertex array objects properly and attach
them to a CoglPrimitive to cache the state.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 66c9db993595b3a22e63f4c201ea468bc9b88cb6)
The buffer and bitmap _bind() functions are GL specific so, to clarify
that, this patch adds a _gl infix to these functions, though it doesn't
yet move the implementations out into gl specific files.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6371fbb9637d88ff187dfb6c4bcd18468ba44d19)
Although we use GLib internally in Cogl we would rather not leak GLib
api through Cogl's own api, except through explicitly namespaced
cogl_glib_ / cogl_gtype_ feature apis.
One of the benefits we see to not leaking GLib through Cogl's public API
is that documentation for Cogl won't need to first introduce the GLib
API to newcomers, thus hopefully lowering the barrier to learning Cogl.
This patch provides a Cogl specific typedef for reporting runtime errors
which by no coincidence matches the typedef for GError exactly. If Cogl
is built with --enable-glib (default) then developers can even safely
assume that a CoglError is a GError under the hood.
This patch also enforces a consistent policy for when NULL is passed as
an error argument and an error is thrown. In this case we log the error
and abort the application, instead of silently ignoring it. In common
cases where nothing has been implemented to handle a particular error
and/or where applications are just printing the error and aborting
themselves, this saves some typing. This also seems more consistent
with language-based exceptions, which usually cause a program to abort if
they are not explicitly caught (which is what passing a non-NULL error
signifies in this case).
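To sketch the difference (using cogl_texture_allocate() purely for
illustration):

  static void
  allocate_or_recover (CoglTexture *texture)
  {
    CoglError *error = NULL;

    /* Passing a CoglError "catches" the failure, much like catching an
     * exception: */
    if (!cogl_texture_allocate (texture, &error))
      {
        /* recover, e.g. free some caches and retry */
        cogl_error_free (error);
        return;
      }

    /* Passing NULL instead would mean any failure gets logged and the
     * application aborts:
     *   cogl_texture_allocate (texture, NULL);
     */
  }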
Since this policy for NULL error pointers is stricter than the standard
GError convention, there is a clear note in the documentation to warn
developers that are used to using the GError api.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b068d5ea09ab32c37e8c965fc8582c85d1b2db46)
Note: Since we can't change the Cogl 1.x api the patch was changed to
not rename _error_quark() functions to be _error_domain() functions and
although it's a bit ugly, instead of providing our own CoglError type
that's compatible with GError we simply #define CoglError to GError
unless Cogl is built with glib disabled.
Note: this patch does technically introduce an API break since it drops
the cogl_error_get_type() symbol generated by glib-mkenum (Since the
CoglError enum was replaced by a CoglSystemError enum) but for now we
are assuming that this will not affect anyone currently using the Cogl
API. If this does turn out to be a problem in practice then we would be
able to fix this by manually copying an implementation of
cogl_error_get_type() generated by glib-mkenum into a compatibility
source file and we could also define the original COGL_ERROR_ enums for
compatibility too.
Note: another minor concern with cherry-picking this patch to the 1.14
branch is that an api scanner would be led to believe that some APIs
have changed, and for example the gobject-introspection parser which
understands the semantics of GError will not understand the semantics of
CoglError. We expect most people that have tried to use
gobject-introspection with Cogl already understand though that it is not
well suited to generating bindings of the Cogl api anyway and we aren't
aware of anyone depending on such bindings for apis involving GErrors.
(GnomeShell only makes very-very minimal use of Cogl via the gjs
bindings for the cogl_rectangle and cogl_color apis.)
The main reason we have cherry-picked this patch to the 1.14 branch
even given the above concerns is that without it it would become very
awkward for us to cherry-pick other beneficial patches from master.
This splits out most of the OpenGL specific code from cogl-framebuffer.c
into cogl-framebuffer-gl.c and extends the CoglDriverVtable interface
for cogl-framebuffer.c to use.
There are hopes to eventually support several different backends for
Cogl to get us closer to the metal, so this makes some progress in
organizing which parts of Cogl are OpenGL specific so that these parts
can potentially be switched out later.
The only remaining use of OpenGL still in cogl-framebuffer.c is to
handle cogl_framebuffer_read_pixels.
This commit introduces some new framebuffer api to be able to
enable texture-based depth buffers for a framebuffer (currently
only supported for offscreen framebuffers) and, once allocated,
to retrieve the depth buffer as a texture for further usage, say,
to implement shadow mapping.
The API works as follows (see the sketch after this list):
* Before the framebuffer is allocated, you can request that a depth
texture is created with
cogl_framebuffer_set_depth_texture_enabled()
* cogl_framebuffer_get_depth_texture() can then be used to grab a
CoglTexture once the framebuffer has been allocated.
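A minimal sketch of the intended usage, assuming the framebuffer passed
in is an offscreen framebuffer:

  static CoglTexture *
  get_shadow_map (CoglFramebuffer *offscreen_fb, CoglError **error)
  {
    /* Must be requested before allocation: */
    cogl_framebuffer_set_depth_texture_enabled (offscreen_fb, TRUE);

    if (!cogl_framebuffer_allocate (offscreen_fb, error))
      return NULL;

    /* e.g. bind the returned texture to a pipeline layer for shadow
     * mapping */
    return cogl_framebuffer_get_depth_texture (offscreen_fb);
  }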