The docs previously suggested that `cogl_frame_info_get_presentation_time`
returned a timestamp of an unknown clock ID. That's not correct. The
cogl source code shows that it does, and must, use the same clock as
`cogl_get_clock_time`.
Related to https://gitlab.gnome.org/GNOME/mutter/issues/131
Previously we just set it to "none", but this was not enough, since
various calls depend not just on the context being active, but also on
the main rendering surface.
Fixes https://gitlab.gnome.org/GNOME/mutter/issues/21
By the looks of it, commit 95e9fa10ef was papering over an Intel DRI bug
that made glReadPixels() return post-swizzling pixel data.
There have been reports over time of that commit resulting in wrong colors
on other drivers, and lately Mesa >17.3 started showing the same symptoms
on Intel.
But texture swizzling works by changing parameters before the fragment
shader runs, and reading pixels from an already drawn FBO/texture doesn't
involve that. This should thus use pixel_format_to_gl_with_target(), which
results in correctly requesting the same pixel format as the underlying
texture, while still considering it BGRA for the upper layers in the
swizzling case.
https://gitlab.gnome.org/GNOME/mutter/issues/72
Closes: #72
We just arbitrarily chose the first EGL config matching the passed
attributes, but we then assumed we always got GBM_FORMAT_XRGB8888. That
was not a correct assumption. Instead, make sure we always pick the
format we expect.
Closes: https://gitlab.gnome.org/GNOME/mutter/issues/2
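A minimal sketch of the idea behind the fix, filtering the EGL configs by
their native visual ID instead of taking the first match (function name
and error handling here are illustrative, not the actual cogl code):

    #include <EGL/egl.h>
    #include <gbm.h>
    #include <glib.h>

    static gboolean
    find_config_with_gbm_format (EGLDisplay    edpy,
                                 const EGLint *attributes,
                                 uint32_t      gbm_format, /* e.g. GBM_FORMAT_XRGB8888 */
                                 EGLConfig    *config_out)
    {
      EGLint n_configs, i;
      EGLConfig *configs;

      if (!eglChooseConfig (edpy, attributes, NULL, 0, &n_configs) || n_configs < 1)
        return FALSE;

      configs = g_new0 (EGLConfig, n_configs);
      eglChooseConfig (edpy, attributes, configs, n_configs, &n_configs);

      for (i = 0; i < n_configs; i++)
        {
          EGLint visual_id;

          /* For GBM, a config's native visual ID is its GBM/DRM format code */
          if (eglGetConfigAttrib (edpy, configs[i], EGL_NATIVE_VISUAL_ID, &visual_id) &&
              (uint32_t) visual_id == gbm_format)
            {
              *config_out = configs[i];
              g_free (configs);
              return TRUE;
            }
        }

      g_free (configs);
      return FALSE;
    }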
On drivers that do not support glGetTexImage2D (i.e. on GLES),
cogl_texture_get_data() has a "feature" that allows it to download
texture data by rendering the texture on an intermediate framebuffer
object and then reading back the data from there. However, this
feature requires the user to have previously set an "active"
framebuffer object in the context, which makes this very tricky,
because it is not clear to the developer that they need to do that
in order for some code to work on GLES (of course it works on
desktop GL, so nobody notices...), and additionally the code actually
crashes if an active fbo is not set!
This patch basically removes this feature in order to prevent
the crash and is in line with how this code has evolved in cogl-2.0:
https://git.gnome.org/browse/cogl/commit/?id=6d6a277b8e9a63a8268046e5258877ba94a1da5b
https://bugzilla.gnome.org/show_bug.cgi?id=789961
When creating a renderer with a custom winsys (which is how mutter
always uses cogl), make it possible to pass user data along with the
winsys. Still unused.
https://bugzilla.gnome.org/show_bug.cgi?id=785381
The GL_BGRA definition is not available for GLES2 contexts, which use
EXT_texture_format_BGRA8888 instead, causing a build failure when
trying to use it in those contexts.
Fortunately, this hack is only relevant for GL, so let's guard it to
prevent the failure in GLES2, where that extension is used instead.
https://bugzilla.gnome.org/show_bug.cgi?id=786568
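Roughly what such a guard looks like (a sketch; HAVE_COGL_GL is assumed
here to be the full-GL build flag, and the helper name is illustrative):

    /* Only compile the hack where GL_BGRA exists, i.e. on big GL. On GLES2
     * the format comes from EXT_texture_format_BGRA8888 (GL_BGRA_EXT) and
     * the workaround is not needed. */
    #ifdef HAVE_COGL_GL
    static gboolean
    format_needs_gl_bgra_workaround (GLenum gl_format)
    {
      return gl_format == GL_BGRA;
    }
    #else
    static gboolean
    format_needs_gl_bgra_workaround (GLenum gl_format)
    {
      return FALSE;
    }
    #endif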
Those are cached and reused across runs, which indeed doesn't qualify
as "static" to Mesa. Marking them as dynamic is more accurate, and brings
a slight performance benefit simply by avoiding the resulting (and later
silenced) Mesa warning.
https://bugzilla.gnome.org/show_bug.cgi?id=782344
Fixes cogl_texture_get_data() resorting to the wrong conversions when
extracting the texture data. This notably resulted in RGB/RGBA buffers
being copied as-is into BGRA buffers, for instance for the fullscreen
animation, or for single-window screenshots of such buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
The ARB_robustness extension defined the following tokens as
returned by GetGraphicsResetStatusARB (see spec at [1]):
GUILTY_CONTEXT_RESET_ARB 0x8253
INNOCENT_CONTEXT_RESET_ARB 0x8254
UNKNOWN_CONTEXT_RESET_ARB 0x8255
These tokens might not be defined in some GL implementations,
such as Mesa 13's implementation of GLES 2.0, so we need to
define them ourselves so as not to break those builds.
[1] https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_robustness.txt
https://bugzilla.gnome.org/show_bug.cgi?id=781398
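The fix amounts to fallback guards of this shape, using the GL_-prefixed
header names and the token values quoted above from the spec (a sketch;
the exact placement in the tree may differ):

    #ifndef GL_GUILTY_CONTEXT_RESET_ARB
    #define GL_GUILTY_CONTEXT_RESET_ARB 0x8253
    #endif
    #ifndef GL_INNOCENT_CONTEXT_RESET_ARB
    #define GL_INNOCENT_CONTEXT_RESET_ARB 0x8254
    #endif
    #ifndef GL_UNKNOWN_CONTEXT_RESET_ARB
    #define GL_UNKNOWN_CONTEXT_RESET_ARB 0x8255
    #endif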
We already do have a texture with an internal format in these paths,
so we should check the required format according to it.
This fixes CoglAtlasTexture (and CoglPangoRenderer indirectly): it
forces an RGBA format on its texture, but pixel_format_to_gl()
assumed swizzling would be performed on the texture anyway, which is
not the case.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
This is used by the GL driver in order to determine whether swizzling
actually applies given the bitmap and target texture internal format.
If both agree that they store BGRA, then swizzling may apply.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
If the GL implementation/hw supports the GL_*_texture_swizzle extension,
pretend that BGRA textures contain RGBA data, and let the channel
swapping happen when the texture is used in the rendering pipeline.
This avoids rather expensive format conversions when forcing BGRA buffers
into RGBA textures, which happens rather often with WL_SHM_FORMAT_ARGB8888
buffers (like gtk+ uses) on little-endian machines.
In intel/mesa/wayland, the performance improvement is rather noticeable,
CPU% as seen by top decreases from 45-50% to 25-30% when running
gtk+/tests/scrolling-performance with a cairo renderer.
https://bugzilla.gnome.org/show_bug.cgi?id=779234
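The mechanism boils down to uploading the BGRA bytes untouched into an
RGBA texture and letting the texture swizzle parameters swap red and
blue back at sampling time. A rough sketch, not the actual cogl code:

    static void
    upload_bgra_as_rgba_with_swizzle (GLenum target, int width, int height,
                                      const uint8_t *bgra_data)
    {
      /* Requires GL_ARB_texture_swizzle / GL_EXT_texture_swizzle */
      glTexParameteri (target, GL_TEXTURE_SWIZZLE_R, GL_BLUE);
      glTexParameteri (target, GL_TEXTURE_SWIZZLE_G, GL_GREEN);
      glTexParameteri (target, GL_TEXTURE_SWIZZLE_B, GL_RED);
      glTexParameteri (target, GL_TEXTURE_SWIZZLE_A, GL_ALPHA);

      /* Upload the BGRA bytes as-is; the swizzle fixes up sampling */
      glTexImage2D (target, 0, GL_RGBA8, width, height, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, bgra_data);
    }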
Because the threaded-swap-wait functionality requires XInitThreads(),
and because it isn't clear that it is a win for all applications,
add an API function to conditionally enable it.
Fix the cogl-crate example not to just have a hard-coded dependency
on libX11.
https://bugzilla.gnome.org/show_bug.cgi?id=779039
It's a good guess that the buffer swap will occur at the next vblank,
so use glXWaitVideoSync in a separate thread to deliver a sync event
rather than just letting the client block when drawing a frame, which
can significantly change app logic as compared to the INTEL_swap_event
case.
https://bugzilla.gnome.org/show_bug.cgi?id=779039
When we don't have GLX_OML_sync_control, we can still set the
frame presentation time, but we always use the system monotonic time,
so return that from get_clock_time().
https://bugzilla.gnome.org/show_bug.cgi?id=779039
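The fallback is essentially the following (a sketch; it assumes the
winsys clock callback reports nanoseconds, which is why the GLib
microsecond value is scaled):

    #include <glib.h>

    static int64_t
    get_monotonic_clock_time_ns (void)
    {
      /* g_get_monotonic_time () is in microseconds */
      return g_get_monotonic_time () * 1000;
    }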
As previously commented in the code, SGI_video_sync is per-display, rather
than per-renderer. The is_direct flag for the renderer was tested before
it was initialized (per-display) and that resulted in SGI_video_sync
never being used.
https://bugzilla.gnome.org/show_bug.cgi?id=779039
In order to minimize the amount of breakage, while at the same time
making it easier to make backward incompatible changes needed to
continue turning libmutter into a capable Wayland compositor, make the
libmutter and friends (libmutter-clutter, libmutter-cogl*) parallel
installable by adding a version number to the name. This changes
various filenames, for example what previously was libmutter.so is now
libmutter-0.so (assuming the version for now is 0), and
libmutter-clutter-1.0.so is now libmutter-clutter-0.so. The pkg-config
filenames and the GObject introspection namespaces have been renamed to
reflect this as well.
This enables a downstream compositor to rely on a specific version of the
libmutter API, while gracefully handling API/ABI changes by updating to
the new version at their own pace.
https://bugzilla.gnome.org/show_bug.cgi?id=777317
Different libEGL will do different things for eglGetDisplay since it has
to guess what kind of display it's been handed. Better to just use the
API that makes it explicit.
Signed-off-by: Adam Jackson <ajax@redhat.com>
https://bugzilla.gnome.org/show_bug.cgi?id=772422
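For the GBM/KMS case the explicit path looks roughly like this (a sketch
using the EXT entry point; the core EGL 1.5 eglGetPlatformDisplay behaves
the same way):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <gbm.h>

    static EGLDisplay
    get_egl_display_for_gbm (struct gbm_device *gbm)
    {
      PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
        (void *) eglGetProcAddress ("eglGetPlatformDisplayEXT");

      if (get_platform_display)
        return get_platform_display (EGL_PLATFORM_GBM_KHR, gbm, NULL);

      /* Old behaviour: libEGL has to guess what kind of display this is */
      return eglGetDisplay ((EGLNativeDisplayType) gbm);
    }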
We need a GLES2 extension macro in cogl-texture-2d-gl.c, but we can't
include GLES2/gl2ext.h because it will conflict with things in
GL/glext.h. We can't rely on cogl including anything for us since it'd
only include GLES2/gl2ext.h if OpenGL support was explicitly disabled.
https://bugzilla.gnome.org/show_bug.cgi?id=774891
EGLDevice requires a define from GLES2, even when GL is used instead.
As type definitions may conflict between the two, we shouldn't include
both at the same time. Instead, provide the missing define explicitly
when not using GLES2.
https://bugzilla.gnome.org/show_bug.cgi?id=774891
Add API to enable the caller to have a custom method for allocating an
external texture. This makes it possible for mutter to generate a
texture from, for example, an EGLStream without having to add support
for that in Cogl.
https://bugzilla.gnome.org/show_bug.cgi?id=773629
Don't rely on the Cogl layer having Wayland specific paths; instead,
determine the buffer type and create the EGLImage ourselves, using the
newly exposed CoglTexture-from-EGLImage API. This changes the API used
by MetaWaylandSurface so that the MetaWaylandBuffer API is aware of when
the buffer is being attached. For SHM and EGL buffers, only the first
attach results in a new texture being allocated, but later, for
EGLStreams, more logic is needed on every attach.
https://bugzilla.gnome.org/show_bug.cgi?id=773629
The WL_bind_wayland_display spec says that EGL images should be created
using EGL_WAYLAND_BUFFER_WL as the target and a NULL context. Mesa
seems to be lenient and accept any context, however some other stacks
aren't so forgiving and fail if anything apart from EGL_NO_CONTEXT is
used.
Signed-off-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
https://bugzilla.gnome.org/show_bug.cgi?id=769731
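Per the spec language above, the conforming call looks like this (a
sketch; egl_display and wl_buffer_resource are placeholder names, and
eglCreateImageKHR is normally fetched via eglGetProcAddress):

    EGLint attribs[] = { EGL_NONE };
    EGLImageKHR image =
      eglCreateImageKHR (egl_display,
                         EGL_NO_CONTEXT,          /* not the GL context */
                         EGL_WAYLAND_BUFFER_WL,   /* wl_buffer target */
                         (EGLClientBuffer) wl_buffer_resource,
                         attribs);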
Mutter (and libmutter users) are the only users of this version of
cogl, and will more or less only use the cogl-1.0, cogl-2.0 and cogl
experimental API variants. Having the possibility of different API
versions of the same API depending on what file includes it is error
prone and confusing, so let's just remove the possibility of having
different versions of the same API.
https://bugzilla.gnome.org/show_bug.cgi?id=768977
We bypass our build configuration to fetch API from a version which
isn't the one we actually use. Stop bypassing and just admit that the
1.0 API is still there, but still deprecated.
https://bugzilla.gnome.org/show_bug.cgi?id=768977
CoglFrameInfo is a frame info container associated with a single
onscreen framebuffer. The clutter stage will eventually support drawing
a stage frame with multiple onscreen framebuffers, thus needs its own
frame info container.
This patch introduces a new stage signal 'presented' and an accompanying
ClutterFrameInfo, and adapts the stage windows and the existing users of
onscreen frame callbacks to use the signal and the new info container.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
Move the KMS interaction from cogl into mutter, where most of the other
KMS interaction already takes place. This also removes dead code which
was only exercised when non-mutter callers used the cogl KMS backend.
The cogl KMS API was updated to pass via MetaRendererNative instead of
via the different cogl objects.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
In cogl use cogl-config.h and in clutter use clutter-build-config.h. We
can't use clutter-config.h in clutter because it's already used and
installed.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
If you include a file that might define __INSIDE_COGL_H__, don't
undefine it if it wasn't defined in that file. This makes it possible
to include for example cogl-gles2.h from some other file which defines
__INSIDE_COGL_H__.
https://bugzilla.gnome.org/show_bug.cgi?id=768976
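The guard pattern being described is roughly the following (the name of
the "must undef" flag macro is illustrative):

    /* In a header that may be included both standalone and from cogl.h: */
    #ifndef __INSIDE_COGL_H__
    #define __INSIDE_COGL_H__
    #define __COGL_MUST_UNDEF_INSIDE_COGL_H__
    #endif

    /* ... the header's actual content ... */

    /* Only undo the define if this file was the one that set it */
    #ifdef __COGL_MUST_UNDEF_INSIDE_COGL_H__
    #undef __INSIDE_COGL_H__
    #undef __COGL_MUST_UNDEF_INSIDE_COGL_H__
    #endif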
When using a context with robustness, glGetError() may return
GL_CONTEXT_LOST at any time and this error doesn't get cleared until
the application calls glGetGraphicsResetStatus(). This means that our
error checking can't call glGetError() in a loop without checking for
that return value and returning in that case.
https://bugzilla.gnome.org/show_bug.cgi?id=739178
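An error-draining loop therefore needs a shape along these lines (a
sketch; GL_CONTEXT_LOST is guarded in case the headers predate
KHR_robustness):

    #ifndef GL_CONTEXT_LOST
    #define GL_CONTEXT_LOST 0x0507
    #endif

    static void
    drain_gl_errors (void)
    {
      GLenum error;

      while ((error = glGetError ()) != GL_NO_ERROR)
        {
          /* GL_CONTEXT_LOST stays set until glGetGraphicsResetStatus()
           * is called, so bail out instead of spinning forever. */
          if (error == GL_CONTEXT_LOST)
            return;
        }
    }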
Commit 188752158 changed cogl to stop needlessly creating its own
monitor output configuration when mutter would just overwrite it soon
anyway.
Unfortunately, that commit is causing a crash in some cases because
cogl will now create and later draw to a 0x0 egl surface until mutter
sets the monitor layout.
This commit changes cogl to avoid creating and using a surface before
it knows how big a surface to create.
https://bugzilla.gnome.org/show_bug.cgi?id=758073
If we get EACCES from drmPageFlip we're not going to get
a flip event and shouldn't wait for one.
This commit changes the EACCES path to silently ignore the
failed flip request and just clean up the fb.
https://bugzilla.gnome.org/show_bug.cgi?id=756926
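A sketch of the resulting behaviour (names are illustrative;
drmModePageFlip() returns a negative errno, and on -EACCES no flip event
will ever arrive):

    #include <errno.h>
    #include <xf86drmMode.h>

    static void
    queue_flip_or_drop_fb (int drm_fd, uint32_t crtc_id, uint32_t fb_id,
                           void *flip_data)
    {
      int ret = drmModePageFlip (drm_fd, crtc_id, fb_id,
                                 DRM_MODE_PAGE_FLIP_EVENT, flip_data);

      if (ret == -EACCES)
        {
          /* e.g. VT switched away / lost DRM master: silently ignore the
           * failed flip request and just clean up the fb. */
          drmModeRmFB (drm_fd, fb_id);
        }
    }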
ES3 provides glMapBufferRange as core, with the added bonus that it also
supports read mappings. Use this where possible.
Signed-off-by: Daniel Stone <daniels@collabora.com>
https://bugzilla.gnome.org/show_bug.cgi?id=728355
GLES2 does not support passing READ to glMapBuffer; attempting to call
cogl_buffer_map for read on ES2 will bring it down with an assert. Make
sure COGL_DEBUG=journal doesn't do this when it's not possible.
Signed-off-by: Daniel Stone <daniels@collabora.com>
https://bugzilla.gnome.org/show_bug.cgi?id=728355
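The gist of both changes, sketched (on ES3 read mappings go through the
core glMapBufferRange(); on ES2 a read mapping must not be attempted at
all, so callers such as COGL_DEBUG=journal have to cope with NULL):

    static void *
    map_buffer_for_read (GLenum target, GLsizeiptr size, gboolean have_gles3)
    {
      if (have_gles3)
        return glMapBufferRange (target, 0, size, GL_MAP_READ_BIT);

      /* GLES2: no portable read mapping, don't assert */
      return NULL;
    }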
On Wayland, deinit() of an onscreen buffer destroys the native window
associated with an EGLSurface, so we should destroy the EGLSurface as
well, otherwise we might end up confusing the GL driver.
We also currently guard against setting EGL_NO_SURFACE as the current
EGLSurface, but this shouldn't be a problem if we have a surfaceless
context. So we allow surface destruction under that condition.
https://bugzilla.gnome.org/show_bug.cgi?id=754667
If the user switches VTs in the middle of a page flip, the
page flip operation may fail with EACCES. Page flipping will
work again the next time the VT becomes active, so we shouldn't
disable page flipping in that case.
https://bugzilla.gnome.org/show_bug.cgi?id=754540
If cogl fails to open the drm device, initialize gbm, or open the
egl display, then it closes the drm fd, uninitializes gbm, closes the
display and then calls _cogl_winsys_renderer_disconnect, which does
most of those things again on the now-deinitialized members.
This commit removes the explicit failure handling in renderer_connect and
defers cleanup to disconnect.
https://bugzilla.gnome.org/show_bug.cgi?id=754540
gbm confusingly has two different format types, and cogl
is using the wrong one in some of its calls to gbm_surface_create().
This commit fixes the calls that are wrong.
https://bugzilla.gnome.org/show_bug.cgi?id=754540
glGetIntegerv (GL_DEPTH_BITS, ...) and friends are deprecated in GL3; we
have to use glGetFramebufferAttachmentParameteriv() instead, like we do
for offscreen framebuffers.
Based on a patch by: Adel Gadllah <adel.gadllah@gmail.com>
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>
https://bugzilla.gnome.org/show_bug.cgi?id=753295
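For the window-system framebuffer the GL3-friendly query looks roughly
like this (a sketch; note that the default framebuffer uses GL_DEPTH and
GL_STENCIL as the attachment names rather than GL_DEPTH_ATTACHMENT):

    GLint depth_bits = 0, stencil_bits = 0;

    glBindFramebuffer (GL_FRAMEBUFFER, 0);
    glGetFramebufferAttachmentParameteriv (GL_FRAMEBUFFER, GL_DEPTH,
                                           GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE,
                                           &depth_bits);
    glGetFramebufferAttachmentParameteriv (GL_FRAMEBUFFER, GL_STENCIL,
                                           GL_FRAMEBUFFER_ATTACHMENT_STENCIL_SIZE,
                                           &stencil_bits);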
We want to be able to retrieve the XVisualInfo used when creating the
GL context under GLX and EGL-X11, so that we can use the visual before
we have an onscreen frame buffer.
Initialize variables; GCC does not always catch all cases where
variables are used without being initialized, especially when it comes
to out parameters.
Some drivers (like mgag200) don't yet support drmModePageFlip.
This commit forgoes waiting for vblank and flips right away
in those cases. That prevents the hardware from freezing up the screen,
but does mean there will be some visible tearing.
https://bugzilla.gnome.org/show_bug.cgi?id=746042
We can't just destroy and replace the EGL and gbm surfaces while
they are still in use i.e. while there is a pending flip. In fact, in
that case, we were calling gbm_surface_destroy() on a surface that
still had the front buffer locked and then, on the flip handler,
gbm_surface_release_buffer() for a buffer that didn't belong to the
new surface.
Instead, we still allocate new surfaces when requested but they only
replace the old ones on the next swap buffers when we're sure that the
previous flip has been handled and buffers properly released.
Currently the code queries the current msc then tries to approximate the
value of the next msc satisfying the modulus 2 for when to wait. This
introduces some instability as the msc may tick over during the
roundtrip, leading to a 32ms wait instead of a 16ms wait. This happens
often enough to cause jerky animations, and affects gnome-shell-perf-tool.
A simpler solution is to just use a single roundtrip, using WaitForMsc to
ask the driver to compute the next vblank itself.
Cc: Owen W. Taylor <otaylor@fishsoup.net>
Cc: Robert Bragg <robert@linux.intel.com>
Reviewed-by: Robert Bragg <robert@sixbynine.org>
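The single-roundtrip form amounts to something like this (a sketch; with
a target_msc of 0 the driver itself waits for the next msc satisfying
the divisor/remainder):

    int64_t ust, msc, sbc;

    /* Let the driver pick the next vblank with msc % 2 == 0 */
    glXWaitForMscOML (xdisplay, glx_drawable,
                      0 /* target_msc */,
                      2 /* divisor */,
                      0 /* remainder */,
                      &ust, &msc, &sbc);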
Add cogl_texture_pixmap_x11_new_left() and
cogl_texture_pixmap_x11_new_right() (which takes the left texture
as an argument) for texture pixmap rendering with stereo content.
The underlying GLXPixmap is created using a stereo visual and shared
between the left and right textures.
Reviewed-by: Robert Bragg <robert.bragg@intel.com>
If we want to show quad-buffer stereo with Cogl, we need to pick an
appropriate fbconfig for creating the CoglOnscreen objects. Add
cogl_onscreen_template_set_stereo_enabled() to indicate whether
stereo support is needed.
Add cogl_framebuffer_get_stereo_mode() to see if a framebuffer was
created with stereo support.
Add cogl_framebuffer_set_stereo_mode() to pick whether to draw to
the left, right, or both buffers.
Reviewed-by: Robert Bragg <robert.bragg@intel.com>
An application might, for whatever reason, want to control a specific
output directly and have cogl only swap the other outputs, if any. So add
an API that allows marking a crtc as ignored.
https://bugzilla.gnome.org/show_bug.cgi?id=730536
The surfaceless context extension can be used to bind a context
without a surface. We can use this to avoid creating a dummy surface
when the CoglContext is first created. Otherwise we have to have the
dummy surface so that we can bind it before the first onscreen is
created.
The main awkward part of this patch is that theoretically according to
the GL and GLES spec if you first bind a context without a surface
then the default state for glDrawBuffers is GL_NONE instead of
GL_BACK. In practice Mesa doesn't seem to do this but we need to be
robust against a GL implementation that does. Therefore we track when
the CoglContext is first used with a CoglOnscreen and force the
glDrawBuffers state to be GL_BACK.
There is a further awkward part in that GLES2 doesn't actually have a
glDrawBuffers state but GLES3 does. GLES3 also defaults to GL_NONE in
this case so if GLES3 is available then we have to be sure to set the
state to GL_BACK. As far as I can tell that actually makes GLES3
incompatible with GLES2 because in theory if the application is not
aware of GLES3 then it should be able to assume the draw buffer is
always GL_BACK.
Reviewed-by: Robert Bragg <robert.bragg@intel.com>
(cherry picked from commit e5f28f1e75db9bdc4f2688f420a74f908f96cf76)
Conflicts:
cogl/winsys/cogl-winsys-egl-kms.c
cogl/winsys/cogl-winsys-egl-x11.c
Some features that were previously available as an extension in GLES2
are now in core in GLES3 so we should be able to specify that with the
gles_availability mask of COGL_EXT_BEGIN so that GL implementations
advertising GLES3 don't have to additionally advertise the extension
for us to take advantage of it.
Reviewed-by: Robert Bragg <robert.bragg@intel.com>
(cherry picked from commit 4d892cd97558da61ba526f947ac0555ebab632d2)
When a new CoglAtlasTexture tries to fit into an existing CoglAtlas
it should make sure the atlas stays valid while it expands.
https://bugzilla.gnome.org/show_bug.cgi?id=728064
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 2eec9758f67e9073371c2edd63379324849373c4)
This can happen when we dpms off the output or when login1 takes away
drm master status from our drm fd. In either case, we need to call
the swap notify handler so that the compositor doesn't get stuck waiting
for that notification. The compositor should stop repainting shortly in
both cases, as it's either going into dpms off mode or vt switching away.
https://bugzilla.gnome.org/show_bug.cgi?id=728979
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This environment variable predates the reliable platform detection in mesa
and typically just causes crashes when the specified platform doesn't
match what's passed in. Aside from being unnecessary and problematic
it also leaks into the GNOME session, preventing clients from
automatically detecting the wayland platform.
https://bugzilla.gnome.org/show_bug.cgi?id=728978
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Prevent Cogl function names from being mangled when a C++ compiler is
being used by adding some missing COGL_{BEGIN,END}_DECLS guards.
https://bugzilla.gnome.org/show_bug.cgi?id=728628
Reviewed-by: Neil Roberts <neil@linux.intel.com>
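The guards expand to extern "C" { ... } under a C++ compiler and to
nothing otherwise; usage is simply (illustrative declaration):

    COGL_BEGIN_DECLS

    void cogl_some_public_function (void); /* kept unmangled for C++ users */

    COGL_END_DECLS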
Commit 1b2dd815 (Registers gtypes for all public objects and structs)
introduced GCCisms in its use of varargs, which broke the build of Cogl
with non-GCC compilers, such as Visual Studio.
Define the COGL_GTYPE_DEFINE_BASE_CLASS and COGL_GTYPE_DEFINE_CLASS macros
using ISO-style varargs so that the build of Cogl can be fixed on non-GCC
compilers.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Remove #pragma directives that cause any application that uses Cogl to
link to the SDL libraries when Cogl was built with the SDL winsys. This is
mainly due to the availability of both SDL-1.x and SDL-2.x support in the
SDL winsys, where different libraries are linked for SDL-1.x and SDL-2.x.
To avoid having to link to the SDL/SDL2 libraries when the application code
is not directly using SDL/SDL2, define SDL_MAIN_HANDLED in the CFLAGS so
that SDL's wrapper main() implementation will not be used when the
application is being built.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit c3035912833eabe1f6dadbea23c78e595aac79dc)
In Lionel's work to better support introspection for Cogl, a number of
public symbols were added to Cogl and Cogl-Path. Add these symbols to
cogl.symbols and cogl-path.symbols so that they can be exported, which
fixes the build of the Cogl conformance test and the introspection files
for the Windows-based builds.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This macro is internal to gobject so using it risks breaking Cogl if
glib changes its API. Instead we just use its expansion. Note that
glib provides two expansions for this depending on the glib version
but this only uses the one for older versions.
Reviewed-by: Robert Bragg <robert.bragg@intel.com>
This adds much more comprehensive support for gobject-introspection
based bindings by registering all objects as fundamental types that
inherit from CoglObject, and all structs as boxed types.
Co-Author: Robert Bragg <robert@linux.intel.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Remove the symbols that are now in cogl-path (which cogl-path.symbols
already includes), and add the symbols that were added to the Cogl API.
Also add internal symbols as required by cogl-path and cogl-pango.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
The DriverCallback is a function that is defined by the Windows SDK 8.0+
headers, which was initially used for device driver development. The use
of DriverCallback would cause a clash, causing things to break when built
with newer Windows SDKs, so rename DriverCallback to CoglDriverCallback to
avoid this problem.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
To help facilitate integration with third party frameworks this exposes
the EGL context and display to applications as well as the GLX context.
(Note that the GLX display is already available via
cogl_xlib_renderer_get_display())
This adds a new top-level <cogl/cogl-glx.h> header that needs to be
included explicitly to access the glx specific api.
Anyone using these apis will be responsible for checking that Cogl
is indeed using EGL or GLX by calling cogl_renderer_get_winsys_id()
This will enable GStreamer, for example, to be able to create a GL
context that shares resources with Cogl's context.
https://bugzilla.gnome.org/show_bug.cgi?id=724992
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This splits out the GLeglImageOES define in cogl-egl.h into a private
cogl-egl-private.h header and updates the guards in cogl-egl.h to be
consistent with other top-level headers where we need to be careful
about how __COGL_H_INSIDE__ is defined and undefined, esp when the
gobject introspection scanner is running.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This ensures we use EGLNativeWindowType and EGLNativeDisplayType
everywhere instead. The previous names come from EGL 1.2 but it seems
reasonable to require more recent EGL versions. If someone wanted to add
compatibility for EGL 1.2 later it would be straightforward to define
the new names to the old.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
SDL2 supports selecting between full OpenGL or OpenGL ES 1/2 but our
selection code was written before SDL 2.0 was officially released and
since then a new SDL_GL_CONTEXT_PROFILE_MASK attribute was added and
we have to explicitly set the SDL_GL_CONTEXT_MINOR_VERSION attribute.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This adds a new COGL_FEATURE_ID_BUFFER_AGE feature id that can be used
to determine if cogl_onscreen_get_buffer_age() will ever return an age
other than 0. This should be used instead of querying the winsys feature
via cogl_clutter_winsys_has_feature().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
We have an #ifdef EGL_WL_bind_wayland_display guard in
cogl-winsys-egl-feature-functions.h to avoid referencing wayland types
when the EGL header doesn't know about them, but somehow this guard also
ended up around the KHR_create_context and EXT_buffer_age features too
even though they aren't wayland specific.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This winsys feature flag is exposed via the deprecated
cogl_clutter_winsys_has_feature function and Clutter is currently
relying on it. Previously the EGL winsys was only setting the internal
COGL_EGL_WINSYS_FEATURE_BUFFER_AGE flag and there was no mapping to
the public flag. Therefore the feature would only be used on GLX. This
patch just adds the mapping.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8418e98b2b1b25515a961ad1bb9f0c4770d6eb1d)
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 7bc7ea4cb5e8134a3aeed9615477f4152b558509)
Conflicts:
cogl/winsys/cogl-winsys-egl-kms.c
Instead of spinning forever, do a roundtrip, which guarantees that the
global messages have been sent by the time we read the sync message.
If the proper globals aren't initialized yet, error out immediately. This
does mean that users can't use CoglOnscreen with foreign custom surface
types without xdg_shell, but when a use case comes for this, we'll
investigate then...
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit af9057d35f331e2c9509958fb40627917c477b80)
Since the Cogl 1.18 branch is actively maintained in parallel with the
master branch, this is a counterpart to commit 1b83ef938fc16b which
re-licensed the master branch to use the MIT license.
This re-licensing is a follow up to the proposal that was sent to the
Cogl mailing list:
http://lists.freedesktop.org/archives/cogl/2013-December/001465.html
Note: there was a copyright assignment policy in place for Clutter (and
therefore Cogl which was part of Clutter at the time) until the 11th of
June 2010 and so we only checked the details after that point (commit
0bbf50f905)
For each file, authors were identified via this Git command:
$ git blame -p -C -C -C20 -M -M10 0bbf50f905..HEAD
We received blanket approvals for re-licensing all Red Hat and Collabora
contributions which reduced how many people needed to be contacted
individually:
- http://lists.freedesktop.org/archives/cogl/2013-December/001470.html
- http://lists.freedesktop.org/archives/cogl/2014-January/001536.html
Individual approval requests were sent to all the other identified authors
who all confirmed the re-license on the Cogl mailing list:
http://lists.freedesktop.org/archives/cogl/2014-January
As well as updating the copyright header in all sources files, the
COPYING file has been updated to reflect the license change and also
document the other licenses used in Cogl such as the SGI Free Software
License B, version 2.0 and the 3-clause BSD license.
This patch was not simply cherry-picked from master; but the same
methodology was used to check the source files.
Not doing so leads to the following error, if stddef.h is not included
indirectly through EGL headers:
| libdrm/drm.h:132:2: error: unknown type name 'size_t'
| size_t name_len; /**< Length of name buffer */
Signed-off-by: Andreas Oberritter <obi@saftware.de>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 55c82476a93366a3e7d1a2537fccc3a7aab87c66)
The _cogl_egl_texture_2d_new_from_image function has a CoglError
argument which implies that it is unlike the other texture
constructors and returns errors immediately rather than having a
delayed-allocation mechanism. cogl_wayland_texture_2d_new_from_buffer
which calls it is also like this. We can't rely on delayed-allocation
semantics for this without changing the applications because the
texture needs to be allocated before the corresponding EGLImage is
destroyed. This patch just makes it immediately allocate.
A better patch might be to remove the error argument to make it
obvious that there are delayed-allocation semantics and then fix all
of the applications.
This was breaking Cogland and Mutter.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 0206c03d54823b2f6cbb2aa420d07a4db9bcd8a3)
The previous implementation was dereferencing the sample pointer in
order to get the offset to subtract from the member pointer. The
resulting value is then only used to get a pointer to the member in
order to calculate the offset so it doesn't actually read from the
memory location and shouldn't cause any problems. However this is
probably technically invalid and could have undefined behaviour. It
looks like clang takes advantage of this undefined behaviour and
doesn't actually offset the pointer. It also generates a warning when
it does this.
This patch splits the _cogl_container_of macro into two
implementations. Previously the macro was always used in the list
iterator macros like this:
SomeType *sample = _cogl_container_of(list_node, sample, link)
Instead of doing that there is now a new macro called
_cogl_list_set_iterator which explicitly assigns to the sample pointer
with an initial value before assigning to it again with the real
offset. This redundant initialisation gets optimised out by the compiler.
The second macro is still called _cogl_container_of but instead of
taking a sample pointer it just directly takes the type name. That way
it can use the standard offsetof macro.
https://bugzilla.gnome.org/show_bug.cgi?id=723530
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1efed1e0a2bce706eb4901979ed4e717bb13e4e2)
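The offsetof-based form being described is roughly the following (a
sketch; the iterator macro is omitted and the usage line uses
illustrative type names):

    #include <stddef.h>

    /* Takes the type name directly, so the standard offsetof() can be used
     * and no live pointer is ever dereferenced or offset. */
    #define _cogl_container_of(ptr, type, member) \
      ((type *) ((char *) (ptr) - offsetof (type, member)))

    /* Usage, given a struct embedded in a list via its `link` member:
     *   MyClosure *closure = _cogl_container_of (node, MyClosure, link);
     */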
The declaration of INTEL_swap_event was treating winsys features as
if they were a bitfield, but they aren't. The end result was that
instead of reporting two features when INTEL_swap_event is present,
we report none.
https://bugzilla.gnome.org/show_bug.cgi?id=719741
Reviewed-by: Robert Bragg <robert@linux.intel.com>
Since 248a76f5eac7e5ae4fb45208577f9a55360812a7 cogl.h can no longer be
included in internal source files so the WGL winsys was no longer
compiling.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 91af97a2a27ab5ad3e7eaabebd03503b685d4d42)
This adds COGL_PIXEL_FORMAT_RG_88 and COGL_TEXTURE_COMPONENTS_RG in
order to support two-component textures. RG components for a texture
are only supported if COGL_FEATURE_ID_TEXTURE_RG is advertised.
This is only available on GL 3, GL 2 with the GL_ARB_texture_rg
extension or GLES with the GL_EXT_texture_rg extension. The RG pixel
format is always supported for images because Cogl can easily do the
conversion if an application uses this format to upload to a texture
with a different format.
If an application tries to create an RG texture when the feature isn't
supported then it will raise an error when the texture is allocated.
https://bugzilla.gnome.org/show_bug.cgi?id=712830
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 568677ab3bcb62ababad1623be0d6b9b117d0a26)
Conflicts:
cogl/cogl-bitmap-packing.h
cogl/cogl-types.h
cogl/driver/gl/gl/cogl-driver-gl.c
tests/conform/test-read-texture-formats.c
tests/conform/test-write-texture-formats.c
The following changes are made to the documentation for CoglTexture:
• The description of the default value for the components property is
changed to say that it is always RGBA for textures created with the
‘_with_size’ constructors. Previously it said that the default is based
on the pixel format used the first time data is set on the texture,
but this is only true if the data is set using a constructor.
• Added documentation for the CoglTextureComponents enum.
• Changed it to say that it _specifies_ what components are required
for sampling rather than determinging [sic] them.
• Added ‘Since: 1.18’ to
cogl_texture_{set,get}_{components,premultiplied}
• Changed the since tag for CoglTextureError from 2.0 to 1.8.
• Added documentation for COGL_TEXTURE_ERROR_{FORMAT,TYPE}.
• Added the following to the cogl2-sections.txt file:
COGL_TEXTURE_ERROR
CoglTextureError
cogl_texture_allocate
cogl_texture_set_components
cogl_texture_get_components
cogl_texture_set_premultiplied
cogl_texture_get_premultiplied
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 2f12c4329c519fa14b927b2dcd708dddcc903c32)
Previously when a pipeline is added to the cache it would never be
removed. If the application is generating a lot of unique pipelines
this can end up effectively leaking a large number of resources
including the GL program objects. Arguably this isn't really a problem
because if the application is generating that many unique pipelines
then it is doing something wrong anyway. It also implies that it will
be recompiling shaders very often so the cache leaking will likely be
the least of the problems.
This patch makes it keep track of which pipelines in the cache are in
use. The cache now returns a struct representing the entry instead of
directly returning the pipeline. This entry contains a usage counter
which the pipeline backends can use to mark when there is a pipeline
alive that is using the cache entry. When the hash table decides that
it's a good time to prune some entries, it will make a list of all of
the pipelines that are not in use and then remove the least recently
used half of the pipelines. That way it is less likely to remove
pipelines that the application is actually regenerating often even if
they aren't in use all of the time.
When the cache is pruned the hash table makes a note of how small the
cache could be if it removed all of the unused pipelines. The hash
table starts pruning when there are more entries than twice this
minimum expected size. The idea is that if that case is hit then the
hash table is more than half full of useless pipelines so the
application is generating lots of redundant pipelines and it is a good
time to remove them.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit c21aac22992bb7fef5a8d0913130b8245e67f2eb)
Conflicts:
cogl/driver/gl/cogl-pipeline-fragend-glsl.c
cogl/driver/gl/cogl-pipeline-progend-glsl.c
cogl/driver/gl/cogl-pipeline-vertend-glsl.c
cogl/driver/gl/gl/cogl-pipeline-fragend-arbfp.c
This fixes the cogl_texture_get_components() prototype to have a return
type of CoglTextureComponents instead of CoglBool which was probably a
copy and paste error.
(cherry picked from commit 55b09f8a939db71ee5ff41afa0ed08cbe937a4ec)
This updates the cogl_texture_rectangle_new_with_size() api in line with
master to be consistent with other texture constructors. This removes
the internal_format and error arguments and allows the texture to be
allocated lazily which means the texture can be configured with apis
like cogl_texture_set_components() and cogl_texture_set_premultiplied()
before it is allocated.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
test-path for example uses COGL_ENABLE_EXPERIMENTAL_2_0_API to access
the 2.0 CoglPath api but also uses deprecated framebuffer api such as
cogl_push/pop_framebuffer. Similarly clutter-backend.c builds with
COGL_ENABLE_EXPERIMENTAL_2_0_API and uses cogl_set_framebuffer().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This moves the framebuffer stack apis out of cogl-framebuffer.c into
cogl/deprecated/cogl-framebuffer-deprecated.c so cogl-framebuffer.c is
more in-line with the master branch to ease cherry picking patches.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
This moves all of the automagic texture constructor prototypes from
cogl-texture.h into a new deprecated/cogl-auto-texture.h file. This also
moves cogl_texture_new_from_sub_texture() into
deprecated/cogl-auto-texture.c
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Texture allocation is now consistently handled lazily such that the
internal format can now be controlled using
cogl_texture_set_components() and cogl_texture_set_premultiplied()
before allocating the texture with cogl_texture_allocate(). This means
that the internal_format arguments to texture constructors are now
redundant and since most of the texture constructors now can't ever fail
the error arguments are also redundant. This now means we no longer
use CoglPixelFormat in the public api for describing the internal format
of textures, which had been a bad solution originally due to how specific
CoglPixelFormat is, and which is misleading when we don't support such
explicit control over the internal format.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 99a53c82e9ab0a1e5ee35941bf83dc334b1fbe87)
Note: there are numerous API changes for functions currently marked
as 'unstable' which we don't think are in use by anyone depending on
a stable 1.x api. Compared to the original patch though this avoids
changing the cogl_texture_rectangle_new_with_size() api which we know
is used by Mutter.
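With those changes, controlling the internal format looks roughly like
this (a sketch assuming the post-change constructor signature without
internal_format/error arguments; error handling kept minimal):

    CoglError *error = NULL;
    CoglTexture2D *tex =
      cogl_texture_2d_new_with_size (ctx, width, height);

    cogl_texture_set_components (COGL_TEXTURE (tex),
                                 COGL_TEXTURE_COMPONENTS_RGBA);
    cogl_texture_set_premultiplied (COGL_TEXTURE (tex), TRUE);

    if (!cogl_texture_allocate (COGL_TEXTURE (tex), &error))
      g_warning ("texture allocation failed: %s", error->message);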
This introduces the internal idea of texture loaders that track the
state for loading and allocating a texture. This defers a lot more work
until the texture is allocated.
There are several intentions to this change:
- provides a means for extending how textures are allocated without
requiring all the parameters to be supplied in a single _texture_new()
function call.
- allow us to remove the internal_format argument from all
_texture_new() apis, since using CoglPixelFormat is a bad way of
expressing the internal format constraints because it is too specific.
For now the internal_format arguments haven't actually been removed
but this patch does introduce replacement apis for controlling the
internal format:
cogl_texture_set_components() lets you specify what components your
texture needs when it is allocated.
cogl_texture_set_premultiplied() lets you specify whether a texture
data should be interpreted as premultiplied or not.
- Enable us to support asynchronous texture loading + allocation in the
future.
Of note, the _new_from_data() texture constructors all continue to
allocate textures immediately so that existing code doesn't need to be
adapted to manage the lifetime of the data being uploaded.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6a83de9ef4210f380a31f410797447b365a8d02c)
Note: Compared to the original patch, the ->premultiplied state for
textures isn't forced to be %TRUE in _cogl_texture_init since that
effectively ignores the user's explicitly given internal_format, which was
a mistake and on master that change should have been made in the patch
that followed. The gtk-doc comments for cogl_texture_set_premultiplied()
and cogl_texture_set_components() have also been updated in-line with
this fix.
When reading a texture back by first wrapping it as an offscreen
framebuffer and using _read_pixels_into_bitmap() we now make sure the
offscreen framebuffer has an internal format that matches the
meta-texture being read not that of the current sub-texture being
iterated. In the case of atlas textures the subtexture is a shared
texture whose format doesn't reflect the premultiplied alpha status of
individual atlas-textures, nor whether the alpha component is valid.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 6ee425d4f10acd8b008a2c17e5c701fc1d850f59)
CoglPixelFormat is not a good way of describing the internal
format of a texture because it's too specific given that we don't
actually have exact knowledge of the internal format used by the driver.
This makes cogl_texture_get_format private and in the future we'll
provide a better way of querying the channels and their precision.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit ffde82981f22bd0185a7f33e1e6e1479f4c295b8)
Note: Since we can't break API compatibility on the 1.x branch this adds
a cogl/deprecated/cogl-texture-deprecated.c file with a
cogl_texture_get_format() wrapper around the private api. This also
moves the cogl_texture_get_rowstride() and cogl_texture_ref/unref()
functions that were previously deprecated into cogl-texture-deprecated.c
This defers checking the internal format and whether accelerated
migration is supported until allocating the texture.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 83b05cbe3969789bc3ec78480c0937a6722efbf1)
Instead of throwing a CoglError exception if an application tries to
allocate a zero size atlas texture, this makes that a programmer error
instead.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit d3eaeedc86d408669b81d6c43ef2b0ab9d859c85)
The plan is to defer a lot more work in creating a texture until
allocation time. This means that for some texture backends we might not
know until after allocation whether the texture is sliced or can support
hardware repeating. This makes sure we trigger an allocation if either
of these are queried.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 4868582812dbcd5125495b312d858f751fc31e9d)
The plan is to defer a lot more work in creating a texture until
allocation time. This means we won't be able to assume that all textures
being used to render must have already been allocated when data was
specified.
The latest point at which we will generally require a texture to be
allocated will be when we need to know the underlying GL handle for a
texture and so this updates cogl_texture_get_gl_texture() to ensure the
texture is allocated.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 59f6fefc37524f492512a71b831760a218d9bb95)
The plan is to defer more of the work for creating a texture until
allocation time, but that means we won't be able to always assume
we can query the size of a texture when creating an offscreen
framebuffer from a texture (consider for example using
_texture_new_from_file() where the size isn't known until the file has
been loaded). This defers needing to know the size of the texture
underlying an offscreen framebuffer until calling
cogl_framebuffer_allocate().
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 9688e7dc1eeae3144729dfd4a4bf409620346bf4)
This ensures framebuffers are implicitly allocated when querying the
width, height or viewport width/height if the framebuffer's size is
currently unknown. The plan is to allow texture backends to defer
calculating the size of textures until they are allocated which in turn
means we won't know the size of offscreen framebuffers until the texture
has been allocated. Potentially we could be more specific about this in
the future and only ensure the texture is allocated, but for now it will
be simplest to just ensure the framebuffer is allocated.
Note: in the case of onscreen buffers which are always initialized with
a requested size we are careful to avoid triggering an allocation when
this is queried otherwise we will see recursion when the winsys code
queries the requested size during allocation.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f4b612f1b75e043f1852b9a32368cc37ab89308b)
Since we are planning on deferring more texture allocation work this
makes sure we don't query whether a texture is sliced until we know it
has been allocated.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit b742638a1ce581f5a2c9f15907361c3b0c1b178c)
This removes cogl_framebuffer_get_color_format() since the actual
internal format isn't strictly controlled by us. CoglFramebuffer::format
has been renamed to ::internal_format to make it clearer that it only
really represents the premultiplication status.
The plan is to make most of the work involved in creating a texture
happen lazily when allocating so this patch also changes
_cogl_framebuffer_init() to not take a format argument anymore since we
won't know the format of offscreen framebuffers until the framebuffer is
allocated, after the corresponding texture has been allocated. In the
case of offscreen framebuffers we now update the framebuffer
internal_format during allocation.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 8cc9e1c8bd2fac8b2a95087249c23c952d5e379f)
Note: Since we can't break API compatibility on the 1.x branch this
actually keeps the cogl_framebuffer_get_color_format() api but moves it
into a new deprecated/cogl-framebuffer-deprecated.c file and it now
returns the newly named ::internal_format.