Link to SDL.lib and SDLmain.lib if Cogl was built with the SDL winsys.
Recent changes to the SDL winsys introduced a direct dependency on
SDLmain.lib (and hence SDL.lib) when programs are built, causing linker
errors for any program that uses Cogl with the SDL winsys built in.
Since we cannot easily determine at build time whether a Cogl build
includes the SDL winsys, we use #pragma comment (lib, ...) whenever
cogl-sdl.h is included by cogl.h so that SDLmain.lib and SDL.lib are
linked into the resulting binary, allowing the program to link and run
correctly.
This does not add any external dependencies as the Cogl DLL already depends
on SDL.dll when it is built with the SDL winsys.
https://bugzilla.gnome.org/show_bug.cgi?id=682071
(cherry picked from commit 2921d2a4d9c79f1ca7530171e0dfa8c945607bc7)
We don't need to split the wrapper snippet into two separate parts
because it should be ok to declare the flip uniform in the middle of
the shader as long as it is somewhere in the global scope. Therefore
we can just declare it right before the definition of the replacement
main function. This is important because we don't want to put anything
at the top of the application's shader in case it is using a
'#version' directive: moving that directive to anything other than the
first line would break things.
This patch also adds a marker in a comment around the wrapper snippet
so that we can easily locate and remove the snippet when
glGetShaderSource is called.
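For illustration, the appended snippet ends up shaped roughly like this
(the identifiers and marker text are made up for the sketch, not copied
from the Cogl source):

    /* A C string literal holding the wrapper that gets appended to the
     * application's vertex shader source: */
    static const char example_main_wrapper[] =
      "\n"
      /* marker comment so the snippet can be found and stripped again
       * when glGetShaderSource is called: */
      "/* [example-cogl-wrapper-begin] */\n"
      /* the flip uniform, declared mid-shader so that any #version
       * directive can stay on the very first line: */
      "uniform vec4 _example_flip_vector;\n"
      "\n"
      "void\n"
      "main ()\n"
      "{\n"
      "  _example_real_main ();\n"   /* the application's renamed main() */
      "  gl_Position *= _example_flip_vector;\n"
      "}\n"
      "/* [example-cogl-wrapper-end] */\n";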
The wrapper for glGetAttachedShaders has been removed because there
are no longer any additional shaders attached to the program so we can
just let GL handle it directly.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit dbd92e24ae61dcbe7ef26f61c9117c5516a7fa87)
In our wrapper for glShaderSource we special-case vertex shaders so we
can sneak in a wrapper for the main function to potentially flip all
rendering upside down for better integration with Cogl.
Previously we were appending the wrapper to every sub-string passed in
the vector of strings to glShaderSource. We now grow the vector
instead, inserting the prelude string at the beginning and the wrapper
string at the end, so there is only ever one copy of each for a single
shader.
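A minimal sketch of that approach (the helper name and types are
assumptions, not the actual Cogl code):

    #include <stdlib.h>

    /* Build a new string vector for glShaderSource: one prelude at the
     * start, the user's strings untouched in the middle, and one wrapper
     * containing the replacement main() at the end. */
    static const char **
    example_wrap_shader_strings (int count,
                                 const char **user_strings,
                                 const char *prelude,
                                 const char *wrapper_main,
                                 int *n_wrapped_out)
    {
      const char **wrapped = malloc (sizeof (char *) * (count + 2));
      int i;

      wrapped[0] = prelude;
      for (i = 0; i < count; i++)
        wrapped[i + 1] = user_strings[i];
      wrapped[count + 1] = wrapper_main;

      *n_wrapped_out = count + 2;
      return wrapped;   /* pass to glShaderSource, then free */
    }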
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit d2904d518718e3fbf4441abe2c2bcfd63edfd64b)
The SGX GLSL compiler refuses to accept shaders of the form:
    void foo();

    void bar() {
        foo();
    }
where foo is undefined at glShaderSource() time, left for definition at
link time. To work around this, simply append the wrapper shader to
user shaders, rather than building a separate shader that's always
linked with user shaders.
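Illustratively, the source handed to glShaderSource is now a single
self-contained string along these lines (identifiers made up for the
sketch):

    /* Every function the compiler sees is already defined; nothing is
     * left to be resolved at link time. */
    static const char example_combined_source[] =
      /* user shader, with its main() renamed by the prelude */
      "void _example_user_main () { gl_Position = vec4 (0.0); }\n"
      /* wrapper appended by Cogl */
      "void main () { _example_user_main (); }\n";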
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 96f02c445f763ace088be71dc32b3374b2cdbab2)
Mesa now reports a vendor string of "Mesa Project" instead of "VMware,
Inc." and the software rasterizer's renderer string is now "Software
Rasterizer". This updates cogl-gpu-info.c to recognize these new strings.
Thanks to Alexander Larsson for the original patch.
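The recognition itself is just string matching; roughly (a sketch, not
the actual cogl-gpu-info.c tables):

    #include <string.h>

    /* Accept both the old and the new Mesa vendor strings. */
    static int
    example_check_mesa_vendor (const char *vendor_string)
    {
      return strcmp (vendor_string, "Mesa Project") == 0 ||
             strcmp (vendor_string, "VMware, Inc.") == 0;
    }

    /* Recognize the new software rasterizer renderer string. */
    static int
    example_check_swrast_renderer (const char *renderer_string)
    {
      return strcmp (renderer_string, "Software Rasterizer") == 0;
    }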
https://bugzilla.gnome.org/show_bug.cgi?id=683818
(cherry picked from commit dfacbbd96f3fbadaffa4a76dfd71c47ece6ed6a3)
As part of our ongoing goal to remove our dependence on a global Cogl
context, this patch adds a pointer to the context to each CoglTexture
so that the various texture apis no longer need to use
_COGL_GET_CONTEXT.
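The pattern, stripped of the Cogl specifics, is just a stored
back-pointer (a generic sketch, not the real structs):

    typedef struct
    {
      int max_texture_size;
      /* ... */
    } ExampleContext;

    typedef struct
    {
      ExampleContext *context;   /* back-pointer stored at creation time */
      int width, height;
    } ExampleTexture;

    static int
    example_texture_get_max_size (const ExampleTexture *texture)
    {
      /* Texture code can follow its own back-pointer instead of looking
       * up a global "current context". */
      return texture->context->max_texture_size;
    }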
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 83131072eea395f18ab0525ea2446f443a6033b1)
We want applications to fully control the lifetime of a CoglContext
without having to worry that internal resources (such as the default
2d, 3d and rectangle textures, or any caches we maintain) could result
in circular references that keep the context alive. We also want to
avoid making CoglContext a special kind of object that isn't
ref-counted or that can't be used with object apis such as
cogl_object_set_user_data. Being able to reliably destroy the context
is important on platforms such as Android where you may be required to
bring up and tear down a CoglContext numerous times throughout the
application's lifetime. A disadvantage of this policy is that it is now
possible to leave other objects, such as framebuffers, in an
inconsistent state if the context is unreferenced and destroyed. The
documentation states that all objects that directly or indirectly
depend on a destroyed context are left in an inconsistent state and
must not be accessed thereafter. Applications (such as Android
applications) that need to cleanly destroy and re-create Cogl resources
should make sure to manually unref these dependent objects before
destroying the context.
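For example, an application tearing down cleanly might do something
like this (a sketch; the objects are assumed to have been created by
the application earlier):

    /* (the experimental 2.0 api may need COGL_ENABLE_EXPERIMENTAL_2_0_API
     * to be defined before including cogl.h) */
    #include <cogl/cogl.h>

    static void
    example_shutdown (CoglFramebuffer *fb,
                      CoglPipeline *pipeline,
                      CoglContext *ctx)
    {
      /* Unref dependent objects first so nothing is left referring to a
       * destroyed context... */
      cogl_object_unref (fb);
      cogl_object_unref (pipeline);

      /* ...then drop the last reference to the context itself. */
      cogl_object_unref (ctx);
    }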
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 23ce51beba1bb739a224e47614a59327dfbb65af)
cogl_context_new() had a mixture of references to the file-scope
context variable (_context) and the local variable (context). This
renames the file-scope variable to _cogl_context to catch unnecessary
references to the old name and fixes the code accordingly to reference
the local variable instead.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 33a9397ee1ae1729200be2e5084cf43cebb64289)
There were lots of places where cogl_texture_2d_new_from_foreign would
simply return NULL without returning a corresponding error. We now
return an error wherever we are returning NULL except in cases where the
user provided invalid data.
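The convention looks roughly like this (a generic sketch using a
GError-style out parameter; the helper names and error domain are made
up, not Cogl's actual ones):

    #include <glib.h>

    typedef struct _ExampleTexture ExampleTexture;

    /* Assumed helpers, only here to make the sketch self-contained: */
    gboolean example_driver_supports_foreign_textures (void);
    ExampleTexture *example_texture_wrap (unsigned int gl_handle);

    static ExampleTexture *
    example_texture_new_from_foreign (unsigned int gl_handle,
                                      GError **error)
    {
      /* Invalid user data: return NULL without setting an error. */
      if (gl_handle == 0)
        return NULL;

      /* Any other NULL return is now paired with a filled-in error. */
      if (!example_driver_supports_foreign_textures ())
        {
          g_set_error (error,
                       g_quark_from_static_string ("example-texture-error"),
                       0, /* error code */
                       "Foreign GL textures are not supported");
          return NULL;
        }

      return example_texture_wrap (gl_handle);
    }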
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit a1efc9405a13ac8aaf692c5f631a3b8f95d2f259)
There are two extensions, GL_OES_packed_depth_stencil and
GL_EXT_packed_depth_stencil, that inform us that the hardware supports
packing the depth and stencil values together into one format.
The OES extension is the GLES equivalent of the EXT extension. The two
extensions provide the same enums with basically the same semantics,
except that the EXT extension is a lot more wordy (due to the larger
number of features in the full OpenGL api) and the OES extension has
some asymmetric limitations on when the GL_DEPTH_STENCIL and
GL_DEPTH24_STENCIL8 enums can be used as internal formats.
GL_OES_packed_depth_stencil doesn't allow the GL_DEPTH_STENCIL enum to
be passed to glRenderbufferStorage (GL_DEPTH24_STENCIL8 must be used
instead), nor does it allow GL_DEPTH24_STENCIL8 to be passed as an
internal format to glTexImage2D.
We had been handling the two extensions differently in Cogl:
try_creating_fbo was called with different flags depending on whether
the OES or EXT extension was available, and we passed GL_DEPTH_STENCIL
to glRenderbufferStorage with the EXT extension or GL_DEPTH24_STENCIL8
with the OES extension.
To localize the code that deals with the differences between the
extensions, this patch does away with the need for separate flags, so
we now just have COGL_OFFSCREEN_ALLOCATE_FLAG_DEPTH_DEPTH_STENCIL, and
right before calling glRenderbufferStorage we check which extension we
are using to decide whether to pass GL_DEPTH_STENCIL or
GL_DEPTH24_STENCIL8.
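That localized decision looks roughly like this (a sketch: the boolean
parameter stands in for however Cogl records which extension was
detected, and the GL symbols are assumed to come from the usual GL
headers):

    static void
    example_alloc_packed_depth_stencil (int have_ext_packed_depth_stencil,
                                        int width,
                                        int height)
    {
      /* EXT_packed_depth_stencil accepts the generic enum for
       * glRenderbufferStorage while OES_packed_depth_stencil requires
       * the sized one. */
      GLenum format = have_ext_packed_depth_stencil ?
        GL_DEPTH_STENCIL : GL_DEPTH24_STENCIL8;

      glRenderbufferStorage (GL_RENDERBUFFER, format, width, height);
    }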
(cherry picked from commit 88a05fac6609f88c0f46d9df2611d9fbaf159939)
The expression

    textures[iter_y.index * n_y_spans + iter_x.index]

only works for vertical rectangles when n_x_spans > 0 (i.e. x != {0}),
and is also wrong for horizontal rectangles: with x = {0, 1, 2, 3} and
y = {0, 1} the second row starts at index 2 (iter_y.index * n_y_spans +
iter_x.index), so the iteration visits 0, 1, 2, 3 then 2, 3, 4, 5
instead of 0, 1, 2, 3 then 4, 5, 6, 7.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit bf0d187f1b5423b9ce1281aab1333fa2dfb9863f)
When pruning a pipeline down to a set number of layers, the code
records the index of the first layer after the given number of layers
have been found. This is stored in a variable called
'first_index_to_prune', implying that this layer should be included in
the layers to be pruned. However the subsequent if-statement was only
pruning layers with an index greater than the recorded index, so it
would presumably only prune the layers following it. This patch fixes
it to use '>=' instead.
https://bugzilla.gnome.org/show_bug.cgi?id=683414
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit d3063e8dea92a8f668acef6435cc68e0c901dc8d)
When pruning layers from a pipeline, the pipeline cache would first be
freed by the call to pre_change_notify but then immediately recreated
when foreach_layer_internal is called. When n_layers is later set to 0
this leaves an invalid cache lying around. This patch changes the order
so that the layers are iterated first, before triggering the pre-change
notify, so that the cache ends up cleared correctly.
https://bugzilla.gnome.org/show_bug.cgi?id=683414
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 1c8efdc838cc5ace380365cb54e0741645856edf)
The check for whether to use ‘stride’ instead of ‘pitch’ from the GBM
API tries to determine whether the GBM version is >= 8.1.0. However it
was comparing the version components independently, so any version
with the minor part set to 0 would fail. The GBM version in Mesa
master is now 9.0.0, which breaks the check. This patch changes it to
compare versions using the COGL_VERSION_ENCODE macro instead.
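A sketch of the corrected check (COGL_VERSION_ENCODE is Cogl's public
version macro; the GBM_MAJOR/MINOR/MICRO macros and the helper define
below stand in for however the detected GBM version is exposed, and are
assumptions):

    #if COGL_VERSION_ENCODE (GBM_MAJOR, GBM_MINOR, GBM_MICRO) >= \
        COGL_VERSION_ENCODE (8, 1, 0)
    /* Newer GBM renamed the accessor */
    #define EXAMPLE_BO_STRIDE(bo) gbm_bo_get_stride (bo)
    #else
    #define EXAMPLE_BO_STRIDE(bo) gbm_bo_get_pitch (bo)
    #endif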
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 38f1dc58b35023f9e6bbc0db746b1554bd0377fc)
The generated release message and the README have been updated to point
to the reference manuals hosted on developer.gnome.org and to state
that documentation for the experimental 2.0 api is not currently
available online, since we are migrating services away from
clutter-project.org and may not be able to rely on it for much longer.
When building COGL with multiple backends it can be useful to force
which driver is selected by default. For example, while for Debian we
do want to build the GL renderer on ARM, GLESv2 is much more suitable
as the default renderer on that platform.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8a43aa7167b56784f7b50c557391b990861d594f)
So that we can show a UProf profile when running test-journal, the test
now declares a static "Mainloop" timer which it starts and stops around
the GLib mainloop it runs.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 66b3d23c201b2e7dd74602e6e6e6c2d8ead47bcd)
As a helpful aid, Cogl will now print a warning if no "Mainloop" UProf
timer was set up by the application. The warning explains that Clutter
should be built with --enable-profile, or, if Clutter isn't being used,
it shows how the application can create its own Mainloop timer.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 3d052dbca86bf36f30b2d60ff59b967d14665436)
This adds a UProf timer around the _cogl_journal_discard() call at the
end of _cogl_journal_flush(), since this sometimes takes a significant
proportion of the time to flush the journal.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 14ffc3a197100be814452af2d0f839970353b04d)
Since building with --disable-debug should only disable the debug
features that may impact performance, this ensures that the COGL_DEBUG_
macros aren't defined as NOPs for non-debug builds.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit c4b50040b5c033e370eb721d1d217eced8ebdaad)
_cogl_pipeline_get_layer_with_flags accepts a CoglPipelineGetLayerFlags
flags argument and understands one flag,
COGL_PIPELINE_GET_LAYER_NO_CREATE. However there was a mistake in the
definition of this enum: COGL_PIPELINE_GET_LAYER_NO_CREATE had a value
of 0, so testing for the flag using the bitwise & operator would never
find it set.
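The underlying issue is simply that a flag with the value 0 can never
be detected with a bitwise test; schematically (values illustrative,
not copied from the Cogl headers):

    typedef enum
    {
      /* Before the fix the flag was effectively defined as 0, so
       * (flags & EXAMPLE_GET_LAYER_NO_CREATE) could never be true. */
      EXAMPLE_GET_LAYER_NO_CREATE = 1 << 0   /* now a real, testable bit */
    } ExampleGetLayerFlags;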
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 5923f92f1428b3eb4977b5f21723f1b19a9d284a)