Previously Cogl would only call wl_display_flush after swapping
buffers on the onscreen because that was the only place where Cogl
itself would end up queueing requests. However, since commit
323fe188748 Cogl takes control of calling wl_display_dispatch as well,
which effectively makes it very difficult for the application to
handle the Wayland event queue itself. Therefore it needs to rely on
Cogl to do it, which means that other parts of the application may
also queue requests that need to be flushed.
This patch tries to copy the display fd handling of window.c in the
Weston example clients. wl_display_flush will always be called in the
prepare function for the fd, which means it will always be called
before going idle. If flushing the display causes the socket buffer to
become full, it will additionally poll for write on the FD so that it
can try flushing again once there is room.
We also need to call wl_display_dispatch_pending in the prepare
because apparently calling eglSwapBuffers can cause it to read data
from the FD to receive events for a different queue. In that case
there will be events that need to be handled but the FD will no longer
be ready for reading so we won't wake up the main loop any other way.
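As a rough sketch of the prepare-style handling described above (the
function name and the exact integration with the poll machinery are
hypothetical, loosely following the Weston window.c pattern):

  #include <errno.h>
  #include <poll.h>
  #include <wayland-client.h>

  static short
  wayland_display_prepare (struct wl_display *display)
  {
    /* eglSwapBuffers may have read events for another queue, so
     * dispatch anything already queued before going idle. */
    wl_display_dispatch_pending (display);

    /* Always flush before sleeping; if the socket buffer is full,
     * also poll for write so we can retry the flush later. */
    if (wl_display_flush (display) < 0 && errno == EAGAIN)
      return POLLIN | POLLOUT;

    return POLLIN;
  }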
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 962d1825105a87dd8358a765353b77f6af8fe760)
_cogl_poll_renderer_modify_fd can be used internally to modify the
event mask on an FD to be polled. This will be used in the Wayland
backend to start blocking on write whenever flushing the display fills
the socket's buffer. Modifying the FD's events causes the poll age to
increase.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8bc0df53ee508687b87e547c1cbac5e8d7d5fc80)
Eventually the Wayland winsys will want to do useful work in its
prepare callback before the main loop goes idle. Previously
cogl_poll_renderer_get_info would stop calling any further prepare
functions if it found one with a zero timeout. That would mean the
Wayland prepare function might not get called before going idle in
some cases. This patch changes it so that it continues to call all of
the prepare functions regardless of the timeout.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 02f7fa538c9d2b383fa0f601177140b571ecf315)
If we don't do this then it might leak connections to the display if
multiple different renderers are tried.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8e5b4d40a4d960d0d20927d30ee68a37387fe776)
When a primitive is drawn with an attribute that contains texture
coordinates Cogl will fetch the corresponding layer in order to
determine the unit number. However, if the pipeline didn't actually
have such a layer it would end up redundantly creating one. It's
probably not a good idea to be modifying the pipeline while flushing
the attributes state, so this patch makes it pass the no-create flag
to the get_layer function and then skips enabling the attribute if the
layer didn't already exist.
https://bugzilla.gnome.org/show_bug.cgi?id=702570
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 7507ad1a55a2aeb5beb8c0e3343e1e1f2805ddde)
When a layer is added to a pipeline without setting a texture it ends
up sampling from a default 1x1 texture which is meant to be solid
white. However for some reason we were creating the texture with 0
opacity which is effectively an invalid premultiplied colour. This
would make the blending behave oddly if it was used.
https://bugzilla.gnome.org/show_bug.cgi?id=702570
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 2ffc77565fb6395b986d3274f8bdb6eee6addbf9)
Unlike in GError, the policy in Cogl for when NULL is passed as the
CoglError argument is that the program should abort with a fatal
error. Previously however any errors that were being propagated were
being silently dropped if the application passed NULL. This patch
fixes it to also log a fatal error in that case.
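As an illustrative sketch of what this policy means for callers
(cogl_texture_allocate() is just one example of an API that takes a
CoglError argument):

  #include <stdio.h>
  #include <cogl/cogl.h>

  static CoglBool
  allocate_or_report (CoglTexture *texture)
  {
    CoglError *error = NULL;

    /* Passing &error propagates any failure to the caller... */
    if (!cogl_texture_allocate (texture, &error))
      {
        fprintf (stderr, "Allocation failed: %s\n", error->message);
        cogl_error_free (error);
        return FALSE;
      }

    return TRUE;
  }

  /* ...whereas passing NULL now means any failure is logged as a
   * fatal error and the program aborts, instead of the error being
   * silently dropped:
   *
   *   cogl_texture_allocate (texture, NULL);
   */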
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 41e233b4b27de579f77b82115cf43a618bf0c93f)
This adds several utility apis that aim to make it as easy as possible
for an application to determine what size a video should be drawn at.
The important detail here is that these apis take into account the
pixel-aspect-ratio in addition to the video's own aspect ratio.
This patch updates the cogl-basic-video-player example to use the
cogl_gst_video_sink_fit_size() api to perform letterboxing.
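A minimal sketch of that letterboxing usage, assuming the
cogl_gst_video_sink_fit_size() signature and the CoglGstRectangle
field names used here:

  #include <cogl-gst/cogl-gst.h>

  static void
  get_letterboxed_rect (CoglGstVideoSink *sink,
                        float onscreen_width,
                        float onscreen_height,
                        CoglGstRectangle *out)
  {
    CoglGstRectangle available;

    available.x = 0;
    available.y = 0;
    available.width = onscreen_width;
    available.height = onscreen_height;

    /* Takes both the video's aspect ratio and its pixel-aspect-ratio
     * into account and centres the result within the available area. */
    cogl_gst_video_sink_fit_size (sink, &available, out);
  }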
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit d26f17c97ff6b9f6d6211e0527d5965a85305a56)
Previously cogl_gst_video_sink_attach_frame returned the last layer
used by the sink. This can also be retrieved via
cogl_gst_video_sink_get_free_layer so I don't think it's necessary to
return it here and it seems kind of out of place.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 0805f3e9db715fc2c97b7e60d4f3a25e7fa598fc)
These are just internal convenience macros to define the GObject
properties so they shouldn't be in the public headers.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit b7b861aa87ad05a2c253afdd87323acd82fd988f)
Adds documentation comments to CoglGstVideoSink and makes it generate
a separate manual to contain it.
One thing that I wasn't able to figure out with this was how to get
the documentation to have correct references to the main Cogl docs.
You can pass arguments to gtkdoc-fixxref to point to other manuals,
but presumably this needs the installed locations and when the
Cogl-Gst documentation is generated the Cogl docs may not have been
installed yet.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5acdf8db47311893a9cf9ea04a66a287b657b8b0)
This declaration was a leftover from a previous iteration of the
CoglGST patches.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 53ae6d39efa113e28f5d1177ccb8613ddf28ee43)
These functions are used when attaching frames at a starting layer
other than 0 is required. This could potentially be used to "layer"
videos and textures on top of each other in the CoglPipeline. This
layering could come in handy if videos are used as alpha masks or
normal maps, or when arranging layers in a particular order, so Cogl
could blend them nicely without extra hassle.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f98b437c9ea79f04ef1ebbb6ff547f58b49b0008)
Previously, when CoglGST generated the default sampling snippet it
would set the replace string so that Cogl wouldn't generate any
redundant code for the other layers. However this also meant that it
wouldn't modulate with the default colour. This patch changes it to
set the combine mode on all of the layers to REPLACE(PREVIOUS) so that
it just copies the previous layer without generating any texture
sampling. That way the fragment snippet for the final layer can just
modulate the previous value with the video sampling function. This
makes it possible to set a color on the pipeline and have it modulate
the video. Also if we eventually add a way to insert layers before the
GST sampling layer then it can modulate with those.
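Roughly what the REPLACE(PREVIOUS) combine mode looks like when
expressed through the public layer-combine API (a sketch, not the
sink's actual internals):

  #include <cogl/cogl.h>

  static void
  passthrough_layer (CoglPipeline *pipeline, int layer_index)
  {
    /* Copy the previous layer's value unchanged so that no texture
     * sampling code is generated for this layer. */
    cogl_pipeline_set_layer_combine (pipeline, layer_index,
                                     "RGBA = REPLACE (PREVIOUS)",
                                     NULL);
  }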
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit f5da8d2caf4b2fbaf6941642a5d5ea7b93d0dd0f)
This is an inappropriately pessimistic remark for the header of
CoglGstVideoSink.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ce2a70165054fea3e9d992c14650ba8cccf60e97)
Luminance textures are not supported on GL3 and as the textures are
accessed via a shader anyway it doesn't seem like it should make much
difference which component the single-component textures are in. Cogl
already has code to fake alpha textures via the texture swizzle
extension on GL3.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ca1666860a325fa4d2362cdd38297d6281e997d8)
Instead of hardcoding the version number “0.0” it now uses the version
number from the Cogl source. The web address has been changed to
cogl3d.org.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 5363e4a68da9811fb136a01b278846ce15913287)
• Fixes some overly long lines, hugging asterisks in the pointer type
declarations and indentation issues.
• Tidies up the GLSL source so that it will look nicer in the
debug output.
• Removes the backwards ‘parent_class’ define which hacks the
implementation of the G_DEFINE_TYPE macro and just uses the full
type name instead.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 3ffe8979da4d4dc7deb221e5653b6f24f41b412c)
There were a few hugging pointer asterisks and inconsistent newlines for
a prototype which this patch tweaks.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 488c074316b270b17d1000c68f26210108b1b0ca)
The experimental HW decode path was adapted from clutter-gst based on
some experimental gstreamer api. This path was disabled by upstream
gstreamer developers back in September last year due to instabilities.
Without understanding how the experimental api is implemented it seems
rather strange to be plucking out the GL handle of a cogl texture and
passing that to some unknown gstreamer code which would presumably
somehow have to use the same GL context as Cogl to be able to do
something with that texture. For now we can strip all of this unused
code and it would be easy enough to re-instate later if it's useful.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 2a24cb9f2a283af87bb685b3e9c7e3af1ffeab6e)
CoglGst is a GStreamer integration library that facilitates
video playback using the Cogl API. It works by retrieving
each video frame from the GStreamer pipeline and attaching
it to a Cogl pipeline in the form of a Cogl texture along
with possible color model conversion shaders. The pipeline
is then retrieved by the user during each draw. An example
use of the CoglGst API is included in the examples directory.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit dfb94cf4b0b6b42d6465df362a0c0af780596890)
When determining whether a pipeline needs blending, it was previously
returning TRUE if the pipeline had no snippets, whereas it should be
the other way around because we can't determine the final colour when
there are snippets.
https://bugzilla.gnome.org/show_bug.cgi?id=702570
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 109c815bf747fe027a74f098b4fcb6ea4846a482)
We want any run-time warnings to cause the conformance tests to fail.
We are currently setting G_DEBUG in test_utils_init and this would
previously cause the fatal-warnings debug option to be set. However
since commit 47444dac of glib this no longer works because the
environment variable is read in a magic constructor of libglib so it
is too late to try to set it there. This patch makes it also set it in
run-tests.sh to avoid the problem.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 95a6d962f5bc2f21bfcdb2f0bc6b55cfa28792b3)
Previously on GLES2, where there is no builtin point size uniform, we
would always add a line to the vertex shader to write to the builtin
point size output, because when generating the shader it is not
possible to determine whether the pipeline will be used to draw
points. This patch changes the default point size to 0.0f, which is
documented to have undefined results when drawing points. That way we
can avoid adding the point size code to the shader in that case. The
assumption is that any application drawing points will probably have
explicitly set the point size on the pipeline anyway, so it is not a
big deal to change the default size from 1.0f.
This adds a new pipeline state flag to track whether the point size is
non-zero. This needs to be its own state because altering it needs to
cause a different shader to be added to the pipeline cache. The state
flags that affect the vertex shader have been changed from a constant
to a runtime function because they will be different depending on
whether there is a builtin point size uniform.
There is also a unit test to ensure that changing the point size does
or doesn't generate a new shader depending on the values.
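For example, an application drawing points would now be expected to
set the size explicitly rather than rely on the old 1.0f default (a
sketch):

  #include <cogl/cogl.h>

  static CoglPipeline *
  create_points_pipeline (CoglContext *ctx)
  {
    CoglPipeline *pipeline = cogl_pipeline_new (ctx);

    /* The default is now 0.0f, which is undefined for point drawing,
     * so set the size explicitly before drawing points. */
    cogl_pipeline_set_point_size (pipeline, 1.0f);

    return pipeline;
  }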
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit b2eba06e16b587acbf5c57944a70ceccecb4f175)
Conflicts:
cogl/cogl-pipeline-private.h
cogl/cogl-pipeline-state-private.h
cogl/cogl-pipeline-state.c
cogl/cogl-pipeline.c
The handler for ConfigureNotify events in the EGL X11 winsys was
incorrectly trying to dereference the onscreen pointer even if it
didn't find an onscreen for the X window that was resized. This meant
that if the application had other windows that weren't created by Cogl
then it would crash when handling events for them.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit a0056df61903d74180d4e4caa1046e68396d1be0)
Add register constraints to prevent asm statement complaints like:
{standard input}:382: rdhi, rdlo and rm must all be different
Signed-off-by: Donn Seeley <donn.seeley@windriver.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
There are two asm() statements in cogl-fixed.c that can't be assembled
in Thumb mode. This patch switches to the generic code when compiling
in Thumb mode.
Signed-off-by: Donn Seeley <donn.seeley@windriver.com>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Previously when trying the three different texture types to create an
automagic texture it would handle the out-of-memory error specially
and bypass trying the remaining texture types. Presumably the idea is
that out-of-memory is a serious error and it can't be recovered from.
However, in the case of atlas textures, this error will be thrown if
the texture is too large to fit into an atlas. In that case it makes
sense to try another texture type so that it can fallback to using a
sliced texture. I think conceptually each different texture type will
have different memory requirements so it seems reasonable to try the
others if there is not enough memory for one of them.
This was causing cogl_texture_new_from_data to break when loading very
large textures because it wouldn't end up slicing them.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ad6968135a01823eb6a94668dd22c7a4df6f9327)
This removes cogl-queue.h and adds a copy of Wayland's embedded list
implementation. The advantage of the Wayland model is that it is much
simpler and so it is easier to follow. It also doesn't require
defining a typedef for every list type.
The downside is that there is only one list type which is a
doubly-linked list where the head has a pointer to both the beginning
and the end. The BSD implementation has many more combinations some of
which we were taking advantage of to reduce the size of critical
structs where we didn't need a pointer to the end of the list.
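As a rough sketch of the model (the actual Cogl names may differ
slightly), a single node type is embedded both in the head and in
every element:

  typedef struct _CoglList
  {
    struct _CoglList *prev;
    struct _CoglList *next;
  } CoglList;

  typedef struct
  {
    int payload;
    CoglList link; /* node embedded in the element */
  } Element;

  static void
  cogl_list_init (CoglList *list)
  {
    list->prev = list;
    list->next = list;
  }

  static void
  cogl_list_insert (CoglList *list, CoglList *elm)
  {
    elm->prev = list;
    elm->next = list->next;
    list->next->prev = elm;
    list->next = elm;
  }

Because the head carries pointers to both ends, structs that
previously used a singly-linked head grow by one pointer, as described
below.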
The corresponding changes to uses of cogl-queue.h are:
• COGL_STAILQ_* was used for the list of onscreen events and dirty
notifications. This makes the size of the CoglContext grow by one
pointer.
• COGL_TAILQ_* was used for fences.
• COGL_LIST_* for CoglClosures. In this case the list head now has an
extra pointer which means CoglOnscreen will grow by the size of
three pointers, but this doesn't seem like a particularly important
struct to optimise for size anyway.
• COGL_LIST_* was used for the list of foreign GLES2 offscreens.
• COGL_TAILQ_* was used for the list of sub stacks in a
CoglMemoryStack.
• COGL_LIST_* was used to track the list of layers that haven't had
code generated yet while generating a fragment shader for a
pipeline.
• COGL_LIST_* was used to track the pipeline hierarchy in CoglNode.
The last part is a bit more controversial because it increases the
size of CoglPipeline and CoglPipelineLayer by one pointer in order to
have the redundant tail pointer for the list head. Normally we try to
be very careful about the size of the CoglPipeline struct. Because
CoglPipeline is slice-allocated, this effectively ends up adding two
pointers to the size because GSlice rounds up to the size of two
pointers.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 13abf613b15f571ba1fcf6d2eb831ffc6fa31324)
Conflicts:
cogl/cogl-context-private.h
cogl/cogl-context.c
cogl/driver/gl/cogl-pipeline-fragend-glsl.c
doc/reference/cogl-2.0-experimental/Makefile.am
Previously CoglPipelineSnippetList was using the BSD embedded list
type with a mini struct to combine the list node with a pointer to the
snippet. This is effectively equivalent to just using a GList so we
might as well do that. This will help if we eventually want to get rid
of cogl-queue.h
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 54a168f3c7829c427d54ab517533bb9f7384d022)
The free function for atlas textures was previously always assuming
that there will be a valid sub_texture pointer but this might not be
the case if the texture was never successfully allocated. This was
causing Cogl to crash if the application tried to make a texture that
could not fit in the atlas using the automagic texture API.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit a7a8b7aefc8cb03fe8b716bee06b3449a7dba85f)
It looks like Cogl doesn't currently clean up an unallocated atlas
texture properly, so if it fails to allocate it will then crash when
it is freed. This adds a conformance test to verify that all of
the various texture types can be freed without allocating them.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit bfada22d9a7f60baad5ae5615e45f10327e23e36)
Previously the unit tests were using libdl without directly linking to
it. It looks like this ends up working because one of Cogl's
dependencies ends up pulling in -ldl via libtool. However in some
configurations it looks like this wasn't happening.
To avoid this problem we can just use GModule to resolve the symbols.
g_module_open is documented to return a handle to the ‘main program’
when NULL is passed as the filename and looking at the code it seems
that this ends up using RTLD_DEFAULT so it will have the same effect.
The in-tree copy of glib already has the code for gmodule so this
shouldn't cause problems for --disable-glib.
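A sketch of what resolving a symbol from the main program via GModule
looks like (the helper name here is hypothetical):

  #include <gmodule.h>

  static void *
  resolve_main_program_symbol (const char *name)
  {
    /* Passing NULL returns a handle to the 'main program', which
     * effectively behaves like dlsym() with RTLD_DEFAULT. */
    GModule *main_module = g_module_open (NULL, 0);
    void *symbol = NULL;

    if (main_module == NULL ||
        !g_module_symbol (main_module, name, &symbol))
      return NULL;

    return symbol;
  }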
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit b14ece116ed3e4b18d59b645e77b3449fac51137)
This adds a new function to enable per-vertex point size on a
pipeline. This can be set with
cogl_pipeline_set_per_vertex_point_size(). Once enabled the point size
can be set either by drawing with an attribute named
'cogl_point_size_in' or by writing to the 'cogl_point_size_out'
builtin from a snippet.
There is a feature flag which must be checked for before using
per-vertex point sizes. This will only be set on GL >= 2.0 or on GLES
2.0. GL will only let you set a per-vertex point size from GLSL by
writing to gl_PointSize. This is only available in GL2 and not in the
older GLSL extensions.
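A sketch of the intended usage, assuming the feature ID name and the
function signature shown here:

  #include <cogl/cogl.h>

  static CoglBool
  enable_per_vertex_point_size (CoglContext *ctx, CoglPipeline *pipeline)
  {
    if (!cogl_has_feature (ctx, COGL_FEATURE_ID_PER_VERTEX_POINT_SIZE))
      return FALSE;

    /* Once enabled, the size can come from a 'cogl_point_size_in'
     * attribute or from writing to 'cogl_point_size_out' in a
     * snippet. */
    return cogl_pipeline_set_per_vertex_point_size (pipeline, TRUE, NULL);
  }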
The per-vertex point size has its own pipeline state flag so that it
can be part of the state that affects vertex shader generation.
Having to enable the per vertex point size with a separate function is
a bit awkward. Ideally it would work like the color attribute where
you can just set it for every vertex in your primitive with
cogl_pipeline_set_color or set it per-vertex by just using the
attribute. This is harder to get working with the point size because
we need to generate a different vertex shader depending on what
attributes are bound. I think if we wanted to make this work
transparently we would still want to internally have a pipeline
property describing whether the shader was generated with per-vertex
support so that it would work with the shader cache correctly.
Potentially we could make the per-vertex property internal and
automatically make a weak pipeline whenever the attribute is bound.
However we would then also need to automatically detect when an
application is writing to cogl_point_size_out from a snippet.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 8495d9c1c15ce389885a9356d965eabd97758115)
Conflicts:
cogl/cogl-context.c
cogl/cogl-pipeline-private.h
cogl/cogl-pipeline.c
cogl/cogl-private.h
cogl/driver/gl/cogl-pipeline-progend-fixed.c
cogl/driver/gl/gl/cogl-pipeline-progend-fixed-arbfp.c
This ensures we only add a static_breadcrumb pointer to every
CoglPipeline when built with debugging enabled. Since applications may
allocate a lot of pipelines we want to keep the basic size of pipelines
(ignoring optional sparse state) down to a minimum.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 4716312e14bc253cd174a22b3db9d2c9cf031fa1)
This moves the code in test-bitmask into a UNIT_TEST() directly in
cogl-bitmask.c which will now be run as a tests/unit/ test.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 693c85e0cde8a1ffbffc03a5f8fcc1f92e8d0ac7)
Includes a fix to build the conform tests with -I$(top_builddir)/cogl
so that cogl-gl-header.h can be found.
This adds a white-box unit test that verifies that GL_BLEND is disabled
when drawing an opaque rectangle, enabled when drawing a transparent
rectangle and then disabled again when drawing a transparent rectangle
but with a blend string that effectively disables blending.
This shares the test utilities and launcher infrastructure we are using
for conformance tests so we get consistent reporting and so unit tests
will be run against a range of different drivers.
This adds a --enable-unit-tests configure option which is enabled by
default but, if disabled, will make all UNIT_TEST()s into static inline
functions that we should expect the compiler to discard since they won't
be referenced by anything.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 9047cce06bbf9051ec77e622be2fdbb96ed767a8)
This adds a test to make sure that if the same pipeline is used to draw
two primitives, one which doesn't need blending because it has an opaque
color associated, and another using a color attribute that requires
blending then Cogl should recognize that it needs to enable blending for
the second primitive even though the pipeline hasn't changed.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit c15d91f1c6293bebd4494d1e20586121483cddef)
_cogl_pipeline_update_blend_enable() can sometimes show up quite high
in profiles, so instead of calling it whenever we change pipeline
state that may affect blending we now just set a dirty flag. When we
flush a pipeline we check this dirty flag and, if it is set, lazily
calculate whether blending really needs to be enabled.
It turns out we were too optimistic in assuming most GL drivers would
recognize that blending with ADD(src,0) is equivalent to disabling
GL_BLEND, so we now check this case ourselves and always explicitly
disable GL_BLEND when we know we don't need blending.
This introduces the idea of an 'unknown_color_alpha' boolean to the
pipeline flush code which is set whenever we can't guarantee that the
color attribute is opaque. For example this is set whenever a user
specifies a color attribute with 4 components when drawing a primitive.
This boolean needs to be cached along with every pipeline because
pipeline::real_blend_enabled depends on this and so we need to also call
_cogl_pipeline_update_blend_enable() if the status of this changes.
Incidentally with this patch we now no longer ever use
_cogl_pipeline_set_blend_enable() internally. For now the internal api
hasn't been removed though since we might want to consider re-purposing
it as a public api since it will now not conflict with our own internal
state tracking and could provide a more convenient way to disable
blending than setting a blend string.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit ab2ae18f3207514c91fa6fd9f2d3f2ed93a86497)
_cogl_egl_query_wayland_buffer was using _COGL_RETURN_IF_FAIL but the
function needs to return a CoglBool so it was giving a warning.
(cherry picked from commit d0290eb19fc9bf56fb24f8eab573e19966ea7e1a)
This updates Cogland and the three hello examples to use the dirty
callback. For Cogland, this removes the manual handling of X events
and for the other examples it removes the need to redraw continuously.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 108e3c548b7866190845f163c84a05212718bd70)
This adds a callback that can be registered with
cogl_onscreen_add_dirty_callback which will get called whenever the
window system determines that the contents of the window is dirty and
needs to be redrawn. Under the two X-based winsys's, this is reported
off the back of the Expose events, under SDL it is reported from
SDL_VIDEOEXPOSE or SDL_WINDOWEVENT_EXPOSED and under Windows from the
WM_PAINT messages. The Wayland winsys doesn't really have the concept
of dirtying the buffer but in order to allow applications to work the
same way on all platforms it will emit the event when the surface is
first shown and whenever it is resized.
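A sketch of registering such a callback (redraw() is a hypothetical
application function; the exact callback and closure signatures are
assumed here):

  #include <cogl/cogl.h>

  static void redraw (void *app); /* hypothetical application redraw */

  static void
  dirty_cb (CoglOnscreen *onscreen,
            const CoglOnscreenDirtyInfo *info,
            void *user_data)
  {
    /* info describes the dirtied region; a simple application can
     * just redraw the whole frame. */
    redraw (user_data);
  }

  static void
  watch_for_dirty (CoglOnscreen *onscreen, void *app)
  {
    cogl_onscreen_add_dirty_callback (onscreen, dirty_cb, app,
                                      NULL /* destroy notify */);
  }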
There is a private feature flag to specify whether dirty events are
supported. If the winsys does not set this then Cogl will simulate
dirty events by emitting one when the window is first allocated and
when it is resized. The only winsys's that don't set this flag are
things like KMS or the EGL null winsys where there is no windowing
system and showing and hiding the onscreen doesn't really make any
sense. In that case Cogl can assume the buffer will only become dirty
once when it is first allocated.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 85c5a9ba419b2247bd768284c79ee69164a0c098)
Conflicts:
cogl/cogl-private.h
After discussing with Kristian Høgsberg it seems that the semantics of
wl_egl_window_resize are meant to be that if nothing has been drawn to
the framebuffer since the last swap then the resize will take effect
immediately. Cogl was previously always delaying the call to
wl_egl_window_resize until the next swap. That meant that if you
wanted to resize the surface you would have to call
cogl_wayland_onscreen_resize and then redundantly draw a frame at the
old size so that you can swap to get the resize to occur before
drawing again at the right size. Typically an application would decide
to resize at the start of its paint sequence so it should be able to
just resize immediately.
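A sketch of that paint sequence (paint_frame() is a hypothetical
application function, and the exact cogl_wayland_onscreen_resize
argument list is assumed here):

  #include <cogl/cogl.h>

  static void paint_frame (CoglOnscreen *onscreen, int width, int height);

  static void
  paint_at_size (CoglOnscreen *onscreen, int width, int height)
  {
    /* If nothing has been drawn since the last swap this now takes
     * effect immediately instead of waiting for the next swap. */
    cogl_wayland_onscreen_resize (onscreen, width, height, 0, 0);

    paint_frame (onscreen, width, height);

    cogl_onscreen_swap_buffers (onscreen);
  }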
In current Mesa master it seems that there is a bug which means that
it won't actually delay a resize that is done mid-scene and instead it
will just discard what came before. To get consistent behaviour in
Cogl, the code to delay the call to wl_egl_window_resize is still used
if it determines that the buffer is dirty. There is an existing
_cogl_framebuffer_mark_mid_scene call which was being used to track
when the framebuffer becomes dirty since the last clear. This function
is now also used to track a new flag to track whether something has
been drawn since the last swap. It is called ‘mid_scene’ under the
assumption that this may also be useful for other things later.
cogl_framebuffer_clear has been slightly altered to always call
_cogl_framebuffer_mark_mid_scene even if it determines that it doesn't
need to clear because the framebuffer should still be considered to be
in the middle of a scene. Adding a quad to the journal now also begins
the scene.
This also fixes a potential bug where it looks like pending_dx/dy were
never cleared so they would always be accumulated even after the
resize is flushed.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 945689a62903990a20abb87a85d2c96eb3985fe7)
In some later patches we want to be able to use the term ‘dirty’ as a
public facing concept which represents expose events from the window
system. In that case the internal concept of dirtying the framebuffer
is confusing, so this patch changes the name to instead mean that
we're doing something which causes the framebuffer to be in the middle
of a frame.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 88eed85b52c29f66659ea112038f3522c9bd864e)
If we delay setting the surface to toplevel until it is shown then
that gives the application an opportunity to avoid calling show so
that it can set its own surface type.
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit ab59c3a421968d7f159d89ca2f0ba8a9f098cbf6)
Previously the WGL winsys was expecting the application to send all
Windows messages to Cogl via the cogl_win32_renderer_handle_event
function. When using a GLib main loop we can make this work
transparently to the application with a GSource for the magic
G_WIN32_MSG_HANDLE file descriptor. That causes the GMainLoop to wake
up whenever a message is available.
This patch makes the WGL winsys add that magic value as a source fd.
This will only have any meaning if the application is using glib, but
it shouldn't matter because the cogl_poll_renderer_get_info function
is documented to only work on Unix-based winsys's anyway.
This patch is an API break because by default Cogl will now start
stealing all of the Windows messages. Something like Clutter that wants to handle
its own event retrieval would now need to call
cogl_win32_renderer_set_event_retrieval_enabled to stop Cogl from
stealing the events.
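A sketch of how such a toolkit would opt out (the exact signature is
assumed, and the usual pattern of configuring the renderer before
connecting it is followed; the call is only available in WGL builds):

  #include <cogl/cogl.h>

  static CoglRenderer *
  create_renderer_without_event_retrieval (void)
  {
    CoglRenderer *renderer = cogl_renderer_new ();

    /* Stop Cogl from stealing Windows messages so the toolkit can
     * keep doing its own event retrieval. */
    cogl_win32_renderer_set_event_retrieval_enabled (renderer, FALSE);

    return renderer;
  }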
Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 99a7f84d7149f24f3e86c5d3562f9f2632ff6df8)
The implementation of cogl_wayland_texture_2d_new_from_buffer now uses
eglQueryWaylandBuffer to query the format of the buffer before trying to
create a texture from the buffer. This makes sure we don't try to
create a texture from, for instance, YUV buffers that may actually
require multiple textures. We now also report an error when we don't understand
the buffer type or format.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit 79252d4e419e2462c5bc89ea4614b40bddc932c5)
This removes the various checks for != COGL_DRIVER_GLES1 when tracking
blend state that were trying to avoid checking the equation or alpha
component factors when they are known to be fixed on GLES1. Now we
just rely on the OpenGL driver to do the right thing for the different
drivers and ignore the differences in the general pipeline state
tracking.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit f67c7eaf23e1e2088e9956cb2b66dfdc9abc8b3b)
Instead of simply relying on the emscripten mainloop api to wake us up
periodically so that we can poll for SDL events we now pause the
emscripten mainloop whenever no redraw is queued and instead hook an
input event listener into the real browser mainloop to resume the
emscripten mainloop whenever input is received. This way the example
can go to sleep while there's no input to handle.
This provides a simple example of binding custom javascript into native
code.
Reviewed-by: Neil Roberts <neil@linux.intel.com>
(cherry picked from commit a8b1e2eda491dc7c44c84cd47e160c7b8ba0f792)