It's a deprecated API that can surprise us. Namely, when the internal
format passed is COGL_PIXEL_FORMAT_ANY, it will *always* allocate an
RGBA8888 pixel format texture, even if we only passed it an RGB format
or even an A8 format.
cogl_texture_2d_new_with_data is the newer, better API and doesn't have
these warts.
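For illustration, a minimal sketch of the wart, assuming the deprecated 1.x
cogl_texture_new_with_data() signature (the helper name and parameters here
are just for the example):

#include <cogl/cogl.h>

/* Even though the data we pass in is A8, passing COGL_PIXEL_FORMAT_ANY as
 * the internal format makes the deprecated API allocate an RGBA8888
 * texture anyway. */
static CoglTexture *
make_mask_texture (int width, int height, int stride, const guint8 *data)
{
  return cogl_texture_new_with_data (width, height,
                                     COGL_TEXTURE_NONE,
                                     COGL_PIXEL_FORMAT_A_8, /* our data */
                                     COGL_PIXEL_FORMAT_ANY, /* internal */
                                     stride, data);
}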
Connecting to size-changed is wrong -- size-changed tells us when
we *told* the X server to resize the window. For X11, we're sort of
guaranteed that the surface will be updated at some point before the
next frame, but for Xwayland, we can't be sure that the new surface is
attached at this point.
This fixes weird artifacts when resizing apps like xclock.
This was wrong for subsurfaces that extend beyond the parent's shape,
since the resulting paint volume would not cover them. Instead of using the
shape region which can be out of date and wrong, just use the union of
our children's volumes, which is a lot easier to manage.
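Roughly, the get_paint_volume implementation becomes a sketch like this
(the function name is hypothetical; child volumes are taken relative to the
actor itself so they can be unioned directly):

#include <clutter/clutter.h>

static gboolean
surface_actor_get_paint_volume (ClutterActor       *actor,
                                ClutterPaintVolume *volume)
{
  ClutterActorIter iter;
  ClutterActor *child;

  /* Union the volumes of all children (including subsurfaces that extend
   * beyond the parent's shape) instead of relying on the shape region. */
  clutter_actor_iter_init (&iter, actor);
  while (clutter_actor_iter_next (&iter, &child))
    {
      const ClutterPaintVolume *child_volume;

      child_volume = clutter_actor_get_transformed_paint_volume (child, actor);
      if (child_volume == NULL)
        return FALSE;

      clutter_paint_volume_union (volume, child_volume);
    }

  return TRUE;
}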
Use connect_after() to accommodate code in GNOME Shell that,
when benchmarking drawing performance, connects to ::after-paint
and calls glFinish(). The timing information from that will be
more accurate if we hold off until that completes before we signal
apps to begin drawing the next frame. If there are no other
connections to ::after-paint, connect() vs. connect_after() doesn't
matter.
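For reference, the benchmarking pattern on the shell side looks roughly like
this (the callback and timing helper names are hypothetical); with a plain
connect() it runs before anything we connect with connect_after():

#include <clutter/clutter.h>
#include <GL/gl.h>

extern void record_frame_time (void); /* hypothetical timing helper */

static void
benchmark_after_paint (ClutterStage *stage,
                       gpointer      user_data)
{
  /* Block until the GPU has finished before taking a timestamp. */
  glFinish ();
  record_frame_time ();
}

static void
setup_benchmark (ClutterStage *stage)
{
  g_signal_connect (stage, "after-paint",
                    G_CALLBACK (benchmark_after_paint), NULL);
}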
https://bugzilla.gnome.org/show_bug.cgi?id=732350
The output_id is more of an opaque identifier for the monitor, based on
its underlying ID from the windowing system. Since we also use the term
"output_id" for the output's index, rename our use of the opaque cookie
"output_id" to "winsys_id".
This signal is emitted the first time a frame of the window's contents
has been completed by the application and drawn on the screen. This
is meant to be used for performance measurement of
application startup.
https://bugzilla.gnome.org/show_bug.cgi?id=732343
With get_input_region existing, get_input_rect is a misnomer. Really,
it's about the geometry of the output surface, and it's only used that
way in the compositor code.
Way back when in GNOME 3.2, get_input_rect was added when we added
invisible borders. get_outer_rect was always synonymous with the server-side
geometry of the toplevel. get_outer_rect was used both for user-side
policy (the "frame rect") and for getting the geometry of the window.
Invisible borders were meant to extend the input region of the frame
window silently. Since most users of get_outer_rect cared about the
frame rect, we kept that the same and added a new method, get_input_rect,
to get the full rect of the framed window including all the invisible
borders used for input.
As time went on and CSD and Wayland became a reality, the relationship
between the server-side geometry and the "frame rect" became more
complicated, as can be evidenced by the recent commits. Since clients
don't tend to be framed anymore, they set their own input region.
get_buffer_rect is also sort of a poor name, since X11 doesn't really
have buffers, but we don't really have many other alternatives.
This doesn't change any of the code, nor the meaning. It will always
refer to the rectangle where the toplevel should be placed.
The smallest possible spread corresponds to an unblurred shadow, which
neither grows nor shrinks - thus the spread should be zero, not negative
as returned by our current calculation.
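A minimal sketch of the idea; get_box_filter_size() stands in for whatever
converts the blur radius into the box-filter size used by the blur, and the
names and formula here are assumptions for illustration:

extern int get_box_filter_size (int radius); /* assumed helper */

static int
get_shadow_spread (int blur_radius)
{
  /* An unblurred shadow neither grows nor shrinks: clamp the spread to
   * zero instead of letting it go negative. */
  if (blur_radius == 0)
    return 0;

  return 3 * (get_box_filter_size (blur_radius) / 2);
}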
https://bugzilla.gnome.org/show_bug.cgi?id=731353
Avoid populating *_VERSION constants through cflags in the pkg-config
file, which could be overridden by the project using it. Properly prefix the
defines with META_ to make gi-scanner happy.
When opening the window menu without an associated control - e.g.
by right-clicking the titlebar or by keyboard - using coordinates
for the menu position is appropriate. However when the menu is
associated with a window button, the expected behavior in the
shell can be implemented much more easily with the full button geometry:
the menu will point to the center of the button's bottom edge
rather than align to the left/right side of the titlebar as it
does now, and the clickable area where a release event does not
dismiss the menu will match the actual clickable area in mutter.
So add an additional show_window_menu_for_rect() function and
use it when opening the menu from a button.
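The call site for a window button then looks roughly like this (a sketch
assuming mutter's meta headers; the exact prototype and the wrapper function
are assumptions based on the description above):

static void
show_menu_from_button (MetaCompositor *compositor,
                       MetaWindow     *window,
                       MetaRectangle   button_rect)
{
  /* Pass the full button geometry (in root coordinates) instead of a
   * single point when the menu is opened from a decoration button. */
  meta_compositor_show_window_menu_for_rect (compositor, window,
                                             META_WINDOW_MENU_WM,
                                             &button_rect);
}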
https://bugzilla.gnome.org/show_bug.cgi?id=731058
The last commit added support for the "appmenu" button in decorations,
but didn't actually implement it. Add a new MetaWindowMenuType parameter
to the show_window_menu () functions and use it to ask the compositor
to display the app menu when the new button is activated.
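The new parameter is an enum along these lines (the exact value names are an
assumption based on the description above):

typedef enum
{
  META_WINDOW_MENU_WM,  /* the traditional window menu */
  META_WINDOW_MENU_APP  /* the app menu, for the new "appmenu" button */
} MetaWindowMenuType;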
https://bugzilla.gnome.org/show_bug.cgi?id=730752
It looks weird to have Alt+Space pop up under the cursor instead
of the top-left corner of the window, and the Wayland request will
pass through the coordinates as well.
Add it to the compositor interface, and extend the
_GTK_SHOW_WINDOW_MENU ClientMessage to support it as well.
Scale surfaces based on output scale and the buffer scale set by them.
We pick the scale factor of the monitor they are mostly on.
For now we only handle native clients, i.e. not Xwayland / legacy clients.
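As a rough sketch (the helper name is hypothetical), the surface actor ends
up scaled by the ratio of the two scales:

#include <clutter/clutter.h>

/* A buffer drawn at buffer_scale shown on an output with output_scale
 * needs the surface actor scaled by their ratio. */
static void
sync_surface_actor_scale (ClutterActor *actor,
                          int           output_scale,
                          int           buffer_scale)
{
  double scale = (double) output_scale / buffer_scale;

  clutter_actor_set_scale (actor, scale, scale);
}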
https://bugzilla.gnome.org/show_bug.cgi?id=728902
Talking it over with Owen, we weren't sure why this was here.
At one point, we were creating a foreign stage window, so potentially
Clutter didn't select for its own events, but now we're using a standard
stage window, so this seems weird.
Why we did it on the COW, nobody knows. Maybe copy/paste bugginess?
Each level in the tower is initialized by binding the texture for that
level to an offscreen framebuffer and rendering the previous level as a
textured rectangle. The problem was that we were blending the previous
level with undefined data, so ARGB32 windows with transparency would
show artefacts. This makes sure to disable blending when drawing
the textured rectangle.
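Concretely, this amounts to setting a replace blend string on the pipeline
used to draw each level, along these lines (a sketch; the function name and
surrounding setup are assumptions):

#include <cogl/cogl.h>

static CoglPipeline *
create_tower_pipeline (CoglContext *ctx,
                       CoglTexture *prev_level_texture)
{
  CoglPipeline *pipeline = cogl_pipeline_new (ctx);

  /* Replace the destination instead of blending with the undefined
   * contents of the freshly bound level. */
  cogl_pipeline_set_blend (pipeline, "RGBA = ADD (SRC_COLOR, 0)", NULL);
  cogl_pipeline_set_layer_texture (pipeline, 0, prev_level_texture);

  return pipeline;
}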
The code here before was completely wrong. Not only did it mix up
coordinate spaces of "client rect" vs. "frame rect", but it used
meta_frame_get_frame_bounds, which is specifically for the *visible*
bounds of a window!
In the case that we don't have a bounding or input shape region at
all on the client window, the input shape that we should apply is
the surface's natural shape. So, set the region to NULL to get the
natural rect picking semantics.
Compositors haven't been able to manage more than one screen for
quite a while. Merge MetaCompScreen into MetaCompositor, and update
the API to match.
We still keep MetaScreen in the public compositor API for compatibility
purposes.
We previously separated out MetaDisplay and MetaScreen. mutter
would only manage one screen, but we still kept a list of screens
for simplicity.
With Wayland support, we no longer care about the ability to
manage more than one screen at a time. Remove this by killing
the list of screens, in favor of having just one MetaScreen
in MetaDisplay.
We also kill off active_screen at the same time, since it's
not necessary anymore.
A future cleanup should merge MetaDisplay and MetaScreen. To avoid
breaking API, we should probably keep MetaScreen around as a dummy
type.
cogl_texture_get_components can be used on both X11 and Wayland
backends. Technically, the detection is different: the old code checks
the actual RENDER format, while Cogl simply
assumes that any pixmap with a depth >= 32 is ARGB32. Since Cogl
already seems to be working with its internal checks, it makes
more sense to use Cogl's check rather than keeping our own.
is_argb32 can be called at any time, including times when we don't
have a texture. In that case, just assume we're ARGB32. The value
really shouldn't be important though.
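The check then reduces to something like this (a sketch; the function name
and the texture-less fallback follow the reasoning above):

#include <cogl/cogl.h>

static gboolean
surface_is_argb32 (CoglTexture *texture)
{
  /* No texture yet: just assume ARGB32, the value shouldn't matter. */
  if (texture == NULL)
    return TRUE;

  return cogl_texture_get_components (texture) == COGL_TEXTURE_COMPONENTS_RGBA;
}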
Previously, a sequence like this would crash a client:
=> surface.attach(buffer)
=> buffer.destroy()
The correct behavior is to wait until we release the buffer before
destroying it.
=> surface.attach(buffer)
=> surface.attach(buffer2)
<= buffer.release()
=> buffer.destroy()
The protocol upstream says that "the surface contents are undefined"
in a case like this. Personally, I think that this is broken behavior
and no client should ever do it, so I explicitly killed any client
that tried to do this.
But unfortunately, as we're all well aware, XWayland does this.
Rather than wait for XWayland to be fixed, let's just allow this.
Technically, since we always copy SHM buffers into GL textures, we
could release the buffer as soon as the Cogl texture is made.
Since we do this copy, the semantics we apply are that the texture is
"frozen" in time until another newer buffer is attached. For simple
clients that simply abort on exit and don't wait for the buffer event
anyhow, this has the added bonus that we'll get nice destroy animations.
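For SHM buffers, the "release as soon as we've copied" possibility looks
roughly like this (a sketch; texture_from_shm_data() is a hypothetical
helper standing in for the actual texture upload):

#include <wayland-server.h>

extern void texture_from_shm_data (const void *data, int width, int height,
                                   int stride, uint32_t format); /* hypothetical */

static void
process_shm_buffer (struct wl_resource *buffer_resource)
{
  struct wl_shm_buffer *shm_buffer = wl_shm_buffer_get (buffer_resource);

  if (shm_buffer == NULL)
    return;

  wl_shm_buffer_begin_access (shm_buffer);
  texture_from_shm_data (wl_shm_buffer_get_data (shm_buffer),
                         wl_shm_buffer_get_width (shm_buffer),
                         wl_shm_buffer_get_height (shm_buffer),
                         wl_shm_buffer_get_stride (shm_buffer),
                         wl_shm_buffer_get_format (shm_buffer));
  wl_shm_buffer_end_access (shm_buffer);

  /* The contents now live in the texture, so the client is free to reuse
   * or destroy the buffer. */
  wl_buffer_send_release (buffer_resource);
}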
If we have a CLICKING grab op, we still need to send events to xwayland
so that we get them back for gtk+ to process; thus we can't steer the
wayland input focus away from it.
https://bugzilla.gnome.org/show_bug.cgi?id=726123
This ensures that we send the proper leave and enter events to wayland
clients.
In particular, this solves a bug in SSD xwayland windows where clicking
and dragging on the title bar to move the window only works every other
time (unless the pointer moves away from the title bar between
tries). This happens because xwayland gets a button press but doesn't
see the release, so when it gets the next button press it discards it,
because its pointer button tracking logic says that the button is
already pressed. Sending the proper wayland pointer leave event fixes
this, since wayland clients must forget about button state at that point.
https://bugzilla.gnome.org/show_bug.cgi?id=726123
Creating a new cogl texture may fail, in which case the subsequent
attempt to free it will crash. While something is clearly wrong (insanely
large window, oom, ...), crashing the WM is harsh and we should
try to avoid it if at all possible, so carry on.
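A hedged sketch of the pattern, using the pixmap-backed window texture as an
example (assuming the relevant cogl X11 headers; the wrapper function is an
assumption):

static CoglTexture *
create_window_texture (CoglContext *ctx,
                       Pixmap       pixmap)
{
  CoglTexture *texture;

  texture = COGL_TEXTURE (cogl_texture_pixmap_x11_new (ctx, pixmap, FALSE, NULL));

  /* Texture creation can fail (huge window, oom, ...). Don't crash the WM;
   * just carry on without a texture. */
  if (texture == NULL)
    g_warning ("Failed to create texture for pixmap 0x%lx", pixmap);

  return texture;
}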
https://bugzilla.gnome.org/show_bug.cgi?id=722266