Use connect_after() to accommodate code in GNOME Shell that,
when benchmarking drawing performance, connects to ::after-paint
and calls glFinish(). The timing information from that will be
more accurate if we hold off until that completes before we signal
apps to begin drawing the next frame. If there are no other
connections to ::after-paint, connect() vs. connect_after() doesn't
matter.
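A rough sketch of the idea (handler and function names here are made up,
not mutter's actual code):

  #include <clutter/clutter.h>

  /* Connected with g_signal_connect_after() so that any plain
   * g_signal_connect() handlers on ::after-paint (e.g. a benchmark that
   * calls glFinish()) run first; only then do we signal apps to begin
   * drawing the next frame. */
  static void
  on_after_paint (ClutterStage *stage,
                  gpointer      user_data)
  {
    /* notify clients that the frame is done, e.g. _NET_WM_FRAME_DRAWN */
  }

  static void
  attach_after_paint_handler (ClutterStage *stage)
  {
    g_signal_connect_after (stage, "after-paint",
                            G_CALLBACK (on_after_paint), NULL);
  }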
https://bugzilla.gnome.org/show_bug.cgi?id=732350
The output_id is more of an opaque identifier for the monitor, based on
its underlying ID from the windowing system. Since we also use the term
"output_id" for the output's index, rename our use of the opaque cookie
"output_id" to "winsys_id".
This signal is emitted the first time a frame of the window's
contents is completed by the application and has been drawn on the
screen. This is meant to be used for performance measurement of
application startup.
https://bugzilla.gnome.org/show_bug.cgi?id=732343
With get_input_region existing, get_input_rect is a misnomer. Really,
it's about the geometry of the output surface, and it's only used that
way in the compositor code.
Way back when in GNOME 3.2, get_input_rect was added when we added
invisible borders. get_outer_rect was always synonymous with server-side
geometry of the toplevel. get_outer_rect was used for both user-side
policy (the "frame rect") and to get the geometry of the window.
Invisible borders were meant to extend the input region of the frame
window silently. Since most users of get_outer_rect cared about the
frame rect, we kept that the same and added a new method, get_input_rect
to get the full rect of the framed window with all invisible borders for
input kept on.
As time went on and CSD and Wayland became a reality, the relationship
between the server-side geometry and the "frame rect" became more
complicated, as can be evidenced by the recent commits. Since clients
don't tend to be framed anymore, they set their own input region.
get_buffer_rect is also sort of a poor name, since X11 doesn't really
have buffers, but we don't really have many other alternatives.
This doesn't change any of the code, nor the meaning. It will always
refer to the rectangle where the toplevel should be placed.
The smallest possible spread corresponds to an unblurred shadow, which
neither grows nor shrinks - thus the spread should be zero not negative
as returned by our current calculation.
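A minimal sketch of the intended relationship (the function name and the
blurred-case formula are placeholders, not the actual code):

  /* An unblurred shadow (radius == 0) neither grows nor shrinks, so its
   * spread must be exactly 0, never negative. */
  static int
  get_shadow_spread (int radius)
  {
    if (radius == 0)
      return 0;

    /* blurred shadows still grow with the blur radius (placeholder) */
    return radius;
  }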
https://bugzilla.gnome.org/show_bug.cgi?id=731353
Avoid populating *_VERSION constants through cflags in the pkg-config file
which could be overridden by the project using it. Properly prefix the
defines with META_ to make gi-scanner happy.
When opening the window menu without an associated control - e.g.
by right-clicking the titlebar or by keyboard - using coordinates
for the menu position is appropriate. However when the menu is
associated with a window button, the expected behavior in the
shell can be implemented much more easily with the full button geometry:
the menu will point to the center of the button's bottom edge
rather than align to the left/right side of the titlebar as it
does now, and the clickable area where a release event does not
dismiss the menu will match the actual clickable area in mutter.
So add an additional show_window_menu_for_rect() function and
use it when opening the menu from a button.
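A plausible shape for the new entry point (the exact signature is an
assumption for illustration, not necessarily what landed):

  /* Like show_window_menu(), but anchored to a rectangle (e.g. a titlebar
   * button) instead of a point, so the shell can center the menu on the
   * button's bottom edge and keep the "don't dismiss on release" area in
   * sync with the real button geometry. */
  void meta_compositor_show_window_menu_for_rect (MetaCompositor     *compositor,
                                                  MetaWindow         *window,
                                                  MetaWindowMenuType  menu,
                                                  MetaRectangle      *rect);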
https://bugzilla.gnome.org/show_bug.cgi?id=731058
The last commit added support for the "appmenu" button in decorations,
but didn't actually implement it. Add a new MetaWindowMenuType parameter
to the show_window_menu () functions and use it to ask the compositor
to display the app menu when the new button is activated.
https://bugzilla.gnome.org/show_bug.cgi?id=730752
It looks weird to have Alt+Space pop up under the cursor instead
of the top-left corner of the window, and the Wayland request will
pass through the coordinates as well.
Add it to the compositor interface, and extend the
_GTK_SHOW_WINDOW_MENU ClientMessage to support it as well.
Scale surfaces based on output scale and the buffer scale set by them.
We pick the scale factor of the monitor they are mostly on.
For now we only handle native clients, i.e. not XWayland / legacy clients.
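Conceptually, the scale applied to the surface actor is just the ratio of
the two (a sketch, not the actual function):

  /* A buffer drawn at buffer_scale shown on a monitor with output_scale:
   * a scale-1 buffer on a scale-2 monitor is stretched by 2, a scale-2
   * buffer on a scale-2 monitor is shown 1:1. */
  static double
  get_surface_actor_scale (int output_scale,
                           int buffer_scale)
  {
    return (double) output_scale / buffer_scale;
  }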
https://bugzilla.gnome.org/show_bug.cgi?id=728902
Talking it over with Owen, we weren't sure why this was here.
At one point, we were creating a foreign stage window, so potentially
Clutter didn't select for its own events, but now we're using a standard
stage window, so this seems weird.
Why we did it on the COW, nobody knows. Maybe copy/paste bugginess?
Each level in the tower is initialized by binding the texture for that
level to an offscreen framebuffer and rendering the previous level as a
textured rectangle. The problem was that we were blending the previous
level with undefined data so argb32 windows with transparencies would
result in artefacts. This makes sure to disable blending when drawing
the textured rectangle.
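For illustration, this is how blending is turned off on a Cogl pipeline
(a sketch, not the exact tower code):

  #include <cogl/cogl.h>

  static CoglPipeline *
  create_tower_pipeline (CoglContext *ctx,
                         CoglTexture *previous_level)
  {
    CoglPipeline *pipeline = cogl_pipeline_new (ctx);

    cogl_pipeline_set_layer_texture (pipeline, 0, previous_level);

    /* "RGBA = ADD (SRC_COLOR, 0)" replaces the destination outright,
     * i.e. blending is disabled, so the undefined contents of the
     * freshly allocated level never bleed into the result. */
    cogl_pipeline_set_blend (pipeline, "RGBA = ADD (SRC_COLOR, 0)", NULL);

    return pipeline;
  }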
The code here before was completely wrong. Not only did it mix up
coordinate spaces of "client rect" vs. "frame rect", but it used
meta_frame_get_frame_bounds, which is specifically for the *visible*
bounds of a window!
In the case that we don't have a bounding or input shape region at
all on the client window, the input shape that we should apply is
the surface's natural shape. So, set the region to NULL to get the
natural rect picking semantics.
Compositors haven't been able to manage more than one screen for
quite a while. Merge MetaCompScreen into MetaCompositor, and update
the API to match.
We still keep MetaScreen in the public compositor API for compatibility
purposes.
We previously separated out MetaDisplay and MetaScreen. mutter
would only manage one screen, but we still kept a list of screens
for simplicity.
With Wayland support, we no longer care about the ability to
manage more than one screen at a time. Remove this by killing
the list of screens, in favor of having just one MetaScreen
in MetaDisplay.
We also kill off active_screen at the same time, since it's
not necessary anymore.
A future cleanup should merge MetaDisplay and MetaScreen. To avoid
breaking API, we should probably keep MetaScreen around as a dummy
type.
cogl_texture_get_components can be used on both X11 and Wayland
backends. Technically, the detection is different: we actually
check the RENDER format in the old code, while Cogl simply
assumes that any pixmap with a depth >= 32 is ARGB32. Since Cogl
already seems to be working with its internal checks, it makes
more sense to use Cogl's check rather than keeping our own.
is_argb32 can be called at any time, including times when we don't
have a texture. In that case, just assume we're ARGB32. The value
really shouldn't be important though.
Previously, a sequence like this would crash a client:
=> surface.attach(buffer)
=> buffer.destroy()
The correct behavior is to wait until we release the buffer before
destroying it.
=> surface.attach(buffer)
=> surface.attach(buffer2)
<= buffer.release()
=> buffer.destroy()
The protocol upstream says that "the surface contents are undefined"
in a case like this. Personally, I think that this is broken behavior
and no client should ever do it, so I explicitly killed any client
that tried to do this.
But unfortunately, as we're all well aware, XWayland does this.
Rather than wait for XWayland to be fixed, let's just allow this.
Technically, since we always copy SHM buffers into GL textures, we
could release the buffer as soon as the Cogl texture is made.
Since we do this copy, the semantics we apply are that the texture is
"frozen" in time until another newer buffer is attached. For simple
clients that simply abort on exit and don't wait for the buffer event
anyhow, this has the added bonus that we'll get nice destroy animations.
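A sketch of the release path under these semantics (names are illustrative):

  #include <wayland-server.h>

  /* Because SHM contents are copied into a GL texture at commit time,
   * the buffer can be released as soon as the copy is done; a client
   * (such as XWayland) that destroys the buffer afterwards no longer
   * causes trouble. */
  static void
  release_buffer_after_copy (struct wl_resource *buffer_resource)
  {
    /* ... copy the wl_shm buffer contents into a CoglTexture here ... */

    if (buffer_resource != NULL)
      wl_buffer_send_release (buffer_resource);
  }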
If we have a CLICKING grab op, we still need to send events to xwayland
so that we get them back for gtk+ to process; thus we can't steer
wayland input focus away from it.
https://bugzilla.gnome.org/show_bug.cgi?id=726123
This ensures that we send the proper leave and enter events to wayland
clients.
Particularly, this solves a bug in SSD xwayland windows where clicking
and dragging on the title bar to move the window only works on the odd
turn (unless the pointer moves away from the title bar between
tries). This happens because xwayland gets a button press but doesn't
see the release so when it gets the next button press it discards it
because its pointer button tracking logic says that the button is
already pressed. Sending the proper wayland pointer leave event fixes
it since wayland clients must forget about button state at that point.
https://bugzilla.gnome.org/show_bug.cgi?id=726123
Creating a new cogl texture may fail, in which case the intent to
free it will crash. While something is clearly wrong (insanely
large window, oom, ...), crashing the WM is harsh and we should
try to avoid it if at all possible, so carry on.
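Something along these lines (a sketch; the surrounding function is made up):

  #include <glib.h>
  #include <cogl/cogl.h>

  static gboolean
  set_window_texture (CoglTexture *texture)
  {
    /* Texture allocation can fail (insanely large window, OOM, ...);
     * warn and carry on instead of crashing the WM. */
    if (texture == NULL)
      {
        g_warning ("Failed to allocate texture for window; skipping update");
        return FALSE;
      }

    /* ... hand the texture to the shaped texture as before ... */
    return TRUE;
  }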
https://bugzilla.gnome.org/show_bug.cgi?id=722266
All WM events (passive button grabs and passive keyboard grabs)
are handled through clutter now, so we must make sure we spoof
them even if they happen on frames (because that's where we
grab on)
Weirdly, clutter stopped segfaulting when we call clutter_x11 methods
and the backend is not right, but this is correct anyway, and
probably fixes some BadDrawable errors in mutter-wayland on x11,
caused by mixing windows of the outer X and windows of Xwayland.
Mouse event handling was duplicated, resulting in weird interactions
if clutter was allowed to see certain events (for example under
wayland, where it gets all events). Because now clutter sees all
X events, even when running as an x11 compositor, we can handle
everything using the clutter variants.
At the same time, rewrite the passive button grab code a little,
to make it clear what is being matched on what and why.
meta_ui_window_is_widget() returns FALSE for frame windows, so we
must filter those explicitly (by letting the event go to gtk
and from there to MetaFrames). Also, for proper gtk widgets
(window menus) we want to let gtk see all events, including
keyboard, otherwise we break keynav in the window menu.
This means that having a window menu open disables keybindings
(because the event doesn't run through clutter)
We must spoof events to clutter even if they are associated
with a MetaWindow, because keyboard events are always associated
with one (the focus window), and we must process keybindings
for windows together with the global ones if they include Super,
because we're not going to see them again.
... and individually. It turns out that updating the opaque region
was causing the shape region to be updated, which was causing a new
shape mask to be generated and uploaded to the GPU. Considering
GTK+ regenerates the opaque region on pretty much any focus change,
this is not good.
At some point meta_window_actor_cull_out stopped calling
meta_cullable_cull_out_children which caused the unobscured region
to never be set for the stex.
https://bugzilla.gnome.org/show_bug.cgi?id=725216
For decorated windows, we don't want to apply any input
shape, because the frame is always rectangular and eats
all the input.
The real check is in meta-window-actor, where we consider
if we need to apply the bounding shape and the input shape
(or the intersection of the two) to the surface-actor,
but as an optimization we avoid querying the server in
meta-window.
Additionally, for undecorated windows, the "has input shape"
check is wrong if the window has a bounding shape but not an
input shape.
We need a MetaWaylandSurface to build a MetaSurfaceActor, but
we don't have one until we get the set_window_xid() call from
XWayland. On the other hand, plugins expect to see the window
actor right from when the window is created, so we need this
empty state.
Based on a patch by Jasper St. Pierre.
Turns out we only ever need to freeze/thaw whole windows, not
surfaces or subsurfaces.
This will allow removing the surface actor without losing
the count.
This time, to make way for MetaSurfaceActorEmpty. This also fixes
destroy effects as a side effect. It still has issues if we try
to re-assign an actor that's already toplevel (e.g. somebody
re-popping up a menu that's already being destroyed), but this
will be fixed soon.
The idea here is that MetaWindowActor will do the unparenting of
the surface actor when it itself is destroyed. To prevent bad issues
with picking, we only make the surface actor reactive when it's
toplevel.
gnome-shell has some complex tracking to set the X input focus
correctly, assuming various things about how the stage is set up in X11.
For instance, it assumes that all actors that get key focus are
gnome-shell Chrome actors that will get events through the stage, so
when one of them is focused, it will try to set the focus back to the
stage.
In Wayland, windows are considered chrome actors that will get key
events through the stage, so this only has the result of unfocusing any
windows that have just received key focus.
We should probably move this input focus moving to mutter instead of
gnome-shell so we can better use mutter's internal state and heuristics.
We cannot intersect the complete volume with the unobscured bounds
because it does not include the shadows. So just intersect it with the
window's shape bounds and union it with the shadow bounds.
This also matches what the comment in the code says:
"We could compute an full clip region as we do for the window texture,
but the shadow is relatively cheap to draw, and a little more complex to clip,
so we just catch the case where the shadow is completely obscured
and doesn't need to be drawn at all."
The rendering logic before was somewhat complex. We had three independent
cases to take into account when doing rendering:
* X11 compositor. In this case, we're a traditional X11 compositor,
not a Wayland compositor. We use XCompositeNameWindowPixmap to get
the backing pixmap for the window, and deal with the COMPOSITE
extension messiness.
In this case, meta_is_wayland_compositor() is FALSE.
* Wayland clients. In this case, we're a Wayland compositor managing
Wayland surfaces. The rendering for this is fairly straightforward,
as Cogl handles most of the complexity with EGL and SHM buffers...
Wayland clients give us the input and opaque regions through
wl_surface.
In this case, meta_is_wayland_compositor() is TRUE and
priv->window->client_type == META_WINDOW_CLIENT_TYPE_WAYLAND.
* XWayland clients. In this case, we're a Wayland compositor, like
above, and XWayland hands us Wayland surfaces. XWayland handles
the COMPOSITE extension messiness for us, and hands us a buffer
like any other Wayland client. We have to fetch the input and
opaque regions from the X11 window ourselves.
In this case, meta_is_wayland_compositor() is TRUE and
priv->window->client_type == META_WINDOW_CLIENT_TYPE_X11.
We now split the rendering logic into two subclasses, which are:
* MetaSurfaceActorX11, which handles the X11 compositor case, in that
it uses XCompositeNameWindowPixmap to get the backing pixmap, and
deal with all the COMPOSITE extension messiness.
* MetaSurfaceActorWayland, which handles the Wayland compositor case
for both native Wayland clients and XWayland clients. XWayland handles
COMPOSITE for us, and handles pushing a surface over through the
xf86-video-wayland DDX.
Frame sync is still in MetaWindowActor, as it needs to work for both the
X11 compositor and XWayland client cases. When Wayland's video display
protocol lands, this will need to be significantly overhauled, as it would
have to work for any wl_surface, including subsurfaces, so we would need
surface-level discretion.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
It's mostly equivalent to the case where we've already detached
the pixmap, *except* for the x11_size_changed case. We can simply
detach the pixmap at the time the window changes size, though.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
We guarantee ourselves that a valid pixmap will appear any time
that the window is painted. The window actor will be scheduled
for a repaint if it's added / removed from the scene graph, like
during construction, if the size changes, or if we receive damage,
which are the existing use cases where this function is called.
So, I can't see any reason that we queue a redraw in here.
With the split into surface actors, we don't have an easy place
we can use to queue a redraw, and since it's unnecessary, we can
just drop it on the floor.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
We can never have a window actor that represents either the X root
window or the stage window, so it doesn't make sense to bail out
early in case we do.
I'd imagine that this came from a much earlier version of the code
where the compositor was much separate and had its own MapNotify
handling.
Since the unredirected window MetaWindowActor is stacked on top, it
will naturally get culled out of the process, so we can remove the
special casing here. Unfortunately, with the way that the code is
currently structured, it's too difficult to actually prevent setting
the clip / visible regions if the window is redirected, so just let
those be set for unredirected windows for now.
The input region was set on the shaped texture, but the shaped texture
was never picked properly, as it was never set to be reactive. Move the
pick implementation and reactivity to the MetaSurfaceActor, and update
the code everywhere else to expect a MetaSurfaceActor.
It doesn't work now that we set the pivot point. This breaks the
maximize effect, but it fixes the destroy effect. The maximize effect
looks bad anyway, so it's not too important to me.
In order for the compositor to properly determine whether a client
is an X11 client or not, we need to wait until XWayland calls
set_window_id to mark the surface as an XWayland client. To prevent
the compositor from getting tripped up over this, make sure that
the window has been fully initialized by the time we call
meta_compositor_add_window.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
Traditionally, WMs unmap windows when minimizing them, and map them
when restoring them or wanting to show them for other reasons, like
upon creation.
However, as metacity morphed into mutter, we optionally chose to keep
windows mapped for the lifetime of the window under the user option
"live-window-previews", which makes the code keep windows mapped so it
can show window preview for minimized windows in other places, like
Alt-Tab and Expose.
I removed this preference two years ago mechanically, by removing all
the if statements, but never went through and cleaned up the code so
that windows are simply mapped for the lifetime of the window -- the
"architecture" of the old code that maps and unmaps on show/hide was
still there.
Remove this now.
The one case we still need to be careful of is shaded windows, in which
we do still unmap the client window. In the future, we might want to
show previews of shaded windows in the overview and Alt-Tab. In that
case we'd also keep shaded windows mapped, and could remove all unmap logic,
but we'd need a more complex method of showing the shaded titlebar, such
as using a different actor.
At the same time, simplify the compositor interface by removing
meta_compositor_window_[un]mapped API, and instead adding/removing the
window on-demand.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
We want to remove a bunch of auxiliary duties from the MetaWindowActor
and MetaSurfaceActor, including some details of how culling is done.
Move the unobscured region culling code to the MetaShapedTexture, which
helps the actor become "more independent".
https://bugzilla.gnome.org/show_bug.cgi?id=720631
When we traversed down to reset the culling state, previously we
would just skip any actors that wanted culling. In order to properly
reset the unobscured_region before painting, we need to traverse down
to these places as well. Do this by calling cull_out with NULL regions
for both arguments, and adapt existing cull_out implementations to
match.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
It seems that this code is trying to transform from "surface coordinates"
(the size of texture that's being displayed) to "actor coordinates"
(the actor's allocation), but I can't find any place where the two are
different. As such, let's just go back to using "surface coordinates"
everywhere and see what breaks.
Ever since the change to create the output window synchronously at startup,
there hasn't been any time where somebody could set a stage region before
the output window was ready, so this was effectively dead code.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
I know it's confusing with the triple negative, but unredirected is how
we track it elsewhere: we have an 'unredirected' flag, and 'should_unredirect'.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
We currently ignore the unobscured region when we have mapped clones in
meta_window_actor_process_damage and meta_window_actor_damage_all but
use it unconditionally when computing the paint volume.
This is wrong. We should ignore it there as well or we will end up with
empty clones if the cloned window is completely obscured
(like the tray icons in gnome-shell).
https://bugzilla.gnome.org/show_bug.cgi?id=721596
We need to do this for XWayland windows, since we only get the event
telling us they're an XWayland window after the compositor knows about
the window.
I know it's confusing with the triple negative, but unredirected is how
we track it elsewhere: we have an 'unredirected' flag, and 'should_unredirect'.
https://bugzilla.gnome.org/show_bug.cgi?id=720631
Ever since the change to create the output window synchronously at startup,
there hasn't been any time where somebody could set a stage region before
the output window was ready, so this was effectively dead code.
We no longer unmap the toplevel windows during normal operation. The
toplevel state is tied to the window's lifetime.
Call meta_compositor_add_window / meta_compositor_remove_window instead...
Traditionally, WMs unmap windows when minimizing them, and map them
when restoring them or wanting to show them for other reasons, like
upon creation.
However, as metacity morphed into mutter, we optionally chose to keep
windows mapped for the lifetime of the window under the user option
"live-window-previews", which makes the code keep windows mapped so it
can show window preview for minimized windows in other places, like
Alt-Tab and Expose.
I removed this preference two years ago mechanically, by removing all
the if statements, but never went through and cleaned up the code so
that windows are simply mapped for the lifetime of the window -- the
"architecture" of the old code that maps and unmaps on show/hide was
still there.
Remove this now.
The one case we still need to be careful of is shaded windows, in which
we do still unmap the client window. Theoretically, we might want to
show previews of shaded windows in the overview and Alt-Tab, so we remove
the complex unmap tracking for this later.
For x defined below, x == -INT32_MAX assuming that the arithmetic
expression actually uses the fpu.
float f = 1.0f;
int32_t x = INT32_MAX * f;
This would result in the calculated clip width/height being -INT_MAX
if the damage width/height is INT_MAX. To solve this, use a double
variable instead.
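Expanded slightly for context (purely illustrative):

  #include <stdint.h>

  void
  example (void)
  {
    float  f = 1.0f;
    double d = 1.0;

    /* INT32_MAX (2147483647) is not exactly representable as a float, so
     * the product is computed on a rounded value and converting it back
     * to int32_t overflows (observed as -INT32_MAX, as described above). */
    int32_t x = INT32_MAX * f;

    /* A double represents every int32_t exactly, so the same expression
     * stays at INT32_MAX. */
    int32_t y = INT32_MAX * d;

    (void) x;
    (void) y;
  }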
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=705502
We need to set the number of components on the CoglTextureRectangle to
prevent wasting too much GPU memory. As we need to do this before we call
cogl_texture_set_region, just remove the meta_texture_rectangle_new wrapper,
and make callers call cogl_texture_rectangle_new_with_size directly.
The shadow is added in the paint step, not as a separate actor,
so the raise is a no-op. It also gets rid of an annoying misspelling
that's driving me crazy.
In the past, MetaWindowGroup was allocated the size of the screen and
painted the size of the screen because it contained the screen background,
but now we also have the "top window group" which contains only popup
windows, so the allocation doesn't properly reflect the paint bounds
of the window group. Compute the paint bounds accurately from the
children.
https://bugzilla.gnome.org/show_bug.cgi?id=719669
Since subsurfaces won't have toplevel MetaWindowActors, we need to
use MetaSurfaceActor instead. These are embedded in the MetaWindowActor,
just like MetaShapedTexture was (in fact, MetaSurfaceActor now contains
a MetaShapedTexture)
Make MetaWindowActor chain up to the generic default MetaCullable
implementation, and remove the helper methods for MetaSurfaceActor
and MetaShapedTexture.
Instead of hardcoded knowledge of certain classes in MetaWindowGroup,
create a generic interface that all actors can implement to get parts of
their regions culled out during redraw, without needing any special
knowledge of how to handle a specific actor.
The names now are a bit suspect. MetaBackgroundGroup is a simple
MetaCullable that knows how to cull children, and MetaWindowGroup is the
"toplevel" cullable that computes the initial two regions. A future
cleanup here could be to merge MetaWindowGroup / MetaBackgroundGroup so
that we only have a generic MetaSimpleCullable, and move the "toplevel"
cullability to be a MetaCullableToplevel.
https://bugzilla.gnome.org/show_bug.cgi?id=714706
For clarity, rename meta_window_get_outer_rect() to match terminology
we use elsewhere. The old function is left as a deprecated
compatibility wrapper.
When running as a Wayland compositor, simply rely on the clutter actor allocation
changed signal to sync geometry and emit window actor size changed
signals.
Attaching a wl_buffer to a MetaShapedTexture will signal allocation
changed on the corresponding MetaSurfaceActor, which the MetaWindowActor
is listening to.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=705502
Instead of having MetaWindowActor only have one single MetaShapedTexture
as actor drawing its content, introduce a new abstract MetaSurfaceActor
that takes care of drawing.
This is one step in the direction of decoupling MetaWaylandSurface from
MetaWindow and MetaWindowActor (except for shell/xdg surfaces) in order
to finally support subsurface-like features, or any feature where a
window is not drawn using a single texture.
The first step, implemented in this patch, is to not have
MetaWindowActor work directly with a shaped texture. There are still
some cases where it simply gets the texture and goes on as before, but
this should be changed by either removing the need of going via
MetaWindowActor or by adding some generic interface to MetaSurfaceActor
that doesn't limit its functionality to one shaped texture.
There should be no visible difference after this patch, but
meta_window_actor_get_texture() and meta_surface_actor_get_texture()
should be deprecated when equivalent functionality has been introduced.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=705502
The current time offset calculation is wrong. It is supposed to calculate
the offset between the current time and the
"time when the message should be sent" (last_time + interval).
Fix the math to actually do that.
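In other words (a sketch; variable names are assumptions):

  #include <glib.h>

  /* The offset is the signed distance between now and the point where
   * the message ought to be sent (last_time + interval). */
  static gint64
  get_time_offset (gint64 current_time,
                   gint64 last_time,
                   gint64 interval)
  {
    return current_time - (last_time + interval);
  }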
https://bugzilla.gnome.org/show_bug.cgi?id=709340
We must set x11_size_changed even if we are frozen, as every window
size change makes the X server drop the pixmap, and we might lose
the information at the next thaw() if the window changes size
twice in one frame (so we would keep drawing with the old pixmap
until something else causes another resize)
To properly resize clients, we need to send them configure events
with the size we computed from the constraint system, and
then check if the new size they ask for is compatible with
our expectation.
Note that this does not handle interactive resizing yet, it
merely makes the API calls work for wayland clients.
https://bugzilla.gnome.org/show_bug.cgi?id=707401
If we skip getting the clip rectangle because we don't have an
allocation or a texture, don't intersect with the visible region.
This avoids a pixman warning of an invalid rectangle.
Reviewed by drago01 in IRC.
Switching meta/util.h to gi18n.h was wrong: mutter is a library
and needs gi18n-lib.h, but that cannot be included from a public
header (since it depends on config.h or command line options),
so split util.h into a public and a private part.
https://bugzilla.gnome.org/show_bug.cgi?id=707897
The meta_create_texture_pipeline function used to create a dummy 1x1
texture so that it could make sure that all of the state that
affects the shader generation would be set on the template pipeline so
that Cogl could share the pipeline's shader with any other pipelines
that are just rendering a texture. This is no longer necessary because
the only thing that affects the shader generation is the texture type,
not the actual texture data and Cogl now has a function to explicitly
set the texture type which we can use instead. Additionally even if
the template mechanism is not used at all Cogl will still end up
reusing the same shader because it now has a shader cache which is
indexed by the pipeline state so pipelines don't strictly need to
share ancestry in order to take advantage of it. However we still
might as well use the function because if there is a common ancestry
it is faster to look up the shader because Cogl doesn't need to hash
the pipeline state.
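Roughly what using that function looks like (a sketch, not the actual
meta_create_texture_pipeline body):

  #include <cogl/cogl.h>

  static CoglPipeline *
  create_texture_pipeline_template (CoglContext *ctx)
  {
    CoglPipeline *tmpl = cogl_pipeline_new (ctx);

    /* Declare the layer's texture type up front instead of attaching a
     * dummy 1x1 texture, so pipelines that later get a real 2D texture
     * can still share the template's generated shader. */
    cogl_pipeline_set_layer_null_texture (tmpl, 0, COGL_TEXTURE_TYPE_2D);

    return tmpl;
  }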
https://bugzilla.gnome.org/show_bug.cgi?id=707458
(cherry picked from commit c5bf60eab4)
Calling XIGrabDevice has no effect under wayland, because the
xserver is getting events from us. Instead, we need to use our
own interfaces for grabs.
At the same time, we can simplify the public API, as plugins
should always listen for events using clutter.
https://bugzilla.gnome.org/show_bug.cgi?id=705917
We must send frame_drawn and frame_timing messages even when
we don't actually queue a redraw on screen, to comply with the
WM sync spec.
So throttle such apps down to a ~100ms interval.
https://bugzilla.gnome.org/show_bug.cgi?id=703332
When we get a damage event we update the window by calling
meta_shaped_texture_update_area which queues a redraw on the actor.
We can avoid that for obscured regions by comparing the damage area to
our visible area.
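A sketch of the comparison (illustrative only):

  #include <glib.h>
  #include <cairo.h>

  /* Only queue a redraw when the damaged rectangle actually overlaps
   * the unobscured region. */
  static gboolean
  damage_needs_redraw (cairo_region_t              *unobscured_region,
                       const cairo_rectangle_int_t *damage)
  {
    return cairo_region_contains_rectangle (unobscured_region, damage)
           != CAIRO_REGION_OVERLAP_OUT;
  }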
This patch causes _NET_WM_FRAME_DRAWN messages to be not sent in some cases
where they should be sent; they will be added back in a later commit.
https://bugzilla.gnome.org/show_bug.cgi?id=703332
When drawing entirely opaque regions, we traditionally kept blending on
simply because it made the code more convenient and obvious to handle.
However, this can cause lots of performance issues on GPUs that aren't
too powerful, as they have to readback the buffer underneath.
Keep track of the opaque region set by windows (through _NET_WM_OPAQUE_REGION,
Wayland opaque_region hints, standard RGB32 frame masks or similar), and draw
those rectangles separately through a different path with blending turned off.
https://bugzilla.gnome.org/show_bug.cgi?id=707019
Split out pipeline creation to a separate function so that we don't
have so much dense code in the paint function itself, and remove some
indentation levels.
Also, don't use our own template for the unmasked pipeline, since it
has nothing different from the default pipeline template.
We also don't store the pipelines anymore since their creation isn't
really helping us; we set the mask texture and paint texture on every
paint anyway.
https://bugzilla.gnome.org/show_bug.cgi?id=707019
We can't use the X11 stage window, if clutter is not using the X11
backend (and even if it was, it would be bogus when the xwayland
server is not the one clutter is talking to). Instead, we introduce
the concept of "focus type", which we use to differentiate the
various meanings of None in the focus_xwindow field.
https://bugzilla.gnome.org/show_bug.cgi?id=706364
Remove grab window and cursor from the API, and just grab always
on the stage window with no cursor.
This is mainly to remove the X11 usage in the public API, in preparation
for implementing this in wayland.
https://bugzilla.gnome.org/show_bug.cgi?id=705917
We want to show a dialog when a display change happens from the
control center. To do so, add a new vfunc to MetaPlugin and
call it when a configuration change is requested via DBus.
https://bugzilla.gnome.org/show_bug.cgi?id=705670
Instead of a full white background, make one with a random color.
This way the different "monitors" are visible and it's easier
to debug the DBus API.
https://bugzilla.gnome.org/show_bug.cgi?id=705670
We need to use g_signal_connect_object(), rather than g_signal_connect(),
because the window actor can be destroyed before the window emits
the final notify::appears-focused inside unmanage, if the plugin
decides that it doesn't want to animate the destruction (which
happens with dialogs and the default plugin)
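For illustration (the signal name follows the description above; the helper
and callback are stand-ins):

  #include <glib-object.h>

  /* g_signal_connect_object() automatically disconnects the handler when
   * the actor is finalized, so the final notify::appears-focused emitted
   * during unmanage can never reach a destroyed actor. */
  static void
  connect_appears_focused (GObject  *window,
                           GObject  *window_actor,
                           GCallback callback)
  {
    g_signal_connect_object (window, "notify::appears-focused",
                             callback, window_actor, 0);
  }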
https://bugzilla.gnome.org/show_bug.cgi?id=706207
We need to use g_signal_connect_object(), rather than g_signal_connect(),
because the window actor can be destroyed before the window emits
the final notify::appears-focused inside unmanage, if the plugin
decides that it doesn't want to animate the destruction (which
happens with dialogs and the default plugin)
https://bugzilla.gnome.org/show_bug.cgi?id=706207
The previous code was leaving focus fields dirty in MetaWaylandPointer
and MetaWaylandKeyboard at times (which could crash the X server
because of invalid object IDs)
The new code is more tightly integrated in the normal X11 code
for handling keyboard focus (meaning that the core idea of input
focus is also correct now), so that meta_window_unmanage() can
do the right thing. As a side benefit, clicking on wayland clients
now unfocuses X11 clients.
For the mouse focus, we need to clear the surface pointer when
the metawindowactor is destroyed (even if the actual actor is
kept alive for effects), so that a repick finds a different pointer
focus.
https://bugzilla.gnome.org/show_bug.cgi?id=705859
Remove window_surfaces, as the FIXME asks for. We don't need it
because we can obtain the surface from the MetaWindow, and
follow the wayland compositor path for both types of clients.
https://bugzilla.gnome.org/show_bug.cgi?id=705818
This copies the basic input support from the Clayland demo compositor.
It adds a basic wl_seat implementation which can convert Clutter mouse
events to Wayland events. For this to work all of the wayland surface
actors need to be made reactive.
The wayland keyboard input focus surface is updated whenever Mutter
sees a FocusIn event so that it will stay in synch with whatever
surface Mutter wants as the focus. Wayland surfaces don't get this
event so for now it will just give them focus whenever they are
clicked as a hack to test the code.
Authored-by: Neil Roberts <neil@linux.intel.com>
Authored-by: Giovanni Campagna <gcampagna@src.gnome.org>
This adds support for running mutter as a hybrid X and Wayland
compositor. It runs a headless XWayland server for X applications
that presents wayland surfaces back to mutter which mutter can then
composite.
This aims to not break Mutter's existing support for the traditional X
compositing model which means a single build of Mutter can be
distributed supporting the traditional model and the new Wayland based
compositing model.
TODO: although building with --disable-wayland has at least been tested,
I still haven't actually verified that running as a traditional
compositor isn't broken currently.
Note: At this point no input is supported
Note: multiple authors have contributed to this patch:
Authored-by: Robert Bragg <robert@linux.intel.com>
Authored-by: Neil Roberts <neil@linux.intel.com>
Authored-by: Rico Tzschichholz.
Authored-by: Giovanni Campagna <gcampagna@src.gnome.org>
We now track whether a window has an input shape specified via the X
Shape extension. Intersecting that with the bounding shape (as required
by the X Shape extension) we use the resulting rectangles to paint
window silhouettes when picking. As well as improving the correctness of
picking, this should also be much more efficient: typically, when only
picking solid rectangles, the need to actually render and issue a
read_pixels request can be optimized away, and the picking is instead
done on the CPU.
GNOME Shell's actors aren't touch capable, so we need to make sure that
they get the fallback pointer emulated events for now. This fixes the top
bar and other elements not working on a touchscreen without a grab.
https://bugzilla.gnome.org/show_bug.cgi?id=697192
Some cards have 2k texture limits, which can be smaller than
commonly sized backgrounds.
One way to get around this problem is to use Cogl's "sliced texture"
feature, that transparently uses several hardware textures under the hood.
This commit changes background textures loaded from file to potentially
use slicing. Based on a patch by Jasper St. Pierre
<jstpierre@mecheye.net>.
https://bugzilla.gnome.org/show_bug.cgi?id=702283
Some cards have 2k texture limits, which can be smaller than
commonly sized backgrounds.
This commit downscales the background in this situation, so that
it won't fail to load.
https://bugzilla.gnome.org/show_bug.cgi?id=702283
gnome-shell needs to know whether the stage window is focused so
it can synchronize between stage window focus and Clutter key actor
focus. Track all X windows, even those without MetaWindows, when
tracking the focus window, and add a compositor-level API to determine
when the stage is focused.
https://bugzilla.gnome.org/show_bug.cgi?id=700735
We subtract one from the unredirect counter when enable_unredirect_for_screen
gets called. It is an unsigned integer, so subtracting one from zero (which
means enable) would overflow and thus keep it permanently enabled.
This should never happen because it means there is an unmatched
enable / disable pair somewhere. So in addition to fixing it add a
warning when this case gets triggered.
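A sketch of the resulting guard (names simplified; not the actual code):

  #include <glib.h>

  /* disable_unredirect_count is the unsigned counter described above. */
  static guint disable_unredirect_count = 0;

  void
  enable_unredirect (void)
  {
    if (disable_unredirect_count == 0)
      {
        g_warning ("enable_unredirect called while not disabled");
        return;
      }

    disable_unredirect_count--;
  }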
https://bugzilla.gnome.org/show_bug.cgi?id=701224
Commit 4f2bb583bf changed things so that the compositor used
clutter_threads_add_repaint_func_full (CLUTTER_REPAINT_FLAGS_POST_PAINT
to get after-paint notification and send _NET_WM_FRAME_DRAWN, but this
doesn't actually work, since Clutter will already have blocked for
VBlank before calling post-paint functions.
The result is that frame synced toolkits like GTK 3.8 will normally
only be able to draw every other frame.
Since ::paint doesn't work either, a new function
clutter_stage_set_paint_callback() has been added to Clutter
(and will be included in the 1.14 branch)
https://bugzilla.gnome.org/show_bug.cgi?id=698794
gnome-shell has traditionally just called XSetInputFocus when wanting to
set the input focus to the stage window, but this might cause strange,
hard-to-reproduce bugs because of an interference with mutter's focus
prediction. Add API to allow gnome-shell to focus the stage window that
also updates mutter's internal focus prediction state.
https://bugzilla.gnome.org/show_bug.cgi?id=700735
The hierarchy handling is handled in the shell by adding stuff
directly to the uiGroup, and we have a dedicated actor for
the overview there, so we don't need this anymore.
https://bugzilla.gnome.org/show_bug.cgi?id=700735
This essentially just moves install_corners() from the compositor, through
the core, into the UI layer where it arguably should have been anyway,
leaving behind stub functions which call through the various layers. This
removes the compositor's special knowledge of how rounded corners work,
replacing it with "ask the UI for an alpha mask".
The computation of border widths and heights changes a bit, because the
width and height used in install_corners() are the
meta_window_get_outer_rect() (which includes the visible borders but not
the invisible ones), whereas the more readily-available rectangle is the
MetaFrame.rect (which includes both). Computing the same width and height
as meta_window_get_outer_rect() involves compensating for the invisible
borders, but the UI layer is the authority on those anyway, so it seems
clearer to have it do the calculations from scratch.
Bug: https://bugzilla.gnome.org/show_bug.cgi?id=697758
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Reviewed-by: Jasper St. Pierre <jstpierre@mecheye.net>
compositor/meta-background.c:64: error: redefinition of typedef 'MetaBackgroundPrivate'
./meta/meta-background.h:51: error: previous declaration of 'MetaBackgroundPrivate' was here
Right now we call unset_texture from MetaBackground's dispose method.
unset_texture assumes there's a pipeline available, but there may not
be if the object was just created.
This commit fixes that incorrect assumption.
https://bugzilla.gnome.org/show_bug.cgi?id=696157
Cogl automatically caches pipelines with no eviction policy,
so we need to make sure to reuse snippets to prevent
identical pipelines from getting cached separately.
https://bugzilla.gnome.org/show_bug.cgi?id=696157
g_task_propagate_pointer relinquishes the GTask's
reference to the propagated pointer, so we need to
unref it ourselves when we're done with it.
https://bugzilla.gnome.org/show_bug.cgi?id=696157
ClutterBinLayout's get_preferred_width / get_preferred_height
don't respect fixed child positioning when calculating the
size of the layout, but does when allocating. This is absurdly
broken, but it's what we're given. Use a ClutterFixedLayout,
which doesn't have these issues.
https://bugzilla.gnome.org/show_bug.cgi?id=696089
If _NET_WM_OPAQUE_REGION is set when the window is first mapped, the
initial load_properties will happen before the window actor is created,
and we'll have a call to meta_compositor_window_shape_changed. Just
fizzle this call out instead of doing anything fancy, as we'll pick
up the opaque region when the window actor is eventually created.
https://bugzilla.gnome.org/show_bug.cgi?id=695813
Window actors might be temporarily parented to intermediate actors during
effects, but we should not require that the plugin keeps track of stacking.
Rather, assume that the intermediate group holds a whole stack, and
apply the position within it.
https://bugzilla.gnome.org/show_bug.cgi?id=695711
When windows get redirected off screen, all that gets left behind
is black. We don't want to flicker black at startup, though.
This commit maps the overlay window early, before redirecting
toplevels, so they end up getting snapshotted onto the background
pixmap of the overlay window when the overlay window is mapped.
https://bugzilla.gnome.org/show_bug.cgi?id=694321
Send a _NET_WM_FRAME_DRAWN for each newly created window, as required
by the specification. This avoids a race where a window might be created
frozen but already unfrozen by the time we first fetch the
counter value.
Remove a duplicate call to meta_compositor_set_updates_frozen() which
was called before the MetaWindowActor is created and hence did nothing.
https://bugzilla.gnome.org/show_bug.cgi?id=694771
The background vignette currently fits itself to the painted
texture, instead of the monitor. This causes some very
wrong looking drawing for backgrounds that don't fill the screen.
This commit reworks the vignette shader code to be clearer, more
correct, and parameterized so that it knows how to scale and
position the vignette.
https://bugzilla.gnome.org/show_bug.cgi?id=694393
The WALLPAPER style of background painting currently
draws starting in the upper left corner of each monitor.
This isn't really correct, it means the seam between
monitors doesn't match up and edges look unbalanced if
the tile isn't a multiple of the monitor size.
Really, the tiles should be centered in the middle of
the screen. (Just like when tiling a bathroom floor,
tiles should start in the center of the room.)
This commit reworks the math to make that happen.
https://bugzilla.gnome.org/show_bug.cgi?id=694393
Commit 4f2bb583bf started to use a clutter_threads_add_repaint_func_full
callback instead of connecting to the stage's paint signal.
The callback has to return TRUE if it wants to be called again, so fix that
as we want to call it for every frame (otherwise apps supporting the WM SYNC
protocol will stop drawing).
https://bugzilla.gnome.org/show_bug.cgi?id=695006
Doing so causes useless full stage redraws and breaks culling
as clutter cannot know how the signal handler affects painting.
So use clutter_threads_add_repaint_func_full with the
CLUTTER_REPAINT_FLAGS_POST_PAINT flag instead.
https://bugzilla.gnome.org/show_bug.cgi?id=694988
There is currently code to try to fill gradients in a
hardware native format, using #ifdefs. That optimization is
unimportant since gradients only use 2 byte buffers. It's
also incorrect because it's getting the channel order wrong
at buffer initialization time.
This commit drops the ifdefs for clarity and fixes the
channel order.
https://bugzilla.gnome.org/show_bug.cgi?id=694641
Background handling in GNOME is very roundabout at the moment.
gnome-settings-daemon uses gnome-desktop to read the background from
disk into a screen-sized pixmap. It then sets the XID of that pixmap
on the _XROOTPMAP_ID root window property.
mutter puts that pixmap into a texture/actor which gnome-shell then
uses.
Having the gnome-settings-daemon detour from disk to screen means we
can't easily let the compositor handle transition effects when
switching backgrounds. Also, having the background actor be
per-screen instead of per-monitor means we may have oversized
textures in certain multihead setups.
This commit changes mutter to read backgrounds from disk itself, and
it changes backgrounds to be per-monitor.
This way background handling/compositing is left to the compositor.
https://bugzilla.gnome.org/show_bug.cgi?id=682427
actor_is_untransformed is a function meta-window-group uses to determine
if an actor is relatively pixel aligned and not contorted. It then
returns the coordinates of the actor.
In a subsequent commit we will need the function in a different file, so
this commit separates it out.
https://bugzilla.gnome.org/show_bug.cgi?id=682427
Now that the background actor is reactive, this means that
clicks on the window group part of the stage, even when they're
on an X window, will be registered as the background actor, as
all of the other children of the group aren't reactive. This can
happen when a plugin takes a modal grab, for instance.
https://bugzilla.gnome.org/show_bug.cgi?id=681540
We do, in fact, need freezing to affect window geometry, so that
move-resize operations (such as an interactive resize from the
left, or a resize of a popup centered by the application) occur
atomically.
So to make map effects work properly, only exclude the initial
placement of a window from freezing. (In the future, we may want
to consider whether pure moves of a window being done in response
to a user drag should also be excluded from freezing.)
Rename meta_window_sync_actor_position() to
meta_window_sync_actor_geometry() for clarity.
https://bugzilla.gnome.org/show_bug.cgi?id=693922
If a window is frozen because it is repainting, that shouldn't keep
us from updating its position: we don't want a slow-to-update window
to move around the screen chunkily when dragged. (This does reduce
the efficiency of begin/end frames for replacing double-buffering,
but that never works very well in the case where there was an overlapping
window or the entire screen needed redrawing for whatever reason.)
This fixes a bug where a window that was mapped frozen would not get
positioned properly until after the map effect finished, and would
jump from 0,0 at that point. Since effects *do* need to prevent
actor repositioning by Mutter, we must position the actor before any
effect starts.
Because we now are queuing invalidates on frozen windows, fix the
logic for that so that we properly update everything when the window
unfreezes.
https://bugzilla.gnome.org/show_bug.cgi?id=693922
The WM spec requires _NET_WM_FRAME_DRAWN to *always* be sent when
there is an appropriate update to the sync counter value. We were
potentially missing _NET_WM_FRAME_DRAWN when an application did a
spontaneous update during an interactive resize and during effects.
Refactor the code to always send _NET_WM_FRAME_DRAWN, even when
a window is frozen.
https://bugzilla.gnome.org/show_bug.cgi?id=693833
Put override redirect windows such as menus into a separate window group
stacked above everything else. This will allow us to visually put these
above other compositor chrome.
Based on a patch from Muffin.
https://bugzilla.gnome.org/show_bug.cgi?id=633620
When a client is drawing as hard as possible (without sleeping
between frames) we need to draw as soon as possible, since sleeping
will decrease the effective frame rate shown to the user, and
can also result in the system never kicking out of power-saving
mode because it doesn't look fully utilized.
Use the amount the client increments the counter value by when
ending the frame to distinguish these cases:
- Increment by 1: a no-delay frame
- Increment by more than 1: a non-urgent frame, handle normally
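Sketched as a predicate (names are illustrative):

  #include <glib.h>

  /* An increment of exactly 1 marks a no-delay frame that should be drawn
   * as soon as possible; a larger increment is a normal, non-urgent frame. */
  static gboolean
  frame_is_no_delay (guint64 old_counter_value,
                     guint64 new_counter_value)
  {
    return new_counter_value - old_counter_value == 1;
  }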
https://bugzilla.gnome.org/show_bug.cgi?id=685463
We previously had timestamp information stubbed out in
_NET_WM_FRAME_DRAWN. Instead of this, add a high-resolution timestamp
in _NET_WM_FRAME_DRAWN then send a _NET_WM_FRAME_TIMINGS message
after when we have complete frame timing information, representing
the "presentation time" of the frame as an offset from the timestamp
in _NET_WM_FRAME_DRAWN.
To provide maximum space in the messages, _NET_WM_FRAME_DRAWN and
_NET_WM_FRAME_TIMINGS are not done as WM_PROTOCOLS messages but
have their own message types.
https://bugzilla.gnome.org/show_bug.cgi?id=685463
Add a function to convert from g_get_monotonic_time() to a
"high-resolution server timestamp" with microsecond precision.
These timestamps will be used when communicating frame timing
information to the client.
https://bugzilla.gnome.org/show_bug.cgi?id=685463
Using a "sync delay" where we wait for 2 ms after the vblank before
starting to draw the next frame provides for much more predictable
latency for applications. An application can know that if it completes
a frame any time between 8ms before the vblank to the vblank,
it will reliably be drawn on the following vblank period, rather than
having an unpredictable latency depending on whether the compositor
is currently busy drawing a frame or not.
https://bugzilla.gnome.org/show_bug.cgi?id=685463
Instead of defining CLUTTER_ENABLE_EXPERIMENTAL_API and
COGL_ENABLE_EXPERIMENTAL_API in individual source files, enable
them on the command line. We weren't tracking exactly what pieces of
experimental API we were using and we were using the experimental
API in most source files that used Clutter and Cogl, so the
local #defines were annoying rather than useful.
https://bugzilla.gnome.org/show_bug.cgi?id=685463
It's possible that a client might update the (extended)
_NET_WM_SYNC_REQUEST_COUNTER counter twice without actually drawing
anything. In that case, we still should send a _NET_WM_FRAME_DRAWN
message since it's hard for a client to know every case in which
no damage is generated. For now, do it the easy way by forcing a
stage repaint.
https://bugzilla.gnome.org/show_bug.cgi?id=685463
When the application provides the extended second counter for
_NET_WM_SYNC_REQUEST, send a client message with completion
information after the next redraw after each counter update
by the application.
https://bugzilla.gnome.org/show_bug.cgi?id=685463
Replace the unused meta_compositor_set_updates() with
a reversed-meaning meta_compositor_set_updates_frozen(), and use
it to implement freezing application window updates during
interactive resizing. This avoids drawing new areas of the window
with blank content before the application has a chance to repaint.
https://bugzilla.gnome.org/show_bug.cgi?id=685463
Some windows may already have event masks on them that we've selected
for, especially if we're using GTK+ windows. In particular, this fixes
window menus in the XI2 port.
https://bugzilla.gnome.org/show_bug.cgi?id=690581
This new hint allows compositors to know what portions of a window
will be obscured, as a region above them is opaque. For an RGB window,
it's possible to glean this information from the bounding shape region of
a client window, but not for an ARGB32 window. This new hint allows
clients that use ARGB32 windows to say which part of the window is
opaque, allowing this sort of optimization.
https://bugzilla.gnome.org/show_bug.cgi?id=679901
With the shape region always set, it turns out the bounding region
is only used in one place, that's easily replaced with a variable
we already have available to us.
https://bugzilla.gnome.org/show_bug.cgi?id=679901
With recent changes in the way the window mask texture is constructed,
the shape_region is always set, which means that we can remove
conditionals checking if the shape region is set.
https://bugzilla.gnome.org/show_bug.cgi?id=679901
With the shape region always set, it turns out the bounding region
is only used in one place, that's easily replaced with a variable
we already have available to us.
https://bugzilla.gnome.org/show_bug.cgi?id=679901
With recent changes in the way the window mask texture is constructed,
the shape_region is always set, which means that we can remove
conditionals checking if the shape region is set.
https://bugzilla.gnome.org/show_bug.cgi?id=679901
Currently we only unredirect monitor-sized override redirect windows.
This was supposed to catch fullscreen opengl games and improve
their performance.
Newer games like fullscreen WebGL games and SDL2-based games (like L4D), as
well as Wine-based games, do not use override redirect windows, so we need a
better heuristic to catch them.
GLX windows always damage the whole window when calling glXSwapBuffers and
never damage sub regions. So we can use that to detect them.
The new heuristic unredirects fullscreen windows that have damaged the
whole window more than 100 times in a row.
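A sketch of the counter (the 100-frame threshold comes from the description
above; the names are made up):

  #include <glib.h>

  typedef struct
  {
    guint    full_damage_frames_count;
    gboolean does_full_damage;
  } DamageHeuristic;

  static void
  note_damage (DamageHeuristic *h,
               gboolean         damage_covers_whole_window)
  {
    if (damage_covers_whole_window)
      h->full_damage_frames_count++;
    else
      h->full_damage_frames_count = 0;

    /* after 100 consecutive full-window damage events, treat the window
     * as a full-damage (likely fullscreen GL) client */
    if (h->full_damage_frames_count >= 100)
      h->does_full_damage = TRUE;
  }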
https://bugzilla.gnome.org/show_bug.cgi?id=683786
We should call meta_window_actor_detach not
meta_window_actor_queue_create_pixmap to create a new pixmap when we redirect a
previously unredirected window again.
https://bugzilla.gnome.org/show_bug.cgi?id=693042
With some recent changes to how mask textures are constructed from
shapes, a helper method we made was only called in one place, allowing
us to drop a reference/destroy, and remove a double clear.
https://bugzilla.gnome.org/show_bug.cgi?id=679901
Due to a conditional error, meta_region_builder_add_rectangle was called
on every single blank pixel, rather than at the end of spans. With the new
rename, it's fairly clear to see the error. Fix the check to ensure that
we no longer make extraneous calls to meta_region_builder_add_rectangle.
https://bugzilla.gnome.org/show_bug.cgi?id=691874