This is needed for cairo_set_device_scale().
|
These are the last two global GDK symbols that have a libgtk_only suffix.
https://bugzilla.gnome.org/show_bug.cgi?id=739781
|
If buffer age is undefined and the updated area is not the whole window, then we use bit-blits instead of swap-buffers to end the frame.
This allows us to avoid repainting the entire window unnecessarily when buffer_age is not supported, e.g. with DRI2.
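As a minimal sketch of that end-of-frame decision (the helpers get_buffer_age() and blit_updated_region() are hypothetical stand-ins, not real GDK functions):

    #include <GL/glx.h>
    #include <cairo.h>

    /* get_buffer_age() is assumed to return 0 when the age is undefined
     * (e.g. DRI2); blit_updated_region() is assumed to copy only the
     * damaged rectangles from the back buffer to the front. */
    static void
    end_frame (Display *dpy, GLXDrawable drawable,
               cairo_region_t *update_region, cairo_region_t *window_region)
    {
      if (get_buffer_age (drawable) == 0 &&
          !cairo_region_equal (update_region, window_region))
        blit_updated_region (drawable, update_region);
      else
        glXSwapBuffers (dpy, drawable);
    }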
|
This broke when gdk_gl_texture_quad moved to shaders. We need a specialized shader for the rectangle case.
|
This is the modern way OpenGL works, and using it will let us switch to a core context for the paint context, and work on OpenGL ES 2.0.
|
Right now this just centralizes the glBegin/glEnd code, but it will be replaced with buffer objects later.
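To make that later step concrete, here is roughly what a buffer-object version of such a quad helper looks like; this is an illustrative sketch, not GDK's code, and it assumes a bound shader with the position in attribute 0 and the texture coordinate in attribute 1:

    #include <epoxy/gl.h>

    static void
    draw_quad_with_vbo (void)
    {
      /* Interleaved x, y, u, v for a full-window quad. */
      static const GLfloat quad[] = {
        -1.f, -1.f, 0.f, 0.f,
         1.f, -1.f, 1.f, 0.f,
        -1.f,  1.f, 0.f, 1.f,
         1.f,  1.f, 1.f, 1.f,
      };
      GLuint vbo;

      glGenBuffers (1, &vbo);
      glBindBuffer (GL_ARRAY_BUFFER, vbo);
      glBufferData (GL_ARRAY_BUFFER, sizeof quad, quad, GL_STATIC_DRAW);

      glEnableVertexAttribArray (0);
      glVertexAttribPointer (0, 2, GL_FLOAT, GL_FALSE,
                             4 * sizeof (GLfloat), (void *) 0);
      glEnableVertexAttribArray (1);
      glVertexAttribPointer (1, 2, GL_FLOAT, GL_FALSE,
                             4 * sizeof (GLfloat), (void *) (2 * sizeof (GLfloat)));

      glDrawArrays (GL_TRIANGLE_STRIP, 0, 4);

      glBindBuffer (GL_ARRAY_BUFFER, 0);
      glDeleteBuffers (1, &vbo);
    }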
|
Commit afd9709afff151e04b84b91c6d90b7 made us keep impl window cairo surfaces around across changes of the window scale, but the window scale setter forgot to update the size and scale of the surface. The effect of this was that toggling the window scale from 1 to 2 in the inspector did not cause the window to draw at twice the size, although the X window was made twice as big and input was scaled too. Fix this by updating the surface when the window scale changes.
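The gist of the fix, sketched against plain cairo (the function and its callers are illustrative, not the actual GDK internals):

    #include <cairo-xlib.h>

    /* When the scale changes, the cached surface must be resized to the
     * new pixel size and told about the new device scale. */
    static void
    update_impl_surface_for_scale (cairo_surface_t *surface,
                                   int width, int height, int scale)
    {
      cairo_surface_set_device_scale (surface, scale, scale);
      cairo_xlib_surface_set_size (surface, width * scale, height * scale);
    }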
|
We need to use this in the code path where we make the context non-current during destroy, because at that point the window could be destroyed and gdk_window_get_display() would return NULL.
|
This moves the frame sync code into the is_attached check, which means we never have to run it when making non-window-paint contexts current. This is a minor speed win, but the main advantage is that it makes making a non-paint context current thread-safe.
|
This is not really needed. The GL context is totally tied to the window it is created from, by virtue of sharing the context with the paint context of that window, and that context always has the visual of the window (which we can already get).
Also, all user-visible contexts are essentially offscreen contexts, so a visual doesn't make sense for them. They only use FBOs, which have whatever format the user sets up.
|
This allows us to read it back, but primarily it ensures the shared context wrapper stays alive as long as the context.
|
To properly support multithreaded use, we use a global GPrivate to track the current context. Since we then no longer need to track the current context on the display, we move gdk_display_destroy_gl_context to GdkGLContext::discard.
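A minimal sketch of per-thread tracking with GPrivate (the names are illustrative, not the exact GDK code):

    #include <gdk/gdk.h>

    static GPrivate thread_current_context = G_PRIVATE_INIT (g_object_unref);

    static void
    set_thread_current_context (GdkGLContext *context)
    {
      /* g_private_replace() releases the previous value via the destroy notify. */
      g_private_replace (&thread_current_context,
                         context != NULL ? g_object_ref (context) : NULL);
    }

    static GdkGLContext *
    get_thread_current_context (void)
    {
      return g_private_get (&thread_current_context);
    }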
|
We used to have a weak ref to the cairo surface, and it was kept alive by the references in the normal windows, but that reference was removed by d48adf9cee7e340acd7f8b9a5f9716695352b848, causing us to constantly create and destroy the surface.
https://bugzilla.gnome.org/show_bug.cgi?id=738648
|
This means we don't have to try to initialize OpenGL in every GTK instance that is started. It will only happen for the first one.
https://bugzilla.gnome.org/show_bug.cgi?id=738670
|
We want to create windows with the default visuals, such that we then have the right visual for GLX when we want to create the paint GL context for the window.
For instance (in bug 738670), the default rgba visual we picked for the NVidia driver had an alpha size of 0, which gave us a BadMatch when later trying to initialize a GL context on it with an alpha FBConfig.
Instead of just picking what the X server likes for the default and taking the first rgba visual, we now actually call into GLX to pick an appropriate visual.
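In sketch form, "calling into GLX" means something along these lines (error handling and any ranking of configs are omitted; this is not the exact GDK code):

    #include <GL/glx.h>

    static XVisualInfo *
    pick_rgba_visual (Display *dpy, int screen)
    {
      static const int attrs[] = {
        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_DOUBLEBUFFER,  True,
        GLX_RED_SIZE,   8,
        GLX_GREEN_SIZE, 8,
        GLX_BLUE_SIZE,  8,
        GLX_ALPHA_SIZE, 8,
        None
      };
      int n_configs = 0;
      GLXFBConfig *configs = glXChooseFBConfig (dpy, screen, attrs, &n_configs);
      XVisualInfo *visinfo = NULL;

      if (configs == NULL)
        return NULL;

      if (n_configs > 0)
        visinfo = glXGetVisualFromFBConfig (dpy, configs[0]);

      XFree (configs);
      return visinfo;  /* caller frees this with XFree() */
    }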
|
Valgrind pointed these out.
|
The visuals are typically sorted in some sort of "most useful first" order, and picking the last one is likely to give us the weirdest matching GLX visual.
|
We really want a GL context with exactly the same visual, or we will get a BadMatch later, which hits us on NVidia as per:
https://bugzilla.gnome.org/show_bug.cgi?id=738670
|
Every single implementation but Quartz is a no-op for this, so just provide it once rather than in every backend.
|
This is more standard, and most drivers support non-power-of-2 TEXTURE_2D these days. We fall back for ancient drivers.
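GDK's GL code sits on top of libepoxy, so the fallback test can be as simple as this sketch (illustrative, not the actual code path):

    #include <epoxy/gl.h>

    /* Prefer plain TEXTURE_2D with NPOT sizes; only ancient drivers
     * without the extension get rectangle textures. */
    static GLenum
    pick_texture_target (void)
    {
      if (epoxy_has_gl_extension ("GL_ARB_texture_non_power_of_two"))
        return GL_TEXTURE_2D;

      return GL_TEXTURE_RECTANGLE_ARB;
    }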
|
This makes a lot more sense.
|
Commits 314b6abbe8d8daae and eb9223c008ccf1c2faab were ignoring the fact that the code where found is set to 1 was modifying col, which was an OK thing to do when that part of the code was still breaking out of the loop, but it no longer does that (since 2003!). Fix things up by storing the final col value in a separate variable and using that after the loop.
https://bugzilla.gnome.org/show_bug.cgi?id=738886
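The shape of that fix, reduced to a toy loop (n_cols, matches() and use_match() are placeholders, not the real code):

    int found = 0;
    int final_col = -1;
    int col;

    for (col = 0; col < n_cols; col++)
      {
        if (matches (col))
          {
            found = 1;
            /* Remember the match; col keeps changing because the loop
             * no longer breaks out here. */
            final_col = col;
          }
      }

    if (found)
      use_match (final_col);  /* the stored value, not the loop counter */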
|
When iterating over the list of displays obtained from the display manager, we have to check whether what we got is actually an X11 display.
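In GDK 3 terms the guard looks roughly like this:

    #include <gdk/gdkx.h>

    GSList *displays = gdk_display_manager_list_displays (gdk_display_manager_get ());
    GSList *l;

    for (l = displays; l != NULL; l = l->next)
      {
        GdkDisplay *display = l->data;

        if (!GDK_IS_X11_DISPLAY (display))
          continue;  /* could be a Wayland, Broadway, ... display */

        /* ... X11-specific handling ... */
      }

    g_slist_free (displays);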
|
We always want to send positions in device pixels, so apply the window scale before sending them out.
https://bugzilla.gnome.org/show_bug.cgi?id=738955
|
All the GDK type defines are GDK_TYPE_..., so follow this pattern for the GLContext subclasses as well.
|
Always use NPOT textures.
https://bugzilla.gnome.org/show_bug.cgi?id=738670
|
We need to look at the impl_window for the GL rendering, not the subwindow we're rendering into.
|
It's not really reasonable to handle failures of make_current: it basically only happens if you pass invalid arguments to it, and we don't trap for that in the comparable X drawing code either.
If GL is not supported, that should be handled by the context creation failing, and anything going wrong after that is essentially a critical (or an async X error).
|
This XSync doesn't seem to be necessary. Remove it until proven otherwise.
|
We make user-facing GL contexts not attached to a surface if possible, or attached to dummy surfaces. This means nothing can accidentally read from or write to the toplevel back buffer.
|
This adds the new type GdkGLContext that wraps an OpenGL context for a particular native window. It also adds support for the GDK paint machinery to use OpenGL to draw everything. As soon as anyone creates a GL context for a native window, we create a "paint context" for that GdkWindow and switch to using GL for painting it.
This commit contains only an implementation for X11 (using GLX).
The way painting works is that all client GL contexts draw into offscreen buffers rather than directly to the back buffer, and the way something gets onto the window is by using gdk_cairo_draw_from_gl() to draw part of that buffer onto the draw cairo context.
As a fallback (if we're doing redirected drawing or some effect like a cairo_push_group()), we read back the GL buffer into memory and composite using cairo. This means that GL rendering works in all cases, including rendering to a PDF. However, this is not particularly fast.
In the *typical* case, where we're drawing directly to the window in the regular paint loop, we hit the fast path. The fast path uses OpenGL to draw the buffer to the window back buffer, either by blitting or texturing. Then we track the region that was drawn, and when the draw ends we paint the normal cairo surface to the window (using texture-from-pixmap in the X11 case, or a texture from a cairo image otherwise) in the regions where no GL was painted.
There are some complexities with respect to layering of GL and cairo areas though:
* We track via gdk_window_mark_paint_from_clip() whenever GTK is painting over a region we previously rendered with OpenGL (flushed_region). This area (needs_blend_region) is blended rather than copied at the end of the frame.
* If we're drawing a GL texture with alpha, we first copy the current cairo_surface inside the target region to the back buffer before we blend over it.
These two operations allow us full stacking of transparent GL and cairo regions.
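To illustrate how a client is meant to plug into this (a hedged sketch, not code from the commit): render into a GL texture with a user GL context, then hand that texture to GDK from the ::draw handler; GDK then takes the fast path or the readback fallback as described above.

    #include <gtk/gtk.h>
    #include <epoxy/gl.h>

    /* The texture is assumed to have been rendered elsewhere with a
     * GdkGLContext belonging to this window and passed in as user data. */
    static gboolean
    on_draw (GtkWidget *widget, cairo_t *cr, gpointer user_data)
    {
      GdkWindow *window = gtk_widget_get_window (widget);
      int scale = gdk_window_get_scale_factor (window);
      GLuint texture = GPOINTER_TO_UINT (user_data);

      gdk_cairo_draw_from_gl (cr, window,
                              texture, GL_TEXTURE, scale,
                              0, 0,
                              gtk_widget_get_allocated_width (widget) * scale,
                              gtk_widget_get_allocated_height (widget) * scale);
      return FALSE;
    }

GtkGLArea later wraps essentially this pattern up for applications.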
|
Before 5e325c4, the default BitGravity was NorthWestGravity. When static gravities were removed in 5e325c4, the BitGravity regressed to the X11 default, Forget. Forget causes giant graphical glitches and black flashes when resizing, especially in some environments that aren't synchronized to a paint clock yet, like XWayland.
I'm assuming that the author assumed that the default of BitGravity was NorthWestGravity, which is the default of WinGravity. Just go ahead and fix this regression to make resizing look smooth again.
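For reference, restoring the old behaviour boils down to setting the attribute on the X window again; in plain Xlib terms (in GDK the equivalent attribute is set when the X window is created):

    #include <X11/Xlib.h>

    static void
    set_northwest_bit_gravity (Display *dpy, Window xwindow)
    {
      XSetWindowAttributes attrs;

      attrs.bit_gravity = NorthWestGravity;
      XChangeWindowAttributes (dpy, xwindow, CWBitGravity, &attrs);
    }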
|
... and remove all implementations. The API is allowed to not work "if the server doesn't support it". So from now on, no server does!
|
window->parent must exist; it's dereferenced a few lines below. This avoids clang complaints.
|
It triggers Coverity warnings.
|
Remove checks for NULL before g_free() and g_clear_object(). Merge the check for NULL, the freeing of the pointer, and its setting to NULL into a single g_clear_pointer() call.
https://bugzilla.gnome.org/show_bug.cgi?id=733157
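The pattern being removed and its replacement, side by side (the field name is illustrative):

    /* Before: explicit NULL check, free, reset. */
    if (priv->data != NULL)
      {
        g_free (priv->data);
        priv->data = NULL;
      }

    /* After: one call does all three.  (g_free (NULL) is a no-op anyway,
     * which is why the bare NULL checks before g_free () can simply go.) */
    g_clear_pointer (&priv->data, g_free);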
|
The warning may have had some value at some point, but if people uninstall large icons just to make the warning go away, it does more harm than good. So just remove it.
|
https://bugzilla.gnome.org/show_bug.cgi?id=729782
|
If we have a fullscreen window that covers a monitor, desktop chrome is not relevant for placing menus and other popups. Therefore, return the full monitor geometry instead of the workarea in this case.
https://bugzilla.gnome.org/show_bug.cgi?id=737251
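A sketch of that logic in public GDK 3 API terms (illustrative, not the actual patched function):

    #include <gdk/gdk.h>

    static void
    get_placement_area (GdkWindow *window, GdkRectangle *area)
    {
      GdkScreen *screen = gdk_window_get_screen (window);
      int monitor = gdk_screen_get_monitor_at_window (screen, window);

      if (gdk_window_get_state (window) & GDK_WINDOW_STATE_FULLSCREEN)
        gdk_screen_get_monitor_geometry (screen, monitor, area);   /* ignore panels, docks, ... */
      else
        gdk_screen_get_monitor_workarea (screen, monitor, area);
    }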
|
Since we know what size was too large here, why not say it.