
Graphics world domination may be closer than it appears

By Jonathan Corbet
October 18, 2016

Kernel Recipes
The mainline kernel has support for a wide range of hardware. One place where support has traditionally been lacking, though, is graphics adapters. As a result, a great many people are still using proprietary, out-of-tree GPU drivers. Daniel Vetter went before the crowd at Kernel Recipes 2016 to say that the situation is not as bad as some think; indeed, he said, in this area as well as others, world domination is proceeding according to plan.

The current state of affairs

The first stop on Vetter's tour of the direct rendering manager (DRM) subsystem was documentation, and, in particular, the transition to Sphinx that has unfolded over the last couple of release cycles. The new formatted documentation system for the kernel is "pretty and awesome", and makes writing the documentation fun. As a result, there's now a lot more documentation than there used to be; indeed, the DRM documentation is pretty much complete. The biggest gap at this point is a top-level picture that nicely ties all the pieces together.

Moving on to rather older work (he titled this section "dungeons and dragons"), Vetter noted that there are still some DRM1 drivers around; these are at least ten years old at this point. They feature nasty user-space APIs, root holes, and other delightful things. These drivers are built around a midlayer architecture, a design which has gone out of fashion in recent years; the idea was to make it possible to build the drivers on BSD systems. In current kernels, these drivers are hidden behind the CONFIG_DRM_LEGACY option. They cannot be removed outright without breaking things, though, so they will remain for a while.

The IGT tools from Intel have proved to be a useful test suite for the validation of DRM drivers. They are Intel-specific for now, but are being modified to be more generic. At this point, a number of drivers and continuous-integration systems are using these tests to trap regressions. See the DRM documentation for information on how to validate drivers with the IGT suite.

Recently there has been an influx of DRM developers from the ARM community; that has led to a new set of problems. The DRM subsystem is special, Vetter said, in that it requires that the user-space API for any driver be open source. Much of the code for these drivers runs in user space; the 10% that runs in the kernel is "useless" without the user-space side as well. A kernel driver without the user-space code cannot be enhanced or maintained. The ARM folks were unaware of this restriction and not used to operating in this mode, so the DRM maintainers have had to start rejecting their patches. The result was some screaming, but, at this point, the ARM community understands the requirements and is starting to look at opening up the user-space code as well.

One of the big changes in the DRM subsystem in recent years has been the switch to the atomic mode-setting API. The original DRM API featured one ioctl() call for each operation to be done; that resulted in a lot of display flickering as applications worked through a long series of changes. The atomic API allows everything to be done with a single call, leading to flicker-free changes. An atomic change is an all-or-nothing affair; if it succeeds at all, it will succeed completely.

This API also provides a separate call to check whether a set of changes would succeed without actually making those changes. It can be hard to know before trying; hardware often has weird restrictions that get in the way. He mentioned adapters with three video outputs but only two clocks as an example. Overlay support (the ability to directly display a video stream from another source, such as a camera, without going through user space) has been added to this API as well. Overlays went out of fashion for a while, but it turns out that a lot of power can be saved by outputting the video directly; it is a crucial feature for mobile systems.
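To make the check-then-commit pattern concrete, here is a minimal user-space sketch built on libdrm's atomic functions. It assumes the caller has already enabled DRM_CLIENT_CAP_ATOMIC and looked up the relevant property IDs (with drmModeObjectGetProperties(), for example); the parameter names are placeholders, not anything from Vetter's talk.

    #include <errno.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Queue a couple of property changes, test them, then commit atomically. */
    int try_then_commit(int fd, uint32_t crtc_id, uint32_t prop_active,
                        uint32_t plane_id, uint32_t prop_fb_id, uint32_t fb_id)
    {
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        int ret;

        if (!req)
            return -ENOMEM;

        /* Any number of changes can be gathered into a single request. */
        drmModeAtomicAddProperty(req, crtc_id, prop_active, 1);
        drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);

        /* Ask whether the whole set would succeed, without touching
         * the hardware at all. */
        ret = drmModeAtomicCommit(fd, req,
                                  DRM_MODE_ATOMIC_TEST_ONLY |
                                  DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
        if (ret == 0)
            /* It would, so apply everything in one all-or-nothing step. */
            ret = drmModeAtomicCommit(fd, req,
                                      DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);

        drmModeAtomicFree(req);
        return ret;
    }

The same request is reused for the real commit once the TEST_ONLY call succeeds, which is how a compositor can probe the hardware's weird restrictions without risking a half-applied configuration.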

At this point, there are 20 drivers in the mainline with atomic mode-setting implementations; another two or three are added with each release. This API is being adopted far more quickly than the original kernel mode-setting API was. It helps that a lot of functionality is in common code now, so the drivers themselves have gotten smaller. The support library has been made more modular; using it is not an all-or-nothing affair like it used to be.

Use of the atomic API is growing; one example is the drm_hwcomposer library, written by Google for use with Android systems. The ChromeOS Ozone interface running on Wayland uses it, as do all the other Wayland implementations. We have, he said, "a driver API to rule them all" for the first time.

Looking forward

Turning to future work, Vetter mentioned that there is interest in an interface that can allocate buffers for use with multiple devices. The ION memory allocator offers this functionality, but it remains Android-specific for now.

The old framebuffer device (fbdev) interface has been deprecated for some time, but it still turns out to be useful in some settings. In particular, it can save memory bandwidth and power on some low-end displays — those that require manual uploading of display data. The generic fbdev "defio" interface can now be remapped onto kernel mode-setting operations, making it possible to write a full fbdev driver on top of the DRM subsystem.
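As a rough sketch of how that fits together on the driver side (using the 4.8-era helper names; my_defio and my_fbdev_setup_defio are made-up identifiers, and details may differ in other kernel versions), the fbdev emulation can point the deferred-I/O machinery at drm_fb_helper_deferred_io(), which gathers the pages touched through the fbdev mmap() and turns them into dirty-rectangle updates for the DRM driver to upload:

    #include <linux/fb.h>
    #include <drm/drm_fb_helper.h>

    static struct fb_deferred_io my_defio = {
        .delay       = HZ / 20,                     /* coalesce writes for ~50ms */
        .deferred_io = drm_fb_helper_deferred_io,   /* touched pages -> dirty rects */
    };

    static void my_fbdev_setup_defio(struct fb_info *info)
    {
        /* Page faults on the fbdev mmap() are collected and, after the
         * delay above, handed to the DRM driver's dirty handling, which
         * can then upload only the changed region to the panel. */
        info->fbdefio = &my_defio;
        fb_deferred_io_init(info);
    }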

The simple display pipeline helper also makes writing simple drivers easy. For settings where there is a simple processing pipeline and a single connector, it can provide access to the atomic API without most of the complexity. With this helper, the DRM API is "now strictly better" than fbdev.
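As an illustration only (not code from any particular driver), a device using this helper might look roughly like the following; struct my_device, the my_pipe_* callbacks, and the format list are hypothetical, and the signatures shown are the ones from this era:

    #include <linux/kernel.h>
    #include <drm/drm_crtc.h>
    #include <drm/drm_fourcc.h>
    #include <drm/drm_simple_kms_helper.h>

    struct my_device {
        struct drm_device *drm;
        struct drm_simple_display_pipe pipe;
        struct drm_connector connector;
    };

    static const uint32_t my_formats[] = {
        DRM_FORMAT_XRGB8888,
    };

    static void my_pipe_enable(struct drm_simple_display_pipe *pipe,
                               struct drm_crtc_state *crtc_state)
    {
        /* Power the (hypothetical) panel up here. */
    }

    static void my_pipe_disable(struct drm_simple_display_pipe *pipe)
    {
        /* ... and power it back down here. */
    }

    static void my_pipe_update(struct drm_simple_display_pipe *pipe,
                               struct drm_plane_state *old_state)
    {
        /* A new framebuffer (or damage) arrived; push it to the panel. */
    }

    static const struct drm_simple_display_pipe_funcs my_pipe_funcs = {
        .enable  = my_pipe_enable,
        .disable = my_pipe_disable,
        .update  = my_pipe_update,
    };

    static int my_pipe_setup(struct my_device *priv)
    {
        /* One call creates and wires up the plane, CRTC, and encoder;
         * the atomic plumbing is handled entirely by the helper. */
        return drm_simple_display_pipe_init(priv->drm, &priv->pipe,
                                            &my_pipe_funcs, my_formats,
                                            ARRAY_SIZE(my_formats),
                                            &priv->connector);
    }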

Fences are currently an area of active development. A fence is like the kernel's completion structure, in that it can be used to wait for (and signal) the completion of an operation; it is intended to be used with DMA operations in particular. There are two models for fence usage. In the "implicit" model, the kernel attaches fences to I/O buffers and takes care of everything; user space never sees it. The "explicit" model, instead, has the kernel providing fences to user space, which must then manage them itself.

The implicit model has been implemented for some time, in the form of reservation_object structures attached to DMA buffers. The TTM memory manager (used with the AMD and Nouveau drivers) has always supported it; other drivers are picking up support over time. This is the model preferred by the Linux desktop; both X and Wayland expect implicit fencing.

On the other hand, the Android system wants to use explicit fencing. It provides more control to user space and reduces the need for complexity in (vendor-supplied) graphics drivers. That was the driving factor in Android's decision, Vetter said; no vendor proved able to implement implicit fences correctly. The DRM subsystem implements an explicit fence as a sync_file structure, which is returned to user space as a file descriptor. User-space fences will be supported in the 4.9 kernel; the MSM/freedreno driver has added support so far.
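For a feel of what the explicit model looks like from user space, here is a hedged sketch (the two function names are illustrative, not part of any established API): a sync_file descriptor behaves like any other pollable file descriptor, and the SYNC_IOC_MERGE ioctl from linux/sync_file.h combines two fences into a new one that signals only once both inputs have.

    #include <poll.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/sync_file.h>

    /* Block (up to timeout_ms) until the fence behind this sync_file
     * descriptor signals; returns >0 on success, 0 on timeout. */
    static int wait_fence(int fence_fd, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };

        return poll(&pfd, 1, timeout_ms);
    }

    /* Merge two fences into a new sync_file fd that signals only when
     * both inputs have signaled; returns the new fd or -1 on error. */
    static int merge_fences(int fd_a, int fd_b)
    {
        struct sync_merge_data merge;

        memset(&merge, 0, sizeof(merge));
        strncpy(merge.name, "merged", sizeof(merge.name) - 1);
        merge.fd2 = fd_b;

        if (ioctl(fd_a, SYNC_IOC_MERGE, &merge) < 0)
            return -1;
        return merge.fence;
    }

Passing such descriptors between the compositor and the drivers is what lets Android queue up work for the next frame before the previous one has actually finished rendering.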

As one might imagine, there is some tricky interaction between implicit and explicit fences. The solution that has been chosen is to use implicit fences by default, but to switch to the explicit model as soon as an application calls one of the explicit-fencing extensions.

Google has created the "HWC2" composer that can make use of DRM's explicit-fencing support; it is not yet publicly released, Vetter said, but will hopefully show up in 4.10. More information will be available at the Linux Plumbers Conference. Sometime soon it will be possible to run Android on a mainline kernel with an open-source graphics stack, he said.

Along those lines, what is the status of low-level GPU drivers? At this point, there are three vendor-supported open drivers in the mainline, and three more reverse-engineered ones. Of those, the Nouveau driver runs fairly well on Tegra systems. The freedreno driver is "pretty feature-complete" and is now competitive with proprietary drivers. The etnaviv driver is coming along, but still needs work on the user-space side. But, he said, there are still no vendor-supported system-on-chip drivers; that situation is "pretty dire."

He finished up by noting that the atomic API now "rules them all." There has been a lot of progress in documentation and general cleanup; all of the major gaps for authors of display drivers have been closed. Cross-driver fencing is reaching a point of being ready for everyone, and even rendering is showing some (albeit slow) progress. Upstream graphics, he said, is finally winning.

Index entries for this article
Kernel: Device drivers/Graphics
Conference: Kernel Recipes/2016



Graphics world domination may be closer than it appears

Posted Oct 18, 2016 16:02 UTC (Tue) by pabs (subscriber, #43278) [Link] (4 responses)

> there are still no vendor-supported system-on-chip drivers

I thought anholt was paid by Broadcom to work on the open RPi/VC4 drivers?

Graphics world domination may be closer than it appears

Posted Oct 18, 2016 16:06 UTC (Tue) by pabs (subscriber, #43278) [Link] (2 responses)

Also, it looks like Imagination Technologies have been trying for a while now to hire a developer for open PowerVR drivers:

http://careers.imgtec.com/cw/en/job/495667/linux-graphics...

Graphics world domination may be closer than it appears

Posted Oct 18, 2016 16:11 UTC (Tue) by tau (subscriber, #79651) [Link] (1 responses)

Nothing in that job posting suggests that the driver to be developed would be open source, although there's nothing there saying that it wouldn't be, either.

Graphics world domination may be closer than it appears

Posted Oct 19, 2016 6:05 UTC (Wed) by blackwood (guest, #44174) [Link]

imgtec engineers even contribute the occasional patch to upstream drm, but their driver isn't upstream, because they don't want to open up the user-space side of it.

Graphics world domination may be closer than it appears

Posted Oct 19, 2016 6:10 UTC (Wed) by blackwood (guest, #44174) [Link]

I might have mangled this a bit, but what I wanted to say is that there are now two vendors supporting open gfx drivers completely: Intel & AMD, plus Broadcom doing an open driver for the Raspberry Pi (but none of the other GPUs they have). That means there is no vendor who covers the full spectrum of SoCs with vendor-supported open drivers. That makes running anything like a phone or tablet pretty painful, since it's all reverse-engineered drivers. But the good news at least is that there are more of those reverse-engineered drivers, and they're progressing pretty briskly.

Is it, though?

Posted Oct 18, 2016 16:45 UTC (Tue) by fratti (guest, #105722) [Link] (19 responses)

I really want to believe things are changing for the better, but in my experience, xf86-video-intel in particular is still a buggy mess (partially because everyone has to grab some random commit due to the slow rate of release) and Mesa has some pretty bad things to it (such as about 10 different IRs for shaders). Last I heard, Intel's developers also didn't even have access to the latency/throughput info of the GPU instructions, so the shader optimisations in their compiler are all based on counting instructions.

The kernel-space components of graphics accelerators may be getting better, but as the article itself says, 90% of it runs in userspace.

Is it, though?

Posted Oct 18, 2016 18:01 UTC (Tue) by nybble41 (subscriber, #55106) [Link] (18 responses)

> xf86-video-intel is still a buggy mess (partially because everyone has to grab some random commit due to the slow rate of release)

As I understand it, the current recommendation is to use xf86-video-modesetting rather than xf86-video-intel, at least for the last several generations of Intel graphics chips. I switched over a couple of months ago in an attempt to resolve some annoying visual glitches and haven't experienced any difficulties since.

Is it, though?

Posted Oct 18, 2016 19:20 UTC (Tue) by fratti (guest, #105722) [Link] (14 responses)

Does modesetting tear?

Is it, though?

Posted Oct 18, 2016 22:30 UTC (Tue) by nybble41 (subscriber, #55106) [Link] (10 responses)

> Does modesetting tear?

Not that I've noticed. It has the same VSYNC capability as the other drivers, and shouldn't tear as long as the application/compositor employs double-buffering. IIRC the intel driver had a special option to minimize tearing (for single-buffered, non-composited outputs?) but I didn't have that enabled to begin with.

Is it, though?

Posted Oct 19, 2016 18:22 UTC (Wed) by fratti (guest, #105722) [Link] (9 responses)

I've seen tearing happen with a double-buffering compositor on intel without the "TearFree" option, which disappeared with the "TearFree" option, so I have my doubts as to whether the intel documentation is accurate.

Is it, though?

Posted Oct 19, 2016 20:47 UTC (Wed) by barryascott (subscriber, #80640) [Link] (8 responses)

Double buffering is not the way to fix tearing.

The compositor has to change the display during VSYNC to avoid tearing.

The Intel driver has the right OpenGL support to allow a compositor to be tear-free: a nice primitive that allows you to wait for VSYNC and then draw.

Is it, though?

Posted Oct 20, 2016 4:18 UTC (Thu) by nybble41 (subscriber, #55106) [Link] (7 responses)

> Double buffering is not the way to fix tearing. The compositor has to change the display during VSYNC to avoid tearing.

Sure, but unless your drawing is trivial you're going to need double-buffering in order to finish changing the display between the start of VSYNC and when the new image is scanned out. To avoid limiting your rendering time to a small fraction of the frame time, you draw on a back buffer and swap buffers during the VSYNC. If this is implemented correctly there won't be any tearing.

One could implement double-buffering without synchronizing on VSYNC, but why? It isn't much easier and causes pointless tearing artifacts. One can also avoid tearing without double-buffering, but only if one is willing to leave the GPU mostly idle and only render during the VSYNC interval.

IIRC, the TearFree option in the intel driver essentially was double-buffering the screen (for the internal 2D acceleration path), giving the same effect as a double-buffered OpenGL compositor, or the GLAMOR code used with the modesetting driver to provide 2D acceleration via the 3D drivers.

Is it, though?

Posted Oct 27, 2016 0:31 UTC (Thu) by daenzer (subscriber, #7050) [Link] (6 responses)

Mostly spot on, but glamor doesn't have any double buffering or other tearing prevention functionality itself, nor does the modesetting driver have any TearFree functionality. The only way to reliably avoid tearing with the modesetting driver is via DRI2/3 page flipping. I suspect the reason somebody was seeing tearing with the intel driver but not with modesetting is that the former still doesn't enable DRI3 by default, and the DRI2 mechanism for partial screen updates inherently tears.

(Shameless plug: The xf86-video-amdgpu/ati drivers for AMD/ATI Radeon GPUs enable DRI3 by default with current xserver and have optional TearFree functionality which works on top of glamor or EXA, and in the upcoming releases to go with xserver 1.19 can prevent tearing with any supported display configuration (including rotation and general transformations, and RandR 1.4 style multihead))

Is it, though?

Posted Oct 27, 2016 17:06 UTC (Thu) by Wol (subscriber, #4433) [Link] (1 responses)

The other thing I've picked up on - if you have two displays, they can have different VSYNC timings, but only one could be reported to the video subsystem. So if the screen you're drawing to is the other one, it's impossible to synchronise even if you want to ...

Cheers,
Wol

Is it, though?

Posted Oct 28, 2016 0:54 UTC (Fri) by daenzer (subscriber, #7050) [Link]

That's not an issue per se for the compositor, because DRI page flipping flips the buffers for each display during its respective vertical blank period, so there's no tearing.

Is it, though?

Posted Oct 27, 2016 19:34 UTC (Thu) by nix (subscriber, #2304) [Link]

And it works very nicely. I had my heart in my throat when updating to a glamor-capable X server on my Radeon (BARTS) box, but it worked perfectly, zero problems, despite using things like multihead that are almost guaranteed to have been less tested. And wow is it nippy (and noticeably lower in power consumption too).

Is it, though?

Posted Oct 27, 2016 21:19 UTC (Thu) by flussence (guest, #85566) [Link] (2 responses)

Dumb question here, but this seems like an appropriate time/place to ask:

Do the -ati/-amdgpu drivers have a significant advantage over just using -modesetting? I know what the Intel situation is on Xorg, but it's not obvious here.

Is it, though?

Posted Oct 28, 2016 1:00 UTC (Fri) by daenzer (subscriber, #7050) [Link] (1 responses)

I described an advantage in my grandparent post — the former support TearFree, the latter doesn't.

In general, the former have received considerably more development effort over the last year+, affecting all of features, bug fixes and polish.

For all those and other reasons, we recommend using the specialized DDX drivers for our GPUs.

Is it, though?

Posted Oct 28, 2016 3:14 UTC (Fri) by flussence (guest, #85566) [Link]

>In general, the former have received considerably more development effort over the last year+, affecting all of features, bug fixes and polish.
I've noticed, and it's much appreciated — the -ati driver's been solid as a rock for as long as I can remember. Even when I try out random git versions for fun it's rare that it breaks.

(Meanwhile, in the few hours since that last post I got a distro package update for -intel and had to roll it back due to crashes... I wish I was making this up!)

Is it, though?

Posted Oct 19, 2016 8:06 UTC (Wed) by mgedmin (subscriber, #34497) [Link] (2 responses)

I could reproduce terrible tearing by moving a window on the external screen using the 'intel' driver on Ubuntu 16.04.

I can no longer reproduce that tearing using the 'modesetting' driver on Ubuntu 16.10.

Is it, though?

Posted Oct 19, 2016 8:56 UTC (Wed) by chris.wilson (guest, #42619) [Link] (1 responses)

Which is comparing apples to oranges. The reason you got tearing on -intel was that the compositor chose to use a tearing path, using a method that is not available on -modesetting. Until you get buffer handoff from client to server to compositor and back to the server (or combine the compositor and server into one) and pass the damage along the way from client to display hardware, you will have to choose a compromise.

Is it, though?

Posted Oct 21, 2016 9:23 UTC (Fri) by mgedmin (subscriber, #34497) [Link]

For the record, the compositor was gnome-shell in both instances (version 3.20 -- in the first case from the gnome3-staging PPA for Ubuntu 16.04, in the second case from the main Ubuntu 16.10 archive).

My dual-head layout has screens of different sizes (1366x768 internal LCD, 1280x1024 external LCD) and I have them positioned side-by-side with their bottoms aligned. The tearing I saw while moving a window showed up near the top of the external screen.

I think whatever-system-component-was-responsible-for-VSYNCing was VSYNCing with the internal LCD, which resulted in tearing on the external LCD.

*shrug*

I don't understand any of the subtleties, I'm just happy to have the tearing gone.

Is it, though?

Posted Oct 19, 2016 8:58 UTC (Wed) by chris.wilson (guest, #42619) [Link]

Ymmv, bugzilla has plenty of visual glitches, GPU hangs and crashes for -modesetting.

Please note that for the past couple of years, the primary cause of visual glitches for -intel has been the kernel.

Is it, though?

Posted Oct 19, 2016 18:42 UTC (Wed) by drag (guest, #31333) [Link] (1 responses)

Thanks. I didn't know that xf86-video-modesetting was even a thing.

Is it, though?

Posted Oct 20, 2016 20:04 UTC (Thu) by flussence (guest, #85566) [Link]

You'd be forgiven for not finding it — the modesetting driver comes with Xorg 1.18+ now, not as a standalone driver package.

I think they could've made a little more noise about the “1 driver + Mesa = fully functional modern desktop” aspect… between this and libinput it seems like the biggest advancement of this release cycle, maybe even the year.

Graphics world domination may be closer than it appears

Posted Oct 18, 2016 17:52 UTC (Tue) by mlankhorst (subscriber, #52260) [Link] (2 responses)

Something lost in translation here, nouveau doesn't support the atomic api yet. Tegra uses its own display drivers, and nouveau for 3d acceleration only.

Graphics world domination may be closer than it appears

Posted Oct 19, 2016 5:34 UTC (Wed) by mupuf (subscriber, #86890) [Link] (1 responses)

And I also would like to say that NVIDIA officially maintains the Nouveau driver for Tegra and even released a device with it (Pixel C). This is not something we communicated well-enough, probably!

Graphics world domination may be closer than it appears

Posted Oct 19, 2016 6:00 UTC (Wed) by blackwood (guest, #44174) [Link]

Pixel C was Google's doing, and nvidia got dragged along kicking and screaming. It also still uses closed-source blobs in userspace, making it (in my book at least) not really open source&upstream graphics support. That's why I did not count nvidia as a company supporting upstream graphics.

Graphics world domination may be closer than it appears

Posted Oct 19, 2016 5:57 UTC (Wed) by blackwood (guest, #44174) [Link]

Not sure whether I fumbled this in the presentation or not, but the fbdev subsystem isn't better for low power and at saving memory bandwidth than drm. The manual-upload display stuff I talked about is a property of the hardware, and drm of course supports it. What is new is support in the drm helper library for fbdev backwards compatibility for this (called "defio" in fbdev speak), implemented using the existing drm support for manual-upload panels. This means when you write a driver for such hardware you now only need to write the manual upload code once (for drm core) instead of twice (drm core plus fbdev). This way you get fbdev uabi backwards compatibility pretty much for free when writing a drm driver.

Really the only reason fbdev is still around is that it exists and provides uabi, and uabi sticks around forever ;-)

Graphics world domination may be closer than it appears

Posted Oct 19, 2016 8:20 UTC (Wed) by liam (guest, #84133) [Link]

The explicit/implicit fencing issue reminds me of the two other areas where supporting both models would be a win for users: 1) readiness vs completion and 2) push vs pull
Unlike with the graphics subsystem, we've not made much progress in enabling those paradigms.
I'm not complaining, btw, it just struck me as a curious pattern.

Graphics world domination may be closer than it appears

Posted Oct 19, 2016 9:05 UTC (Wed) by ehiggs (subscriber, #90713) [Link] (4 responses)

The documentation has no link to the underlying repository (that I can find) or any link to contact someone to report errors. This is pretty annoying since most of the links in the table of contents on the left side are broken. e.g. https://dri.freedesktop.org/docs/drm/media/media_uapi.html reports 404 Not Found (as of writing this post).

Graphics world domination may be closer than it appears

Posted Oct 19, 2016 9:59 UTC (Wed) by blackwood (guest, #44174) [Link] (3 responses)

Seems like a bug in our upload scripts. Wrt making kernel docs more approachable overall: I think Jon Corbet is working on tying all the "how to contribute stuff to the kernel" into one section, atm it's still all spread over different places. That should hopefully address your question.

Graphics world domination may be closer than it appears

Posted Oct 19, 2016 10:08 UTC (Wed) by jani (subscriber, #74547) [Link]

https://01.org/linuxgraphics/gfx-docs/drm/ seems to work fine. I think the plan is to get all of this in kernel.org one of these days too.

Graphics world domination may be closer than it appears

Posted Oct 20, 2016 7:52 UTC (Thu) by ehiggs (subscriber, #90713) [Link]

"I think Jon Corbet is working on tying all the "how to contribute stuff to the kernel" into one section, atm it's still all spread over different places."

This should be helpful. Especially if it highlights different ways to contact people for different levels of engagement. It's not appetizing to subscribe to the LKML firehose just to report that the documentation hasn't uploaded correctly. Or that the LKML.org FAQ (https://www.tux.org/lkml/)[1] appears to be missing. As it is, I'm glad you happened to see my comment.

Otherwise, the docs look great!

[1] Linked to from here: https://lkml.org/

Process manual

Posted Oct 20, 2016 13:34 UTC (Thu) by corbet (editor, #1) [Link]

Credit where it's due: Mauro Carvalho Chehab has been doing the work to pull the development process docs together, not me. My role has been limited to bikeshedding the result :)

Graphics world domination may be closer than it appears

Posted Oct 20, 2016 19:43 UTC (Thu) by Lennie (subscriber, #49641) [Link]

His other talk about "Maintainers don't scale" (also links to the playlist with other talks):

https://www.youtube.com/watch?v=gZE5ovQq9g8&index=6&...

Graphics world domination may be closer than it appears

Posted Oct 23, 2016 12:30 UTC (Sun) by swilmet (subscriber, #98424) [Link]

It's nice to see the Linux graphics stack in such good shape. It benefits a wide range of applications: the "traditional" desktop (GNOME, KDE, …), ChromeOS, mobile, embedded, IVI, TVs, … There is hope for the future. But "The year of the Linux desktop" will probably be with ChromeOS, sadly (i.e. not fully open).


Copyright © 2016, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds