In the first blog post about libdrm, we went through the
repainting loop using libdrm. Now we move on to the topic of
framebuffers. A framebuffer is a piece of memory (in main memory or on the
GPU) that backs every repainted frame. It is like a canvas: it represents what you
finally see on screen. The framebuffer is already a familiar concept to rendering
programmers; in OpenGL we have the default framebuffer (object 0) for presenting to the
surface, plus additional framebuffers for roles like the G-buffer and
post-processing. In GPU programming, the framebuffer is also a broader concept,
an "abstraction of a collection of attachments", where the underlying attachments
correspond to the actual pieces of memory we talk about here.
## Dumb buffer
Back to our topic: we have two options for allocating framebuffers with
libdrm. The first is the dumb buffer, a slow, linear buffer in main
memory. We create one with `drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &req)`;
the `req` argument specifies parameters like the width, height, and
bits per pixel, and the kernel fills in the pitch, handle, and size. To register
a framebuffer for that handle, libdrm provides the
function `drmModeAddFB(fd, width, height, depth, ...)`. Then we map the
buffer through the good old `mmap` function (after requesting a mapping offset
with `DRM_IOCTL_MODE_MAP_DUMB`) and do all kinds of magic on it, very
1990s-ish. Finally, having the fb handle, we use either
`drmModeSetCrtc` or `drmModePageFlip` to present that buffer on screen.
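Putting those steps together, a minimal sketch might look like the following. The helper name is mine, error handling is trimmed to early returns, and the connector/CRTC discovery from the previous post is assumed to have already happened:

```c
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int create_dumb_fb(int fd, uint32_t width, uint32_t height,
                   uint32_t *out_fb, uint8_t **out_map)
{
    /* 1. ask the kernel to allocate a linear buffer in main memory */
    struct drm_mode_create_dumb create = {
        .width = width, .height = height, .bpp = 32,
    };
    if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0)
        return -1;

    /* 2. wrap the returned handle into a framebuffer object */
    uint32_t fb;
    if (drmModeAddFB(fd, width, height, 24, 32, create.pitch,
                     create.handle, &fb) != 0)
        return -1;

    /* 3. request an mmap offset for the dumb buffer ... */
    struct drm_mode_map_dumb map = { .handle = create.handle };
    if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) < 0)
        return -1;

    /* 4. ... and map it, 1990s style */
    uint8_t *pixels = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, map.offset);
    if (pixels == MAP_FAILED)
        return -1;

    memset(pixels, 0x80, create.size);  /* paint the whole canvas gray */
    *out_fb = fb;
    *out_map = pixels;
    return 0;
}
```

From here, `drmModeSetCrtc(fd, crtc_id, fb, 0, 0, &conn_id, 1, &mode)` puts the buffer on screen, or `drmModePageFlip` schedules it for the next vblank.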
## GBM buffers
The second method is through GEM buffers, which usually live in GPU
memory and support compression, modifiers and, of course, GPU commands. In the
open source world, the Mesa project provides a generic library, libgbm, for
vendor-agnostic allocation; we get a framebuffer handle from libgbm through the
`gbm_bo` object. libgbm is rather flexible about how we obtain `gbm_bo`s. We can
either go with the traditional method, allocating a `gbm_surface` and getting the
buffers through `gbm_surface_lock_front_buffer`, or we can manually create them
with `gbm_bo_create[_with_modifiers]`. The first approach mirrors the OpenGL
rendering API: a `gbm_surface` maps to an `EGLSurface`, and
`gbm_surface_lock_front_buffer` can only be called after `eglSwapBuffers`. The
disadvantage is obvious as well: it is tied to the OpenGL pipeline, and we have no
control over how many `gbm_bo`s get allocated. In Vulkan there is no mapping of
`VkSurfaceKHR` to GBM; we will go through the details of that later.
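For the manual path, a hedged sketch of allocating a scanout-capable `gbm_bo` and registering it with KMS might look like this. The helper name and the choice of `XRGB8888` are illustrative assumptions; `drmModeAddFB2` is the format-aware variant of `drmModeAddFB`:

```c
#include <stdint.h>
#include <gbm.h>
#include <xf86drmMode.h>

int create_gbm_fb(int fd, uint32_t width, uint32_t height, uint32_t *out_fb)
{
    struct gbm_device *gbm = gbm_create_device(fd);
    if (!gbm)
        return -1;

    /* SCANOUT: the display controller must be able to read this buffer;
     * RENDERING: we also want to draw into it with the GPU. */
    struct gbm_bo *bo = gbm_bo_create(gbm, width, height,
                                      GBM_FORMAT_XRGB8888,
                                      GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
    if (!bo)
        return -1;

    /* single-plane format: only the first handle/stride slot is used */
    uint32_t handles[4] = { gbm_bo_get_handle(bo).u32 };
    uint32_t strides[4] = { gbm_bo_get_stride(bo) };
    uint32_t offsets[4] = { 0 };

    return drmModeAddFB2(fd, width, height, GBM_FORMAT_XRGB8888,
                         handles, strides, offsets, out_fb, 0);
}
```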
## EGLStream
`EGLStream` is Nvidia's proprietary approach to supporting libdrm. Since
they apparently don't use GEM buffers, they came up with this `EGLStream` stack
for working with their driver blob. It is not simply a buffer allocation API, as
it also introduces `EGLDeviceEXT` and `EGLOutputLayerEXT`. Using `EGLStream`,
there are fixed steps to follow:
- discovering an `EGLDeviceEXT` which supports the DRM flags,
- getting an `EGLDisplay` from the opened device using `EGL_DRM_MASTER_FD_EXT`,
- creating an `EGLStreamKHR`,
- getting an output layer (plane) using `eglGetOutputLayersEXT`,
- attaching the plane as the consumer using `eglStreamConsumerOutputEXT`,
- creating an `EGLSurface` with `eglCreateStreamProducerSurfaceKHR` as the producer,
- driving page flipping using `eglStreamConsumerAcquireAttribNV`.
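The fixed steps above can be sketched roughly as follows. The entry points are real EGL extension functions that must be fetched through `eglGetProcAddress`, but the attribute lists are simplified and the device/layer selection (first device, single layer) is an assumption:

```c
#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

/* fetch an extension entry point by its own name */
#define LOAD(type, fn) type fn = (type)eglGetProcAddress(#fn)

EGLSurface setup_stream(int drm_fd, uint32_t crtc_id, EGLConfig cfg,
                        EGLint width, EGLint height, EGLDisplay *out_dpy)
{
    LOAD(PFNEGLQUERYDEVICESEXTPROC, eglQueryDevicesEXT);
    LOAD(PFNEGLGETPLATFORMDISPLAYEXTPROC, eglGetPlatformDisplayEXT);
    LOAD(PFNEGLCREATESTREAMKHRPROC, eglCreateStreamKHR);
    LOAD(PFNEGLGETOUTPUTLAYERSEXTPROC, eglGetOutputLayersEXT);
    LOAD(PFNEGLSTREAMCONSUMEROUTPUTEXTPROC, eglStreamConsumerOutputEXT);
    LOAD(PFNEGLCREATESTREAMPRODUCERSURFACEKHRPROC,
         eglCreateStreamProducerSurfaceKHR);

    /* 1. discover a device (a real program would also check for
     *    EGL_EXT_device_drm in the device extension string) */
    EGLDeviceEXT dev;
    EGLint n = 0;
    eglQueryDevicesEXT(1, &dev, &n);

    /* 2. turn the already-opened DRM master fd into an EGLDisplay */
    EGLint dpy_attr[] = { EGL_DRM_MASTER_FD_EXT, drm_fd, EGL_NONE };
    EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                              dev, dpy_attr);
    eglInitialize(dpy, NULL, NULL);

    /* 3. create the stream that ties producer and consumer together */
    EGLStreamKHR stream = eglCreateStreamKHR(dpy, NULL);

    /* 4. find the output layer (plane) backed by our CRTC */
    EGLAttrib layer_attr[] = { EGL_DRM_CRTC_EXT, (EGLAttrib)crtc_id, EGL_NONE };
    EGLOutputLayerEXT layer;
    EGLint n_layers = 0;
    eglGetOutputLayersEXT(dpy, layer_attr, &layer, 1, &n_layers);

    /* 5. the plane consumes frames from the stream */
    eglStreamConsumerOutputEXT(dpy, stream, layer);

    /* 6. the surface we render into is the producer end */
    EGLint surf_attr[] = { EGL_WIDTH, width, EGL_HEIGHT, height, EGL_NONE };
    EGLSurface surf =
        eglCreateStreamProducerSurfaceKHR(dpy, cfg, stream, surf_attr);

    *out_dpy = dpy;
    return surf;
    /* 7. per frame: eglSwapBuffers() on the producer side, then
     *    eglStreamConsumerAcquireAttribNV() to drive the page flip. */
}
```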
## Vulkan workflow
Vulkan has its own path to supporting direct compositing. On the Vulkan side, there
is an extension called `VK_KHR_display` which, very much like the `EGLStream`
extension, tries to do everything inside Vulkan. You can create a `VkSurfaceKHR`
on a `VkDisplayKHR` through `vkCreateDisplayPlaneSurfaceKHR`; once you have the
surface, everything else looks like a typical Vulkan application as
well. The downside is that you lose access to direct modesetting controls
like `drmModeSetCrtc` and `drmModePageFlip`, and there is no plane assignment. If
you were previously using libdrm, it also means you need to
throw that code out of the window. It is a viable approach if you decide to go with
Vulkan and only Vulkan; some people prefer this approach, but not everyone. For
people wanting an alternative (Vulkan + KMS WSI), the journal of the discussion is
here. The problem is that, unlike OpenGL, Vulkan is an "explicit" API. As
tomek-brcm wrote:
> Currently GLES/EGL implementation of Wayland compositor and clients is based on a silent assumption that driver has an implicit cross-process synchronisation. This is not GLES or EGL requirement, just an unwritten Wayland assumption. Vulkan is all about explicit synchronisation that is responsibility of an application but it doesn't provide any cross-process synchronisation primitives. This is, to my untrained eye, a blocker issue.
We get the client buffer on `wl_surface.commit`, but there is no guarantee
whatsoever that the client has finished drawing. If a Wayland server had to block
on a fence every time a client commits a buffer, it would probably not be very
usable. Luckily, the issue was resolved through the `VK_KHR_external_*` extensions
when Vulkan reached 1.1. They enable the implementation of all the Vulkan WSI
mechanisms/extensions on top of core Vulkan plus the OS-specific parts of external
objects.
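As an illustration of those OS-specific parts, a sketch of exporting a rendering-done fence as a Linux sync file descriptor via `VK_KHR_external_fence_fd` might look like this. The helper name is hypothetical, the device is assumed to have the extension enabled, and the fence is assumed to have been created with `VkExportFenceCreateInfo` requesting the sync-fd handle type and then submitted:

```c
#include <vulkan/vulkan.h>

int export_render_fence(VkDevice dev, VkFence fence)
{
    /* extension entry point, loaded at runtime */
    PFN_vkGetFenceFdKHR pfnGetFenceFd =
        (PFN_vkGetFenceFdKHR)vkGetDeviceProcAddr(dev, "vkGetFenceFdKHR");
    if (!pfnGetFenceFd)
        return -1;

    VkFenceGetFdInfoKHR info = {
        .sType = VK_STRUCTURE_TYPE_FENCE_GET_FD_INFO_KHR,
        .fence = fence,
        .handleType = VK_EXTERNAL_FENCE_HANDLE_TYPE_SYNC_FD_BIT,
    };
    int fd = -1;
    if (pfnGetFenceFd(dev, &info, &fd) != VK_SUCCESS)
        return -1;
    return fd; /* poll()-able; signals once the GPU work completes */
}
```

A sync fd like this is exactly what crosses process boundaries: a compositor can wait on it, or hand it to an atomic KMS commit as an `IN_FENCE_FD` plane property, instead of blocking on every `wl_surface.commit`.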