Working with libdrm, buffer allocation

In the first libdrm blog, we went through the repainting loop. Now we move on to the topic of framebuffers. A framebuffer is a piece of memory (in main memory or on the GPU) that holds the pixels of a frame. It is like a canvas: it represents what you finally see on screen. The framebuffer is already a familiar concept to rendering programmers; in OpenGL we have framebuffer 0 (the default framebuffer) for presenting to the surface, plus additional framebuffers for roles like the G-buffer and post-processing. In GPU programming, the framebuffer is also a broader concept, an “abstraction of a collection of attachments”, and the underlying attachments correspond to the actual pieces of memory we talk about here.
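To make this concrete, here is a minimal sketch, assuming a /dev/dri/card0 node and a hard-coded 1920x1080 size (both placeholders), of how an underlying buffer could be allocated with libgbm and wrapped into a DRM framebuffer with drmModeAddFB2. The gbm buffer object is the actual piece of memory; the DRM framebuffer ID is the handle the display hardware scans out:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <gbm.h>
    #include <drm_fourcc.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        /* open the primary GPU node; the path is an assumption, it may differ */
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        if (fd < 0)
            return 1;

        /* a gbm buffer object is the actual piece of memory backing the frame */
        struct gbm_device *gbm = gbm_create_device(fd);
        struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
        if (!bo)
            return 1;

        /* wrap the buffer into a DRM framebuffer the display engine can scan out */
        uint32_t handles[4] = { gbm_bo_get_handle(bo).u32 };
        uint32_t pitches[4] = { gbm_bo_get_stride(bo) };
        uint32_t offsets[4] = { 0 };
        uint32_t fb_id = 0;
        if (drmModeAddFB2(fd, 1920, 1080, DRM_FORMAT_XRGB8888,
                          handles, pitches, offsets, &fb_id, 0) == 0)
            printf("created framebuffer %u\n", fb_id);

        drmModeRmFB(fd, fb_id);
        gbm_bo_destroy(bo);
        gbm_device_destroy(gbm);
        return 0;
    }

In a real backend the size would of course come from the connector's current mode rather than being hard-coded.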

Working with libdrm, repainting timeline

With the taiwins project's X11/Wayland backend rigging finished, I am now fully on the libdrm backend development, dealing with the hardware directly. This backend takes 200 percent of my energy to tame its complexity.

Like other backends, the libdrm backend needs to provide two kinds of resources: input and output. For backends like the nested Wayland backend or the X11 backend, the output devices are windows we create ourselves (or that the user creates), so we can create and destroy them as we please. For libdrm, the outputs are provided by the OS. An output, in libdrm's terminology, is called a connector. We query the connectors' state with the drmModeGet* functions, then set the hardware state with drmModeSet* or the new atomic API. It may sound simple, but displaying images on the screen through libdrm actually takes quite a few components working together: the drmMode* routines for kernel mode setting, libgbm for framebuffer allocation, and OpenGL or Vulkan for rendering. If done right, you get tear-free images on your screen at a constant refresh rate; should anything go wrong, you probably end up with a black screen.
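To sketch the query side (again assuming a /dev/dri/card0 node), drmModeGetResources lists the connector IDs and drmModeGetConnector tells us whether a display is plugged in and which modes it offers:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        /* the device path is an assumption; the card index can vary */
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        if (fd < 0)
            return 1;

        drmModeRes *res = drmModeGetResources(fd);
        if (!res)
            return 1;

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            /* a connected connector with at least one mode is a usable output */
            if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0)
                printf("connector %u: %dx%d @ %u Hz\n", conn->connector_id,
                       conn->modes[0].hdisplay, conn->modes[0].vdisplay,
                       conn->modes[0].vrefresh);
            drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        close(fd);
        return 0;
    }

A connected connector with at least one mode is what the backend would then wire to a CRTC and a framebuffer, either through drmModeSetCrtc or the atomic API.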