Comparing Vulkan Frameworks

Many people have indeed tried to implement a rendering framework on top of Vulkan to reduce the amount of code you have to write. But so many of them merely create a wrapper around the existing Vulkan objects, like wrapping the command buffer in a vk::CommandBuffer::Ptr, while you still have to fill in all the `VkObjCreateInfo`s.

A rendering framework focused on render passes should provide a compact yet descriptive API for creating them, keeping the user out of the mess of managing and binding descriptors and uniform buffers so they can focus on the shaders.

I will compare some of the rendering frameworks available publicly and look at their pros and cons: what level of abstraction they achieved, and, as a user, at what granularity you need to work with them.

TU Wien's Auto-Vk framework

It prepares the user with an application template: you fill in `initialize()`, `update()`, and `render()`. Let's see how it creates a graphics pipeline and its render pass.

mPipeline = avk::context().create_graphics_pipeline_for(
	// ... shaders and fixed-function parameters (abridged), then:
	// just use the main window's renderpass for this pipeline:
	avk::context().main_window()->renderpass_reference());

So this API uses a fixed set of parameters to set up a graphics pipeline: it takes the shaders, the fixed-function state, and a render pass reference. It is actually very close to Vulkan itself; you get some syntactic sugar that creates the Vulkan objects for you, but you are still exposed to them.

// Declaring the render pass attachments (abridged):
avk::context().create_renderpass({
		avk::attachment::declare(avk::format_from_window_color_buffer(avk::context().main_window()), avk::on_load::clear.from_previous_layout(avk::layout::undefined), avk::usage::color(0),      avk::on_store::store),
		avk::attachment::declare(avk::format_from_window_depth_buffer(avk::context().main_window()), avk::on_load::clear.from_previous_layout(avk::layout::undefined), avk::usage::depth_stencil, avk::on_store::dont_care)
	}, avk::context().main_window()->renderpass_reference().subpass_dependencies());

The way it creates a render pass is okay: you define the attachments, then the sub-passes that use them, and you get a default sub-pass setup. For resource creation, you still need to manually create the vertex buffers, materials, and samplers yourself.

So expect to write a good few thousand lines of code to get a complete pipeline.

But I really like some of its APIs. The updater API is interesting:

// Hot-reloading shaders and reacting to swapchain updates:
mUpdater->on(
	avk::swapchain_resized_event(avk::context().main_window()), // in the case of window resizes,
	avk::shader_files_changed_event(mPipeline.as_reference())   // or of changes to the shader files (hot reloading), ...
).update(mPipeline); // ... it will recreate the pipeline.

It has two methods: on(), which registers events, and update(), which does the actual work. There is an additional then_on(), but that seems equivalent to just calling mUpdater->on() twice.


The vkcv framework

The vkcv framework is another framework based on Vulkan-HPP. It does not have an application framework, which means you can write everything in main(). The good part is that vkcv has quite a few additional APIs to manage the scene, camera, and so on. But anyway, let's look at how the render pipeline is created:

vkcv::GraphicsPipelineHandle firstMeshPipeline = core.createGraphicsPipeline(
		{ firstMeshLayout },   // pipeline configuration created earlier
		{ setLayoutHandle });  // descriptor set layout handles

Interestingly, it is much the same: it takes a number of parameters which were created earlier. The hassle, however, lies in creating the individual parameters.

The part of the API I like is the Vulkan feature request:

// Requesting an optional device feature via a lambda (reconstructed, abridged):
features.requireExtensionFeature<vk::PhysicalDevice16BitStorageFeatures>(
	VK_KHR_16BIT_STORAGE_EXTENSION_NAME,
	[](vk::PhysicalDevice16BitStorageFeatures &features) {
		features.setStorageBuffer16BitAccess(true);
	});

// Then in the context:
vkcv::Core core = vkcv::Core::create(applicationName, VK_MAKE_VERSION(0, 0, 1),
	{ vk::QueueFlagBits::eGraphics, vk::QueueFlagBits::eCompute, vk::QueueFlagBits::eTransfer },
	features);

This saves quite a lot of hassle when querying for feature support.


Tauray

Tauray is a research platform from the VGA group at Tampere University. Its position is similar to TU Wien's, though the groups may have different research directions; their paper presents it as targeting heterogeneous hardware.


Tauray's binary includes a runtime shader compiler, so it reads shaders from source and compiles them before creating the pipeline. It utilizes the SPIRV-Reflect library to generate Vulkan's descriptor set layouts automatically. A pity, though, that the shaders are not hot-reloadable.

Data Types


The first thing we see is vkm<T>, the Vulkan object manager. Everything wrapped in a vkm needs to be unique, and the underlying Vulkan object gets destroyed when the destructor runs. You can obtain a raw handle from a vkm, but that handle is not memory-managed. I find this approach interestingly elegant; it avoids shared_ptr for many use cases.


Pipelines

A pipeline refers to a single GPU pipeline, such as rasterization, compute, or ray tracing. But the pipeline does not decide how the rendering is going to use it; that is decided by a stage.


Stages

A stage is a collection of GPU pipelines. The purpose of a stage is to record the command buffers for its pipelines. Interestingly, the recording phase usually happens before any drawing, so at render time the only thing a stage does is submit command buffers.

For example, the SVGF stage takes a gbuffer_target, which means the target needs to be created before initializing the pass.


Renderers

On top of stages, Tauray also has a concept of a renderer, which seems to be the place where the stages and their related configurations are hard-coded.


Tauray has some interesting designs, but it is also fairly rigid. The renderer is pretty much hard-coded and the stages have hard dependencies on each other. I also find many unnecessary copy-constructor calls everywhere. Overall, it is a research platform with some engineering effort behind it, but not a friendly one to use.
