When I first started graphics programming, I thought about pixels and scanlines first, and later graduated to
happily shading 3D geometry via vertex and fragment shaders (circa the OpenGL 2.0 and DirectX 9 days). It didn't take
us long, however, to grow into a veritable zoo of available techniques. You have forward and deferred, variants of
each (tiled, tiled deferred, clustered, clustered deferred), sub-variants like deferred texturing and visibility buffers, and the list
goes on. For graphics programmers who didn't have the benefit of learning these various pipelines as they emerged,
this article is for you. Especially with yet more tools in our toolbox (mesh/amplification shaders, hardware
raytracing, compute capabilities, etc.), it's important to understand the underlying principles before trying
to grok the hyper-modern AAA pipeline.
C++20 ✨coroutines✨ are here! Having spent some 20-odd hours with the feature, I am by no means an expert,
but I wanted to jot down my initial impressions and provide some pointers for those looking to get their feet wet and
try their hand at implementing a coroutine-powered async framework themselves. Spoiler alert: for engineers that
need to parallelize their code, in particular, engineers used to needing to "jobify" CPU work using lambdas or
function objects, I think coroutines are a godsend for writing expressive code. I have no doubt that they will
be a staple in future AAA game engines, simulation software, and more. What follows below is a pretty
abbreviated tour of coroutines, as I really don't think you can truly "grok" them without writing a lot of
code yourself, stepping through with a debugger, littering your code with dozens of
printfs, etc. However,
I hope that this post both motivates the use of C++20 coroutines, and also demonstrates that once you
do understand the language facilities for coroutines more, you can do a surprising amount with not a lot of code.
When I implemented my own render graph, I needed to make a number of decisions along the way, and wanted to
record the various considerations that go into authoring such a library.
By way of background, a render graph is a directed acyclic graph of nodes, each of which may consume one set of
resources and produce another.
Edges in the graph denote an execution dependency (the child node should happen after the parent), with the caveat
that the dependency need not be a full pipeline barrier. For example, a parent node may produce results A and B,
but a child node may only depend on A.
Another caveat is that the dependencies may be relaxed to allow some overlapping.
For example, a parent node may produce some result in its fragment stage which is consumed in the fragment stage of a child node.
In such a case, the child node can execute its vertex stage while the parent node is still executing its fragment stage.
This is part II (and possibly the final part) on this series titled the Vulkan Synchronization Primer. For the first
part, click here.
The intent of this post is to provide a mental model for understanding the various synchronization
nouns, verbs, and adjectives Vulkan offers. In particular, after reading this series, hopefully, you’ll
have a good understanding of what problems exist, when you should use which synchronization feature, and
what is likely to perform better or worse. There are no real prerequisites to reading this, except that
you’ve at least encountered a few barriers in tutorial or sample code. I would guess that many readers
might have also tried to read the standard on synchronization (with varying degrees of success).
This is a collection of ideas I’ve developed over the years that have resulted in higher
quality and more ergonomic code. In this article, I’m going to say the caveat once (right now)
that you should always code and architect for your particular workflow, and these ideas may or may not
apply. Henceforth, I'm going to be prescriptive about what I think is a good set of patterns,
and do my best to provide the rationale. I'm not going to talk about actual data structures
themselves, but instead about design principles and coding practices that I think apply to
all data structures as they relate to C++. In the code examples, pretend I applied
noexcept and every other aspect of the attribute and modifier zoo properly
(omitted for brevity).
Earlier this year (February 2018), I sent an email to the ISO C++ SG13 study group explaining why I felt the C++ graphics proposal was, in short, not a good idea. You're welcome to read it if you want,
but this post is an attempt at presenting a more complete and better-organized argument.
This blog post is a meta post on the general act of going through the motions of learning Vulkan, and outlines what is hopefully an effective strategy for newer practitioners. I'll do my best to outline the major pitfalls I encountered on my own, point out where I recommend spending the bulk of your time, and sketch a rough "timeline." For people already familiar with OpenGL and DirectX, I hope to also explain in plain terms what functionality you were previously relying on the driver for that you are now responsible for (and what that means). I'm not going to try to explain how to solve each of the problems you will encounter, as this post would get unbearably long without serving any real purpose. Instead, I will endeavor to provide links to good resources already in the community and explain how to read and utilize them. This blog post is meant to be read (or skimmed) once, and then bookmarked for reference as you proceed on your journey. If there are resources you think I missed that may be worth including, feel free to tweet me (Twitter link at the bottom of this page)! So without further ado…