### Comments on "Projective Geometric Algebra Done Right"

This is a response to the criticism, in a recent post called "Projective Geometric Algebra Done Right," of work done in the 2019 SIGGRAPH presentation "Geometric Algebra for Graphics Engineers" (I did not personally create the SIGGRAPH presentation, but I do subscribe to its conclusions). I mean no ill will toward the author (I even have a couple of his books!), but I felt compelled to write this response because I believe the view espoused actually prevents a more intuitive grasp of the concepts involved. Central to the debate is how geometers should represent projective space algebraically: namely, what representations are possible, and what are their tradeoffs? I can't cover the entirety of Geometric Algebra in a single blog post (that will need to be a longer-form writeup), so be advised that this is written for people who have already had some brushes with GA. All quoted text in this article is lifted verbatim from the source material as of this writing. So without further ado, let's dig in.

### Render Graph Optimization Scribbles

When I implemented my own render graph, I needed to make a number of decisions about how to proceed, and wanted to record the various considerations that go into authoring such a library. By way of background, a render graph is a directed acyclic graph of nodes, each of which may consume a set of resources and produce a set of resources. Edges in the graph denote an execution dependency (the child node should happen after the parent), with the caveat that the dependency need not be a full pipeline barrier. For example, a parent node may produce results A and B, but a child node may depend only on A. Another caveat is that dependencies may be relaxed to allow some overlap. For example, a parent node may produce some result in its fragment stage which is consumed in the fragment stage of a child node. In such a case, the child node can execute its vertex stage while the parent node is still executing its fragment stage.
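To make the structure above concrete, here is a minimal sketch of per-resource dependency tracking. All names (`Pass`, `RenderGraph`, `build_edges`) are hypothetical and illustrative, not from any real engine or the author's library; the point is only that a child depends on the specific resources it reads, not on everything its parent produces.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// A pass declares the resources it reads and writes. An edge from pass A to
// pass B exists when B reads a resource that A writes. Because the edge is
// derived per resource, a barrier can later be scoped to just that resource
// rather than a full pipeline barrier.
struct Pass {
    std::string name;
    std::vector<std::string> reads;
    std::vector<std::string> writes;
};

struct RenderGraph {
    std::vector<Pass> passes;  // assumed pre-sorted in submission order

    // For each pass, collect the indices of its producers: the passes whose
    // writes intersect this pass's reads. A parent producing A and B induces
    // an edge to a child reading only A, but the dependency covers only A.
    std::vector<std::vector<size_t>> build_edges() const {
        std::unordered_map<std::string, size_t> last_writer;
        std::vector<std::vector<size_t>> deps(passes.size());
        for (size_t i = 0; i < passes.size(); ++i) {
            for (const auto& r : passes[i].reads) {
                auto it = last_writer.find(r);
                if (it != last_writer.end()) deps[i].push_back(it->second);
            }
            for (const auto& w : passes[i].writes) last_writer[w] = i;
        }
        return deps;
    }
};
```

For example, a "gbuffer" pass writing A and B followed by a "lighting" pass reading only A yields a single edge (lighting depends on gbuffer), and the recorded dependency involves only A, leaving B free of synchronization with that child.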

### Optimizing C++ by Avoiding Moves

This is a quick post, but I wanted to document an evolution in my thinking with respect to move operators and move construction. Back when C++11 was released and we were getting used to the concepts, C++ moves were groundbreaking in their ability to greatly accelerate STL containers, which were often forced to invoke copy constructors wholesale due to reallocation (e.g. a std::vector grows in size and copies N elements as a result). A move constructor allowed the programmer to create a "shallow copy," so to speak, which is much faster than the (presumably) deep default copy. Ergo, to suggest that avoiding moves entirely might be a performance win sounds somewhat paradoxical. Of course, it isn't without its caveats, but for me, it's been well worth it to go all in on possibly never writing a move constructor again.

### Vulkan Synchronization Primer - Part II

This is part II (and possibly the final part) of this series, the Vulkan Synchronization Primer. For the first part, click here.