My Journey Self-Learning Computer Graphics Programming

I tried to teach myself computer graphics programming. Along the way, I learned many valuable lessons. This post documents the path I took, and may help guide others who want to learn.

From a high-level point of view, there are three main steps in my journey.

  1. Learning OpenGL
  2. Learning Rust
  3. Learning Vulkan

This is not to say it’s the only way to learn computer graphics, but it’s the path I took. For example, initially I didn’t even consider using Rust, but it ended up being quite a good decision later on.

Part 1: Learning OpenGL by rendering a Sierpinski triangle

The first step in understanding computer graphics was to learn OpenGL.

While working through the Learn OpenGL guide, I came across the following images in the comment section.

These are called Sierpinski triangles, and I thought about how I could recreate them. Clearly, some sort of recursive algorithm was at work. To go from one iteration to the next, it felt like I needed to replace every triangle with a triforce version of itself.

Basically, I needed to use the midpoint of each side as the vertices of a new triangle. Each midpoint is easy to calculate: the midpoint of points a and b is just ((a.x + b.x)/2, (a.y + b.y)/2, (a.z + b.z)/2). Now I just needed to write the algorithm in OpenGL.

But first, how does OpenGL work in a nutshell?

The process of using OpenGL can be summarized as follows:

First, you create an array of 3D vertex data that will later be used to render a bunch of triangles. That data is sent to the GPU and run through a pipeline of customizable programs called “shaders”. The shaders determine how the input vertex data is translated into the positions and colors of the pixels on your 2D screen.
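To make that concrete, here is a minimal sketch of the two pieces you supply. The shader sources are illustrative GLSL, embedded here as Rust string constants; in a real program you would hand the vertex array and these sources to an OpenGL wrapper crate to compile and run.

```rust
// The vertex data: three (x, y, z) positions describing one triangle.
const VERTICES: [f32; 9] = [
    -0.5, -0.5, 0.0, // bottom left
     0.5, -0.5, 0.0, // bottom right
     0.0,  0.5, 0.0, // top
];

// A minimal vertex shader: passes each 3D position through unchanged.
const VERTEX_SHADER: &str = r#"
    #version 330 core
    layout (location = 0) in vec3 pos;
    void main() { gl_Position = vec4(pos, 1.0); }
"#;

// A minimal fragment shader: paints every covered pixel a fixed color.
const FRAGMENT_SHADER: &str = r#"
    #version 330 core
    out vec4 color;
    void main() { color = vec4(1.0, 0.5, 0.2, 1.0); }
"#;
```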

The two main levers developers can pull are:

  1. The vertex data sent in
  2. The algorithm in the shaders

In this case, I focused on the vertex data. Although it’s theoretically possible to render a Sierpinski triangle using only shaders, it would be fairly inefficient at scale. I’m sure somebody on ShaderToy has done it, though.

Defining the recursive function

The function needs to take in the three corners of the original triangle, and the number of iterations we want to perform. It needs to output an array containing the mesh of triangles we want to render.

I set up some classes to simplify the code. There’s a Point class (with members x, y, and z) and a Vertices class that serves as an interface for safely manipulating the final vertex data. A helper function, addTriangle, takes care of adding a single triangle to the mesh.

Here’s the code for the function:

void addSierpinski(Point a, Point b, Point c, int level, Vertices& vertices) {
    if (level > 0) {
        // The midpoint of each side becomes a corner of the sub-triangles
        Point m0, m1, m2;
        m0.x = (a.x + b.x) / 2.0; m0.y = (a.y + b.y) / 2.0; m0.z = (a.z + b.z) / 2.0;
        m1.x = (b.x + c.x) / 2.0; m1.y = (b.y + c.y) / 2.0; m1.z = (b.z + c.z) / 2.0;
        m2.x = (c.x + a.x) / 2.0; m2.y = (c.y + a.y) / 2.0; m2.z = (c.z + a.z) / 2.0;
        addSierpinski(a, m0, m2, level - 1, vertices);  // recurse into the three
        addSierpinski(m0, b, m1, level - 1, vertices);  // corner triangles; the
        addSierpinski(m2, m1, c, level - 1, vertices);  // center is left empty
    } else {
        addTriangle(a, b, c, vertices);  // base case: emit one triangle
    }
}
We then forward the mesh into our shaders, and voila.

The Final Result

Refactoring the function to work in 3d is a good exercise.

Following the OpenGL tutorial was the first step in my journey. But the next thing I did was to learn a whole new programming language: Rust.

Part 2: Why I chose to learn computer graphics in Rust

Up to this point, I had been doing everything in C++. The reason I hated the language so much is simple: build systems.

CMake is single-handedly the reason I turned away from C++. It’s so confusing. And C++ header files weren’t easy either. Building anything more than a single-file project required hours of research into how to set up the build system. I hated it.

After wasting a few hours one day, I decided I needed to try something else. So I opened up the page for the Rust port of the Learn OpenGL guide.

The computer graphics ideas were exactly the same; the difference was the build system. Rust installations come with Cargo, which offered a flow that was orders of magnitude easier. And because dependency management, tests, and documentation are available by default, they’re widely used (as opposed to C++, where bringing in a dependency can be a nightmare).
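As a taste of how little ceremony is involved: a new project is one `cargo new` away, and its entire build configuration fits in a short Cargo.toml like this (the project name and crate version below are illustrative):

```toml
[package]
name = "learn-opengl-rs"
version = "0.1.0"
edition = "2021"

[dependencies]
glium = "0.34"  # a safe OpenGL wrapper; `cargo build` fetches it automatically
```

Compare that to writing a CMakeLists.txt and manually tracking down headers and link flags for every dependency.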

In fact, looking back at it, Rust has many significant advantages over C++, and I’m glad I switched. Beyond the Cargo build system, the two other most significant benefits are memory safety and helpful compiler messages.

All the fundamental graphics APIs are written in C and C++, so people write Rust wrappers (glium, luminance, ash, vulkano, gfx-hal + rendy) that make life nice and safe to varying degrees.

Learning computer graphics in Rust was a fantastic decision, and I encourage others to do the same. All of my most recent projects have been in Rust. I am a convert.
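To give a flavor of the switch, here is roughly how the addSierpinski function from Part 1 comes out in Rust (a sketch; the Point type and the flat vertex list are my own simplifications):

```rust
#[derive(Clone, Copy)]
struct Point { x: f32, y: f32, z: f32 }

// Midpoint of two points: average each coordinate.
fn midpoint(a: Point, b: Point) -> Point {
    Point { x: (a.x + b.x) / 2.0, y: (a.y + b.y) / 2.0, z: (a.z + b.z) / 2.0 }
}

// Same recursion as the C++ version: split each triangle into three
// corner triangles until `level` reaches zero, then emit the triangle.
fn add_sierpinski(a: Point, b: Point, c: Point, level: u32, vertices: &mut Vec<Point>) {
    if level > 0 {
        let (m0, m1, m2) = (midpoint(a, b), midpoint(b, c), midpoint(c, a));
        add_sierpinski(a, m0, m2, level - 1, vertices);
        add_sierpinski(m0, b, m1, level - 1, vertices);
        add_sierpinski(m2, m1, c, level - 1, vertices);
    } else {
        vertices.extend_from_slice(&[a, b, c]); // base case: one triangle
    }
}
```

Each level of recursion triples the triangle count, so after n levels the mesh holds 3^n triangles; level 5 already produces 243 triangles (729 vertices).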

Part 3: Learning Vulkan by diving into the specs

The next thing I did was start a major fluid simulation project in Rust. I started learning Vulkano, a safe Rust wrapper around the Vulkan graphics API. I debated whether sticking with OpenGL would have been a better choice, but went with Vulkan in the end because it better reflects the process of running programs on modern graphics hardware.

I believe learning OpenGL is not the best approach if you wish to learn graphics programming from first principles. It doesn’t really explain what’s going on; it just lets you get things done by example. And honestly, I think it’s a good thing the tutorials don’t explain the background! OpenGL is inherently confusing because it tries to support all the graphics programming paradigms of the last two decades. There are ten ways to do the same thing, and when a feature isn’t supported, it either silently falls back to software emulation or, worse, just flags an error and refuses to work.

Vulkan, on the other hand, is extremely verbose, and everything you do has a purpose. The more the complexity of your program rises, the more valuable Vulkan becomes, because Vulkan is more consistent. However, this extreme explicitness comes at a cost: a massive learning curve, and everything must be set up from scratch.

In my experience as a beginner going through the Rust version of vulkan-tutorial, whenever I encountered a problem, my first reaction was to turn to Reddit or Stack Overflow for an explanation. I assumed the actual specification and the vulkano documentation would surely be arcane, filled with accurate but unhelpful explanations and legal jargon. For example, at one point I was very confused about the role render passes play in rendering vertices onto the screen, and no suitable answer was to be found on Reddit or Stack Overflow.

The reason questions like these can’t be found on Reddit or Stack Overflow is that they have already been answered, in the Vulkan spec. The Vulkan spec is a surprisingly good computer graphics guide; it is in fact very approachable and helpful. I’ve trained myself to turn to the spec much more often when I don’t understand a feature, and I find the glossary at the end particularly useful for quick recaps. My methodology for understanding a concept is as follows:

  1. Read the concept’s main explanation (found via the table of contents)
  2. Highlight and search all words in the explanation I don’t understand
  3. Rinse and repeat until I’ve reduced the concept to its basics (recursive)
  4. Slowly start building everything back up again from the ground up

Let me give an example for the render pass.

Iteration 1:

  • A render pass represents a collection of attachments, subpasses, and dependencies between the subpasses, and describes how the attachments are used over the course of the subpasses.

Iteration 2:

  • Attachment (Render Pass): A zero-based integer index name used in render pass creation to refer to a framebuffer attachment that is accessed by one or more subpasses. The index also refers to an attachment description which includes information about the properties of the image view that will later be attached.
  • A subpass represents a phase of rendering that reads and writes a subset of the attachments in a render pass. Rendering commands are recorded into particular subpasses of a render pass instance.

Iteration 3:

  • Framebuffer: A collection of image views and a set of dimensions that, in conjunction with a render pass, define the inputs and outputs used by drawing commands.
  • Image View: An object that represents an image subresource range of a specific image, and state that controls how the contents are interpreted.

Iteration 4:

  • Image subresource range: A set of image subresources that are contiguous mipmap levels and layers
  • Image subresource: A specific mipmap level and layer of an image
  • Image: A resource that represents a multi-dimensional formatted interpretation of device memory

Perfect! The concept has been fully reduced to components I believe I understand. For reference, this is a mipmap:

Now I can slowly work my way back up, composing the statements together and making sure I fully, intuitively understand each component.
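One way I check my understanding while composing things back up is to restate the hierarchy as data. Here is a toy Rust sketch of the spec’s vocabulary (illustrative types of my own, not the real Vulkan or vulkano API):

```rust
// Illustrative only: toy types restating the spec's definitions, bottom-up.

/// An image subresource: one specific mipmap level and array layer of an image.
struct ImageSubresource { mip_level: u32, array_layer: u32 }

/// An image view: a contiguous range of subresources of one image, plus
/// state controlling how their contents are interpreted.
struct ImageView { base_mip_level: u32, level_count: u32, layer_count: u32 }

/// A framebuffer: image views and a set of dimensions that, together with
/// a render pass, define the inputs and outputs of drawing commands.
struct Framebuffer { attachments: Vec<ImageView>, width: u32, height: u32 }

/// A subpass: one phase of rendering that reads and writes a subset of the
/// render pass's attachments, referring to them by zero-based index.
struct Subpass { color_attachments: Vec<usize> }

/// A render pass: attachment descriptions, subpasses, and (elided here)
/// the dependencies between subpasses.
struct RenderPass { attachment_count: usize, subpasses: Vec<Subpass> }
```

Seeing the definitions as nested data made it click for me that a render pass never owns images; it only describes, by index, how attachments supplied later through a framebuffer will be used.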

In fact, another technique I use to seal the deal and fully cement my understanding is to pass the explanation on to somebody else (usually my brother or dad, if they’re around).

Anyways, that concludes the story of how I learned graphics programming. As I mentioned earlier in the post, I started working on, and eventually finished, a large wrap-up project: an SPH (smoothed-particle hydrodynamics) fluid simulation. Here’s what that looked like:

Thanks for reading this computer graphics guide! I’m Davide, a 19-year-old self-learner who runs The Feynman Mafia, exploring how learning by explaining can be used to teach yourself any topic.

If you want to follow along on my journey, you can join my newsletter, check out my website, and connect on YouTube or Twitter.