Month: January 2022

Modern OpenGL: Part 11 Creating a basic Obj loader and loading models into Buffers

Here we go

Get ready: this is going to be a bit of a longer post than usual, because we are going to be writing an obj loader from scratch. This will involve diving into some of the finer details of vertex and polygon data.

Why write another obj loader?

You might be asking why we need to do this at all: why not use a library to load meshes, since it has been done a thousand times already? You might also be frustrated that I am not suggesting we use glTF as the format of choice. You are right on all counts. You should use glTF and you should use a library. That doesn’t mean that learning how to write an obj loader isn’t useful. It serves a purpose.

obj is a very odd format in the way that it stores its data. By learning how to read that data, interpret it and store it in your own c++ data structures, you will pick up some useful data-processing skills, and it will add to your arsenal of mental tools for thinking about 3D data.

I also feel that writing an obj loader is a kind of ‘rite of passage’ of graphics programming. It’s one of those things that everyone seems to want to do, and it is accessible enough that anyone with a little programming knowledge can achieve it.

What do we need to do?

One of the issues with obj is that the data stored in the format is not in a layout that is ready to be ingested by OpenGL in its raw form. It will need to be read, analysed, processed and prepared into buffers for OpenGL to read.

First we need to define the final vertex format that we want to end up with. We have the power to decide what the requirements are for the meshes we want to render. We are going to start with just 3 kinds of attributes: positions, normals and texture coordinates. These 3 things alone will be enough for us to get quite far and be able to load meshes, light them and apply textures.

We will define a data structure in our c++ code that we want to load the obj data into for ingesting by OpenGL. Let’s define a struct called vertex3D that has 3 members in it…

#pragma once

#include <cstdint> // uint32_t
#include <cstdio>  // fopen, fgets
#include <cstdlib> // strtof, strtol
#include <cstring> // strlen
#include <fmt/core.h>
#include <string>
#include <vector>
#include "glm/glm.hpp"

struct vertex3D {
    glm::vec3 position;
    glm::vec3 normal;
    glm::vec2 texCoord;
};

We will be parsing the obj file, then processing and loading each vertex’s data into one of these structs. There are many ways we could lay out our data (interleaved vs separate arrays) but we will choose to interleave here. We can always transform the data later on. This struct definition is visible to other cpp files because they will use it too. We will also need std::vector, so let’s include that now.

Next we will open a new namespace and put all of our reading code in it.

namespace objLoader {

using namespace std;

Next we want a temporary store that simply reads the data from the obj file into memory. This happens even before we do the interleaving and store the data in the custom vertex3D struct we defined before. This is just getting the obj file data into c++ vectors for now.

struct RawMeshData {

    RawMeshData() : positions(1), normals(1), textureCoords(1) {
    }
    // dummy value at 0. removes the need for subtracting 1 from obj file
    std::vector<glm::vec3> positions;
    std::vector<glm::vec3> normals;
    std::vector<glm::vec2> textureCoords;
    std::vector<glm::ivec3> faceIndices;

};

Here it is very important that we initialize the vectors with a dummy value by giving them a size of 1 from the get go. This is a nifty trick that I learned on a forum somewhere. The reason is that obj indices start at 1 (they are not zero based), and this removes the need for us to subtract 1 later.

So now we will write the code to read in the data from the file and store it in the RawMeshData structure. The basic list of things we need to do:

  1. Loop through each line of the obj
  2. Figure out the line length in terms of how many characters there are
  3. Find out what kind of data is on that line
  4. Find out where the spaces are located in the line, store those locations and replace them with a special end-of-string character (for reasons I will explain)
  5. Based on the kind of data stored on that line, take the values separated by spaces and store them in one of the vectors in the raw mesh data

We start by writing a function called readObjRaw() that takes the path to the obj file on disk and returns one of these raw mesh structures with all the data loaded into it…

RawMeshData readObjRaw(const std::string& filePath) {
So now within this function we will begin by declaring an empty RawMeshData for us to load…

    RawMeshData meshData;

Then we will use some C functions to open the file. I did consider using c++ all the way for this part of the code, but as much as I try to defend c++ from its shortcomings, in this case c++ iostreams are simply slower than the straight C functions, so I’m just going to use the tool that isn’t slow by default. The code is slightly uglier, but I can live with that.

    FILE* fp = fopen(filePath.c_str(), "r");
    if (!fp) {
        fmt::print(stderr, "Error opening file\n");
        return meshData; // bail out early with an empty mesh
    }

Next we need to create a small buffer in the form of a character array that we will read each line of the file into. It needs to be big enough to contain a whole line. I am going to choose 128 as the max line length. There may well be obj files out there with lines of 128 characters or more, but this covers most cases and is enough for our purposes. We will also be calculating the length of each line after it has been read into the buffer, so we declare a variable that we can reuse for that purpose.

    char line[128];
    size_t line_size;

Then we want a temporary dynamic buffer of index positions that stores where in the line of text the spaces occur. This lets us track where the values in the text are stored so we can easily parse them into variables. We are also going to have a char pointer variable called end, which will track the last character parsed in the line so we can pass that pointer into the next parsing function as its starting point…

    std::vector<int> spacePositions(8);

    char* end;
    uint32_t key;

We also declare the key variable, which will store which type of line the current line is.

Now we can start the code that loops through each line of the text…

    while (fgets(line, 128, fp)) {

This line starts a while loop that calls the C function fgets(), which takes the buffer that we want to fill with the line, the maximum number of characters to read into that buffer (we just use the length of the buffer) and the pointer to the file handle that we opened earlier. This while loop will continue until the end of the file.

Now we are going to do some setup work at the beginning of each loop iteration once the line has been read into the buffer…

        { // setup
            line_size = strlen(line);
            spacePositions.clear();
            key = packCharsToIntKey(line[0], line[1]);
        }

What we are doing here is recording how many characters were actually read, because each line can be of arbitrary length. Then we clear the vector that stores the positions of the spaces in the line. We clear rather than recreate it because std::vectors are dynamic memory and we want to reuse the memory that has already been allocated, preventing a reallocation every loop iteration. This is a common pattern in computer graphics where you track multiple things each loop iteration and the count can change per iteration.

Then we call a packCharsToIntKey() function (which we will write next) that reads the first two characters of the line and returns a special ‘key’ that we can use in a switch statement (we pass in the first two characters by indexing directly into positions 0 and 1 of the line buffer). Switch statements in c and c++ are useful constructs for branching on multiple values and jumping to the right bit of code depending on the value. The reason we use this key conversion is that switch doesn’t work on multiple characters. Luckily, two chars can be encoded into a single integer value using some bit shifting, and then we can switch on the integer.

Here is how we write that encoding packCharsToIntKey() function (which we can put at the top of this file within the namespace)…

constexpr uint32_t packCharsToIntKey(char a, char b) {
    return (static_cast<uint32_t>(a) << 8) | static_cast<uint32_t>(b);
}

This function now gives us a key to switch on, but what do we put in the switch statement as the values of the key? For example, if the line in the obj file starts with a “vn” then we want to switch on the key that is the integer corresponding to those letters. We can use some predetermined values for those keys and then just refer to those values without having to know what the keys actually are.

constexpr uint32_t v = packCharsToIntKey('v', ' ');
constexpr uint32_t vn = packCharsToIntKey('v', 'n');
constexpr uint32_t vt = packCharsToIntKey('v', 't');
constexpr uint32_t vp = packCharsToIntKey('v', 'p');
constexpr uint32_t f = packCharsToIntKey('f', ' ');

Before we start switching on values, we first have to detect where the spaces are in the current line. This bit of code does that for us.

        
        // find the spaces, terminating each token as we go
        for (auto i = 0u; i < line_size; ++i) {
            if (line[i] == ' ') {
                line[i] = '\0';
                spacePositions.push_back(i + 1);
            }
        }
        spacePositions.push_back((int)line_size);

What this does is loop through the line and, whenever it detects a space, replace it with a null character (‘\0’). The reason for this will be made clear soon. It also stores the position just after that space, which we will use as the starting position for parsing the data. Then we store the end position of the line, which gives us a way to check later whether we are reading a triangle or a quad polygon. If there is a particular number of stored positions (5), then we know it’s a quad.

Now let’s start our switch, inside the while loop, on the key that we retrieved in the setup section previously…

        switch (key) {

and for the first case we want to handle when the line is a vertex position (which is denoted by the letter ‘v’)

        case v: {
            meshData.positions.emplace_back(
                std::strtof(&line[spacePositions[0]], nullptr),
                std::strtof(&line[spacePositions[1]], nullptr),
                std::strtof(&line[spacePositions[2]], nullptr));
            break;
        }

This stores the 3 coordinates of the vertex position in the positions vector of the meshData structure. The emplace_back function is very similar to std::vector’s push_back, but lets us pass the arguments of the vec3 constructor directly. We could have called push_back, but that would mean constructing a vec3 first, which could incur a copy.

The strtof() function deserves an explanation too. It is one of the C functions that reads text and converts it to a binary floating point value. We pass it a position in the line just after a space (given by the spacePositions indices we stored previously) and it parses the text until it meets a null character. This is why we injected null characters into the spaces earlier: so this function knows where to stop parsing.

The next step is to create the case for a vertex normal, denoted by “vn” at the start of the line. We use the vn variable which is the key associated with those two characters.

        case vn: {
            meshData.normals.emplace_back(
                std::strtof(&line[spacePositions[0]], nullptr),
                std::strtof(&line[spacePositions[1]], nullptr),
                std::strtof(&line[spacePositions[2]], nullptr));
            break;
        }

and then we do the same for the vertex texture coordinates…

        case vt: {
            meshData.textureCoords.emplace_back(
                std::strtof(&line[spacePositions[0]], nullptr),
                std::strtof(&line[spacePositions[1]], nullptr));
            break;
        }

That takes care of all of the individual point data. Now we move onto the case that reads the lines in the obj that describe the polygons which are lists of the point indices.

        case f: {
            // is face

            int a = std::strtol(&line[spacePositions[0]], &end, 10);
            int b = std::strtol(end + (*end == '/'), &end, 10);
            int c = std::strtol(end + (*end == '/'), &end, 10);
            meshData.faceIndices.emplace_back(a, b, c);

            int d = std::strtol(&line[spacePositions[1]], &end, 10);
            int e = std::strtol(end + (*end == '/'), &end, 10);
            int f = std::strtol(end + (*end == '/'), &end, 10);

            meshData.faceIndices.emplace_back(d, e, f);

            int g = std::strtol(&line[spacePositions[2]], &end, 10);
            int h = std::strtol(end + (*end == '/'), &end, 10);
            int i = std::strtol(end + (*end == '/'), &end, 10);

            meshData.faceIndices.emplace_back(g, h, i);

            if (spacePositions.size() == 5) {
                // face 0
                meshData.faceIndices.emplace_back(a, b, c);
                // face 2
                meshData.faceIndices.emplace_back(g, h, i);

                // reuse def as those temps aren't needed
                d = std::strtol(&line[spacePositions[3]], &end, 10);
                e = std::strtol(end + (*end == '/'), &end, 10);
                f = std::strtol(end + (*end == '/'), &end, 10);

                meshData.faceIndices.emplace_back(d, e, f);
            }

            break;
        }

You can think of the polygons in an obj file as a game of ‘join the dots’. Each index triple on the line is made up of indices into the points we have read, in the format position/textureCoordinate/normal. If the polygon is made up of 3 triples, then it is a triangle; if it is 4, then it is a quad. If it is 5, then you need to talk to your modeller about their life choices.

So what we do in the case of the polygon being a quad is first make a triangle from 3 of the 4 vertices, effectively splitting the quad into two triangles. Then we detect that it is a quad (with the spacePositions.size() == 5 check: 4 spaces plus the stored end position means there are 4 triples), and if so, we create the other triangle that makes up the quad by emplacing back 3 more vertices. We have to be careful with the order here to make sure that the winding order stays correct.

It is also worth quickly explaining the use of the ‘end’ variable with the strtol function. strtol converts a string to a long integer. It takes the starting position as a pointer, and also an ‘out’ parameter that the function sets to where it detected the end of the number. In the next call, to get the next index, we start from that end pointer but add either 0 or 1 depending on the (*end == ‘/’) test; this skips the ‘/’ character if it is there. The last argument, 10, specifies that we are parsing base 10, i.e. ordinary decimal numbers.

Finally we need to provide a default case for the types of lines in obj files that we aren’t reading just yet (we aren’t supporting comments, groups, parameter space vertices, lines or materials).

        default: {
        }
        }
    }

Now we can simply return the mesh data out of the function to the caller. We will revisit this function in later posts and add support for other parts of the obj format.

    fclose(fp); // close the file handle now that we are done with it
    return meshData;
}

So we have loaded all of the point data, but it’s not in the right order; it’s just a bunch of points. Those points are also going to be reused by multiple triangles, so we need to flatten them out into a list of triangles that OpenGL understands. For that, we have the face/polygon indices, which are a list of triangle index triples. But because they are a list of indices, we need to loop through them and turn them into vertices. First we will define a new struct to use as the return type of the function we are about to write.

struct MeshDataSplit {
    std::vector<vertex3D> vertices;
};

Now we will write a function that gets the raw obj data and processes it into triangle vertex data. It takes the filePath as its only argument. (Why not just return a std::vector<vertex3D>, you might ask? We will be adding to this struct in later posts.)

MeshDataSplit readObjSplit(const std::string& filePath) {
    auto rawMeshData = readObjRaw(filePath);

    MeshDataSplit meshData;

    meshData.vertices.resize(rawMeshData.faceIndices.size());
    if (rawMeshData.textureCoords.size() == 0) {
        rawMeshData.textureCoords.resize(rawMeshData.faceIndices.size());
    }

    if (rawMeshData.normals.size() == 0) {
        rawMeshData.normals.resize(rawMeshData.faceIndices.size());
    }

#pragma omp parallel for
    for (int i = 0; i < (int)rawMeshData.faceIndices.size(); ++i) {
        meshData.vertices[i] = {
            rawMeshData.positions[rawMeshData.faceIndices[i].x],
            rawMeshData.normals[rawMeshData.faceIndices[i].z],
            rawMeshData.textureCoords[rawMeshData.faceIndices[i].y]};
    }

    return meshData;
}

Immediately in this function, we call the readObjRaw() function we just wrote to get the raw mesh data from the file path. Then we create a blank MeshDataSplit, which is what we will return. Then we resize the std::vectors inside meshData, and also inside the raw mesh data, so that the memory is allocated and we know we can index into them: the vertices we will be writing into, and the texture coords and normals we will be reading from. The reason we check the size before resizing is that we might have read an obj file that didn’t contain texture coords or normals, so if they were empty we fill them with default values.

Then we can simply loop through the vertices and set their values using the indices stored in the faceIndices triples. It might be a bit hard to see what’s going on here, but we are basically looking up the vertex data by index and pulling the data into the right location. Another great benefit of this approach is that because we pre-allocated all the data, we can make this step run in parallel by adding an OpenMP pragma to the loop. That’s nice!

Now let’s go and write our c++ file to load and draw the mesh. A lot of this will be the same as previous chapters, but we will be modifying our vertex binding to read the vertex format (vertex3D) we have decided to use.

… previous code (click to expand)
#include "error_handling.hpp"
#include "obj_loader_simple.hpp"

#include <array>
#include <chrono>     // current time
#include <cmath>      // sin & cos
#include <cstdlib>    // for std::exit()
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format
#include <unordered_map>

// this is really important to make sure that glbindings does not clash with
// glfw's opengl includes. otherwise we get ambiguous overloads.
#define GLFW_INCLUDE_NONE
#include <GLFW/glfw3.h>

#include <glbinding/gl/gl.h>
#include <glbinding/glbinding.h>

#include <glbinding-aux/debug.h>

#include "glm/glm.hpp"
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

using namespace gl;
using namespace std::chrono;

int main() {

    auto startTime = system_clock::now();

    const int width = 900;
    const int height = 900;

    auto windowPtr = [&]() {
        if (!glfwInit()) {
            fmt::print("glfw didnt initialize!\n");
            std::exit(EXIT_FAILURE);
        }

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 5);

        /* Create a windowed mode window and its OpenGL context */
        auto windowPtr = glfwCreateWindow(width, height,
                                          "Chapter 11 - Loading Data from Disk",
                                          nullptr, nullptr);

        if (!windowPtr) {
            fmt::print("window doesn't exist\n");
            glfwTerminate();
            std::exit(EXIT_FAILURE);
        }
        glfwSetWindowPos(windowPtr, 480, 90);

        glfwMakeContextCurrent(windowPtr);
        glbinding::initialize(glfwGetProcAddress, false);
        return windowPtr;
    }();

    // debugging
    {
        glEnable(GL_DEBUG_OUTPUT);
        glDebugMessageCallback(errorHandler::MessageCallback, 0);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
        glDebugMessageControl(GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_OTHER,
                              GL_DEBUG_SEVERITY_NOTIFICATION, 0, nullptr,
                              false);
    }

Previously we used lambdas to create the vertex and fragment shaders and link them into a program. I’ve now done a similar thing, except I’ve taken the shader source text out of the lambda and made it arguments to the lambda instead. The lambda now takes two strings which are the source text for the vertex and fragment shaders. This makes the lambda reusable.

    auto createProgram = [](const char* vertexShaderSource,
                            const char* fragmentShaderSource) -> GLuint {
        auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
        glCompileShader(vertexShader);
        errorHandler::checkShader(vertexShader, "Vertex");

        auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
        glCompileShader(fragmentShader);
        errorHandler::checkShader(fragmentShader, "Fragment");

        auto program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);

        glLinkProgram(program);
        return program;
    };

Next, we are going to use the same fragment shader both for drawing the gradient background and for the mesh drawing. It is a simple shader that takes a colour attribute as input and outputs a pixel value of that interpolated vertex colour…

    const char* fragmentShaderSource = R"(
            #version 450 core

            in vec3 colour;
            out vec4 finalColor;

            void main() {
                finalColor = vec4(colour, 1.0);
            }
        )";

Then we use our lambda: as the first argument (the vertex shader), we pass an in-place string which is the same one we have been using for the gradient background. Notice we pass the fragmentShaderSource variable as the second argument.

    auto programBG = createProgram(R"(
        #version 450 core

        out vec3 colour;

        const vec4 vertices[] = vec4[]( vec4(-1.f, -1.f, 0.0, 1.0),
                                        vec4( 3.f, -1.f, 0.0, 1.0),    
                                        vec4(-1.f,  3.f, 0.0, 1.0));   
        const vec3 colours[]   = vec3[](vec3(0.12f, 0.14f, 0.16f),
                                        vec3(0.12f, 0.14f, 0.16f),
                                        vec3(0.80f, 0.80f, 0.82f));
        

        void main(){
            colour = colours[gl_VertexID];
            gl_Position = vertices[gl_VertexID];  
        }
    )",
                                   fragmentShaderSource);

For our mesh drawing shader program, we want a slightly different vertex shader. It takes in the attributes from our vertex buffers, interprets the normals as colours and passes them to the fragment shader. The shader does a remap operation that shifts the values from the range (-1 to 1) into (0 to 1), which makes the normals easier to visualize. Also see that we are reusing the fragment shader source; being able to reuse shader stages in different shader programs is a powerful feature.

    auto program = createProgram(R"(
            #version 450 core
            layout (location = 0) in vec3 position;
            layout (location = 1) in vec3 normal;

            out vec3 colour;

            void main(){
                // remap the normal from (-1 -> 1) into (0 -> 1)
                vec3 remappedColour = (normal + vec3(1.f)) / 2.f;
                colour = remappedColour;
                gl_Position = vec4((position * vec3(1.0f, 1.0f, -1.0f)) +
                                   (vec3(0, -0.5, 0)), 1.0f);
            }
        )",
                                 fragmentShaderSource);

Finally we get to use our obj loader now! Lets read an obj file into our program!

    auto meshData = objLoader::readObjSplit("rubberToy.obj");

That was easy wasn’t it?! We now have a meshData variable which has our vertices in a std::vector ready to use directly. Now lets get that data into a buffer…

This block of code is another one of our creation lambdas that does something for us and returns the result.

    auto createBuffer =
        [&program](const std::vector<vertex3D>& vertices) -> GLuint {
        GLuint bufferObject;
        glCreateBuffers(1, &bufferObject);

        // upload immediately
        glNamedBufferStorage(bufferObject, vertices.size() * sizeof(vertex3D),
                             vertices.data(),
                             GL_MAP_WRITE_BIT | GL_DYNAMIC_STORAGE_BIT);

        return bufferObject;
    };

    auto meshBuffer = createBuffer(meshData.vertices);

In this case, it takes in a vector of our vertices and sends the data from it into a buffer on the GPU.

Next we need to create a vertex array object and set up the attribute mapping that describes how the shader will read and interpret the data from the buffer. This doesn’t actually connect the buffer to the vertex array object just yet (we will do that next); it just sets up the mapping.

    auto createVertexArrayObject = [](GLuint program) -> GLuint {
        GLuint vao;
        glCreateVertexArrays(1, &vao);

        glEnableVertexArrayAttrib(vao, 0);
        glEnableVertexArrayAttrib(vao, 1);

        glVertexArrayAttribBinding(vao,
                                   glGetAttribLocation(program, "position"),
                                   /*buffer index*/ 0);
        glVertexArrayAttribBinding(vao, glGetAttribLocation(program, "normal"),
                                   /*buffer index*/ 0);

        glVertexArrayAttribFormat(vao, 0, glm::vec3::length(), GL_FLOAT,
                                  GL_FALSE, offsetof(vertex3D, position));
        glVertexArrayAttribFormat(vao, 1, glm::vec3::length(), GL_FLOAT,
                                  GL_FALSE, offsetof(vertex3D, normal));

        return vao;
    };

    auto meshVao = createVertexArrayObject(program);

What this does is…

  • enables 2 attributes with the call to glEnableVertexArrayAttrib()
  • tells the vertex array object that the two attribute locations will be getting their data from a single buffer at a buffer index of 0 with glVertexArrayAttribBinding()
  • Lets the vao know what the format of each attribute is at attribute locations 0 and 1 and tells it where in each vertex the data is as a byte offset with glVertexArrayAttribFormat()
For a reminder of how to set up vertex attribute binding please see chapters 5 & 6. It’s easy to forget this stuff and I find I often need a refresher…
https://dokipen.com/modern-opengl-part-5-feeding-vertex-data-to-shaders/
https://dokipen.com/modern-opengl-part-6-multiple-vertex-buffers/

    glVertexArrayVertexBuffer(meshVao, 0, meshBuffer,
                              /*offset*/ 0,
                              /*stride*/ sizeof(vertex3D));

This is the function that tells OpenGL how to associate the vertex array object with actual buffer data. Remember, we could keep using a vertex array object that matches our vertex format and shader program attribute binding, and use this call to switch to a completely different buffer. (As a side note, it’s even possible to keep a shader program and switch out a whole vertex array object and buffer setup that uses a completely different vertex format and layout. That’s outside the scope of this series though.)

    glBindVertexArray(meshVao);

    glEnable(GL_DEPTH_TEST);

    std::array<GLfloat, 4> clearColour{0.f, 0.f, 0.f, 1.f};
    GLfloat clearDepth{1.0f};

    while (!glfwWindowShouldClose(windowPtr)) {

        glClearBufferfv(GL_DEPTH, 0, &clearDepth);

        glUseProgram(programBG);
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glUseProgram(program);
        glDrawArrays(GL_TRIANGLES, 0, (gl::GLsizei)meshData.vertices.size());

        glfwSwapBuffers(windowPtr);
        glfwPollEvents();
    }

    glfwTerminate();
}

All we do now is bind our vao, as it’s the only one we will be using, and do our rendering as usual. Between draw calls, we simply switch the shader program and draw a different number of triangles each time. And we are done! I hope that was fun for you.

In the next chapter, we will be learning about shader transforms so that we can start to draw our meshes in perspective views and move them around the screen!

Modern OpenGL: Part 10 Sending Uniform Parameters to Shaders

So far, the shader programs we have used in previous posts have been static. By that, I mean that you compile them and all of the variables in the shader are fixed and never change (the only thing that changes is the vertex data itself). In this part of our journey of learning OpenGL, we are going to control some parameters in the shader program so that we can change values over time, sending them in from our c++ code every frame. They are basically variables that we can tweak, and they are very fast and easy to change compared to changing vertex data.

The way we do this is by using “uniform parameters”. These are variables in the shader that are exposed to the external c++ code and can be set dynamically. The reason they are called uniform is that they have the same value across all invocations of the shader on all of your vertices/fragments.

There isn’t much we actually need to do to make this work. First let’s deal with the shader side. Previously we had some shader variables that were simply set to hard-coded values in the shader itself. Now we need to declare them as uniform. As we just said, the word uniform describes how the value is the same for all vertices/fragments that the shader program operates on; instead of varying (different per vertex/fragment, like attributes), it is the same for each invocation.

… previous code (click to expand)
#include "error_handling.hpp"
#include <array>
#include <chrono>     // current time
#include <cmath>      // sin & cos
#include <cstdlib>    // for std::exit()
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format
#include <unordered_map>

// this is really important to make sure that glbindings does not clash with
// glfw's opengl includes. otherwise we get ambiguous overloads.
#define GLFW_INCLUDE_NONE
#include <GLFW/glfw3.h>

#include <glbinding/gl/gl.h>
#include <glbinding/glbinding.h>

#include <glbinding-aux/debug.h>

#include "glm/glm.hpp"

using namespace gl;
using namespace std::chrono;

int main() {

    auto startTime = system_clock::now();
    const int width = 1280;
    const int height = 720;

    auto windowPtr = [](int w, int h) {
        if (!glfwInit()) {
            fmt::print("glfw didnt initialize!\n");
            std::exit(EXIT_FAILURE);
        }

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);

        /* Create a windowed mode window and its OpenGL context */
        auto windowPtr = glfwCreateWindow(w, h, "Chapter 9 - Full Screen Effects (Diy Shadertoy!)",
                                       nullptr, nullptr);

        if (!windowPtr) {
            fmt::print("window doesn't exist\n");
            glfwTerminate();
            std::exit(EXIT_FAILURE);
        }
        glfwSetWindowPos(windowPtr, 520, 180);

        glfwMakeContextCurrent(windowPtr);
        glbinding::initialize(glfwGetProcAddress, false);
        return windowPtr;
    }(width, height);
    // debugging
    {
        glEnable(GL_DEBUG_OUTPUT);
        glDebugMessageCallback(errorHandler::MessageCallback, 0);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
        glDebugMessageControl(GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_OTHER,
                              GL_DEBUG_SEVERITY_NOTIFICATION, 0, nullptr, false);
    }

    auto createProgram = [](const char* vertexShaderSource,
                            const char* fragmentShaderSource) -> GLuint {
        auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
        glCompileShader(vertexShader);
        errorHandler::checkShader(vertexShader, "Vertex");

        auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
        glCompileShader(fragmentShader);
        errorHandler::checkShader(fragmentShader, "Fragment");

        auto program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);

        glLinkProgram(program);
        return program;
    };

    auto program = createProgram(R"(
            #version 450 core

            const vec4 vertices[] = vec4[]( vec4(-1.f, -1.f, 0.0, 1.0),
                                        vec4( 3.f, -1.f, 0.0, 1.0),    
                                        vec4(-1.f,  3.f, 0.0, 1.0));    

            void main(){
                gl_Position = vertices[gl_VertexID]; 
            }
        )",
                                 R"(
        #version 450 core

        // The MIT License
        // Copyright © 2013 Inigo Quilez
        // Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
        // associated documentation files (the "Software"), to deal in the Software without restriction,
        // including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
        // and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
        // subject to the following conditions: The above copyright notice and this permission notice shall be
        // included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS",
        // WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
        // MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
        // COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
        // TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
        // IN THE SOFTWARE.
        //
        // I've not seen anybody out there computing correct cell interior distances for Voronoi
        // patterns yet. That's why they cannot shade the cell interior correctly, and why you've
        // never seen cell boundaries rendered correctly.
        //
        // However, here's how you do mathematically correct distances (note the equidistant and non
        // degenerated grey isolines inside the cells) and hence edges (in yellow):
        //
        // http://www.iquilezles.org/www/articles/voronoilines/voronoilines.htm
        //
        // More Voronoi shaders:
        //
        // Exact edges:  https://www.shadertoy.com/view/ldl3W8
        // Hierarchical: https://www.shadertoy.com/view/Xll3zX
        // Smooth:       https://www.shadertoy.com/view/ldB3zc
        // Voronoise:    https://www.shadertoy.com/view/Xd23Dh
   
        out vec4 fragColor;
        vec4 fragCoord  = gl_FragCoord;
        vec2 iMouse = vec2(960.f,0.f);



        

What we have done here is recreate the declarations of the variables that the shader program requires (which were previously provided by shadertoy behind the scenes), but now we want two of them (iTime and iResolution) to be exposed to, and controlled by, our c++ code. Here is how we do that…

        uniform float iTime;
        uniform vec2 iResolution;
        

We have declared them as uniform variables. This is much like declaring any other variable, but with the “uniform” specifier in front. When the shader gets compiled, OpenGL takes that into account and provides a way for us to set the value of that variable any time we want.

… previous code (click to expand)
        mat2 rot(in float a){float c = cos(a), s = sin(a);return mat2(c,s,-s,c);}
        const mat3 m3 = mat3(0.33338, 0.56034, -0.71817, -0.87887, 0.32651, -0.15323, 0.15162, 0.69596, 0.61339)*1.93;
        float mag2(vec2 p){return dot(p,p);}
        float linstep(in float mn, in float mx, in float x){ return clamp((x - mn)/(mx - mn), 0., 1.); }
        float prm1 = 0.;
        vec2 bsMo = vec2(0);

        vec2 disp(float t){ return vec2(sin(t*0.22)*1., cos(t*0.175)*1.)*2.; }

        vec2 map(vec3 p)
        {
            vec3 p2 = p;
            p2.xy -= disp(p.z).xy;
            p.xy *= rot(sin(p.z+iTime)*(0.1 + prm1*0.05) + iTime*0.09);
            float cl = mag2(p2.xy);
            float d = 0.;
            p *= .61;
            float z = 1.;
            float trk = 1.;
            float dspAmp = 0.1 + prm1*0.2;
            for(int i = 0; i < 5; i++)
            {
                p += sin(p.zxy*0.75*trk + iTime*trk*.8)*dspAmp;
                d -= abs(dot(cos(p), sin(p.yzx))*z);
                z *= 0.57;
                trk *= 1.4;
                p = p*m3;
            }
            d = abs(d + prm1*3.)+ prm1*.3 - 2.5 + bsMo.y;
            return vec2(d + cl*.2 + 0.25, cl);
        }

        vec4 render( in vec3 ro, in vec3 rd, float time )
        {
            vec4 rez = vec4(0);
            const float ldst = 8.;
            vec3 lpos = vec3(disp(time + ldst)*0.5, time + ldst);
            float t = 1.5;
            float fogT = 0.;
            for(int i=0; i<130; i++)
            {
                if(rez.a > 0.99)break;

                vec3 pos = ro + t*rd;
                vec2 mpv = map(pos);
                float den = clamp(mpv.x-0.3,0.,1.)*1.12;
                float dn = clamp((mpv.x + 2.),0.,3.);
                
                vec4 col = vec4(0);
                if (mpv.x > 0.6)
                {
                
                    col = vec4(sin(vec3(5.,0.4,0.2) + mpv.y*0.1 +sin(pos.z*0.4)*0.5 + 1.8)*0.5 + 0.5,0.08);
                    col *= den*den*den;
                    col.rgb *= linstep(4.,-2.5, mpv.x)*2.3;
                    float dif =  clamp((den - map(pos+.8).x)/9., 0.001, 1. );
                    dif += clamp((den - map(pos+.35).x)/2.5, 0.001, 1. );
                    col.xyz *= den*(vec3(0.005,.045,.075) + 1.5*vec3(0.033,0.07,0.03)*dif);
                }
                
                float fogC = exp(t*0.2 - 2.2);
                col.rgba += vec4(0.06,0.11,0.11, 0.1)*clamp(fogC-fogT, 0., 1.);
                fogT = fogC;
                rez = rez + col*(1. - rez.a);
                t += clamp(0.5 - dn*dn*.05, 0.09, 0.3);
            }
            return clamp(rez, 0.0, 1.0);
        }

        float getsat(vec3 c)
        {
            float mi = min(min(c.x, c.y), c.z);
            float ma = max(max(c.x, c.y), c.z);
            return (ma - mi)/(ma+ 1e-7);
        }

        //from my "Will it blend" shader (https://www.shadertoy.com/view/lsdGzN)
        vec3 iLerp(in vec3 a, in vec3 b, in float x)
        {
            vec3 ic = mix(a, b, x) + vec3(1e-6,0.,0.);
            float sd = abs(getsat(ic) - mix(getsat(a), getsat(b), x));
            vec3 dir = normalize(vec3(2.*ic.x - ic.y - ic.z, 2.*ic.y - ic.x - ic.z, 2.*ic.z - ic.y - ic.x));
            float lgt = dot(vec3(1.0), ic);
            float ff = dot(dir, normalize(ic));
            ic += 1.5*dir*sd*ff*lgt;
            return clamp(ic,0.,1.);
        }

        void main() 
        {	
            vec2 q = fragCoord.xy/iResolution.xy;
            vec2 p = (gl_FragCoord.xy - 0.5*iResolution.xy)/iResolution.y;
            bsMo = (iMouse.xy - 0.5*iResolution.xy)/iResolution.y;
            
            float time = iTime*3.;
            vec3 ro = vec3(0,0,time);
            
            ro += vec3(sin(iTime)*0.5,sin(iTime*1.)*0.,0);
                
            float dspAmp = .85;
            ro.xy += disp(ro.z)*dspAmp;
            float tgtDst = 3.5;
            
            vec3 target = normalize(ro - vec3(disp(time + tgtDst)*dspAmp, time + tgtDst));
            ro.x -= bsMo.x*2.;
            vec3 rightdir = normalize(cross(target, vec3(0,1,0)));
            vec3 updir = normalize(cross(rightdir, target));
            rightdir = normalize(cross(updir, target));
            vec3 rd=normalize((p.x*rightdir + p.y*updir)*1. - target);
            rd.xy *= rot(-disp(time + 3.5).x*0.2 + bsMo.x);
            prm1 = smoothstep(-0.4, 0.4,sin(iTime*0.3));
            vec4 scn = render(ro, rd, time);
                
            vec3 col = scn.rgb;
            col = iLerp(col.bgr, col.rgb, clamp(1.-prm1,0.05,1.));
            
            col = pow(col, vec3(.55,0.65,0.6))*vec3(1.,.97,.9);

            col *= pow( 16.0*q.x*q.y*(1.0-q.x)*(1.0-q.y), 0.12)*0.7+0.3; //Vign
            
            fragColor = vec4( col, 1.0 );
        }
    )");



    GLuint vao;
    glCreateVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glUseProgram(program);

    int timeUniformLocation = glGetUniformLocation(program, "iTime");
    int resolutionUniformLocation = glGetUniformLocation(program, "iResolution");

Now, in our c++ code, we need a way to refer to the location of that variable in the shader. This is done with the glGetUniformLocation function call. We ask the shader “program” where its “iTime” variable is, and OpenGL returns an integer location that will be something like 0, 1, 2 etc, depending on what OpenGL itself decided it would be. It doesn’t really matter what the number is, because we store it in a variable and never need to know the actual value; we just use it later on to set the value of the shader variable at that integer ‘location’.
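One caveat worth knowing: glGetUniformLocation returns -1 if the name doesn’t exist or the compiler optimized the uniform away because it was never used. Since the lookup is a string search, a common pattern is to query each location once and cache it rather than asking every frame. Here is a small, hypothetical sketch of such a cache (the class and names are my own, not part of OpenGL); the lookup is injected as a function so the caching logic can be shown without a GL context — in the real program you would pass a lambda wrapping glGetUniformLocation(program, name.c_str()).

```cpp
#include <cstdio>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical helper: look up a uniform location once and cache it,
// instead of calling glGetUniformLocation every frame. The lookup is
// injected as a std::function so this works without a GL context.
class UniformCache {
  public:
    explicit UniformCache(std::function<int(const std::string&)> lookup)
        : lookup_(std::move(lookup)) {}

    int location(const std::string& name) {
        auto it = cache_.find(name);
        if (it != cache_.end()) return it->second; // already queried
        int loc = lookup_(name); // -1 means "not found or optimized out"
        if (loc == -1)
            std::fprintf(stderr, "warning: uniform '%s' not active\n", name.c_str());
        cache_.emplace(name, loc);
        return loc;
    }

  private:
    std::function<int(const std::string&)> lookup_;
    std::unordered_map<std::string, int> cache_;
};
```

Handily, passing a -1 location to the glProgramUniform* functions is silently ignored, so a missing uniform won’t crash your program, just quietly do nothing.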

Previously, we used binding locations inside our shaders as a way to refer to a location, but I thought this was a good opportunity to show you an alternative. Querying locations is often viewed as the older way of doing things, but I like it and still think it has its uses, so you can decide for yourself which method you want to use.

So now that we have that location, we can use it to set the value of the shader variable it refers to. We do that with a family of functions that all start with

glProgramUniform* (the star/wildcard stands in for a suffix that states the type we want to set).

So for example, if we want to change a uniform variable whose type in the shader is a vec2, then we use the glProgramUniform2f function. And for a uniform variable that is a single ‘float’, we use glProgramUniform1f.
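To make that suffix convention concrete, here is a purely illustrative helper (my own, not part of OpenGL) that spells out the naming rule: the suffix encodes the component count followed by the component type, so "1f" means one float, "2f" two floats, "1i" one int, and so on.

```cpp
#include <string>

// Illustrative only: map a GLSL uniform type to the glProgramUniform*
// variant you would reach for. The suffix is component-count + type.
std::string programUniformNameFor(const std::string& glslType) {
    if (glslType == "float") return "glProgramUniform1f";
    if (glslType == "vec2")  return "glProgramUniform2f";
    if (glslType == "vec3")  return "glProgramUniform3f";
    if (glslType == "vec4")  return "glProgramUniform4f";
    if (glslType == "int")   return "glProgramUniform1i";
    return "unknown"; // matrices etc. use glProgramUniformMatrix* instead
}
```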

    glProgramUniform2f(program, resolutionUniformLocation, width, height);


    while (!glfwWindowShouldClose(windowPtr)) {

        auto currentTime = duration<float>(system_clock::now() - startTime).count();
        glProgramUniform1f(program, timeUniformLocation, currentTime);

        // draw full screen triangle
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glfwSwapBuffers(windowPtr);
        glfwPollEvents();
    }

    glfwTerminate();
}

Here, just outside our main loop, we are setting the shader variable using the location “resolutionUniformLocation” that we previously got by asking where the “iResolution” variable was in the shader. We pass the two float values in as arguments to the function and they get passed into the shader for us. This function is called outside the loop because we only really want to set it once. It still needs to be called, though, because the resolution might have been set at program start-up based on some user input (like command line parameters), and we still need to tell the shader what it is. If the window was resized and we had written a function to handle that (which glfw allows us to do, although we haven’t explored it yet), then we would be able to update the shader accordingly.
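As an aside, here is a hedged sketch (the names are mine, not from this series) of what that resize handling could look like: GLFW can invoke a callback registered with glfwSetFramebufferSizeCallback whenever the framebuffer size changes, and inside it we would re-send iResolution with glProgramUniform2f. The GL upload is hidden behind a function pointer here so the flow can be shown without a live context; the real GLFW callback also takes a GLFWwindow* rather than a void*.

```cpp
// Hypothetical state shared with the resize callback. In the real program
// this would hold the shader program handle and resolutionUniformLocation.
struct ResolutionState {
    unsigned program = 0;
    int      location = -1;
    float    width = 0.f, height = 0.f;
    void   (*upload)(const ResolutionState&) = nullptr; // would wrap glProgramUniform2f
};

ResolutionState gResolution;

// Same shape as GLFW's framebuffer-size callback:
//   void callback(GLFWwindow* window, int width, int height)
void onFramebufferResize(void* /*window*/, int w, int h) {
    gResolution.width  = static_cast<float>(w);
    gResolution.height = static_cast<float>(h);
    if (gResolution.upload)
        gResolution.upload(gResolution); // re-send iResolution to the shader
}
```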

Then we call glProgramUniform1f inside the loop, because we want to update the value of “iTime” in the shader with the current runtime of the program. It is “1f” because the uniform is a single float, and this is something that changes in real time and needs updating every frame.

That is pretty much it for uniform variables. In the next chapter, we will be tackling the ultimate graphics programming rite of passage… writing an obj loader from scratch so we can load meshes into our programs and render them on screen!