Month: November 2020

Modern OpenGL: Part 4 Error Handling

So hopefully you've had some initial excitement at seeing your first triangle on the screen. I'm guessing, though, that at some point you had a typo somewhere, or passed the wrong value into a function, something went wrong, and it took far more time than it should have to find the issue. You are not alone.

There are ways to find out if something has gone wrong though.

I'm going to briefly introduce the legacy way that OpenGL applications could find out what went wrong (you may have seen it in other tutorials, as it is a widely propagated method of checking for errors) and then quickly move on to the modern, better way of doing it. In fact, if you go to this page, the following method is listed as 'the hard way'. Once we have briefly covered it, we will move on to the easy way.

There exists a function called glGetError which asks the OpenGL runtime if anything went wrong. You get back an error code which you can check against some particular values to see what kind of error you have. It is usually wrapped up in some kind of helper function like this one, which you can call occasionally after your code to see when these errors occur…

void CheckGLError(const std::string& str) {
    GLenum error = glGetError();
    if (error != GL_NO_ERROR) {
        printf("Error! %s %s\n", str.c_str(), openGLErrorString(error));
    }
}

The openGLErrorString() function might look something like this…

const char* openGLErrorString(GLenum _errorCode) {
    if (_errorCode == GL_INVALID_ENUM) {
        return "GL_INVALID_ENUM";
    } else if (_errorCode == GL_INVALID_VALUE) {
        return "GL_INVALID_VALUE";
    } else if (_errorCode == GL_INVALID_OPERATION) {
        return "GL_INVALID_OPERATION";
    } else if (_errorCode == GL_INVALID_FRAMEBUFFER_OPERATION) {
        return "GL_INVALID_FRAMEBUFFER_OPERATION";
    } else if (_errorCode == GL_OUT_OF_MEMORY) {
        return "GL_OUT_OF_MEMORY";
    } else if (_errorCode == GL_NO_ERROR) {
        return "GL_NO_ERROR";
    } else {
        return "unknown error";
    }
}

The reasons this is not ideal are:

  • You have to call the function manually whenever you want to find out if there was an error. If you don't know where the error came from, you may have to copy and paste it all over your code, or wrap your gl* function calls in a CHECK() macro, to pin down which call the error came after.
  • Just because you got an error back somewhere doesn't mean that's where the issue is. Because OpenGL is a state machine, with complicated rules about which operations are valid in particular states, the real problem could be that you did something like forget to bind a framebuffer, or set the wrong read/write bits on a buffer.
  • You just get GL_INVALID_something, with no detailed information on what specifically is wrong.
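For completeness, here is a minimal, self-contained sketch of the CHECK()-style macro idea mentioned above. To keep it compilable without an OpenGL context, glGetError and the enum values are stubbed out with a fake error queue; in a real program they come from your GL loader. Note the loop: drivers queue up errors and glGetError only returns one per call, so a thorough check drains the queue:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Stand-ins for the real GL types/values so this sketch is self-contained;
// in a real program these come from your OpenGL loader (e.g. glbinding).
using GLenum = unsigned int;
constexpr GLenum GL_NO_ERROR = 0;
constexpr GLenum GL_INVALID_OPERATION = 0x0502;

static std::vector<GLenum> g_fakeErrorQueue; // simulates the driver's error queue
GLenum glGetError() {
    if (g_fakeErrorQueue.empty()) return GL_NO_ERROR;
    GLenum e = g_fakeErrorQueue.front();
    g_fakeErrorQueue.erase(g_fakeErrorQueue.begin());
    return e;
}

std::vector<std::string> g_reportedErrors; // collected reports (could be printf instead)

// Drain ALL queued errors and record where they were observed.
void checkGLErrors(const char* file, int line) {
    for (GLenum e = glGetError(); e != GL_NO_ERROR; e = glGetError()) {
        g_reportedErrors.push_back(std::string(file) + ":" + std::to_string(line) +
                                   " error code " + std::to_string(e));
    }
}

// Wrap a GL call so the report points at the exact call site.
#define GL_CHECK(call) do { call; checkGLErrors(__FILE__, __LINE__); } while (0)
```

You would then write GL_CHECK(glDrawArrays(GL_TRIANGLES, 0, 3)); around every call, and any report names the exact file and line, which shows just how tedious this approach gets.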

Anyway, writing this function by hand is not only the worse of the two ways to check for errors, it is also not really necessary, because we can simply use our fancy glbinding library to do it for us. If we add a new header and one line of code to our program after the window creation, all of the above is taken care of for us.

New header…

#include <glbinding-aux/debug.h>

Single line of code to enable basic error checking.

    glbinding::aux::enableGetErrorCallback();

Now if we try something like commenting out one single line of our program, for example the call that binds the Vertex Array Object that we had to put in our code in the last chapter, we should get an error message when we run the program.

    //glBindVertexArray(vao);

This is the output: just a stream of errors being printed, because glDrawArrays is where the error is occurring.

GL_INVALID_OPERATIONglDrawArrays generated

But glDrawArrays() is not the problem. The problem is that there is no Vertex Array Object bound.

The Modern way

Now let's use a better way of debugging. First we have to enable 'debug output' by calling glEnable…

Here is the surrounding code for context:
#include "error_handling.hpp"
#include <array>
#include <chrono>     // current time
#include <cmath>      // sin & cos
#include <cstdlib>    // for std::exit()
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format
#include <pystring.h>
#include <string>
#include <unordered_map>

// this is really important to make sure that glbindings does not clash with
// glfw's opengl includes. otherwise we get ambigous overloads.
#define GLFW_INCLUDE_NONE
#include <GLFW/glfw3.h>

#include <glbinding/gl/gl.h>
#include <glbinding/glbinding.h>

#include <glbinding-aux/debug.h>

using namespace gl;
using namespace std::chrono;

int main() {

    auto startTime = system_clock::now();

    const auto windowPtr = []() {
        if (!glfwInit()) {
            fmt::print("glfw didn't initialize!\n");
            std::exit(EXIT_FAILURE);
        }

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);

        auto windowPtr =
            glfwCreateWindow(1280, 720, "Chapter 4 - Error Handling", nullptr, nullptr);

        if (!windowPtr) {
            fmt::print("window doesn't exist\n");
            glfwTerminate();
            std::exit(EXIT_FAILURE);
        }
        glfwSetWindowPos(windowPtr, 520, 180);

        glfwMakeContextCurrent(windowPtr);
        glbinding::initialize(glfwGetProcAddress, false);
        return windowPtr;
    }();
    glEnable(GL_DEBUG_OUTPUT);

Next we are going to tell OpenGL to call a function (that we will write) whenever it detects an error. Isn't that nice? We do that by setting a so-called callback function…

    glDebugMessageCallback(errorHandler::MessageCallback, 0);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
The rest of the program continues as before:
    const char* vertexShaderSource = R"VERTEX(
        #version 460 core
        out vec3 colour;

        const vec4 vertices[] = vec4[]( vec4(-0.5f, -0.7f,    0.0, 1.0), 
                                        vec4( 0.5f, -0.7f,    0.0, 1.0),    
                                        vec4( 0.0f,  0.6888f, 0.0, 1.0));   

        const vec3 colours[] = vec3[](  vec3( 1.0, 0.0, 0.0), 
                                        vec3( 0.0, 1.0, 0.0),    
                                        vec3( 0.0, 0.0, 1.0));   

        void main(){
            colour = colours[gl_VertexID];
            gl_Position = vertices[gl_VertexID];  
        }
    )VERTEX";

    const char* fragmentShaderSource = R"FRAGMENT(
        #version 460 core

        in vec3 colour;
        out vec4 finalColor;

        void main() {
            finalColor = vec4(colour.x, colour.y, colour.z, 1.0);
        }
    )FRAGMENT";

    auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
    glCompileShader(vertexShader);

    auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
    glCompileShader(fragmentShader);

    auto program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);

    glLinkProgram(program);
    glUseProgram(program);

    // in core profile, at least 1 vao is needed
    GLuint vao;
    glCreateVertexArrays(1, &vao);
    glBindVertexArray(vao);

    std::array<GLfloat, 4> clearColour;

    while (!glfwWindowShouldClose(windowPtr)) {

        auto currentTime = duration<float>(system_clock::now() - startTime).count();
        clearColour = {std::sin(currentTime) * 0.5f + 0.5f, std::cos(currentTime) * 0.5f + 0.5f,
                       0.2f, 1.0f};

        glClearBufferfv(GL_COLOR, 0, clearColour.data());

        glDrawArrays(GL_TRIANGLES, 0, 3);
        glfwSwapBuffers(windowPtr);
        glfwPollEvents();
    }

    glfwTerminate();
}


This errorHandler::MessageCallback function doesn't exist yet, so we will write it next. It has to adhere to a specific signature so that when OpenGL calls it, it can pass the right arguments. Enabling GL_DEBUG_OUTPUT_SYNCHRONOUS is also useful, as it ensures the messages arrive at the right point when we step through the program in a debugger (because OpenGL can be asynchronous in its operation, debug messages could otherwise appear later than we want them to).

Let's create a new file, error_handling.hpp. We won't be using too many extra files in this series, but to keep the main program cpp a bit leaner, I've opted for header-only files for 'utility' type functionality. It does increase compile time a little, but it's on the order of seconds, not minutes, and we are here to learn, so who cares.

#pragma once
#include <fmt/color.h>
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format

#include <glbinding/gl/gl.h>
#include <iostream>
#include <string>
#include <unordered_map>

using namespace gl;

namespace errorHandler {

void MessageCallback(GLenum source, GLenum type, GLuint id, GLenum severity,
                     GLsizei length, const GLchar* message,
                     const void* userParam) {
    std::string src = errorSourceMap.at(source);
    std::string tp = errorTypeMap.at(type);
    std::string sv = severityMap.at(severity);
    fmt::print(
        stderr,
        "GL CALLBACK: {0:s} type = {1:s}, severity = {2:s}, message = {3:s}\n",
        src, tp, sv, message);
}

} // namespace errorHandler
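One caveat with the .at() lookups above: the maps don't necessarily cover every enum a driver may send (for example GL_DEBUG_TYPE_PUSH_GROUP and GL_DEBUG_TYPE_POP_GROUP are not in errorTypeMap), and .at() throws std::out_of_range on a missing key, which would blow up inside the driver's callback. A safer pattern is a lookup with a fallback. Here's a minimal sketch; lookupOr is a hypothetical helper of my own naming, demonstrated with a plain int-keyed map so it compiles on its own:

```cpp
#include <string>
#include <unordered_map>

// Hypothetical helper: look up a name, falling back to a default instead of
// throwing std::out_of_range like .at() does when the key is missing.
template <typename Map>
std::string lookupOr(const Map& m, typename Map::key_type key,
                     const std::string& fallback = "UNKNOWN") {
    auto it = m.find(key);
    return it != m.end() ? it->second : fallback;
}
```

In the real callback you would then write lookupOr(errorTypeMap, type) instead of errorTypeMap.at(type).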
The errorSourceMap, errorTypeMap and severityMap lookups used in the callback translate the GLenum values into readable strings. They also live in error_handling.hpp, inside the errorHandler namespace, above the callback definition:

static const std::unordered_map<GLenum, std::string> errorSourceMap{
    {GL_DEBUG_SOURCE_API, "SOURCE_API"},
    {GL_DEBUG_SOURCE_WINDOW_SYSTEM, "WINDOW_SYSTEM"},
    {GL_DEBUG_SOURCE_SHADER_COMPILER, "SHADER_COMPILER"},
    {GL_DEBUG_SOURCE_THIRD_PARTY, "THIRD_PARTY"},
    {GL_DEBUG_SOURCE_APPLICATION, "APPLICATION"},
    {GL_DEBUG_SOURCE_OTHER, "OTHER"}};

static const std::unordered_map<GLenum, std::string> errorTypeMap{
    {GL_DEBUG_TYPE_ERROR, "ERROR"},
    {GL_DEBUG_TYPE_DEPRECATED_BEHAVIOR, "DEPRECATED_BEHAVIOR"},
    {GL_DEBUG_TYPE_UNDEFINED_BEHAVIOR, "UNDEFINED_BEHAVIOR"},
    {GL_DEBUG_TYPE_PORTABILITY, "PORTABILITY"},
    {GL_DEBUG_TYPE_PERFORMANCE, "PERFORMANCE"},
    {GL_DEBUG_TYPE_OTHER, "OTHER"},
    {GL_DEBUG_TYPE_MARKER, "MARKER"}};

static const std::unordered_map<GLenum, std::string> severityMap{
    {GL_DEBUG_SEVERITY_HIGH, "HIGH"},
    {GL_DEBUG_SEVERITY_MEDIUM, "MEDIUM"},
    {GL_DEBUG_SEVERITY_LOW, "LOW"},
    {GL_DEBUG_SEVERITY_NOTIFICATION, "NOTIFICATION"}};

Now, with the glBindVertexArray(vao) call still commented out, running the program prints a much more informative message:
GL CALLBACK: SOURCE_API type = ERROR, severity = HIGH, message = GL_INVALID_OPERATION error generated. Array object is not active.  

Amazing! We get a message telling us that the Array object is not active. Awesome. Let's re-enable it then!

I have seen on my Nvidia card that this message is sometimes printed…

“GL CALLBACK: SOURCE_API type = PERFORMANCE, severity = MEDIUM, message = Program/shader state performance warning: Fragment shader in program 3 is being recompiled based on GL state.”

Apparently, from my internet searching, this is okay, so if you see it, feel free to ignore it.

So this is all very good, but we can still do better.

Checking for Shader Compiler Errors

This is a nice state to be in, but there is still one thing that the debug output cannot tell us: what errors are occurring in our shaders. If we get something wrong in a shader (for example, if we accidentally declare the out colour attribute in the vertex shader as a vec4 instead of the vec3 it should be), we get these messages printed…

GL CALLBACK: SOURCE_API type = ERROR, severity = HIGH, message = GL_INVALID_OPERATION error generated. <program> has not been linked, or is not a program object.
GL CALLBACK: SOURCE_API type = ERROR, severity = HIGH, message = GL_INVALID_OPERATION error generated. <program> object is not successfully linked, or is not a program object.

But we don’t get told what was wrong. That is where we have to step in to do a little bit of extra work.

OpenGL has some functions that allow us to query whether shader compilation was successful and, if not, what messages the compiler spat out. Let's write a function that wraps up that functionality so we can check the result of compilation after we call glCompileShader().

bool checkShader(GLuint shaderIn, const std::string& shaderName) {
    GLint fShaderCompiled = static_cast<GLint>(GL_FALSE);

Here we write the signature of our helper function, which takes a shader ID (the id that was returned to us when we called glCreateShader) and some 'user' text that we can pass in, printed to the console to help indicate which shader we are reporting on. We also set up a variable, fShaderCompiled, that we want OpenGL to fill in for us next. (Note that it is a GLint rather than a GLboolean, since GLint is the type glGetShaderiv writes into.)

    glGetShaderiv(shaderIn, GL_COMPILE_STATUS, &fShaderCompiled);
    if (fShaderCompiled != static_cast<GLint>(GL_TRUE)) {

Then we use the OpenGL function glGetShaderiv to get some information about the shader we are interested in. We pass in the shader ID shaderIn and the piece of information we are after, GL_COMPILE_STATUS. OpenGL sets the variable we prepared just before based on whether the compilation was successful or not. If it wasn't, the if statement enters its block…

        fmt::print(stderr, "Unable to compile {0} shader {1}\n", shaderName,
                   shaderIn);

First we print a message to the log telling the user of the program that something went wrong.

        GLint log_length;
        glGetShaderiv(shaderIn, GL_INFO_LOG_LENGTH, &log_length);

Then we use the same glGetShaderiv function again, this time to get the length of the compiler's log message.

        std::vector<char> v(log_length);
        glGetShaderInfoLog(shaderIn, log_length, nullptr, v.data());

Then, based on that log length, we create a vector of chars big enough for OpenGL to store the message in. Using the glGetShaderInfoLog() function, OpenGL stores the compiler message in that vector for us.

        fmt::print(stderr, fmt::fg(fmt::color::light_green), "{}\n", v.data());
        return false;
    }
    return true;
}

Then we simply print that message and return false, letting the caller detect that compilation failed and do something about it if they wish. If there wasn't any error, we return true.

The reason we use a vector of chars and not a std::string is that until C++17, std::string doesn't provide a non-const data() method to give to OpenGL to write into. The vector of chars works from C++11 onwards and is fine in this case.
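To see the buffer pattern in isolation, here's a self-contained sketch with a stub standing in for glGetShaderInfoLog. The stub's name fakeGetShaderInfoLog and the sample error text are made up for illustration, but the real call has the same shape: a max length in, an optional written-length out, and a char buffer out:

```cpp
#include <cstring>
#include <string>
#include <vector>

// Stub standing in for glGetShaderInfoLog: writes at most maxLength bytes
// (including the null terminator) into infoLog, like the real call does.
void fakeGetShaderInfoLog(int /*shader*/, int maxLength, int* length, char* infoLog) {
    const char* msg = "0(7) : error C1008: undefined variable \"vertices\"";
    int n = static_cast<int>(std::strlen(msg));
    if (n >= maxLength) n = maxLength - 1; // truncate to fit the caller's buffer
    std::memcpy(infoLog, msg, n);
    infoLog[n] = '\0';
    if (length) *length = n;
}

// The pattern used in checkShader: size a writable char buffer, let the
// driver fill it, then copy the result into a std::string for printing.
std::string fetchLog(int shader, int logLength) {
    std::vector<char> v(logLength);
    fakeGetShaderInfoLog(shader, logLength, nullptr, v.data());
    return std::string(v.data());
}
```

From C++17 onwards you could pass a std::string's non-const data() directly instead, but the vector works everywhere.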

Now if we call this function right after compiling each shader, we get a message containing the line number in the shader where the error occurred, and what the error was.

    auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
    glCompileShader(vertexShader);
    errorHandler::checkShader(vertexShader, "Vertex");

    auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
    glCompileShader(fragmentShader);
    errorHandler::checkShader(fragmentShader, "Fragment");

That wraps things up for this chapter. Next we will get back to some exciting topics and see how we can provide vertex data from our C++ program to feed OpenGL and draw multiple triangles at locations we set.

The Future of Graphics Programming on MacOS

With Apple releasing the new MacBook Air, MacBook Pro 13 and Mac mini with their custom M1 chips, I want to discuss the future of graphics programming and the APIs you can use in the Mac ecosystem.

Apple have just released their first 'Apple Silicon' Macs, which contain their ARM-based M1 chips. From the initial benchmarks that have been released, they seem to be powerful little machines.

What I am also interested in is the GPU capability, how developers will view these machines, and whether they are viable for computer graphics developers.

Ever since macOS Mojave, OpenGL (and its compute counterpart, OpenCL) has been deprecated. Apple have stated that they will still work, but that users should not expect support or updates. This is a shame, as OpenGL has, for years, been the API that many programmers use when learning graphics programming for the first time. It's cross-platform, and there are a ton of resources, books and tutorials available to learn from. Then again, the OpenGL support on macOS wasn't that great anyway, only going up to core profile 4.1. This means no compute shaders or advanced rendering features such as multi-draw indirect.

That is not to say that OpenGL itself is still getting updates. The entity responsible for its standardization, the Khronos Group, has also effectively put it into legacy mode and focused its efforts on its 'replacement', Vulkan. As you may know, Vulkan is a very low-level API designed to let developers get the best performance from the hardware without a lot of driver overhead. OpenGL, by design and due to its long evolution, has built up a lot of cruft over the years; things have been tacked on, extension by extension, to add additional capability, but at the expense of a muddy API and drastically different performance characteristics across hardware vendors. The same OpenGL program that runs fine on AMD cards might run poorly on Intel or Nvidia cards, and you would have to be in the know to write different code paths, using different OpenGL features, to get the best performance depending on which card the program is running on.

So why is it sad that OpenGL is being left on the sidelines? Is there a future for it?

Apple have even gone their own route by designing their own API, Metal, which is macOS/iOS/iPadOS only. Many were hoping that they would adopt Vulkan as the one true API that we would all use, so that we would only have to learn one thing.

Enter the translation layer

Something very interesting has happened in the last few years: various so-called 'translation layers' have started to pop up, which take code written against one API and translate it into code that targets another. A couple of examples are ANGLE, which started life as a library that allowed OpenGL ES to run on top of DirectX, and MoltenVK and MoltenGL, which were created to allow Vulkan and OpenGL ES to run on Metal. What's amazing is that the Khronos Group has got behind these efforts and helped guide these projects under what it calls the 'portability initiative'. They are actively shepherding these projects towards effectively allowing anyone to write code in their API of choice and have it run almost anywhere, on any platform. Not all of these are complete and fully working yet, but a lot of them show promise.

Of particular mention is MoltenVK, which started its life as a commercial project. Thanks to what I believe is backing from the Khronos Group, it has become open source and is released as part of the LunarG Vulkan SDK. This brings macOS and its iOS counterparts back into the fold of cross-platform APIs. It is running Metal under the hood, and it does not offer 100% of the performance and capability of Vulkan (I believe that at the time of writing, Vulkan 1.1 was fully supported), but it is an amazing achievement.

Back to OpenGL

So where does this leave OpenGL? Well, in the case of macOS, there is the ANGLE project, which has been extended from its original goal of just OpenGL ES on DirectX to having backends for multiple APIs on multiple platforms. A Metal backend is being worked on, as well as a Vulkan backend, which means OpenGL ES 3.2 might be able to run on top of MoltenVK (which provides Vulkan on top of Metal). This is OpenGL ES though, not desktop OpenGL.

I Zink, therefore I am….

Even more recently, there has been some very exciting development over on the Mesa project side of things. If you don't know, Mesa is a project comprised of multiple components: mainly an OpenGL interface layer, or 'state tracker', plus many real open-source hardware drivers for running on actual cards, known as 'Gallium drivers'. There is even a software renderer (llvmpipe) which acts like a hardware driver to allow rendering on the CPU. This means that applications can write standard OpenGL code, link against the Mesa OpenGL library instead of the system or hardware vendor supplied one, and their code will run using Mesa.

Mesa has mainly been focused on Linux for the last few years, but it is technically possible to run the state tracker and software renderer on Windows and macOS.

The new development is the creation of a new 'driver', Zink, which is actually a translation layer that uses Vulkan as a backend. A developer, Mike Blumenkrantz, has been working furiously in his spare time to bring Zink's features up to OpenGL 4.6, with performance at around 95% of some native drivers that were tested. This is very encouraging, as it seems there are more performance gains to come, even though there is still some work to do on the compatibility side.

And hot off the press, some patches have recently started to come in which allow the use of MoltenVK as a back-back-end. This makes OpenGL → Vulkan → MoltenVK → Metal a valid and workable route to using OpenGL on macOS. In fact, being able to use OpenGL anywhere there is a Vulkan driver is amazing, and might mean that in future, new cards ship with only a Vulkan driver and, thanks to Zink, will still be able to run OpenGL applications out of the box, without all of the complicated driver issues that have plagued OpenGL for many years.

What will this bring us?

What I am hoping this will provide is a valid route for people who want to learn graphics programming: a way to get started that builds upon the years of work and educational material created over the last 25 years. It will let developers still write a simple OpenGL 1.1 application with glBegin() and glEnd(). It will then allow someone to experiment with basic vertex and pixel shaders without having to worry about separate SPIR-V compilers or libraries.

I know there are many arguments against OpenGL as an introduction to graphics programming, and that modern game engines like Unity and Unreal Engine should be where people start. I do agree with that. I am a huge fan of UE4 myself and do a lot of my creative hobby endeavours within UE4, but going from UE4 straight to Vulkan is a massive leap. There needs to be a middle ground, and I believe OpenGL is that middle ground. Specifically, Modern OpenGL, which encourages the AZDO ('Approaching Zero Driver Overhead') techniques that have become popular in recent years: Direct State Access, multi-draw indirect, efficient texture usage and manual synchronization for buffer updates.

For more information on these techniques, please see my tutorial series and blog posts on Modern OpenGL.

Who knows what the new Macs will do for graphics programming. I, for one, am excited about the power efficiency and performance they bring, and am hopeful that the operating system remains open enough for us to tinker and hack away at fun little programs, all in the name of learning for learning's sake.

I’m happy to see some healthy discussion on the topic. Thanks for reading.

Dan Dokipen Elliott Jones

Modern OpenGL: Part 3, Hello Triangle

There is quite a lot to learn and absorb in this chapter. Let's get stuck in!

Goals in this chapter

  • Draw our first triangle
  • Learn about vertex and fragment shaders
  • Passing Data from a vertex shader to a fragment shader
  • out and in qualifiers
  • How attributes are interpolated across triangles
  • compiling and linking shader programs
  • the glDrawArrays() function

Drawing a Triangle

That's why we are here, isn't it? To draw triangles? Well, drawing triangles by themselves isn't much fun, but you can make other things out of triangles, so we must walk before we can run.

OpenGL draws triangles by processing 3 vertices at a time.

We can draw triangles by calling the glDrawArrays() function. We have to tell it what type of geometry to draw (in this case GL_TRIANGLES), an offset parameter (which we will cover in a later chapter) and how many vertices to draw.
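To make those parameters concrete, here is a tiny plain-C++ sketch (not OpenGL code) of which vertex indices a glDrawArrays-style call processes: for a non-indexed draw, gl_VertexID runs over first, first+1, …, first+count-1.

```cpp
#include <vector>

// Mimics which gl_VertexID values glDrawArrays(mode, first, count) generates:
// the consecutive indices first, first+1, ..., first+count-1.
std::vector<int> vertexIDsForDraw(int first, int count) {
    std::vector<int> ids;
    for (int i = 0; i < count; ++i)
        ids.push_back(first + i);
    return ids;
}
```

So glDrawArrays(GL_TRIANGLES, 0, 6) processes vertices 0 through 5, which with GL_TRIANGLES become two triangles: one from vertices 0, 1, 2 and one from vertices 3, 4, 5.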

When we ask OpenGL to draw 3 vertices when the mode is set to GL_TRIANGLES, it will connect them up and fill in the middle with pixels.

Here we see 3 vertices that we have requested to be drawn by calling glDrawArrays. We haven't yet specified where those vertices are positioned in space (we will get to that very soon), but OpenGL assumes that they form a triangle, interprets them as such, and 'fills in' the middle for us.

The glDrawArrays call just instructs OpenGL to draw triangles and doesn't require that the positions be provided by your app yet. You can think of it as writing a 'blank cheque': you specify how many vertices you want, and the positions will be specified at a later time. How? We will get to that soon.

If we want to draw more than 1 triangle we can simply draw more vertices. 3 for every triangle.

Vertices 0, 1 & 2 form the first triangle, and vertices 3, 4 & 5 form the second. Notice how the triangles are defined by the counter-clockwise order of the points.
You might be wondering why points 0 & 3, and 2 & 4, are separate points if they share the same position. Isn't it a waste telling OpenGL to draw an extra vertex there? You are right! But for now we are going to draw two separate triangles and move on to reusing vertices in chapter 13.

Setting Vertex Positions

We need a way to tell OpenGL where these vertices are, so it knows where to draw them; a way to process each vertex and output a position. We do that using Shader Programs. These are actual programs that we write in a language called GLSL (the OpenGL Shading Language). They have a main function in which we set the position of the vertex. What does this program look like? Is it a big for loop that sets each vertex position one after the other? No, actually: it is a program that is run for each vertex individually. That's right, a little program runs once per vertex and sets the position for that vertex.

What does one of these programs look like?

Vertex Shader

#version 460 core

void main(){
    gl_Position = vec4(1.f, 2.71f, 3.14159f, 1.f);
}

Here we see that it looks very much like other C-family languages. The #version 460 core line tells OpenGL which version of the shading language we want this to be. New features are added in newer versions, so we want to make sure a certain feature set is available to us.

Then the main() function is what gets executed for each vertex. We set a special variable called gl_Position. This is the built-in vec4 variable that we have to write to to tell OpenGL where this particular vertex is positioned. OpenGL can then go on and draw the triangle using that position for that vertex. In this case, if we ran this program over all 3 vertices, the vertices would all be at a single point, so the triangle would have no area and wouldn't be visible. That's not good. We need a way to give each individual vertex its own unique position.

One way we could do that is to declare an array directly in the shader program itself.

#version 460 core

const vec4 positions[] = vec4[]( vec4(-0.5f, -0.7f,    0.0, 1.0), 
                                 vec4( 0.5f, -0.7f,    0.0, 1.0),    
                                 vec4( 0.0f,  0.6888f, 0.0, 1.0));

void main(){
    gl_Position = ???
}

But how do we index into that array for a particular vertex? Luckily, in this situation, OpenGL provides us with a read-only built-in integer variable called gl_VertexID, which is the index of the vertex being drawn. If we use that to index into the array, each vertex will run the program and get the right value written to gl_Position.

#version 460 core

const vec4 positions[] = vec4[]( vec4(-0.5f, -0.7f,    0.0, 1.0), 
                                 vec4( 0.5f, -0.7f,    0.0, 1.0),    
                                 vec4( 0.0f,  0.6888f, 0.0, 1.0));   

void main(){
    gl_Position = positions[gl_VertexID];
}

This diagram is a rough approximation of what is happening. We call glDrawArrays and tell it to draw 3 vertices; because the mode is GL_TRIANGLES, we get a single triangle for those 3 vertices. Each vertex then runs the shader program we wrote and gets its position set by indexing into the array. The triangle then gets rasterized, which is the stage where pixels are filled with colour values. But there are a couple of issues we need to consider…

  1. What colour will the pixels be set to?
    • Right now we haven’t stated what value the filled pixels will have. Can we even call them pixels yet?
  2. What if we have more than 1 triangle or want the values to change?
    • If we draw 6 vertices, or 9, or any amount, we would need a massive array in our vertex shader to index into. Also, we can't modify that array at the moment; it's statically compiled into the shader program. That isn't feasible. In chapter 5 we will tackle this issue (spoiler alert: we will be looking at vertex buffers).

For now, let's look at the first issue: what colour do we want the pixels to be, and how do we set it?

Setting Colour of Pixels

So we have sent our vertex positions, and the rasterizer has filled in where the triangle is going to be. But what has it filled? How can we colour the pixels if there aren't any yet? Well, that's where fragments come in. You can think of fragments as pixels that haven't been born yet. They are placeholders until we set their colour, after which various processes turn them into pixels (outside the scope of this chapter).

So we have our fragments, the tiny blank canvases which will become our pixels; we just need to tell OpenGL what colour they should be. We do that with something called a fragment shader.

Fragment Shader – https://www.khronos.org/opengl/wiki/Fragment_Shader

Just like vertex shaders, which ran a little program over every vertex, we can run a little program over every fragment to set its colour. Just like with vertex shaders, we do that by setting attributes. The attribute we set, though, is no longer a built-in like gl_Position was. We have to define an 'out' attribute of type vec4 that we will write to.

#version 460 core

out vec4 finalColor;

void main(){
    finalColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
}

This example shader sets the output colour of the fragment to pure white with an alpha value of 1 (fully opaque).

The values we write to out attributes are sent to destinations called framebuffers. This is a slightly advanced topic, and the order and type of output attributes from fragment shaders, and how they attach to framebuffers, won't be tackled now. All you need to know for now is that if you define one attribute of vec4 type, it will automatically be taken care of for you and rendered to the main 'colour' part of the framebuffer.

This example sets every fragment to the same colour. What if we want to set each fragment to a different colour? Setting the colour for each individual fragment could get messy if we do it like we did with the vertex shader, where we created an array in the shader itself. There are a few reasons why. First, we don’t know ahead of time how many fragments there are, as the developer of the application (hopefully you) can make the window any arbitrary size. Secondly, the array would have to be quite large, as there could be hundreds, thousands or even millions of fragments. Thirdly, how would each instance of the fragment program decide which entry in the array to index into? Before we used gl_VertexID, but there is no such equivalent for fragment shaders. (Well, actually, there kind of is, but I’m not going to dwell on it here. Message me if you want to know more.)

The alternative is to specify colours in the vertex shader and have the fragment shader receive those colours as an attribute. So let’s go back to our vertex shader and create an out attribute.

const char* vertexShaderSource = R"(
    #version 460 core
    out vec3 colour;

    const vec4 vertices[] = vec4[]( vec4(-0.5f, -0.7f,    0.0, 1.0), 
                                    vec4( 0.5f, -0.7f,    0.0, 1.0),    
                                    vec4( 0.0f,  0.6888f, 0.0, 1.0));   

    const vec3 colours[] = vec3[](  vec3( 1.0, 0.0, 0.0), 
                                    vec3( 0.0, 1.0, 0.0),    
                                    vec3( 0.0, 0.0, 1.0));   

    void main(){
        colour = colours[gl_VertexID];
        gl_Position = vertices[gl_VertexID];  
    }
)";

Line 3: Here we are explicitly telling OpenGL that for every vertex, we are setting a value that will be sent out of the vertex shader. In this case it is of type vec3 and we give it a name of our choosing, ‘colour’.

Lines 9-11: We are using the same trick as before by creating an array directly in the vertex shader.

Line 14: Then we use the same gl_VertexID built-in variable to index into that array to pick a particular value for that vertex.

Receiving attributes in the fragment shader

So we are sending it out of the vertex shader; what happens to the attribute now? Well, with the position attribute, OpenGL uses that information to know where to draw the triangle. For the colour attribute, by default it doesn’t do anything. We have to state in the fragment shader that…

  • there is an in attribute that it will have to deal with
  • how we want that attribute to colour the pixels

The ‘in’ attribute qualifier

The first thing we do is state in the fragment shader that we are expecting an in attribute.

const char* fragmentShaderSource = R"(
    #version 460 core

    in vec3 colour;
    out vec4 finalColor;

    void main() {
        finalColor = vec4(colour, 1.0);
    }
)";

Here, we state that the in attribute has type vec3 and the name colour. You might notice this is named exactly the same as the out variable from the vertex shader. This is intentional. OpenGL will automatically detect that there is a matching out attribute on the vertex shader and in attribute on the fragment shader, and figure out that they should be the source and target for each other.

There is another way to specify the association of the out and in attributes with special ‘layout qualifiers’. You don’t need to know about that just yet, but it’s worth planting a seed here so that later on I can introduce that concept and mention “remember when we associated attributes by name?! what were we thinking?!”

Attribute Interpolation

So we now have this attribute accessible in the fragment shader, but what value does it hold? By that I mean: when the triangle is rasterized, all the fragments in the middle of the triangle are filled in. Which vertex do they get their colour from? The nearest? When does it switch from one vertex colour to the next? The answer is attribute interpolation. As part of triangle rasterization, attributes that are specified per vertex are blended based on the fragment’s distance to each vertex. So a fragment at the exact centre of the triangle receives equal parts (one third each) of every vertex’s colour. If it is closer to one of the vertices, it receives a higher weighting of that vertex’s colour, and if the fragment is exactly at a vertex’s position, it receives the full vertex colour.

TODO: insert illustration of attribute interpolation

The final code for chapter 3

Now that we know a little more about how shader programs work and how data flows between them and finally ends up on the screen, let’s walk through the code for chapter 3 and see how it all fits together.

Here is the final code with the sections we haven’t changed collapsed (some parts have also been left expanded to help show the surrounding context).

… previous code (click to expand)
#include <array>
#include <chrono>     // current time
#include <cmath>      // sin & cos
#include <cstdlib>    // for std::exit()
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format

// this is really important to make sure that glbinding does not clash with
// glfw's opengl includes. otherwise we get ambiguous overloads.
#define GLFW_INCLUDE_NONE
#include <GLFW/glfw3.h>

#include <glbinding/gl/gl.h>
#include <glbinding/glbinding.h>

using namespace gl;
using namespace std::chrono;

int main() {

    auto startTime = system_clock::now();

    const auto windowPtr = []() {
… previous code (click to expand)
        if (!glfwInit()) {
            fmt::print("glfw didn't initialize!\n");
            std::exit(EXIT_FAILURE);
        }

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
        auto windowPtr = glfwCreateWindow(
            1280, 720, "Chapter 3 - Hello Triangle", nullptr, nullptr);
… previous code (click to expand)
        if (!windowPtr) {
            fmt::print("window doesn't exist\n");
            glfwTerminate();
            std::exit(EXIT_FAILURE);
        }
        glfwSetWindowPos(windowPtr, 520, 180);

        glfwMakeContextCurrent(windowPtr);
        glbinding::initialize(glfwGetProcAddress, false);
        return windowPtr;
  
    }();
 
    const char* vertexShaderSource = R"(
        #version 460 core
        out vec3 colour;

        const vec4 vertices[] = vec4[]( vec4(-0.5f, -0.7f,    0.0, 1.0), 
                                        vec4( 0.5f, -0.7f,    0.0, 1.0),    
                                        vec4( 0.0f,  0.6888f, 0.0, 1.0));   

        const vec3 colours[] = vec3[](  vec3( 1.0, 0.0, 0.0), 
                                        vec3( 0.0, 1.0, 0.0),    
                                        vec3( 0.0, 0.0, 1.0));   

        void main(){
            colour = colours[gl_VertexID];
            gl_Position = vertices[gl_VertexID];  
        }
    )";


This is the vertex shader that we will be telling OpenGL to use to

  • set the position of the vertices that will be used to rasterize the triangles
  • set the colour attribute that will be interpolated across the surface of the triangle.
    const char* fragmentShaderSource = R"(
        #version 460 core

        in vec3 colour;
        out vec4 finalColor;

        void main() {
            finalColor = vec4(colour, 1.0);
        }
    )";

This is the fragment shader that receives the interpolated attribute, which we use to set the colour of the fragment. In this case we pass the attribute value directly into the final out colour by constructing a vec4 whose first three components come from passing in a vec3 (our colour), plus a float for the opacity.

    auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
    glCompileShader(vertexShader);

When we were learning about shaders in the middle of this chapter, we didn’t say how to actually get the source code into OpenGL and tell it to use that program. That is what we will do here.

  1. We first have to create a shader object by calling the glCreateShader function and passing the argument GL_VERTEX_SHADER, which tells OpenGL that this object is going to be a, you guessed it, vertex shader. This isn’t the object that will do the work; it’s just a variable that represents the shader and allows us to uniquely reference it from our C++ code (it’s actually just an integer id).
  2. We then associate our source code with that shader object by calling glShaderSource and passing it the address of a C string.
  3. We have to compile the shader by calling glCompileShader. This is similar to how you compile C++ programs, except that the OpenGL driver compiles the shader while your C++ program is running.
    auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
    glCompileShader(fragmentShader);

We have to do the same for the fragment shader.

    auto program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);

The shader program isn’t ready just yet. We have to create a program object by calling glCreateProgram. Then we have to attach the vertex and fragment shaders to this program by calling glAttachShader(). This is all very boilerplate-y, isn’t it? Is that a word? I don’t know.

    glLinkProgram(program);
    glUseProgram(program);

Finally, we are almost ready to use the program. The second to last thing we need to do is link the program. Compiling is the process of taking the source code and turning it into machine code, but there is one final step to produce an executable, and that is to link the separate vertex and fragment shaders together. This is where all attributes are linked up and any final errors in the shaders are found.

Then we need to tell OpenGL to actually use the program. If we don’t it won’t know how to process the vertices and colour the fragments.

    // in core profile, at least 1 vao is needed
    GLuint vao;
    glCreateVertexArrays(1, &vao);
    glBindVertexArray(vao);

This is one step that I wish we didn’t have to do at this stage, as we haven’t learnt about VAOs yet and don’t need to know about them right now. But unfortunately, we need to have one of these in our application for OpenGL to be able to draw things. I hate asking people to do this, but ignore it for now. We will revisit it in chapter 5. Just copy and paste it into your application.

For those dying to know, VAOs are Vertex Array Objects, and they store bits of information about vertex data, type, format and location for us.
… previous code (click to expand)
    std::array<GLfloat, 4> clearColour;

    while (!glfwWindowShouldClose(windowPtr)) {

… previous code (click to expand)
        auto currentTime =
            duration<float>(system_clock::now() - startTime).count();
        clearColour = {std::sin(currentTime) * 0.5f + 0.5f,
                       std::cos(currentTime) * 0.5f + 0.5f, 0.2f, 1.0f};

        glClearBufferfv(GL_COLOR, 0, clearColour.data());

        glDrawArrays(GL_TRIANGLES, 0, 3);

Now that we have told OpenGL what shader program to use, we just have to tell it to draw a triangle by calling the glDrawArrays() function with 3 vertices, and you should see a triangle!

… previous code (click to expand)
        glfwSwapBuffers(windowPtr);
        glfwPollEvents();
    }

    glfwTerminate();
}

That’s the end of this chapter! Phew! That was actually a lot of information, and you have just ingested a massive bulk of what OpenGL is all about. No doubt you may have experienced a bug here and there due to typos or mismatched attributes in your shaders. If something went wrong in your C++ program code, hopefully the C++ compiler was able to give you some kind of indication of where or what the error was. But how do we know if our shader is compiling and linking correctly? You might be getting a black triangle, or even no triangle at all, and not know why. That can be VERY frustrating, let me tell you (you probably know). I’ve been up at 2am many times trying to get OpenGL to render a triangle, not knowing why it isn’t working.

In the next chapter, we will see how we can easily enable some built in debugging features in OpenGL so that if something does go wrong, we can at least be notified about it and even maybe be told what is wrong. Catch you later.

Dokipen