
Modern OpenGL: Part 10 Sending Uniform Parameters to Shaders

So far, the shader programs we have been using in previous posts have been static. By that, I mean that you compile them and all of the variables in the shader are fixed and never change (the only thing that can change is the vertex data itself). In this part of our journey of learning opengl, we are going to control some parameters in the shader program so that we can change values over time, by sending them into the shader from our c++ code every frame. These are basically variables that we can tweak, and they are very fast and easy to change compared with having to change vertex data.

The way we do this is by using “uniform parameters”. These are variables in the shader that are exposed to the external c++ code and can be set dynamically. They are called uniform because they have the same value across all invocations of the shader, on all of your vertices/fragments.

There isn’t much we actually need to do to make this work. First, let’s deal with what we have to do in our shader. Previously, some of our shader variables were simply set to a hard coded value in the program itself. Now we need to declare them as uniform. As we just said, the word uniform describes how the value is going to be the same for all vertices/fragments that the shader program operates on. Instead of varying (different per vertex/fragment, like attributes), it is the same for each vertex/fragment invocation.

… previous code
#include "error_handling.hpp"
#include <array>
#include <chrono>     // current time
#include <cmath>      // sin & cos
#include <cstdlib>    // for std::exit()
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format
#include <unordered_map>

// this is really important to make sure that glbinding does not clash with
// glfw's opengl includes. otherwise we get ambiguous overloads.
#define GLFW_INCLUDE_NONE
#include <GLFW/glfw3.h>

#include <glbinding/gl/gl.h>
#include <glbinding/glbinding.h>

#include <glbinding-aux/debug.h>

#include "glm/glm.hpp"

using namespace gl;
using namespace std::chrono;

int main() {

    auto startTime = system_clock::now();
    const int width = 1280;
    const int height = 720;

    auto windowPtr = [](int w, int h) {
        if (!glfwInit()) {
            fmt::print("glfw didn't initialize!\n");
            std::exit(EXIT_FAILURE);
        }

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);

        /* Create a windowed mode window and its OpenGL context */
        auto windowPtr = glfwCreateWindow(w, h, "Chapter 10 - Sending Uniform Parameters to Shaders",
                                       nullptr, nullptr);

        if (!windowPtr) {
            fmt::print("window doesn't exist\n");
            glfwTerminate();
            std::exit(EXIT_FAILURE);
        }
        glfwSetWindowPos(windowPtr, 520, 180);

        glfwMakeContextCurrent(windowPtr);
        glbinding::initialize(glfwGetProcAddress, false);
        return windowPtr;
    }(width, height);
    // debugging
    {
        glEnable(GL_DEBUG_OUTPUT);
        glDebugMessageCallback(errorHandler::MessageCallback, 0);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
        glDebugMessageControl(GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_OTHER,
                              GL_DEBUG_SEVERITY_NOTIFICATION, 0, nullptr, false);
    }

    auto createProgram = [](const char* vertexShaderSource,
                            const char* fragmentShaderSource) -> GLuint {
        auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
        glCompileShader(vertexShader);
        errorHandler::checkShader(vertexShader, "Vertex");

        auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
        glCompileShader(fragmentShader);
        errorHandler::checkShader(fragmentShader, "Fragment");

        auto program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);

        glLinkProgram(program);
        return program;
    };

    auto program = createProgram(R"(
            #version 450 core

            const vec4 vertices[] = vec4[]( vec4(-1.f, -1.f, 0.0, 1.0),
                                        vec4( 3.f, -1.f, 0.0, 1.0),    
                                        vec4(-1.f,  3.f, 0.0, 1.0));    

            void main(){
                gl_Position = vertices[gl_VertexID]; 
            }
        )",
                                 R"(
        #version 450 core

        // The MIT License
        // Copyright © 2013 Inigo Quilez
        // Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
        // associated documentation files (the "Software"), to deal in the Software without restriction,
        // including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
        // and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
        // subject to the following conditions: The above copyright notice and this permission notice shall be
        // included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS",
        // WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
        // MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
        // COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
        // TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
        // IN THE SOFTWARE.
        //
        // I've not seen anybody out there computing correct cell interior distances for Voronoi
        // patterns yet. That's why they cannot shade the cell interior correctly, and why you've
        // never seen cell boundaries rendered correctly.
        //
        // However, here's how you do mathematically correct distances (note the equidistant and non
        // degenerated grey isolines inside the cells) and hence edges (in yellow):
        //
        // http://www.iquilezles.org/www/articles/voronoilines/voronoilines.htm
        //
        // More Voronoi shaders:
        //
        // Exact edges:  https://www.shadertoy.com/view/ldl3W8
        // Hierarchical: https://www.shadertoy.com/view/Xll3zX
        // Smooth:       https://www.shadertoy.com/view/ldB3zc
        // Voronoise:    https://www.shadertoy.com/view/Xd23Dh
   
        out vec4 fragColor;
        vec4 fragCoord  = gl_FragCoord;
        vec2 iMouse = vec2(960.f,0.f);



        

What we have done here is create the same declarations of the variables that the shader program requires (previously provided by shadertoy behind the scenes), but now we want two of them (iTime and iResolution) to be exposed to, and controlled by, our c++ code. Here is how we do that…

        uniform float iTime;
        uniform vec2 iResolution;
        

We have declared them as uniform variables. This is pretty much like declaring any other variable, except with the “uniform” qualifier in front. When the shader gets compiled, the compiler takes that into account and provides a way for us to set the value of that variable any time we want.

… previous code
        mat2 rot(in float a){float c = cos(a), s = sin(a);return mat2(c,s,-s,c);}
        const mat3 m3 = mat3(0.33338, 0.56034, -0.71817, -0.87887, 0.32651, -0.15323, 0.15162, 0.69596, 0.61339)*1.93;
        float mag2(vec2 p){return dot(p,p);}
        float linstep(in float mn, in float mx, in float x){ return clamp((x - mn)/(mx - mn), 0., 1.); }
        float prm1 = 0.;
        vec2 bsMo = vec2(0);

        vec2 disp(float t){ return vec2(sin(t*0.22)*1., cos(t*0.175)*1.)*2.; }

        vec2 map(vec3 p)
        {
            vec3 p2 = p;
            p2.xy -= disp(p.z).xy;
            p.xy *= rot(sin(p.z+iTime)*(0.1 + prm1*0.05) + iTime*0.09);
            float cl = mag2(p2.xy);
            float d = 0.;
            p *= .61;
            float z = 1.;
            float trk = 1.;
            float dspAmp = 0.1 + prm1*0.2;
            for(int i = 0; i < 5; i++)
            {
                p += sin(p.zxy*0.75*trk + iTime*trk*.8)*dspAmp;
                d -= abs(dot(cos(p), sin(p.yzx))*z);
                z *= 0.57;
                trk *= 1.4;
                p = p*m3;
            }
            d = abs(d + prm1*3.)+ prm1*.3 - 2.5 + bsMo.y;
            return vec2(d + cl*.2 + 0.25, cl);
        }

        vec4 render( in vec3 ro, in vec3 rd, float time )
        {
            vec4 rez = vec4(0);
            const float ldst = 8.;
            vec3 lpos = vec3(disp(time + ldst)*0.5, time + ldst);
            float t = 1.5;
            float fogT = 0.;
            for(int i=0; i<130; i++)
            {
                if(rez.a > 0.99)break;

                vec3 pos = ro + t*rd;
                vec2 mpv = map(pos);
                float den = clamp(mpv.x-0.3,0.,1.)*1.12;
                float dn = clamp((mpv.x + 2.),0.,3.);
                
                vec4 col = vec4(0);
                if (mpv.x > 0.6)
                {
                
                    col = vec4(sin(vec3(5.,0.4,0.2) + mpv.y*0.1 +sin(pos.z*0.4)*0.5 + 1.8)*0.5 + 0.5,0.08);
                    col *= den*den*den;
                    col.rgb *= linstep(4.,-2.5, mpv.x)*2.3;
                    float dif =  clamp((den - map(pos+.8).x)/9., 0.001, 1. );
                    dif += clamp((den - map(pos+.35).x)/2.5, 0.001, 1. );
                    col.xyz *= den*(vec3(0.005,.045,.075) + 1.5*vec3(0.033,0.07,0.03)*dif);
                }
                
                float fogC = exp(t*0.2 - 2.2);
                col.rgba += vec4(0.06,0.11,0.11, 0.1)*clamp(fogC-fogT, 0., 1.);
                fogT = fogC;
                rez = rez + col*(1. - rez.a);
                t += clamp(0.5 - dn*dn*.05, 0.09, 0.3);
            }
            return clamp(rez, 0.0, 1.0);
        }

        float getsat(vec3 c)
        {
            float mi = min(min(c.x, c.y), c.z);
            float ma = max(max(c.x, c.y), c.z);
            return (ma - mi)/(ma+ 1e-7);
        }

        //from my "Will it blend" shader (https://www.shadertoy.com/view/lsdGzN)
        vec3 iLerp(in vec3 a, in vec3 b, in float x)
        {
            vec3 ic = mix(a, b, x) + vec3(1e-6,0.,0.);
            float sd = abs(getsat(ic) - mix(getsat(a), getsat(b), x));
            vec3 dir = normalize(vec3(2.*ic.x - ic.y - ic.z, 2.*ic.y - ic.x - ic.z, 2.*ic.z - ic.y - ic.x));
            float lgt = dot(vec3(1.0), ic);
            float ff = dot(dir, normalize(ic));
            ic += 1.5*dir*sd*ff*lgt;
            return clamp(ic,0.,1.);
        }

        void main() 
        {	
            vec2 q = fragCoord.xy/iResolution.xy;
            vec2 p = (gl_FragCoord.xy - 0.5*iResolution.xy)/iResolution.y;
            bsMo = (iMouse.xy - 0.5*iResolution.xy)/iResolution.y;
            
            float time = iTime*3.;
            vec3 ro = vec3(0,0,time);
            
            ro += vec3(sin(iTime)*0.5,sin(iTime*1.)*0.,0);
                
            float dspAmp = .85;
            ro.xy += disp(ro.z)*dspAmp;
            float tgtDst = 3.5;
            
            vec3 target = normalize(ro - vec3(disp(time + tgtDst)*dspAmp, time + tgtDst));
            ro.x -= bsMo.x*2.;
            vec3 rightdir = normalize(cross(target, vec3(0,1,0)));
            vec3 updir = normalize(cross(rightdir, target));
            rightdir = normalize(cross(updir, target));
            vec3 rd=normalize((p.x*rightdir + p.y*updir)*1. - target);
            rd.xy *= rot(-disp(time + 3.5).x*0.2 + bsMo.x);
            prm1 = smoothstep(-0.4, 0.4,sin(iTime*0.3));
            vec4 scn = render(ro, rd, time);
                
            vec3 col = scn.rgb;
            col = iLerp(col.bgr, col.rgb, clamp(1.-prm1,0.05,1.));
            
            col = pow(col, vec3(.55,0.65,0.6))*vec3(1.,.97,.9);

            col *= pow( 16.0*q.x*q.y*(1.0-q.x)*(1.0-q.y), 0.12)*0.7+0.3; //Vign
            
            fragColor = vec4( col, 1.0 );
        }
    )");



    GLuint vao;
    glCreateVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glUseProgram(program);

    int timeUniformLocation = glGetUniformLocation(program, "iTime");
    int resolutionUniformLocation = glGetUniformLocation(program, "iResolution");

Now, in our c++ code, we need a way to refer to the location of that variable in the shader. This is done with the glGetUniformLocation function call. We ask the shader “program” where its “iTime” variable is, and OpenGL returns an integer location: something like 0, 1, 2, etc., depending on what OpenGL itself decided it would be. The actual value doesn’t really matter, because we store it in a variable and never need to know what it is. We just need to know that we can use that integer ‘location’ later on to set the value of the variable in the shader code.
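One practical note: if the name can’t be found, or the compiler optimized the uniform away because nothing in the shader uses it, glGetUniformLocation returns -1. A minimal guard could look like this (the warning message is just illustrative):

```cpp
int timeUniformLocation = glGetUniformLocation(program, "iTime");
if (timeUniformLocation == -1) {
    // the uniform wasn't found: either the name is misspelled, or the
    // compiler removed it because it is never used in the shader
    fmt::print("warning: uniform 'iTime' not found in program\n");
}
```

Setting a uniform at location -1 is silently ignored by OpenGL, so a missing uniform won’t crash anything; it just quietly does nothing, which is exactly why a warning like this is handy.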

Previously, we used binding locations inside the shaders themselves as a way to refer to a location within a shader; I thought this was a good opportunity to show you an alternative. Querying locations is often viewed as an older way of doing things, but I like it and still think it has its uses, so you can decide for yourself which method you want to use.
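For comparison, the explicit-location style looks like this on the shader side (the location number 7 is an arbitrary choice for illustration; explicit uniform locations require #version 430 or later, which our #version 450 shaders satisfy):

```glsl
layout(location = 7) uniform float iTime;
```

With that qualifier in place, the c++ side can skip glGetUniformLocation entirely and call glProgramUniform1f(program, 7, currentTime); directly, since both sides have agreed on the location up front.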

So now that we have the location, we can use it to set the value of the shader variable at that location. We do that with a family of functions that all start with

glProgramUniform* (the star/wildcard stands for a suffix that encodes the type of the value we want to set).

So for example, if we want to change the uniform variable that has a type in the shader that is a vec2, then we want to use the glProgramUniform2f function. And for a uniform variable that is a single ‘float’, then we want to use glProgramUniform1f.

    glProgramUniform2f(program, resolutionUniformLocation, width, height);


    while (!glfwWindowShouldClose(windowPtr)) {

        auto currentTime = duration<float>(system_clock::now() - startTime).count();
        glProgramUniform1f(program, timeUniformLocation, currentTime);

        // draw full screen triangle
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glfwSwapBuffers(windowPtr);
        glfwPollEvents();
    }

    glfwTerminate();
}

Here, just outside our main loop, we are setting the shader variable using the location “resolutionUniformLocation” that we previously got from asking where the “iResolution” variable was in the shader. We pass the two float values in as arguments to the function and they get passed into the shader for us. This function is called outside the loop because we only really need to set it once. But it still needs to be called, because the resolution might have been set at program start-up based on some user input (like command line parameters) and we still need to tell the shader what it is. If the window were resized and we had written a function to handle that (which glfw allows us to do, although we haven’t explored it yet), then we would be able to update the shader accordingly.
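As a taste of what such a resize handler could look like: here is a sketch using GLFW's framebuffer-size callback. glfwSetFramebufferSizeCallback and glfwSetWindowUserPointer are real GLFW functions, but the ResizeData struct and the exact wiring are my own illustration, not code from this series:

```cpp
// stash what the callback needs; this object must outlive the render loop
struct ResizeData {
    GLuint program;
    GLint resolutionLocation;
};
ResizeData resizeData{program, resolutionUniformLocation};
glfwSetWindowUserPointer(windowPtr, &resizeData);

glfwSetFramebufferSizeCallback(windowPtr, [](GLFWwindow* window, int w, int h) {
    auto* data = static_cast<ResizeData*>(glfwGetWindowUserPointer(window));
    glViewport(0, 0, w, h); // keep the rasterized area in sync with the window
    glProgramUniform2f(data->program, data->resolutionLocation,
                       static_cast<float>(w), static_cast<float>(h));
});
```

The capture-less lambda converts to the plain function pointer GLFW expects, which is why the per-window state travels through the user pointer rather than a lambda capture.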

Then we call glProgramUniform1f inside the loop. This is because we want to update the value of “iTime” in the shader with the current runtime of the program; it is realtime data that needs to be refreshed every frame. It is “1f” because the uniform is a single float.

That is pretty much it then for uniform variables. In the next chapter we will be tackling the ultimate graphics programming rite of passage… writing an obj loader from scratch to be able to load meshes into our programs and have them rendered on the screen!

Modern OpenGL: Part 9 Full Screen Effects for a DIY Shadertoy Project

We are going to be taking a couple of chapters to step away from the mechanical process of just managing the feeding of vertex data into shaders and have a bit of fun by rendering some full screen shader effects.

The idea for this comes from the Shadertoy website that hosts shaders that render interesting effects and scenes using just fragment shaders and does it in the web browser! Really what is happening is that a full screen triangle is being drawn (very much like the background gradient triangle that we drew in the previous chapter) and then the fragment program is being run for every pixel in the screen. Each invocation of the fragment program determines the colour of the pixel, so some very interesting and complex effects can be achieved with some fancy math.

For our program that we are compiling locally on our own machine, all we really have to do is draw a triangle and assign a fragment shader that contains the code from a shadertoy example, and we should get the same results.

We aren’t going to be learning about the specific shadertoy example and the maths behind it in this chapter. We will save that for a future time. We are instead going to be focusing on how we can use what we have already learnt to do something that is fun.

This is the shadertoy example we are going to be running in our program. The basic idea, though, is that it should technically be possible to take any glsl code, wherever it comes from, and run it in our program.

https://www.shadertoy.com/view/3l23Rh#

We can’t copy and paste the shader code directly, though. The code references certain variables that need to be set for it to work. When the shader is running in the browser, the shadertoy page code is responsible for making sure those variables are present. We have to do that manually here. We will cover it when we get to that point in the shader.

… previous code
#include "error_handling.hpp"
#include <array>
#include <chrono>     // current time
#include <cmath>      // sin & cos
#include <cstdlib>    // for std::exit()
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format
#include <unordered_map>

// this is really important to make sure that glbinding does not clash with
// glfw's opengl includes. otherwise we get ambiguous overloads.
#define GLFW_INCLUDE_NONE
#include <GLFW/glfw3.h>

#include <glbinding/gl/gl.h>
#include <glbinding/glbinding.h>

#include <glbinding-aux/debug.h>

#include "glm/glm.hpp"

using namespace gl;
using namespace std::chrono;

int main() {

    auto startTime = system_clock::now();
    const int width = 1600;
    const int height = 900;

    auto windowPtr = [&]() {
        if (!glfwInit()) {
            fmt::print("glfw didn't initialize!\n");
            std::exit(EXIT_FAILURE);
        }

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 5);

        /* Create a windowed mode window and its OpenGL context */
        auto windowPtr = glfwCreateWindow(width, height, "Chapter 9 - Full Screen Effects (Diy Shadertoy!)",
                                       nullptr, nullptr);

        if (!windowPtr) {
            fmt::print("window doesn't exist\n");
            glfwTerminate();
            std::exit(EXIT_FAILURE);
        }
        glfwSetWindowPos(windowPtr, 520, 180);
        glfwMakeContextCurrent(windowPtr);
        glbinding::initialize(glfwGetProcAddress, false);
        return windowPtr;
    }();

    // debugging
    {
        glEnable(GL_DEBUG_OUTPUT);
        glDebugMessageCallback(errorHandler::MessageCallback, 0);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
        glDebugMessageControl(GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_OTHER,
                              GL_DEBUG_SEVERITY_NOTIFICATION, 0, nullptr, false);
    }

    auto createProgram = [](const char* vertexShaderSource,
                            const char* fragmentShaderSource) -> GLuint {
        auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
        glCompileShader(vertexShader);
        errorHandler::checkShader(vertexShader, "Vertex");

        auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
        glCompileShader(fragmentShader);
        errorHandler::checkShader(fragmentShader, "Fragment");

        auto program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);

        glLinkProgram(program);
        return program;
    };

auto program = createProgram(R"(
        #version 450 core

        const vec4 vertices[] = vec4[]( vec4(-1.f, -1.f, 0.0, 1.0),
                                        vec4( 3.f, -1.f, 0.0, 1.0),    
                                        vec4(-1.f,  3.f, 0.0, 1.0));    

        void main(){
            gl_Position = vertices[gl_VertexID]; 
        }
    )",

Here we start calling the createProgram() function that we created in the earlier chapters, passing it the vertex shader string. This is the same as chapter 8, where we drew a fullscreen triangle (we store the positions of 3 vertices, which we draw with a call to glDrawArrays() using GL_TRIANGLES and a count of 3).

The next argument to the createProgram function is our fragment shader string. This is the code that we have copied from shadertoy.

 
                                R"(
            
        // Protean clouds by nimitz (twitter: @stormoid)
    // https://www.shadertoy.com/view/3l23Rh
    // License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License
    // Contact the author for other licensing options

    /*
        Technical details:

        The main volume noise is generated from a deformed periodic grid, which can produce
        a large range of noise-like patterns at very cheap evalutation cost. Allowing for multiple
        fetches of volume gradient computation for improved lighting.

        To further accelerate marching, since the volume is smooth, more than half the the density
        information isn't used to rendering or shading but only as an underlying volume	distance to 
        determine dynamic step size, by carefully selecting an equation	(polynomial for speed) to 
        step as a function of overall density (not necessarialy rendered) the visual results can be 
        the	same as a naive implementation with ~40% increase in rendering performance.

        Since the dynamic marching step size is even less uniform due to steps not being rendered at all
        the fog is evaluated as the difference of the fog integral at each rendered step.

    */

Make sure to adhere to the license of any code you use!

Next, there are some things we need to add to the shader at this point to make it work.

First, we have to declare an output variable in the fragment shader; this is where the fragment colour will be written.

    
    out vec4 fragColor;

Shadertoy also provides a few built in variables that an author of a shader is able to use to figure out things like

  • what the current elapsed time is since the rendering began
  • what the resolution of the shader screen/window is
  • where the mouse is clicking

We have to provide those to the shader, otherwise those variables won’t be defined and it won’t compile. In this case we have to make sure that the value of the resolution matches the actual resolution of our opengl window. (We will see in the next chapter how we can ‘tell’ the shader what the resolution is without having to explicitly set it in the code.)

    
    float iTime = 1.14f;
    vec2 iResolution = vec2(1600, 900);
    vec2 iMouse = vec2(960.f,0.f);

Shadertoy also provides a variable called fragCoord, which is a renamed version of gl_FragCoord. This GLSL built-in variable tells you the current pixel’s screen coordinate in terms of the resolution (it’s a vec4, but we are only interested in the first two components). Normally it can be used as is, but for some reason this shader uses another name for it. We could go in and fix that throughout the code, but it’s just as easy to add this at the top…

    
    vec4 fragCoord  = gl_FragCoord;

Now this is the rest of the fragment shader…

    mat2 rot(in float a){float c = cos(a), s = sin(a);return mat2(c,s,-s,c);}
    const mat3 m3 = mat3(0.33338, 0.56034, -0.71817, -0.87887, 0.32651, -0.15323, 0.15162, 0.69596, 0.61339)*1.93;
    float mag2(vec2 p){return dot(p,p);}
    float linstep(in float mn, in float mx, in float x){ return clamp((x - mn)/(mx - mn), 0., 1.); }
    float prm1 = 0.;
    vec2 bsMo = vec2(0);

    vec2 disp(float t){ return vec2(sin(t*0.22)*1., cos(t*0.175)*1.)*2.; }

    vec2 map(vec3 p)
    {
        vec3 p2 = p;
        p2.xy -= disp(p.z).xy;
        p.xy *= rot(sin(p.z+iTime)*(0.1 + prm1*0.05) + iTime*0.09);
        float cl = mag2(p2.xy);
        float d = 0.;
        p *= .61;
        float z = 1.;
        float trk = 1.;
        float dspAmp = 0.1 + prm1*0.2;
        for(int i = 0; i < 5; i++)
        {
            p += sin(p.zxy*0.75*trk + iTime*trk*.8)*dspAmp;
            d -= abs(dot(cos(p), sin(p.yzx))*z);
            z *= 0.57;
            trk *= 1.4;
            p = p*m3;
        }
        d = abs(d + prm1*3.)+ prm1*.3 - 2.5 + bsMo.y;
        return vec2(d + cl*.2 + 0.25, cl);
    }

    vec4 render( in vec3 ro, in vec3 rd, float time )
    {
        vec4 rez = vec4(0);
        const float ldst = 8.;
        vec3 lpos = vec3(disp(time + ldst)*0.5, time + ldst);
        float t = 1.5;
        float fogT = 0.;
        for(int i=0; i<130; i++)
        {
            if(rez.a > 0.99)break;

            vec3 pos = ro + t*rd;
            vec2 mpv = map(pos);
            float den = clamp(mpv.x-0.3,0.,1.)*1.12;
            float dn = clamp((mpv.x + 2.),0.,3.);
            
            vec4 col = vec4(0);
            if (mpv.x > 0.6)
            {
            
                col = vec4(sin(vec3(5.,0.4,0.2) + mpv.y*0.1 +sin(pos.z*0.4)*0.5 + 1.8)*0.5 + 0.5,0.08);
                col *= den*den*den;
                col.rgb *= linstep(4.,-2.5, mpv.x)*2.3;
                float dif =  clamp((den - map(pos+.8).x)/9., 0.001, 1. );
                dif += clamp((den - map(pos+.35).x)/2.5, 0.001, 1. );
                col.xyz *= den*(vec3(0.005,.045,.075) + 1.5*vec3(0.033,0.07,0.03)*dif);
            }
            
            float fogC = exp(t*0.2 - 2.2);
            col.rgba += vec4(0.06,0.11,0.11, 0.1)*clamp(fogC-fogT, 0., 1.);
            fogT = fogC;
            rez = rez + col*(1. - rez.a);
            t += clamp(0.5 - dn*dn*.05, 0.09, 0.3);
        }
        return clamp(rez, 0.0, 1.0);
    }

    float getsat(vec3 c)
    {
        float mi = min(min(c.x, c.y), c.z);
        float ma = max(max(c.x, c.y), c.z);
        return (ma - mi)/(ma+ 1e-7);
    }

    //from my "Will it blend" shader (https://www.shadertoy.com/view/lsdGzN)
    vec3 iLerp(in vec3 a, in vec3 b, in float x)
    {
        vec3 ic = mix(a, b, x) + vec3(1e-6,0.,0.);
        float sd = abs(getsat(ic) - mix(getsat(a), getsat(b), x));
        vec3 dir = normalize(vec3(2.*ic.x - ic.y - ic.z, 2.*ic.y - ic.x - ic.z, 2.*ic.z - ic.y - ic.x));
        float lgt = dot(vec3(1.0), ic);
        float ff = dot(dir, normalize(ic));
        ic += 1.5*dir*sd*ff*lgt;
        return clamp(ic,0.,1.);
    }

One other thing we have to do in our copy of the shadertoy example is change the name of the mainImage function in line 112 of the original source to main(). This is because GLSL expects a main() function as the entry point of a shader.

    void main() // previously void mainImage( out vec4 fragColor, in vec2 fragCoord )
    {	


Now we paste in the rest of the main function from shadertoy…

        vec2 q = fragCoord.xy/iResolution.xy;
        vec2 p = (gl_FragCoord.xy - 0.5*iResolution.xy)/iResolution.y;
        bsMo = (iMouse.xy - 0.5*iResolution.xy)/iResolution.y;
        
        float time = iTime*3.;
        vec3 ro = vec3(0,0,time);
        
        ro += vec3(sin(iTime)*0.5,sin(iTime*1.)*0.,0);
            
        float dspAmp = .85;
        ro.xy += disp(ro.z)*dspAmp;
        float tgtDst = 3.5;
        
        vec3 target = normalize(ro - vec3(disp(time + tgtDst)*dspAmp, time + tgtDst));
        ro.x -= bsMo.x*2.;
        vec3 rightdir = normalize(cross(target, vec3(0,1,0)));
        vec3 updir = normalize(cross(rightdir, target));
        rightdir = normalize(cross(updir, target));
        vec3 rd=normalize((p.x*rightdir + p.y*updir)*1. - target);
        rd.xy *= rot(-disp(time + 3.5).x*0.2 + bsMo.x);
        prm1 = smoothstep(-0.4, 0.4,sin(iTime*0.3));
        vec4 scn = render(ro, rd, time);
            
        vec3 col = scn.rgb;
        col = iLerp(col.bgr, col.rgb, clamp(1.-prm1,0.05,1.));
        
        col = pow(col, vec3(.55,0.65,0.6))*vec3(1.,.97,.9);

        col *= pow( 16.0*q.x*q.y*(1.0-q.x)*(1.0-q.y), 0.12)*0.7+0.3; //Vign
        
        fragColor = vec4( col, 1.0 );
    }
    )");
It’s perfectly reasonable to want this shader code to live in a text file on disk and be loaded when we run our c++ program. That is definitely a useful way to go about it, and if you are writing your own application it might be the way to go, especially if you edit the vertex/fragment shaders often and don’t want to recompile your c++ program every time.

Right now, if we change the shader code, we have to recompile our c++ program to embed the shader source into the binary. Not a very efficient way of working! But this is for educational purposes, so we shall keep it like this for readability: I find that if the code reads as linearly as possible, the learning isn’t as clouded, because you don’t have to jump around too much.
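If you do want to go the file-on-disk route, a minimal sketch could look like this (the helper name and the shader path in the usage note are made up for the example):

```cpp
#include <fstream>
#include <sstream>
#include <string>

// read an entire text file (e.g. a .vert/.frag shader) into a string;
// returns an empty string if the file couldn't be opened
std::string loadShaderSource(const std::string& path) {
    std::ifstream file(path);
    if (!file) {
        return "";
    }
    std::stringstream buffer;
    buffer << file.rdbuf(); // slurp the whole stream in one go
    return buffer.str();
}
```

You would then call something like loadShaderSource("shaders/clouds.frag") and pass the result’s .c_str() into createProgram in place of the raw string literal.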
   
    glUseProgram(program);

    GLuint vao;
    glCreateVertexArrays(1, &vao);
    glBindVertexArray(vao);

    while (!glfwWindowShouldClose(windowPtr)) {

        // draw full screen triangle
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glfwSwapBuffers(windowPtr);
        glfwPollEvents();
    }

    glfwTerminate();
}

This last piece of the code is just our standard render loop. You can see it’s even simpler now, as we don’t have to clear the screen (we know the triangle is going to draw over every pixel).

Isn’t that just a lovely thing to see? Next chapter, we will learn how to communicate values from our c++ program into our shader program through the use of “Uniform” shader variables.

Modern OpenGL: Part 8 Switching Multiple Buffer Objects to draw a Gradient Background

In this chapter we will be seeing how we can use multiple buffers that contain different meshes and switch between them. The use case in this example is that we want to draw a gradient background in our window. This also introduces a new technique for drawing a full screen mesh that will be useful for the next chapter (where we implement a shadertoy-like GLSL shader viewer).

I’m going to preface this chapter with the advice that you should try to bunch as many of your meshes as possible into a single buffer. Even if they are different meshes, the vertices can still live together in the same buffer. At the moment, you might be wondering how we can do that and still draw objects that have different textures and shaders, etc. Well, there are techniques (which we will cover in future chapters) that show you how to do this. The basic idea is that you can store the vertices in one big buffer, but nothing says you have to render all of them; you can draw a subrange quite easily.
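For example, when two meshes share one buffer, the first/count arguments of glDrawArrays already give you that subrange (the vertex counts here are made up for illustration):

```cpp
// one buffer: mesh A occupies vertices [0, 36), mesh B occupies [36, 60)
glDrawArrays(GL_TRIANGLES, 0, 36);  // draw only mesh A
glDrawArrays(GL_TRIANGLES, 36, 24); // draw only mesh B, from the same buffer
```

No rebinding happens between the two calls; the draw range alone selects which mesh comes out.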

However, for now it might be that you absolutely need two separate buffers and want to switch between them. We have seen that VAOs are the mechanism that marshals the data from buffers to shaders. There are two ways to switch buffers…

  1. call glVertexArrayVertexBuffer() between draw calls to change the buffer object on the currently bound VAO. This is only possible if the vertex format is the same across buffers. If the vertex format/layout changes, then this won’t work. Of course you could also call glVertexArrayAttribFormat() and glVertexArrayAttribBinding() on the currently bound VAO, but this kind of defeats the purpose of the VAO, which is essentially a wrapper that saves you from calling these functions every time you want to switch between buffers with different formats/layouts.
  2. Bind a completely different VAO. This is recommended if your vertex format changes.

In this case, we are going to be using method 1, where we will have a single VAO that shares the vertex format, but we will switch the buffer with a call to glVertexArrayVertexBuffer().

Preparing the Data

Here we are creating our two c++ arrays that hold the data that will go into the two buffers.

… previous code (click to expand)
#include "error_handling.hpp"
#include <array>
#include <chrono>     // current time
#include <cmath>      // sin & cos
#include <cstdlib>    // for std::exit()
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format
#include <unordered_map>

// this is really important to make sure that glbindings does not clash with
// glfw's opengl includes. otherwise we get ambigous overloads.
#define GLFW_INCLUDE_NONE
#include <GLFW/glfw3.h>

#include <glbinding/gl/gl.h>
#include <glbinding/glbinding.h>

#include <glbinding-aux/debug.h>

#include "glm/glm.hpp"

using namespace gl;
using namespace std::chrono;

int main() {

    auto startTime = system_clock::now();

    const auto windowPtr = []() {
        if (!glfwInit()) {
            fmt::print("glfw didnt initialize!\n");
            std::exit(EXIT_FAILURE);
        }

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);

        auto windowPtr = glfwCreateWindow(1280, 720, "Chapter 8 - Multiple Vertex Array Objects", nullptr, nullptr);

        if (!windowPtr) {
            fmt::print("window doesn't exist\n");
            glfwTerminate();
            std::exit(EXIT_FAILURE);
        }
        glfwSetWindowPos(windowPtr, 520, 180);

        glfwMakeContextCurrent(windowPtr);
        glbinding::initialize(glfwGetProcAddress, false);
        return windowPtr;
    }();

    // debugging
    {
        glEnable(GL_DEBUG_OUTPUT);
        glDebugMessageCallback(errorHandler::MessageCallback, 0);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
        glDebugMessageControl(GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_OTHER, GL_DEBUG_SEVERITY_NOTIFICATION, 0, nullptr,
                              false);
    }

    auto createProgram = [](const char* vertexShaderSource, const char* fragmentShaderSource) -> GLuint {
        auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
        glCompileShader(vertexShader);
        errorHandler::checkShader(vertexShader, "Vertex");

        auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
        glCompileShader(fragmentShader);
        errorHandler::checkShader(fragmentShader, "Fragment");

        auto program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);

        glLinkProgram(program);
        return program;
    };

    auto program = createProgram(R"(
        #version 450 core
        layout (location = 0) in vec3 position;
        layout (location = 1) in vec3 colours;

        out vec3 vertex_colour;

        void main(){
            vertex_colour = colours;
            gl_Position = vec4(position, 1.0f);
        }
    )",
                                 R"(
        #version 450 core

        in vec3 vertex_colour;
        out vec4 finalColor;

        void main() {
            finalColor = vec4(  vertex_colour.x,
                                vertex_colour.y,
                                vertex_colour.z,
                                1.0);
        }
    )");

    struct vertex3D {
        glm::vec3 position;
        glm::vec3 colour;
    };
    // clang-format off
    // interleaved data
    // make 3d to see clipping
    const std::array<vertex3D, 3> backGroundVertices {{
        //   position   |     colour
        {{-1.f, -1.f, 0.f},  {0.12f, 0.14f, 0.16f}},
        {{ 3.f, -1.f, 0.f},  {0.12f, 0.14f, 0.16f}},
        {{-1.f,  3.f, 0.f},  {0.80f, 0.80f, 0.82f}}
    }};

    const std::array<vertex3D, 3> foregroundVertices {{
        //   position   |     colour
        {{-0.5f, -0.7f,  0.01f},  {1.f, 0.f, 0.f}},
        {{0.5f, -0.7f,  -0.01f},  {0.f, 1.f, 0.f}},
        {{0.0f, 0.6888f, 0.01f}, {0.f, 0.f, 1.f}}
    }};
    // clang-format on

Drawing a full screen Mesh

If you want to draw a full screen mesh, then you might think that two triangles that fill the screen to make a ‘quad’ is what you want.

This will certainly ‘work’, but there is a simpler way. We can draw a single triangle that covers the whole window. Anything that extends out of the window bounds will simply not get rendered.

Here the red section is the window area (in opengl normalized device coordinates, we will cover this in a future chapter) and the green is the triangle that we are drawing which covers this region.

The second array is the triangle that we are going to draw over the top, like so…..

Did you notice how the z component of one of the vertices (the bottom right one) is negative? What effect do you think this will have on the rendering? The background triangle is centered at 0 in the z direction. So far we haven’t been concerned with 3D, but it is certainly possible to use the third, z, component to control the layering of our 2D elements. Do we have intersecting geometry here? Let's see once we have rendered.

Creating Objects using C++ Lambdas

I’m going to introduce you to a c++ feature that I like to use when writing small educational programs like this where I like to keep the code readable and locally reasonable. I like not having to jump around code to different locations just to see what a helper function is doing. So using a lambda, we can create a local function right there in place and use it immediately.

    auto createBuffer = [](const std::array<vertex3D, 3>& vertices) -> GLuint {
        GLuint bufferObject;
        glCreateBuffers(1, &bufferObject);

        // upload immediately
        glNamedBufferStorage(bufferObject, vertices.size() * sizeof(vertex3D), vertices.data(),
                             GL_MAP_WRITE_BIT | GL_DYNAMIC_STORAGE_BIT);

        return bufferObject;  
    };

    auto backGroundBuffer = createBuffer(backGroundVertices);
    auto foreGroundBuffer = createBuffer(foregroundVertices);

The way this works is we use the word auto to store the lambda, which is essentially a function object. It doesn’t actually run the code, just stores the function as a variable that we can call later. We do that immediately and create two buffers by invoking that lambda and passing in our c++ arrays. The code inside the lambda gets run for both calls, just like it was a normal function. The difference is, it's right there where we can read it, and as nothing else needs to use it, why does it need to be a public function anyway? Local reasoning about code is a beautiful thing.

Now we do something similar for the vao. This technically doesn’t need to be a lambda, but I do it anyway (partly out of habit, and partly because it indents the code and wraps things up nicely, and the descriptive name is self-commenting. Also we will be extending it in the future).

    auto createVertexArrayObject = [](GLuint program){

        GLuint vao;
        glCreateVertexArrays(1, &vao);

        glEnableVertexArrayAttrib(vao, 0);
        glEnableVertexArrayAttrib(vao, 1);


        glVertexArrayAttribBinding(vao, glGetAttribLocation(program, "position"),
                                   /*binding index*/ 0);
        glVertexArrayAttribBinding(vao, glGetAttribLocation(program, "colours"),
                                   /*binding index*/ 0);

        glVertexArrayAttribFormat(vao, 0, glm::vec3::length(), GL_FLOAT, GL_FALSE, offsetof(vertex3D, position));
        glVertexArrayAttribFormat(vao, 1, glm::vec3::length(), GL_FLOAT, GL_FALSE, offsetof(vertex3D, colour));

        return vao;
    };

    auto vertexArrayObject = createVertexArrayObject(program);

Now let’s do our render loop. Notice how we can now remove the calls to glClearBufferfv() as we are drawing a full screen triangle, removing the need to clear the background.

    glUseProgram(program);
    glBindVertexArray(vertexArrayObject);

    while (!glfwWindowShouldClose(windowPtr)) {

        glVertexArrayVertexBuffer(vertexArrayObject, 0, backGroundBuffer, /*offset*/ 0,
                                  /*stride*/ sizeof(vertex3D));
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glVertexArrayVertexBuffer(vertexArrayObject, 0, foreGroundBuffer, /*offset*/ 0,
                                  /*stride*/ sizeof(vertex3D));
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glfwSwapBuffers(windowPtr);
        glfwPollEvents();
    }

    glfwTerminate();
}

We bind the single VAO as normal, but this time we swap out the vertex buffers directly using calls to glVertexArrayVertexBuffer(), passing in the buffer objects.

Introducing Depth

Now let's come back to the point that was raised earlier: two of the vertices (bottom left and top middle) are on one side of the background plane in the z direction (into the screen), while the bottom right vertex comes towards the viewer. So we should expect some kind of intersection here, shouldn't we?

Well, when triangles are rasterized, if a certain feature called depth testing isn’t enabled, each fragment just gets drawn on top of the one drawn previously. Because we draw the background first and then the coloured triangle, that’s the order in which they appear, and the triangle's fragments just get placed on top. So how do we enable this depth testing thingy? Like this…

    glEnable(GL_DEPTH_TEST);

We place this line just before our main rendering loop. And we should now see this…

Great! Each pixel’s depth is now being compared to the depth that is already there and if it is ‘behind’ the existing depth value then it is discarded and not rendered.

This is effectively what has happened….

We have made it so that the bottom left and top middle vertices are ‘behind’ the gradient background triangle and part of it slices through.

1. I say ‘behind’ with quotes because the behaviour of the depth comparison can be controlled by telling opengl if you want to compare greater or lesser than the current depth value.

2. Depth values are stored in the ‘Depth Buffer’. At a future point, the depth buffer becomes important for more advanced effects. For now we are just concerned with simple rendering geometry so we can leave that for a future discussion.

3. We are rendering a background here, but what would happen if we didn’t fill the screen with values and wanted to draw multiple intersecting meshes against a cleared background, like we had in the previous chapter? In that case we would want to clear the depth buffer as well, by creating a default value of 1.0f and calling the glClearBufferfv() function like so…

    std::array<GLfloat, 1> clearDepth{1.0f};

    while (!glfwWindowShouldClose(windowPtr)) {
        glClearBufferfv(GL_DEPTH, 0, clearDepth.data());
        // rest of render loop...
    }

We now have some pretty solid fundamental mechanics in our toolbox. From here we could go on and keep exploring various intricacies of how these things work, but I think we have enough that we can start exploring some more fun concepts. In the next lesson, we will be taking the idea of drawing a full screen triangle and using it to run a ‘shadertoy’-like fragment shader program. We will actually be using a real shadertoy example and rendering it locally in our own OpenGL program. So see you in the next chapter.

Modern OpenGL: Part 6 Adding Colour with Multiple Vertex Buffers

So we have now tackled the basics of how to not only put data into buffers, but also how to connect that data up to shaders through the use of VAOs.

Now we will extend that knowledge a little to allow us to use multiple buffers to store separate attributes. One reason you might want to do this is if your vertices have a colour attribute that never changes, but positions that are animated and need to be updated every frame. If your vertices held all attributes in one buffer (something we will tackle in the next chapter), you would have to update all of the vertex data. Separating the attributes into separate buffers means you only have to update the data you need to. The other reason is that it is easier to introduce the concept of multiple shader attributes this way, without overloading you with information.

… previous code (click to expand)
#include "error_handling.hpp"
#include <array>
#include <chrono>     // current time
#include <cmath>      // sin & cos
#include <cstdlib>    // for std::exit()
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format
#include <unordered_map>

// this is really important to make sure that glbindings does not clash with
// glfw's opengl includes. otherwise we get ambigous overloads.
#define GLFW_INCLUDE_NONE
#include <GLFW/glfw3.h>

#include <glbinding/gl/gl.h>
#include <glbinding/glbinding.h>

#include <glbinding-aux/debug.h>

#include "glm/glm.hpp"

using namespace gl;
using namespace std::chrono;

int main() {

    auto startTime = system_clock::now();

    const auto windowPtr = []() {
        if (!glfwInit()) {
            fmt::print("glfw didnt initialize!\n");
            std::exit(EXIT_FAILURE);
        }

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);

        auto windowPtr =
            glfwCreateWindow(1280, 720, "Chapter 6 - Multiple Buffers", nullptr, nullptr);

        if (!windowPtr) {
            fmt::print("window doesn't exist\n");
            glfwTerminate();
            std::exit(EXIT_FAILURE);
        }
        glfwSetWindowPos(windowPtr, 520, 180);

        glfwMakeContextCurrent(windowPtr);
        glbinding::initialize(glfwGetProcAddress, false);
        return windowPtr;
    }();

    // debugging
    {
        glEnable(GL_DEBUG_OUTPUT);
        glDebugMessageCallback(errorHandler::MessageCallback, 0);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
        glDebugMessageControl(GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_OTHER,
                              GL_DEBUG_SEVERITY_NOTIFICATION, 0, nullptr, false);
    }

The first changes we need to make are in the shader programs. We are now going to have 2 attribute inputs: one for position (which is what we had previously), and an additional one for colours.

    auto program = []() -> GLuint {
        const char* vertexShaderSource = R"(
            #version 450 core
            layout (location = 0) in vec2 position;
            layout (location = 1) in vec3 colour; // <-- new!

            out vec3 vertex_colour;

            void main(){
                vertex_colour = colour;
                gl_Position = vec4(position, 0.0f, 1.0f);
            }
        )";

        const char* fragmentShaderSource = R"(
            #version 450 core

            in vec3 vertex_colour;
            out vec4 finalColor;

            void main() {
                finalColor = vec4(  vertex_colour.x,
                                    vertex_colour.y,
                                    vertex_colour.z,
                                    1.0);
            }
        )";

        auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
        glCompileShader(vertexShader);
        errorHandler::checkShader(vertexShader, "Vertex");

        auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
        glCompileShader(fragmentShader);
        errorHandler::checkShader(fragmentShader, "Fragment");

        auto program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);

        glLinkProgram(program);
        return program;
    }();

    

Here is what we have added…

Vertex Shader:
  • a new in vec3 attribute ‘colour’ with the layout qualifier setting its location to array index 1
  • a new ‘out’ attribute
  • in main(), we directly set the value of this out attribute to the value coming in from the buffers
Fragment Shader:
  • a new in attribute that has the same name as the out attribute that we had in our vertex shader.

If the names match up between the vertex shader and the fragment shader, then the compiler will know that they are meant to be the ‘same’ attribute. Alternatively, we could have used layout locations to specify the ‘slots’ and could then name them whatever we want.

Setting up multiple buffers

Now we will use what we learnt about VAO mapping in the previous chapter to connect up the buffer data to the new attribute.

First let's create the initial data in our main program…

    // extra braces required in c++11.
    const std::array<glm::vec2, 3> vertices{{{-0.5f, -0.7f}, {0.5f, -0.7f}, {0.0f, 0.6888f}}};

    const std::array<glm::vec3, 3> colours{{{1.f, 0.f, 0.f}, {0.f, 1.f, 0.f}, {0.f, 0.f, 1.f}}};

You might be able to spot the colours we are setting here: the first vertex has its first component set to 1, the second its second component, and the third its third component. Those are pure red, green and blue. So we have an array of 3 vec3s.

Let's create the GPU buffers and provide the data to them.

    std::array<GLuint, 2> bufferObjects;
    glCreateBuffers(2, bufferObjects.data());

    glNamedBufferStorage(bufferObjects[0], vertices.size() * sizeof(glm::vec2), nullptr,
                         GL_MAP_WRITE_BIT | GL_DYNAMIC_STORAGE_BIT);
    glNamedBufferStorage(bufferObjects[1], colours.size() * sizeof(glm::vec3), nullptr,
                         GL_MAP_WRITE_BIT | GL_DYNAMIC_STORAGE_BIT);

    glNamedBufferSubData(bufferObjects[0], 0, vertices.size() * sizeof(glm::vec2),
                         vertices.data());
    glNamedBufferSubData(bufferObjects[1], 0, colours.size() * sizeof(glm::vec3),
                         colours.data());

This time we create an array of 2 buffers and ask glCreateBuffers to give us 2 unique ids to refer to them.

Then we do something slightly different from the last chapter. This time, we pass nullptr to glNamedBufferStorage (along with the index into the bufferObjects array for the buffer that we want to allocate storage for). Passing nullptr tells opengl to just allocate the storage of the required size, without transferring any data yet.

We transfer the data with the glNamedBufferSubData() function. Why did we do this? Because if we wanted dynamic data (say the positions or colours changed every frame), this would be the way to upload new data to those buffers without recreating the buffers, reallocating storage, and potentially setting new binding information in the VAO.

    // in core profile, at least 1 vao is needed
    GLuint vao;
    glCreateVertexArrays(1, &vao);

    glEnableVertexArrayAttrib(vao, 0);
    glEnableVertexArrayAttrib(vao, 1);

    // buffer to index mapping
    glVertexArrayVertexBuffer(vao, 0, bufferObjects[0], /*offset*/ 0, sizeof(glm::vec2));
    glVertexArrayVertexBuffer(vao, 1, bufferObjects[1], /*offset*/ 0, sizeof(glm::vec3));

    glVertexArrayAttribBinding(vao, glGetAttribLocation(program, "position"),
                               /*slot index*/ 0);
    glVertexArrayAttribBinding(vao, glGetAttribLocation(program, "colour"),
                               /*slot index*/ 1);

    // attribute format: component count, type and relative offset
    glVertexArrayAttribFormat(vao, 0, glm::vec2::length(), GL_FLOAT, GL_FALSE, 0);
    glVertexArrayAttribFormat(vao, 1, glm::vec3::length(), GL_FLOAT, GL_FALSE, 0);    

So let's recap what we have done here that is different from the last chapter.

  • we now enable 2 shader array attribute locations with glEnableVertexArrayAttrib() for the position and colour respectively
  • we connect up the two buffers to two independent binding index slots with glVertexArrayVertexBuffer() (remember those intermediary mapping locations?). We also inform opengl of the byte stride of the elements (sizeof(glm::vec2) for positions, sizeof(glm::vec3) for colours)
  • we connect up the shader attribute locations to those slots with glVertexArrayAttribBinding()
  • we set the format of the attribute inputs for the shader with glVertexArrayAttribFormat(). Notice that we set the type here to GL_FLOAT, with the component counts coming from glm::vec2::length() and glm::vec3::length()

Here is a diagram of our vao setup now…

And we are done!

… previous code (click to expand)
    std::array<GLfloat, 4> clearColour;
    glUseProgram(program);
    glBindVertexArray(vao);


    while (!glfwWindowShouldClose(windowPtr)) {
        auto currentTime = duration<float>(system_clock::now() - startTime).count();
        clearColour = {std::sin(currentTime) * 0.5f + 0.5f, std::cos(currentTime) * 0.5f + 0.5f,
                       0.2f, 1.0f};

        glClearBufferfv(GL_COLOR, 0, clearColour.data());
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glfwSwapBuffers(windowPtr);
        glfwPollEvents();
    }

    glfwTerminate();
}

Modern OpenGL: Part 5 Feeding Vertex Data to Shaders with Vertex Buffers

In the previous chapters, we were able to…

  • create a window
  • draw a triangle
  • set up OpenGL debugging callbacks

Now we want to extend our triangle drawing by allowing our c++ code to supply the triangle’s vertex positions. We were making things easier before by telling OpenGL what the vertex positions were directly in the shader, in an array that was compiled along with the shader. We then indexed into that array with gl_VertexID.

In this chapter, we will be

  • setting up our shaders to be ‘given’ the vertex position automatically (no need for gl_VertexID)
  • setup buffers in our c++ code to move vertex data over to the GPU to ‘feed’ them to the shaders.

I say ‘feed’ because I like to imagine the shader program as a black box or a factory: the vertices are the raw ingredients that go in, the shaders transform them, and out the other end come triangles as the finished product.

Setting up shaders to receive vertex input

We can remove the shader arrays we had previously, as the vertices will now be supplied directly to the shaders. The shaders also don’t need to do any manual indexing, as OpenGL will take care of that for us. All we have to do when we supply our data is make sure that every 3 vertices represent the corners of one triangle, and then draw the right number of vertices for our data.

… previous code (click to expand)
#include "error_handling.hpp"
#include <array>
#include <chrono>     // current time
#include <cmath>      // sin & cos
#include <cstdlib>    // for std::exit()
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format
#include <unordered_map>

// this is really important to make sure that glbindings does not clash with
// glfw's opengl includes. otherwise we get ambigous overloads.
#define GLFW_INCLUDE_NONE
#include <GLFW/glfw3.h>

#include <glbinding/gl/gl.h>
#include <glbinding/glbinding.h>

#include <glbinding-aux/debug.h>

#include "glm/glm.hpp"

using namespace gl;
using namespace std::chrono;

int main() {

    auto startTime = system_clock::now();

    const auto windowPtr = []() {
        if (!glfwInit()) {
            fmt::print("glfw didnt initialize!\n");
            std::exit(EXIT_FAILURE);
        }

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 5);

        auto windowPtr = glfwCreateWindow(1280, 720, "Chapter 5 - Vertex Data", nullptr, nullptr);

        if (!windowPtr) {
            fmt::print("window doesn't exist\n");
            glfwTerminate();
            std::exit(EXIT_FAILURE);
        }
        glfwSetWindowPos(windowPtr, 520, 180);

        glfwMakeContextCurrent(windowPtr);
        glbinding::initialize(glfwGetProcAddress, false);
        return windowPtr;
    }();

    // debugging
    {
        glEnable(GL_DEBUG_OUTPUT);
        glDebugMessageCallback(errorHandler::MessageCallback, 0);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
        glDebugMessageControl(GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_OTHER,
                              GL_DEBUG_SEVERITY_NOTIFICATION, 0, nullptr, false);
    }

    auto program = []() -> GLuint {
        const char* vertexShaderSource = R"(
            #version 450 core
            layout (location = 0) in vec2 position;

            out vec3 vertex_colour;

            void main(){
                vertex_colour = vec3(1.0f, 0.0f, 0.0f);
                gl_Position = vec4(position, 0.0f, 1.0f);
            }
        )";

        const char* fragmentShaderSource = R"(
            #version 450 core

            in vec3 vertex_colour;
            out vec4 finalColor;

            void main() {
                finalColor = vec4(vertex_colour, 1.0);
            }
        )";


… previous code (click to expand)
        auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
        glCompileShader(vertexShader);
        errorHandler::checkShader(vertexShader, "Vertex");

        auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
        glCompileShader(fragmentShader);
        errorHandler::checkShader(fragmentShader, "Fragment");

        auto program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);

        glLinkProgram(program);
        return program;
    }();

I’ve removed the colour array for this example as it will allow us to focus on the important parts. We will be adding back more attributes in later chapters, but here we have just hard coded the colour attribute that is output from the vertex shader (and interpolated across the rasterised triangles) to a fixed red colour.

Here we have added the line….

layout (location = 0) in vec2 position;

This will take a bit of dissecting. There are actually two things happening here, the ‘layout qualifier’ and the ‘type qualifier’.

OpenGL can work without the layout (location = 0) part, but the in vec2 position part is defining the data interface of the shader and states what data is coming in.

Layout Qualifier

layout

This word specifies that we are about to state a layout qualifier. Qualifiers are statements that tell the glsl compiler where storage for a variable will be, along with other information which we won’t look into just yet.

(location = 0)

The text inside the parentheses is the qualifier, and it states that we are going to store a variable at a particular location, here 0. This location is something that we can also use on the c++ side, and it is how we associate the vertex data in the buffer with the variable in the GLSL shader. You will often hear about the ‘binding’ location. This is the location that the attribute is bound to.

Type Qualifier

in

This ‘in’ qualifier simply states that the attribute (data) that we are stating is an ‘in’ attribute and that it is going to be supplied to the shader somehow (we will be providing the data via vertex buffers which is a region of memory from our c++ application).

vec2

This is the data type that the attribute is expected to be. There are various supported data types. This one is a vector of two float values.

position

This is the name that we want to give the attribute that we can refer to in our glsl shader (This is not a glsl keyword).

for more information, see…
https://www.khronos.org/opengl/wiki/Layout_Qualifier_(GLSL)

We could have written the shader without the layout qualifier, just stating the in vec2 position part, and then on the c++ side asked the OpenGL runtime what locations were assigned, as OpenGL would have assigned them for us if we hadn’t stated them explicitly with the layout qualifier. This way though, we have absolute control over the way that the data is laid out for our shaders. When we get to creating buffers, I will show you how to query the location anyway, as it is a useful thing to be able to do (especially for debugging purposes).

Using a vertex attribute

In the vertex shader, we have now replaced the array indexing with directly accessing the variable supplied to the shader by using the name.

gl_Position = vec4(position, 0.0f, 1.0f);

Allocating Data on the GPU for our vertex data

Before we feed shaders with vertex data, we need to get the data onto the gpu. This section deals with communicating with the gpu to tell it to reserve some space in its vram for vertex data, and with opengl giving us back an id to refer to that buffer. Then we shall show how to fill it with data.

Marshalling the vertex buffer data to our shaders

There is going to be quite a lot of information here so get ready for another brain overload. Don’t worry, once you have these concepts fully grasped, then the use of OpenGL becomes a lot easier (mostly).

Vertex Buffer Objects

These objects are where we will be storing our vertex data that will be fed into the shaders for the GPU to process and draw. Typically the data will be held in GPU memory, so we have to allocate memory ON the GPU and transfer our data into it. Let's create one!

First let’s create the data in our cpu memory. This might be loaded from a model from disk (which we will see in chapter 11), but here we are manually specifying some locations. We have chosen to use a std::array as the data needs to be contiguous in memory.

// extra braces required in c++11.
const std::array<glm::vec2, 3> vertices{{{-0.5f, -0.7f}, {0.5f, -0.7f}, {0.0f, 0.6888f}}};

Now we use the opengl API to ‘create’ a buffer.

GLuint bufferObject;
glCreateBuffers(1, &bufferObject);

Narrator: “This did not create a buffer”.

What we have done here is only tell opengl to give us the id of a buffer. At the moment it is just an abstract handle that we will use to refer to the buffer. There is no actual data allocated on the GPU yet. That is done in the next line.

glNamedBufferStorage(bufferObject, vertices.size() * sizeof(glm::vec2), vertices.data(),
                             GL_MAP_WRITE_BIT | GL_DYNAMIC_STORAGE_BIT);

This glNamedBufferStorage() call is the one that instructs opengl to both

  • allocate the storage and
  • transfer the data

The arguments that we pass in are…

  • the buffer id we created
  • how many bytes we want allocated
  • the pointer to the data from our array so the buffer knows where to get the data to transfer
  • Some flags to tell opengl how we want to use the buffer (this can help opengl know where to place the data to make it efficient)

And we are done! Now how do we get this data into our shaders, so they know this is the buffer they should be processing?

In future chapters, we will be looking at advanced ways of getting data onto the gpu that improve performance and are more in line with modern AZDO principles. For now, this keeps things simple.

Vertex Array Objects

  • which vertex buffers are being used
  • how the data is going to be interpreted by OpenGL by setting type information of the data
  • which shader attribute locations the vertex data is associated with

Vertex Array Objects (VAOs) are the Worst Named Things In The History Of Humanity. What they actually are, are a construct that maps data from buffers into shaders. You then enable (bind) a single VAO that will control which data will be drawn when you issue a draw call.

Shaders typically have attribute inputs, which we have seen just before, and these inputs are defined by a location (0, 1, 2, etc.) and a type. Those locations make up the ‘array’ that we want to map our buffers to. VAOs are the thing that lets us hook up the data in our buffers to the shader inputs.

There are 3 components we need to address:

  1. enabling the attribute location we want to feed data to
  2. which buffer is associated with that location
  3. the type, offset and stride of the data

DSA vs legacy

VAOs used to work by first having to bind the VAO to make it ‘the current one’; then you would ‘bind’ your buffers with another API call, and the VAO would ‘remember’ which buffer was bound. There is no such implicit mechanism here.

Using Direct State Access (DSA) functions we can explicitly state the relationship between the buffers and the binding location by ‘telling’ the VAO this information, rather than having it implicitly remembered because that was the one that was bound at the time.

So lets create a VAO!

    // in core profile, at least 1 vao is needed
    GLuint vao;
    glCreateVertexArrays(1, &vao);

We do a similar thing to what we did with the buffer: we create an id integer and have opengl assign it for us. There is no separate allocation step here; we can just use the id directly.

We now need to tell opengl how to map the buffer data to the shader locations. We start by simply enabling the first attribute location in the shader.

    glEnableVertexArrayAttrib(vao, 0);

This just ‘opens the gate’ for the data to flow through. This is the input for attributes like position or colour, i.e. the “layout (location = 0)” qualifiers that we specified in the shader previously.

Now we provide the actual mapping between the Buffer Object and a new concept which we will call the ‘mapping slot’, or what the documentation calls a binding index. This is basically an intermediary location that we can use as a commonly agreed upon waypoint to connect the buffer and the shader. The shader will never see this location or use it in the shader program; it’s just a connecting-up mechanism.

This isn’t actually that useful or interesting at this point, but if we ever have a situation where our data is spread across multiple buffers, this mechanism gives us the flexibility to pull our data from various sources. It’s especially useful if part of our data is static and never changes while another buffer is updated per frame: we would only ever need to update the one buffer, and the shader can still pull from different sources without needing to be changed.

    // buffer to index mapping
    glVertexArrayVertexBuffer(vao, 0, bufferObject, /*offset*/ 0, sizeof(glm::vec2));

We provide…

  • the vertex array object we are setting data on
  • the mapping slot (or ‘binding index’) that we are setting the buffer to
  • the buffer object the data is coming from
  • offset and stride information so opengl knows where the data starts and how far apart each ‘element’ is

Because the data is contiguous, how does opengl know that each vertex starts 8 bytes after the previous one (2 floats of 32 bits each; since 32 bits is 4 bytes, 2 floats are 8 bytes)? We tell it! That’s why we pass in sizeof(glm::vec2) as the stride.

Now we need to connect up the shader attribute location to the mapping slot (binding index).

    // attribute to index mapping. can query shader for its attributes
    glVertexArrayAttribBinding(vao, glGetAttribLocation(program, "position"),
                               /*buffer slot index*/ 0);

Here we are telling the VAO how to associate the buffer data with the shader location by passing these arguments…

  • which VAO we are setting data on
  • the shader attribute location (which we can query using the glGetAttribLocation() function; we could also have hard coded this based upon what we set in the “layout (location = 0)” qualifier)
  • the intermediary binding index we want to pull our data from

Finally, the third piece of information we need to provide to the VAO is the vertex format.

    // data stride
    glVertexArrayAttribFormat(vao, /* shader attribute location */ 0,
                              glm::vec2::length(), GL_FLOAT, GL_FALSE, 0);

This tells opengl what format the vertex data is in. We already passed in the byte offset and stride previously, but this goes into more detail about what is being fed to the shader. We pass…

  • the vao
  • which attribute location we are setting the input format information for
  • how many elements are in each vector type (an integer, 2 in the case of vec2)
  • the type of each individual element in the vector (a float in this case, as opposed to int or bool etc)
  • whether integer values should be normalized into the 0 to 1 floating point range (GL_FALSE here, as our data is already floats)
  • an offset into the buffer to where the data starts (this will become useful later when we start interleaving vertex data; for now it’s all the same data in one packed buffer)

It is UP TO YOU as the developer to make sure that the vertex array object you use (and hence the buffer layout) and the shader program that expects a particular vertex data layout are ‘compatible’. There are various ways to programmatically ensure this, but for now we are manually ensuring this.

Here is a quick diagram of what the 3 functions do to set up the association between a vertex buffer and a shader attribute location.

And here is how we typically use VAOs: switching between them changes which buffers are being used.

Now to use the buffer we created, we need to bind the vertex array object. I know that we said that we are going to be using DSA functions to avoid having to use the ‘bind’ paradigm in opengl, but alas, opengl hasn’t improved in this area. We still need to provide a ‘current’ VAO that opengl uses to do its drawing.

    glBindVertexArray(vao);

We can now continue to do our drawing as normal and hopefully we should see a triangle!

… previous code (click to expand)
    std::array<GLfloat, 4> clearColour;
    glUseProgram(program);

    while (!glfwWindowShouldClose(windowPtr)) {

        auto currentTime = duration<float>(system_clock::now() - startTime).count();

        clearColour = {std::sin(currentTime) * 0.5f + 0.5f, std::cos(currentTime) * 0.5f + 0.5f,
                       0.2f, 1.0f};

        glClearBufferfv(GL_COLOR, 0, clearColour.data());

        glDrawArrays(GL_TRIANGLES, 0, 3);
        glfwSwapBuffers(windowPtr);
        glfwPollEvents();
    }

    glfwTerminate();
}