## Modern OpenGL: Part 9 Full Screen Effects for a DIY Shadertoy Project

We are going to take a couple of chapters to step away from the mechanical business of feeding vertex data into shaders and have a bit of fun rendering some full screen shader effects.

The idea for this comes from the Shadertoy website, which hosts shaders that render interesting effects and scenes using just fragment shaders, and does it in the web browser! What is really happening is that a full screen triangle is drawn (very much like the background gradient triangle that we drew in the previous chapter) and then the fragment program is run for every pixel on the screen. Each invocation of the fragment program determines the colour of its pixel, so some very interesting and complex effects can be achieved with some fancy math.

For our program, compiled locally on our own machine, all we really have to do is draw a triangle and assign a fragment shader containing the code from a Shadertoy example, and we should get the same results.

This is the Shadertoy example we are going to run in our program. The basic idea, though, is that it should be possible to take just about any GLSL code, wherever it comes from, and run it this way.

https://www.shadertoy.com/view/3l23Rh#

We can’t copy and paste the shader code directly though. The code references certain variables that need to be set for it to work. When the shader is running in the browser, the Shadertoy page code is responsible for making sure those variables are present for the program to run. Here we have to do that manually as well. We will cover that when we get to that point in the shader.

… previous code:

```cpp
#include "error_handling.hpp"
#include <array>
#include <chrono>     // current time
#include <cmath>      // sin & cos
#include <cstdlib>    // for std::exit()
#include <fmt/core.h> // for fmt::print(). implements c++20 std::format
#include <unordered_map>

// this is really important to make sure that glbinding does not clash with
// glfw's opengl includes. otherwise we get ambiguous overloads.
#define GLFW_INCLUDE_NONE
#include <GLFW/glfw3.h>

#include <glbinding/gl/gl.h>
#include <glbinding/glbinding.h>
#include <glbinding-aux/debug.h>

#include "glm/glm.hpp"

using namespace gl;
using namespace std::chrono;

int main() {

    auto startTime = system_clock::now();

    const int width = 1600;
    const int height = 900;

    auto windowPtr = [&]() {
        if (!glfwInit()) {
            fmt::print("glfw didnt initialize!\n");
            std::exit(EXIT_FAILURE);
        }

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 5);

        /* Create a windowed mode window and its OpenGL context */
        auto windowPtr = glfwCreateWindow(
            width, height, "Chapter 9 - Full Screen Effects (Diy Shadertoy!)",
            nullptr, nullptr);

        if (!windowPtr) {
            fmt::print("window doesn't exist\n");
            glfwTerminate();
            std::exit(EXIT_FAILURE);
        }
        glfwSetWindowPos(windowPtr, 520, 180);

        glfwMakeContextCurrent(windowPtr);
        glbinding::initialize(glfwGetProcAddress, false);
        return windowPtr;
    }();

    // debugging
    {
        glEnable(GL_DEBUG_OUTPUT);
        glDebugMessageCallback(errorHandler::MessageCallback, 0);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
        glDebugMessageControl(GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_OTHER,
                              GL_DEBUG_SEVERITY_NOTIFICATION, 0, nullptr,
                              false);
    }

    auto createProgram = [](const char* vertexShaderSource,
                            const char* fragmentShaderSource) -> GLuint {
        auto vertexShader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
        glCompileShader(vertexShader);
        errorHandler::checkShader(vertexShader, "Vertex");

        auto fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
        glCompileShader(fragmentShader);
        errorHandler::checkShader(fragmentShader, "Fragment");

        auto program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);

        glLinkProgram(program);
        return program;
    };
```

```cpp
    auto program = createProgram(R"(
        #version 450 core
        const vec4 vertices[] = vec4[](vec4(-1.f, -1.f, 0.0, 1.0),
                                       vec4( 3.f, -1.f, 0.0, 1.0),
                                       vec4(-1.f,  3.f, 0.0, 1.0));
        void main(){
            gl_Position = vertices[gl_VertexID];
        }
    )",
```

Here we start by calling the createProgram() function that we created in the earlier chapters, passing it the vertex shader string. This is the same as we saw in chapter 8 where we drew a fullscreen triangle (we store the positions of 3 vertices, which we will draw with a call to glDrawArrays() with a count of 3).

The next argument to the createProgram() function is our fragment shader string. This is the code that we have copied from Shadertoy.

```glsl
    R"(
    // Protean clouds by nimitz (twitter: @stormoid)
    // https://www.shadertoy.com/view/3l23Rh
    // License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License
    // Contact the author for other licensing options
    /*
        Technical details:

        The main volume noise is generated from a deformed periodic grid, which can produce
        a large range of noise-like patterns at very cheap evalutation cost. Allowing for multiple
        fetches of volume gradient computation for improved lighting.

        To further accelerate marching, since the volume is smooth, more than half the the density
        information isn't used to rendering or shading but only as an underlying volume distance to
        determine dynamic step size, by carefully selecting an equation (polynomial for speed) to
        step as a function of overall density (not necessarialy rendered) the visual results can be
        the same as a naive implementation with ~40% increase in rendering performance.

        Since the dynamic marching step size is even less uniform due to steps not being rendered at all
        the fog is evaluated as the difference of the fog integral at each rendered step.
    */
```

Make sure to adhere to the license of any code you use!

Next there are some things that we need to *add* to the shader at this point to make it work.

First we have to declare an output variable in the fragment shader; this is the location that the fragment colour will be written to.

```glsl
    out vec4 fragColor;
```

Shadertoy also provides a few built-in variables that the author of a shader can use to find out things like

- what the current elapsed time is since the rendering began
- what the resolution of the shader screen/window is
- where the mouse is clicking

We have to provide those to the shader ourselves, otherwise those variables won’t be defined and the shader won’t compile. In this case we have to make sure that the value of the resolution matches the actual resolution of our OpenGL window. (We will see in the next chapter how we can ‘tell’ the shader what the resolution is without having to explicitly set it in the code.)

```glsl
    float iTime = 1.14f;
    vec2 iResolution = vec2(1600, 900);
    vec2 iMouse = vec2(960.f, 0.f);
```

Shadertoy also provides a variable called fragCoord, which is a renamed version of gl_FragCoord. This is a GLSL built-in variable that tells you the current pixel’s screen coordinate in terms of the resolution (it’s a vec4, but we are only interested in the first two components). It can normally be used as is, but for some reason this shader uses another name for it. We could go in and fix that bit of the code, but it’s just as easy to add this at the top…

```glsl
    vec4 fragCoord = gl_FragCoord;
```

Now this is the rest of the fragment shader…

```glsl
    mat2 rot(in float a){float c = cos(a), s = sin(a);return mat2(c,s,-s,c);}
    const mat3 m3 = mat3(0.33338, 0.56034, -0.71817, -0.87887, 0.32651, -0.15323, 0.15162, 0.69596, 0.61339)*1.93;
    float mag2(vec2 p){return dot(p,p);}
    float linstep(in float mn, in float mx, in float x){ return clamp((x - mn)/(mx - mn), 0., 1.); }
    float prm1 = 0.;
    vec2 bsMo = vec2(0);

    vec2 disp(float t){ return vec2(sin(t*0.22)*1., cos(t*0.175)*1.)*2.; }

    vec2 map(vec3 p)
    {
        vec3 p2 = p;
        p2.xy -= disp(p.z).xy;
        p.xy *= rot(sin(p.z+iTime)*(0.1 + prm1*0.05) + iTime*0.09);
        float cl = mag2(p2.xy);
        float d = 0.;
        p *= .61;
        float z = 1.;
        float trk = 1.;
        float dspAmp = 0.1 + prm1*0.2;
        for(int i = 0; i < 5; i++)
        {
            p += sin(p.zxy*0.75*trk + iTime*trk*.8)*dspAmp;
            d -= abs(dot(cos(p), sin(p.yzx))*z);
            z *= 0.57;
            trk *= 1.4;
            p = p*m3;
        }
        d = abs(d + prm1*3.)+ prm1*.3 - 2.5 + bsMo.y;
        return vec2(d + cl*.2 + 0.25, cl);
    }

    vec4 render( in vec3 ro, in vec3 rd, float time )
    {
        vec4 rez = vec4(0);
        const float ldst = 8.;
        vec3 lpos = vec3(disp(time + ldst)*0.5, time + ldst);
        float t = 1.5;
        float fogT = 0.;
        for(int i=0; i<130; i++)
        {
            if(rez.a > 0.99)break;

            vec3 pos = ro + t*rd;
            vec2 mpv = map(pos);
            float den = clamp(mpv.x-0.3,0.,1.)*1.12;
            float dn = clamp((mpv.x + 2.),0.,3.);

            vec4 col = vec4(0);
            if (mpv.x > 0.6)
            {
                col = vec4(sin(vec3(5.,0.4,0.2) + mpv.y*0.1 +sin(pos.z*0.4)*0.5 + 1.8)*0.5 + 0.5,0.08);
                col *= den*den*den;
                col.rgb *= linstep(4.,-2.5, mpv.x)*2.3;
                float dif = clamp((den - map(pos+.8).x)/9., 0.001, 1. );
                dif += clamp((den - map(pos+.35).x)/2.5, 0.001, 1. );
                col.xyz *= den*(vec3(0.005,.045,.075) + 1.5*vec3(0.033,0.07,0.03)*dif);
            }

            float fogC = exp(t*0.2 - 2.2);
            col.rgba += vec4(0.06,0.11,0.11, 0.1)*clamp(fogC-fogT, 0., 1.);
            fogT = fogC;
            rez = rez + col*(1. - rez.a);
            t += clamp(0.5 - dn*dn*.05, 0.09, 0.3);
        }
        return clamp(rez, 0.0, 1.0);
    }

    float getsat(vec3 c)
    {
        float mi = min(min(c.x, c.y), c.z);
        float ma = max(max(c.x, c.y), c.z);
        return (ma - mi)/(ma+ 1e-7);
    }

    //from my "Will it blend" shader (https://www.shadertoy.com/view/lsdGzN)
    vec3 iLerp(in vec3 a, in vec3 b, in float x)
    {
        vec3 ic = mix(a, b, x) + vec3(1e-6,0.,0.);
        float sd = abs(getsat(ic) - mix(getsat(a), getsat(b), x));
        vec3 dir = normalize(vec3(2.*ic.x - ic.y - ic.z, 2.*ic.y - ic.x - ic.z, 2.*ic.z - ic.y - ic.x));
        float lgt = dot(vec3(1.0), ic);
        float ff = dot(dir, normalize(ic));
        ic += 1.5*dir*sd*ff*lgt;
        return clamp(ic,0.,1.);
    }
```

One other thing we have to do in our copy of the Shadertoy example is to change the name of the mainImage function (line 112 of the original source) to main(). This is because OpenGL expects a main() function as the entry point of a shader.

```glsl
    void main() // previously void mainImage( out vec4 fragColor, in vec2 fragCoord )
    {
```

Now we paste in the rest of the main function from Shadertoy…

```glsl
        vec2 q = fragCoord.xy/iResolution.xy;
        vec2 p = (gl_FragCoord.xy - 0.5*iResolution.xy)/iResolution.y;
        bsMo = (iMouse.xy - 0.5*iResolution.xy)/iResolution.y;

        float time = iTime*3.;
        vec3 ro = vec3(0,0,time);

        ro += vec3(sin(iTime)*0.5,sin(iTime*1.)*0.,0);

        float dspAmp = .85;
        ro.xy += disp(ro.z)*dspAmp;
        float tgtDst = 3.5;

        vec3 target = normalize(ro - vec3(disp(time + tgtDst)*dspAmp, time + tgtDst));
        ro.x -= bsMo.x*2.;
        vec3 rightdir = normalize(cross(target, vec3(0,1,0)));
        vec3 updir = normalize(cross(rightdir, target));
        rightdir = normalize(cross(updir, target));
        vec3 rd=normalize((p.x*rightdir + p.y*updir)*1. - target);
        rd.xy *= rot(-disp(time + 3.5).x*0.2 + bsMo.x);
        prm1 = smoothstep(-0.4, 0.4,sin(iTime*0.3));

        vec4 scn = render(ro, rd, time);

        vec3 col = scn.rgb;
        col = iLerp(col.bgr, col.rgb, clamp(1.-prm1,0.05,1.));

        col = pow(col, vec3(.55,0.65,0.6))*vec3(1.,.97,.9);
        col *= pow( 16.0*q.x*q.y*(1.0-q.x)*(1.0-q.y), 0.12)*0.7+0.3; //Vign

        fragColor = vec4( col, 1.0 );
    }
    )");
```

Right now, if we change the shader code, we have to recompile our C++ program to embed the new shader source into the binary. Not a very efficient way of working! But this is for educational purposes, so we shall keep it like this for readability. I find that if code reads as linearly as possible, the learning isn’t as clouded, because you don’t have to jump around too much.

```cpp
    glUseProgram(program);

    GLuint vao;
    glCreateVertexArrays(1, &vao);
    glBindVertexArray(vao);

    while (!glfwWindowShouldClose(windowPtr)) {
        // draw full screen triangle
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glfwSwapBuffers(windowPtr);
        glfwPollEvents();
    }

    glfwTerminate();
}
```

This last piece of the code is just our standard render loop. You can see it’s even simpler now, as we don’t have to clear the screen (we know the triangle is going to draw over every pixel).

Isn’t that just a lovely thing to see? Next chapter, we will learn how to communicate values from our C++ program to our shader program through the use of “Uniform” shader variables.