In the last chapter we got our torus mesh rendering with perspective, but we still don’t have a great way of coloring the surface of it. We can’t take the easy way out and slap a texture on it, because we don’t have a texture, and the UVs on our new mesh aren’t really appealing to look at. Instead, we’re going to do what every good shader developer does and throw some (simple) math at it to add some lighting to our mesh.

Our meshes so far have stored position, color, and texture coordinate information for each vertex in the mesh. Lighting calculations rely on a new vertex attribute called a normal vector. A “normal” is a vector that points outward from the mesh, perpendicular to the mesh’s surface. This provides information about the shape of the face that each fragment belongs to. Just like UV coordinates, normal vectors are interpolated from the vertex shader to the fragment shader, so the normal vector for a single fragment is a combination of the normal vectors of the three vertices that make up that fragment’s mesh face. You can see what our mesh’s normals look like in Figure 8-1.

Figure 8-1

Our torus mesh’s normals—the blue lines show the normals of each vertex

It’s sometimes helpful to use a shader to visualize a mesh’s normals as color data for debugging purposes. The shader to do this is almost identical to our uv_vis.frag shader, except that we need to grab the normal vector from the vertex we’re shading instead of the UV coordinate. This also means that we need a vertex shader that passes normals down the pipeline for any of this to work. Since all our shaders from now on will require use of normal vectors, we can make this change directly to the “mesh.vert” shader our meshes currently use. Listing 8-1 shows how this is done in the example code. The first example project that uses this modified vertex shader is the “NormalTorus” example project for this chapter.

Listing 8-1 Modifying mesh.vert to Pass Normals to the Fragment Shader

#version 410

layout (location = 0) in vec3 pos;
layout (location = 2) in vec3 nrm; ❶

uniform mat4 mvp;
out vec3 fragNrm;

void main(){
      gl_Position = mvp * vec4(pos, 1.0);
      fragNrm = nrm;
}

Notice that our mesh’s normal vectors are stored at attribute location 2 (❶), instead of location 3 like our UV coordinates were. Remember that this order is set arbitrarily by openFrameworks. Different engines and frameworks will order their vertex attributes differently. Once we have our normal vector, writing a fragment shader to visualize it is as simple as outputting UV colors. You can see how it’s done in Listing 8-2.

Listing 8-2 Visualizing Normal as Colors in a Fragment Shader – normal_vis.frag

#version 410

uniform vec3 lightDir;
uniform vec3 lightCol;

in vec3 fragNrm;
out vec4 outCol;

void main(){
      vec3 normal = normalize(fragNrm); ❶
      outCol = vec4(normal, 1.0);
}

This is the first time that we’ve seen the normalize function in a shader (❶), so let’s take a moment to talk about it. Normalizing a vector is the process of lengthening or shortening it until it has a magnitude of 1 while preserving its direction. It’s important to note that this doesn’t mean that all the components of the vector end up positive, since that would change its direction. The vectors (-1, 0, 0) and (1, 0, 0) both have a magnitude of 1, despite pointing in opposite directions.
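To make that concrete, here’s a quick sketch of the same math in C++ with GLM (the vector values are just for illustration, and GLM’s normalize mirrors the GLSL function we’re relying on here):

#include <glm/glm.hpp>
#include <cstdio>

int main(){
      glm::vec3 v(3.0f, 4.0f, 0.0f);
      float mag = glm::length(v);          //magnitude is 5
      glm::vec3 n = glm::normalize(v);     //same as v / mag, giving (0.6, 0.8, 0)

      //a vector with a negative component can still have a magnitude of 1
      glm::vec3 left(-1.0f, 0.0f, 0.0f);
      printf("%f %f %f\n", mag, glm::length(n), glm::length(left)); //prints 5, 1, 1
      return 0;
}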

Most math that we’ll do with normal vectors assumes that those vectors are normalized. The normal vectors on our mesh are already normalized, but the interpolation process that happens when we pass those vectors to the fragment shader isn’t guaranteed to preserve vector length. It’s very important to always renormalize any vectors that have been interpolated across a face; otherwise, any calculation that assumes a normalized vector will be off.

If we throw our new shaders onto our torus mesh, our program should look like Figure 8-2. Notice that even with all the vectors normalized, some areas of the mesh are still black. This is telling us that the components of the normal vector for those fragments are all zero or negative. Since there’s no such thing as a negative color value, we end up with totally black fragments. If you’re following along at home, the code to generate Figure 8-2 can be found in the “NormalTorus” project in the Chapter 8 example code.

Figure 8-2

Using mesh.vert and normal_vis.frag to visualize the normals on our torus mesh

Smooth vs. Flat Shading with Normals

The faces of our torus mesh are all uniformly curved surfaces. When the normal vectors for curved surfaces get interpolated, each fragment on a mesh face gets a normal vector that’s slightly different from that of every other fragment on that face. The net result is that when we visualize these fragment normals as color data, we end up with the smooth gradient shown in Figure 8-2. This is great for curved surfaces but not ideal for flatter objects. For example, if we approached shading a cube the same way, we might end up with Figure 8-3, which shows both a set of normal vectors for a cube mesh, and how those vectors translate into fragment normals.

Figure 8-3

Normals on a cube mesh. The left image shows a cube mesh in a modeling program with normals visible. On the right is the same cube rendered with our normal_vis shader.

Figure 8-3 looks nice but isn’t a great way to shade a cube. The smooth gradient of normal colors on the faces means that when we try to do any light calculations on our cube mesh, the mesh faces will be shaded as though they are curved. What we actually want to do is to give every fragment that belongs to a flat mesh face the same normal vector. Since a vertex can only have a single normal vector, this means that we can’t share vertices across multiple faces. Instead, each face of our cube needs its own set of four vertices, with normal vectors pointing in the direction of that specific mesh face. You can see these normal vectors illustrated in Figure 8-4, which also shows off what the resulting fragment normals would look like when rendered by our “normal_vis.frag” shader.

Figure 8-4

Flat normals on a cube mesh. The left image shows a cube mesh in a modeling program with per-face normals visible. On the right is the same cube rendered with our normal_vis shader.

In practice, deciding which vertices need to be duplicated to use flat normals and which can use smooth normals is a decision that’s usually made by artists. 3D modeling programs provide powerful tools for setting up vertex normals to ensure that an object is shaded exactly how the artist intended. It’s only important for us to understand how normals can be used to achieve different looks.
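That said, if you ever need to generate flat normals in code rather than in a modeling program, the standard trick is to take the cross product of two edges of each face and hand that single vector to all of that face’s vertices. Here’s a minimal sketch in C++ with GLM; the function name and the assumption of counterclockwise winding are mine, not something from the example projects:

#include <glm/glm.hpp>

//computes one flat normal for a triangle, assuming counterclockwise winding
glm::vec3 faceNormal(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c){
      glm::vec3 edge1 = b - a;
      glm::vec3 edge2 = c - a;

      //the cross product is perpendicular to both edges, so it points
      //straight out of the face; normalize it before storing it on the vertices
      return glm::normalize(glm::cross(edge1, edge2));
}

Every vertex of that face would then store this same normal, which is exactly why flat-shaded faces can’t share vertices with their neighbors.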

World Space Normals and Swizzling

One thing that you’ll commonly want to do is to get the normal vector for a fragment in world space, instead of in object space like we’re getting it now. This is going to make our lives much easier later when we’re dealing with information about light positions in our scene. If you remember from the last chapter, we can use the model matrix to transform a vector from object space to world space. With this change, our “mesh.vert” shader might look like Listing 8-3.

Listing 8-3 Updating Our Vertex Shader to Pass World Space Normals to the Fragment Shader

#version 410

layout (location = 0) in vec3 pos;
layout (location = 2) in vec3 nrm;

uniform mat4 mvp;
uniform mat4 model;
out vec3 fragNrm;

void main(){
      gl_Position = mvp * vec4(pos, 1.0);
      fragNrm = (model * vec4(nrm, 0.0)).xyz; ❶
}

You may have noticed a weird bit of syntax in Listing 8-3 (at ❶). After we multiply our normal vector by the model matrix (which gives us a vec4), we’re assigning it to our vec3 fragNrm variable with the .xyz operator. Until now, we’ve always accessed components of a vector one at a time, but vectors in GLSL can also be swizzled (yes, that’s really what it’s called) to select any combination of up to four components. This means that all of the examples in Listing 8-4 are actually valid GLSL.

Listing 8-4 Examples of Swizzling

vec2 uv = vec2(1,2);
vec2 a = uv.xy;                 //(1,2)
vec2 b = uv.xx;                 //(1,1)
vec3 c = uv.xyx;                //(1,2,1)

vec4 color = vec4(1,2,3,4);
vec2 rg = color.rg;             //(1,2)
vec3 allBlue = color.bbb;       //(3,3,3)
vec4 reversed = color.wzyx;     //(4,3,2,1)

In the case of Listing 8-3, the swizzle operator was necessary because transform matrices need to be multiplied by vec4s, and the result of that multiplication is a vec4. Our fragNrm variable is only a vec3, so we need to use the swizzle operator to choose which components of our vec4 to store in it.

For virtually any other type of vector, this would be all we needed to do to transform from one coordinate space to another. Normals are a bit of a special case though, and in some cases we need to take a few extra steps to make sure that these vectors are always correct when they get to the fragment shader.

The Normal Matrix

The tricky part of transforming normal vectors to world space occurs when we start using a mesh’s model matrix to scale that mesh. If we always keep our mesh’s scale uniform (that is, we always scale each dimension of our mesh by the same amount), then the math we’ve already got will work just fine. However, any time we want to use a nonuniform scale, like (0.5, 1, 1) for example, this math starts to break down. This is because we want normals to always point away from our mesh’s surface, and if we scale one dimension more than another, the transformed normals can end up no longer perpendicular to that surface.

For example, let’s say we had a sphere mesh with a model matrix that applied the scale (2.0, 0.5, 1.0) to it. If we also used that model matrix to transform our normal vectors, we would end up with normals that no longer pointed in the correct directions. It’s the difference between the center sphere in Figure 8-5, and the one on the right.

Figure 8-5

Scaling normals nonuniformly. The center sphere has used the model matrix to transform its normals, while the right sphere has used the normal matrix.

To deal with this, normals aren’t usually transformed by the model matrix. Instead, we use the model matrix to create a new matrix (called the normal matrix), which can correctly transform our normal vectors without scaling them in funny ways. Instead of the model matrix, what we need is the “transpose of the inverse of the upper 3×3 of the model matrix.” This is a mouthful, but for our purposes it just means a few more math operations in our C++ code. Listing 8-5 shows how we could create this matrix.

Listing 8-5 How to Create the Transpose of the Inverse of the Model Matrix. Add a Line Like This to Your draw() Function, So You Can Pass This Matrix as a Uniform

mat3 normalMatrix = (transpose(inverse(mat3(model))));

This matrix is a mat3 instead of a mat4 like all our other matrices have been so far. Creating a mat3 from a mat4 means making a 3×3 matrix composed of the top left 3×3 section of the larger matrix. This is a quick way to get a matrix that has no translation information, but keeps all the rotation and scaling data from our model matrix. The rest of the math in Listing 8-5 simply follows the definition of the normal matrix: invert that 3×3 matrix, then transpose the result.
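If you want to convince yourself that this matters, here’s a small C++/GLM sketch that builds both matrices for a nonuniformly scaled model matrix and transforms the same normal with each. The scale values are arbitrary, and the commented-out setUniformMatrix3f call assumes your version of openFrameworks exposes that function on ofShader:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void buildNormalMatrixExample(){
      using namespace glm;

      //a model matrix with a nonuniform scale, like the squashed sphere in Figure 8-5
      mat4 model = scale(mat4(1.0f), vec3(2.0f, 0.5f, 1.0f));
      mat3 normalMatrix = transpose(inverse(mat3(model)));

      vec3 nrm = normalize(vec3(1.0f, 1.0f, 0.0f)); //a normal on a 45-degree slope

      vec3 skewed  = normalize(mat3(model) * nrm);   //leans toward the stretched X axis
      vec3 correct = normalize(normalMatrix * nrm);  //leans away from it, staying perpendicular to the surface

      //in draw(), this matrix would be passed alongside mvp, something like:
      //shader.setUniformMatrix3f("normal", normalMatrix);
}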

Again, all of this is only necessary if your object is scaled nonuniformly. It’s not unheard of for engines to only support uniform scaling so that they can avoid having to worry about this. For our examples, however, we’re going to use the normal matrix from now on. This way we can be sure that no matter how we set up our scene, our normals are correct. With this matrix being passed to our vertex shader, “mesh.vert” is going to end up looking like Listing 8-6.

Listing 8-6 mesh.vert Using the Normal Matrix

#version 410

layout (location = 0) in vec3 pos;
layout (location = 2) in vec3 nrm;

uniform mat4 mvp;
uniform mat3 normal;
out vec3 fragNrm;

void main(){
      gl_Position = mvp * vec4(pos, 1.0);
      fragNrm = (normal * nrm).xyz;
}

Why Lighting Calculations Need Normals

Now that we have a better sense of what normals are and how we can access them in a shader, it’s time to figure out how to use them to add some lighting to our mesh. Normals are essential to lighting calculations because they allow us to figure out the relationship between the direction of light that’s hitting our mesh’s surface and the orientation of that surface itself. We can think of every fragment on a mesh as being a tiny, completely flat point, regardless of the overall shape of the mesh. Using this mental model, light hitting a fragment can be thought about like Figure 8-6.

Figure 8-6

Light hitting a fragment

The more perpendicular our fragment is to the incoming light, the more light will hit that spot on the mesh, causing it to be brighter. Put more specifically, as the angle between our normal vector and the incoming light vector approaches 180 degrees, the amount of light that hits that specific point of our mesh increases. You can see the angle between the incoming light vector and our normal in Figure 8-6, labeled with the symbol theta (ϴ). To calculate theta, we can use a bit of vector math called a dot product, which is used all over video game code.

What’s a Dot Product?

A dot product is a math operation that takes two vectors and returns a single number that represents a relationship between them. Dot products themselves are very simple. They involve multiplying together the corresponding components of the two vectors (so the X component of vector A gets multiplied by the X component of vector B), and then adding up all the results. You can perform a dot product on vectors of any size, as long as both vectors have the same number of components. Listing 8-7 shows what this might look like in code.

Listing 8-7 A Simple Implementation of a Dot Product

float dot(vec3 a, vec3 b){
      float x = a.x * b.x;
      float y = a.y * b.y;
      float z = a.z * b.z;
      return x + y + z;
}

The value of a dot product can tell you a lot about the two vectors that were used to create it. Even whether that value is positive or negative can give us valuable information:

  1. If the dot product is 0, it means that the two vectors are perpendicular to one another.

  2. A positive dot product means that the angle between the two vectors is less than 90 degrees.

  3. A negative dot product means that the angle between the two vectors is greater than 90 degrees.

Dot products can also be used to find the exact angle between the two vectors. This is done all the time in game code and only takes a few lines of code to do. You can see these few lines in Listing 8-8. Our lighting calculations won’t be converting dot product values into actual angles. However, knowing how to get degrees from dot product values is a handy technique that’s worth taking a minute to learn.

Listing 8-8 Using Dot Product to Find the Angle Between Two Vectors. The Returned Value Will Be in Radians

float angleBetween(vec3 a, vec3 b){
      float d = dot(a,b);
      float len = length(a) * length(b); ❶
      float cosAngle = d / len; ❷
      float angle = acos(cosAngle);
      return angle;
}

As you can see in Listing 8-8, once you divide the dot product of two vectors by the product of their lengths (also called magnitudes), you end up with the cosine of the angle between them. In shader code, our vectors are usually going to already be normalized, which means that both their lengths are already 1. This means that we can omit lines ❶ and ❷ and still end up with the cosine of the angle between them, which is the value we’ll need for our lighting calculations.
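Here’s a quick numeric check of that claim in C++ with GLM (the vectors are arbitrary): for normalized vectors, the dot product alone is already the cosine of the angle between them, and a dot product of 0 really does mean the vectors are perpendicular.

#include <glm/glm.hpp>
#include <cmath>
#include <cstdio>

int main(){
      glm::vec3 a(1.0f, 0.0f, 0.0f);
      glm::vec3 b = glm::normalize(glm::vec3(1.0f, 1.0f, 0.0f)); //45 degrees away from a

      float d = glm::dot(a, b);      //~0.707, which is cos(45 degrees)
      float angle = acosf(d);        //~0.785 radians, or 45 degrees

      glm::vec3 up(0.0f, 1.0f, 0.0f);
      printf("%f %f %f\n", d, angle, glm::dot(a, up)); //last value is 0: perpendicular
      return 0;
}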

Shading with Dot Products

The first type of lighting that we’re going to build is known as diffuse—or Lambertian—lighting. This is the type of lighting that you expect to see on nonshiny objects, like a dry, unpolished piece of wood. Diffuse lighting simulates what happens when light hits the surfaces of a rough object. This means that rather than reflecting light in a certain direction (like a mirror would), light hitting our object will scatter in all directions, and give our object a matte appearance. Figure 8-7 shows some examples of meshes being shaded with diffuse lighting.

Figure 8-7

Three meshes rendered with diffuse lighting

Diffuse lighting works by comparing the normal for a given fragment with the direction that light is coming from. The smaller the angle between these two vectors, the more directly the mesh surface at that point faces the light. For normalized vectors, the dot product approaches 1 as the angle between them goes to zero, so we can use the dot product of the normal and light direction vector to determine how much light we need to use to shade each fragment. In shader code, this might look like Listing 8-9, which can be found in the “DiffuseLighting” example project and is called “diffuse.frag.”

Listing 8-9 diffuse.frag - Diffuse Lighting in Shader Code

#version 410

uniform vec3 lightDir;
uniform vec3 lightCol;
uniform vec3 meshCol;

in vec3 fragNrm;
out vec4 outCol;

void main(){
      vec3 normal = normalize(fragNrm);
      float lightAmt = dot(normal, lightDir); ❶
      vec3 fragLight = lightCol * lightAmt;
      outCol = vec4(meshCol * fragLight, 1.0); ❷
}

After all that buildup, the shader for diffuse lighting hopefully seems very straightforward. The only new bit of syntax is the dot() function at ❶, which is, as you might have guessed, how to calculate a dot product in GLSL. Once we have the dot product, we can use it to determine how much light our fragment will get by multiplying the incoming light’s color by the dot product value. If our dot product is 1, our fragment will receive the full intensity of the light, while any fragment with a normal pointing greater than 90 degrees away from the light will receive none. Multiplying the color of the fragment itself by our dot product/light color vector gives us the final color of that fragment. This final multiplication is shown at ❷.

You may have noticed that the math in Listing 8-9 is actually backward. The normal vector for our fragment points away from the mesh surface, while the incoming light vector points toward it. This would usually mean that the preceding calculations make the areas that face the incoming light darker, except that we’re going to cheat a bit and store our light’s direction backward in C++ code. This saves us a multiplication in our shader code to invert the light direction and makes all the math work out how we intend. If this is a bit confusing, look back at Figure 8-6, and see that the two vectors in the diagram point in different directions, and then compare it to Figure 8-8, which shows this light direction vector reversed.

Figure 8-8

How light data is sent to shader code. Notice that the light direction has been reversed, and is now pointing toward the light source.

One other oddity in the math we have so far is that if we get a negative dot product value, we end up with a negative color for our light when we multiply those two values together. This doesn’t matter to us right now because there’s only one source of light in our scene. However, in scenes with multiple lights, this could end up making parts of our mesh darker than they should be. To fix this, all we need to do is wrap our dot product in a max() call to make sure that the smallest possible value we can get is 0. With that change, line ❶ looks like Listing 8-10.

Listing 8-10 Using max() to Make Sure We Don’t End Up with Negative Brightness

float lightAmt = max(0.0, dot(normal, lightDir)); ❶

Your First Directional Light

This is the first time that any shader we’ve written has worked with light information. To make things simpler, this shader has been written to work with a very specific type of light, which games commonly refer to as a directional light. Directional lights are used to represent light sources that are very far away (like the sun) and work by casting light uniformly in a single direction. This means that no matter where in our scene a mesh is positioned, it will receive the same amount of light, coming from the same direction, as every other object in the scene.

Setting up a directional light in C++ code is very simple, since we can represent all the data we need about that light in a few variables. Listing 8-11 shows a struct containing data for a directional light’s color, direction, and intensity.

Listing 8-11 The Data Needed to Represent a Directional Light

struct DirectionalLight{
      glm::vec3 direction;
      glm::vec3 color;
      float intensity;
};

Our shader didn’t have a separate uniform for light intensity, but the struct for our light does. Games commonly multiply the color of a light by its brightness before passing that data to the graphics pipeline. This is done for optimization reasons. It’s much cheaper to do that math once, before any shader executes, than it is to do that math for every fragment that you need to shade. For the same reason, we’re going to normalize the direction of our light in C++ code too. Listing 8-12 shows a few helper functions that we’re going to use to make sure the data we send to our shader uses these optimizations.

Listing 8-12 Some Light Data Helper Functions

glm::vec3 getLightDirection(DirectionalLight& l){
      return glm::normalize(l.direction * -1.0f);
}

glm::vec3 getLightColor(DirectionalLight& l){
      return l.color * l.intensity;
}

In getLightColor(), we aren’t clamping our light color to the standard 0–1 range that color data would usually be stored in. This is because the light color that we pass to our shader is pre-multiplied by that light’s brightness, and therefore needs to be able to exceed the standard range of a color for lights with brightness greater than 1.0. We still can’t write a value greater than (1.0, 1.0, 1.0, 1.0) to a fragment, so no matter how bright we set our light to, we still can only go as bright as white on our screen.
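As a quick illustration, here’s how those helpers might be used (the light values here are made up, and the snippet assumes the struct and functions from Listings 8-11 and 8-12 are in scope):

DirectionalLight sun;
sun.direction = glm::vec3(0.5f, -1.0f, 0.0f);
sun.color = glm::vec3(1.0f, 0.9f, 0.8f);
sun.intensity = 2.0f;

glm::vec3 dir = getLightDirection(sun); //normalized, and flipped to point toward the light
glm::vec3 col = getLightColor(sun);     //(2.0, 1.8, 1.6), intentionally brighter than "white"

The screen still caps out at white, but the over-bright light color means more of the mesh reaches full brightness before that cap kicks in.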

With these functions ready to go, we’re ready to set up our light and use it in our draw function. Normally we’d want to set up the light in our setup() function, but for the purposes of a smaller amount of example code, Listing 8-13 is going to put the light setup logic into the draw() function. We’re also going to adjust our model matrix so that our mesh faces upward. This is going to give us a better angle to see the lighting on our mesh.

Listing 8-13 Setting Up and Using Our New Light Data

void ofApp::draw(){
      using namespace glm;

      DirectionalLight dirLight;
      dirLight.direction = normalize(vec3(0, -1, 0));
      dirLight.color = vec3(1, 1, 1);
      dirLight.intensity = 1.0f;

      //code to set up mvp matrix omitted for brevity

      diffuseShader.begin();
      diffuseShader.setUniformMatrix4f("mvp", mvp);
      diffuseShader.setUniform3f("meshCol", glm::vec3(1, 0, 0)); ❶
      diffuseShader.setUniform3f("lightDir", getLightDirection(dirLight));
      diffuseShader.setUniform3f("lightCol", getLightColor(dirLight));
      torusMesh.draw();
      diffuseShader.end();
}

The other new thing, both in our shader from earlier and in Listing 8-13, is that we’re specifying a mesh color as one of our shader uniforms (❶). This lets us set a solid color for our mesh and gives you a value to play with to get a feel for how different light colors interact with mesh surface colors. Passing in red to our mesh color and white to the light color uniforms will look like the left screenshot in Figure 8-9. You’ll notice that the angle that we were originally viewing our torus from made it difficult to see what was going on with our lighting. To remedy that, I changed the example code’s draw function to rotate our mesh so it was facing upward, and then positioned the camera so it was overhead, looking down. The code changes for this are shown in Listing 8-14, and the result is the right screenshot in Figure 8-9. The code to generate this screenshot can be found in the “DiffuseTorus” project of the Chapter 8 example code.

Figure 8-9

Diffuse lighting on our torus mesh

Listing 8-14 Code Changes to Get a Better Angle to See Lighting

void ofApp::draw(){
      //only the changed lines of code are shown, for brevity
      cam.pos = vec3(0, 0.75f, 1.0f);
      float cAngle = radians(-45.0f);
      vec3 right = vec3(1, 0, 0);
      mat4 view = inverse( translate(cam.pos) * rotate(cAngle, right) );
      mat4 model = rotate(radians(90.0f), right) * scale(vec3(0.5, 0.5, 0.5));

      //rest of function continues unchanged
}

Lighting in shaders is a big topic, and we’re going to spend the next few chapters talking about all of it. However, before we end this chapter, I want to walk through one more simple lighting effect that can be built with just the math we’ve covered so far.

Creating a Rim Light Effect

One lighting technique commonly employed in games is known as rim light. This technique adds light (or color) around the edges of a mesh’s shape, making it appear as though the mesh is being backlit by a light that is invisible to the player. Figure 8-10 shows what our torus mesh would look like if we rendered it with only rim lighting applied to the mesh. To make things easier to see, I set the background color of my program to black, which you can do in your own program using the ofSetBackgroundColor() function.

Figure 8-10

Rendering our torus with rim light

Rim light works very similarly to how our directional light did in the last section. The only real difference is that instead of using the light’s direction vector in our calculations, we’ll be using a vector that goes from each fragment to the camera. We’ll then use the result of this dot product to decide how much extra light to add to each fragment in order to get a rim light. Listing 8-15 shows what a rim light only shader would look like. This is the shader used to render Figure 8-10.

Listing 8-15 A Rim Light Only Shader

#version 410

uniform vec3 meshCol;
uniform vec3 cameraPos; ❶

in vec3 fragNrm;
in vec3 fragWorldPos; ❷
out vec4 outCol;

void main()
{
      vec3 normal = normalize(fragNrm);
      vec3 toCam = normalize(cameraPos - fragWorldPos); ❸
      float rimAmt = 1.0-max(0.0,dot(normal, toCam)); ❹
      rimAmt = pow(rimAmt, 2); ❺

      vec3 rimLightCol = vec3(1,1,1);
      outCol = vec4(rimLightCol * rimAmt, 1.0);
}

The first differences between this shader and the directional light shader we wrote earlier in the chapter are the new variables that we need to make the rim light effect work. First, we need the position of the camera in world space (❶). Since we’ll be calculating the vector from each individual fragment to the camera, we can’t pass a single direction vector from C++ code. Instead, we pass the camera position so we can calculate this vector in our shader code. Additionally, we need the world position of our fragment. To calculate this, we need our vertex shader to calculate world space vertex positions, which we can then interpolate to get fragment positions. In Listing 8-15, this data comes from the vertex shader and is read into the fragWorldPos variable (❷). We’ll discuss the changes we need to make to the vertex shader in a moment, but let’s keep walking through the fragment shader first.

The rim light calculations start in earnest at ❸. As mentioned, this involves calculating the vector from the current fragment to the camera, and then using those vectors to calculate a dot product (❹). Notice that we then subtract this value from 1.0 before writing it to our rimAmt variable. Since our dot product value is being clamped to the range 0-1, this has the effect of inverting the dot product that we calculate. If you don’t do this, you end up with an effect that lights the center of the mesh instead of the rim. You can see what this looks like in Figure 8-11.

Figure 8-11

What rim light looks like if you forget to invert your dot product value. If your background is black, like the left image, this gives your meshes a ghostly look.

After we invert the dot product value, there’s one last bit of math that we need to do in order to get our final rimAmt value. In order to control how “tight” the rim is on an object, we can raise our rimAmt value to a power (❺). This is the first time we’ve seen the pow() function used in GLSL, but it’s the same as the pow function that you’re probably used to in C++. All it does is return the first argument, raised to the power specified by the second argument. If you’re confused as to why this will concentrate our rim effect, think about what happens when you raise a value that’s less than 1 to a power. Values closer to 1 will stay relatively close to 1—for example, 0.95 raised to the 4th power is 0.81. However, as values get smaller, this operation is going to have a greater impact—0.25 to the 4th power is 0.004, for example. Figure 8-12 shows examples of what our rim effect looks like with different powers used for our rim calculation.

Figure 8-12

What our rim shader looks like if we raise our rimAmt variable to different powers. From left to right, these screenshots show the result of not using a pow() at all, raising to the 2nd power, and raising to the 4th power.

That wraps up all the new stuff in our fragment shader, but remember that we also need to modify our vertex shader to pass world space position information for our rim calculations. Listing 8-16 shows what the vertex shader looks like to support the rim light effect we just walked through.

Listing 8-16 The Vertex Shader That Supports rimlight.frag

#version 410

layout (location = 0) in vec3 pos;
layout (location = 2) in vec3 nrm;

uniform mat4 mvp;
uniform mat3 normal;
uniform mat4 model;

out vec3 fragNrm;
out vec3 fragWorldPos;

void main(){
      gl_Position = mvp * vec4(pos, 1.0);
      fragNrm = (normal * nrm).xyz;
      fragWorldPos = (model * vec4(pos, 1.0)).xyz; ❶
}

Just as we talked about earlier, in order to get the world space position of an individual fragment, the vertex shader needs to output the world space position of each vertex. These vertex positions are interpolated just like our other “out” variables, and the result of this interpolation is that we wind up with a per-fragment position vector. Transforming vertex positions to world space is done by multiplying them by the model matrix for our mesh (❶). This also means that our C++ code will need to provide both the mvp matrix and the model matrix to our vertex shader.

Now that we’ve walked through what a rim light only shader looks like, it’s time for us to combine our rim light effect with the directional lighting math we wrote earlier. We’re going to use our existing directional light shader in the next chapter and don’t want it to have rim light in it when we do, so create a new fragment shader in your example project and copy Listing 8-17 into it. Chapter 9 will use fragment world position calculations, so it’s safe to add that to “mesh.vert.”

Listing 8-17 Adding Rim Light to Our Directional Light Shader

#version 410

uniform vec3 lightDir;
uniform vec3 lightCol;
uniform vec3 meshCol;
uniform vec3 cameraPos;

in vec3 fragNrm;
in vec3 fragWorldPos;
out vec4 outCol;

void main()
{
      vec3 normal = normalize(fragNrm);
      vec3 toCam = normalize(cameraPos - fragWorldPos);
      float rimAmt = 1.0-max(0.0,dot(normal, toCam));
      rimAmt = pow(rimAmt, 2);

      float lightAmt = max(0.0,dot(normal, lightDir));
      vec3 fragLight = lightCol * lightAmt;

      outCol = vec4(meshCol * fragLight + rimAmt, 1.0); ❶
}

For the most part, this is just copying and pasting parts of our rim light shader into our existing directional light one. The only real thing to note is how to incorporate the rim light value with the rest of the fragment’s color. Since we want the rim light to add on to the existing lighting of our mesh, we need to add the rim light’s contribution to our fragment’s color after all the other lighting has been done. You can see this at line ❶. In this example, we’re adding a pure white rim light to our mesh, but many games will choose to use different colors instead.

After you’ve made these changes to our shader and modified your draw() function to pass the new uniform data we need about our camera position, your program should look like Figure 8-13. The code for this can be found in the “RimLight” project in the example code for this chapter if you’re stuck.

Figure 8-13

Our torus mesh rendered with a white rim light

While it looks cool, the lighting calculations that are most commonly used by games don’t include rim light by default. This is because rim light isn’t based on how light actually works; it’s a purely artistic effect. The next few chapters are concerned with how to model real-world lighting effects in our shader code, so we won’t be using rim light again. This doesn’t mean that you shouldn’t use rim light in your projects. Rather, think of it as a technique to add to your shaders after all the other lighting for that object has been correctly set up, to add some unrealistic visual flair to your rendering.

Summary

That wraps up our first chapter about lighting! Here’s a quick rundown of what we covered:

  • Normal vectors are vectors that point outward from the surface of a mesh. These are stored on each vertex of a mesh to help provide information about the shape of each mesh face.

  • If your game supports nonuniform scaling, normal vectors need a special matrix (called the normal matrix) to transform them from object to world space.

  • A dot product is a scalar value that represents a relationship between two vectors. Calculating a dot product can be done with the GLSL dot() function.

  • Diffuse lighting is the name for the type of lighting found on nonshiny surfaces. The amount of diffuse light a fragment receives is equivalent to the dot product between that fragment’s normal vector and the incoming light’s direction vector.

  • Rim Light is the name of a shading technique that makes a mesh appear as though it’s being lit from behind. It can be calculated by taking the dot product of a fragment’s normal vector and a vector from that fragment to the game camera.