Repository: lwjglgamedev/lwjglbook-bookcontents
Branch: main
Commit: cc6664605e31
Files: 27
Total size: 521.3 KB
Directory structure:
gitextract_brkbjcqe/
├── GLOSSARY.md
├── README.md
├── SUMMARY.md
├── appendix-a/
│ └── appendix-a.md
├── book.json
├── chapter-01/
│ └── chapter-01.md
├── chapter-02/
│ └── chapter-02.md
├── chapter-03/
│ └── chapter-03.md
├── chapter-04/
│ └── chapter-04.md
├── chapter-05/
│ └── chapter-05.md
├── chapter-06/
│ └── chapter-06.md
├── chapter-07/
│ └── chapter-07.md
├── chapter-08/
│ └── chapter-08.md
├── chapter-09/
│ └── chapter-09.md
├── chapter-10/
│ └── chapter-10.md
├── chapter-11/
│ └── chapter-11.md
├── chapter-12/
│ └── chapter-12.md
├── chapter-13/
│ └── chapter-13.md
├── chapter-14/
│ └── chapter-14.md
├── chapter-15/
│ └── chapter-15.md
├── chapter-16/
│ └── chapter-16.md
├── chapter-17/
│ └── chapter-17.md
├── chapter-18/
│ └── chapter-18.md
├── chapter-19/
│ └── chapter-19.md
├── chapter-20/
│ └── chapter-20.md
├── chapter-21/
│ └── chapter-21.md
└── styles/
└── pdf.css
================================================
FILE CONTENTS
================================================
================================================
FILE: GLOSSARY.md
================================================
# Glossary
================================================
FILE: README.md
================================================
# 3D Game Development with LWJGL 3
This online book will introduce the main concepts required to write a 3D game using the LWJGL 3 library.
[LWJGL](http://www.lwjgl.org/) is a Java library that provides access to native APIs used in the development of graphics \(OpenGL\), audio \(OpenAL\) and parallel computing \(OpenCL\) applications. This library leverages the high performance of native OpenGL applications while using the Java language.
My initial goal was to learn the techniques involved in writing a 3D game using OpenGL. All the information required was there on the internet, but it was not well organized, and sometimes it was very hard to find and even incomplete or misleading.
I started to collect some materials, develop some examples and decided to organize that information in the form of a book.
[Table of contents](SUMMARY.md).
You can also check my Vulkan book [here](https://github.com/lwjglgamedev/vulkanbook).
## Source Code
The source code of the samples of this book is in [GitHub](https://github.com/lwjglgamedev/lwjglbook).
The source code for the book itself is also published in [GitHub](https://github.com/lwjglgamedev/lwjglbook-bookcontents).
## License
The book is licensed under [Attribution-ShareAlike 4.0 International \(CC BY-SA 4.0\)](http://creativecommons.org/licenses/by-sa/4.0/)
The source code for the book is licensed under [Apache v2.0](https://www.apache.org/licenses/LICENSE-2.0 "Apache v2.0")
## Previous version
The previous version of the book can still be accessed on GitHub. Here are the links:
* [Book contents](https://github.com/lwjglgamedev/lwjglbook-bookcontents-leg)
* [Source code](https://github.com/lwjglgamedev/lwjglbook-leg)
**NOTE**: The old version of the book was originally published using a different URL. That site is now part of what GitBook considers legacy content. Unfortunately, I can no longer access that content, nor even delete it (according to GitBook this is not possible). Therefore, if you are accessing the book through a GitBook URL which starts with "https://lwjglgamedev.gitbooks.io/" you are accessing an out-of-sync version (not even in sync with the previous legacy version of the book, which is still hosted on GitHub).
## Support
If you like the book you can become a [sponsor](https://github.com/sponsors/lwjglgamedev)
## Comments are welcome
Suggestions and corrections are more than welcome \(and if you do like the book, please rate it with a star\). Please send them using the discussion forum, and propose the corrections you consider appropriate in order to improve the book.
## Author
Antonio Hernández Bejarano
## Special Thanks
To all the readers that have contributed with corrections, improvements and ideas.
================================================
FILE: SUMMARY.md
================================================
# Summary
* [Introduction](README.md)
* [Chapter 01 - First steps](chapter-01/chapter-01.md)
* [Chapter 02 - The Game Loop](chapter-02/chapter-02.md)
* [Chapter 03 - Our first triangle](chapter-03/chapter-03.md)
* [Chapter 04 - Render a quad](chapter-04/chapter-04.md)
* [Chapter 05 - Perspective projection](chapter-05/chapter-05.md)
* [Chapter 06 - Going 3D](chapter-06/chapter-06.md)
* [Chapter 07 - Textures](chapter-07/chapter-07.md)
* [Chapter 08 - Camera](chapter-08/chapter-08.md)
* [Chapter 09 - Loading more complex models (Assimp)](chapter-09/chapter-09.md)
* [Chapter 10 - GUI (Imgui)](chapter-10/chapter-10.md)
* [Chapter 11 - Lights](chapter-11/chapter-11.md)
* [Chapter 12 - Sky Box](chapter-12/chapter-12.md)
* [Chapter 13 - Fog](chapter-13/chapter-13.md)
* [Chapter 14 - Normal Mapping](chapter-14/chapter-14.md)
* [Chapter 15 - Animations](chapter-15/chapter-15.md)
* [Chapter 16 - Audio](chapter-16/chapter-16.md)
* [Chapter 17 - Cascade shadow maps](chapter-17/chapter-17.md)
* [Chapter 18 - 3D Object Picking](chapter-18/chapter-18.md)
* [Chapter 19 - Deferred Shading](chapter-19/chapter-19.md)
* [Chapter 20 - Indirect drawing (static models)](chapter-20/chapter-20.md)
* [Chapter 21 - Indirect drawing (animated models) and compute shaders](chapter-21/chapter-21.md)
* [Appendix A - OpenGL Debugging](appendix-a/appendix-a.md)
================================================
FILE: appendix-a/appendix-a.md
================================================
# Appendix A - OpenGL Debugging
Debugging an OpenGL program can be a daunting task. Most of the time you end up with a black screen and you have no means of knowing what’s going on. In order to alleviate this problem we can use some existing tools that will provide more information about the rendering process.
In this appendix we will describe how to use the [RenderDoc](https://renderdoc.org/) tool to debug our LWJGL programs. RenderDoc is a graphics debugging tool that can be used with Direct3D, Vulkan and OpenGL. In the case of OpenGL it only supports the core profile, from 3.2 up to 4.5.
So let’s get started. You need to download and install the RenderDoc version for your OS. Once installed, when you launch it you will see something similar to this.

The first step is to configure RenderDoc to execute and monitor our samples. In the “Capture Executable” tab we need to setup the following parameters:
* **Executable path**: In our case this should point to the JVM launcher (For instance, “C:\Program Files\Java\jdk-XX\bin\java.exe”).
* **Working Directory**: This is the working directory that will be set for your program. In our case it should be set to the target directory where Maven dumps the build results. This way, the dependencies will be found (For instance, "D:/Projects/booksamples/chapter-18/target").
* **Command line arguments**: This will contain the arguments required by the JVM to execute our sample. In our case, just passing the jar to be executed (For instance, “-jar chapter-18-1.0.jar”).

There are many other options in this tab to configure the capture options. You can consult their purpose in the [RenderDoc documentation](https://renderdoc.org/docs/index.html). Once everything has been set up you can execute your program by clicking on the “Launch” button. You will see something like this:

Once the process is launched, you will see that a new tab has been added which is named “java \[PID XXXX]” (where the XXXX number represents the PID, the process identifier, of the java process).

From that tab you can capture the state of your program by pressing the “Trigger capture” button. Once a capture has been generated, you will see a little snapshot in that same tab.

If you double click on that capture, all the data collected will be loaded and you can start inspecting it. The “Event Browser” panel will be populated with all the relevant OpenGL calls executed during one rendering cycle.

You can see the following events:
* Three depth passes for the cascade shadows.
* The geometry pass. If you click on a glDrawElements event and select the “Mesh” tab, you can see the mesh that was drawn, along with its input to and output from the vertex shader.
* The lighting pass.
You can also view the input textures used for that drawing operation (by clicking the “Texture Viewer” tab).

In the center panel, you can see the output, and on the right panel you can see the list of textures used as an input. You can also view the output textures one by one. This is very illustrative to show how deferred shading works.

As you can see, this tool provides valuable information about what’s happening when rendering. It can save precious time while debugging rendering problems. It can even display information about the shaders used in the rendering pipeline.

================================================
FILE: book.json
================================================
{
"plugins": [
"katex"
],
"pluginsConfig": {}
}
================================================
FILE: chapter-01/chapter-01.md
================================================
# Chapter 01 - First steps
In this book we will learn the principal techniques involved in developing 3D games using [OpenGL](https://www.opengl.org). We will develop our samples in Java and we will use the Lightweight Java Game Library [LWJGL](http://www.lwjgl.org/). The LWJGL library enables access to low-level APIs (Application Programming Interfaces) such as OpenGL from Java.
LWJGL is a low level API that acts like a wrapper around OpenGL. Therefore, if your idea is to start creating 3D games in a short period of time maybe you should consider other alternatives like [jMonkeyEngine](https://jmonkeyengine.org) or [Unity](https://unity.com). By using this low level API you will have to go through many concepts and write lots of lines of code before you see the results. The benefit of doing it this way is that you will get a much better understanding of 3D graphics and you will always be in control.
Regarding Java, you will need at least Java 17. So the first step, in case you do not have that version installed, is to download the Java SDK. You can download the OpenJDK binaries [here](https://jdk.java.net/17/). In any case, this book assumes that you have a moderate understanding of the Java language. If this is not your case, you should first get proper knowledge of the language.
The best way to work with the examples is to clone the GitHub repository. You can either download the whole repository as a zip and extract it in your desired folder or clone it by using the following command: `git clone https://github.com/lwjglgamedev/lwjglbook.git`. In both cases you will have a root folder which contains one sub folder per chapter.
You may use the Java IDE you want in order to run the samples. You can download IntelliJ IDEA which has good support for Java. IntelliJ provides a free open source version, the Community version, which you can download from here: [https://www.jetbrains.com/idea/download/](https://www.jetbrains.com/idea/download/).

When you open the source code in your IDE you can either open the root folder which contains all the chapters (the parent project) or each chapter independently. In the first case, please remember to properly set the working directory for each chapter to the root folder of the chapter. The samples will try to access files using relative paths assuming that the root folder is the chapter base folder.
For building our samples we will be using [Maven](https://maven.apache.org/). Maven is already integrated in most IDEs and you can directly open the different samples inside them. Just open the folder that contains the chapter sample and IntelliJ will detect that it is a Maven project.
Maven builds projects based on an XML file named `pom.xml` (Project Object Model) which manages project dependencies (the libraries you need to use) and the steps to be performed during the build process. Maven follows the principle of convention over configuration, that is, if you stick to the standard project structure and naming conventions the configuration file does not need to explicitly say where source files are or where compiled classes should be located.
This book does not intend to be a maven tutorial, so please find the information about it in the web in case you need it. The source code root folder defines a parent project which defines the plugins to be used and collects the versions of the libraries employed. Therefore you will find there a `pom.xml` file which defines common actions and properties for all the chapters, which are handled as sub-projects.
LWJGL 3.1 introduced some changes in the way that the project is built. Now the base code is much more modular, and we can be more selective in the packages that we want to use instead of using a giant monolithic jar file. This comes at a cost: You now need to carefully specify the dependencies one by one. But the [download](https://www.lwjgl.org/download) page includes a fancy tool that generates the pom file for you. In our case, we will be first using GLFW and OpenGL bindings. You can check what the pom file looks like in the source code.
The LWJGL platform dependency already takes care of unpacking native libraries for your platform, so there's no need to use other plugins (such as `mavennatives`). We just need to set up three profiles to set a property that will configure the LWJGL platform. The profiles will set up the correct values of that property for Windows, Linux and Mac OS families.
```xml
<profiles>
    <profile>
        <id>windows-profile</id>
        <activation><os><family>Windows</family></os></activation>
        <properties><native.target>natives-windows</native.target></properties>
    </profile>
    <profile>
        <id>linux-profile</id>
        <activation><os><family>Linux</family></os></activation>
        <properties><native.target>natives-linux</native.target></properties>
    </profile>
    <profile>
        <id>OSX-profile</id>
        <activation><os><family>mac</family></os></activation>
        <properties><native.target>natives-osx</native.target></properties>
    </profile>
</profiles>
```
Inside each project, the LWJGL platform dependency will use the correct property established in the profile for the current platform.
```xml
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-platform</artifactId>
    <version>${lwjgl.version}</version>
    <classifier>${native.target}</classifier>
</dependency>
```
Besides that, every project generates a runnable jar (one that can be executed by typing java -jar name\_of\_the\_jar.jar). This is achieved by using the maven-jar-plugin which creates a jar with a `MANIFEST.MF` file with the correct values. The most important attribute for that file is `Main-Class`, which sets the entry point for the program. In addition, all the dependencies are set as entries in the `Class-Path` attribute for that file. In order to execute it on another computer, you just need to copy the main jar file and the lib directory (with all the jars included there) which are located under the target directory.
The jars that contain LWJGL classes also contain the native libraries. LWJGL will also take care of extracting them and adding them to the path where the JVM will look for libraries.
This chapter's source code is taken directly from the getting started sample in the LWJGL site [http://www.lwjgl.org/guide](http://www.lwjgl.org/guide). Although it is very well documented let's go through the source code and explain the most relevant parts. Since pasting the source code for each class will make it impossible to read, we will include fragments. In order for you to better understand the class to which each specific fragment belongs, we will always include the class header in each fragment. We will use three dots (`...`) to indicate that there is more code before / after the fragment. The sample is contained in a single class named `HelloWorld` which starts like this:
```java
package org.lwjglb;
import org.lwjgl.Version;
import org.lwjgl.glfw.*;
import org.lwjgl.opengl.GL;
import org.lwjgl.system.MemoryStack;
import java.nio.IntBuffer;
import static org.lwjgl.glfw.Callbacks.glfwFreeCallbacks;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.system.MemoryStack.stackPush;
import static org.lwjgl.system.MemoryUtil.NULL;
public class HelloWorld {
// The window handle
private long window;
public static void main(String[] args) {
new HelloWorld().run();
}
...
}
```
The class just stores a reference to a window handle (we will see what this means later on), and in the `main` method we just call the `run` method. Let's start dissecting that method:
```java
public class HelloWorld {
...
public void run() {
System.out.println("Hello LWJGL " + Version.getVersion() + "!");
init();
loop();
// Free the window callbacks and destroy the window
glfwFreeCallbacks(window);
glfwDestroyWindow(window);
// Terminate GLFW and free the error callback
glfwTerminate();
glfwSetErrorCallback(null).free();
}
...
}
```
This method just calls the `init` method to initialize the application and then calls the `loop` method, which is basically an endless loop that renders to a window. When the `loop` method is finished we just need to free some resources created during initialization (the GLFW window). Let's start with the `init` method.
```java
public class HelloWorld {
...
private void init() {
// Setup an error callback. The default implementation
// will print the error message in System.err.
GLFWErrorCallback.createPrint(System.err).set();
// Initialize GLFW. Most GLFW functions will not work before doing this.
if (!glfwInit())
throw new IllegalStateException("Unable to initialize GLFW");
// Configure GLFW
glfwDefaultWindowHints(); // optional, the current window hints are already the default
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE); // the window will stay hidden after creation
glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE); // the window will be resizable
// Create the window
window = glfwCreateWindow(300, 300, "Hello World!", NULL, NULL);
if (window == NULL)
throw new RuntimeException("Failed to create the GLFW window");
// Setup a key callback. It will be called every time a key is pressed, repeated or released.
glfwSetKeyCallback(window, (window, key, scancode, action, mods) -> {
if (key == GLFW_KEY_ESCAPE && action == GLFW_RELEASE)
glfwSetWindowShouldClose(window, true); // We will detect this in the rendering loop
});
...
}
...
}
```
We start by invoking [GLFW](https://www.glfw.org/), which is a library to handle GUI components (windows, etc.) and events (key presses, mouse movements, etc.) with an OpenGL context attached in a straightforward way. Currently, you cannot use Swing or AWT directly to render OpenGL. If you want to use AWT you can check [lwjgl3-awt](https://github.com/LWJGLX/lwjgl3-awt), but in this book we will stick with GLFW. We first start by initializing the GLFW library and setting some parameters for window initialization (such as whether it is resizable or not). The window is created by calling `glfwCreateWindow`, which receives the window's width and height and the window title. This function returns a handle, which we need to store so we can use it with any other GLFW related function. After that, we set a keyboard callback, that is, a function that will be called when a key is pressed. In this case we just want to detect if the `ESC` key is pressed to close the window. Let's continue with the `init` method:
```java
public class HelloWorld {
...
private void init() {
...
// Get the thread stack and push a new frame
try (MemoryStack stack = stackPush()) {
IntBuffer pWidth = stack.mallocInt(1); // int*
IntBuffer pHeight = stack.mallocInt(1); // int*
// Get the window size passed to glfwCreateWindow
glfwGetWindowSize(window, pWidth, pHeight);
// Get the resolution of the primary monitor
GLFWVidMode vidmode = glfwGetVideoMode(glfwGetPrimaryMonitor());
// Center the window
glfwSetWindowPos(
window,
(vidmode.width() - pWidth.get(0)) / 2,
(vidmode.height() - pHeight.get(0)) / 2
);
} // the stack frame is popped automatically
// Make the OpenGL context current
glfwMakeContextCurrent(window);
// Enable v-sync
glfwSwapInterval(1);
// Make the window visible
glfwShowWindow(window);
}
...
}
```
Although we will explain it in the next chapters, you will see here a key class in LWJGL which is the `MemoryStack`. As has been said before, LWJGL provides wrappers around native libraries (C-based functions). Java does not have the concept of pointers (at least thinking in C terms), so passing structures to C functions is not a straightforward task. In order to share those structures, and to have pass-by-reference parameters, such as in the example above, we need to allocate memory which can be accessed by native code. LWJGL provides the `MemoryStack` class which allows us to allocate native-accessible memory / structures which is automatically cleaned up (in fact it is returned to a pool-like structure so it can be reused) when we are out of the scope where the `stackPush` method is called. Every native-accessible memory / structure is instantiated through this stack class. In the sample above we need to call `glfwGetWindowSize` to get the window dimensions. The values are returned using a pass-by-reference approach, so we need to allocate two ints (in the form of two `IntBuffer`'s). With that information and the dimensions of the monitor we can center the window, set up OpenGL, enable v-sync (more on this in the next chapter) and finally show the window.
Now we need an endless loop to render something continuously:
```java
public class HelloWorld {
...
private void loop() {
// This line is critical for LWJGL's interoperation with GLFW's
// OpenGL context, or any context that is managed externally.
// LWJGL detects the context that is current in the current thread,
// creates the GLCapabilities instance and makes the OpenGL
// bindings available for use.
GL.createCapabilities();
// Set the clear color
glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
// Run the rendering loop until the user has attempted to close
// the window or has pressed the ESCAPE key.
while (!glfwWindowShouldClose(window)) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the framebuffer
glfwSwapBuffers(window); // swap the color buffers
// Poll for window events. The key callback above will only be
// invoked during this call.
glfwPollEvents();
}
}
...
}
```
We first create the OpenGL context, set up the clear color and perform a clear operation (over the color and depth buffers) in each loop iteration, polling for keyboard events to detect if the window should be closed. We will explain these concepts in detail in the next chapters. However, just for the sake of completeness, rendering is done over a target, in this case over a buffer which contains color information and depth values (for 3D). After we have finished rendering over these buffers, we just need to inform GLFW that this buffer is ready for presenting by calling `glfwSwapBuffers`. GLFW maintains several buffers so we can perform render operations over one buffer while the other one is presented in the window (if not, we would have flickering artifacts).
If you have your environment correctly set up you should be able to execute it and see a window with a red background.
The source code of this chapter is located [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-01).
[Next chapter](../chapter-02/chapter-02.md)
================================================
FILE: chapter-02/chapter-02.md
================================================
# Chapter 02 - The Game Loop
In this chapter we will start developing our game engine by creating the game loop. The game loop is the core component of every game. It is basically an endless loop which is responsible for periodically handling user input, updating game state and rendering to the screen.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-02).
## The basis
The following snippet shows the structure of a game loop:
```java
while (keepOnRunning) {
input();
update();
render();
}
```
The `input` method is responsible for handling user input (key strokes, mouse movements, etc.). The `update` method is responsible for updating the game state (enemy positions, AI, etc.). Finally, the `render` method is responsible for rendering the visuals of the game with OpenGL. So, is that all? Are we finished with game loops? Well, not yet. The above snippet has many pitfalls. First of all, the speed that the game loop runs at will be different depending on the machine it runs on. If the machine is fast enough the user will not even be able to see what is happening in the game. Moreover, that game loop will consume all the machine's resources.
First of all we may want to control separately the period at which the game state is updated and the period at which the game is rendered to the screen. Why do we do this? Well, updating our game state at a constant rate is more important, especially if we use some physics engine. On the contrary, if our rendering is not done in time it makes no sense to render old frames while processing our game loop. We have the flexibility to skip some frames.
## Implementation
Prior to examining the game loop, let's create the supporting classes that will form the core of the engine. We will first create an interface that will encapsulate the game logic. By doing this we will make our game engine reusable across the different chapters. This interface will have methods to initialize the game assets (`init`), handle user input (`input`), update game state (`update`) and clean up the resources (`cleanup`).
```java
package org.lwjglb.engine;
import org.lwjglb.engine.graph.Render;
import org.lwjglb.engine.scene.Scene;
public interface IAppLogic {
void cleanup();
void init(Window window, Scene scene, Render render);
void input(Window window, Scene scene, long diffTimeMillis);
void update(Window window, Scene scene, long diffTimeMillis);
}
```
As you can see, there are some class instances which we have not defined yet (`Window`, `Scene` and `Render`) and a parameter named `diffTimeMillis` which holds the milliseconds elapsed between invocations of those methods.
Let's start with the `Window` class. We will encapsulate in this class all the invocations to the GLFW library to create and manage a window, and its structure is like this:
```java
package org.lwjglb.engine;
import org.lwjgl.glfw.GLFWVidMode;
import org.lwjgl.system.MemoryUtil;
import org.tinylog.Logger;
import java.util.concurrent.Callable;
import static org.lwjgl.glfw.Callbacks.glfwFreeCallbacks;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.system.MemoryUtil.NULL;
public class Window {
private final long windowHandle;
private int height;
private Callable resizeFunc;
private int width;
...
...
public static class WindowOptions {
public boolean compatibleProfile;
public int fps;
public int height;
public int ups = Engine.TARGET_UPS;
public int width;
}
}
```
As you can see, it defines some attributes to store the window handle, its width and height and a callback function which will be invoked any time the window is resized. It also defines an inner class to set up some options to control window creation:
* `compatibleProfile`: This controls whether we want to use old functions from previous versions (deprecated functions) or not.
* `fps`: Defines the target frames per second (FPS). If it has a value less than or equal to zero it means that we do not want to set a fixed target FPS but instead use the monitor refresh rate as the target FPS. In order to do so, we will use v-sync (that is the number of screen updates to wait from the time `glfwSwapBuffers` was called before swapping the buffers and returning).
* `height`: Desired window height.
* `width`: Desired window width.
* `ups`: Defines the target number of updates per second (initialized to a default value).
Let's examine the constructor of the `Window` class:
```java
public class Window {
...
public Window(String title, WindowOptions opts, Callable resizeFunc) {
this.resizeFunc = resizeFunc;
if (!glfwInit()) {
throw new IllegalStateException("Unable to initialize GLFW");
}
glfwDefaultWindowHints();
glfwWindowHint(GLFW_VISIBLE, GL_FALSE);
glfwWindowHint(GLFW_RESIZABLE, GL_TRUE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
if (opts.compatibleProfile) {
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
} else {
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
}
if (opts.width > 0 && opts.height > 0) {
this.width = opts.width;
this.height = opts.height;
} else {
glfwWindowHint(GLFW_MAXIMIZED, GLFW_TRUE);
GLFWVidMode vidMode = glfwGetVideoMode(glfwGetPrimaryMonitor());
width = vidMode.width();
height = vidMode.height();
}
windowHandle = glfwCreateWindow(width, height, title, NULL, NULL);
if (windowHandle == NULL) {
throw new RuntimeException("Failed to create the GLFW window");
}
glfwSetFramebufferSizeCallback(windowHandle, (window, w, h) -> resized(w, h));
glfwSetErrorCallback((int errorCode, long msgPtr) ->
Logger.error("Error code [{}], msg [{}]", errorCode, MemoryUtil.memUTF8(msgPtr))
);
glfwSetKeyCallback(windowHandle, (window, key, scancode, action, mods) -> {
keyCallBack(key, action);
});
glfwMakeContextCurrent(windowHandle);
if (opts.fps > 0) {
glfwSwapInterval(0);
} else {
glfwSwapInterval(1);
}
glfwShowWindow(windowHandle);
int[] arrWidth = new int[1];
int[] arrHeight = new int[1];
glfwGetFramebufferSize(windowHandle, arrWidth, arrHeight);
width = arrWidth[0];
height = arrHeight[0];
}
...
public void keyCallBack(int key, int action) {
if (key == GLFW_KEY_ESCAPE && action == GLFW_RELEASE) {
glfwSetWindowShouldClose(windowHandle, true); // We will detect this in the rendering loop
}
}
...
}
```
We start by setting some window hints to hide the window and make it resizable. After that, we set the OpenGL version and select either the core or the compatibility profile depending on the window options. Then, if we have not set a preferred width and height, we get the primary monitor dimensions to set the window size. We then create the window by calling `glfwCreateWindow` and set some callbacks for when the window is resized or to detect window termination (when the `ESC` key is pressed). If we want to manually set a target FPS, we invoke `glfwSwapInterval(0)` to disable v-sync and, finally, we show the window and get the framebuffer size (the portion of the window used for rendering).
The rest of the methods of the `Window` class are for cleaning up resources, the resize callback, some getters for window size and methods to poll events and to check if the window should be closed.
```java
public class Window {
...
public void cleanup() {
glfwFreeCallbacks(windowHandle);
glfwDestroyWindow(windowHandle);
glfwTerminate();
GLFWErrorCallback callback = glfwSetErrorCallback(null);
if (callback != null) {
callback.free();
}
}
public int getHeight() {
return height;
}
public int getWidth() {
return width;
}
public long getWindowHandle() {
return windowHandle;
}
public boolean isKeyPressed(int keyCode) {
return glfwGetKey(windowHandle, keyCode) == GLFW_PRESS;
}
public void pollEvents() {
glfwPollEvents();
}
protected void resized(int width, int height) {
this.width = width;
this.height = height;
try {
resizeFunc.call();
} catch (Exception excp) {
Logger.error("Error calling resize callback", excp);
}
}
public void update() {
glfwSwapBuffers(windowHandle);
}
public boolean windowShouldClose() {
return glfwWindowShouldClose(windowHandle);
}
...
}
```
The `Scene` class will hold the elements of the 3D scene (models, etc.) in the future. For now it is just an empty placeholder:
```java
package org.lwjglb.engine.scene;
public class Scene {
public Scene() {
}
public void cleanup() {
// Nothing to be done here yet
}
}
```
The `Render` class is just another placeholder that clears the screen:
```java
package org.lwjglb.engine.graph;
import org.lwjgl.opengl.GL;
import org.lwjglb.engine.Window;
import org.lwjglb.engine.scene.Scene;
import static org.lwjgl.opengl.GL11.*;
public class Render {
public Render() {
GL.createCapabilities();
}
public void cleanup() {
// Nothing to be done here yet
}
public void render(Window window, Scene scene) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
}
```
Now we can implement the game loop in a new class named `Engine` which starts like this:
```java
package org.lwjglb.engine;
import org.lwjglb.engine.graph.Render;
import org.lwjglb.engine.scene.Scene;
public class Engine {
public static final int TARGET_UPS = 30;
private final IAppLogic appLogic;
private final Window window;
private Render render;
private boolean running;
private Scene scene;
private int targetFps;
private int targetUps;
public Engine(String windowTitle, Window.WindowOptions opts, IAppLogic appLogic) {
window = new Window(windowTitle, opts, () -> {
resize();
return null;
});
targetFps = opts.fps;
targetUps = opts.ups;
this.appLogic = appLogic;
render = new Render();
scene = new Scene();
appLogic.init(window, scene, render);
running = true;
}
private void cleanup() {
appLogic.cleanup();
render.cleanup();
scene.cleanup();
window.cleanup();
}
private void resize() {
// Nothing to be done yet
}
...
}
```
The `Engine` class receives in its constructor the title of the window, the window options and a reference to the implementation of the `IAppLogic` interface. In the constructor it creates instances of the `Window`, `Render` and `Scene` classes. The `cleanup` method just invokes the `cleanup` methods of those other classes. The game loop is defined in the `run` method, which is defined like this:
```java
public class Engine {
...
private void run() {
long initialTime = System.currentTimeMillis();
float timeU = 1000.0f / targetUps;
float timeR = targetFps > 0 ? 1000.0f / targetFps : 0;
float deltaUpdate = 0;
float deltaFps = 0;
long updateTime = initialTime;
while (running && !window.windowShouldClose()) {
window.pollEvents();
long now = System.currentTimeMillis();
deltaUpdate += (now - initialTime) / timeU;
deltaFps += (now - initialTime) / timeR;
if (targetFps <= 0 || deltaFps >= 1) {
appLogic.input(window, scene, now - initialTime);
}
if (deltaUpdate >= 1) {
long diffTimeMillis = now - updateTime;
appLogic.update(window, scene, diffTimeMillis);
updateTime = now;
deltaUpdate--;
}
if (targetFps <= 0 || deltaFps >= 1) {
render.render(window, scene);
deltaFps--;
window.update();
}
initialTime = now;
}
cleanup();
}
...
}
```
The loop starts by calculating two parameters: `timeU` and `timeR`, which control the maximum elapsed time between updates (`timeU`) and render calls (`timeR`) in milliseconds. When those periods are consumed we need either to update the game state or to render. In the latter case, if the target FPS is set to 0 we will rely on the v-sync refresh rate, so we just set the value to `0`. The loop starts by polling the events over the window; after that, we get the current time in milliseconds and the time elapsed since the last update and render calls. If we have passed the maximum elapsed time for rendering (or rely on v-sync), we process user input by calling `appLogic.input`. If we have surpassed the maximum update elapsed time, we update the game state by calling `appLogic.update`. If we have passed the maximum elapsed time for rendering (or rely on v-sync), we trigger the render by calling `render.render`.
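As a worked example, assuming the default target of 30 updates per second and a hypothetical target of 60 FPS, the timing math above would behave like this:
```java
float timeU = 1000.0f / 30; // ~33.3 ms must elapse between game state updates
float timeR = 1000.0f / 60; // ~16.7 ms must elapse between render calls

// If 40 ms have passed since the previous loop iteration:
// deltaUpdate += 40 / 33.3 -> 1.2 : one update is performed and 0.2 is carried over
// deltaFps    += 40 / 16.7 -> 2.4 : one frame is rendered and 1.4 is carried over
```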
When the loop exits we call the `cleanup` method to free resources.
Finally the `Engine` is completed like this:
```java
public class Engine {
...
public void start() {
running = true;
run();
}
public void stop() {
running = false;
}
}
```
A little note on threading: GLFW needs to be initialized from the main thread, and polling of events should also be done in that thread. Therefore, instead of creating a separate thread for the game loop, which is what you would commonly see in games, we will execute everything from the main thread. This is why we do not create a new `Thread` in the `start` method.
Finally, we just simplify the `Main` class to this:
```java
package org.lwjglb.game;
import org.lwjglb.engine.*;
import org.lwjglb.engine.graph.Render;
import org.lwjglb.engine.scene.Scene;
public class Main implements IAppLogic {
public static void main(String[] args) {
Main main = new Main();
Engine gameEng = new Engine("chapter-02", new Window.WindowOptions(), main);
gameEng.start();
}
@Override
public void cleanup() {
// Nothing to be done yet
}
@Override
public void init(Window window, Scene scene, Render render) {
// Nothing to be done yet
}
@Override
public void input(Window window, Scene scene, long diffTimeMillis) {
// Nothing to be done yet
}
@Override
public void update(Window window, Scene scene, long diffTimeMillis) {
// Nothing to be done yet
}
}
```
We just create the `Engine` instance and start it up in the `main` method. The `Main` class also implements the `IAppLogic` interface which, for now, is just empty.
[Next chapter](../chapter-03/chapter-03.md)
================================================
FILE: chapter-03/chapter-03.md
================================================
# Chapter 03 - Our first triangle
In this chapter we will render our first triangle to the screen and introduce the basis of a programmable graphics pipeline. But, prior to that, we will first explain the basis of coordinate systems, trying to introduce some fundamental mathematical concepts in a simple way to support the techniques and topics that we will address in subsequent chapters. We will assume some simplifications which may sacrifice preciseness for the sake of legibility.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-03).
## A brief about coordinates
We locate objects in space by specifying their coordinates. Think about a map. You specify a point on a map by stating its latitude and longitude. With just a pair of numbers a point is precisely identified. That pair of numbers are the point's coordinates (things are a little bit more complex in reality, since a map is a projection of a non perfect ellipsoid, the earth, so more data is needed, but it’s a good analogy).
A coordinate system is a system which employs one or more numbers, that is, one or more components to uniquely specify the position of a point. There are different coordinate systems (Cartesian, polar, etc.) and you can transform coordinates from one system to another. We will use the Cartesian coordinate system.
In the Cartesian coordinate system, for two dimensions, a coordinate is defined by two numbers that measure the signed distance to two perpendicular axes, x and y.

Continuing with the map analogy, coordinate systems define an origin. For geographic coordinates the origin is set to the point where the equator and the zero meridian cross. Depending on where we set the origin, the coordinates for a specific point are different. A coordinate system may also define the orientation of its axes. In the previous figure, the x coordinate increases as we move to the right and the y coordinate increases as we move upwards. But we could also define an alternative Cartesian coordinate system with a different axis orientation in which we would obtain different coordinates.

As you can see we need to define some arbitrary parameters, such as the origin and the axis orientation in order to give the appropriate meaning to the pair of numbers that constitute a coordinate. We will refer to that coordinate system with the set of arbitrary parameters as the coordinate space. In order to work with a set of coordinates we must use the same coordinate space. The good news is that we can transform coordinates from one space to another just by performing translations and rotations.
If we are dealing with 3D coordinates we need an additional axis, the z axis. 3D coordinates will be formed by a set of three numbers (x, y, z).

As in 2D Cartesian coordinate spaces we can change the orientation of the axes in 3D coordinate spaces as long as the axes are perpendicular. The next figure shows another 3D coordinate space.

3D coordinate systems can be classified into two types: left handed and right handed. How do you know which type a system is? Take your hand and form an “L” between your thumb and your index finger; the middle finger should point in a direction perpendicular to the other two. The thumb should point in the direction where the x axis increases, the index finger where the y axis increases and the middle finger where the z axis increases. If you are able to do that with your left hand, then it's left handed; if you need to use your right hand, it's right handed.

2D coordinate spaces are all equivalent since by applying rotation we can transform from one to another. 3D coordinate spaces, on the contrary, are not all equal. You can only transform from one to another by applying rotation if they both have the same handedness, that is, if both are left handed or right handed.
Now that we have defined some basic topics let’s talk about some commonly used terms when dealing with 3D graphics. When we explain in later chapters how to render 3D models we will see that we use different 3D coordinate spaces; that is because each of those coordinate spaces has a context, a purpose. A set of coordinates is meaningless unless it refers to something. When you examine these coordinates (40.438031, -3.676626) they may say something to you or not. But if I say that they are geographic coordinates (latitude and longitude) you will see that they are the coordinates of a place in Madrid.
When we load 3D objects we will get a set of 3D coordinates. Those coordinates are expressed in a 3D coordinate space which is called object coordinate space. When the graphics designers are creating those 3D models they don’t know anything about the 3D scene that the model will be displayed in, so they can only define the coordinates using a coordinate space that is only relevant for the model.
When we draw a 3D scene, all of our 3D objects will be relative to the so called world space coordinate space. We will need to transform from 3D object space to world space coordinates. Some objects will need to be rotated, stretched or enlarged and translated in order to be displayed properly in the 3D scene.
We will also need to restrict the range of the 3D space that is shown, which is like moving a camera through our 3D space. Then we will need to transform world space coordinates to camera or view space coordinates. Finally these coordinates need to be transformed to screen coordinates, which are 2D, so we need to project 3D view coordinates to a 2D screen coordinate space.
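To make this chain of transformations a bit more concrete, here is a hedged sketch using the [JOML](https://github.com/JOML-CI/JOML) math library (the library used by the book samples in later chapters). The matrices and values here are purely illustrative; we will build them properly in the following chapters.
```java
import org.joml.Matrix4f;
import org.joml.Vector4f;
...
// Illustrative matrices for each coordinate space transition
Matrix4f modelMatrix = new Matrix4f().translate(0.0f, 0.0f, -5.0f); // object space -> world space
Matrix4f viewMatrix  = new Matrix4f().lookAt(
        0.0f, 2.0f, 3.0f,    // camera position
        0.0f, 0.0f, -5.0f,   // point the camera looks at
        0.0f, 1.0f, 0.0f);   // "up" direction: world space -> view space
Matrix4f projMatrix  = new Matrix4f().perspective(
        (float) Math.toRadians(60.0f), 16.0f / 9.0f, 0.01f, 1000.0f); // view space -> 2D projection

// A vertex given in object space, carried through the whole chain
Vector4f clipPos = new Vector4f(1.0f, 0.0f, 0.0f, 1.0f)
        .mul(modelMatrix)   // now in world space
        .mul(viewMatrix)    // now in view (camera) space
        .mul(projMatrix);   // now in clip space, which OpenGL maps to the 2D screen
```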
The following picture shows OpenGL coordinates, (the z axis is perpendicular to the screen) and coordinates are between -1 and +1.

## Your first triangle
Now we can start learning the processes that take place while rendering a scene using OpenGL. If you are used to older versions of OpenGL, that is, the fixed-function pipeline, you may end this chapter wondering why it needs to be so complex. You may end up thinking that drawing a simple shape to the screen should not require so many concepts and lines of code. Let me give you a piece of advice if you think that way: it is actually simpler and much more flexible. You only need to give it a chance. Modern OpenGL lets you think about one problem at a time and it lets you organize your code and processes in a more logical way.
The sequence of steps that ends up drawing a 3D representation into your 2D screen is called the graphics pipeline. First versions of OpenGL employed a model which was called fixed-function pipeline. This model employed a set of steps in the rendering process which defined a fixed set of operations. The programmer was constrained to the set of functions available for each step and could set some parameters to tweak it. Thus, the effects and operations that could be applied were limited by the API itself (for instance, “set fog” or “add light”, but the implementation of those functions were fixed and could not be changed).
The graphics pipeline was composed of these steps:

OpenGL 2.0 introduced the concept of programmable pipeline. In this model, the different steps that compose the graphics pipeline can be controlled or programmed by using a set of specific programs called shaders. The following picture depicts a simplified version of the OpenGL programmable pipeline:

The rendering starts by taking as its input a list of vertices in the form of Vertex Buffers. But, what is a vertex? A vertex is a data structure that can be used as an input to render a scene. For now you can think of it as a structure that describes a point in 2D or 3D space. And how do you describe a point in a 3D space? By specifying its x, y and z coordinates. And what is a Vertex Buffer? A Vertex Buffer is another data structure that packs all the vertices that need to be rendered, by using vertex arrays, and makes that information available to the shaders in the graphics pipeline.
Those vertices are processed by the vertex shader whose main purpose is to calculate the projected position of each vertex into screen space. This shader can also generate other outputs related to color or texture, but its main goal is to project the vertices into screen space, that is, to generate dots.
The geometry processing stage connects the vertices that are transformed by the vertex shader to form triangles. It does so by taking into consideration the order in which the vertices were stored and grouping them using different models. Why triangles? A triangle is like the basic work unit for graphic cards. It’s a simple geometric shape that can be combined and transformed to construct complex 3D scenes. This stage can also use a specific shader to group the vertices.
The rasterization stage takes the triangles generated in the previous stages, clips them and transforms them into pixel-sized fragments. Those fragments are used during the fragment processing stage by the fragment shader to generate pixels assigning them the final color that gets written into the framebuffer. The framebuffer is the final result of the graphics pipeline. It holds the value of each pixel that should be drawn to the screen.
Keep in mind that 3D cards are designed to parallelize all the operations described above. The input data is processed in parallel in order to generate the final scene.
So let's start writing our first shader program. Shaders are written by using the GLSL language (OpenGL Shading Language) which is based on ANSI C. First we will create a file named “`scene.vert`” (the extension is for Vertex Shader) under the `resources\shaders` directory with the following content:
```glsl
#version 330
layout (location=0) in vec3 inPosition;
void main()
{
gl_Position = vec4(inPosition, 1.0);
}
```
The first line is a directive that states the version of the GLSL language we are using. The following table relates the GLSL version, the OpenGL that matches that version and the directive to use (Wikipedia: [https://en.wikipedia.org/wiki/OpenGL\_Shading\_Language#Versions](https://en.wikipedia.org/wiki/OpenGL_Shading_Language#Versions)).
| GLSL Version | OpenGL Version | Shader Preprocessor |
| ----------- | -------------- | ------------------- |
| 1.10.59 | 2.0 | #version 110 |
| 1.20.8 | 2.1 | #version 120 |
| 1.30.10 | 3.0 | #version 130 |
| 1.40.08 | 3.1 | #version 140 |
| 1.50.11 | 3.2 | #version 150 |
| 3.30.6 | 3.3 | #version 330 |
| 4.00.9 | 4.0 | #version 400 |
| 4.10.6 | 4.1 | #version 410 |
| 4.20.11 | 4.2 | #version 420 |
| 4.30.8 | 4.3 | #version 430 |
| 4.40 | 4.4 | #version 440 |
| 4.50 | 4.5 | #version 450 |
The second line specifies the input format for this shader. Data in an OpenGL buffer can be whatever we want, that is, the language does not force you to pass a specific data structure with a predefined semantic. From the point of view of the shader it is expecting to receive a buffer with data. It can be a position, a position with some additional information or whatever else we want. In this example, from the vertex shader's perspective, it is just receiving an array of floats. When we fill the buffer, we define the buffer chunks that are going to be processed by the shader.
So, first we need to get that chunk into something that’s meaningful to us. In this case we are saying that, starting from the position 0, we are expecting to receive a vector composed of 3 attributes (x, y, z).
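Just as a forward reference, on the Java side this `location=0` layout is matched when we define the vertex attributes for our buffer (we will see these exact calls later in this chapter, in the `Mesh` class):
```java
// Attribute list 0 holds 3 floats (x, y, z) per vertex, tightly packed
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
```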
The shader has a main block like any other C program, which in this case is very simple. It is just returning the received position in the output variable `gl_Position` without applying any transformation. You may now be wondering why the vector of three attributes has been converted into a vector of four attributes (vec4). This is because `gl_Position` is expecting the result in vec4 format since it uses homogeneous coordinates. That is, it’s expecting something in the form (x, y, z, w), where w represents an extra dimension. Why add another dimension? In later chapters you will see that most of the operations we need to do are based on vectors and matrices. Some of those operations cannot be combined if we do not have that extra dimension. For instance we could not combine rotation and translation operations. (If you want to learn more about this, the extra dimension allows us to combine affine and linear transformations. You can learn more by reading the excellent book “3D Math Primer for Graphics and Game Development", by Fletcher Dunn and Ian Parberry).
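A small example: with that extra w component, a translation by (tx, ty, tz) becomes a plain matrix product, which can then be combined (multiplied) with rotation or scaling matrices:

$$
\begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} x + t_x \\ y + t_y \\ z + t_z \\ 1 \end{bmatrix}
$$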
Let us now have a look at our first fragment shader. We will create a file named “`scene.frag`” (the extension is for Fragment Shader) under the `resources\shaders` directory with the following content:
```glsl
#version 330
out vec4 fragColor;
void main()
{
fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```
The structure is quite similar to our vertex shader. In this case we will set a fixed color for each fragment. The output variable is defined in the second line and set as a vec4 fragColor.
Now that we have our shaders created, how do we use them? We will need to create a new class named `ShaderProgram` which basically receives the source code of the different shader modules (vertex, fragment), compiles them and links them together to generate a shader program. This is the sequence of steps we need to follow:
1. Create an OpenGL program.
2. Load the shader program modules (vertex or fragment shaders).
3. For each shader, create a new shader module and specify its type (vertex, fragment).
4. Compile the shader.
5. Attach the shader to the program.
6. Link the program.
At the end the shader program will be loaded in the GPU and we can use it by referencing an identifier, the program identifier.
```java
package org.lwjglb.engine.graph;
import org.lwjgl.opengl.GL30;
import org.lwjglb.engine.Utils;
import java.util.*;
import static org.lwjgl.opengl.GL30.*;
public class ShaderProgram {
private final int programId;
public ShaderProgram(List shaderModuleDataList) {
programId = glCreateProgram();
if (programId == 0) {
throw new RuntimeException("Could not create Shader");
}
List shaderModules = new ArrayList<>();
shaderModuleDataList.forEach(s -> shaderModules.add(createShader(Utils.readFile(s.shaderFile), s.shaderType)));
link(shaderModules);
}
public void bind() {
glUseProgram(programId);
}
public void cleanup() {
unbind();
if (programId != 0) {
glDeleteProgram(programId);
}
}
protected int createShader(String shaderCode, int shaderType) {
int shaderId = glCreateShader(shaderType);
if (shaderId == 0) {
throw new RuntimeException("Error creating shader. Type: " + shaderType);
}
glShaderSource(shaderId, shaderCode);
glCompileShader(shaderId);
if (glGetShaderi(shaderId, GL_COMPILE_STATUS) == 0) {
throw new RuntimeException("Error compiling Shader code: " + glGetShaderInfoLog(shaderId, 1024));
}
glAttachShader(programId, shaderId);
return shaderId;
}
public int getProgramId() {
return programId;
}
private void link(List shaderModules) {
glLinkProgram(programId);
if (glGetProgrami(programId, GL_LINK_STATUS) == 0) {
throw new RuntimeException("Error linking Shader code: " + glGetProgramInfoLog(programId, 1024));
}
shaderModules.forEach(s -> glDetachShader(programId, s));
shaderModules.forEach(GL30::glDeleteShader);
}
public void unbind() {
glUseProgram(0);
}
public void validate() {
glValidateProgram(programId);
if (glGetProgrami(programId, GL_VALIDATE_STATUS) == 0) {
throw new RuntimeException("Error validating Shader code: " + glGetProgramInfoLog(programId, 1024));
}
}
public record ShaderModuleData(String shaderFile, int shaderType) {
}
}
```
The constructor of the `ShaderProgram` receives a list of `ShaderModuleData` instances which define the shader module type (vertex, fragment, etc.) and the path to the source file which contains the shader module code. The constructor starts by creating a new OpenGL shader program, compiling first each shader module (by invoking the `createShader` method) and finally linking them all together (by invoking the `link` method). Once the shader program has been linked, the compiled vertex and fragment shaders can be freed up (by calling `glDetachShader`).
The `validate` method, basically calls the `glValidateProgram` function. This function is used mainly for debugging purposes, and it should not be used when your game reaches production stage. This method tries to validate if the shader is correct given the **current OpenGL state**. This means, that validation may fail in some cases even if the shader is correct, due to the fact that the current state is not complete enough to run the shader (some data may have not been uploaded yet). You should call it when all required input and output data is properly bound (better just before performing any drawing call).
`ShaderProgram` also provides methods to use this program for rendering, that is binding it, another one for unbinding (when we are done with it) and finally, a cleanup method to free all the resources when they are no longer needed.
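As a hedged usage sketch (the actual rendering code comes later in this chapter), creating and using a `ShaderProgram` with the two shader files defined above could look like this, assuming the static imports used throughout this chapter (`org.lwjgl.opengl.GL30.*`):
```java
List<ShaderProgram.ShaderModuleData> shaderModuleDataList = new ArrayList<>();
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/scene.vert", GL_VERTEX_SHADER));
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/scene.frag", GL_FRAGMENT_SHADER));
ShaderProgram shaderProgram = new ShaderProgram(shaderModuleDataList);

shaderProgram.bind();
// ... bind the VAOs to be drawn and issue the drawing calls here ...
shaderProgram.unbind();

// When the shader program is no longer needed
shaderProgram.cleanup();
```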
We will create a utility class named `Utils`, which in this case defines a public method to load a file into a `String`:
```java
package org.lwjglb.engine;
import java.io.IOException;
import java.nio.file.*;
public class Utils {
private Utils() {
// Utility class
}
public static String readFile(String filePath) {
String str;
try {
str = new String(Files.readAllBytes(Paths.get(filePath)));
} catch (IOException excp) {
throw new RuntimeException("Error reading file [" + filePath + "]", excp);
}
return str;
}
}
```
We will also need a new class, named `Scene`, which will hold the elements of our 3D scene, such as models, lights, etc. For now it just stores the meshes (sets of vertices) of the models we want to draw. This is the source code for that class:
```java
package org.lwjglb.engine.scene;
import org.lwjglb.engine.graph.Mesh;
import java.util.*;
public class Scene {
private Map meshMap;
public Scene() {
meshMap = new HashMap<>();
}
public void addMesh(String meshId, Mesh mesh) {
meshMap.put(meshId, mesh);
}
public void cleanup() {
meshMap.values().forEach(Mesh::cleanup);
}
public Map getMeshMap() {
return meshMap;
}
}
```
As you can see, it just stores `Mesh` instances in a `Map`, which is later on used for drawing. But what is a `Mesh`? It is basically our way to load vertex data into the GPU so it can be used for rendering. Prior to describing the `Mesh` class in detail, let's see how it can be used in our `Main` class:
```java
public class Main implements IAppLogic {
public static void main(String[] args) {
Main main = new Main();
Engine gameEng = new Engine("chapter-03", new Window.WindowOptions(), main);
gameEng.start();
}
...
@Override
public void init(Window window, Scene scene, Render render) {
float[] positions = new float[]{
0.0f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f
};
Mesh mesh = new Mesh(positions, 3);
scene.addMesh("triangle", mesh);
}
...
}
```
In the `init` method, we define an array of floats that contains the coordinates of the vertices of a triangle. As you can see there’s no structure in that array, we just dump all the coordinates there. As it is right now, OpenGL cannot know the structure of that data. It’s just a sequence of floats. The following picture depicts the triangle in our coordinate system.

The class that defines the structure of that data and loads it in the GPU is the `Mesh` class which is defined like this:
```java
package org.lwjglb.engine.graph;
import org.lwjgl.opengl.GL30;
import org.lwjgl.system.*;
import java.nio.FloatBuffer;
import java.util.*;
import static org.lwjgl.opengl.GL30.*;
public class Mesh {
private int numVertices;
private int vaoId;
private List vboIdList;
public Mesh(float[] positions, int numVertices) {
this.numVertices = numVertices;
vboIdList = new ArrayList<>();
vaoId = glGenVertexArrays();
glBindVertexArray(vaoId);
// Positions VBO
int vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer positionsBuffer = MemoryUtil.memCallocFloat(positions.length);
positionsBuffer.put(0, positions);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, positionsBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
MemoryUtil.memFree(positionsBuffer);
}
public void cleanup() {
vboIdList.forEach(GL30::glDeleteBuffers);
glDeleteVertexArrays(vaoId);
}
public int getNumVertices() {
return numVertices;
}
public final int getVaoId() {
return vaoId;
}
}
```
We are now introducing two important concepts, Vertex Array Objects (VAOs) and Vertex Buffer Objects (VBOs). If you get lost in the code above, remember that in the end what we are doing is sending the data that models the objects we want to draw to the graphics card memory. When we store it, we get an identifier that we use later to refer to it while drawing.
Let us first start with Vertex Buffer Objects (VBOs). A VBO is just a memory buffer stored in the graphics card memory that stores vertices. This is where we will transfer our array of floats that models a triangle. As we said before, OpenGL does not know anything about our data structure. In fact, it can hold not just coordinates but other information, such as texture coordinates, color, etc. A Vertex Array Object (VAO) is an object that contains one or more VBOs, which are usually called attribute lists. Each attribute list can hold one type of data: position, color, texture coordinates, etc. You are free to store whatever you want in each slot.
A VAO is like a wrapper that groups a set of definitions for the data that is going to be stored in the graphics card. When we create a VAO we get an identifier. We use that identifier to render it and the elements it contains using the definitions we specified during its creation.
So let us review the code above. The first thing that we do is to create the VAO (by calling the `glGenVertexArrays` function) and bind it (by calling the `glBindVertexArray` function). After that, we need to create the VBO (by calling the `glGenBuffers`) and put the data into it. In order to do so, we store our array of floats into a `FloatBuffer`. This is mainly due to the fact that we must interface with the OpenGL library, which is C-based, so we must transform our array of floats into something that can be managed by the library.
We use the `MemoryUtil` class to create the buffer in off-heap memory so that it is accessible by the OpenGL library, and then we store the data in it (with the `put` method). Remember that Java objects are allocated in a space called the heap, a large chunk of memory reserved in the JVM's process memory. Memory stored in the heap cannot be accessed by native code (JNI, the mechanism that allows calling native code from Java, does not allow that). The only way of sharing memory data between Java and native code is by allocating the memory directly (off-heap) in Java.
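For example, this is the general pattern when allocating off-heap memory with `MemoryUtil` (a standalone sketch, not tied to any class of the engine):

```java
import org.lwjgl.system.MemoryUtil;

import java.nio.FloatBuffer;

public class OffHeapExample {
    public static void main(String[] args) {
        // Allocate a zero-initialized buffer outside the Java heap so native code (OpenGL) can read it
        FloatBuffer buffer = MemoryUtil.memCallocFloat(3);
        buffer.put(0, new float[]{0.0f, 0.5f, 0.0f});
        // ... hand the buffer over to OpenGL (for example with glBufferData) ...
        // Off-heap memory is not garbage collected, so we must free it ourselves
        MemoryUtil.memFree(buffer);
    }
}
```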
If you come from previous versions of LWJGL it's important to stress a few topics. You may have noticed that we do not use the utility class `BufferUtils` to create the buffers. Instead we use the `MemoryUtil` class. This is due to the fact that `BufferUtils` was not very efficient, and has been maintained only for backwards compatibility. Instead, LWJGL 3 proposes two methods for buffer management:
* Auto-managed buffers, that is, buffers that are automatically collected by the Garbage Collector. These buffers are mainly used for short-lived operations, or for data that is transferred to the GPU and does not need to be present in the process memory. This is achieved by using the `org.lwjgl.system.MemoryStack` class.
* Manually managed buffers. In this case we need to carefully free them once we are finished. These buffers are intended for long time operations or for large amounts of data. This is achieved by using the `MemoryUtil` class.
You can consult the details here: [https://blog.lwjgl.org/memory-management-in-lwjgl-3/](https://blog.lwjgl.org/memory-management-in-lwjgl-3/).
In this specific case, the positions data is short-lived: once we have loaded it, we are done with that buffer. You may wonder, why then do we not use the `org.lwjgl.system.MemoryStack` class? The reason to use the second approach (the `MemoryUtil` class) is that LWJGL's stack is limited. If you end up loading large models you may consume all the available space and get an "Out of stack space" exception. The drawback of this approach is that we need to manually free the memory once we are done with it by calling `MemoryUtil.memFree`.
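To make the difference between the two approaches concrete, here is a small standalone sketch (the sizes are arbitrary):

```java
import org.lwjgl.system.MemoryStack;
import org.lwjgl.system.MemoryUtil;

import java.nio.FloatBuffer;

public class BufferApproaches {
    public static void main(String[] args) {
        // Stack-allocated: freed automatically when the try-with-resources block ends.
        // Fine for small, short-lived data, but the stack size is limited.
        try (MemoryStack stack = MemoryStack.stackPush()) {
            FloatBuffer small = stack.mallocFloat(3);
            small.put(0.0f).put(0.5f).put(0.0f).flip();
            // ... use the buffer here ...
        }

        // Manually managed: suitable for large amounts of data, but we must free it ourselves.
        FloatBuffer large = MemoryUtil.memCallocFloat(1_000_000);
        // ... fill and use the buffer ...
        MemoryUtil.memFree(large);
    }
}
```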
After that, we bind the VBO (by calling the `glBindBuffer`) and load the data into it (by calling the `glBufferData` function). Now comes the most important part. We need to define the structure of our data and store it in one of the attribute lists of the VAO. This is done with the following line.
```java
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
```
The parameters are:
* index: Specifies the location where the shader expects this data.
* size: Specifies the number of components per vertex attribute (from 1 to 4). In this case, we are passing 3D coordinates, so it should be 3.
* type: Specifies the type of each component in the array, in this case a float.
* normalized: Specifies if the values should be normalized or not.
* stride: Specifies the byte offset between consecutive generic vertex attributes (see the sketch after this list; we will come back to it later).
* offset: Specifies an offset to the first component in the buffer.
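To make stride and offset a bit more concrete, here is a hypothetical sketch (not the layout used in this chapter, where each attribute has its own VBO): if positions and colors were interleaved in a single VBO, the attribute definitions could look like this, assuming the VAO and VBO are already bound and the usual static GL imports are in place:

```java
// Hypothetical interleaved layout: x, y, z, r, g, b per vertex (6 floats = 24 bytes per vertex)
int stride = 6 * Float.BYTES;
// Positions start at byte 0 of every vertex
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, stride, 0);
// Colors start right after the three position floats, at byte offset 12
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, false, stride, 3 * Float.BYTES);
```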
After we are finished with our VBO and VAO we can unbind them (bind them to 0).
Since the buffer was allocated with `MemoryUtil`, it is not freed automatically; once the data has been uploaded to the GPU we free it manually by calling `MemoryUtil.memFree`.
The `Mesh` class is completed by the `cleanup` method, which basically frees the VAO and the VBOs, and some getter methods to get the number of vertices of the mesh and the id of the VAO. When rendering these elements, we will use the VAO id in the drawing operations.
Now let's put all of this to work. We will create a new class named `SceneRender`, which will render all the models in our scene and is defined like this:
```java
package org.lwjglb.engine.graph;
import org.lwjglb.engine.Window;
import org.lwjglb.engine.scene.Scene;
import java.util.*;
import static org.lwjgl.opengl.GL30.*;
public class SceneRender {
private ShaderProgram shaderProgram;
public SceneRender() {
        List<ShaderProgram.ShaderModuleData> shaderModuleDataList = new ArrayList<>();
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/scene.vert", GL_VERTEX_SHADER));
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/scene.frag", GL_FRAGMENT_SHADER));
shaderProgram = new ShaderProgram(shaderModuleDataList);
}
public void cleanup() {
shaderProgram.cleanup();
}
public void render(Scene scene) {
shaderProgram.bind();
scene.getMeshMap().values().forEach(mesh -> {
glBindVertexArray(mesh.getVaoId());
glDrawArrays(GL_TRIANGLES, 0, mesh.getNumVertices());
}
);
glBindVertexArray(0);
shaderProgram.unbind();
}
}
```
As you can see, in the constructor we create two `ShaderModuleData` instances (one for the vertex shader and the other one for the fragment shader) and create a shader program. We define a `cleanup` method to free the resources (in this case the shader program) and a `render` method which is the one that performs the drawing. This method starts by using the shader program, calling its `bind` method. Then, we iterate over the meshes stored in the `Scene` instance, bind them (by calling the `glBindVertexArray` function) and draw the vertices of the VAO (by calling the `glDrawArrays` function). Finally, we unbind the VAO and the shader program to restore the state.
Finally, we just need to update the `Render` class to use the `SceneRender` class.
```java
package org.lwjglb.engine.graph;
import org.lwjgl.opengl.GL;
import org.lwjglb.engine.Window;
import org.lwjglb.engine.scene.Scene;
public class Render {
private SceneRender sceneRender;
public Render() {
GL.createCapabilities();
sceneRender = new SceneRender();
}
public void cleanup() {
sceneRender.cleanup();
}
public void render(Window window, Scene scene) {
...
glViewport(0, 0, window.getWidth(), window.getHeight());
sceneRender.render(scene);
}
}
```
The `render` method starts by clearing the framebuffer and setting the viewport (by calling the `glViewport` function) to the window dimensions. That is, we set the rendering area to those dimensions (this does not strictly need to be done every frame, but doing it this way lets us adapt to window resizing). After that we just invoke the `render` method on the `SceneRender` instance. And, that’s all! If you followed the steps carefully you will see something like this:

Our first triangle! You may think that this will not make it into the top ten game list, and you will be totally right. You may also think that this has been too much work for drawing a boring triangle. But keep in mind that we are introducing key concepts and preparing the base infrastructure to do more complex things. Please be patient and continue reading.
[Next chapter](../chapter-04/chapter-04.md)
================================================
FILE: chapter-04/chapter-04.md
================================================
# Chapter 04 - Render a quad
## Chapter 04 - More on render
In this chapter, we will continue talking about how OpenGL renders things. We will draw a quad instead of a triangle and set additional data to the Mesh, such as a color for each vertex.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-04).
## Mesh modification
As we said at the beginning, we want to draw a quad. A quad can be constructed by using two triangles, as shown in the next figure.
As you can see, each of the two triangles is composed of three vertices. The first one is formed by the vertices V1, V2, and V4 (the orange one), and the second one is formed by the vertices V4, V2, and V3 (the green one). Vertices are specified in a counter-clockwise order, so the float array to be passed will be \[V1, V2, V4, V4, V2, V3]. Thus, the data for that shape could be:
```java
float[] positions = new float[] {
-0.5f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
};
```
The code above still presents some issues. We are repeating coordinates to represent the quad. We are passing the V2 and V4 coordinates twice. With this small shape, it may not seem like a big deal, but imagine a much more complex 3D model. We would be repeating the coordinates many times, like in the figure below (where a vertex can be shared between six triangles).
In the end, we would need much more memory because of that duplicated information. But that is not even the major problem; the biggest problem is that we would be repeating work in our shaders for the same vertex. This is where Index Buffers come to the rescue. For drawing the quad, we only need to specify each vertex once: V1, V2, V3, V4. Each vertex has a position in the array: V1 has position 0, V2 has position 1, etc.:
| V1 | V2 | V3 | V4 |
| -- | -- | -- | -- |
| 0 | 1 | 2 | 3 |
Then we specify the order in which those vertices should be drawn by referring to their position:
| 0 | 1 | 3 | 3 | 1 | 2 |
| -- | -- | -- | -- | -- | -- |
| V1 | V2 | V4 | V4 | V2 | V3 |
So we need to modify our `Mesh` class to accept another parameter, an array of indices, and now the number of vertices to draw will be the length of that indices array. Keep in mind also that now we are just using three floats to represent the position of a vertex, but we want to associate the color of each one. Therefore, we need to modify the `Mesh` class like this.
```java
public class Mesh {
...
public Mesh(float[] positions, float[] colors, int[] indices) {
numVertices = indices.length;
...
// Color VBO
vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer colorsBuffer = MemoryUtil.memCallocFloat(colors.length);
colorsBuffer.put(0, colors);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, colorsBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, false, 0, 0);
// Index VBO
vboId = glGenBuffers();
vboIdList.add(vboId);
IntBuffer indicesBuffer = MemoryUtil.memCallocInt(indices.length);
indicesBuffer.put(0, indices);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL_STATIC_DRAW);
...
MemoryUtil.memFree(colorsBuffer);
MemoryUtil.memFree(indicesBuffer);
}
...
}
```
After we have created the VBO that stores the positions, we need to create another VBO that will hold the color data. After that, we create another one for the indices. The process of creating that VBO is similar to the previous ones, but notice that the type is now `GL_ELEMENT_ARRAY_BUFFER`. Since we are dealing with integers, we need to create an `IntBuffer` instead of a `FloatBuffer`. The VAO will now contain three VBOs, one for positions, the other one for colors, and another one that will hold the indices and that will be used for rendering.
After that, we need to change the drawing call in the `SceneRender` class to use indices:
```java
public class SceneRender {
...
public void render(Scene scene) {
...
scene.getMeshMap().values().forEach(mesh -> {
glBindVertexArray(mesh.getVaoId());
glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
}
);
...
}
...
}
```
The parameters of the `glDrawElements` method are:
* mode: Specifies the primitives for rendering, triangles in this case. No changes here.
* count: Specifies the number of elements to be rendered.
* type: Specifies the type of value in the indices data. In this case, we are using integers.
* indices: Specifies the offset to apply to the indices data to start rendering.
Now we can just create a new Mesh with the extra vertex parameters (colors) and the indices in the `Main` class:
```java
public class Main implements IAppLogic {
...
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-04", new Window.WindowOptions(), main);
...
}
...
public void init(Window window, Scene scene, Render render) {
float[] positions = new float[]{
-0.5f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
};
float[] colors = new float[]{
0.5f, 0.0f, 0.0f,
0.0f, 0.5f, 0.0f,
0.0f, 0.0f, 0.5f,
0.0f, 0.5f, 0.5f,
};
int[] indices = new int[]{
0, 1, 3, 3, 1, 2,
};
Mesh mesh = new Mesh(positions, colors, indices);
scene.addMesh("quad", mesh);
}
...
}
```
Now we need to modify the shaders, not because of the indices, but to use the color per vertex. The vertex shader (`scene.vert`) is like this:
```glsl
#version 330
layout (location=0) in vec3 position;
layout (location=1) in vec3 color;
out vec3 outColor;
void main()
{
gl_Position = vec4(position, 1.0);
outColor = color;
}
```
In the input parameters, you can see we receive a new `vec3` for the color, and we just pass it on to be used in the fragment shader (`scene.frag`), which is like this:
```glsl
#version 330
in vec3 outColor;
out vec4 fragColor;
void main()
{
fragColor = vec4(outColor, 1.0);
}
```
We just use the input color parameter to set the fragment color. It is important to notice that the color value will be interpolated across the triangle when used in the fragment shader; for example, a fragment halfway between a red vertex and a green vertex will get a color that is half red and half green. So the result will look something like this:

[Next chapter](../chapter-05/chapter-05.md)
================================================
FILE: chapter-05/chapter-05.md
================================================
# Chapter 05 - Perspective projection
In this chapter, we will learn two important concepts: perspective projection (to render far away objects smaller than closer ones) and uniforms (global variables used to pass additional data to the shaders).
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-05).
## Perspective projection
Let’s get back to our nice colored quad we created in the previous chapter. If you look carefully, you will see that the quad is distorted and appears as a rectangle. You can even change the width of the window from 600 pixels to 900 and the distortion will be more evident. What’s happening here?
If you revisit our vertex shader code we are just passing our coordinates directly. That is, when we say that a vertex has a value for coordinate x of 0.5 we are saying to OpenGL to draw it at x position 0.5 on our screen. The following figure shows the OpenGL coordinates (just for x and y axis).

Those coordinates are mapped, considering our window size, to window coordinates (which have the origin at the top-left corner of the previous figure). So, if our window has a size of 900x580, OpenGL coordinates (1,0) will be mapped to coordinates (900, 0) creating a rectangle instead of a quad.
But, the problem is more serious than that. Modify the z coordinate of our quad from 0.0 to 1.0 and to -1.0. What do you see? The quad is exactly drawn in the same place no matter if it’s displaced along the z axis. Why is this happening? Objects that are further away should be drawn smaller than objects that are closer. But we are drawing them with the same x and y coordinates.
But, wait. Should this not be handled by the z coordinate? The answer is yes and no. The z coordinate tells OpenGL that an object is closer or farther away, but OpenGL does not know anything about the size of your object. You could have two objects of different sizes, one closer and smaller and one bigger and further that could be projected correctly onto the screen with the same size (those would have same x and y coordinates but different z). OpenGL just uses the coordinates we are passing, so we must take care of this. We need to correctly project our coordinates.
Now that we have diagnosed the problem, how do we fix it? The answer is using a perspective projection matrix. The perspective projection matrix will take care of the aspect ratio (the relation between width and height) of our drawing area so objects won’t be distorted. It will also handle the distance so objects far away from us will be drawn smaller. The projection matrix will also consider our field of view and the maximum distance to be displayed.
For those not familiar with matrices, a matrix is a bi-dimensional array of numbers arranged in columns and rows. Each number inside a matrix is called an element. A matrix order is the number of rows and columns. For instance, here you can see a 2x2 matrix (2 rows and 2 columns).
$$
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
$$
Matrices have a number of basic operations that can be applied to them (such as addition, multiplication, etc.) that you can consult in a math book. The main characteristics of matrices, related to 3D graphics, is that they are very useful to transform points in the space.
You can think about the projection matrix as a camera, which has a field of view and a minimum and maximum distance. The vision area of that camera will be obtained from a truncated pyramid. The following picture shows a top view of that area.
A projection matrix will correctly map 3D coordinates so they can be correctly represented on a 2D screen. The mathematical representation of that matrix is as follows (don’t be scared).
$$
\begin{bmatrix} \frac{1}{a \cdot \tan(\frac{fov}{2})} & 0 & 0 & 0 \\ 0 & \frac{1}{\tan(\frac{fov}{2})} & 0 & 0 \\ 0 & 0 & \frac{-z_{far}-z_{near}}{z_{far}-z_{near}} & \frac{-2 \cdot z_{far} \cdot z_{near}}{z_{far}-z_{near}} \\ 0 & 0 & -1 & 0 \end{bmatrix}
$$
Where aspect ratio is the relation between our screen width and our screen height ($$a=width/height$$). In order to obtain the projected coordinates of a given point we just need to multiply the projection matrix by the original coordinates. The result will be another vector that will contain the projected version.
So we need to handle a set of mathematical entities such as vectors and matrices and include the operations that can be done on them. We could choose to write all that code on our own from scratch or use an existing library. We will choose the easy path and use a specific library for dealing with math operations in LWJGL which is called JOML (Java OpenGL Math Library). In order to use that library we just need to add another dependency to our `pom.xml` file.
```xml
<dependency>
    <groupId>org.joml</groupId>
    <artifactId>joml</artifactId>
    <version>${joml.version}</version>
</dependency>
```
Now that everything has been set up let’s define our projection matrix. We will create a new class named `Projection` which is defined like this:
```java
package org.lwjglb.engine.scene;
import org.joml.Matrix4f;
public class Projection {
private static final float FOV = (float) Math.toRadians(60.0f);
private static final float Z_FAR = 1000.f;
private static final float Z_NEAR = 0.01f;
private Matrix4f projMatrix;
public Projection(int width, int height) {
projMatrix = new Matrix4f();
updateProjMatrix(width, height);
}
public Matrix4f getProjMatrix() {
return projMatrix;
}
public void updateProjMatrix(int width, int height) {
projMatrix.setPerspective(FOV, (float) width / height, Z_NEAR, Z_FAR);
}
}
```
As you can see, it relies on the `Matrix4f` class (provided by the JOML library), which provides a method to set up a perspective projection matrix named `setPerspective`. This method needs the following parameters (a small usage sketch follows the list):
* Field of View: The Field of View angle in radians. We just use the `FOV` constant for that
* Aspect Ratio: That is, the relationship between render width and height.
* Distance to the near plane (z-near)
* Distance to the far plane (z-far).
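As a quick standalone check (not part of the engine code), you can build such a matrix with JOML and transform a couple of points to verify that points further away end up closer to the center of the screen after the perspective divide:

```java
import org.joml.Matrix4f;
import org.joml.Vector4f;

public class ProjectionCheck {
    public static void main(String[] args) {
        Matrix4f proj = new Matrix4f().setPerspective((float) Math.toRadians(60.0f), 800.0f / 600.0f, 0.01f, 1000.0f);

        // The same x/y offset, at two different depths
        Vector4f near = proj.transform(new Vector4f(0.5f, 0.5f, -1.0f, 1.0f));
        Vector4f far = proj.transform(new Vector4f(0.5f, 0.5f, -10.0f, 1.0f));

        // After dividing by w (the perspective divide), the farther point lies
        // closer to the center of the screen, that is, it looks smaller
        System.out.printf("near: (%.3f, %.3f)%n", near.x / near.w, near.y / near.w);
        System.out.printf("far : (%.3f, %.3f)%n", far.x / far.w, far.y / far.w);
    }
}
```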
We will store a `Projection` class instance in the `Scene` class and initialize it in the constructor. In order to allow for the window to be resized, we will need to create a new method in the `Scene` class, named `resize` to recalculate the perspective projection matrix when window dimensions change.
```java
public class Scene {
...
private Projection projection;
public Scene(int width, int height) {
...
projection = new Projection(width, height);
}
...
public Projection getProjection() {
return projection;
}
public void resize(int width, int height) {
projection.updateProjMatrix(width, height);
}
}
```
We need also to update the `Engine` to adapt it to the new `Scene` class constructor parameters and to invoke the `resize` method:
```java
public class Engine {
...
public Engine(String windowTitle, Window.WindowOptions opts, IAppLogic appLogic) {
...
scene = new Scene(window.getWidth(), window.getHeight());
...
}
...
private void resize() {
scene.resize(window.getWidth(), window.getHeight());
}
...
}
```
## Uniforms
Now that we have the infrastructure to calculate the perspective projection matrix, how do we use it? We need to use it in our shader, and it should be applied to all the vertices. At first, you might think of bundling it in the vertex input (like the coordinates and the colors). In that case we would be wasting lots of space, since the projection matrix is common to all vertices. You may also think of multiplying the vertices by the matrix in the Java code. But then our VBOs would be useless and we would not be using the processing power available in the graphics card.
The answer is to use “uniforms”. Uniforms are global GLSL variables that shaders can use and that we will employ to pass data that is common to all elements or to a model. So, let's start with how uniforms are used in shader programs. We need to modify our vertex shader code and declare a new uniform called `projectionMatrix` and use it to calculate the projected position.
```glsl
#version 330
layout (location=0) in vec3 position;
layout (location=1) in vec3 color;
out vec3 outColor;
uniform mat4 projectionMatrix;
void main()
{
gl_Position = projectionMatrix * vec4(position, 1.0);
outColor = color;
}
```
As you can see, we define our `projectionMatrix` as a 4x4 matrix and the position is obtained by multiplying it by our original coordinates. Now we need to pass the values of the projection matrix to our shader. We will create a new class named `UniformsMap`, which will allow us to create references to the uniforms and set up their values. It starts like this:
```java
package org.lwjglb.engine.graph;
import org.joml.Matrix4f;
import org.lwjgl.system.MemoryStack;
import java.util.*;
import static org.lwjgl.opengl.GL20.*;
public class UniformsMap {
private int programId;
    private Map<String, Integer> uniforms;
public UniformsMap(int programId) {
this.programId = programId;
uniforms = new HashMap<>();
}
public void createUniform(String uniformName) {
int uniformLocation = glGetUniformLocation(programId, uniformName);
if (uniformLocation < 0) {
throw new RuntimeException("Could not find uniform [" + uniformName + "] in shader program [" +
programId + "]");
}
uniforms.put(uniformName, uniformLocation);
}
...
}
```
As you can see, the constructor receives the identifier of the shader program and defines a `Map` to store the references (`Integer` instances) to the uniforms, which are created in the `createUniform` method. Uniform references are retrieved by calling the `glGetUniformLocation` function, which receives two parameters:
* The shader program identifier.
* The name of the uniform (it should match the one defined in the shader code).
As you can see, uniform creation is independent of the data type associated to it. We will need separate methods for the different types when we want to set the data for a uniform. For now, we just need a method to load a 4x4 matrix:
```java
public class UniformsMap {
...
public void setUniform(String uniformName, Matrix4f value) {
try (MemoryStack stack = MemoryStack.stackPush()) {
Integer location = uniforms.get(uniformName);
if (location == null) {
throw new RuntimeException("Could not find uniform [" + uniformName + "]");
}
glUniformMatrix4fv(location.intValue(), false, value.get(stack.mallocFloat(16)));
}
}
}
```
Now, we can use the code above in the `SceneRender` class:
```java
public class SceneRender {
...
private UniformsMap uniformsMap;
public SceneRender() {
...
createUniforms();
}
...
private void createUniforms() {
uniformsMap = new UniformsMap(shaderProgram.getProgramId());
uniformsMap.createUniform("projectionMatrix");
}
...
public void render(Scene scene) {
...
uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
...
}
}
```
We are almost done. We can now show the quad correctly rendered, so you can launch your program and you will obtain a... black background without any coloured quad. What’s happening? Did we break something? Well, actually no. Remember that we are now simulating the effect of a camera looking at our scene. And we provided two distances, one to the farthest plane (equal to 1000f) and one to the closest plane (equal to 0.01f). Our coordinates were:
```java
float[] positions = new float[]{
-0.5f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
};
```
That is, our z coordinates are outside the visible zone. Let’s assign them a value of `-0.05f`. Now you will see a giant square like this:

What is happening now is that we are drawing the quad too close to our camera. We are actually zooming into it. If we now assign a value of `-1.0f` to the z coordinate we can see our coloured quad.
```java
public class Main implements IAppLogic {
...
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-05", new Window.WindowOptions(), main);
...
}
...
public void init(Window window, Scene scene, Render render) {
float[] positions = new float[]{
-0.5f, 0.5f, -1.0f,
-0.5f, -0.5f, -1.0f,
0.5f, -0.5f, -1.0f,
0.5f, 0.5f, -1.0f,
};
...
}
...
}
```

If we continue pushing the quad backwards we will see it becoming smaller. Notice also that our quad does not appear as a rectangle anymore.
[Next chapter](../chapter-06/chapter-06.md)
================================================
FILE: chapter-06/chapter-06.md
================================================
# Chapter 06 - Going 3D
In this chapter we will set up the basis for 3D models and explain the concept of model transformations in order to render our first 3D shape, a rotating cube.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-06).
## Models and Entities
Let us first define the concept of a 3D model. Up to now, we have been working with meshes (collections of vertices). A model is a structure which glues together vertices, colors, textures and materials. A model may be composed of several meshes and can be used by several game entities. A game entity represents a player, an enemy, an obstacle, anything that is part of the 3D scene. In this book we will assume that an entity is always related to a model (although you can have entities that are not rendered and therefore do not have a model). An entity has specific data, such as a position, which we need to use when rendering. You will see later on that we start the render process by getting the models and then drawing the entities associated to each model. This is for efficiency: since several entities can share the same model, it is better to set up the elements that belong to the model once and later on handle the data that is specific to each entity.
The class that represents a model is, obviously, called `Model` and is defined like this:
```java
package org.lwjglb.engine.graph;
import org.lwjglb.engine.scene.Entity;
import java.util.*;
public class Model {
private final String id;
    private List<Entity> entitiesList;
    private List<Mesh> meshList;
    public Model(String id, List<Mesh> meshList) {
this.id = id;
this.meshList = meshList;
entitiesList = new ArrayList<>();
}
public void cleanup() {
meshList.forEach(Mesh::cleanup);
}
    public List<Entity> getEntitiesList() {
return entitiesList;
}
public String getId() {
return id;
}
    public List<Mesh> getMeshList() {
return meshList;
}
}
```
As you can see, a model, for now, stores a list of `Mesh` instances and has a unique identifier. In addition to that, we store the list of game entities (modelled by the `Entity` class) that are associated to that model. If you are going to create a full engine, you may want to store those relationships somewhere else (not in the model), but, for simplicity, we will store those links in the `Model` class. This way, the render process will be simpler.
In order to represent models in a 3D scene, we'd like to provide support for some basic operations:
* Translation: Move an object in some direction by some amount.
* Rotation: Rotate an object by some angle around any of the three axes.
* Scale: Adjust the size of an object.
The operations described above are known as transformations. You probably guessed--correctly--that the way we'll achieve that is by multiplying our coordinates by a sequence of matrices: one for performing a translation, one for rotation and one for scaling. Those three matrices will be combined into a single matrix called a transformation matrix and passed as a uniform to our vertex shader.
Generally, 3D models exist in their own space. A 3D artist would probably choose to model a pancake with the center of the pancake at the origin and the flat sides pointing straight up and down. When you place the model in your scene, though, you might want it to be somewhere else, like on a griddle, or in the air, being flipped. You might even want more than one! In that case, instead of having a different 3D file for each different pancake, it is useful to have just one pancake file that we render in a different position for each pancake we want to draw. This is why model matrices are so standard: easier model creation, easier placing of the model in space, and the ability to render more than one instance of the same model.
Such a matrix, comprised of three transformations, will appear twice: once as the world matrix, used for converting the coordinates of the vertices of your model (your centered, straight-up-and-down pancake) to where you want them to be in your 3D space (a pancake flying through the air); and once as the view matrix, used to render the scene from the camera's point of view. This second use will be discussed later, in chapter 8. For now, let's just worry about the world matrix.
That world matrix will be calculated like this:
$$
World Matrix=\left[Translation Matrix\right]\left[Rotation Matrix\right]\left[Scale Matrix\right]
$$
Transformation matrices are applied right to left. So here, the coordinates are first scaled, then rotated, then translated. While the order of the rotation and scale matrices does not matter in our case (we scale uniformly), we want the translation matrix to be applied last: since we are rotating about the origin, the position of the object would otherwise be rotated too!
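With JOML, that right-to-left order corresponds to the order in which you chain the calls when building the matrix. Here is a minimal standalone sketch with made-up values:

```java
import org.joml.Matrix4f;
import org.joml.Vector3f;

public class WorldMatrixExample {
    public static void main(String[] args) {
        Vector3f position = new Vector3f(2.0f, 0.0f, -2.0f);
        float angle = (float) Math.toRadians(45.0f);
        float scale = 0.5f;

        // Translation first (leftmost), then rotation, then scale: vertices are scaled,
        // rotated around the model origin and only then moved into place
        Matrix4f worldMatrix = new Matrix4f()
                .translate(position)
                .rotateY(angle)
                .scale(scale);

        System.out.println(worldMatrix);
    }
}
```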
If we include our projection matrix in the transformation matrix it would be like this:
$$
\begin{array}{lcl} Transf & = & \left[Proj Matrix\right]\left[Translation Matrix\right]\left[Rotation Matrix\right]\left[Scale Matrix\right] \\ & = & \left[Proj Matrix\right]\left[World Matrix\right] \end{array}
$$
The translation matrix is defined like this:
$$
\begin{bmatrix} 1 & 0 & 0 & dx \\ 0 & 1 & 0 & dy \\ 0 & 0 & 1 & dz \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$
Translation Matrix Parameters:
* dx: Displacement along the x axis.
* dy: Displacement along the y axis.
* dz: Displacement along the z axis.
The scale matrix is defined like this:
$$
\begin{bmatrix} sx & 0 & 0 & 0 \\ 0 & sy & 0 & 0 \\ 0 & 0 & sz & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$
Scale Matrix Parameters:
* sx: Scaling along the x axis.
* sy: Scaling along the y axis.
* sz: Scaling along the z axis.
The rotation matrix is much more complex. But keep in mind that it can be constructed by multiplying 3 rotation matrices, one for each axis, or by applying a quaternion (more on this later).
Now, let's define the `Entity` class:
```java
package org.lwjglb.engine.scene;
import org.joml.*;
public class Entity {
private final String id;
private final String modelId;
private Matrix4f modelMatrix;
private Vector3f position;
private Quaternionf rotation;
private float scale;
public Entity(String id, String modelId) {
this.id = id;
this.modelId = modelId;
modelMatrix = new Matrix4f();
position = new Vector3f();
rotation = new Quaternionf();
scale = 1;
}
public String getId() {
return id;
}
public String getModelId() {
return modelId;
}
public Matrix4f getModelMatrix() {
return modelMatrix;
}
public Vector3f getPosition() {
return position;
}
public Quaternionf getRotation() {
return rotation;
}
public float getScale() {
return scale;
}
public final void setPosition(float x, float y, float z) {
position.x = x;
position.y = y;
position.z = z;
}
public void setRotation(float x, float y, float z, float angle) {
this.rotation.fromAxisAngleRad(x, y, z, angle);
}
public void setScale(float scale) {
this.scale = scale;
}
public void updateModelMatrix() {
modelMatrix.translationRotateScale(position, rotation, scale);
}
}
```
As you can see, an `Entity` instance has a unique ID and defines attributes for its position (as a 3-component vector), its scale (just a float, since we will assume that we scale evenly across all three axes) and its rotation (as a quaternion). We could have stored the rotation information as rotation angles for pitch, yaw and roll, but instead we are using a strange mathematical object, called a quaternion, that you may not have heard of. The problem with using rotation angles is the so-called gimbal lock. When applying rotations using these angles (called Euler angles) we may end up aligning two rotation axes and losing degrees of freedom, resulting in the inability to properly rotate an object. Quaternions do not have this problem. Instead of me trying to poorly explain what quaternions are, let me just link to an excellent [blog entry](https://www.3dgep.com/understanding-quaternions/) which explains all the concepts behind them. All you need to know about quaternions, though, is that they represent a rotation without having the same pitfalls Euler angles do.
All the transformations applied to an entity are combined into a 4x4 matrix, therefore an `Entity` instance stores a `Matrix4f` instance for this transformation (the model matrix). It is constructed using the JOML method `translationRotateScale` (taking position, rotation and scale). We need to call the `updateModelMatrix` method each time we modify the attributes of an `Entity` instance, to update that matrix.
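For example, a small (hypothetical) fragment that places and shrinks an entity would look like this; note the explicit call at the end, since none of the setters rebuild the matrix:

```java
Entity cubeEntity = new Entity("cube-entity", "cube-model");
cubeEntity.setPosition(0.0f, 0.0f, -2.0f);
cubeEntity.setScale(0.5f);
cubeEntity.setRotation(0.0f, 1.0f, 0.0f, (float) Math.toRadians(45.0f));
// The setters only change the attributes, so we rebuild the model matrix explicitly
cubeEntity.updateModelMatrix();
```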
## Other code changes
We need to change the `Scene` class to store models instead of `Mesh` instances. In addition to that, we need to add support for linking `Entity` instances with models so we can later on render them.
```java
package org.lwjglb.engine.scene;
import org.lwjglb.engine.graph.Model;
import java.util.*;
public class Scene {
    private Map<String, Model> modelMap;
private Projection projection;
public Scene(int width, int height) {
modelMap = new HashMap<>();
projection = new Projection(width, height);
}
public void addEntity(Entity entity) {
String modelId = entity.getModelId();
Model model = modelMap.get(modelId);
if (model == null) {
throw new RuntimeException("Could not find model [" + modelId + "]");
}
model.getEntitiesList().add(entity);
}
public void addModel(Model model) {
modelMap.put(model.getId(), model);
}
public void cleanup() {
modelMap.values().forEach(Model::cleanup);
}
    public Map<String, Model> getModelMap() {
return modelMap;
}
public Projection getProjection() {
return projection;
}
public void resize(int width, int height) {
projection.updateProjMatrix(width, height);
}
}
```
Now we need to modify the `SceneRender` class a little. The first thing that we need to do is to pass the model matrix information to the shader through a uniform. Therefore, we will create a new uniform named `modelMatrix` in the vertex shader and, consequently, retrieve its location in the `createUniforms` method.
```java
public class SceneRender {
...
private void createUniforms() {
...
uniformsMap.createUniform("modelMatrix");
}
...
}
```
The next step is to modify the `render` method to change how we access models and set up properly the model matrix uniform:
```java
public class SceneRender {
...
public void render(Scene scene) {
shaderProgram.bind();
uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
        Collection<Model> models = scene.getModelMap().values();
for (Model model : models) {
model.getMeshList().stream().forEach(mesh -> {
glBindVertexArray(mesh.getVaoId());
                List<Entity> entities = model.getEntitiesList();
for (Entity entity : entities) {
uniformsMap.setUniform("modelMatrix", entity.getModelMatrix());
glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
}
});
}
glBindVertexArray(0);
shaderProgram.unbind();
}
...
}
```
As you can see, we iterate over the models, then over their meshes, binding their VAO and after that we get the associated entities. For each entity, prior to invoking the drawing call, we fill up the `modelMatrix` uniform with the proper data.
The next step is to modify the vertex shader to use the `modelMatrix` uniform.
```glsl
#version 330
layout (location=0) in vec3 position;
layout (location=1) in vec3 color;
out vec3 outColor;
uniform mat4 projectionMatrix;
uniform mat4 modelMatrix;
void main()
{
gl_Position = projectionMatrix * modelMatrix * vec4(position, 1.0);
outColor = color;
}
```
As you can see the code is almost the same. We now also multiply by the model matrix, so we correctly project our coordinates taking into consideration our frustum, position, scale and rotation information. Another important thing to think about is, why don’t we pass the translation, rotation and scale matrices separately instead of combining them into a world matrix? The reason is that we should try to limit the matrices we use in our shaders. Also keep in mind that the matrix multiplication that we do in our shader is done once per vertex. The projection matrix does not change between render calls and the world matrix is the same for all the vertices of a given `Entity` instance. If we passed the translation, rotation and scale matrices independently we would be doing many more matrix multiplications. Think about a model with tons of vertices. That’s a lot of extra operations.
But you may think now that if the model matrix does not change per `Entity` instance, why didn't we do the matrix multiplication in our Java class? We could multiply the projection matrix and the model matrix just once per `Entity` and send it as a single uniform. In this case we would be saving many more operations, right? The answer is that this is a valid point for now, but when we add more features to our game engine we will need to operate with world coordinates in the shaders anyway, so it’s better to handle those two matrices in an independent way.
Finally, another very important aspect to remark is the order of multiplication of the matrices. We first need to multiply the position information by the model matrix, that is, we transform model coordinates into world space. After that, we apply the projection. Keep in mind that matrix multiplication is not commutative, so order is very important.
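If you want to check the effect of the multiplication order yourself, or experiment with pre-combining both matrices in Java as discussed above, a small standalone sketch could look like this:

```java
import org.joml.Matrix4f;

public class MatrixOrderExample {
    public static void main(String[] args) {
        Matrix4f projMatrix = new Matrix4f().setPerspective((float) Math.toRadians(60.0f), 800.0f / 600.0f, 0.01f, 1000.0f);
        Matrix4f modelMatrix = new Matrix4f().translate(0.0f, 0.0f, -2.0f).rotateY((float) Math.toRadians(45.0f));

        // projection * model: the order we use in the shader
        Matrix4f combined = new Matrix4f(projMatrix).mul(modelMatrix);
        // model * projection: a different (and, for our purposes, wrong) result
        Matrix4f swapped = new Matrix4f(modelMatrix).mul(projMatrix);

        // Prints false: matrix multiplication is not commutative
        System.out.println(combined.equals(swapped));
    }
}
```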
Now we need to modify the `Main` class to adapt to the new way of loading models and entities, and to define the coordinates and indices of a 3D cube.
```java
public class Main implements IAppLogic {
private Entity cubeEntity;
private Vector4f displInc = new Vector4f();
private float rotation;
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-06", new Window.WindowOptions(), main);
...
}
...
@Override
public void init(Window window, Scene scene, Render render) {
float[] positions = new float[]{
// VO
-0.5f, 0.5f, 0.5f,
// V1
-0.5f, -0.5f, 0.5f,
// V2
0.5f, -0.5f, 0.5f,
// V3
0.5f, 0.5f, 0.5f,
// V4
-0.5f, 0.5f, -0.5f,
// V5
0.5f, 0.5f, -0.5f,
// V6
-0.5f, -0.5f, -0.5f,
// V7
0.5f, -0.5f, -0.5f,
};
float[] colors = new float[]{
0.5f, 0.0f, 0.0f,
0.0f, 0.5f, 0.0f,
0.0f, 0.0f, 0.5f,
0.0f, 0.5f, 0.5f,
0.5f, 0.0f, 0.0f,
0.0f, 0.5f, 0.0f,
0.0f, 0.0f, 0.5f,
0.0f, 0.5f, 0.5f,
};
int[] indices = new int[]{
// Front face
0, 1, 3, 3, 1, 2,
// Top Face
4, 0, 3, 5, 4, 3,
// Right face
3, 2, 7, 5, 3, 7,
// Left face
6, 1, 0, 6, 0, 4,
// Bottom face
2, 1, 6, 2, 6, 7,
// Back face
7, 6, 4, 7, 4, 5,
};
        List<Mesh> meshList = new ArrayList<>();
Mesh mesh = new Mesh(positions, colors, indices);
meshList.add(mesh);
String cubeModelId = "cube-model";
Model model = new Model(cubeModelId, meshList);
scene.addModel(model);
cubeEntity = new Entity("cube-entity", cubeModelId);
cubeEntity.setPosition(0, 0, -2);
scene.addEntity(cubeEntity);
}
...
}
```
In order to draw a cube we just need to define eight vertices. We'll define the new vertices in the `positions` array, making sure to keep the `colors` array length 8.
Since a cube is made of six faces we need to draw twelve triangles (two per face), so we need to update the indices array. Remember that triangles must be defined in counter-clockwise order. Doing this by hand is tedious, and it's easy to mess up. Here's how to do it: always put the face that you want to define indices for in front of you, then identify the vertices and define the triangles in counter-clockwise order. Finally, we create a model with just one mesh and an entity associated to that model.
We will first use the `input` method to modify the cube position with the cursor arrow keys and its scale with the `Z` and `X` keys. We just need to detect the key that has been pressed, update the cube entity position and/or scale, and, finally, update its model matrix.
```java
public class Main implements IAppLogic {
...
public void input(Window window, Scene scene, long diffTimeMillis) {
displInc.zero();
if (window.isKeyPressed(GLFW_KEY_UP)) {
displInc.y = 1;
} else if (window.isKeyPressed(GLFW_KEY_DOWN)) {
displInc.y = -1;
}
if (window.isKeyPressed(GLFW_KEY_LEFT)) {
displInc.x = -1;
} else if (window.isKeyPressed(GLFW_KEY_RIGHT)) {
displInc.x = 1;
}
if (window.isKeyPressed(GLFW_KEY_A)) {
displInc.z = -1;
} else if (window.isKeyPressed(GLFW_KEY_Q)) {
displInc.z = 1;
}
if (window.isKeyPressed(GLFW_KEY_Z)) {
displInc.w = -1;
} else if (window.isKeyPressed(GLFW_KEY_X)) {
displInc.w = 1;
}
displInc.mul(diffTimeMillis / 1000.0f);
Vector3f entityPos = cubeEntity.getPosition();
cubeEntity.setPosition(displInc.x + entityPos.x, displInc.y + entityPos.y, displInc.z + entityPos.z);
cubeEntity.setScale(cubeEntity.getScale() + displInc.w);
cubeEntity.updateModelMatrix();
}
...
}
```
In order to better view the cube, we'll have the model in the `Main` class rotate along the three axes. We will do this in the `update` method.
```java
public class Main implements IAppLogic {
...
public void update(Window window, Scene scene, long diffTimeMillis) {
rotation += 1.5;
if (rotation > 360) {
rotation = 0;
}
cubeEntity.setRotation(1, 1, 1, (float) Math.toRadians(rotation));
cubeEntity.updateModelMatrix();
}
...
}
```
And that’s all. We are now able to display a spinning 3D cube! You can now compile and run your example and you will obtain something like this.
There is something weird with this cube. Some faces are not being painted correctly. What is happening? The reason why the cube has this aspect is that the triangles that compose the cube are being drawn in a sort of random order. The pixels that are far away should be drawn before pixels that are closer. This is not happening right now and in order to do that we must enable depth testing.
This can be done in the constructor of the `Render` class:
```java
public class Render {
...
public Render() {
GL.createCapabilities();
glEnable(GL_DEPTH_TEST);
...
}
...
}
```
Now our cube is being rendered correctly!
[Next chapter](../chapter-07/chapter-07.md)
================================================
FILE: chapter-07/chapter-07.md
================================================
# Chapter 07 - Textures
In this chapter we will learn how to load textures, how they relate to a model and how to use them in the rendering process.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-07).
## Texture loading
A texture is an image which is mapped to a model to set the color of the pixels of the model. You can think of a texture as a skin that is wrapped around your 3D model. What you do is assign points in the image texture to the vertices in your model. With that information, OpenGL is able to calculate the color to apply to the other pixels based on the texture image.
The texture image can be larger or smaller than the model. OpenGL will extrapolate the color if the pixel to be processed cannot be mapped to a specific point in the texture. You can control this extrapolation when a specific texture is created.
In order to apply a texture to a model, we must assign texture coordinates to each of our vertices. The texture coordinate system is a bit different from the coordinate system of our model. First of all, we have a 2D texture, so our coordinates will only have two components, x and y. Besides that, the origin is at the top-left corner of the image and the maximum value of x or y is 1.
How do we relate texture coordinates with our position coordinates? Easy, in the same way we passed the color information. We set up a VBO which will have a texture coordinate for each vertex position.
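For example, if we wanted to map the full texture onto a quad face, the texture coordinates array could look like this (a hypothetical example; which texture corner each vertex gets depends on how your vertex positions are laid out):

```java
// One (x, y) texture coordinate pair per vertex; (0, 0) is the top-left corner of the image
float[] textCoords = new float[]{
        0.0f, 0.0f,   // this vertex gets the top-left corner of the texture
        0.0f, 1.0f,   // bottom-left corner
        1.0f, 1.0f,   // bottom-right corner
        1.0f, 0.0f,   // top-right corner
};
```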
So let’s start modifying the code base to use textures in our 3D cube. The first step is to load the image that will be used as a texture. For this task, we will use the LWJGL wrapper for the [stb](https://github.com/nothings/stb) library. In order to do that, we need first to declare that dependency, including the natives in our `pom.xml` file.
```xml
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-stb</artifactId>
    <version>${lwjgl.version}</version>
</dependency>
[...]
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-stb</artifactId>
    <version>${lwjgl.version}</version>
    <classifier>${native.target}</classifier>
    <scope>runtime</scope>
</dependency>
```
The first step we will do is to create a new `Texture` class that will perform all the necessary steps to load a texture and is defined like this:
```java
package org.lwjglb.engine.graph;
import org.lwjgl.system.MemoryStack;
import java.nio.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.stb.STBImage.*;
public class Texture {
private int textureId;
private String texturePath;
public Texture(int width, int height, ByteBuffer buf) {
this.texturePath = "";
generateTexture(width, height, buf);
}
public Texture(String texturePath) {
try (MemoryStack stack = MemoryStack.stackPush()) {
this.texturePath = texturePath;
IntBuffer w = stack.mallocInt(1);
IntBuffer h = stack.mallocInt(1);
IntBuffer channels = stack.mallocInt(1);
ByteBuffer buf = stbi_load(texturePath, w, h, channels, 4);
if (buf == null) {
throw new RuntimeException("Image file [" + texturePath + "] not loaded: " + stbi_failure_reason());
}
int width = w.get();
int height = h.get();
generateTexture(width, height, buf);
stbi_image_free(buf);
}
}
public void bind() {
glBindTexture(GL_TEXTURE_2D, textureId);
}
public void cleanup() {
glDeleteTextures(textureId);
}
private void generateTexture(int width, int height, ByteBuffer buf) {
textureId = glGenTextures();
glBindTexture(GL_TEXTURE_2D, textureId);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, buf);
glGenerateMipmap(GL_TEXTURE_2D);
}
public String getTexturePath() {
return texturePath;
}
}
```
The first thing we do in the constructor is to allocate `IntBuffer`s for the library to return the image size and number of channels. Then, we call the `stbi_load` method to actually load the image into a `ByteBuffer`. This method requires the following parameters:
* `filePath`: The absolute path to the file. The stb library is native and does not understand anything about `CLASSPATH`. Therefore, we will be using regular file system paths.
* `width`: Image width. This will be populated with the image width.
* `height`: Image height. This will be populated with the image height.
* `channels`: The image channels.
* `desired_channels`: The desired image channels. We pass 4 (RGBA).
One important thing to remember is that OpenGL, for historical reasons, requires that texture images have a size (number of texels in each dimension) of a power of two (2, 4, 8, 16, ....). I think this is not required by OpenGL drivers anymore but if you have some issues you can try modifying the dimensions. Texels are to textures as pixels are to pictures.
The next step is to upload the texture into the GPU. This will be done in the `generateTexture` method. First of all, we need to create a new texture identifier (by calling the `glGenTextures` function). After that, we need to bind to that texture (by calling `glBindTexture`). Then, we need to tell OpenGL how to unpack our RGBA bytes. Since each component is one byte in size, we set `GL_UNPACK_ALIGNMENT` to 1 with the `glPixelStorei` function. Finally, we load the texture data by calling `glTexImage2D`.
The `glTexImage2D` method has the following parameters:
* `target`: Specifies the target texture (its type). In this case: `GL_TEXTURE_2D`.
* `level`: Specifies the level-of-detail number. Level 0 is the base image level. Level n is the nth mipmap reduction image. More on this later.
* `internal format`: Specifies the number of colour components in the texture.
* `width`: Specifies the width of the texture image.
* `height`: Specifies the height of the texture image.
* `border`: This value must be zero.
* `format`: Specifies the format of the pixel data: RGBA in this case.
* `type`: Specifies the data type of the pixel data. We are using unsigned bytes for this.
* `data`: The buffer that stores our data.
After that, by calling the `glTexParameteri` function, we basically say that when a pixel is drawn with no direct one-to-one association to a texture coordinate, it will pick the nearest texture coordinate point. After that, we generate a mipmap. A mipmap is a decreasing resolution set of images generated from a high detailed texture. These lower resolution images will be used automatically when our object is scaled. We do this when calling the `glGenerateMipmap` function. And that’s all, we have successfully loaded our texture. Now we need to use it.
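By the way, `GL_NEAREST` is not the only option. If you prefer smoother results when the texture is scaled, you could use linear filtering instead. This is a small variation of the `generateTexture` code above; `GL_LINEAR_MIPMAP_LINEAR` also takes the generated mipmaps into account when the texture is minified:

```java
// Linear filtering: blend the nearest texels instead of picking a single one
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```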
Now we will create a texture cache. It is very frequent that models reuse the same texture; therefore, instead of loading the same texture multiple times, we will cache the textures already loaded to load each texture just once. This will be controlled by the `TextureCache` class:
```java
package org.lwjglb.engine.graph;
import java.util.*;
public class TextureCache {
public static final String DEFAULT_TEXTURE = "resources/models/default/default_texture.png";
    private Map<String, Texture> textureMap;
public TextureCache() {
textureMap = new HashMap<>();
textureMap.put(DEFAULT_TEXTURE, new Texture(DEFAULT_TEXTURE));
}
public void cleanup() {
textureMap.values().forEach(Texture::cleanup);
}
public Texture createTexture(String texturePath) {
return textureMap.computeIfAbsent(texturePath, Texture::new);
}
public Texture getTexture(String texturePath) {
Texture texture = null;
if (texturePath != null) {
texture = textureMap.get(texturePath);
}
if (texture == null) {
texture = textureMap.get(DEFAULT_TEXTURE);
}
return texture;
}
}
```
As you can see, we just store the loaded textures in a `Map` and return a default texture in case texture path is null (models with no textures). The default texture is just a black image; for models that define colors instead of textures, we can combine both the default texture and the model in the fragment shader. The `TextureCache` class instance will be stored in the `Scene` class:
```java
public class Scene {
...
private TextureCache textureCache;
...
public Scene(int width, int height) {
...
textureCache = new TextureCache();
}
...
public TextureCache getTextureCache() {
return textureCache;
}
...
}
```
Now we need to change the way we define models to add support for textures. In order to do so (and to prepare for the more complex models we are going to load in future chapters), we will introduce a new class named `Material`. This class will hold a texture path and a list of `Mesh` instances. Therefore, we will associate `Model` instances with a `List` of `Material` instances instead of `Mesh` instances. In future chapters, materials will be able to contain other properties, such as diffuse or specular colors.
The `Material` class is defined like this:
```java
package org.lwjglb.engine.graph;
import java.util.*;
public class Material {
    private List<Mesh> meshList;
private String texturePath;
public Material() {
meshList = new ArrayList<>();
}
public void cleanup() {
meshList.forEach(Mesh::cleanup);
}
    public List<Mesh> getMeshList() {
return meshList;
}
public String getTexturePath() {
return texturePath;
}
public void setTexturePath(String texturePath) {
this.texturePath = texturePath;
}
}
```
As you can see, `Mesh` instances are now under the `Material` class. Therefore, we need to modify the `Model` class like this:
```java
package org.lwjglb.engine.graph;
import org.lwjglb.engine.scene.Entity;
import java.util.*;
public class Model {
private final String id;
    private List<Entity> entitiesList;
    private List<Material> materialList;
    public Model(String id, List<Material> materialList) {
this.id = id;
entitiesList = new ArrayList<>();
this.materialList = materialList;
}
public void cleanup() {
materialList.forEach(Material::cleanup);
}
    public List<Entity> getEntitiesList() {
return entitiesList;
}
public String getId() {
return id;
}
    public List<Material> getMaterialList() {
return materialList;
}
}
```
As we said before, we need to pass texture coordinates as another VBO. So we will modify our `Mesh` class to accept an array of floats that contains texture coordinates instead of colors. The `Mesh` class is modified like this:
```java
public class Mesh {
...
public Mesh(float[] positions, float[] textCoords, int[] indices) {
numVertices = indices.length;
vboIdList = new ArrayList<>();
vaoId = glGenVertexArrays();
glBindVertexArray(vaoId);
// Positions VBO
int vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer positionsBuffer = MemoryUtil.memCallocFloat(positions.length);
positionsBuffer.put(0, positions);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, positionsBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
// Texture coordinates VBO
vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer textCoordsBuffer = MemoryUtil.memCallocFloat(textCoords.length);
textCoordsBuffer.put(0, textCoords);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, textCoordsBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);
// Index VBO
vboId = glGenBuffers();
vboIdList.add(vboId);
IntBuffer indicesBuffer = MemoryUtil.memCallocInt(indices.length);
indicesBuffer.put(0, indices);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
MemoryUtil.memFree(positionsBuffer);
MemoryUtil.memFree(textCoordsBuffer);
MemoryUtil.memFree(indicesBuffer);
}
...
}
```
## Using the textures
Now we need to use the texture in our shaders. In the vertex shader, the second input parameter is now a `vec2` holding texture coordinates (previously it was a `vec3` color), and its output is now called `outTextCoord` instead of `outColor`. The vertex shader, as in the color case, just passes the texture coordinates on to be used by the fragment shader.
```glsl
#version 330
layout (location=0) in vec3 position;
layout (location=1) in vec2 texCoord;
out vec2 outTextCoord;
uniform mat4 projectionMatrix;
uniform mat4 modelMatrix;
void main()
{
gl_Position = projectionMatrix * modelMatrix * vec4(position, 1.0);
outTextCoord = texCoord;
}
```
In the fragment shader, we use the texture coordinates to set the pixel color by sampling a texture (through a `sampler2D` uniform):
```glsl
#version 330
in vec2 outTextCoord;
out vec4 fragColor;
uniform sampler2D txtSampler;
void main()
{
fragColor = texture(txtSampler, outTextCoord);
}
```
We will see now how all of this is used in the `SceneRender` class. First, we need to create a new uniform for the texture sampler.
```java
public class SceneRender {
...
private void createUniforms() {
...
uniformsMap.createUniform("txtSampler");
}
...
}
```
Now, we can use the texture in the render process:
```java
public class SceneRender {
...
public void render(Scene scene) {
shaderProgram.bind();
uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
uniformsMap.setUniform("txtSampler", 0);
Collection<Model> models = scene.getModelMap().values();
TextureCache textureCache = scene.getTextureCache();
for (Model model : models) {
List<Entity> entities = model.getEntitiesList();
for (Material material : model.getMaterialList()) {
Texture texture = textureCache.getTexture(material.getTexturePath());
glActiveTexture(GL_TEXTURE0);
texture.bind();
for (Mesh mesh : material.getMeshList()) {
glBindVertexArray(mesh.getVaoId());
for (Entity entity : entities) {
uniformsMap.setUniform("modelMatrix", entity.getModelMatrix());
glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
}
}
}
}
glBindVertexArray(0);
shaderProgram.unbind();
}
}
```
As you can see, we first set the texture sampler uniform to the value `0`. Let's explain why we do this. A graphics card has several spaces or slots to store textures. Each of these spaces is called a texture unit. When we work with textures we must set the texture unit that we want to work with. In this case, we are using just one texture, so we will use texture unit `0`. The uniform has a `sampler2D` type and holds the index of the texture unit that we want to work with. When we iterate over models and materials, we get the texture associated to each material from the cache, activate the texture unit by calling the `glActiveTexture` function with the parameter `GL_TEXTURE0`, and then bind the texture over that unit.
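If we ever needed several textures at once, we would simply use more texture units. The following is only a minimal sketch of that idea; the second sampler name and texture are hypothetical and are not part of this chapter's code:
```java
// Each sampler2D uniform just stores the index of a texture unit
uniformsMap.setUniform("txtSampler", 0);
uniformsMap.setUniform("otherSampler", 1); // hypothetical second sampler

// Bind one texture per unit
glActiveTexture(GL_TEXTURE0);
texture.bind();
glActiveTexture(GL_TEXTURE1);
otherTexture.bind(); // hypothetical second texture
```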
We also need to modify the `UniformsMap` class to add a new method which accepts an integer to set up the sampler value. This method will also be called `setUniform`, but it will accept the name of the uniform and an integer value. Since we would be repeating some code between the `setUniform` method used to set up matrices and this new one, we extract the part of the code that retrieves the uniform location into a new method named `getUniformLocation`. The changes in the `UniformsMap` class are shown below:
```java
public class UniformsMap {
...
private int getUniformLocation(String uniformName) {
Integer location = uniforms.get(uniformName);
if (location == null) {
throw new RuntimeException("Could not find uniform [" + uniformName + "]");
}
return location.intValue();
}
public void setUniform(String uniformName, int value) {
glUniform1i(getUniformLocation(uniformName), value);
}
public void setUniform(String uniformName, Matrix4f value) {
try (MemoryStack stack = MemoryStack.stackPush()) {
glUniformMatrix4fv(getUniformLocation(uniformName), false, value.get(stack.mallocFloat(16)));
}
}
...
}
```
Right now, we have just modified our code base to support textures. Now, we need to set up texture coordinates for our 3D cube. We will use a single texture image whose regions are mapped to the different faces of the cube.
In our 3D model we have eight vertices. Let’s see how this can be done. Let’s first define the front face texture coordinates for each vertex.
| Vertex | Texture Coordinate |
| ------ | ------------------ |
| V0 | (0.0, 0.0) |
| V1 | (0.0, 0.5) |
| V2 | (0.5, 0.5) |
| V3 | (0.5, 0.0) |
Now, let’s define the texture mapping of the top face.
| Vertex | Texture Coordinate |
| ------ | ------------------ |
| V4 | (0.0, 0.5) |
| V5 | (0.5, 0.5) |
| V0 | (0.0, 1.0) |
| V3 | (0.5, 1.0) |
As you can see, we have a problem: we need to assign different texture coordinates to the same vertices (V0 and V3). How can we solve this? Each time we need to map a new set of texture coordinates to an existing vertex, we must create another vertex with the same position coordinates. For the top face, we need to repeat the four vertices and assign them the correct texture coordinates.
Since the front, back, and lateral faces use the same texture region, we do not need to repeat all of these vertices. The complete definition in the source code uses 20 vertices in total, compared to the original 8.
In the next chapters, we will learn how to load models generated by 3D modeling tools. That way, we won’t need to define by hand the positions and texture coordinates (which by the way, would be impractical for more complex models).
We just need to modify the `init` method in the `Main` class to define the texture coordinates and load texture data:
```java
public class Main implements IAppLogic {
...
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-07", new Window.WindowOptions(), main);
...
}
...
public void init(Window window, Scene scene, Render render) {
float[] positions = new float[]{
// V0
-0.5f, 0.5f, 0.5f,
// V1
-0.5f, -0.5f, 0.5f,
// V2
0.5f, -0.5f, 0.5f,
// V3
0.5f, 0.5f, 0.5f,
// V4
-0.5f, 0.5f, -0.5f,
// V5
0.5f, 0.5f, -0.5f,
// V6
-0.5f, -0.5f, -0.5f,
// V7
0.5f, -0.5f, -0.5f,
// For text coords in top face
// V8: V4 repeated
-0.5f, 0.5f, -0.5f,
// V9: V5 repeated
0.5f, 0.5f, -0.5f,
// V10: V0 repeated
-0.5f, 0.5f, 0.5f,
// V11: V3 repeated
0.5f, 0.5f, 0.5f,
// For text coords in right face
// V12: V3 repeated
0.5f, 0.5f, 0.5f,
// V13: V2 repeated
0.5f, -0.5f, 0.5f,
// For text coords in left face
// V14: V0 repeated
-0.5f, 0.5f, 0.5f,
// V15: V1 repeated
-0.5f, -0.5f, 0.5f,
// For text coords in bottom face
// V16: V6 repeated
-0.5f, -0.5f, -0.5f,
// V17: V7 repeated
0.5f, -0.5f, -0.5f,
// V18: V1 repeated
-0.5f, -0.5f, 0.5f,
// V19: V2 repeated
0.5f, -0.5f, 0.5f,
};
float[] textCoords = new float[]{
0.0f, 0.0f,
0.0f, 0.5f,
0.5f, 0.5f,
0.5f, 0.0f,
0.0f, 0.0f,
0.5f, 0.0f,
0.0f, 0.5f,
0.5f, 0.5f,
// For text coords in top face
0.0f, 0.5f,
0.5f, 0.5f,
0.0f, 1.0f,
0.5f, 1.0f,
// For text coords in right face
0.0f, 0.0f,
0.0f, 0.5f,
// For text coords in left face
0.5f, 0.0f,
0.5f, 0.5f,
// For text coords in bottom face
0.5f, 0.0f,
1.0f, 0.0f,
0.5f, 0.5f,
1.0f, 0.5f,
};
int[] indices = new int[]{
// Front face
0, 1, 3, 3, 1, 2,
// Top Face
8, 10, 11, 9, 8, 11,
// Right face
12, 13, 7, 5, 12, 7,
// Left face
14, 15, 6, 4, 14, 6,
// Bottom face
16, 18, 19, 17, 16, 19,
// Back face
4, 6, 7, 5, 4, 7,};
Texture texture = scene.getTextureCache().createTexture("resources/models/cube/cube.png");
Material material = new Material();
material.setTexturePath(texture.getTexturePath());
List<Material> materialList = new ArrayList<>();
materialList.add(material);
Mesh mesh = new Mesh(positions, textCoords, indices);
material.getMeshList().add(mesh);
Model cubeModel = new Model("cube-model", materialList);
scene.addModel(cubeModel);
cubeEntity = new Entity("cube-entity", cubeModel.getId());
cubeEntity.setPosition(0, 0, -2);
scene.addEntity(cubeEntity);
}
```
If you run the sample, you will now see the cube rendered with the texture applied.
[Next chapter](../chapter-08/chapter-08.md)
================================================
FILE: chapter-08/chapter-08.md
================================================
# Chapter 08 - Camera
In this chapter, we will learn how to move inside a rendered 3D scene. This capability is like having a camera that can travel inside the 3D world and in fact, that's the term used to refer to it.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-08).
## Camera introduction
If you try to search for specific camera functions in OpenGL, you will discover that there is no camera concept; in other words, the camera is always fixed at the position (0, 0, 0), the center of the screen. So, what we will do is create a simulation that gives us the impression that we have a camera capable of moving inside the 3D scene. How do we achieve this? Well, if we cannot move the camera, then we must move all the objects contained in our 3D space at once. In other words, if we cannot move the camera, we will move the whole world.
Hence, suppose that we would like to move the camera position along the z axis from a starting position (Cx, Cy, Cz) to a position (Cx, Cy, Cz+dz) to get closer to the object, which is placed at the coordinates (Ox, Oy, Oz).

What we will actually do is move the object (all the objects in our 3D space indeed) in the opposite direction that the camera should move. Think about it like the objects being placed in a treadmill.
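Put as a small sanity check in numbers: moving the camera by $$(0, 0, dz)$$ while the objects stay still is equivalent to keeping the camera fixed and moving every object by $$(0, 0, -dz)$$, so an object at depth $$O_z$$ ends up at $$O_z - dz$$ relative to the camera. This is exactly the kind of transform the view matrix will encode.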

A camera can be displaced along the three axes (x, y and z) and can also rotate around them (roll, pitch and yaw).

So basically what we must do is to be able to move and rotate all of the objects of our 3D world. How are we going to do this? The answer is to apply another transformation that will translate all of the vertices of all of the objects in the opposite direction of the movement of the camera, and that will rotate them according to the camera rotation. This will be done, of course, with another matrix, the so-called view matrix. This matrix will first perform the translation and then the rotation around the axes.
Let's see how we can construct that matrix. If you remember from the transformations chapter, our transformation equation was like this:
$$
\begin{array}{lcl} Transf & = & \lbrack ProjMatrix \rbrack \cdot \lbrack TranslationMatrix \rbrack \cdot \lbrack RotationMatrix \rbrack \cdot \lbrack ScaleMatrix \rbrack \\ & = & \lbrack ProjMatrix \rbrack \cdot \lbrack WorldMatrix \rbrack \end{array}
$$
The view matrix should be applied before multiplying by the projection matrix, so our equation should now be like this:
$$
\begin{array}{lcl} Transf & = & \lbrack ProjMatrix \rbrack \cdot \lbrack ViewMatrix \rbrack \cdot \lbrack TranslationMatrix \rbrack \cdot \lbrack RotationMatrix \rbrack \cdot \lbrack ScaleMatrix \rbrack \\ & = & \lbrack ProjMatrix \rbrack \cdot \lbrack ViewMatrix \rbrack \cdot \lbrack WorldMatrix \rbrack \end{array}
$$
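Just as a sketch of what that combination means on the CPU side (the book performs this multiplication in the vertex shader, and the values below are purely illustrative), with JOML:
```java
import org.joml.Matrix4f;
import org.joml.Vector4f;

Matrix4f projMatrix  = new Matrix4f().perspective((float) Math.toRadians(60.0f), 1.0f, 0.01f, 1000.0f);
Matrix4f viewMatrix  = new Matrix4f().translate(0.0f, 0.0f, -2.0f); // a camera "placed" at z = +2
Matrix4f modelMatrix = new Matrix4f().rotateY((float) Math.toRadians(45.0f));

// Applied right to left: model (world) transform first, then view, then projection
Matrix4f transf = new Matrix4f(projMatrix).mul(viewMatrix).mul(modelMatrix);
Vector4f clipPos = transf.transform(new Vector4f(0.5f, 0.5f, 0.5f, 1.0f));
```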
## Camera implementation
So let’s start modifying our code to support a camera. First of all, we will create a new class called `Camera` which will hold the position and rotation state of our camera as well as its view matrix. The class is defined like this:
```java
package org.lwjglb.engine.scene;
import org.joml.*;
public class Camera {
private Vector3f direction;
private Vector3f position;
private Vector3f right;
private Vector2f rotation;
private Vector3f up;
private Matrix4f viewMatrix;
public Camera() {
direction = new Vector3f();
right = new Vector3f();
up = new Vector3f();
position = new Vector3f();
viewMatrix = new Matrix4f();
rotation = new Vector2f();
}
public void addRotation(float x, float y) {
rotation.add(x, y);
recalculate();
}
public Vector3f getPosition() {
return position;
}
public Matrix4f getViewMatrix() {
return viewMatrix;
}
public void moveBackwards(float inc) {
viewMatrix.positiveZ(direction).negate().mul(inc);
position.sub(direction);
recalculate();
}
public void moveDown(float inc) {
viewMatrix.positiveY(up).mul(inc);
position.sub(up);
recalculate();
}
public void moveForward(float inc) {
viewMatrix.positiveZ(direction).negate().mul(inc);
position.add(direction);
recalculate();
}
public void moveLeft(float inc) {
viewMatrix.positiveX(right).mul(inc);
position.sub(right);
recalculate();
}
public void moveRight(float inc) {
viewMatrix.positiveX(right).mul(inc);
position.add(right);
recalculate();
}
public void moveUp(float inc) {
viewMatrix.positiveY(up).mul(inc);
position.add(up);
recalculate();
}
private void recalculate() {
viewMatrix.identity()
.rotateX(rotation.x)
.rotateY(rotation.y)
.translate(-position.x, -position.y, -position.z);
}
public void setPosition(float x, float y, float z) {
position.set(x, y, z);
recalculate();
}
public void setRotation(float x, float y) {
rotation.set(x, y);
recalculate();
}
}
```
As you can see, besides rotation and position, we keep some vectors for the forward, up and right directions. This is because we are implementing a free-space movement camera. If we move after any rotation, we want to move in the direction the camera is pointing in, not along a predefined axis. We need those vectors to calculate where the next position will be. At the end, the state of the camera is stored in a 4x4 matrix, the view matrix. Any time we change position or rotation, we need to update it. As you can see, when updating the view matrix, we first need to do the rotation and then the translation. If we did the opposite, we would not be rotating around the camera position but around the coordinate origin.
The `Camera` class also provides methods to update the position when moving forward, up or to the right. In these methods, the view matrix is used to calculate where the forward, up or right vectors point according to the current state, and the position is updated accordingly. We use the fantastic JOML library to perform these calculations for us while keeping the code quite simple.
## Using the Camera
We will store a `Camera` instance in the `Scene` class, so let's go for the changes:
```java
public class Scene {
...
private Camera camera;
...
public Scene(int width, int height) {
...
camera = new Camera();
}
...
public Camera getCamera() {
return camera;
}
...
}
```
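With the camera stored in the `Scene`, giving it an initial position and orientation is straightforward. This is just an illustrative sketch (the chapter's code leaves the camera at the origin):
```java
// Place the camera slightly above and behind the origin, tilted a bit downwards
Camera camera = scene.getCamera();
camera.setPosition(0.0f, 1.0f, 3.0f);
camera.setRotation((float) Math.toRadians(15.0f), 0.0f);
```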
It would be nice to control the camera with our mouse. In order to do so, we will create a new class to handle mouse events so we can use them to update the camera rotation. Here's the code for that class.
```java
package org.lwjglb.engine;
import org.joml.Vector2f;
import static org.lwjgl.glfw.GLFW.*;
public class MouseInput {
private Vector2f currentPos;
private Vector2f displVec;
private boolean inWindow;
private boolean leftButtonPressed;
private Vector2f previousPos;
private boolean rightButtonPressed;
public MouseInput(long windowHandle) {
previousPos = new Vector2f(-1, -1);
currentPos = new Vector2f();
displVec = new Vector2f();
leftButtonPressed = false;
rightButtonPressed = false;
inWindow = false;
glfwSetCursorPosCallback(windowHandle, (handle, xpos, ypos) -> {
currentPos.x = (float) xpos;
currentPos.y = (float) ypos;
});
glfwSetCursorEnterCallback(windowHandle, (handle, entered) -> inWindow = entered);
glfwSetMouseButtonCallback(windowHandle, (handle, button, action, mode) -> {
leftButtonPressed = button == GLFW_MOUSE_BUTTON_1 && action == GLFW_PRESS;
rightButtonPressed = button == GLFW_MOUSE_BUTTON_2 && action == GLFW_PRESS;
});
}
public Vector2f getCurrentPos() {
return currentPos;
}
public Vector2f getDisplVec() {
return displVec;
}
public void input() {
displVec.x = 0;
displVec.y = 0;
if (previousPos.x > 0 && previousPos.y > 0 && inWindow) {
double deltax = currentPos.x - previousPos.x;
double deltay = currentPos.y - previousPos.y;
boolean rotateX = deltax != 0;
boolean rotateY = deltay != 0;
if (rotateX) {
displVec.y = (float) deltax;
}
if (rotateY) {
displVec.x = (float) deltay;
}
}
previousPos.x = currentPos.x;
previousPos.y = currentPos.y;
}
public boolean isLeftButtonPressed() {
return leftButtonPressed;
}
public boolean isRightButtonPressed() {
return rightButtonPressed;
}
}
```
The `MouseInput` class, in its constructor, registers a set of callbacks to process mouse events:
* `glfwSetCursorPosCallback`: Registers a callback that will be invoked when the mouse is moved.
* `glfwSetCursorEnterCallback`: Registers a callback that will be invoked when the mouse enters our window. We will be receiving mouse events even if the mouse is not in our window. We use this callback to track when the mouse is in our window.
* `glfwSetMouseButtonCallback`: Registers a callback that will be invoked when a mouse button is pressed.
The `MouseInput` class provides an input method that should be called when game input is processed. This method calculates the mouse displacement from the previous position and stores it in the `displVec` variable so it can be used by our game.
The `MouseInput` class will be instantiated in our `Window` class, which will also provide a getter to return its instance.
```java
public class Window {
...
private MouseInput mouseInput;
...
public Window(String title, WindowOptions opts, Callable<Void> resizeFunc) {
...
mouseInput = new MouseInput(windowHandle);
}
...
public MouseInput getMouseInput() {
return mouseInput;
}
...
}
```
In the `Engine` class, we will consume mouse input when handling regular input:
```java
public class Engine {
...
private void run() {
...
if (targetFps <= 0 || deltaFps >= 1) {
window.getMouseInput().input();
appLogic.input(window, scene, now - initialTime);
}
...
}
...
}
```
Now we can modify the vertex shader to use the `Camera`'s view matrix, which, as you may guess, will be passed as a uniform.
```glsl
#version 330
layout (location=0) in vec3 position;
layout (location=1) in vec2 texCoord;
out vec2 outTextCoord;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
void main()
{
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
outTextCoord = texCoord;
}
```
So the next step is to properly create the uniform in the `SceneRender` class and update its value in each `render` call:
```java
public class SceneRender {
...
private void createUniforms() {
...
uniformsMap.createUniform("viewMatrix");
...
}
...
public void render(Scene scene) {
...
uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
uniformsMap.setUniform("viewMatrix", scene.getCamera().getViewMatrix());
...
}
}
```
And that’s all, our base code supports the concept of a camera. Now we need to use it. We can change the way we handle the input and update the camera. We will set the following controls:
* Keys “A” and “D” to move the camera to the left and right (x axis), respectively.
* Keys “W” and “S” to move the camera forward and backwards (z axis), respectively.
* The up and down arrow keys to move the camera up and down (y axis), respectively.
We will use the mouse position to rotate the camera along the x and y axis when the right button of the mouse is pressed.
Now we are ready to update our `Main` class to process the keyboard and mouse input.
```java
public class Main implements IAppLogic {
private static final float MOUSE_SENSITIVITY = 0.1f;
private static final float MOVEMENT_SPEED = 0.005f;
...
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-08", new Window.WindowOptions(), main);
...
}
...
public void input(Window window, Scene scene, long diffTimeMillis) {
float move = diffTimeMillis * MOVEMENT_SPEED;
Camera camera = scene.getCamera();
if (window.isKeyPressed(GLFW_KEY_W)) {
camera.moveForward(move);
} else if (window.isKeyPressed(GLFW_KEY_S)) {
camera.moveBackwards(move);
}
if (window.isKeyPressed(GLFW_KEY_A)) {
camera.moveLeft(move);
} else if (window.isKeyPressed(GLFW_KEY_D)) {
camera.moveRight(move);
}
if (window.isKeyPressed(GLFW_KEY_UP)) {
camera.moveUp(move);
} else if (window.isKeyPressed(GLFW_KEY_DOWN)) {
camera.moveDown(move);
}
MouseInput mouseInput = window.getMouseInput();
if (mouseInput.isRightButtonPressed()) {
Vector2f displVec = mouseInput.getDisplVec();
camera.addRotation((float) Math.toRadians(-displVec.x * MOUSE_SENSITIVITY),
(float) Math.toRadians(-displVec.y * MOUSE_SENSITIVITY));
}
}
...
}
```
[Next chapter](../chapter-09/chapter-09.md)
================================================
FILE: chapter-09/chapter-09.md
================================================
# Chapter 09 - Loading more complex models: Assimp
The capability of loading complex 3D models in different formats is crucial in order to write a game. The task of writing parsers for some of them would require lots of work; even just supporting a single format can be time consuming. Fortunately, the [Assimp](http://assimp.sourceforge.net/) library can already be used to parse many common 3D formats. It’s a C/C++ library which can load static and animated models in a variety of formats. LWJGL provides the bindings to use it from Java code. In this chapter, we will explain how it can be used.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-09).
## Model loader
The first thing to do is add the Assimp Maven dependencies to the project pom.xml. We need to add compile time and runtime dependencies.
```xml
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-assimp</artifactId>
    <version>${lwjgl.version}</version>
</dependency>
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-assimp</artifactId>
    <version>${lwjgl.version}</version>
    <classifier>${native.target}</classifier>
    <scope>runtime</scope>
</dependency>
```
Once the dependencies have been set up, we will create a new class named `ModelLoader` that will be used to load models with Assimp. The class defines two public static methods:
```java
package org.lwjglb.engine.scene;
import org.joml.Vector4f;
import org.lwjgl.PointerBuffer;
import org.lwjgl.assimp.*;
import org.lwjgl.system.MemoryStack;
import org.lwjglb.engine.graph.*;
import java.io.File;
import java.nio.IntBuffer;
import java.util.*;
import static org.lwjgl.assimp.Assimp.*;
public class ModelLoader {
private ModelLoader() {
// Utility class
}
public static Model loadModel(String modelId, String modelPath, TextureCache textureCache) {
return loadModel(modelId, modelPath, textureCache, aiProcess_GenSmoothNormals | aiProcess_JoinIdenticalVertices |
aiProcess_Triangulate | aiProcess_FixInfacingNormals | aiProcess_CalcTangentSpace | aiProcess_LimitBoneWeights |
aiProcess_PreTransformVertices);
}
public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, int flags) {
...
}
...
}
```
Both methods have the following arguments:
* `modelId`: A unique identifier for the model to be loaded.
* `modelPath`: The path to the file where the model is located. This is a regular file path, not a CLASSPATH relative path, because Assimp may need to load additional files using the same base path as the `modelPath` \(for instance, material files for Wavefront OBJ models\). If you embed your resources inside a JAR file, Assimp will not be able to import them, so it must be a file system path. When loading textures we will use `modelPath` to get the base directory where the model is located in order to load textures (overriding whatever path is defined in the model). We do this because some models contain absolute paths pointing to local folders of the machine where the model was created which, obviously, are not accessible.
* `textureCache`: A reference to the texture cache to avoid loading the same texture multiple times.
The second method has an extra argument named `flags`. This parameter allows us to tune the loading process. The first method invokes the second one and passes some flags that are useful in most situations:
* `aiProcess_JoinIdenticalVertices`: This flag reduces the number of vertices that are used, identifying those that can be reused between faces.
* `aiProcess_Triangulate`: The model may use quads or other geometries to define its elements. Since we are only dealing with triangles, we must use this flag to split all the faces into triangles \(if needed\).
* `aiProcess_FixInfacingNormals`: This flag tries to reverse normals that may point inwards.
* `aiProcess_CalcTangentSpace`: We will use this parameter when implementing lights, but it basically calculates tangent and bitangents using normals information.
* `aiProcess_LimitBoneWeights`: We will use this parameter when implementing animations, but it basically limits the number of weights that affect a single vertex.
* `aiProcess_PreTransformVertices`: This flag performs some transformations over the loaded data so the model is placed at the origin and the coordinates are corrected to match the OpenGL coordinate system. If you have problems with models that appear rotated, make sure to use this flag. Important: do not use this flag if your model uses animations, because it will remove that information.
There are many other flags that can be used, you can check them in the LWJGL or Assimp documentation.
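For example, when we get to animated models we will not be able to use `aiProcess_PreTransformVertices`. In that case we could call the second method directly with a custom set of flags. This is just a sketch (the model path is hypothetical), assuming the Assimp constants are statically imported as in `ModelLoader`:
```java
// Same flags as the defaults but without aiProcess_PreTransformVertices,
// so animation data is preserved (hypothetical model path).
Model animModel = ModelLoader.loadModel("anim-model", "resources/models/sample/anim_model.fbx",
        scene.getTextureCache(),
        aiProcess_GenSmoothNormals | aiProcess_JoinIdenticalVertices | aiProcess_Triangulate |
        aiProcess_FixInfacingNormals | aiProcess_CalcTangentSpace | aiProcess_LimitBoneWeights);
```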
Let’s go back to the second method. The first thing we do is invoke the `aiImportFile` method to load the model with the selected flags.
```java
public class ModelLoader {
...
public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, int flags) {
File file = new File(modelPath);
if (!file.exists()) {
throw new RuntimeException("Model path does not exist [" + modelPath + "]");
}
String modelDir = file.getParent();
AIScene aiScene = aiImportFile(modelPath, flags);
if (aiScene == null) {
throw new RuntimeException("Error loading model [modelPath: " + modelPath + "]");
}
...
}
...
}
```
The rest of the code for the method is as follows:
```java
public class ModelLoader {
...
public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, int flags) {
...
int numMaterials = aiScene.mNumMaterials();
List<Material> materialList = new ArrayList<>();
for (int i = 0; i < numMaterials; i++) {
AIMaterial aiMaterial = AIMaterial.create(aiScene.mMaterials().get(i));
materialList.add(processMaterial(aiMaterial, modelDir, textureCache));
}
int numMeshes = aiScene.mNumMeshes();
PointerBuffer aiMeshes = aiScene.mMeshes();
Material defaultMaterial = new Material();
for (int i = 0; i < numMeshes; i++) {
AIMesh aiMesh = AIMesh.create(aiMeshes.get(i));
Mesh mesh = processMesh(aiMesh);
int materialIdx = aiMesh.mMaterialIndex();
Material material;
if (materialIdx >= 0 && materialIdx < materialList.size()) {
material = materialList.get(materialIdx);
} else {
material = defaultMaterial;
}
material.getMeshList().add(mesh);
}
if (!defaultMaterial.getMeshList().isEmpty()) {
materialList.add(defaultMaterial);
}
return new Model(modelId, materialList);
}
...
}
```
We process the materials contained in the model. Materials define color and textures to be used by the meshes that compose the model. Then we process the different meshes. A model can define several meshes and each of them can use one of the materials defined for the model. This is why we process meshes after materials and link to them, to avoid repeating binding calls when rendering.
If you examine the code above you may see that many of the calls to the Assimp library return `PointerBuffer` instances. You can think about them like C pointers: they just point to a memory region which contains data. You need to know in advance the type of data that they hold in order to process them. In the case of materials, we iterate over that buffer creating instances of the `AIMaterial` class. In the case of meshes, we iterate over the buffer that holds mesh data creating instances of the `AIMesh` class.
Let’s examine the `processMaterial` method.
```java
public class ModelLoader {
...
private static Material processMaterial(AIMaterial aiMaterial, String modelDir, TextureCache textureCache) {
Material material = new Material();
try (MemoryStack stack = MemoryStack.stackPush()) {
AIColor4D color = AIColor4D.create();
int result = aiGetMaterialColor(aiMaterial, AI_MATKEY_COLOR_DIFFUSE, aiTextureType_NONE, 0,
color);
if (result == aiReturn_SUCCESS) {
material.setDiffuseColor(new Vector4f(color.r(), color.g(), color.b(), color.a()));
}
AIString aiTexturePath = AIString.calloc(stack);
aiGetMaterialTexture(aiMaterial, aiTextureType_DIFFUSE, 0, aiTexturePath, (IntBuffer) null,
null, null, null, null, null);
String texturePath = aiTexturePath.dataString();
if (texturePath != null && texturePath.length() > 0) {
material.setTexturePath(modelDir + File.separator + new File(texturePath).getName());
textureCache.createTexture(material.getTexturePath());
material.setDiffuseColor(Material.DEFAULT_COLOR);
}
return material;
}
}
...
}
```
We first get the material color, in this case the diffuse color (by using the `AI_MATKEY_COLOR_DIFFUSE` key). There are many different types of colors which we will use when applying lights; for example, we have diffuse, ambient (for ambient light) and specular (for the specular factor of lights) colors. After that, we check if the material defines a texture or not. If so, that is, if there is a texture path, we store the texture path and delegate texture creation to the `TextureCache` class as in previous examples. In this case, if the material defines a texture we set the diffuse color to a default value, which is black. By doing this we will be able to use both values, diffuse color and texture, without checking if there is a texture or not. If the model does not define a texture we will use a default black texture which can be combined with the material color.
The `processMesh` method is defined like this.
```java
public class ModelLoader {
...
private static Mesh processMesh(AIMesh aiMesh) {
float[] vertices = processVertices(aiMesh);
float[] textCoords = processTextCoords(aiMesh);
int[] indices = processIndices(aiMesh);
// Texture coordinates may not have been populated. We need at least the empty slots
if (textCoords.length == 0) {
int numElements = (vertices.length / 3) * 2;
textCoords = new float[numElements];
}
return new Mesh(vertices, textCoords, indices);
}
...
}
```
A `Mesh` is defined by a set of vertex positions, texture coordinates and indices. Each of these elements is processed in the `processVertices`, `processTextCoords` and `processIndices` methods. After processing all that data we check if texture coordinates have been defined. If not, we just assign a set of texture coordinates set to 0.0f to ensure the consistency of the VAO.
The `processXXX` methods are very simple, they just invoke the corresponding method over the `AIMesh` instance that returns the desired data and store it into an array:
```java
public class ModelLoader {
...
private static int[] processIndices(AIMesh aiMesh) {
List<Integer> indices = new ArrayList<>();
int numFaces = aiMesh.mNumFaces();
AIFace.Buffer aiFaces = aiMesh.mFaces();
for (int i = 0; i < numFaces; i++) {
AIFace aiFace = aiFaces.get(i);
IntBuffer buffer = aiFace.mIndices();
while (buffer.remaining() > 0) {
indices.add(buffer.get());
}
}
return indices.stream().mapToInt(Integer::intValue).toArray();
}
...
private static float[] processTextCoords(AIMesh aiMesh) {
AIVector3D.Buffer buffer = aiMesh.mTextureCoords(0);
if (buffer == null) {
return new float[]{};
}
float[] data = new float[buffer.remaining() * 2];
int pos = 0;
while (buffer.remaining() > 0) {
AIVector3D textCoord = buffer.get();
data[pos++] = textCoord.x();
data[pos++] = 1 - textCoord.y();
}
return data;
}
private static float[] processVertices(AIMesh aiMesh) {
AIVector3D.Buffer buffer = aiMesh.mVertices();
float[] data = new float[buffer.remaining() * 3];
int pos = 0;
while (buffer.remaining() > 0) {
AIVector3D vertex = buffer.get();
data[pos++] = vertex.x();
data[pos++] = vertex.y();
data[pos++] = vertex.z();
}
return data;
}
}
```
You can see that we get a buffer of the vertices by invoking the `mVertices` method. We simply process it to create an array of floats that contains the vertex positions. Since the method returns just a buffer, you could pass that information directly to the OpenGL methods that create the VBOs. We do not do it that way for two reasons. The first one is to try to reduce as much as possible the modifications over the code base. The second one is that by loading into an intermediate structure you may be able to perform some post-processing tasks and even debug the loading process.
If you want a sample of a much more efficient approach, that is, directly passing the buffers to OpenGL, you can check this [sample](https://github.com/LWJGL/lwjgl3-demos/blob/master/src/org/lwjgl/demo/opengl/assimp/WavefrontObjDemo.java).
## Using the models
We need to modify the `Material` class to add support for diffuse color:
```java
public class Material {
public static final Vector4f DEFAULT_COLOR = new Vector4f(0.0f, 0.0f, 0.0f, 1.0f);
private Vector4f diffuseColor;
...
public Material() {
diffuseColor = DEFAULT_COLOR;
...
}
...
public Vector4f getDiffuseColor() {
return diffuseColor;
}
...
public void setDiffuseColor(Vector4f diffuseColor) {
this.diffuseColor = diffuseColor;
}
...
}
```
In the `SceneRender` class, we need to create, and properly set up while rendering, the material diffuse color:
```java
public class SceneRender {
...
private void createUniforms() {
...
uniformsMap.createUniform("material.diffuse");
}
public void render(Scene scene) {
...
for (Model model : models) {
List<Entity> entities = model.getEntitiesList();
for (Material material : model.getMaterialList()) {
uniformsMap.setUniform("material.diffuse", material.getDiffuseColor());
...
}
}
...
}
...
}
```
As you can see we are using a weird name for the uniform with a `.` in the name. This is because we will use structures in the shader. With structures we can group several types into a single combined one. You can see this in the fragment shader:
```glsl
#version 330
in vec2 outTextCoord;
out vec4 fragColor;
struct Material
{
vec4 diffuse;
};
uniform sampler2D txtSampler;
uniform Material material;
void main()
{
fragColor = texture(txtSampler, outTextCoord) + material.diffuse;
}
```
We will need also to add a new method to the `UniformsMap` class to add support for passing `Vector4f` values
```java
public class UniformsMap {
...
public void setUniform(String uniformName, Vector4f value) {
glUniform4f(getUniformLocation(uniformName), value.x, value.y, value.z, value.w);
}
}
```
Finally, we need to modify the `Main` class to use the `ModelLoader` class to load models:
```java
public class Main implements IAppLogic {
...
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-09", new Window.WindowOptions(), main);
...
}
...
public void init(Window window, Scene scene, Render render) {
Model cubeModel = ModelLoader.loadModel("cube-model", "resources/models/cube/cube.obj",
scene.getTextureCache());
scene.addModel(cubeModel);
cubeEntity = new Entity("cube-entity", cubeModel.getId());
cubeEntity.setPosition(0, 0, -2);
scene.addEntity(cubeEntity);
}
...
}
```
As you can see, the `init` method has been simplified a lot; there is no more model data embedded in the code. Now we are using a cube model in Wavefront OBJ format. You can locate the model files in the `resources/models/cube` folder, where you will find the following files:
* `cube.obj`: The main model file. It is in fact a text-based format, so you can open it and see how vertices, indices and texture coordinates are defined and glued together by defining faces. It also contains a reference to a material file.
* `cube.mtl`: The material file, it defines colors and textures.
* `cube.png`: The texture file of the model.
Finally, we will add another feature to optimize the rendering: we will reduce the amount of data that is being rendered by applying face culling. As you well know, a cube is made of six faces, yet we are rendering all six faces even when they are not visible. You can check this if you zoom inside a cube: you will see its interior.
Faces that cannot be seen should be discarded immediately, and this is what face culling does. In fact, for a cube you can only see 3 faces at the same time, so we can discard half of the faces just by applying face culling \(this will only be valid if your game does not require you to dive into the inner side of a model\).
For every triangle, face culling checks if it's facing towards us and discards the ones that are not facing that direction. But, how do we know if a triangle is facing towards us or not? Well, the way that OpenGL does this is by the winding order of the vertices that compose a triangle.
Remember from the first chapters that we may define the vertices of a triangle in clockwise or counter-clockwise order. In OpenGL, by default, triangles that are in counter-clockwise order are facing towards the viewer and triangles that are in clockwise order are facing backwards. The key thing here is that this order is checked while rendering, taking into consideration the point of view. So a triangle that has been defined in counter-clockwise order can be interpreted, at rendering time, as being defined clockwise because of the point of view.
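OpenGL lets us configure both sides of this check. The following is just a sketch of the related state; the values shown are the OpenGL defaults, which are what the book relies on:
```java
// Counter-clockwise triangles are considered front-facing (the OpenGL default)
glFrontFace(GL_CCW);
// Triangles facing away from the viewer are the ones that get discarded
glCullFace(GL_BACK);
```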
We will enable face culling in the `Render` class:
```java
public class Render {
...
public Render() {
...
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
...
}
...
}
```
The first line will enable face culling and the second line states that faces that are facing backwards should be culled \(removed\).
If you run the sample you will see the same result as in previous chapter, however, if you zoom in into the cube, inner faces will not be rendered. You can modify this sample to load more complex models.
[Next chapter](../chapter-10/chapter-10.md)
================================================
FILE: chapter-10/chapter-10.md
================================================
# Chapter 10 - GUI (Imgui)
[Dear ImGui](https://github.com/ocornut/imgui) is a user interface library which can use several backends such as OpenGL and Vulkan. We will use it to display GUI controls or to develop HUDs. It provides multiple widgets and its look and feel is easily customizable.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-10).
## Imgui integration
The first thing to do is add the Java Imgui wrapper Maven dependencies to the project pom.xml. We need to add compile time and runtime dependencies.
```xml
<dependency>
    <groupId>io.github.spair</groupId>
    <artifactId>imgui-java-binding</artifactId>
    <version>${imgui-java.version}</version>
</dependency>
<dependency>
    <groupId>io.github.spair</groupId>
    <artifactId>imgui-java-${native.target}</artifactId>
    <version>${imgui-java.version}</version>
    <scope>runtime</scope>
</dependency>
```
With Imgui we can render windows, panels, etc. like we render any other 3D model, but using only 2D shapes. We set the controls that we want to use and Imgui translates that to a set of vertex buffers that we can render using shaders. This is why it can be used with any backend.
For each vertex, Imgui defines its coordinates (2D coordinates), texture coordinates and the associated color. Therefore, we need to create a new class to model Gui meshes and to create the associated VAO and VBO. The class, named `GuiMesh` is defined like this.
```java
package org.lwjglb.engine.graph;
import imgui.ImDrawData;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;
public class GuiMesh {
private int indicesVBO;
private int vaoId;
private int verticesVBO;
public GuiMesh() {
vaoId = glGenVertexArrays();
glBindVertexArray(vaoId);
// Single VBO
verticesVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, verticesVBO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, false, ImDrawData.sizeOfImDrawVert(), 0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, false, ImDrawData.sizeOfImDrawVert(), 8);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, true, ImDrawData.sizeOfImDrawVert(), 16);
indicesVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
public void cleanup() {
glDeleteBuffers(indicesVBO);
glDeleteBuffers(verticesVBO);
glDeleteVertexArrays(vaoId);
}
public int getIndicesVBO() {
return indicesVBO;
}
public int getVaoId() {
return vaoId;
}
public int getVerticesVBO() {
return verticesVBO;
}
}
```
As you can see, we use a single VBO but we define several attributes for the positions, texture coordinates and color. In this case, we do not populate the buffers with data; we will see later on how we will use them.
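For reference, the per-vertex layout that those attribute pointers assume (ImGui's `ImDrawVert` structure, whose size is returned by `ImDrawData.sizeOfImDrawVert()`) is the following; the byte offsets match the ones used above:
| Attribute | Components | Offset (bytes) |
| --------- | ---------- | -------------- |
| Position | 2 floats | 0 |
| Texture coordinates | 2 floats | 8 |
| Color | 4 unsigned bytes | 16 |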
We need also to let the application create GUI controls and react to the user input. In order to support this, we will define a new interface named `IGuiInstance` which is defined like this:
```java
package org.lwjglb.engine;
import org.lwjglb.engine.scene.Scene;
public interface IGuiInstance {
void drawGui();
boolean handleGuiInput(Scene scene, Window window);
}
```
The method `drawGui` will be used to construct the GUI; this is where we will define the windows and widgets that will be used to construct the GUI meshes. We will use the `handleGuiInput` method to process input events in the GUI. It returns a boolean value stating whether the input has been processed by the GUI or not. For example, if we display an overlapping window we may not want to keep processing keystrokes in the game logic; you can use the return value to control that. We will store the specific implementation of the `IGuiInstance` interface in the `Scene` class.
```java
public class Scene {
...
private IGuiInstance guiInstance;
...
public IGuiInstance getGuiInstance() {
return guiInstance;
}
...
public void setGuiInstance(IGuiInstance guiInstance) {
this.guiInstance = guiInstance;
}
}
```
The next step will be to create a new class to render our GUI, which will be named `GuiRender` and starts like this:
```java
package org.lwjglb.engine.graph;
import imgui.*;
import imgui.type.ImInt;
import org.joml.Vector2f;
import org.lwjgl.glfw.GLFWKeyCallback;
import org.lwjglb.engine.*;
import org.lwjglb.engine.scene.Scene;
import java.nio.ByteBuffer;
import java.util.*;
import static org.lwjgl.opengl.GL32.*;
import static org.lwjgl.glfw.GLFW.*;
public class GuiRender {
private GuiMesh guiMesh;
private GLFWKeyCallback prevKeyCallBack;
private Vector2f scale;
private ShaderProgram shaderProgram;
private Texture texture;
private UniformsMap uniformsMap;
public GuiRender(Window window) {
List<ShaderProgram.ShaderModuleData> shaderModuleDataList = new ArrayList<>();
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/gui.vert", GL_VERTEX_SHADER));
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/gui.frag", GL_FRAGMENT_SHADER));
shaderProgram = new ShaderProgram(shaderModuleDataList);
createUniforms();
createUIResources(window);
setupKeyCallBack(window);
}
public void cleanup() {
shaderProgram.cleanup();
texture.cleanup();
if (prevKeyCallBack != null) {
prevKeyCallBack.free();
}
}
...
}
```
As you can see, most of the stuff here will be very familiar to you, we just set up the shaders and the uniforms. Since we will need to set up a custom key callback to handle ImGui input text controls, we need to keep track of a previous key callback in `prevKeyCallBack` to properly use it and free it. In addition to that, there is a new method called `createUIResources` which is defined like this:
```java
public class GuiRender {
...
private void createUIResources(Window window) {
ImGui.createContext();
ImGuiIO imGuiIO = ImGui.getIO();
imGuiIO.setIniFilename(null);
imGuiIO.setDisplaySize(window.getWidth(), window.getHeight());
ImFontAtlas fontAtlas = ImGui.getIO().getFonts();
ImInt width = new ImInt();
ImInt height = new ImInt();
ByteBuffer buf = fontAtlas.getTexDataAsRGBA32(width, height);
texture = new Texture(width.get(), height.get(), buf);
guiMesh = new GuiMesh();
}
...
}
```
The method above is where we set up Imgui: we first create a context (required to perform any operation) and set the display size to the window size. Imgui stores its state in an ini file; since we do not want that state to persist between runs, we set the ini filename to null. The next step is to initialize the font atlas and set up a texture which will be used in the shaders so we can properly render text, etc. The final step is to create the `GuiMesh` instance.
The `createUniforms` method just creates a single uniform, a two-component float vector for the scale (we will see later on how it will be used).
```java
public class GuiRender {
...
private void createUniforms() {
uniformsMap = new UniformsMap(shaderProgram.getProgramId());
uniformsMap.createUniform("scale");
scale = new Vector2f();
}
...
}
```
The `setupKeyCallBack` method is required to properly process key events in Imgui and is defined like this:
```java
public class GuiRender {
...
private void setupKeyCallBack(Window window) {
prevKeyCallBack = glfwSetKeyCallback(window.getWindowHandle(), (handle, key, scancode, action, mods) -> {
window.keyCallBack(key, action);
ImGuiIO io = ImGui.getIO();
if (!io.getWantCaptureKeyboard()) {
return;
}
if (action == GLFW_PRESS) {
io.addKeyEvent(getImKey(key), true);
} else if (action == GLFW_RELEASE) {
io.addKeyEvent(getImKey(key), false);
}
}
);
glfwSetCharCallback(window.getWindowHandle(), (handle, c) -> {
ImGuiIO io = ImGui.getIO();
if (!io.getWantCaptureKeyboard()) {
return;
}
io.addInputCharacter(c);
});
}
private static int getImKey(int key) {
return switch (key) {
case GLFW_KEY_TAB -> ImGuiKey.Tab;
case GLFW_KEY_LEFT -> ImGuiKey.LeftArrow;
case GLFW_KEY_RIGHT -> ImGuiKey.RightArrow;
case GLFW_KEY_UP -> ImGuiKey.UpArrow;
case GLFW_KEY_DOWN -> ImGuiKey.DownArrow;
case GLFW_KEY_PAGE_UP -> ImGuiKey.PageUp;
case GLFW_KEY_PAGE_DOWN -> ImGuiKey.PageDown;
case GLFW_KEY_HOME -> ImGuiKey.Home;
case GLFW_KEY_END -> ImGuiKey.End;
case GLFW_KEY_INSERT -> ImGuiKey.Insert;
case GLFW_KEY_DELETE -> ImGuiKey.Delete;
case GLFW_KEY_BACKSPACE -> ImGuiKey.Backspace;
case GLFW_KEY_SPACE -> ImGuiKey.Space;
case GLFW_KEY_ENTER -> ImGuiKey.Enter;
case GLFW_KEY_ESCAPE -> ImGuiKey.Escape;
case GLFW_KEY_APOSTROPHE -> ImGuiKey.Apostrophe;
case GLFW_KEY_COMMA -> ImGuiKey.Comma;
case GLFW_KEY_MINUS -> ImGuiKey.Minus;
case GLFW_KEY_PERIOD -> ImGuiKey.Period;
case GLFW_KEY_SLASH -> ImGuiKey.Slash;
case GLFW_KEY_SEMICOLON -> ImGuiKey.Semicolon;
case GLFW_KEY_EQUAL -> ImGuiKey.Equal;
case GLFW_KEY_LEFT_BRACKET -> ImGuiKey.LeftBracket;
case GLFW_KEY_BACKSLASH -> ImGuiKey.Backslash;
case GLFW_KEY_RIGHT_BRACKET -> ImGuiKey.RightBracket;
case GLFW_KEY_GRAVE_ACCENT -> ImGuiKey.GraveAccent;
case GLFW_KEY_CAPS_LOCK -> ImGuiKey.CapsLock;
case GLFW_KEY_SCROLL_LOCK -> ImGuiKey.ScrollLock;
case GLFW_KEY_NUM_LOCK -> ImGuiKey.NumLock;
case GLFW_KEY_PRINT_SCREEN -> ImGuiKey.PrintScreen;
case GLFW_KEY_PAUSE -> ImGuiKey.Pause;
case GLFW_KEY_KP_0 -> ImGuiKey.Keypad0;
case GLFW_KEY_KP_1 -> ImGuiKey.Keypad1;
case GLFW_KEY_KP_2 -> ImGuiKey.Keypad2;
case GLFW_KEY_KP_3 -> ImGuiKey.Keypad3;
case GLFW_KEY_KP_4 -> ImGuiKey.Keypad4;
case GLFW_KEY_KP_5 -> ImGuiKey.Keypad5;
case GLFW_KEY_KP_6 -> ImGuiKey.Keypad6;
case GLFW_KEY_KP_7 -> ImGuiKey.Keypad7;
case GLFW_KEY_KP_8 -> ImGuiKey.Keypad8;
case GLFW_KEY_KP_9 -> ImGuiKey.Keypad9;
case GLFW_KEY_KP_DECIMAL -> ImGuiKey.KeypadDecimal;
case GLFW_KEY_KP_DIVIDE -> ImGuiKey.KeypadDivide;
case GLFW_KEY_KP_MULTIPLY -> ImGuiKey.KeypadMultiply;
case GLFW_KEY_KP_SUBTRACT -> ImGuiKey.KeypadSubtract;
case GLFW_KEY_KP_ADD -> ImGuiKey.KeypadAdd;
case GLFW_KEY_KP_ENTER -> ImGuiKey.KeypadEnter;
case GLFW_KEY_KP_EQUAL -> ImGuiKey.KeypadEqual;
case GLFW_KEY_LEFT_SHIFT -> ImGuiKey.LeftShift;
case GLFW_KEY_LEFT_CONTROL -> ImGuiKey.LeftCtrl;
case GLFW_KEY_LEFT_ALT -> ImGuiKey.LeftAlt;
case GLFW_KEY_LEFT_SUPER -> ImGuiKey.LeftSuper;
case GLFW_KEY_RIGHT_SHIFT -> ImGuiKey.RightShift;
case GLFW_KEY_RIGHT_CONTROL -> ImGuiKey.RightCtrl;
case GLFW_KEY_RIGHT_ALT -> ImGuiKey.RightAlt;
case GLFW_KEY_RIGHT_SUPER -> ImGuiKey.RightSuper;
case GLFW_KEY_MENU -> ImGuiKey.Menu;
case GLFW_KEY_0 -> ImGuiKey._0;
case GLFW_KEY_1 -> ImGuiKey._1;
case GLFW_KEY_2 -> ImGuiKey._2;
case GLFW_KEY_3 -> ImGuiKey._3;
case GLFW_KEY_4 -> ImGuiKey._4;
case GLFW_KEY_5 -> ImGuiKey._5;
case GLFW_KEY_6 -> ImGuiKey._6;
case GLFW_KEY_7 -> ImGuiKey._7;
case GLFW_KEY_8 -> ImGuiKey._8;
case GLFW_KEY_9 -> ImGuiKey._9;
case GLFW_KEY_A -> ImGuiKey.A;
case GLFW_KEY_B -> ImGuiKey.B;
case GLFW_KEY_C -> ImGuiKey.C;
case GLFW_KEY_D -> ImGuiKey.D;
case GLFW_KEY_E -> ImGuiKey.E;
case GLFW_KEY_F -> ImGuiKey.F;
case GLFW_KEY_G -> ImGuiKey.G;
case GLFW_KEY_H -> ImGuiKey.H;
case GLFW_KEY_I -> ImGuiKey.I;
case GLFW_KEY_J -> ImGuiKey.J;
case GLFW_KEY_K -> ImGuiKey.K;
case GLFW_KEY_L -> ImGuiKey.L;
case GLFW_KEY_M -> ImGuiKey.M;
case GLFW_KEY_N -> ImGuiKey.N;
case GLFW_KEY_O -> ImGuiKey.O;
case GLFW_KEY_P -> ImGuiKey.P;
case GLFW_KEY_Q -> ImGuiKey.Q;
case GLFW_KEY_R -> ImGuiKey.R;
case GLFW_KEY_S -> ImGuiKey.S;
case GLFW_KEY_T -> ImGuiKey.T;
case GLFW_KEY_U -> ImGuiKey.U;
case GLFW_KEY_V -> ImGuiKey.V;
case GLFW_KEY_W -> ImGuiKey.W;
case GLFW_KEY_X -> ImGuiKey.X;
case GLFW_KEY_Y -> ImGuiKey.Y;
case GLFW_KEY_Z -> ImGuiKey.Z;
case GLFW_KEY_F1 -> ImGuiKey.F1;
case GLFW_KEY_F2 -> ImGuiKey.F2;
case GLFW_KEY_F3 -> ImGuiKey.F3;
case GLFW_KEY_F4 -> ImGuiKey.F4;
case GLFW_KEY_F5 -> ImGuiKey.F5;
case GLFW_KEY_F6 -> ImGuiKey.F6;
case GLFW_KEY_F7 -> ImGuiKey.F7;
case GLFW_KEY_F8 -> ImGuiKey.F8;
case GLFW_KEY_F9 -> ImGuiKey.F9;
case GLFW_KEY_F10 -> ImGuiKey.F10;
case GLFW_KEY_F11 -> ImGuiKey.F11;
case GLFW_KEY_F12 -> ImGuiKey.F12;
default -> ImGuiKey.None;
};
}
...
}
```
First we need to set up a GLFW key callback which first calls the `Window` key callback to handle key events and then translates GLFW key codes to Imgui ones. When setting a callback, GLFW returns a reference to the previously established one, so callbacks can be chained; here we store it in `prevKeyCallBack` so it can be freed later, and it could also be invoked when ImGui does not handle the event (see the sketch below). We are not using char callbacks in other parts of the code, but if you do, remember to apply that chaining scheme as well. After that, we set the state of Imgui according to key pressed or released events. Finally, we need to set up a char callback so text input widgets can process those events.
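Chaining is not shown in the callback above. A sketch of how the stored `prevKeyCallBack` could be forwarded to, for key events that ImGui does not want to capture, would look like this (this fragment would live inside the lambda shown above, replacing its early return):
```java
// Sketch: forward key events that ImGui does not want to the previously registered callback
if (!io.getWantCaptureKeyboard()) {
    if (prevKeyCallBack != null) {
        prevKeyCallBack.invoke(handle, key, scancode, action, mods);
    }
    return;
}
```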
Let's view the `render` method now:
```java
public class GuiRender {
...
public void render(Scene scene) {
IGuiInstance guiInstance = scene.getGuiInstance();
if (guiInstance == null) {
return;
}
guiInstance.drawGui();
shaderProgram.bind();
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glBindVertexArray(guiMesh.getVaoId());
glBindBuffer(GL_ARRAY_BUFFER, guiMesh.getVerticesVBO());
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, guiMesh.getIndicesVBO());
ImGuiIO io = ImGui.getIO();
scale.x = 2.0f / io.getDisplaySizeX();
scale.y = -2.0f / io.getDisplaySizeY();
uniformsMap.setUniform("scale", scale);
ImDrawData drawData = ImGui.getDrawData();
int numLists = drawData.getCmdListsCount();
for (int i = 0; i < numLists; i++) {
glBufferData(GL_ARRAY_BUFFER, drawData.getCmdListVtxBufferData(i), GL_STREAM_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, drawData.getCmdListIdxBufferData(i), GL_STREAM_DRAW);
int numCmds = drawData.getCmdListCmdBufferSize(i);
for (int j = 0; j < numCmds; j++) {
final int elemCount = drawData.getCmdListCmdBufferElemCount(i, j);
final int idxBufferOffset = drawData.getCmdListCmdBufferIdxOffset(i, j);
final int indices = idxBufferOffset * ImDrawData.sizeOfImDrawIdx();
texture.bind();
glDrawElements(GL_TRIANGLES, elemCount, GL_UNSIGNED_SHORT, indices);
}
}
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glDisable(GL_BLEND);
}
...
}
```
The first thing we do is check whether an implementation of the `IGuiInstance` interface has been set. If there is no instance, we just return; there is no need to render anything. After that we call the `drawGui` method; that is, in each render call we invoke that method so Imgui can update its state and generate the proper vertex data. After binding the shader we enable blending, which will allow us to use transparencies. Just by enabling blending, transparencies still will not show up; we also need to instruct OpenGL about how the blending will be applied. This is done here through the `glBlendFuncSeparate` function (a variant of `glBlendFunc`). You can check an excellent explanation about the details of the different functions that can be applied [here](https://learnopengl.com/Advanced-OpenGL/Blending).
After that, we need to disable depth testing and face culling for Imgui to work properly. Then, we bind the gui mesh which defines the structure of the data and bind the data and indices buffers. Imgui uses screen coordinates to generate the vertices data, that is `x` values cover the `[0, screen width]` range and `y` values cover the `[0, screen height]`. We will use the `scale` uniform to map from that coordinate system to the `[-1, 1]` range of OpenGL's clip space.
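As a quick check of that mapping, call the display size $$w \times h$$. With `scale = (2/w, -2/h)` (the values set in the `render` method above) and the constant offset $$(-1, 1)$$ added in the GUI vertex shader shown below, a screen position $$(x, y)$$ becomes:
$$
x_{ndc} = \frac{2x}{w} - 1 \qquad y_{ndc} = -\frac{2y}{h} + 1
$$
So the top-left corner $$(0, 0)$$ maps to $$(-1, 1)$$ and the bottom-right corner $$(w, h)$$ maps to $$(1, -1)$$, flipping the y axis as OpenGL's clip space requires.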
After that, we retrieve the data generated by Imgui to render the GUI. Imgui organizes the data in what it calls command lists. Each command list has a buffer where it stores the vertex and index data, so we first upload that data to the GPU by calling `glBufferData`. Each command list also defines a set of commands which we use to generate the draw calls. Each command stores the number of elements to be drawn and the offset to be applied to the index buffer of the command list. When we have drawn all the elements we can re-enable the depth test.
Finally, we need to add a `resize` method which will be called any time the window is resized to adjust Imgui display size.
```java
public class GuiRender {
...
public void resize(int width, int height) {
ImGuiIO imGuiIO = ImGui.getIO();
imGuiIO.setDisplaySize(width, height);
}
}
```
We need to update the `UniformsMap` class to add support for 2D vectors:
```java
public class UniformsMap {
...
public void setUniform(String uniformName, Vector2f value) {
glUniform2f(getUniformLocation(uniformName), value.x, value.y);
}
}
```
The vertex shader used for rendering the GUI is quite simple (`gui.vert`), we just transform the coordinates so they are in the `[-1, 1]` range and output the texture coordinates and color so they can be used in the fragment shader:
```glsl
#version 330
layout (location=0) in vec2 inPos;
layout (location=1) in vec2 inTextCoords;
layout (location=2) in vec4 inColor;
out vec2 frgTextCoords;
out vec4 frgColor;
uniform vec2 scale;
void main()
{
frgTextCoords = inTextCoords;
frgColor = inColor;
gl_Position = vec4(inPos * scale + vec2(-1.0, 1.0), 0.0, 1.0);
}
```
In the fragment shader (`gui.frag`) we just output the combination of the vertex color and the texture color associated to its texture coordinates:
```glsl
#version 330
in vec2 frgTextCoords;
in vec4 frgColor;
uniform sampler2D txtSampler;
out vec4 outColor;
void main()
{
outColor = frgColor * texture(txtSampler, frgTextCoords);
}
```
## Putting it all together
Now we need to glue all the previous pieces together to render the GUI. We will start by using the new `GuiRender` class in the `Render` one.
```java
public class Render {
...
private GuiRender guiRender;
...
public Render(Window window) {
...
guiRender = new GuiRender(window);
}
public void cleanup() {
...
guiRender.cleanup();
}
public void render(Window window, Scene scene) {
...
guiRender.render(scene);
}
public void resize(int width, int height) {
guiRender.resize(width, height);
}
}
```
We also need to modify the `Engine` class to include `IGuiInstance` in the update loop and to use its return value to indicate if input has been consumed or not.
```java
public class Engine {
...
public Engine(String windowTitle, Window.WindowOptions opts, IAppLogic appLogic) {
...
render = new Render(window);
...
}
...
private void resize() {
int width = window.getWidth();
int height = window.getHeight();
scene.resize(width, height);
render.resize(width, height);
}
private void run() {
...
IGuiInstance iGuiInstance = scene.getGuiInstance();
while (running && !window.windowShouldClose()) {
...
if (targetFps <= 0 || deltaFps >= 1) {
window.getMouseInput().input();
boolean inputConsumed = iGuiInstance != null && iGuiInstance.handleGuiInput(scene, window);
appLogic.input(window, scene, now - initialTime, inputConsumed);
}
...
}
...
}
...
}
```
We need also to update the `IAppLogic` interface to use the input consumed return value.
```java
public interface IAppLogic {
...
void input(Window window, Scene scene, long diffTimeMillis, boolean inputConsumed);
...
}
```
And finally, we will implement the `IGuiInstance` in the `Main` class:
```java
public class Main implements IAppLogic, IGuiInstance {
...
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-10", new Window.WindowOptions(), main);
...
}
...
@Override
public void drawGui() {
ImGui.newFrame();
ImGui.setNextWindowPos(0, 0, ImGuiCond.Always);
ImGui.showDemoWindow();
ImGui.endFrame();
ImGui.render();
}
@Override
public boolean handleGuiInput(Scene scene, Window window) {
ImGuiIO imGuiIO = ImGui.getIO();
MouseInput mouseInput = window.getMouseInput();
Vector2f mousePos = mouseInput.getCurrentPos();
imGuiIO.addMousePosEvent(mousePos.x, mousePos.y);
imGuiIO.addMouseButtonEvent(0, mouseInput.isLeftButtonPressed());
imGuiIO.addMouseButtonEvent(1, mouseInput.isRightButtonPressed());
return imGuiIO.getWantCaptureMouse() || imGuiIO.getWantCaptureKeyboard();
}
...
public void input(Window window, Scene scene, long diffTimeMillis, boolean inputConsumed) {
if (inputConsumed) {
return;
}
...
}
}
```
In the `drawGui` method we just setup a new frame, the window position and just invoke the `showDemoWindow` to generate Imgui's demo window. After ending the frame it is very important to call the `render` this is what will generate the set of commands upon the GUI structure defined previously. The `handleGuiInput` first gets mouse position and updates Imgui's IO class with that information and mouse button status. We also return a boolean that indicates that input has been capture by Imgui. Finally, we just need to update the `input` method to receive that flag. In this specific case, if input has already been consumed by the Gui, we just return.
With all those changes you will be able to see Imgui's demo window overlapping the rotating cube. You can interact with the different widgets and panels to get a glimpse of the capabilities of Imgui.

[Next chapter](../chapter-11/chapter-11.md)
================================================
FILE: chapter-11/chapter-11.md
================================================
# Chapter 11 - Lights
In this chapter, we will learn how to add light to our 3D game engine. We will not implement a physically perfect light model because, setting aside its complexity, it would require a tremendous amount of computing resources. Instead, we will implement an approximation that provides decent results, using an algorithm named Phong shading (developed by Bui Tuong Phong). Another important thing to point out is that we will only model lights; we won’t model the shadows that those lights should generate (this will be done in another chapter).
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-11).
## Some concepts
Before we start, let us define some light types:
* **Point light**: This type of light models a light source that’s emitted uniformly from a point in space in all directions.
* **Spot light**: This type of light models a light source that’s emitted from a point in space, but instead of emitting in all directions is restricted to a cone.
* **Directional light**: This type of light models the light that we receive from the sun; all the objects in the 3D space are hit by parallel ray lights coming from a specific direction. No matter if the object is close or far away, all the ray lights impact the objects with the same angle.
* **Ambient light**: This type of light comes from everywhere in the space and illuminates all the objects in the same way.
Thus, to model light, we need to take into consideration the type of light, plus its position and some other parameters like its color. Of course, we must also consider the way that objects, impacted by ray lights, absorb and reflect light.
The Phong shading algorithm will model the effects of light for each point in our model, that is, for every vertex. This is why it’s called a local illumination simulation, and this is the reason why this algorithm will not calculate shadows: it will just calculate the light to be applied to every vertex without taking into consideration whether the vertex is behind an object that blocks the light. We will overcome this drawback in later chapters. Because of that, though, it's a simple and fast algorithm that provides very good results. We will use a simplified version here that does not model materials in depth.
The Phong algorithm considers three components for lighting:
* **Ambient light**: models light that comes from everywhere, this will serve us to illuminate (with the required intensity) the areas that are not hit by any light, it’s like a background light.
* **Diffuse reflectance**: takes into consideration that surfaces that are facing the light source are brighter.
* **Specular reflectance**: models how light reflects on polished or metallic surfaces.
At the end, what we want to obtain is a factor that, multiplied by the color assigned to a fragment, will set that color brighter or darker depending on the light it receives. Let’s name our components as $$A$$ for ambient, $$D$$ for diffuse, and $$S$$ for specular. That factor will be the addition of those components:
$$L = A + D + S$$
In fact, those components are indeed colors, that is, the color components that each light component contributes to. This is due to the fact that light components will not only provide a degree of intensity, but they can also modify the color of the model. In our fragment shader, we just need to multiply that light color by the original fragment color (obtained from a texture or a base color).
We can also assign different colors to the same material, which will be used in the ambient, diffuse, and specular components. Hence, these components will be modulated by the colors associated with the material. If the material has a texture, we will simply use the same texture for all of the components.
So the final color for a non-textured material will be: $$L = A * ambientColour + D * diffuseColour + S * specularColour$$.
And the final color for a textured material will be:
$$L = A * textureColour + D * textureColour + S * textureColour$$
## Normals
Normals are a key element when working with lights. Let’s define it first. The normal of a plane is a vector perpendicular to that plane, which has a length equal to one.
As you can see in the figure above, a plane can have two normals. Which one should we use? Normals in 3D graphics are used for lighting, so we should choose the normal that is oriented towards the source of light. In other words, we should choose the normal that points out from the external face of our model.
When we have a 3D model, it is composed of polygons, triangles in our case. Each triangle is composed of three vertices. The normal vector for a triangle is the vector perpendicular to the triangle's surface which has a length equal to one.
A vertex normal is associated with a specific vertex and is the combination of the normals of the surrounding triangles (of course, its length is equal to one). Here you can see the vertex normals of a 3D mesh (taken from [Wikipedia](https://en.wikipedia.org/wiki/Vertex_normal#/media/File:Vertex_normals.png))
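Just to illustrate the idea, here is a minimal sketch (using JOML, which the samples already depend on) of how a face normal can be derived from a triangle's vertices; a vertex normal would then be the normalized sum of the face normals of the triangles that share that vertex. Keep in mind that the models we load already provide normals, so this is only illustrative:
```java
import org.joml.Vector3f;

public class FaceNormalDemo {
    // Face normal of a triangle (v0, v1, v2), assuming counter-clockwise winding
    public static Vector3f faceNormal(Vector3f v0, Vector3f v1, Vector3f v2) {
        Vector3f edge1 = new Vector3f(v1).sub(v0);
        Vector3f edge2 = new Vector3f(v2).sub(v0);
        // The cross product of two edges is perpendicular to the triangle; normalize it to length one
        return edge1.cross(edge2).normalize();
    }
}
```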
## Diffuse reflectance
Let’s talk now about diffuse reflectance. This models the fact that surfaces which are hit perpendicularly by light rays look brighter than surfaces that receive light at a more indirect angle. Those surfaces receive more light per unit area; the light density (let me call it this way) is higher.
But, how do we calculate this? This is where we will first start using normals. Let’s draw the normals for three points in the previous figure. As you can see, the normal for each point will be the vector perpendicular to the tangent plane for each point. Instead of drawing rays coming from the source of light we will draw vectors from each point to the point of light (that is, in the opposite direction).
As you can see, the normal associated to $$P1$$, named $$N1$$, is parallel to the vector that points to the light source, which models the opposite of the light ray ($$N1$$ has been sketched displaced so you can see it, but it’s equivalent mathematically). $$P1$$ has an angle equal to $$0$$ with the vector that points to the light source. Its surface is perpendicular to the light source and $$P1$$ would be the brightest point.
The normal associated to $$P2$$, named $$N2$$, has an angle of around 30 degrees with the vector that points to the light source, so it should be darker than $$P1$$. Finally, the normal associated to $$P3$$, named $$N3$$, is also parallel to the vector that points to the light source, but the two vectors point in opposite directions. $$P3$$ has an angle of 180 degrees with the vector that points to the light source, and should not get any light at all.
So it seems that we have a good approach to determine the light intensity that gets to a point, and this is related to the angle that forms the normal with a vector that points to the light source. How can we calculate this?
There’s a mathematical operation that we can use, the dot product. This operation takes two vectors and produces a number (a scalar) that is positive if the angle between them is acute, or negative if the angle between them is obtuse. If both vectors are normalized, that is, both have a length equal to one, the dot product will be between $$-1$$ and $$1$$. The dot product will be one if both vectors point exactly in the same direction (angle $$0$$), it will be $$0$$ if both vectors form a right angle, and it will be $$-1$$ if both vectors point in opposite directions.
Let’s define two vectors, $$v1$$ and $$v2$$, and let $$alpha$$ be the angle between them. The dot product is defined by the following formula.
$$\vec{v_1}\cdot\vec{v_2}=|\vec{v_1}|\cdot|\vec{v_2}|\cdot Cos(\alpha)$$
If both vectors are normalized, their length, that is their modulus, will be equal to one, so the dot product is equal to the cosine of the angle between them. We will use that operation to calculate the diffuse reflectance component.
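As a quick illustration, here is a minimal sketch (using JOML) showing that the dot product of two normalized vectors is the cosine of the angle between them:
```java
import org.joml.Vector3f;

public class DotProductDemo {
    public static void main(String[] args) {
        // Two normalized vectors forming a 60 degree angle
        Vector3f normal = new Vector3f(0, 1, 0);
        Vector3f toLightDir = new Vector3f(0.866f, 0.5f, 0).normalize();

        float d = normal.dot(toLightDir); // ~0.5, that is, cos(60 degrees)
        System.out.printf("dot = %.3f, angle = %.1f degrees%n", d, Math.toDegrees(Math.acos(d)));
    }
}
```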
So we need to calculate the vector that points to the source of light. How do we do this? We have the position of each point (the vertex position) and we have the position of the light source. First of all, both positions must be in the same coordinate space. To simplify, let’s assume that they are both in world coordinate space: then those positions are the coordinates of the vectors that point to the vertex position ($$VP$$) and to the light source ($$VS$$), as shown in the next figure.
If we subtract $$VP$$ from $$VS$$ we get the vector that we are looking for, which is called $$L$$.
Now we can compute the dot product between the vector that points to the light source and the normal. This product is called the Lambert term, due to Johann Lambert who was the first to propose that relation to model the brightness of a surface.
Let’s summarize how we can calculate it. We define the following variables:
* $$vPos$$: Position of our vertex in model view space coordinates.
* $$lPos$$: Position of the light in view space coordinates.
* $$intensity$$: Intensity of the light (from 0 to 1).
* $$lColour$$: Colour of the light.
* $$normal$$: The vertex normal.
First we need to calculate the vector that points to the light source from current position: $$toLightDirection = lPos - vPos$$. The result of that operation needs to be normalized.
Then we need to calculate the diffuse factor (a scalar): $$diffuseFactor = normal \cdot toLightDirection$$. It’s calculated as dot product between two vectors, and since we want it to be between $$-1$$ and $$1$$ both vectors need to be normalized. Colours need to be between $$0$$ and $$1$$ so if a value is lower than $$0$$ we will set it to 0.
Finally we just need to modulate the light color by the diffuse factor and the light intensity:
$$color = diffuseColour * lColour * diffuseFactor * intensity$$
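The following is a minimal CPU-side sketch of that formula using JOML types (in our engine this calculation will actually be done in the fragment shader, shown later in this chapter):
```java
import org.joml.Vector3f;
import org.joml.Vector4f;

public class DiffuseSketch {
    // Mirrors: color = diffuseColour * lColour * diffuseFactor * intensity
    public static Vector4f diffuse(Vector4f diffuseColour, Vector3f lColour, float intensity,
                                   Vector3f vPos, Vector3f lPos, Vector3f normal) {
        Vector3f toLightDirection = new Vector3f(lPos).sub(vPos).normalize();
        float diffuseFactor = Math.max(new Vector3f(normal).normalize().dot(toLightDirection), 0.0f);
        return new Vector4f(diffuseColour)
                .mul(new Vector4f(lColour, 1.0f))
                .mul(diffuseFactor * intensity);
    }
}
```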
## Specular component
Before considering the specular component, we first need to examine how light is reflected. When light hits a surface, some part of it is absorbed and the rest is reflected. If you remember from your physics class, reflection is when light bounces off an object.
Of course, surfaces are not totally polished, and if you look at them closely you will see a lot of imperfections. Besides that, you have many light rays (photons, in fact) that impact that surface and get reflected in a wide range of angles. Thus, what we see is like a beam of light being reflected from the surface. That is, light is diffused when impacting a surface, and that’s the diffuse component that we have been talking about previously.
But when light impacts a polished surface, for instance a metal, the light is diffused much less and most of it is reflected as a mirror image of the incoming direction.
This is what the specular component models, and it depends on the material characteristics. Regarding specular reflectance, it’s important to note that the reflected light will only be visible if the camera is in a proper position, that is, if it's in the area where the reflected light is emitted.
Now that the mechanism behind specular reflection has been explained, we are ready to calculate that component. First we need a vector that points from the light source to the vertex. When we were calculating the diffuse component we calculated just the opposite, a vector that points to the light source, $$toLightDirection$$, so let’s calculate it as $$fromLightDirection = -(toLightDirection)$$.
Then we need to calculate the reflected light that results from the impact of $$fromLightDirection$$ on the surface, taking its normal into consideration. There’s a GLSL function, `reflect`, that does exactly that. So, $$reflectedLight = reflect(fromLightDirection, normal)$$.
We also need a vector that points to the camera, let’s name it $$cameraDirection$$, and it will be calculated as the difference between the camera position and the vertex position: $$cameraDirection = cameraPos - vPos$$. The camera position vector and the vertex position need to be in the same coordinate system and the resulting vector needs to be normalized. The following figure sketches the main components we have calculated up to now.
Now we need to calculate the light intensity that we see, which we will call $$specularFactor$$. This component will be higher if the $$cameraDirection$$ and the $$reflectedLight$$ vectors are parallel and point in the same direction, and will take its lowest value if they point in opposite directions. In order to calculate this the dot product comes to the rescue again. So $$specularFactor = cameraDirection \cdot reflectedLight$$. We only want this value to be between $$0$$ and $$1$$, so if it’s lower than $$0$$ it will be set to 0.
We also need to take into consideration that this light must be more intense if the camera is pointing to the reflected light cone. This will be achieved by raising the $$specularFactor$$ to a parameter named $$specularPower$$.
$$specularFactor = specularFactor^{specularPower}$$.
Finally we need to model the reflectivity of the material, which will also modulate the intensity of the reflected light. This is done with another parameter named reflectance. So the color of the specular component will be: $$specularColour * lColour * reflectance * specularFactor * intensity$$.
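Again as a sketch only (the real calculation will live in the fragment shader), the specular factor can be computed like this with JOML; the reflection is computed by hand, mirroring what the GLSL `reflect` function does:
```java
import org.joml.Vector3f;

public class SpecularSketch {
    // specularFactor = max(dot(cameraDirection, reflectedLight), 0) ^ specularPower
    public static float specularFactor(Vector3f vPos, Vector3f lPos, Vector3f cameraPos,
                                       Vector3f normal, float specularPower) {
        Vector3f toLightDir = new Vector3f(lPos).sub(vPos).normalize();
        Vector3f fromLightDir = new Vector3f(toLightDir).negate();
        // reflect(I, N) = I - 2 * dot(N, I) * N
        Vector3f reflectedLight = new Vector3f(fromLightDir)
                .sub(new Vector3f(normal).mul(2.0f * normal.dot(fromLightDir)))
                .normalize();
        Vector3f cameraDirection = new Vector3f(cameraPos).sub(vPos).normalize();
        float factor = Math.max(cameraDirection.dot(reflectedLight), 0.0f);
        return (float) Math.pow(factor, specularPower);
    }
}
```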
## Attenuation
We now know how to calculate the three components that will serve us to model a point light with an ambient light. But our light model is still not complete: right now, the light that an object receives is independent of its distance from the light source. That is, we need to simulate light attenuation.
Attenuation is a function of the distance and of the light itself. The intensity of light is inversely proportional to the square of the distance. That fact is easy to visualize, as light propagates its energy along the surface of a sphere with a radius that’s equal to the distance traveled by the light, and the surface of a sphere is proportional to the square of its radius. We can calculate the attenuation factor with this formula: $$1.0 / (atConstant + atLinear * dist + atExponent * dist^{2})$$.
In order to simulate attenuation we just need to multiply that attenuation factor by the final color.
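As a small sketch, the attenuation factor is just the expression above. With the default attenuation we will use for point lights later in this chapter (constant = 0, linear = 0, exponent = 1), it reduces to pure inverse-square falloff:
```java
public class AttenuationSketch {
    // attenuationFactor = 1.0 / (atConstant + atLinear * dist + atExponent * dist^2)
    public static float attenuationFactor(float constant, float linear, float exponent, float dist) {
        return 1.0f / (constant + linear * dist + exponent * dist * dist);
    }

    public static void main(String[] args) {
        // With (0, 0, 1), a light at distance 2 contributes a quarter of its intensity
        System.out.println(attenuationFactor(0.0f, 0.0f, 1.0f, 2.0f)); // 0.25
    }
}
```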
## Directional Light
Directional light hits all the objects with parallel rays, all coming from the same direction. It models light sources that are far away but have a high intensity, such as the Sun.
Another characteristic of directional light is that it is not affected by attenuation. Think again about sunlight: all objects that are hit by rays of light are illuminated with the same intensity, as the distance from the sun is so huge that the position of the objects is irrelevant. In fact, directional lights are modeled as light sources placed at infinity; if they were affected by attenuation they would have no effect on any object (their color contribution would be equal to $$0$$).
Besides that, directional light is also composed of diffuse and specular components. The only differences with point lights are that it has a direction instead of a position and that it is not affected by attenuation. Let’s get back to the direction attribute of directional light, and imagine we are modeling the movement of the sun across our 3D world. If we assume that north is placed towards the increasing z-axis, the following picture shows the direction to the light source at dawn, midday and dusk.
## Spot Light
Now we will implement spot lights, which are very similar to point lights but with the emitted light restricted to a 3D cone. They model the light that comes out of flashlights, spotlights or any other light source that does not emit in all directions. A spot light has the same attributes as a point light but adds two new parameters, the cone angle and the cone direction.
Spot light contribution is calculated in the same way as a point light contribution, with some exceptions. The points for which the vector that points from the vertex position to the light source is not contained inside the light cone are not affected by the spot light.
How do we calculate whether a point is inside the light cone or not? We need to do a dot product again, this time between the vector that points from the light source to the vertex and the cone direction vector (both of them normalized).
The dot product between the $$L$$ and $$C$$ vectors is equal to: $$\vec{L}\cdot\vec{C}=|\vec{L}|\cdot|\vec{C}|\cdot Cos(\alpha)$$. If, in our spot light definition, we store the cosine of the cutoff angle, then whenever the dot product is higher than that value we know that the point is inside the light cone (recall the cosine graph: when the $$\alpha$$ angle is $$0$$, the cosine is $$1$$; the smaller the angle, the higher the cosine).
The second difference is that points far away from the cone vector will receive less light; that is, the attenuation will be higher. There are several ways of calculating this; we will choose a simple approach by multiplying the attenuation by the following factor:
$$1 - (1-Cos(\alpha))/(1-Cos(cutOffAngle))$$
(In our fragment shaders we won’t have the angle but the cosine of the cutoff angle. You can check that the formula above produces values from 0 to 1, 0 when the angle is equal to the cutoff angle and 1 when the angle is 0).
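Putting both ideas together, here is a minimal CPU-side sketch of the cone test plus the falloff factor (the fragment shader will perform the same steps later in this chapter):
```java
import org.joml.Vector3f;

public class SpotConeSketch {
    // Returns 0 if the point is outside the cone, otherwise a falloff factor in (0, 1]
    public static float spotFactor(Vector3f vPos, Vector3f lightPos, Vector3f coneDirection, float cutOffAngle) {
        float cutOff = (float) Math.cos(Math.toRadians(cutOffAngle)); // we store the cosine, as SpotLight will do
        Vector3f fromLightDir = new Vector3f(vPos).sub(lightPos).normalize();
        float spotAlfa = fromLightDir.dot(new Vector3f(coneDirection).normalize());
        if (spotAlfa <= cutOff) {
            return 0.0f; // outside the light cone
        }
        return 1.0f - (1.0f - spotAlfa) / (1.0f - cutOff);
    }
}
```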
## Implementing light classes
Let's start first by creating a set of classes to model the different types of lights. We will start with the class that models point lights:
```java
package org.lwjglb.engine.scene.lights;
import org.joml.Vector3f;
public class PointLight {
private Attenuation attenuation;
private Vector3f color;
private float intensity;
private Vector3f position;
public PointLight(Vector3f color, Vector3f position, float intensity) {
attenuation = new Attenuation(0, 0, 1);
this.color = color;
this.position = position;
this.intensity = intensity;
}
public Attenuation getAttenuation() {
return attenuation;
}
public Vector3f getColor() {
return color;
}
public float getIntensity() {
return intensity;
}
public Vector3f getPosition() {
return position;
}
public void setAttenuation(Attenuation attenuation) {
this.attenuation = attenuation;
}
public void setColor(Vector3f color) {
this.color = color;
}
public void setColor(float r, float g, float b) {
color.set(r, g, b);
}
public void setIntensity(float intensity) {
this.intensity = intensity;
}
public void setPosition(float x, float y, float z) {
position.set(x, y, z);
}
public static class Attenuation {
private float constant;
private float exponent;
private float linear;
public Attenuation(float constant, float linear, float exponent) {
this.constant = constant;
this.linear = linear;
this.exponent = exponent;
}
public float getConstant() {
return constant;
}
public float getExponent() {
return exponent;
}
public float getLinear() {
return linear;
}
public void setConstant(float constant) {
this.constant = constant;
}
public void setExponent(float exponent) {
this.exponent = exponent;
}
public void setLinear(float linear) {
this.linear = linear;
}
}
}
```
As you can see a point light is defined by a color, an intensity, a position and an attenuation model.
Ambient lights are defined by just a color and an intensity:
```java
package org.lwjglb.engine.scene.lights;
import org.joml.Vector3f;
public class AmbientLight {
private Vector3f color;
private float intensity;
public AmbientLight(float intensity, Vector3f color) {
this.intensity = intensity;
this.color = color;
}
public AmbientLight() {
this(1.0f, new Vector3f(1.0f, 1.0f, 1.0f));
}
public Vector3f getColor() {
return color;
}
public float getIntensity() {
return intensity;
}
public void setColor(Vector3f color) {
this.color = color;
}
public void setColor(float r, float g, float b) {
color.set(r, g, b);
}
public void setIntensity(float intensity) {
this.intensity = intensity;
}
}
```
Directional lights are defined like this:
```java
package org.lwjglb.engine.scene.lights;
import org.joml.Vector3f;
public class DirLight {
private Vector3f color;
private Vector3f direction;
private float intensity;
public DirLight(Vector3f color, Vector3f direction, float intensity) {
this.color = color;
this.direction = direction;
this.intensity = intensity;
}
public Vector3f getColor() {
return color;
}
public Vector3f getDirection() {
return direction;
}
public float getIntensity() {
return intensity;
}
public void setColor(Vector3f color) {
this.color = color;
}
public void setColor(float r, float g, float b) {
color.set(r, g, b);
}
public void setDirection(Vector3f direction) {
this.direction = direction;
}
public void setIntensity(float intensity) {
this.intensity = intensity;
}
public void setPosition(float x, float y, float z) {
direction.set(x, y, z);
}
}
```
Finally, spot lights just include a point light reference plus the light cone parameters:
```java
package org.lwjglb.engine.scene.lights;
import org.joml.Vector3f;
public class SpotLight {
private Vector3f coneDirection;
private float cutOff;
private float cutOffAngle;
private PointLight pointLight;
public SpotLight(PointLight pointLight, Vector3f coneDirection, float cutOffAngle) {
this.pointLight = pointLight;
this.coneDirection = coneDirection;
this.cutOffAngle = cutOffAngle;
setCutOffAngle(cutOffAngle);
}
public Vector3f getConeDirection() {
return coneDirection;
}
public float getCutOff() {
return cutOff;
}
public float getCutOffAngle() {
return cutOffAngle;
}
public PointLight getPointLight() {
return pointLight;
}
public void setConeDirection(float x, float y, float z) {
coneDirection.set(x, y, z);
}
public void setConeDirection(Vector3f coneDirection) {
this.coneDirection = coneDirection;
}
public final void setCutOffAngle(float cutOffAngle) {
this.cutOffAngle = cutOffAngle;
cutOff = (float) Math.cos(Math.toRadians(cutOffAngle));
}
public void setPointLight(PointLight pointLight) {
this.pointLight = pointLight;
}
}
```
All the lights will be stored in the `Scene` class. For that, we will create a new class named `SceneLights` which will store references to all the types of lights (note that we only need one ambient light instance and one directional light):
```java
package org.lwjglb.engine.scene.lights;
import org.joml.Vector3f;
import java.util.*;
public class SceneLights {
private AmbientLight ambientLight;
private DirLight dirLight;
private List<PointLight> pointLights;
private List<SpotLight> spotLights;
public SceneLights() {
ambientLight = new AmbientLight();
pointLights = new ArrayList<>();
spotLights = new ArrayList<>();
dirLight = new DirLight(new Vector3f(1, 1, 1), new Vector3f(0, 1, 0), 1.0f);
}
public AmbientLight getAmbientLight() {
return ambientLight;
}
public DirLight getDirLight() {
return dirLight;
}
public List<PointLight> getPointLights() {
return pointLights;
}
public List<SpotLight> getSpotLights() {
return spotLights;
}
public void setSpotLights(List<SpotLight> spotLights) {
this.spotLights = spotLights;
}
}
```
We will have a reference to `SceneLights` in the `Scene` class:
```java
public class Scene {
...
private SceneLights sceneLights;
...
public SceneLights getSceneLights() {
return sceneLights;
}
...
public void setSceneLights(SceneLights sceneLights) {
this.sceneLights = sceneLights;
}
}
```
## Model loading modification
We need to modify the `ModelLoader` class to:
* Get more properties of the material, in particular, ambient color, specular color and shininess factor.
* Load normals data for each mesh.
In order to get more properties of the material, we need to modify the `processMaterial` method:
```java
public class ModelLoader {
...
private static Material processMaterial(AIMaterial aiMaterial, String modelDir, TextureCache textureCache) {
Material material = new Material();
try (MemoryStack stack = MemoryStack.stackPush()) {
AIColor4D color = AIColor4D.create();
int result = aiGetMaterialColor(aiMaterial, AI_MATKEY_COLOR_AMBIENT, aiTextureType_NONE, 0,
color);
if (result == aiReturn_SUCCESS) {
material.setAmbientColor(new Vector4f(color.r(), color.g(), color.b(), color.a()));
}
result = aiGetMaterialColor(aiMaterial, AI_MATKEY_COLOR_DIFFUSE, aiTextureType_NONE, 0,
color);
if (result == aiReturn_SUCCESS) {
material.setDiffuseColor(new Vector4f(color.r(), color.g(), color.b(), color.a()));
}
result = aiGetMaterialColor(aiMaterial, AI_MATKEY_COLOR_SPECULAR, aiTextureType_NONE, 0,
color);
if (result == aiReturn_SUCCESS) {
material.setSpecularColor(new Vector4f(color.r(), color.g(), color.b(), color.a()));
}
float reflectance = 0.0f;
float[] shininessFactor = new float[]{0.0f};
int[] pMax = new int[]{1};
result = aiGetMaterialFloatArray(aiMaterial, AI_MATKEY_SHININESS_STRENGTH, aiTextureType_NONE, 0, shininessFactor, pMax);
if (result == aiReturn_SUCCESS) {
reflectance = shininessFactor[0];
}
material.setReflectance(reflectance);
AIString aiTexturePath = AIString.calloc(stack);
aiGetMaterialTexture(aiMaterial, aiTextureType_DIFFUSE, 0, aiTexturePath, (IntBuffer) null,
null, null, null, null, null);
String texturePath = aiTexturePath.dataString();
if (texturePath != null && texturePath.length() > 0) {
material.setTexturePath(modelDir + File.separator + new File(texturePath).getName());
textureCache.createTexture(material.getTexturePath());
material.setDiffuseColor(Material.DEFAULT_COLOR);
}
return material;
}
}
...
}
```
As you can see, we get the material's ambient color by querying the `AI_MATKEY_COLOR_AMBIENT` property. The specular color is obtained by using the `AI_MATKEY_COLOR_SPECULAR` property. Shininess is queried using the `AI_MATKEY_SHININESS_STRENGTH` flag.
In order to load normals, we need to create a new method named `processNormals` and invoke it in the `processMesh` method.
```java
public class ModelLoader {
...
private static Mesh processMesh(AIMesh aiMesh) {
float[] vertices = processVertices(aiMesh);
float[] normals = processNormals(aiMesh);
float[] textCoords = processTextCoords(aiMesh);
int[] indices = processIndices(aiMesh);
// Texture coordinates may not have been populated. We need at least the empty slots
if (textCoords.length == 0) {
int numElements = (vertices.length / 3) * 2;
textCoords = new float[numElements];
}
return new Mesh(vertices, normals, textCoords, indices);
}
private static float[] processNormals(AIMesh aiMesh) {
AIVector3D.Buffer buffer = aiMesh.mNormals();
float[] data = new float[buffer.remaining() * 3];
int pos = 0;
while (buffer.remaining() > 0) {
AIVector3D normal = buffer.get();
data[pos++] = normal.x();
data[pos++] = normal.y();
data[pos++] = normal.z();
}
return data;
}
...
}
```
As you can see, we also need to modify the `Material` and `Mesh` classes to store the new information. The changes in the `Material` class are as follows:
```java
public class Material {
...
private Vector4f ambientColor;
...
private float reflectance;
private Vector4f specularColor;
...
public Material() {
...
ambientColor = DEFAULT_COLOR;
specularColor = DEFAULT_COLOR;
...
}
...
public Vector4f getAmbientColor() {
return ambientColor;
}
...
public float getReflectance() {
return reflectance;
}
public Vector4f getSpecularColor() {
return specularColor;
}
...
public void setAmbientColor(Vector4f ambientColor) {
this.ambientColor = ambientColor;
}
...
public void setReflectance(float reflectance) {
this.reflectance = reflectance;
}
public void setSpecularColor(Vector4f specularColor) {
this.specularColor = specularColor;
}
...
}
```
The `Mesh` class now accepts a new float array with normals data, and thus creates a new VBO for it:
```java
public class Mesh {
...
public Mesh(float[] positions, float[] normals, float[] textCoords, int[] indices) {
...
// Normals VBO
vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer normalsBuffer = MemoryUtil.memCallocFloat(normals.length);
normalsBuffer.put(0, normals);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, normalsBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, false, 0, 0);
// Texture coordinates VBO
...
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, false, 0, 0);
// Index VBO
...
MemoryUtil.memFree(normalsBuffer);
...
}
...
}
```
## Render with lights
Now it is time to use the lights while rendering. Let's start with the shaders, in particular with the vertex shader (`scene.vert`):
```glsl
#version 330
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
layout (location=2) in vec2 texCoord;
out vec3 outPosition;
out vec3 outNormal;
out vec2 outTextCoord;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
void main()
{
mat4 modelViewMatrix = viewMatrix * modelMatrix;
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * mvPosition;
outPosition = mvPosition.xyz;
outNormal = normalize(modelViewMatrix * vec4(normal, 0.0)).xyz;
outTextCoord = texCoord;
}
```
As you can see, we now have normal data as another input attribute and we just pass that data to the fragment shader. Before we continue with the fragment shader, there’s a very important concept that must be highlighted. From the code above you can see that `outNormal`, the variable that contains the vertex normal, is transformed into model view space coordinates. This is done by multiplying the `normal` by the `modelViewMatrix`, as with the vertex position. But there’s a subtle difference: the w component of that vertex normal is set to 0 before multiplying it by the matrix: `vec4(normal, 0.0)`. Why are we doing this? Because we do want the normal to be rotated and scaled, but we do not want it to be translated; we are only interested in its direction, not in its position. This is achieved by setting its w component to 0, and it is one of the advantages of using homogeneous coordinates: by setting the w component we can control which transformations are applied. You can do the matrix multiplication by hand and see why this happens.
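You can also check it with a couple of lines of JOML code; this small sketch shows that a translation affects a point with w = 1 but leaves a direction with w = 0 untouched:
```java
import org.joml.Matrix4f;
import org.joml.Vector4f;

public class NormalTransformDemo {
    public static void main(String[] args) {
        // A model-view matrix that only translates by (5, 0, 0)
        Matrix4f modelViewMatrix = new Matrix4f().translate(5, 0, 0);

        Vector4f position = new Vector4f(0, 1, 0, 1.0f).mul(modelViewMatrix); // becomes (5, 1, 0, 1)
        Vector4f normal   = new Vector4f(0, 1, 0, 0.0f).mul(modelViewMatrix); // stays  (0, 1, 0, 0)

        System.out.println("position: " + position);
        System.out.println("normal  : " + normal);
    }
}
```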
The changes in the `scene.frag` fragment shader are more complex, so let's go one step at a time:
```glsl
#version 330
const int MAX_POINT_LIGHTS = 5;
const int MAX_SPOT_LIGHTS = 5;
const float SPECULAR_POWER = 10;
in vec3 outPosition;
in vec3 outNormal;
in vec2 outTextCoord;
out vec4 fragColor;
...
```
First, we define some constants for the maximum number of point and spot lights we will support. We need this since the data for those lights will be passed as arrays of uniforms, which need to have a well defined size at compile time. You can also see that we receive normal data from the vertex shader. After that, we define the structures that will model the lights data:
```glsl
...
struct Attenuation
{
float constant;
float linear;
float exponent;
};
struct Material
{
vec4 ambient;
vec4 diffuse;
vec4 specular;
float reflectance;
};
struct AmbientLight
{
float factor;
vec3 color;
};
struct PointLight {
vec3 position;
vec3 color;
float intensity;
Attenuation att;
};
struct SpotLight
{
PointLight pl;
vec3 conedir;
float cutoff;
};
struct DirLight
{
vec3 color;
vec3 direction;
float intensity;
};
...
```
After that, we define new uniforms for lights data:
```glsl
...
uniform sampler2D txtSampler;
uniform Material material;
uniform AmbientLight ambientLight;
uniform PointLight pointLights[MAX_POINT_LIGHTS];
uniform SpotLight spotLights[MAX_SPOT_LIGHTS];
uniform DirLight dirLight;
...
```
We will now define some functions to calculate the effect of each light type, starting with ambient light:
```glsl
...
vec4 calcAmbient(AmbientLight ambientLight, vec4 ambient) {
return vec4(ambientLight.factor * ambientLight.color, 1) * ambient;
}
...
```
As you can see, we just modulate the ambient light color by a factor, which is applied to the material's ambient color. Now we will define a function which calculates the light color for all types of lights:
```glsl
...
vec4 calcLightColor(vec4 diffuse, vec4 specular, vec3 lightColor, float light_intensity, vec3 position, vec3 to_light_dir, vec3 normal) {
vec4 diffuseColor = vec4(0, 0, 0, 1);
vec4 specColor = vec4(0, 0, 0, 1);
// Diffuse Light
float diffuseFactor = max(dot(normal, to_light_dir), 0.0);
diffuseColor = diffuse * vec4(lightColor, 1.0) * light_intensity * diffuseFactor;
// Specular Light
vec3 camera_direction = normalize(-position);
vec3 from_light_dir = -to_light_dir;
vec3 reflected_light = normalize(reflect(from_light_dir, normal));
float specularFactor = max(dot(camera_direction, reflected_light), 0.0);
specularFactor = pow(specularFactor, SPECULAR_POWER);
specColor = specular * light_intensity * specularFactor * material.reflectance * vec4(lightColor, 1.0);
return (diffuseColor + specColor);
}
...
```
The previous code is relatively straightforward: it calculates a color for the diffuse component and another one for the specular component, modulating them by the light color and its intensity (attenuation will be applied later, per light type). Now we can define the functions that will be called for each type of light, starting with point lights:
```glsl
...
vec4 calcPointLight(vec4 diffuse, vec4 specular, PointLight light, vec3 position, vec3 normal) {
vec3 light_direction = light.position - position;
vec3 to_light_dir = normalize(light_direction);
vec4 light_color = calcLightColor(diffuse, specular, light.color, light.intensity, position, to_light_dir, normal);
// Apply Attenuation
float distance = length(light_direction);
float attenuationInv = light.att.constant + light.att.linear * distance +
light.att.exponent * distance * distance;
return light_color / attenuationInv;
}
...
```
As you can see, we just calculate the direction to the light (as a normalized vector), and use that information to calculate the light color, using the material's diffuse and specular colors, the light color, its intensity, the fragment position, the direction to the light and the normal. After that, we apply the attenuation. The function for spot lights is as follows:
```glsl
...
vec4 calcSpotLight(vec4 diffuse, vec4 specular, SpotLight light, vec3 position, vec3 normal) {
vec3 light_direction = light.pl.position - position;
vec3 to_light_dir = normalize(light_direction);
vec3 from_light_dir = -to_light_dir;
float spot_alfa = dot(from_light_dir, normalize(light.conedir));
vec4 color = vec4(0, 0, 0, 0);
if (spot_alfa > light.cutoff)
{
color = calcPointLight(diffuse, specular, light.pl, position, normal);
color *= (1.0 - (1.0 - spot_alfa)/(1.0 - light.cutoff));
}
return color;
}
...
```
The procedure is similar to point lights, with the exception that we need to check whether we are inside the cone of light or not. Inside the cone of light we also need to apply some attenuation, as explained previously. Finally, the function for directional light is defined below:
```glsl
...
vec4 calcDirLight(vec4 diffuse, vec4 specular, DirLight light, vec3 position, vec3 normal) {
return calcLightColor(diffuse, specular, light.color, light.intensity, position, normalize(light.direction), normal);
}
...
```
In this case, we already have the direction to the light, and since there is no attenuation we do not need to consider the light position. Finally, in the `main` function we just iterate over the different light types, which will contribute to the diffuse-specular component of the final fragment color:
```glsl
...
void main() {
vec4 text_color = texture(txtSampler, outTextCoord);
vec4 ambient = calcAmbient(ambientLight, text_color + material.ambient);
vec4 diffuse = text_color + material.diffuse;
vec4 specular = text_color + material.specular;
vec4 diffuseSpecularComp = calcDirLight(diffuse, specular, dirLight, outPosition, outNormal);
for (int i=0; i<MAX_POINT_LIGHTS; i++) {
if (pointLights[i].intensity > 0) {
diffuseSpecularComp += calcPointLight(diffuse, specular, pointLights[i], outPosition, outNormal);
}
}
for (int i=0; i<MAX_SPOT_LIGHTS; i++) {
if (spotLights[i].pl.intensity > 0) {
diffuseSpecularComp += calcSpotLight(diffuse, specular, spotLights[i], outPosition, outNormal);
}
}
fragColor = ambient + diffuseSpecularComp;
}
```
Now it is time to examine how we are going to modify the `SceneRender` class to include lights in the render process. The first step is to create the new uniforms:
```java
public class SceneRender {
private static final int MAX_POINT_LIGHTS = 5;
private static final int MAX_SPOT_LIGHTS = 5;
...
private void createUniforms() {
...
uniformsMap.createUniform("material.ambient");
uniformsMap.createUniform("material.diffuse");
uniformsMap.createUniform("material.specular");
uniformsMap.createUniform("material.reflectance");
uniformsMap.createUniform("ambientLight.factor");
uniformsMap.createUniform("ambientLight.color");
for (int i = 0; i < MAX_POINT_LIGHTS; i++) {
String name = "pointLights[" + i + "]";
uniformsMap.createUniform(name + ".position");
uniformsMap.createUniform(name + ".color");
uniformsMap.createUniform(name + ".intensity");
uniformsMap.createUniform(name + ".att.constant");
uniformsMap.createUniform(name + ".att.linear");
uniformsMap.createUniform(name + ".att.exponent");
}
for (int i = 0; i < MAX_SPOT_LIGHTS; i++) {
String name = "spotLights[" + i + "]";
uniformsMap.createUniform(name + ".pl.position");
uniformsMap.createUniform(name + ".pl.color");
uniformsMap.createUniform(name + ".pl.intensity");
uniformsMap.createUniform(name + ".pl.att.constant");
uniformsMap.createUniform(name + ".pl.att.linear");
uniformsMap.createUniform(name + ".pl.att.exponent");
uniformsMap.createUniform(name + ".conedir");
uniformsMap.createUniform(name + ".cutoff");
}
uniformsMap.createUniform("dirLight.color");
uniformsMap.createUniform("dirLight.direction");
uniformsMap.createUniform("dirLight.intensity");
}
...
}
```
When we are using arrays we need to create a uniform for each element of the array. So, for instance, for the $$pointLights$$ array we need to create a uniform named `pointLights[0]`, `pointLights[1]`, etc. And of course, this also translates to the structure attributes, so we will have `pointLights[0].color`, `pointLights[1].color`, etc.
We will create a new method that will update the uniforms for lights for each render call, which will be named `updateLights` and is defined like this:
```java
public class SceneRender {
...
private void updateLights(Scene scene) {
Matrix4f viewMatrix = scene.getCamera().getViewMatrix();
SceneLights sceneLights = scene.getSceneLights();
AmbientLight ambientLight = sceneLights.getAmbientLight();
uniformsMap.setUniform("ambientLight.factor", ambientLight.getIntensity());
uniformsMap.setUniform("ambientLight.color", ambientLight.getColor());
DirLight dirLight = sceneLights.getDirLight();
Vector4f auxDir = new Vector4f(dirLight.getDirection(), 0);
auxDir.mul(viewMatrix);
Vector3f dir = new Vector3f(auxDir.x, auxDir.y, auxDir.z);
uniformsMap.setUniform("dirLight.color", dirLight.getColor());
uniformsMap.setUniform("dirLight.direction", dir);
uniformsMap.setUniform("dirLight.intensity", dirLight.getIntensity());
List<PointLight> pointLights = sceneLights.getPointLights();
int numPointLights = pointLights.size();
PointLight pointLight;
for (int i = 0; i < MAX_POINT_LIGHTS; i++) {
if (i < numPointLights) {
pointLight = pointLights.get(i);
} else {
pointLight = null;
}
String name = "pointLights[" + i + "]";
updatePointLight(pointLight, name, viewMatrix);
}
List<SpotLight> spotLights = sceneLights.getSpotLights();
int numSpotLights = spotLights.size();
SpotLight spotLight;
for (int i = 0; i < MAX_SPOT_LIGHTS; i++) {
if (i < numSpotLights) {
spotLight = spotLights.get(i);
} else {
spotLight = null;
}
String name = "spotLights[" + i + "]";
updateSpotLight(spotLight, name, viewMatrix);
}
}
...
}
```
The code is very straightforward: we just start by setting the ambient light and directional light uniforms, and after that we iterate over the point and spot lights, which have dedicated methods to set the uniforms for each of the elements of the arrays:
```java
public class SceneRender {
...
private void updatePointLight(PointLight pointLight, String prefix, Matrix4f viewMatrix) {
Vector4f aux = new Vector4f();
Vector3f lightPosition = new Vector3f();
Vector3f color = new Vector3f();
float intensity = 0.0f;
float constant = 0.0f;
float linear = 0.0f;
float exponent = 0.0f;
if (pointLight != null) {
aux.set(pointLight.getPosition(), 1);
aux.mul(viewMatrix);
lightPosition.set(aux.x, aux.y, aux.z);
color.set(pointLight.getColor());
intensity = pointLight.getIntensity();
PointLight.Attenuation attenuation = pointLight.getAttenuation();
constant = attenuation.getConstant();
linear = attenuation.getLinear();
exponent = attenuation.getExponent();
}
uniformsMap.setUniform(prefix + ".position", lightPosition);
uniformsMap.setUniform(prefix + ".color", color);
uniformsMap.setUniform(prefix + ".intensity", intensity);
uniformsMap.setUniform(prefix + ".att.constant", constant);
uniformsMap.setUniform(prefix + ".att.linear", linear);
uniformsMap.setUniform(prefix + ".att.exponent", exponent);
}
private void updateSpotLight(SpotLight spotLight, String prefix, Matrix4f viewMatrix) {
PointLight pointLight = null;
Vector3f coneDirection = new Vector3f();
float cutoff = 0.0f;
if (spotLight != null) {
Vector4f auxDir = new Vector4f(spotLight.getConeDirection(), 0.0f);
auxDir.mul(viewMatrix);
coneDirection.set(auxDir.x, auxDir.y, auxDir.z);
cutoff = spotLight.getCutOff();
pointLight = spotLight.getPointLight();
}
uniformsMap.setUniform(prefix + ".conedir", coneDirection);
uniformsMap.setUniform(prefix + ".cutoff", cutoff);
updatePointLight(pointLight, prefix + ".pl", viewMatrix);
}
...
}
```
As we have said, the light coordinates must be in view space. Usually we will set up light coordinates in world space, so we need to multiply them by the view matrix in order to be able to use them in our shader. This also applies to the cone direction of spot lights. Finally, we need to update the `render` method to invoke the `updateLights` method and also to properly set up the new elements of the model materials:
```java
public class SceneRender {
...
public void render(Scene scene) {
...
updateLights(scene);
...
for (Model model : models) {
List<Entity> entities = model.getEntitiesList();
for (Material material : model.getMaterialList()) {
uniformsMap.setUniform("material.ambient", material.getAmbientColor());
uniformsMap.setUniform("material.diffuse", material.getDiffuseColor());
uniformsMap.setUniform("material.specular", material.getSpecularColor());
uniformsMap.setUniform("material.reflectance", material.getReflectance());
...
}
}
...
}
...
}
```
We also need to add a pair of methods to the `UniformsMap` class to set values for floats and 3D vectors:
```java
public class UniformsMap {
...
public void setUniform(String uniformName, float value) {
glUniform1f(getUniformLocation(uniformName), value);
}
public void setUniform(String uniformName, Vector3f value) {
glUniform3f(getUniformLocation(uniformName), value.x, value.y, value.z);
}
...
}
```
## Light controls
The last step is to use the lights in the `Main` class. But, prior to that, we will create a GUI, using Imgui, to provide some elements to control the light parameters. We will do this in a new class named `LightControls`. The code is a bit verbose, but it is very simple to understand: we just need to set up a set of attributes to hold the values from the GUI controls and a method to draw the panels and widgets required.
```java
package org.lwjglb.game;
import imgui.*;
import imgui.flag.ImGuiCond;
import org.joml.*;
import org.lwjglb.engine.*;
import org.lwjglb.engine.scene.Scene;
import org.lwjglb.engine.scene.lights.*;
public class LightControls implements IGuiInstance {
private float[] ambientColor;
private float[] ambientFactor;
private float[] dirConeX;
private float[] dirConeY;
private float[] dirConeZ;
private float[] dirLightColor;
private float[] dirLightIntensity;
private float[] dirLightX;
private float[] dirLightY;
private float[] dirLightZ;
private float[] pointLightColor;
private float[] pointLightIntensity;
private float[] pointLightX;
private float[] pointLightY;
private float[] pointLightZ;
private float[] spotLightColor;
private float[] spotLightCutoff;
private float[] spotLightIntensity;
private float[] spotLightX;
private float[] spotLightY;
private float[] spotLightZ;
public LightControls(Scene scene) {
SceneLights sceneLights = scene.getSceneLights();
AmbientLight ambientLight = sceneLights.getAmbientLight();
Vector3f color = ambientLight.getColor();
ambientFactor = new float[]{ambientLight.getIntensity()};
ambientColor = new float[]{color.x, color.y, color.z};
PointLight pointLight = sceneLights.getPointLights().get(0);
color = pointLight.getColor();
Vector3f pos = pointLight.getPosition();
pointLightColor = new float[]{color.x, color.y, color.z};
pointLightX = new float[]{pos.x};
pointLightY = new float[]{pos.y};
pointLightZ = new float[]{pos.z};
pointLightIntensity = new float[]{pointLight.getIntensity()};
SpotLight spotLight = sceneLights.getSpotLights().get(0);
pointLight = spotLight.getPointLight();
color = pointLight.getColor();
pos = pointLight.getPosition();
spotLightColor = new float[]{color.x, color.y, color.z};
spotLightX = new float[]{pos.x};
spotLightY = new float[]{pos.y};
spotLightZ = new float[]{pos.z};
spotLightIntensity = new float[]{pointLight.getIntensity()};
spotLightCutoff = new float[]{spotLight.getCutOffAngle()};
Vector3f coneDir = spotLight.getConeDirection();
dirConeX = new float[]{coneDir.x};
dirConeY = new float[]{coneDir.y};
dirConeZ = new float[]{coneDir.z};
DirLight dirLight = sceneLights.getDirLight();
color = dirLight.getColor();
pos = dirLight.getDirection();
dirLightColor = new float[]{color.x, color.y, color.z};
dirLightX = new float[]{pos.x};
dirLightY = new float[]{pos.y};
dirLightZ = new float[]{pos.z};
dirLightIntensity = new float[]{dirLight.getIntensity()};
}
@Override
public void drawGui() {
ImGui.newFrame();
ImGui.setNextWindowPos(0, 0, ImGuiCond.Always);
ImGui.setNextWindowSize(450, 400);
ImGui.begin("Lights controls");
if (ImGui.collapsingHeader("Ambient Light")) {
ImGui.sliderFloat("Ambient factor", ambientFactor, 0.0f, 1.0f, "%.2f");
ImGui.colorEdit3("Ambient color", ambientColor);
}
if (ImGui.collapsingHeader("Point Light")) {
ImGui.sliderFloat("Point Light - x", pointLightX, -10.0f, 10.0f, "%.2f");
ImGui.sliderFloat("Point Light - y", pointLightY, -10.0f, 10.0f, "%.2f");
ImGui.sliderFloat("Point Light - z", pointLightZ, -10.0f, 10.0f, "%.2f");
ImGui.colorEdit3("Point Light color", pointLightColor);
ImGui.sliderFloat("Point Light Intensity", pointLightIntensity, 0.0f, 1.0f, "%.2f");
}
if (ImGui.collapsingHeader("Spot Light")) {
ImGui.sliderFloat("Spot Light - x", spotLightX, -10.0f, 10.0f, "%.2f");
ImGui.sliderFloat("Spot Light - y", spotLightY, -10.0f, 10.0f, "%.2f");
ImGui.sliderFloat("Spot Light - z", spotLightZ, -10.0f, 10.0f, "%.2f");
ImGui.colorEdit3("Spot Light color", spotLightColor);
ImGui.sliderFloat("Spot Light Intensity", spotLightIntensity, 0.0f, 1.0f, "%.2f");
ImGui.separator();
ImGui.sliderFloat("Spot Light cutoff", spotLightCutoff, 0.0f, 360.0f, "%2.f");
ImGui.sliderFloat("Dir cone - x", dirConeX, -1.0f, 1.0f, "%.2f");
ImGui.sliderFloat("Dir cone - y", dirConeY, -1.0f, 1.0f, "%.2f");
ImGui.sliderFloat("Dir cone - z", dirConeZ, -1.0f, 1.0f, "%.2f");
}
if (ImGui.collapsingHeader("Dir Light")) {
ImGui.sliderFloat("Dir Light - x", dirLightX, -1.0f, 1.0f, "%.2f");
ImGui.sliderFloat("Dir Light - y", dirLightY, -1.0f, 1.0f, "%.2f");
ImGui.sliderFloat("Dir Light - z", dirLightZ, -1.0f, 1.0f, "%.2f");
ImGui.colorEdit3("Dir Light color", dirLightColor);
ImGui.sliderFloat("Dir Light Intensity", dirLightIntensity, 0.0f, 1.0f, "%.2f");
}
ImGui.end();
ImGui.endFrame();
ImGui.render();
}
...
}
```
Finally, we need a method to handle GUI input, where we update Imgui based on the mouse status and check whether the input has been consumed by the GUI controls. If so, we update the scene lights according to the values stored in the class attributes by the GUI controls:
```java
public class LightControls implements IGuiInstance {
...
@Override
public boolean handleGuiInput(Scene scene, Window window) {
ImGuiIO imGuiIO = ImGui.getIO();
MouseInput mouseInput = window.getMouseInput();
Vector2f mousePos = mouseInput.getCurrentPos();
imGuiIO.addMousePosEvent(mousePos.x, mousePos.y);
imGuiIO.addMouseButtonEvent(0, mouseInput.isLeftButtonPressed());
imGuiIO.addMouseButtonEvent(1, mouseInput.isRightButtonPressed());
boolean consumed = imGuiIO.getWantCaptureMouse() || imGuiIO.getWantCaptureKeyboard();
if (consumed) {
SceneLights sceneLights = scene.getSceneLights();
AmbientLight ambientLight = sceneLights.getAmbientLight();
ambientLight.setIntensity(ambientFactor[0]);
ambientLight.setColor(ambientColor[0], ambientColor[1], ambientColor[2]);
PointLight pointLight = sceneLights.getPointLights().get(0);
pointLight.setPosition(pointLightX[0], pointLightY[0], pointLightZ[0]);
pointLight.setColor(pointLightColor[0], pointLightColor[1], pointLightColor[2]);
pointLight.setIntensity(pointLightIntensity[0]);
SpotLight spotLight = sceneLights.getSpotLights().get(0);
pointLight = spotLight.getPointLight();
pointLight.setPosition(spotLightX[0], spotLightY[0], spotLightZ[0]);
pointLight.setColor(spotLightColor[0], spotLightColor[1], spotLightColor[2]);
pointLight.setIntensity(spotLightIntensity[0]);
spotLight.setCutOffAngle(spotLightCutoff[0]);
spotLight.setConeDirection(dirConeX[0], dirConeY[0], dirConeZ[0]);
DirLight dirLight = sceneLights.getDirLight();
dirLight.setPosition(dirLightX[0], dirLightY[0], dirLightZ[0]);
dirLight.setColor(dirLightColor[0], dirLightColor[1], dirLightColor[2]);
dirLight.setIntensity(dirLightIntensity[0]);
}
return consumed;
}
}
```
The last step is to update the `Main` class to create the lights and remove the previous `drawGui` and `handleGuiInput` methods (we are handling that now in the `LightControls` class):
```java
public class Main implements IAppLogic {
...
private LightControls lightControls;
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-11", new Window.WindowOptions(), main);
...
}
...
public void init(Window window, Scene scene, Render render) {
Model cubeModel = ModelLoader.loadModel("cube-model", "resources/models/cube/cube.obj",
scene.getTextureCache());
scene.addModel(cubeModel);
cubeEntity = new Entity("cube-entity", cubeModel.getId());
cubeEntity.setPosition(0, 0f, -2);
cubeEntity.updateModelMatrix();
scene.addEntity(cubeEntity);
SceneLights sceneLights = new SceneLights();
sceneLights.getAmbientLight().setIntensity(0.3f);
scene.setSceneLights(sceneLights);
sceneLights.getPointLights().add(new PointLight(new Vector3f(1, 1, 1),
new Vector3f(0, 0, -1.4f), 1.0f));
Vector3f coneDir = new Vector3f(0, 0, -1);
sceneLights.getSpotLights().add(new SpotLight(new PointLight(new Vector3f(1, 1, 1),
new Vector3f(0, 0, -1.4f), 0.0f), coneDir, 140.0f));
lightControls = new LightControls(scene);
scene.setGuiInstance(lightControls);
}
...
@Override
public void update(Window window, Scene scene, long diffTimeMillis) {
// Nothing to be done here
}
}
```
At the end you will be able to see something similar to this.

[Next chapter](../chapter-12/chapter-12.md)
================================================
FILE: chapter-12/chapter-12.md
================================================
# Chapter 12 - Sky Box
In this chapter we will see how to create a sky box. A skybox will allow us to set a background to give the illusion that our 3D world is bigger. That background is wrapped around the camera position and covers the whole space. The technique that we are going to use here is to construct a big cube that will be displayed around the 3D scene, that is, the centre of the camera position will be the centre of the cube. The sides of that cube will be wrapped with a texture with hills, a blue sky and clouds mapped in such a way that the image appears to be a continuous landscape.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-12).
## Sky Box
The following picture depicts the skybox concept.
The process of creating a sky box can be summarized in the following steps:
* Create a big cube.
* Apply a texture to it that provides the illusion that we are seeing a giant landscape with no edges.
* Render the cube so its sides are at a far distance and its origin is located at the centre of the camera.
We will start by creating a new class named `SkyBox` with a constructor that receives the path to the 3D model which contains the sky box cube (with its texture) and a reference to the texture cache. This class will load that model and will create an `Entity` instance associated to that model. The definition of the `SkyBox` class is as follows.
```java
package org.lwjglb.engine.scene;
import org.lwjglb.engine.graph.*;
public class SkyBox {
private Entity skyBoxEntity;
private Model skyBoxModel;
public SkyBox(String skyBoxModelPath, TextureCache textureCache) {
skyBoxModel = ModelLoader.loadModel("skybox-model", skyBoxModelPath, textureCache);
skyBoxEntity = new Entity("skyBoxEntity-entity", skyBoxModel.getId());
}
public Entity getSkyBoxEntity() {
return skyBoxEntity;
}
public Model getSkyBoxModel() {
return skyBoxModel;
}
}
```
We will store a reference to the `SkyBox` class in the `Scene` class:
```java
public class Scene {
...
private SkyBox skyBox;
...
public SkyBox getSkyBox() {
return skyBox;
}
...
public void setSkyBox(SkyBox skyBox) {
this.skyBox = skyBox;
}
...
}
```
The next step is to create another set of vertex and fragment shaders for the sky box. But why not reuse the scene shaders we already have? The answer is that, actually, the shaders that we will need for the skybox are a simplified version of those shaders. For example, we will not be applying lights to the sky box. Below you can see the sky box vertex shader (`skybox.vert`).
```glsl
#version 330
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
layout (location=2) in vec2 texCoord;
out vec2 outTextCoord;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
void main()
{
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
outTextCoord = texCoord;
}
```
You can see that we still use the model matrix. Since we will scale the sky box, we need the model matrix. You may see other implementations that increase the size of the cube that models the sky box at load time and therefore do not need the model matrix at all. We have chosen this approach because it’s more flexible and it allows us to change the size of the skybox at runtime, but you can easily switch to the other approach if you want.
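For reference, a minimal usage sketch that scales the sky box at runtime could look like this, assuming the `Entity` class from previous chapters exposes a `setScale` method (the model path below is just illustrative):
```java
// Somewhere in the init method of the game logic (sketch; the path is an example)
SkyBox skyBox = new SkyBox("resources/models/skybox/skybox.obj", scene.getTextureCache());
skyBox.getSkyBoxEntity().setScale(50);
skyBox.getSkyBoxEntity().updateModelMatrix();
scene.setSkyBox(skyBox);
```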
The fragment shader (`skybox.frag`) is also very simple, we just get the color from a texture or from a diffuse color.
```glsl
#version 330
in vec2 outTextCoord;
out vec4 fragColor;
uniform vec4 diffuse;
uniform sampler2D txtSampler;
uniform int hasTexture;
void main()
{
if (hasTexture == 1) {
fragColor = texture(txtSampler, outTextCoord);
} else {
fragColor = diffuse;
}
}
```
We will create a new class named `SkyBoxRender` to use those shaders and perform the render. The class starts by creating the shader program and setting up the required uniforms.
```java
package org.lwjglb.engine.graph;
import org.joml.Matrix4f;
import org.lwjglb.engine.scene.*;
import java.util.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.glBindVertexArray;
public class SkyBoxRender {
private ShaderProgram shaderProgram;
private UniformsMap uniformsMap;
private Matrix4f viewMatrix;
public SkyBoxRender() {
List<ShaderProgram.ShaderModuleData> shaderModuleDataList = new ArrayList<>();
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/skybox.vert", GL_VERTEX_SHADER));
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/skybox.frag", GL_FRAGMENT_SHADER));
shaderProgram = new ShaderProgram(shaderModuleDataList);
viewMatrix = new Matrix4f();
createUniforms();
}
...
}
```
The `createUniforms` method is defined like this:
```java
public class SkyBoxRender {
...
private void createUniforms() {
uniformsMap = new UniformsMap(shaderProgram.getProgramId());
uniformsMap.createUniform("projectionMatrix");
uniformsMap.createUniform("viewMatrix");
uniformsMap.createUniform("modelMatrix");
uniformsMap.createUniform("diffuse");
uniformsMap.createUniform("txtSampler");
uniformsMap.createUniform("hasTexture");
}
...
}
```
We just create some uniforms to hold the data we will need while rendering. The next step is to create a new render method for the sky box that will be invoked in the global render method.
```java
public class SkyBoxRender {
...
public void render(Scene scene) {
SkyBox skyBox = scene.getSkyBox();
if (skyBox == null) {
return;
}
shaderProgram.bind();
uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
viewMatrix.set(scene.getCamera().getViewMatrix());
viewMatrix.m30(0);
viewMatrix.m31(0);
viewMatrix.m32(0);
uniformsMap.setUniform("viewMatrix", viewMatrix);
uniformsMap.setUniform("txtSampler", 0);
Model skyBoxModel = skyBox.getSkyBoxModel();
Entity skyBoxEntity = skyBox.getSkyBoxEntity();
TextureCache textureCache = scene.getTextureCache();
for (Material material : skyBoxModel.getMaterialList()) {
Texture texture = textureCache.getTexture(material.getTexturePath());
glActiveTexture(GL_TEXTURE0);
texture.bind();
uniformsMap.setUniform("diffuse", material.getDiffuseColor());
uniformsMap.setUniform("hasTexture", texture.getTexturePath().equals(TextureCache.DEFAULT_TEXTURE) ? 0 : 1);
for (Mesh mesh : material.getMeshList()) {
glBindVertexArray(mesh.getVaoId());
uniformsMap.setUniform("modelMatrix", skyBoxEntity.getModelMatrix());
glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
}
}
glBindVertexArray(0);
shaderProgram.unbind();
}
}
```
You will see that we are modifying the view matrix prior to loading it into the associated uniform. Remember that when we move the camera, what we are actually doing is moving the whole world, so if we used the view matrix as it is, the skybox would be displaced when the camera moves. We do not want that; we want it to stay fixed at the origin (0, 0, 0). This is achieved by setting to 0 the parts of the view matrix that contain the translation increments (the `m30`, `m31` and `m32` components). You may think that you could avoid using the view matrix at all, since the skybox must remain fixed at the origin, but in that case the skybox would not rotate with the camera, which is not what we want. We need it to rotate but not translate. To render the skybox, we just set up the uniforms and render the cube associated to the sky box.
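As a side note, and assuming JOML's `Matrix4f.setTranslation` method, an equivalent way to strip the translation would be this small sketch:
```java
// Sketch: copy the camera view matrix and zero its translation column,
// which has the same effect as setting m30, m31 and m32 to 0 individually
viewMatrix.set(scene.getCamera().getViewMatrix()).setTranslation(0, 0, 0);
uniformsMap.setUniform("viewMatrix", viewMatrix);
```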
Finally, we define a `cleanup` method to properly free resources:
```java
public class SkyBoxRender {
...
public void cleanup() {
shaderProgram.cleanup();
}
...
}
```
In the `Render` class we just need to instantiate the `SkyBoxRender` class and invoke the render method:
```java
public class Render {
...
private SkyBoxRender skyBoxRender;
...
public Render(Window window) {
...
skyBoxRender = new SkyBoxRender();
}
public void render(Window window, Scene scene) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, window.getWidth(), window.getHeight());
skyBoxRender.render(scene);
sceneRender.render(scene);
guiRender.render(scene);
}
...
}
```
You can see that we render the sky box first. This is so 3D models with transparencies are blended with the skybox and not a black background.
Finally, in the `Main` class we just set up the sky box in the scene and create a set of tiles to give the illusion of an infinite terrain. We set up a grid of tiles that moves along with the camera position so that terrain is always visible.
```java
public class Main implements IAppLogic {
...
private static final int NUM_CHUNKS = 4;
private Entity[][] terrainEntities;
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-12", new Window.WindowOptions(), main);
...
}
...
@Override
public void init(Window window, Scene scene, Render render) {
String quadModelId = "quad-model";
Model quadModel = ModelLoader.loadModel("quad-model", "resources/models/quad/quad.obj",
scene.getTextureCache());
scene.addModel(quadModel);
int numRows = NUM_CHUNKS * 2 + 1;
int numCols = numRows;
terrainEntities = new Entity[numRows][numCols];
for (int j = 0; j < numRows; j++) {
for (int i = 0; i < numCols; i++) {
Entity entity = new Entity("TERRAIN_" + j + "_" + i, quadModelId);
terrainEntities[j][i] = entity;
scene.addEntity(entity);
}
}
SceneLights sceneLights = new SceneLights();
sceneLights.getAmbientLight().setIntensity(0.2f);
scene.setSceneLights(sceneLights);
SkyBox skyBox = new SkyBox("resources/models/skybox/skybox.obj", scene.getTextureCache());
skyBox.getSkyBoxEntity().setScale(50);
skyBox.getSkyBoxEntity().updateModelMatrix();
scene.setSkyBox(skyBox);
scene.getCamera().moveUp(0.1f);
updateTerrain(scene);
}
@Override
public void input(Window window, Scene scene, long diffTimeMillis, boolean inputConsumed) {
float move = diffTimeMillis * MOVEMENT_SPEED;
Camera camera = scene.getCamera();
if (window.isKeyPressed(GLFW_KEY_W)) {
camera.moveForward(move);
} else if (window.isKeyPressed(GLFW_KEY_S)) {
camera.moveBackwards(move);
}
if (window.isKeyPressed(GLFW_KEY_A)) {
camera.moveLeft(move);
} else if (window.isKeyPressed(GLFW_KEY_D)) {
camera.moveRight(move);
}
MouseInput mouseInput = window.getMouseInput();
if (mouseInput.isRightButtonPressed()) {
Vector2f displVec = mouseInput.getDisplVec();
camera.addRotation((float) Math.toRadians(-displVec.x * MOUSE_SENSITIVITY), (float) Math.toRadians(-displVec.y * MOUSE_SENSITIVITY));
}
}
@Override
public void update(Window window, Scene scene, long diffTimeMillis) {
updateTerrain(scene);
}
public void updateTerrain(Scene scene) {
int cellSize = 10;
Camera camera = scene.getCamera();
Vector3f cameraPos = camera.getPosition();
int cellCol = (int) (cameraPos.x / cellSize);
int cellRow = (int) (cameraPos.z / cellSize);
int numRows = NUM_CHUNKS * 2 + 1;
int numCols = numRows;
int zOffset = -NUM_CHUNKS;
float scale = cellSize / 2.0f;
for (int j = 0; j < numRows; j++) {
int xOffset = -NUM_CHUNKS;
for (int i = 0; i < numCols; i++) {
Entity entity = terrainEntities[j][i];
entity.setScale(scale);
entity.setPosition((cellCol + xOffset) * 2.0f, 0, (cellRow + zOffset) * 2.0f);
entity.getModelMatrix().identity().scale(scale).translate(entity.getPosition());
xOffset++;
}
zOffset++;
}
}
}
```
[Next chapter](../chapter-13/chapter-13.md)
================================================
FILE: chapter-13/chapter-13.md
================================================
# Chapter 13 - Fog
In this chapter we will review how to create a fog effect in our game engine. With that effect, we will simulate how distant objects get dimmed and seem to vanish into a dense fog.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-13).
## Concepts
Let us first examine the attributes that define fog. The first one is the fog color. In the real world fog usually has a gray color, but we can use this effect to simulate wide areas covered by fog of different colors. The second attribute is the fog's density.
Thus, in order to apply the fog effect, we need to find a way to fade our 3D scene objects into the fog color as they get farther away from the camera. Objects that are close to the camera will not be affected by the fog, but objects that are far away will not be distinguishable. So we need to be able to calculate a factor that can be used to blend the fog color and each fragment color in order to simulate that effect. That factor will depend on the distance to the camera.
Let’s call that factor $$fogFactor$$, and set its range from 0 to 1. When the $$fogFactor$$ is 1, it means that the object will not be affected by fog, that is, it’s a nearby object. When the $$fogFactor$$ takes the 0 value, it means that the objects will be completely hidden in the fog.
Therefore, the equation needed to calculate the fog color is:
$$finalColor = (1 - fogFactor) \cdot fogColor + fogFactor \cdot fragmentColor$$
* $$finalColor$$ is the color that results from applying the fog effect.
* $$fogFactor$$ is the parameter that controls how the fog color and the fragment color are blended. It basically controls object visibility.
* $$fogColor$$ is the color of the fog.
* $$fragmentColor$$ is the color of the fragment without applying any fog effect on it.
Now we need to find a way to calculate $$fogFactor$$ depending on the distance. We can choose different models, and the first one could be to use a linear model. This is a model that, given a distance, changes the fogFactor value in a linear way.
The linear model can be defined by the following parameters:
* $$fogStart$$: The distance at which the fog effect starts to be applied.
* $$fogFinish$$: The distance at which the fog effect reaches its maximum value.
* $$distance$$: Distance to the camera.
With those parameters, the equation to be applied is:
$$\displaystyle fogFactor = \frac{(fogFinish - distance)}{(fogFinish - fogStart)}$$
For objects at a distance lower than $$fogStart$$ we simply set the $$fogFactor$$ to $$1$$. The following graph shows how the $$fogFactor$$ changes with the distance.
The linear model is easy to calculate but it is not very realistic and it does not take into consideration the fog density. In reality, fog tends to grow in a smoother way. So the next suitable model is an exponential one. The equation for that model is as follows:
$$\displaystyle fogFactor = e^{-(distance \cdot fogDensity)^{exponent}} = \frac{1}{e^{(distance \cdot fogDensity)^{exponent}}}$$
The new variables that come into play are:
* $$fogDensity$$ which models the thickness or density of the fog.
* $$exponent$$ which is used to control how fast the fog increases with distance.
The following picture shows two graphs for the equation above for different values of the exponent ($$2$$ for the blue line and $$4$$ for the red one).
In our code, we will use a formula that sets a value of two for the exponent (you can easily modify the example to use different values).
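As a quick sanity check, this small standalone sketch (plain Java, not part of the engine code) evaluates the exponential model with an exponent of two for a few sample distances, assuming a density of `0.05`:
```java
// Sketch: exponential fog factor, exponent = 2
public class FogFactorDemo {
    public static void main(String[] args) {
        float fogDensity = 0.05f;
        for (float distance : new float[]{1.0f, 10.0f, 50.0f}) {
            double fogFactor = Math.exp(-Math.pow(distance * fogDensity, 2));
            // Nearby fragments keep a factor close to 1 (fully visible),
            // distant ones tend to 0 (completely hidden by the fog)
            System.out.printf("distance=%.1f -> fogFactor=%.3f%n", distance, fogFactor);
        }
    }
}
```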
## Implementation
Now that the theory has been explained we can put it into practice. We will implement the effect in the scene fragment shader (`scene.frag`) since we have there all the variables we need. We will start by defining a struct that models the fog attributes.
```glsl
...
struct Fog
{
int activeFog;
vec3 color;
float density;
};
...
```
The `activeFog` attribute will be used to activate or deactivate the fog effect. The fog data will be passed to the shader through another uniform named `fog`.
```glsl
...
uniform Fog fog;
...
```
We will create a function named `calcFog` which is defined like this:
```glsl
...
vec4 calcFog(vec3 pos, vec4 color, Fog fog, vec3 ambientLight, DirLight dirLight) {
vec3 fogColor = fog.color * (ambientLight + dirLight.color * dirLight.intensity);
float distance = length(pos);
float fogFactor = 1.0 / exp((distance * fog.density) * (distance * fog.density));
fogFactor = clamp(fogFactor, 0.0, 1.0);
vec3 resultColor = mix(fogColor, color.xyz, fogFactor);
return vec4(resultColor.xyz, color.w);
}
...
```
As you can see, we first calculate the distance to the camera. The position (in view space coordinates) is passed in the `pos` variable, so we just need to calculate its length. Then we calculate the fog factor using the exponential model with an exponent of two (which is equivalent to multiplying the term by itself). We clamp the `fogFactor` to the range between $$0$$ and $$1$$ and use the `mix` function. In GLSL, the `mix` function is used to blend the fog color and the fragment color (defined by the variable `color`). It's equivalent to applying this equation:
$$resultColor = (1 - fogFactor) \cdot fog.color + fogFactor \cdot color$$
We also preserve the w component, the transparency, of the original color. We don't want this component to be affected, as the fragment should maintain its transparency level.
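Outside the shader, the same blending could be reproduced with JOML's `lerp` method; this is just an illustrative sketch, not engine code:
```java
import org.joml.Vector3f;

public class MixDemo {
    public static void main(String[] args) {
        // GLSL mix(fogColor, color, fogFactor) is a plain linear interpolation.
        // JOML's lerp(other, t) computes this * (1 - t) + other * t, so we start from the fog color
        Vector3f fogColor = new Vector3f(0.5f, 0.5f, 0.5f);
        Vector3f fragmentColor = new Vector3f(0.2f, 0.6f, 0.3f);
        float fogFactor = 0.75f;
        Vector3f blended = new Vector3f(fogColor).lerp(fragmentColor, fogFactor);
        System.out.println("Blended color: " + blended);
    }
}
```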
At the end of the fragment shader, after applying all the light effects, we just simply assign the returned value to the fragment color if the fog is active.
```glsl
...
if (fog.activeFog == 1) {
fragColor = calcFog(outPosition, fragColor, fog, ambientLight.color, dirLight);
}
...
```
We will create also a new class named `Fog` which is another POJO (Plain Old Java Object) that contains the fog attributes.
```java
package org.lwjglb.engine.scene;
import org.joml.Vector3f;
public class Fog {
private boolean active;
private Vector3f color;
private float density;
public Fog() {
active = false;
color = new Vector3f();
}
public Fog(boolean active, Vector3f color, float density) {
this.color = color;
this.density = density;
this.active = active;
}
public Vector3f getColor() {
return color;
}
public float getDensity() {
return density;
}
public boolean isActive() {
return active;
}
public void setActive(boolean active) {
this.active = active;
}
public void setColor(Vector3f color) {
this.color = color;
}
public void setDensity(float density) {
this.density = density;
}
}
```
We will add a `Fog` instance in the `Scene` class.
```java
public class Scene {
...
private Fog fog;
...
public Scene(int width, int height) {
...
fog = new Fog();
}
...
public Fog getFog() {
return fog;
}
...
public void setFog(Fog fog) {
this.fog = fog;
}
...
}
```
Now we need to set up all these elements in the `SceneRender` class. We start by creating the uniforms for the `Fog` structure:
```java
public class SceneRender {
...
private void createUniforms() {
...
uniformsMap.createUniform("fog.activeFog");
uniformsMap.createUniform("fog.color");
uniformsMap.createUniform("fog.density");
}
...
}
```
In the `render` method we first need to enable blending and then populate the `Fog` uniform:
```java
public class SceneRender {
...
public void render(Scene scene) {
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
shaderProgram.bind();
...
Fog fog = scene.getFog();
uniformsMap.setUniform("fog.activeFog", fog.isActive() ? 1 : 0);
uniformsMap.setUniform("fog.color", fog.getColor());
uniformsMap.setUniform("fog.density", fog.getDensity());
...
shaderProgram.unbind();
glDisable(GL_BLEND);
}
...
}
```
Finally, we will modify the `Main` class to set up fog and just use a single quad as a terrain scaled to show the effect of fog.
```java
public class Main implements IAppLogic {
...
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-13", new Window.WindowOptions(), main);
...
}
...
public void init(Window window, Scene scene, Render render) {
String terrainModelId = "terrain";
Model terrainModel = ModelLoader.loadModel(terrainModelId, "resources/models/terrain/terrain.obj",
scene.getTextureCache());
scene.addModel(terrainModel);
Entity terrainEntity = new Entity("terrainEntity", terrainModelId);
terrainEntity.setScale(100.0f);
terrainEntity.updateModelMatrix();
scene.addEntity(terrainEntity);
SceneLights sceneLights = new SceneLights();
AmbientLight ambientLight = sceneLights.getAmbientLight();
ambientLight.setIntensity(0.5f);
ambientLight.setColor(0.3f, 0.3f, 0.3f);
DirLight dirLight = sceneLights.getDirLight();
dirLight.setPosition(0, 1, 0);
dirLight.setIntensity(1.0f);
scene.setSceneLights(sceneLights);
SkyBox skyBox = new SkyBox("resources/models/skybox/skybox.obj", scene.getTextureCache());
skyBox.getSkyBoxEntity().setScale(50);
scene.setSkyBox(skyBox);
scene.setFog(new Fog(true, new Vector3f(0.5f, 0.5f, 0.5f), 0.95f));
scene.getCamera().moveUp(0.1f);
}
...
public void update(Window window, Scene scene, long diffTimeMillis) {
// Nothing to be done here
}
}
```
One important thing to highlight is that we must choose the fog color wisely. This is even more important when we have no skybox, just a fixed color background: in that case, we should set the fog color to be equal to the clear color.
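A minimal sketch of that idea, assuming the `Fog` setup from this chapter and OpenGL's `glClearColor` call, could look like this:
```java
// Sketch: keep the clear color in sync with the fog color so distant geometry
// fades into the background instead of against a differently colored rectangle
Vector3f fogColor = new Vector3f(0.5f, 0.5f, 0.5f);
scene.setFog(new Fog(true, fogColor, 0.95f));
glClearColor(fogColor.x, fogColor.y, fogColor.z, 1.0f);
```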
If you uncomment the code that renders the skybox and rerun the example, you should be able to see something like this:

[Next chapter](../chapter-14/chapter-14.md)
================================================
FILE: chapter-14/chapter-14.md
================================================
# Chapter 14 - Normal Mapping
In this chapter, we will explain a technique that will dramatically improve the appearance of our 3D models. By now, we are able to apply textures to complex 3D models, but our models are still far from what real objects look like. Surfaces in the real world are not perfectly plain, they have imperfections which our 3D models currently lack.
In order to render more realistic scenes, we will use normal maps. If you look at a flat surface in the real world, you will notice that those imperfections are visible even at a distance due to the way light reflects off of it. In a 3D scene, a flat surface has no imperfections, so even if we apply a texture to it, the way that light reflects off the surface won't change, causing it to appear flat and unrealistic.
We could think of increasing the detail of our models by increasing the number of triangles to reflect those imperfections, but performance would degrade. To increase the realism, we must apply a technique that changes the way light reflects on surfaces. This is achieved with normal mapping.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-14).
## Concepts
Let’s go back to the plain surface example. A plane can be defined by two triangles which form a quad. If you remember from the lighting chapters, the element that models how light reflects is the surface normal. In this case, we have a single normal for the whole surface, and each fragment of the surface uses the same normal when calculating how light affects it. This is shown in the next figure.
If we could change the normals for each fragment of the surface, we could model surface imperfections to render them in a more realistic way. This is shown in the next figure.
The way we are going to achieve this is by loading another texture that stores the normals for the surface. Each pixel of the normal texture will contain the values of the $$x$$, $$y$$ and $$z$$ coordinates of the normal stored as an RGB value.
Let’s use the following texture to draw a quad.
An example of a normal map texture for the image above may be the following.
As you can see, it's as if we had applied a color transformation to the original texture. Each pixel stores normal information using color components. One thing that you will usually see when viewing normal maps is that the dominant colors tend to skew blue. This is due to the fact that normals point to the positive $$z$$ axis. The $$z$$ component will usually have a much higher value than the $$x$$ and $$y$$ ones for plain surfaces as the normal points out of the surface. Since the $$x$$, $$y$$, $$z$$ coordinates are mapped to RGB, the blue component will also have a higher value.
So, to render an object using normal maps we just need an extra texture and use it while rendering fragments to get the appropriate normal value.
## Implementation
Usually, normal maps are not defined in that way; they are usually defined in the so-called tangent space. The tangent space is a coordinate system that is local to each triangle of the model. In that coordinate space the $$z$$ axis always points out of the surface. This is the reason why a normal map is usually bluish, even for complex models with opposing faces. In order to handle tangent space, we need normal, tangent and bitangent vectors. We already have the normal vector; the tangent and bitangent vectors are vectors perpendicular to the normal one. We need these vectors to calculate the `TBN` matrix, which will allow us to transform data that is in tangent space into the coordinate system we are using in our shaders.
You can check a great tutorial on this aspect [here](https://learnopengl.com/Advanced-Lighting/Normal-Mapping)
Therefore, the first step is to add support for normal map loading in the `ModelLoader` class, including tangent and bitangent information. If you recall, when setting the model loading flags for assimp, we included this one: `aiProcess_CalcTangentSpace`. This flag allows the tangent and bitangent data to be calculated automatically.
In the `processMaterial` method we will first query for the presence of a normal map texture. If present, we load that texture and associate that texture path with the material:
```java
public class ModelLoader {
...
private static Material processMaterial(AIMaterial aiMaterial, String modelDir, TextureCache textureCache) {
...
try (MemoryStack stack = MemoryStack.stackPush()) {
...
AIString aiNormalMapPath = AIString.calloc(stack);
Assimp.aiGetMaterialTexture(aiMaterial, aiTextureType_NORMALS, 0, aiNormalMapPath, (IntBuffer) null,
null, null, null, null, null);
String normalMapPath = aiNormalMapPath.dataString();
if (normalMapPath != null && normalMapPath.length() > 0) {
material.setNormalMapPath(modelDir + File.separator + new File(normalMapPath).getName());
textureCache.createTexture(material.getNormalMapPath());
}
return material;
}
}
...
}
```
In the `processMesh` method we also need to load data for tangents and bitangents:
```java
public class ModelLoader {
...
private static Mesh processMesh(AIMesh aiMesh) {
...
float[] tangents = processTangents(aiMesh, normals);
float[] bitangents = processBitangents(aiMesh, normals);
...
return new Mesh(vertices, normals, tangents, bitangents, textCoords, indices);
}
...
}
```
The `processTangents` and `processBitangents` methods are quite similar to the one that loads normals:
```java
public class ModelLoader {
...
private static float[] processBitangents(AIMesh aiMesh, float[] normals) {
AIVector3D.Buffer buffer = aiMesh.mBitangents();
float[] data = new float[buffer.remaining() * 3];
int pos = 0;
while (buffer.remaining() > 0) {
AIVector3D aiBitangent = buffer.get();
data[pos++] = aiBitangent.x();
data[pos++] = aiBitangent.y();
data[pos++] = aiBitangent.z();
}
// Assimp may not calculate bitangents for models that do not have texture coordinates. Just create empty values
if (data.length == 0) {
data = new float[normals.length];
}
return data;
}
...
private static float[] processTangents(AIMesh aiMesh, float[] normals) {
AIVector3D.Buffer buffer = aiMesh.mTangents();
float[] data = new float[buffer.remaining() * 3];
int pos = 0;
while (buffer.remaining() > 0) {
AIVector3D aiTangent = buffer.get();
data[pos++] = aiTangent.x();
data[pos++] = aiTangent.y();
data[pos++] = aiTangent.z();
}
// Assimp may not calculate tangents with models that do not have texture coordinates. Just create empty values
if (data.length == 0) {
data = new float[normals.length];
}
return data;
}
...
}
```
As you can see, we also need to modify the `Mesh` and `Material` classes to hold the new data. Let's start with the `Mesh` class:
```java
public class Mesh {
...
public Mesh(float[] positions, float[] normals, float[] tangents, float[] bitangents, float[] textCoords, int[] indices) {
...
// Tangents VBO
vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer tangentsBuffer = MemoryUtil.memCallocFloat(tangents.length);
tangentsBuffer.put(0, tangents);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, tangentsBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, false, 0, 0);
// Bitangents VBO
vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer bitangentsBuffer = MemoryUtil.memCallocFloat(bitangents.length);
bitangentsBuffer.put(0, bitangents);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, bitangentsBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, false, 0, 0);
// Texture coordinates VBO
...
glEnableVertexAttribArray(4);
glVertexAttribPointer(4, 2, GL_FLOAT, false, 0, 0);
...
MemoryUtil.memFree(tangentsBuffer);
MemoryUtil.memFree(bitangentsBuffer);
...
}
...
}
```
We need to create two new VBOs for tangent and bitangent data (which follow a structure similar to the normals data) and therefore update the position of the texture coordinates VBO.
In the `Material` class we need to include the path to the normal mapping texture path:
```java
public class Material {
...
private String normalMapPath;
...
public String getNormalMapPath() {
return normalMapPath;
}
...
public void setNormalMapPath(String normalMapPath) {
this.normalMapPath = normalMapPath;
}
...
}
```
Now we need to modify the shaders, starting with the scene vertex shader (`scene.vert`):
```glsl
#version 330
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
layout (location=2) in vec3 tangent;
layout (location=3) in vec3 bitangent;
layout (location=4) in vec2 texCoord;
out vec3 outPosition;
out vec3 outNormal;
out vec3 outTangent;
out vec3 outBitangent;
out vec2 outTextCoord;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
void main()
{
mat4 modelViewMatrix = viewMatrix * modelMatrix;
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * mvPosition;
outPosition = mvPosition.xyz;
outNormal = normalize(modelViewMatrix * vec4(normal, 0.0)).xyz;
outTangent = normalize(modelViewMatrix * vec4(tangent, 0)).xyz;
outBitangent = normalize(modelViewMatrix * vec4(bitangent, 0)).xyz;
outTextCoord = texCoord;
}
```
As you can see, we need to define the new input data associated to bitangent and tangent. We transform those elements in the same way that we handled the normal: by passing that data as an input to the fragment shader (`scene.frag`):
```glsl
#version 330
...
in vec3 outTangent;
in vec3 outBitangent;
...
struct Material
{
vec4 ambient;
vec4 diffuse;
vec4 specular;
float reflectance;
int hasNormalMap;
};
...
uniform sampler2D normalSampler;
...
```
We start by defining the new inputs from the vertex shader, including an additional element for the `Material` struct which signals if there is a normal map available or not (`hasNormalMap`). We also add a new uniform for the normal map texture (`normalSampler`). The next step is to define a function that updates the normal based on the normal map texture:
```glsl
...
...
vec3 calcNormal(vec3 normal, vec3 tangent, vec3 bitangent, vec2 textCoords) {
mat3 TBN = mat3(tangent, bitangent, normal);
vec3 newNormal = texture(normalSampler, textCoords).rgb;
newNormal = normalize(newNormal * 2.0 - 1.0);
newNormal = normalize(TBN * newNormal);
return newNormal;
}
void main() {
vec4 text_color = texture(txtSampler, outTextCoord);
vec4 ambient = calcAmbient(ambientLight, text_color + material.ambient);
vec4 diffuse = text_color + material.diffuse;
vec4 specular = text_color + material.specular;
vec3 normal = outNormal;
if (material.hasNormalMap > 0) {
normal = calcNormal(outNormal, outTangent, outBitangent, outTextCoord);
}
vec4 diffuseSpecularComp = calcDirLight(diffuse, specular, dirLight, outPosition, normal);
for (int i=0; i<MAX_POINT_LIGHTS; i++) {
if (pointLights[i].intensity > 0) {
diffuseSpecularComp += calcPointLight(diffuse, specular, pointLights[i], outPosition, normal);
}
}
for (int i=0; i<MAX_SPOT_LIGHTS; i++) {
if (spotLights[i].pl.intensity > 0) {
diffuseSpecularComp += calcSpotLight(diffuse, specular, spotLights[i], outPosition, normal);
}
}
fragColor = ambient + diffuseSpecularComp;
if (fog.activeFog == 1) {
fragColor = calcFog(outPosition, fragColor, fog, ambientLight.color, dirLight);
}
}
```
The `calcNormal` function takes the following parameters:
* The vertex normal.
* The vertex tangent.
* The vertex bitangent.
* The texture coordinates.
The first thing we do in that function is calculate the TBN matrix. After that, we get the normal value from the normal map texture and use the TBN matrix to transform it from tangent space to view space. Remember that the values we get are the normal coordinates, but since they are stored as RGB values they are contained in the range \[0, 1]. We need to transform them to the range \[-1, 1], so we just multiply by two and subtract one.
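To make the range conversion concrete, here is a small illustrative sketch (plain Java with JOML, not engine code) that decodes a sampled RGB value into a tangent space normal:
```java
import org.joml.Vector3f;

public class NormalDecodeDemo {
    public static void main(String[] args) {
        // A flat area of a normal map stores roughly RGB (0.5, 0.5, 1.0), the typical bluish color
        Vector3f sampled = new Vector3f(0.5f, 0.5f, 1.0f);
        // Map [0, 1] to [-1, 1]: multiply by two, subtract one, then normalize
        Vector3f normal = new Vector3f(sampled).mul(2.0f).sub(1.0f, 1.0f, 1.0f).normalize();
        // Prints approximately (0, 0, 1): the normal points straight out of the surface
        System.out.println("Decoded normal: " + normal);
    }
}
```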
Finally, we use that function only if the material defines a normal map texture.
We also need to modify the `SceneRender` class to create and use the new normals that we use in the shaders:
```java
public class SceneRender {
...
private void createUniforms() {
...
uniformsMap.createUniform("normalSampler");
...
uniformsMap.createUniform("material.hasNormalMap");
...
}
public void render(Scene scene) {
...
uniformsMap.setUniform("normalSampler", 1);
...
for (Model model : models) {
...
for (Material material : model.getMaterialList()) {
...
String normalMapPath = material.getNormalMapPath();
boolean hasNormalMapPath = normalMapPath != null;
uniformsMap.setUniform("material.hasNormalMap", hasNormalMapPath ? 1 : 0);
...
if (hasNormalMapPath) {
Texture normalMapTexture = textureCache.getTexture(normalMapPath);
glActiveTexture(GL_TEXTURE1);
normalMapTexture.bind();
}
...
}
}
...
}
...
}
```
We need to update the sky box vertex shader because the new tangent and bitangent attributes sit between the normal data and the texture coordinates, which changes the location of the texture coordinate attribute:
```glsl
#version 330
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
layout (location=4) in vec2 texCoord;
...
```
The last step is to update the `Main` class to show this effect. We will load two quads with and without normal maps associated to them. Also, we will use left and right arrows to control light angle to show the effect.
```java
public class Main implements IAppLogic {
...
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-14", new Window.WindowOptions(), main);
...
}
...
public void init(Window window, Scene scene, Render render) {
String wallNoNormalsModelId = "quad-no-normals-model";
Model quadModelNoNormals = ModelLoader.loadModel(wallNoNormalsModelId, "resources/models/wall/wall_nonormals.obj",
scene.getTextureCache());
scene.addModel(quadModelNoNormals);
Entity wallLeftEntity = new Entity("wallLeftEntity", wallNoNormalsModelId);
wallLeftEntity.setPosition(-3f, 0, 0);
wallLeftEntity.setScale(2.0f);
wallLeftEntity.updateModelMatrix();
scene.addEntity(wallLeftEntity);
String wallModelId = "quad-model";
Model quadModel = ModelLoader.loadModel(wallModelId, "resources/models/wall/wall.obj",
scene.getTextureCache());
scene.addModel(quadModel);
Entity wallRightEntity = new Entity("wallRightEntity", wallModelId);
wallRightEntity.setPosition(3f, 0, 0);
wallRightEntity.setScale(2.0f);
wallRightEntity.updateModelMatrix();
scene.addEntity(wallRightEntity);
SceneLights sceneLights = new SceneLights();
sceneLights.getAmbientLight().setIntensity(0.2f);
DirLight dirLight = sceneLights.getDirLight();
dirLight.setPosition(1, 1, 0);
dirLight.setIntensity(1.0f);
scene.setSceneLights(sceneLights);
Camera camera = scene.getCamera();
camera.moveUp(5.0f);
camera.addRotation((float) Math.toRadians(90), 0);
lightAngle = -35;
}
...
public void input(Window window, Scene scene, long diffTimeMillis, boolean inputConsumed) {
if (inputConsumed) {
return;
}
float move = diffTimeMillis * MOVEMENT_SPEED;
Camera camera = scene.getCamera();
if (window.isKeyPressed(GLFW_KEY_W)) {
camera.moveForward(move);
} else if (window.isKeyPressed(GLFW_KEY_S)) {
camera.moveBackwards(move);
}
if (window.isKeyPressed(GLFW_KEY_A)) {
camera.moveLeft(move);
} else if (window.isKeyPressed(GLFW_KEY_D)) {
camera.moveRight(move);
}
if (window.isKeyPressed(GLFW_KEY_LEFT)) {
lightAngle -= 2.5f;
if (lightAngle < -90) {
lightAngle = -90;
}
} else if (window.isKeyPressed(GLFW_KEY_RIGHT)) {
lightAngle += 2.5f;
if (lightAngle > 90) {
lightAngle = 90;
}
}
MouseInput mouseInput = window.getMouseInput();
if (mouseInput.isRightButtonPressed()) {
Vector2f displVec = mouseInput.getDisplVec();
camera.addRotation((float) Math.toRadians(-displVec.x * MOUSE_SENSITIVITY), (float) Math.toRadians(-displVec.y * MOUSE_SENSITIVITY));
}
SceneLights sceneLights = scene.getSceneLights();
DirLight dirLight = sceneLights.getDirLight();
double angRad = Math.toRadians(lightAngle);
dirLight.getDirection().x = (float) Math.sin(angRad);
dirLight.getDirection().y = (float) Math.cos(angRad);
}
...
}
```
The result is shown in the next figure.
As you can see, the quad that has a normal texture applied gives the impression of having more volume. Although it is, in essence, a plain surface like the other quad, you can see the difference in how the light reflects.
[Next chapter](../chapter-15/chapter-15.md)
================================================
FILE: chapter-15/chapter-15.md
================================================
# Chapter 15 - Animations
Until now we have only loaded static 3D models, but in this chapter we will learn how to animate them. When thinking about animations, the first approach is to create different meshes for each model position, load them into the GPU and draw them sequentially to create the illusion of movement. Although this approach is perfect for some games, it's not very efficient in terms of memory consumption. This is where skeletal animation comes into play. We will learn how to load these models using [assimp](https://github.com/assimp/assimp).
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-15).
## Anti-aliasing support
In this chapter we will also add support for anti-aliasing. Up to this point you may have noticed saw-like edges in the models. In order to remove those effects, we will apply anti-aliasing, which basically uses the values of several samples to construct the final value for each pixel. In our case, we will use four samples. We need to set this up as a window hint prior to window creation (and add a new window option to control it):
```java
public class Window {
...
public Window(String title, WindowOptions opts, Callable<Void> resizeFunc) {
...
if (opts.antiAliasing) {
glfwWindowHint(GLFW_SAMPLES, 4);
}
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
...
}
...
public static class WindowOptions {
public boolean antiAliasing;
...
}
}
```
In the `Render` class we need to enable multi-sampling (in addition to that, we remove face culling to properly render the sample model):
```java
public class Render {
...
public Render(Window window) {
GL.createCapabilities();
glEnable(GL_MULTISAMPLE);
glEnable(GL_DEPTH_TEST);
sceneRender = new SceneRender();
guiRender = new GuiRender(window);
skyBoxRender = new SkyBoxRender();
}
...
}
```
## Introduction
In skeletal animation the way a model animates is defined by its underlying skeleton. A skeleton is defined by a hierarchy of special elements called bones. These bones are defined by their position and rotation. We have also said that it's a hierarchy, which means that the final position of each bone is affected by the position of its parents. For instance, think of a wrist: the position of a wrist is modified if a character moves the elbow and also if it moves the shoulder.
Bones do not need to represent a physical bone or articulation: they are artifacts that allow the creatives to model an animation. In addition to bones we still have vertices, the points that define the triangles that compose a 3D model. But in skeletal animation, vertices are drawn based on the position of the bones they relate to.
In this chapter I’ve consulted many different sources, but I have found two that provide a very good explanation about how to create an animated model. These sources can be consulted at:
* [http://www.3dgep.com/gpu-skinning-of-md5-models-in-opengl-and-cg/](http://www.3dgep.com/gpu-skinning-of-md5-models-in-opengl-and-cg/)
* [http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html](http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html)
If you load a model which contains animations with the current code, you will get what is called the binding pose. You can try that (with code from the previous chapter) and you will be able to see the 3D model perfectly. The binding pose defines the positions, normals, and texture coordinates of the model without being affected by animation at all. An animated model defines, in essence, the following additional information:
* A tree-like structure, composed of bones, which defines a hierarchy where we can compose transformations.
* Each mesh, besides containing information about vertex positions, normals, etc., will include information about which bones this vertex relates to (by using bone indices) and how much they affect it (by modulating the effect using a weight factor).
* A set of animation key frames which define the specific transformations that should be applied to each bone and, by extension, will modify the associated vertices. A model can define several animations and each of them may be composed of several animation key frames. When animating, we iterate over those key frames (which define a duration) and we can even interpolate between them. In essence, for a specific instant of time we apply to each vertex the transformations associated to the related bones (see the formula right after this list).
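To make that last point concrete, and assuming the maximum of four weights per vertex that we will use later in this chapter, the position of each vertex is in essence computed as a weighted combination of the bone transformation matrices:
$$\displaystyle finalPosition = \sum_{i=0}^{3} weight_i \cdot boneMatrix_{boneId_i} \cdot position$$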
Let’s review first the structures handled by assimp that contain animation information. We will start with the bones and weights information. For each `AIMesh`, we can access the vertex positions, texture coordinates and indices. Meshes also store a list of bones. Each bone is defined by the following attributes:
* A name.
* An offset matrix: This will be used later to compute the final transformations that should be applied to each bone.
Bones also point to a list of weights. Each weight is defined by the following attributes:
* A weight factor, that is, the number that will be used to modulate the influence of the bone’s transformation associated to each vertex.
* A vertex identifier, that is, the vertex associated to the current bone.
The following picture shows the relationships between all these elements.
Therefore, each vertex, besides containing position, normals and texture coordinates, will now have a set of indices (typically four values) of the bones that affect it (`jointIndices`) and a set of weights that modulate that effect. Each vertex will be modified according to the transformation matrices associated to each joint in order to calculate its final position. Therefore, we will need to augment the VAO associated to each mesh to hold that information, as shown in the next figure.

The assimp scene object defines a node hierarchy. Each node is defined by a name and a list of children nodes. Animations use these nodes to define the transformations that should be applied. This hierarchy is indeed the bones’ hierarchy. Every bone is a node, has a parent (except the root node), and possibly a set of children. There are special nodes that are not bones; they are used to group transformations, and should be handled when calculating the transformations. Another issue is that this node hierarchy is defined for the whole model: we do not have separate hierarchies for each mesh.
A scene also defines a set of animations. A single model can have more than one animation to model how a character walks, runs, etc. Each of these animations defines different transformations. An animation has the following attributes:
* A name.
* A duration. That is, the duration in time of the animation. The name may seem confusing since an animation is the list of transformations that should be applied to each node for each different frame.
* A list of animation channels. An animation channel contains, for a specific instant in time, the translation, rotation and scaling information that should be applied to each node. The class that models the data contained in the animation channels is the `AINodeAnim`. Animation channels could be assimilated as the key frames.
The following figure shows the relationships between all the elements described above.
For a specific instant of time, for a frame, the transformation to be applied to a bone is the transformation defined in the animation channel for that instant, multiplied by the transformations of all the parent nodes up to the root node. Hence, we need to extract the information stored in the scene; the process is as follows:
* Construct the node hierarchy.
* For each animation, iterate over each animation channel (for each animation node) and construct the transformation matrices for each of the bones for all the potential animation frames. Those transformation matrices are a combination of the transformation matrix of the node associated to the bone and the bone transformation matrices.
* We start at the root node, and for each frame, build the transformation matrix for that node, which is the transformation matrix of the node multiplied by the composition of the translation, rotation and scale matrix of that specific frame for that node.
* We then get the bones associated to that node and complement that transformation by multiplying the offset matrices of the bones. The result will be a transformation matrix associated with the related bones for that specific frame, which will be used in the shaders.
* After that, we iterate over the children nodes, passing the transformation matrix of the parent node to also be used in combination with the children node transformations.
## Implementation
Let's start by analyzing the changes in the `ModelLoader` class:
```java
public class ModelLoader {
public static final int MAX_BONES = 150;
private static final Matrix4f IDENTITY_MATRIX = new Matrix4f();
...
public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, boolean animation) {
return loadModel(modelId, modelPath, textureCache, aiProcess_GenSmoothNormals | aiProcess_JoinIdenticalVertices |
aiProcess_Triangulate | aiProcess_FixInfacingNormals | aiProcess_CalcTangentSpace | aiProcess_LimitBoneWeights |
(animation ? 0 : aiProcess_PreTransformVertices));
}
...
}
```
We need an extra argument (named `animation`) in the `loadModel` method to indicate whether we are loading a model with animations or not. If so, we cannot use the `aiProcess_PreTransformVertices` flag. This flag performs some transformations over the loaded data so the model is placed at the origin and the coordinates are corrected to match the OpenGL coordinate system. We cannot use this flag for animated models because it removes the animation data.
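As an illustration, loading a static model versus an animated one would now look roughly like this (the model paths here are just hypothetical examples):
```java
// Static model: aiProcess_PreTransformVertices is applied (animation = false)
Model staticModel = ModelLoader.loadModel("terrain", "resources/models/terrain/terrain.obj",
        scene.getTextureCache(), false);
// Animated model: the node hierarchy and animation data are preserved (animation = true)
Model animatedModel = ModelLoader.loadModel("player", "resources/models/player/player.gltf",
        scene.getTextureCache(), true);
```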
While processing the meshes, we will also process the associated bones and weights for each vertex, storing the list of bones so we can later on build the required transformations:
```java
public class ModelLoader {
...
public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, int flags) {
...
List<Bone> boneList = new ArrayList<>();
for (int i = 0; i < numMeshes; i++) {
AIMesh aiMesh = AIMesh.create(aiMeshes.get(i));
Mesh mesh = processMesh(aiMesh, boneList);
...
}
...
}
...
private static Mesh processMesh(AIMesh aiMesh, List<Bone> boneList) {
...
AnimMeshData animMeshData = processBones(aiMesh, boneList);
...
return new Mesh(vertices, normals, tangents, bitangents, textCoords, indices, animMeshData.boneIds, animMeshData.weights);
}
...
}
```
The new method `processBones` is defined like this:
```java
public class ModelLoader {
...
private static AnimMeshData processBones(AIMesh aiMesh, List<Bone> boneList) {
List<Integer> boneIds = new ArrayList<>();
List<Float> weights = new ArrayList<>();
Map<Integer, List<VertexWeight>> weightSet = new HashMap<>();
int numBones = aiMesh.mNumBones();
PointerBuffer aiBones = aiMesh.mBones();
for (int i = 0; i < numBones; i++) {
AIBone aiBone = AIBone.create(aiBones.get(i));
int id = boneList.size();
Bone bone = new Bone(id, aiBone.mName().dataString(), toMatrix(aiBone.mOffsetMatrix()));
boneList.add(bone);
int numWeights = aiBone.mNumWeights();
AIVertexWeight.Buffer aiWeights = aiBone.mWeights();
for (int j = 0; j < numWeights; j++) {
AIVertexWeight aiWeight = aiWeights.get(j);
VertexWeight vw = new VertexWeight(bone.boneId(), aiWeight.mVertexId(),
aiWeight.mWeight());
List<VertexWeight> vertexWeightList = weightSet.get(vw.vertexId());
if (vertexWeightList == null) {
vertexWeightList = new ArrayList<>();
weightSet.put(vw.vertexId(), vertexWeightList);
}
vertexWeightList.add(vw);
}
}
int numVertices = aiMesh.mNumVertices();
for (int i = 0; i < numVertices; i++) {
List<VertexWeight> vertexWeightList = weightSet.get(i);
int size = vertexWeightList != null ? vertexWeightList.size() : 0;
for (int j = 0; j < Mesh.MAX_WEIGHTS; j++) {
if (j < size) {
VertexWeight vw = vertexWeightList.get(j);
weights.add(vw.weight());
boneIds.add(vw.boneId());
} else {
weights.add(0.0f);
boneIds.add(0);
}
}
}
return new AnimMeshData(Utils.listFloatToArray(weights), Utils.listIntToArray(boneIds));
}
...
}
```
This method traverses the bone definition for a specific mesh, getting their weights and filling up three lists:
* `boneList`: It contains a list of bones, with their offset matrices. It will be used later on to calculate the final bones transformations. A new class named `Bone` has been created to hold that information. This list will contain the bones for all the meshes.
* `boneIds`: It contains just the identifiers of the bones for each vertex of the `Mesh`. Bones are identified by their position when rendering. This list only contains the bones for a specific Mesh.
* `weights`: It contains the weights for each vertex of the `Mesh` to be applied for the associated bones.
The information retrieved in this method is encapsulated in the `AnimMeshData` record (defined inside the `ModelLoader` class). The new `Bone` and `VertexWeight` classes are also records. They are defined like this:
```java
public class ModelLoader {
...
public record AnimMeshData(float[] weights, int[] boneIds) {
}
private record Bone(int boneId, String boneName, Matrix4f offsetMatrix) {
}
private record VertexWeight(int boneId, int vertexId, float weight) {
}
}
```
We have also created two new methods in the `Utils` class to transform a `List` of `Float`s or `Integer`s into an array:
```java
public class Utils {
...
public static float[] listFloatToArray(List<Float> list) {
int size = list != null ? list.size() : 0;
float[] floatArr = new float[size];
for (int i = 0; i < size; i++) {
floatArr[i] = list.get(i);
}
return floatArr;
}
public static int[] listIntToArray(List<Integer> list) {
return list.stream().mapToInt((Integer v) -> v).toArray();
}
...
}
```
Going back to the `loadModel` method, once we have processed the meshes and the materials we process the animation data (that is, the different animation key frames associated to each animation and their transformations). All that information is also stored in the `Model` class:
```java
public class ModelLoader {
...
public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, int flags) {
...
List<Model.Animation> animations = new ArrayList<>();
int numAnimations = aiScene.mNumAnimations();
if (numAnimations > 0) {
Node rootNode = buildNodesTree(aiScene.mRootNode(), null);
Matrix4f globalInverseTransformation = toMatrix(aiScene.mRootNode().mTransformation()).invert();
animations = processAnimations(aiScene, boneList, rootNode, globalInverseTransformation);
}
aiReleaseImport(aiScene);
return new Model(modelId, materialList, animations);
}
...
}
```
The `buildNodesTree` method is quite simple; it just traverses the node hierarchy starting from the root node, constructing a tree of nodes:
```java
public class ModelLoader {
...
private static Node buildNodesTree(AINode aiNode, Node parentNode) {
String nodeName = aiNode.mName().dataString();
Node node = new Node(nodeName, parentNode, toMatrix(aiNode.mTransformation()));
int numChildren = aiNode.mNumChildren();
PointerBuffer aiChildren = aiNode.mChildren();
for (int i = 0; i < numChildren; i++) {
AINode aiChildNode = AINode.create(aiChildren.get(i));
Node childNode = buildNodesTree(aiChildNode, node);
node.addChild(childNode);
}
return node;
}
...
}
```
The `toMatrix` method just transforms an assimp matrix to a JOML one:
```java
public class ModelLoader {
...
private static Matrix4f toMatrix(AIMatrix4x4 aiMatrix4x4) {
Matrix4f result = new Matrix4f();
result.m00(aiMatrix4x4.a1());
result.m10(aiMatrix4x4.a2());
result.m20(aiMatrix4x4.a3());
result.m30(aiMatrix4x4.a4());
result.m01(aiMatrix4x4.b1());
result.m11(aiMatrix4x4.b2());
result.m21(aiMatrix4x4.b3());
result.m31(aiMatrix4x4.b4());
result.m02(aiMatrix4x4.c1());
result.m12(aiMatrix4x4.c2());
result.m22(aiMatrix4x4.c3());
result.m32(aiMatrix4x4.c4());
result.m03(aiMatrix4x4.d1());
result.m13(aiMatrix4x4.d2());
result.m23(aiMatrix4x4.d3());
result.m33(aiMatrix4x4.d4());
return result;
}
...
}
```
The `processAnimations` method is defined like this:
```java
public class ModelLoader {
...
private static List<Model.Animation> processAnimations(AIScene aiScene, List<Bone> boneList,
Node rootNode, Matrix4f globalInverseTransformation) {
List<Model.Animation> animations = new ArrayList<>();
// Process all animations
int numAnimations = aiScene.mNumAnimations();
PointerBuffer aiAnimations = aiScene.mAnimations();
for (int i = 0; i < numAnimations; i++) {
AIAnimation aiAnimation = AIAnimation.create(aiAnimations.get(i));
int maxFrames = calcAnimationMaxFrames(aiAnimation);
List<Model.AnimatedFrame> frames = new ArrayList<>();
Model.Animation animation = new Model.Animation(aiAnimation.mName().dataString(), aiAnimation.mDuration(), frames);
animations.add(animation);
for (int j = 0; j < maxFrames; j++) {
Matrix4f[] boneMatrices = new Matrix4f[MAX_BONES];
Arrays.fill(boneMatrices, IDENTITY_MATRIX);
Model.AnimatedFrame animatedFrame = new Model.AnimatedFrame(boneMatrices);
buildFrameMatrices(aiAnimation, boneList, animatedFrame, j, rootNode,
rootNode.getNodeTransformation(), globalInverseTransformation);
frames.add(animatedFrame);
}
}
return animations;
}
...
}
```
This method returns a `List` of `Model.Animation` instances. Remember that a model can have more than one animation, so they are stored by their index. For each of these animations, we construct a list of animation frames (`Model.AnimatedFrame` instances), which are essentially a list of the transformation matrices to be applied to each of the bones that compose the model. For each animation, we calculate the maximum number of frames by calling the method `calcAnimationMaxFrames`, which is defined like this:
```java
public class ModelLoader {
...
private static int calcAnimationMaxFrames(AIAnimation aiAnimation) {
int maxFrames = 0;
int numNodeAnims = aiAnimation.mNumChannels();
PointerBuffer aiChannels = aiAnimation.mChannels();
for (int i = 0; i < numNodeAnims; i++) {
AINodeAnim aiNodeAnim = AINodeAnim.create(aiChannels.get(i));
int numFrames = Math.max(Math.max(aiNodeAnim.mNumPositionKeys(), aiNodeAnim.mNumScalingKeys()),
aiNodeAnim.mNumRotationKeys());
maxFrames = Math.max(maxFrames, numFrames);
}
return maxFrames;
}
...
}
```
Before continuing to review the changes in the `ModelLoader` class, let's review the changes in the `Model` class to hold animation information:
```java
public class Model {
...
private List<Animation> animationList;
...
public Model(String id, List<Material> materialList, List<Animation> animationList) {
entitiesList = new ArrayList<>();
this.id = id;
this.materialList = materialList;
this.animationList = animationList;
}
...
public List<Animation> getAnimationList() {
return animationList;
}
...
public record AnimatedFrame(Matrix4f[] boneMatrices) {
}
public record Animation(String name, double duration, List<AnimatedFrame> frames) {
}
}
```
As you can see, we store the list of animations associated to the model, each animation defined by a name, a duration and a list of animation frames, which in essence just stores the bone transformation matrices to be applied for each bone.
Back in the `ModelLoader` class, each `AINodeAnim` instance defines the transformations to be applied to a specific node for a specific frame, in the form of translations, rotations and scaling values. The trick here is that, for example, for a specific node, translation values can stop at a specific frame, while rotation and scaling values continue for the next frames. In that case, we will have fewer translation values than rotation or scaling ones. Therefore, a good approximation to calculate the maximum number of frames is to use the maximum of those counts. The problem gets even more complex because this is defined per node: a node can define transformations only for the first frames and not apply further modifications for the rest. In that case, we should always use the last defined values. Therefore, we take the maximum number over all the animation channels of the nodes.
Going back to the `processAnimations` method, with that information, we are ready to iterate over the different frames and build the transformation matrices for the bones by calling the `buildFrameMatrices` method. For each frame, we start with the root node, and will apply the transformations recursively from top to bottom of the nodes hierarchy. The `buildFrameMatrices` is defined like this:
```java
public class ModelLoader {
...
private static void buildFrameMatrices(AIAnimation aiAnimation, List<Bone> boneList, Model.AnimatedFrame animatedFrame,
int frame, Node node, Matrix4f parentTransformation, Matrix4f globalInverseTransform) {
String nodeName = node.getName();
AINodeAnim aiNodeAnim = findAIAnimNode(aiAnimation, nodeName);
Matrix4f nodeTransform = node.getNodeTransformation();
if (aiNodeAnim != null) {
nodeTransform = buildNodeTransformationMatrix(aiNodeAnim, frame);
}
Matrix4f nodeGlobalTransform = new Matrix4f(parentTransformation).mul(nodeTransform);
List<Bone> affectedBones = boneList.stream().filter(b -> b.boneName().equals(nodeName)).toList();
for (Bone bone : affectedBones) {
Matrix4f boneTransform = new Matrix4f(globalInverseTransform).mul(nodeGlobalTransform).
mul(bone.offsetMatrix());
animatedFrame.boneMatrices()[bone.boneId()] = boneTransform;
}
for (Node childNode : node.getChildren()) {
buildFrameMatrices(aiAnimation, boneList, animatedFrame, frame, childNode, nodeGlobalTransform,
globalInverseTransform);
}
}
...
}
```
We get the transformation associated to the node. Then we check if this node has an animation node associated to it. If so, we need to get the proper translation, rotation and scaling transformations that apply to the frame we are handling. With that information, we get the bones associated to that node and update the transformation matrix for each of those bones for that specific frame by multiplying:
* The model inverse global transformation matrix (the inverse of the root node transformation matrix).
* The transformation matrix for the node.
* The bone offset matrix.
After that, we iterate over the children nodes, using the node transformation matrix as the parent matrix for those child nodes.
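Putting it together, for a given frame the matrix stored for each bone boils down to:
$$boneMatrix = globalInverseTransform \cdot nodeGlobalTransform \cdot offsetMatrix$$
where $$nodeGlobalTransform$$ is the node transformation for that frame multiplied by the transformations of all its parent nodes. The `buildNodeTransformationMatrix` method, shown next, builds the per-frame node transformation from the translation, rotation and scaling keys: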
```java
public class ModelLoader {
...
private static Matrix4f buildNodeTransformationMatrix(AINodeAnim aiNodeAnim, int frame) {
AIVectorKey.Buffer positionKeys = aiNodeAnim.mPositionKeys();
AIVectorKey.Buffer scalingKeys = aiNodeAnim.mScalingKeys();
AIQuatKey.Buffer rotationKeys = aiNodeAnim.mRotationKeys();
AIVectorKey aiVecKey;
AIVector3D vec;
Matrix4f nodeTransform = new Matrix4f();
int numPositions = aiNodeAnim.mNumPositionKeys();
if (numPositions > 0) {
aiVecKey = positionKeys.get(Math.min(numPositions - 1, frame));
vec = aiVecKey.mValue();
nodeTransform.translate(vec.x(), vec.y(), vec.z());
}
int numRotations = aiNodeAnim.mNumRotationKeys();
if (numRotations > 0) {
AIQuatKey quatKey = rotationKeys.get(Math.min(numRotations - 1, frame));
AIQuaternion aiQuat = quatKey.mValue();
Quaternionf quat = new Quaternionf(aiQuat.x(), aiQuat.y(), aiQuat.z(), aiQuat.w());
nodeTransform.rotate(quat);
}
int numScalingKeys = aiNodeAnim.mNumScalingKeys();
if (numScalingKeys > 0) {
aiVecKey = scalingKeys.get(Math.min(numScalingKeys - 1, frame));
vec = aiVecKey.mValue();
nodeTransform.scale(vec.x(), vec.y(), vec.z());
}
return nodeTransform;
}
...
}
```
The `AINodeAnim` instance defines a set of keys that contain translation, rotation and scaling information. These keys refer to specific instants of time. We assume that information is ordered by time, and construct a list of matrices that contain the transformation to be applied for each frame. As said before, some of those transformations may "stop" at a specific frame, so we should use the last available values for the remaining frames.
The `findAIAnimNode` method is defined like this:
```java
public class ModelLoader {
...
private static AINodeAnim findAIAnimNode(AIAnimation aiAnimation, String nodeName) {
AINodeAnim result = null;
int numAnimNodes = aiAnimation.mNumChannels();
PointerBuffer aiChannels = aiAnimation.mChannels();
for (int i = 0; i < numAnimNodes; i++) {
AINodeAnim aiNodeAnim = AINodeAnim.create(aiChannels.get(i));
if (nodeName.equals(aiNodeAnim.mNodeName().dataString())) {
result = aiNodeAnim;
break;
}
}
return result;
}
...
}
```
The `Mesh` class needs to be updated to allocate the new VBOs for bone indices and bone weights. You will see that we use a maximum of four weights (and the associated bone indices) per vertex:
```java
public class Mesh {
public static final int MAX_WEIGHTS = 4;
...
public Mesh(float[] positions, float[] normals, float[] tangents, float[] bitangents, float[] textCoords, int[] indices) {
this(positions, normals, tangents, bitangents, textCoords, indices,
new int[Mesh.MAX_WEIGHTS * positions.length / 3], new float[Mesh.MAX_WEIGHTS * positions.length / 3]);
}
public Mesh(float[] positions, float[] normals, float[] tangents, float[] bitangents, float[] textCoords, int[] indices,
int[] boneIndices, float[] weights) {
...
// Bone weights
vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer weightsBuffer = MemoryUtil.memCallocFloat(weights.length);
weightsBuffer.put(weights).flip();
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, weightsBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(5);
glVertexAttribPointer(5, 4, GL_FLOAT, false, 0, 0);
// Bone indices
vboId = glGenBuffers();
vboIdList.add(vboId);
IntBuffer boneIndicesBuffer = MemoryUtil.memCallocInt(boneIndices.length);
boneIndicesBuffer.put(boneIndices).flip();
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, boneIndicesBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(6);
glVertexAttribPointer(6, 4, GL_FLOAT, false, 0, 0);
...
MemoryUtil.memFree(weightsBuffer);
MemoryUtil.memFree(boneIndicesBuffer);
...
}
...
}
```
The `Node` class just stores the data associated to an `AINode` and has specific methods to manage its children:
```java
package org.lwjglb.engine.scene;
import org.joml.Matrix4f;
import java.util.*;
public class Node {
private final List<Node> children;
private final String name;
private final Node parent;
private Matrix4f nodeTransformation;
public Node(String name, Node parent, Matrix4f nodeTransformation) {
this.name = name;
this.parent = parent;
this.nodeTransformation = nodeTransformation;
this.children = new ArrayList<>();
}
public void addChild(Node node) {
this.children.add(node);
}
public List<Node> getChildren() {
return children;
}
public String getName() {
return name;
}
public Matrix4f getNodeTransformation() {
return nodeTransformation;
}
public Node getParent() {
return parent;
}
}
```
Now we can see how we render animated models and how they can coexist with static ones. Let's start with the `SceneRender` class. In this class we just need to set up a new uniform to pass the bone matrices (for the current animation frame) so they can be used in the shader. Besides that, rendering static and animated entities has no additional impact on this class.
```java
public class SceneRender {
...
private void createUniforms() {
...
uniformsMap.createUniform("bonesMatrices");
...
}
public void render(Scene scene) {
...
for (Model model : models) {
List<Entity> entities = model.getEntitiesList();
for (Material material : model.getMaterialList()) {
...
for (Mesh mesh : material.getMeshList()) {
glBindVertexArray(mesh.getVaoId());
for (Entity entity : entities) {
uniformsMap.setUniform("modelMatrix", entity.getModelMatrix());
AnimationData animationData = entity.getAnimationData();
if (animationData == null) {
uniformsMap.setUniform("bonesMatrices", AnimationData.DEFAULT_BONES_MATRICES);
} else {
uniformsMap.setUniform("bonesMatrices", animationData.getCurrentFrame().boneMatrices());
}
glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
}
}
}
}
}
...
}
```
For static models, we will pass a default array of matrices set to zero. We also need to modify the `UniformsMap` class to add a new method to set up the values for an array of matrices:
```java
public class UniformsMap {
...
public void setUniform(String uniformName, Matrix4f[] matrices) {
try (MemoryStack stack = MemoryStack.stackPush()) {
int length = matrices != null ? matrices.length : 0;
FloatBuffer fb = stack.mallocFloat(16 * length);
for (int i = 0; i < length; i++) {
matrices[i].get(16 * i, fb);
}
glUniformMatrix4fv(uniforms.get(uniformName), false, fb);
}
}
}
```
We have also created a new class named `AnimationData` to control the current animation assigned to an `Entity`:
```java
package org.lwjglb.engine.scene;
import org.joml.Matrix4f;
import org.lwjglb.engine.graph.Model;
import java.util.Arrays;
public class AnimationData {
public static final Matrix4f[] DEFAULT_BONES_MATRICES = new Matrix4f[ModelLoader.MAX_BONES];
static {
Matrix4f zeroMatrix = new Matrix4f().zero();
Arrays.fill(DEFAULT_BONES_MATRICES, zeroMatrix);
}
private Model.Animation currentAnimation;
private int currentFrameIdx;
public AnimationData(Model.Animation currentAnimation) {
currentFrameIdx = 0;
this.currentAnimation = currentAnimation;
}
public Model.Animation getCurrentAnimation() {
return currentAnimation;
}
public Model.AnimatedFrame getCurrentFrame() {
return currentAnimation.frames().get(currentFrameIdx);
}
public int getCurrentFrameIdx() {
return currentFrameIdx;
}
public void nextFrame() {
int nextFrame = currentFrameIdx + 1;
if (nextFrame > currentAnimation.frames().size() - 1) {
currentFrameIdx = 0;
} else {
currentFrameIdx = nextFrame;
}
}
public void setCurrentAnimation(Model.Animation currentAnimation) {
currentFrameIdx = 0;
this.currentAnimation = currentAnimation;
}
}
```
And of course, we need to modify the `Entity` class to hold a reference to the `AnimationData` instance:
```java
public class Entity {
...
private AnimationData animationData;
...
public AnimationData getAnimationData() {
return animationData;
}
...
public void setAnimationData(AnimationData animationData) {
this.animationData = animationData;
}
...
}
```
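With these pieces in place, switching an entity to a different animation at run time is just a matter of resetting its `AnimationData`. The following is a minimal usage sketch (a hypothetical helper, not part of the chapter's code), assuming the model exposes its animations through `getAnimationList`, as used in the `Main` class below:
```java
// Hypothetical helper: switch an entity to another of its model's animations.
// Assumes the entity already has an AnimationData instance attached.
public static void selectAnimation(Entity entity, Model model, int animationIdx) {
    List<Model.Animation> animations = model.getAnimationList();
    if (animationIdx >= 0 && animationIdx < animations.size()) {
        // setCurrentAnimation also resets the current frame index back to 0
        entity.getAnimationData().setCurrentAnimation(animations.get(animationIdx));
    }
}
```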
We need to modify the scene vertex shader (`scene.vert`) to bring the animation data into play. We start by defining some constants and the new input attributes for bone weights and indices (we use four elements per vertex, so we use `vec4` and `ivec4`). We also pass the bone matrices associated with the current animation as a uniform.
```glsl
#version 330
const int MAX_WEIGHTS = 4;
const int MAX_BONES = 150;
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
layout (location=2) in vec3 tangent;
layout (location=3) in vec3 bitangent;
layout (location=4) in vec2 texCoord;
layout (location=5) in vec4 boneWeights;
layout (location=6) in ivec4 boneIndices;
...
uniform mat4 bonesMatrices[MAX_BONES];
...
```
In the `main` function we iterate over the bone weights and modify the position and normals using the matrices selected by the associated bone indices, modulated by the associated weights. You can think of it as each bone contributing to the modification of the position (and normals), weighted by its influence over the vertex. For static models the weights are zero, so we stick to the original position and normal values.
```glsl
...
void main()
{
vec4 initPos = vec4(0, 0, 0, 0);
vec4 initNormal = vec4(0, 0, 0, 0);
vec4 initTangent = vec4(0, 0, 0, 0);
vec4 initBitangent = vec4(0, 0, 0, 0);
int count = 0;
for (int i = 0; i < MAX_WEIGHTS; i++) {
float weight = boneWeights[i];
if (weight > 0) {
count++;
int boneIndex = boneIndices[i];
vec4 tmpPos = bonesMatrices[boneIndex] * vec4(position, 1.0);
initPos += weight * tmpPos;
vec4 tmpNormal = bonesMatrices[boneIndex] * vec4(normal, 0.0);
initNormal += weight * tmpNormal;
vec4 tmpTangent = bonesMatrices[boneIndex] * vec4(tangent, 0.0);
initTangent += weight * tmpTangent;
vec4 tmpBitangent = bonesMatrices[boneIndex] * vec4(bitangent, 0.0);
initBitangent += weight * tmpBitangent;
}
}
if (count == 0) {
initPos = vec4(position, 1.0);
initNormal = vec4(normal, 0.0);
initTangent = vec4(tangent, 0.0);
initBitangent = vec4(bitangent, 0.0);
}
mat4 modelViewMatrix = viewMatrix * modelMatrix;
vec4 mvPosition = modelViewMatrix * initPos;
gl_Position = projectionMatrix * mvPosition;
outPosition = mvPosition.xyz;
outNormal = normalize(modelViewMatrix * initNormal).xyz;
outTangent = normalize(modelViewMatrix * initTangent).xyz;
outBitangent = normalize(modelViewMatrix * initBitangent).xyz;
outTextCoord = texCoord;
}
```
The following figure depicts the process.

In the `Main` class we need to load the animated model and activate anti-aliasing. We will also advance the animation frame on each update:
```java
public class Main implements IAppLogic {
...
private AnimationData animationData;
...
public static void main(String[] args) {
Main main = new Main();
Window.WindowOptions opts = new Window.WindowOptions();
opts.antiAliasing = true;
Engine gameEng = new Engine("chapter-15", opts, main);
gameEng.start();
}
...
@Override
public void init(Window window, Scene scene, Render render) {
String terrainModelId = "terrain";
Model terrainModel = ModelLoader.loadModel(terrainModelId, "resources/models/terrain/terrain.obj",
scene.getTextureCache(), false);
scene.addModel(terrainModel);
Entity terrainEntity = new Entity("terrainEntity", terrainModelId);
terrainEntity.setScale(100.0f);
terrainEntity.updateModelMatrix();
scene.addEntity(terrainEntity);
String bobModelId = "bobModel";
Model bobModel = ModelLoader.loadModel(bobModelId, "resources/models/bob/boblamp.md5mesh",
scene.getTextureCache(), true);
scene.addModel(bobModel);
Entity bobEntity = new Entity("bobEntity", bobModelId);
bobEntity.setScale(0.05f);
bobEntity.updateModelMatrix();
animationData = new AnimationData(bobModel.getAnimationList().get(0));
bobEntity.setAnimationData(animationData);
scene.addEntity(bobEntity);
SceneLights sceneLights = new SceneLights();
AmbientLight ambientLight = sceneLights.getAmbientLight();
ambientLight.setIntensity(0.5f);
ambientLight.setColor(0.3f, 0.3f, 0.3f);
DirLight dirLight = sceneLights.getDirLight();
dirLight.setPosition(0, 1, 0);
dirLight.setIntensity(1.0f);
scene.setSceneLights(sceneLights);
SkyBox skyBox = new SkyBox("resources/models/skybox/skybox.obj", scene.getTextureCache());
skyBox.getSkyBoxEntity().setScale(100);
skyBox.getSkyBoxEntity().updateModelMatrix();
scene.setSkyBox(skyBox);
scene.setFog(new Fog(true, new Vector3f(0.5f, 0.5f, 0.5f), 0.02f));
Camera camera = scene.getCamera();
camera.setPosition(-1.5f, 3.0f, 4.5f);
camera.addRotation((float) Math.toRadians(15.0f), (float) Math.toRadians(390.f));
lightAngle = 0;
}
...
@Override
public void update(Window window, Scene scene, long diffTimeMillis) {
animationData.nextFrame();
}
}
```
Finally, we also need to modify the `SkyBox` class since the `loadModel` method of the `ModelLoader` class has changed:
```java
public class SkyBox {
...
public SkyBox(String skyBoxModelPath, TextureCache textureCache) {
skyBoxModel = ModelLoader.loadModel("skybox-model", skyBoxModelPath, textureCache, false);
...
}
}
```
You will be able to see something like this:
[Next chapter](../chapter-16/chapter-16.md)
================================================
FILE: chapter-16/chapter-16.md
================================================
# Chapter 16 - Audio
Until this moment we have been dealing with graphics, but another key aspect of every game is audio. In this chapter we will add sound support.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-16).
## OpenAL
Audio capability is going to be addressed in this chapter with the help of [OpenAL](https://www.openal.org) (Open Audio Library). OpenAL is the OpenGL counterpart for audio: it allows us to play sounds through an abstraction layer. That layer isolates us from the underlying complexities of the audio subsystem. Besides that, it allows us to “render” sounds in a 3D scene, where sounds can be set up at specific locations, attenuated with the distance and modified according to their velocity (simulating the [Doppler effect](https://en.wikipedia.org/wiki/Doppler_effect)).
Before starting to code we need to present the main elements involved when dealing with OpenAL, which are:
* Buffers.
* Sources.
* Listener.
Buffers store audio data, such as music or sound effects. They are similar to the textures in the OpenGL domain. OpenAL expects audio data to be in PCM (Pulse Coded Modulation) format (either in mono or in stereo), so we cannot just dump MP3 or OGG files without converting them first to PCM.
The next elements are sources, which represent a location in 3D space (a point) that emits sound. A source is associated with a buffer (only one at a time) and can be defined by the following attributes:
* A position, the location of the source ($$x$$, $$y$$ and $$z$$ coordinates). By the way, OpenAL uses a right handed Cartesian coordinate system just as OpenGL does, so you can assume (to simplify things) that your world coordinates are equivalent to the ones in the sound space coordinate system.
* A velocity, which specifies how fast the source is moving. This is used to simulate Doppler effect.
* A gain, which is used to modify the intensity of the sound (it’s like an amplifier factor).
A source has additional attributes which will be shown later when describing the source code.
And last, but not least, the listener, which is where the generated sounds are supposed to be heard. The listener represents where the microphone is placed in a 3D audio scene to receive the sounds. There is only one listener. Thus, it’s often said that audio rendering is done from the listener’s perspective. A listener shares some of the attributes of a source, but it has some additional ones, such as the orientation. The orientation represents where the listener is facing.
So a 3D audio scene is composed of a set of sound sources which emit sound and a listener that receives them. The final perceived sound will depend on the distance of the listener to the different sources, their relative speed and the selected propagation models. Sources can share buffers and play the same data. The following figure depicts a sample 3D scene with the different element types involved.
## Implementation
In order to use OpenAL, the first thing to do is add the Maven dependencies to the project's pom.xml. We need to add compile time and runtime dependencies.
```xml
...
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-openal</artifactId>
    <version>${lwjgl.version}</version>
</dependency>
...
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-openal</artifactId>
    <version>${lwjgl.version}</version>
    <classifier>${native.target}</classifier>
    <scope>runtime</scope>
</dependency>
...
```
So, let's start coding. We will create a new package under the name `org.lwjglb.engine.sound` that will host all the classes responsible for handling audio. We will start with a class named `SoundBuffer`, which will represent an OpenAL buffer. A fragment of the definition of that class is shown below.
```java
package org.lwjglb.engine.sound;
import org.lwjgl.stb.STBVorbisInfo;
import org.lwjgl.system.*;
import java.nio.*;
import static org.lwjgl.openal.AL10.*;
import static org.lwjgl.stb.STBVorbis.*;
import static org.lwjgl.system.MemoryUtil.NULL;
public class SoundBuffer {
private final int bufferId;
private ShortBuffer pcm;
public SoundBuffer(String filePath) {
this.bufferId = alGenBuffers();
try (STBVorbisInfo info = STBVorbisInfo.malloc()) {
pcm = readVorbis(filePath, info);
// Copy to buffer
alBufferData(bufferId, info.channels() == 1 ? AL_FORMAT_MONO16 : AL_FORMAT_STEREO16, pcm, info.sample_rate());
}
}
public void cleanup() {
alDeleteBuffers(this.bufferId);
if (pcm != null) {
MemoryUtil.memFree(pcm);
}
}
public int getBufferId() {
return this.bufferId;
}
private ShortBuffer readVorbis(String filePath, STBVorbisInfo info) {
try (MemoryStack stack = MemoryStack.stackPush()) {
IntBuffer error = stack.mallocInt(1);
long decoder = stb_vorbis_open_filename(filePath, error, null);
if (decoder == NULL) {
throw new RuntimeException("Failed to open Ogg Vorbis file. Error: " + error.get(0));
}
stb_vorbis_get_info(decoder, info);
int channels = info.channels();
int lengthSamples = stb_vorbis_stream_length_in_samples(decoder);
ShortBuffer result = MemoryUtil.memAllocShort(lengthSamples * channels);
result.limit(stb_vorbis_get_samples_short_interleaved(decoder, channels, result) * channels);
stb_vorbis_close(decoder);
return result;
}
}
}
```
The constructor of the class expects a sound file path and creates a new buffer from it. The first thing that we do is create an OpenAL buffer with the call to `alGenBuffers`. At the end our sound buffer will be identified by an integer which is like a pointer to the data it holds. Once the buffer has been created we dump the audio data in it. The constructor expects a file in OGG format, so we need to transform it to PCM format. This is done in the `readVorbis` method.
Previous versions of LWJGL had a helper class named `WaveData` which was used to load audio files in WAV format. This class is no longer present in LWJGL 3. Nevertheless, you may get the source code from that class and use it in your games (maybe without requiring any changes).
The `SoundBuffer` class also provides a `cleanup` method to free the resources when we are done with it.
Let's continue by modelling an OpenAL sound source, which will be implemented by a class named `SoundSource`. The class is defined below.
```java
package org.lwjglb.engine.sound;
import org.joml.Vector3f;
import static org.lwjgl.openal.AL10.*;
public class SoundSource {
private final int sourceId;
public SoundSource(boolean loop, boolean relative) {
this.sourceId = alGenSources();
alSourcei(sourceId, AL_LOOPING, loop ? AL_TRUE : AL_FALSE);
alSourcei(sourceId, AL_SOURCE_RELATIVE, relative ? AL_TRUE : AL_FALSE);
}
public void cleanup() {
stop();
alDeleteSources(sourceId);
}
public boolean isPlaying() {
return alGetSourcei(sourceId, AL_SOURCE_STATE) == AL_PLAYING;
}
public void pause() {
alSourcePause(sourceId);
}
public void play() {
alSourcePlay(sourceId);
}
public void setBuffer(int bufferId) {
stop();
alSourcei(sourceId, AL_BUFFER, bufferId);
}
public void setGain(float gain) {
alSourcef(sourceId, AL_GAIN, gain);
}
public void setPosition(Vector3f position) {
alSource3f(sourceId, AL_POSITION, position.x, position.y, position.z);
}
public void stop() {
alSourceStop(sourceId);
}
}
```
The sound source class provides some methods to set up its position and gain, and control methods for playing, stopping and pausing it. Keep in mind that sound control actions are performed on a source (not on the buffer); remember that several sources can share the same buffer. As in the `SoundBuffer` class, a `SoundSource` is identified by an identifier, which is used in each operation. This class also provides a `cleanup` method to free the reserved resources. But let’s examine the constructor. The first thing that we do is create the source with the `alGenSources` call. Then, we set up some interesting properties using the constructor parameters.
The first parameter, `loop`, indicates if the sound to be played should be in loop mode or not. By default, when a play action is invoked over a source the playing stops when the audio data is consumed. This is fine for some sounds, but some others, like background music, need to be played over and over again. Instead of manually controlling when the audio has stopped and re-launch the play process, we just simply set the looping property to true: “`alSourcei(sourceId, AL_LOOPING, AL_TRUE);`”.
The other parameter, `relative`, controls whether the position of the source is relative to the listener or not. In this case, when we set the position for a source, we are basically defining the offset (with a vector) to the listener, not the position in the OpenAL 3D scene and not the world position. This is activated by the “`alSourcei(sourceId, AL_SOURCE_RELATIVE, AL_TRUE);`” call. But what can we use this for? This property is interesting, for instance, for background sounds that shouldn't be affected (attenuated) by the distance to the listener. Think, for instance, of background music or sound effects related to player controls. If we set these sources as relative, and set their position to $$(0, 0, 0)$$, they will not be attenuated.
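To make the difference concrete, the sketch below (file paths are placeholders) sets up a relative, looping source at the origin for background music, plus two positional sources that share a single effect buffer, each attenuated according to its own position:
```java
// Background music: looping and relative, placed at the origin so it is never attenuated
SoundBuffer musicBuffer = new SoundBuffer("resources/sounds/music.ogg"); // placeholder path
SoundSource musicSource = new SoundSource(true, true);
musicSource.setPosition(new Vector3f(0, 0, 0));
musicSource.setBuffer(musicBuffer.getBufferId());
musicSource.play();

// Two positional sources sharing the same buffer; each one is attenuated independently
SoundBuffer effectBuffer = new SoundBuffer("resources/sounds/effect.ogg"); // placeholder path
SoundSource doorA = new SoundSource(false, false);
doorA.setPosition(new Vector3f(10, 0, 5));
doorA.setBuffer(effectBuffer.getBufferId());
SoundSource doorB = new SoundSource(false, false);
doorB.setPosition(new Vector3f(-3, 0, 8));
doorB.setBuffer(effectBuffer.getBufferId());
```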
Now it’s the turn of the listener which, surprise, is modelled by a class named `SoundListener`. Here’s the definition for that class.
```java
package org.lwjglb.engine.sound;
import org.joml.Vector3f;
import static org.lwjgl.openal.AL10.*;
public class SoundListener {
public SoundListener(Vector3f position) {
alListener3f(AL_POSITION, position.x, position.y, position.z);
alListener3f(AL_VELOCITY, 0, 0, 0);
}
public void setOrientation(Vector3f at, Vector3f up) {
float[] data = new float[6];
data[0] = at.x;
data[1] = at.y;
data[2] = at.z;
data[3] = up.x;
data[4] = up.y;
data[5] = up.z;
alListenerfv(AL_ORIENTATION, data);
}
public void setPosition(Vector3f position) {
alListener3f(AL_POSITION, position.x, position.y, position.z);
}
public void setSpeed(Vector3f speed) {
alListener3f(AL_VELOCITY, speed.x, speed.y, speed.z);
}
}
```
A difference you will notice from the previous classes is that there’s no need to create a listener. There will always be exactly one listener, so there is no need to create one; it’s already there for us. Thus, in the constructor we just set its initial position. For the same reason there’s no need for a `cleanup` method. The class also has methods for setting the listener position and velocity, as in the `SoundSource` class, but we have an extra method for changing the listener orientation. Let’s review what orientation is all about. Listener orientation is defined by two vectors, the “at” vector and the “up” one, which are shown in the next figure.
The “at” vector basically points where the listener is facing, and by default its coordinates are $$(0, 0, -1)$$. The “up” vector determines which direction is up for the listener, and by default it points to $$(0, 1, 0)$$. So the three components of each of those two vectors are what are set in the `alListenerfv` method call. This method is used to transfer a set of floats (a variable number of floats) to a property, in this case, the orientation.
Before continuing, it's necessary to stress some concepts in relation to source and listener speeds. The relative speed between sources and the listener will cause OpenAL to simulate the Doppler effect. In case you don’t know, the Doppler effect is what makes a moving object that is getting closer to you seem to emit at a higher frequency than when it is moving away. The thing is, simply by setting a source or listener velocity, OpenAL will not update their positions for you. It will use the relative velocity to calculate the Doppler effect, but the positions won’t be modified. So, if you want to simulate a moving source or listener, you must take care of updating their positions in the game loop.
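As a sketch of what that means in practice (this is not the chapter's code), a moving source needs its velocity, used only for the Doppler calculation, and its position updated every frame. The `SoundSource` class above does not expose a velocity setter, so this hypothetical helper calls OpenAL directly with the source identifier:
```java
// Hypothetical per-frame update for a moving sound source.
// sourceId is the OpenAL source identifier; position and velocity come from the game logic.
void updateMovingSource(int sourceId, Vector3f position, Vector3f velocity) {
    // The velocity is only used by OpenAL to compute the Doppler shift...
    alSource3f(sourceId, AL_VELOCITY, velocity.x, velocity.y, velocity.z);
    // ...so the position still has to be updated manually each frame
    alSource3f(sourceId, AL_POSITION, position.x, position.y, position.z);
}
```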
Now that we have modelled the key elements, we can set them up to work; we need to initialize the OpenAL library, so we will create a new class named `SoundManager` that will handle this and starts like this:
```java
package org.lwjglb.engine.sound;
import org.joml.*;
import org.lwjgl.openal.*;
import org.lwjglb.engine.scene.Camera;
import java.nio.*;
import java.util.*;
import static org.lwjgl.openal.AL10.alDistanceModel;
import static org.lwjgl.openal.ALC10.*;
import static org.lwjgl.system.MemoryUtil.NULL;
public class SoundManager {
private final List<SoundBuffer> soundBufferList;
private final Map<String, SoundSource> soundSourceMap;
private long context;
private long device;
private SoundListener listener;
public SoundManager() {
soundBufferList = new ArrayList<>();
soundSourceMap = new HashMap<>();
device = alcOpenDevice((ByteBuffer) null);
if (device == NULL) {
throw new IllegalStateException("Failed to open the default OpenAL device.");
}
ALCCapabilities deviceCaps = ALC.createCapabilities(device);
this.context = alcCreateContext(device, (IntBuffer) null);
if (context == NULL) {
throw new IllegalStateException("Failed to create OpenAL context.");
}
alcMakeContextCurrent(context);
AL.createCapabilities(deviceCaps);
}
...
}
```
This class holds references to the `SoundBuffer` and `SoundSource` instances in order to track them and later clean them up properly. Sound buffers are stored in a `List`, but sound sources are stored in a `Map` so they can be retrieved by name. The constructor initializes the OpenAL subsystem:
* Opens the default device.
* Creates the capabilities for that device.
* Creates a sound context, like the OpenGL one, and sets it as the current one.
The `SoundManager` class defines methods to add sound sources and buffers and a `cleanup` method to free all the resources:
```java
public class SoundManager {
...
public void addSoundBuffer(SoundBuffer soundBuffer) {
this.soundBufferList.add(soundBuffer);
}
public void addSoundSource(String name, SoundSource soundSource) {
this.soundSourceMap.put(name, soundSource);
}
public void cleanup() {
soundSourceMap.values().forEach(SoundSource::cleanup);
soundSourceMap.clear();
soundBufferList.forEach(SoundBuffer::cleanup);
soundBufferList.clear();
if (context != NULL) {
alcDestroyContext(context);
}
if (device != NULL) {
alcCloseDevice(device);
}
}
...
}
```
It also provides methods to manage the listener and the sources, and the `playSoundSource` method to play a sound by its name:
```java
public class SoundManager {
...
public SoundListener getListener() {
return this.listener;
}
public SoundSource getSoundSource(String name) {
return this.soundSourceMap.get(name);
}
public void playSoundSource(String name) {
SoundSource soundSource = this.soundSourceMap.get(name);
if (soundSource != null && !soundSource.isPlaying()) {
soundSource.play();
}
}
public void removeSoundSource(String name) {
this.soundSourceMap.remove(name);
}
public void setAttenuationModel(int model) {
alDistanceModel(model);
}
public void setListener(SoundListener listener) {
this.listener = listener;
}
...
}
```
The `SoundManager` class also has a method to update the listener orientation given a camera position. In our case, the listener will be placed wherever the camera is. So, given the camera position and rotation information, how do we calculate the “at” and “up” vectors? The answer is by using the view matrix associated with the camera. We need to transform the “at” $$(0, 0, -1)$$ and “up” $$(0, 1, 0)$$ vectors taking into consideration the camera rotation. Let `cameraMatrix` be the view matrix associated with the camera. The code to accomplish that is:
```java
public class SoundManager {
...
public void updateListenerPosition(Camera camera) {
Matrix4f viewMatrix = camera.getViewMatrix();
listener.setPosition(camera.getPosition());
Vector3f at = new Vector3f();
viewMatrix.positiveZ(at).negate();
Vector3f up = new Vector3f();
viewMatrix.positiveY(up);
listener.setOrientation(at, up);
}
...
}
```
The code above is equivalent to the explanation described previously, it is just a more efficient approach. It uses a faster method, available in the [JOML](https://github.com/JOML-CI/JOML) library, that does not need to calculate the full inverse matrix but achieves the same results. This method was provided by the [JOML author](https://github.com/httpdigest) in a LWJGL forum, so you can check more details [there](http://forum.lwjgl.org/index.php?topic=6080.0). If you check the source code you will see that the `SoundManager` class calculates its own copy of the view matrix.
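For reference, the more literal (and slower) version of that calculation, matching the explanation above, would transform the default “at” and “up” directions by the inverse of the view matrix. A rough sketch of that equivalent approach (not the code used here):
```java
// Equivalent but less efficient: transform the default directions by the inverse view matrix
// (for directions only the rotation part of the matrix matters).
Matrix4f invViewMatrix = new Matrix4f(camera.getViewMatrix()).invert();
Vector3f at = invViewMatrix.transformDirection(new Vector3f(0, 0, -1)).normalize();
Vector3f up = invViewMatrix.transformDirection(new Vector3f(0, 1, 0)).normalize();
listener.setOrientation(at, up);
```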
And that’s all. We have all the infrastructure we need in order to play sounds. We just need to use it in the `Main` class, where we set up a background sound and a specific sound which is activated at a specific animation frame, with its intensity relative to the listener position:
```java
public class Main implements IAppLogic {
...
private SoundSource playerSoundSource;
private SoundManager soundMgr;
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-16", opts, main);
...
}
@Override
public void cleanup() {
soundMgr.cleanup();
}
@Override
public void init(Window window, Scene scene, Render render) {
...
lightAngle = 45;
initSounds(bobEntity.getPosition(), camera);
}
private void initSounds(Vector3f position, Camera camera) {
soundMgr = new SoundManager();
soundMgr.setAttenuationModel(AL11.AL_EXPONENT_DISTANCE);
soundMgr.setListener(new SoundListener(camera.getPosition()));
SoundBuffer buffer = new SoundBuffer("resources/sounds/creak1.ogg");
soundMgr.addSoundBuffer(buffer);
playerSoundSource = new SoundSource(false, false);
playerSoundSource.setPosition(position);
playerSoundSource.setBuffer(buffer.getBufferId());
soundMgr.addSoundSource("CREAK", playerSoundSource);
buffer = new SoundBuffer("resources/sounds/woo_scary.ogg");
soundMgr.addSoundBuffer(buffer);
SoundSource source = new SoundSource(true, true);
source.setBuffer(buffer.getBufferId());
soundMgr.addSoundSource("MUSIC", source);
source.play();
}
@Override
public void input(Window window, Scene scene, long diffTimeMillis, boolean inputConsumed) {
...
soundMgr.updateListenerPosition(camera);
}
@Override
public void update(Window window, Scene scene, long diffTimeMillis) {
animationData.nextFrame();
if (animationData.getCurrentFrameIdx() == 45) {
playerSoundSource.play();
}
}
}
```
A final note. OpenAL also allows you to change the attenuation model by using the `alDistanceModel` call and passing the model you want (`AL11.AL_EXPONENT_DISTANCE`, `AL11.AL_EXPONENT_DISTANCE_CLAMPED`, etc.). You can play with them and check the results.
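As an illustration (the parameter values are arbitrary and the `getSourceId` accessor is hypothetical, since the chapter's `SoundSource` class does not expose one), the per-source parameters used by those distance models can be tuned like this:
```java
// Select the global attenuation model
soundMgr.setAttenuationModel(AL11.AL_EXPONENT_DISTANCE_CLAMPED);

// Hypothetical per-source tuning of the attenuation parameters (example values)
int sourceId = playerSoundSource.getSourceId(); // accessor not defined in the chapter's code
alSourcef(sourceId, AL_REFERENCE_DISTANCE, 2.0f); // distance at which the gain is 1.0
alSourcef(sourceId, AL_ROLLOFF_FACTOR, 1.5f);     // how quickly the sound attenuates
alSourcef(sourceId, AL_MAX_DISTANCE, 50.0f);      // used by the *_CLAMPED models
```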
[Next chapter](../chapter-17/chapter-17.md)
================================================
FILE: chapter-17/chapter-17.md
================================================
# Chapter 17 - Cascade shadow maps
Currently we are able to represent how light affects the objects in a 3D scene. Objects that get more light are shown brighter than objects that do not receive light. However we are still not able to cast shadows. Shadows will increase the degree of realism of a 3D scene. This is what we will do in this chapter.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-17).
## Shadow Mapping
We will use a technique named shadow mapping which is widely used in games and does not severely affect the engine performance. Shadow mapping may seem simple to understand, but it’s somewhat difficult to implement correctly. Or, to be more precise, it’s very difficult to implement it in a general way that covers all the potential cases and produces consistent results.
So let’s start by thinking about how we could check whether a specific area (indeed a fragment) is in shadow or not. While drawing that area, if we can cast a ray from it to the light source without any collision, then that fragment is in light. If not, the fragment is in shadow.
The following picture shows the case for a point light: point PA can reach the light source, but points PB and PC can’t, so they are in shadow.
How can we check, in an efficient manner, whether that ray reaches the light source without collisions? A light source can theoretically cast infinitely many light rays, so how do we check if a given ray is blocked or not? What we can do, instead of casting light rays, is to look at the 3D scene from the light’s perspective and render the scene from that location. We can set the camera at the light position and render the scene so we can store the depth of each fragment. This is equivalent to calculating the distance of each fragment to the light source. At the end, what we are doing is storing the minimum distance, as seen from the light source, as a shadow map.
The following picture shows a cube floating over a plane and a perpendicular light.
The scene as seen from the light perspective would be something like this (the darker the color, the closer to the light source).
With that information we can render the 3D scene as usual and compare the distance of each fragment to the light source with the minimum distance stored in the shadow map. If the distance is less than the value stored in the shadow map, then the object is in light; otherwise it's in shadow. Several objects could be hit by the same light ray, but we store the minimum distance.
Thus, shadow mapping is a two step process:
* First we render the scene from the light space into a shadow map to get the minimum distances.
* Second we render the scene from the camera point of view and use that depth map to calculate if objects are in shadow or not.
In order to render the depth map we need to talk about the depth buffer. When we render a scene, all the depth information is stored in a buffer named, obviously, the depth-buffer (or z-buffer). That depth information is the $$z$$ value of each fragment that is rendered. If you recall from the first chapters, while rendering a scene we transform from world coordinates to screen coordinates, drawing to a coordinate space which ranges from $$0$$ to $$1$$ for the $$x$$ and $$y$$ axes. If an object is more distant than another, we must calculate how this affects its $$x$$ and $$y$$ coordinates through the perspective projection matrix. This is not calculated automatically depending on the $$z$$ value, but must be done by us. What is actually stored in the $$z$$ coordinate is the depth of that fragment, nothing less and nothing more.
## Cascaded Shadow Maps
The solution presented above, as it is, does not produce quality results for open spaces. The reason is that shadow resolution is limited by the texture size. We are now covering a potentially huge area, and the textures we use to store depth information do not have enough resolution to get good results. You may think that the solution is just to increase the texture resolution, but this is not sufficient to completely fix the problem; you would need huge textures for that. Therefore, having explained the basics, we will present a technique called Cascaded Shadow Maps (CSM), which is an improvement over plain shadow maps.
The key concept is that shadows of objects that are closer to the camera need to have a higher quality than shadows of distant objects. One approach could be to just render shadows for objects close to the camera, but this would cause shadows to appear and disappear as we move through the scene.
The approach that Cascaded Shadow Maps (CSMs) use is to divide the view frustum into several splits. Splits closer to the camera cover a smaller amount of space, whilst distant splits cover a much wider region of space. The next figure shows a view frustum divided into three splits.
For each of these splits, the depth map is rendered, adjusting the light view and projection matrices to fit each split. Thus, the texture that stores the depth map covers a reduced area of the view frustum. And, since the split closest to the camera covers less space, the depth resolution is increased.
As can be deduced from the explanation above, we will need as many depth textures as splits, and we will also change the light view and projection matrices for each of them. Hence, the steps to be done in order to apply CSMs are:
* Divide the view frustum into n splits.
* While rendering the depth map, for each split:
* Calculate light view and projection matrices.
* Render the scene from light’s perspective into a separate depth map
* While rendering the scene:
* Use the depths maps calculated above.
* Determine the split that the fragment to be drawn belongs to.
* Calculate shadow factor as in shadow maps.
As you can see, the main drawback of CSMs is that we need to render the scene, from the light’s perspective, for each split. This is why it is often only used for open spaces (of course, you can apply caching to shadow calculations to reduce the overhead).
## Implementation
The first class that we will create will be responsible for calculating the matrices required to render the shadow maps from the light's perspective. The class is named `CascadeShadow` and will store the projection view matrix (from the light's perspective) for a specific cascade shadow split (`projViewMatrix` attribute) and the far plane distance of its ortho-projection matrix (`splitDistance` attribute):
```java
public class CascadeShadow {
public static final int SHADOW_MAP_CASCADE_COUNT = 3;
private Matrix4f projViewMatrix;
private float splitDistance;
public CascadeShadow() {
projViewMatrix = new Matrix4f();
}
...
public Matrix4f getProjViewMatrix() {
return projViewMatrix;
}
public float getSplitDistance() {
return splitDistance;
}
...
}
```
The `CascadeShadow` class defines a static method, named `updateCascadeShadows`, to initialize a list of cascade shadow instances with the proper values. This method starts like this:
```java
public class CascadeShadow {
...
public static void updateCascadeShadows(List<CascadeShadow> cascadeShadows, Scene scene) {
Matrix4f viewMatrix = scene.getCamera().getViewMatrix();
Matrix4f projMatrix = scene.getProjection().getProjMatrix();
Vector4f lightPos = new Vector4f(scene.getSceneLights().getDirLight().getDirection(), 0);
float cascadeSplitLambda = 0.95f;
float[] cascadeSplits = new float[SHADOW_MAP_CASCADE_COUNT];
float nearClip = projMatrix.perspectiveNear();
float farClip = projMatrix.perspectiveFar();
float clipRange = farClip - nearClip;
float minZ = nearClip;
float maxZ = nearClip + clipRange;
float range = maxZ - minZ;
float ratio = maxZ / minZ;
...
}
...
}
```
We start by retrieving the data that we will need to calculate the split information: the view and projection matrices, the light position, and the near and far clip distances of the perspective projection we are using to render the scene. With that information we can calculate the split distances for each of the shadow cascades:
```java
public class CascadeShadow {
...
public static void updateCascadeShadows(List<CascadeShadow> cascadeShadows, Scene scene) {
...
// Calculate split depths based on view camera frustum
// Based on method presented in https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch10.html
for (int i = 0; i < SHADOW_MAP_CASCADE_COUNT; i++) {
float p = (i + 1) / (float) (SHADOW_MAP_CASCADE_COUNT);
float log = (float) (minZ * java.lang.Math.pow(ratio, p));
float uniform = minZ + range * p;
float d = cascadeSplitLambda * (log - uniform) + uniform;
cascadeSplits[i] = (d - nearClip) / clipRange;
}
...
}
...
}
```
The algorithm used to calculate the split positions uses a logarithmic scheme to better distribute the distances. We could use other approaches, such as splitting the cascades evenly, or according to a pre-set proportion. The advantage of the logarithmic scheme is that it uses less space for the near view splits, achieving a higher resolution for the elements closer to the camera. You can check the [NVIDIA article](https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch10.html) for the math details. The `cascadeSplits` array will have a set of values in the range \[0, 1] which we will use later on to perform the required calculations to get the split distances and the projection matrices for each cascade.
Now we define a loop to calculate all the data for the cascade splits. In that loop, we first create the frustum corners in NDC (Normalized Device Coordinates) space. After that, we project those coordinates into world space by using the inverse of the view and perspective matrices. Since we are using directional lights, we will use orthographic projection matrices for rendering the shadow maps; this is the reason why we set, as the NDC coordinates, just the limits of the cube that contains the visible volume (distant objects will not be rendered smaller, as in the perspective projection).
```java
public class CascadeShadow {
...
public static void updateCascadeShadows(List<CascadeShadow> cascadeShadows, Scene scene) {
...
// Calculate orthographic projection matrix for each cascade
float lastSplitDist = 0.0f;
for (int i = 0; i < SHADOW_MAP_CASCADE_COUNT; i++) {
float splitDist = cascadeSplits[i];
Vector3f[] frustumCorners = new Vector3f[]{
new Vector3f(-1.0f, 1.0f, -1.0f),
new Vector3f(1.0f, 1.0f, -1.0f),
new Vector3f(1.0f, -1.0f, -1.0f),
new Vector3f(-1.0f, -1.0f, -1.0f),
new Vector3f(-1.0f, 1.0f, 1.0f),
new Vector3f(1.0f, 1.0f, 1.0f),
new Vector3f(1.0f, -1.0f, 1.0f),
new Vector3f(-1.0f, -1.0f, 1.0f),
};
// Project frustum corners into world space
Matrix4f invCam = (new Matrix4f(projMatrix).mul(viewMatrix)).invert();
for (int j = 0; j < 8; j++) {
Vector4f invCorner = new Vector4f(frustumCorners[j], 1.0f).mul(invCam);
frustumCorners[j] = new Vector3f(invCorner.x / invCorner.w, invCorner.y / invCorner.w, invCorner.z / invCorner.w);
}
...
}
...
}
...
}
```
At this point, the `frustumCorners` variable has the coordinates of a cube which contains the visible space, but we need the world coordinates for this specific cascade split. Therefore, the next step is to put the cascade distances calculated at the beginning of the method to work. We adjust the coordinates of the near and far planes for this specific split according to the pre-calculated distances:
```java
public class CascadeShadow {
...
public static void updateCascadeShadows(List<CascadeShadow> cascadeShadows, Scene scene) {
...
for (int i = 0; i < SHADOW_MAP_CASCADE_COUNT; i++) {
...
for (int j = 0; j < 4; j++) {
Vector3f dist = new Vector3f(frustumCorners[j + 4]).sub(frustumCorners[j]);
frustumCorners[j + 4] = new Vector3f(frustumCorners[j]).add(new Vector3f(dist).mul(splitDist));
frustumCorners[j] = new Vector3f(frustumCorners[j]).add(new Vector3f(dist).mul(lastSplitDist));
}
...
}
...
}
...
}
```
After that, we calculate the coordinates of the center of that split (still working in world coordinates), and the radius of that split:
```java
public class CascadeShadow {
...
public static void updateCascadeShadows(List<CascadeShadow> cascadeShadows, Scene scene) {
...
for (int i = 0; i < SHADOW_MAP_CASCADE_COUNT; i++) {
...
// Get frustum center
Vector3f frustumCenter = new Vector3f(0.0f);
for (int j = 0; j < 8; j++) {
frustumCenter.add(frustumCorners[j]);
}
frustumCenter.div(8.0f);
float radius = 0.0f;
for (int j = 0; j < 8; j++) {
float distance = (new Vector3f(frustumCorners[j]).sub(frustumCenter)).length();
radius = java.lang.Math.max(radius, distance);
}
radius = (float) java.lang.Math.ceil(radius * 16.0f) / 16.0f;
...
}
...
}
...
}
```
With that information, we can now calculate the view matrix from the light's point of view, the orthographic projection matrix, and the split distance (in camera view coordinates):
```java
public class CascadeShadow {
...
public static void updateCascadeShadows(List<CascadeShadow> cascadeShadows, Scene scene) {
...
for (int i = 0; i < SHADOW_MAP_CASCADE_COUNT; i++) {
...
Vector3f maxExtents = new Vector3f(radius);
Vector3f minExtents = new Vector3f(maxExtents).mul(-1);
Vector3f lightDir = (new Vector3f(lightPos.x, lightPos.y, lightPos.z).mul(-1)).normalize();
Vector3f eye = new Vector3f(frustumCenter).sub(new Vector3f(lightDir).mul(-minExtents.z));
Vector3f up = new Vector3f(0.0f, 1.0f, 0.0f);
Matrix4f lightViewMatrix = new Matrix4f().lookAt(eye, frustumCenter, up);
Matrix4f lightOrthoMatrix = new Matrix4f().ortho
(minExtents.x, maxExtents.x, minExtents.y, maxExtents.y, 0.0f, maxExtents.z - minExtents.z, true);
// Store split distance and matrix in cascade
CascadeShadow cascadeShadow = cascadeShadows.get(i);
cascadeShadow.splitDistance = (nearClip + splitDist * clipRange) * -1.0f;
cascadeShadow.projViewMatrix = lightOrthoMatrix.mul(lightViewMatrix);
lastSplitDist = cascadeSplits[i];
}
...
}
...
}
```
We have now completed the code that calculates the matrices required to render the shadow maps. Therefore, we can start coding the classes required to perform that rendering. In this case, we will be rendering to a different image (a depth image). We will need one texture per cascade map split. In order to manage that, we will create a new class named `ArrTexture` that will create a set of textures and it is defined like this:
```java
package org.lwjglb.engine.graph;
import java.nio.ByteBuffer;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL12.GL_CLAMP_TO_EDGE;
import static org.lwjgl.opengl.GL14.GL_TEXTURE_COMPARE_MODE;
public class ArrTexture {
private final int[] ids;
public ArrTexture(int numTextures, int width, int height, int pixelFormat) {
ids = new int[numTextures];
glGenTextures(ids);
for (int i = 0; i < numTextures; i++) {
glBindTexture(GL_TEXTURE_2D, ids[i]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, pixelFormat, GL_FLOAT, (ByteBuffer) null);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
}
public void cleanup() {
for (int id : ids) {
glDeleteTextures(id);
}
}
public int[] getIds() {
return ids;
}
}
```
We set the texture wrapping mode to `GL_CLAMP_TO_EDGE` since we do not want the texture to repeat in case we exceed the $$[0, 1]$$ range.
So now that we are able to create empty textures, we need to be able to render a scene into them. In order to do that we need to use Frame Buffer Objects (or FBOs). A frame buffer is a collection of buffers that can be used as a destination for rendering. When we have been rendering to the screen we have been using OpenGL’s default buffers. OpenGL allows us to render to user defined buffers by using FBOs. We will isolate the rest of the code from the process of creating FBOs for shadow mapping by creating a new class named `ShadowBuffer`. This is the definition of that class.
```java
package org.lwjglb.engine.graph;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL13.glActiveTexture;
import static org.lwjgl.opengl.GL30.*;
public class ShadowBuffer {
public static final int SHADOW_MAP_WIDTH = 4096;
public static final int SHADOW_MAP_HEIGHT = SHADOW_MAP_WIDTH;
private final ArrTexture depthMap;
private final int depthMapFBO;
public ShadowBuffer() {
// Create a FBO to render the depth map
depthMapFBO = glGenFramebuffers();
// Create the depth map textures
depthMap = new ArrTexture(CascadeShadow.SHADOW_MAP_CASCADE_COUNT, SHADOW_MAP_WIDTH, SHADOW_MAP_HEIGHT, GL_DEPTH_COMPONENT);
// Attach the the depth map texture to the FBO
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthMap.getIds()[0], 0);
// Set only depth
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
throw new RuntimeException("Could not create FrameBuffer");
}
// Unbind
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
public void bindTextures(int start) {
for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
glActiveTexture(start + i);
glBindTexture(GL_TEXTURE_2D, depthMap.getIds()[i]);
}
}
public void cleanup() {
glDeleteFramebuffers(depthMapFBO);
depthMap.cleanup();
}
public int getDepthMapFBO() {
return depthMapFBO;
}
public ArrTexture getDepthMapTexture() {
return depthMap;
}
}
```
The `ShadowBuffer` class defines two constants that determine the size of the texture that will hold the depth map. It also defines two attributes, one for the FBO and one for the texture. In the constructor, we create a new FBO and an array of textures. Each element of that array will be used to render a shadow map for one cascade shadow split. For the FBO we will use the constant `GL_DEPTH_COMPONENT` as the pixel format, since we are only interested in storing depth values. Then we attach the depth texture to the FBO.
The following lines explicitly set the FBO not to render any color. An FBO normally needs a color buffer, but we are not going to need it. This is why we set the color buffers to be used as `GL_NONE`.
Now we can put all the previous classes to work in order to render the shadow maps. We will be doing this in a new class named `ShadowRender` which starts like this:
```java
package org.lwjglb.engine.graph;
import org.lwjglb.engine.scene.*;
import java.util.*;
import static org.lwjgl.opengl.GL30.*;
public class ShadowRender {
private ArrayList<CascadeShadow> cascadeShadows;
private ShaderProgram shaderProgram;
private ShadowBuffer shadowBuffer;
private UniformsMap uniformsMap;
public ShadowRender() {
List<ShaderProgram.ShaderModuleData> shaderModuleDataList = new ArrayList<>();
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/shadow.vert", GL_VERTEX_SHADER));
shaderProgram = new ShaderProgram(shaderModuleDataList);
shadowBuffer = new ShadowBuffer();
cascadeShadows = new ArrayList<>();
for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
CascadeShadow cascadeShadow = new CascadeShadow();
cascadeShadows.add(cascadeShadow);
}
createUniforms();
}
public void cleanup() {
shaderProgram.cleanup();
shadowBuffer.cleanup();
}
private void createUniforms() {
uniformsMap = new UniformsMap(shaderProgram.getProgramId());
uniformsMap.createUniform("modelMatrix");
uniformsMap.createUniform("projViewMatrix");
uniformsMap.createUniform("bonesMatrices");
}
public List<CascadeShadow> getCascadeShadows() {
return cascadeShadows;
}
public ShadowBuffer getShadowBuffer() {
return shadowBuffer;
}
...
}
```
As you can see, it is quite similar to the other render classes: we create the shader program and the required uniforms, and provide a `cleanup` method. The only exceptions are:
* We are only interested in depth values, so we do not need a fragment shader at all; we just output the vertex position, including its depth, from the vertex shader.
* We create the cascade shadow splits (modelled by instances of the `CascadeShadow` class). In addition to that, we provide some getters to get the cascade shadow maps and the buffer where we render the shadow maps. These getters will be used in the `SceneRender` class to access shadow map data.
The `render` method in the `ShadowRender` class is defined like this:
```java
public class ShadowRender {
...
public void render(Scene scene) {
CascadeShadow.updateCascadeShadows(cascadeShadows, scene);
glBindFramebuffer(GL_FRAMEBUFFER, shadowBuffer.getDepthMapFBO());
glViewport(0, 0, ShadowBuffer.SHADOW_MAP_WIDTH, ShadowBuffer.SHADOW_MAP_HEIGHT);
shaderProgram.bind();
Collection<Model> models = scene.getModelMap().values();
for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowBuffer.getDepthMapTexture().getIds()[i], 0);
glClear(GL_DEPTH_BUFFER_BIT);
CascadeShadow shadowCascade = cascadeShadows.get(i);
uniformsMap.setUniform("projViewMatrix", shadowCascade.getProjViewMatrix());
for (Model model : models) {
List<Entity> entities = model.getEntitiesList();
for (Material material : model.getMaterialList()) {
for (Mesh mesh : material.getMeshList()) {
glBindVertexArray(mesh.getVaoId());
for (Entity entity : entities) {
uniformsMap.setUniform("modelMatrix", entity.getModelMatrix());
AnimationData animationData = entity.getAnimationData();
if (animationData == null) {
uniformsMap.setUniform("bonesMatrices", AnimationData.DEFAULT_BONES_MATRICES);
} else {
uniformsMap.setUniform("bonesMatrices", animationData.getCurrentFrame().boneMatrices());
}
glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
}
}
}
}
}
shaderProgram.unbind();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
}
```
The first thing we do is to update the cascade maps, that is, the projection matrices of each cascade split, so we can render the shadow maps (the scene may have changed: the camera may have moved, or the entities and animations may have been updated). This is something you may want to cache and only recalculate when the scene has changed. To simplify, we do it each frame. After that, we bind the frame buffer where we will render the shadow maps by calling the `glBindFramebuffer` function. We clear it and iterate over the different cascade shadow splits.
For each split we perform the following actions:
* Bind the texture associated to a cascade shadow split by calling the `glFramebufferTexture2D` and clear it.
* Update the projection matrix according to the current cascade shadow split.
* Render each entity as we used to do in the `SceneRender` class.
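Going back to the caching remark above, here is a minimal sketch (not part of the chapter's code) of how the shadow pass could be skipped when nothing relevant has changed, using a simple dirty flag:
```java
// Hypothetical caching in ShadowRender: only re-render the shadow maps when the scene changed.
private boolean shadowsDirty = true;

public void markDirty() {
    // Call this when the camera, the lights or any entity changes
    shadowsDirty = true;
}

public void render(Scene scene) {
    if (!shadowsDirty) {
        return; // reuse the shadow maps rendered in a previous frame
    }
    shadowsDirty = false;
    CascadeShadow.updateCascadeShadows(cascadeShadows, scene);
    // ... rest of the render method as shown above ...
}
```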
We need a new vertex shader (`shadow.vert`) which is defined like this:
```glsl
#version 330
const int MAX_WEIGHTS = 4;
const int MAX_BONES = 150;
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
layout (location=2) in vec3 tangent;
layout (location=3) in vec3 bitangent;
layout (location=4) in vec2 texCoord;
layout (location=5) in vec4 boneWeights;
layout (location=6) in ivec4 boneIndices;
uniform mat4 modelMatrix;
uniform mat4 projViewMatrix;
uniform mat4 bonesMatrices[MAX_BONES];
void main()
{
vec4 initPos = vec4(0, 0, 0, 0);
int count = 0;
for (int i = 0; i < MAX_WEIGHTS; i++) {
float weight = boneWeights[i];
if (weight > 0) {
count++;
int boneIndex = boneIndices[i];
vec4 tmpPos = bonesMatrices[boneIndex] * vec4(position, 1.0);
initPos += weight * tmpPos;
}
}
if (count == 0) {
initPos = vec4(position, 1.0);
}
gl_Position = projViewMatrix * modelMatrix * initPos;
}
```
As you can see, we receive the same set of input attributes as in the scene vertex shader; we just project the position, transforming the input position according to the model matrix and the animation data.
Now we need to update the `SceneRender` class to use the cascade shadow maps when rendering, in order to properly display shadows. First, we will access the shadow maps as textures in the fragment shader, so we need to create uniforms for them. We also need to pass the projection view matrices and the split distances of the cascade splits, so we can select which split should be used for each fragment.
```java
public class SceneRender {
...
private void createUniforms() {
...
for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
uniformsMap.createUniform("shadowMap[" + i + "]");
uniformsMap.createUniform("cascadeshadows[" + i + "]" + ".projViewMatrix");
uniformsMap.createUniform("cascadeshadows[" + i + "]" + ".splitDistance");
}
}
...
}
```
In the `render` method of the `SceneRender` class we just need to populate those uniforms prior to rendering the models:
```java
public class SceneRender {
...
public void render(Scene scene, ShadowRender shadowRender) {
...
uniformsMap.setUniform("txtSampler", 0);
uniformsMap.setUniform("normalSampler", 1);
int start = 2;
List<CascadeShadow> cascadeShadows = shadowRender.getCascadeShadows();
for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
uniformsMap.setUniform("shadowMap[" + i + "]", start + i);
CascadeShadow cascadeShadow = cascadeShadows.get(i);
uniformsMap.setUniform("cascadeshadows[" + i + "]" + ".projViewMatrix", cascadeShadow.getProjViewMatrix());
uniformsMap.setUniform("cascadeshadows[" + i + "]" + ".splitDistance", cascadeShadow.getSplitDistance());
}
shadowRender.getShadowBuffer().bindTextures(GL_TEXTURE2);
...
}
...
}
```
Now let's see the changes to the scene shaders. In the vertex shader (`scene.vert`), we just need to also pass to the fragment shader the vertex position in world coordinates (not affected by the view matrix):
```glsl
#version 330
...
out vec3 outNormal;
out vec3 outTangent;
out vec3 outBitangent;
out vec2 outTextCoord;
out vec3 outViewPosition;
out vec4 outWorldPosition;
...
void main()
{
...
outViewPosition = mvPosition.xyz;
outWorldPosition = modelMatrix * initPos;
...
}
```
Most of the changes will be in the fragment shader (`scene.frag`):
```glsl
#version 330
...
const int DEBUG_SHADOWS = 0;
...
const float BIAS = 0.0005;
const float SHADOW_FACTOR = 0.25;
...
in vec3 outViewPosition;
in vec4 outWorldPosition;
```
We first define a set of constants:
* `DEBUG_SHADOWS`: This controls whether we apply a color to the fragments to identify the cascade split to which they will be assigned (it needs to have the value `1` to activate this).
* `SHADOW_FACTOR`: The darkening factor that will be applied to a fragment when it is in shadow.
* `BIAS`: The depth bias to apply when estimating whether a fragment is affected by a shadow or not. This is used to reduce shadow artifacts, such as shadow acne. Shadow acne is produced by the limited resolution of the texture that stores the depth map, which produces strange artifacts. We will solve this problem by setting a threshold that reduces those precision problems.
After that, we define the new uniforms which store the cascade split data and the textures of the shadow maps. We also need to pass the inverse view matrix to the shader. In the previous chapters, we used the inverse of the projection matrix to get the fragment position in view coordinates. In this case, we need to go a step beyond and also get the fragment position in world coordinates: if we multiply the inverse view matrix by the fragment position in view coordinates we get the world coordinates. In addition to that, we need the projection view matrices of the cascade splits as well as their split distances. So we need an array of uniforms with the cascade information and an array of samplers to access, as an array of textures, the results of the shadow render process. Instead of an array of samplers you could use a `sampler2DArray` (with an array of samplers, such as the one used here, each shadow map cascade texture could have a different size, so it offers a little more flexibility, although we are not exploiting that here).
```glsl
...
struct CascadeShadow {
mat4 projViewMatrix;
float splitDistance;
};
...
uniform CascadeShadow cascadeshadows[NUM_CASCADES];
uniform sampler2D shadowMap[NUM_CASCADES];
...
```
We will create a new function, named `calcShadow`, which, given a world position and a cascade split index, will return a shadow factor that will be applied to the final fragment color. If the fragment is not affected by a shadow, the result will be `1`, so it will not affect the final color:
```glsl
...
float calcShadow(vec4 worldPosition, int idx) {
vec4 shadowMapPosition = cascadeshadows[idx].projViewMatrix * worldPosition;
float shadow = 1.0;
vec4 shadowCoord = (shadowMapPosition / shadowMapPosition.w) * 0.5 + 0.5;
shadow = textureProj(shadowCoord, vec2(0, 0), idx);
return shadow;
}
...
```
This function transforms from world coordinate space to the NDC space of the directional light, for a specific cascade split, using its orthographic projection. That is, we multiply the world space position by the projection view matrix of the specified cascade split. After that, we need to transform those coordinates to texture coordinates (that is, in the range \[0, 1], starting at the top left corner). With that information, we use the `textureProj` function, which just selects the proper shadow map texture to use and, depending on the resulting value, applies the shadow factor:
```glsl
...
float textureProj(vec4 shadowCoord, vec2 offset, int idx) {
float shadow = 1.0;
if (shadowCoord.z > -1.0 && shadowCoord.z < 1.0) {
float dist = 0.0;
dist = texture(shadowMap[idx], vec2(shadowCoord.xy + offset)).r;
if (shadowCoord.w > 0 && dist < shadowCoord.z - BIAS) {
shadow = SHADOW_FACTOR;
}
}
return shadow;
}
...
```
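Putting the two functions together: for a fragment with world position $$p$$ and cascade index $$i$$, the shadow map coordinates are $$s = 0.5 \cdot (M_i \cdot p) / w + 0.5$$, where $$M_i$$ is the projection view matrix of that cascade. The fragment is considered to be in shadow, and therefore darkened by `SHADOW_FACTOR`, when the depth stored in the shadow map at $$(s_x, s_y)$$ is smaller than $$s_z - BIAS$$.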
In the `main` function, taking as an input the view position, we iterate over the split distances, calculated for each cascade split, to determine the cascade index that this fragment belongs to and calculate the shadow factor:
```glsl
...
void main() {
...
...
vec4 diffuseSpecularComp = calcDirLight(diffuse, specular, dirLight, outViewPosition, normal);
int cascadeIndex = 0;
for (int i = 0; i < NUM_CASCADES - 1; i++) {
if (outViewPosition.z < cascadeshadows[i].splitDistance) {
cascadeIndex = i + 1;
}
}
float shadowFactor = calcShadow(outWorldPosition, cascadeIndex);
for (int i = 0; i < MAX_POINT_LIGHTS; i++) {
if (pointLights[i].intensity > 0) {
diffuseSpecularComp += calcPointLight(diffuse, specular, pointLights[i], outViewPosition, normal);
}
}
for (int i = 0; i < MAX_SPOT_LIGHTS; i++) {
if (spotLights[i].pl.intensity > 0) {
diffuseSpecularComp += calcSpotLight(diffuse, specular, spotLights[i], outViewPosition, normal);
}
}
fragColor = ambient + diffuseSpecularComp;
fragColor.rgb = fragColor.rgb * shadowFactor;
if (fog.activeFog == 1) {
fragColor = calcFog(outViewPosition, fragColor, fog, ambientLight.color, dirLight);
}
if (DEBUG_SHADOWS == 1) {
switch (cascadeIndex) {
case 0:
fragColor.rgb *= vec3(1.0f, 0.25f, 0.25f);
break;
case 1:
fragColor.rgb *= vec3(0.25f, 1.0f, 0.25f);
break;
case 2:
fragColor.rgb *= vec3(0.25f, 0.25f, 1.0f);
break;
default :
fragColor.rgb *= vec3(1.0f, 1.0f, 0.25f);
break;
}
}
}
```
The final fragment color is modulated by the shadow factor. Finally, if the debug mode is activated we apply a color to that fragment to identify the cascades we are using.
Finally, we need to update the `Render` class to instantiate and use the `ShadowRender` class. We will also move the blending activation code to this class:
```java
public class Render {
...
private ShadowRender shadowRender;
...
public Render(Window window) {
...
// Support for transparencies
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
...
shadowRender = new ShadowRender();
}
public void cleanup() {
...
shadowRender.cleanup();
}
public void render(Window window, Scene scene) {
shadowRender.render(scene);
...
sceneRender.render(scene, shadowRender);
...
}
...
}
```
In the `Main` class, we just remove the sound code. At the end you will be able to see something like this:
If you set the `DEBUG_SHADOWS` constant to `1` you will see how the cascade shadow splits are distributed over the scene.

[Next chapter](../chapter-18/chapter-18.md)
================================================
FILE: chapter-18/chapter-18.md
================================================
# Chapter 18 - 3D Object Picking
One of the key aspects of every game is the ability to interact with the environment. This capability requires being able to select objects in the 3D scene. In this chapter we will explore how this can be achieved.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-18).
## Concepts
We will add the capability to select entities by clicking the mouse on the screen. In order to do so, we will cast a ray from the camera position (our origin) using as a direction the point where we have clicked with the mouse (transforming from mouse coordinates to world coordinates). With that ray we will check if it intersects with the bounding boxes associated with each entity (that is, a box that encloses the model associated with an entity).

We need to implement the following steps:
* Associate a bounding box to each model (to each mesh of the model indeed).
* Transform mouse coordinates to world space ones to cast a ray from the camera position.
* For each entity, iterate over the associated meshes and check if we intersect with the ray.
* We will select the entity whose intersection point is closest to the ray origin.
* If we have a selected entity we will highlight it in the fragment shader.
## Code preparation
We will start by calculating the bounding box for each mesh of the models we load. We will let [assimp](https://github.com/assimp/assimp) do this work for us by adding an additional flag when loading the models: `aiProcess_GenBoundingBoxes`. This flag will automatically calculate a bounding box for each mesh. That box will enclose all the vertices of the mesh and will be axis aligned. You may see the acronym "AABB" used for this, which means Axis Aligned Bounding Box. Why axis aligned boxes? Because they simplify intersection calculations a lot. By using that flag, [assimp](https://github.com/assimp/assimp) will perform those calculations, which will be available as the corners of the bounding box (with minimum and maximum coordinates). The following figure shows how it would look for a cube.

Once the calculation is enabled, we need to retrieve that information when processing the meshes:
```java
public class ModelLoader {
...
public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, boolean animation) {
return loadModel(modelId, modelPath, textureCache, aiProcess_GenSmoothNormals | aiProcess_JoinIdenticalVertices |
aiProcess_Triangulate | aiProcess_FixInfacingNormals | aiProcess_CalcTangentSpace | aiProcess_LimitBoneWeights |
aiProcess_GenBoundingBoxes | (animation ? 0 : aiProcess_PreTransformVertices));
}
...
private static Mesh processMesh(AIMesh aiMesh, List<Bone> boneList) {
...
AIAABB aabb = aiMesh.mAABB();
Vector3f aabbMin = new Vector3f(aabb.mMin().x(), aabb.mMin().y(), aabb.mMin().z());
Vector3f aabbMax = new Vector3f(aabb.mMax().x(), aabb.mMax().y(), aabb.mMax().z());
return new Mesh(vertices, normals, tangents, bitangents, textCoords, indices, animMeshData.boneIds,
animMeshData.weights, aabbMin, aabbMax);
}
...
}
```
We need to store that information in the `Mesh` class:
```java
public class Mesh {
...
private Vector3f aabbMax;
private Vector3f aabbMin;
...
public Mesh(float[] positions, float[] normals, float[] tangents, float[] bitangents, float[] textCoords, int[] indices) {
this(positions, normals, tangents, bitangents, textCoords, indices,
new int[Mesh.MAX_WEIGHTS * positions.length / 3], new float[Mesh.MAX_WEIGHTS * positions.length / 3],
new Vector3f(), new Vector3f());
}
public Mesh(float[] positions, float[] normals, float[] tangents, float[] bitangents, float[] textCoords, int[] indices,
int[] boneIndices, float[] weights, Vector3f aabbMin, Vector3f aabbMax) {
this.aabbMin = aabbMin;
this.aabbMax = aabbMax;
...
}
...
public Vector3f getAabbMax() {
return aabbMax;
}
public Vector3f getAabbMin() {
return aabbMin;
}
...
}
```
While performing the ray intersection calculations we will need the inverse view and projection matrices in order to transform from screen space to world space coordinates. Therefore, we will modify the `Camera` and `Projection` classes to automatically calculate the inverse of their respective matrices whenever they are updated:
```java
public class Camera {
...
private Matrix4f invViewMatrix;
...
public Camera() {
...
invViewMatrix = new Matrix4f();
...
}
...
public Matrix4f getInvViewMatrix() {
return invViewMatrix;
}
...
private void recalculate() {
viewMatrix.identity()
.rotateX(rotation.x)
.rotateY(rotation.y)
.translate(-position.x, -position.y, -position.z);
invViewMatrix.set(viewMatrix).invert();
}
...
}
```
```java
public class Projection {
...
private Matrix4f invProjMatrix;
...
public Projection(int width, int height) {
...
invProjMatrix = new Matrix4f();
...
}
public Matrix4f getInvProjMatrix() {
return invProjMatrix;
}
...
public void updateProjMatrix(int width, int height) {
projMatrix.setPerspective(FOV, (float) width / height, Z_NEAR, Z_FAR);
invProjMatrix.set(projMatrix).invert();
}
}
```
We will also need to store the selected `Entity` once we have done the calculations. We will do this in the `Scene` class:
```java
public class Scene {
...
private Entity selectedEntity;
...
public Entity getSelectedEntity() {
return selectedEntity;
}
...
public void setSelectedEntity(Entity selectedEntity) {
this.selectedEntity = selectedEntity;
}
...
}
```
Finally, we will create a new uniform while rendering the scene that will be activated if we are rendering an `Entity` that is selected:
```java
public class SceneRender {
...
private void createUniforms() {
...
uniformsMap.createUniform("selected");
}
public void render(Scene scene, ShadowRender shadowRender) {
...
Entity selectedEntity = scene.getSelectedEntity();
for (Model model : models) {
List<Entity> entities = model.getEntitiesList();
for (Material material : model.getMaterialList()) {
...
for (Mesh mesh : material.getMeshList()) {
glBindVertexArray(mesh.getVaoId());
for (Entity entity : entities) {
uniformsMap.setUniform("selected",
selectedEntity != null && selectedEntity.getId().equals(entity.getId()) ? 1 : 0);
...
}
...
}
...
}
}
...
}
...
}
```
In the fragment shader (`scene.frag`), we will just modify the blue component of the fragment that belongs to a selected entity:
```glsl
#version 330
...
uniform int selected;
...
void main() {
...
if (selected > 0) {
fragColor = vec4(fragColor.x, fragColor.y, 1, 1);
}
}
```
## Entity selection
We can now proceed with the code for determining if an `Entity` must be selected. In the `Main` class, in the `input` method, we will check if the mouse left button has been pressed. If so, we will invoke a new method (`selectEntity`) where we will do the calculations:
```java
public class Main implements IAppLogic {
...
public void input(Window window, Scene scene, long diffTimeMillis, boolean inputConsumed) {
...
if (mouseInput.isLeftButtonPressed()) {
selectEntity(window, scene, mouseInput.getCurrentPos());
}
...
}
...
}
```
The `selectEntity` method starts like this:
```java
public class Main implements IAppLogic {
...
private void selectEntity(Window window, Scene scene, Vector2f mousePos) {
int wdwWidth = window.getWidth();
int wdwHeight = window.getHeight();
float x = (2 * mousePos.x) / wdwWidth - 1.0f;
float y = 1.0f - (2 * mousePos.y) / wdwHeight;
float z = -1.0f;
Matrix4f invProjMatrix = scene.getProjection().getInvProjMatrix();
Vector4f mouseDir = new Vector4f(x, y, z, 1.0f);
mouseDir.mul(invProjMatrix);
mouseDir.z = -1.0f;
mouseDir.w = 0.0f;
Matrix4f invViewMatrix = scene.getCamera().getInvViewMatrix();
mouseDir.mul(invViewMatrix);
...
}
...
}
```
We need to calculate that direction vector using the click coordinates. But how do we pass from $$(x,y)$$ coordinates in viewport space to world space? Let’s review how we pass from model space coordinates to view space. The different coordinate transformations that are applied in order to achieve that are:
* We pass from model coordinates to world coordinates using the model matrix.
* We pass from world coordinates to view space coordinates using the view matrix (that provides the camera effect).
* We pass from view coordinates to homogeneous clip space by applying the perspective projection matrix.
* Final screen coordinates are calculated automatically by OpenGL for us. Before doing that, it passes to normalized device space (by dividing the $$x, y, z$$ coordinates by the $$w$$ component) and then to $$x, y$$ screen coordinates.
So we just need to traverse the inverse path to get from screen coordinates $$(x,y)$$ to world coordinates.
The first step is to transform from screen coordinates to normalized device space. The $$(x, y)$$ coordinates in viewport space are in the range $$[0, screenwidth]$$, $$[0, screenheight]$$. The upper left corner of the screen has a coordinate of $$(0, 0)$$. We need to transform that into coordinates in the range $$[-1, 1]$$.
The maths are simple:
$$x = 2 \cdot screen_x / screenwidth - 1$$
$$y = 1 - 2 \cdot screen_y / screenheight$$
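For example, assuming an 800×600 window and a click at pixel $$(200, 150)$$, we would get $$x = 2 \cdot 200 / 800 - 1 = -0.5$$ and $$y = 1 - 2 \cdot 150 / 600 = 0.5$$, which places the point in the upper left quadrant of normalized device space, as expected.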
But, how do we calculate the $$z$$ component? The answer is simple, we simply assign it the $$-1$$ value, so that the ray points to the farthest visible distance (Remember that in OpenGL, $$-1$$ points to the screen). Now we have the coordinates in normalized device space.
In order to continue with the transformations we need to convert them to homogeneous clip space. We need to have the $$w$$ component, that is, use homogeneous coordinates. Although this concept was presented in the previous chapters, let’s get back to it. In order to represent a 3D point we just need the $$x$$, $$y$$ and $$z$$ components, but we are continuously working with an additional component, the $$w$$ component. We need this extra component in order to use matrices to perform the different transformations. Some transformations do not need that extra component but others do. For instance, the translation matrix does not work if we only have $$x$$, $$y$$ and $$z$$ components. Thus, we have added the $$w$$ component and assigned it a value of $$1$$ so we can work with 4 by 4 matrices.
Besides that, most transformations, or to be more precise, most transformation matrices, do not alter the $$w$$ component. An exception to this is the projection matrix. This matrix changes the $$w$$ value to be proportional to the $$z$$ component.
Transforming from homogeneous clip space to normalized device coordinates is achieved by dividing the $$x$$, $$y$$ and $$z$$ components by $$w$$. As this component is proportional to the $$z$$ component, this implies that distant objects are drawn smaller. In our case we need to do the reverse: we need to unproject. But since what we are calculating is a ray, we can simply ignore that step, set the $$w$$ component to $$1$$ and leave the rest of the components at their original value.
Now we need to go back to view space. This is easy: we just need to calculate the inverse of the projection matrix and multiply it by our four-component vector. Once we have done that, we need to transform it to world space. Again, we just need to take the view matrix, calculate its inverse and multiply it by our vector.
Remember that we are only interested in directions, so, in this case we set the $$w$$ component to $$0$$. Also we can set the $$z$$ component again to $$-1$$, since we want it to point towards the screen. Once we have done that and applied the inverse view matrix we have our vector in world space.
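Summarizing the whole chain with a couple of formulas: if $$P^{-1}$$ is the inverse projection matrix and $$V^{-1}$$ the inverse view matrix, the code above computes
$$v = P^{-1} \cdot (x, y, -1, 1)$$
$$\vec{d}_{world} = V^{-1} \cdot (v_x, v_y, -1, 0)$$
which is exactly the sequence of multiplications performed in the `selectEntity` method.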
The next step is to iterate over entities with their associated meshes and check if their bounding boxes intersect with the ray which starts at the camera position:
```java
public class Main implements IAppLogic {
...
private void selectEntity(Window window, Scene scene, Vector2f mousePos) {
...
Vector4f min = new Vector4f(0.0f, 0.0f, 0.0f, 1.0f);
Vector4f max = new Vector4f(0.0f, 0.0f, 0.0f, 1.0f);
Vector2f nearFar = new Vector2f();
Entity selectedEntity = null;
float closestDistance = Float.POSITIVE_INFINITY;
Vector3f center = scene.getCamera().getPosition();
Collection<Model> models = scene.getModelMap().values();
Matrix4f modelMatrix = new Matrix4f();
for (Model model : models) {
List<Entity> entities = model.getEntitiesList();
for (Entity entity : entities) {
modelMatrix.translate(entity.getPosition()).scale(entity.getScale());
for (Material material : model.getMaterialList()) {
for (Mesh mesh : material.getMeshList()) {
Vector3f aabbMin = mesh.getAabbMin();
min.set(aabbMin.x, aabbMin.y, aabbMin.z, 1.0f);
min.mul(modelMatrix);
Vector3f aabMax = mesh.getAabbMax();
max.set(aabMax.x, aabMax.y, aabMax.z, 1.0f);
max.mul(modelMatrix);
if (Intersectionf.intersectRayAab(center.x, center.y, center.z, mouseDir.x, mouseDir.y, mouseDir.z,
min.x, min.y, min.z, max.x, max.y, max.z, nearFar) && nearFar.x < closestDistance) {
closestDistance = nearFar.x;
selectedEntity = entity;
}
}
}
modelMatrix.identity();
}
}
scene.setSelectedEntity(selectedEntity);
}
...
}
```
We define a variable named `closestDistance`. This variable will hold the closest distance found so far. For each entity that intersects, the distance from the camera to the intersection point is calculated, and if it is lower than the value stored in `closestDistance`, that entity becomes the new candidate. We need to translate and scale the bounding box of each mesh. We cannot use the model matrix as it is, since it also takes the rotation into account (we do not want that, because we want the box to stay axis aligned). This is why we just apply translation and scaling, using the entity's data, to construct a model matrix. But how do we calculate the intersection? This is where the glorious [JOML](https://github.com/JOML-CI/JOML) library comes to the rescue. We are using [JOML](https://github.com/JOML-CI/JOML)’s `Intersectionf` class, which provides several methods to calculate intersections in 2D and 3D. Specifically, we are using the `intersectRayAab` method.
This method implements the algorithm that tests intersections for Axis Aligned Boxes. You can check the details, as pointed out in the JOML documentation, [here](http://people.csail.mit.edu/amy/papers/box-jgt.pdf).
The method tests if a ray, defined by an origin and a direction, intersects a box, defined by its minimum and maximum corners. As has been said before, this algorithm is valid because our boxes are aligned with the axes; if they were rotated, this method would not work. In addition to that, when using animations you may need different bounding boxes per animation frame (assimp calculates the bounding box for the binding pose). The `intersectRayAab` method receives the following parameters:
* An origin: In our case, this will be our camera position.
* A direction: This is the ray that points to the mouse coordinates (world space).
* The minimum corner of the box.
* The maximum corner. Self explanatory.
* A result vector. This will contain the near and far distances of the intersection points.
The method returns true if there is an intersection. If so, we check the closest distance, update it if needed, and store a reference to the selected candidate.
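As a quick, self-contained illustration of how `intersectRayAab` behaves (this snippet is not part of the chapter's code), a ray shot from $$(0, 0, 5)$$ towards the negative $$z$$ axis against a box spanning $$[-1, 1]$$ on each axis reports the near and far intersection distances:
```java
import org.joml.Intersectionf;
import org.joml.Vector2f;

public class RayAabExample {
    public static void main(String[] args) {
        Vector2f nearFar = new Vector2f();
        // Ray origin (0, 0, 5) and direction (0, 0, -1); box corners (-1, -1, -1) and (1, 1, 1)
        boolean hit = Intersectionf.intersectRayAab(0, 0, 5, 0, 0, -1,
                -1, -1, -1, 1, 1, 1, nearFar);
        // Expected output: hit=true, near=4.0, far=6.0
        System.out.println("hit=" + hit + ", near=" + nearFar.x + ", far=" + nearFar.y);
    }
}
```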
Obviously, the method presented here is far from optimal but it will give you the basics to develop more sophisticated methods on your own. Some parts of the scene could be easily discarded, like objects behind the camera, since they are not going to be intersected. Besides that, you may want to order your items according to the distance to the camera to speed up calculations.
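For instance, a very rough pre-filter could discard entities whose center lies behind the ray origin before running the full intersection test. The helper below is purely illustrative and not part of the chapter's code:
```java
import org.joml.Vector3f;

public final class PickingUtils {

    private PickingUtils() {
    }

    // Coarse pre-filter: returns true when the entity position lies in front of the
    // picking ray, so it is worth running the full ray/AABB intersection test.
    // Very large entities close to the camera could be wrongly discarded, so this
    // should only be used as a cheap first pass.
    public static boolean isInFrontOfRay(Vector3f rayOrigin, Vector3f rayDir, Vector3f entityPos) {
        Vector3f toEntity = new Vector3f(entityPos).sub(rayOrigin);
        return toEntity.dot(rayDir) >= 0;
    }
}
```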
We will modify the `Main` class to show two spinning cubes to illustrate the technique:
```java
public class Main implements IAppLogic {
...
private Entity cubeEntity1;
private Entity cubeEntity2;
...
private float rotation;
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-18", opts, main);
...
}
...
public void init(Window window, Scene scene, Render render) {
...
Model cubeModel = ModelLoader.loadModel("cube-model", "resources/models/cube/cube.obj",
scene.getTextureCache(), false);
scene.addModel(cubeModel);
cubeEntity1 = new Entity("cube-entity-1", cubeModel.getId());
cubeEntity1.setPosition(0, 2, -1);
scene.addEntity(cubeEntity1);
cubeEntity2 = new Entity("cube-entity-2", cubeModel.getId());
cubeEntity2.setPosition(-2, 2, -1);
scene.addEntity(cubeEntity2);
...
}
...
public void update(Window window, Scene scene, long diffTimeMillis) {
rotation += 1.5;
if (rotation > 360) {
rotation = 0;
}
cubeEntity1.setRotation(1, 1, 1, (float) Math.toRadians(rotation));
cubeEntity1.updateModelMatrix();
cubeEntity2.setRotation(1, 1, 1, (float) Math.toRadians(360 - rotation));
cubeEntity2.updateModelMatrix();
}
}
```
You will be able to see how the cubes are rendered in blue when clicked with the mouse:
[Next chapter](../chapter-19/chapter-19.md)
================================================
FILE: chapter-19/chapter-19.md
================================================
# Chapter 19 - Deferred Shading
Up to now the way that we are rendering a 3D scene is called forward rendering. We first render the 3D objects and apply the texture and lighting effects in a fragment shader. This method is not very efficient if we have a complex fragment shader pass with many lights and complex effects. In addition to that we may end up applying these effects to fragments that may be later on discarded due to depth testing (although this is not exactly true if we enable [early fragment testing](https://www.khronos.org/opengl/wiki/Early_Fragment_Test)).
In order to alleviate the problems described above we may change the way that we render the scene by using a technique called deferred shading. With deferred shading we first render the geometry information that is required in later stages (in the fragment shader) to a buffer. The complex calculations required by the fragment shader are postponed, deferred, to a later stage that uses the information stored in those buffers.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-19).
## Concepts
Deferred shading requires two rendering passes. The first one is the geometry pass, where we render the scene to a buffer that will contain the following information:
* Depth value.
* The diffuse colors and reflectance factor for each position.
* The specular component for each position.
* The normals at each position (in view space coordinates, which is the space where lighting will be calculated).
All that information is stored in a buffer called G-Buffer.
The second pass is called the lighting pass. This pass takes a quad that fills up the whole screen and generates the color information for each fragment using the information contained in the G-Buffer. When we perform the lighting pass, the depth test will have already removed all the scene data that would not be seen. Hence, the number of operations to be done is restricted to what will be displayed on the screen.
You may be asking whether performing additional rendering passes will result in an increase of performance or not. The answer is that it depends. Deferred shading is usually used when you have many different lights. In this case, the additional rendering steps are compensated by the reduction of operations that will be done in the fragment shader.
## G-Buffer
So let’s start coding. The first task is to create a new class for the G-Buffer. The class, named `GBuffer`, is defined like this:
```java
package org.lwjglb.engine.graph;
import org.lwjgl.opengl.GL30;
import org.lwjgl.system.MemoryStack;
import org.lwjglb.engine.Window;
import java.nio.*;
import java.util.Arrays;
import static org.lwjgl.opengl.GL30.*;
public class GBuffer {
private static final int TOTAL_TEXTURES = 4;
private int gBufferId;
private int height;
private int[] textureIds;
private int width;
...
}
```
The class defines a constant that models the number of textures to be used, the identifier associated with the G-Buffer itself, and an array for the individual texture identifiers. The size of the textures is also stored.
Let’s review the constructor:
```java
public class GBuffer {
...
public GBuffer(Window window) {
gBufferId = glGenFramebuffers();
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gBufferId);
textureIds = new int[TOTAL_TEXTURES];
glGenTextures(textureIds);
this.width = window.getWidth();
this.height = window.getHeight();
for (int i = 0; i < TOTAL_TEXTURES; i++) {
glBindTexture(GL_TEXTURE_2D, textureIds[i]);
int attachmentType;
if (i == TOTAL_TEXTURES - 1) {
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT,
(ByteBuffer) null);
attachmentType = GL_DEPTH_ATTACHMENT;
} else {
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, (ByteBuffer) null);
attachmentType = GL_COLOR_ATTACHMENT0 + i;
}
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, attachmentType, GL_TEXTURE_2D, textureIds[i], 0);
}
try (MemoryStack stack = MemoryStack.stackPush()) {
IntBuffer intBuff = stack.mallocInt(TOTAL_TEXTURES);
for (int i = 0; i < TOTAL_TEXTURES; i++) {
intBuff.put(i, GL_COLOR_ATTACHMENT0 + i);
}
glDrawBuffers(intBuff);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
...
}
```
The first thing that we do is create a frame buffer. Remember that a frame buffer is just an OpenGL object that render operations can write to instead of the screen. Then we generate a set of textures (4 textures) that will be attached to the frame buffer.
After that, we use a for loop to initialize the textures. We have the following types:
* “Regular textures”, that will store positions, normals, the diffuse component, etc.
* A texture for storing the depth buffer. This will be our last texture.
Once the textures have been initialized, we set their sampling parameters and attach them to the frame buffer. Each texture is attached using an identifier which starts at `GL_COLOR_ATTACHMENT0`. Each texture increments that identifier by one, so the albedo is attached using `GL_COLOR_ATTACHMENT0`, the normals use `GL_COLOR_ATTACHMENT1` (which is `GL_COLOR_ATTACHMENT0 + 1`), and so on.
After all the textures have been created, we need to enable them to be used by the fragment shader for rendering. This is done with the `glDrawBuffers` call. We just pass the array with the identifiers of the color attachments used (`GL_COLOR_ATTACHMENT0`, `GL_COLOR_ATTACHMENT1`, etc.).
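The constructor above does not verify that the frame buffer is complete. While the sample works without it, a sanity check like the following, placed just before the final `glBindFramebuffer(GL_FRAMEBUFFER, 0)` call, can save some debugging time (this check is an addition, not part of the book's code):
```java
// Assumed addition: fail fast if the G-Buffer attachments are not consistent
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("Could not create G-Buffer: frame buffer is not complete");
}
```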
The rest of the class just contains getter methods and the cleanup one.
```java
public class GBuffer {
...
public void cleanUp() {
glDeleteFramebuffers(gBufferId);
Arrays.stream(textureIds).forEach(GL30::glDeleteTextures);
}
public int getGBufferId() {
return gBufferId;
}
public int getHeight() {
return height;
}
public int[] getTextureIds() {
return textureIds;
}
public int getWidth() {
return width;
}
}
```
## Geometry pass
Let's examine the changes that we need to apply when doing the geometry pass. We will apply these changes to the `SceneRender` class and the associated shaders. Starting with the `SceneRender` class, we need to remove the light constants and light uniforms, since they will not be used in this pass (to simplify, we will also not be using the ambient color for materials, so we remove that uniform as well as the selected entity uniform):
```java
public class SceneRender {
private ShaderProgram shaderProgram;
private UniformsMap uniformsMap;
...
private void createUniforms() {
uniformsMap = new UniformsMap(shaderProgram.getProgramId());
uniformsMap.createUniform("projectionMatrix");
uniformsMap.createUniform("modelMatrix");
uniformsMap.createUniform("viewMatrix");
uniformsMap.createUniform("bonesMatrices");
uniformsMap.createUniform("txtSampler");
uniformsMap.createUniform("normalSampler");
uniformsMap.createUniform("material.diffuse");
uniformsMap.createUniform("material.specular");
uniformsMap.createUniform("material.reflectance");
uniformsMap.createUniform("material.hasNormalMap");
}
...
}
```
The `render` method is defined like this:
```java
public class SceneRender {
...
public void render(Scene scene, GBuffer gBuffer) {
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gBuffer.getGBufferId());
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, gBuffer.getWidth(), gBuffer.getHeight());
glDisable(GL_BLEND);
shaderProgram.bind();
uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
uniformsMap.setUniform("viewMatrix", scene.getCamera().getViewMatrix());
uniformsMap.setUniform("txtSampler", 0);
uniformsMap.setUniform("normalSampler", 1);
Collection<Model> models = scene.getModelMap().values();
TextureCache textureCache = scene.getTextureCache();
for (Model model : models) {
List<Entity> entities = model.getEntitiesList();
for (Material material : model.getMaterialList()) {
uniformsMap.setUniform("material.diffuse", material.getDiffuseColor());
uniformsMap.setUniform("material.specular", material.getSpecularColor());
uniformsMap.setUniform("material.reflectance", material.getReflectance());
String normalMapPath = material.getNormalMapPath();
boolean hasNormalMapPath = normalMapPath != null;
uniformsMap.setUniform("material.hasNormalMap", hasNormalMapPath ? 1 : 0);
Texture texture = textureCache.getTexture(material.getTexturePath());
glActiveTexture(GL_TEXTURE0);
texture.bind();
if (hasNormalMapPath) {
Texture normalMapTexture = textureCache.getTexture(normalMapPath);
glActiveTexture(GL_TEXTURE1);
normalMapTexture.bind();
}
for (Mesh mesh : material.getMeshList()) {
glBindVertexArray(mesh.getVaoId());
for (Entity entity : entities) {
uniformsMap.setUniform("modelMatrix", entity.getModelMatrix());
AnimationData animationData = entity.getAnimationData();
if (animationData == null) {
uniformsMap.setUniform("bonesMatrices", AnimationData.DEFAULT_BONES_MATRICES);
} else {
uniformsMap.setUniform("bonesMatrices", animationData.getCurrentFrame().boneMatrices());
}
glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
}
}
}
}
glBindVertexArray(0);
glEnable(GL_BLEND);
shaderProgram.unbind();
}
}
```
You can see that we now receive a `GBuffer` instance as a method parameter. That buffer is where we will perform the rendering, therefore we first bind it by calling `glBindFramebuffer`. After that, we clear that buffer and disable blending. When using deferred rendering, transparent objects are a bit tricky; the approach would be to render them in the lighting pass or to discard them in the geometry pass. As you can see, we have removed all the light uniforms setup code.
The only change in the vertex shader (`scene.vert`) is that now the view position is a four component vector (`vec4`):
```glsl
#version 330
...
out vec4 outViewPosition;
...
void main()
{
...
outWorldPosition = modelMatrix * initPos;
outViewPosition = viewMatrix * outWorldPosition;
gl_Position = projectionMatrix * outViewPosition;
outNormal = normalize(modelViewMatrix * initNormal).xyz;
outTangent = normalize(modelViewMatrix * initTangent).xyz;
outBitangent = normalize(modelViewMatrix * initBitangent).xyz;
outTextCoord = texCoord;
}
```
The fragment shader (`scene.frag`) has been simplified a lot:
```glsl
#version 330
in vec3 outNormal;
in vec3 outTangent;
in vec3 outBitangent;
in vec2 outTextCoord;
in vec4 outViewPosition;
in vec4 outWorldPosition;
layout (location = 0) out vec4 buffAlbedo;
layout (location = 1) out vec4 buffNormal;
layout (location = 2) out vec4 buffSpecular;
struct Material
{
vec4 diffuse;
vec4 specular;
float reflectance;
int hasNormalMap;
};
uniform sampler2D txtSampler;
uniform sampler2D normalSampler;
uniform Material material;
vec3 calcNormal(vec3 normal, vec3 tangent, vec3 bitangent, vec2 textCoords) {
mat3 TBN = mat3(tangent, bitangent, normal);
vec3 newNormal = texture(normalSampler, textCoords).rgb;
newNormal = normalize(newNormal * 2.0 - 1.0);
newNormal = normalize(TBN * newNormal);
return newNormal;
}
void main() {
vec4 text_color = texture(txtSampler, outTextCoord);
vec4 diffuse = text_color + material.diffuse;
if (diffuse.a < 0.5) {
discard;
}
vec4 specular = text_color + material.specular;
vec3 normal = outNormal;
if (material.hasNormalMap > 0) {
normal = calcNormal(outNormal, outTangent, outBitangent, outTextCoord);
}
buffAlbedo = vec4(diffuse.xyz, material.reflectance);
buffNormal = vec4(0.5 * normal + 0.5, 1.0);
buffSpecular = specular;
}
```
The most relevant lines are:
```glsl
...
layout (location = 0) out vec4 buffAlbedo;
layout (location = 1) out vec4 buffNormal;
layout (location = 2) out vec4 buffSpecular;
...
```
This is where we refer to the textures that this fragment shader will write to. As you can see, we just dump the diffuse color (which can be the color of the associated texture or a component of the material), the specular component and the normal, while the depth values are written automatically to the depth attachment. You may notice that we do not store the position in the textures; this is because we can reconstruct the fragment position using the depth values. We will see how this can be done in the lighting pass.
SIDE NOTE: We have simplified the `Material` class definition removing the ambient color component.
If you debug the sample execution with an OpenGL debugger (such as RenderDoc), you can view the textures generated during the geometry pass. The albedo texture will look like this:

The texture that holds the values for the normals will look like this:

The texture that holds the values for the specular colors will look like this:

And finally, the depth texture will look like this:

## Lighting pass
In order to perform the lighting pass, we will create a new class named `LightsRender` which starts like this:
```java
package org.lwjglb.engine.graph;
import org.joml.*;
import org.lwjglb.engine.scene.*;
import org.lwjglb.engine.scene.lights.*;
import java.util.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL14.*;
import static org.lwjgl.opengl.GL30.*;
public class LightsRender {
private static final int MAX_POINT_LIGHTS = 5;
private static final int MAX_SPOT_LIGHTS = 5;
private final ShaderProgram shaderProgram;
private QuadMesh quadMesh;
private UniformsMap uniformsMap;
public LightsRender() {
List<ShaderProgram.ShaderModuleData> shaderModuleDataList = new ArrayList<>();
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/lights.vert", GL_VERTEX_SHADER));
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/lights.frag", GL_FRAGMENT_SHADER));
shaderProgram = new ShaderProgram(shaderModuleDataList);
quadMesh = new QuadMesh();
createUniforms();
}
public void cleanup() {
quadMesh.cleanup();
shaderProgram.cleanup();
}
...
}
```
You can see that, in addition to creating a new shader program, we define a new attribute of the `QuadMesh` class (which has not been defined yet). Before analyzing the render method, let’s think a little bit about how we will render the lights. We need to use the contents of the G-Buffer, but in order to use them, we need to first render something. But we have already drawn the scene, so what are we going to render now? The answer is simple: we just need to render a quad that fills the whole screen. For each fragment of that quad, we will use the data contained in the G-Buffer and generate the correct output color. This is where the `QuadMesh` class comes into play; it just defines a quad which will be used to render in the lighting pass and is defined like this:
```java
package org.lwjglb.engine.graph;
import org.lwjgl.opengl.GL30;
import org.lwjgl.system.*;
import java.nio.*;
import java.util.*;
import static org.lwjgl.opengl.GL30.*;
public class QuadMesh {
private int numVertices;
private int vaoId;
private List<Integer> vboIdList;
public QuadMesh() {
vboIdList = new ArrayList<>();
float[] positions = new float[]{
-1.0f, 1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,};
float[] textCoords = new float[]{
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,};
int[] indices = new int[]{0, 2, 1, 1, 2, 3};
numVertices = indices.length;
vaoId = glGenVertexArrays();
glBindVertexArray(vaoId);
// Positions VBO
int vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer positionsBuffer = MemoryUtil.memCallocFloat(positions.length);
positionsBuffer.put(0, positions);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, positionsBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
// Texture coordinates VBO
vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer textCoordsBuffer = MemoryUtil.memCallocFloat(textCoords.length);
textCoordsBuffer.put(0, textCoords);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, textCoordsBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);
// Index VBO
vboId = glGenBuffers();
vboIdList.add(vboId);
IntBuffer indicesBuffer = MemoryUtil.memCallocInt(indices.length);
indicesBuffer.put(0, indices);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
MemoryUtil.memFree(positionsBuffer);
MemoryUtil.memFree(textCoordsBuffer);
MemoryUtil.memFree(indicesBuffer);
}
public void cleanup() {
vboIdList.forEach(GL30::glDeleteBuffers);
glDeleteVertexArrays(vaoId);
}
public int getNumVertices() {
return numVertices;
}
public int getVaoId() {
return vaoId;
}
}
```
As you can see we just need position and texture coordinate attributes (to properly access the G-Buffer textures). Going back to the `LightsRender` class, we need a method to create the uniforms which, as you will see, restores the light uniforms previously used in the `SceneRender` class plus a set of new ones to map the G-Buffer textures (`albedoSampler`, `normalSampler`, `specularSampler` and `depthSampler`). In addition to that, we will need new uniforms to calculate the fragment position from depth values, such as `invProjectionMatrix` and `invViewMatrix`. We will see in the shaders code how they will be used.
```java
public class LightsRender {
...
private void createUniforms() {
uniformsMap = new UniformsMap(shaderProgram.getProgramId());
uniformsMap.createUniform("albedoSampler");
uniformsMap.createUniform("normalSampler");
uniformsMap.createUniform("specularSampler");
uniformsMap.createUniform("depthSampler");
uniformsMap.createUniform("invProjectionMatrix");
uniformsMap.createUniform("invViewMatrix");
uniformsMap.createUniform("ambientLight.factor");
uniformsMap.createUniform("ambientLight.color");
for (int i = 0; i < MAX_POINT_LIGHTS; i++) {
String name = "pointLights[" + i + "]";
uniformsMap.createUniform(name + ".position");
uniformsMap.createUniform(name + ".color");
uniformsMap.createUniform(name + ".intensity");
uniformsMap.createUniform(name + ".att.constant");
uniformsMap.createUniform(name + ".att.linear");
uniformsMap.createUniform(name + ".att.exponent");
}
for (int i = 0; i < MAX_SPOT_LIGHTS; i++) {
String name = "spotLights[" + i + "]";
uniformsMap.createUniform(name + ".pl.position");
uniformsMap.createUniform(name + ".pl.color");
uniformsMap.createUniform(name + ".pl.intensity");
uniformsMap.createUniform(name + ".pl.att.constant");
uniformsMap.createUniform(name + ".pl.att.linear");
uniformsMap.createUniform(name + ".pl.att.exponent");
uniformsMap.createUniform(name + ".conedir");
uniformsMap.createUniform(name + ".cutoff");
}
uniformsMap.createUniform("dirLight.color");
uniformsMap.createUniform("dirLight.direction");
uniformsMap.createUniform("dirLight.intensity");
uniformsMap.createUniform("fog.activeFog");
uniformsMap.createUniform("fog.color");
uniformsMap.createUniform("fog.density");
for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
uniformsMap.createUniform("shadowMap_" + i);
uniformsMap.createUniform("cascadeshadows[" + i + "]" + ".projViewMatrix");
uniformsMap.createUniform("cascadeshadows[" + i + "]" + ".splitDistance");
}
}
...
}
```
The `render` method is defined like this:
```java
public class LightsRender {
...
public void render(Scene scene, ShadowRender shadowRender, GBuffer gBuffer) {
shaderProgram.bind();
updateLights(scene);
// Bind the G-Buffer textures
int[] textureIds = gBuffer.getTextureIds();
int numTextures = textureIds != null ? textureIds.length : 0;
for (int i = 0; i < numTextures; i++) {
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, textureIds[i]);
}
uniformsMap.setUniform("albedoSampler", 0);
uniformsMap.setUniform("normalSampler", 1);
uniformsMap.setUniform("specularSampler", 2);
uniformsMap.setUniform("depthSampler", 3);
Fog fog = scene.getFog();
uniformsMap.setUniform("fog.activeFog", fog.isActive() ? 1 : 0);
uniformsMap.setUniform("fog.color", fog.getColor());
uniformsMap.setUniform("fog.density", fog.getDensity());
int start = 4;
List<CascadeShadow> cascadeShadows = shadowRender.getCascadeShadows();
for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
glActiveTexture(GL_TEXTURE0 + start + i);
uniformsMap.setUniform("shadowMap_" + i, start + i);
CascadeShadow cascadeShadow = cascadeShadows.get(i);
uniformsMap.setUniform("cascadeshadows[" + i + "]" + ".projViewMatrix", cascadeShadow.getProjViewMatrix());
uniformsMap.setUniform("cascadeshadows[" + i + "]" + ".splitDistance", cascadeShadow.getSplitDistance());
}
shadowRender.getShadowBuffer().bindTextures(GL_TEXTURE0 + start);
uniformsMap.setUniform("invProjectionMatrix", scene.getProjection().getInvProjMatrix());
uniformsMap.setUniform("invViewMatrix", scene.getCamera().getInvViewMatrix());
glBindVertexArray(quadMesh.getVaoId());
glDrawElements(GL_TRIANGLES, quadMesh.getNumVertices(), GL_UNSIGNED_INT, 0);
shaderProgram.unbind();
}
...
}
```
After updating the lights we activate the textures that hold the results of the geometry pass. After that, we set the fog and cascade shadow uniforms and draw just a quad.
So, what does the vertex shader for the lighting pass (`lights.vert`) look like?
```glsl
#version 330
layout (location=0) in vec3 inPos;
layout (location=1) in vec2 inCoord;
out vec2 outTextCoord;
void main()
{
outTextCoord = inCoord;
gl_Position = vec4(inPos, 1.0f);
}
```
The code above just dumps the vertices directly and passes the texture coordinates to the fragment shader. The fragment shader (`lights.frag`) is defined like this:
```glsl
#version 330
const int MAX_POINT_LIGHTS = 5;
const int MAX_SPOT_LIGHTS = 5;
const float SPECULAR_POWER = 10;
const int NUM_CASCADES = 3;
const float BIAS = 0.0005;
const float SHADOW_FACTOR = 0.25;
in vec2 outTextCoord;
out vec4 fragColor;
struct Attenuation
{
float constant;
float linear;
float exponent;
};
struct AmbientLight
{
float factor;
vec3 color;
};
struct PointLight {
vec3 position;
vec3 color;
float intensity;
Attenuation att;
};
struct SpotLight
{
PointLight pl;
vec3 conedir;
float cutoff;
};
struct DirLight
{
vec3 color;
vec3 direction;
float intensity;
};
struct Fog
{
int activeFog;
vec3 color;
float density;
};
struct CascadeShadow {
mat4 projViewMatrix;
float splitDistance;
};
uniform sampler2D albedoSampler;
uniform sampler2D normalSampler;
uniform sampler2D specularSampler;
uniform sampler2D depthSampler;
uniform mat4 invProjectionMatrix;
uniform mat4 invViewMatrix;
uniform AmbientLight ambientLight;
uniform PointLight pointLights[MAX_POINT_LIGHTS];
uniform SpotLight spotLights[MAX_SPOT_LIGHTS];
uniform DirLight dirLight;
uniform Fog fog;
uniform CascadeShadow cascadeshadows[NUM_CASCADES];
uniform sampler2D shadowMap_0;
uniform sampler2D shadowMap_1;
uniform sampler2D shadowMap_2;
vec4 calcAmbient(AmbientLight ambientLight, vec4 ambient) {
return vec4(ambientLight.factor * ambientLight.color, 1) * ambient;
}
vec4 calcLightColor(vec4 diffuse, vec4 specular, float reflectance, vec3 lightColor, float light_intensity, vec3 position, vec3 to_light_dir, vec3 normal) {
vec4 diffuseColor = vec4(0, 0, 0, 1);
vec4 specColor = vec4(0, 0, 0, 1);
// Diffuse Light
float diffuseFactor = max(dot(normal, to_light_dir), 0.0);
diffuseColor = diffuse * vec4(lightColor, 1.0) * light_intensity * diffuseFactor;
// Specular Light
vec3 camera_direction = normalize(-position);
vec3 from_light_dir = -to_light_dir;
vec3 reflected_light = normalize(reflect(from_light_dir, normal));
float specularFactor = max(dot(camera_direction, reflected_light), 0.0);
specularFactor = pow(specularFactor, SPECULAR_POWER);
specColor = specular * light_intensity * specularFactor * reflectance * vec4(lightColor, 1.0);
return (diffuseColor + specColor);
}
vec4 calcPointLight(vec4 diffuse, vec4 specular, float reflectance, PointLight light, vec3 position, vec3 normal) {
vec3 light_direction = light.position - position;
vec3 to_light_dir = normalize(light_direction);
vec4 light_color = calcLightColor(diffuse, specular, reflectance, light.color, light.intensity, position, to_light_dir, normal);
// Apply Attenuation
float distance = length(light_direction);
float attenuationInv = light.att.constant + light.att.linear * distance +
light.att.exponent * distance * distance;
return light_color / attenuationInv;
}
vec4 calcSpotLight(vec4 diffuse, vec4 specular, float reflectance, SpotLight light, vec3 position, vec3 normal) {
vec3 light_direction = light.pl.position - position;
vec3 to_light_dir = normalize(light_direction);
vec3 from_light_dir = -to_light_dir;
float spot_alfa = dot(from_light_dir, normalize(light.conedir));
vec4 color = vec4(0, 0, 0, 0);
if (spot_alfa > light.cutoff)
{
color = calcPointLight(diffuse, specular, reflectance, light.pl, position, normal);
color *= (1.0 - (1.0 - spot_alfa)/(1.0 - light.cutoff));
}
return color;
}
vec4 calcDirLight(vec4 diffuse, vec4 specular, float reflectance, DirLight light, vec3 position, vec3 normal) {
return calcLightColor(diffuse, specular, reflectance, light.color, light.intensity, position, normalize(light.direction), normal);
}
vec4 calcFog(vec3 pos, vec4 color, Fog fog, vec3 ambientLight, DirLight dirLight) {
vec3 fogColor = fog.color * (ambientLight + dirLight.color * dirLight.intensity);
float distance = length(pos);
float fogFactor = 1.0 / exp((distance * fog.density) * (distance * fog.density));
fogFactor = clamp(fogFactor, 0.0, 1.0);
vec3 resultColor = mix(fogColor, color.xyz, fogFactor);
return vec4(resultColor.xyz, color.w);
}
float textureProj(vec4 shadowCoord, vec2 offset, int idx) {
float shadow = 1.0;
if (shadowCoord.z > -1.0 && shadowCoord.z < 1.0) {
float dist = 0.0;
if (idx == 0) {
dist = texture(shadowMap_0, vec2(shadowCoord.xy + offset)).r;
} else if (idx == 1) {
dist = texture(shadowMap_1, vec2(shadowCoord.xy + offset)).r;
} else {
dist = texture(shadowMap_2, vec2(shadowCoord.xy + offset)).r;
}
if (shadowCoord.w > 0 && dist < shadowCoord.z - BIAS) {
shadow = SHADOW_FACTOR;
}
}
return shadow;
}
float calcShadow(vec4 worldPosition, int idx) {
vec4 shadowMapPosition = cascadeshadows[idx].projViewMatrix * worldPosition;
float shadow = 1.0;
vec4 shadowCoord = (shadowMapPosition / shadowMapPosition.w) * 0.5 + 0.5;
shadow = textureProj(shadowCoord, vec2(0, 0), idx);
return shadow;
}
void main()
{
vec4 albedoSamplerValue = texture(albedoSampler, outTextCoord);
vec3 albedo = albedoSamplerValue.rgb;
vec4 diffuse = vec4(albedo, 1);
float reflectance = albedoSamplerValue.a;
vec3 normal = normalize(2.0 * texture(normalSampler, outTextCoord).rgb - 1.0);
vec4 specular = texture(specularSampler, outTextCoord);
// Retrieve position from depth
float depth = texture(depthSampler, outTextCoord).x * 2.0 - 1.0;
if (depth == 1) {
discard;
}
vec4 clip = vec4(outTextCoord.x * 2.0 - 1.0, outTextCoord.y * 2.0 - 1.0, depth, 1.0);
vec4 view_w = invProjectionMatrix * clip;
vec3 view_pos = view_w.xyz / view_w.w;
vec4 world_pos = invViewMatrix * vec4(view_pos, 1);
vec4 diffuseSpecularComp = calcDirLight(diffuse, specular, reflectance, dirLight, view_pos, normal);
int cascadeIndex = 0;
for (int i = 0; i < NUM_CASCADES - 1; i++) {
if (view_pos.z < cascadeshadows[i].splitDistance) {
cascadeIndex = i + 1;
}
}
float shadowFactor = calcShadow(world_pos, cascadeIndex);
for (int i = 0; i < MAX_POINT_LIGHTS; i++) {
if (pointLights[i].intensity > 0) {
diffuseSpecularComp += calcPointLight(diffuse, specular, reflectance, pointLights[i], view_pos, normal);
}
}
for (int i = 0; i < MAX_SPOT_LIGHTS; i++) {
if (spotLights[i].pl.intensity > 0) {
diffuseSpecularComp += calcSpotLight(diffuse, specular, reflectance, spotLights[i], view_pos, normal);
}
}
vec4 ambient = calcAmbient(ambientLight, diffuse);
fragColor = ambient + diffuseSpecularComp;
fragColor.rgb = fragColor.rgb * shadowFactor;
if (fog.activeFog == 1) {
fragColor = calcFog(view_pos, fragColor, fog, ambientLight.color, dirLight);
}
}
```
As you can see, it contains functions that should look familiar to you. They were used in previous chapters in the scene fragment shader. The important things here to note are the following lines:
```glsl
uniform sampler2D albedoSampler;
uniform sampler2D normalSampler;
uniform sampler2D specularSampler;
uniform sampler2D depthSampler;
```
We first sample the albedo, the normal map (converting from the \[0, 1] to the \[-1, 1] range) and the specular attachment according to the current fragment coordinates. In addition to that, there is a code fragment that may look new to you. We need the fragment position to perform the light calculations, but we have no position attachment. This is where the depth attachment and the inverse projection matrix come into play. With that information we can reconstruct the fragment position, first in view space and then in world space, without requiring another attachment which stores the position. You will see in other tutorials that they set up a specific attachment for positions, but it is much more efficient to do it this way. Always remember that the less memory consumed by the deferred attachments, the better. With all that information we just iterate over the lights to calculate the light contribution to the final color.
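In case it helps to see that reconstruction written out: with $$(u, v)$$ being the quad texture coordinates, $$d$$ the value sampled from the depth attachment, $$P^{-1}$$ the `invProjectionMatrix` and $$V^{-1}$$ the `invViewMatrix`, the `main` function computes
$$clip = (2u - 1, 2v - 1, 2d - 1, 1)$$
$$p_{view} = \frac{P^{-1} \cdot clip}{(P^{-1} \cdot clip)_w} \qquad p_{world} = V^{-1} \cdot (p_{view}, 1)$$
which matches the `clip`, `view_pos` and `world_pos` variables of the shader.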
The rest of the code is quite similar to the one in the fragment shader of the scene render.
Finally, we need to update the `Render` class to use the new classes:
```java
public class Render {
...
private GBuffer gBuffer;
...
private LightsRender lightsRender;
...
public Render(Window window) {
...
lightsRender = new LightsRender();
gBuffer = new GBuffer(window);
}
public void cleanup() {
...
lightsRender.cleanup();
gBuffer.cleanUp();
}
private void lightRenderFinish() {
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
private void lightRenderStart(Window window) {
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, window.getWidth(), window.getHeight());
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glBindFramebuffer(GL_READ_FRAMEBUFFER, gBuffer.getGBufferId());
}
public void render(Window window, Scene scene) {
shadowRender.render(scene);
sceneRender.render(scene, gBuffer);
lightRenderStart(window);
lightsRender.render(scene, shadowRender, gBuffer);
skyBoxRender.render(scene);
lightRenderFinish();
guiRender.render(scene);
}
public void resize(int width, int height) {
guiRender.resize(width, height);
}
}
```
At the end you will be able to see something like this:

[Next chapter](../chapter-20/chapter-20.md)
================================================
FILE: chapter-20/chapter-20.md
================================================
# Chapter 20 - Indirect drawing (static models)
Until this chapter, we have rendered the models by binding their material uniforms, their textures, their vertices and indices buffers and submitting one draw command for each of the meshes they are composed of. In this chapter, we will start implementing a more efficient way of rendering: a bind-less render (at least, almost bind-less). In this type of rendering we do not invoke a bunch of draw commands to draw the scene; instead, we populate a buffer with the instructions that will allow the GPU to render them. This is called indirect rendering and it is a more efficient way of drawing because:
* We remove the need to perform several bind operations before drawing each mesh.
* We just need to invoke a single draw call.
* We can perform in-GPU operations, such as frustum culling, reducing the load on the CPU side.
As you can see, the ultimate goal is to maximize the utilization of the GPU while removing potential bottlenecks that may occur at the CPU side and latencies due to CPU to GPU communications. In this chapter we will transform our render to use indirect drawing, starting with just static models. Animated models will be handled in the next chapter.
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-20).
## Concepts
Prior to explaining the code, let's explain the concepts behind indirect drawing. In essence, we need to create a buffer which stores the drawing parameters that will be used to render the vertices. You can think about that as instruction blocks, or draw commands, that will be read by the GPU and will instruct it to perform the drawing. Once the buffer is populated, we invoke `glMultiDrawElementsIndirect` to trigger that process. Each draw command stored in the buffer is defined by the following parameters (if you are using C, this is modelled by the `DrawElementsIndirectCommand` structure; a small sketch of how such a buffer can be filled is shown after the parameter list):
* `count`: The number of vertices to be drawn (understanding a vertex as the structure which groups the position, normal information, texture coordinates, etc.). This should contain the same value we used when invoking `glDrawElements` when rendering meshes.
* `instanceCount`: The number of instances to be drawn. We may have several entities that share the same model. Instead of storing a drawing instruction for each entity, we can just submit a single draw instruction but setting the number of entities that we want to draw. This is called instance rendering, and will save a lot of computing time. Without indirect drawing you can achieve the same results by setting specific attributes per VAO. I think that it is even simpler with this technique.
* `firstIndex`: An offset into the buffer that will hold the index values used for this draw instruction (the offset is measured in number of indices, not as a byte offset).
* `baseVertex`: An offset into the buffer that will hold the vertices data (the offset is measured in number of vertices, not as a byte offset).
* `baseInstance`: We can use this parameter to set a value that will be shared by all the instances to be drawn. Combining this value with the number of the instance to be drawn we will be able to access per instance data (we will see this later on).
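To make the layout of that buffer more tangible, here is a minimal, illustrative sketch of how two draw commands could be packed and submitted with `glMultiDrawElementsIndirect`. The index counts, offsets and instance numbers are made-up example values, and the chapter's real code will build these commands from the loaded models instead:
```java
import org.lwjgl.system.MemoryUtil;

import java.nio.IntBuffer;

import static org.lwjgl.opengl.GL46.*;

public class IndirectDrawSketch {

    // Illustrative only: packs two draw commands (five ints each) into a buffer.
    // The counts, offsets and instance numbers below are made-up example values.
    public static int buildCommandBuffer() {
        IntBuffer cmds = MemoryUtil.memAllocInt(2 * 5);
        // Mesh A: 36 indices, 3 entities, starts at the beginning of the global buffers
        cmds.put(36).put(3).put(0).put(0).put(0);
        // Mesh B: 24 indices, 1 entity, placed right after mesh A, baseInstance after A's entities
        cmds.put(24).put(1).put(36).put(24).put(3);
        cmds.flip();

        int cmdBufferId = glGenBuffers();
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, cmdBufferId);
        glBufferData(GL_DRAW_INDIRECT_BUFFER, cmds, GL_STATIC_DRAW);
        MemoryUtil.memFree(cmds);
        return cmdBufferId;
    }

    // With the global VAO and the command buffer bound, a single call draws everything
    public static void draw(int cmdBufferId, int drawCount) {
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, cmdBufferId);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, drawCount, 0);
    }
}
```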
Although it has already been mentioned when describing the parameters, indirect drawing needs a buffer that will hold the vertices data and another one for the indices. The difference is that we will need to combine all that data from the multiple meshes that compose the models of our scene into a single buffer. We will access per-mesh specific data by playing with the offset values of the drawing parameters.
Another aspect we have to solve is the passing of material information or per-entity data (such as model matrices). In previous chapters we used uniforms for that, setting the proper value when we changed the mesh or the entities to be drawn. With indirect drawing we cannot do that; we cannot modify data during the render process, since we submit a bulk set of drawing instructions at once. The solution is to use additional buffers: we can store per-entity data in a buffer and use the `baseInstance` parameter (combined with the instance id) to access the proper data (per entity) inside that buffer (we will see later on that, instead of a buffer, we will use an array of uniforms, but you could also use a simpler buffer for that). Inside that buffer we will hold indices to access two additional buffers:
* One that will hold model matrices data.
* One that will hold material data (albedo color, etc.).
For textures we will use an array of textures, which should not be confused with an array texture. An array texture is a single texture which contains an array of layers with texture information, with multiple images of the same size. An array of textures is a list of samplers which map to regular textures, therefore they can have different sizes. Arrays of textures have a limitation: their length cannot be arbitrary. There is a limit, which in the examples we will set to 16 textures (although you may want to check the capabilities of your GPU prior to setting that limit). Sixteen is not a high number if you are using multiple models, so in order to circumvent this limitation you have two options:
* Use a texture atlas (a giant texture file which combines individual textures). Even if you are not using indirect drawing you should try to use texture atlas as much as possible, since it limits the binding calls.
* Use bindless textures. This approach basically allows us to pass handles (64 bit integer values) to identify a texture and use that identifier to get a sampler within the shader program. This should definitely be the way to go with indirect rendering if you can (this is not a core feature but an extension, available since version 4.4). We will not use this approach because RenderDoc does not currently support it (losing the capability of debugging with RenderDoc is a showstopper for me).
The following picture depicts the buffers and structures involved in indirect drawing (keep in mind that this is only valid while rendering static models. We will see the new structures that we need to use when rendering animated models in the next chapter).

Please keep in mind that we will use arrays of uniforms for per-entity data, materials and model matrices (in the end an array is a buffer, but we will be able to access the data in a handy way by using uniforms).
## Implementation
In order to use indirect drawing we will need to use at least OpenGL version 4.6. Therefore, the first step is to update the major and minor versions we use as window hints for window creation:
```java
public class Window {
...
public Window(String title, WindowOptions opts, Callable<Void> resizeFunc) {
...
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
...
}
...
}
```
The next step is to modify the code to load all the meshes into a single buffer, but, prior to that, we will modify the class hierarchy that stores models, materials and meshes. Up to now, models have a set of associated materials which have a set of meshes. This class hierarchy was set to optimize the draw calls, where we first iterated over models, then over materials and finally over meshes. We will change this structure, not storing meshes under materials any more. Instead, meshes will be stored directly under the models. We will store materials in a sort of cache, and will have a reference to a key in that cache for the meshes. In addition to that, previously, we created a `Mesh` instance for each of the model meshes, which in essence contained a VAO and the associated VBOs for the mesh data. Since we will be using a single buffer for all the meshes, we will just need a single VAO, and its associated VBOs, for the whole set of meshes of the scene. Therefore, instead of storing a list of `Mesh` instances under the `Model` class, we will store the data that will be used to construct the draw parameters, such as the offset on the vertices buffer, the offset for the indices buffer, etc. Let's examine the changes one by one.
We will start with the `MaterialCache` class, which is defined like this:
```java
package org.lwjglb.engine.graph;
import java.util.*;
public class MaterialCache {
public static final int DEFAULT_MATERIAL_IDX = 0;
private List materialsList;
public MaterialCache() {
materialsList = new ArrayList<>();
Material defaultMaterial = new Material();
materialsList.add(defaultMaterial);
}
public void addMaterial(Material material) {
materialsList.add(material);
material.setMaterialIdx(materialsList.size() - 1);
}
public Material getMaterial(int idx) {
return materialsList.get(idx);
}
public List getMaterialsList() {
return materialsList;
}
}
```
As you can see, we just store the `Material` instances in a `List`. Therefore, in order to identify a `Material`, we just need the index of that instance in the list (this approach may make it more difficult to dynamically add new materials, but it is simple enough for the purpose of this sample; you may want to change that and provide robust support for adding new models, materials, etc. in your code). We will need to modify the `Material` class to remove the list of `Mesh` instances and store the material index in the materials cache:
```java
public class Material {
...
private Vector4f ambientColor;
private Vector4f diffuseColor;
private int materialIdx;
private String normalMapPath;
private float reflectance;
private Vector4f specularColor;
private String texturePath;
public Material() {
diffuseColor = DEFAULT_COLOR;
ambientColor = DEFAULT_COLOR;
specularColor = DEFAULT_COLOR;
materialIdx = 0;
}
...
public int getMaterialIdx() {
return materialIdx;
}
...
public void setMaterialIdx(int materialIdx) {
this.materialIdx = materialIdx;
}
...
}
```
As it has been explained before, we need to change the `Model` class to remove references to materials. Instead, we will hold two main references:
* A list of `MeshData` instances (a new class), which will hold the mesh data read using Assimp.
* A list of `RenderBuffers.MeshDrawData` instances (also a new class), which will contain the information needed for indirect drawing (mainly offset information associated to the data buffers explained above).
We will first populate the list of `MeshData` instances when loading the models with Assimp, and after that we will construct the global buffers that will hold the data, populating the `RenderBuffers.MeshDrawData` instances. After that, we can remove the references to the `MeshData` instances. This is not a very elegant solution, but it is simple enough to explain the concepts without introducing more complexity with pre- and post-loading hierarchies. The changes in the `Model` class are as follows:
```java
public class Model {
...
private final String id;
private List<Animation> animationList;
private List<Entity> entitiesList;
private List<MeshData> meshDataList;
private List<RenderBuffers.MeshDrawData> meshDrawDataList;
public Model(String id, List<MeshData> meshDataList, List<Animation> animationList) {
entitiesList = new ArrayList<>();
this.id = id;
this.meshDataList = meshDataList;
this.animationList = animationList;
meshDrawDataList = new ArrayList<>();
}
...
public List<MeshData> getMeshDataList() {
return meshDataList;
}
public List<RenderBuffers.MeshDrawData> getMeshDrawDataList() {
return meshDrawDataList;
}
public boolean isAnimated() {
return animationList != null && !animationList.isEmpty();
}
...
}
```
The definition of the `MeshData` class is very simple. It just stores vertex positions, texture coordinates, etc.:
```java
package org.lwjglb.engine.graph;
import org.joml.Vector3f;
public class MeshData {
private Vector3f aabbMax;
private Vector3f aabbMin;
private float[] bitangents;
private int[] boneIndices;
private int[] indices;
private int materialIdx;
private float[] normals;
private float[] positions;
private float[] tangents;
private float[] textCoords;
private float[] weights;
public MeshData(float[] positions, float[] normals, float[] tangents, float[] bitangents,
float[] textCoords, int[] indices, int[] boneIndices, float[] weights,
Vector3f aabbMin, Vector3f aabbMax) {
materialIdx = 0;
this.positions = positions;
this.normals = normals;
this.tangents = tangents;
this.bitangents = bitangents;
this.textCoords = textCoords;
this.indices = indices;
this.boneIndices = boneIndices;
this.weights = weights;
this.aabbMin = aabbMin;
this.aabbMax = aabbMax;
}
public Vector3f getAabbMax() {
return aabbMax;
}
public Vector3f getAabbMin() {
return aabbMin;
}
public float[] getBitangents() {
return bitangents;
}
public int[] getBoneIndices() {
return boneIndices;
}
public int[] getIndices() {
return indices;
}
public int getMaterialIdx() {
return materialIdx;
}
public float[] getNormals() {
return normals;
}
public float[] getPositions() {
return positions;
}
public float[] getTangents() {
return tangents;
}
public float[] getTextCoords() {
return textCoords;
}
public float[] getWeights() {
return weights;
}
public void setMaterialIdx(int materialIdx) {
this.materialIdx = materialIdx;
}
}
```
Changes in the `ModelLoader` class are also quite simple: we need to use the materials cache and store the data read in the new `MeshData` class (instead of the previous `Mesh` class). Also, materials will not have references to mesh data; instead, each mesh data instance will have a reference to the index of its material in the cache:
```java
public class ModelLoader {
...
public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, MaterialCache materialCache,
boolean animation) {
return loadModel(modelId, modelPath, textureCache, materialCache, aiProcess_GenSmoothNormals | aiProcess_JoinIdenticalVertices |
aiProcess_Triangulate | aiProcess_FixInfacingNormals | aiProcess_CalcTangentSpace | aiProcess_LimitBoneWeights |
aiProcess_GenBoundingBoxes | (animation ? 0 : aiProcess_PreTransformVertices));
}
public static Model loadModel(String modelId, String modelPath, TextureCache textureCache,
MaterialCache materialCache, int flags) {
...
for (int i = 0; i < numMaterials; i++) {
AIMaterial aiMaterial = AIMaterial.create(aiScene.mMaterials().get(i));
Material material = processMaterial(aiMaterial, modelDir, textureCache);
materialCache.addMaterial(material);
materialList.add(material);
}
int numMeshes = aiScene.mNumMeshes();
PointerBuffer aiMeshes = aiScene.mMeshes();
List<MeshData> meshDataList = new ArrayList<>();
List<Bone> boneList = new ArrayList<>();
for (int i = 0; i < numMeshes; i++) {
AIMesh aiMesh = AIMesh.create(aiMeshes.get(i));
MeshData meshData = processMesh(aiMesh, boneList);
int materialIdx = aiMesh.mMaterialIndex();
if (materialIdx >= 0 && materialIdx < materialList.size()) {
meshData.setMaterialIdx(materialList.get(materialIdx).getMaterialIdx());
} else {
meshData.setMaterialIdx(MaterialCache.DEFAULT_MATERIAL_IDX);
}
meshDataList.add(meshData);
}
...
return new Model(modelId, meshDataList, animations);
}
...
private static MeshData processMesh(AIMesh aiMesh, List<Bone> boneList) {
...
return new MeshData(vertices, normals, tangents, bitangents, textCoords, indices, animMeshData.boneIds,
animMeshData.weights, aabbMin, aabbMax);
}
}
```
The `Scene` class will be the one that will hold the materials cache (also, the `cleanup` method is no longer needed, because the VAOs and VBOs will no longer be linked to the model map):
```java
public class Scene {
...
private MaterialCache materialCache;
...
public Scene(int width, int height) {
...
materialCache = new MaterialCache();
...
}
...
public MaterialCache getMaterialCache() {
return materialCache;
}
...
}
```
Changes in the `Mesh` class are due to the fact that we introduced the `MeshData` class (it is just a matter of changing constructor arguments and methods):
```java
public class Mesh {
...
public Mesh(MeshData meshData) {
this.aabbMin = meshData.getAabbMin();
this.aabbMax = meshData.getAabbMax();
numVertices = meshData.getIndices().length;
...
FloatBuffer positionsBuffer = MemoryUtil.memCallocFloat(meshData.getPositions().length);
positionsBuffer.put(0, meshData.getPositions());
...
FloatBuffer normalsBuffer = MemoryUtil.memCallocFloat(meshData.getNormals().length);
normalsBuffer.put(0, meshData.getNormals());
...
FloatBuffer tangentsBuffer = MemoryUtil.memCallocFloat(meshData.getTangents().length);
tangentsBuffer.put(0, meshData.getTangents());
...
FloatBuffer bitangentsBuffer = MemoryUtil.memCallocFloat(meshData.getBitangents().length);
bitangentsBuffer.put(0, meshData.getBitangents());
...
FloatBuffer textCoordsBuffer = MemoryUtil.memCallocFloat(meshData.getTextCoords().length);
textCoordsBuffer.put(0, meshData.getTextCoords());
...
FloatBuffer weightsBuffer = MemoryUtil.memCallocFloat(meshData.getWeights().length);
weightsBuffer.put(meshData.getWeights()).flip();
...
IntBuffer boneIndicesBuffer = MemoryUtil.memCallocInt(meshData.getBoneIndices().length);
boneIndicesBuffer.put(meshData.getBoneIndices()).flip();
...
IntBuffer indicesBuffer = MemoryUtil.memCallocInt(meshData.getIndices().length);
indicesBuffer.put(0, meshData.getIndices());
}
...
}
```
We have now arrived at the creation of one of the key new classes for indirect drawing, the `RenderBuffers` class. This class will create a single VAO which will hold the VBOs containing the data for all the meshes. In this chapter we will just be supporting static models, so we will need a single VAO. The `RenderBuffers` class starts like this:
```java
public class RenderBuffers {
private int staticVaoId;
private List<Integer> vboIdList;
public RenderBuffers() {
vboIdList = new ArrayList<>();
}
public void cleanup() {
vboIdList.forEach(GL30::glDeleteBuffers);
glDeleteVertexArrays(staticVaoId);
}
...
}
```
This class defines two methods to load models:
* `loadAnimatedModels` for animated models. This will not be implemented in this chapter.
* `loadStaticModels` for models with no animations.
Those methods are defined like this:
```java
public class RenderBuffers {
...
public final int getStaticVaoId() {
return staticVaoId;
}
public void loadAnimatedModels(Scene scene) {
// To be completed
}
public void loadStaticModels(Scene scene) {
List<Model> modelList = scene.getModelMap().values().stream().filter(m -> !m.isAnimated()).toList();
staticVaoId = glGenVertexArrays();
glBindVertexArray(staticVaoId);
int positionsSize = 0;
int normalsSize = 0;
int textureCoordsSize = 0;
int indicesSize = 0;
int offset = 0;
for (Model model : modelList) {
List meshDrawDataList = model.getMeshDrawDataList();
for (MeshData meshData : model.getMeshDataList()) {
positionsSize += meshData.getPositions().length;
normalsSize += meshData.getNormals().length;
textureCoordsSize += meshData.getTextCoords().length;
indicesSize += meshData.getIndices().length;
int meshSizeInBytes = meshData.getPositions().length * 14 * 4;
meshDrawDataList.add(new MeshDrawData(meshSizeInBytes, meshData.getMaterialIdx(), offset,
meshData.getIndices().length));
offset = positionsSize / 3;
}
}
int vboId = glGenBuffers();
vboIdList.add(vboId);
FloatBuffer meshesBuffer = MemoryUtil.memAllocFloat(positionsSize + normalsSize * 3 + textureCoordsSize);
for (Model model : modelList) {
for (MeshData meshData : model.getMeshDataList()) {
populateMeshBuffer(meshesBuffer, meshData);
}
}
meshesBuffer.flip();
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, meshesBuffer, GL_STATIC_DRAW);
MemoryUtil.memFree(meshesBuffer);
defineVertexAttribs();
// Index VBO
vboId = glGenBuffers();
vboIdList.add(vboId);
IntBuffer indicesBuffer = MemoryUtil.memAllocInt(indicesSize);
for (Model model : modelList) {
for (MeshData meshData : model.getMeshDataList()) {
indicesBuffer.put(meshData.getIndices());
}
}
indicesBuffer.flip();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL_STATIC_DRAW);
MemoryUtil.memFree(indicesBuffer);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
...
}
```
We start by creating a VAO (which will be used for static models), and then iterate over the meshes of the models. We will use a single buffer to hold all the data, so we just iterate over those elements to get the final buffer size. We will calculate the number of position elements, normals, etc. We use that first loop to also populate the offset information that we will store in a list of `RenderBuffers.MeshDrawData` instances. After that, we create a single VBO. You will find a major difference with the `Mesh` class, which performed a similar task of creating the VAO and VBOs: in this case, we use a single VBO for positions, normals, etc. We just load all that data row by row instead of using separate VBOs. This is done in the `populateMeshBuffer` method (which we will see after this). After that, we create the index VBO which will contain the indices for all the meshes of all the models.
The `MeshDrawData` class is defined like this:
```java
public class RenderBuffers {
...
public record MeshDrawData(int sizeInBytes, int materialIdx, int offset, int vertices) {
}
}
```
It basically stores the size of the mesh in bytes (`sizeInBytes`), the material index to which it is associated, the offset in the buffer that holds the vertex information and the number of indices for this mesh (`vertices`). The offset is measured in "rows". You can think of the portion of the mesh that holds positions, normals and texture coordinates as a set of "rows", where each "row" holds all the information associated to a single vertex and will be processed in the vertex shader. This is why we just divide the number of position elements by three: each "row" has three position elements, and the number of "rows" in the positions data will match the number of "rows" in the normals data and so on.
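To make the "row" bookkeeping concrete, here is a small illustrative calculation (the mesh sizes are invented, not taken from the sample models) of how the `offset` value advances from one mesh to the next:
```java
public class MeshOffsetExample {
    public static void main(String[] args) {
        // Hypothetical meshes: 300 and 600 position floats (100 and 200 vertices respectively).
        int[] positionFloatsPerMesh = {300, 600};
        int accumulatedPositions = 0;
        int offset = 0; // offset of the current mesh, expressed in vertex "rows"
        for (int i = 0; i < positionFloatsPerMesh.length; i++) {
            System.out.println("Mesh " + i + " starts at row " + offset);
            accumulatedPositions += positionFloatsPerMesh[i];
            offset = accumulatedPositions / 3; // 3 position floats per "row"
        }
        // Output: mesh 0 starts at row 0, mesh 1 starts at row 100.
        // That offset is later used as the baseVertex of the indirect draw command.
    }
}
```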
The `populateMeshBuffer` is defined like this:
```java
public class RenderBuffers {
...
private void populateMeshBuffer(FloatBuffer meshesBuffer, MeshData meshData) {
float[] positions = meshData.getPositions();
float[] normals = meshData.getNormals();
float[] tangents = meshData.getTangents();
float[] bitangents = meshData.getBitangents();
float[] textCoords = meshData.getTextCoords();
int rows = positions.length / 3;
for (int row = 0; row < rows; row++) {
int startPos = row * 3;
int startTextCoord = row * 2;
meshesBuffer.put(positions[startPos]);
meshesBuffer.put(positions[startPos + 1]);
meshesBuffer.put(positions[startPos + 2]);
meshesBuffer.put(normals[startPos]);
meshesBuffer.put(normals[startPos + 1]);
meshesBuffer.put(normals[startPos + 2]);
meshesBuffer.put(tangents[startPos]);
meshesBuffer.put(tangents[startPos + 1]);
meshesBuffer.put(tangents[startPos + 2]);
meshesBuffer.put(bitangents[startPos]);
meshesBuffer.put(bitangents[startPos + 1]);
meshesBuffer.put(bitangents[startPos + 2]);
meshesBuffer.put(textCoords[startTextCoord]);
meshesBuffer.put(textCoords[startTextCoord + 1]);
}
}
...
}
```
As you can see, we just iterate over the "rows" of data and pack positions, normals and texture coordinates into the buffer. The `defineVertexAttribs` is defined like this:
```java
public class RenderBuffers {
...
private void defineVertexAttribs() {
int stride = 3 * 4 * 4 + 2 * 4;
int pointer = 0;
// Positions
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, stride, pointer);
pointer += 3 * 4;
// Normals
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, false, stride, pointer);
pointer += 3 * 4;
// Tangents
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, false, stride, pointer);
pointer += 3 * 4;
// Bitangents
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, false, stride, pointer);
pointer += 3 * 4;
// Texture coordinates
glEnableVertexAttribArray(4);
glVertexAttribPointer(4, 2, GL_FLOAT, false, stride, pointer);
}
...
}
```
We just define the vertex attributes for the VAO as in previous examples. The only difference here is that we are using a single VBO for them.
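As a quick sanity check on the packing used above (illustrative arithmetic only, not engine code), each "row" holds 14 floats, which matches the 56-byte stride computed in `defineVertexAttribs`:
```java
public class StrideCheck {
    public static void main(String[] args) {
        int floatBytes = 4;
        // positions (3) + normals (3) + tangents (3) + bitangents (3) + texture coordinates (2)
        int floatsPerRow = 3 + 3 + 3 + 3 + 2;              // 14 floats per vertex
        int stride = 3 * floatBytes * 4 + 2 * floatBytes;  // same expression used in defineVertexAttribs
        System.out.println(floatsPerRow * floatBytes);     // 56
        System.out.println(stride);                        // 56
    }
}
```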
Prior to examining the changes in the `SceneRender` class, let's start with the vertex shader (`scene.vert`), which starts like this:
```glsl
#version 460
const int MAX_DRAW_ELEMENTS = 100;
const int MAX_ENTITIES = 50;
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
layout (location=2) in vec3 tangent;
layout (location=3) in vec3 bitangent;
layout (location=4) in vec2 texCoord;
out vec3 outNormal;
out vec3 outTangent;
out vec3 outBitangent;
out vec2 outTextCoord;
out vec4 outViewPosition;
out vec4 outWorldPosition;
flat out uint outMaterialIdx;
struct DrawElement
{
int modelMatrixIdx;
int materialIdx;
};
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
uniform DrawElement drawElements[MAX_DRAW_ELEMENTS];
uniform mat4 modelMatrices[MAX_ENTITIES];
...
```
The first thing that you will notice is that we have increased the version to `460`. We have also removed the constants associated with animations (`MAX_WEIGHTS` and `MAX_BONES`), the attributes for bone indices and the uniform for bone matrices. You will see in the next chapter that we will not need this information here for animations. We have created two new constants to define the size of the `drawElements` and `modelMatrices` uniforms. The `drawElements` uniform will hold `DrawElement` instances. It will have one item per mesh and associated entity. If you remember, we will record a single instruction to draw all the items associated to a mesh, setting the number of instances to be drawn. We will need, however, specific per-entity data, such as the model matrix. This will be held in the `drawElements` array, which will also point to the material index to be used. The `modelMatrices` array will just hold the model matrices for each of the entities. Material information will be used in the fragment shader, so we pass it using the `outMaterialIdx` output variable.
The `main` function, since we do not have to deal with animations, has been simplified a lot:
```glsl
...
void main()
{
vec4 initPos = vec4(position, 1.0);
vec4 initNormal = vec4(normal, 0.0);
vec4 initTangent = vec4(tangent, 0.0);
vec4 initBitangent = vec4(bitangent, 0.0);
uint idx = gl_BaseInstance + gl_InstanceID;
DrawElement drawElement = drawElements[idx];
outMaterialIdx = drawElement.materialIdx;
mat4 modelMatrix = modelMatrices[drawElement.modelMatrixIdx];
mat4 modelViewMatrix = viewMatrix * modelMatrix;
outWorldPosition = modelMatrix * initPos;
outViewPosition = viewMatrix * outWorldPosition;
gl_Position = projectionMatrix * outViewPosition;
outNormal = normalize(modelViewMatrix * initNormal).xyz;
outTangent = normalize(modelViewMatrix * initTangent).xyz;
outBitangent = normalize(modelViewMatrix * initBitangent).xyz;
outTextCoord = texCoord;
}
```
The key here is to get the proper index to access the `drawElements` array. We use the `gl_BaseInstance` and `gl_InstanceID` built-in variables. When recording the instructions for indirect drawing we will use the `baseInstance` attribute. The value for that attribute will be the one exposed through the `gl_BaseInstance` built-in variable. The `gl_InstanceID` will start at `0` whenever we change from one mesh to another, and will be increased for each of the instances of the entities associated to the model. Therefore, by combining these two variables we will be able to access the per-entity specific information in the `drawElements` array. Once we have the proper index, we just transform position and normal information as in previous versions of the shader.
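The following small program (the entity counts are made up) mimics the index calculation the vertex shader performs, so you can see how `gl_BaseInstance` and `gl_InstanceID` combine to cover the whole `drawElements` array without gaps:
```java
public class DrawElementIndexExample {
    public static void main(String[] args) {
        // Hypothetical scene: mesh A is drawn for 2 entities, mesh B for 3 entities.
        int[] instanceCounts = {2, 3};
        int baseInstance = 0; // recorded per draw command, exposed to the shader as gl_BaseInstance
        for (int mesh = 0; mesh < instanceCounts.length; mesh++) {
            for (int instanceId = 0; instanceId < instanceCounts[mesh]; instanceId++) {
                int idx = baseInstance + instanceId; // same expression as in the vertex shader
                System.out.printf("mesh %d, instance %d -> drawElements[%d]%n", mesh, instanceId, idx);
            }
            baseInstance += instanceCounts[mesh];
        }
        // Indices 0..1 are used while drawing mesh A and 2..4 while drawing mesh B.
    }
}
```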
The scene fragment shader (`scene.frag`) is defined like this:
```glsl
#version 400
const int MAX_MATERIALS = 20;
const int MAX_TEXTURES = 16;
in vec3 outNormal;
in vec3 outTangent;
in vec3 outBitangent;
in vec2 outTextCoord;
in vec4 outViewPosition;
in vec4 outWorldPosition;
flat in uint outMaterialIdx;
layout (location = 0) out vec4 buffAlbedo;
layout (location = 1) out vec4 buffNormal;
layout (location = 2) out vec4 buffSpecular;
struct Material
{
vec4 diffuse;
vec4 specular;
float reflectance;
int normalMapIdx;
int textureIdx;
};
uniform sampler2D txtSampler[MAX_TEXTURES];
uniform Material materials[MAX_MATERIALS];
vec3 calcNormal(int idx, vec3 normal, vec3 tangent, vec3 bitangent, vec2 textCoords) {
mat3 TBN = mat3(tangent, bitangent, normal);
vec3 newNormal = texture(txtSampler[idx], textCoords).rgb;
newNormal = normalize(newNormal * 2.0 - 1.0);
newNormal = normalize(TBN * newNormal);
return newNormal;
}
void main() {
Material material = materials[outMaterialIdx];
vec4 text_color = texture(txtSampler[material.textureIdx], outTextCoord);
vec4 diffuse = text_color + material.diffuse;
if (diffuse.a < 0.5) {
discard;
}
vec4 specular = text_color + material.specular;
vec3 normal = outNormal;
if (material.normalMapIdx > 0) {
normal = calcNormal(material.normalMapIdx, outNormal, outTangent, outBitangent, outTextCoord);
}
buffAlbedo = vec4(diffuse.xyz, material.reflectance);
buffNormal = vec4(0.5 * normal + 0.5, 1.0);
buffSpecular = specular;
}
```
The main changes are related to the way we access material information and textures. We will now have an array of material information which will be accessed by the index we calculated in the vertex shader, now available in the `outMaterialIdx` input variable (which has the `flat` modifier, stating that this value should not be interpolated from the vertex to the fragment stage). We will be using an array of textures to access either regular textures or normal maps. The indices of those textures are now stored in the `Material` struct. Since we will be accessing the array of samplers using non-constant expressions, we need to upgrade the GLSL version to 400 (that feature is only available since OpenGL 4.0).
Now it is time to examine the changes in the `SceneRender` class. We will start by defining a set of constants that will be used in the code, a handle for the buffer that will hold the indirect drawing instructions (`staticRenderBufferHandle`) and the number of drawing commands (`staticDrawCount`). We will also need to modify the `createUniforms` method according to the changes in the shaders shown before:
```java
public class SceneRender {
...
public static final int MAX_DRAW_ELEMENTS = 100;
public static final int MAX_ENTITIES = 50;
private static final int COMMAND_SIZE = 5 * 4;
private static final int MAX_MATERIALS = 20;
private static final int MAX_TEXTURES = 16;
...
private Map<String, Integer> entitiesIdxMap;
...
private int staticDrawCount;
private int staticRenderBufferHandle;
...
public SceneRender() {
...
entitiesIdxMap = new HashMap<>();
}
private void createUniforms() {
uniformsMap = new UniformsMap(shaderProgram.getProgramId());
uniformsMap.createUniform("projectionMatrix");
uniformsMap.createUniform("viewMatrix");
for (int i = 0; i < MAX_TEXTURES; i++) {
uniformsMap.createUniform("txtSampler[" + i + "]");
}
for (int i = 0; i < MAX_MATERIALS; i++) {
String name = "materials[" + i + "]";
uniformsMap.createUniform(name + ".diffuse");
uniformsMap.createUniform(name + ".specular");
uniformsMap.createUniform(name + ".reflectance");
uniformsMap.createUniform(name + ".normalMapIdx");
uniformsMap.createUniform(name + ".textureIdx");
}
for (int i = 0; i < MAX_DRAW_ELEMENTS; i++) {
String name = "drawElements[" + i + "]";
uniformsMap.createUniform(name + ".modelMatrixIdx");
uniformsMap.createUniform(name + ".materialIdx");
}
for (int i = 0; i < MAX_ENTITIES; i++) {
uniformsMap.createUniform("modelMatrices[" + i + "]");
}
}
...
}
```
The `entitiesIdxMap` will store, for each entity, the position it occupies in the global list of entities built by iterating over the models (using the entity identifier as the key of the `Map`). We will need this information later on, since the indirect drawing commands will be recorded by iterating over the meshes associated to each model. The main changes are in the `render` method, which is defined like this:
```java
public class SceneRender {
...
public void render(Scene scene, RenderBuffers renderBuffers, GBuffer gBuffer) {
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gBuffer.getGBufferId());
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, gBuffer.getWidth(), gBuffer.getHeight());
glDisable(GL_BLEND);
shaderProgram.bind();
uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
uniformsMap.setUniform("viewMatrix", scene.getCamera().getViewMatrix());
TextureCache textureCache = scene.getTextureCache();
List textures = textureCache.getAll().stream().toList();
int numTextures = textures.size();
if (numTextures > MAX_TEXTURES) {
Logger.warn("Only " + MAX_TEXTURES + " textures can be used");
}
for (int i = 0; i < Math.min(MAX_TEXTURES, numTextures); i++) {
uniformsMap.setUniform("txtSampler[" + i + "]", i);
Texture texture = textures.get(i);
glActiveTexture(GL_TEXTURE0 + i);
texture.bind();
}
int entityIdx = 0;
for (Model model : scene.getModelMap().values()) {
List entities = model.getEntitiesList();
for (Entity entity : entities) {
uniformsMap.setUniform("modelMatrices[" + entityIdx + "]", entity.getModelMatrix());
entityIdx++;
}
}
// Static meshes
int drawElement = 0;
for (Model model: scene.getModelMap().values()) {
if (model.isAnimated()) {
continue;
}
List entities = model.getEntitiesList();
for (RenderBuffers.MeshDrawData meshDrawData : model.getMeshDrawDataList()) {
for (Entity entity : entities) {
String name = "drawElements[" + drawElement + "]";
uniformsMap.setUniform(name + ".modelMatrixIdx", entitiesIdxMap.get(entity.getId()));
uniformsMap.setUniform(name + ".materialIdx", meshDrawData.materialIdx());
drawElement++;
}
}
}
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, staticRenderBufferHandle);
glBindVertexArray(renderBuffers.getStaticVaoId());
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, staticDrawCount, 0);
glBindVertexArray(0);
glEnable(GL_BLEND);
shaderProgram.unbind();
}
...
}
```
You can see that we now have to bind the array of texture samplers and activate all the texture units. In addition to that, we iterate over the entities and set up the uniform values for the model matrices. The next step is to set up the `drawElements` array uniform with the proper values for each of the entities, pointing to the index of the model matrix and the material index. After that, we call the `glMultiDrawElementsIndirect` function to perform the indirect drawing. Prior to that, we need to bind the buffer that holds the drawing instructions (drawing commands) and the VAO that holds the mesh and index data. But, when do we populate the buffer for indirect drawing? The answer is that this does not need to be performed on each render call: if there are no changes in the number of entities, you can record that buffer once and use it in each render call. In this specific example, we will just populate that buffer at start-up. This means that, if you want to change the number of entities, you would need to re-create that buffer (something you should handle in your own engine).
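If your engine needs to add or remove entities at runtime, a simple (if not the most efficient) option is to discard the command buffer and run the setup again. The helper below is only a hypothetical sketch built on the setup methods shown next; it is not part of the chapter's code:
```java
public class SceneRender {
    ...
    // Hypothetical helper: call whenever entities are added or removed so the indirect
    // command buffer and the entity index map match the new scene contents.
    public void onEntitiesChanged(Scene scene) {
        glDeleteBuffers(staticRenderBufferHandle); // free the previous GL_DRAW_INDIRECT_BUFFER
        setupEntitiesData(scene);                  // rebuild the entity id -> index map
        setupStaticCommandBuffer(scene);           // record and upload the draw commands again
    }
    ...
}
```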
The method that actually builds the indirect draw buffer is called `setupStaticCommandBuffer` which is defined like this:
```java
public class SceneRender {
...
private void setupStaticCommandBuffer(Scene scene) {
List<Model> modelList = scene.getModelMap().values().stream().filter(m -> !m.isAnimated()).toList();
int numMeshes = 0;
for (Model model : modelList) {
numMeshes += model.getMeshDrawDataList().size();
}
int firstIndex = 0;
int baseInstance = 0;
ByteBuffer commandBuffer = MemoryUtil.memAlloc(numMeshes * COMMAND_SIZE);
for (Model model : modelList) {
List entities = model.getEntitiesList();
int numEntities = entities.size();
for (RenderBuffers.MeshDrawData meshDrawData : model.getMeshDrawDataList()) {
// count
commandBuffer.putInt(meshDrawData.vertices());
// instanceCount
commandBuffer.putInt(numEntities);
commandBuffer.putInt(firstIndex);
// baseVertex
commandBuffer.putInt(meshDrawData.offset());
commandBuffer.putInt(baseInstance);
firstIndex += meshDrawData.vertices();
baseInstance += entities.size();
}
}
commandBuffer.flip();
staticDrawCount = commandBuffer.remaining() / COMMAND_SIZE;
staticRenderBufferHandle = glGenBuffers();
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, staticRenderBufferHandle);
glBufferData(GL_DRAW_INDIRECT_BUFFER, commandBuffer, GL_DYNAMIC_DRAW);
MemoryUtil.memFree(commandBuffer);
}
...
}
```
We first calculate the total number of meshes. After that, we create the buffer that will hold the indirect drawing instructions and populate it. As you can see, we first allocate a `ByteBuffer`. This buffer will hold as many instruction sets as meshes. Each set of draw instructions is composed of five attributes, each of them with a length of 4 bytes (the total length of each set of parameters is what the `COMMAND_SIZE` constant defines). Once we have the buffer we start iterating over the meshes associated to each model. Check the beginning of this chapter to find the struct that indirect drawing requires. The `drawElements` uniform is populated (in the `render` method) using the `Map` we calculated previously, to properly get the model matrix index for each entity. Finally, we just create a GPU buffer and dump the data into it.
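For reference, the five 4-byte values written per mesh follow the standard layout that `glMultiDrawElementsIndirect` expects. The record below just mirrors that layout for illustration and is not used anywhere in the engine code:
```java
// Illustrative only: the 5 * 4 = 20 bytes per command (COMMAND_SIZE) are laid out like this.
public record DrawElementsIndirectCommand(
        int count,         // number of indices to draw (meshDrawData.vertices())
        int instanceCount, // number of entities that share the mesh
        int firstIndex,    // offset into the shared index buffer, in indices
        int baseVertex,    // offset into the shared vertex buffer, in "rows" (meshDrawData.offset())
        int baseInstance   // value the shader sees as gl_BaseInstance
) {
}
```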
We will need to update the `cleanup` method to free the indirect drawing buffer:
```java
public class SceneRender {
...
public void cleanup() {
shaderProgram.cleanup();
glDeleteBuffers(staticRenderBufferHandle);
}
...
}
```
We will need a new method to set up the values for the materials uniform:
```java
public class SceneRender {
...
private void setupMaterialsUniform(TextureCache textureCache, MaterialCache materialCache) {
List textures = textureCache.getAll().stream().toList();
int numTextures = textures.size();
if (numTextures > MAX_TEXTURES) {
Logger.warn("Only " + MAX_TEXTURES + " textures can be used");
}
Map texturePosMap = new HashMap<>();
for (int i = 0; i < Math.min(MAX_TEXTURES, numTextures); i++) {
texturePosMap.put(textures.get(i).getTexturePath(), i);
}
shaderProgram.bind();
List materialList = materialCache.getMaterialsList();
int numMaterials = materialList.size();
for (int i = 0; i < numMaterials; i++) {
Material material = materialCache.getMaterial(i);
String name = "materials[" + i + "]";
uniformsMap.setUniform(name + ".diffuse", material.getDiffuseColor());
uniformsMap.setUniform(name + ".specular", material.getSpecularColor());
uniformsMap.setUniform(name + ".reflectance", material.getReflectance());
String normalMapPath = material.getNormalMapPath();
int idx = 0;
if (normalMapPath != null) {
idx = texturePosMap.computeIfAbsent(normalMapPath, k -> 0);
}
uniformsMap.setUniform(name + ".normalMapIdx", idx);
Texture texture = textureCache.getTexture(material.getTexturePath());
idx = texturePosMap.computeIfAbsent(texture.getTexturePath(), k -> 0);
uniformsMap.setUniform(name + ".textureIdx", idx);
}
shaderProgram.unbind();
}
...
}
```
We just check that we are not surpassing the maximum number of supported textures (`MAX_TEXTURES`) and create an array of material information like the one we used in previous chapters. The only change is that we now need to store the indices of the associated texture and normal map in the material information.
We need another method to update the entities indices map:
```java
public class SceneRender {
...
private void setupEntitiesData(Scene scene) {
entitiesIdxMap.clear();
int entityIdx = 0;
for (Model model : scene.getModelMap().values()) {
List entities = model.getEntitiesList();
for (Entity entity : entities) {
entitiesIdxMap.put(entity.getId(), entityIdx);
entityIdx++;
}
}
}
...
}
```
To complete the changes in the `SceneRender` class, we will create a method that wraps the `setupXX` methods so they can be invoked from the `Render` class:
```java
public class SceneRender {
...
public void setupData(Scene scene) {
setupEntitiesData(scene);
setupStaticCommandBuffer(scene);
setupMaterialsUniform(scene.getTextureCache(), scene.getMaterialCache());
}
...
}
```
We will also change the shadow render process to use indirect drawing. The changes in the vertex shader (`shadow.vert`) are quite similar: we will not be using animation information and we need to access the proper model matrices using the combination of the `gl_BaseInstance` and `gl_InstanceID` built-in variables. In this case, we do not need material information, so the fragment shader (`shadow.frag`) is not changed.
```glsl
#version 460
const int MAX_DRAW_ELEMENTS = 100;
const int MAX_ENTITIES = 50;
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
layout (location=2) in vec3 tangent;
layout (location=3) in vec3 bitangent;
layout (location=4) in vec2 texCoord;
struct DrawElement
{
int modelMatrixIdx;
};
uniform mat4 modelMatrix;
uniform mat4 projViewMatrix;
uniform DrawElement drawElements[MAX_DRAW_ELEMENTS];
uniform mat4 modelMatrices[MAX_ENTITIES];
void main()
{
vec4 initPos = vec4(position, 1.0);
uint idx = gl_BaseInstance + gl_InstanceID;
int modelMatrixIdx = drawElements[idx].modelMatrixIdx;
mat4 modelMatrix = modelMatrices[modelMatrixIdx];
gl_Position = projViewMatrix * modelMatrix * initPos;
}
```
Changes in the `ShadowRender` class are also pretty similar to the ones in the `SceneRender` class:
```java
public class ShadowRender {
private static final int COMMAND_SIZE = 5 * 4;
...
private Map<String, Integer> entitiesIdxMap;
...
private int staticRenderBufferHandle;
...
public ShadowRender() {
...
entitiesIdxMap = new HashMap<>();
}
public void cleanup() {
shaderProgram.cleanup();
shadowBuffer.cleanup();
glDeleteBuffers(staticRenderBufferHandle);
}
private void createUniforms() {
...
for (int i = 0; i < SceneRender.MAX_DRAW_ELEMENTS; i++) {
String name = "drawElements[" + i + "]";
uniformsMap.createUniform(name + ".modelMatrixIdx");
}
for (int i = 0; i < SceneRender.MAX_ENTITIES; i++) {
uniformsMap.createUniform("modelMatrices[" + i + "]");
}
}
...
}
```
The `createUniforms` method needs to be updated to use the new uniforms, and the `cleanup` one needs to free the indirect draw buffer. The `render` method will now use `glMultiDrawElementsIndirect` instead of submitting individual draw commands for meshes and entities:
```java
public class ShadowRender {
...
public void render(Scene scene, RenderBuffers renderBuffers) {
CascadeShadow.updateCascadeShadows(cascadeShadows, scene);
glBindFramebuffer(GL_FRAMEBUFFER, shadowBuffer.getDepthMapFBO());
glViewport(0, 0, ShadowBuffer.SHADOW_MAP_WIDTH, ShadowBuffer.SHADOW_MAP_HEIGHT);
shaderProgram.bind();
int entityIdx = 0;
for (Model model : scene.getModelMap().values()) {
List entities = model.getEntitiesList();
for (Entity entity : entities) {
uniformsMap.setUniform("modelMatrices[" + entityIdx + "]", entity.getModelMatrix());
entityIdx++;
}
}
for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowBuffer.getDepthMapTexture().getIds()[i], 0);
glClear(GL_DEPTH_BUFFER_BIT);
}
// Static meshes
int drawElement = 0;
for (Model model: scene.getModelMap().values()) {
if (model.isAnimated()) {
continue;
}
List entities = model.getEntitiesList();
for (RenderBuffers.MeshDrawData meshDrawData : model.getMeshDrawDataList()) {
for (Entity entity : entities) {
String name = "drawElements[" + drawElement + "]";
uniformsMap.setUniform(name + ".modelMatrixIdx", entitiesIdxMap.get(entity.getId()));
drawElement++;
}
}
}
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, staticRenderBufferHandle);
glBindVertexArray(renderBuffers.getStaticVaoId());
for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowBuffer.getDepthMapTexture().getIds()[i], 0);
CascadeShadow shadowCascade = cascadeShadows.get(i);
uniformsMap.setUniform("projViewMatrix", shadowCascade.getProjViewMatrix());
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, staticDrawCount, 0);
}
glBindVertexArray(0);
shaderProgram.unbind();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
...
}
```
Finally, we need a similar method to set up the indirect draw buffer and the entities map:
```java
public class ShadowRender {
...
public void setupData(Scene scene) {
setupEntitiesData(scene);
setupStaticCommandBuffer(scene);
}
private void setupEntitiesData(Scene scene) {
entitiesIdxMap.clear();
int entityIdx = 0;
for (Model model : scene.getModelMap().values()) {
List entities = model.getEntitiesList();
for (Entity entity : entities) {
entitiesIdxMap.put(entity.getId(), entityIdx);
entityIdx++;
}
}
}
private void setupStaticCommandBuffer(Scene scene) {
List<Model> modelList = scene.getModelMap().values().stream().filter(m -> !m.isAnimated()).toList();
Map<String, Integer> entitiesIdxMap = new HashMap<>();
int entityIdx = 0;
int numMeshes = 0;
for (Model model : scene.getModelMap().values()) {
List entities = model.getEntitiesList();
numMeshes += model.getMeshDrawDataList().size();
for (Entity entity : entities) {
entitiesIdxMap.put(entity.getId(), entityIdx);
entityIdx++;
}
}
int firstIndex = 0;
int baseInstance = 0;
int drawElement = 0;
shaderProgram.bind();
ByteBuffer commandBuffer = MemoryUtil.memAlloc(numMeshes * COMMAND_SIZE);
for (Model model : modelList) {
List entities = model.getEntitiesList();
int numEntities = entities.size();
for (RenderBuffers.MeshDrawData meshDrawData : model.getMeshDrawDataList()) {
// count
commandBuffer.putInt(meshDrawData.vertices());
// instanceCount
commandBuffer.putInt(numEntities);
commandBuffer.putInt(firstIndex);
// baseVertex
commandBuffer.putInt(meshDrawData.offset());
commandBuffer.putInt(baseInstance);
firstIndex += meshDrawData.vertices();
baseInstance += entities.size();
for (Entity entity : entities) {
String name = "drawElements[" + drawElement + "]";
uniformsMap.setUniform(name + ".modelMatrixIdx", entitiesIdxMap.get(entity.getId()));
drawElement++;
}
}
}
commandBuffer.flip();
shaderProgram.unbind();
staticDrawCount = commandBuffer.remaining() / COMMAND_SIZE;
staticRenderBufferHandle = glGenBuffers();
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, staticRenderBufferHandle);
glBufferData(GL_DRAW_INDIRECT_BUFFER, commandBuffer, GL_DYNAMIC_DRAW);
MemoryUtil.memFree(commandBuffer);
}
}
```
In the `Render` class, we just need to instantiate the `RenderBuffers` class and provide a new method, `setupData`, which can be called once every model and entity has been created, to create the indirect drawing buffers and associated data.
```java
public class Render {
...
private RenderBuffers renderBuffers;
...
public Render(Window window) {
...
renderBuffers = new RenderBuffers();
}
public void cleanup() {
...
renderBuffers.cleanup();
}
...
public void render(Window window, Scene scene) {
shadowRender.render(scene, renderBuffers);
sceneRender.render(scene, renderBuffers, gBuffer);
...
}
...
public void setupData(Scene scene) {
renderBuffers.loadStaticModels(scene);
renderBuffers.loadAnimatedModels(scene);
sceneRender.setupData(scene);
shadowRender.setupData(scene);
List<Model> modelList = new ArrayList<>(scene.getModelMap().values());
modelList.forEach(m -> m.getMeshDataList().clear());
}
}
```
We need to update the `TextureCache` class to provide a method that returns all the textures:
```java
public class TextureCache {
...
public Collection<Texture> getAll() {
return textureMap.values();
}
...
}
```
Since we have modified the class hierarchy that deals with models and materials, we need to update the `SkyBox` class (loading individual models now requires additional steps):
```java
public class SkyBox {
private Material material;
private Mesh mesh;
...
public SkyBox(String skyBoxModelPath, TextureCache textureCache, MaterialCache materialCache) {
skyBoxModel = ModelLoader.loadModel("skybox-model", skyBoxModelPath, textureCache, materialCache, false);
MeshData meshData = skyBoxModel.getMeshDataList().get(0);
material = materialCache.getMaterial(meshData.getMaterialIdx());
mesh = new Mesh(meshData);
skyBoxModel.getMeshDataList().clear();
skyBoxEntity = new Entity("skyBoxEntity-entity", skyBoxModel.getId());
}
public void cleanup() {
mesh.cleanup();
}
public Material getMaterial() {
return material;
}
public Mesh getMesh() {
return mesh;
}
...
}
```
These changes also affect the `SkyBoxRender` class. For sky box rendering we will not use indirect drawing (it is not worth it since we will be rendering just one mesh):
```java
public class SkyBoxRender {
...
public void render(Scene scene) {
SkyBox skyBox = scene.getSkyBox();
if (skyBox == null) {
return;
}
shaderProgram.bind();
uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
viewMatrix.set(scene.getCamera().getViewMatrix());
viewMatrix.m30(0);
viewMatrix.m31(0);
viewMatrix.m32(0);
uniformsMap.setUniform("viewMatrix", viewMatrix);
uniformsMap.setUniform("txtSampler", 0);
Entity skyBoxEntity = skyBox.getSkyBoxEntity();
TextureCache textureCache = scene.getTextureCache();
Material material = skyBox.getMaterial();
Mesh mesh = skyBox.getMesh();
Texture texture = textureCache.getTexture(material.getTexturePath());
glActiveTexture(GL_TEXTURE0);
texture.bind();
uniformsMap.setUniform("diffuse", material.getDiffuseColor());
uniformsMap.setUniform("hasTexture", texture.getTexturePath().equals(TextureCache.DEFAULT_TEXTURE) ? 0 : 1);
glBindVertexArray(mesh.getVaoId());
uniformsMap.setUniform("modelMatrix", skyBoxEntity.getModelMatrix());
glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
shaderProgram.unbind();
}
...
}
```
In the `Engine` class, we no longer need to invoke the `Scene` class `cleanup` method (since the data associated to the buffers is now in the `RenderBuffers` class):
```java
public class Engine {
...
private void cleanup() {
appLogic.cleanup();
render.cleanup();
window.cleanup();
}
...
}
```
Finally, in the `Main` class, we will load two entities associated to a cube model. We will rotate them independently to check that the code works correctly. The most important part is to call the `Render` class `setupData` method once everything is loaded.
```java
public class Main implements IAppLogic {
...
private Entity cubeEntity1;
private Entity cubeEntity2;
...
private float rotation;
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-20", opts, main);
...
}
public void init(Window window, Scene scene, Render render) {
...
Model terrainModel = ModelLoader.loadModel(terrainModelId, "resources/models/terrain/terrain.obj",
scene.getTextureCache(), scene.getMaterialCache(), false);
...
Model cubeModel = ModelLoader.loadModel("cube-model", "resources/models/cube/cube.obj",
scene.getTextureCache(), scene.getMaterialCache(), false);
scene.addModel(cubeModel);
cubeEntity1 = new Entity("cube-entity-1", cubeModel.getId());
cubeEntity1.setPosition(0, 2, -1);
cubeEntity1.updateModelMatrix();
scene.addEntity(cubeEntity1);
cubeEntity2 = new Entity("cube-entity-2", cubeModel.getId());
cubeEntity2.setPosition(-2, 2, -1);
cubeEntity2.updateModelMatrix();
scene.addEntity(cubeEntity2);
render.setupData(scene);
...
SkyBox skyBox = new SkyBox("resources/models/skybox/skybox.obj", scene.getTextureCache(),
scene.getMaterialCache());
...
}
...
public void update(Window window, Scene scene, long diffTimeMillis) {
rotation += 1.5;
if (rotation > 360) {
rotation = 0;
}
cubeEntity1.setRotation(1, 1, 1, (float) Math.toRadians(rotation));
cubeEntity1.updateModelMatrix();
cubeEntity2.setRotation(1, 1, 1, (float) Math.toRadians(360 - rotation));
cubeEntity2.updateModelMatrix();
}
}
```
With all of these changes implemented you should be able to see something similar to this:

[Next chapter](../chapter-21/chapter-21.md)
================================================
FILE: chapter-21/chapter-21.md
================================================
# Chapter 21 - Indirect drawing (animated models) and compute shaders
In this chapter we will add support for animated models when using indirect drawing. In order to do so, we will introduce a new topic: compute shaders. We will use compute shaders to transform model vertices from the binding pose to their final position (according to the current animation). Once we have done this, we can use regular shaders to render them; there will be no need to distinguish between animated and non-animated models while rendering. In addition to that, we will be able to decouple animation transformations from the rendering process. By doing so, we will be able to update animated models at a different rate than the render rate (we do not need to transform animated vertices each frame if they have not changed).
You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-20).
## Concepts
Prior to explaining the code, let's explain the concepts behind indirect drawing for animated models. The approach we will follow will be more or less the same as the one used in the previous chapter. We will have a global buffer which will contain vertex data. The main difference is that we will first use a compute shader to transform vertices from the binding pose to the final one. In addition to that, we will not use multiple instances per model. The reason is that, even if we have several entities that share the same animated model, they can be in different animation states (the animation may have started later, may have a lower update rate, or the selected animation of the model may even be different). Therefore, inside the global buffer that will contain the animated vertices, we will need a single chunk of data per entity.
We will still need to keep the binding pose data, so we will create another global buffer for it, covering all the meshes of the scene. In this case we do not need separate chunks per entity, just one per mesh. The compute shader will access that binding pose data buffer, process it for each of the entities, and store the results into another global buffer with a structure similar to the one used for static models.
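The following small program (with invented mesh and entity counts) illustrates the sizing consequence of that decision: the binding pose buffer needs one chunk per mesh, while the buffer holding the transformed vertices needs one chunk per mesh and entity:
```java
public class AnimBufferSizeExample {
    public static void main(String[] args) {
        // Hypothetical animated model: two meshes of 1_000 and 4_000 vertices, shared by 3 entities.
        int[] meshVertices = {1_000, 4_000};
        int numEntities = 3;
        int floatsPerVertex = 14; // positions, normals, tangents, bitangents and texture coordinates

        int bindingPoseFloats = 0; // input of the compute shader: one chunk per mesh
        for (int vertices : meshVertices) {
            bindingPoseFloats += vertices * floatsPerVertex;
        }
        int transformedFloats = bindingPoseFloats * numEntities; // output: one chunk per mesh and entity

        System.out.println("Binding poses buffer: " + bindingPoseFloats + " floats");        // 70000
        System.out.println("Transformed vertices buffer: " + transformedFloats + " floats"); // 210000
    }
}
```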
## Model loading
We need to update the `Model` class since bone matrices data will no longer be kept in this class once loading is complete. Instead, that information will be stored in a common buffer. Therefore the inner class `AnimatedFrame` cannot be a record any longer (records are immutable).
```java
public class Model {
...
public static class AnimatedFrame {
private Matrix4f[] bonesMatrices;
private int offset;
public AnimatedFrame(Matrix4f[] bonesMatrices) {
this.bonesMatrices = bonesMatrices;
}
public void clearData() {
bonesMatrices = null;
}
public Matrix4f[] getBonesMatrices() {
return bonesMatrices;
}
public int getOffset() {
return offset;
}
public void setOffset(int offset) {
this.offset = offset;
}
}
...
}
```
The fact that we pass from a record to a regular inner class changes the way we access the `AnimatedFrame` attributes, which requires a slight modification in the `ModelLoader` class:
```java
public class ModelLoader {
...
private static void buildFrameMatrices(AIAnimation aiAnimation, List<Bone> boneList, Model.AnimatedFrame animatedFrame,
int frame, Node node, Matrix4f parentTransformation, Matrix4f globalInverseTransform) {
...
for (Bone bone : affectedBones) {
...
animatedFrame.getBonesMatrices()[bone.boneId()] = boneTransform;
}
...
}
...
}
```
Let's review now the new global buffers that we will need, which will be managed in the `RenderBuffers` class:
```java
public class RenderBuffers {
private int animVaoId;
private int bindingPosesBuffer;
private int bonesIndicesWeightsBuffer;
private int bonesMatricesBuffer;
private int destAnimationBuffer;
...
public void cleanup() {
...
glDeleteVertexArrays(animVaoId);
}
...
public int getAnimVaoId() {
return animVaoId;
}
public int getBindingPosesBuffer() {
return bindingPosesBuffer;
}
public int getBonesIndicesWeightsBuffer() {
return bonesIndicesWeightsBuffer;
}
public int getBonesMatricesBuffer() {
return bonesMatricesBuffer;
}
public int getDestAnimationBuffer() {
return destAnimationBuffer;
}
...
}
```
The `animVaoId` attribute will store the VAO which defines the data layout of the transformed animation vertices, that is, the data after it has been processed by the compute shader (remember, one chunk per mesh and entity). The data itself will be stored in a buffer whose handle is stored in `destAnimationBuffer`. We need to access that buffer in the compute shader, which does not understand VAOs, just buffers. We will also need to store bone matrices and bone indices and weights in two buffers, represented by `bonesMatricesBuffer` and `bonesIndicesWeightsBuffer` respectively. In the `cleanup` method we must not forget to delete the new VAO. We also need to add getters for the new attributes.
We can now implement the `loadAnimatedModels` which starts like this:
```java
public class RenderBuffers {
...
public void loadAnimatedModels(Scene scene) {
List<Model> modelList = scene.getModelMap().values().stream().filter(Model::isAnimated).toList();
loadBindingPoses(modelList);
loadBonesMatricesBuffer(modelList);
loadBonesIndicesWeights(modelList);
animVaoId = glGenVertexArrays();
glBindVertexArray(animVaoId);
int positionsSize = 0;
int normalsSize = 0;
int textureCoordsSize = 0;
int indicesSize = 0;
int offset = 0;
int chunkBindingPoseOffset = 0;
int bindingPoseOffset = 0;
int chunkWeightsOffset = 0;
int weightsOffset = 0;
for (Model model : modelList) {
List entities = model.getEntitiesList();
for (Entity entity : entities) {
List meshDrawDataList = model.getMeshDrawDataList();
bindingPoseOffset = chunkBindingPoseOffset;
weightsOffset = chunkWeightsOffset;
for (MeshData meshData : model.getMeshDataList()) {
positionsSize += meshData.getPositions().length;
normalsSize += meshData.getNormals().length;
textureCoordsSize += meshData.getTextCoords().length;
indicesSize += meshData.getIndices().length;
int meshSizeInBytes = (meshData.getPositions().length + meshData.getNormals().length * 3 + meshData.getTextCoords().length) * 4;
meshDrawDataList.add(new MeshDrawData(meshSizeInBytes, meshData.getMaterialIdx(), offset,
meshData.getIndices().length, new AnimMeshDrawData(entity, bindingPoseOffset, weightsOffset)));
bindingPoseOffset += meshSizeInBytes / 4;
int groupSize = (int) Math.ceil((float) meshSizeInBytes / (14 * 4));
weightsOffset += groupSize * 2 * 4;
offset = positionsSize / 3;
}
}
chunkBindingPoseOffset += bindingPoseOffset;
chunkWeightsOffset += weightsOffset;
}
destAnimationBuffer = glGenBuffers();
vboIdList.add(destAnimationBuffer);
FloatBuffer meshesBuffer = MemoryUtil.memAllocFloat(positionsSize + normalsSize * 3 + textureCoordsSize);
for (Model model : modelList) {
model.getEntitiesList().forEach(e -> {
for (MeshData meshData : model.getMeshDataList()) {
populateMeshBuffer(meshesBuffer, meshData);
}
});
}
meshesBuffer.flip();
glBindBuffer(GL_ARRAY_BUFFER, destAnimationBuffer);
glBufferData(GL_ARRAY_BUFFER, meshesBuffer, GL_STATIC_DRAW);
MemoryUtil.memFree(meshesBuffer);
defineVertexAttribs();
// Index VBO
int vboId = glGenBuffers();
vboIdList.add(vboId);
IntBuffer indicesBuffer = MemoryUtil.memAllocInt(indicesSize);
for (Model model : modelList) {
model.getEntitiesList().forEach(e -> {
for (MeshData meshData : model.getMeshDataList()) {
indicesBuffer.put(meshData.getIndices());
}
});
}
indicesBuffer.flip();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL_STATIC_DRAW);
MemoryUtil.memFree(indicesBuffer);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
...
}
```
We will see later on how the following methods are defined but, for now:
* `loadBindingPoses`: Stores the binding pose information for all the meshes associated to the animated models.
* `loadBonesMatricesBuffer`: Stores the bone matrices for each animation of the animated models.
* `loadBonesIndicesWeights`: Stores the bone indices and weights information of the animated models.
The code is very similar to `loadStaticModels`: we start by creating a VAO for animated models, and then iterate over the meshes of the models. We will use a single buffer to hold all the data, so we just iterate over those elements to get the final buffer size. Please note that the first loop is a little bit different from the static version: we need to iterate over the entities associated to a model, and for each of them we calculate the size of all the associated meshes.
Let's examine the `loadBindingPoses` method:
```java
public class RenderBuffers {
...
private void loadBindingPoses(List<Model> modelList) {
int meshSize = 0;
for (Model model : modelList) {
for (MeshData meshData : model.getMeshDataList()) {
meshSize += meshData.getPositions().length + meshData.getNormals().length * 3 +
meshData.getTextCoords().length + meshData.getIndices().length;
}
}
bindingPosesBuffer = glGenBuffers();
vboIdList.add(bindingPosesBuffer);
FloatBuffer meshesBuffer = MemoryUtil.memAllocFloat(meshSize);
for (Model model : modelList) {
for (MeshData meshData : model.getMeshDataList()) {
populateMeshBuffer(meshesBuffer, meshData);
}
}
meshesBuffer.flip();
glBindBuffer(GL_SHADER_STORAGE_BUFFER, bindingPosesBuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, meshesBuffer, GL_STATIC_DRAW);
MemoryUtil.memFree(meshesBuffer);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
...
}
```
The `loadBindingPoses` method iterates over all the animated models, getting the total size required to accommodate all the associated meshes. With that size, a buffer is created and populated using the `populateMeshBuffer` method, which was already present in the previous chapter. Therefore, we store the binding pose vertices for all the meshes of the animated models in a single buffer. We will access this buffer in the compute shader, so you can see that we use the `GL_SHADER_STORAGE_BUFFER` flag when binding.
The `loadBonesMatricesBuffer` method is defined like this:
```java
public class RenderBuffers {
...
private void loadBonesMatricesBuffer(List<Model> modelList) {
int bufferSize = 0;
for (Model model : modelList) {
List animationsList = model.getAnimationList();
for (Model.Animation animation : animationsList) {
List frameList = animation.frames();
for (Model.AnimatedFrame frame : frameList) {
Matrix4f[] matrices = frame.getBonesMatrices();
bufferSize += matrices.length * 64;
}
}
}
bonesMatricesBuffer = glGenBuffers();
vboIdList.add(bonesMatricesBuffer);
ByteBuffer dataBuffer = MemoryUtil.memAlloc(bufferSize);
int matrixSize = 4 * 4 * 4;
for (Model model : modelList) {
List animationsList = model.getAnimationList();
for (Model.Animation animation : animationsList) {
List frameList = animation.frames();
for (Model.AnimatedFrame frame : frameList) {
frame.setOffset(dataBuffer.position() / matrixSize);
Matrix4f[] matrices = frame.getBonesMatrices();
for (Matrix4f matrix : matrices) {
matrix.get(dataBuffer);
dataBuffer.position(dataBuffer.position() + matrixSize);
}
frame.clearData();
}
}
}
dataBuffer.flip();
glBindBuffer(GL_SHADER_STORAGE_BUFFER, bonesMatricesBuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, dataBuffer, GL_STATIC_DRAW);
MemoryUtil.memFree(dataBuffer);
}
...
}
```
We start by iterating over the animation data of each of the models, getting the associated transformation matrices (for all the bones) of each animated frame, in order to calculate the size of the buffer that will hold all that information. Once we have the size, we create the buffer and start populating it (in the second loop) with those matrices. As with the previous buffer, we will access this one in the compute shader, therefore we need to use the `GL_SHADER_STORAGE_BUFFER` flag.
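As a quick illustration of the offsets being stored per frame (the bone count is invented): each 4x4 matrix occupies 64 bytes, and the offset passed to `setOffset` is expressed in matrices, not bytes:
```java
public class BoneMatrixOffsetExample {
    public static void main(String[] args) {
        int matrixSize = 4 * 4 * 4; // 64 bytes per mat4, as in loadBonesMatricesBuffer
        int bonesPerFrame = 50;     // hypothetical skeleton size
        int numFrames = 3;

        int bytePosition = 0;
        for (int frame = 0; frame < numFrames; frame++) {
            int offsetInMatrices = bytePosition / matrixSize; // what frame.setOffset(...) receives
            System.out.println("Frame " + frame + " offset: " + offsetInMatrices); // 0, 50, 100
            bytePosition += bonesPerFrame * matrixSize;
        }
    }
}
```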
The `loadBonesIndicesWeights` method is defined like this:
```java
public class RenderBuffers {
...
private void loadBonesIndicesWeights(List<Model> modelList) {
int bufferSize = 0;
for (Model model : modelList) {
for (MeshData meshData : model.getMeshDataList()) {
bufferSize += meshData.getBoneIndices().length * 4 + meshData.getWeights().length * 4;
}
}
ByteBuffer dataBuffer = MemoryUtil.memAlloc(bufferSize);
for (Model model : modelList) {
for (MeshData meshData : model.getMeshDataList()) {
int[] bonesIndices = meshData.getBoneIndices();
float[] weights = meshData.getWeights();
int rows = bonesIndices.length / 4;
for (int row = 0; row < rows; row++) {
int startPos = row * 4;
dataBuffer.putFloat(weights[startPos]);
dataBuffer.putFloat(weights[startPos + 1]);
dataBuffer.putFloat(weights[startPos + 2]);
dataBuffer.putFloat(weights[startPos + 3]);
dataBuffer.putFloat(bonesIndices[startPos]);
dataBuffer.putFloat(bonesIndices[startPos + 1]);
dataBuffer.putFloat(bonesIndices[startPos + 2]);
dataBuffer.putFloat(bonesIndices[startPos + 3]);
}
}
}
dataBuffer.flip();
bonesIndicesWeightsBuffer = glGenBuffers();
vboIdList.add(bonesIndicesWeightsBuffer);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, bonesIndicesWeightsBuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, dataBuffer, GL_STATIC_DRAW);
MemoryUtil.memFree(dataBuffer);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
}
...
}
```
As in the previous methods, we will store the weights and bone indices information in a single buffer, so we need to first calculate its size and later on populate it. As with the previous buffers, we will access this one in the compute shader, therefore we need to use the `GL_SHADER_STORAGE_BUFFER` flag.
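Although the dispatch code is presented later, it may help to anticipate how these storage buffers map to the `binding` points declared in the compute shader of the next section. This is only a minimal sketch (the class and method names are made up), assuming the buffers are bound with `glBindBufferBase` right before dispatching:
```java
import static org.lwjgl.opengl.GL46.*;

public class AnimComputeBindingSketch {
    // Attaches the global buffers created above to the binding points used by anim.comp.
    public static void bindBuffers(RenderBuffers renderBuffers) {
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, renderBuffers.getBindingPosesBuffer());        // srcBuf
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, renderBuffers.getBonesIndicesWeightsBuffer()); // weightsBuf
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, renderBuffers.getBonesMatricesBuffer());       // bonesBuf
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 3, renderBuffers.getDestAnimationBuffer());       // dstBuf
    }
}
```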
## Compute shaders
It is now time to implement animation transformations through compute shaders. As it has been said before, a compute shader is like any other shader, but it does not impose any restrictions on its inputs and outputs. We will use them to transform data: they will have access to the global buffers that hold information about binding poses and animation transformation matrices, and they will dump the result into another buffer. The shader code for animations (`anim.comp`) is defined like this:
```glsl
#version 460
layout (std430, binding=0) readonly buffer srcBuf {
float data[];
} srcVector;
layout (std430, binding=1) readonly buffer weightsBuf {
float data[];
} weightsVector;
layout (std430, binding=2) readonly buffer bonesBuf {
mat4 data[];
} bonesMatrices;
layout (std430, binding=3) buffer dstBuf {
float data[];
} dstVector;
struct DrawParameters
{
int srcOffset;
int srcSize;
int weightsOffset;
int bonesMatricesOffset;
int dstOffset;
};
uniform DrawParameters drawParameters;
layout (local_size_x=1, local_size_y=1, local_size_z=1) in;
void main()
{
int baseIdx = int(gl_GlobalInvocationID.x) * 14;
uint baseIdxWeightsBuf = drawParameters.weightsOffset + int(gl_GlobalInvocationID.x) * 8;
uint baseIdxSrcBuf = drawParameters.srcOffset + baseIdx;
uint baseIdxDstBuf = drawParameters.dstOffset + baseIdx;
if (baseIdx >= drawParameters.srcSize) {
return;
}
vec4 weights = vec4(weightsVector.data[baseIdxWeightsBuf], weightsVector.data[baseIdxWeightsBuf + 1], weightsVector.data[baseIdxWeightsBuf + 2], weightsVector.data[baseIdxWeightsBuf + 3]);
ivec4 bonesIndices = ivec4(weightsVector.data[baseIdxWeightsBuf + 4], weightsVector.data[baseIdxWeightsBuf + 5], weightsVector.data[baseIdxWeightsBuf + 6], weightsVector.data[baseIdxWeightsBuf + 7]);
vec4 position = vec4(srcVector.data[baseIdxSrcBuf], srcVector.data[baseIdxSrcBuf + 1], srcVector.data[baseIdxSrcBuf + 2], 1);
position =
weights.x * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.x] * position +
weights.y * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.y] * position +
weights.z * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.z] * position +
weights.w * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.w] * position;
dstVector.data[baseIdxDstBuf] = position.x / position.w;
dstVector.data[baseIdxDstBuf + 1] = position.y / position.w;
dstVector.data[baseIdxDstBuf + 2] = position.z / position.w;
baseIdxSrcBuf += 3;
baseIdxDstBuf += 3;
vec4 normal = vec4(srcVector.data[baseIdxSrcBuf], srcVector.data[baseIdxSrcBuf + 1], srcVector.data[baseIdxSrcBuf + 2], 0);
normal =
weights.x * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.x] * normal +
weights.y * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.y] * normal +
weights.z * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.z] * normal +
weights.w * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.w] * normal;
dstVector.data[baseIdxDstBuf] = normal.x;
dstVector.data[baseIdxDstBuf + 1] = normal.y;
dstVector.data[baseIdxDstBuf + 2] = normal.z;
baseIdxSrcBuf += 3;
baseIdxDstBuf += 3;
vec4 tangent = vec4(srcVector.data[baseIdxSrcBuf], srcVector.data[baseIdxSrcBuf + 1], srcVector.data[baseIdxSrcBuf + 2], 0);
tangent =
weights.x * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.x] * tangent +
weights.y * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.y] * tangent +
weights.z * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.z] * tangent +
weights.w * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.w] * tangent;
dstVector.data[baseIdxDstBuf] = tangent.x;
dstVector.data[baseIdxDstBuf + 1] = tangent.y;
dstVector.data[baseIdxDstBuf + 2] = tangent.z;
baseIdxSrcBuf += 3;
baseIdxDstBuf += 3;
vec4 bitangent = vec4(srcVector.data[baseIdxSrcBuf], srcVector.data[baseIdxSrcBuf + 1], srcVector.data[baseIdxSrcBuf + 2], 0);
bitangent =
weights.x * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.x] * bitangent +
weights.y * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.y] * bitangent +
weights.z * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.z] * bitangent +
weights.w * bonesMatrices.data[drawParameters.bonesMatricesOffset + bonesIndices.w] * bitangent;
dstVector.data[baseIdxDstBuf] = bitangent.x;
dstVector.data[baseIdxDstBuf + 1] = bitangent.y;
dstVector.data[baseIdxDstBuf + 2] = bitangent.z;
baseIdxSrcBuf += 3;
baseIdxDstBuf += 3;
vec2 textCoords = vec2(srcVector.data[baseIdxSrcBuf], srcVector.data[baseIdxSrcBuf + 1]);
dstVector.data[baseIdxDstBuf] = textCoords.x;
dstVector.data[baseIdxDstBuf + 1] = textCoords.y;
}
```
As you can see, the code is very similar to the one used in previous chapters for animation (with the loops unrolled). You will notice that we need to apply an offset for each mesh, since the data is now stored in common buffers. The input / output data is defined as a set of buffers:
* `srcVector`: this buffer will contain the vertex data in the binding pose (positions, normals, etc.).
* `weightsVector`: this buffer will contain the weights and bone indices associated with the vertices of a specific mesh.
* `bonesMatrices`: this buffer will contain the bone transformation matrices for each animation frame.
* `dstVector`: this buffer will hold the result of applying animation transformations.
The interesting thing is how we compute the offsets. The `gl_GlobalInvocationID` variable contains the index of the work item currently being executed in the compute shader. In our case, we will create as many work items as "chunks" there are in the global buffer. A chunk holds the data associated with a vertex: position, normals, texture coordinates, etc. Therefore, each time the work item index increases by one we need to move forward 14 positions in the vertex data buffer (14 floats: 3 for the position, 3 for the normal, 3 for the tangent, 3 for the bitangent and 2 for the texture coordinates). The same applies to the weights buffer, which holds the weights (4 floats) and bone indices (4 floats) associated with each vertex, so it advances 8 positions per work item. We also use that per-vertex offset to move along the binding poses buffer and the destination buffer, combined with the `drawParameters` data, which points to the base offset for each mesh and entity.
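To make the indexing concrete, here is a small sketch (hypothetical offsets, not part of the book's code) of the values a single invocation would compute, mirroring the arithmetic at the top of `main` in `anim.comp`:
```java
public class InvocationOffsetsExample {
    public static void main(String[] args) {
        int invocation = 5;       // gl_GlobalInvocationID.x: one invocation per vertex chunk
        int srcOffset = 1400;     // drawParameters.srcOffset, in floats (assumed)
        int weightsOffset = 800;  // drawParameters.weightsOffset, in floats (assumed)
        int dstOffset = 1400;     // drawParameters.dstOffset, in floats (assumed)

        int baseIdx = invocation * 14;                          // 14 floats per vertex
        int baseIdxWeightsBuf = weightsOffset + invocation * 8; // 8 floats per vertex (4 weights + 4 indices)
        int baseIdxSrcBuf = srcOffset + baseIdx;
        int baseIdxDstBuf = dstOffset + baseIdx;

        System.out.printf("src=%d, weights=%d, dst=%d%n", baseIdxSrcBuf, baseIdxWeightsBuf, baseIdxDstBuf);
        // src=1470, weights=840, dst=1470
    }
}
```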
We will use this shader in a new class named `AnimationRender` which is defined like this:
```java
package org.lwjglb.engine.graph;
import org.lwjglb.engine.scene.*;
import java.util.*;
import static org.lwjgl.opengl.GL43.*;
public class AnimationRender {
private ShaderProgram shaderProgram;
private UniformsMap uniformsMap;
public AnimationRender() {
List<ShaderProgram.ShaderModuleData> shaderModuleDataList = new ArrayList<>();
shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/anim.comp", GL_COMPUTE_SHADER));
shaderProgram = new ShaderProgram(shaderModuleDataList);
createUniforms();
}
public void cleanup() {
shaderProgram.cleanup();
}
private void createUniforms() {
uniformsMap = new UniformsMap(shaderProgram.getProgramId());
uniformsMap.createUniform("drawParameters.srcOffset");
uniformsMap.createUniform("drawParameters.srcSize");
uniformsMap.createUniform("drawParameters.weightsOffset");
uniformsMap.createUniform("drawParameters.bonesMatricesOffset");
uniformsMap.createUniform("drawParameters.dstOffset");
}
public void render(Scene scene, RenderBuffers globalBuffer) {
shaderProgram.bind();
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, globalBuffer.getBindingPosesBuffer());
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, globalBuffer.getBonesIndicesWeightsBuffer());
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, globalBuffer.getBonesMatricesBuffer());
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 3, globalBuffer.getDestAnimationBuffer());
int dstOffset = 0;
for (Model model : scene.getModelMap().values()) {
if (model.isAnimated()) {
for (RenderBuffers.MeshDrawData meshDrawData : model.getMeshDrawDataList()) {
RenderBuffers.AnimMeshDrawData animMeshDrawData = meshDrawData.animMeshDrawData();
Entity entity = animMeshDrawData.entity();
Model.AnimatedFrame frame = entity.getAnimationData().getCurrentFrame();
int groupSize = (int) Math.ceil((float) meshDrawData.sizeInBytes() / (14 * 4));
uniformsMap.setUniform("drawParameters.srcOffset", animMeshDrawData.bindingPoseOffset());
uniformsMap.setUniform("drawParameters.srcSize", meshDrawData.sizeInBytes() / 4);
uniformsMap.setUniform("drawParameters.weightsOffset", animMeshDrawData.weightsOffset());
uniformsMap.setUniform("drawParameters.bonesMatricesOffset", frame.getOffset());
uniformsMap.setUniform("drawParameters.dstOffset", dstOffset);
glDispatchCompute(groupSize, 1, 1);
dstOffset += meshDrawData.sizeInBytes() / 4;
}
}
}
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
shaderProgram.unbind();
}
}
```
As you can see, the definition is quite simple. When creating the shader we need to use the `GL_COMPUTE_SHADER` type to indicate that this is a compute shader. The uniforms that we use will contain the offsets into the binding poses buffer, the weights buffer, the bone matrices buffer and the destination buffer. In the `render` method we just iterate over the models and get the mesh draw data for each entity, dispatching a call to the compute shader by invoking `glDispatchCompute`. The key is the `groupSize` variable: we need to invoke the shader as many times as there are vertex chunks in the mesh.
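As a quick sanity check, here is a sketch (hypothetical mesh size, not part of the book's code) of the `groupSize` computation: since each vertex occupies 14 floats (56 bytes), dividing the mesh size in bytes by 56 and rounding up yields one work group, and therefore one shader invocation, per vertex.
```java
public class GroupSizeExample {
    public static void main(String[] args) {
        int vertices = 1234;                 // vertices in the mesh (assumed)
        int sizeInBytes = vertices * 14 * 4; // 14 floats of 4 bytes per vertex

        int groupSize = (int) Math.ceil((float) sizeInBytes / (14 * 4));
        System.out.println("Work groups dispatched: " + groupSize); // 1234
    }
}
```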
## Other changes
We need to update the `SceneRender` class to render the entities associated with animated models. The changes are shown below:
```java
public class SceneRender {
...
private int animDrawCount;
private int animRenderBufferHandle;
...
public void cleanup() {
...
glDeleteBuffers(animRenderBufferHandle);
}
...
public void render(Scene scene, RenderBuffers renderBuffers, GBuffer gBuffer) {
...
// Animated meshes
drawElement = 0;
for (Model model: scene.getModelMap().values()) {
if (!model.isAnimated()) {
continue;
}
for (RenderBuffers.MeshDrawData meshDrawData : model.getMeshDrawDataList()) {
RenderBuffers.AnimMeshDrawData animMeshDrawData = meshDrawData.animMeshDrawData();
Entity entity = animMeshDrawData.entity();
String name = "drawElements[" + drawElement + "]";
uniformsMap.setUniform(name + ".modelMatrixIdx", entitiesIdxMap.get(entity.getId()));
uniformsMap.setUniform(name + ".materialIdx", meshDrawData.materialIdx());
drawElement++;
}
}
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, animRenderBufferHandle);
glBindVertexArray(renderBuffers.getAnimVaoId());
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, animDrawCount, 0);
glBindVertexArray(0);
glEnable(GL_BLEND);
shaderProgram.unbind();
}
private void setupAnimCommandBuffer(Scene scene) {
List<Model> modelList = scene.getModelMap().values().stream().filter(m -> m.isAnimated()).toList();
int numMeshes = 0;
for (Model model : modelList) {
numMeshes += model.getMeshDrawDataList().size();
}
int firstIndex = 0;
int baseInstance = 0;
ByteBuffer commandBuffer = MemoryUtil.memAlloc(numMeshes * COMMAND_SIZE);
for (Model model : modelList) {
for (RenderBuffers.MeshDrawData meshDrawData : model.getMeshDrawDataList()) {
// count
commandBuffer.putInt(meshDrawData.vertices());
// instanceCount
commandBuffer.putInt(1);
// firstIndex
commandBuffer.putInt(firstIndex);
// baseVertex
commandBuffer.putInt(meshDrawData.offset());
// baseInstance
commandBuffer.putInt(baseInstance);
firstIndex += meshDrawData.vertices();
baseInstance++;
}
}
commandBuffer.flip();
animDrawCount = commandBuffer.remaining() / COMMAND_SIZE;
animRenderBufferHandle = glGenBuffers();
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, animRenderBufferHandle);
glBufferData(GL_DRAW_INDIRECT_BUFFER, commandBuffer, GL_DYNAMIC_DRAW);
MemoryUtil.memFree(commandBuffer);
}
public void setupData(Scene scene) {
...
setupAnimCommandBuffer(scene);
...
}
...
}
```
The code to render animated models is quite similar to the one used for static entities. The difference is that we do not group entities that share the same model; we need to record draw instructions for each entity and its associated meshes.
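Each record written by `setupAnimCommandBuffer` follows the layout that `glMultiDrawElementsIndirect` expects (OpenGL's `DrawElementsIndirectCommand` struct): five unsigned integers, 20 bytes in total. The sketch below (hypothetical values, not part of the book's code) shows one such command; since entities are not grouped, `instanceCount` is always 1.
```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class IndirectCommandExample {
    static final int COMMAND_SIZE = 5 * 4; // 20 bytes per draw command

    public static void main(String[] args) {
        ByteBuffer cmd = ByteBuffer.allocate(COMMAND_SIZE).order(ByteOrder.nativeOrder());
        cmd.putInt(300); // count: number of indices of the mesh (assumed)
        cmd.putInt(1);   // instanceCount: one instance per entity
        cmd.putInt(0);   // firstIndex: offset into the index buffer
        cmd.putInt(0);   // baseVertex: offset into the vertex buffer
        cmd.putInt(0);   // baseInstance: used to address per-draw data
        cmd.flip();
        System.out.println("Command size in bytes: " + cmd.remaining()); // 20
    }
}
```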
We also need to update the `ShadowRender` class to render animated models:
```java
public class ShadowRender {
...
private int animDrawCount;
private int animRenderBufferHandle;
...
public void cleanup() {
...
glDeleteBuffers(animRenderBufferHandle);
}
...
public void render(Scene scene, RenderBuffers renderBuffers) {
...
// Animated meshes
drawElement = 0;
for (Model model: scene.getModelMap().values()) {
if (!model.isAnimated()) {
continue;
}
for (RenderBuffers.MeshDrawData meshDrawData : model.getMeshDrawDataList()) {
RenderBuffers.AnimMeshDrawData animMeshDrawData = meshDrawData.animMeshDrawData();
Entity entity = animMeshDrawData.entity();
String name = "drawElements[" + drawElement + "]";
uniformsMap.setUniform(name + ".modelMatrixIdx", entitiesIdxMap.get(entity.getId()));
drawElement++;
}
}
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, animRenderBufferHandle);
glBindVertexArray(renderBuffers.getAnimVaoId());
for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowBuffer.getDepthMapTexture().getIds()[i], 0);
CascadeShadow shadowCascade = cascadeShadows.get(i);
uniformsMap.setUniform("projViewMatrix", shadowCascade.getProjViewMatrix());
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, animDrawCount, 0);
}
glBindVertexArray(0);
}
private void setupAnimCommandBuffer(Scene scene) {
List<Model> modelList = scene.getModelMap().values().stream().filter(m -> m.isAnimated()).toList();
int numMeshes = 0;
for (Model model : modelList) {
numMeshes += model.getMeshDrawDataList().size();
}
int firstIndex = 0;
int baseInstance = 0;
ByteBuffer commandBuffer = MemoryUtil.memAlloc(numMeshes * COMMAND_SIZE);
for (Model model : modelList) {
for (RenderBuffers.MeshDrawData meshDrawData : model.getMeshDrawDataList()) {
RenderBuffers.AnimMeshDrawData animMeshDrawData = meshDrawData.animMeshDrawData();
Entity entity = animMeshDrawData.entity();
// count
commandBuffer.putInt(meshDrawData.vertices());
// instanceCount
commandBuffer.putInt(1);
// firstIndex
commandBuffer.putInt(firstIndex);
// baseVertex
commandBuffer.putInt(meshDrawData.offset());
// baseInstance
commandBuffer.putInt(baseInstance);
firstIndex += meshDrawData.vertices();
baseInstance++;
}
}
commandBuffer.flip();
animDrawCount = commandBuffer.remaining() / COMMAND_SIZE;
animRenderBufferHandle = glGenBuffers();
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, animRenderBufferHandle);
glBufferData(GL_DRAW_INDIRECT_BUFFER, commandBuffer, GL_DYNAMIC_DRAW);
MemoryUtil.memFree(commandBuffer);
}
}
```
In the `Render` class we just need to instantiate the `AnimationRender` class and use it in the `render` loop and the `cleanup` method. In the `render` loop we invoke the `AnimationRender` class `render` method at the very beginning, so the animation transformations are applied prior to rendering the scene.
```java
public class Render {
private AnimationRender animationRender;
...
public Render(Window window) {
...
animationRender = new AnimationRender();
...
}
public void cleanup() {
...
animationRender.cleanup();
...
}
public void render(Window window, Scene scene) {
animationRender.render(scene, renderBuffers);
...
}
...
}
```
Finally, in the `Main` class we create two animated entities which will have different animation update rates, in order to check that we correctly separate per-entity information:
```java
public class Main implements IAppLogic {
...
private AnimationData animationData1;
private AnimationData animationData2;
...
public static void main(String[] args) {
...
Engine gameEng = new Engine("chapter-21", opts, main);
...
}
...
public void init(Window window, Scene scene, Render render) {
...
String bobModelId = "bobModel";
Model bobModel = ModelLoader.loadModel(bobModelId, "resources/models/bob/boblamp.md5mesh",
scene.getTextureCache(), scene.getMaterialCache(), true);
scene.addModel(bobModel);
Entity bobEntity = new Entity("bobEntity-1", bobModelId);
bobEntity.setScale(0.05f);
bobEntity.updateModelMatrix();
animationData1 = new AnimationData(bobModel.getAnimationList().get(0));
bobEntity.setAnimationData(animationData1);
scene.addEntity(bobEntity);
Entity bobEntity2 = new Entity("bobEntity-2", bobModelId);
bobEntity2.setPosition(2, 0, 0);
bobEntity2.setScale(0.025f);
bobEntity2.updateModelMatrix();
animationData2 = new AnimationData(bobModel.getAnimationList().get(0));
bobEntity2.setAnimationData(animationData2);
scene.addEntity(bobEntity2);
...
}
...
public void update(Window window, Scene scene, long diffTimeMillis) {
animationData1.nextFrame();
if (diffTimeMillis % 2 == 0) {
animationData2.nextFrame();
}
...
}
}
```
With all of these changes implemented, you should be able to see the two animated entities rendered, each playing its animation at a different rate.
================================================
FILE: styles/pdf.css
================================================
/* CSS for pdf */