Repository: lwjglgamedev/lwjglbook-bookcontents
Branch: main
Commit: cc6664605e31
Files: 27
Total size: 521.3 KB

Directory structure:
gitextract_brkbjcqe/

├── GLOSSARY.md
├── README.md
├── SUMMARY.md
├── appendix-a/
│   └── appendix-a.md
├── book.json
├── chapter-01/
│   └── chapter-01.md
├── chapter-02/
│   └── chapter-02.md
├── chapter-03/
│   └── chapter-03.md
├── chapter-04/
│   └── chapter-04.md
├── chapter-05/
│   └── chapter-05.md
├── chapter-06/
│   └── chapter-06.md
├── chapter-07/
│   └── chapter-07.md
├── chapter-08/
│   └── chapter-08.md
├── chapter-09/
│   └── chapter-09.md
├── chapter-10/
│   └── chapter-10.md
├── chapter-11/
│   └── chapter-11.md
├── chapter-12/
│   └── chapter-12.md
├── chapter-13/
│   └── chapter-13.md
├── chapter-14/
│   └── chapter-14.md
├── chapter-15/
│   └── chapter-15.md
├── chapter-16/
│   └── chapter-16.md
├── chapter-17/
│   └── chapter-17.md
├── chapter-18/
│   └── chapter-18.md
├── chapter-19/
│   └── chapter-19.md
├── chapter-20/
│   └── chapter-20.md
├── chapter-21/
│   └── chapter-21.md
└── styles/
    └── pdf.css

================================================
FILE CONTENTS
================================================

================================================
FILE: GLOSSARY.md
================================================
# Glossary


================================================
FILE: README.md
================================================
# 3D Game Development with LWJGL 3

This online book will introduce the main concepts required to write a 3D game using the LWJGL 3 library.

[LWJGL](http://www.lwjgl.org/) is a Java library that provides access to native APIs used in the development of graphics \(OpenGL\), audio \(OpenAL\) and parallel computing \(OpenCL\) applications. This library leverages the high performance of native OpenGL applications while using the Java language.

My initial goal was to learn the techniques involved in writing a 3D game using OpenGL. All the information required was there on the internet, but it was not organized, it was sometimes very hard to find, and parts of it were even incomplete or misleading.

I started to collect some materials, develop some examples and decided to organize that information in the form of a book.

[Table of contents](SUMMARY.md).

You can also check my Vulkan book [here](https://github.com/lwjglgamedev/vulkanbook).

## Source Code

The source code of the samples of this book is in [GitHub](https://github.com/lwjglgamedev/lwjglbook).

The source code for the book itself is also published in [GitHub](https://github.com/lwjglgamedev/lwjglbook-bookcontents).

## License

The book is licensed under [Attribution-ShareAlike 4.0 International \(CC BY-SA 4.0\)](http://creativecommons.org/licenses/by-sa/4.0/)

The source code for the book is licensed under [Apache v2.0](https://www.apache.org/licenses/LICENSE-2.0 "Apache v2.0")

## Previous version

Previous versions of the book can still be accessed on GitHub. Here are the links:

* [Book contents](https://github.com/lwjglgamedev/lwjglbook-bookcontents-leg)
* [Source code](https://github.com/lwjglgamedev/lwjglbook-leg)

**NOTE**: The old version of the book was originally published using a different URL. That site is now part of what GitBook considers legacy content. Unfortunately, I cannot access that content any more, nor even delete it (according to GitBook this is not possible). Therefore, if you are accessing the book through a GitBook URL which starts with "https://lwjglgamedev.gitbooks.io/", you are accessing an out-of-sync version (not even in sync with the previous legacy version of the book, which is still hosted on GitHub).


## Support

If you like the book you can become a [sponsor](https://github.com/sponsors/lwjglgamedev)

## Comments are welcome

Suggestions and corrections are more than welcome \(and if you do like the book, please rate it with a star\). Please send them using the discussion forum, proposing the corrections you consider appropriate in order to improve the book.

## Author

Antonio Hernández Bejarano

## Special Thanks

To all the readers that have contributed with corrections, improvements and ideas.


================================================
FILE: SUMMARY.md
================================================
# Summary

* [Introduction](README.md)
* [Chapter 01 - First steps](chapter-01/chapter-01.md)
* [Chapter 02 - The Game Loop](chapter-02/chapter-02.md)
* [Chapter 03 - Our first triangle](chapter-03/chapter-03.md)
* [Chapter 04 - Render a quad](chapter-04/chapter-04.md)
* [Chapter 05 - Perspective projection](chapter-05/chapter-05.md)
* [Chapter 06 - Going 3D](chapter-06/chapter-06.md)
* [Chapter 07 - Textures](chapter-07/chapter-07.md)
* [Chapter 08 - Camera](chapter-08/chapter-08.md)
* [Chapter 09 - Loading more complex models (Assimp)](chapter-09/chapter-09.md)
* [Chapter 10 - GUI (Imgui)](chapter-10/chapter-10.md)
* [Chapter 11 - Lights](chapter-11/chapter-11.md)
* [Chapter 12 - Sky Box](chapter-12/chapter-12.md)
* [Chapter 13 - Fog](chapter-13/chapter-13.md)
* [Chapter 14 - Normal Mapping](chapter-14/chapter-14.md)
* [Chapter 15 - Animations](chapter-15/chapter-15.md)
* [Chapter 16 - Audio](chapter-16/chapter-16.md)
* [Chapter 17 - Cascade shadow maps](chapter-17/chapter-17.md)
* [Chapter 18 - 3D Object Picking](chapter-18/chapter-18.md)
* [Chapter 19 - Deferred Shading](chapter-19/chapter-19.md)
* [Chapter 20 - Indirect drawing (static models)](chapter-20/chapter-20.md)
* [Chapter 21 - Indirect drawing (animated models) and compute shaders](chapter-21/chapter-21.md)
* [Appendix A - OpenGL Debugging](appendix-a/appendix-a.md)

================================================
FILE: appendix-a/appendix-a.md
================================================
# Appendix A - OpenGL Debugging

Debugging an OpenGL program can be a daunting task. Most of the time you end up with a black screen and have no means of knowing what’s going on. In order to alleviate this problem we can use some existing tools that provide more information about the rendering process.

In this appendix we will describe how to use the [RenderDoc](https://renderdoc.org/) tool to debug our LWJGL programs. RenderDoc is a graphics debugging tool that can be used with Direct3D, Vulkan and OpenGL. In the case of OpenGL, it only supports the core profile from 3.2 up to 4.5.

So let’s get started. You need to download and install the RenderDoc version for your OS. Once installed, when you launch it you will see something similar to this.

![RenderDoc](../.gitbook/assets/renderdoc.png)

The first step is to configure RenderDoc to execute and monitor our samples. In the “Capture Executable” tab we need to set up the following parameters:

* **Executable path**: In our case this should point to the JVM launcher (For instance, “C:\Program Files\Java\jdk-XX\bin\java.exe”).
* **Working Directory**: This is the working directory that will be set for your program. In our case it should be set to the target directory where Maven places the build output. This way, the dependencies will be found (for instance, "D:/Projects/booksamples/chapter-18/target").
* **Command line arguments**: This will contain the arguments required by the JVM to execute our sample. In our case, just passing the jar to be executed (For instance, “-jar chapter-18-1.0.jar”).

![Exec arguments](../.gitbook/assets/exec_arguments.png)

There are many other options in this tab to configure the capture. You can consult their purpose in the [RenderDoc documentation](https://renderdoc.org/docs/index.html). Once everything has been set up you can execute your program by clicking on the “Launch” button. You will see something like this:

![Sample](../.gitbook/assets/sample.png)

Once the process has been launched, you will see that a new tab has been added, named “java \[PID XXXX]” (where the XXXX number represents the PID, the process identifier, of the Java process).

![Java process](../.gitbook/assets/java_process.png)

From that tab you can capture the state of your program by pressing the “Trigger capture” button. Once a capture has been generated, you will see a little snapshot in that same tab.

![Capture](../.gitbook/assets/capture.png)

If you double click on that capture, all the data collected will be loaded and you can start inspecting it. The “Event Browser” panel will be populated with all the relevant OpenGL calls executed during one rendering cycle.

![Event browser](../.gitbook/assets/event_browser.png)

You can see the following events:

* Three depth passes for the cascade shadows.
* The geometry pass. If you click on a glDrawElements event and select the “Mesh” tab, you can see the mesh that was drawn, along with its vertex shader inputs and outputs.
* The lighting pass.

You can also view the input textures used for that drawing operation (by clicking the “Texture Viewer” tab).

![Texture inputs](../.gitbook/assets/texture_inputs.png)

In the center panel, you can see the output, and on the right panel you can see the list of textures used as an input. You can also view the output textures one by one. This is very illustrative to show how deferred shading works.

![Texture outputs](../.gitbook/assets/texture_outputs.png)

As you can see, this tool provides valuable information about what’s happening when rendering. It can save precious time while debugging rendering problems. It can even display information about the shaders used in the rendering pipeline.

![Pipeline state](../.gitbook/assets/pipeline_state.png)


================================================
FILE: book.json
================================================
{
    "plugins": [
        "katex"
    ],
    "pluginsConfig": {}
}

================================================
FILE: chapter-01/chapter-01.md
================================================
# Chapter 01 - First steps

In this book we will learn the principal techniques involved in developing 3D games using [OpenGL](https://www.opengl.org). We will develop our samples in Java and we will use the Lightweight Java Game Library [LWJGL](http://www.lwjgl.org/). The LWJGL library enables access to low-level APIs (Application Programming Interfaces) such as OpenGL from Java.

LWJGL is a low level API that acts like a wrapper around OpenGL. Therefore, if your idea is to start creating 3D games in a short period of time, maybe you should consider other alternatives like [jMonkeyEngine](https://jmonkeyengine.org) or [Unity](https://unity.com). By using this low level API you will have to go through many concepts and write lots of lines of code before you see the results. The benefit of doing it this way is that you will get a much better understanding of 3D graphics, and you will always be in control.

Regarding Java, you will need at least Java 17. So the first step, in case you do not have that version installed, is to download the Java SDK. You can download the OpenJDK binaries [here](https://jdk.java.net/17/). In any case, this book assumes that you have a moderate understanding of the Java language. If this is not your case, you should first get proper knowledge of the language.

The best way to work with the examples is to clone the GitHub repository. You can either download the whole repository as a zip and extract it in your desired folder, or clone it by using the following command: `git clone https://github.com/lwjglgamedev/lwjglbook.git`. In both cases you will get a root folder which contains one sub-folder per chapter.

You may use the Java IDE you want in order to run the samples. You can download IntelliJ IDEA which has good support for Java. IntelliJ provides a free open source version, the Community version, which you can download from here: [https://www.jetbrains.com/idea/download/](https://www.jetbrains.com/idea/download/).

![IntelliJ](../.gitbook/assets/intellij.png)

When you open the source code in your IDE you can either open the root folder which contains all the chapters (the parent project) or each chapter independently. In the first case, please remember to properly set the working directory for each chapter to the root folder of the chapter. The samples will try to access files using relative paths assuming that the root folder is the chapter base folder.

For building our samples we will be using [Maven](https://maven.apache.org/). Maven is already integrated in most IDEs and you can directly open the different samples inside them. Just open the folder that contains the chapter sample and IntelliJ will detect that it is a maven project. ![Maven](<../.gitbook/assets/maven_project (1).png>)

Maven builds projects based on an XML file named `pom.xml` (Project Object Model) which manages project dependencies (the libraries you need to use) and the steps to be performed during the build process. Maven follows the principle of convention over configuration, that is, if you stick to the standard project structure and naming conventions the configuration file does not need to explicitly say where source files are or where compiled classes should be located.

This book does not intend to be a maven tutorial, so please find the information about it in the web in case you need it. The source code root folder defines a parent project which defines the plugins to be used and collects the versions of the libraries employed. Therefore you will find there a `pom.xml` file which defines common actions and properties for all the chapters, which are handled as sub-projects.
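As an illustration only (the Maven coordinates below are hypothetical; check the repository for the real files), a chapter's `pom.xml` can be reduced to little more than a reference to that parent project:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Inherit plugin configuration and dependency versions from the root project -->
    <parent>
        <groupId>org.lwjglb</groupId>
        <artifactId>lwjglbook-root</artifactId>
        <version>1.0.0</version>
    </parent>

    <artifactId>chapter-01</artifactId>
</project>
```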

LWJGL 3.1 introduced some changes in the way that the project is built. Now the base code is much more modular, and we can be more selective in the packages that we want to use instead of using a giant monolithic jar file. This comes at a cost: You now need to carefully specify the dependencies one by one. But the [download](https://www.lwjgl.org/download) page includes a fancy tool that generates the pom file for you. In our case, we will be first using GLFW and OpenGL bindings. You can check what the pom file looks like in the source code.

The LWJGL platform dependency already takes care of unpacking native libraries for your platform, so there's no need to use other plugins (such as `mavennatives`). We just need to set up three profiles to set a property that will configure the LWJGL platform. The profiles will set up the correct values of that property for Windows, Linux and Mac OS families.

```xml
    <profiles>
        <profile>
            <id>windows-profile</id>
            <activation>
                <os>
                    <family>Windows</family>
                </os>
            </activation>
            <properties>
                <native.target>natives-windows</native.target>
            </properties>                
        </profile>
        <profile>
            <id>linux-profile</id>
            <activation>
                <os>
                    <family>Linux</family>
                </os>
            </activation>
            <properties>
                <native.target>natives-linux</native.target>
            </properties>                
        </profile>
        <profile>
            <id>OSX-profile</id>
            <activation>
                <os>
                    <family>mac</family>
                </os>
            </activation>
            <properties>
                <native.target>natives-osx</native.target>
            </properties>
        </profile>
    </profiles>
```

Inside each project, the LWJGL platform dependency will use the correct property established in the profile for the current platform.

```xml
        <dependency>
            <groupId>org.lwjgl</groupId>
            <artifactId>lwjgl-platform</artifactId>
            <version>${lwjgl.version}</version>
            <classifier>${native.target}</classifier>
        </dependency>
```

Besides that, every project generates a runnable jar (one that can be executed by typing java -jar name\_of\_the\_jar.jar). This is achieved by using the maven-jar-plugin which creates a jar with a `MANIFEST.MF` file with the correct values. The most important attribute for that file is `Main-Class`, which sets the entry point for the program. In addition, all the dependencies are set as entries in the `Class-Path` attribute for that file. In order to execute it on another computer, you just need to copy the main jar file and the lib directory (with all the jars included there) which are located under the target directory.
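As a sketch of the relevant configuration (the main class shown follows the package naming used in the samples; the actual `pom.xml` files in the repository are the authoritative reference), the `maven-jar-plugin` setup looks like this:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <configuration>
        <archive>
            <manifest>
                <!-- Entry point of the program -->
                <mainClass>org.lwjglb.game.Main</mainClass>
                <!-- List all dependencies in the Class-Path attribute, prefixed with lib/ -->
                <addClasspath>true</addClasspath>
                <classpathPrefix>lib/</classpathPrefix>
            </manifest>
        </archive>
    </configuration>
</plugin>
```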

The jars that contain LWJGL classes also contain the native libraries. LWJGL will take care of extracting them and adding them to the path where the JVM looks for libraries.

This chapter's source code is taken directly from the getting started sample on the LWJGL site: [http://www.lwjgl.org/guide](http://www.lwjgl.org/guide). Although it is very well documented, let's go through the source code and explain the most relevant parts. Since pasting the whole source code of each class would make it impossible to read, we will include fragments. In order for you to better understand the class to which each specific fragment belongs, we will always include the class header in each fragment. We will use three dots (`...`) to indicate that there is more code before / after the fragment. The sample is contained in a single class named `HelloWorld`, which starts like this:

```java
package org.lwjglb;

import org.lwjgl.Version;
import org.lwjgl.glfw.*;
import org.lwjgl.opengl.GL;
import org.lwjgl.system.MemoryStack;

import java.nio.IntBuffer;

import static org.lwjgl.glfw.Callbacks.glfwFreeCallbacks;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.system.MemoryStack.stackPush;
import static org.lwjgl.system.MemoryUtil.NULL;

public class HelloWorld {

    // The window handle
    private long window;

    public static void main(String[] args) {
        new HelloWorld().run();
    }
    ...
}
```

The class just stores a reference to a window handle (we will see what this means later on), and in the `main` method we just call the `run` method. Let's start dissecting that method:

```java
public class HelloWorld {
    ...
    public void run() {
        System.out.println("Hello LWJGL " + Version.getVersion() + "!");

        init();
        loop();

        // Free the window callbacks and destroy the window
        glfwFreeCallbacks(window);
        glfwDestroyWindow(window);

        // Terminate GLFW and free the error callback
        glfwTerminate();
        glfwSetErrorCallback(null).free();
    }
    ...
}
```

This method just calls the `init` method to initialize the application and then calls the `loop` method, which is basically an endless loop that renders to a window. When the `loop` method finishes, we just need to free the resources created during initialization (the GLFW window). Let's start with the `init` method.

```java
public class HelloWorld {
    ...
    private void init() {
        // Setup an error callback. The default implementation
        // will print the error message in System.err.
        GLFWErrorCallback.createPrint(System.err).set();

        // Initialize GLFW. Most GLFW functions will not work before doing this.
        if (!glfwInit())
            throw new IllegalStateException("Unable to initialize GLFW");

        // Configure GLFW
        glfwDefaultWindowHints(); // optional, the current window hints are already the default
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE); // the window will stay hidden after creation
        glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE); // the window will be resizable

        // Create the window
        window = glfwCreateWindow(300, 300, "Hello World!", NULL, NULL);
        if (window == NULL)
            throw new RuntimeException("Failed to create the GLFW window");

        // Setup a key callback. It will be called every time a key is pressed, repeated or released.
        glfwSetKeyCallback(window, (window, key, scancode, action, mods) -> {
            if (key == GLFW_KEY_ESCAPE && action == GLFW_RELEASE)
                glfwSetWindowShouldClose(window, true); // We will detect this in the rendering loop
        });
        ...
    }
    ...
}
```

We start by invoking [GLFW](https://www.glfw.org/), which is a library to handle GUI components (windows, etc.) and events (key presses, mouse movements, etc.) with an OpenGL context attached, in a straightforward way. Currently, you cannot use Swing or AWT directly to render OpenGL. If you want to use AWT you can check [lwjgl3-awt](https://github.com/LWJGLX/lwjgl3-awt), but in this book we will stick with GLFW. We first initialize the GLFW library and set some parameters for window creation (such as whether the window is resizable or not). The window is created by calling `glfwCreateWindow`, which receives the window's width, height and title. This function returns a handle, which we need to store so we can use it with any other GLFW related function. After that, we set a keyboard callback, that is, a function that will be called whenever a key is pressed. In this case we just want to detect if the `ESC` key is pressed, in order to close the window. Let's continue with the `init` method:

```java
public class HelloWorld {
    ...
    private void init() {
        ...
        // Get the thread stack and push a new frame
        try (MemoryStack stack = stackPush()) {
            IntBuffer pWidth = stack.mallocInt(1); // int*
            IntBuffer pHeight = stack.mallocInt(1); // int*

            // Get the window size passed to glfwCreateWindow
            glfwGetWindowSize(window, pWidth, pHeight);

            // Get the resolution of the primary monitor
            GLFWVidMode vidmode = glfwGetVideoMode(glfwGetPrimaryMonitor());

            // Center the window
            glfwSetWindowPos(
                    window,
                    (vidmode.width() - pWidth.get(0)) / 2,
                    (vidmode.height() - pHeight.get(0)) / 2
            );
        } // the stack frame is popped automatically

        // Make the OpenGL context current
        glfwMakeContextCurrent(window);
        // Enable v-sync
        glfwSwapInterval(1);

        // Make the window visible
        glfwShowWindow(window);
    }
    ...
}
```

Although we will explain it in the next chapters, you will see here a key class in LWJGL: `MemoryStack`. As has been said before, LWJGL provides wrappers around native libraries (C-based functions). Java does not have the concept of pointers (at least in C terms), so passing structures to C functions is not a straightforward task. In order to share those structures, and to have pass-by-reference parameters such as in the example above, we need to allocate memory which can be accessed by native code. LWJGL provides the `MemoryStack` class, which allows us to allocate native-accessible memory / structures which is automatically cleaned up (in fact, returned to a pool-like structure so it can be reused) when we exit the scope where the `stackPush` method was called. Every native-accessible memory / structure is instantiated through this stack class. In the sample above we need to call `glfwGetWindowSize` to get the window dimensions. The values are returned using a pass-by-reference approach, so we need to allocate two ints (in the form of two `IntBuffer`s). With that information and the dimensions of the monitor we can center the window, set up OpenGL, enable v-sync (more on this in the next chapter) and finally show the window.

Now we need an endless loop to continuously render something:

```java
public class HelloWorld {
    ...
    private void loop() {
        // This line is critical for LWJGL's interoperation with GLFW's
        // OpenGL context, or any context that is managed externally.
        // LWJGL detects the context that is current in the current thread,
        // creates the GLCapabilities instance and makes the OpenGL
        // bindings available for use.
        GL.createCapabilities();

        // Set the clear color
        glClearColor(1.0f, 0.0f, 0.0f, 0.0f);

        // Run the rendering loop until the user has attempted to close
        // the window or has pressed the ESCAPE key.
        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the framebuffer

            glfwSwapBuffers(window); // swap the color buffers

            // Poll for window events. The key callback above will only be
            // invoked during this call.
            glfwPollEvents();
        }
    }
    ...
}
```

We first create the OpenGL context, set up the clear color and perform a clear operation (over the color and depth buffers) in each loop iteration, polling for window events to detect if the window should be closed. We will explain these concepts in detail in the next chapters. However, just for the sake of completeness: rendering is done over a target, in this case a buffer which contains color information and depth values (for 3D). After we have finished rendering to these buffers, we just need to inform GLFW that the buffer is ready for presentation by calling `glfwSwapBuffers`. GLFW maintains several buffers so we can perform render operations over one buffer while the other one is presented in the window (if not, we would have flickering artifacts).

If you have your environment correctly set up you should be able to execute it and see a window with a red background.

![Hello World](<../.gitbook/assets/hello_world (1).png>)

The source code of this chapter is located [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-01).

[Next chapter](../chapter-02/chapter-02.md)


================================================
FILE: chapter-02/chapter-02.md
================================================
# Chapter 02 - The Game Loop

In this chapter we will start developing our game engine by creating the game loop. The game loop is the core component of every game. It is basically an endless loop which is responsible for periodically handling user input, updating game state and rendering to the screen.

You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-02).

## The basis

The following snippet shows the structure of a game loop:

```java
while (keepOnRunning) {
    input();
    update();
    render();
}
```

The `input` method is responsible for handling user input (key strokes, mouse movements, etc.). The `update` method is responsible for updating the game state (enemy positions, AI, etc.). Finally, the `render` method is responsible for rendering the visuals of the game with OpenGL. So, is that all? Are we finished with game loops? Well, not yet. The above snippet has many pitfalls. First of all, the speed at which the game loop runs will be different depending on the machine it runs on. If the machine is fast enough the user will not even be able to see what is happening in the game. Moreover, that game loop will consume all the machine's resources.

First of all we may want to control separately the period at which the game state is updated and the period at which the game is rendered to the screen. Why do we do this? Well, updating our game state at a constant rate is more important, especially if we use some physics engine. On the contrary, if our rendering is not done in time it makes no sense to render old frames while processing our game loop. We have the flexibility to skip some frames. 

## Implementation

Prior to examining the game loop, let's create the supporting classes that will form the core of the engine. We will first create an interface that will encapsulate the game logic. By doing this we will make our game engine reusable across the different chapters. This interface will have methods to initialize the game assets (`init`), handle user input (`input`), update game state (`update`) and clean up the resources (`cleanup`).

```java
package org.lwjglb.engine;

import org.lwjglb.engine.graph.Render;
import org.lwjglb.engine.scene.Scene;

public interface IAppLogic {

    void cleanup();

    void init(Window window, Scene scene, Render render);

    void input(Window window, Scene scene, long diffTimeMillis);

    void update(Window window, Scene scene, long diffTimeMillis);
}
```

As you can see, there are references to some classes which we have not defined yet (`Window`, `Scene` and `Render`) and a parameter named `diffTimeMillis` which holds the milliseconds elapsed between invocations of those methods.

Let's start with the `Window` class. We will encapsulate in this class all the invocations to GLFW library to create and manage a window, and its structure is like this:

```java
package org.lwjglb.engine;

import org.lwjgl.glfw.GLFWErrorCallback;
import org.lwjgl.glfw.GLFWVidMode;
import org.lwjgl.system.MemoryUtil;
import org.tinylog.Logger;

import java.util.concurrent.Callable;

import static org.lwjgl.glfw.Callbacks.glfwFreeCallbacks;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.system.MemoryUtil.NULL;

public class Window {

    private final long windowHandle;
    private int height;
    private Callable<Void> resizeFunc;
    private int width;
    ...
    ...
    public static class WindowOptions {
        public boolean compatibleProfile;
        public int fps;
        public int height;
        public int ups = Engine.TARGET_UPS;
        public int width;
    }
}
```

As you can see, it defines some attributes to store the window handle, its width and height, and a callback function which will be invoked any time the window is resized. It also defines an inner class to set up some options to control window creation:

* `compatibleProfile`: This controls whether we want to use old functions from previous versions (deprecated functions) or not. 
* `fps`: Defines the target frames per second (FPS). If it has a value less than or equal to zero, it means that we do not want to set a fixed target FPS but instead use the monitor refresh rate as the target. In order to do so, we will use v-sync (the swap interval, that is, the number of screen updates to wait from the time `glfwSwapBuffers` was called before swapping the buffers and returning).
* `height`: Desired window height.
* `width`: Desired window width.
* `ups`: Defines the target number of updates per second (initialized to a default value).

Let's examine the constructor of the `Window` class:

```java
public class Window {
    ...
    public Window(String title, WindowOptions opts, Callable<Void> resizeFunc) {
        this.resizeFunc = resizeFunc;
        if (!glfwInit()) {
            throw new IllegalStateException("Unable to initialize GLFW");
        }

        glfwDefaultWindowHints();
        glfwWindowHint(GLFW_VISIBLE, GL_FALSE);
        glfwWindowHint(GLFW_RESIZABLE, GL_TRUE);

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        if (opts.compatibleProfile) {
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
        } else {
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
            glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
        }

        if (opts.width > 0 && opts.height > 0) {
            this.width = opts.width;
            this.height = opts.height;
        } else {
            glfwWindowHint(GLFW_MAXIMIZED, GLFW_TRUE);
            GLFWVidMode vidMode = glfwGetVideoMode(glfwGetPrimaryMonitor());
            width = vidMode.width();
            height = vidMode.height();
        }

        windowHandle = glfwCreateWindow(width, height, title, NULL, NULL);
        if (windowHandle == NULL) {
            throw new RuntimeException("Failed to create the GLFW window");
        }

        glfwSetFramebufferSizeCallback(windowHandle, (window, w, h) -> resized(w, h));

        glfwSetErrorCallback((int errorCode, long msgPtr) ->
                Logger.error("Error code [{}], msg [{}]", errorCode, MemoryUtil.memUTF8(msgPtr))
        );

        glfwSetKeyCallback(windowHandle, (window, key, scancode, action, mods) -> {
            keyCallBack(key, action);
        });

        glfwMakeContextCurrent(windowHandle);

        if (opts.fps > 0) {
            glfwSwapInterval(0);
        } else {
            glfwSwapInterval(1);
        }

        glfwShowWindow(windowHandle);

        int[] arrWidth = new int[1];
        int[] arrHeight = new int[1];
        glfwGetFramebufferSize(windowHandle, arrWidth, arrHeight);
        width = arrWidth[0];
        height = arrHeight[0];
    }
    ...
    public void keyCallBack(int key, int action) {
        if (key == GLFW_KEY_ESCAPE && action == GLFW_RELEASE) {
            glfwSetWindowShouldClose(windowHandle, true); // We will detect this in the rendering loop
        }
    }
    ...
}
```

We start by setting some window hints to hide the window and make it resizable. After that, we set the OpenGL version and select either the core or the compatibility profile depending on the window options. Then, if a preferred width and height have not been set, we get the primary monitor dimensions to set the window size. We then create the window by calling `glfwCreateWindow` and set some callbacks for window resizing and key handling (closing the window when the `ESC` key is pressed). If we want to manually set a target FPS, we invoke `glfwSwapInterval(0)` to disable v-sync. Finally, we show the window and get the frame buffer size (the portion of the window that will be used for rendering).

The rest of the methods of the `Window` class are for cleaning up resources, the resize callback, some getters for window size and methods to poll events and to check if the window should be closed.

```java
public class Window {
    ...
    public void cleanup() {
        glfwFreeCallbacks(windowHandle);
        glfwDestroyWindow(windowHandle);
        glfwTerminate();
        GLFWErrorCallback callback = glfwSetErrorCallback(null);
        if (callback != null) {
            callback.free();
        }
    }

    public int getHeight() {
        return height;
    }

    public int getWidth() {
        return width;
    }

    public long getWindowHandle() {
        return windowHandle;
    }
    
    public boolean isKeyPressed(int keyCode) {
        return glfwGetKey(windowHandle, keyCode) == GLFW_PRESS;
    }

    public void pollEvents() {
        glfwPollEvents();
    }

    protected void resized(int width, int height) {
        this.width = width;
        this.height = height;
        try {
            resizeFunc.call();
        } catch (Exception excp) {
            Logger.error("Error calling resize callback", excp);
        }
    }

    public void update() {
        glfwSwapBuffers(windowHandle);
    }

    public boolean windowShouldClose() {
        return glfwWindowShouldClose(windowHandle);
    }
    ...
}
```

The `Scene` class will hold the future elements of the 3D scene (models, etc.). For now it is just an empty placeholder:

```java
package org.lwjglb.engine.scene;

public class Scene {

    public Scene() {
    }

    public void cleanup() {
        // Nothing to be done here yet
    }
}
```

The `Render` class is just another placeholder that clears the screen:

```java
package org.lwjglb.engine.graph;

import org.lwjgl.opengl.GL;
import org.lwjglb.engine.Window;
import org.lwjglb.engine.scene.Scene;

import static org.lwjgl.opengl.GL11.*;

public class Render {

    public Render() {
        GL.createCapabilities();
    }

    public void cleanup() {
        // Nothing to be done here yet
    }

    public void render(Window window, Scene scene) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }
}
```

Now we can implement the game loop in a new class named `Engine` which starts like this:

```java
package org.lwjglb.engine;

import org.lwjglb.engine.graph.Render;
import org.lwjglb.engine.scene.Scene;

public class Engine {

    public static final int TARGET_UPS = 30;
    private final IAppLogic appLogic;
    private final Window window;
    private Render render;
    private boolean running;
    private Scene scene;
    private int targetFps;
    private int targetUps;

    public Engine(String windowTitle, Window.WindowOptions opts, IAppLogic appLogic) {
        window = new Window(windowTitle, opts, () -> {
            resize();
            return null;
        });
        targetFps = opts.fps;
        targetUps = opts.ups;
        this.appLogic = appLogic;
        render = new Render();
        scene = new Scene();
        appLogic.init(window, scene, render);
        running = true;
    }

    private void cleanup() {
        appLogic.cleanup();
        render.cleanup();
        scene.cleanup();
        window.cleanup();
    }

    private void resize() {
        // Nothing to be done yet
    }
    ...
}
```

The `Engine` class receives in its constructor the title of the window, the window options and a reference to the implementation of the `IAppLogic` interface. In the constructor it creates instances of the `Window`, `Render` and `Scene` classes. The `cleanup` method just invokes the `cleanup` methods of those classes. The game loop is defined in the `run` method, which looks like this:

```java
public class Engine {
    ...
    private void run() {
        long initialTime = System.currentTimeMillis();
        float timeU = 1000.0f / targetUps;
        float timeR = targetFps > 0 ? 1000.0f / targetFps : 0;
        float deltaUpdate = 0;
        float deltaFps = 0;

        long updateTime = initialTime;
        while (running && !window.windowShouldClose()) {
            window.pollEvents();

            long now = System.currentTimeMillis();
            deltaUpdate += (now - initialTime) / timeU;
            deltaFps += (now - initialTime) / timeR;

            if (targetFps <= 0 || deltaFps >= 1) {
                appLogic.input(window, scene, now - initialTime);
            }

            if (deltaUpdate >= 1) {
                long diffTimeMillis = now - updateTime;
                appLogic.update(window, scene, diffTimeMillis);
                updateTime = now;
                deltaUpdate--;
            }

            if (targetFps <= 0 || deltaFps >= 1) {
                render.render(window, scene);
                deltaFps--;
                window.update();
            }
            initialTime = now;
        }

        cleanup();
    }
    ...
}
```

The loop starts by calculating two parameters: `timeU` and `timeR`, which control the maximum elapsed time between updates (`timeU`) and render calls (`timeR`) in milliseconds. When those periods are consumed we need either to update the game state or to render. In the latter case, if the target FPS is set to 0 we rely on the v-sync refresh rate, so we just set the value to `0`. Each iteration starts by polling the window events; after that, we get the current time in milliseconds and accumulate the fraction of the update and render periods that has elapsed. If we have consumed the maximum elapsed time for rendering (or rely on v-sync), we process user input by calling `appLogic.input`. If we have consumed the maximum update elapsed time, we update the game state by calling `appLogic.update`. Finally, if we have consumed the maximum elapsed time for rendering (or rely on v-sync), we trigger the render by calling `render.render`.

When the loop exits, we call the `cleanup` method to free resources.

Finally the `Engine` is completed like this:

```java
public class Engine {
    ...
    public void start() {
        running = true;
        run();
    }

    public void stop() {
        running = false;
    }
}
```

A little note on threading: GLFW must be initialized from the main thread, and polling of events should also be done in that thread. Therefore, instead of creating a separate thread for the game loop, which is what you would commonly see in games, we will execute everything from the main thread. This is why we do not create a new `Thread` in the `start` method.

Finally, we just simplify the `Main` class to this:

```java
package org.lwjglb.game;

import org.lwjglb.engine.*;
import org.lwjglb.engine.graph.Render;
import org.lwjglb.engine.scene.Scene;

public class Main implements IAppLogic {

    public static void main(String[] args) {
        Main main = new Main();
        Engine gameEng = new Engine("chapter-02", new Window.WindowOptions(), main);
        gameEng.start();
    }

    @Override
    public void cleanup() {
        // Nothing to be done yet
    }

    @Override
    public void init(Window window, Scene scene, Render render) {
        // Nothing to be done yet
    }

    @Override
    public void input(Window window, Scene scene, long diffTimeMillis) {
        // Nothing to be done yet
    }

    @Override
    public void update(Window window, Scene scene, long diffTimeMillis) {
        // Nothing to be done yet
    }
}
```

We just create the `Engine` instance and start it up in the `main` method. The `Main` class also implements the `IAppLogic` interface which, for now, is just empty.

[Next chapter](../chapter-03/chapter-03.md)


================================================
FILE: chapter-03/chapter-03.md
================================================
# Chapter 03 - Our first triangle

In this chapter we will render our first triangle to the screen and introduce the basics of a programmable graphics pipeline. But, prior to that, we will first explain the basics of coordinate systems, trying to introduce some fundamental mathematical concepts in a simple way to support the techniques and topics that we will address in subsequent chapters. We will assume some simplifications which may sacrifice preciseness for the sake of legibility.

You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-03).

## A brief about coordinates

We locate objects in space by specifying their coordinates. Think about a map. You specify a point on a map by stating its latitude and longitude. With just a pair of numbers a point is precisely identified. That pair of numbers is the point's coordinates (things are a little bit more complex in reality, since a map is a projection of a non-perfect ellipsoid, the Earth, so more data is needed, but it’s a good analogy).

A coordinate system is a system which employs one or more numbers, that is, one or more components to uniquely specify the position of a point. There are different coordinate systems (Cartesian, polar, etc.) and you can transform coordinates from one system to another. We will use the Cartesian coordinate system.

In the Cartesian coordinate system, for two dimensions, a coordinate is defined by two numbers that measure the signed distance to two perpendicular axes, x and y.

![Cartesian Coordinate System](../.gitbook/assets/cartesian_coordinate_system.png)

Continuing with the map analogy, coordinate systems define an origin. For geographic coordinates the origin is set to the point where the equator and the zero meridian cross. Depending on where we set the origin, the coordinates of a specific point are different. A coordinate system may also define the orientation of the axes. In the previous figure, the x coordinate increases as we move to the right and the y coordinate increases as we move upwards. But we could also define an alternative Cartesian coordinate system with a different axis orientation, in which we would obtain different coordinates.

![Alternative Cartesian Coordinate System](../.gitbook/assets/alt_cartesian_coordinate_system.png)

As you can see we need to define some arbitrary parameters, such as the origin and the axis orientation in order to give the appropriate meaning to the pair of numbers that constitute a coordinate. We will refer to that coordinate system with the set of arbitrary parameters as the coordinate space. In order to work with a set of coordinates we must use the same coordinate space. The good news is that we can transform coordinates from one space to another just by performing translations and rotations.
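For instance, rotating a 2D point by an angle $\theta$ around the origin and then translating it by $(t_x, t_y)$ can be written as:

$$
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} +
\begin{pmatrix} t_x \\ t_y \end{pmatrix}
$$

We will see later in this chapter how an extra coordinate allows both operations to be folded into a single matrix.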

If we are dealing with 3D coordinates we need an additional axis, the z axis. 3D coordinates will be formed by a set of three numbers (x, y, z).

![3D Cartesian Coordinate System](../.gitbook/assets/3d_cartesian_coordinate_system.png)

As in 2D Cartesian coordinate spaces we can change the orientation of the axes in 3D coordinate spaces as long as the axes are perpendicular. The next figure shows another 3D coordinate space.

![Alternative 3D Cartesian Coordinate System](../.gitbook/assets/alt_3d_cartesian_coordinate_system.png)

3D coordinate systems can be classified into two types: left handed and right handed. How do you know which type a system is? Take your hand and form an “L” between your thumb and your index finger; the middle finger should point in a direction perpendicular to the other two. The thumb should point in the direction where the x axis increases, the index finger where the y axis increases, and the middle finger where the z axis increases. If you are able to do that with your left hand, then it's left handed; if you need to use your right hand, it's right handed.

![Right Handed vs Left Handed](../.gitbook/assets/righthanded_lefthanded.png)

2D coordinate spaces are all equivalent since by applying rotation we can transform from one to another. 3D coordinate spaces, on the contrary, are not all equal. You can only transform from one to another by applying rotation if they both have the same handedness, that is, if both are left handed or right handed.
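In vector terms, handedness is captured by the cross product of the unit axis vectors: in a right handed system $\hat{x} \times \hat{y} = \hat{z}$, whereas in a left handed one $\hat{x} \times \hat{y} = -\hat{z}$, and no rotation can turn one into the other.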

Now that we have defined some basic topics, let’s talk about some commonly used terms in 3D graphics. When we explain in later chapters how to render 3D models, we will see that we use different 3D coordinate spaces; that is because each of those coordinate spaces has a context, a purpose. A set of coordinates is meaningless unless it refers to something. When you examine these coordinates (40.438031, -3.676626), they may or may not say something to you. But if I say that they are geographic coordinates (latitude and longitude), you will see that they are the coordinates of a place in Madrid.

When we load 3D objects we will get a set of 3D coordinates. Those coordinates are expressed in a 3D coordinate space called object coordinate space. When graphic designers create those 3D models, they don’t know anything about the 3D scene that the model will be displayed in, so they can only define the coordinates using a coordinate space that is only relevant to the model.

When we draw a 3D scene, all of our 3D objects will be relative to the so-called world space coordinate space. We will need to transform the coordinates from 3D object space to world space. Some objects will need to be rotated, stretched or enlarged and translated in order to be displayed properly in the 3D scene.

We will also need to restrict the range of the 3D space that is shown, which is like moving a camera through our 3D space. Then we will need to transform world space coordinates to camera or view space coordinates. Finally these coordinates need to be transformed to screen coordinates, which are 2D, so we need to project 3D view coordinates to a 2D screen coordinate space.
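As a preview of later chapters, these transformations are usually expressed as matrices, conventionally named the model ($M$), view ($V$) and projection ($P$) matrices, so the whole chain applied to an object space position reduces to a single product (followed by a final division and a mapping to the pixel grid):

$$
\mathbf{p}_{clip} = P \cdot V \cdot M \cdot \mathbf{p}_{object}
$$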

The following picture shows the OpenGL coordinate system (the z axis is perpendicular to the screen), where coordinates range between -1 and +1.

![OpenGL coordinates](../.gitbook/assets/opengl_coordinates.png)

## Your first triangle

Now we can start learning about the processes that take place while rendering a scene using OpenGL. If you are used to older versions of OpenGL, that is, the fixed-function pipeline, you may end this chapter wondering why it needs to be so complex. You may end up thinking that drawing a simple shape to the screen should not require so many concepts and lines of code. Let me give you a piece of advice if you think that way: it is actually simpler and much more flexible. You only need to give it a chance. Modern OpenGL lets you think about one problem at a time, and it lets you organize your code and processes in a more logical way.

The sequence of steps that ends up drawing a 3D representation on your 2D screen is called the graphics pipeline. The first versions of OpenGL employed a model called the fixed-function pipeline. This model employed a set of steps in the rendering process which defined a fixed set of operations. The programmer was constrained to the set of functions available for each step and could set some parameters to tweak them. Thus, the effects and operations that could be applied were limited by the API itself (for instance, “set fog” or “add light”, but the implementation of those functions was fixed and could not be changed).

The graphics pipeline was composed of these steps:

![Graphics Pipeline](../.gitbook/assets/rendering_pipeline.png)

OpenGL 2.0 introduced the concept of programmable pipeline. In this model, the different steps that compose the graphics pipeline can be controlled or programmed by using a set of specific programs called shaders. The following picture depicts a simplified version of the OpenGL programmable pipeline:

![Programmable pipeline](../.gitbook/assets/rendering_pipeline_2.png)

The rendering starts by taking as its input a list of vertices in the form of vertex buffers. But, what is a vertex? A vertex is a data structure that can be used as an input to render a scene. For now, you can think of it as a structure that describes a point in 2D or 3D space. And how do you describe a point in 3D space? By specifying its x, y and z coordinates. And what is a vertex buffer? A vertex buffer is another data structure that packs all the vertices that need to be rendered, by using vertex arrays, and makes that information available to the shaders in the graphics pipeline.

Those vertices are processed by the vertex shader, whose main purpose is to calculate the projected position of each vertex in screen space. This shader can also generate other outputs related to color or texture, but its main goal is to project the vertices onto screen space, that is, to generate dots.

The geometry processing stage connects the vertices that are transformed by the vertex shader to form triangles. It does so by taking into consideration the order in which the vertices were stored and grouping them using different models. Why triangles? A triangle is like the basic work unit for graphic cards. It’s a simple geometric shape that can be combined and transformed to construct complex 3D scenes. This stage can also use a specific shader to group the vertices.

The rasterization stage takes the triangles generated in the previous stages, clips them and transforms them into pixel-sized fragments. Those fragments are used during the fragment processing stage by the fragment shader to generate pixels assigning them the final color that gets written into the framebuffer. The framebuffer is the final result of the graphics pipeline. It holds the value of each pixel that should be drawn to the screen.

Keep in mind that 3D cards are designed to parallelize all the operations described above. The input data is processed in parallel in order to generate the final scene.

So let's start writing our first shader program. Shaders are written by using the GLSL language (OpenGL Shading Language) which is based on ANSI C. First we will create a file named “`scene.vert`” (the extension is for Vertex Shader) under the `resources\shaders` directory with the following content:

```glsl
#version 330

layout (location=0) in vec3 inPosition;

void main()
{
    gl_Position = vec4(inPosition, 1.0);
}
```

The first line is a directive that states the version of the GLSL language we are using. The following table relates the GLSL version, the OpenGL that matches that version and the directive to use (Wikipedia: [https://en.wikipedia.org/wiki/OpenGL\_Shading\_Language#Versions](https://en.wikipedia.org/wiki/OpenGL_Shading_Language#Versions)).

| GLSL Version | OpenGL Version | Shader Preprocessor |
| ----------- | -------------- | ------------------- |
| 1.10.59     | 2.0            | #version 110        |
| 1.20.8      | 2.1            | #version 120        |
| 1.30.10     | 3.0            | #version 130        |
| 1.40.08     | 3.1            | #version 140        |
| 1.50.11     | 3.2            | #version 150        |
| 3.30.6      | 3.3            | #version 330        |
| 4.00.9      | 4.0            | #version 400        |
| 4.10.6      | 4.1            | #version 410        |
| 4.20.11     | 4.2            | #version 420        |
| 4.30.8      | 4.3            | #version 430        |
| 4.40        | 4.4            | #version 440        |
| 4.50        | 4.5            | #version 450        |

The second line specifies the input format for this shader. Data in an OpenGL buffer can be whatever we want; that is, the language does not force you to pass a specific data structure with a predefined semantic. From the point of view of the shader, it is expecting to receive a buffer with data. It can be a position, a position with some additional information, or whatever else we want. In this example, from the vertex shader's perspective, it just receives an array of floats. When we fill the buffer, we define the buffer chunks that are going to be processed by the shader.

So, first we need to turn each chunk into something that is meaningful to us. In this case we are saying that, starting at location 0, we expect to receive a vector composed of 3 attributes (x, y, z).
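The Java side of that contract will be set up later in this chapter, when we upload the vertex data. As a minimal preview sketch (using standard OpenGL calls available through the LWJGL static imports used in this book), telling OpenGL that the input at location 0 consists of 3 floats looks like this:

```java
// Describe how the currently bound buffer maps to the shader input at location 0:
// 3 components of type float, not normalized, tightly packed (stride 0), offset 0.
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
// Enable the attribute so it is actually fed to the vertex shader.
glEnableVertexAttribArray(0);
```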

The shader has a main block like any other C program, which in this case is very simple. It just returns the received position in the output variable `gl_Position` without applying any transformation. You may now be wondering why the vector of three attributes has been converted into a vector of four attributes (vec4). This is because `gl_Position` expects the result in vec4 format, since it uses homogeneous coordinates. That is, it expects something in the form (x, y, z, w), where w represents an extra dimension. Why add another dimension? In later chapters you will see that most of the operations we need to do are based on vectors and matrices. Some of those operations cannot be combined if we do not have that extra dimension. For instance, we could not combine rotation and translation operations. (If you want to learn more about this: the extra dimension allows us to combine affine and linear transformations. You can learn more by reading the excellent book “3D Math Primer for Graphics and Game Development”, by Fletcher Dunn and Ian Parberry.)
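To make the translation example concrete: with the extra component set to $w = 1$, a translation by $(t_x, t_y, t_z)$ becomes a plain matrix product, so it can be chained (multiplied) with rotation and other transformation matrices:

$$
\begin{pmatrix}
1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
=
\begin{pmatrix} x + t_x \\ y + t_y \\ z + t_z \\ 1 \end{pmatrix}
$$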

Let us now have a look at our first fragment shader. We will create a file named “`scene.frag`” (the extension is for Fragment Shader) under the `resources\shaders` directory with the following content:

```glsl
#version 330

out vec4 fragColor;

void main()
{
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```

The structure is quite similar to that of our vertex shader. In this case we set a fixed color for each fragment. The output variable is defined in the second line as a vec4 named fragColor.

Now that we have our shaders created, how do we use them? We will need to create a new class named `ShaderProgram` which basically receives the source code of the different shader modules (vertex, fragment), compiles them and links them together to generate a shader program. This is the sequence of steps we need to follow:

1. Create an OpenGL program.
2. Load the shader program modules (vertex or fragment shaders).
3. For each shader, create a new shader module and specify its type (vertex, fragment).
4. Compile the shader.
5. Attach the shader to the program.
6. Link the program.

At the end the shader program will be loaded in the GPU and we can use it by referencing an identifier, the program identifier.

```java
package org.lwjglb.engine.graph;

import org.lwjgl.opengl.GL30;
import org.lwjglb.engine.Utils;

import java.util.*;

import static org.lwjgl.opengl.GL30.*;

public class ShaderProgram {

    private final int programId;

    public ShaderProgram(List<ShaderModuleData> shaderModuleDataList) {
        programId = glCreateProgram();
        if (programId == 0) {
            throw new RuntimeException("Could not create Shader");
        }

        List<Integer> shaderModules = new ArrayList<>();
        shaderModuleDataList.forEach(s -> shaderModules.add(createShader(Utils.readFile(s.shaderFile), s.shaderType)));

        link(shaderModules);
    }

    public void bind() {
        glUseProgram(programId);
    }

    public void cleanup() {
        unbind();
        if (programId != 0) {
            glDeleteProgram(programId);
        }
    }

    protected int createShader(String shaderCode, int shaderType) {
        int shaderId = glCreateShader(shaderType);
        if (shaderId == 0) {
            throw new RuntimeException("Error creating shader. Type: " + shaderType);
        }

        glShaderSource(shaderId, shaderCode);
        glCompileShader(shaderId);

        if (glGetShaderi(shaderId, GL_COMPILE_STATUS) == 0) {
            throw new RuntimeException("Error compiling Shader code: " + glGetShaderInfoLog(shaderId, 1024));
        }

        glAttachShader(programId, shaderId);

        return shaderId;
    }

    public int getProgramId() {
        return programId;
    }

    private void link(List<Integer> shaderModules) {
        glLinkProgram(programId);
        if (glGetProgrami(programId, GL_LINK_STATUS) == 0) {
            throw new RuntimeException("Error linking Shader code: " + glGetProgramInfoLog(programId, 1024));
        }

        shaderModules.forEach(s -> glDetachShader(programId, s));
        shaderModules.forEach(GL30::glDeleteShader);
    }

    public void unbind() {
        glUseProgram(0);
    }

    public void validate() {
        glValidateProgram(programId);
        if (glGetProgrami(programId, GL_VALIDATE_STATUS) == 0) {
            throw new RuntimeException("Error validating Shader code: " + glGetProgramInfoLog(programId, 1024));
        }
    }

    public record ShaderModuleData(String shaderFile, int shaderType) {
    }
}
```

The constructor of the `ShaderProgram` receives a list of `ShaderModuleData` instances which define the shader module type (vertex, fragment, etc.) and the path to the source file which contains the shader module code. The constructor starts by creating a new OpenGL shader program, then compiles each shader module (by invoking the `createShader` method) and finally links them all together (by invoking the `link` method). Once the shader program has been linked, the compiled vertex and fragment shaders can be detached and freed up (by calling `glDetachShader` and `glDeleteShader`).

The `validate` method basically calls the `glValidateProgram` function. This function is used mainly for debugging purposes and should not be used when your game reaches the production stage. It tries to validate whether the shader is correct given the **current OpenGL state**. This means that validation may fail in some cases, even if the shader is correct, because the current state is not complete enough to run the shader (some data may not have been uploaded yet). Therefore, you should call it when all the required input and output data is properly bound (ideally just before performing a drawing call).
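
For instance, during development you could call it just before issuing a draw call, once everything is bound. This is just an illustrative sketch (we do not add it to this chapter's code):

```java
// Debug-only sketch: validate the program when the full OpenGL state is bound
shaderProgram.bind();
// ... bind the VAO and set any uniforms here ...
shaderProgram.validate(); // throws a RuntimeException if the current state is invalid
glDrawArrays(GL_TRIANGLES, 0, numVertices); // numVertices: vertex count of the mesh
```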

`ShaderProgram` also provides a method to use this program for rendering (`bind`), another one to unbind it when we are done (`unbind`) and, finally, a `cleanup` method to free all the resources when they are no longer needed.

We will create a utility class named `Utils`, which for now just defines a public method to load a file into a `String`:

```java
package org.lwjglb.engine;

import java.io.IOException;
import java.nio.file.*;

public class Utils {

    private Utils() {
        // Utility class
    }

    public static String readFile(String filePath) {
        String str;
        try {
            str = new String(Files.readAllBytes(Paths.get(filePath)));
        } catch (IOException excp) {
            throw new RuntimeException("Error reading file [" + filePath + "]", excp);
        }
        return str;
    }
}
```

We will also need a new class, named `Scene`, which will hold the elements of our 3D scene, such as models, lights, etc. For now, it just stores the meshes (sets of vertices) of the models we want to draw. This is the source code for that class:

```java
package org.lwjglb.engine.scene;

import org.lwjglb.engine.graph.Mesh;

import java.util.*;

public class Scene {

    private Map<String, Mesh> meshMap;

    public Scene() {
        meshMap = new HashMap<>();
    }

    public void addMesh(String meshId, Mesh mesh) {
        meshMap.put(meshId, mesh);
    }

    public void cleanup() {
        meshMap.values().forEach(Mesh::cleanup);
    }

    public Map<String, Mesh> getMeshMap() {
        return meshMap;
    }
}
```

As you can see, it just stores `Mesh` instances in a `Map`, which is later used for drawing. But what is a `Mesh`? It is basically our way to load vertex data into the GPU so it can be used for rendering. Before describing the `Mesh` class in detail, let's see how it can be used in our `Main` class:

```java
public class Main implements IAppLogic {

    public static void main(String[] args) {
        Main main = new Main();
        Engine gameEng = new Engine("chapter-03", new Window.WindowOptions(), main);
        gameEng.start();
    }
    ...
    @Override
    public void init(Window window, Scene scene, Render render) {
        float[] positions = new float[]{
                0.0f, 0.5f, 0.0f,
                -0.5f, -0.5f, 0.0f,
                0.5f, -0.5f, 0.0f
        };
        Mesh mesh = new Mesh(positions, 3);
        scene.addMesh("triangle", mesh);
    }
    ...
}
```

In the `init` method, we define an array of floats that contains the coordinates of the vertices of a triangle. As you can see, there’s no structure in that array; we just dump all the coordinates there. As it is right now, OpenGL cannot know the structure of that data. It’s just a sequence of floats. The following picture depicts the triangle in our coordinate system.

![Triangle](../.gitbook/assets/triangle_coordinates.png)

The class that defines the structure of that data and loads it in the GPU is the `Mesh` class which is defined like this:

```java
package org.lwjglb.engine.graph;

import org.lwjgl.opengl.GL30;
import org.lwjgl.system.*;

import java.nio.FloatBuffer;
import java.util.*;

import static org.lwjgl.opengl.GL30.*;

public class Mesh {

    private int numVertices;
    private int vaoId;
    private List<Integer> vboIdList;

    public Mesh(float[] positions, int numVertices) {
        this.numVertices = numVertices;
        vboIdList = new ArrayList<>();

        vaoId = glGenVertexArrays();
        glBindVertexArray(vaoId);

        // Positions VBO
        int vboId = glGenBuffers();
        vboIdList.add(vboId);
        FloatBuffer positionsBuffer = MemoryUtil.memCallocFloat(positions.length);
        positionsBuffer.put(0, positions);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBufferData(GL_ARRAY_BUFFER, positionsBuffer, GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);

        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArray(0);

        MemoryUtil.memFree(positionsBuffer);
    }

    public void cleanup() {
        vboIdList.forEach(GL30::glDeleteBuffers);
        glDeleteVertexArrays(vaoId);
    }

    public int getNumVertices() {
        return numVertices;
    }

    public final int getVaoId() {
        return vaoId;
    }
}
```

We are now introducing two important concepts, Vertex Array Objects (VAOs) and Vertex Buffer Objects (VBOs). If you get lost in the code above, remember that, in the end, what we are doing is sending the data that models the objects we want to draw to the graphics card memory. When we store it, we get an identifier that we will use later to refer to it while drawing.

Let us start with Vertex Buffer Objects (VBOs). A VBO is just a memory buffer stored in the graphics card memory that holds vertex data. This is where we will transfer our array of floats that models the triangle. As we said before, OpenGL does not know anything about our data structure; in fact, a VBO can hold not just coordinates but other information, such as texture coordinates, colors, etc. A Vertex Array Object (VAO) is an object that contains one or more VBOs, which are usually called attribute lists. Each attribute list can hold one type of data: position, color, texture coordinates, etc. You are free to store whatever you want in each slot.

A VAO is like a wrapper that groups a set of definitions for the data that is going to be stored in the graphics card. When we create a VAO we get an identifier. We use that identifier to render it and the elements it contains using the definitions we specified during its creation.

So let us review the code above. The first thing that we do is to create the VAO (by calling the `glGenVertexArrays` function) and bind it (by calling the `glBindVertexArray` function). After that, we need to create the VBO (by calling the `glGenBuffers`) and put the data into it. In order to do so, we store our array of floats into a `FloatBuffer`. This is mainly due to the fact that we must interface with the OpenGL library, which is C-based, so we must transform our array of floats into something that can be managed by the library.

We use the `MemoryUtil` class to create the buffer in off-heap memory so that it is accessible by the OpenGL library, and then we store the data in it (with the `put` method). Remember that Java objects are allocated in a space called the heap, a large chunk of memory reserved in the JVM's process memory. Memory stored in the heap cannot be accessed by native code (JNI, the mechanism that allows calling native code from Java, does not allow that). The only way of sharing memory data between Java and native code is by directly allocating off-heap memory from Java.

If you come from previous versions of LWJGL it is important to stress a few points. You may have noticed that we do not use the utility class `BufferUtils` to create the buffers; instead, we use the `MemoryUtil` class. This is because `BufferUtils` was not very efficient and has been maintained only for backwards compatibility. Instead, LWJGL 3 proposes two methods for buffer management:

* Auto-managed buffers, that is, buffers that are automatically freed for us. These buffers are mainly used for short-lived operations, or for data that is transferred to the GPU and does not need to remain in the process memory. This is achieved by using the `org.lwjgl.system.MemoryStack` class, which allocates them on a thread-local stack and reclaims the memory as soon as the stack frame is popped.
* Manually managed buffers. In this case we need to carefully free them once we are finished with them. These buffers are intended for long-lived data or for large amounts of data. This is achieved by using the `MemoryUtil` class.

You can consult the details here: [https://blog.lwjgl.org/memory-management-in-lwjgl-3/](https://blog.lwjgl.org/memory-management-in-lwjgl-3/).

In this specific case, the positions data is short-lived: once we have loaded the data, we are done with that buffer. You may wonder, then, why we do not use the `org.lwjgl.system.MemoryStack` class. The reason to use the second approach (the `MemoryUtil` class) is that LWJGL's stack is limited. If you end up loading large models you may consume all the available space and get an "Out of stack space" exception. The drawback of this approach is that we need to manually free the memory once we are done with it by calling `MemoryUtil.memFree`.
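
The following sketch (illustrative only, not part of this chapter's code) summarizes both styles:

```java
// Auto-managed: allocated on LWJGL's stack, freed automatically when the
// frame is popped at the end of the try-with-resources block.
try (MemoryStack stack = MemoryStack.stackPush()) {
    FloatBuffer small = stack.mallocFloat(16);
    // ... fill `small` and upload it to the GPU ...
} // memory is reclaimed here, nothing else to do

// Manually managed: off-heap allocation, suitable for large amounts of data,
// but we are responsible for freeing it ourselves.
FloatBuffer large = MemoryUtil.memCallocFloat(1_000_000);
try {
    // ... fill `large` and upload it to the GPU ...
} finally {
    MemoryUtil.memFree(large);
}
```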

After that, we bind the VBO (by calling the `glBindBuffer` function) and load the data into it (by calling the `glBufferData` function). Now comes the most important part: we need to define the structure of our data and store it in one of the attribute lists of the VAO. This is done with the following line.

```java
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
```

The parameters are:

* index: Specifies the location where the shader expects this data.
* size: Specifies the number of components per vertex attribute (from 1 to 4). In this case, we are passing 3D coordinates, so it should be 3.
* type: Specifies the type of each component in the array, in this case a float.
* normalized: Specifies if the values should be normalized or not.
* stride: Specifies the byte offset between consecutive generic vertex attributes (see the sketch after this list; we will come back to it later).
* offset: Specifies an offset to the first component in the buffer.
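
In our case positions are the only attribute and they are tightly packed, so both stride and offset are 0. As an illustration of what they are for, this is a hypothetical sketch of how the calls would look if positions and colors were interleaved in a single VBO (x, y, z, r, g, b per vertex); we will keep separate VBOs in this book:

```java
// Hypothetical interleaved layout: each vertex occupies 6 floats (24 bytes)
int stride = 6 * Float.BYTES;
// Positions: 3 floats starting at byte offset 0 of each vertex
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, stride, 0);
// Colors: 3 floats starting right after the position (byte offset 12)
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, false, stride, 3 * Float.BYTES);
```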

Once we are finished with our VBO and VAO, we can unbind them (bind them to 0).

Note that, since we are using a manually managed buffer here, there is no automatic cleanup: that is why we call `MemoryUtil.memFree` once the data has been loaded into the GPU.

The `Mesh` class is completed by the `cleanup` method, which frees the VAO and the VBOs, and some getter methods to obtain the number of vertices of the mesh and the id of the VAO. We will use the VAO id when issuing drawing operations.

Now let's put all of this into work. We will create a new class named `SceneRender` which will perform the render of all the models in our scene and is defined like this:

```java
package org.lwjglb.engine.graph;

import org.lwjglb.engine.Window;
import org.lwjglb.engine.scene.Scene;

import java.util.*;

import static org.lwjgl.opengl.GL30.*;

public class SceneRender {

    private ShaderProgram shaderProgram;

    public SceneRender() {
        List<ShaderProgram.ShaderModuleData> shaderModuleDataList = new ArrayList<>();
        shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/scene.vert", GL_VERTEX_SHADER));
        shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/scene.frag", GL_FRAGMENT_SHADER));
        shaderProgram = new ShaderProgram(shaderModuleDataList);
    }

    public void cleanup() {
        shaderProgram.cleanup();
    }

    public void render(Scene scene) {
        shaderProgram.bind();

        scene.getMeshMap().values().forEach(mesh -> {
                    glBindVertexArray(mesh.getVaoId());
                    glDrawArrays(GL_TRIANGLES, 0, mesh.getNumVertices());
                }
        );

        glBindVertexArray(0);

        shaderProgram.unbind();
    }
}
```

As you can see, in the constructor we create two `ShaderModuleData` instances (one for the vertex shader and another for the fragment shader) and create a shader program. We define a `cleanup` method to free the resources (in this case, the shader program) and a `render` method which performs the drawing. This method starts by binding the shader program (calling its `bind` method). Then we iterate over the meshes stored in the `Scene` instance, bind each one (by calling the `glBindVertexArray` function) and draw the vertices of the VAO (by calling the `glDrawArrays` function). Finally, we unbind the VAO and the shader program to restore the state.

Finally, we just need to update the `Render` class to use the `SceneRender` class.

```java
package org.lwjglb.engine.graph;

import org.lwjgl.opengl.GL;
import org.lwjglb.engine.Window;
import org.lwjglb.engine.scene.Scene;

public class Render {

    private SceneRender sceneRender;

    public Render() {
        GL.createCapabilities();
        sceneRender = new SceneRender();
    }

    public void cleanup() {
        sceneRender.cleanup();
    }

    public void render(Window window, Scene scene) {
        ...
        glViewport(0, 0, window.getWidth(), window.getHeight());
        sceneRender.render(scene);
    }
}
```

The `render` method starts by clearing the framebuffer and setting the viewport (by calling the `glViewport` function) to the window dimensions. That is, we set the rendering area to those dimensions. (This does not need to be done every frame, but if we want to support window resizing it is the simplest way to adapt to potential changes each frame.) After that, we just invoke the `render` method on the `SceneRender` instance. And that’s all! If you followed the steps carefully you will see something like this:

![Triangle game](../.gitbook/assets/triangle_window.png)

Our first triangle! You may think that this will not make it into the top ten game list, and you will be totally right. You may also think that this has been too much work for drawing a boring triangle. But keep in mind that we are introducing key concepts and preparing the base infrastructure to do more complex things. Please be patient and continue reading.

[Next chapter](../chapter-04/chapter-04.md)


================================================
FILE: chapter-04/chapter-04.md
================================================
# Chapter 04 - Render a quad

In this chapter, we will continue talking about how OpenGL renders things. We will draw a quad instead of a triangle and set additional data to the Mesh, such as a color for each vertex.

You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-04).

## Mesh modification

As we said at the beginning, we want to draw a quad. A quad can be constructed by using two triangles, as shown in the next figure.

![Quad coordinates](<../.gitbook/assets/quad_coordinates (1).png>)

As you can see, each of the two triangles is composed of three vertices. The first one is formed by the vertices V1, V2, and V4 (the orange one), and the second one is formed by the vertices V4, V2, and V3 (the green one). Vertices are specified in a counter-clockwise order, so the float array to be passed will be \[V1, V2, V4, V4, V2, V3]. Thus, the data for that shape could be:

```java
float[] positions = new float[] {
    -0.5f,  0.5f, 0.0f,
    -0.5f, -0.5f, 0.0f,
     0.5f,  0.5f, 0.0f,
     0.5f,  0.5f, 0.0f,
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
};
```

The code above still presents some issues. We are repeating coordinates to represent the quad. We are passing the V2 and V4 coordinates twice. With this small shape, it may not seem like a big deal, but imagine a much more complex 3D model. We would be repeating the coordinates many times, like in the figure below (where a vertex can be shared between six triangles).

![Dolphin](<../.gitbook/assets/dolphin (1).png>)

In the end, we would need much more memory because of that duplicated information. But that is not the major problem; the biggest problem is that we would be repeating work in our shaders for the same vertex. This is where Index Buffers come to the rescue. For drawing the quad, we only need to specify each vertex once: V1, V2, V3, V4. Each vertex has a position in the array: V1 has position 0, V2 has position 1, etc.:

| V1 | V2 | V3 | V4 |
| -- | -- | -- | -- |
| 0  | 1  | 2  | 3  |

Then we specify the order in which those vertices should be drawn by referring to their position:

| 0  | 1  | 3  | 3  | 1  | 2  |
| -- | -- | -- | -- | -- | -- |
| V1 | V2 | V4 | V4 | V2 | V3 |

So we need to modify our `Mesh` class to accept another parameter, an array of indices; now the number of vertices to draw will be the length of that indices array. Keep in mind also that until now we were just using three floats to represent the position of a vertex, but we also want to associate a color with each one. Therefore, we need to modify the `Mesh` class like this:

```java
public class Mesh {
    ...
    public Mesh(float[] positions, float[] colors, int[] indices) {
        numVertices = indices.length;
        ...
        // Color VBO
        vboId = glGenBuffers();
        vboIdList.add(vboId);
        FloatBuffer colorsBuffer = MemoryUtil.memCallocFloat(colors.length);
        colorsBuffer.put(0, colors);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBufferData(GL_ARRAY_BUFFER, colorsBuffer, GL_STATIC_DRAW);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 3, GL_FLOAT, false, 0, 0);

        // Index VBO
        vboId = glGenBuffers();
        vboIdList.add(vboId);
        IntBuffer indicesBuffer = MemoryUtil.memCallocInt(indices.length);
        indicesBuffer.put(0, indices);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL_STATIC_DRAW);
        ...
        MemoryUtil.memFree(colorsBuffer);
        MemoryUtil.memFree(indicesBuffer);
    }
    ...
}           
```

After we have created the VBO that stores the positions, we need to create another VBO that will hold the color data. After that, we create one more for the indices. The process of creating this last VBO is similar to the previous ones, but notice that the type is now `GL_ELEMENT_ARRAY_BUFFER`. Since we are dealing with integers, we need to create an `IntBuffer` instead of a `FloatBuffer`. The VAO will now contain three VBOs: one for positions, one for colors, and one that holds the indices and will be used for rendering.

After that, we need to change the drawing call in the `SceneRender` class to use indices:

```java
public class SceneRender {
    ...
    public void render(Scene scene) {
        ...
        scene.getMeshMap().values().forEach(mesh -> {
                    glBindVertexArray(mesh.getVaoId());
                    glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
                }
        );
        ...
    }
    ...
}
```

The parameters of the `glDrawElements` method are:

* mode: Specifies the primitives for rendering, triangles in this case. No changes here.
* count: Specifies the number of elements to be rendered.
* type: Specifies the type of value in the indices data. In this case, we are using integers.
* indices: Specifies the offset to apply to the indices data to start rendering.

Now we can just create a new Mesh with the extra vertex parameters (colors) and the indices in the `Main` class:

```java
public class Main implements IAppLogic {
    ...
    public static void main(String[] args) {
        ...
        Engine gameEng = new Engine("chapter-04", new Window.WindowOptions(), main);
        ...
    }
    ...
    public void init(Window window, Scene scene, Render render) {
        float[] positions = new float[]{
                -0.5f, 0.5f, 0.0f,
                -0.5f, -0.5f, 0.0f,
                0.5f, -0.5f, 0.0f,
                0.5f, 0.5f, 0.0f,
        };
        float[] colors = new float[]{
                0.5f, 0.0f, 0.0f,
                0.0f, 0.5f, 0.0f,
                0.0f, 0.0f, 0.5f,
                0.0f, 0.5f, 0.5f,
        };
        int[] indices = new int[]{
                0, 1, 3, 3, 1, 2,
        };
        Mesh mesh = new Mesh(positions, colors, indices);
        scene.addMesh("quad", mesh);
    }
    ...
}
```

Now we need to modify the shaders, not because of the indices, but to use the color per vertex. The vertex shader (`scene.vert`) is like this:

```glsl
#version 330

layout (location=0) in vec3 position;
layout (location=1) in vec3 color;

out vec3 outColor;

void main()
{
    gl_Position = vec4(position, 1.0);
    outColor = color;
}
```

In the input parameters, you can see we receive a new `vec3` for the color, and we just pass it on to be used in the fragment shader (`scene.frag`), which looks like this:

```glsl
#version 330

in  vec3 outColor;
out vec4 fragColor;

void main()
{
    fragColor = vec4(outColor, 1.0);
}
```

We just use the input color parameter to set the fragment color. It is important to notice that the color value is interpolated across the quad when used in the fragment shader, so the result will look something like this:

![Colored quad](../.gitbook/assets/colored_quad.png)

[Next chapter](../chapter-05/chapter-05.md)


================================================
FILE: chapter-05/chapter-05.md
================================================
# Chapter 05 - Perspective projection

In this chapter, we will learn two important concepts: perspective projection (to render far away objects smaller than closer ones) and uniforms (a mechanism to pass additional data, common to many vertices, to the shaders).

You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-05).

## Perspective projection

Let’s get back to our nice colored quad we created in the previous chapter. If you look carefully, you will see that the quad is distorted and appears as a rectangle. You can even change the width of the window from 600 pixels to 900 and the distortion will be more evident. What’s happening here?

If you revisit our vertex shader code, we are just passing our coordinates through directly. That is, when we say that a vertex has a value of 0.5 for its x coordinate, we are telling OpenGL to draw it at x position 0.5 on our screen. The following figure shows the OpenGL coordinate system (just the x and y axes).

![Coordinates](../.gitbook/assets/coordinates.png)

Those coordinates are mapped, considering our window size, to window coordinates (which have the origin at the top-left corner of the previous figure). So, if our window has a size of 900x580, OpenGL coordinates (1,0) will be mapped to coordinates (900, 0) creating a rectangle instead of a quad.

![Rectangle](<../.gitbook/assets/rectangle (1).png>)

But the problem is more serious than that. Modify the z coordinate of our quad from 0.0 to 1.0 and to -1.0. What do you see? The quad is drawn in exactly the same place no matter how it is displaced along the z axis. Why is this happening? Objects that are further away should be drawn smaller than objects that are closer. But we are drawing them with the same x and y coordinates.

But, wait. Should this not be handled by the z coordinate? The answer is yes and no. The z coordinate tells OpenGL that an object is closer or farther away, but OpenGL does not know anything about the size of your object. You could have two objects of different sizes, one closer and smaller and one bigger and further that could be projected correctly onto the screen with the same size (those would have same x and y coordinates but different z). OpenGL just uses the coordinates we are passing, so we must take care of this. We need to correctly project our coordinates.

Now that we have diagnosed the problem, how do we fix it? The answer is using a perspective projection matrix. The perspective projection matrix will take care of the aspect ratio (the relation between width and height) of our drawing area so objects won’t be distorted. It will also handle the distance so objects far away from us will be drawn smaller. The projection matrix will also consider our field of view and the maximum distance to be displayed.

For those not familiar with matrices, a matrix is a bi-dimensional array of numbers arranged in rows and columns. Each number inside a matrix is called an element. The order of a matrix is its number of rows and columns. For instance, here you can see a 2x2 matrix (2 rows and 2 columns).

![2x2 Matrix](<../.gitbook/assets/2_2_matrix (1).png>)

Matrices have a number of basic operations that can be applied to them (such as addition, multiplication, etc.) that you can consult in any math book. The main characteristic of matrices, as far as 3D graphics is concerned, is that they are very useful for transforming points in space.

You can think about the projection matrix as a camera, which has a field of view and a minimum and maximum distance. The vision area of that camera will be obtained from a truncated pyramid. The following picture shows a top view of that area.

![Projection Matrix concepts](<../.gitbook/assets/projection_matrix (1).png>)

A projection matrix will correctly map 3D coordinates so they can be correctly represented on a 2D screen. The mathematical representation of that matrix is as follows (don’t be scared).

![Projection Matrix](<../.gitbook/assets/projection_matrix_eq (1).png>)

Where aspect ratio is the relation between our screen width and our screen height ($$a=width/height$$). In order to obtain the projected coordinates of a given point we just need to multiply the projection matrix by the original coordinates. The result will be another vector that will contain the projected version.
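
After that multiplication, OpenGL also performs the so-called perspective divide: the x, y and z components of the resulting vector are divided by its w component to obtain normalized device coordinates. Since w grows with the distance to the camera, this division is precisely what makes far away objects look smaller:

$$
p' = P \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \qquad p_{ndc} = \left( \frac{p'_x}{p'_w}, \frac{p'_y}{p'_w}, \frac{p'_z}{p'_w} \right)
$$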

So we need to handle a set of mathematical entities such as vectors and matrices, and the operations that can be performed on them. We could choose to write all that code on our own from scratch or use an existing library. We will choose the easy path and use a library specifically designed for math operations with LWJGL, called JOML (Java OpenGL Math Library). In order to use that library we just need to add another dependency to our `pom.xml` file.

```xml
        <dependency>
            <groupId>org.joml</groupId>
            <artifactId>joml</artifactId>
            <version>${joml.version}</version>
        </dependency>
```

Now that everything has been set up let’s define our projection matrix. We will create a new class named `Projection` which is defined like this:

```java
package org.lwjglb.engine.scene;

import org.joml.Matrix4f;

public class Projection {

    private static final float FOV = (float) Math.toRadians(60.0f);
    private static final float Z_FAR = 1000.f;
    private static final float Z_NEAR = 0.01f;

    private Matrix4f projMatrix;

    public Projection(int width, int height) {
        projMatrix = new Matrix4f();
        updateProjMatrix(width, height);
    }

    public Matrix4f getProjMatrix() {
        return projMatrix;
    }

    public void updateProjMatrix(int width, int height) {
        projMatrix.setPerspective(FOV, (float) width / height, Z_NEAR, Z_FAR);
    }
}
```

As you can see, it relies on the `Matrix4f` class (provided by the JOML library) which provides a method to set up a perspective projection matrix named `setPerspective`. This method needs the following parameters:

* Field of View: The Field of View angle in radians. We just use the `FOV` constant for that.
* Aspect Ratio: That is, the relationship between render width and height.
* Distance to the near plane (z-near)
* Distance to the far plane (z-far).

We will store a `Projection` class instance in the `Scene` class and initialize it in the constructor. In order to allow for the window to be resized, we will need to create a new method in the `Scene` class, named `resize` to recalculate the perspective projection matrix when window dimensions change.

```java
public class Scene {
    ...
    private Projection projection;

    public Scene(int width, int height) {
        ...
        projection = new Projection(width, height);
    }
    ...
    public Projection getProjection() {
        return projection;
    }

    public void resize(int width, int height) {
        projection.updateProjMatrix(width, height);
    }    
}
```

We need also to update the `Engine` to adapt it to the new `Scene` class constructor parameters and to invoke the `resize` method:

```java
public class Engine {
    ...
    public Engine(String windowTitle, Window.WindowOptions opts, IAppLogic appLogic) {
        ...
        scene = new Scene(window.getWidth(), window.getHeight());
        ...
    }
    ...
    private void resize() {
        scene.resize(window.getWidth(), window.getHeight());
    }
    ...
}
```

## Uniforms

Now that we have the infrastructure to calculate the perspective projection matrix, how do we use it? We need to use it in our shader, and it should be applied to all the vertices. At first, you might think of bundling it in the vertex input (like the coordinates and the colors). In that case we would be wasting lots of space, since the projection matrix is common to all vertices. You might also think of multiplying the vertices by the matrix in the Java code. But then our VBOs would be useless and we would not be using the processing power available in the graphics card.

The answer is to use “uniforms”. Uniforms are global GLSL variables that shaders can use and that we will employ to pass data that is common to all elements or to a model. So, let's start with how uniforms are used in shader programs. We need to modify our vertex shader code and declare a new uniform called `projectionMatrix` and use it to calculate the projected position.

```glsl
#version 330

layout (location=0) in vec3 position;
layout (location=1) in vec3 color;

out vec3 outColor;

uniform mat4 projectionMatrix;

void main()
{
    gl_Position = projectionMatrix * vec4(position, 1.0);
    outColor = color;
}
```

As you can see we define our `projectionMatrix` as a 4x4 matrix and the position is obtained by multiplying it by our original coordinates. Now we need to pass the values of the projection matrix to our shader. We will create a new class named `UniformMap` which will allow us to create references to the uniforms and set up their values. It starts like this:

```java
package org.lwjglb.engine.graph;

import org.joml.Matrix4f;
import org.lwjgl.system.MemoryStack;

import java.util.*;

import static org.lwjgl.opengl.GL20.*;

public class UniformsMap {

    private int programId;
    private Map<String, Integer> uniforms;

    public UniformsMap(int programId) {
        this.programId = programId;
        uniforms = new HashMap<>();
    }

    public void createUniform(String uniformName) {
        int uniformLocation = glGetUniformLocation(programId, uniformName);
        if (uniformLocation < 0) {
            throw new RuntimeException("Could not find uniform [" + uniformName + "] in shader program [" +
                    programId + "]");
        }
        uniforms.put(uniformName, uniformLocation);
    }
    ...
}
```

As you can see, the constructor receives the identifier of the shader program and defines a `Map` to store the references (`Integer` instances) to the uniforms, which are created in the `createUniform` method. Uniform references are retrieved by calling the `glGetUniformLocation` function, which receives two parameters:

* The shader program identifier.
* The name of the uniform (it should match the one defined in the shader code).

As you can see, uniform creation is independent of the data type associated with it. We will need separate methods for the different types when we want to set the data for a uniform. For now, we just need a method to load a 4x4 matrix:

```java
public class UniformsMap {
    ...
    public void setUniform(String uniformName, Matrix4f value) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            Integer location = uniforms.get(uniformName);
            if (location == null) {
                throw new RuntimeException("Could not find uniform [" + uniformName + "]");
            }
            glUniformMatrix4fv(location.intValue(), false, value.get(stack.mallocFloat(16)));
        }
    }
}
```

Now, we can use the code above in the `SceneRender` class:

```java
public class SceneRender {
    ...
    private UniformsMap uniformsMap;

    public SceneRender() {
        ...
        createUniforms();
    }
    ...
    private void createUniforms() {
        uniformsMap = new UniformsMap(shaderProgram.getProgramId());
        uniformsMap.createUniform("projectionMatrix");
    }
    ...
    public void render(Scene scene) {
        ...
        uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
        ...
    }
}
```

We are almost done. We can now show the quad correctly rendered. So you can now launch your program and you will obtain a... black background without any coloured quad. What’s happening? Did we break something? Well, actually no. Remember that we are now simulating the effect of a camera looking at our scene. And we provided two distances: one to the farthest plane (equal to 1000f) and one to the closest plane (equal to 0.01f). Our coordinates were:

```java
float[] positions = new float[]{
    -0.5f,  0.5f, 0.0f,
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.5f,  0.5f, 0.0f,
};
```

That is, our z coordinates are outside the visible zone. Let’s assign them a value of `-0.05f`. Now you will see a giant square like this:

![Square 1](../.gitbook/assets/square_1.png)

What is happening now is that we are drawing the quad too close to our camera. We are actually zooming into it. If we assign now a value of `-1.0f` to the z coordinate we can now see our coloured quad.

```java
public class Main implements IAppLogic {
    ...
    public static void main(String[] args) {
        ...
        Engine gameEng = new Engine("chapter-05", new Window.WindowOptions(), main);
        ...
    }
    ...
    public void init(Window window, Scene scene, Render render) {
        float[] positions = new float[]{
                -0.5f, 0.5f, -1.0f,
                -0.5f, -0.5f, -1.0f,
                0.5f, -0.5f, -1.0f,
                0.5f, 0.5f, -1.0f,
        };
        ...
    }
    ...
}
```

![Square coloured](../.gitbook/assets/square_colored.png)

If we continue pushing the quad backwards we will see it becoming smaller. Notice also that our quad does not appear as a rectangle anymore.

[Next chapter](../chapter-06/chapter-06.md)


================================================
FILE: chapter-06/chapter-06.md
================================================
# Chapter 06 - Going 3D

In this chapter we will set up the basis for 3D models and explain the concept of model transformations in order to render our first 3D shape, a rotating cube.

You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-06).

## Models and Entities

Let us first define the concept of a 3D model. Up to now, we have been working with meshes (collections of vertices). A model is a structure which glues together vertices, colors, textures and materials. A model may be composed of several meshes and can be used by several game entities. A game entity represents a player, an enemy, an obstacle, anything that is part of the 3D scene. In this book we will assume that an entity is always related to a model (although you can have entities that are not rendered and therefore do not have a model). An entity has specific data, such as a position, which we need to use when rendering. You will see later on that we start the render process by getting the models and then drawing the entities associated with each model. This is for efficiency: since several entities can share the same model, it is better to set up the elements that belong to the model once and later on handle the data that is specific to each entity.

The class that represents a model is, obviously, called `Model` and is defined like this:

```java
package org.lwjglb.engine.graph;

import org.lwjglb.engine.scene.Entity;

import java.util.*;

public class Model {

    private final String id;
    private List<Entity> entitiesList;
    private List<Mesh> meshList;

    public Model(String id, List<Mesh> meshList) {
        this.id = id;
        this.meshList = meshList;
        entitiesList = new ArrayList<>();
    }

    public void cleanup() {
        meshList.forEach(Mesh::cleanup);
    }

    public List<Entity> getEntitiesList() {
        return entitiesList;
    }

    public String getId() {
        return id;
    }

    public List<Mesh> getMeshList() {
        return meshList;
    }

}
```

As you can see, a model, for now, stores a list of `Mesh` instances and has a unique identifier. In addition to that, we store the list of game entities (modelled by the `Entity` class) that are associated with that model. If you are going to create a full engine, you may want to store those relationships somewhere else (not in the model), but, for simplicity, we will store those links in the `Model` class. This way, the render process will be simpler.

In order to represent models in a 3D scene, we'd like to provide support for some basic operations:

* Translation: Move an object in some direction by some amount.
* Rotation: Rotate an object by some angle around any of the three axes.
* Scale: Adjust the size of an object.

![Transformations](<../.gitbook/assets/transformations (1).png>)

The operations described above are known as transformations. You probably guessed--correctly--that the way we'll achieve that is by multiplying our coordinates by a sequence of matrices: one for performing a translation, one for rotation and one for scaling. Those three matrices will be combined into a single matrix called a transformation matrix and passed as a uniform to our vertex shader.

Generally, 3D models exist in their own space. A 3D artist would probably choose to model a pancake with the center of the pancake at the origin and the flat sides pointing straight up and down. When you place the model in your scene, though, you might want it to be somewhere else, like on a griddle, or in the air, being flipped. You might even want more than one! In that case, instead of having a different 3D file for each different pancake, it would be useful to have just one pancake file that we render in a different position for each pancake we want to draw. This is why model matrices are so standard: easier model creation, easier placing of the model in space, and the ability to render many instances of the same model.

Such a matrix, comprised of three transformations, will appear twice: once as the world matrix, used for converting the coordinates of the vertices of your model (your centered, straight-up-and-down pancake) to where you want them to be in your 3D space (a pancake flying through the air); and once as the view matrix, used to render the scene from the point of view of a camera. This second use will be discussed later, in chapter 8. For now, let's just worry about the world matrix.

That world matrix will be calculated like this:

$$
World Matrix=\left[Translation Matrix\right]\left[Rotation Matrix\right]\left[Scale Matrix\right]
$$

Transformation matrices are applied right to left. So here, the coordinates are first scaled, then rotated, and finally translated. While the relative order of the rotation and scale matrices does not matter here (we scale evenly along all axes), we want the translation matrix to be applied last: since we rotate about the origin, the position of the object would otherwise be rotated too!
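
With JOML, this chain of multiplications can be written as in the following sketch (where `position` is a `Vector3f`, `rotation` a `Quaternionf` and `scale` a float); it is equivalent to the `translationRotateScale` call we will use later:

```java
// Methods chained on a JOML matrix are applied right to left to a vector:
// vertices are scaled first, then rotated, and finally translated.
Matrix4f worldMatrix = new Matrix4f()
        .translate(position) // applied last
        .rotate(rotation)    // applied second
        .scale(scale);       // applied first
```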

If we include our projection matrix in the transformation matrix it would be like this:

$$
\begin{array}{lcl} Transf & = & \left[Proj Matrix\right]\left[Translation Matrix\right]\left[Rotation Matrix\right]\left[Scale Matrix\right] \\ & = & \left[Proj Matrix\right]\left[World Matrix\right] \end{array}
$$

The translation matrix is defined like this:

$$
\begin{bmatrix} 1 & 0 & 0 & dx \\ 0 & 1 & 0 & dy \\ 0 & 0 & 1 & dz \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

Translation Matrix Parameters:

* dx: Displacement along the x axis.
* dy: Displacement along the y axis.
* dz: Displacement along the z axis.

The scale matrix is defined like this:

$$
\begin{bmatrix} sx & 0 & 0 & 0 \\ 0 & sy & 0 & 0 \\ 0 & 0 & sz & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

Scale Matrix Parameters:

* sx: Scaling along the x axis.
* sy: Scaling along the y axis.
* sz: Scaling along the z axis.

The rotation matrix is much more complex. But keep in mind that it can be constructed by multiplying 3 rotation matrices, one for each axis, or by applying a quaternion (more on this later).
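
For reference, the rotation matrix around the z axis by an angle $$\theta$$ looks like this (the versions for the x and y axes are analogous):

$$
R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$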

Now, let's define the `Entity` class:

```java
package org.lwjglb.engine.scene;

import org.joml.*;

public class Entity {

    private final String id;
    private final String modelId;
    private Matrix4f modelMatrix;
    private Vector3f position;
    private Quaternionf rotation;
    private float scale;

    public Entity(String id, String modelId) {
        this.id = id;
        this.modelId = modelId;
        modelMatrix = new Matrix4f();
        position = new Vector3f();
        rotation = new Quaternionf();
        scale = 1;
    }

    public String getId() {
        return id;
    }

    public String getModelId() {
        return modelId;
    }

    public Matrix4f getModelMatrix() {
        return modelMatrix;
    }

    public Vector3f getPosition() {
        return position;
    }

    public Quaternionf getRotation() {
        return rotation;
    }

    public float getScale() {
        return scale;
    }

    public final void setPosition(float x, float y, float z) {
        position.x = x;
        position.y = y;
        position.z = z;
    }

    public void setRotation(float x, float y, float z, float angle) {
        this.rotation.fromAxisAngleRad(x, y, z, angle);
    }

    public void setScale(float scale) {
        this.scale = scale;
    }

    public void updateModelMatrix() {
        modelMatrix.translationRotateScale(position, rotation, scale);
    }
}
```

As you can see, an `Entity` instance has a unique ID and defines attributes for its position (as a 3-component vector), its scale (just a float; we will assume that we scale evenly across all three axes) and its rotation (as a quaternion). We could have stored the rotation information as rotation angles for pitch, yaw and roll, but instead we are using a somewhat strange mathematical object, called a quaternion, that you may not have heard of. The problem with using rotation angles (called Euler angles) is the so-called gimbal lock: when applying rotations using these angles we may end up aligning two rotation axes and losing degrees of freedom, resulting in the inability to properly rotate an object. Quaternions do not have this problem. Instead of me trying to poorly explain what quaternions are, let me just link to an excellent [blog entry](https://www.3dgep.com/understanding-quaternions/) which explains all the concepts behind them. All you need to know about quaternions, for now, is that they represent a rotation without the pitfalls of Euler angles.

All the transformations applied to an entity are combined in a 4x4 matrix; therefore, an `Entity` instance stores a `Matrix4f` for that model matrix. It is constructed using the JOML method `translationRotateScale` (taking position, rotation and scale). We will need to call the `updateModelMatrix` method each time we modify the attributes of an `Entity` instance, to update that matrix.
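
For instance, placing an entity in the scene would look like this (a usage sketch of the class above):

```java
// Create an entity linked to a model, position it and rebuild its model matrix
Entity cubeEntity = new Entity("cube-entity", "cube-model");
cubeEntity.setPosition(0, 0, -2);
cubeEntity.setRotation(0, 1, 0, (float) Math.toRadians(45.0));
cubeEntity.updateModelMatrix(); // required after changing position, rotation or scale
```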

## Other code changes

We need to change the `Scene` class to store models instead of `Mesh` instances. In addition to that, we need to add support for linking `Entity` instances with models so we can later on render them.

```java
package org.lwjglb.engine.scene;

import org.lwjglb.engine.graph.Model;

import java.util.*;

public class Scene {

    private Map<String, Model> modelMap;
    private Projection projection;

    public Scene(int width, int height) {
        modelMap = new HashMap<>();
        projection = new Projection(width, height);
    }

    public void addEntity(Entity entity) {
        String modelId = entity.getModelId();
        Model model = modelMap.get(modelId);
        if (model == null) {
            throw new RuntimeException("Could not find model [" + modelId + "]");
        }
        model.getEntitiesList().add(entity);
    }

    public void addModel(Model model) {
        modelMap.put(model.getId(), model);
    }

    public void cleanup() {
        modelMap.values().forEach(Model::cleanup);
    }

    public Map<String, Model> getModelMap() {
        return modelMap;
    }

    public Projection getProjection() {
        return projection;
    }

    public void resize(int width, int height) {
        projection.updateProjMatrix(width, height);
    }
}
```

Now we need to modify the `SceneRender` class a little. The first thing that we need to do is to pass the model matrix information to the shader through a uniform. Therefore, we will create a new uniform named `modelMatrix` in the vertex shader and, consequently, retrieve its location in the `createUniforms` method.

```java
public class SceneRender {
    ...
    private void createUniforms() {
        ...
        uniformsMap.createUniform("modelMatrix");
    }
    ...
}
```

The next step is to modify the `render` method to change how we access models and set up properly the model matrix uniform:

```java
public class SceneRender {
    ...
    public void render(Scene scene) {
        shaderProgram.bind();

        uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());

        Collection<Model> models = scene.getModelMap().values();
        for (Model model : models) {
            model.getMeshList().stream().forEach(mesh -> {
                glBindVertexArray(mesh.getVaoId());
                List<Entity> entities = model.getEntitiesList();
                for (Entity entity : entities) {
                    uniformsMap.setUniform("modelMatrix", entity.getModelMatrix());
                    glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
                }
            });
        }

        glBindVertexArray(0);

        shaderProgram.unbind();
    }
    ...
}
```

As you can see, we iterate over the models, then over their meshes, binding each mesh's VAO, and after that we get the associated entities. For each entity, prior to invoking the drawing call, we fill the `modelMatrix` uniform with the proper data.

The next step is to modify the vertex shader to use the `modelMatrix` uniform.

```glsl
#version 330

layout (location=0) in vec3 position;
layout (location=1) in vec3 color;

out vec3 outColor;

uniform mat4 projectionMatrix;
uniform mat4 modelMatrix;

void main()
{
    gl_Position = projectionMatrix * modelMatrix * vec4(position, 1.0);
    outColor = color;
}
```

As you can see, the shader code is almost identical; we now also multiply by the `modelMatrix` uniform so our coordinates are correctly projected, taking into consideration our frustum, position, scale and rotation information. Another important thing to think about is: why don’t we pass the translation, rotation and scale matrices separately instead of combining them into a world matrix? The reason is that we should try to limit the number of matrices we use in our shaders. Also keep in mind that the matrix multiplications that we do in our shader are performed once per vertex. The projection matrix does not change between render calls, and the world matrix is the same for all the vertices of an `Entity` instance. If we passed the translation, rotation and scale matrices independently we would be doing many more matrix multiplications. Think about a model with tons of vertices. That’s a lot of extra operations.

But you may now wonder: if the world matrix is the same for all the vertices of an `Entity` instance, why don’t we do the matrix multiplication in our Java code? We could multiply the projection matrix and the model matrix just once per `Entity` and send the result as a single uniform. In that case we would save many more operations, right? The answer is that this is a valid point for now, but when we add more features to our game engine we will need to operate with world coordinates in the shaders anyway, so it’s better to handle those two matrices independently.

Finally, another very important aspect to remark on is the order of multiplication of the matrices. We first need to multiply the position by the model matrix; that is, we transform model coordinates into world space coordinates. After that, we apply the projection. Keep in mind that matrix multiplication is not commutative, so order is very important.

Now we need to modify the `Main` class to adapt it to the new way of loading models and entities, and to set up the coordinates and indices of a 3D cube.

```java
public class Main implements IAppLogic {

    private Entity cubeEntity;
    private Vector4f displInc = new Vector4f();
    private float rotation;

    public static void main(String[] args) {
        ...
        Engine gameEng = new Engine("chapter-06", new Window.WindowOptions(), main);
        ...
    }
    ...
    @Override
    public void init(Window window, Scene scene, Render render) {
        float[] positions = new float[]{
                // VO
                -0.5f, 0.5f, 0.5f,
                // V1
                -0.5f, -0.5f, 0.5f,
                // V2
                0.5f, -0.5f, 0.5f,
                // V3
                0.5f, 0.5f, 0.5f,
                // V4
                -0.5f, 0.5f, -0.5f,
                // V5
                0.5f, 0.5f, -0.5f,
                // V6
                -0.5f, -0.5f, -0.5f,
                // V7
                0.5f, -0.5f, -0.5f,
        };
        float[] colors = new float[]{
                0.5f, 0.0f, 0.0f,
                0.0f, 0.5f, 0.0f,
                0.0f, 0.0f, 0.5f,
                0.0f, 0.5f, 0.5f,
                0.5f, 0.0f, 0.0f,
                0.0f, 0.5f, 0.0f,
                0.0f, 0.0f, 0.5f,
                0.0f, 0.5f, 0.5f,
        };
        int[] indices = new int[]{
                // Front face
                0, 1, 3, 3, 1, 2,
                // Top Face
                4, 0, 3, 5, 4, 3,
                // Right face
                3, 2, 7, 5, 3, 7,
                // Left face
                6, 1, 0, 6, 0, 4,
                // Bottom face
                2, 1, 6, 2, 6, 7,
                // Back face
                7, 6, 4, 7, 4, 5,
        };
        List<Mesh> meshList = new ArrayList<>();
        Mesh mesh = new Mesh(positions, colors, indices);
        meshList.add(mesh);
        String cubeModelId = "cube-model";
        Model model = new Model(cubeModelId, meshList);
        scene.addModel(model);

        cubeEntity = new Entity("cube-entity", cubeModelId);
        cubeEntity.setPosition(0, 0, -2);
        scene.addEntity(cubeEntity);
    }
    ...
}
```

In order to draw a cube we just need to define eight vertices. We'll define the new vertices in the `positions` array, making sure to keep the `colors` array at length 8 (one color per vertex).

![Cube coords](<../.gitbook/assets/cube_coords (1).png>)

Since a cube is made of six faces we need to draw twelve triangles (two per face), so we need to update the indices array. Remember that triangles must be defined in counter-clockwise order. Doing this by hand is tedious, and it's easy to mess up. Here's how to do it: always put the face that you want to define indices for in front of you, then identify the vertices and define the triangles in counter-clockwise order. Finally, we create a model with just one mesh and an entity associated with that model.

We will first use the `input` method to modify the cube position with the cursor arrows and its scale with the `Z` and `X` keys. We just need to detect which key has been pressed, update the cube entity position and/or scale, and, finally, update its model matrix.

```java
public class Main implements IAppLogic {
    ...
    public void input(Window window, Scene scene, long diffTimeMillis) {
        displInc.zero();
        if (window.isKeyPressed(GLFW_KEY_UP)) {
            displInc.y = 1;
        } else if (window.isKeyPressed(GLFW_KEY_DOWN)) {
            displInc.y = -1;
        }
        if (window.isKeyPressed(GLFW_KEY_LEFT)) {
            displInc.x = -1;
        } else if (window.isKeyPressed(GLFW_KEY_RIGHT)) {
            displInc.x = 1;
        }
        if (window.isKeyPressed(GLFW_KEY_A)) {
            displInc.z = -1;
        } else if (window.isKeyPressed(GLFW_KEY_Q)) {
            displInc.z = 1;
        }
        if (window.isKeyPressed(GLFW_KEY_Z)) {
            displInc.w = -1;
        } else if (window.isKeyPressed(GLFW_KEY_X)) {
            displInc.w = 1;
        }

        displInc.mul(diffTimeMillis / 1000.0f);

        Vector3f entityPos = cubeEntity.getPosition();
        cubeEntity.setPosition(displInc.x + entityPos.x, displInc.y + entityPos.y, displInc.z + entityPos.z);
        cubeEntity.setScale(cubeEntity.getScale() + displInc.w);
        cubeEntity.updateModelMatrix();
    }
    ...
}
```

In order to get a better view of the cube, we'll have the model in the `Main` class rotate around the three axes. We will do this in the `update` method.

```java
public class Main implements IAppLogic {
    ...
    public void update(Window window, Scene scene, long diffTimeMillis) {
        rotation += 1.5;
        if (rotation > 360) {
            rotation = 0;
        }
        cubeEntity.setRotation(1, 1, 1, (float) Math.toRadians(rotation));
        cubeEntity.updateModelMatrix();
    }
    ...
}
```

And that’s all. We are now able to display a spinning 3D cube! You can now compile and run your example and you will obtain something like this.

![Cube with no depth tests](<../.gitbook/assets/cube_no_depth_test (1).png>)

There is something weird with this cube. Some faces are not being painted correctly. What is happening? The reason the cube looks like this is that the triangles that compose it are being drawn in a somewhat arbitrary order. Fragments that are closer to the camera should overwrite fragments that are farther away. This is not happening right now, and in order to achieve it we must enable depth testing.

This can be done in the constructor of the `Render` class:

```java
public class Render {
    ...
    public Render() {
        GL.createCapabilities();
        glEnable(GL_DEPTH_TEST);
        ...
    }
    ...
}
```

Now our cube is being rendered correctly!

![Cube with depth test](<../.gitbook/assets/cube_depth_test (1).png>)

[Next chapter](../chapter-07/chapter-07.md)


================================================
FILE: chapter-07/chapter-07.md
================================================
# Chapter 07 - Textures

In this chapter we will learn how to load textures, how they relate to a model, and how to use them in the rendering process.

You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-07).

## Texture loading

A texture is an image which is mapped to a model to set the color of the pixels of the model. You can think of a texture as a skin that is wrapped around your 3D model. What you do is assign points in the image texture to the vertices in your model. With that information, OpenGL is able to calculate the color to apply to the other pixels based on the texture image.

![Texture mapping](<../.gitbook/assets/texture_mapping (1).png>)

The texture image does not need to match the model's size on screen: it can be larger or smaller. OpenGL will interpolate (filter) the color if the pixel to be processed cannot be mapped exactly to a specific point in the texture. You can control how this filtering is done when a specific texture is created.

In order to apply a texture to a model, we must assign texture coordinates to each of our vertices. The texture coordinate system is a bit different from the coordinate system of our model. First of all, we have a 2D texture so our coordinates will only have two components, x and y. Besides that, the origin is set up in the top left corner of the image and the maximum value of either coordinate is 1.

![Texture coordinates](<../.gitbook/assets/texture_coordinates (1).png>)
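
For instance, a quad that displays the whole texture would use coordinates like these (a minimal sketch; `quadTextCoords` is just an illustrative name, not part of the book's code):

```java
// Texture coordinates for a quad that shows the entire image:
// (0, 0) is the top-left corner of the texture, (1, 1) the bottom-right one.
float[] quadTextCoords = new float[]{
        0.0f, 0.0f, // top-left vertex
        0.0f, 1.0f, // bottom-left vertex
        1.0f, 1.0f, // bottom-right vertex
        1.0f, 0.0f, // top-right vertex
};
```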

How do we relate texture coordinates with our position coordinates? Easy, in the same way we passed the color information. We set up a VBO which will have a texture coordinate for each vertex position.

So let’s start modifying the code base to use textures in our 3D cube. The first step is to load the image that will be used as a texture. For this task, we will use the LWJGL wrapper for the [stb](https://github.com/nothings/stb) library. In order to do that, we first need to declare that dependency, including the natives, in our `pom.xml` file.

```xml
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-stb</artifactId>
    <version>${lwjgl.version}</version>
</dependency>
[...]
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-stb</artifactId>
    <version>${lwjgl.version}</version>
    <classifier>${native.target}</classifier>
    <scope>runtime</scope>
</dependency>
```

The first step is to create a new `Texture` class that performs all the necessary steps to load a texture. It is defined like this:

```java
package org.lwjglb.engine.graph;

import org.lwjgl.system.MemoryStack;

import java.nio.*;

import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.stb.STBImage.*;

public class Texture {

    private int textureId;
    private String texturePath;

    public Texture(int width, int height, ByteBuffer buf) {
        this.texturePath = "";
        generateTexture(width, height, buf);
    }

    public Texture(String texturePath) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            this.texturePath = texturePath;
            IntBuffer w = stack.mallocInt(1);
            IntBuffer h = stack.mallocInt(1);
            IntBuffer channels = stack.mallocInt(1);

            ByteBuffer buf = stbi_load(texturePath, w, h, channels, 4);
            if (buf == null) {
                throw new RuntimeException("Image file [" + texturePath + "] not loaded: " + stbi_failure_reason());
            }

            int width = w.get();
            int height = h.get();

            generateTexture(width, height, buf);

            stbi_image_free(buf);
        }
    }

    public void bind() {
        glBindTexture(GL_TEXTURE_2D, textureId);
    }

    public void cleanup() {
        glDeleteTextures(textureId);
    }

    private void generateTexture(int width, int height, ByteBuffer buf) {
        textureId = glGenTextures();

        glBindTexture(GL_TEXTURE_2D, textureId);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, buf);
        glGenerateMipmap(GL_TEXTURE_2D);
    }

    public String getTexturePath() {
        return texturePath;
    }
}
```

The first thing we do in the constructor is to allocate `IntBuffer`s for the library to return the image size and number of channels. Then, we call the `stbi_load` method to actually load the image into a `ByteBuffer`. This method requires the following parameters:

* `filePath`: The absolute path to the file. The stb library is native and does not understand anything about `CLASSPATH`. Therefore, we will be using regular file system paths.
* `width`: Image width. This will be populated with the image width.
* `height`: Image height. This will be populated with the image height.
* `channels`: The image channels.
* `desired_channels`: The desired image channels. We pass 4 (RGBA).

One important thing to remember is that OpenGL, for historical reasons, required texture images to have a size (number of texels in each dimension) that is a power of two (2, 4, 8, 16, ...). Modern OpenGL drivers no longer require this, but if you have issues you can try adjusting the dimensions. Texels are to textures as pixels are to pictures.

The next step is to upload the texture into the GPU. This will be done in the `generateTexture` method. First of all, we need to create a new texture identifier (by calling the `glGenTextures` function). After that, we need to bind to that texture (by calling `glBindTexture`). Then, we need to tell OpenGL how to unpack our RGBA bytes. Since each component is one byte in size, we set `GL_UNPACK_ALIGNMENT` to `1` with the `glPixelStorei` function. Finally, we upload the texture data by calling `glTexImage2D`.

The `glTexImage2D` method has the following parameters:

* `target`: Specifies the target texture (its type). In this case: `GL_TEXTURE_2D`.
* `level`: Specifies the level-of-detail number. Level 0 is the base image level. Level n is the nth mipmap reduction image. More on this later.
* `internal format`: Specifies the number of colour components in the texture.
* `width`: Specifies the width of the texture image.
* `height`: Specifies the height of the texture image.
* `border`: This value must be zero.
* `format`: Specifies the format of the pixel data: RGBA in this case.
* `type`: Specifies the data type of the pixel data. We are using unsigned bytes for this.
* `data`: The buffer that stores our data.

After that, by calling the `glTexParameteri` function, we basically say that when a pixel is drawn with no direct one-to-one association to a texture coordinate, it will pick the nearest texel. After that, we generate mipmaps. A mipmap is a set of decreasing resolution images generated from a high detailed texture. These lower resolution images will be used automatically when our object is scaled down. We generate them by calling the `glGenerateMipmap` function.
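
If `GL_NEAREST` looks too blocky for your textures, a common alternative is trilinear filtering, which actually samples the mipmaps we have just generated. A minimal sketch (the same `glTexParameteri` calls as above, just with different filter modes):

```java
// Alternative filtering: blend between mipmap levels when minifying,
// and interpolate linearly between texels when magnifying.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```

And that’s all: we have successfully loaded our texture. Now we need to use it.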

Now we will create a texture cache. Models frequently reuse the same texture; therefore, instead of loading the same texture multiple times, we will cache already loaded textures so each one is loaded just once. This will be controlled by the `TextureCache` class:

```java
package org.lwjglb.engine.graph;

import java.util.*;

public class TextureCache {

    public static final String DEFAULT_TEXTURE = "resources/models/default/default_texture.png";

    private Map<String, Texture> textureMap;

    public TextureCache() {
        textureMap = new HashMap<>();
        textureMap.put(DEFAULT_TEXTURE, new Texture(DEFAULT_TEXTURE));
    }

    public void cleanup() {
        textureMap.values().forEach(Texture::cleanup);
    }

    public Texture createTexture(String texturePath) {
        return textureMap.computeIfAbsent(texturePath, Texture::new);
    }

    public Texture getTexture(String texturePath) {
        Texture texture = null;
        if (texturePath != null) {
            texture = textureMap.get(texturePath);
        }
        if (texture == null) {
            texture = textureMap.get(DEFAULT_TEXTURE);
        }
        return texture;
    }
}
```

As you can see, we just store the loaded textures in a `Map` and return a default texture in case the texture path is null (models with no textures). The default texture is just a black image; for models that define colors instead of textures, we can combine the default texture and the material color in the fragment shader. The `TextureCache` class instance will be stored in the `Scene` class:

```java
public class Scene {
    ...
    private TextureCache textureCache;
    ...
    public Scene(int width, int height) {
        ...
        textureCache = new TextureCache();
    }
    ...
    public TextureCache getTextureCache() {
        return textureCache;
    }
    ...
}
```

Now we need to change the way we define models to add support for textures. In order to do so (and to prepare for the more complex models we are going to load in future chapters), we will introduce a new class named `Material`. This class will hold a texture path and a list of `Mesh` instances. Therefore, we will associate `Model` instances with a `List` of `Material` instances instead of `Mesh` instances. In future chapters, materials will gain other properties, such as diffuse or specular colors.

The `Material` class is defined like this:

```java
package org.lwjglb.engine.graph;

import java.util.*;

public class Material {

    private List<Mesh> meshList;
    private String texturePath;

    public Material() {
        meshList = new ArrayList<>();
    }

    public void cleanup() {
        meshList.forEach(Mesh::cleanup);
    }

    public List<Mesh> getMeshList() {
        return meshList;
    }

    public String getTexturePath() {
        return texturePath;
    }

    public void setTexturePath(String texturePath) {
        this.texturePath = texturePath;
    }
}
```

As you can see, `Mesh` instances are now under the `Material` class. Therefore, we need to modify the `Model` class like this:

```java
package org.lwjglb.engine.graph;

import org.lwjglb.engine.scene.Entity;

import java.util.*;

public class Model {

    private final String id;
    private List<Entity> entitiesList;
    private List<Material> materialList;

    public Model(String id, List<Material> materialList) {
        this.id = id;
        entitiesList = new ArrayList<>();
        this.materialList = materialList;
    }

    public void cleanup() {
        materialList.forEach(Material::cleanup);
    }

    public List<Entity> getEntitiesList() {
        return entitiesList;
    }

    public String getId() {
        return id;
    }

    public List<Material> getMaterialList() {
        return materialList;
    }
}
```

As we said before, we need to pass texture coordinates as another VBO. So we will modify our `Mesh` class to accept an array of floats that contains texture coordinates instead of colors. The `Mesh` class is modified like this:

```java
public class Mesh {
    ...
    public Mesh(float[] positions, float[] textCoords, int[] indices) {
        numVertices = indices.length;
        vboIdList = new ArrayList<>();

        vaoId = glGenVertexArrays();
        glBindVertexArray(vaoId);

        // Positions VBO
        int vboId = glGenBuffers();
        vboIdList.add(vboId);
        FloatBuffer positionsBuffer = MemoryUtil.memCallocFloat(positions.length);
        positionsBuffer.put(0, positions);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBufferData(GL_ARRAY_BUFFER, positionsBuffer, GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);

        // Texture coordinates VBO
        vboId = glGenBuffers();
        vboIdList.add(vboId);
        FloatBuffer textCoordsBuffer = MemoryUtil.memCallocFloat(textCoords.length);
        textCoordsBuffer.put(0, textCoords);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBufferData(GL_ARRAY_BUFFER, textCoordsBuffer, GL_STATIC_DRAW);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);

        // Index VBO
        vboId = glGenBuffers();
        vboIdList.add(vboId);
        IntBuffer indicesBuffer = MemoryUtil.memCallocInt(indices.length);
        indicesBuffer.put(0, indices);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL_STATIC_DRAW);

        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArray(0);

        MemoryUtil.memFree(positionsBuffer);
        MemoryUtil.memFree(textCoordsBuffer);
        MemoryUtil.memFree(indicesBuffer);
    }
    ...
}
```

## Using the textures

Now we need to use the texture in our shaders. In the vertex shader, we have changed the second input attribute from a `vec3` color to a `vec2` texture coordinate (`texCoord`), and the output is now `outTextCoord` instead of the previous color. The vertex shader, as in the color case, just passes the texture coordinates on to the fragment shader.

```glsl
#version 330

layout (location=0) in vec3 position;
layout (location=1) in vec2 texCoord;

out vec2 outTextCoord;

uniform mat4 projectionMatrix;
uniform mat4 modelMatrix;

void main()
{
    gl_Position = projectionMatrix * modelMatrix * vec4(position, 1.0);
    outTextCoord = texCoord;
}
```

In the fragment shader, we use the texture coordinates to set the pixel colors by sampling the texture (through a `sampler2D` uniform):

```glsl
#version 330

in vec2 outTextCoord;

out vec4 fragColor;

uniform sampler2D txtSampler;

void main()
{
    fragColor = texture(txtSampler, outTextCoord);
}
```

We will see now how all of this is used in the `SceneRender` class. First, we need to create a new uniform for the texture sampler.

```java
public class SceneRender {
    ...
    private void createUniforms() {
        ...
        uniformsMap.createUniform("txtSampler");
    }
    ...
}
```

Now, we can use the texture in the render process:

```java
public class SceneRender {
    ...
    public void render(Scene scene) {
        shaderProgram.bind();

        uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());

        uniformsMap.setUniform("txtSampler", 0);

        Collection<Model> models = scene.getModelMap().values();
        TextureCache textureCache = scene.getTextureCache();
        for (Model model : models) {
            List<Entity> entities = model.getEntitiesList();

            for (Material material : model.getMaterialList()) {
                Texture texture = textureCache.getTexture(material.getTexturePath());
                glActiveTexture(GL_TEXTURE0);
                texture.bind();

                for (Mesh mesh : material.getMeshList()) {
                    glBindVertexArray(mesh.getVaoId());
                    for (Entity entity : entities) {
                        uniformsMap.setUniform("modelMatrix", entity.getModelMatrix());
                        glDrawElements(GL_TRIANGLES, mesh.getNumVertices(), GL_UNSIGNED_INT, 0);
                    }
                }
            }
        }

        glBindVertexArray(0);

        shaderProgram.unbind();
    }
}
```

As you can see, we first set the texture sampler uniform to the value `0`. Let's explain why we do this. A graphics card has several spaces or slots to store textures; each of these slots is called a texture unit. When we are working with textures we must set the texture unit that we want to work with. In this case, we are using just one texture, so we will use texture unit `0`. The uniform has a `sampler2D` type and holds the index of the texture unit that we want to work with. When we iterate over models and materials, we get the texture associated to each material from the cache, activate texture unit `0` by calling the `glActiveTexture` function with the parameter `GL_TEXTURE0`, and bind the texture to that unit.
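
For illustration, if we later used two textures at once (say, a color texture and a normal map), each would be bound to its own unit, and each sampler uniform would receive the unit index, not the texture identifier. A hypothetical sketch (`colorTexture`, `normalMapTexture` and `normalSampler` are made-up names, not part of this chapter's code):

```java
// Hypothetical two-texture setup: one texture per unit; sampler uniforms hold unit indices.
glActiveTexture(GL_TEXTURE0);
colorTexture.bind();          // bound to unit 0
glActiveTexture(GL_TEXTURE1);
normalMapTexture.bind();      // bound to unit 1
uniformsMap.setUniform("txtSampler", 0);    // sampler2D reads from unit 0
uniformsMap.setUniform("normalSampler", 1); // sampler2D reads from unit 1
```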

We also need to modify the `UniformsMap` class to add a new method which accepts an integer, used to set the sampler value. This method is also called `setUniform`, but it accepts the name of the uniform and an integer value. Since we would otherwise repeat code between the `setUniform` method used for matrices and this new one, we extract the part that retrieves the uniform location into a new method named `getUniformLocation`. The changes in the `UniformsMap` class are shown below:

```java
public class UniformsMap {
    ...
    private int getUniformLocation(String uniformName) {
        Integer location = uniforms.get(uniformName);
        if (location == null) {
            throw new RuntimeException("Could not find uniform [" + uniformName + "]");
        }
        return location.intValue();
    }

    public void setUniform(String uniformName, int value) {
        glUniform1i(getUniformLocation(uniformName), value);
    }

    public void setUniform(String uniformName, Matrix4f value) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            glUniformMatrix4fv(getUniformLocation(uniformName), false, value.get(stack.mallocFloat(16)));
        }
    }
    ...
}
```

So far, we have just modified our code base to support textures. Now we need to set up the texture coordinates for our 3D cube. Our texture image file will look something like this:

![Cube texture](<../.gitbook/assets/cube_texture (1).png>)

Our 3D model has eight vertices. Let's see how texture coordinates can be assigned to them. Let's first define the front face texture coordinates for each vertex.

![Texture coordinates front face](<../.gitbook/assets/cube_texture_front_face (1).png>)

| Vertex | Texture Coordinate |
| ------ | ------------------ |
| V0     | (0.0, 0.0)         |
| V1     | (0.0, 0.5)         |
| V2     | (0.5, 0.5)         |
| V3     | (0.5, 0.0)         |

Now, let’s define the texture mapping of the top face.

![Texture coordinates top face](<../.gitbook/assets/cube_texture_top_face (1).png>)

| Vertex | Texture Coordinate |
| ------ | ------------------ |
| V4     | (0.0, 0.5)         |
| V5     | (0.5, 0.5)         |
| V0     | (0.0, 1.0)         |
| V3     | (0.5, 1.0)         |

As you can see, we have a problem: we need to set up different texture coordinates for the same vertices (V0 and V3). How can we solve this? Each time we need to map a new set of texture coordinates to an existing vertex, we must create another vertex with the same position. For the top face, we need to repeat the four vertices and assign them the correct texture coordinates.

Since the front, back, and lateral faces use the same texture region, we do not need to repeat all of these vertices. The complete definition in the source code uses 20 vertices in total, compared to the original 8.

In the next chapters, we will learn how to load models generated by 3D modeling tools. That way, we won’t need to define by hand the positions and texture coordinates (which by the way, would be impractical for more complex models).

We just need to modify the `init` method in the `Main` class to define the texture coordinates and load texture data:

```java
public class Main implements IAppLogic {
    ...
    public static void main(String[] args) {
        ...
        Engine gameEng = new Engine("chapter-07", new Window.WindowOptions(), main);
        ...
    }
    ...
    public void init(Window window, Scene scene, Render render) {
        float[] positions = new float[]{
                // V0
                -0.5f, 0.5f, 0.5f,
                // V1
                -0.5f, -0.5f, 0.5f,
                // V2
                0.5f, -0.5f, 0.5f,
                // V3
                0.5f, 0.5f, 0.5f,
                // V4
                -0.5f, 0.5f, -0.5f,
                // V5
                0.5f, 0.5f, -0.5f,
                // V6
                -0.5f, -0.5f, -0.5f,
                // V7
                0.5f, -0.5f, -0.5f,

                // For text coords in top face
                // V8: V4 repeated
                -0.5f, 0.5f, -0.5f,
                // V9: V5 repeated
                0.5f, 0.5f, -0.5f,
                // V10: V0 repeated
                -0.5f, 0.5f, 0.5f,
                // V11: V3 repeated
                0.5f, 0.5f, 0.5f,

                // For text coords in right face
                // V12: V3 repeated
                0.5f, 0.5f, 0.5f,
                // V13: V2 repeated
                0.5f, -0.5f, 0.5f,

                // For text coords in left face
                // V14: V0 repeated
                -0.5f, 0.5f, 0.5f,
                // V15: V1 repeated
                -0.5f, -0.5f, 0.5f,

                // For text coords in bottom face
                // V16: V6 repeated
                -0.5f, -0.5f, -0.5f,
                // V17: V7 repeated
                0.5f, -0.5f, -0.5f,
                // V18: V1 repeated
                -0.5f, -0.5f, 0.5f,
                // V19: V2 repeated
                0.5f, -0.5f, 0.5f,
        };
        float[] textCoords = new float[]{
                0.0f, 0.0f,
                0.0f, 0.5f,
                0.5f, 0.5f,
                0.5f, 0.0f,

                0.0f, 0.0f,
                0.5f, 0.0f,
                0.0f, 0.5f,
                0.5f, 0.5f,

                // For text coords in top face
                0.0f, 0.5f,
                0.5f, 0.5f,
                0.0f, 1.0f,
                0.5f, 1.0f,

                // For text coords in right face
                0.0f, 0.0f,
                0.0f, 0.5f,

                // For text coords in left face
                0.5f, 0.0f,
                0.5f, 0.5f,

                // For text coords in bottom face
                0.5f, 0.0f,
                1.0f, 0.0f,
                0.5f, 0.5f,
                1.0f, 0.5f,
        };
        int[] indices = new int[]{
                // Front face
                0, 1, 3, 3, 1, 2,
                // Top Face
                8, 10, 11, 9, 8, 11,
                // Right face
                12, 13, 7, 5, 12, 7,
                // Left face
                14, 15, 6, 4, 14, 6,
                // Bottom face
                16, 18, 19, 17, 16, 19,
                // Back face
                4, 6, 7, 5, 4, 7,};
        Texture texture = scene.getTextureCache().createTexture("resources/models/cube/cube.png");
        Material material = new Material();
        material.setTexturePath(texture.getTexturePath());
        List<Material> materialList = new ArrayList<>();
        materialList.add(material);

        Mesh mesh = new Mesh(positions, textCoords, indices);
        material.getMeshList().add(mesh);
        Model cubeModel = new Model("cube-model", materialList);
        scene.addModel(cubeModel);

        cubeEntity = new Entity("cube-entity", cubeModel.getId());
        cubeEntity.setPosition(0, 0, -2);
        scene.addEntity(cubeEntity);
    }
```

The final result is like this.

![Cube with texture](<../.gitbook/assets/cube_with_texture (1).png>)

[Next chapter](../chapter-08/chapter-08.md)


================================================
FILE: chapter-08/chapter-08.md
================================================
# Chapter 08 - Camera

In this chapter, we will learn how to move inside a rendered 3D scene. This capability is like having a camera that can travel inside the 3D world and, in fact, that's the term used to refer to it.

You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-08).

## Camera introduction

If you try to search for specific camera functions in OpenGL, you will discover that there is no camera concept; in other words, the camera is always fixed at the origin (0, 0, 0). So what we will do is a simulation that gives us the impression that we have a camera capable of moving inside the 3D scene. How do we achieve this? Well, if we cannot move the camera, then we must move all the objects contained in our 3D space at once. In other words, if we cannot move the camera, we will move the whole world.

Hence, suppose that we would like to move the camera position along the z axis from a starting position (Cx, Cy, Cz) to a position (Cx, Cy, Cz+dz) to get closer to the object, which is placed at the coordinates (Ox, Oy, Oz).

![Camera movement](../.gitbook/assets/camera_movement.png)

What we will actually do is move the object (all the objects in our 3D space, in fact) in the opposite direction to the one the camera should move in. Think of it as the objects being placed on a treadmill.

![Actual movement](../.gitbook/assets/actual_movement.png)

A camera can be displaced along the three axes (x, y and z) and can also rotate around them (roll, pitch and yaw).

![Roll pitch and yaw](../.gitbook/assets/roll_pitch_yaw.png)

So basically what we must do is move and rotate all of the objects of our 3D world. How are we going to do this? The answer is to apply another transformation that will translate all of the vertices of all of the objects in the opposite direction of the movement of the camera, and that will rotate them according to the camera rotation. This will be done, of course, with another matrix, the so-called view matrix. This matrix will first apply the translation and then the rotation around the axes.

Let's see how we can construct that matrix. If you remember from the transformations chapter, our transformation equation was like this:

$$
\begin{array}{lcl} Transf & = & \lbrack ProjMatrix \rbrack \cdot \lbrack TranslationMatrix \rbrack \cdot \lbrack RotationMatrix \rbrack \cdot \lbrack ScaleMatrix \rbrack \\ & = & \lbrack ProjMatrix \rbrack \cdot \lbrack WorldMatrix \rbrack \end{array}
$$

The view matrix should be applied before multiplying by the projection matrix, so our equation should now be like this:

$$
\begin{array}{lcl} Transf & = & \lbrack ProjMatrix \rbrack \cdot \lbrack ViewMatrix \rbrack \cdot \lbrack TranslationMatrix \rbrack \cdot \lbrack RotationMatrix \rbrack \cdot \lbrack ScaleMatrix \rbrack \\ & = & \lbrack ProjMatrix \rbrack \cdot \lbrack ViewMatrix \rbrack \cdot \lbrack WorldMatrix \rbrack \end{array}
$$
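
Expressed with JOML, the whole chain could be assembled on the CPU like this (a minimal sketch, assuming `projectionMatrix`, `viewMatrix` and `modelMatrix` are `Matrix4f` instances; in this book the multiplication is actually performed per vertex in the shaders):

```java
// Sketch only: the combined transform applied to each vertex,
// multiplied in the same order as the equation above.
Matrix4f mvp = new Matrix4f(projectionMatrix)
        .mul(viewMatrix)
        .mul(modelMatrix);
```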

## Camera implementation

So let’s start modifying our code to support a camera. First of all, we will create a new class called `Camera` which will hold the position and rotation state of our camera as well as its view matrix. The class is defined like this:

```java
package org.lwjglb.engine.scene;

import org.joml.*;

public class Camera {

    private Vector3f direction;
    private Vector3f position;
    private Vector3f right;
    private Vector2f rotation;
    private Vector3f up;
    private Matrix4f viewMatrix;

    public Camera() {
        direction = new Vector3f();
        right = new Vector3f();
        up = new Vector3f();
        position = new Vector3f();
        viewMatrix = new Matrix4f();
        rotation = new Vector2f();
    }

    public void addRotation(float x, float y) {
        rotation.add(x, y);
        recalculate();
    }

    public Vector3f getPosition() {
        return position;
    }

    public Matrix4f getViewMatrix() {
        return viewMatrix;
    }

    public void moveBackwards(float inc) {
        viewMatrix.positiveZ(direction).negate().mul(inc);
        position.sub(direction);
        recalculate();
    }

    public void moveDown(float inc) {
        viewMatrix.positiveY(up).mul(inc);
        position.sub(up);
        recalculate();
    }

    public void moveForward(float inc) {
        viewMatrix.positiveZ(direction).negate().mul(inc);
        position.add(direction);
        recalculate();
    }

    public void moveLeft(float inc) {
        viewMatrix.positiveX(right).mul(inc);
        position.sub(right);
        recalculate();
    }

    public void moveRight(float inc) {
        viewMatrix.positiveX(right).mul(inc);
        position.add(right);
        recalculate();
    }

    public void moveUp(float inc) {
        viewMatrix.positiveY(up).mul(inc);
        position.add(up);
        recalculate();
    }

    private void recalculate() {
        viewMatrix.identity()
                .rotateX(rotation.x)
                .rotateY(rotation.y)
                .translate(-position.x, -position.y, -position.z);
    }

    public void setPosition(float x, float y, float z) {
        position.set(x, y, z);
        recalculate();
    }

    public void setRotation(float x, float y) {
        rotation.set(x, y);
        recalculate();
    }
}
```

As you can see, besides rotation and position, we define vectors for the forward (direction), up and right directions. This is because we are implementing a free-movement camera: after any rotation we want to move in the direction the camera is pointing, not along a predefined axis, and we need those vectors to calculate the next position. In the end, the state of the camera is stored in a 4x4 matrix, the view matrix, which must be updated any time the position or rotation changes. When updating the view matrix, we first apply the rotation and then the translation. If we did the opposite, we would not be rotating around the camera position but around the coordinate origin.

The `Camera` class also provides methods to update the position when moving forwards, up or to the right (and their opposites). In these methods, the view matrix is used to calculate the current forward, up or right direction, and the position is updated accordingly. We use the fantastic JOML library to perform these calculations for us while keeping the code quite simple.
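
For instance, `moveForward` relies on JOML's `positiveZ` method, which extracts the camera's local +Z axis from the view matrix; since the camera looks down the negative Z axis, negating that vector yields the forward direction. A minimal sketch of the idea in isolation (using the `viewMatrix` and `position` fields from the class above and a step size `inc`):

```java
// Extract the camera's local +Z axis from the view matrix and negate it:
// the camera looks down -Z, so this yields the forward direction.
Vector3f forward = new Vector3f();
viewMatrix.positiveZ(forward).negate();
// Step 'inc' units in the forward direction.
position.add(forward.mul(inc));
```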

## Using the Camera

We will store a `Camera` instance in the `Scene` class, so let's go for the changes:

```java
public class Scene {
    ...
    private Camera camera;
    ...
    public Scene(int width, int height) {
        ...
        camera = new Camera();
    }
    ...
    public Camera getCamera() {
        return camera;
    }
    ...
}
```

It would be nice to control the camera with our mouse. In order to do so, we will create a new class that handles mouse events so we can use them to update the camera rotation. Here's the code for that class:

```java
package org.lwjglb.engine;

import org.joml.Vector2f;

import static org.lwjgl.glfw.GLFW.*;

public class MouseInput {

    private Vector2f currentPos;
    private Vector2f displVec;
    private boolean inWindow;
    private boolean leftButtonPressed;
    private Vector2f previousPos;
    private boolean rightButtonPressed;

    public MouseInput(long windowHandle) {
        previousPos = new Vector2f(-1, -1);
        currentPos = new Vector2f();
        displVec = new Vector2f();
        leftButtonPressed = false;
        rightButtonPressed = false;
        inWindow = false;

        glfwSetCursorPosCallback(windowHandle, (handle, xpos, ypos) -> {
            currentPos.x = (float) xpos;
            currentPos.y = (float) ypos;
        });
        glfwSetCursorEnterCallback(windowHandle, (handle, entered) -> inWindow = entered);
        glfwSetMouseButtonCallback(windowHandle, (handle, button, action, mode) -> {
            leftButtonPressed = button == GLFW_MOUSE_BUTTON_1 && action == GLFW_PRESS;
            rightButtonPressed = button == GLFW_MOUSE_BUTTON_2 && action == GLFW_PRESS;
        });
    }

    public Vector2f getCurrentPos() {
        return currentPos;
    }

    public Vector2f getDisplVec() {
        return displVec;
    }

    public void input() {
        displVec.x = 0;
        displVec.y = 0;
        if (previousPos.x > 0 && previousPos.y > 0 && inWindow) {
            double deltax = currentPos.x - previousPos.x;
            double deltay = currentPos.y - previousPos.y;
            boolean rotateX = deltax != 0;
            boolean rotateY = deltay != 0;
            if (rotateX) {
                displVec.y = (float) deltax;
            }
            if (rotateY) {
                displVec.x = (float) deltay;
            }
        }
        previousPos.x = currentPos.x;
        previousPos.y = currentPos.y;
    }

    public boolean isLeftButtonPressed() {
        return leftButtonPressed;
    }

    public boolean isRightButtonPressed() {
        return rightButtonPressed;
    }
}
```

The `MouseInput` class, in its constructor, registers a set of callbacks to process mouse events:

* `glfwSetCursorPosCallback`: Registers a callback that will be invoked when the mouse is moved.
* `glfwSetCursorEnterCallback`: Registers a callback that will be invoked when the mouse enters our window. We will be receiving mouse events even if the mouse is not in our window. We use this callback to track when the mouse is in our window.
* `glfwSetMouseButtonCallback`: Registers a callback that will be invoked when a mouse button is pressed.

The `MouseInput` class provides an `input` method that should be called when game input is processed. This method calculates the mouse displacement from the previous position and stores it in the `displVec` variable so it can be used by our game.

The `MouseInput` class will be instantiated in our `Window` class, which will also provide a getter to return its instance.

```java
public class Window {
    ...
    private MouseInput mouseInput;
    ...
    public Window(String title, WindowOptions opts, Callable<Void> resizeFunc) {
        ...
        mouseInput = new MouseInput(windowHandle);
    }
    ...
    public MouseInput getMouseInput() {
        return mouseInput;
    }
    ...    
}
```

In the `Engine` class, we will consume mouse input when handling regular input:

```java
public class Engine {
    ...
    private void run() {
        ...
            if (targetFps <= 0 || deltaFps >= 1) {
                window.getMouseInput().input();
                appLogic.input(window, scene, now - initialTime);
            }
        ...
    }
    ...
}
```

Now we can modify the vertex shader to use the `Camera`'s view matrix, which, as you may guess, will be passed as a uniform.

```glsl
#version 330

layout (location=0) in vec3 position;
layout (location=1) in vec2 texCoord;

out vec2 outTextCoord;

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;

void main()
{
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
    outTextCoord = texCoord;
}
```

So the next step is to properly create the uniform in the `SceneRender` class and update its value in each `render` call:

```java
public class SceneRender {
    ...
    private void createUniforms() {
        ...
        uniformsMap.createUniform("viewMatrix");
        ...
    }    
    ...
    public void render(Scene scene) {
        ...
        uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
        uniformsMap.setUniform("viewMatrix", scene.getCamera().getViewMatrix());
        ...
    }
}
```

And that’s all, our base code supports the concept of a camera. Now we need to use it. We can change the way we handle the input and update the camera. We will set the following controls:

* Keys “A” and “D” to move the camera to the left and right (x axis), respectively.
* Keys “W” and “S” to move the camera forward and backwards (z axis), respectively.
* The up and down arrow keys to move the camera up and down (y axis), respectively.

We will use the mouse position to rotate the camera around the x and y axes when the right mouse button is pressed.

Now we are ready to update our `Main` class to process the keyboard and mouse input.

```java

public class Main implements IAppLogic {

    private static final float MOUSE_SENSITIVITY = 0.1f;
    private static final float MOVEMENT_SPEED = 0.005f;
    ...

    public static void main(String[] args) {
        ...
        Engine gameEng = new Engine("chapter-08", new Window.WindowOptions(), main);
        ...
    }
    ...
    public void input(Window window, Scene scene, long diffTimeMillis) {
        float move = diffTimeMillis * MOVEMENT_SPEED;
        Camera camera = scene.getCamera();
        if (window.isKeyPressed(GLFW_KEY_W)) {
            camera.moveForward(move);
        } else if (window.isKeyPressed(GLFW_KEY_S)) {
            camera.moveBackwards(move);
        }
        if (window.isKeyPressed(GLFW_KEY_A)) {
            camera.moveLeft(move);
        } else if (window.isKeyPressed(GLFW_KEY_D)) {
            camera.moveRight(move);
        }
        if (window.isKeyPressed(GLFW_KEY_UP)) {
            camera.moveUp(move);
        } else if (window.isKeyPressed(GLFW_KEY_DOWN)) {
            camera.moveDown(move);
        }

        MouseInput mouseInput = window.getMouseInput();
        if (mouseInput.isRightButtonPressed()) {
            Vector2f displVec = mouseInput.getDisplVec();
            camera.addRotation((float) Math.toRadians(-displVec.x * MOUSE_SENSITIVITY),
                    (float) Math.toRadians(-displVec.y * MOUSE_SENSITIVITY));
        }
    }
    ...
}
```

[Next chapter](../chapter-09/chapter-09.md)


================================================
FILE: chapter-09/chapter-09.md
================================================
# Chapter 09 - Loading more complex models: Assimp

The capability of loading complex 3D models in different formats is crucial in order to write a game. The task of writing parsers for some of them would require lots of work; even just supporting a single format can be time consuming. Fortunately, the [Assimp](http://assimp.sourceforge.net/) library can be used to parse many common 3D formats. It’s a C/C++ library which can load static and animated models in a variety of formats. LWJGL provides the bindings to use it from Java code. In this chapter, we will explain how it can be used.


You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-09).

## Model loader

The first thing to do is add the Assimp Maven dependencies to the project's `pom.xml`. We need both compile-time and runtime (natives) dependencies.

```xml
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-assimp</artifactId>
    <version>${lwjgl.version}</version>
</dependency>
<dependency>
    <groupId>org.lwjgl</groupId>
    <artifactId>lwjgl-assimp</artifactId>
    <version>${lwjgl.version}</version>
    <classifier>${native.target}</classifier>
    <scope>runtime</scope>
</dependency>
```

Once the dependencies have been set, we will create a new class named `ModelLoader` that will be used to load models with Assimp. The class defines two public static methods:

```java
package org.lwjglb.engine.scene;

import org.joml.Vector4f;
import org.lwjgl.PointerBuffer;
import org.lwjgl.assimp.*;
import org.lwjgl.system.MemoryStack;
import org.lwjglb.engine.graph.*;

import java.io.File;
import java.nio.IntBuffer;
import java.util.*;

import static org.lwjgl.assimp.Assimp.*;

public class ModelLoader {

    private ModelLoader() {
        // Utility class
    }

    public static Model loadModel(String modelId, String modelPath, TextureCache textureCache) {
        return loadModel(modelId, modelPath, textureCache, aiProcess_GenSmoothNormals | aiProcess_JoinIdenticalVertices |
                aiProcess_Triangulate | aiProcess_FixInfacingNormals | aiProcess_CalcTangentSpace | aiProcess_LimitBoneWeights |
                aiProcess_PreTransformVertices);

    }

    public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, int flags) {
        ...

    }
    ...
}
```

Both methods have the following arguments:

* `modelId`: A unique identifier for the model to be loaded.

* `modelPath`: The path to the file where the model is located. This is a regular file path, not a CLASSPATH-relative one, because Assimp may need to load additional files from the same base path as `modelPath` \(for instance, material files for Wavefront OBJ models\). If you embed your resources inside a JAR file, Assimp will not be able to import them, so it must be a file system path. When loading textures, we will use `modelPath` to get the base directory where the model is located \(overriding whatever path is defined in the model\). We do this because some models contain absolute paths pointing to local folders on the machine where the model was created, which, obviously, are not accessible.

* `textureCache`: A reference to the texture cache to avoid loading the same texture multiple times.

The second method has an extra argument named `flags`, which allows us to tune the loading process. The first method invokes the second one, passing a set of flags that are useful in most situations:

* `aiProcess_JoinIdenticalVertices`: This flag reduces the number of vertices that are used, identifying those that can be reused between faces.

* `aiProcess_Triangulate`: The model may use quads or other geometries to define its elements. Since we are only dealing with triangles, we must use this flag to split all the faces into triangles \(if needed\).

* `aiProcess_FixInfacingNormals`: This flag tries to reverse normals that may point inwards.

* `aiProcess_CalcTangentSpace`: We will use this flag when implementing lights; it calculates tangents and bitangents from the normal information.

* `aiProcess_LimitBoneWeights`: We will use this flag when implementing animations; it limits the number of weights that can affect a single vertex.

* `aiProcess_PreTransformVertices`: This flag transforms the loaded data so the model is placed at the origin and the coordinates are corrected to match the OpenGL coordinate system. If you have problems with models appearing rotated, make sure to use this flag. Important: do not use this flag if your model has animations, as it will remove that information.

There are many other flags; you can check them in the LWJGL or Assimp documentation. An example of when the `flags` overload is useful is shown below.
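
For instance, for an animated model we would drop `aiProcess_PreTransformVertices`, as noted above. A hypothetical call (the model id and path are made up for illustration):

```java
// Hypothetical: load an animated model, keeping bone data by omitting
// aiProcess_PreTransformVertices from the flags.
Model model = ModelLoader.loadModel("player-model", "resources/models/player/player.fbx",
        scene.getTextureCache(),
        aiProcess_GenSmoothNormals | aiProcess_JoinIdenticalVertices | aiProcess_Triangulate |
        aiProcess_CalcTangentSpace | aiProcess_LimitBoneWeights);
```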

Let’s go back to the second method. The first thing we do is invoke the `aiImportFile` method to load the model with the selected flags:

```java
public class ModelLoader {
    ...
    public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, int flags) {
        File file = new File(modelPath);
        if (!file.exists()) {
            throw new RuntimeException("Model path does not exist [" + modelPath + "]");
        }
        String modelDir = file.getParent();

        AIScene aiScene = aiImportFile(modelPath, flags);
        if (aiScene == null) {
            throw new RuntimeException("Error loading model [modelPath: " + modelPath + "]");
        }
        ...
    }
    ...
}
```

The rest of the code for the method is as follows:

```java
public class ModelLoader {
    ...
    public static Model loadModel(String modelId, String modelPath, TextureCache textureCache, int flags) {
        ...
        int numMaterials = aiScene.mNumMaterials();
        List<Material> materialList = new ArrayList<>();
        for (int i = 0; i < numMaterials; i++) {
            AIMaterial aiMaterial = AIMaterial.create(aiScene.mMaterials().get(i));
            materialList.add(processMaterial(aiMaterial, modelDir, textureCache));
        }

        int numMeshes = aiScene.mNumMeshes();
        PointerBuffer aiMeshes = aiScene.mMeshes();
        Material defaultMaterial = new Material();
        for (int i = 0; i < numMeshes; i++) {
            AIMesh aiMesh = AIMesh.create(aiMeshes.get(i));
            Mesh mesh = processMesh(aiMesh);
            int materialIdx = aiMesh.mMaterialIndex();
            Material material;
            if (materialIdx >= 0 && materialIdx < materialList.size()) {
                material = materialList.get(materialIdx);
            } else {
                material = defaultMaterial;
            }
            material.getMeshList().add(mesh);
        }

        if (!defaultMaterial.getMeshList().isEmpty()) {
            materialList.add(defaultMaterial);
        }

        return new Model(modelId, materialList);
    }
    ...
}
```

We process the materials contained in the model. Materials define the colors and textures to be used by the meshes that compose the model. Then we process the different meshes. A model can define several meshes, and each of them can use one of the materials defined for the model. This is why we process meshes after materials and link each mesh to its material: it avoids repeated binding calls when rendering.

If you examine the code above, you may see that many of the calls to the Assimp library return `PointerBuffer` instances. You can think of them as C pointers: they just point to a memory region which contains data, and you need to know in advance the type of data they hold in order to process them. In the case of materials, we iterate over that buffer creating instances of the `AIMaterial` class. In the second case, we iterate over the buffer that holds mesh data creating instances of the `AIMesh` class.

Let’s examine the `processMaterial` method.

```java
public class ModelLoader {
    ...
    private static Material processMaterial(AIMaterial aiMaterial, String modelDir, TextureCache textureCache) {
        Material material = new Material();
        try (MemoryStack stack = MemoryStack.stackPush()) {
            AIColor4D color = AIColor4D.create();

            int result = aiGetMaterialColor(aiMaterial, AI_MATKEY_COLOR_DIFFUSE, aiTextureType_NONE, 0,
                    color);
            if (result == aiReturn_SUCCESS) {
                material.setDiffuseColor(new Vector4f(color.r(), color.g(), color.b(), color.a()));
            }

            AIString aiTexturePath = AIString.calloc(stack);
            aiGetMaterialTexture(aiMaterial, aiTextureType_DIFFUSE, 0, aiTexturePath, (IntBuffer) null,
                    null, null, null, null, null);
            String texturePath = aiTexturePath.dataString();
            if (texturePath != null && texturePath.length() > 0) {
                material.setTexturePath(modelDir + File.separator + new File(texturePath).getName());
                textureCache.createTexture(material.getTexturePath());
                material.setDiffuseColor(Material.DEFAULT_COLOR);
            }

            return material;
        }
    }
    ...
}
```

We first get the material color, in this case the diffuse color (by passing the `AI_MATKEY_COLOR_DIFFUSE` key). There are many different types of colors which we will use when applying lights, such as diffuse, ambient (for ambient light) and specular (for the specular factor of lights). After that, we check whether the material defines a texture. If so, that is, if there is a texture path, we store the texture path and delegate texture creation to the `TextureCache` class as in previous examples. If the material defines a texture, we also set the diffuse color to a default value, which is black. By doing this we can always combine both values, diffuse color and texture, without checking whether a texture exists: if the material does not define a texture, the default black texture is used, which can be combined with the material color.

The `processMesh` method is defined like this.

```java
public class ModelLoader {
    ...
    private static Mesh processMesh(AIMesh aiMesh) {
        float[] vertices = processVertices(aiMesh);
        float[] textCoords = processTextCoords(aiMesh);
        int[] indices = processIndices(aiMesh);

        // Texture coordinates may not have been populated. We need at least the empty slots
        if (textCoords.length == 0) {
            int numElements = (vertices.length / 3) * 2;
            textCoords = new float[numElements];
        }

        return new Mesh(vertices, textCoords, indices);
    }
    ...
}
```

A `Mesh` is defined by a set of vertex positions, texture coordinates and indices. Each of these elements is processed in the `processVertices`, `processTextCoords` and `processIndices` methods. After processing all that data, we check whether texture coordinates have been defined. If not, we just assign a set of texture coordinates, all set to 0.0f, to keep the VAO layout consistent.

The `processXXX` methods are very simple, they just invoke the corresponding method over the `AIMesh` instance that returns the desired data and store it into an array:

```java
public class ModelLoader {
    ...
    private static int[] processIndices(AIMesh aiMesh) {
        List<Integer> indices = new ArrayList<>();
        int numFaces = aiMesh.mNumFaces();
        AIFace.Buffer aiFaces = aiMesh.mFaces();
        for (int i = 0; i < numFaces; i++) {
            AIFace aiFace = aiFaces.get(i);
            IntBuffer buffer = aiFace.mIndices();
            while (buffer.remaining() > 0) {
                indices.add(buffer.get());
            }
        }
        return indices.stream().mapToInt(Integer::intValue).toArray();
    }
    ...
    private static float[] processTextCoords(AIMesh aiMesh) {
        AIVector3D.Buffer buffer = aiMesh.mTextureCoords(0);
        if (buffer == null) {
            return new float[]{};
        }
        float[] data = new float[buffer.remaining() * 2];
        int pos = 0;
        while (buffer.remaining() > 0) {
            AIVector3D textCoord = buffer.get();
            data[pos++] = textCoord.x();
            data[pos++] = 1 - textCoord.y();
        }
        return data;
    }

    private static float[] processVertices(AIMesh aiMesh) {
        AIVector3D.Buffer buffer = aiMesh.mVertices();
        float[] data = new float[buffer.remaining() * 3];
        int pos = 0;
        while (buffer.remaining() > 0) {
            AIVector3D textCoord = buffer.get();
            data[pos++] = textCoord.x();
            data[pos++] = textCoord.y();
            data[pos++] = textCoord.z();
        }
        return data;
    }
}
```

You can see that we get a buffer of vertices by invoking the `mVertices` method. We simply process it to create an array of floats that contains the vertex positions. Since the method returns a buffer, you could pass that information directly to the OpenGL functions that create the buffers. We do not do it that way for two reasons. The first is to reduce as much as possible the modifications to the code base. The second is that, by loading into an intermediate structure, you can perform some post-processing tasks and even debug the loading process.

If you want a sample of the much more efficient approach, that is, directly passing the buffers to OpenGL, you can check this [sample](https://github.com/LWJGL/lwjgl3-demos/blob/master/src/org/lwjgl/demo/opengl/assimp/WavefrontObjDemo.java).

## Using the models

We need to modify the `Material` class to add support for diffuse color:

```java
public class Material {
 
    public static final Vector4f DEFAULT_COLOR = new Vector4f(0.0f, 0.0f, 0.0f, 1.0f);

    private Vector4f diffuseColor;
    ...
    public Material() {
        diffuseColor = DEFAULT_COLOR;
        ...
    }
    ...
    public Vector4f getDiffuseColor() {
        return diffuseColor;
    }
    ...
    public void setDiffuseColor(Vector4f diffuseColor) {
        this.diffuseColor = diffuseColor;
    }
    ...
}
```

In the `SceneRender` class, we need to create a uniform for the material diffuse color and set it up properly while rendering:
```java
public class SceneRender {
    ...
    private void createUniforms() {
        ...
        uniformsMap.createUniform("material.diffuse");
    }

    public void render(Scene scene) {
        ...
        for (Model model : models) {
            List<Entity> entities = model.getEntitiesList();

            for (Material material : model.getMaterialList()) {
                uniformsMap.setUniform("material.diffuse", material.getDiffuseColor());
                ...
            }
        }
        ...
    }
    ...
}
```

As you can see, we are using an unusual name for the uniform, with a `.` in it. This is because we will use structures in the shader: with structures we can group several types into a single combined one. You can see this in the fragment shader:

```glsl
#version 330

in vec2 outTextCoord;

out vec4 fragColor;

struct Material
{
    vec4 diffuse;
};

uniform sampler2D txtSampler;
uniform Material material;

void main()
{
    fragColor = texture(txtSampler, outTextCoord) + material.diffuse;
}
```

We will also need to add a new method to the `UniformsMap` class to support passing `Vector4f` values:
```java
public class UniformsMap {
    ...
    public void setUniform(String uniformName, Vector4f value) {
        glUniform4f(getUniformLocation(uniformName), value.x, value.y, value.z, value.w);
    }
}
```

Finally, we need to modify the `Main` class to use the `ModelLoader` class to load models:
```java
public class Main implements IAppLogic {
    ...
    public static void main(String[] args) {
        ...
        Engine gameEng = new Engine("chapter-09", new Window.WindowOptions(), main);
        ...
    }
    ...
    public void init(Window window, Scene scene, Render render) {
        Model cubeModel = ModelLoader.loadModel("cube-model", "resources/models/cube/cube.obj",
                scene.getTextureCache());
        scene.addModel(cubeModel);

        cubeEntity = new Entity("cube-entity", cubeModel.getId());
        cubeEntity.setPosition(0, 0, -2);
        scene.addEntity(cubeEntity);
    }
    ...
}
```

As you can see, the `init` method has been simplified a lot: there is no more model data embedded in the code. Now we are using a cube model in the Wavefront OBJ format. You can find the model files in the `resources/models/cube` folder, which contains the following files:

* `cube.obj`: The main model file. It is in fact a text-based format, so you can open it and see how vertices, indices and texture coordinates are defined and glued together through face definitions. It also contains a reference to a material file.

* `cube.mtl`: The material file, which defines colors and textures.

* `cube.png`: The texture file of the model.

Finally, we will add another feature to optimize rendering: we will reduce the amount of data that is being rendered by applying face culling. As you know, a cube is made of six faces, and we are rendering all six of them even when they are not visible. You can check this if you zoom inside a cube; you will see its interior.

Faces that cannot be seen should be discarded immediately, and this is what face culling does. In fact, for a cube you can only see 3 faces at the same time, so we can discard half of the faces just by applying face culling \(this will only be valid if your game does not require you to dive into the inner side of a model\).

For every triangle, face culling checks if it's facing towards us and discards the ones that are not facing that direction. But, how do we know if a triangle is facing towards us or not? Well, the way that OpenGL does this is by the winding order of the vertices that compose a triangle.

Remember from the first chapters that we may define the vertices of a triangle in clockwise or counter-clockwise order. In OpenGL, by default, triangles whose vertices appear in counter-clockwise order are facing towards the viewer, and triangles in clockwise order are facing backwards. The key thing here is that this order is checked at rendering time, taking the point of view into consideration. So a triangle that has been defined in counter-clockwise order can be interpreted, at rendering time, as clockwise because of the point of view.
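
This counter-clockwise convention is OpenGL's default, but it can be made explicit (or flipped to clockwise with `GL_CW`) through the `glFrontFace` function. A minimal sketch, assuming the usual static `GL11` imports:

```java
// Declare counter-clockwise triangles as front-facing (this is already the default).
glFrontFace(GL_CCW);
```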

We will enable face culling in the `Render` class:

```java
public class Render {
    ...
    public Render() {
        ...
        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);
        ...
    }
    ...
}
```

The first line will enable face culling and the second line states that faces that are facing backwards should be culled \(removed\).


If you run the sample, you will see the same result as in the previous chapter; however, if you zoom into the cube, the inner faces will not be rendered. You can modify this sample to load more complex models.
 
[Next chapter](../chapter-10/chapter-10.md)


================================================
FILE: chapter-10/chapter-10.md
================================================
# Chapter 10 - GUI (Imgui)

[Dear ImGui](https://github.com/ocornut/imgui) is a user interface library which can use several backends such as OpenGL and Vulkan. We will use it to display GUI controls and to develop HUDs. It provides multiple widgets, and its look and feel is easily customizable.

You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-10).

## Imgui integration

The first thing to do is add the Java Imgui wrapper Maven dependencies to the project's `pom.xml`. We need both compile-time and runtime (natives) dependencies.

```xml
<dependency>
   <groupId>io.github.spair</groupId>
   <artifactId>imgui-java-binding</artifactId>
   <version>${imgui-java.version}</version>
</dependency>
<dependency>
    <groupId>io.github.spair</groupId>
    <artifactId>imgui-java-${native.target}</artifactId>
    <version>${imgui-java.version}</version>
    <scope>runtime</scope>
</dependency>
```

With Imgui we can render windows, panels, etc. just like we render any other 3D model, but using only 2D shapes. We set up the controls that we want to use, and Imgui translates them into a set of vertex buffers that we can render using shaders. This is why it can be used with any backend.

For each vertex, Imgui defines its coordinates (2D coordinates), texture coordinates and the associated color. Therefore, we need to create a new class to model GUI meshes and to create the associated VAO and VBO. The class, named `GuiMesh`, is defined like this:

```java
package org.lwjglb.engine.graph;

import imgui.ImDrawData;

import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;

public class GuiMesh {

    private int indicesVBO;
    private int vaoId;
    private int verticesVBO;

    public GuiMesh() {
        vaoId = glGenVertexArrays();
        glBindVertexArray(vaoId);

        // Single interleaved VBO: per vertex, Imgui packs a 2D position,
        // texture coordinates and an RGBA color (see ImDrawVert)
        verticesVBO = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, verticesVBO);
        // Attribute 0: position, two floats at offset 0
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, false, ImDrawData.sizeOfImDrawVert(), 0);
        // Attribute 1: texture coordinates, two floats at offset 8
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, false, ImDrawData.sizeOfImDrawVert(), 8);
        // Attribute 2: color, four normalized unsigned bytes at offset 16
        glEnableVertexAttribArray(2);
        glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, true, ImDrawData.sizeOfImDrawVert(), 16);

        indicesVBO = glGenBuffers();

        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArray(0);
    }

    public void cleanup() {
        glDeleteBuffers(indicesVBO);
        glDeleteBuffers(verticesVBO);
        glDeleteVertexArrays(vaoId);
    }

    public int getIndicesVBO() {
        return indicesVBO;
    }

    public int getVaoId() {
        return vaoId;
    }

    public int getVerticesVBO() {
        return verticesVBO;
    }
}
```

As you can see, we use a single VBO but define several attributes for the positions, texture coordinates and color. In this case, we do not populate the buffers with data; we will see later on how we will use them.

We also need to let the application create GUI controls and react to user input. In order to support this, we will define a new interface named `IGuiInstance`, which is defined like this:

```java
package org.lwjglb.engine;

import org.lwjglb.engine.scene.Scene;

public interface IGuiInstance {
    void drawGui();

    boolean handleGuiInput(Scene scene, Window window);
}
```

The method `drawGui` will be used to construct the GUI; this is where we will define the window and widgets that will be used to construct the GUI meshes. We will use the `handleGuiInput` method to process input events in the GUI. It returns a boolean value to state whether the input has been processed by the GUI or not. For example, if we display an overlapping window we may not be interested in continuing to process keystrokes in the game logic; you can use the return value to control that. We will store the specific implementation of the `IGuiInstance` interface in the `Scene` class.

```java
public class Scene {
    ...
    private IGuiInstance guiInstance;
    ...
    public IGuiInstance getGuiInstance() {
        return guiInstance;
    }
    ...
    public void setGuiInstance(IGuiInstance guiInstance) {
        this.guiInstance = guiInstance;
    }
}
```

The next step will be to create a new class to render our GUI, which will be named `GuiRender` and starts like this:

```java
package org.lwjglb.engine.graph;

import imgui.*;
import imgui.type.ImInt;
import org.joml.Vector2f;
import org.lwjglb.engine.*;
import org.lwjglb.engine.scene.Scene;

import java.nio.ByteBuffer;
import java.util.*;

import org.lwjgl.glfw.GLFWKeyCallback;

import static org.lwjgl.opengl.GL32.*;
import static org.lwjgl.glfw.GLFW.*;

public class GuiRender {

    private GuiMesh guiMesh;
    private GLFWKeyCallback prevKeyCallBack;
    private Vector2f scale;
    private ShaderProgram shaderProgram;
    private Texture texture;
    private UniformsMap uniformsMap;

    public GuiRender(Window window) {
        List<ShaderProgram.ShaderModuleData> shaderModuleDataList = new ArrayList<>();
        shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/gui.vert", GL_VERTEX_SHADER));
        shaderModuleDataList.add(new ShaderProgram.ShaderModuleData("resources/shaders/gui.frag", GL_FRAGMENT_SHADER));
        shaderProgram = new ShaderProgram(shaderModuleDataList);
        createUniforms();
        createUIResources(window);
        setupKeyCallBack(window);
    }

    public void cleanup() {
        shaderProgram.cleanup();
        texture.cleanup();
        if (prevKeyCallBack != null) {
            prevKeyCallBack.free();
        }
    }
    ...
}
```

As you can see, most of the stuff here will be very familiar to you; we just set up the shaders and the uniforms. Since we will need to set up a custom key callback to handle ImGui text input controls, we keep track of any previously installed key callback in `prevKeyCallBack` so we can properly use it and free it. In addition to that, there is a new method called `createUIResources`, which is defined like this:

```java
public class GuiRender {
    ...
    private void createUIResources(Window window) {
        ImGui.createContext();

        ImGuiIO imGuiIO = ImGui.getIO();
        imGuiIO.setIniFilename(null);
        imGuiIO.setDisplaySize(window.getWidth(), window.getHeight());

        ImFontAtlas fontAtlas = ImGui.getIO().getFonts();
        ImInt width = new ImInt();
        ImInt height = new ImInt();
        ByteBuffer buf = fontAtlas.getTexDataAsRGBA32(width, height);
        texture = new Texture(width.get(), height.get(), buf);

        guiMesh = new GuiMesh();
    }
    ...
}
```

The method above is where we set up Imgui: we first create a context (required to perform any operation) and set the display size to the window size. Imgui stores its state in an ini file; since we do not want that state to persist between runs, we set the file name to null. The next step is to initialize the font atlas and set up a texture, which will be used in the shaders so we can properly render text. The final step is to create the `GuiMesh` instance.

The `createUniforms` method just creates a single uniform, a two-float vector for the scale (we will see later on how it will be used).

```java
public class GuiRender {
    ...
    private void createUniforms() {
        uniformsMap = new UniformsMap(shaderProgram.getProgramId());
        uniformsMap.createUniform("scale");
        scale = new Vector2f();
    }
    ...
}
```

The `setupKeyCallBack` method is required to properly process key events in Imgui and is defined like this:

```java
public class GuiRender {
    ...
    private void setupKeyCallBack(Window window) {
        prevKeyCallBack = glfwSetKeyCallback(window.getWindowHandle(), (handle, key, scancode, action, mods) -> {
            window.keyCallBack(key, action);
            ImGuiIO io = ImGui.getIO();
            if (!io.getWantCaptureKeyboard()) {
                return;
            }
            if (action == GLFW_PRESS) {
                io.addKeyEvent(getImKey(key), true);
            } else if (action == GLFW_RELEASE) {
                io.addKeyEvent(getImKey(key), false);
            }
        }
        );

        glfwSetCharCallback(window.getWindowHandle(), (handle, c) -> {
            ImGuiIO io = ImGui.getIO();
            if (!io.getWantCaptureKeyboard()) {
                return;
            }
            io.addInputCharacter(c);
        });
    }

    private static int getImKey(int key) {
        return switch (key) {
            case GLFW_KEY_TAB -> ImGuiKey.Tab;
            case GLFW_KEY_LEFT -> ImGuiKey.LeftArrow;
            case GLFW_KEY_RIGHT -> ImGuiKey.RightArrow;
            case GLFW_KEY_UP -> ImGuiKey.UpArrow;
            case GLFW_KEY_DOWN -> ImGuiKey.DownArrow;
            case GLFW_KEY_PAGE_UP -> ImGuiKey.PageUp;
            case GLFW_KEY_PAGE_DOWN -> ImGuiKey.PageDown;
            case GLFW_KEY_HOME -> ImGuiKey.Home;
            case GLFW_KEY_END -> ImGuiKey.End;
            case GLFW_KEY_INSERT -> ImGuiKey.Insert;
            case GLFW_KEY_DELETE -> ImGuiKey.Delete;
            case GLFW_KEY_BACKSPACE -> ImGuiKey.Backspace;
            case GLFW_KEY_SPACE -> ImGuiKey.Space;
            case GLFW_KEY_ENTER -> ImGuiKey.Enter;
            case GLFW_KEY_ESCAPE -> ImGuiKey.Escape;
            case GLFW_KEY_APOSTROPHE -> ImGuiKey.Apostrophe;
            case GLFW_KEY_COMMA -> ImGuiKey.Comma;
            case GLFW_KEY_MINUS -> ImGuiKey.Minus;
            case GLFW_KEY_PERIOD -> ImGuiKey.Period;
            case GLFW_KEY_SLASH -> ImGuiKey.Slash;
            case GLFW_KEY_SEMICOLON -> ImGuiKey.Semicolon;
            case GLFW_KEY_EQUAL -> ImGuiKey.Equal;
            case GLFW_KEY_LEFT_BRACKET -> ImGuiKey.LeftBracket;
            case GLFW_KEY_BACKSLASH -> ImGuiKey.Backslash;
            case GLFW_KEY_RIGHT_BRACKET -> ImGuiKey.RightBracket;
            case GLFW_KEY_GRAVE_ACCENT -> ImGuiKey.GraveAccent;
            case GLFW_KEY_CAPS_LOCK -> ImGuiKey.CapsLock;
            case GLFW_KEY_SCROLL_LOCK -> ImGuiKey.ScrollLock;
            case GLFW_KEY_NUM_LOCK -> ImGuiKey.NumLock;
            case GLFW_KEY_PRINT_SCREEN -> ImGuiKey.PrintScreen;
            case GLFW_KEY_PAUSE -> ImGuiKey.Pause;
            case GLFW_KEY_KP_0 -> ImGuiKey.Keypad0;
            case GLFW_KEY_KP_1 -> ImGuiKey.Keypad1;
            case GLFW_KEY_KP_2 -> ImGuiKey.Keypad2;
            case GLFW_KEY_KP_3 -> ImGuiKey.Keypad3;
            case GLFW_KEY_KP_4 -> ImGuiKey.Keypad4;
            case GLFW_KEY_KP_5 -> ImGuiKey.Keypad5;
            case GLFW_KEY_KP_6 -> ImGuiKey.Keypad6;
            case GLFW_KEY_KP_7 -> ImGuiKey.Keypad7;
            case GLFW_KEY_KP_8 -> ImGuiKey.Keypad8;
            case GLFW_KEY_KP_9 -> ImGuiKey.Keypad9;
            case GLFW_KEY_KP_DECIMAL -> ImGuiKey.KeypadDecimal;
            case GLFW_KEY_KP_DIVIDE -> ImGuiKey.KeypadDivide;
            case GLFW_KEY_KP_MULTIPLY -> ImGuiKey.KeypadMultiply;
            case GLFW_KEY_KP_SUBTRACT -> ImGuiKey.KeypadSubtract;
            case GLFW_KEY_KP_ADD -> ImGuiKey.KeypadAdd;
            case GLFW_KEY_KP_ENTER -> ImGuiKey.KeypadEnter;
            case GLFW_KEY_KP_EQUAL -> ImGuiKey.KeypadEqual;
            case GLFW_KEY_LEFT_SHIFT -> ImGuiKey.LeftShift;
            case GLFW_KEY_LEFT_CONTROL -> ImGuiKey.LeftCtrl;
            case GLFW_KEY_LEFT_ALT -> ImGuiKey.LeftAlt;
            case GLFW_KEY_LEFT_SUPER -> ImGuiKey.LeftSuper;
            case GLFW_KEY_RIGHT_SHIFT -> ImGuiKey.RightShift;
            case GLFW_KEY_RIGHT_CONTROL -> ImGuiKey.RightCtrl;
            case GLFW_KEY_RIGHT_ALT -> ImGuiKey.RightAlt;
            case GLFW_KEY_RIGHT_SUPER -> ImGuiKey.RightSuper;
            case GLFW_KEY_MENU -> ImGuiKey.Menu;
            case GLFW_KEY_0 -> ImGuiKey._0;
            case GLFW_KEY_1 -> ImGuiKey._1;
            case GLFW_KEY_2 -> ImGuiKey._2;
            case GLFW_KEY_3 -> ImGuiKey._3;
            case GLFW_KEY_4 -> ImGuiKey._4;
            case GLFW_KEY_5 -> ImGuiKey._5;
            case GLFW_KEY_6 -> ImGuiKey._6;
            case GLFW_KEY_7 -> ImGuiKey._7;
            case GLFW_KEY_8 -> ImGuiKey._8;
            case GLFW_KEY_9 -> ImGuiKey._9;
            case GLFW_KEY_A -> ImGuiKey.A;
            case GLFW_KEY_B -> ImGuiKey.B;
            case GLFW_KEY_C -> ImGuiKey.C;
            case GLFW_KEY_D -> ImGuiKey.D;
            case GLFW_KEY_E -> ImGuiKey.E;
            case GLFW_KEY_F -> ImGuiKey.F;
            case GLFW_KEY_G -> ImGuiKey.G;
            case GLFW_KEY_H -> ImGuiKey.H;
            case GLFW_KEY_I -> ImGuiKey.I;
            case GLFW_KEY_J -> ImGuiKey.J;
            case GLFW_KEY_K -> ImGuiKey.K;
            case GLFW_KEY_L -> ImGuiKey.L;
            case GLFW_KEY_M -> ImGuiKey.M;
            case GLFW_KEY_N -> ImGuiKey.N;
            case GLFW_KEY_O -> ImGuiKey.O;
            case GLFW_KEY_P -> ImGuiKey.P;
            case GLFW_KEY_Q -> ImGuiKey.Q;
            case GLFW_KEY_R -> ImGuiKey.R;
            case GLFW_KEY_S -> ImGuiKey.S;
            case GLFW_KEY_T -> ImGuiKey.T;
            case GLFW_KEY_U -> ImGuiKey.U;
            case GLFW_KEY_V -> ImGuiKey.V;
            case GLFW_KEY_W -> ImGuiKey.W;
            case GLFW_KEY_X -> ImGuiKey.X;
            case GLFW_KEY_Y -> ImGuiKey.Y;
            case GLFW_KEY_Z -> ImGuiKey.Z;
            case GLFW_KEY_F1 -> ImGuiKey.F1;
            case GLFW_KEY_F2 -> ImGuiKey.F2;
            case GLFW_KEY_F3 -> ImGuiKey.F3;
            case GLFW_KEY_F4 -> ImGuiKey.F4;
            case GLFW_KEY_F5 -> ImGuiKey.F5;
            case GLFW_KEY_F6 -> ImGuiKey.F6;
            case GLFW_KEY_F7 -> ImGuiKey.F7;
            case GLFW_KEY_F8 -> ImGuiKey.F8;
            case GLFW_KEY_F9 -> ImGuiKey.F9;
            case GLFW_KEY_F10 -> ImGuiKey.F10;
            case GLFW_KEY_F11 -> ImGuiKey.F11;
            case GLFW_KEY_F12 -> ImGuiKey.F12;
            default -> ImGuiKey.None;
        };
    }
    ...
}
```

First we need to set up a GLFW key callback which first calls the `Window` key callback to handle key events and then translates GLFW key codes to Imgui ones. When setting a callback we obtain a reference to any previously established one so we can chain them; in this case we would invoke it if the key event is not handled by ImGui. We are not using char callbacks in other parts of the code, but if you do, remember to apply that chaining schema also. After that, we update the state of Imgui according to key pressed or released events. Finally, we need to set up a char callback so text input widgets can process those events.
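
As a reference, chaining to the previously installed callback could look like this minimal sketch (an assumption, not part of this chapter's code), placed inside the key callback lambda:

```java
// Illustrative only: forward the event to the previously installed callback
// when ImGui does not want to capture the keyboard.
if (prevKeyCallBack != null && !ImGui.getIO().getWantCaptureKeyboard()) {
    prevKeyCallBack.invoke(handle, key, scancode, action, mods);
}
```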

Let's view the `render` method now:

```java
public class GuiRender {
    ...
    public void render(Scene scene) {
        IGuiInstance guiInstance = scene.getGuiInstance();
        if (guiInstance == null) {
            return;
        }
        guiInstance.drawGui();

        shaderProgram.bind();

        glEnable(GL_BLEND);
        glBlendEquation(GL_FUNC_ADD);
        glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        glDisable(GL_DEPTH_TEST);
        glDisable(GL_CULL_FACE);

        glBindVertexArray(guiMesh.getVaoId());

        glBindBuffer(GL_ARRAY_BUFFER, guiMesh.getVerticesVBO());
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, guiMesh.getIndicesVBO());

        ImGuiIO io = ImGui.getIO();
        scale.x = 2.0f / io.getDisplaySizeX();
        scale.y = -2.0f / io.getDisplaySizeY();
        uniformsMap.setUniform("scale", scale);

        ImDrawData drawData = ImGui.getDrawData();
        int numLists = drawData.getCmdListsCount();
        for (int i = 0; i < numLists; i++) {
            glBufferData(GL_ARRAY_BUFFER, drawData.getCmdListVtxBufferData(i), GL_STREAM_DRAW);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, drawData.getCmdListIdxBufferData(i), GL_STREAM_DRAW);

            int numCmds = drawData.getCmdListCmdBufferSize(i);
            for (int j = 0; j < numCmds; j++) {
                final int elemCount = drawData.getCmdListCmdBufferElemCount(i, j);
                final int idxBufferOffset = drawData.getCmdListCmdBufferIdxOffset(i, j);
                final int indices = idxBufferOffset * ImDrawData.sizeOfImDrawIdx();

                texture.bind();
                glDrawElements(GL_TRIANGLES, elemCount, GL_UNSIGNED_SHORT, indices);
            }
        }

        glEnable(GL_DEPTH_TEST);
        glEnable(GL_CULL_FACE);
        glDisable(GL_BLEND);
    }
    ...
}
```

The first thing that we do is to check if we have set up an implementation of the `IGuiInstance` interface. If there is no instance, we just return; there is no need to render anything. After that we call the `drawGui` method; that is, in each render call we invoke that method so that Imgui can update its state and generate the proper vertex data. After binding the shader we first enable blending, which will allow us to use transparencies. Just by enabling blending, transparencies still will not show up; we also need to instruct OpenGL about how the blending will be applied. This is done through the `glBlendFuncSeparate` function. You can check an excellent explanation about the details of the different functions that can be applied [here](https://learnopengl.com/Advanced-OpenGL/Blending).
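
With the parameters used above, the color channels follow standard alpha blending (the alpha channel is combined analogously using the `GL_ONE` and `GL_ONE_MINUS_SRC_ALPHA` factors):

$$C_{out} = C_{src} \cdot \alpha_{src} + C_{dst} \cdot (1 - \alpha_{src})$$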

After that, we need to disable depth testing and face culling for Imgui to work properly. Then, we bind the gui mesh, which defines the structure of the data, and bind the vertex and index buffers. Imgui uses screen coordinates to generate the vertex data, that is, `x` values cover the `[0, screen width]` range and `y` values cover the `[0, screen height]` range. We will use the `scale` uniform to map from that coordinate system to the `[-1, 1]` range of OpenGL's clip space.
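
As a quick sanity check of that mapping (the display size here is a made-up example), mirroring the computation that the vertex shader shown later performs:

```java
// Illustrative only: screen-to-clip-space mapping for a 1920x1080 display.
// The vertex shader computes: inPos * scale + vec2(-1.0, 1.0).
float scaleX = 2.0f / 1920;
float scaleY = -2.0f / 1080;
// Top-left corner (0, 0) maps to (-1, 1)
System.out.printf("%.1f, %.1f%n", 0 * scaleX - 1.0f, 0 * scaleY + 1.0f);
// Bottom-right corner (1920, 1080) maps to (1, -1)
System.out.printf("%.1f, %.1f%n", 1920 * scaleX - 1.0f, 1080 * scaleY + 1.0f);
```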

After that, we retrieve the data generated by Imgui to render the GUI. Imgui organizes the data in what it calls command lists. Each command list has a buffer where it stores the vertex and index data, so we first dump that data to the GPU by calling `glBufferData`. Each command list also defines a set of commands which we will use to generate the draw calls. Each command stores the number of elements to be drawn and the offset to be applied to the buffer in the command list. When we have drawn all the elements we can re-enable depth testing and face culling and disable blending.

Finally, we need to add a `resize` method, which will be called any time the window is resized, to adjust Imgui's display size.

```java
public class GuiRender {
    ...
    public void resize(int width, int height) {
        ImGuiIO imGuiIO = ImGui.getIO();
        imGuiIO.setDisplaySize(width, height);
    }
}
```

We need to update the `UniformsMap` class to add support for 2D vectors:

```java
public class UniformsMap {
    ...
    public void setUniform(String uniformName, Vector2f value) {
        glUniform2f(getUniformLocation(uniformName), value.x, value.y);
    }
}
```

The vertex shader used for rendering the GUI (`gui.vert`) is quite simple: we just transform the coordinates so they are in the `[-1, 1]` range and pass the texture coordinates and color through so they can be used in the fragment shader:

```glsl
#version 330

layout (location=0) in vec2 inPos;
layout (location=1) in vec2 inTextCoords;
layout (location=2) in vec4 inColor;

out vec2 frgTextCoords;
out vec4 frgColor;

uniform vec2 scale;

void main()
{
    frgTextCoords = inTextCoords;
    frgColor = inColor;
    gl_Position = vec4(inPos * scale + vec2(-1.0, 1.0), 0.0, 1.0);
}
```

In the fragment shader (`gui.frag`) we just output the combination of the vertex color and the texture color associated with its texture coordinates:

```glsl
#version 330

in vec2 frgTextCoords;
in vec4 frgColor;

uniform sampler2D txtSampler;

out vec4 outColor;

void main()
{
    outColor = frgColor * texture(txtSampler, frgTextCoords);
}
```

## Putting it all together

Now we need to glue all the previous pieces together to render the GUI. We will first start by using the new `GuiRender` class in the `Render` one:

```java
public class Render {
    ...
    private GuiRender guiRender;
    ...
    public Render(Window window) {
        ...
        guiRender = new GuiRender(window);
    }

    public void cleanup() {
        ...
        guiRender.cleanup();
    }

    public void render(Window window, Scene scene) {
        ...
        guiRender.render(scene);
    }

    public void resize(int width, int height) {
        guiRender.resize(width, height);
    }
}
```

We also need to modify the `Engine` class to include `IGuiInstance` in the update loop and to use its return value to indicate whether input has been consumed or not.

```java
public class Engine {
    ...
    public Engine(String windowTitle, Window.WindowOptions opts, IAppLogic appLogic) {
        ...
        render = new Render(window);
        ...
    }
    ...
    private void resize() {
        int width = window.getWidth();
        int height = window.getHeight();
        scene.resize(width, height);
        render.resize(width, height);
    }

    private void run() {
        ...
        IGuiInstance iGuiInstance = scene.getGuiInstance();
        while (running && !window.windowShouldClose()) {
            ...
            if (targetFps <= 0 || deltaFps >= 1) {
                window.getMouseInput().input();
                boolean inputConsumed = iGuiInstance != null && iGuiInstance.handleGuiInput(scene, window);
                appLogic.input(window, scene, now - initialTime, inputConsumed);
            }
            ...
        }
        ...
    }
    ...
}
```

We also need to update the `IAppLogic` interface to use the input-consumed return value.

```java
public interface IAppLogic {
    ...
    void input(Window window, Scene scene, long diffTimeMillis, boolean inputConsumed);
    ...
}
```

And finally, we will implement the `IGuiInstance` in the `Main` class:

```java
public class Main implements IAppLogic, IGuiInstance {
    ...
    public static void main(String[] args) {
        ...
        Engine gameEng = new Engine("chapter-10", new Window.WindowOptions(), main);
        ...
    }

    ...
    @Override
    public void drawGui() {
        ImGui.newFrame();
        ImGui.setNextWindowPos(0, 0, ImGuiCond.Always);
        ImGui.showDemoWindow();
        ImGui.endFrame();
        ImGui.render();
    }

    @Override
    public boolean handleGuiInput(Scene scene, Window window) {
        ImGuiIO imGuiIO = ImGui.getIO();
        MouseInput mouseInput = window.getMouseInput();
        Vector2f mousePos = mouseInput.getCurrentPos();
        imGuiIO.addMousePosEvent(mousePos.x, mousePos.y);
        imGuiIO.addMouseButtonEvent(0, mouseInput.isLeftButtonPressed());
        imGuiIO.addMouseButtonEvent(1, mouseInput.isRightButtonPressed());

        return imGuiIO.getWantCaptureMouse() || imGuiIO.getWantCaptureKeyboard();
    }
    ...
    public void input(Window window, Scene scene, long diffTimeMillis, boolean inputConsumed) {
        if (inputConsumed) {
            return;
        }
        ...
    }
}
```

In the `drawGui` method we set up a new frame and the window position and just invoke `showDemoWindow` to generate Imgui's demo window. After ending the frame it is very important to call `render`: this is what generates the set of commands from the GUI structure defined previously. The `handleGuiInput` method first gets the mouse position and updates Imgui's IO class with that information and the mouse button status. We also return a boolean that indicates whether input has been captured by Imgui. Finally, we just need to update the `input` method to receive that flag. In this specific case, if input has already been consumed by the GUI, we just return.
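
If you want to go beyond the demo window, a minimal custom window could look like the following sketch (`begin`, `text` and `end` are standard Dear ImGui calls; the window contents are just an example):

```java
@Override
public void drawGui() {
    ImGui.newFrame();
    ImGui.setNextWindowPos(0, 0, ImGuiCond.Always);
    // Illustrative window with a title bar and a line of text
    ImGui.begin("Hello window");
    ImGui.text("Hello from Imgui");
    ImGui.end();
    ImGui.endFrame();
    ImGui.render();
}
```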

With all those changes you will be able to see Imgui's demo window overlapping the rotating cube. You can interact with the different widgets and panels to get a glimpse of the capabilities of Imgui.

![Demo window](../.gitbook/assets/demo.png)

[Next chapter](../chapter-11/chapter-11.md)


================================================
FILE: chapter-11/chapter-11.md
================================================
# Chapter 11 - Lights

In this chapter, we will learn how to add light to our 3D game engine. We will not implement a physically perfect light model because, setting aside the complexity, it would require a tremendous amount of computer resources. Instead, we will implement an approximation that provides decent results, using an algorithm named Phong shading (developed by Bui Tuong Phong). Another important thing to point out is that we will only model lights, but we won’t model the shadows that those lights should generate (this will be done in another chapter).

You can find the complete source code for this chapter [here](https://github.com/lwjglgamedev/lwjglbook/tree/main/chapter-11).

## Some concepts

Before we start, let us define some light types:

* **Point light**: This type of light models a light source that’s emitted uniformly from a point in space in all directions.
* **Spot light**: This type of light models a light source that’s emitted from a point in space, but, instead of emitting in all directions, its emission is restricted to a cone.
* **Directional light**: This type of light models the light that we receive from the sun; all the objects in the 3D space are hit by parallel light rays coming from a specific direction. No matter if the object is close or far away, all the light rays impact the objects at the same angle.
* **Ambient light**: This type of light comes from everywhere in the space and illuminates all the objects in the same way.

![Light types](<../.gitbook/assets/light_types (1).png>)

Thus, to model light, we need to take into consideration the type of light, plus its position and some other parameters like its color. Of course, we must also consider the way that objects, hit by light rays, absorb and reflect light.

The Phong shading algorithm will model the effects of light for each point in our model, that is, for every vertex. This is why it’s called a local illumination simulation, and this is the reason why this algorithm will not calculate shadows: it will just calculate the light to be applied to every vertex without taking into consideration whether the vertex is behind an object that blocks the light. We will overcome this drawback in later chapters. Because of that, it's a simple and fast algorithm that provides very good results. We will use here a simplified version that does not model materials in depth.

The Phong algorithm considers three components for lighting:

* **Ambient light**: models light that comes from everywhere, this will serve us to illuminate (with the required intensity) the areas that are not hit by any light, it’s like a background light.
* **Diffuse reflectance**: takes into consideration that surfaces that are facing the light source are brighter.
* **Specular reflectance**: models how light reflects on polished or metallic surfaces.

At the end, what we want to obtain is a factor that, multiplied by the color assigned to a fragment, will set that color brighter or darker depending on the light it receives. Let’s name our components as $$A$$ for ambient, $$D$$ for diffuse, and $$S$$ for specular. That factor will be the addition of those components:

$$L = A + D + S$$

In fact, those components are indeed colors, that is, the color components that each light component contributes to. This is due to the fact that light components will not only provide a degree of intensity, but they can also modify the color of the model. In our fragment shader, we just need to multiply that light color by the original fragment color (obtained from a texture or a base color).

We can also assign different colors for the same material, which will be used in the ambient, diffuse, and specular components. Hence, these components will be modulated by the colors associated with the material. If the material has a texture, we will simply use the same texture for all of the components.

So the final color for a non-textured material will be: $$L = A * ambientColour + D * diffuseColour + S * specularColour$$.

And the final color for a textured material will be:

$$L = A * textureColour + D * textureColour + S * textureColour$$

## Normals

Normals are a key element when working with lights. Let’s define them first. The normal of a plane is a vector perpendicular to that plane whose length is equal to one.

![Normals](<../.gitbook/assets/normals (1).png>)

As you can see in the figure above, a plane can have two normals. Which one should we use? Normals in 3D graphics are used for lighting, so we should choose the normal that is oriented towards the source of light. In other words, we should choose the normal that points out from the external face of our model.

When we have a 3D model, it is composed of polygons, triangles in our case. Each triangle is composed of three vertices. The normal vector for a triangle is the vector perpendicular to the triangle's surface whose length is equal to one.

A vertex normal is associated with a specific vertex and is the combination of the normals of the surrounding triangles (of course, its length is equal to one). Here you can see the vertex normals of a 3D mesh (taken from [Wikipedia](https://en.wikipedia.org/wiki/Vertex_normal#/media/File:Vertex_normals.png))

![Vertex normals](<../.gitbook/assets/vertex_normals (1).png>)
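
As a concrete illustration, a triangle's face normal is the normalized cross product of two of its edges; vertex normals can then be obtained by averaging the face normals of the triangles that share each vertex. Here is a minimal sketch using JOML (the vertex positions are made up, and this is not part of the engine code):

```java
import org.joml.Vector3f;

// Illustrative triangle lying on the XZ plane
Vector3f v0 = new Vector3f(0, 0, 0);
Vector3f v1 = new Vector3f(1, 0, 0);
Vector3f v2 = new Vector3f(0, 0, -1);

Vector3f edge1 = new Vector3f(v1).sub(v0);
Vector3f edge2 = new Vector3f(v2).sub(v0);
// Cross product of the two edges, normalized to unit length: (0, 1, 0) here
Vector3f normal = new Vector3f(edge1).cross(edge2).normalize();
```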

## Diffuse reflectance

Let’s talk now about diffuse reflectance. This models the fact that surfaces which face the light source perpendicularly look brighter than surfaces where light is received at a more indirect angle. Those surfaces receive more light; the light density (let me call it this way) is higher.

![Diffuse Light](<../.gitbook/assets/diffuse_light (1).png>)

But, how do we calculate this? This is where we will first start using normals. Let’s draw the normals for three points in the previous figure. As you can see, the normal for each point will be the vector perpendicular to the tangent plane for each point. Instead of drawing rays coming from the source of light we will draw vectors from each point to the point of light (that is, in the opposite direction).

![Normals and light direction](<../.gitbook/assets/diffuse_light_normals (1).png>)

As you can see, the normal associated to $$P1$$, named $$N1$$, is parallel to the vector that points to the light source, which models the opposite of the light ray ($$N1$$ has been sketched displaced so you can see it, but it’s equivalent mathematically). $$P1$$ has an angle equal to $$0$$ with the vector that points to the light source. Its surface is perpendicular to the light source and $$P1$$ would be the brightest point.

The normal associated to $$P2$$, named $$N2$$, has an angle of around 30 degrees with the vector that points to the light source, so it should be darker than $$P1$$. Finally, the normal associated to $$P3$$, named $$N3$$, is also parallel to the vector that points to the light source, but the two vectors point in opposite directions. $$P3$$ has an angle of 180 degrees with the vector that points to the light source, and should not get any light at all.

So it seems that we have a good approach to determine the light intensity that gets to a point, and it is related to the angle between the normal and the vector that points to the light source.
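
That relationship is usually captured by the cosine of the angle, which for unit-length vectors is simply their dot product: it is $$1$$ for $$P1$$ (angle $$0$$) and $$-1$$ for $$P3$$ (angle of 180 degrees), which gets clamped to $$0$$. Here is a minimal sketch using JOML (the positions are made-up values, not the book's shader code):

```java
import org.joml.Vector3f;

// Made-up values: a unit surface normal, a surface point and a light position
Vector3f normal = new Vector3f(0, 1, 0);
Vector3f point = new Vector3f(0, 0, 0);
Vector3f lightPos = new Vector3f(0, 10, 0);

// Direction from the point towards the light, normalized
Vector3f toLight = new Vector3f(lightPos).sub(point).normalize();

// Diffuse factor: 1 when the normal points straight at the light,
// clamped to 0 for angles of 90 degrees or more
float diffuseFactor = Math.max(normal.dot(toLight), 0.0f);
```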