Repository: enkimute/LookMaNoMatrices
Branch: main
Commit: 0c33ae24ed8f
Files: 15
Total size: 178.1 KB
Directory structure:
gitextract_3gsjwqbd/
├── LICENSE
├── README.md
├── data/
│ ├── cow.glb
│ └── elephant.glb
├── index.html
└── src/
├── LookMaNoMatrices.js
├── miniGGX.glsl
├── miniGL.js
├── miniGLTF.js
├── miniIBL.glsl
├── miniPGA.glsl
├── miniPGA.js
├── miniRender.js
├── shaders.js
└── util.js
================================================
FILE CONTENTS
================================================
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2024 Steven De Keninck
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# Look, Ma, No Matrices!
To be presented at SIGGRAPH 2024 (gensub_345).
Supplanting matrices with Geometric Algebra (PGA) in a forward 3D renderer.
For info and a live demo:
https://enkimute.github.io/LookMaNoMatrices/
Features:
* PBR Metalness.
* GGX IBL + DL.
* glTF/glb with animations.
* PGA motor skinning (isomorphic to dual quaternions).
* Animation blending with motors.
* 25% Smaller Vertex Descriptor!
* Tangent Space normalmapping using rotors.
* no matrices!
================================================
FILE: index.html
================================================
<!DOCTYPE html>
<html lang="en">
<head>
<!-- Mobile scaling and title -->
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, user-scalable=no" />
<title>Look, Ma, No Matrices!</title>
<!-- A nice font -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Gasoek+One&family=Pathway+Extreme:ital,opsz,wght@0,8..144,100..900;1,8..144,100..900&display=swap" rel="stylesheet">
<!-- Latex support -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.16.9/dist/katex.min.css" integrity="sha384-n8MVd4RsNIU0tAv4ct0nTaAbDJwPJzDEaqSD1odI+WdtXRGWt2kTvGFasHpSy3SV" crossorigin="anonymous">
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.16.9/dist/katex.min.js" integrity="sha384-XjKyOOlGwcjNTAIQHIpgOno0Hl1YQqzUOEleOLALmuqehneUG+vnGctmUb0ZY0l8" crossorigin="anonymous"></script>
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.16.9/dist/contrib/auto-render.min.js" integrity="sha384-+VBxd3r6XgURycqtZ117nYw44OOcIax56Z4dCRWbxyPt0Koah1uHoK0o4+/RRE05" crossorigin="anonymous"
onload="renderMathInElement(document.body,{delimiters:[{left: '$$', right: '$$', display: true},{left: '$', right: '$', display: false}]});"></script>
<!-- Highlighting for snippets -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/highlightjs/cdn-release@11.9.0/build/styles/base16/marrakesh.min.css">
<script src="https://cdn.jsdelivr.net/gh/highlightjs/cdn-release@11.9.0/build/highlight.min.js"></script>
<script src="https://cdn.jsdelivr.net/gh/highlightjs/cdn-release@11.9.0/build/languages/glsl.min.js"></script>
<!-- Minimal local styling -->
<style>
html { background : linear-gradient(0deg, rgb(0 0 0) ,rgb(66 22 0) ) fixed; }
body { font-family: "Pathway Extreme", Arial, sans-serif; text-align:justify; counter-reset: section; }
b { color : white }
#render { position:absolute; width:100%; height:100%; top:0; left:0; pointer-events:none; z-index:100; }
.main { color:#BBB; position:relative; margin:auto; max-width: min(100%, 1024px); overflow:hidden }
h1, h2, h3, h4 { color: #EEE; }
h2::before { counter-increment: section; content: counter(section) ". "; }
h2 { counter-reset: subsection; }
h3::before { counter-increment: subsection; content: counter(section) "." counter(subsection) ". "; }
h3 { counter-reset : subsubsection; }
h4::before { counter-increment: subsubsection; content: counter(section) "." counter(subsection) "." counter(subsubsection) ". "; }
table { border-collapse:collapse; border:1px solid rgba(255,255,255,0.8); background : rgba(0,0,0,0.3); }
td { padding:5px; width: 6.25%; border: 1px solid rgba(255,255,255,0.8); text-align:center }
td.b { border-right:3px solid white }
th { text-align:center }
a { color:#CCC; }
a:visited { color:#AAF; }
p { margin-left:10px }
</style>
</head>
<body>
<!-- Our canvas and load progress indicator -->
<canvas id="render"></canvas>
<progress id="file" max="100" value="0" STYLE="position:fixed; top:5px; width:calc(100% - 20px)"></progress>
<!-- CONTENT -->
<DIV class="main">
<!-- Centered Document title ------------------------------------------------------------- -->
<CENTER>
<H1>Look, Ma, No Matrices!</H1>
<div>
<A TARGET="BLANK" HREF="https://github.com/enkimute/LookMaNoMatrices">https://github.com/enkimute/LookMaNoMatrices</A><BR>
A forward renderer without matrices.
</div>
<B>Steven De Keninck</B><BR>
<B>Computer Vision Group • University of Amsterdam</B><BR><BR><BR>
<big>Putting PGA ($\mathbb R_{3,0,1}$) to the test!</big>
</CENTER>
<BR><BR>
<!-- First glTF model and abstract --------------------------------------------------------- -->
<SPAN
class="glTF"
title="model by dogzerx@hotmail.com"
style="float:left; display:block; width:180px; height:270px;"
data-scene="data/elephant.glb"
data-animA="0"
data-animB="4"
></SPAN>
<p>Since the 2019 SIGGRAPH course [1], Geometric Algebra, and Euclidean PGA (plane-based or
projective geometric algebra) in particular, has been gaining traction within the computer
graphics and machine learning communities [2, 3, 4]. Despite its broad applicability,
including for higher dimensional geometry and physics, its adoption in traditional 3D
graphics has been limited, often merely re-branding a dual quaternion as a PGA motor.
The ’Look, Ma, No Matrices!’ project aims to broaden PGA’s application by introducing
a modern, glTF-compliant, forward-rendering 3D engine that fully integrates the PGA algebra.
</p>
<p>In this write-up, we go over the project, highlighting the solutions and techniques that
are required when moving to a PGA implementation. It was at times tempting to start from existing
techniques and attempt an 'algebra-level' translation. However, this often leads to
unsatisfactory solutions; for PGA to truly reach its potential, a more fundamental
revisit is often needed. Algebra without geometry, indeed, is blind.
</p>
<!-- Introduction ----------------------------------------------------------------- ---------->
<H2>Introduction.</H2>
<P>
Matrices are everywhere in computer graphics. In fact, there was a time when 4x4 matrices were
both baked into the GPU and a mandatory part of all graphics APIs. This project would then
simply not have been possible. Today however, pushed in no small part by the advancements in AI,
GPUs are highly programmable scalar processors, no longer tied to the long-gone fixed-function
pipeline. Yet 4x4 matrices are still omnipresent. And why should they not be? They can represent
all linear transformations, and the typical forward graphics pipeline indeed involves both
rigid and projective transformations. Seems like a good fit.
</P>
<P>
Quaternions are also everywhere in computer graphics. It turns out that matrices have less-than-ideal
interpolation properties, and modern formats like Khronos' glTF use quaternions for all their
rotation needs. Fantastic for animations, and generally considered worth the cost of the
unavoidable conversions to and from matrices.
</P>
<P>
Out in the real world, however, the vast majority of matrices in your typical 3D engine setup
are going to be orthogonal matrices, encoding just rotations and translations. And this is where
the motor manifold of PGA comes in. At a lower computational and memory cost, PGA motors encode
the full set of Euclidean motions, additionally offering conversion free inclusion of quaternions
and dual quaternions.
</P>
<P>
This of course raises the question: can we replace all matrices in a typical forward renderer by
their PGA equivalents? Is a true matrix-free setup possible, or even desirable? Only one way to
find out ...
</P>
<p>
But before we start, a short disclaimer. This project aims to replace matrices without compromise.
That, of course, is not how one should approach any engineering problem. As a reference for this
project we use the Khronos glTF viewer. This is a sensible choice, as no doubt others will
use it as a reference too, but it does not attempt to be an optimal implementation: it uses full 4x4
matrices, and the built-in glsl operators on them, for most operations, where experienced graphics
programmers know there are still wins to be had. The point here is not to make the most optimal
implementation; in light of some of the findings below, the most optimal implementation
is most likely a hybrid solution, subject to a future write-up!
</p>
<!-- Fast PGA ----------------------------------------------------------------- ---------->
<H2>FPGA: Fast PGA!</H2>
<p>
A full introduction to PGA is outside the scope of this article, and we will assume the reader
is familiar with at least the material in [1, 5, 6]. Instead, we will focus on the choices made for
this particular implementation, and specifically work out in detail the basic operators that are
needed, both on the CPU and the GPU.
</p>
<H3>Basics and Basis</H3>
<p>
The PGA algebra is generated by four basis vectors $\mathbf e_0$ to $\mathbf e_3$. The
$\mathbf e_1, \mathbf e_2, \mathbf e_3$ vectors map to the $x=0, y=0, z=0$ planes respectively
while the special degenerate $\mathbf e_0$ vector represents the plane at infinity. These four generators
are then combined to form six bivectors, four trivectors and a single quadvector that together with
the scalar represent all of the PGA elements. Our specific choice for basis and memory layout was
selected to minimize conversions when handling typical graphics data. All of the elements of PGA are
intricately connected; an overview of our choices, and how they map to transformations and geometric
elements, is given in the following table (where the second row denotes the square of each element):
</p>
<BR>
<CENTER STYLE="overflow-x:auto">
<TABLE>
<TR><TD>e1 <TD>e2 <TD>e3 <TD class="b">e0<TD>1 <TD> e23 <TD> e31 <TD> e12 <TD> e01 <TD> e02 <TD> e03 <TD class="b"> e0123 <TD> e032 <TD> e013 <TD> e021 <TD> e123 </TR>
<TR STYLE="color:#FA8"><TD>+1 <TD>+1 <TD>+1 <TD class="b">0<TD>+1 <TD> -1 <TD> -1 <TD> -1 <TD> 0 <TD> 0 <TD> 0 <TD class="b"> 0 <TD> 0 <TD> 0 <TD> 0 <TD> -1 </TR>
<TR><TD class="b" colspan=4 rowspan=2>plane-reflection<TD class="b" colspan=8>Motor / Dual Quaternion / Lie Group<TD colspan=4 rowspan=2>point-reflection</TR>
<TR><TD colspan=4>Quaternion<TD colspan=4 class="b"></TR>
<TR><TD rowspan=2 class="b" colspan=4>plane<BR>$a\mathbf e_1 + b\mathbf e_2 + c\mathbf e_3 + d\mathbf e_0 = 0$<TD rowspan=2><TD colspan=3>Line through orig.<TD colspan=3>∞ line<TD rowspan=2 class="b"><TD colspan=4 rowspan=2>point<BR>$(x\mathbf e_1 + y\mathbf e_2 + z\mathbf e_3 + w\mathbf e_0)^*$</TR>
<TR><TD colspan=6>Line / Lie Algebra</TR>
<TR><TD class="b" colspan=4>vector<TD>S<TD colspan=6>bivector<TD class="b">PSS<TD colspan=4>trivector</TR>
</TABLE>
</CENTER>
<BR>
<p>
These choices translate to the following simple shader structures, where we opted to stay within the built-in
types to retain addition, subtraction and scalar multiplication (glsl does not support operator overloading for custom types).
</p>
<PRE><CODE class="language-glsl">#define motor mat2x4 // [ [s, e23, e31, e12], [e01, e02, e03, e0123] ]
#define line mat2x3 // [ [e23, e31, e12], [e01, e02, e03] ]
#define point vec3 // [ e032, e013, e021 ] implied 1 e123
#define direction vec3 // [ e032, e013, e021 ] implied 0 e123</CODE></PRE>
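<p>
On the CPU side, the same layout can be mirrored as a flat array of eight floats. As a small illustrative
sketch (the helper names below are ours, not the engine's; the $-0.5$ factor on the translator reflects
that the sandwich product applies a motor twice):
</p>
<PRE><CODE CLASS="language-javascript">// Motor layout as 8 floats: [ s, e23, e31, e12, e01, e02, e03, e0123 ]
const identity = () => [1, 0, 0, 0, 0, 0, 0, 0];

// A translator moving points over (x,y,z): the motor carries half the
// translation, since the sandwich product M p ~M applies it twice.
const translator = (x, y, z) => [1, 0, 0, 0, -0.5*x, -0.5*y, -0.5*z, 0];</CODE></PRE>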
<H3>Get your Geometric Products!</H3>
<p>
With our data structures defined, we can focus our attention on implementing the subset of PGA products
we will need. Special attention is given to the composition and sandwich operators: the numerical efficiency
of matrix-vector multiplication is well known, and as we will discover, some creativity is needed to get
PGA up to par.
</p>
<H4>Composition of transformations.</H4>
<p>
The 8-float motors of PGA, isomorphic to the dual quaternions, naturally come with an efficient composition
operator in the form of the geometric product. Recall that the product of two 4x4 matrices requires 64
multiplications and 48 additions. For two general PGA motors, their composition clocks in at just 48
multiplications and 40 additions. Working out the product at coefficient level produces the following
implementation on the CPU:
</p>
<PRE><CODE class="language-javascript">// 48 mul, 40 add. (baseType is the typed array used throughout, e.g. Float32Array)
export const gp_mm = (a,b,res=new baseType(8)) => {
const a0=a[0],a1=a[1],a2=a[2],a3=a[3],a4=a[4],a5=a[5],a6=a[6],a7=a[7],
b0=b[0],b1=b[1],b2=b[2],b3=b[3],b4=b[4],b5=b[5],b6=b[6],b7=b[7];
res[0] = a0*b0-a1*b1-a2*b2-a3*b3;
res[1] = a0*b1+a1*b0+a3*b2-a2*b3;
res[2] = a0*b2+a1*b3+a2*b0-a3*b1;
res[3] = a0*b3+a2*b1+a3*b0-a1*b2;
res[4] = a0*b4+a3*b5+a4*b0+a6*b2-a1*b7-a2*b6-a5*b3-a7*b1;
res[5] = a0*b5+a1*b6+a4*b3+a5*b0-a2*b7-a3*b4-a6*b1-a7*b2;
res[6] = a0*b6+a2*b4+a5*b1+a6*b0-a1*b5-a3*b7-a4*b2-a7*b3;
res[7] = a0*b7+a1*b4+a2*b5+a3*b6+a4*b1+a5*b2+a6*b3+a7*b0;
return res;
}</CODE></PRE>
<p>
This block of code, with some reshuffling and pattern matching can be written in glsl using dot and
cross products as:
</p>
<PRE><CODE class="language-glsl">// 48 mul, 40 add
motor gp_mm( motor a, motor b ) {
return motor(
a[0].x*b[0].x - dot(a[0].yzw, b[0].yzw),
a[0].x*b[0].yzw + b[0].x*a[0].yzw + cross(b[0].yzw, a[0].yzw),
a[0].x*b[1].xyz + b[0].x*a[1].xyz + cross(b[0].yzw, a[1].xyz) + cross(b[1].xyz, a[0].yzw) - b[1].w*a[0].yzw - a[1].w*b[0].yzw,
a[0].x*b[1].w + b[0].x*a[1].w + dot(a[0].yzw, b[1].xyz) + dot(a[1].xyz, b[0].yzw));
}</CODE></PRE>
<p>
While already reasonably efficient, the above code block handles general motors and there are many scenarios
where we deal with e.g. pure translations or rotations around the origin. In those scenarios many
of the motor coefficients will be zero, and reworking the above code block to incorporate that
is an easy task. For example, for the composition of two rotations around the origin we find we
need only 16 multiplications and 12 additions:
</p>
<PRE><CODE class="language-glsl">// 16 mul, 12 add
motor gp_rr( motor a, motor b ) {
return motor( a[0].x*b[0] + vec4( -dot(a[0].yzw, b[0].yzw), b[0].x*a[0].yzw + cross(b[0].yzw,a[0].yzw) ), vec4(0.) );
}</CODE></PRE>
<p>
Our implementation provides these optimized versions for any combination of translation (_t), rotation around the origin (_r)
and general motor (_m).
</p>
<CENTER>
<SPAN
class="glTF"
title="model by dogzerx@hotmail.com"
style="display:inline-block; width:140px; height:200px;"
data-scene="data/cow.glb"
data-animA="0"
data-animB="1"
></SPAN>
<DIV STYLE="display:inline-block;"><TABLE STYLE="max-width:600px;">
<TR><TH>Operation<TH>Multiplications<TH>Additions
<TR><TD>gp_mm<TD>48<TD>40
<TR><TD>gp_rr<TD>16<TD>12
<TR><TD>gp_tt<TD>0<TD>3
<TR><TD>gp_rt / gp_tr<TD>12<TD>8
<TR><TD>gp_rm / gp_mr<TD>32<TD>24
<TR><TD>gp_tm / gp_mt<TD>12<TD>12
</TABLE></DIV>
<SPAN
class="glTF"
title="model by dogzerx@hotmail.com"
style="display:inline-block; width:140px; height:200px;"
data-scene="data/elephant.glb"
data-animA="0"
data-animB="1"
></SPAN>
</CENTER>
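<p>
The cheapest entry in this table is easy to see directly: translators keep only the scalar and the three
dual coefficients, so their product reduces to adding those coefficients. A hypothetical JS sketch (the
engine's actual gp_tt may differ in details):
</p>
<PRE><CODE CLASS="language-javascript">// 0 mul, 3 add: two translators [1,0,0,0, e01,e02,e03, 0] compose by
// simply adding their dual (translation) coefficients.
const gp_tt = (a, b, res = [1, 0, 0, 0, 0, 0, 0, 0]) => {
  res[4] = a[4] + b[4];
  res[5] = a[5] + b[5];
  res[6] = a[6] + b[6];
  return res;
};</CODE></PRE>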
<H4>Transforming points</H4>
<p>
For the transformation of a point $p$ with a motor $M$, done in PGA with the sandwich product, the situation is more involved.
$$ p' = M p \widetilde M $$
Working out these two geometric products naively results in a whopping 33 multiplications and 29 additions, or more than twice
the 16 multiplications and 12 additions required for the matrix-vector equivalent. The reason for this is that this naive
expansion does not take into account the fact that PGA motors satisfy $M\widetilde M = 1$. It is however not too difficult to
incorporate this into our sandwich product. To do so, we suggest starting instead from the expression
$$ p' = M p \widetilde M + p \cdot (1 - M\widetilde M)$$
where the second term evaluates to zero for a normalized motor $M$. Evaluating this new expression at coefficient level allows
us to reduce the operations needed to 21 multiplications and 18 additions (which, for the isomorphic dual quaternions, is the best
known solution):
</p>
<PRE><CODE CLASS="language-glsl">// 21 mul, 18 add
point sw_mp( motor a, point b ) {
direction t = cross(b, a[0].yzw) - a[1].xyz;
return (a[0].x * t + cross(t, a[0].yzw) - a[0].yzw * a[1].w) * 2. + b;
}</CODE></PRE>
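<p>
A direct JS port of this sandwich product makes it easy to unit test on the CPU. In the sketch below (our
port, with the cross product written out and the 8-float array layout from before), the identity motor must
fix every point and a translator must shift it:
</p>
<PRE><CODE CLASS="language-javascript">const cross = (a, b) =>
  [ a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] ];

// 21 mul, 18 add - JS port of sw_mp.
// a = [s, e23,e31,e12, e01,e02,e03, e0123], b = [x,y,z]
const sw_mp = (a, b) => {
  const q = [a[1], a[2], a[3]];                      // rotational part, a[0].yzw
  const t = cross(b, q).map((v, i) => v - a[4 + i]); // cross(b,q) - a[1].xyz
  const c = cross(t, q);
  return b.map((v, i) => v + 2 * (a[0]*t[i] + c[i] - q[i]*a[7]));
};</CODE></PRE>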
<H4>Transforming directions</H4>
<p>
For directions, i.e. points at infinity with an implied $\mathbf e_{123}$ coefficient of $0$, we can do a bit better still. Applying the same normalization
trick, we find a solution that requires only 18 multiplications and 12 additions.
</p>
<PRE><CODE CLASS="language-glsl">// 18 mul, 12 add
direction sw_md( motor a, direction b ) {
direction t = cross(b, a[0].yzw);
return (a[0].x * t + cross(t, a[0].yzw)) * 2. + b;
}</CODE></PRE>
<p>
Anticipating our needs when dealing with tangent spaces, we also work out the sandwich product on the basis directions. (as opposed
to the general direction above). In that scenario, the computational cost can be reduced even further. In fact, if we produce an
output normalized to $0.5$ instead of $1$ we can reduce the computational cost for the transformation of e.g. the x axis to an
amazing 6 multiplications and 4 additions - about the cost of a default cross product:
</p>
<PRE><CODE CLASS="language-glsl">// 6 mul, 4 add
direction sw_mx( motor a ) {
return direction(
0.5 - a[0].w*a[0].w - a[0].z*a[0].z,
a[0].z*a[0].y - a[0].x*a[0].w,
a[0].w*a[0].y + a[0].x*a[0].z
);
}</CODE></PRE>
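<p>
Since sw_mx is just sw_md applied to the x axis at half scale, the two can be checked against each other
numerically. A JS sketch of both (our port; array indices follow the 8-float motor layout):
</p>
<PRE><CODE CLASS="language-javascript">const cross = (a, b) =>
  [ a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] ];

// 18 mul, 12 add - JS port of sw_md (direction transform).
const sw_md = (a, b) => {
  const q = [a[1], a[2], a[3]];
  const t = cross(b, q), c = cross(t, q);
  return b.map((v, i) => v + 2 * (a[0]*t[i] + c[i]));
};

// 6 mul, 4 add - JS port of sw_mx: the transformed x axis, at half scale.
const sw_mx = a =>
  [ 0.5 - a[3]*a[3] - a[2]*a[2], a[2]*a[1] - a[0]*a[3], a[3]*a[1] + a[0]*a[2] ];</CODE></PRE>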
<p>
This is an important observation which, as we will see, will allow us to challenge the common belief that matrices are unconditionally
the fastest choice ...
</p>
<H4>Normalization</H4>
<p>
The squared (pseudo)norm of a PGA motor $M$ is given by
$$ \lvert M \rvert^2 = M \widetilde M = a + b\mathbf e_{0123}$$
For a normalized motor, $\lvert M \rvert = 1$, but in general the result of this expression is $a + b\mathbf e_{0123}$, a Study Number (a multivector whose
non-scalar part squares to a scalar). As a consequence, the normalized motor $\overline M$,
$$ \overline M = \cfrac {M} {\lvert M \rvert} $$
is a bit more involved to calculate. In 3D PGA it involves inverting a Study Number that here is isomorphic to a dual number.
We've worked out the details for a number of algebras in [7], from which we only need the inverse square root formula in 3D PGA :
$$\cfrac {1} {\lvert M \rvert} = \cfrac {1} {\sqrt{M \widetilde M}} = \cfrac{1}{\sqrt{a + b\mathbf e_{0123}}} = \cfrac{1}{\sqrt{a}} - \cfrac{b}{2{\sqrt{a}}^3}\mathbf e_{0123} $$
This leads to the following efficient implementation for 3D PGA:
</p>
<PRE><CODE CLASS="language-glsl">// 21 mul, 5 add
motor normalize_m( motor a ) {
float s = 1. / length( a[0] );
float d = (a[1].w * a[0].x - dot( a[1].xyz, a[0].yzw ))*s*s;
return motor(a[0]*s, a[1]*s + vec4(a[0].yzw*(s*d),-a[0].x*s*d));
}</CODE></PRE>
<p>
Note that this procedure should be compared not to vector normalization, but instead to Gram-Schmidt orthogonalization, as
the resulting motor is guaranteed to be an orthonormal transformation.
As before, when we are dealing with a pure translation or rotation, far more efficient versions of the normalization
procedure are available.
</p>
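<p>
A JS port of this routine can be verified directly: after the call, the Euclidean part must have unit
length, and the quantity $m_0 m_7 - m_1 m_4 - m_2 m_5 - m_3 m_6$ (half the pseudoscalar part of
$M\widetilde M$) must vanish. A sketch:
</p>
<PRE><CODE CLASS="language-javascript">// 21 mul, 5 add - JS port of normalize_m.
// a = [s, e23,e31,e12, e01,e02,e03, e0123]
const normalize_m = a => {
  const s = 1 / Math.hypot(a[0], a[1], a[2], a[3]);
  const d = (a[7]*a[0] - (a[4]*a[1] + a[5]*a[2] + a[6]*a[3])) * s * s;
  return [ a[0]*s, a[1]*s, a[2]*s, a[3]*s,
           a[4]*s + a[1]*s*d, a[5]*s + a[2]*s*d, a[6]*s + a[3]*s*d,
           a[7]*s - a[0]*s*d ];
};</CODE></PRE>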
<H4>Square Roots</H4>
<p>
The square root plays an important role in PGA, as it is the key to constructing transformations between elements.
Given any pair of points/lines/planes $a,b$, there exists a rigid transformation that moves $a$ onto $b$. Such a rigid
transformation always has a motor form, and this motor is always given by the same simple expression :
$$ M = \sqrt{ \cfrac {b} {a} } $$
Combine this with the fact that for any normalized non-null blade $a$ its inverse is $\pm a$, and we can rewrite this as
$$ \pm M^2 = ba $$
Or, in other words, the geometric product $ba$ of any two points, two lines or two planes produces a motor that represents
double the transformation from $a$ to $b$. The square root comes in to halve this result and indeed find the motor that moves
$a$ exactly onto $b$. Here too, geometric algebra provides a single elegant method that universally applies :
$$ \sqrt M = \overline{1 + M} $$
Here the overline denotes the Study normalization procedure from the previous section. Hence the computational cost of a
square root is exactly that of the normalization procedure plus one extra addition.
</p>
<PRE><CODE CLASS="language-glsl">// 21 mul, 6 add
motor sqrt_m( motor R ) {
return normalize_m( motor( R[0].x + 1., R[0].yzw, R[1] ) );
}</CODE></PRE>
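<p>
Since $(\sqrt M)^2 = M$ for a normalized motor (away from the $180°$ singularity), the square root can be
unit tested by squaring it with the geometric product. A self-contained JS sketch, with gp_mm and
normalize_m ported from the listings above:
</p>
<PRE><CODE CLASS="language-javascript">// Ports of gp_mm and normalize_m from the listings above.
const gp_mm = (a, b, res = new Array(8)) => {
  res[0] = a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3];
  res[1] = a[0]*b[1] + a[1]*b[0] + a[3]*b[2] - a[2]*b[3];
  res[2] = a[0]*b[2] + a[1]*b[3] + a[2]*b[0] - a[3]*b[1];
  res[3] = a[0]*b[3] + a[2]*b[1] + a[3]*b[0] - a[1]*b[2];
  res[4] = a[0]*b[4] + a[3]*b[5] + a[4]*b[0] + a[6]*b[2] - a[1]*b[7] - a[2]*b[6] - a[5]*b[3] - a[7]*b[1];
  res[5] = a[0]*b[5] + a[1]*b[6] + a[4]*b[3] + a[5]*b[0] - a[2]*b[7] - a[3]*b[4] - a[6]*b[1] - a[7]*b[2];
  res[6] = a[0]*b[6] + a[2]*b[4] + a[5]*b[1] + a[6]*b[0] - a[1]*b[5] - a[3]*b[7] - a[4]*b[2] - a[7]*b[3];
  res[7] = a[0]*b[7] + a[1]*b[4] + a[2]*b[5] + a[3]*b[6] + a[4]*b[1] + a[5]*b[2] + a[6]*b[3] + a[7]*b[0];
  return res;
};
const normalize_m = a => {
  const s = 1 / Math.hypot(a[0], a[1], a[2], a[3]);
  const d = (a[7]*a[0] - (a[4]*a[1] + a[5]*a[2] + a[6]*a[3])) * s * s;
  return [ a[0]*s, a[1]*s, a[2]*s, a[3]*s,
           a[4]*s + a[1]*s*d, a[5]*s + a[2]*s*d, a[6]*s + a[3]*s*d,
           a[7]*s - a[0]*s*d ];
};

// sqrt(M) = normalize(1 + M): bump the scalar, renormalize.
const sqrt_m = M => normalize_m([M[0] + 1, ...M.slice(1)]);</CODE></PRE>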
<H4>Exponential map</H4>
<p>
To complete our PGA toolbox, we adapt, also from [7], efficient implementations of the logarithmic and exponential maps.
Recall that the logarithm of a PGA motor is a (sum of) scaled line(s), and, conversely, any scaled line can be exponentiated
to construct a rotation around it. While the exponential map for general 4x4 matrices is numerically very expensive, for
our PGA motor manifold efficient closed-form solutions are possible.
</p>
<PRE><CODE CLASS="language-glsl">// 14 muls 5 add 1 div 1 acos 1 sqrt
line log_m( motor M ) {
if (M[0].x == 1.) return line( vec3(0.), vec3(M[1].xyz) );
float a = 1./(1. - M[0].x*M[0].x), b = acos(M[0].x) * sqrt(a), c = a*M[1].w*(1. - M[0].x*b);
return line( b*M[0].yzw, b*M[1].xyz + c*M[0].yzw);
}</CODE></PRE>
<PRE><CODE CLASS="language-glsl">// 17 muls 8 add 2 div 1 sqrt 1 cos 1 sin
motor exp_b( line B ) {
float l = dot(B[0],B[0]);
if (l==0.) return motor( vec4(1., 0., 0., 0.), vec4(B[1], 0.) );
float a = sqrt(l), m = dot(B[0].xyz, B[1]), c = cos(a), s = sin(a)/a, t = m/l*(c-s);
return motor( c, s*B[0], s*B[1] + t*B[0].xyz, m*s );
}</CODE></PRE>
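<p>
These two maps should be mutual inverses on the motor manifold, which gives a convenient unit test. Below
is a JS sketch of both (our port, with the dual correction terms taken along the rotation axis, i.e. in the
same $\mathbf e_{23}, \mathbf e_{31}, \mathbf e_{12}$ order); exponentiating a line and taking the
logarithm must reproduce it:
</p>
<PRE><CODE CLASS="language-javascript">// JS ports of log_m / exp_b. line = [e23,e31,e12, e01,e02,e03],
// motor = [s, e23,e31,e12, e01,e02,e03, e0123].
const log_m = M => {
  if (M[0] === 1) return [0, 0, 0, M[4], M[5], M[6]];
  const a = 1 / (1 - M[0]*M[0]), b = Math.acos(M[0]) * Math.sqrt(a),
        c = a * M[7] * (1 - M[0]*b);
  return [ b*M[1], b*M[2], b*M[3],
           b*M[4] + c*M[1], b*M[5] + c*M[2], b*M[6] + c*M[3] ];
};
const exp_b = B => {
  const l = B[0]*B[0] + B[1]*B[1] + B[2]*B[2];
  if (l === 0) return [1, 0, 0, 0, B[3], B[4], B[5], 0];
  const a = Math.sqrt(l), m = B[0]*B[3] + B[1]*B[4] + B[2]*B[5];
  const c = Math.cos(a), s = Math.sin(a) / a, t = m / l * (c - s);
  return [ c, s*B[0], s*B[1], s*B[2],
           s*B[3] + t*B[0], s*B[4] + t*B[1], s*B[5] + t*B[2], m*s ];
};</CODE></PRE>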
<H4>Inverses</H4>
<p>
If there's one place where Geometric Algebra clearly sets itself apart from our standard vector and matrix
algebra approach, it is the existence of inverses for (multi)vectors. Not only do these inverses exist,
but for the normalized objects we are working with in our context, they are very efficient to calculate.
</p>
<CENTER>
<DIV STYLE="display:inline-block">
<TABLE STYLE="max-width:400px;">
<TR><TH>Element $x$<TH>Inverse $x^{-1}$
<TR><TD>Plane<TD STYLE="text-align:left; padding-left:80px">$x^{-1} = x$
<TR><TD>Line <TD STYLE="text-align:left; padding-left:80px">$x^{-1} = -x$
<TR><TD>Point<TD STYLE="text-align:left; padding-left:80px">$x^{-1} = -x$
<TR><TD>Motor<TD STYLE="text-align:left; padding-left:80px">$x^{-1} = \widetilde x$
</TABLE>
</DIV>
<SPAN
class="glTF"
title="model by dogzerx@hotmail.com"
style="display:inline-block; width:140px; height:200px;"
data-scene="data/cow.glb"
data-animA="0"
data-animB="4"
></SPAN>
</CENTER>
<p>
Here $\widetilde x$, the reversion operation, changes the sign of the bivector and trivector coefficients only.
There is one more inverse that is occasionally needed: the inverse of a general bivector $B$.
Recall that a bivector $B$ represents a single line iff $B \wedge B = 0$, the so-called Plücker condition.
If a bivector $B$ does not satisfy that requirement, it is not a blade, i.e. not the result of meeting two
planes or joining two points. For such an element the inverse is slightly more involved.
</p>
<p>
To find this inverse, we start by multiplying the numerator and denominator by the reverse $\widetilde B$.
$$ \cfrac {1} {B} = \cfrac {\widetilde B}{B \widetilde B}$$
As before, the squared norm $\lvert B \rvert^2 = B \widetilde B = a + b \mathbf e_{0123}$ is a Study number, isomorphic to
the dual numbers. This allows us to use the definition of a dual number inverse.
$$ \cfrac {1} {(a + b \mathbf e_{0123})} = \cfrac {1} {a} - \cfrac {b} {a^2} \mathbf e_{0123}$$
Multiplying this last expression with $\widetilde B$ produces the inverse we are looking for.
</p>
<H4>Motor Factorization</H4>
<p>
Just as the process of factorizing matrices can be very insightful, so is the factorization of PGA
motors. Two particular factorizations will be useful to us, and we will add them as the last tools
to our box. The first of those is called the <i>Euclidean Factorization</i>, and it decomposes a motor
into a rotation around the origin followed by a translation.
$$ M = T_e R_e $$
This factorization is particularly easy to calculate, as the Euclidean rotor $R_e$ is simply the Euclidean
part of our motor - the first four floats - isomorphic to a regular quaternion. If it is needed, the
translation $T_e$ can be computed as $T_e = M \widetilde R_e$.
</p>
<p>
The second factorization of interest is the so-called <I>invariant factorization</I>. It decomposes a
motor $M$ into a commuting translation and rotation, which is always possible and is known in 3D
as the Mozzi-Chasles theorem: every rigid body transformation can be decomposed into a rotation
around a line, preceded or followed by a translation along that same line.
$$ M = TR = RT$$
In 3D PGA, the invariant factorization is also easy to calculate, with the commuting translation given by
$$ T = 1 + \cfrac {\langle M \rangle_4} {\langle M \rangle_2}$$
where the angle brackets denote grade extraction, and the general bivector inverse from above comes
in handy. The matching rotation can now be constructed as $R = M\widetilde T = \widetilde T M$.
</p>
<p>
We will use the Euclidean factorization in particular when composing the transformation of the tangent
frame with the object-to-world motor, as such a frame is invariant under translations, and the composition
of rotations around the origin is more efficient.
<H3>Escaping the matrix</H3>
<p>
The prevalence of matrices in computer graphics means that interacting with existing content will inevitably
confront you with matrices. The Khronos glTF project from which we started uses matrices throughout: for
transformations, binding poses for skinning, etc. Our commitment to a matrix-free environment implies we will
have to convert these matrices to their PGA equivalents at load time.
</p>
<H4>Converting matrices to motors.</H4>
<p>
To convert a 4x4 orthogonal matrix to a motor, we happily employ the isomorphism to quaternions and upgrade
an industry-standard solution to handle the entire PGA manifold.
</p>
<PRE><CODE CLASS="language-javascript">export const fromMatrix3 = M => {
// Shorthand. (hypot and abs here are Math.hypot and Math.abs)
var [m00,m01,m02,m10,m11,m12,m20,m21,m22] = M;
// Quick scale check
const scale = [hypot(m00,m01,m02),hypot(m10,m11,m12),hypot(m20,m21,m22)];
if (abs(scale[0]-1)>0.0001 || abs(scale[1]-1)>0.0001 || abs(scale[2]-1)>0.0001) {
const i = scale.map(s=>1/s);
m00 *= i[0]; m01 *= i[0]; m02 *= i[0];
m10 *= i[1]; m11 *= i[1]; m12 *= i[1];
m20 *= i[2]; m21 *= i[2]; m22 *= i[2];
if (abs(scale[0]/scale[1]-1)>0.0001 || abs(scale[1]/scale[2]-1)>0.0001) console.warn("non uniformly scaled matrix !", scale);
}
// Return a pure rotation (in motor format)
return normalize( m00 + m11 + m22 > 0 ? [m00 + m11 + m22 + 1.0, m21 - m12, m02 - m20, m10 - m01, 0,0,0,0]:
m00 > m11 && m00 > m22 ? [m21 - m12, 1.0 + m00 - m11 - m22, m01 + m10, m02 + m20, 0,0,0,0]:
m11 > m22 ? [m02 - m20, m01 + m10, 1.0 + m11 - m00 - m22, m12 + m21, 0,0,0,0]:
[m10 - m01, m02 + m20, m12 + m21, 1.0 + m22 - m00 - m11, 0,0,0,0]);
}
</CODE></PRE>
<PRE><CODE CLASS="language-javascript">export const fromMatrix = M => {
// Shorthand.
var [m00,m01,m02,m03,m10,m11,m12,m13,m20,m21,m22,m23,m30,m31,m32,m33] = M;
// Return motor as translation * rotation. (the -0.5: the sandwich product applies the motor twice)
return gp_mm( [1,0,0,0,-0.5*m30,-0.5*m31,-0.5*m32,0], fromMatrix3([m00,m01,m02,m10,m11,m12,m20,m21,m22]) );
}
</CODE></PRE>
<p>
These conversions are run on all of the matrices in our imports, at load time.
</p>
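<p>
As a usage example, the conversion can be exercised on two simple matrices; per the basis table, a rotation
about the z axis must land on the $\mathbf e_{12}$ coefficient. A condensed, self-contained sketch of just
the rotor branch (our simplification, with the scale handling elided):
</p>
<PRE><CODE CLASS="language-javascript">// Condensed sketch of fromMatrix3 (scale handling elided), returning a
// rotor in the motor layout [s, e23, e31, e12, 0, 0, 0, 0].
const fromMatrix3 = M => {
  const [m00,m01,m02,m10,m11,m12,m20,m21,m22] = M;
  const r = m00 + m11 + m22 > 0 ? [m00+m11+m22+1, m21-m12, m02-m20, m10-m01] :
            m00 > m11 && m00 > m22 ? [m21-m12, 1+m00-m11-m22, m01+m10, m02+m20] :
            m11 > m22 ? [m02-m20, m01+m10, 1+m11-m00-m22, m12+m21] :
                        [m10-m01, m02+m20, m12+m21, 1+m22-m00-m11];
  const s = 1 / Math.hypot(...r); // rotor case of the motor normalization
  return [...r.map(v => v*s), 0, 0, 0, 0];
};</CODE></PRE>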
<H4>Handling uniform scaling.</H4>
<p>
PGA motors, in contrast with 4x4 matrices, do not incorporate scaling, as it clearly is not a rigid body
transformation. Scaling, and specifically uniform scaling, is however commonly used in scene graphs to
scale resources coming from potentially different sources and authored in different absolute sizes.
While less than $0.5$% of the almost 400 random glTF files tested had any animation on the scale, quite
a large number of them have some fixed uniform scale applied.
The advantage of uniform scaling is that it is invariant under both rotations and translations; as a
result it requires only one floating point number per node, where each node's total scale is simply
the product of its own scale and that of its parent.
Our implementation tracks scaling in this manner, applying the total scale to the vertices either at
load time or as a first step in the vertex shader, and applying the parent scale to the translations,
again at load time and when updating animations.
The impact of incorporating uniform scaling like this is absolutely minimal, enabling us to cover almost
all existing content without abandoning the PGA motor efficiency.
</p>
<H4>Handling non-uniform scaling.</H4>
<p>
For non-uniform scaling the situation is trickier: where non-uniform scaling is used, a fall-back to
4x4 matrices is ultimately unavoidable. A non-uniform scale is not invariant under rotations,
and tracking these scales as we did in the uniform case is tedious. However, again from our sample of glTF
files, we could only find non-uniform scale applied to leaf nodes (and given the problems caused by non-uniform
scaling, this is not unexpected). For that scenario, animation keys are not impacted and we simply apply
the non-uniform scale separately, before the rest of the transformations.
</p>
<!-- Forward Rendering. ----------------------------------------------------------------- -->
<H2>Forward Rendering.</H2>
<p>
Armed with our fully stocked PGA toolbox, we can now tackle the actual rendering task. Guided by the
reference implementation provided by Khronos, let us revisit those places where matrices are the de facto
solution.
</p>
<p>
The general idea of a forward renderer is to transform all mesh geometry and determine, for each
triangle, which pixels it covers. This is to be contrasted with a ray-tracing approach, where one starts
from a ray through a pixel and determines which triangles it hits. In a typical forward rendering setup,
the transformation of the mesh from its specification in object space to its position on the screen is
usually handled by a set of matrices called the model, view and projection matrices.
</p>
<H3>Model - View - Projection.</H3>
<p>
Our load-time conversion of all matrices and transformations into PGA motors is already a substantial
reduction of the computation needed to update the scene graph hierarchy. For complex setups
many composition operations are required, and the gain of switching to motors is obvious.
</p>
<p>
However, while the CPU is concerned with producing updated transformations, the GPU has the task of applying
these transformations to the vertices, normals and tangents that make up our mesh, and as we've seen, the
computational complexity involved appears to put our motors at a disadvantage.
</p>
<SPAN
class="glTF"
title="model by dogzerx@hotmail.com"
style="float:right; display:block; width:140px; height:150px;"
data-scene="data/elephant.glb"
data-animA="0"
data-animB="2"
></SPAN>
<p>
As we will soon discover, the situation is more subtle, and at this point we push through, replacing the
model and view matrices with motors, and using the above defined sandwich products to transform the incoming
vertex attributes.
</p>
<PRE><CODE class="language-glsl">vec3 worldPosition = sw_mp( toWorld, attrib_position );
vec3 worldNormal = sw_md( toWorld, attrib_normal );
vec4 worldTangent = vec4(sw_md( toWorld, attrib_tangent.xyz ), attrib_tangent.w);</CODE></PRE>
<p>
For the projection matrix, the situation is different. The typical 4x4 projection matrix has only 5 non-zero
entries, and even without PGA it is much more performant to simply write out the resulting expression. The
same holds here, and we use a standard projection function instead.
</p>
<PRE><CODE class="language-glsl">vec4 project( const float n, const float f, const float minfov, float aspect, vec3 inpos ){
float cthf = cos(minfov/2.0) / sin(minfov/2.0); // cotangent of half the minimal fov.
float fa = 2.*f*n/(n-f), fb = (n+f)/(n-f); // all of these can be precomputed constants.
vec2 fit = cthf * vec2(-1.0/aspect, 1.0); // fit vertical.
return vec4( inpos.xy * fit, fa - fb*inpos.z, inpos.z );
}</CODE></PRE>
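<p>
For reference, a CPU-side JavaScript mirror of this projection (a hypothetical helper, not part of the
renderer) makes it easy to verify that depths at the near and far planes land on the expected NDC values
after the perspective divide:
</p>
<PRE><CODE CLASS="language-javascript">// JS mirror of the GLSL project() above, returning [x, y, z, w].
function project(n, f, minfov, aspect, inpos) {
  const cthf = Math.cos(minfov / 2) / Math.sin(minfov / 2); // cotangent of half the minimal fov
  const fa = 2 * f * n / (n - f), fb = (n + f) / (n - f);   // precomputable constants
  const fit = [-cthf / aspect, cthf];                       // fit vertical
  return [inpos[0] * fit[0], inpos[1] * fit[1], fa - fb * inpos[2], inpos[2]];
}</CODE></PRE>
<p>
After the divide by $w$, a point at $z = n$ maps to NDC $z = -1$ and a point at $z = f$ to NDC $z = +1$.
</p>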
<p>
With our basic transformations all set up, let us turn our attention to one of today's most common shading
techniques, tangent space normal mapping.
</p>
<H3>Tangent Space Normal Mapping.</H3>
<H4>Vertex Shader</H4>
<p>
For a tangent space normal-mapped mesh, the vertex shader needs to transform the position, the normal, and
the tangent vector. So it seems unavoidable that our choice of PGA means incurring the higher
transformation cost threefold. However, the normal, tangent, and bitangent vectors together form an
orthonormal frame. In PGA, any orthonormal frame is related to the canonical basis frame through a $k$-reflection.
When $k$ is even, these are just the rotation-only motors we encountered before (isomorphic to the quaternions), and
when $k$ is odd, the $k$-reflection instead represents a similar rotation followed by one extra reflection.
</p>
<p>
This implies that we can remove both the normal and the tangent vectors from our vertex description,
replacing them by a tangentRotor, which represents the rotation from the basis frame to the
desired tangent frame. Such a tangentRotor $R$ in fact double-covers all possible tangent frames, in the
sense that both $R$ and $-R$ produce the same transformation. We can use this double cover to disambiguate
even and odd $k$-reflections, simply by making sure the sign of the scalar coefficient of $R$ matches the
classical handedness flag. Note that in doing so, we piggy-back on the IEEE 754 floating point specification;
that is, we depend on the signed representation of zero. In the vertex shader we can then unambiguously extract
the original sign using <PRE><CODE CLASS="language-glsl">float handedness = sign(1.0/tangentRotor.x);</CODE></PRE>
</p>
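<p>
The same trick is easily verified on the CPU side: in JavaScript, as in GLSL, the sign of zero survives only
through division, since <code>Math.sign(-0)</code> returns <code>-0</code> rather than <code>-1</code>:
</p>
<PRE><CODE CLASS="language-javascript">// The signed-zero trick: dividing first turns ±0 into ±Infinity,
// whose sign is unambiguous.
const handedness = x => Math.sign(1 / x);</CODE></PRE>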
<p>
Combining all this, we conclude that we can reduce our vertex descriptor for the most common tangent
space normal mapping setup from 12 floats (3 position, 3 normal, 4 tangent, 2 uv) down to 9 (3 position,
4 tangentRotor, 2 uv). That is a substantial saving, implemented at load time by converting the loaded
normal and tangent vectors with:
</p>
<PRE><CODE CLASS="language-javascript">// Normalize and orthogonalize.
normal = normalize( normal );
tangent = normalize( sub(tangent, mul(normal, dot(normal,tangent) ) ) );
// Calculate the bitangent.
let bitangent = normalize(cross(normal, tangent));
// Now setup the matrix explicitly.
let mat = [...tangent, ...bitangent, ...normal];
// Convert to motor and store.
let motor = fromMatrix3( mat );
// Use the double cover to encode the handedness.
// in GA language, this means we are using half of the double cover to distinguish even and odd versors.
if (Math.sign(motor[0])!=tangents[i*4+3]) motor = motor.map(x=>-x);</CODE></PRE>
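<p>
The <code>fromMatrix3</code> helper above lives in miniPGA.js; a hedged sketch of such a matrix-to-rotor
conversion, using the standard Shepperd-style branching with component order $[w, x, y, z]$ (the actual
component order and sign conventions in miniPGA.js may differ), looks like:
</p>
<PRE><CODE CLASS="language-javascript">// Hypothetical rotation-matrix-to-rotor conversion (Shepperd's method).
// m is a row-major 3x3: [t0,t1,t2, b0,b1,b2, n0,n1,n2].
function fromMatrix3(m) {
  const trace = m[0] + m[4] + m[8];
  if (trace > 0) {
    const s = Math.sqrt(trace + 1) * 2;                 // s = 4*w
    return [s / 4, (m[7] - m[5]) / s, (m[2] - m[6]) / s, (m[3] - m[1]) / s];
  } else if (m[0] > m[4] && m[0] > m[8]) {
    const s = Math.sqrt(1 + m[0] - m[4] - m[8]) * 2;    // s = 4*x
    return [(m[7] - m[5]) / s, s / 4, (m[1] + m[3]) / s, (m[2] + m[6]) / s];
  } else if (m[4] > m[8]) {
    const s = Math.sqrt(1 + m[4] - m[0] - m[8]) * 2;    // s = 4*y
    return [(m[2] - m[6]) / s, (m[1] + m[3]) / s, s / 4, (m[5] + m[7]) / s];
  } else {
    const s = Math.sqrt(1 + m[8] - m[0] - m[4]) * 2;    // s = 4*z
    return [(m[3] - m[1]) / s, (m[2] + m[6]) / s, (m[5] + m[7]) / s, s / 4];
  }
}</CODE></PRE>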
<p>
But there is more good news. Recall that using 4x4 matrices, the transformation of position, normal, and
tangent takes 3 matrix-vector products, totaling 48 multiplications and 36 additions. In the PGA
version, however, we can transform the entire tangent frame in one go, for a cost of just 16 multiplications
and 12 additions, after which we can extract the world-space normal and tangent directly with just
9 multiplications and 8 additions using:
</p>
<PRE><CODE class="language-glsl">// 9 muls, 8 adds
void extractNormalTangent( motor a, out direction normal, out direction tangent ) {
float yw = a[0].y * a[0].w;
float xz = a[0].x * a[0].z;
float zz = a[0].z * a[0].z;
normal = direction( yw - xz, a[0].z*a[0].w + a[0].y*a[0].x, 0.5 - zz - a[0].y*a[0].y );
tangent = direction( 0.5 - zz - a[0].w*a[0].w, a[0].z*a[0].y - a[0].x*a[0].w, yw + xz );
}</CODE></PRE>
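<p>
A direct JavaScript port makes the extraction easy to sanity-check. Here the rotor is taken as a plain array
<code>[x, y, z, w]</code>, matching <code>a[0]</code> above; note that both vectors come out with length ½,
which is harmless since only the direction is used:
</p>
<PRE><CODE CLASS="language-javascript">// JS port of extractNormalTangent; rotor components [x, y, z, w].
function extractNormalTangent(a) {
  const yw = a[1] * a[3], xz = a[0] * a[2], zz = a[2] * a[2];
  const normal  = [yw - xz, a[2] * a[3] + a[1] * a[0], 0.5 - zz - a[1] * a[1]];
  const tangent = [0.5 - zz - a[3] * a[3], a[2] * a[1] - a[0] * a[3], yw + xz];
  return { normal, tangent };
}</CODE></PRE>
<p>
For the identity rotor the frame is the canonical one: normal along $z$, tangent along $x$, both at half length.
</p>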
<p>Add to that the 21 multiplications and 18 additions needed to transform the vertex position, and one more
multiplication to extract the handedness, and we come to the remarkable conclusion that to transform
a vertex with full tangent frame to world space, our PGA approach needs only (16 + 9 + 21 + 1 = 47) multiplications
and (12 + 8 + 18 = 38) additions. That is nearly identical to the 48 multiplications and 36 additions required
when using 4x4 matrices with explicit normal and tangent vectors.
</p>
<center>
<SPAN
class="glTF"
title="model by dogzerx@hotmail.com"
style="display:inline-block; width:200px; height:250px;"
data-scene="data/cow.glb"
data-animA="3"
data-animB="3"
></SPAN>
<SPAN
class="glTF"
title="model by dogzerx@hotmail.com"
style="display:inline-block; width:200px; height:250px;"
data-scene="data/elephant.glb"
data-animA="3"
data-animB="3"
></SPAN>
</center>
<center><b>PGA motors can be just as fast as 4x4 matrices to transform your mesh vertices!!!</b></center>
<BR>
<CENTER STYLE="overflow-x:auto">
<TABLE style="max-width:900px">
<TR><TH>method<TH>floats/vertex<TH>floats/transform<TH>multiplications<TH>additions
<TR><TD>Matrix + normal + tangent<TD>12<TD>32<TD>48<TD>36
<TR><TD>Motor + tangentRotor<TD>9 <B>-25%</B><TD>8 <B>-75%</B><TD>47<TD>38
</TABLE>
</CENTER>
<p>
The resulting code block in the vertex shader now becomes:
</p>
<PRE><CODE CLASS="language-glsl">// Now transform our vertex using the motor from object to world-space.
worldPosition = sw_mp(toWorld, attrib_position);
// Concatenate the world motor and the tangent frame.
motor tangentRotor = gp_rr( toWorld, motor(attrib_tangentRotor,vec4(0.)) );
// Next, extract world normal and tangent from the tangentFrame rotor.
extractNormalTangent(tangentRotor, worldNormal, worldTangent.xyz);
worldTangent.w = sign(1.0 / attrib_tangentRotor.x); // trick to disambiguate negative zero!</CODE></PRE>
<p>
At this point, no changes to the fragment shader are required, making this a drop-in replacement
that can be used in any existing engine.
</p>
<H4>Fragment Shader</H4>
<p>
If we want to be able to load existing content, this is the point where we have to fall back to the
TBN matrix. The reason is clear: when baking the high-detail mesh onto the low-detail mesh,
the baking tool has interpolated vertex normal and tangent vectors over the face of each triangle. From
these (no longer normalized or orthogonal) vectors, an orthogonal TBN matrix is constructed at each
fragment and used to transform the high-detail world space normal into the tangent space normal that
is stored in the texture.
</p>
<p>
This process of interpolating basis vectors introduces an error that is typical for matrices, and
unfortunately that error is literally baked into the textures. This is why we opted to
extract the normal and tangent vectors explicitly from the tangentRotor.
</p>
<p>
However, for scenarios where one controls the baking tool, we can do better still. In that case we
could pass the tangentRotor unmodified to the fragment shader, where it can be normalized and
used to transform the sampled normal, without ever constructing a TBN matrix. This saves even more:
it removes the need to extract normal and tangent in the vertex shader, requires one less varying
parameter, and eliminates the expensive orthogonalization in the fragment shader.
</p>
<H3>Motor Skinning.</H3>
<p>
With PGA motors isomorphic to the dual quaternions, skinning is an obvious candidate for our PGA approach.
After converting the inverse bind matrices to their motor equivalents, the skinning code for our motors
follows the well-known pattern from dual quaternion skinning:
</p>
<PRE><CODE CLASS="language-glsl">// Grab the 4 bone motors.
motor b1 = motors[int(attrib_joints.x)];
motor b2 = motors[int(attrib_joints.y)];
motor b3 = motors[int(attrib_joints.z)];
motor b4 = motors[int(attrib_joints.w)];
// Blend them together, always use short path.
motor r = attrib_weights.x * b1;
if (dot(r[0],b2[0])<=0.0) b2 = -b2;
r += attrib_weights.y * b2;
if (dot(r[0],b3[0])<=0.0) b3 = -b3;
r += attrib_weights.z * b3;
if (dot(r[0],b4[0])<=0.0) b4 = -b4;
r += attrib_weights.w * b4;
// Now renormalize and combine with object to world
toWorld = gp(toWorld, normalize_m(r));
</CODE></PRE>
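<p>
The same shortest-path blend can be sketched on the CPU, here on the rotational 4-vector parts only, as
plain arrays (the real code blends full 8-component motors and uses the PGA normalization from miniPGA.js):
</p>
<PRE><CODE CLASS="language-javascript">// Shortest-path weighted blend of rotors (rotational parts only).
const dot4 = (a, b) => a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
function blendRotors(rotors, weights) {
  let r = rotors[0].map(x => x * weights[0]);
  for (let i = 1; i < rotors.length; ++i) {
    let b = rotors[i];
    if (dot4(r, b) <= 0) b = b.map(x => -x);     // flip to the short path
    r = r.map((x, j) => x + weights[i] * b[j]);  // weighted accumulate
  }
  const n = Math.hypot(...r);                    // renormalize
  return r.map(x => x / n);
}</CODE></PRE>
<p>
Blending $R$ and $-R$, which represent the same rotation, this way yields $R$ again instead of a degenerate zero blend.
</p>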
<p>
Note how, just as for dual quaternions, we make sure that every transformation blended in follows
the shortest arc, and we renormalize the result.
</p>
<H3>Animation Blending</H3>
<p>
For animation blending, the same technique is used, directly blending and renormalizing PGA motors on the CPU.
</p>
<CENTER>
<DIV STYLE="display:flex; place-content: center; max-width:600px;">
<DIV STYLE="display:inline-block; height:530px; text-align:center; flex-grow:1">
<SPAN
class="glTF"
id="e4"
title="model by dogzerx@hotmail.com"
style="display:block; height:250px;"
data-scene="data/cow.glb"
data-anima=2
data-animb=2
></SPAN>
<SPAN
class="glTF"
id="e1"
title="model by dogzerx@hotmail.com"
style="display:block; height:250px;"
data-scene="data/elephant.glb"
data-anima=2
data-animb=2
></SPAN>
<SELECT ID="anim1" ONCHANGE="e5.dataset.anima = e2.dataset.anima = e1.dataset.anima = e1.dataset.animb = e4.dataset.anima = e4.dataset.animb = this.selectedIndex">
<OPTION>Idle
<OPTION>Idle2
<OPTION SELECTED>Failure
<OPTION>Success
<OPTION>Talk
<OPTION>Walk
</SELECT>
</DIV>
<DIV STYLE="display:inline-block; height:530px; text-align:center; flex-grow:1">
<SPAN
class="glTF"
id="e5"
title="model by dogzerx@hotmail.com"
style="display:block; height:250px;"
data-scene="data/cow.glb"
data-anima=2
data-animb=3
data-blend="0"
></SPAN>
<SPAN
class="glTF"
id="e2"
title="model by dogzerx@hotmail.com"
style="display:block; height:250px;"
data-scene="data/elephant.glb"
data-anima=2
data-animb=3
data-blend="0"
></SPAN>
<INPUT ID="blend" TYPE="range" MIN="0" MAX="1" STEP="0.001" VALUE="0" ONINPUT="e2.dataset.blend = e5.dataset.blend = this.value*1;"></INPUT>
</DIV>
<DIV STYLE="display:inline-block; height:530px; text-align:center; flex-grow:1">
<SPAN
class="glTF"
id="e6"
title="model by dogzerx@hotmail.com"
style="display:block; height:250px;"
data-scene="data/cow.glb"
data-anima=3
data-animb=3
></SPAN>
<SPAN
class="glTF"
id="e3"
title="model by dogzerx@hotmail.com"
style="display:block; height:250px;"
data-scene="data/elephant.glb"
data-anima=3
data-animb=3
></SPAN>
<SELECT ID="anim2" ONCHANGE="e2.dataset.animb = e5.dataset.animb = e3.dataset.anima = e3.dataset.animb = e6.dataset.anima = e6.dataset.animb = this.selectedIndex">
<OPTION>Idle
<OPTION>Idle2
<OPTION>Failure
<OPTION SELECTED>Success
<OPTION>Talk
<OPTION>Walk
</SELECT><BR><BR>
</DIV>
</DIV>
</CENTER>
<H2>Conclusion</H2>
<p>
This project started off with the goal of demonstrating that it is indeed possible to implement a forward renderer using
PGA exclusively. What we found is that not only is this possible, but the common belief that it would come
at the cost of more expensive transformations turns out to be far more subtle. The resulting improvements are
both unexpected and quite spectacular, especially for the memory footprint, where fitting 33% more vertices in the
same storage is a significant improvement. The technique can readily be deployed in existing 3D
engines, at virtually no cost in the vertex shader and without modifications to the rest of the pipeline.
</p>
<H2>References</H2>
<OL STYLE="text-align:left">
<LI> Geometric Algebra and Computer Graphics. Charles Gunn & Steven De Keninck. <A TARGET="BLANK" HREF="https://dl.acm.org/doi/10.1145/3305366.3328099">https://dl.acm.org/doi/10.1145/3305366.3328099</A>
<LI> n-Dimensional Rigid Body Mechanics. Marc Ten Bosch. SIGGRAPH 2020. <A TARGET="BLANK" HREF="https://marctenbosch.com/ndphysics/NDrigidbody.pdf">https://marctenbosch.com/ndphysics/NDrigidbody.pdf</A>
<LI> Geometric Clifford Algebra Networks. David Ruhe et al. <A TARGET="BLANK" HREF="https://doi.org/10.48550/arXiv.2302.06594">https://doi.org/10.48550/arXiv.2302.06594</A>
<LI> Geometric Algebra Transformers. Johann Brehmer et al. <A TARGET="BLANK" HREF="https://arxiv.org/pdf/2305.18415.pdf">https://arxiv.org/pdf/2305.18415.pdf</A>
<LI> Plane-based Geometric Algebra for Computer Science. Leo Dorst & Steven De Keninck. <A TARGET="BLANK" HREF="https://bivector.net/PGA4CS.html">https://bivector.net/PGA4CS.html</A>
<LI> May the Forque be with you. Leo Dorst & Steven De Keninck. <A TARGET="BLANK" HREF="https://bivector.net/PGADYN.html">https://bivector.net/PGADYN.html</A>
<LI> Normalization, square roots, and the exponential and logarithmic maps in geometric algebras of less than 6D. Steven De Keninck & Martin Roelfs. <A TARGET="BLANK" HREF="http://dx.doi.org/10.1002/mma.8639">http://dx.doi.org/10.1002/mma.8639</A>
</OL>
</DIV>
<!-- SCRIPT -->
<script type="module" src="src/LookMaNoMatrices.js"></script>
<script>hljs.highlightAll();</script>
</body>
</html>
================================================
FILE: src/LookMaNoMatrices.js
================================================
/******************************************************************************
*
* Look, Ma, No Matrices!
* Putting PGA to the test.
*
* by Steven De Keninck
*
* A matrix-free forward rendering 3D glTF renderer.
*
* Figure out which glTF files are referenced in the page, load the data, and
* setup the rendering loop.
*
* The 3D files are referenced in the main html, for example as :
*
* <SPAN CLASS="glTF" data-scene="data/elephant.glb" data-blend=0 data-anima=1 data-animb=2>
*
* with:
*
* CLASS="glTF" mandatory, indicates a glTF file needs to be rendered here.
* data-scene="uri" uri of the glb/glTF file to load.
* data-anima=x number of first animation in the blend. 0 if omitted.
* data-animb=x number of second animation in the blend. same as a if omitted.
* data-blend=x blending factor between the animations. auto if omitted.
*
*****************************************************************************/
/******************************************************************************
* Imports
*****************************************************************************/
import {miniRender} from './miniRender.js';
import * as PGA from './miniPGA.js';
/******************************************************************************
* Shorthand
*****************************************************************************/
const {PI, E, sin, min, max, hypot, sqrt, abs} = Math;
const {gp, exp_t, exp_r, exp_b, add, sub, mul, e31, e12, e23, e01, e02, e03} = PGA;
/******************************************************************************
* Initialize and Load.
*****************************************************************************/
// Setup canvas.
const canvas = document.getElementById('render');
const render = new miniRender({canvas});
// Grab all html elements we need to render glTF files behind.
const els = [...document.querySelectorAll(".glTF")];
const files = els.map(x=>x.dataset.scene).filter((x,i,a)=>a.indexOf(x)==i).sort((a,b)=>a<b?-1:1);
els.forEach( el => el.sceneID = files.indexOf( el.dataset.scene ));
// Load a glTF file and upload to webGL.
const glTF = await Promise.all(files.map( (file,i) => render.load(file,i) ));
/******************************************************************************
* Our Frame Handler.
*****************************************************************************/
const frame = ()=>{
// Update canvas size/pos and clear.
render.initFrame();
// Our default orientation.
// This will put our object center screen.
var world = (exp_b(add(mul(e31,PI/2),mul(e02,0.4))));
// Now scan the page for html elements with the 'glTF' class
// and render in those places. This allows us to integrate neatly
// into the page, and break outside of our 'box', without needing
// multiple canvases or contexts.
canvas.style.opacity = 0;
els.forEach( model => {
// Figure out if it is on the screen.
var rect = model.getBoundingClientRect();
var aspect = canvas.clientWidth/canvas.clientHeight;
var height = canvas.clientHeight;
// Establish zoom - not included in gBCR, and collect transparency.
var zoom = 1, opacity = 1;
var parent = model;
while (parent) { zoom *= parent.style.zoom||1; opacity *= getComputedStyle(parent).opacity; parent = parent.parentElement; }
// Figure out the center position and map it to viewport ratios.
var center = [(rect.left + 0.5*(rect.right - rect.left))/canvas.clientWidth * zoom, (rect.bottom + 0.5*(rect.top - rect.bottom))/height * zoom];
center = add(mul(sub(center,0.5),0.48),0.5);
if (rect.bottom<0 || rect.top * zoom>window.innerHeight) return;
// If we are visible, set our scale and calculate our final transform.
render.worldscale = (rect.bottom - rect.top) / height * 0.5;
render.worldscale *= zoom;
var world2 = gp(world, exp_t( -(center[0]-0.5) / render.worldscale * aspect - 0.05 , e01 ), exp_t( (center[1]-0.5) / render.worldscale, e02 ), exp_r( 0.2, e31));
// Inherit transparency. (for reveal.js).
if (opacity==0) return;
canvas.style.opacity = opacity;
// Grab correct scene.
const TF = glTF[model.sceneID ?? 0];
// Make sure our motors get recalculated.
TF.json.scenes[0].nodes[0].changed = true;
// Now setup the proper animation. Either what the html tag has, or what's selected in the dropdown.
var a1 = model.dataset.anima ?? 0;
var a2 = model.dataset.animb ?? a1;
// Figure out if we need manual blending or just slowly back and forth.
var bl = model.dataset.blend ?? Math.sin(performance.now()/800 - Math.PI/2)*0.5+0.5;
// Now grab both animations
const an1 = TF?.json?.animations[a1];
const an2 = TF?.json?.animations[a2];
// Figure out animation time.
const t = performance.now()/1000;
const t1 = t % an1.duration;
const t2 = t % an2.duration;
// Animate!
TF.setTime( t1, a1, t2, a2, bl);
// And render this character.
render.render(world2, model.sceneID ?? 0);
});
requestAnimationFrame(frame);
}
frame();
================================================
FILE: src/miniGGX.glsl
================================================
/******************************************************************************
*
* Look, Ma, No Matrices!
* miniGGX.glsl
*
* by Steven De Keninck
*
* Elementary GGX lighting support.
* Adapted from the official Khronos glTF viewer.
*
*/
/**
* Computes Schlick's approximation for the Fresnel reflectance.
*
* @param vec3 f0 The reflectance at normal incidence.
* @param vec3 f90 The reflectance when the view direction is perpendicular to the surface normal.
 * @param float VdotH The dot product of the view direction and the half-vector.
* @returns vec3 The Fresnel reflectance.
*/
vec3 F_Schlick(vec3 f0, vec3 f90, float VdotH) {
return f0 + (f90 - f0) * pow(clamp(1.0 - VdotH, 0.0, 1.0), 5.0);
}
/**
* Smith's joint GGX approximation for geometric shadowing/masking.
*
* @param float NdotL The dot product of the surface normal and the light direction.
* @param float NdotV The dot product of the surface normal and the view direction.
* @param float alphaRoughness The roughness of the surface squared.
* @returns float The geometric shadowing/masking factor.
*/
float V_GGX(float NdotL, float NdotV, float alphaRoughness) {
float alphaRoughnessSq = alphaRoughness * alphaRoughness;
float GGXV = NdotL * sqrt(NdotV * NdotV * (1.0 - alphaRoughnessSq) + alphaRoughnessSq);
float GGXL = NdotV * sqrt(NdotL * NdotL * (1.0 - alphaRoughnessSq) + alphaRoughnessSq);
float GGX = GGXV + GGXL;
if (GGX > 0.0) return 0.5 / GGX;
return 0.0;
}
/**
* GGX/Trowbridge-Reitz normal distribution function for microfacet models.
*
* @param float NdotH The dot product of the surface normal and the half-vector.
* @param float alphaRoughness The roughness of the surface squared.
* @returns float The probability distribution of microfacets oriented in the half-vector direction.
*/
float D_GGX(float NdotH, float alphaRoughness) {
float alphaRoughnessSq = alphaRoughness * alphaRoughness;
float f = (NdotH * NdotH) * (alphaRoughnessSq - 1.0) + 1.0;
return alphaRoughnessSq / (PI * f * f);
}
/**
* Computes the Lambertian part of the BRDF.
*
* @param vec3 F The Fresnel reflectance.
* @param vec3 diffuseColor The base color of the material.
* @returns vec3 The diffuse reflection component.
*/
vec3 BRDF_lambertian(vec3 F, vec3 diffuseColor) {
return (1.0 - F) * (diffuseColor / PI);
}
/**
* Computes the specular GGX part of the BRDF.
*
* @param vec3 F The Fresnel reflectance.
* @param float alphaRoughness The roughness of the surface.
* @param float NdotL The dot product of the surface normal and the light direction.
* @param float NdotV The dot product of the surface normal and the view direction.
* @param float NdotH The dot product of the surface normal and the half-vector.
* @returns vec3 The specular reflection component.
*/
vec3 BRDF_specularGGX(vec3 F, float alphaRoughness, float NdotL, float NdotV, float NdotH) {
float Vis = V_GGX(NdotL, NdotV, alphaRoughness);
float D = D_GGX(NdotH, alphaRoughness);
return F * Vis * D;
}
/**
* Combines diffuse and specular BRDF components for material rendering.
*
* @param vec3 N The surface normal.
* @param vec3 V The view direction.
* @param vec3 L The light direction.
* @param vec3 matCol The base color of the material.
* @param vec3 matMetRgh A vector containing the metallic and roughness values of the material.
* @returns vec3 The combined color contribution from both diffuse and specular reflections.
*/
vec3 brdf(in vec3 N, in vec3 V, in vec3 L, in vec3 matCol, in vec3 matMetRgh) {
vec3 f_diffuse = vec3(0.), f_specular = vec3(0.);
vec3 H = normalize(L + V);
float NdotL = clamp(dot(N, L), 0., 1.);
float NdotV = clamp(dot(N, V), 0., 1.);
float NdotH = clamp(dot(N, H), 0., 1.);
float VdotH = clamp(dot(V, H), 0., 1.);
vec3 f0 = mix(vec3(0.04), matCol, matMetRgh.r); // Blend between non-metallic and metallic reflectance.
vec3 c_diff = mix(matCol, vec3(0.), matMetRgh.r); // Adjust base color for metallic materials.
vec3 F = F_Schlick(f0, vec3(1.0), VdotH);
if (NdotL > 0. || NdotV > 0.) {
f_diffuse += NdotL * BRDF_lambertian(F, c_diff);
f_specular += NdotL * BRDF_specularGGX(F, matMetRgh.g, NdotL, NdotV, NdotH);
}
return f_diffuse + f_specular; // Combine diffuse and specular contributions.
}
================================================
FILE: src/miniGL.js
================================================
/******************************************************************************
*
* Look, Ma, No Matrices!
* miniGL.js
*
* by Steven De Keninck
*
* Minimal webGL2 wrapping.
*
*****************************************************************************/
/******************************************************************************
* imports.
*****************************************************************************/
import {texParams} from './util.js';
/******************************************************************************
* Compile a vertex or fragment shader.
* @param {WebGL2RenderingContext} gl webgl2 context.
* @param {Number} type gl.VERTEX_SHADER,gl.FRAGMENT_SHADER
* @param {String} source Shader source.
* @returns {WebGLShader}
*****************************************************************************/
const compileShader = (gl, type, source) => {
// create and compile shader.
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (gl.getShaderParameter(shader, gl.COMPILE_STATUS)) return shader;
// output errors with line numbers.
console.error('GL Shader error: ' + gl.getShaderInfoLog(shader) + '\n', source.split('\n'));
gl.deleteShader(shader);
}
/******************************************************************************
 * Program Cache. Compiling takes a while.
*****************************************************************************/
var programCache = {};
export const resetProgramCache = ()=>programCache={};
/******************************************************************************
* Create a program, compile and link shaders, extract uniforms and attribs.
* @param {WebGL2RenderingContext} gl webgl2 context.
* @param {String} vertexShaderSource The vertex shader source.
* @param {String} fragmentShaderSource The fragment shader source.
*****************************************************************************/
export const createProgram = (gl, vertexShaderSource, fragmentShaderSource, defines='') => {
// Check for cached version.
if (programCache[vertexShaderSource + fragmentShaderSource]) return programCache[vertexShaderSource + fragmentShaderSource];
// Create program and store in cache
const program = gl.createProgram();
programCache[vertexShaderSource + fragmentShaderSource] = program;
// Compile and attach both shaders.
gl.attachShader(program, compileShader(gl, gl.VERTEX_SHADER, '#version 300 es\n'+defines+vertexShaderSource));
gl.attachShader(program, compileShader(gl, gl.FRAGMENT_SHADER, '#version 300 es\n'+defines+fragmentShaderSource));
// Link the program and print errors if needed.
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
console.error('GL Program error: ' + gl.getProgramInfoLog(program));
gl.deleteProgram(program);
return;
}
// Figure out which uniform variables the program references.
program.uniforms = [...Array(gl.getProgramParameter(program, gl.ACTIVE_UNIFORMS))]
.map((_,i)=>gl.getActiveUniform(program, i))
.map(x=>Object.assign(gl.getUniformLocation(program, x.name)||{noLocation:true},{name:x.name,type:x.type,size:x.size}));
// Similarly, determine the vertex attributes used.
program.attribs = Object.fromEntries([...Array(gl.getProgramParameter(program, gl.ACTIVE_ATTRIBUTES))]
.map((_,i)=>gl.getActiveAttrib(program, i))
.map(x=>[x.name,Object.assign(gl.getAttribLocation(program, x.name),{type:x.type,size:x.size})]));
// And the same for uniform blocks, fetching for each their name, index, size and uniforms.
program.uniformBlocks = Object.fromEntries([...Array(gl.getProgramParameter(program, gl.ACTIVE_UNIFORM_BLOCKS))]
.map((_,i)=>[gl.getActiveUniformBlockName(program, i), {
index: gl.getUniformBlockIndex(program, gl.getActiveUniformBlockName(program, i)),
size: gl.getActiveUniformBlockParameter(program, i, gl.UNIFORM_BLOCK_DATA_SIZE),
uniforms: [...gl.getActiveUniformBlockParameter(program, i, gl.UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES)]
.map(i=>gl.getActiveUniform(program, i))
}]));
// The uniforms list above also contains the block uniforms, split them out so each is
// in their own block instead.
var j=0; for (var i in program.uniformBlocks) {
// Grab the block and find the uniform names.
const block = program.uniformBlocks[i];
const names = Object.entries(block.uniforms).map(([k,v])=>v.name);
// Map those names to indices and then to expected offsets in the ubo.
const idx = gl.getUniformIndices(program, names);
const ofs = gl.getActiveUniforms(program, idx, gl.UNIFORM_OFFSET);
// Store the uniforms per block, with their names, types, indices and offsets included.
block.uniforms = names.map( (name,i) => Object.assign(program.uniforms.find(x=>x.name == name),{ idx : idx[i], ofs : ofs[i] }));
}
// now remove the block ones from the default uniforms list.
program.uniforms = program.uniforms.filter(x=>x.noLocation!==true);
return program;
}
/******************************************************************************
* Create or Update a uniform block.
*****************************************************************************/
export const updateUBO = (gl, buffer, data, block) => {
if (buffer === undefined) buffer = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, buffer);
if (data instanceof Float32Array || data instanceof Array) {
gl.bufferData(gl.UNIFORM_BUFFER, data, gl.DYNAMIC_DRAW);
} else {
buffer.arr = buffer.arr ?? new Float32Array( block.size / 4 );
for (var prop=0, l = block.uniforms.length; prop<l; ++prop) {
const d = data[block.uniforms[prop].name];
if (d.map) /*(d instanceof Array || d instanceof Float32Array)*/ buffer.arr.set( d, block.uniforms[prop].ofs/4 );
else buffer.arr[block.uniforms[prop].ofs/4] = d;
}
gl.bufferData(gl.UNIFORM_BUFFER, buffer.arr, gl.DYNAMIC_DRAW);
if (buffer.arr.length * 4 != block.size) debugger;
}
gl.bindBuffer(gl.UNIFORM_BUFFER, null);
return buffer;
}
/******************************************************************************
* Create a vertex array object.
*****************************************************************************/
export const createVAO = (gl, vertices, indices, nrOfCoords = 2, uvs, weights, joints, tangentRotors) => {
// Create and bind the vao.
const vao = gl.createVertexArray();
gl.bindVertexArray(vao);
// Bind all vertex attributes.
// We use a fixed layout, position, tangentFrame, uv, [weights, indices]
[vertices, tangentRotors, uvs, weights].forEach((x,i)=>{ if (x) {
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(x), gl.STATIC_DRAW);
gl.enableVertexAttribArray(i); gl.vertexAttribPointer(i,[nrOfCoords,4,2,4][i], gl.FLOAT, false, 0, 0);
}});
// Joints are uint16 attributes
if (joints) {
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, new Uint16Array(joints), gl.STATIC_DRAW);
gl.enableVertexAttribArray(4); gl.vertexAttribPointer(4, 4, gl.UNSIGNED_SHORT, false, 0, 0);
}
// Bind the polygon attributes.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint32Array(indices), gl.STATIC_DRAW);
// Store lengths for drawing..
vao.length = indices.length;
vao.nrPoints = vertices.length / nrOfCoords;
// Unbind and return.
gl.bindVertexArray(null);
return vao;
}
/******************************************************************************
* Render a vertex array object.
*****************************************************************************/
export const render = (gl, program, vao, indexCount, uniforms={}, points=false, lines=false) => {
gl.useProgram(program);
for (let u=0, l=program.uniforms.length; u<l; ++u) {
const pu = program.uniforms[u];
const v = pu.name;
switch (pu.type) {
case gl.SAMPLER_2D :
case gl.SAMPLER_CUBE : if (program.used !== true) gl.uniform1i( pu, uniforms[v]); break;
case gl.FLOAT_MAT4 : gl.uniformMatrix4fv( pu, false, uniforms[v] ); break;
case gl.FLOAT_MAT3 : gl.uniformMatrix3fv( pu, false, uniforms[v] ); break;
case gl.FLOAT_MAT3x4 : gl.uniformMatrix3x4fv( pu, false, uniforms[v] ); break;
case gl.FLOAT_MAT2x4 : gl.uniformMatrix2x4fv( pu, false, uniforms[v] ); break;
case gl.FLOAT_VEC4 : gl.uniform4fv( pu, uniforms[v] ); break;
case gl.FLOAT_VEC3 : gl.uniform3fv( pu, uniforms[v] ); break;
case gl.FLOAT_VEC2 : gl.uniform2fv( pu, uniforms[v] ); break;
case gl.FLOAT : gl.uniform1f( pu, uniforms[v]); break;
default : gl.uniform1i( pu, uniforms[v]); break;
}
}
for (let i in program.uniformBlocks) {
const block = program.uniformBlocks[i];
gl.bindBufferBase( gl.UNIFORM_BUFFER, block.index, block.buffer );
gl.uniformBlockBinding(program, block.index, block.index);
}
program.used = true;
gl.bindVertexArray(vao);
if (points) gl.drawArrays(gl.POINTS, 0, vao.nrPoints);
else gl.drawElements(lines?gl.LINES:gl.TRIANGLES, indexCount, gl.UNSIGNED_INT, 0);
}
/******************************************************************************
* Texture cache.
*****************************************************************************/
var textureCache = {};
/******************************************************************************
* Load a texture.
*****************************************************************************/
export const loadTexture = (gl, src, linear = true, target = gl.TEXTURE_2D) => {
const id = src.blob ? src.blob.name+src.bufferView:(src.uri??src);
if (textureCache[id]) return textureCache[id];
const texture = gl.createTexture();
textureCache[id] = texture;
gl.bindTexture(target, texture);
gl.texImage2D(target, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// Turn the source into a blob promise: embedded blob, or fetched from a uri.
const blobPromise = src.blob ? new Promise((S,F)=>S(src.blob)):fetch( src.uri || src ).then( res => res.blob() );
blobPromise.then( blob => createImageBitmap(blob,{premultiplyAlpha:"none", colorSpaceConversion:"none"}).then( ib=>{
// console.log('ib load', linear?'linear':'sRGB',' [',ib.width,',',ib.height,']');
gl.bindTexture(target, texture);
gl.texImage2D(target, 0, linear==false ? gl.SRGB8_ALPHA8 : gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, ib);
texParams(gl, target, gl.LINEAR_MIPMAP_LINEAR, gl.LINEAR)
gl.generateMipmap(target);
}));
return texture
}
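The cache key above distinguishes embedded images (keyed by blob name plus bufferView index) from remote ones (keyed by URI). The lookup pattern can be sketched standalone; `makeCacheKey` and `getOrCreate` are illustrative names, not part of miniGL.js:

```javascript
// Module-level cache, keyed the same way loadTexture keys its textures.
const cache = {};

// Embedded sources are keyed by blob name + bufferView index, remote ones by uri.
const makeCacheKey = (src) =>
  src.blob ? src.blob.name + src.bufferView : (src.uri ?? src);

// Return the cached value for a source, creating it with 'create' on a miss.
const getOrCreate = (src, create) => {
  const id = makeCacheKey(src);
  if (cache[id]) return cache[id];
  return cache[id] = create(src);
};
```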
/******************************************************************************
* Create and bind a framebuffer object.
*****************************************************************************/
export const bindFrameBuffer = (gl, buf, width = 1920, height = 1080, hasDepth = true, nrMips = 1, mipLevel = 0, nrAttachments = 1) => {
// If no buffer yet, create one
if (buf == undefined) buf = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, buf);
// If we're still the same size, bail early.
if (buf.width == width && buf.height == height) {
// We attach the correct mipLevel
if (buf.colorTexture) gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, buf.colorTexture, mipLevel);
if (buf.colorTexture2) gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, buf.colorTexture2, mipLevel);
if (buf.depthTexture) gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, buf.depthTexture, mipLevel);
// Return the buffer.
return buf;
}
Object.assign(buf, {width, height});
// Create/resize the textures
if (buf.colorTexture !== undefined) gl.deleteTexture(buf.colorTexture);
buf.colorTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, buf.colorTexture);
gl.texStorage2D(gl.TEXTURE_2D, nrMips, gl.RGBA8, width, height);
texParams(gl, gl.TEXTURE_2D, nrMips==1?gl.LINEAR:gl.LINEAR_MIPMAP_NEAREST, gl.LINEAR, gl.CLAMP_TO_EDGE, gl.CLAMP_TO_EDGE, 0, nrMips);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, buf.colorTexture, mipLevel);
if (nrAttachments == 2) {
if (buf.colorTexture2 !== undefined) gl.deleteTexture(buf.colorTexture2);
buf.colorTexture2 = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, buf.colorTexture2);
gl.texStorage2D(gl.TEXTURE_2D, nrMips, gl.RGBA8, width, height);
texParams(gl, gl.TEXTURE_2D, nrMips==1?gl.LINEAR:gl.LINEAR_MIPMAP_NEAREST, gl.LINEAR, gl.CLAMP_TO_EDGE, gl.CLAMP_TO_EDGE, 0, nrMips);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, buf.colorTexture2, mipLevel);
}
// Create/resize the depth textures.
if (hasDepth) {
if (buf.depthTexture !== undefined) gl.deleteTexture(buf.depthTexture);
buf.depthTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, buf.depthTexture);
gl.texStorage2D(gl.TEXTURE_2D, nrMips, gl.DEPTH_COMPONENT24, width, height);
texParams(gl, gl.TEXTURE_2D, gl.NEAREST, gl.NEAREST,gl.CLAMP_TO_EDGE, gl.CLAMP_TO_EDGE, 0, nrMips);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, buf.depthTexture, mipLevel);
}
return buf;
}
================================================
FILE: src/miniGLTF.js
================================================
/** A minimal glTF loader with PGA support
*
* * Loads and prepares .gltf and .glb files. Converts matrices,
* quaternions and translations to PGA motors.
* * Offers unwelding (for mikktspace) and scale compensation.
* * Evaluates glTF animations.
*
* ©2024 - Enki
**/
const {abs,min,max,hypot} = Math;
import {identity, fromMatrix, log_m, gp_mm, normalize, mix, dot} from './miniPGA.js';
export class miniGLTF {
constructor () {
this.json = null;
}
/**
* Load a glTF or glb file.
* @param {string} uri The url of the file to load.
* @param {Object} [opts] An optional options object.
* @param {function} [opts.progress] A download progress callback function.
**/
async load (uri, opts) {
// Split path and filename. Other references will be relative to this path.
const path = uri.replace(/[^\/]*$/,'');
const fname = uri.replace(/^.*\//,'');
// note : lots of 'var x in' because many things can and will be arrays and/or objects!
// First lets load and parse the JSON and fetch the buffers. (or split them from a glb)
if (fname.match(/\.glb$/i)) {
// Fetch binary data. We assume a single binary chunk at the end; we've never seen multiple buffers here.
var bin = await fetch(uri, {priority:'high',cache:'force-cache'})
.then(r => r.progress( opts.progress ))
.then(r => r.arrayBuffer());
// Make sure its a valid glb file. (check magic and size)
var b32 = new Uint32Array(bin,0,5), b8 = new Uint8Array(bin);
if (b32[0] != 0x46546C67 || b32[2] != bin.byteLength) return console.error('not a valid .glb file.');
// Now split json and buffer - we're assuming just two chunks.
var J = this.json = JSON.parse(new TextDecoder().decode(b8.slice(20, 20 + b32[3]))); // skip 12 byte header and 8 byte chunk header.
for (var i in J.buffers) J.buffers[i] = bin.slice( 20 + b32[3] + 8 ); // skip 12 + 8 + json_size + 8 of next header.
} else {
// if not glb, json and binary are separate uri's.
var J = this.json = await fetch(uri,{priority:'high'}).then(r => r.json());
for (var i in J.buffers) J.buffers[i] = await fetch( (J.buffers[i].uri.match(/^data/)?'':path) + J.buffers[i].uri, {priority:'high'})
.then(r => r.progress(opts.progress))
.then(r => r.arrayBuffer());
}
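The chunk arithmetic above follows the GLB container layout: a 12-byte header (magic `glTF`, version, total length), then a length-prefixed JSON chunk, then a length-prefixed BIN chunk. A standalone sketch of that split (`parseGlb` is an illustrative helper, not the loader's API):

```javascript
// Parse a .glb ArrayBuffer into its JSON chunk and binary chunk.
const parseGlb = (bin) => {
  const b32 = new Uint32Array(bin, 0, 5), b8 = new Uint8Array(bin);
  // Validate the 'glTF' magic and the declared total length.
  if (b32[0] !== 0x46546C67 || b32[2] !== bin.byteLength) return null;
  const jsonLength = b32[3]; // length of the first (JSON) chunk
  // JSON text starts after the 12-byte header + 8-byte chunk header.
  const json = JSON.parse(new TextDecoder().decode(b8.slice(20, 20 + jsonLength)));
  // Binary payload starts after the JSON chunk and the BIN chunk's own 8-byte header.
  const buffer = bin.slice(20 + jsonLength + 8);
  return { json, buffer };
};
```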
// Split bufferviews and link accessors directly.
for (var i in J.bufferViews) {
const bv = J.bufferViews[i];
bv.buffer = J.buffers[bv.buffer].slice(bv.byteOffset??0, (bv.byteOffset??0) + bv.byteLength);
}
// Now link all the accessors to appropriate typed arrays.
for (var i in J.accessors) {
const arrayType = {5120:Int8Array, 5121:Uint8Array, 5122:Int16Array, 5123:Uint16Array, 5124:Int32Array, 5125:Uint32Array, 5126:Float32Array}[J.accessors[i].componentType];
const size = {5120:1, 5121:1, 5122:2, 5123:2, 5124:4, 5125:4, 5126:4}[J.accessors[i].componentType];
const full = size * {'SCALAR':1,'VEC2':2,'VEC3':3,'VEC4':4}[J.accessors[i].type];
const ofs = J.accessors[i].byteOffset??0;
const count = J.accessors[i].count;
if (J.accessors[i].bufferView !== undefined) J.accessors[i].bufferView = Object.assign(new arrayType(J.bufferViews[J.accessors[i].bufferView].buffer),{byteStride:J.bufferViews[J.accessors[i].bufferView].byteStride});
}
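The componentType constants in the accessor loop come straight from the glTF 2.0 specification. The mapping can be captured as a small lookup; the MAT entries of `typeSize` are from the spec and included only for completeness (this loader only uses SCALAR..VEC4):

```javascript
// glTF componentType constant -> JS typed-array constructor (per the glTF 2.0 spec).
const componentArrayType = {
  5120: Int8Array,  5121: Uint8Array,
  5122: Int16Array, 5123: Uint16Array,
  5124: Int32Array, 5125: Uint32Array,
  5126: Float32Array,
};

// Number of components for each accessor type.
const typeSize = { SCALAR: 1, VEC2: 2, VEC3: 3, VEC4: 4, MAT2: 4, MAT3: 9, MAT4: 16 };

// Bytes per accessor element, e.g. a VEC3 of FLOAT is 12 bytes.
const elementBytes = (accessor) =>
  componentArrayType[accessor.componentType].BYTES_PER_ELEMENT * typeSize[accessor.type];
```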
// Prepare the images, either as blob or as url.
for (var i in J.images) {
// old glTF files included binary images via extension.
if (J.images[i].extensions?.KHR_binary_glTF) Object.assign(J.images[i], J.images[i].extensions.KHR_binary_glTF);
// Store either as URI or blob.
if (J.images[i].bufferView !== undefined) J.images[i].blob = Object.assign(new Blob( [J.bufferViews[J.images[i].bufferView].buffer], { type: J.images[i].mimeType } ),{name : J.bufferViews[J.images[i].bufferView].name??uri});
else J.images[i].uri = (J.images[i].uri.match(/^data/)?'':path) + J.images[i].uri;
}
// Assume images are in linear space.
for (var i in J.textures) Object.assign( J.textures[i], {source: J.images[J.textures[i].source], linear:true, sampler: J.samplers&&J.samplers[J.textures[i]?.sampler] });
// now process the materials, link all the textures.
for (var i in J.materials) {
const M = J.materials[i];
const getTex = (M,n) => Object.assign( J.textures[M[n].index], { extensions : M[n].extensions } );
// Top level textures, as well as those in metallicRoughness etc ..
['emissiveTexture','normalTexture','occlusionTexture'].forEach(n=>{if (M[n]) M[n] = getTex(M,n); });
if (M.pbrMetallicRoughness) ['baseColorTexture','metallicRoughnessTexture'].forEach(n=>{if (M.pbrMetallicRoughness[n]) M.pbrMetallicRoughness[n] = getTex(M.pbrMetallicRoughness,n); });
if (M.extensions?.KHR_materials_pbrSpecularGlossiness) ['diffuseTexture'].forEach(n=>{if (M.extensions?.KHR_materials_pbrSpecularGlossiness[n]) M.extensions.KHR_materials_pbrSpecularGlossiness[n] = getTex(M.extensions.KHR_materials_pbrSpecularGlossiness,n); });
if (M.extensions?.KHR_materials_clearcoat) ['clearcoatTexture','clearcoatNormalTexture','clearcoatRoughnessTexture'].forEach(n=>{if (M.extensions?.KHR_materials_clearcoat[n]) M.extensions.KHR_materials_clearcoat[n] = getTex(M.extensions.KHR_materials_clearcoat,n); });
// Update those textures that should be provided in SRGB space.
if (M.emissiveTexture) M.emissiveTexture.linear = false;
if (M.pbrMetallicRoughness?.baseColorTexture) M.pbrMetallicRoughness.baseColorTexture.linear = false;
if (M.extensions?.KHR_materials_pbrSpecularGlossiness?.diffuseTexture) M.extensions.KHR_materials_pbrSpecularGlossiness.diffuseTexture.linear = false;
// temp patch for pbrSpecularGlossiness
if (M.extensions?.KHR_materials_pbrSpecularGlossiness) M.pbrMetallicRoughness = { baseColorFactor : M.extensions.KHR_materials_pbrSpecularGlossiness.diffuseFactor };
}
// Now iterate and prepare all meshes. link all attributes and materials.
for (var i in J.meshes) J.meshes[i].primitives.map( p=> {
p.attributes = Object.fromEntries(Object.entries(p.attributes).map( ([k,v]) => [k, J.accessors[v]] ));
if (p.indices !== undefined) p.indices = J.accessors[p.indices];
if (p.material !== undefined) p.material = J.materials[p.material];
if (p.material?.normalTexture || p.material?.extensions?.KHR_materials_clearcoat?.clearcoatNormalTexture) p.needsTangent = true;
});
// Next link all meshes to their nodes and resolve node children to nodes.
for (var i in J.nodes) {
if (J.nodes[i].mesh !== undefined && J.nodes[i].meshes === undefined) J.nodes[i].meshes=[J.nodes[i].mesh];
J.nodes[i].meshes = J.nodes[i].meshes?.map( name => J.meshes[name].primitives );
J.nodes[i].children = J.nodes[i].children?.map( name => { J.nodes[name].parent = J.nodes[i]; return J.nodes[name]; } );
if (J.nodes[i].camera !== undefined) J.nodes[i].camera = J.cameras[J.nodes[i].camera];
if (J.nodes[i].skin !== undefined) J.nodes[i].skin = J.skins[J.nodes[i].skin];
}
// Now for all the scenes, link the nodes.
for (var i in J.scenes) J.scenes[i].nodes = J.scenes[i].nodes.map( name => J.nodes[name] );
// Process the skeletons. Convert inverseBindMatrices to inverseBindMotors.
for (var i in J.skins) {
const skin = J.skins[i];
// Link and convert bind matrices.
if (skin.inverseBindMatrices) {
const bm = J.accessors[skin.inverseBindMatrices];
skin.inverseBindMatrices = bm;
skin.inverseBindMotors = [...Array(bm.count)].map((x,i)=> fromMatrix( bm.bufferView.slice(i*16,i*16+16) ));
}
// Link nodes to joints.
if (skin.skeleton) skin.skeleton = J.nodes[skin.skeleton];
skin.joints = skin.joints.map(joint => J.nodes[joint]);
}
// Process the animations - link all samplers and targets, and compute min/max for the inputs.
if (J.animations instanceof Object) J.animations = Object.values(J.animations);
if (J.animations?.length === 0) J.animations = undefined;
for (var i in J.animations) {
const anim = J.animations[i];
J.animations[i].channels.forEach(channel=>{
channel.sampler = J.animations[i].samplers[channel.sampler];
channel.target.node = J.nodes[channel.target.node];
channel.sampler.input = J.accessors[channel.sampler.input];
channel.sampler.output = J.accessors[channel.sampler.output];
const input = channel.sampler.input, output = channel.sampler.output;
input.min = [ Infinity]; for (var j=0; j<input.count; ++j) input.min[0]=min(input.min[0],input.bufferView[ (input.byteOffset??0)/4 + j*(input.byteStride??4)/4 ]);
input.max = [-Infinity]; for (var j=0; j<input.count; ++j) input.max[0]=max(input.max[0],input.bufferView[ (input.byteOffset??0)/4 + j*(input.byteStride??4)/4 ]);
//if ((output.min !== undefined) && (output.max !== undefined) && (output.min+'' == output.max+'')) input.skip = output.skip = channel.skip = true;
anim.duration = max(input.max[0], anim.duration??0);
})
}
// Next, make sure all animations are 'complete': an animation that does not animate a property
// animated by another animation should reset that property to its rest value.
const animatedProps = (J.animations??[]).map(x=>x.channels).flat().map(x=>[x.target.node, x.target.path]).filter((x,i,a)=>a.findIndex(([n,p])=>x[0]==n&&x[1]==p)==i);
J.animations?.forEach(a=>{
const missing = animatedProps.filter( ([n,p]) => !a.channels.find( c => c.target.node == n && c.target.path == p ) );
missing.forEach(([n,p])=>a.channels.push({
target : { node : n, path : p },
sampler : { input : { bufferView:[0], count:1 } , output : { bufferView:n[p].slice() , count:1} }
}));
})
// Next, convert all local matrices to motors/bivectors.
// All nodes will get a property 'transform' which is a motor representing the local transformation.
for (var i in J.nodes) {
// Fall back identity if no transform included.
J.nodes[i].transform = identity;
// First matrix, then rot, tran. Scale is not handled!
if (J.nodes[i].matrix) J.nodes[i].transform = fromMatrix(J.nodes[i].matrix);
else {
if (J.nodes[i].rotation) J.nodes[i].transform = gp_mm([J.nodes[i].rotation[3],...J.nodes[i].rotation.slice(0,3).map(x=>-x), 0,0,0,0], J.nodes[i].transform);
if (J.nodes[i].translation) J.nodes[i].transform = gp_mm([1,0,0,0,...J.nodes[i].translation.map(x=>-x/2),0], J.nodes[i].transform);
}
J.nodes[i].bivector = log_m(J.nodes[i].transform);
}
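A note on the translator convention used above: a translation over `t` maps to the motor `[1,0,0,0, -tx/2,-ty/2,-tz/2, 0]`, and because products of ideal bivectors vanish (e0 squares to zero), translators compose by simply adding their ideal parts. A small sketch, with illustrative helper names that are not part of miniPGA.js:

```javascript
// Build a translator motor [s, e23, e31, e12, e01, e02, e03, e0123] for translation t.
const translator = ([x, y, z]) => [1, 0, 0, 0, -x/2, -y/2, -z/2, 0];

// Geometric product of two translators: (1+A)(1+B) = 1 + A + B, since AB = 0
// for ideal bivectors. Scalar stays 1, the e01..e03 parts simply add.
const gpTranslators = (a, b) => a.map((v, i) => i === 0 ? 1 : v + b[i]);
```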
// Finally, establish what we need to get rid of any scaling.
// Most scaling is uniform, and occurs simply to set relative sizes.
// We can easily compensate for this by adjusting animation keys and vertex data.
// Step 1. Establish world scale for each node.
const calculateWorldScale = (node, scale = [1,1,1]) => {
// find our own scale and multiply with incoming scale.
const nm = node.matrix;
node.ownScale = node.scale ?? (nm ? [hypot(...nm.slice(0,3)), hypot(...nm.slice(4,7)), hypot(...nm.slice(8,11))]: [1,1,1]);
node.worldScale = scale.map((x,i)=>node.ownScale[i]*x);
// now forward to our children.
node.children?.forEach( child => calculateWorldScale(child, node.worldScale) );
}
if (J.scenes instanceof Object) J.scenes = Object.values(J.scenes);
if (typeof J.scene == 'string') J.scene = 0;
for (var s in J.scenes) J.scenes[s].nodes.forEach( node => calculateWorldScale(node) );
// Step 2. Find for each mesh, which nodes it is associated with.
for (var j in J.nodes) { const node = J.nodes[j]; node.meshes?.forEach( mesh => mesh.forEach( prim => {
prim.nodes = (prim.nodes || []);
// For a skinned mesh, add the used bones, else add the instance bone.
if (node.skin) {
// Figure out which bones are used by this primitive.
const usedBones = [], attrib = prim.attributes.JOINTS_0, stride = (attrib.byteStride??8)/2, ofs = (attrib.byteOffset??0)/2;
for (var i=0; i<attrib.count; i++) for (var j=0; j<4; j++)
if (usedBones.indexOf( attrib.bufferView[ofs + i * stride + j] ) == -1) usedBones.push( attrib.bufferView[ofs + i * stride + j] );
// Store the nodes for all these bones on the primitive.
prim.nodes.push(...usedBones.map( jointID => node.skin.joints[jointID] ));
// Grab the inverse bindMatrix of a used bone and figure out its scale.
// Next check the matching node for its scale. The difference we need to patch up for.
// (reminder, M and N often satisfy MN=1, but e.g. in stegosaurus.glb they do not)
const matrix = node.skin.inverseBindMatrices.bufferView.slice(usedBones[0]*16, usedBones[0]*16+16);
const boneScaleM = [hypot(...matrix.slice(0,3)), hypot(...matrix.slice(4,7)), hypot(...matrix.slice(8,11))];
const boneScaleN = prim.nodes[0].scale??[1,1,1];
// For this primitive, the total scale is the standard worldscale multiplied with the bonescale.
prim.worldScale = node.worldScale.map((x,i)=>x*boneScaleM[i]*boneScaleN[i]);
prim.skin = node.skin;
node.ownScale = node.ownScale.map((x,i)=>x*boneScaleN[i]);
} else {
// Add this node to the primitive list.
prim.nodes.push(node);
prim.worldScale = node.worldScale;
if (prim.nodes.length > 1) console.log('multiple instances ', prim.nodes);
}
}))};
// Step 3. Update the inverseBindMotors, and the transforms to the new scaling.
if (J.skins instanceof Object) J.skins = Object.values(J.skins);
J.skins?.forEach( skin => skin.joints.forEach( (joint, j) => {
const pscale = joint?.parent?.worldScale??[1,1,1];
skin.inverseBindMotors[j][4] *= pscale[0];
skin.inverseBindMotors[j][5] *= pscale[1];
skin.inverseBindMotors[j][6] *= pscale[2];
skin.inverseBindMotors[j][7] *= pscale[2];
}));
// Adjust all node transformations to reflect scale changes.
for (var j in J.nodes) { const node = J.nodes[j];
const pscale = node?.parent?.worldScale??[1,1,1];
node.transform[4] *= pscale[0];
node.transform[5] *= pscale[1];
node.transform[6] *= pscale[2];
node.transform[7] *= pscale[2];
};
// Let us also renormalize skinning weights so they always sum to one.
if (J.meshes instanceof Object) J.meshes = Object.values(J.meshes);
for (var m in J.meshes) J.meshes[m].primitives.forEach( prim => {
// Only if the primitive is skinned.
if (!prim.attributes.WEIGHTS_0) return;
// Grab the attribute, stride and offset
const attrib = prim.attributes.WEIGHTS_0, ofs = (attrib.byteOffset??0)/4, stride = (attrib.byteStride??16)/4;
// Now loop over all sets of weights and divide them by their sum.
for (var i=0; i<attrib.count; ++i) {
const vec = attrib.bufferView.slice(ofs + stride * i,ofs + stride * i + 4);
const sum = vec[0] + vec[1] + vec[2] + vec[3];
if (sum) attrib.bufferView.set( vec.map(x=>x/sum) , ofs + stride * i);
}
});
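The weight pass above can be sketched standalone: each vertex carries four skinning weights that should form a partition of unity, i.e. sum to one, as the loader's comment notes. `renormalizeWeights` is an illustrative name, not part of the loader:

```javascript
// Renormalize per-vertex skinning weights so each group of four sums to one.
// Sketch only; the real loader operates on strided accessor views.
const renormalizeWeights = (weights) => {
  for (let i = 0; i < weights.length; i += 4) {
    const sum = weights[i] + weights[i+1] + weights[i+2] + weights[i+3];
    if (sum) for (let j = 0; j < 4; ++j) weights[i+j] /= sum;
  }
  return weights;
};
```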
return this;
}
/**
* Evaluate all nodes at the given time for the given animation.
* Optionally blends in a second animation.
* @param {number} [time=0] The time to evaluate at.
* @param {number} [anim=0] The animation to evaluate.
* @param {number} [time2] The time to evaluate the second animation at.
* @param {number} [anim2] The second animation to blend with.
* @param {number} [blend] Blend factor between the two animations (0..1).
**/
setTime (time=0, anim=0, time2, anim2, blend) {
// Make sure we have a valid animation
if (!this.json.animations || !this.json.animations.length) return;
anim = min(anim, (this.json?.animations?.length??1)-1)
if (blend) anim2 = min(anim2, (this.json?.animations?.length??1)-1)
// Make sure we have a valid time in that animation.
//time = min(max(time,this.json?.animations[anim].channels[0].sampler.input.min[0]),this.json?.animations[anim].channels[0].sampler.input.max[0])
// Now loop over all channels.
var allAnims = blend ? [[anim, time],[anim2, time2]]:[[anim, time]];
allAnims.forEach(([anim, time],bi)=>{
for (var ci=0, cl=this.json.animations[anim].channels.length; ci<cl; ++ci) {
let channel = this.json.animations[anim].channels[ci];
//if (channel.skip) continue;
// Grab the target and sampler - with their offsets. strides are fixed for animation data!
const {target, sampler} = channel;
const ofsi = (sampler.input.byteOffset??0)/4;
const ofso = (sampler.output.byteOffset??0)/4;
// find correct frame. (start looking from our last found frame as optimisation).
const si = sampler.input, sib = si.bufferView;
if (si.skip) {
var frame = 0;
} else {
for (var frame = time>=sampler.curTime?sampler.curFrame:0; sib[ofsi + frame]<=time && frame<si.count-1; ++frame);
}
// Calculate the subframe time as 't'
var t = frame==0?1:(time - sampler.input.bufferView[ofsi + frame - 1])/(sampler.input.bufferView[ofsi + frame] - sampler.input.bufferView[ofsi + (frame-1)]);
t = min(1,max(0,t));
//if (ci == 1) console.log(time, frame, t);
// Store our current frame and time
sampler.curFrame = frame;
sampler.curTime = time;
const bv = sampler.output.bufferView;
target.node.changed = true;
// Now handle the translation
if (target.path == 'translation') {
const ofsB = ofso + frame*3, ofsA = ofsB - 3;
if (bi) {
if (t===1) target.node.translation = mix(target.node.translation, bv.slice(ofsB, ofsB+3), blend);
else if (t==0) target.node.translation = mix(target.node.translation, bv.slice(ofsA, ofsA+3), blend);
else target.node.translation = mix(target.node.translation, [bv[ofsA]*(1-t)+t*bv[ofsB], bv[ofsA+1]*(1-t)+t*bv[ofsB+1], bv[ofsA+2]*(1-t)+t*bv[ofsB+2]], blend);
} else {
if (t===1) target.node.translation = bv.slice(ofsB, ofsB+3);
else if (t==0) target.node.translation = bv.slice(ofsA, ofsA+3);
else target.node.translation = [bv[ofsA]*(1-t)+t*bv[ofsB], bv[ofsA+1]*(1-t)+t*bv[ofsB+1], bv[ofsA+2]*(1-t)+t*bv[ofsB+2]];
}
}
// For the rotation we do a renormalized lerp for now.
if (target.path == 'rotation') {
// Fetch both frames.
const ofsB = ofso + frame*4, ofsA = ofsB - 4;
// Quick bail.
if (t==1) {
if (bi) {
rotF = bv.slice(ofsB, ofsB+4);
if (dot(rotF, target.node.rotation)<0) rotF = rotF.map(x=>-x);
target.node.rotation = mix(target.node.rotation, rotF, blend);
} else {
target.node.rotation = bv.slice(ofsB, ofsB+4);
}
continue;
}
if (t==0) {
if (bi) {
rotF = bv.slice(ofsA, ofsA+4);
if (dot(rotF, target.node.rotation)<0) rotF = rotF.map(x=>-x);
target.node.rotation = mix(target.node.rotation, rotF, blend);
} else {
target.node.rotation = bv.slice(ofsA, ofsA+4);
}
continue;
}
// Make sure we're picking the small angle.
if ( bv[ofsA]*bv[ofsB] + bv[ofsA+1]*bv[ofsB+1] + bv[ofsA+2]*bv[ofsB+2] + bv[ofsA+3]*bv[ofsB+3] < 0)
var rotF = [bv[ofsA]*(1-t)-t*bv[ofsB], bv[ofsA+1]*(1-t)-t*bv[ofsB+1], bv[ofsA+2]*(1-t)-t*bv[ofsB+2], bv[ofsA+3]*(1-t)-t*bv[ofsB+3]]; // rotA.map((a,i)=> a * (1 - t) - t * rotB[i] );
else
var rotF = [bv[ofsA]*(1-t)+t*bv[ofsB], bv[ofsA+1]*(1-t)+t*bv[ofsB+1], bv[ofsA+2]*(1-t)+t*bv[ofsB+2], bv[ofsA+3]*(1-t)+t*bv[ofsB+3]]; // rotA.map((a,i)=> a * (1 - t) + t * rotB[i] );
// Now interpolate linearly and renormalize.
var len = (rotF[0]**2 + rotF[1]**2 + rotF[2]**2 + rotF[3]**2)**-.5; // i.e. 1/hypot(...rotF)
if (bi) {
rotF = rotF.map(x=>x*len);
if (dot(rotF, target.node.rotation)<0) rotF = rotF.map(x=>-x);
target.node.rotation = mix(target.node.rotation, rotF, blend);
} else {
target.node.rotation = rotF.map(x=>x*len);
}
}
}});
// Now that all animation data is updated, we need to recalculate all local transforms.
const J = this.json;
for (var i=0, l=J.nodes.length; i<l; ++i) {
const JNI = J.nodes[i];
if (!JNI.changed) continue;
JNI.transform=identity;
// First matrix, then rot, tran. Scale is not handled here!
if (JNI.matrix) JNI.transform = fromMatrix(JNI.matrix);
if (JNI.rotation) JNI.transform = normalize([JNI.rotation[3],-JNI.rotation[0],-JNI.rotation[1],-JNI.rotation[2], 0,0,0,0]);
if (JNI.translation) JNI.transform = gp_mm([1,0,0,0,-JNI.translation[0]/2,-JNI.translation[1]/2,-JNI.translation[2]/2,0], JNI.transform);
// Apply scale patch - assume verts are already scaled, so only modify transform translation part.
const tf = JNI.transform, s = JNI?.parent?.worldScale || [1,1,1];
tf[4] *= s[0]; tf[5] *= s[1]; tf[6] *= s[2]; tf[7] *= s[2];
}
}
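The rotation path above implements a renormalized lerp (nlerp) with a short-arc sign flip. The same logic, minus the strided buffer access, can be sketched as follows; `nlerpQuat` is an illustrative name:

```javascript
// nlerp between two quaternions a and b at parameter t in [0,1].
const nlerpQuat = (a, b, t) => {
  // Pick the short arc: if the dot product is negative, negate one endpoint.
  const d = a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
  const s = d < 0 ? -1 : 1;
  // Interpolate linearly...
  const q = a.map((x, i) => x * (1 - t) + s * t * b[i]);
  // ...then renormalize, since the chord leaves the unit sphere.
  const invLen = (q[0]**2 + q[1]**2 + q[2]**2 + q[3]**2) ** -0.5;
  return q.map(x => x * invLen);
};
```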
/**
* Unweld: Used to unpack attributes so they are no longer interleaved.
* Optionally can unweld all vertices (e.g. for mikkt).
* @param {Object} prim The glTF primitive to process.
* @param {Object} [opts] An options object.
* @param {number[]} [opts.scale] Optional scaling to apply to the vertex positions.
* @param {boolean} [opts.needsTangent=false] Boolean indicating if a full unweld is needed.
**/
unweld (prim, opts={}) {
// Grab the attributes.
var P = prim.attributes.POSITION.bufferView;
var N = (prim.attributes.NORMAL) ? prim.attributes.NORMAL.bufferView : new Float32Array(P.length);
var T = (prim.attributes.TEXCOORD_0) ? prim.attributes.TEXCOORD_0.bufferView : new Float32Array(P.length/3*2);
var TG = prim.attributes?.TANGENT?.bufferView;
var W = prim.attributes?.WEIGHTS_0?.bufferView;
var J = prim.attributes?.JOINTS_0?.bufferView;
var I = prim.indices?.bufferView??new Uint32Array([...Array(P.length/3).keys()]);
// Grab the offsets and strides.
const pa = prim.attributes;
const [PO, NO, TO, GO, WO, JO] = [pa.POSITION.byteOffset??0, pa.NORMAL?.byteOffset??0, pa.TEXCOORD_0?.byteOffset||0, pa.TANGENT?.byteOffset??0, pa.WEIGHTS_0?.byteOffset??0, (pa.JOINTS_0?.byteOffset??0)*2].map(x=>x/4);
const [PS, NS, TS, GS, WS, JS] = [pa.POSITION.byteStride??12, pa.NORMAL?.byteStride??12, pa.TEXCOORD_0?.byteStride||8, pa.TANGENT?.byteStride??16, pa.WEIGHTS_0?.byteStride??16, (pa.JOINTS_0?.bufferView?.byteStride??8)*2 ].map(x=>x/4);
const IO = (prim?.indices?.byteOffset??0) / ((I instanceof Uint16Array)?2:4);
// Now unpack
if (opts.scale == undefined) opts.scale = [1,1,1];
if ((opts.needsTangent && !prim.attributes.TANGENT)) {
var icount = prim?.indices?.count ?? I.length;
var vertices = new Float32Array(icount*3), normals = new Float32Array(icount*3), weights = W && new Float32Array(icount*4), joints = J && new Uint16Array(icount*4),
uvs = new Float32Array(icount*2), indices = [], tangents = TG?new Float32Array(icount*4):undefined;
for (var j=0; j<icount; ++j) {
var i = I[j + IO];
vertices[j*3 ] = P[i*PS + PO ] * opts.scale[0];
vertices[j*3+1] = P[i*PS + PO+1] * opts.scale[1];
vertices[j*3+2] = P[i*PS + PO+2] * opts.scale[2];
normals.set(N.slice(i*NS + NO, i*NS + NO + 3), j*3);
uvs.set(T.slice(i*TS + TO, i*TS + TO + 2), j*2);
if (TG) tangents.set(TG.slice(i*GS + GO, i*GS + GO + 4), j*4);
if (W) weights.set(W.slice(i*WS + WO, i*WS + WO + 4), j*4);
if (J) joints.set(J.slice(i*JS + JO, i*JS + JO + 4), j*4);
indices.push(indices.length);
}
} else { // no need to unweld.
vertices = P.slice( PO, PO + prim.attributes.POSITION.count * 3 );
if (opts.scale[0]!=1 || opts.scale[1]!=1 || opts.scale[2]!=1) {
// if (J) console.log('apply scale', PO, vertices.length, opts.scale);
for (var i=0; i<vertices.length; ++i) vertices[i] *= opts.scale[i%3]??1;
}
normals = N.slice( NO, NO + prim.attributes.POSITION.count * 3 );
uvs = T.slice( TO, TO + prim.attributes.POSITION.count * 2 );
if (TG) tangents = TG.slice( GO, GO + prim.attributes.POSITION.count * 4 );
if (W) weights = W.slice( WO, WO + prim.attributes.POSITION.count * 4 );
if (J) {
if (JS==4) {
joints = J.slice( JO, JO + prim.attributes.POSITION.count * 4 );
} else {
const l = prim.attributes.POSITION.count;
joints = new Uint16Array( l * 4 );
for (var i=0; i<l; ++i) joints.set( J.slice(JO + i*JS, JO + i*JS + 4) , i*4 );
}
}
indices = I.slice(IO, IO + (prim?.indices?.count??I.length));
}
return {vertices, normals, uvs, indices, tangents, weights, joints};
}
}
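The core unweld idea, expanding an indexed attribute so every triangle corner owns its own copy (as mikktspace requires), can be sketched without the strided-accessor bookkeeping; `unweldAttribute` is an illustrative helper, not part of the class above:

```javascript
// Expand an indexed attribute: one copy of the data per index, in index order.
const unweldAttribute = (data, components, indices) => {
  const out = new Float32Array(indices.length * components);
  indices.forEach((idx, j) => {
    for (let c = 0; c < components; ++c)
      out[j*components + c] = data[idx*components + c];
  });
  return out;
};
```

After unwelding, the new index buffer is simply `[0, 1, 2, ...]`, exactly as the loop above pushes `indices.length`.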
================================================
FILE: src/miniIBL.glsl
================================================
/******************************************************************************
*
* Look, Ma, No Matrices!
* miniIBL.glsl
*
* by Steven De Keninck
*
* Elementary IBL GGX lighting support.
* Adapted from the official Khronos glTF viewer.
*
*/
/* We use three textures for IBL lighting :
*
* ibl_irradiance : cubemap for specular reflections (GGX-prefiltered mips).
* ibl_radiance : cubemap for diffuse indirect lighting.
* ibl_lut : GGX BRDF lookup table.
*/
uniform samplerCube ibl_irradiance;
uniform samplerCube ibl_radiance;
uniform sampler2D ibl_lut;
/* Convert a direction to equirectangular uv coordinates.
*
* @param {vec3} direction The direction to convert.
* @returns {vec2} The equirectangular uv coordinates.
**/
vec2 equirect(vec3 dir) {
return vec2(1.0 - (PI + atan(dir.z,dir.x)) / (2.0 * PI), acos(dir.y) / PI);
}
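For illustration, the mapping above ports directly to JS (assuming a normalized input direction; `equirectUV` is an illustrative name):

```javascript
// Map a unit direction to [0,1]^2 equirectangular uv coordinates,
// mirroring the GLSL equirect() above.
const PI = Math.PI;
const equirectUV = ([x, y, z]) => [
  1.0 - (PI + Math.atan2(z, x)) / (2.0 * PI), // longitude -> u
  Math.acos(y) / PI,                          // latitude  -> v
];
```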
/**
* Reproject a (position,direction) w.r.t. a finite environment cube. Used for localised
* reflections and lighting. It requires vpos to be inside the box!
*
* @param {vec3} indir The direction to reproject.
* @param {vec3} vpos The position indir is from.
* @param {vec3} bmin The minimum values of an axis aligned bounding box.
* @param {vec3} bmax The maximum values of an axis aligned bounding box.
* @param {vec3} bpos The center of the axis aligned bounding box.
* @returns {vec3} The reprojected direction.
**/
vec3 reproject_cube( vec3 indir, vec3 vpos, vec3 bmin, vec3 bmax, vec3 bpos ) {
// Determine where, seen from vpos, our indir hits the box.
vec3 FirstPlaneIntersect = (bmax-vpos) / indir;
vec3 SecondPlaneIntersect = (bmin-vpos) / indir;
// Figure out the furthest plane, and the distance to it.
vec3 FurthestPlane = max(FirstPlaneIntersect, SecondPlaneIntersect);
float Distance = min(min(FurthestPlane.x, FurthestPlane.y), FurthestPlane.z);
// Return the direction 'bpos' needs to hit the same point.
vec3 IntersectPositionWS = vpos.xyz + indir * Distance;
return normalize(IntersectPositionWS - bpos);
}
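A direct JS port of the intersection logic above, for illustration only (assumes `vpos` lies inside the box and uses plain `[x,y,z]` arrays for vectors):

```javascript
// Intersect the ray (vpos, indir) with the AABB [bmin, bmax], then return the
// direction from bpos to the hit point - mirroring reproject_cube() above.
const reprojectCube = (indir, vpos, bmin, bmax, bpos) => {
  // Per-axis distance to the far slab along the ray (division by 0 gives +/-Infinity).
  const far = indir.map((d, i) => Math.max((bmax[i] - vpos[i]) / d, (bmin[i] - vpos[i]) / d));
  const dist = Math.min(...far);
  // Hit point in world space, re-expressed as a unit direction from bpos.
  const hit = vpos.map((p, i) => p + indir[i] * dist);
  const out = hit.map((h, i) => h - bpos[i]);
  const len = Math.hypot(...out);
  return out.map(x => x / len);
};
```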
/**
* Computes the specular radiance contribution from the environment lighting using GGX.
*
* @param vec3 n The surface normal direction vector.
* @param vec3 v The view direction vector from the camera to the surface point.
* @param float roughness The roughness of the surface, affecting the sharpness of the reflection.
* @param vec3 F0 The Fresnel reflectance at normal incidence.
* @param vec3 pos The position of the surface point in world space.
* @returns vec3 The specular radiance contribution from the environment.
*/
vec3 getIBLRadianceGGX(vec3 n, vec3 v, float roughness, vec3 F0, vec3 pos)
{
// Clamp dot product of normal and view vector to avoid negative values
float NdotV = clamp(dot(n, v), 0.0, 1.0);
// Calculate level of detail for mipmapping based on roughness
float lod = roughness * 7.0; // Assuming 8 mipmap levels
// Reflect view vector around normal, and reproject w.r.t. environment box
vec3 reflection = normalize(reflect(-v, n));
reflection = reproject_cube(reflection, pos, vec3(-12.0, -1.0, -12.0), vec3(12.0, 80.0, 12.0), vec3(0.0, 0.0, -2.3));
// Determine the BRDF sampling point, sample reflectance and geom. att.
vec2 brdfSamplePoint = clamp(vec2(NdotV, roughness), vec2(0.0), vec2(1.0));
vec2 f_ab = texture(ibl_lut, brdfSamplePoint).rg;
// Sample the specular radiance from the environment map
vec3 specularLight = textureLod(ibl_irradiance, reflection, lod).rgb;
// Calculate Fresnel reflectance and specular scaling.
vec3 Fr = max(vec3(1.0 - roughness), F0) - F0;
vec3 k_S = F0 + Fr * pow(1.0 - NdotV, 5.0);
// Return final spec contribution from env.
return specularLight * (k_S * f_ab.x + f_ab.y);
}
/**
* Computes the diffuse radiance contribution from the environment lighting based on Lambertian reflection.
* This function provides the indirect lighting effect on surfaces, taking into account their roughness and base color.
*
* @param vec3 n The surface normal direction vector.
* @param vec3 v The view direction vector from the camera to the surface point.
* @param float roughness The roughness of the surface, affecting the diffusion of the reflection.
* @param vec3 diffuseColor The base color of the surface.
* @param vec3 F0 The Fresnel reflectance at normal incidence.
* @returns vec3 The diffuse radiance contribution from the environment.
*/
vec3 getIBLRadianceLambertian(vec3 n, vec3 v, float roughness, vec3 diffuseColor, vec3 F0)
{
// Clamp dot product of normal and view vector to ensure non-negative values
float NdotV = clamp(dot(n, v), 0.0, 1.0);
// Determine the BRDF sampling point, sample reflectance and geom. att.
vec2 brdfSamplePoint = clamp(vec2(NdotV, roughness), vec2(0.0), vec2(1.0));
vec2 f_ab = texture(ibl_lut, brdfSamplePoint).rg;
// Sample the diffuse irradiance from the environment map
vec3 irradiance = texture(ibl_radiance, n).rgb;
// Calculate Fresnel reflectance at normal incidence
vec3 Fr = max(vec3(1.0 - roughness), F0) - F0;
vec3 k_S = F0 + Fr * pow(1.0 - NdotV, 5.0);
// Combine Fresnel reflectance and geometric attenuation
vec3 FssEss = k_S * f_ab.x + f_ab.y;
// Calculate energy conservation for multiple scattering
float Ems = 1.0 - (f_ab.x + f_ab.y);
// Compute average Fresnel reflectance
vec3 F_avg = F0 + (1.0 - F0) / 21.0;
// Calculate multiple scattering component
vec3 FmsEms = Ems * FssEss * F_avg / (1.0 - F_avg * Ems);
// Compute diffuse contribution, accounting for energy conservation
vec3 k_D = diffuseColor * (1.0 - FssEss + FmsEms);
// Return final diffuse contribution from the environment
return (FmsEms + k_D) * irradiance;
}
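The single/multi-scatter bookkeeping above is pure arithmetic, so it can be traced outside the shader. The sketch below redoes it in JavaScript with a scalar F0 for simplicity; the `f_ab` pair would normally be fetched from `ibl_lut`, so the values used below are hypothetical stand-ins, not real LUT samples.

```javascript
// Trace of the energy terms above, with scalar F0 for simplicity.
// f_ab stands in for the ibl_lut sample (scale/bias of the split-sum BRDF).
function multiScatterTerms(F0, roughness, NdotV, f_ab) {
  const Fr     = Math.max(1 - roughness, F0) - F0;          // roughness-limited Fresnel range
  const k_S    = F0 + Fr * Math.pow(1 - NdotV, 5);          // Schlick Fresnel at NdotV
  const FssEss = k_S * f_ab[0] + f_ab[1];                   // single-scatter energy
  const Ems    = 1 - (f_ab[0] + f_ab[1]);                   // energy missed by single scattering
  const F_avg  = F0 + (1 - F0) / 21;                        // cosine-averaged Fresnel
  const FmsEms = Ems * FssEss * F_avg / (1 - F_avg * Ems);  // recovered multi-scatter energy
  return { FssEss, Ems, FmsEms };
}
```

For a dielectric F0 like 0.04 the recovered multi-scatter term stays well below Ems, so this compensation mostly brightens rough metals (large F_avg) rather than dielectrics.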
================================================
FILE: src/miniPGA.glsl
================================================
/******************************************************************************
*
* Look, Ma, No Matrices!
* miniPGA.glsl
*
* by Steven De Keninck
*
* Basic PGA support for glsl. Layouts used :
*
* motor : mat2x4 : [ [s, e23, e31, e12], [e01, e02, e03, e0123] ]
* point : vec3 : [ e032, e013, e021 ] with implied 1 e123
* direction : vec3 : [ e032, e013, e021 ] with implied 0 e123
* line : mat2x3 : [ [e23, e31, e12], [e01, e02, e03] ]
*
* We opt to use matrix types because they allow addition and scalar multiplication.
*
* A postfix approach is used to disambiguate overlapping types. We provide the
* following functions :
*
* prefix function postfix
* gp = geometric product [_rt, _tr, _mt, _tm, _mr, _rm, _rr, _tt, _mm, _vv]
* sw = sandwich product [_mp, _md, _mo, _mx, _my, _mz]
* reverse = reverse [_m]
* exp = exponential [_b]
* log = logarithm [_m]
* normalize = normalize [_m]
* sqrt = square root [_m]
*
* Postfix naming convention :
*
* m = general motor (normalized) [ [s, e23, e31, e12], [e01, e02, e03, e0123] ]
* t = simple translation [ [1, 0, 0, 0], [e01, e02, e03, 0] ]
* r = simple rotation [ [s, e23, e31, e12], [0, 0, 0, 0] ]
* d = ideal point (direction) [ e032, e013, e021 ]
* p = normalized Euclidean point (point) [ e032, e013, e021 ]
* o = origin. (1e123) -
* b = bivector (line) [ [e23, e31, e12], [e01, e02, e03] ]
*
* e.g. gp_mr = geometric product between general motor and rotator.
* sw_md = sandwich product between general motor and direction (ideal point).
*
* We generally assume normalized motors for performance reasons.
* A normalisation function is available.
*
*****************************************************************************/
#define motor mat2x4
#define line mat2x3
#define point vec3
#define direction vec3
const float PI = 3.14159265359;
/******************************************************************************
* Apply a normalized motor 'a' to a Euclidean point 'b'.
* @param {motor} a The motor 'a' in 'ab~a'. Must be normalized.
* @param {point} b Euclidean point 'b' in 'ab~a'.
* @returns {point} The transformed point.
* 21 muls, 18 adds
*****************************************************************************/
point sw_mp( motor a, point b ) {
direction t = cross(b, a[0].yzw) - a[1].xyz;
return (a[0].x * t + cross(t, a[0].yzw) - a[0].yzw * a[1].w) * 2. + b;
}
/******************************************************************************
* Apply a normalized motor 'a' to a Euclidean point 'b', but return
* (ab~a)/2 - b. (Saving 3 multiplies and 3 adds).
* Use this for linear bone skinning, it is not only cheaper but can handle
* non normalized weights, including all zero. (i.e. blend the 4 swx results,
* then multiply the result with two and add the original vertex back in.)
* @param {motor} a The motor 'a' in '(ab~a)/2-b'. Must be normalized.
* @param {point} b Euclidean point 'b' in '(ab~a)/2-b'.
* @returns {point} (ab~a)/2-b.
* 18 muls, 15 adds
*****************************************************************************/
point swx_mp( motor a, point b ) {
direction t = cross(b, a[0].yzw) - a[1].xyz;
return a[0].x * t + cross(t, a[0].yzw) - a[0].yzw * a[1].w;
}
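Since miniPGA.js mirrors these layouts, the skinning recipe above (blend the per-bone swx results, multiply by two, add the vertex back) can be sketched and checked in JavaScript. `swx` below is a hand-inlined `(ab~a)/2 - b`, matching `sw_mp` from miniPGA.js.

```javascript
// Motor layout: [s, e23, e31, e12, e01, e02, e03, e0123] (as in miniPGA.js).
// swx(a,b) = (a b ~a)/2 - b  -- the inner part of sw_mp without the *2 + b.
const swx = (a, b) => {
  const s0 = a[1]*b[2] - a[3]*b[0] - a[5],
        s1 = a[3]*b[1] - a[2]*b[2] - a[4],
        s2 = a[2]*b[0] - a[1]*b[1] - a[6];
  return [a[3]*s0 + a[0]*s1 - a[1]*a[7] - a[2]*s2,
          a[1]*s2 + a[0]*s0 - a[2]*a[7] - a[3]*s1,
          a[2]*s1 + a[0]*s2 - a[3]*a[7] - a[1]*s0];
};
const sw_mp = (a, b) => swx(a, b).map((x, i) => 2*x + b[i]);

// Two bones: a quarter turn around the z axis (rotors use half angles) and identity.
const c = Math.cos(Math.PI/4), s = Math.sin(Math.PI/4);
const bone0 = [c, 0, 0, s, 0, 0, 0, 0];
const bone1 = [1, 0, 0, 0, 0, 0, 0, 0];
const v = [1, 0, 0], w = [0.25, 0.75];           // weights sum to 1

// Blend the swx results, multiply by two, add the original vertex back in.
const sx0 = swx(bone0, v), sx1 = swx(bone1, v);
const skinned = v.map((vi, i) => 2*(w[0]*sx0[i] + w[1]*sx1[i]) + vi);
```

With weights summing to 1 this equals the weighted blend of the full sandwiches; with all-zero weights it degenerates gracefully to the untransformed vertex.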
/******************************************************************************
* Apply a normalized motor 'a' to an Infinite point 'b'.
* @param {motor} a The motor 'a' in 'ab~a'. Must be normalized.
* @param {direction} b Infinite point 'b' in 'ab~a'. (direction).
* @returns {direction} The transformed Infinite point. (direction).
* 18 muls, 12 adds
*****************************************************************************/
direction sw_md( motor a, direction b ) {
direction t = cross(b, a[0].yzw);
return (a[0].x * t + cross(t, a[0].yzw)) * 2. + b;
}
/******************************************************************************
* Apply a normalized motor 'a' to the x-direction.
* the resulting vector is normalized to length 0.5!
* @param {motor} a The motor 'a' in 'a * e1 * ~a'. Must be normalized.
* @returns {direction} The transformed x direction. (direction).
* 6 muls, 4 adds
*****************************************************************************/
direction sw_mx( motor a ) {
return direction(
0.5 - a[0].w*a[0].w - a[0].z*a[0].z,
a[0].z*a[0].y - a[0].x*a[0].w,
a[0].w*a[0].y + a[0].x*a[0].z
);
}
/******************************************************************************
* Apply a normalized motor 'a' to the y-direction.
* the resulting vector is normalized to length 0.5!
* @param {motor} a The motor 'a' in 'a * e2 * ~a'. Must be normalized.
* @returns {direction} The transformed y direction. (direction).
* 6 muls, 4 adds
*****************************************************************************/
direction sw_my( motor a ) {
return direction(
a[0].x*a[0].w + a[0].y*a[0].z,
0.5 - a[0].y*a[0].y - a[0].w*a[0].w,
a[0].w*a[0].z - a[0].x*a[0].y
);
}
/******************************************************************************
* Apply a normalized motor 'a' to the z-direction.
* the resulting vector is normalized to length 0.5!
* @param {motor} a The motor 'a' in 'a * e3 * ~a'. Must be normalized.
* @returns {direction} The transformed z direction. (direction).
* 6 muls, 4 adds
*****************************************************************************/
direction sw_mz( motor a ) {
return direction(
a[0].y*a[0].w - a[0].z*a[0].x,
a[0].z*a[0].w + a[0].y*a[0].x,
0.5 - a[0].z*a[0].z - a[0].y*a[0].y
);
}
/******************************************************************************
* Extract both the normal and tangent directions from a motor.
* The resulting vectors are normalised to length 0.5 (saves 6 muls).
* @param {motor} a The motor.
* @returns {vec3[2]} the normal and tangent vectors.
* 9 muls, 8 adds.
*****************************************************************************/
void extractNormalTangent( motor a, out direction normal, out direction tangent ) {
float yw = a[0].y * a[0].w;
float xz = a[0].x * a[0].z;
float zz = a[0].z * a[0].z;
normal = direction( yw - xz, a[0].z*a[0].w + a[0].y*a[0].x, 0.5 - zz - a[0].y*a[0].y );
tangent = direction( 0.5 - zz - a[0].w*a[0].w, a[0].z*a[0].y - a[0].x*a[0].w, yw + xz );
}
/******************************************************************************
* Apply a normalized motor 'a' to the origin.
* @param {motor} a The motor 'a' in 'a * e123 * ~a'. Must be normalized.
* @returns {point} The transformed origin.
* 15 muls, 9 adds
*****************************************************************************/
point sw_mo( motor a ) {
return 2.*( cross(a[0].yzw, a[1].xyz) - a[0].x*a[1].xyz - a[1].w*a[0].yzw );
}
/******************************************************************************
* Reverse a normalized motor 'R'
* @param {motor} R The motor to be reversed.
* @returns {motor} The reversed motor.
* 6 negations
*****************************************************************************/
motor reverse_m( motor R ) {
return motor( R[0].x, -R[0].yzw, -R[1].xyz, R[1].w );
}
/******************************************************************************
* Create a simple rotation that preserves the origin.
* Expects an angle and normalized line (bivector).
* @param {number} angle The angle.
* @param {line} B The Euclidean line (Bivector) to rotate around.
* @returns {motor} The exponentiation of angle*B.
* 3 muls, cos, sin
*****************************************************************************/
motor exp_r( float angle, line B ) {
return motor( cos(angle), sin(angle)*B[0], vec4(0.) );
}
/******************************************************************************
* Create a simple translation.
* Expects a distance and normalized bivector.
* @param {number} dist The distance.
* @param {line} B The ideal line (Bivector) to 'rotate' around.
* @returns {motor} The exponentiation of dist*B.
* 3 muls
*****************************************************************************/
motor exp_t( float dist, line B ) {
return motor( 1., 0., 0., 0., dist*B[1], 0. );
}
/******************************************************************************
* General exponential.
* @param {line} B The line (bivector) to exponentiate.
* @returns {motor} The exponentiation of B.
* 17 muls 8 add 2 div 1 sqrt 1 cos 1 sin
*****************************************************************************/
motor exp_b( line B ) {
float l = dot(B[0],B[0]);
if (l==0.) return motor( vec4(1., 0., 0., 0.), vec4(B[1], 0.) );
float a = sqrt(l), m = dot(B[0].xyz, B[1]), c = cos(a), s = sin(a)/a, t = m/l*(c-s);
return motor( c, s*B[0], s*B[1] + t*B[0], m*s );
}
/******************************************************************************
* General logarithm.
* @param {motor} M The normalized motor of which to take the logarithm.
* @returns {line} The logarithm of M.
* 14 muls 5 add 1 div 1 acos 1 sqrt
*****************************************************************************/
line log_m( motor M ) {
if (1. - M[0].x < 1e-6) return line( vec3(0.), vec3(M[1].xyz) );
float a = 1./(1. - M[0].x*M[0].x), b = acos(M[0].x) * sqrt(a), c = a*M[1].w*(1. - M[0].x*b);
return line( b*M[0].yzw, b*M[1].xyz + c*M[0].yzw);
}
/******************************************************************************
* Efficient composition of motors iff a is a rotation and b a translation.
* @param {motor} A A rotation motor A.
* @param {motor} B A translation motor B.
* @returns {motor} The composition of motors ab
* 12 muls 8 adds
*****************************************************************************/
motor gp_rt( motor a, motor b ) {
return motor( a[0], a[0].x*b[1].xyz + cross(b[1].xyz, a[0].yzw), dot(b[1].xyz, a[0].yzw) );
}
/******************************************************************************
* Efficient composition of motors iff a is a translation and b a rotation.
* @param {motor} A A translation motor A.
* @param {motor} B A rotation motor B.
* @returns {motor} The composition of motors ab
* 12 muls 8 adds
*****************************************************************************/
motor gp_tr( motor a, motor b ) {
return motor( b[0], b[0].x*a[1].xyz - cross(a[1].xyz, b[0].yzw), dot(a[1].xyz, b[0].yzw) );
}
/******************************************************************************
* Efficient composition of motors iff a is a rotation and b a general motor.
* @param {motor} A A rotation motor A.
* @param {motor} B A general motor B.
* @returns {motor} The composition of motors ab
* 32 muls 24 adds
*****************************************************************************/
motor gp_rm( motor a, motor b ) {
return motor( a[0].x*b[0] + vec4( -dot(a[0].yzw, b[0].yzw), b[0].x*a[0].yzw + cross(b[0].yzw, a[0].yzw) ),
a[0].x*b[1] + vec4( cross(b[1].xyz, a[0].yzw) - a[0].yzw*b[1].w, dot(a[0].yzw, b[1].xyz) ));
}
/******************************************************************************
* Efficient composition of motors iff a is a general motor and b a rotation.
* @param {motor} A A general motor A.
* @param {motor} B A rotation motor B.
* @returns {motor} The composition of motors ab
* 32 muls 24 adds
*****************************************************************************/
motor gp_mr( motor a, motor b ) {
return motor( b[0].x*a[0] + vec4( -dot(b[0].yzw, a[0].yzw), a[0].x*b[0].yzw - cross(a[0].yzw, b[0].yzw) ),
b[0].x*a[1] + vec4( -cross(a[1].xyz, b[0].yzw) - b[0].yzw*a[1].w, dot(b[0].yzw, a[1].xyz) ));
}
/******************************************************************************
* Efficient composition of motors iff a is a translation and b a general motor
* @param {motor} A A translation motor A.
* @param {motor} B A general motor B.
* @returns {motor} The composition of motors ab
* 12 muls 12 adds
*****************************************************************************/
motor gp_tm( motor a, motor b ) {
return motor( b[0], b[1].xyz + b[0].x*a[1].xyz - cross(a[1].xyz, b[0].yzw), dot(a[1].xyz, b[0].yzw) + b[1].w );
}
/******************************************************************************
* Efficient composition of motors iff a is a general motor and b a translation
* @param {motor} A A general motor A.
* @param {motor} B A translation motor B.
* @returns {motor} The composition of motors ab
* 12 muls 12 adds
*****************************************************************************/
motor gp_mt( motor a, motor b ) {
return motor( a[0], a[1].xyz + a[0].x*b[1].xyz + cross(b[1].xyz, a[0].yzw), dot(b[1].xyz, a[0].yzw) + a[1].w );
}
/******************************************************************************
* Efficient composition of motors iff a and b are both translators.
* @param {motor} A A translation motor A.
* @param {motor} B A translation motor B.
* @returns {motor} The composition of motors ab
* 4 adds
*****************************************************************************/
motor gp_tt( motor a, motor b ) {
return motor( 1., 0., 0., 0., a[1] + b[1] );
}
/******************************************************************************
* Efficient composition of motors iff a and b are both rotations around origin
* @param {motor} A A rotation motor A.
* @param {motor} B A rotation motor B.
* @returns {motor} The composition of motors ab
* 16 muls 12 adds
*****************************************************************************/
motor gp_rr( motor a, motor b ) {
return motor( a[0].x*b[0] + vec4( -dot(a[0].yzw, b[0].yzw), b[0].x*a[0].yzw + cross(b[0].yzw,a[0].yzw) ), vec4(0.) );
}
/******************************************************************************
* Compose two general motors ab = a * b
* @param {motor} A A general motor A.
* @param {motor} B A general motor B.
* @returns {motor} The composition of motors ab
* 48 muls 40 adds
*****************************************************************************/
motor gp_mm( motor a, motor b ) {
return motor(
a[0].x*b[0].x - dot(a[0].yzw, b[0].yzw),
a[0].x*b[0].yzw + b[0].x*a[0].yzw + cross(b[0].yzw, a[0].yzw),
a[0].x*b[1].xyz + b[0].x*a[1].xyz + cross(b[0].yzw, a[1].xyz) + cross(b[1].xyz, a[0].yzw) - b[1].w*a[0].yzw - a[1].w*b[0].yzw,
a[0].x*b[1].w + b[0].x*a[1].w + dot(a[0].yzw, b[1].xyz) + dot(a[1].xyz, b[0].yzw));
}
/******************************************************************************
* Normalize a motor.
* @param {motor} a A general non-normalized motor a.
* @returns {motor} The normalized input.
*****************************************************************************/
motor normalize_m( motor a ) {
float s = 1. / length( a[0] );
float d = (a[1].w * a[0].x - dot( a[1].xyz, a[0].yzw ))*s*s;
return motor(a[0]*s, a[1]*s + vec4(a[0].yzw*s*d,-a[0].x*s*d));
}
/******************************************************************************
* GP between two R3 vectors.
* @param {vec3} a A vector.
* @param {vec3} b A vector.
* @returns {motor} The geometric product ab
*****************************************************************************/
motor gp_vv (vec3 a, vec3 b) {
return motor( dot(a,b), cross(a,b), vec4(0.) );
}
/******************************************************************************
* Square root of a motor.
* @param {motor} R The motor to take the square root of.
* @returns {motor} The square root of R.
*****************************************************************************/
motor sqrt_m( motor R ) {
return normalize_m( motor( R[0].x + 1., R[0].yzw, R[1] ) );
}
/******************************************************************************
* Basis planes e1,e2,e3
*****************************************************************************/
const direction e1 = direction(1., 0., 0.); // x = 0 (the yz plane)
const direction e2 = direction(0., 1., 0.); // y = 0 (the xz plane)
const direction e3 = direction(0., 0., 1.); // z = 0 (the xy plane)
/******************************************************************************
* Basis lines
*****************************************************************************/
const line e23 = line( 1., 0., 0., 0., 0., 0. ); // y = z = 0 (the x line)
const line e31 = line( 0., 1., 0., 0., 0., 0. ); // z = x = 0 (the y line)
const line e12 = line( 0., 0., 1., 0., 0., 0. ); // x = y = 0 (the z line)
const line e01 = line( 0., 0., 0., 1., 0., 0. ); // inf,x line
const line e02 = line( 0., 0., 0., 0., 1., 0. ); // inf,y line
const line e03 = line( 0., 0., 0., 0., 0., 1. ); // inf,z line
/******************************************************************************
* Basis directions
*****************************************************************************/
const direction e032 = direction(1., 0., 0.); // inf,y=z=0 (inf x point)
const direction e013 = direction(0., 1., 0.); // inf,x=z=0 (inf y point)
const direction e021 = direction(0., 0., 1.); // inf,x=y=0 (inf z point)
/******************************************************************************
* Identity motor
*****************************************************************************/
const motor identity = motor( 1., 0., 0., 0., 0., 0., 0., 0. );
/******************************************************************************
* Opting for the built-in vec and mat types, which keep addition and scalar
* multiplication, rather than custom structs, means we cannot use type
* based dispatch. We can however still provide some flexibility for multiple
* chained geometric products. (keep in mind the specific versions are faster!)
*****************************************************************************/
motor gp( motor a, motor b ) { return gp_mm(a,b); }
motor gp( motor a, motor b, motor c ) { return gp(gp(a,b),c); }
motor gp( motor a, motor b, motor c, motor d ) { return gp(gp(a,b,c),d); }
motor gp( motor a, motor b, motor c, motor d, motor e ) { return gp(gp(a,b,c,d),e); }
motor gp( motor a, motor b, motor c, motor d, motor e, motor f ) { return gp(gp(a,b,c,d,e),f); }
motor gp( vec3 a, vec3 b ) { return gp_vv(a,b); }
/******************************************************************************
* Perform a perspective projection.
* @param {float} n The near clipping plane distance.
* @param {float} f The far clipping plane distance.
* @param {float} minfov The minimal field of view. (for the narrow side).
* @param {float} aspect The viewport aspect ratio (width/height).
* @param {vec3} inpos The position of the vertex to project.
* @returns {vec4} The projected vertex position in clip space.
*****************************************************************************/
vec4 project( const float n, const float f, const float minfov, float aspect, vec3 inpos ){
float cthf = cos(minfov/2.0) / sin(minfov/2.0); // cotangent of half the minimal fov.
float fa = 2.*f*n/(n-f), fb = (n+f)/(n-f); // all of these can be precomputed constants.
// vec2 fit = cthf * vec2(-min(1.,1./aspect), min(1.,aspect)); // depending on aspect, fit this fov horizontal or vertical.
vec2 fit = cthf * vec2(-1.0/aspect, 1.0); // fit vertical.
return vec4( inpos.xy * fit, fa - fb*inpos.z, inpos.z );
}
================================================
FILE: src/miniPGA.js
================================================
/******************************************************************************
*
* Look, Ma, No Matrices!
* miniPGA.js
*
* by Steven De Keninck
*
* Basic PGA support for javascript. Layouts used mirror miniPGA.glsl :
*
* motor : mat2x4 : [ s, e23, e31, e12, e01, e02, e03, e0123 ]
* point : vec3 : [ e032, e013, e021 ] with implied 1 e123
* direction : vec3 : [ e032, e013, e021 ] with implied 0 e123
* line : mat2x3 : [ e23, e31, e12, e01, e02, e03 ]
*
* A postfix approach is used to disambiguate overlapping types. We provide the
* following functions :
*
* prefix function postfix
* gp = geometric product [_rt, _tr, _mt, _tm, _mr, _rm, _rr, _tt, _mm, _vv]
* sw = sandwich product [_mp, _md, _mo]
* reverse = reverse [_m]
* exp = exponential [_b]
* log = logarithm [_m]
* normalize = normalize [_m]
* sqrt = square root [_m]
*
* Postfix naming convention :
*
* m = general motor (normalized) [ s, e23, e31, e12, e01, e02, e03, e0123 ]
* t = simple translation [ 1, 0, 0, 0, e01, e02, e03, 0 ]
* r = simple rotation [ s, e23, e31, e12, 0, 0, 0, 0 ]
* d = ideal point (direction) [ e032, e013, e021 ]
* p = normalized Euclidean point (point) [ e032, e013, e021 ]
* o = origin. (1e123) -
* b = bivector (line) [ e23, e31, e12, e01, e02, e03 ]
*
* We generally assume normalized motors for performance reasons.
* A normalisation function is available.
*
*****************************************************************************/
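As a quick sanity check of these conventions, a motor composed with its reverse must give the identity. The snippet below inlines minimal plain-array copies of `gp_mm` and `reverse_m` (defined further down in this file) so it stands alone:

```javascript
// Motor layout: [s, e23, e31, e12, e01, e02, e03, e0123].
const gp_mm = (a, b) => [
  a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
  a[0]*b[1] + a[1]*b[0] + a[3]*b[2] - a[2]*b[3],
  a[0]*b[2] + a[1]*b[3] + a[2]*b[0] - a[3]*b[1],
  a[0]*b[3] + a[2]*b[1] + a[3]*b[0] - a[1]*b[2],
  a[0]*b[4] + a[3]*b[5] + a[4]*b[0] + a[6]*b[2] - a[1]*b[7] - a[2]*b[6] - a[5]*b[3] - a[7]*b[1],
  a[0]*b[5] + a[1]*b[6] + a[4]*b[3] + a[5]*b[0] - a[2]*b[7] - a[3]*b[4] - a[6]*b[1] - a[7]*b[2],
  a[0]*b[6] + a[2]*b[4] + a[5]*b[1] + a[6]*b[0] - a[1]*b[5] - a[3]*b[7] - a[4]*b[2] - a[7]*b[3],
  a[0]*b[7] + a[1]*b[4] + a[2]*b[5] + a[3]*b[6] + a[4]*b[1] + a[5]*b[2] + a[6]*b[3] + a[7]*b[0]
];
const reverse_m = R => [R[0], -R[1], -R[2], -R[3], -R[4], -R[5], -R[6], R[7]];

// A quarter turn around the z line e12 (rotors use half angles):
const R = [Math.cos(Math.PI/4), 0, 0, Math.sin(Math.PI/4), 0, 0, 0, 0];
const I = gp_mm(R, reverse_m(R)); // expect the identity motor [1,0,0,0, 0,0,0,0]
```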
/******************************************************************************
* Some helpers from Math.
*****************************************************************************/
const {sqrt, cos, sin, PI, E, acos, abs, max, min, hypot} = Math;
/******************************************************************************
* Basetype used for PGA storage.
*****************************************************************************/
const baseType = Float32Array;
/******************************************************************************
* Vector Dot product between two n-d vectors.
* @param {Array} A First vector
* @param {Array} B Second vector
* @returns {Number} The vector dot product between A and B.
*****************************************************************************/
export const dot = (A,B) => A.reduce((s,a,i)=>s+a*B[i],0);
/******************************************************************************
* Vector Cross product between two 3-d vectors.
* @param {Array} A First vector
* @param {Array} B Second vector
* @returns {Array} The vector cross product between A and B.
*****************************************************************************/
export const cross = (A,B) => A.map((_,i)=> A[(i+1)%3]*B[(i+2)%3] - A[(i+2)%3]*B[(i+1)%3] );
/******************************************************************************
* Linear interpolation between two n-d vectors.
* @param {Array} A First vector
* @param {Array} B Second vector
* @param {Number} t Interpolation factor. (0 returns A, 1 returns B)
* @returns {Array} The component-wise mix (1-t)*A + t*B.
*****************************************************************************/
export const mix = (A,B,t) => A.map((Ai,i)=>(1-t)*Ai + t*B[i]);
/******************************************************************************
* Average of the components of a vector.
* @param {Array} A vector
* @returns {Number} The average value.
*****************************************************************************/
export const avg = x => x.reduce((s,a)=>s+a)/x.length;
/******************************************************************************
* Vector Length
* @param {Array} A vector
* @returns {Number} The vector's length.
*****************************************************************************/
export const length = A => Math.hypot(...A);
/******************************************************************************
* Vector Normalization
* @param {Array} A Input Vector
* @returns {Array} The normalized vector
*****************************************************************************/
export const normalize_v = v => mul(v, 1/length(v));
/******************************************************************************
* Vector Addition
* @param {Array} A Input Vector A
* @param {Array|Number} B Input vector or number B.
* @returns {Array} A + B
*****************************************************************************/
export const add = (A,B) => A.map((Ai,i)=>Ai+(B[i]??B));
/******************************************************************************
* Vector Subtraction
* @param {Array} A Input Vector A
* @param {Array|Number} B Input vector or number B.
* @returns {Array} A - B
*****************************************************************************/
export const sub = (A,B) => A.map((Ai,i)=>Ai-(B[i]??B));
/******************************************************************************
* Vector Hadamard Product
* @param {Array} A Input Vector A
* @param {Array|Number} B Input vector or number B.
* @returns {Array} Component wise multiplication
*****************************************************************************/
export const mul = (A,B) => A.map((Ai,i)=>Ai*(B[i]??B));
/******************************************************************************
* Apply a normalized motor 'a' to a Euclidean point 'b'.
* @param {motor} a The motor 'a' in 'ab~a'. Must be normalized.
* @param {point} b Euclidean point 'b' in 'ab~a'.
* @returns {point} The transformed point.
* 21 muls, 18 adds
*****************************************************************************/
export const sw_mp = (a, b) => {
const a0=a[0],a1=a[1],a2=a[2],a3=a[3],a4=a[4],a5=a[5],a6=a[6],a7=a[7],
b0=b[0],b1=b[1],b2=b[2],
s0=a1*b2-a3*b0-a5, s1=a3*b1-a2*b2-a4, s2=a2*b0-a1*b1-a6;
return [b0+2*(a3*s0+a0*s1-a1*a7-a2*s2),
b1+2*(a1*s2+a0*s0-a2*a7-a3*s1),
b2+2*(a2*s1+a0*s2-a3*a7-a1*s0)];
}
/******************************************************************************
* Apply a normalized motor 'a' to an Infinite point 'b'.
* @param {motor} a The motor 'a' in 'ab~a'. Must be normalized.
* @param {direction} b Infinite point 'b' in 'ab~a'. (direction).
* @returns {direction} The transformed Infinite point. (direction).
* 18 muls, 12 adds
*****************************************************************************/
export const sw_md = (a, b) => {
const a0=a[0],a1=a[1],a2=a[2],a3=a[3],a4=a[4],a5=a[5],a6=a[6],a7=a[7],b0=b[0],b1=b[1],b2=b[2],
s0=a1*b2-a3*b0,s1=a3*b1-a2*b2,s2=a2*b0-a1*b1;
return [b0+2*(a3*s0+a0*s1-a2*s2),
b1+2*(a1*s2+a0*s0-a3*s1),
b2+2*(a2*s1+a0*s2-a1*s0)];
}
/******************************************************************************
* Apply a normalized motor 'a' to the origin.
* @param {motor} a The motor 'a' in 'a * e123 * ~a'. Must be normalized.
* @returns {point} The transformed origin.
* 15 muls, 9 adds
*****************************************************************************/
export const sw_mo = a => {
const a0=a[0],a1=a[1],a2=a[2],a3=a[3],a4=a[4],a5=a[5],a6=a[6],a7=a[7];
return [2*(a2*a6-a0*a4-a1*a7-a3*a5),
2*(a3*a4-a0*a5-a1*a6-a2*a7),
2*(a1*a5-a0*a6-a2*a4-a3*a7)];
}
/******************************************************************************
* Reverse a normalized motor 'R'
* @param {motor} R The motor to be reversed.
* @returns {motor} The reversed motor.
* 6 negations
*****************************************************************************/
export const reverse_m = (R, res = new baseType(8)) => {
res[0] = R[0]; res[1] = -R[1]; res[2] = -R[2]; res[3] = -R[3];
res[4] = -R[4]; res[5] = -R[5]; res[6] = -R[6]; res[7] = R[7];
return res;
};
/******************************************************************************
* Create a simple rotation that preserves the origin.
* Expects an angle and normalized line (bivector).
* @param {number} angle The angle.
* @param {line} B The Euclidean line (Bivector) to rotate around.
* @returns {motor} The exponentiation of angle*B.
* 3 muls, cos, sin
*****************************************************************************/
export const exp_r = (angle, B, R = new baseType(8)) => {
const s = sin(angle);
R[0] = cos(angle); R[1] = B[0]*s; R[2] = B[1]*s; R[3] = B[2]*s;
R[4] = R[5] = R[6] = R[7] = 0;
return R;
}
/******************************************************************************
* Create a simple translation.
* Expects a distance and normalized bivector.
* @param {number} dist The distance.
* @param {line} B The ideal line (Bivector) to 'rotate' around.
* @returns {motor} The exponentiation of dist*B.
* 3 muls
*****************************************************************************/
export const exp_t = (dist, B, R = new baseType(8)) => {
R[0] = 1; R[1] = R[2] = R[3] = R[7] = 0;
R[4] = dist*B[3]; R[5] = dist*B[4]; R[6] = dist*B[5];
return R;
}
/******************************************************************************
* General exponential.
* @param {line} B The line (bivector) to exponentiate.
* @returns {motor} The exponentiation of B.
* 17 muls 8 add 2 div 1 sqrt 1 cos 1 sin
*****************************************************************************/
export const exp_b = ( B, R = new baseType(8) ) => {
const l = B[0]**2 + B[1]**2 + B[2]**2;
if (l==0) return [1,0,0,0,B[3],B[4],B[5],0];
const a = sqrt(l), m = B[0]*B[3]+B[1]*B[4]+B[2]*B[5], c = cos(a), s = sin(a)/a, t = m/l*(c-s);
R[0] = c; R[1] = B[0]*s; R[2] = B[1]*s; R[3] = B[2]*s;
R[4] = B[3]*s + B[0]*t; R[5] = B[4]*s + B[1]*t; R[6] = B[5]*s + B[2]*t; R[7] = m*s;
return R;
}
/******************************************************************************
* General logarithm.
* @param {motor} M The normalized motor of which to take the logarithm.
* @returns {line} The logarithm of M.
* 14 muls 5 add 1 div 1 acos 1 sqrt
*****************************************************************************/
export const log_m = M => {
if (Math.abs(M[0] - 1.)<0.000001) return [0,0,0,M[4],M[5],M[6]];
const a = 1./(1. - M[0]**2), b = acos(M[0]) * Math.sqrt(a), c = a*M[7]*(1. - M[0]*b);
return [M[1]*b, M[2]*b, M[3]*b, M[4]*b + M[1]*c, M[5]*b + M[2]*c, M[6]*b + M[3]*c];
}
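`exp_b` and `log_m` are mutually inverse on normalized motors, which gives a cheap self-test. The sketch below inlines plain-array copies of both functions and round-trips a screw motion (a rotation about the z line combined with a translation along it):

```javascript
// Bivector layout: [e23, e31, e12, e01, e02, e03]; motors as in this file.
// (exp_b and log_m are inlined copies of the definitions in this file.)
const exp_b = B => {
  const l = B[0]**2 + B[1]**2 + B[2]**2;
  if (l == 0) return [1, 0, 0, 0, B[3], B[4], B[5], 0];
  const a = Math.sqrt(l), m = B[0]*B[3] + B[1]*B[4] + B[2]*B[5],
        c = Math.cos(a), s = Math.sin(a)/a, t = m/l*(c - s);
  return [c, B[0]*s, B[1]*s, B[2]*s,
          B[3]*s + B[0]*t, B[4]*s + B[1]*t, B[5]*s + B[2]*t, m*s];
};
const log_m = M => {
  if (Math.abs(M[0] - 1) < 1e-6) return [0, 0, 0, M[4], M[5], M[6]];
  const a = 1/(1 - M[0]**2), b = Math.acos(M[0]) * Math.sqrt(a),
        c = a*M[7]*(1 - M[0]*b);
  return [M[1]*b, M[2]*b, M[3]*b,
          M[4]*b + M[1]*c, M[5]*b + M[2]*c, M[6]*b + M[3]*c];
};

const B  = [0, 0, 0.4, 0, 0, 0.3]; // screw axis: rotation about z plus pitch
const M  = exp_b(B);               // a normalized screw motor
const B2 = log_m(M);               // recovers B
```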
/******************************************************************************
* Compose two general motors ab = a * b
* @param {motor} A A general motor A.
* @param {motor} B A general motor B.
* @returns {motor} The composition of motors ab
* 48 muls 40 adds
*****************************************************************************/
export const gp_mm = (a,b,res=new baseType(8)) => {
const a0=a[0],a1=a[1],a2=a[2],a3=a[3],a4=a[4],a5=a[5],a6=a[6],a7=a[7],
b0=b[0],b1=b[1],b2=b[2],b3=b[3],b4=b[4],b5=b[5],b6=b[6],b7=b[7];
res[0] = a0*b0-a1*b1-a2*b2-a3*b3;
res[1] = a0*b1+a1*b0+a3*b2-a2*b3;
res[2] = a0*b2+a1*b3+a2*b0-a3*b1;
res[3] = a0*b3+a2*b1+a3*b0-a1*b2;
res[4] = a0*b4+a3*b5+a4*b0+a6*b2-a1*b7-a2*b6-a5*b3-a7*b1;
res[5] = a0*b5+a1*b6+a4*b3+a5*b0-a2*b7-a3*b4-a6*b1-a7*b2;
res[6] = a0*b6+a2*b4+a5*b1+a6*b0-a1*b5-a3*b7-a4*b2-a7*b3;
res[7] = a0*b7+a1*b4+a2*b5+a3*b6+a4*b1+a5*b2+a6*b3+a7*b0;
return res;
}
/******************************************************************************
* Normalize a motor.
* @param {motor} a A general non-normalized motor a.
* @returns {motor} The normalized input.
*****************************************************************************/
export const normalize_m = a => {
const a0=a[0], a1=a[1], a2=a[2], a3=a[3], a4=a[4], a5=a[5], a6=a[6], a7=a[7];
const s = 1. / (a0**2 + a1**2 + a2**2 + a3**2)**.5;
const d = (a7*a0 - ( a4*a1 + a5*a2 + a6*a3 ))*s*s;
return new baseType([ a0*s, a1*s, a2*s, a3*s,
a4*s + a1*s*d, a5*s + a2*s*d, a6*s + a3*s*d, a7*s - a0*s*d ]);
}
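To see what `normalize_m` guarantees: a normalized motor M satisfies M~M = 1, i.e. the rotor part has unit norm and the e0123 condition a0*a7 - (a1*a4 + a2*a5 + a3*a6) = 0 holds. A plain-array sketch (inlined copy of the function above, using a regular Array instead of baseType):

```javascript
// Inlined copy of normalize_m, returning a plain array for illustration.
const normalize_m = a => {
  const s = 1 / Math.hypot(a[0], a[1], a[2], a[3]);
  const d = (a[7]*a[0] - (a[4]*a[1] + a[5]*a[2] + a[6]*a[3])) * s * s;
  return [a[0]*s, a[1]*s, a[2]*s, a[3]*s,
          a[4]*s + a[1]*s*d, a[5]*s + a[2]*s*d,
          a[6]*s + a[3]*s*d, a[7]*s - a[0]*s*d];
};

// An arbitrary denormalized motor, restored to the motor manifold:
const M = normalize_m([2, 1, 0.5, -1, 0.3, 0.2, -0.1, 0.7]);
const rotorNorm = Math.hypot(M[0], M[1], M[2], M[3]);          // ~ 1
const study = M[0]*M[7] - (M[1]*M[4] + M[2]*M[5] + M[3]*M[6]); // ~ 0
```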
/******************************************************************************
* GP between two R3 vectors.
* @param {vec3} a A vector.
* @param {vec3} b A vector.
* @returns {motor} The geometric product ab
*****************************************************************************/
export const gp_vv = (a,b)=> [dot(a,b),...cross(a,b),0,0,0,0];
/******************************************************************************
* Square root of a motor.
* @param {motor} R The motor to take the square root of.
* @returns {motor} The square root of R.
*****************************************************************************/
export const sqrt_m = R => normalize_m( [R[0]+1,R[1],R[2],R[3],R[4],R[5],R[6],R[7]] );
/******************************************************************************
* Basis planes e1,e2,e3
*****************************************************************************/
export const e1 = new baseType([1., 0., 0.]);
export const e2 = new baseType([0., 1., 0.]);
export const e3 = new baseType([0., 0., 1.]);
/******************************************************************************
* Basis directions
*****************************************************************************/
export const e032 = new baseType([1., 0., 0.]);
export const e013 = new baseType([0., 1., 0.]);
export const e021 = new baseType([0., 0., 1.]);
export const e123 = new baseType([0., 0., 0.]); // remember implied 4th '1' coefficient for points !
/******************************************************************************
* Basis lines
*****************************************************************************/
export const e23 = new baseType([ 1., 0., 0., 0., 0., 0.]);
export const e31 = new baseType([ 0., 1., 0., 0., 0., 0.]);
export const e12 = new baseType([ 0., 0., 1., 0., 0., 0.]);
export const e01 = new baseType([ 0., 0., 0., 1., 0., 0.]);
export const e02 = new baseType([ 0., 0., 0., 0., 1., 0.]);
export const e03 = new baseType([ 0., 0., 0., 0., 0., 1.]);
/******************************************************************************
* Identity motor
*****************************************************************************/
export const identity = new baseType([1,0,0,0, 0,0,0,0]);
/******************************************************************************
* Multi-argument gp, and type-aware normalize.
*****************************************************************************/
export const gp = (a,...args)=>a.length==3?gp_vv(a,args[0]):args.reduce((p,x)=>gp_mm(p,x),a);
export const normalize = x => x.length != 3 ? normalize_m(x) : normalize_v(x);
/******************************************************************************
* Convert an orthogonal 3x3 matrix to a motor. Try to compensate for funky
* scaling. Used only for importing animations, tangent spaces etc.
* @param {Matrix} M The 3x3 input matrix
* @returns {Motor} The rotor representing this matrix.
*****************************************************************************/
export const fromMatrix3 = M => {
// Shorthand.
var [m00,m01,m02,m10,m11,m12,m20,m21,m22] = M;
// Quick scale check - we really should do SVD here.
const scale = [hypot(m00,m01,m02),hypot(m10,m11,m12),hypot(m20,m21,m22)];
if (abs(scale[0]-1)>0.0001 || abs(scale[1]-1)>0.0001 || abs(scale[2]-1)>0.0001) {
const i = scale.map(s=>1/s);
m00 *= i[0]; m01 *= i[0]; m02 *= i[0];
m10 *= i[1]; m11 *= i[1]; m12 *= i[1];
m20 *= i[2]; m21 *= i[2]; m22 *= i[2];
if (abs(scale[0]/scale[1]-1)>0.0001 || abs(scale[1]/scale[2]-1)>0.0001) console.warn("non-uniformly scaled matrix!", scale);
}
// Return a pure rotation (in motor format)
return normalize( m00 + m11 + m22 > 0 ? [m00 + m11 + m22 + 1.0, m21 - m12, m02 - m20, m10 - m01, 0,0,0,0]:
m00 > m11 && m00 > m22 ? [m21 - m12, 1.0 + m00 - m11 - m22, m01 + m10, m02 + m20, 0,0,0,0]:
m11 > m22 ? [m02 - m20, m01 + m10, 1.0 + m11 - m00 - m22, m12 + m21, 0,0,0,0]:
[m10 - m01, m02 + m20, m12 + m21, 1.0 + m22 - m00 - m11, 0,0,0,0]);
}
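A worked case of the trace branch, with the scale compensation stripped for brevity: a 90-degree rotation in the xy-plane maps to a unit rotor with half-angle 45 degrees on the e12 component. Whether the sign encodes +90 or -90 depends on the caller's row/column convention, so the check below only pins the magnitude.

```javascript
// Minimal trace>0 branch of fromMatrix3, for an already-orthonormal input.
const rotorFromMatrix3 = ([m00,m01,m02,m10,m11,m12,m20,m21,m22]) => {
  const r = [m00 + m11 + m22 + 1, m21 - m12, m02 - m20, m10 - m01];
  const s = 1 / Math.hypot(...r);
  return r.map(x => x * s);
};

// A 90-degree rotation in the xy-plane, flat, rows first:
const R = rotorFromMatrix3([0,-1,0, 1,0,0, 0,0,1]);
// Expect [cos 45, 0, 0, ±sin 45].
```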
/******************************************************************************
* Convert an orthogonal 4x4 matrix to a motor. Try to compensate for funky
* scaling. Used only for importing animations etc.
* @param {Matrix} M The 4x4 input matrix
* @returns {Motor} The motor representing this matrix.
*****************************************************************************/
export const fromMatrix = M => {
// Shorthand.
var [m00,m01,m02,m03,m10,m11,m12,m13,m20,m21,m22,m23,m30,m31,m32,m33] = M;
// Return the motor as translation * rotation.
return gp_mm( [1,0,0,0,-0.5*m30,-0.5*m31,-0.5*m32,0], fromMatrix3([m00,m01,m02,m10,m11,m12,m20,m21,m22]) );
}
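The -0.5 factors above come from the translator being the exponential of half the translation bivector, matching the double-cover convention used throughout. For a pure-translation matrix the rotor factor is the identity, so the motor is just the translator, and because the e0i bivectors square to zero, translators compose by simply adding their bivector parts. A standalone sketch with illustrative names:

```javascript
// Translator for a translation (tx,ty,tz), coefficients on e01,e02,e03.
const translator = (tx, ty, tz) => [1,0,0,0, -0.5*tx, -0.5*ty, -0.5*tz, 0];

// e0i bivectors square to zero, so (1 + t1)(1 + t2) = 1 + t1 + t2:
const compose = (a, b) => [1,0,0,0, a[4]+b[4], a[5]+b[5], a[6]+b[6], 0];

const t = compose(translator(2,0,0), translator(3,4,0));
// Same as translating by (5,4,0) in one go: coefficients -2.5, -2, 0.
```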
================================================
FILE: src/miniRender.js
================================================
/******************************************************************************
*
* Look, Ma, No Matrices!
* miniRender.js
*
* by Steven De Keninck
*
*****************************************************************************/
/******************************************************************************
* Imports
*****************************************************************************/
import * as util from './util.js';
import * as miniGL from './miniGL.js';
import * as PGA from './miniPGA.js';
import {miniGLTF} from './miniGLTF.js';
import {UBO, vertexShader, fragmentShader} from './shaders.js';
/******************************************************************************
* Shorthand
*****************************************************************************/
const {PI, E, sin, min, max, hypot, sqrt, abs} = Math;
const {mul, add, sub, dot, cross, e23, e31, e12, e01, e02, e03, exp_b, exp_t, gp, normalize, fromMatrix3, exp_r, reverse_m, sw_mo, sw_md, identity, gp_mm, sqrt_m, gp_vv} = PGA;
const isMobile = (/Android|webOS|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini/i.test(navigator.userAgent))
/******************************************************************************
* Create Render class
*****************************************************************************/
export class miniRender {
/*****************************************************************************
*
****************************************************************************/
constructor ( options ) {
this.options = options;
if (options.canvas) {
this.canvas = options.canvas;
} else {
this.canvas = document.body.appendChild(document.createElement('canvas'));
Object.assign( this.canvas.style, { position : 'absolute', top : 0, left : 0, width:'100%', height:'100%', zIndex:1000, pointerEvents:'none' });
}
this.gl = this.canvas.getContext('webgl2', Object.assign({
antialias : !isMobile,
alpha : true,
depth : true,
stencil : false,
premultipliedAlpha : false,
preserveDrawingBuffer : false,
powerPreference : 'high-performance'
},options));
this.gl.enable(this.gl.DEPTH_TEST);
this.gl.enable(this.gl.CULL_FACE);
const ibl = 'data/factory';
util.loadCubemap(this.gl, ibl+'_lambertian.cubemap.png').then( id => this.lambertianTextureID = id);
util.loadCubemap(this.gl, ibl+'_ggx.cubemap.png', 9).then( id => this.ggxTextureID = id);
util.loadHDRTexture(this.gl,'data/lutGGX.RGBE.png', util.RGBAToLUT).then( id => this.ggxLutTextureID = id);
this.worldscale = 0.25;
this.exposure = 1.0;
this.camera = exp_b(mul(e03,-1))
this.glTF = [];
return this;
}
/*****************************************************************************
* Load a glTF file, convert to PGA, upload to webGL
* @param {string} uri URI of glTF/glb file to load.
* @returns {object} reference to this scene.
****************************************************************************/
async load ( uri, slot=0 ) {
this.glTF[slot] = (await new miniGLTF().load( uri, { progress:x=>{ document.getElementById('file').value = 100*x.value;}})
.then(glTF=>{
// Don't cache programs from the previous model.
miniGL.resetProgramCache();
document.getElementById('file').style.display = 'none';
const gl = this.gl;
// Create vertex array objects for all primitives.
console.time('uploading geometry.');
for (var i in glTF.json.meshes) {
const mesh = glTF.json.meshes[i];
for (var j in mesh.primitives) {
const prim = mesh.primitives[j];
// unweld and switch to flat arrays.
var {vertices, normals, uvs, indices, tangents, weights, joints} = glTF.unweld( prim, {scale : prim.worldScale ?? prim.boneScale ?? prim.scale, needsTangent : prim.needsTangent} );
// Do we need tangents?
if (prim.needsTangent && tangents == undefined) try {
var tangents = generateTangents(vertices, normals, uvs);
console.log('mikkt');
} catch (e) {
var tangents = undefined;
}
// For each vertex, we construct the tbn matrix with positive determinant,
// and convert these to PGA rotors. The final determinant is as usual stored
// separately.
var tangentRotors = [...Array(vertices.length/3)].map( (_,i)=> {
// we will assume the dot between tangent and normal is always zero!
let normal = normalize([...normals.slice(i*3,i*3+3)]);
let tangent = tangents ? normalize([...tangents.slice(i*4,i*4+3)]) : normalize([normal[1]+normal[2],normal[0]+normal[2],normal[0]+normal[1]]);
// Orthogonalize
tangent = normalize( sub(tangent, mul(normal, dot(normal,tangent) ) ) );
// Calculate the bitangent.
let bitangent = normalize(cross(normal, tangent));
// Now set up the matrix explicitly.
let mat = [...tangent, ...bitangent, ...normal];
// Convert to motor and store.
let motor = fromMatrix3( mat );
// Use the double cover to encode the handedness.
// In GA language, this means we are using half of the double cover to distinguish even and odd versors.
if (tangents) if (Math.sign(motor[0])!=tangents[i*4+3]) motor = motor.map(x=>-x);
return [...motor.slice(0,4)];
}).flat();
tangentRotors = new Float32Array(tangentRotors);
// Create and store the vao. (we should really re-weld first ..)
prim.hasBones = !!weights;
prim.vao = miniGL.createVAO(gl, vertices, indices, 3, uvs, weights, joints, tangentRotors);
// Compile the shader.
prim.material.program = prim.program = miniGL.createProgram(gl, vertexShader(prim.material, prim), fragmentShader(prim.material, prim));
}
}
console.timeEnd('uploading geometry.');
// Load all textures.
console.time('loading textures.');
for (var i in glTF.json.textures) {
const t = glTF.json.textures[i];
t.tex = miniGL.loadTexture(gl, t.source, t.linear);
}
console.timeEnd('loading textures.');
return glTF;
}));
return this.glTF[slot];
}
/*****************************************************************************
* Check viewport size/place, and clear it.
****************************************************************************/
initFrame () {
const canvas = this.canvas, gl = this.gl;
// We allow our canvas to move during smooth scroll, so at every
// new frame we re-position it over the visible viewport.
canvas.style.top = (visualViewport.pageTop??window.scrollY) + 'px';
// Setup size.
var dpr = window.devicePixelRatio||1;
const width = window.visualViewport?.width??canvas.clientWidth;
const height = window.visualViewport?.height??canvas.clientHeight;
if (width * dpr != canvas.width || height * dpr != canvas.height) {
canvas.width = width * dpr;
canvas.height = height * dpr;
}
// Force the height also. (fixes an issue with the auto-hiding address bar on phones)
canvas.style.height = height + 'px';
// Now start the render.
gl.viewport(0,0,canvas.width,canvas.height);
gl.clearColor(0,0,0,0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
}
/*****************************************************************************
* Render the scene.
****************************************************************************/
render (world, scene=0) {
if (!this.glTF) return;
const gl = this.gl;
const canvas = this.canvas;
const glTF = this.glTF[scene];
// Render a single node.
const renderNode = (gl, node, transform, params, trans=0, parentChanged = false) => {
// Accumulate own transform.
if (trans === 0) {
if (parentChanged === true) node.changed = true;
if (node.changed !== false) {
transform = gp_mm(transform, node.transform??identity );
parentChanged = true;
node.changed = false;
params.world = node.world = transform;
if (node.meshes) node.ubo = miniGL.updateUBO(gl, node.ubo, { world:node.world }, glTF.json.meshes[0].primitives[0].program.uniformBlocks.instance);
}
}
// If we have primitives, render them.
if (node.meshes) for (var m=0, l=node.meshes.length; m<l; m++) for (var i=0,l2=node.meshes[m].length; i<l2; ++i) {
const prim = node.meshes[m][i];
const mat = prim.material;
const matIsTrans = mat?.alphaMode=='BLEND' || mat?.extensions?.KHR_materials_transmission !== undefined;
if (trans ^ matIsTrans) continue;
// bind textures.
if (mat?.normalTexture) { gl.activeTexture(gl.TEXTURE4); gl.bindTexture(gl.TEXTURE_2D, mat?.normalTexture.tex); }
if (mat?.emissiveTexture) { gl.activeTexture(gl.TEXTURE2); gl.bindTexture(gl.TEXTURE_2D, mat?.emissiveTexture.tex); }
if (mat?.occlusionTexture ) { gl.activeTexture(gl.TEXTURE3); gl.bindTexture(gl.TEXTURE_2D, mat?.occlusionTexture.tex); }
if (mat?.pbrMetallicRoughness) {
if (mat.pbrMetallicRoughness.baseColorTexture) { gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, mat?.pbrMetallicRoughness.baseColorTexture.tex); }
if (mat.pbrMetallicRoughness.metallicRoughnessTexture) { gl.activeTexture(gl.TEXTURE1); gl.bindTexture(gl.TEXTURE_2D, mat?.pbrMetallicRoughness.metallicRoughnessTexture.tex); }
}
if (mat?.extensions) {
if (mat.extensions.KHR_materials_pbrSpecularGlossiness?.diffuseTexture) { gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, mat.extensions.KHR_materials_pbrSpecularGlossiness.diffuseTexture.tex); }
}
// if (node.skin) gl.bindBufferBase(gl.UNIFORM_BUFFER, prim.program.uniformBlocks.skin.index, node.skin.ubo);
if (node.skin) prim.program.uniformBlocks.skin.buffer = node.skin.ubo;
prim.program.uniformBlocks.scene.buffer = glTF.json.scenes[0].ubo;
prim.program.uniformBlocks.instance.buffer = node.ubo;
prim.program.uniformBlocks.material.buffer = mat.ubo;
// gl state
if (mat?.alphaMode=='BLEND' || mat?.extensions?.KHR_materials_transmission) {
gl.enable(gl.BLEND);
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ZERO);
} else {
gl.disable(gl.BLEND);
}
if (mat?.doubleSided) {
// gl.disable(gl.CULL_FACE); else gl.enable(gl.CULL_FACE);
if (mat?.alphaMode=='BLEND' || mat?.extensions?.KHR_materials_transmission) {
if (mat?.pbrMetallicRoughness?.baseColorFactor && mat?.pbrMetallicRoughness?.baseColorFactor[3] < 0.1) gl.depthMask(false);
gl.frontFace(gl.CW);
miniGL.render(gl, prim.program, prim.vao, prim.vao.length, params);
gl.frontFace(gl.CCW);
miniGL.render(gl, prim.program, prim.vao, prim.vao.length, params);
if (mat?.pbrMetallicRoughness?.baseColorFactor && mat?.pbrMetallicRoughness?.baseColorFactor[3] < 0.1) gl.depthMask(true);
} else {
gl.disable(gl.CULL_FACE);
miniGL.render(gl, prim.program, prim.vao, prim.vao.length, params);
gl.enable(gl.CULL_FACE);
}
} else {
// push geometry.
miniGL.render(gl, prim.program, prim.vao, prim.vao.length, params);
}
}
// Render all children.
if (node.children) for (var i=0, l=node.children.length; i<l; ++i) renderNode(gl, node.children[i], node.world||identity, params, trans, parentChanged);
}
// Populate the scene level ubo.
glTF.json.scenes[0].ubo = miniGL.updateUBO( gl, glTF.json.scenes[0].ubo, {
camera : this.camera,
aspect : canvas.width/canvas.height,
scale : this.worldscale,
lightPos : [6,6,-10],
cameraPos : sw_mo(reverse_m(this.camera)),
exposure : 2**this.exposure,
}, glTF.json.meshes[0].primitives[0].program.uniformBlocks.scene );
// Resolve UBOs for skeletons.
function xform( cur, motor = identity, changed = false ) {
cur.worldTransform = gp_mm( motor, cur.transform??identity );
cur.changed = cur.changed | changed;
cur.children?.forEach( child => xform( child, cur.worldTransform, cur.changed ) );
}
var m = new Float32Array(8);
glTF.json.skins.forEach( skin => {
xform( skin.skeleton ?? skin.joints[0] ?? glTF.json.nodes.find(x=>x.skin == skin) );
if (skin.array === undefined) skin.array = new Float32Array( skin.joints.length * 8 );
for (var i = 0, k = 0, l = skin.joints.length; i<l; ++i) {
m = gp_mm( skin.joints[i].worldTransform ?? skin.joints[i].transform , skin.inverseBindMotors[i] , m);
skin.array.set(m,k); k+=8;
}
skin.ubo = miniGL.updateUBO(gl, skin.ubo, skin.array );
});
// Resolve UBOs for materials.
glTF.json.materials.forEach( material => {
if (!material.program.uniformBlocks.material) return;
material.ubo = miniGL.updateUBO(gl, material.ubo, {
baseColorFactor : material.pbrMetallicRoughness?.baseColorFactor??[1,1,1,1],
emissiveFactor : material.emissiveFactor??[0,0,0],
metallicFactor : material.pbrMetallicRoughness?.metallicFactor??0,
roughnessFactor : material.pbrMetallicRoughness?.roughnessFactor??1,
}, material.program.uniformBlocks.material);
});
// Bind IBL images.
gl.activeTexture(gl.TEXTURE5); gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.ggxTextureID);
gl.activeTexture(gl.TEXTURE6); gl.bindTexture(gl.TEXTURE_2D, this.ggxLutTextureID);
gl.activeTexture(gl.TEXTURE7); gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.lambertianTextureID);
// Now render all nodes.
for (let trans=0; trans<2; ++trans) for (var i in glTF.json.scenes[glTF.json.scene].nodes) renderNode(gl, glTF.json.scenes[glTF.json.scene].nodes[i], world, {
colorTexture : 0,
specularTexture : 1,
emissiveTexture : 2,
oclusionTexture : 3,
normalTexture : 4,
ibl_irradiance : 5,
ibl_lut : 6,
ibl_radiance : 7
}, trans);
}
}
================================================
FILE: src/shaders.js
================================================
/******************************************************************************
*
* Look, Ma, No Matrices!
* Putting PGA to the test.
*
* by Steven De Keninck
*
*****************************************************************************/
/******************************************************************************
* Shader functions that are shared.
*****************************************************************************/
const shaderLib = {
miniPGA : await fetch('src/miniPGA.glsl').then(x=>x.text()),
miniIBL : await fetch('src/miniIBL.glsl').then(x=>x.text()),
miniGGX : await fetch('src/miniGGX.glsl').then(x=>x.text()),
}
/******************************************************************************
* UBO definitions that are shared.
*****************************************************************************/
export const UBO = {
/** Scene ******************************************************************/
scene : `
uniform scene {
motor camera; // World to view motor.
vec3 cameraPos; // Current camera position = sw_mo( camera ).
vec3 lightPos; // Current light position.
float aspect; // Aspect ratio
float scale; // Global scale
float exposure; // Exposure
};`,
/** Instance ***************************************************************/
instance : `
uniform instance {
motor world; // Object to world motor.
};`,
/** Material ***************************************************************/
material : `
uniform material {
// glTF defaults.
vec3 emissiveFactor; // Base emissive color.
// glTF pbrMetallicRoughness
vec4 baseColorFactor; // Base color and transparency.
float metallicFactor; // Base metalness.
float roughnessFactor; // Base roughness.
};
`,
}
/******************************************************************************
* Main Vertex Shader.
*****************************************************************************/
export const vertexShader = (material, mesh) => `
// Precision qualifiers.
precision highp float;
precision highp sampler2DArray;
// Include PGA motor support.
${ shaderLib.miniPGA }
// Include Scene and Instance uniforms.
${ UBO.scene }
${ UBO.instance }
// Shader outputs.
out vec2 st;
out vec3 worldPosition;
out vec3 worldNormal;
out vec4 worldTangent;
// Vertex Attributes.
// tangent Rotors replace normals and tangents.
layout(location = 0) in vec3 attrib_position;
layout(location = 1) in vec4 attrib_tangentRotor;
layout(location = 2) in vec2 attrib_uv;
// Skinned meshes also provide 4 weights and joint indices.
${mesh?.skin?.joints?.length?`
// Two attributes with 4 most important joints and weights
layout(location = 3) in vec4 attrib_weights;
layout(location = 4) in vec4 attrib_joints;
// And an UBO that contains all skin motors.
uniform skin { motor motors[${mesh.skin.joints.length}]; };
`:``}
void main() {
// Pass through uv coordinates unmodified.
st = attrib_uv;
// Our model -> world motor. Replaces its classic matrix equiv.
motor toWorld = world;
// If the mesh is skinned, apply the skinning weighting to the
// skinning motors and compose into the world motor.
${mesh.hasBones?`
// Grab the 4 bone motors.
motor b1 = motors[int(attrib_joints.x)];
motor b2 = motors[int(attrib_joints.y)];
motor b3 = motors[int(attrib_joints.z)];
motor b4 = motors[int(attrib_joints.w)];
// Blend them together, always use short path.
motor r = attrib_weights.x * b1;
if (dot(r[0],b2[0])<=0.0) b2 = -b2;
r += attrib_weights.y * b2;
if (dot(r[0],b3[0])<=0.0) b3 = -b3;
r += attrib_weights.z * b3;
if (dot(r[0],b4[0])<=0.0) b4 = -b4;
r += attrib_weights.w * b4;
// Now renormalize and combine with object to world
toWorld = gp(toWorld, normalize_m(r));
`:``}
// Now transform our vertex using the motor from object to worldspace.
worldPosition = sw_mp(toWorld, attrib_position) * scale;
// Concatenate the world motor and the tangent frame.
motor tangentRotor = gp_rr( toWorld, motor(attrib_tangentRotor,vec4(0.)) );
// Next, extract world normal and tangent from the tangentFrame rotor.
extractNormalTangent(tangentRotor, worldNormal, worldTangent.xyz);
worldTangent.w = sign(1.0 / attrib_tangentRotor.x); // trick to disambiguate negative zero!
// Now transform from worldspace to eyespace using the view motor.
vec3 viewPosition = sw_mp(camera, worldPosition);
// And finally do the perspective projection. (see miniPGA.glsl)
const float n = .04, f = 400.00; // near and far plane.
const float minfov = 26.0 * PI / 180.0; // The minimal fov in radians.
gl_Position = project(n, f, minfov, aspect, viewPosition);
}`;
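The sign flips in the skinning blend above keep each bone motor on the same hemisphere of the double cover before the weighted sum; without them, blending R and -R (the same rotation) would cancel toward zero. The rotor part of that logic, sketched in JS with illustrative names:

```javascript
// Blend unit rotors with hemisphere correction, then renormalize.
// Mirrors the GLSL skinning path for the first four motor components.
const blendRotors = (rotors, weights) => {
  const r = rotors[0].map(x => x * weights[0]);
  for (let i = 1; i < rotors.length; ++i) {
    // Flip to the short path if the accumulated blend disagrees in sign.
    const d = r[0]*rotors[i][0] + r[1]*rotors[i][1] + r[2]*rotors[i][2] + r[3]*rotors[i][3];
    const flip = d <= 0 ? -1 : 1;
    for (let k = 0; k < 4; ++k) r[k] += weights[i] * flip * rotors[i][k];
  }
  const s = 1 / Math.hypot(...r);
  return r.map(x => x * s);
};

// q and -q encode the same rotation; a naive sum would be zero,
// but the corrected blend returns q itself.
const q = [Math.cos(0.5), 0, 0, Math.sin(0.5)];
const blended = blendRotors([q, q.map(x => -x)], [0.5, 0.5]);
```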
/******************************************************************************
* The main fragment shader.
*****************************************************************************/
export const fragmentShader = (material, mesh)=>`
precision highp float;
precision highp sampler2DArray;
precision highp sampler2D;
precision highp samplerCube;
// Import PGA, IBL, GGX
${ shaderLib.miniPGA }
${ shaderLib.miniIBL }
${ shaderLib.miniGGX }
// We'll also use the scene uniforms.
${UBO.scene}
// And the material uniform block.
${UBO.material}
// Incoming varying attributes.
in vec2 st;
in vec3 worldPosition;
in vec3 worldNormal;
in vec4 worldTangent;
// Textures we might sample.
uniform sampler2D colorTexture;
uniform sampler2D specularTexture;
uniform sampler2D emissiveTexture;
uniform sampler2D normalTexture;
uniform sampler2D oclusionTexture;
// We output the final color.
layout (location=0) out vec4 outColor;
void main() {
// Sample the color and alpha.
vec4 color, sgao;
color = ${material?.pbrMetallicRoughness?.baseColorFactor?`vec4(${material.pbrMetallicRoughness?.baseColorFactor.map(x=>x.toFixed(3))}) *`:'vec4(1.0) *'}
${material?.pbrMetallicRoughness?.baseColorTexture || material?.extensions?.KHR_materials_pbrSpecularGlossiness?.diffuseTexture ?'texture(colorTexture, vec2(st)).rgba':'1.0'};
// Implement alpha Test
${(material?.alphaMode == 'MASK')?` if (color.a < ${material?.alphaCutoff?.toFixed(3)||'0.5'}) discard; `:''}
${(material?.alphaMode == 'BLEND')?` if (color.a < 5./255.) discard; `:''}
// Sample sgao : metalness, roughness, ao.
sgao = ${material?.pbrMetallicRoughness?.metallicRoughnessTexture?'texture(specularTexture, vec2(st)).bgra;':`vec4(1.0,${material?.pbrMetallicRoughness?.roughnessFactor?.toFixed(3)??'1.0'}, 1.0, 1.0).bgra;`}
// Sample the emissive map.
vec3 emissive = ${(material?.emissiveTexture !== undefined)?'texture(emissiveTexture, vec2(st)).rgb;':(material?.emissiveFactor !== undefined)?`vec3(${material.emissiveFactor.map(x=>(x*(material?.extensions?.KHR_materials_emissive_strength?.emissiveStrength??1)).toFixed(3))});`:'vec3(0.0);'}
// If we're unlit, we are done.
${material?.extensions?.KHR_materials_unlit?'outColor = vec4( pow(exposure*(color.rgb + emissive.rgb), vec3(1./2.2)) , dot(vec3(0.299, 0.587, 0.114),emissive.rgb)); outColor2= vec4(vec3(0.),max(0., dot( vec3(0.30, 0.59, 0.11), outColor.rgb ) - 1.) / 2.0); return;':''}
// Process metalness
${(material?.pbrMetallicRoughness?.metallicFactor!==undefined)?`sgao.r *= ${material?.pbrMetallicRoughness?.metallicFactor.toFixed(3)};`:``}
sgao.r = clamp(sgao.r, 0., 1.);
// Process roughness, squared and clamped.
sgao.g = sgao.g * sgao.g;
sgao.g = max(sgao.g, 0.0002);
// Sample ambient occlusion if separate.
${(material?.occlusionTexture !== undefined && material?.occlusionTexture !== material?.pbrMetallicRoughness)?'sgao.b = texture(oclusionTexture, vec2(st)).r;':'sgao.b = 1.0;'}
// Sample the normalmap and apply handedness.
vec3 normalTex = normalize(texture(normalTexture, st).rgb * 2.0 - 1.0);
normalTex.y *= worldTangent.w;
// Build tangent frame.
vec3 normal = normalize(worldNormal); // renormalize normal
vec3 tg = normalize(worldTangent.xyz); // renormalize tangent
tg = normalize( tg - normal * dot(tg, normal) ); // orthogonalize tangent
mat3 tgw = mat3( tg, normalize(cross(normal, tg)), normal ); // construct TBN matrix
${(material?.normalTexture !== undefined) ? 'normal = normalize(tgw * normalTex);':''} // sample normalmap.
// Front facing flag.
if (gl_FrontFacing == false) normal *= -1.;
// Just a single fixed point light.
vec3 V = normalize(cameraPos - worldPosition);
// Calculate light attenuation.
float range = 36.;
float dist = length(worldPosition - lightPos);
float att = clamp(1. - (dist*dist)/(range*range), 0., 1.); att *= att;
// Light contribution.
vec3 ldir = normalize(lightPos - worldPosition);
vec3 light1 = 1.2 * att * sgao.b * brdf(normal, V, ldir, color.rgb, sgao.rgb);
// IBL lighting contribution.
vec3 ibl = getIBLRadianceGGX(normal, V, pow(sgao.g,.5), mix(vec3(0.04), color.rgb, sgao.r), worldPosition);
ibl += sgao.b * getIBLRadianceLambertian( normal, V, pow(sgao.g,.5), mix(color.rgb, vec3(0.), sgao.r), mix(vec3(0.04), color.rgb, sgao.r));
// Accumulate and gamma correct
outColor = vec4( exposure * (light1 + ibl + emissive), color.a);
outColor.rgb = pow(outColor.rgb,vec3(1./2.2));
}`;
================================================
FILE: src/util.js
================================================
/******************************************************************************
*
* Look, Ma, No Matrices!
* Putting PGA to the test.
*
* by Steven De Keninck
*
* Some assorted utilities.
*
*****************************************************************************/
const {floor, ceil, log2, log, max, pow, round} = Math;
/**
* Browser : save url as file.
* @function saveAs
* @param {String} href the url to save.
* @param {String} download the local filename to use.
*/
export const saveAs = ( href, download ) => Object.assign( document.createElement('a'), {href, download} ).click();
/**
* Helper to set all the texture parameters.
**/
export function texParams (gl, target, ...vals) {
vals.forEach((val,i)=>{
if (val!==undefined) gl.texParameteri(
target,
[gl.TEXTURE_MIN_FILTER,gl.TEXTURE_MAG_FILTER,gl.TEXTURE_WRAP_S,gl.TEXTURE_WRAP_T,gl.TEXTURE_MIN_LOD,gl.TEXTURE_MAX_LOD][i],
val);
});
}
/**
* Store an ArrayBuffer into a 32-bit PNG.
* This is unfortunately hard. The obvious putImageData fails because of
* bad premultiplication control. We detour via webGL1.
* @function dataToImage
* @param {Arraybuffer} data The arraybuffer of raw data you want to store in the png
* @param {Number} [w] Optional width to use.
* @param {Number} [h] Optional height to use.
* @param {String} [tp='image/png'] Image mimetype. (only png is safe :( )
**/
export async function dataToImage( data, w, h, tp = 'image/png' ) {
// Grab a pointer to the bytes.
const bytes = new Uint8Array( data.buffer );
// First decide on the resolution.
const closestPow2 = 2 << (floor(log2((bytes.length/4)**.5))-1);
const width = w||floor(closestPow2);
const height = h||ceil(bytes.length/(width*4));
// We need to do this via webGL to get unmodified data in the canvas..
// hmmm can we do this with an imagebitmaprender context instead??
function createShader(gl,src,tp) { var s = gl.createShader(tp); gl.shaderSource(s, src); gl.compileShader(s); return s; };
function createProgram(gl, vs, fs) {
var p = gl.createProgram();
gl.attachShader(p, vs=createShader(gl, vs, gl.VERTEX_SHADER));
gl.attachShader(p, fs=createShader(gl, fs, gl.FRAGMENT_SHADER));
gl.linkProgram(p); gl.deleteShader(vs); gl.deleteShader(fs);
return p;
};
var vs2 = 'precision highp float;\nattribute vec3 position;\nvarying vec2 tex;\nvoid main() { tex = position.xy/2.0+0.5; gl_Position = vec4(position, 1.0); }';
var fs2 = 'precision highp float;\nprecision highp sampler2D;\nuniform sampler2D tx;\nvarying vec2 tex;\nvoid main() { gl_FragColor = texture2D(tx,tex); }';
var canvas = Object.assign(document.createElement('canvas'),{width,height});
var gl = canvas.getContext('webgl',{antialias:false,alpha:true,premultipliedAlpha:false,preserveDrawingBuffer:true});
// Now create the texture we will use.
var texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, texture); gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL,true);
texParams(gl, gl.TEXTURE_2D, gl.NEAREST, gl.NEAREST, gl.CLAMP_TO_EDGE, gl.CLAMP_TO_EDGE);
var bytes2 = new Uint8Array( width * height * 4 ); bytes2.set(bytes,0);
gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, bytes2);
// Create the program to render this texture unmodified to the canvas.
var program = createProgram(gl, vs2, fs2), uniformTexLocation = gl.getUniformLocation(program, 'tx');
var positions = new Float32Array([-1, -1, 1, -1, 1, 1, 1, 1, -1, 1, -1, -1]), vertexPosBuffer=gl.createBuffer();
gl.enableVertexAttribArray(0); gl.bindBuffer(gl.ARRAY_BUFFER, vertexPosBuffer); gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
// Setup the program and texture slot, render and cleanup
gl.useProgram(program); gl.uniform1i(uniformTexLocation, 0);
gl.drawArrays(gl.TRIANGLES, 0, 6);
gl.deleteTexture(texture); gl.deleteProgram(program); gl.deleteBuffer(vertexPosBuffer);
// Now convert it to a png.
return await new Promise( resolve => canvas.toBlob( blob => {
var url = URL.createObjectURL(blob);
console.log('compressed',bytes.length,'to',blob.size,'['+(100*blob.size/bytes.length).toFixed(3)+'%] - ',width,'*',height);
resolve(url); // toBlob itself returns undefined, so we resolve via its callback.
}, tp, 1));
}
/**
* Similarly, getting raw bytes from an image is not obvious either.
* We use createImageBitmap and OffscreenCanvas for webWorker access.
* @param {String} url Image url to load
* @returns {ArrayBuffer} data raw data
**/
export async function imageToData( url ) {
// Fetch the data as imagebitmap - mind premultiply option!
const blob = await fetch(url,{priority:'high', cache:'force-cache'}).then(res=>res.blob());
const i = await createImageBitmap( blob, {premultiplyAlpha:"none", colorSpaceConversion:"none"} );
// Create gl context and upload data as texture.
const c = new OffscreenCanvas(i.width,i.height), gl = c.getContext('webgl');
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, i);
// Create framebuffer and attach texture
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
// Now read back the data.
const res = new Uint8Array(i.width*i.height*4);
gl.readPixels(0,0,i.width,i.height,gl.RGBA,gl.UNSIGNED_BYTE,res);
gl.deleteTexture(texture); gl.deleteFramebuffer(fb);
res.width = i.width; res.height = i.height;
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
return res;
}
/**
* Convert HDR Floating point data to RGBE format.
* @param {Array} Color Red, Green, Blue floating point vector.
* @returns {Array} Color [R,G,B,E] Uint vector
**/
export function floatToRGBE([r, g, b]) {
// Highest coefficient determines shared exponent.
let v = max(r, g, b);
if (v < 1e-32) return [0, 0, 0, 0];
// Calculate exponent and scaling factor.
let exp = floor(log2(v) + 1.0);
let scale = pow(2.0, -exp);
// Return scaled versions as unsigned bytes.
return [
round(r * scale * 255.0),
round(g * scale * 255.0),
round(b * scale * 255.0),
exp + 128
];
}
/**
* Convert RGBE data back to floating point HDR RGB.
* @param {array} RGBE Array with RGBE unsigned byte data.
* @param {number} [offset] Optional offset into the input array
* @returns {array} Color Array with HDR RGB color.
**/
const RGBE_exp_cache = [...Array(256)].map((x,i)=>pow(2.0, i - 136));
export function RGBEToFloat(RGBE, offset = 0, dest = [], destOffset = 0) {
let f = RGBE_exp_cache[RGBE[offset + 3]]; // 2 ** (RGBE[offset + 3] - 136);
dest[destOffset ] = RGBE[offset] * f;
dest[destOffset+1] = RGBE[offset+1] * f;
dest[destOffset+2] = RGBE[offset+2] * f;
return dest;
}
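A roundtrip through the two conversions above stays within the 8-bit mantissa precision; the decode factor 2^(E-136) folds the encoder's 255 divisor back in, so values return to within roughly half a percent. This sketch re-derives both helpers so it is self-contained.

```javascript
const { floor, log2, max, pow, round } = Math;

// Encode: shared exponent from the max channel, mantissas as bytes.
const floatToRGBE = ([r, g, b]) => {
  const v = max(r, g, b);
  if (v < 1e-32) return [0, 0, 0, 0];
  const exp = floor(log2(v) + 1), scale = pow(2, -exp);
  return [round(r*scale*255), round(g*scale*255), round(b*scale*255), exp + 128];
};

// Decode: 2**(E - 136) undoes both the exponent bias and the byte scale.
const RGBEToFloat = ([r, g, b, e]) => {
  const f = pow(2, e - 136);
  return [r*f, g*f, b*f];
};

const hdr = [3.7, 0.25, 12.0];
const roundtrip = RGBEToFloat(floatToRGBE(hdr));
// Each channel returns to within ~1% (8-bit mantissa plus the 255/256 scale).
```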
/**
* Packs two floating point values into RGBA unsigned bytes. Specifically scales for the LUT range!
* @param {array} xy Array with two floating point values to be packed.
* @returns {array} RGBA Array with 4 unsigned bytes packing the floating point values.
**/
export function LUTToRGBA([x,y]) {
x = floor(x * 65500);
y = floor(y * 65500);
return [(x >> 8)&255, (x)&255, (y >> 8)&255, y&255];
}
/**
 * Unpacks RGBA encoded LUT information back to two floats. Specific to the LUT range!
 * @param {array} RGBA RGBA input.
 * @param {number} [offset] Optional offset into the input.
 * @param {array} [dest] Optional destination array, a new array is used if undefined.
 * @param {number} [destOffset] Optional offset into the destination array.
 * @returns {array} XY Two unpacked rescaled floating point values.
 */
export function RGBAToLUT(RGBA, offset = 0, dest = [], destOffset = 0) {
dest[destOffset] = ((RGBA[offset] << 8) + RGBA[offset+1])/65500;
dest[destOffset+1] = ((RGBA[offset+2] << 8) + RGBA[offset+3])/65500;
return dest;
}
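The same round-trip idea applies to this 16-bit-per-channel LUT packing. The check below inlines both functions (same logic as above) so it runs standalone; inputs for which `x * 65500` is an integer survive the trip exactly.

```javascript
// Round-trip sanity check for the 16-bit LUT packing above (logic inlined from util.js).
const { floor } = Math;

function LUTToRGBA([x, y]) {
  x = floor(x * 65500); y = floor(y * 65500);                // scale to 16-bit range
  return [(x >> 8) & 255, x & 255, (y >> 8) & 255, y & 255]; // split into hi/lo bytes
}

function RGBAToLUT(RGBA, offset = 0, dest = [], destOffset = 0) {
  dest[destOffset    ] = ((RGBA[offset    ] << 8) + RGBA[offset + 1]) / 65500;
  dest[destOffset + 1] = ((RGBA[offset + 2] << 8) + RGBA[offset + 3]) / 65500;
  return dest;
}

const packed   = LUTToRGBA([0.25, 0.75]); // -> [63, 247, 191, 229]
const unpacked = RGBAToLUT(packed);       // -> [0.25, 0.75] exactly
```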
/**
* Calculates a simple mipmap chain. Handles any type of input data.
* @param {array|typedarray} buffer input buffer.
* @param {number} width width of input image.
* @param {number} height height of input image.
* @param {number} [pp=3] number of components per pixel.
* @returns {array} mips Array of mips, same type as buffer, starting with buffer and halving in size each step.
**/
export function generateMipChain (buffer, width, height, pp=3) {
// Our result starts with the input, our first size is halfway.
const res = [buffer];
width = width >> 1; height = height >> 1;
  // Until one of the sizes reaches zero, repeat.
while (width && height) {
// Create a new buffer of the same type and correct size.
var buf = new (buffer.constructor)( width * height * pp );
// Now do a simple box filter 50% scale.
for (var i=0; i<height; ++i) for (var j=0; j<width; ++j) for (var k=0; k<pp; ++k) {
buf[ i*width*pp + j*pp + k ] = (
buffer[ (i*2 )*width*pp*2 + (j*2 )*pp + k ]
+buffer[ (i*2+1)*width*pp*2 + (j*2 )*pp + k ]
+buffer[ (i*2 )*width*pp*2 + (j*2+1)*pp + k ]
+buffer[ (i*2+1)*width*pp*2 + (j*2+1)*pp + k ]
)/4;
}
    // Store this result, halve the sizes and carry on.
res.push(buf); buffer = buf;
width = width >> 1; height = height >> 1;
}
return res;
}
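A tiny worked example makes the box filter easy to verify by hand. The snippet below inlines generateMipChain (same logic as above) and runs it on a 4x4 single-channel gradient, where each mip pixel is the average of the 2x2 block beneath it.

```javascript
// Worked example of the box-filter mip chain above (logic inlined from util.js).
function generateMipChain(buffer, width, height, pp = 3) {
  const res = [buffer];
  width >>= 1; height >>= 1;
  while (width && height) {
    const buf = new buffer.constructor(width * height * pp);
    // Average each 2x2 source block into one destination pixel.
    for (let i = 0; i < height; ++i) for (let j = 0; j < width; ++j) for (let k = 0; k < pp; ++k) {
      buf[i * width * pp + j * pp + k] = (
          buffer[(i*2  ) * width * pp * 2 + (j*2  ) * pp + k]
        + buffer[(i*2+1) * width * pp * 2 + (j*2  ) * pp + k]
        + buffer[(i*2  ) * width * pp * 2 + (j*2+1) * pp + k]
        + buffer[(i*2+1) * width * pp * 2 + (j*2+1) * pp + k]) / 4;
    }
    res.push(buf); buffer = buf;
    width >>= 1; height >>= 1;
  }
  return res;
}

// A 4x4 single-channel gradient: mips are 4x4 -> 2x2 -> 1x1.
const mips = generateMipChain(new Float32Array([
   0,  1,  2,  3,
   4,  5,  6,  7,
   8,  9, 10, 11,
  12, 13, 14, 15]), 4, 4, 1);
// mips[1] -> [2.5, 4.5, 10.5, 12.5], mips[2] -> [7.5]
```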
/**
* Fetch with progress for ArrayBuffer, Blob and JSON.
* @param {Function} progressCallback Gets called with progress update object { current, estimate, value }
**/
Response.prototype.progress = async function ( progressCallback ) {
// Figure out total size :
// 1. from custom x-content-length header containing uncompressed length of compressed streams, must be custom set on server.
// 2. from content-length header containing uncompressed length of raw streams.
// 3. from local storage if we loaded the file before.
const totSize = ( this.headers.get("x-content-length") || this.headers.get("content-length") || localStorage["content_length_"+this.url] ) | 0;
// Read the request as stream and accumulate the chunks.
let responseSize = 0, chunks = [], reader = this.body.getReader(), time_start = performance.now();
progressCallback&&progressCallback({ chunk : 0, current : responseSize, estimate : totSize, value : responseSize / totSize });
while (true) {
const {done, value} = await reader.read();
    if (done) { reader.releaseLock(); break; }
responseSize += value.length;
const time_passed = performance.now() - time_start;
progressCallback&&progressCallback({
chunk : value.length,
current : responseSize,
estimate : totSize,
value : responseSize / totSize,
speed : responseSize / time_passed
});
chunks.push(value);
}
localStorage["content_length_"+this.url] = responseSize;
// Concatenate chunks
let buffer = new Uint8Array(responseSize);
for (var i=0, j=0; i<chunks.length; ++i) { buffer.set(chunks[i],j); j+=chunks[i].length; }
// Finally, return the needed accessors.
return {
text : ()=>new TextDecoder().decode(buffer),
arrayBuffer : ()=>buffer.buffer,
json : ()=>JSON.parse(new TextDecoder().decode(buffer)),
blob : ()=>new Blob([buffer],{ type : this.headers.get('content-type') }),
};
}
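For reference, here is a minimal standalone sketch of the same streaming pattern: read the body chunk by chunk, report progress, then concatenate. `readWithProgress` is a hypothetical name for this sketch, not part of the repository; it assumes the WHATWG `Response`/`ReadableStream` globals (browsers, Node 18+).

```javascript
// Minimal sketch of the streaming pattern used by Response.prototype.progress above.
// readWithProgress is a hypothetical helper; requires Response/ReadableStream globals.
async function readWithProgress(response, onProgress) {
  const total  = (response.headers.get('content-length') || 0) | 0;
  const reader = response.body.getReader();
  const chunks = [];
  let received = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) { reader.releaseLock(); break; }
    received += value.length;
    chunks.push(value);
    // Report progress after each chunk, mirroring the shape used above.
    onProgress && onProgress({ chunk: value.length, current: received, estimate: total,
                               value: total ? received / total : 0 });
  }
  // Concatenate the chunks into one contiguous buffer.
  const buffer = new Uint8Array(received);
  for (let i = 0, j = 0; i < chunks.length; ++i) { buffer.set(chunks[i], j); j += chunks[i].length; }
  return buffer;
}

// Exercise it against a synthetic Response:
const result = readWithProgress(new Response('hello world'))
  .then(buf => new TextDecoder().decode(buf));
```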
/**
* Save a packed RGBE HDR panoramic cubemap, optionally with mipmaps.
**/
export function saveCubemap (gl, texid, texturesize = 256, baseName = 'cubemap', nrMips = 0) {
// Function to process each face of the cubemap
function processCubemapFace(faceIndex, ctx, offset=0, mipLevel=0) {
const cursize = texturesize / (2**mipLevel);
// Bind the framebuffer
let framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + faceIndex, texid, mipLevel);
// Read pixels from the framebuffer
let rawData = new Float32Array(cursize * cursize * 4);
gl.readPixels(0, 0, cursize, cursize, gl.RGBA, gl.FLOAT, rawData);
// Convert each pixel and write it to the canvas.
console.log(' ',faceIndex, cursize, ' @@@', offset + cursize**2*4*faceIndex);
for (let i = 0; i < rawData.length; i += 4) {
let rgbe = floatToRGBE([rawData[i], rawData[i + 1], rawData[i + 2]]);
ctx.set(rgbe, i + offset + cursize**2*4*faceIndex);
}
// Cleanup
gl.deleteFramebuffer(framebuffer);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
}
// Store the entire thing. The size is the base size + 25% of that and so on.
const totsize = texturesize**2 * 6 * 4 * 4/3 * (1-1/4**(nrMips+1)) ; // Geometric series .. 1 + 1/4 + ... + 1/4^n = 4/3 * (1 - 1/4^n)
let final = new Uint8Array( totsize );
// Process each face of the cubemap
for (let j = 0; j < nrMips+1; ++j) for (let i = 0; i < 6; i++) processCubemapFace(i, final, texturesize**2 * 6 * 4 * 4/3 * (1-1/4**j), j);
dataToImage( final, texturesize*2, Math.ceil((totsize/4) / (texturesize*2)) ).then(url=>saveAs( url, `${baseName}.cubemap.png` ));
}
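The `totsize` expression is easy to misread: it is just the closed form of the geometric series of mip sizes. A small check (hypothetical helper names, not part of the repository) compares the closed form against an explicit per-mip sum:

```javascript
// Closed-form size of a packed RGBE cubemap with mips, as used by saveCubemap above:
// 6 faces * size^2 * 4 bytes at mip 0, each further mip 1/4 the size.
// 1 + 1/4 + ... + 1/4^n = 4/3 * (1 - 1/4^(n+1))
function packedCubemapBytes(texturesize, nrMips) {
  const base = texturesize ** 2 * 6 * 4;
  return Math.round(base * 4 / 3 * (1 - 1 / 4 ** (nrMips + 1)));
}

// The same total as an explicit sum over mip levels.
function explicitBytes(texturesize, nrMips) {
  let total = 0;
  for (let m = 0; m <= nrMips; ++m) total += (texturesize >> m) ** 2 * 6 * 4;
  return total;
}
```

The per-mip offsets in loadCubemap use the same series truncated one level earlier, i.e. `base * 4/3 * (1 - 1/4^j)` for mip level `j`.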
/**
* imageCache object.
**/
export const imageCache = {};
/**
 * Loads a packed RGBE HDR panoramic cubemap, optionally with mipmaps.
 * @param {WebGL2RenderingContext} gl webGL2 rendering context.
 * @param {String} fileName Filename to load.
 * @param {number} [nrMips=1] Number of mipmap levels to load.
 * @returns {WebGLTexture} A webGL texture.
 **/
export async function loadCubemap (gl, fileName, nrMips = 1) {
// First check the cache.
if (imageCache[fileName]) return imageCache[fileName];
// Create texture.
const texture = gl.createTexture();
const imageData = await imageToData( fileName );
// These cubemaps are saved with twice their actual width.
const [width,height] = [imageData.width / 2, imageData.height * 2];
// Convert each pixel from RGBE to RGBA32f and write to a Float32Array
let floatData = new Float32Array(imageData.length);
  for (let i = 0; i < imageData.length; i += 4) RGBEToFloat(imageData, i, floatData, i);
  // Upload the float data, one cubemap face and mip level at a time.
gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture);
gl.texStorage2D(gl.TEXTURE_CUBE_MAP, nrMips, gl.RGBA16F, width, width);
for (var faceIndex = 0; faceIndex < 6; ++faceIndex)
for (var j=0; j<nrMips; ++j) {
const curSize = width / (2**j);
const mipOffset = width**2 * 6 * 4 * (4/3 * (1-1/4**j));
const imgOffset = mipOffset + faceIndex * curSize**2 * 4;
const imgSize = curSize**2 * 4;
gl.texSubImage2D( gl.TEXTURE_CUBE_MAP_POSITIVE_X + faceIndex, j, 0, 0, curSize, curSize, gl.RGBA, gl.FLOAT, floatData.slice( imgOffset, imgOffset + imgSize ));
}
// Set texture parameters, max miplevel and resolve.
texParams(gl, gl.TEXTURE_CUBE_MAP, nrMips>1?gl.LINEAR_MIPMAP_LINEAR:gl.LINEAR, gl.LINEAR, gl.CLAMP_TO_EDGE, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAX_LEVEL, Math.max(0,nrMips-1));
imageCache[fileName] = texture;
return texture;
}
/**
* Loads a HDR texture from PNG. Standard decoder is RGBE to Float RGB.
* Our GGX LUT tables are also stored as PNG and decoded with RGBAToLUT.
* @param {WebGL2RenderingContext} gl WebGL context to use.
* @param {string} fileName URI to load.
* @param {function} [decoder=RGBEToFloat] Decoder function(array, offset).
**/
export async function loadHDRTexture (gl, fileName, decoder = RGBEToFloat) {
// Create the texture, fetch the data.
const texture = gl.createTexture();
const imageData = await imageToData( fileName );
const {width,height} = imageData;
// Convert each pixel from RGBE to RGBA32f and write to a Float32Array
let floatData = new Float32Array(imageData.length);
for (let i = 0; i < imageData.length; i += 4) decoder(imageData, i, floatData, i);
  // Upload the float data to the 2D texture.
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA16F, width, height, 0, gl.RGBA, gl.FLOAT, floatData);
texParams(gl, gl.TEXTURE_2D, gl.LINEAR, gl.LINEAR, gl.CLAMP_TO_EDGE, gl.CLAMP_TO_EDGE);
return texture;
}