Repository: Ralith/hypermine
Branch: master
Commit: 5dd946046873
Files: 102
Total size: 793.1 KB
Directory structure:
gitextract_t2xcel5r/
├── .github/
│   ├── dependabot.yml
│   └── workflows/
│       ├── package.yml
│       └── rust.yml
├── .gitignore
├── Cargo.toml
├── LICENSE-APACHE
├── LICENSE-ZLIB
├── README.md
├── assets/
│   ├── .gitattributes
│   └── character.glb
├── client/
│   ├── Cargo.toml
│   ├── benches/
│   │   └── surface_extraction.rs
│   ├── shaders/
│   │   ├── common.h
│   │   ├── fog.frag
│   │   ├── fullscreen.vert
│   │   ├── mesh.frag
│   │   ├── mesh.vert
│   │   ├── surface-extraction/
│   │   │   ├── extract.comp
│   │   │   └── surface.h
│   │   ├── voxels.frag
│   │   └── voxels.vert
│   └── src/
│       ├── config.rs
│       ├── graphics/
│       │   ├── base.rs
│       │   ├── core.rs
│       │   ├── draw.rs
│       │   ├── fog.rs
│       │   ├── frustum.rs
│       │   ├── gltf_mesh.rs
│       │   ├── gui.rs
│       │   ├── meshes.rs
│       │   ├── mod.rs
│       │   ├── png_array.rs
│       │   ├── tests.rs
│       │   ├── voxels/
│       │   │   ├── mod.rs
│       │   │   ├── surface.rs
│       │   │   ├── surface_extraction.rs
│       │   │   └── tests.rs
│       │   └── window.rs
│       ├── lahar_deprecated/
│       │   ├── condition.rs
│       │   ├── mod.rs
│       │   ├── ring_alloc.rs
│       │   ├── staging.rs
│       │   └── transfer.rs
│       ├── lib.rs
│       ├── loader.rs
│       ├── local_character_controller.rs
│       ├── main.rs
│       ├── metrics.rs
│       ├── net.rs
│       ├── prediction.rs
│       ├── sim.rs
│       └── worldgen_driver.rs
├── common/
│   ├── Cargo.toml
│   ├── benches/
│   │   └── bench.rs
│   └── src/
│       ├── character_controller/
│       │   ├── collision.rs
│       │   ├── mod.rs
│       │   └── vector_bounds.rs
│       ├── chunk_collision.rs
│       ├── chunk_ray_casting.rs
│       ├── chunks.rs
│       ├── codec.rs
│       ├── collision_math.rs
│       ├── cursor.rs
│       ├── dodeca.rs
│       ├── graph.rs
│       ├── graph_collision.rs
│       ├── graph_entities.rs
│       ├── graph_ray_casting.rs
│       ├── id.rs
│       ├── lib.rs
│       ├── margins.rs
│       ├── math.rs
│       ├── node.rs
│       ├── peer_traverser.rs
│       ├── proto.rs
│       ├── sim_config.rs
│       ├── traversal.rs
│       ├── voxel_math.rs
│       ├── world.rs
│       └── worldgen/
│           ├── horosphere.rs
│           ├── mod.rs
│           ├── plane.rs
│           └── terraingen.rs
├── docs/
│   ├── README.md
│   └── world_generation.md
├── save/
│   ├── Cargo.toml
│   ├── benches/
│   │   └── bench.rs
│   ├── gen-protos/
│   │   ├── Cargo.toml
│   │   └── src/
│   │       └── main.rs
│   ├── src/
│   │   ├── lib.rs
│   │   ├── protos.proto
│   │   └── protos.rs
│   └── tests/
│       ├── heavy.rs
│       └── tests.rs
├── server/
│   ├── Cargo.toml
│   └── src/
│       ├── config.rs
│       ├── input_queue.rs
│       ├── lib.rs
│       ├── main.rs
│       ├── postcard_helpers.rs
│       └── sim.rs
└── shell.nix
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/dependabot.yml
================================================
version: 2
updates:
  - package-ecosystem: cargo
    directory: "/"
    schedule:
      interval: daily
    open-pull-requests-limit: 10
    ignore:
      # Ignore raw-window-handle because it's tied to ash-window
      - dependency-name: raw-window-handle
      # Ignore rustls and rustls-pemfile because they're tied to quinn
      - dependency-name: rustls
      - dependency-name: rustls-pemfile
================================================
FILE: .github/workflows/package.yml
================================================
name: Package
on:
  push:
    branches: ['master']
jobs:
  package-windows:
    name: Windows
    runs-on: windows-latest
    env:
      VULKAN_VERSION: "1.3.290.0"
      VULKAN_SDK: "C:/VulkanSDK/1.3.290.0"
    steps:
      - uses: actions/checkout@v4
        with:
          lfs: true
      - name: Install Vulkan SDK
        run: |
          Invoke-WebRequest -Uri "https://sdk.lunarg.com/sdk/download/${{ env.VULKAN_VERSION }}/windows/VulkanSDK-${{ env.VULKAN_VERSION }}-Installer.exe" -OutFile vulkan.exe
          ./vulkan.exe --accept-licenses --default-answer --confirm-command install
      - uses: dtolnay/rust-toolchain@stable
      - name: Build Server
        run: cargo build --package server --release --locked
      - name: Build Client
        run: cargo build --package client --release --no-default-features --locked
      - name: Package Artifacts
        run: |
          mkdir artifacts
          Move-Item -Path assets/* -Destination artifacts/
          Move-Item -Path target/release/*.exe -Destination artifacts/
      - name: Upload Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: windows
          path: "artifacts/*"
  package-linux:
    name: Linux
    # Oldest supported runner, for wide glibc compat
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
        with:
          lfs: true
      - name: Install dependencies
        run: sudo apt update && sudo apt-get -y install libasound2-dev libvulkan-dev libfontconfig-dev
      # No prebuilt shaderc, since the official binaries don't seem to be compatible with Ubuntu 20.04,
      # and we haven't tested them on Ubuntu 22.04.
      - uses: dtolnay/rust-toolchain@stable
      - name: Build Server
        run: cargo build --package server --release --locked
      - name: Build Client
        run: cargo build --package client --release --no-default-features --locked
      - name: Strip
        run: |
          strip target/release/server target/release/client
      - name: Package Artifacts
        run: |
          mkdir artifacts
          mv assets/* artifacts/
          mv target/release/server artifacts/
          mv target/release/client artifacts/
      - name: Upload Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: linux
          path: "artifacts/*"
================================================
FILE: .github/workflows/rust.yml
================================================
name: CI
on:
  push:
    branches: ['master']
    paths-ignore:
      - 'docs/**'
  pull_request:
    paths-ignore:
      - 'docs/**'
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    env:
      VULKAN_VERSION: "1.3.290.0"
      VULKAN_SDK: "C:/VulkanSDK/1.3.290.0"
    steps:
      - name: Install shaderc
        if: matrix.os == 'ubuntu-latest'
        run: |
          wget -nv -r -nd -A install.tgz 'https://storage.googleapis.com/shaderc/badges/build_link_linux_clang_release.html'
          tar xf install.tgz
          echo "SHADERC_LIB_DIR=$PWD/install/lib" >> "$GITHUB_ENV"
      - name: Install Vulkan SDK
        if: matrix.os == 'windows-latest'
        run: |
          Invoke-WebRequest -Uri "https://sdk.lunarg.com/sdk/download/${{ env.VULKAN_VERSION }}/windows/VulkanSDK-${{ env.VULKAN_VERSION }}-Installer.exe" -OutFile vulkan.exe
          ./vulkan.exe --accept-licenses --default-answer --confirm-command install
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo build --workspace --all-targets --locked
      - run: cargo test --workspace --locked
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
        with:
          components: rustfmt, clippy
      - run: cargo fmt --all -- --check
      - if: always()
        run: cargo clippy --workspace --all-targets -- -D warnings
  check-protos:
    name: Check protos
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - run: sudo apt update && sudo apt-get -y install protobuf-compiler
      - name: Generate Rust code from .proto files
        run: cargo run -p gen-protos --locked
      - name: Check for uncommitted changes
        run: git diff --exit-code
================================================
FILE: .gitignore
================================================
/target
**/*.rs.bk
# IDEA workspace stuff
/*.iml
/.idea
/.vscode/launch.json
/.vscode/tasks.json
/tarpaulin-report.html
================================================
FILE: Cargo.toml
================================================
[workspace]
resolver = "2"
members = ["client", "server", "common", "save", "save/gen-protos"]

[workspace.dependencies]
hecs = "0.11.0"
nalgebra = { version = "0.34.1", features = ["libm-force"] }
quinn = { version = "0.11", default-features = false, features = ["rustls", "ring", "runtime-tokio"] }
toml = { version = "0.9.12", default-features = false, features = ["parse", "serde", "std"] }

[profile.dev]
opt-level = 1
debug-assertions = true

[profile.dev.package."*"]
opt-level = 2

[profile.release.build-override]
opt-level = 0
================================================
FILE: LICENSE-APACHE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: LICENSE-ZLIB
================================================
Copyright (c) 2020 Benjamin Saunders
This software is provided 'as-is', without any express or implied warranty. In
no event will the authors be held liable for any damages arising from the use of
this software.
Permission is granted to anyone to use this software for any purpose, including
commercial applications, and to alter it and redistribute it freely, subject to
the following restrictions:
1. The origin of this software must not be misrepresented; you must not claim
that you wrote the original software. If you use this software in a product, an
acknowledgment in the product documentation would be appreciated but is not
required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
================================================
FILE: README.md
================================================
## Installation
See the [wiki](https://github.com/Ralith/hypermine/wiki) for instructions on how to build and run.
## License
Licensed under either of
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
http://www.apache.org/licenses/LICENSE-2.0)
* Zlib license ([LICENSE-ZLIB](LICENSE-ZLIB) or
https://opensource.org/licenses/Zlib)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed as above, without any additional terms or conditions.
================================================
FILE: assets/.gitattributes
================================================
*.png filter=lfs diff=lfs merge=lfs -text
*.glb filter=lfs diff=lfs merge=lfs -text
*.gltf filter=lfs diff=lfs merge=lfs -text
================================================
FILE: assets/character.glb
================================================
version https://git-lfs.github.com/spec/v1
oid sha256:ee33342c11e0746b106031b2765ee86a0f1890b72fec7c0219815ae98f414e16
size 139624
================================================
FILE: client/Cargo.toml
================================================
[package]
name = "client"
version = "0.1.0"
authors = ["Benjamin Saunders <ben.e.saunders@gmail.com>"]
edition = "2024"
publish = false
license = "Apache-2.0 OR Zlib"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
common = { path = "../common" }
server = { path = "../server" }
tracing = "0.1.10"
ash = { version = "0.38.0", default-features = false, features = ["loaded", "debug", "std"] }
lahar = { git = "https://github.com/Ralith/lahar", rev = "7963ae5750ea61fa0a894dbb73d3be0ac77255d2" }
yakui = "0.3.0"
yakui-vulkan = "0.3.0"
winit = "0.30.4"
ash-window = "0.13"
raw-window-handle = "0.6"
directories = "6.0.0"
vk-shader-macros = "0.2.5"
nalgebra = { workspace = true }
libm = "0.2.16"
tokio = { version = "1.43.0", features = ["rt-multi-thread", "sync", "macros"] }
png = "0.18.0"
anyhow = "1.0.26"
serde = { version = "1.0.104", features = ["derive", "rc"] }
toml = { workspace = true }
fxhash = "0.2.1"
downcast-rs = "2.0.0"
quinn = { workspace = true }
futures-util = "0.3.1"
webpki = "0.22.4"
hecs = { workspace = true }
memoffset = "0.9"
gltf = { version = "1.0.0", default-features = false, features = ["utils"] }
metrics = "0.24.0"
hdrhistogram = { version = "7", default-features = false }
save = { path = "../save" }
lru-slab = "0.1.2"

[features]
default = ["use-repo-assets"]
use-repo-assets = []

[dev-dependencies]
approx = "0.5.1"
bencher = "0.1.5"
renderdoc = "0.12.1"

[[bench]]
name = "surface_extraction"
harness = false
================================================
FILE: client/benches/surface_extraction.rs
================================================
use std::sync::Arc;

use ash::vk;
use bencher::{Bencher, benchmark_group, benchmark_main};
use client::graphics::{
    Base,
    voxels::surface_extraction::{self, ExtractTask, SurfaceExtraction},
};

//use common::world::Material;

fn extract(bench: &mut Bencher) {
    let gfx = Arc::new(Base::headless());
    let extract = SurfaceExtraction::new(&gfx);
    let mut scratch = surface_extraction::ScratchBuffer::new(&gfx, &extract, BATCH_SIZE, DIMENSION);
    let draw = surface_extraction::DrawBuffer::new(&gfx, BATCH_SIZE, DIMENSION);
    let device = &*gfx.device;
    unsafe {
        let cmd_pool = device
            .create_command_pool(
                &vk::CommandPoolCreateInfo::default().queue_family_index(gfx.queue_family),
                None,
            )
            .unwrap();
        let cmd = device
            .allocate_command_buffers(
                &vk::CommandBufferAllocateInfo::default()
                    .command_pool(cmd_pool)
                    .command_buffer_count(1),
            )
            .unwrap()[0];
        device
            .begin_command_buffer(cmd, &vk::CommandBufferBeginInfo::default())
            .unwrap();
        let batch = (0..BATCH_SIZE)
            .map(|i| ExtractTask {
                index: i,
                draw_id: i,
                indirect_offset: draw.indirect_offset(i),
                face_offset: draw.face_offset(i),
                reverse_winding: false,
            })
            .collect::<Vec<_>>();
        scratch.extract(
            device,
            &extract,
            draw.indirect_buffer(),
            draw.face_buffer(),
            cmd,
            &batch,
        );
        device.end_command_buffer(cmd).unwrap();
        bench.iter(|| {
            device
                .queue_submit(
                    gfx.queue,
                    &[vk::SubmitInfo::default().command_buffers(&[cmd])],
                    vk::Fence::null(),
                )
                .unwrap();
            device.device_wait_idle().unwrap();
        })
    }
}

const DIMENSION: u32 = 16;
const BATCH_SIZE: u32 = 16;

benchmark_group!(benches, extract);
benchmark_main!(benches);
================================================
FILE: client/shaders/common.h
================================================
#ifndef COMMON_H
#define COMMON_H

const float PI = 3.14159265;
const float INFINITY = 1.0 / 0.0;

layout(set = 0, binding = 0) uniform Common {
    // Maps local node space to clip space
    mat4 view_projection;
    // Maps clip space to view space
    mat4 inverse_projection;
    float fog_density;
    float time;
};

#endif
================================================
FILE: client/shaders/fog.frag
================================================
#version 450

#include "common.h"

layout(location = 0) in vec2 texcoords;
layout(location = 0) out vec4 fog;
layout(input_attachment_index=0, set=0, binding=1) uniform subpassInput depth;

void main() {
    vec4 clip_pos = vec4(texcoords * 2.0 - 1.0, subpassLoad(depth).x, 1.0);
    vec4 scaled_view_pos = inverse_projection * clip_pos;
    // Cancel out perspective, obtaining klein ball position
    vec3 view_pos = scaled_view_pos.xyz / scaled_view_pos.w;
    float view_length = length(view_pos);
    // Convert to true hyperbolic distance, taking care to respect atanh's domain
    float dist = view_length >= 1.0 ? INFINITY : atanh(view_length);
    // Exponential^k fog
    fog = vec4(0.5, 0.65, 0.9, exp(-pow(dist * fog_density, 5)));
}
================================================
FILE: client/shaders/fullscreen.vert
================================================
#version 450

layout (location = 0) out vec2 texcoords;

void main() {
    texcoords = vec2((gl_VertexIndex << 1) & 2, gl_VertexIndex & 2);
    gl_Position = vec4(texcoords * 2.0f + -1.0f, 0.0f, 1.0f);
}
================================================
FILE: client/shaders/mesh.frag
================================================
#version 450

layout(location = 0) in vec2 texcoords;
layout(location = 1) in vec4 normal;
layout(location = 0) out vec4 color_out;
layout(set = 1, binding = 0) uniform sampler2D color;

void main() {
    color_out = texture(color, texcoords);
}
================================================
FILE: client/shaders/mesh.vert
================================================
#version 450

#include "common.h"

layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texcoords;
layout(location = 2) in vec3 normal;
layout(location = 0) out vec2 texcoords_out;
layout(location = 1) out vec4 normal_out;

layout(push_constant) uniform PushConstants {
    mat4 transform;
};

void main() {
    gl_Position = view_projection * transform * vec4(position, 1);
    texcoords_out = texcoords;
    normal_out = transform * vec4(normal, 0);
}
================================================
FILE: client/shaders/surface-extraction/extract.comp
================================================
#version 450

#extension GL_KHR_shader_subgroup_ballot: enable
#extension GL_KHR_shader_subgroup_arithmetic: enable

#include "surface.h"

layout(local_size_x_id = 0, local_size_y_id = 1, local_size_z_id = 2) in;

layout(set = 0, binding = 0) restrict uniform Parameters {
    int dimension;
};

layout(set = 1, binding = 0) readonly restrict buffer Voxels {
    uint voxel_pair[];
};

layout(set = 1, binding = 1) restrict buffer State {
    uint face_count;
};

layout(set = 1, binding = 2) writeonly restrict buffer Indirect {
    uint vertex_count;
    uint instance_count;
    uint first_vertex;
    uint first_instance;
};

layout(set = 1, binding = 3) writeonly restrict buffer Surfaces {
    Surface surfaces[];
};

layout(push_constant) uniform Uniforms {
    bool reverse_winding;
};

uint get_voxel(ivec3 coords) {
    // We assume that all dimensions are equal, except that gl_NumWorkGroups.x is three times larger
    // (yielding one invocation per negative-facing face). Each coordinate is offset by 1 to account
    // for the margin on the negative-facing sides of the chunk.
    // There's a margin of 1 on each side of each dimension, only half of which is dispatched over
    uint linear = (coords.x + 1) + (coords.y + 1) * (dimension + 2) + (coords.z + 1) * (dimension + 2) * (dimension + 2);
    uint pair = voxel_pair[linear / 2];
    return (linear % 2) == 0 ? pair & 0xFFFF : pair >> 16;
}

// A face between a voxel and its neighbor in the -X, -Y, or -Z direction
struct Face {
    // coordinates of the voxel
    ivec3 voxel;
    // [0,3), indicating which axis this face is perpendicular to
    uint axis;
    // whether the normal is facing towards the center of this voxel
    bool inward;
    // contents of the solid voxel incident to the face, which may be a neighbor
    uint material;
};

ivec3 neighbor_offset(uint axis) {
    ivec3 off = ivec3(0);
    off[axis] = -1;
    return off;
}

bool find_face(out Face info) {
    // We only look at negative-facing faces of the current voxel, and iterate one past the end on
    // each dimension to enclose it fully.
    info.voxel = ivec3(gl_GlobalInvocationID.x / 3, gl_GlobalInvocationID.yz);
    info.axis = gl_GlobalInvocationID.x % 3;
    ivec3 neighbor = info.voxel + neighbor_offset(info.axis);
    // Don't generate faces between out-of-bounds voxels
    if (any(greaterThanEqual(info.voxel, ivec3(dimension))) && any(greaterThanEqual(neighbor, ivec3(dimension)))) return false;
    uint neighbor_mat = get_voxel(neighbor);
    uint self_mat = get_voxel(info.voxel);
    // Flip face around if the neighbor is the solid one
    info.inward = self_mat == 0;
    info.material = self_mat | neighbor_mat;
    // If self or neighbor is a void margin, then no surface should be generated, as any surface
    // that would be rendered is the responsibility of the adjacent chunk.
    if ((self_mat == 0 && info.voxel[info.axis] == dimension) || (neighbor_mat == 0 && neighbor[info.axis] == -1)) return false;
    return (neighbor_mat == 0) != (self_mat == 0);
}

// Compute the occlusion state based on the three voxels surrounding an exposed vertex:
//
// a b
// c .
//
// There are four occlusion states:
// 0 - fully enclosed
// 1 - two neighboring voxels
// 2 - one neighboring voxel
// 3 - fully exposed
uint vertex_occlusion(bool a, bool b, bool c) {
    return b && c ? 0 : (3 - uint(a) - uint(b) - uint(c));
}

// Compute the occlusion state for each vertex on a surface
uvec4 surface_occlusion(ivec3 voxel, uint axis, bool inward) {
    // U/V axes on this surface
    const ivec3 uvs[3][2] = {
        {{0, 1, 0}, {0, 0, 1}},
        {{0, 0, 1}, {1, 0, 0}},
        {{1, 0, 0}, {0, 1, 0}},
    };
    if (!inward) {
        voxel += neighbor_offset(axis);
    }
    ivec3 u = uvs[axis][0];
    ivec3 v = uvs[axis][1];
    // 0 1 2
    // 3 . 4
    // 5 6 7
    bool occluders[8] = {
        get_voxel(voxel - u - v) != 0,
        get_voxel(voxel     - v) != 0,
        get_voxel(voxel + u - v) != 0,
        get_voxel(voxel - u    ) != 0,
        get_voxel(voxel + u    ) != 0,
        get_voxel(voxel - u + v) != 0,
        get_voxel(voxel     + v) != 0,
        get_voxel(voxel + u + v) != 0,
    };
    return uvec4(
        vertex_occlusion(occluders[0], occluders[1], occluders[3]),
        vertex_occlusion(occluders[2], occluders[1], occluders[4]),
        vertex_occlusion(occluders[5], occluders[6], occluders[3]),
        vertex_occlusion(occluders[7], occluders[6], occluders[4])
    );
}

void main() {
    // Determine whether this thread generates a face
    Face info;
    bool has_face = find_face(info);
    // Number of faces in the subgroup
    uint subgroup_faces = subgroupAdd(uint(has_face));
    // Compute the starting storage offset for this subgroup
    uint subgroup_offset;
    if (subgroupElect()) {
        subgroup_offset = atomicAdd(face_count, subgroup_faces);
        // Increment the vertex count while we're at it, accounting for two triangles per face.
        atomicAdd(vertex_count, subgroup_faces * 6);
    }
    subgroup_offset = subgroupBroadcastFirst(subgroup_offset);
    if (!has_face) return;
    // Write the thread's face
    uint thread_offset = subgroupExclusiveAdd(uint(has_face));
    surfaces[subgroup_offset + thread_offset] = surface(
        info.voxel,
        info.axis,
        info.inward ^^ reverse_winding,
        info.material,
        surface_occlusion(info.voxel, info.axis, info.inward)
    );
}
================================================
FILE: client/shaders/surface-extraction/surface.h
================================================
#ifndef SURFACE_EXTRACTION_SURFACE_H_
#define SURFACE_EXTRACTION_SURFACE_H_
// A face between a voxel and its neighbor in the -X, -Y, or -Z direction
struct Surface {
// From most to least significant byte, (axis, z, y, x)
uint pos_axis;
// From most to least significant byte, (occlusion, <padding>, mat, mat)
uint occlusion_mat;
};
// [0,2^8)^3
uvec3 get_pos(Surface s) {
return uvec3(s.pos_axis & 0xFF, (s.pos_axis >> 8) & 0xFF, (s.pos_axis >> 16) & 0xFF);
}
// Identifies the order in which the vertices should be rendered. The vertex positions are the same,
// but winding and diagonal position vary. A flipped diagonal is used to ensure barycentric
// interpolation of ambient occlusion is isotropic, and does not affect texture coordinates.
//
// [0,3) are -X/-Y/-Z
// [3,6) are +X/+Y/+Z
// [6,9) are -X/-Y/-Z flipped
// [9,12) are +X/+Y/+Z flipped
uint get_axis(Surface s) {
return s.pos_axis >> 24;
}
uint get_mat(Surface s) {
return s.occlusion_mat & 0xFFFF;
}
float get_occlusion(Surface s, uvec2 texcoords) {
return float((s.occlusion_mat >> (24 + 2 * (texcoords.x | texcoords.y << 1))) & 0x03) / 3.0 * 0.95 + 0.05;
}
Surface surface(uvec3 pos, uint axis, bool reverse, uint mat, uvec4 occlusion) {
Surface result;
// Flip the quad if necessary to prevent the triangle dividing line from being parallel to the
// gradient of ambient occlusion, ensuring isotropy.
axis += 3 * uint(reverse) + 6 * uint(occlusion.y + occlusion.z > occlusion.x + occlusion.w);
result.pos_axis = pos.x | pos.y << 8 | pos.z << 16 | axis << 24;
result.occlusion_mat = mat | occlusion.x << 24 | occlusion.y << 26 | occlusion.z << 28 | occlusion.w << 30;
return result;
}
#endif
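The packing performed by `surface()` and the getters above round-trips losslessly. A Rust model of the bit layout (`pack`/`unpack` are hypothetical names, mirroring the shifts in surface.h):

```rust
// Rust model of the packed Surface layout from surface.h:
// pos_axis:      (axis, z, y, x) from most to least significant byte
// occlusion_mat: (occlusion, <padding>, mat, mat), with four 2-bit
//                occlusion values in the top byte
fn pack(pos: [u32; 3], axis: u32, mat: u32, occ: [u32; 4]) -> (u32, u32) {
    let pos_axis = pos[0] | pos[1] << 8 | pos[2] << 16 | axis << 24;
    let occlusion_mat = mat | occ[0] << 24 | occ[1] << 26 | occ[2] << 28 | occ[3] << 30;
    (pos_axis, occlusion_mat)
}

fn unpack(pos_axis: u32, occlusion_mat: u32) -> ([u32; 3], u32, u32, [u32; 4]) {
    // get_pos / get_axis / get_mat, plus the per-corner occlusion fields
    let pos = [pos_axis & 0xFF, (pos_axis >> 8) & 0xFF, (pos_axis >> 16) & 0xFF];
    let axis = pos_axis >> 24;
    let mat = occlusion_mat & 0xFFFF;
    let occ = [
        (occlusion_mat >> 24) & 0x03,
        (occlusion_mat >> 26) & 0x03,
        (occlusion_mat >> 28) & 0x03,
        (occlusion_mat >> 30) & 0x03,
    ];
    (pos, axis, mat, occ)
}
```

Note that `get_occlusion` then remaps each 2-bit value from [0, 3] to a brightness in [0.05, 1.0] via `x / 3.0 * 0.95 + 0.05`, so even a fully occluded corner is never rendered pitch black.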
================================================
FILE: client/shaders/voxels.frag
================================================
#version 450
layout(location = 0) in vec3 texcoords;
layout(location = 1) in float occlusion;
layout(location = 0) out vec4 color;
layout(set = 1, binding = 1) uniform sampler2DArray textures;
void main() {
color = texture(textures, texcoords) * occlusion;
}
================================================
FILE: client/shaders/voxels.vert
================================================
#version 460
#include "common.h"
#include "surface-extraction/surface.h"
// Maps from cube space ([0..1]^3) to local node space
layout(location = 0) in mat4 transform;
layout(location = 0) out vec3 texcoords_out;
layout(location = 1) out float occlusion;
layout(set = 1, binding = 0) readonly restrict buffer Surfaces {
Surface surfaces[];
};
layout(push_constant) uniform PushConstants {
uint dimension;
};
// Each set of 6 vertices makes a ring around the quad, with the middle and start/end vertices
// duplicated. Note that the -/+ sign only selects the winding of the face; every face lies in the
// coordinate plane through the origin regardless.
const uvec3 vertices[12][6] = {
{{0, 0, 0}, {0, 0, 1}, {0, 1, 1}, {0, 1, 1}, {0, 1, 0}, {0, 0, 0}}, // -X
{{0, 0, 0}, {1, 0, 0}, {1, 0, 1}, {1, 0, 1}, {0, 0, 1}, {0, 0, 0}}, // -Y
{{0, 0, 0}, {0, 1, 0}, {1, 1, 0}, {1, 1, 0}, {1, 0, 0}, {0, 0, 0}}, // -Z
{{0, 0, 0}, {0, 1, 0}, {0, 1, 1}, {0, 1, 1}, {0, 0, 1}, {0, 0, 0}}, // +X
{{0, 0, 0}, {0, 0, 1}, {1, 0, 1}, {1, 0, 1}, {1, 0, 0}, {0, 0, 0}}, // +Y
{{0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {1, 1, 0}, {0, 1, 0}, {0, 0, 0}}, // +Z
// Versions of the above rotated 90 degrees so the diagonal goes the other way, used to improve
// the consistency of barycentric interpolation of ambient occlusion
{{0, 0, 1}, {0, 1, 1}, {0, 1, 0}, {0, 1, 0}, {0, 0, 0}, {0, 0, 1}}, // -X
{{1, 0, 0}, {1, 0, 1}, {0, 0, 1}, {0, 0, 1}, {0, 0, 0}, {1, 0, 0}}, // -Y
{{0, 1, 0}, {1, 1, 0}, {1, 0, 0}, {1, 0, 0}, {0, 0, 0}, {0, 1, 0}}, // -Z
{{0, 0, 1}, {0, 0, 0}, {0, 1, 0}, {0, 1, 0}, {0, 1, 1}, {0, 0, 1}}, // +X
{{1, 0, 0}, {0, 0, 0}, {0, 0, 1}, {0, 0, 1}, {1, 0, 1}, {1, 0, 0}}, // +Y
{{0, 1, 0}, {0, 0, 0}, {1, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}} // +Z
};
const uvec2 texcoords[4][6] = {
{{0, 0}, {0, 1}, {1, 1}, {1, 1}, {1, 0}, {0, 0}},
{{0, 0}, {1, 0}, {1, 1}, {1, 1}, {0, 1}, {0, 0}},
// Rotated versions
{{0, 1}, {1, 1}, {1, 0}, {1, 0}, {0, 0}, {0, 1}},
{{0, 1}, {0, 0}, {1, 0}, {1, 0}, {1, 1}, {0, 1}},
};
void main() {
uint index = gl_VertexIndex / 6;
uint vertex = gl_VertexIndex % 6;
Surface s = surfaces[index];
uvec3 pos = get_pos(s);
uint axis = get_axis(s);
uvec2 uv = texcoords[axis / 3][vertex];
texcoords_out = vec3(uv, get_mat(s) - 1);
occlusion = get_occlusion(s, uv);
vec3 relative_coords = vertices[axis][vertex] + pos;
gl_Position = view_projection * transform * vec4(relative_coords / dimension, 1);
}
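The row of `vertices` selected by `get_axis(s)` encodes three things at once: the base axis, the winding reversal (+3), and the ambient-occlusion diagonal flip (+6) chosen in surface.h so the quad's dividing line never runs parallel to the occlusion gradient. A Rust sketch of that variant selection (`quad_variant` is a hypothetical name; the arithmetic mirrors `surface()`):

```rust
// Index into the 12-row vertex table: base axis 0..3, +3 for reversed
// winding, +6 for the flipped diagonal that keeps barycentric interpolation
// of ambient occlusion isotropic.
fn quad_variant(axis: u32, reverse: bool, occlusion: [u32; 4]) -> u32 {
    // Flip when the second diagonal carries more occlusion than the first
    let flip = occlusion[1] + occlusion[2] > occlusion[0] + occlusion[3];
    axis + 3 * reverse as u32 + 6 * flip as u32
}
```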
================================================
FILE: client/src/config.rs
================================================
use std::{
env, fs, io,
net::SocketAddr,
path::{Path, PathBuf},
sync::Arc,
};
use serde::Deserialize;
use tracing::{debug, error, info};
use common::{Anonymize, SimConfig, SimConfigRaw};
pub struct Config {
pub name: Arc<str>,
pub data_dirs: Vec<PathBuf>,
pub save: PathBuf,
pub chunk_load_parallelism: u32,
pub server: Option<SocketAddr>,
pub local_simulation: SimConfig,
}
impl Config {
pub fn load(dirs: &directories::ProjectDirs) -> Self {
// Future work: search $XDG_CONFIG_DIRS
let path = dirs.config_dir().join("client.toml");
// Read and parse config file
let RawConfig {
name,
data_dir,
save,
local_simulation,
chunk_load_parallelism,
server,
} = match fs::read(&path) {
Ok(data) => {
info!("found config at {}", path.anonymize().display());
match std::str::from_utf8(&data)
.map_err(anyhow::Error::from)
.and_then(|s| toml::from_str(s).map_err(anyhow::Error::from))
{
Ok(x) => x,
Err(e) => {
error!("failed to parse config: {}", e);
RawConfig::default()
}
}
}
Err(ref e) if e.kind() == io::ErrorKind::NotFound => {
info!("{} not found, using defaults", path.anonymize().display());
RawConfig::default()
}
Err(e) => {
error!(
"failed to read config: {}: {}",
path.anonymize().display(),
e
);
RawConfig::default()
}
};
let mut data_dirs = Vec::new();
if let Some(dir) = data_dir {
data_dirs.push(dir);
}
data_dirs.push(dirs.data_dir().into());
if let Ok(path) = env::current_exe()
&& let Some(dir) = path.parent()
{
data_dirs.push(dir.into());
}
#[cfg(feature = "use-repo-assets")]
{
data_dirs.push(
Path::new(env!("CARGO_MANIFEST_DIR"))
.parent()
.unwrap()
.join("assets"),
);
}
// Massage into final form
Config {
name: name.unwrap_or("player".into()),
data_dirs,
save: save.unwrap_or("default.save".into()),
chunk_load_parallelism: chunk_load_parallelism.unwrap_or(256),
server,
local_simulation: SimConfig::from_raw(&local_simulation),
}
}
pub fn find_asset(&self, path: &Path) -> Option<PathBuf> {
for dir in &self.data_dirs {
let full_path = dir.join(path);
if full_path.exists() {
debug!(path = ?path.anonymize().display(), dir = ?dir.anonymize().display(), "found asset");
return Some(full_path);
}
}
None
}
}
/// Data as parsed directly out of the config file
#[derive(Deserialize, Default)]
#[serde(deny_unknown_fields)]
struct RawConfig {
name: Option<Arc<str>>,
data_dir: Option<PathBuf>,
save: Option<PathBuf>,
chunk_load_parallelism: Option<u32>,
server: Option<SocketAddr>,
#[serde(default)]
local_simulation: SimConfigRaw,
}
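`RawConfig` above maps directly onto `client.toml`: every top-level field is optional, and `deny_unknown_fields` rejects anything unrecognized. A hypothetical example (all values illustrative, not defaults from the repository except where they match the `unwrap_or` fallbacks):

```toml
# Hypothetical client.toml; every field is optional.
name = "player"
data_dir = "/home/player/hypermine-assets"  # searched for assets before the default dirs
save = "default.save"
chunk_load_parallelism = 256
server = "203.0.113.7:29999"  # optional remote server address

# Overrides for the local simulation (SimConfigRaw); defaults apply when omitted.
[local_simulation]
```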
================================================
FILE: client/src/graphics/base.rs
================================================
//! Common state shared throughout the graphics system
use ash::ext::debug_utils;
use common::Anonymize;
use std::ffi::{CStr, c_char};
use std::path::PathBuf;
use std::sync::Arc;
use std::{fs, io};
use tracing::{error, info, trace, warn};
use ash::{Device, vk};
use super::Core;
/// Vulkan resources shared between many parts of the renderer
pub struct Base {
pub core: Arc<Core>,
/// The physical device (i.e. GPU) we're rendering with
pub physical: vk::PhysicalDevice,
/// The logical device, containing functions used for rendering
pub device: Arc<Device>,
/// The queue family we're rendering in
pub queue_family: u32,
/// The queue used for graphics and presentation
pub queue: vk::Queue,
/// Information about the types of device-visible memory that can be allocated
pub memory_properties: vk::PhysicalDeviceMemoryProperties,
/// Cache used to speed up graphics pipeline construction
pub pipeline_cache: vk::PipelineCache,
/// Context in which the main rendering work occurs
pub render_pass: vk::RenderPass,
/// A reasonable general-purpose texture sampler
pub linear_sampler: vk::Sampler,
/// Layout of common shader resources, such as the common uniform buffer
pub common_layout: vk::DescriptorSetLayout,
pub limits: vk::PhysicalDeviceLimits,
pub timestamp_bits: u32,
pipeline_cache_path: Option<PathBuf>,
debug_utils: Option<debug_utils::Device>,
}
unsafe impl Send for Base {}
unsafe impl Sync for Base {}
impl Drop for Base {
fn drop(&mut self) {
unsafe {
self.device
.destroy_pipeline_cache(self.pipeline_cache, None);
self.device.destroy_render_pass(self.render_pass, None);
self.device.destroy_sampler(self.linear_sampler, None);
self.device
.destroy_descriptor_set_layout(self.common_layout, None);
self.device.destroy_device(None);
}
}
}
impl Base {
pub fn new(
core: Arc<Core>,
pipeline_cache_path: Option<PathBuf>,
device_exts: &[&CStr],
mut device_filter: impl FnMut(vk::PhysicalDevice, u32) -> bool,
) -> Option<Self> {
let pipeline_cache_data = if let Some(ref path) = pipeline_cache_path {
match fs::read(path) {
Ok(x) => x,
Err(e) => {
if e.kind() == io::ErrorKind::NotFound {
info!("creating fresh pipeline cache");
} else {
warn!(path=%path.anonymize().display(), "failed to load pipeline cache: {}", e);
}
Vec::new()
}
}
} else {
Vec::new()
};
unsafe {
let instance = &core.instance;
// Select a physical device and queue family to use for rendering
let (physical, queue_family_index, queue_family_properties) = instance
.enumerate_physical_devices()
.unwrap()
.into_iter()
.find_map(|physical| {
instance
.get_physical_device_queue_family_properties(physical)
.into_iter()
.enumerate()
.filter_map(|(queue_family_index, info)| {
let supports_graphics_and_surface =
info.queue_flags.contains(vk::QueueFlags::GRAPHICS)
&& device_filter(physical, queue_family_index as u32);
if supports_graphics_and_surface {
Some((physical, queue_family_index as u32, info))
} else {
None
}
})
.next()
})?;
let mut subgroup_properties = vk::PhysicalDeviceSubgroupProperties::default();
let mut physical_properties = vk::PhysicalDeviceProperties2 {
p_next: &mut subgroup_properties as *mut _ as *mut _,
..Default::default()
};
instance.get_physical_device_properties2(physical, &mut physical_properties);
let name = std::str::from_utf8(
&*(&physical_properties.properties.device_name[..physical_properties
.properties
.device_name
.iter()
.position(|&x| x == 0)
.unwrap()] as *const [c_char] as *const [u8]),
)
.unwrap();
info!(name, "selected device");
if !subgroup_properties
.supported_operations
.contains(vk::SubgroupFeatureFlags::BALLOT | vk::SubgroupFeatureFlags::ARITHMETIC)
{
error!(
"required subgroup operations are unsupported (supported: {:?})",
subgroup_properties.supported_operations
);
return None;
}
// Create the logical device and common resources descended from it
let device_exts = device_exts.iter().map(|x| x.as_ptr()).collect::<Vec<_>>();
let device = Arc::new(
instance
.create_device(
physical,
&vk::DeviceCreateInfo::default()
.queue_create_infos(&[vk::DeviceQueueCreateInfo::default()
.queue_family_index(queue_family_index)
.queue_priorities(&[1.0])])
.enabled_extension_names(&device_exts)
.push_next(
&mut vk::PhysicalDeviceVulkan12Features::default()
.descriptor_binding_partially_bound(true)
.descriptor_binding_sampled_image_update_after_bind(true),
),
None,
)
.unwrap(),
);
let queue = device.get_device_queue(queue_family_index, 0);
let memory_properties = instance.get_physical_device_memory_properties(physical);
let pipeline_cache = device
.create_pipeline_cache(
&vk::PipelineCacheCreateInfo::default().initial_data(&pipeline_cache_data),
None,
)
.unwrap();
let render_pass = device
.create_render_pass(
&vk::RenderPassCreateInfo::default()
.attachments(&[
vk::AttachmentDescription {
format: COLOR_FORMAT,
samples: vk::SampleCountFlags::TYPE_1,
load_op: vk::AttachmentLoadOp::CLEAR,
store_op: vk::AttachmentStoreOp::STORE,
initial_layout: vk::ImageLayout::UNDEFINED,
final_layout: vk::ImageLayout::PRESENT_SRC_KHR,
..Default::default()
},
vk::AttachmentDescription {
format: vk::Format::D32_SFLOAT,
samples: vk::SampleCountFlags::TYPE_1,
load_op: vk::AttachmentLoadOp::CLEAR,
store_op: vk::AttachmentStoreOp::DONT_CARE,
initial_layout: vk::ImageLayout::UNDEFINED,
final_layout: vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
..Default::default()
},
])
.subpasses(&[
vk::SubpassDescription::default()
.color_attachments(&[vk::AttachmentReference {
attachment: 0,
layout: vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL,
}])
.depth_stencil_attachment(&vk::AttachmentReference {
attachment: 1,
layout: vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
})
.pipeline_bind_point(vk::PipelineBindPoint::GRAPHICS),
vk::SubpassDescription::default()
.color_attachments(&[vk::AttachmentReference {
attachment: 0,
layout: vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL,
}])
.input_attachments(&[vk::AttachmentReference {
attachment: 1,
layout: vk::ImageLayout::DEPTH_STENCIL_READ_ONLY_OPTIMAL,
}])
.pipeline_bind_point(vk::PipelineBindPoint::GRAPHICS),
])
.dependencies(&[
vk::SubpassDependency {
src_subpass: vk::SUBPASS_EXTERNAL,
dst_subpass: 0,
src_stage_mask: vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT
| vk::PipelineStageFlags::LATE_FRAGMENT_TESTS,
dst_stage_mask: vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT
| vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS,
dst_access_mask: vk::AccessFlags::COLOR_ATTACHMENT_READ
| vk::AccessFlags::COLOR_ATTACHMENT_WRITE
| vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_READ
| vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_WRITE,
..Default::default()
},
vk::SubpassDependency {
src_subpass: 0,
dst_subpass: 1,
src_stage_mask: vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS
| vk::PipelineStageFlags::LATE_FRAGMENT_TESTS, // depth write
dst_stage_mask: vk::PipelineStageFlags::FRAGMENT_SHADER, // subpass input
src_access_mask: vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_WRITE,
dst_access_mask: vk::AccessFlags::INPUT_ATTACHMENT_READ,
dependency_flags: vk::DependencyFlags::BY_REGION,
},
]),
None,
)
.unwrap();
let linear_sampler = device
.create_sampler(
&vk::SamplerCreateInfo::default()
.min_filter(vk::Filter::LINEAR)
.mag_filter(vk::Filter::LINEAR)
.mipmap_mode(vk::SamplerMipmapMode::NEAREST)
.address_mode_u(vk::SamplerAddressMode::CLAMP_TO_EDGE)
.address_mode_v(vk::SamplerAddressMode::CLAMP_TO_EDGE)
.address_mode_w(vk::SamplerAddressMode::CLAMP_TO_EDGE),
None,
)
.unwrap();
let common_layout = device
.create_descriptor_set_layout(
&vk::DescriptorSetLayoutCreateInfo::default().bindings(&[
// Uniforms
vk::DescriptorSetLayoutBinding {
binding: 0,
descriptor_type: vk::DescriptorType::UNIFORM_BUFFER,
descriptor_count: 1,
stage_flags: vk::ShaderStageFlags::VERTEX
| vk::ShaderStageFlags::FRAGMENT,
..Default::default()
},
// Depth buffer
vk::DescriptorSetLayoutBinding {
binding: 1,
descriptor_type: vk::DescriptorType::INPUT_ATTACHMENT,
descriptor_count: 1,
stage_flags: vk::ShaderStageFlags::FRAGMENT,
..Default::default()
},
]),
None,
)
.unwrap();
let debug_utils = core
.debug_utils
.as_ref()
.map(|_| debug_utils::Device::new(&core.instance, &device));
Some(Self {
core,
physical,
device,
queue_family: queue_family_index,
queue,
memory_properties,
pipeline_cache,
render_pass,
linear_sampler,
common_layout,
pipeline_cache_path,
limits: physical_properties.properties.limits,
timestamp_bits: queue_family_properties.timestamp_valid_bits,
debug_utils,
})
}
}
pub fn save_pipeline_cache(&self) {
let path = match self.pipeline_cache_path {
Some(ref x) => x,
None => return,
};
let data = unsafe {
self.device
.get_pipeline_cache_data(self.pipeline_cache)
.unwrap()
};
match fs::create_dir_all(path.parent().unwrap()).and_then(|()| fs::write(path, &data)) {
Ok(()) => {
trace!(len = data.len(), "wrote pipeline cache");
}
Err(e) => {
warn!(path=%path.anonymize().display(), "failed to save pipeline cache: {}", e);
}
}
}
/// Set an object's name for use in diagnostics
pub unsafe fn set_name<T: vk::Handle>(&self, object: T, name: &CStr) {
unsafe {
let Some(ref ex) = self.debug_utils else {
return;
};
ex.set_debug_utils_object_name(
&vk::DebugUtilsObjectNameInfoEXT::default()
.object_handle(object)
.object_name(name),
)
.unwrap();
}
}
/// Convenience constructor for tests and benchmarks
pub fn headless() -> Self {
let core = Core::new(&[]);
Self::new(Arc::new(core), None, &[], |_, _| true).unwrap()
}
}
/// The pixel format we render in
pub const COLOR_FORMAT: vk::Format = vk::Format::B8G8R8A8_SRGB;
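The device-name extraction in `Base::new` locates the terminating NUL by hand and casts the `c_char` slice. Since Rust 1.69 the same can be written with `CStr::from_bytes_until_nul`; a sketch (in real code Vulkan hands back `[c_char; 256]`, so a byte-slice cast is still needed first):

```rust
use std::ffi::CStr;

// Read a NUL-terminated device name out of a fixed-size byte array such as
// vk::PhysicalDeviceProperties::device_name, assuming valid UTF-8 contents.
fn device_name(raw: &[u8]) -> &str {
    CStr::from_bytes_until_nul(raw)
        .expect("device name must be NUL-terminated")
        .to_str()
        .expect("device name must be valid UTF-8")
}
```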
================================================
FILE: client/src/graphics/core.rs
================================================
use std::ffi::CStr;
use std::os::raw::c_char;
use std::os::raw::c_void;
use std::ptr;
use std::slice;
use ash::ext::debug_utils;
use ash::{Entry, Instance, vk};
use tracing::{debug, error, info, trace, warn};
use common::defer;
/// The most fundamental components of a Vulkan setup
pub struct Core {
/// Handle to the Vulkan dynamic library itself, used to bootstrap
pub entry: Entry,
/// The Vulkan instance, containing fundamental device-independent functions
pub instance: Instance,
/// Diagnostic infrastructure, configured if the environment supports them. Typically present
/// when the Vulkan validation layers are enabled or a graphics debugger is in use and absent
/// otherwise.
pub debug_utils: Option<debug_utils::Instance>,
messenger: vk::DebugUtilsMessengerEXT,
}
impl Drop for Core {
fn drop(&mut self) {
unsafe {
if let Some(ref utils) = self.debug_utils {
utils.destroy_debug_utils_messenger(self.messenger, None);
}
self.instance.destroy_instance(None);
}
}
}
impl Core {
pub fn new(exts: &[*const c_char]) -> Self {
unsafe {
let entry = Entry::load().unwrap();
let supported_exts = entry.enumerate_instance_extension_properties(None).unwrap();
let has_debug = supported_exts
.iter()
.any(|x| CStr::from_ptr(x.extension_name.as_ptr()) == debug_utils::NAME);
let mut exts = exts.to_vec();
if has_debug {
exts.push(debug_utils::NAME.as_ptr());
} else {
info!("vulkan debugging unavailable");
}
let instance_layers = entry.enumerate_instance_layer_properties().unwrap();
tracing::info!(
"Vulkan instance layers: {:?}",
instance_layers
.iter()
.map(|layer| CStr::from_ptr(layer.layer_name.as_ptr()).to_str().unwrap())
.collect::<Vec<_>>()
);
let name = cstr!("hypermine");
let app_info = vk::ApplicationInfo::default()
.application_name(name)
.application_version(0)
.engine_name(name)
.engine_version(0)
.api_version(vk::make_api_version(0, 1, 2, 0));
let mut instance_info = vk::InstanceCreateInfo::default()
.application_info(&app_info)
.enabled_extension_names(&exts);
let mut debug_utils_messenger_info = vk::DebugUtilsMessengerCreateInfoEXT::default()
.message_severity(
vk::DebugUtilsMessageSeverityFlagsEXT::ERROR
| vk::DebugUtilsMessageSeverityFlagsEXT::WARNING
| vk::DebugUtilsMessageSeverityFlagsEXT::INFO
| vk::DebugUtilsMessageSeverityFlagsEXT::VERBOSE,
)
.message_type(
vk::DebugUtilsMessageTypeFlagsEXT::GENERAL
| vk::DebugUtilsMessageTypeFlagsEXT::VALIDATION
| vk::DebugUtilsMessageTypeFlagsEXT::PERFORMANCE,
)
.pfn_user_callback(Some(messenger_callback))
.user_data(ptr::null_mut());
if has_debug {
instance_info = instance_info.push_next(&mut debug_utils_messenger_info);
}
let instance = entry.create_instance(&instance_info, None).unwrap();
// Guards ensure we clean up gracefully if something panics
let instance_guard = defer(|| instance.destroy_instance(None));
let debug_utils;
let messenger;
if has_debug {
// Configure Vulkan diagnostic message logging
let utils = debug_utils::Instance::new(&entry, &instance);
messenger = utils
.create_debug_utils_messenger(&debug_utils_messenger_info, None)
.unwrap();
debug_utils = Some(utils);
} else {
debug_utils = None;
messenger = vk::DebugUtilsMessengerEXT::null();
}
// Setup successful, don't destroy things.
instance_guard.cancel();
Self {
entry,
instance,
debug_utils,
messenger,
}
}
}
}
/// Callback invoked by Vulkan for diagnostic messages
///
/// We forward these to our `tracing` logging infrastructure.
unsafe extern "system" fn messenger_callback(
message_severity: vk::DebugUtilsMessageSeverityFlagsEXT,
_message_types: vk::DebugUtilsMessageTypeFlagsEXT,
p_data: *const vk::DebugUtilsMessengerCallbackDataEXT,
_p_user_data: *mut c_void,
) -> vk::Bool32 {
unsafe {
unsafe fn fmt_labels(ptr: *const vk::DebugUtilsLabelEXT, count: u32) -> String {
unsafe {
if count == 0 {
// We need to handle a count of 0 separately because ptr may be
// null, resulting in undefined behavior if used with
// slice::from_raw_parts.
return String::new();
}
slice::from_raw_parts(ptr, count as usize)
.iter()
.map(|label| {
CStr::from_ptr(label.p_label_name)
.to_string_lossy()
.into_owned()
})
.collect::<Vec<_>>()
.join(", ")
}
}
let data = &*p_data;
let msg_id = if data.p_message_id_name.is_null() {
"".into()
} else {
CStr::from_ptr(data.p_message_id_name).to_string_lossy()
};
let msg = CStr::from_ptr(data.p_message).to_string_lossy();
let queue_labels = fmt_labels(data.p_queue_labels, data.queue_label_count);
let cmd_labels = fmt_labels(data.p_cmd_buf_labels, data.cmd_buf_label_count);
let objects = slice::from_raw_parts(data.p_objects, data.object_count as usize)
.iter()
.map(|obj| {
if obj.p_object_name.is_null() {
format!("{:?} {:x}", obj.object_type, obj.object_handle)
} else {
format!(
"{:?} {:x} {}",
obj.object_type,
obj.object_handle,
CStr::from_ptr(obj.p_object_name).to_string_lossy()
)
}
})
.collect::<Vec<_>>()
.join(", ");
if message_severity >= vk::DebugUtilsMessageSeverityFlagsEXT::ERROR {
error!(target: "vulkan", id = %msg_id, number = data.message_id_number, queue_labels = %queue_labels, cmd_labels = %cmd_labels, objects = %objects, "{}", msg);
} else if message_severity >= vk::DebugUtilsMessageSeverityFlagsEXT::WARNING {
warn!(target: "vulkan", id = %msg_id, number = data.message_id_number, queue_labels = %queue_labels, cmd_labels = %cmd_labels, objects = %objects, "{}", msg);
} else if message_severity >= vk::DebugUtilsMessageSeverityFlagsEXT::INFO {
debug!(target: "vulkan", id = %msg_id, number = data.message_id_number, queue_labels = %queue_labels, cmd_labels = %cmd_labels, objects = %objects, "{}", msg);
} else {
trace!(target: "vulkan", id = %msg_id, number = data.message_id_number, queue_labels = %queue_labels, cmd_labels = %cmd_labels, objects = %objects, "{}", msg);
}
vk::FALSE
}
}
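The severity dispatch at the end of `messenger_callback` relies on Vulkan's severity bits being ordered by magnitude, so a plain `>=` comparison picks the right level. A Rust sketch with integer stand-ins for `vk::DebugUtilsMessageSeverityFlagsEXT` (the constants below match the values in the VK_EXT_debug_utils spec, where VERBOSE < INFO < WARNING < ERROR, but are reproduced here as assumptions):

```rust
// Stand-ins for VkDebugUtilsMessageSeverityFlagBitsEXT, ordered by magnitude.
const VERBOSE: u32 = 0x0001;
const INFO: u32 = 0x0010;
const WARNING: u32 = 0x0100;
const ERROR: u32 = 0x1000;

// Mirror the callback's >= chain: higher bits mean more severe messages.
fn level(severity: u32) -> &'static str {
    if severity >= ERROR {
        "error"
    } else if severity >= WARNING {
        "warn"
    } else if severity >= INFO {
        "debug" // Vulkan INFO messages are chatty, so they log at debug
    } else {
        "trace"
    }
}
```

This is why the callback logs Vulkan's INFO severity at `debug!` rather than `info!`: the driver emits INFO-level messages far too frequently for the application's own info channel.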
================================================
FILE: client/src/graphics/draw.rs
================================================
use std::sync::Arc;
use std::time::Instant;
use ash::vk;
use common::traversal;
use lahar::Staged;
use metrics::histogram;
use super::{Base, Fog, Frustum, GltfScene, Meshes, Voxels, fog, voxels};
use crate::{Asset, Config, Loader, Sim};
use common::SimConfig;
use common::proto::{Character, Position};
/// Manages rendering, independent of what is being rendered to
pub struct Draw {
gfx: Arc<Base>,
cfg: Arc<Config>,
/// Used to allocate the command buffers we render with
cmd_pool: vk::CommandPool,
/// Allows accurate frame timing information to be recorded
timestamp_pool: vk::QueryPool,
/// State that varies per frame in flight
states: Vec<State>,
/// The index of the next element of `states` to use
next_state: usize,
/// A reference time
epoch: Instant,
/// The lowest common denominator between the interfaces of our graphics pipelines
///
/// Represents e.g. the binding for common uniforms
common_pipeline_layout: vk::PipelineLayout,
/// Descriptor pool from which descriptor sets shared between many pipelines are allocated
common_descriptor_pool: vk::DescriptorPool,
/// Drives async asset loading
loader: Loader,
//
// Rendering pipelines
//
/// Populated after connect, once the voxel configuration is known
voxels: Option<Voxels>,
meshes: Meshes,
fog: Fog,
/// Reusable storage for barriers that prevent races between image upload and read
image_barriers: Vec<vk::ImageMemoryBarrier<'static>>,
/// Reusable storage for barriers that prevent races between buffer upload and read
buffer_barriers: Vec<vk::BufferMemoryBarrier<'static>>,
/// Yakui Vulkan context
yakui_vulkan: yakui_vulkan::YakuiVulkan,
/// Miscellany
character_model: Asset<GltfScene>,
}
/// Maximum number of simultaneous frames in flight
const PIPELINE_DEPTH: u32 = 2;
const TIMESTAMPS_PER_FRAME: u32 = 3;
impl Draw {
pub fn new(gfx: Arc<Base>, cfg: Arc<Config>) -> Self {
let device = &*gfx.device;
unsafe {
// Allocate a command buffer for each frame state
let cmd_pool = device
.create_command_pool(
&vk::CommandPoolCreateInfo::default()
.queue_family_index(gfx.queue_family)
.flags(
vk::CommandPoolCreateFlags::RESET_COMMAND_BUFFER
| vk::CommandPoolCreateFlags::TRANSIENT,
),
None,
)
.unwrap();
let cmds = device
.allocate_command_buffers(
&vk::CommandBufferAllocateInfo::default()
.command_pool(cmd_pool)
.command_buffer_count(2 * PIPELINE_DEPTH),
)
.unwrap();
let timestamp_pool = device
.create_query_pool(
&vk::QueryPoolCreateInfo::default()
.query_type(vk::QueryType::TIMESTAMP)
.query_count(TIMESTAMPS_PER_FRAME * PIPELINE_DEPTH),
None,
)
.unwrap();
gfx.set_name(timestamp_pool, cstr!("timestamp pool"));
let common_pipeline_layout = device
.create_pipeline_layout(
&vk::PipelineLayoutCreateInfo::default().set_layouts(&[gfx.common_layout]),
None,
)
.unwrap();
// Allocate descriptor sets for data used by all graphics pipelines (e.g. common
// uniforms)
let common_descriptor_pool = device
.create_descriptor_pool(
&vk::DescriptorPoolCreateInfo::default()
.max_sets(PIPELINE_DEPTH)
.pool_sizes(&[
vk::DescriptorPoolSize {
ty: vk::DescriptorType::UNIFORM_BUFFER,
descriptor_count: PIPELINE_DEPTH,
},
vk::DescriptorPoolSize {
ty: vk::DescriptorType::INPUT_ATTACHMENT,
descriptor_count: PIPELINE_DEPTH,
},
]),
None,
)
.unwrap();
let common_ds = device
.allocate_descriptor_sets(
&vk::DescriptorSetAllocateInfo::default()
.descriptor_pool(common_descriptor_pool)
.set_layouts(&vec![gfx.common_layout; PIPELINE_DEPTH as usize]),
)
.unwrap();
let mut loader = Loader::new(cfg.clone(), gfx.clone());
// Construct the per-frame states
let states = cmds
.chunks(2)
.zip(common_ds)
.map(|(cmds, common_ds)| {
let uniforms = Staged::new(
device,
&gfx.memory_properties,
vk::BufferUsageFlags::UNIFORM_BUFFER,
);
device.update_descriptor_sets(
&[vk::WriteDescriptorSet::default()
.dst_set(common_ds)
.dst_binding(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.buffer_info(&[vk::DescriptorBufferInfo {
buffer: uniforms.buffer(),
offset: 0,
range: vk::WHOLE_SIZE,
}])],
&[],
);
let x = State {
cmd: cmds[0],
post_cmd: cmds[1],
common_ds,
image_acquired: device.create_semaphore(&Default::default(), None).unwrap(),
fence: device
.create_fence(
&vk::FenceCreateInfo::default()
.flags(vk::FenceCreateFlags::SIGNALED),
None,
)
.unwrap(),
uniforms,
used: false,
in_flight: false,
voxels: None,
};
gfx.set_name(x.cmd, cstr!("frame"));
gfx.set_name(x.post_cmd, cstr!("post-frame"));
gfx.set_name(x.image_acquired, cstr!("image acquired"));
gfx.set_name(x.fence, cstr!("render complete"));
gfx.set_name(x.uniforms.buffer(), cstr!("uniforms"));
x
})
.collect();
let meshes = Meshes::new(&gfx, loader.ctx().mesh_ds_layout);
let fog = Fog::new(&gfx);
gfx.save_pipeline_cache();
let mut yakui_vulkan_options = yakui_vulkan::Options::default();
yakui_vulkan_options.render_pass = gfx.render_pass;
yakui_vulkan_options.subpass = 1;
let mut yakui_vulkan = yakui_vulkan::YakuiVulkan::new(
&yakui_vulkan::VulkanContext::new(device, gfx.queue, gfx.memory_properties),
yakui_vulkan_options,
);
for _ in 0..PIPELINE_DEPTH {
yakui_vulkan.transfers_submitted();
}
let character_model = loader.load(
"character model",
super::GlbFile {
path: "character.glb".into(),
},
);
Self {
gfx,
cfg,
cmd_pool,
timestamp_pool,
states,
next_state: 0,
epoch: Instant::now(),
common_pipeline_layout,
common_descriptor_pool,
loader,
voxels: None,
meshes,
fog,
buffer_barriers: Vec::new(),
image_barriers: Vec::new(),
yakui_vulkan,
character_model,
}
}
}
/// Called with server-defined world parameters once they're known
pub fn configure(&mut self, cfg: &SimConfig) {
let voxels = Voxels::new(
&self.gfx,
self.cfg.clone(),
&mut self.loader,
u32::from(cfg.chunk_size),
PIPELINE_DEPTH,
);
for state in &mut self.states {
state.voxels = Some(voxels::Frame::new(&self.gfx, &voxels));
}
self.voxels = Some(voxels);
}
/// Waits for a frame's worth of resources to become available for use in rendering a new frame
///
/// Call before signaling the image_acquired semaphore or invoking `draw`.
pub unsafe fn wait(&mut self) {
unsafe {
let device = &*self.gfx.device;
let state = &mut self.states[self.next_state];
device.wait_for_fences(&[state.fence], true, !0).unwrap();
self.yakui_vulkan
.transfers_finished(&yakui_vulkan::VulkanContext::new(
device,
self.gfx.queue,
self.gfx.memory_properties,
));
state.in_flight = false;
}
}
/// Semaphore that must be signaled when an output framebuffer can be rendered to
///
/// Don't signal until after `wait`ing; call before `draw`
pub fn image_acquired(&self) -> vk::Semaphore {
self.states[self.next_state].image_acquired
}
/// Submit commands to the GPU to draw a frame
///
/// `framebuffer` must have a color and depth buffer attached and have the dimensions specified
/// in `extent`. The `present` semaphore is signaled when rendering is complete and the color
/// image can be presented.
///
/// Submits commands that wait on `image_acquired` before writing to `framebuffer`'s color
/// attachment.
#[allow(clippy::too_many_arguments)] // Every argument is of a different type, making this less of a problem.
pub unsafe fn draw(
&mut self,
mut sim: Option<&mut Sim>,
yakui_paint_dom: &yakui::paint::PaintDom,
framebuffer: vk::Framebuffer,
depth_view: vk::ImageView,
extent: vk::Extent2D,
present: vk::Semaphore,
frustum: &Frustum,
) {
unsafe {
let draw_started = Instant::now();
let view = sim.as_ref().map_or_else(Position::origin, |sim| sim.view());
let projection = frustum.projection(1.0e-4);
let view_projection = projection.matrix() * na::Matrix4::from(view.local.inverse());
self.loader.drive();
let device = &*self.gfx.device;
let state_index = self.next_state;
let state = &mut self.states[self.next_state];
let cmd = state.cmd;
let yakui_vulkan_context = yakui_vulkan::VulkanContext::new(
device,
self.gfx.queue,
self.gfx.memory_properties,
);
// We're using this state again, so put the fence back in the unsignaled state and compute
// the next frame to use
device.reset_fences(&[state.fence]).unwrap();
self.next_state = (self.next_state + 1) % PIPELINE_DEPTH as usize;
// Set up framebuffer attachments
device.update_descriptor_sets(
&[vk::WriteDescriptorSet::default()
.dst_set(state.common_ds)
.dst_binding(1)
.descriptor_type(vk::DescriptorType::INPUT_ATTACHMENT)
.image_info(&[vk::DescriptorImageInfo {
sampler: vk::Sampler::null(),
image_view: depth_view,
image_layout: vk::ImageLayout::DEPTH_STENCIL_READ_ONLY_OPTIMAL,
}])],
&[],
);
// Handle completed queries
let first_query = state_index as u32 * TIMESTAMPS_PER_FRAME;
if state.used {
// Collect timestamps from the last time we drew this frame
let mut queries = [0u64; TIMESTAMPS_PER_FRAME as usize];
// `WAIT` is guaranteed not to block here because `Self::draw` is only called after
// `Self::wait` ensures that the prior instance of this frame is complete.
device
.get_query_pool_results(
self.timestamp_pool,
first_query,
&mut queries,
vk::QueryResultFlags::TYPE_64 | vk::QueryResultFlags::WAIT,
)
.unwrap();
let draw_seconds = self.gfx.limits.timestamp_period as f64
* 1e-9
* (queries[1] - queries[0]) as f64;
let after_seconds = self.gfx.limits.timestamp_period as f64
* 1e-9
* (queries[2] - queries[1]) as f64;
histogram!("frame.gpu.draw").record(draw_seconds);
histogram!("frame.gpu.after_draw").record(after_seconds);
}
device
.begin_command_buffer(
cmd,
&vk::CommandBufferBeginInfo::default()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT),
)
.unwrap();
device
.begin_command_buffer(
state.post_cmd,
&vk::CommandBufferBeginInfo::default()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT),
)
.unwrap();
device.cmd_reset_query_pool(
cmd,
self.timestamp_pool,
first_query,
TIMESTAMPS_PER_FRAME,
);
let mut timestamp_index = first_query;
device.cmd_write_timestamp(
cmd,
vk::PipelineStageFlags::BOTTOM_OF_PIPE,
self.timestamp_pool,
timestamp_index,
);
timestamp_index += 1;
self.yakui_vulkan
.transfer(yakui_paint_dom, &yakui_vulkan_context, cmd);
// Schedule transfer of uniform data. Note that we defer actually preparing the data to just
// before submitting the command buffer so time-sensitive values can be set with minimum
// latency.
state.uniforms.record_transfer(device, cmd);
self.buffer_barriers.push(
vk::BufferMemoryBarrier::default()
.src_access_mask(vk::AccessFlags::TRANSFER_WRITE)
.dst_access_mask(vk::AccessFlags::UNIFORM_READ)
.buffer(state.uniforms.buffer())
.size(vk::WHOLE_SIZE),
);
let nearby_nodes_started = Instant::now();
let nearby_nodes = if let Some(sim) = sim.as_deref() {
traversal::nearby_nodes(&sim.graph, &view, self.cfg.local_simulation.view_distance)
} else {
vec![]
};
histogram!("frame.cpu.nearby_nodes").record(nearby_nodes_started.elapsed());
if let (Some(voxels), Some(sim)) = (self.voxels.as_mut(), sim.as_mut()) {
voxels.prepare(
device,
state.voxels.as_mut().unwrap(),
sim,
&nearby_nodes,
state.post_cmd,
frustum,
);
}
// Ensure reads of just-transferred memory wait until it's ready
device.cmd_pipeline_barrier(
cmd,
vk::PipelineStageFlags::TRANSFER,
vk::PipelineStageFlags::VERTEX_SHADER | vk::PipelineStageFlags::FRAGMENT_SHADER,
vk::DependencyFlags::default(),
&[],
&self.buffer_barriers,
&self.image_barriers,
);
self.buffer_barriers.clear();
self.image_barriers.clear();
device.cmd_begin_render_pass(
cmd,
&vk::RenderPassBeginInfo::default()
.render_pass(self.gfx.render_pass)
.framebuffer(framebuffer)
.render_area(vk::Rect2D {
offset: vk::Offset2D::default(),
extent,
})
.clear_values(&[
vk::ClearValue {
color: vk::ClearColorValue {
float32: [0.0, 0.0, 0.0, 0.0],
},
},
vk::ClearValue {
depth_stencil: vk::ClearDepthStencilValue {
depth: 0.0,
stencil: 0,
},
},
]),
vk::SubpassContents::INLINE,
);
// Set up common dynamic state
let viewports = [vk::Viewport {
x: 0.0,
y: 0.0,
width: extent.width as f32,
height: extent.height as f32,
min_depth: 0.0,
max_depth: 1.0,
}];
let scissors = [vk::Rect2D {
offset: vk::Offset2D { x: 0, y: 0 },
extent: vk::Extent2D {
width: extent.width,
height: extent.height,
},
}];
device.cmd_set_viewport(cmd, 0, &viewports);
device.cmd_set_scissor(cmd, 0, &scissors);
// Record the actual rendering commands
if let Some(ref mut voxels) = self.voxels {
voxels.draw(
device,
&self.loader,
state.common_ds,
state.voxels.as_ref().unwrap(),
cmd,
);
}
if let Some(sim) = sim.as_deref() {
for (node, transform) in nearby_nodes {
for &entity in sim.graph_entities.get(node) {
if sim.local_character == Some(entity) {
// Don't draw ourselves
continue;
}
let pos = sim
.world
.get::<&Position>(entity)
.expect("positionless entity in graph");
if let Some(character_model) = self.loader.get(self.character_model)
&& let Ok(ch) = sim.world.get::<&Character>(entity)
{
let transform = na::Matrix4::from(transform * pos.local)
* na::Matrix4::new_scaling(sim.cfg().meters_to_absolute)
* ch.state.orientation.to_homogeneous();
for mesh in &character_model.0 {
self.meshes
.draw(device, state.common_ds, cmd, mesh, &transform);
}
}
}
}
}
device.cmd_next_subpass(cmd, vk::SubpassContents::INLINE);
self.fog.draw(device, state.common_ds, cmd);
self.yakui_vulkan
.paint(yakui_paint_dom, &yakui_vulkan_context, cmd, extent);
// Finish up
device.cmd_end_render_pass(cmd);
device.cmd_write_timestamp(
cmd,
vk::PipelineStageFlags::BOTTOM_OF_PIPE,
self.timestamp_pool,
timestamp_index,
);
timestamp_index += 1;
device.end_command_buffer(cmd).unwrap();
device.cmd_write_timestamp(
state.post_cmd,
vk::PipelineStageFlags::BOTTOM_OF_PIPE,
self.timestamp_pool,
timestamp_index,
);
device.end_command_buffer(state.post_cmd).unwrap();
// Specify the uniform data before actually submitting the command to transfer it
state.uniforms.write(Uniforms {
view_projection,
inverse_projection: *projection.inverse().matrix(),
fog_density: fog::density(self.cfg.local_simulation.fog_distance, 1e-3, 5.0),
time: self.epoch.elapsed().as_secs_f32().fract(),
});
// Submit the commands to the GPU
device
.queue_submit(
self.gfx.queue,
&[
vk::SubmitInfo::default()
.command_buffers(&[cmd])
.wait_semaphores(&[state.image_acquired])
.wait_dst_stage_mask(&[vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT])
.signal_semaphores(&[present]),
vk::SubmitInfo::default().command_buffers(&[state.post_cmd]),
],
state.fence,
)
.unwrap();
self.yakui_vulkan.transfers_submitted();
state.used = true;
state.in_flight = true;
histogram!("frame.cpu").record(draw_started.elapsed());
}
}
/// Wait for all drawing to complete
///
/// Useful to e.g. ensure it's safe to deallocate an image that's being rendered to
pub fn wait_idle(&self) {
let device = &*self.gfx.device;
for state in &self.states {
unsafe {
device.wait_for_fences(&[state.fence], true, !0).unwrap();
}
}
}
}
impl Drop for Draw {
fn drop(&mut self) {
let device = &*self.gfx.device;
unsafe {
for state in &mut self.states {
if state.in_flight {
device.wait_for_fences(&[state.fence], true, !0).unwrap();
state.in_flight = false;
}
device.destroy_semaphore(state.image_acquired, None);
device.destroy_fence(state.fence, None);
state.uniforms.destroy(device);
if let Some(mut voxels) = state.voxels.take() {
voxels.destroy(device);
}
}
self.yakui_vulkan.cleanup(&self.gfx.device);
device.destroy_command_pool(self.cmd_pool, None);
device.destroy_query_pool(self.timestamp_pool, None);
device.destroy_descriptor_pool(self.common_descriptor_pool, None);
device.destroy_pipeline_layout(self.common_pipeline_layout, None);
self.fog.destroy(device);
self.meshes.destroy(device);
if let Some(mut voxels) = self.voxels.take() {
voxels.destroy(device);
}
}
}
}
struct State {
/// Semaphore signaled by someone else to indicate that output to the framebuffer can begin
image_acquired: vk::Semaphore,
/// Fence signaled when this state is no longer in use
fence: vk::Fence,
/// Command buffer we record the frame's rendering onto
cmd: vk::CommandBuffer,
/// Work performed after rendering, overlapping with the next frame's CPU work
post_cmd: vk::CommandBuffer,
/// Descriptor set for graphics-pipeline-independent data
common_ds: vk::DescriptorSet,
/// The common uniform buffer
uniforms: Staged<Uniforms>,
/// Whether this state has been previously used
///
/// Indicates that e.g. valid timestamps are associated with this state's queries
used: bool,
/// Whether this state is currently being accessed by the GPU
///
/// True for the period between `cmd` being submitted and `fence` being waited.
in_flight: bool,
// Per-pipeline states
voxels: Option<voxels::Frame>,
}
/// Data stored in the common uniform buffer
///
/// Alignment and padding must be manually managed to match the std140 ABI as expected by the
/// shaders.
#[repr(C)]
#[derive(Copy, Clone)]
struct Uniforms {
/// Camera projection matrix
view_projection: na::Matrix4<f32>,
inverse_projection: na::Matrix4<f32>,
fog_density: f32,
/// Cycles through [0,1) once per second for simple animation effects
time: f32,
}
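The std140 caveat above is easy to get wrong silently. A standalone sketch (not part of the crate; the struct is redeclared here with plain arrays in place of `na::Matrix4`) of how the expected shader offsets could be pinned down with `core::mem::offset_of!`:

```rust
// Mirror of the `Uniforms` struct above, using plain arrays so this
// sketch compiles without nalgebra. Offsets must match the shader's
// std140 block: mat4 at 0 and 64, floats at 128 and 132.
use core::mem::{offset_of, size_of};

#[repr(C)]
#[derive(Copy, Clone)]
struct Uniforms {
    view_projection: [[f32; 4]; 4],    // mat4, offset 0
    inverse_projection: [[f32; 4]; 4], // mat4, offset 64
    fog_density: f32,                  // float, offset 128
    time: f32,                         // float, offset 132
}

fn main() {
    assert_eq!(offset_of!(Uniforms, inverse_projection), 64);
    assert_eq!(offset_of!(Uniforms, fog_density), 128);
    assert_eq!(offset_of!(Uniforms, time), 132);
    assert_eq!(size_of::<Uniforms>(), 136);
    println!("layout matches std140 expectations");
}
```

Checks like these could live in a unit test so a reordered or added field fails loudly instead of corrupting uniform reads.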
================================================
FILE: client/src/graphics/fog.rs
================================================
use ash::{Device, vk};
use vk_shader_macros::include_glsl;
use super::Base;
use common::defer;
const VERT: &[u32] = include_glsl!("shaders/fullscreen.vert");
const FRAG: &[u32] = include_glsl!("shaders/fog.frag");
pub struct Fog {
pipeline_layout: vk::PipelineLayout,
pipeline: vk::Pipeline,
}
impl Fog {
pub fn new(gfx: &Base) -> Self {
let device = &*gfx.device;
unsafe {
// Construct the shader modules
let vert = device
.create_shader_module(&vk::ShaderModuleCreateInfo::default().code(VERT), None)
.unwrap();
// Note that these only need to live until the pipeline itself is constructed
let v_guard = defer(|| device.destroy_shader_module(vert, None));
let frag = device
.create_shader_module(&vk::ShaderModuleCreateInfo::default().code(FRAG), None)
.unwrap();
let f_guard = defer(|| device.destroy_shader_module(frag, None));
// Define the outward-facing interface of the shaders, incl. uniforms, samplers, etc.
let pipeline_layout = device
.create_pipeline_layout(
&vk::PipelineLayoutCreateInfo::default().set_layouts(&[gfx.common_layout]),
None,
)
.unwrap();
let entry_point = cstr!("main").as_ptr();
let mut pipelines = device
.create_graphics_pipelines(
gfx.pipeline_cache,
&[vk::GraphicsPipelineCreateInfo::default()
.stages(&[
vk::PipelineShaderStageCreateInfo {
stage: vk::ShaderStageFlags::VERTEX,
module: vert,
p_name: entry_point,
..Default::default()
},
vk::PipelineShaderStageCreateInfo {
stage: vk::ShaderStageFlags::FRAGMENT,
module: frag,
p_name: entry_point,
..Default::default()
},
])
.vertex_input_state(&vk::PipelineVertexInputStateCreateInfo::default())
.input_assembly_state(
&vk::PipelineInputAssemblyStateCreateInfo::default()
.topology(vk::PrimitiveTopology::TRIANGLE_LIST),
)
.viewport_state(
&vk::PipelineViewportStateCreateInfo::default()
.scissor_count(1)
.viewport_count(1),
)
.rasterization_state(
&vk::PipelineRasterizationStateCreateInfo::default()
.cull_mode(vk::CullModeFlags::NONE)
.polygon_mode(vk::PolygonMode::FILL)
.line_width(1.0),
)
.multisample_state(
&vk::PipelineMultisampleStateCreateInfo::default()
.rasterization_samples(vk::SampleCountFlags::TYPE_1),
)
.depth_stencil_state(
&vk::PipelineDepthStencilStateCreateInfo::default()
.depth_test_enable(false)
.depth_write_enable(false),
)
.color_blend_state(
&vk::PipelineColorBlendStateCreateInfo::default().attachments(&[
vk::PipelineColorBlendAttachmentState {
blend_enable: vk::TRUE,
src_color_blend_factor: vk::BlendFactor::ONE_MINUS_SRC_ALPHA,
dst_color_blend_factor: vk::BlendFactor::SRC_ALPHA,
color_blend_op: vk::BlendOp::ADD,
color_write_mask: vk::ColorComponentFlags::R
| vk::ColorComponentFlags::G
| vk::ColorComponentFlags::B,
..Default::default()
},
]),
)
.dynamic_state(
&vk::PipelineDynamicStateCreateInfo::default().dynamic_states(&[
vk::DynamicState::VIEWPORT,
vk::DynamicState::SCISSOR,
]),
)
.layout(pipeline_layout)
.render_pass(gfx.render_pass)
.subpass(1)],
None,
)
.unwrap()
.into_iter();
let pipeline = pipelines.next().unwrap();
gfx.set_name(pipeline, cstr!("fog"));
// Clean up the shaders explicitly, so the defer guards don't hold onto references we're
// moving into `Self` to be returned
v_guard.invoke();
f_guard.invoke();
Self {
pipeline_layout,
pipeline,
}
}
}
pub unsafe fn draw(
&mut self,
device: &Device,
common_ds: vk::DescriptorSet,
cmd: vk::CommandBuffer,
) {
unsafe {
device.cmd_bind_pipeline(cmd, vk::PipelineBindPoint::GRAPHICS, self.pipeline);
device.cmd_bind_descriptor_sets(
cmd,
vk::PipelineBindPoint::GRAPHICS,
self.pipeline_layout,
0,
&[common_ds],
&[],
);
device.cmd_draw(cmd, 3, 1, 0, 0);
}
}
pub unsafe fn destroy(&mut self, device: &Device) {
unsafe {
device.destroy_pipeline(self.pipeline, None);
device.destroy_pipeline_layout(self.pipeline_layout, None);
}
}
}
/// Compute the density value that will lead to a certain transmission from points at a certain
/// distance, for a certain fog exponent
pub fn density(distance: f32, transmission: f32, exponent: f32) -> f32 {
transmission.recip().ln().powf(exponent.recip()) / distance
}
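`density` inverts the fog transmittance curve `T(x) = exp(-(d*x)^e)` for `d`. A quick standalone round-trip check (the `transmission` helper is hypothetical, written here only to verify the inversion):

```rust
// Same formula as `fog::density` above.
fn density(distance: f32, transmission: f32, exponent: f32) -> f32 {
    transmission.recip().ln().powf(exponent.recip()) / distance
}

// Hypothetical forward model: fraction of light surviving `distance`.
fn transmission(density: f32, distance: f32, exponent: f32) -> f32 {
    (-(density * distance).powf(exponent)).exp()
}

fn main() {
    // With the density chosen for 1e-3 transmission at distance 10,
    // the forward model should reproduce that transmission.
    let d = density(10.0, 1e-3, 5.0);
    let t = transmission(d, 10.0, 5.0);
    assert!((t - 1e-3).abs() < 1e-5);
    println!("density = {d}, round-trip transmission = {t}");
}
```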
================================================
FILE: client/src/graphics/frustum.rs
================================================
use common::math::{MDirection, MPoint};
#[derive(Debug, Copy, Clone)]
pub struct Frustum {
pub left: f32,
pub right: f32,
pub down: f32,
pub up: f32,
}
impl Frustum {
/// Construct a symmetric frustum from a vertical FoV (half-angle, in radians) and an aspect
/// ratio (width / height)
pub fn from_vfov(vfov: f32, aspect_ratio: f32) -> Self {
let hfov = (aspect_ratio * vfov.tan()).atan();
Self {
left: -hfov,
right: hfov,
down: -vfov,
up: vfov,
}
}
#[rustfmt::skip]
/// Compute right-handed y-up inverse Z perspective projection matrix with far plane at 1.0
///
/// This projection is applied to Beltrami-Klein vertices, which fall within a ball of radius 1
/// around the viewpoint, so a far plane of 1.0 gives us ideal distribution of depth precision.
pub fn projection(&self, znear: f32) -> na::Projective3<f32> {
// Based on http://dev.theomader.com/depth-precision/ (broken link) + OpenVR docs
// Additional context at https://developer.nvidia.com/content/depth-precision-visualized
let zfar = 1.0;
let left = self.left.tan();
let right = self.right.tan();
let down = self.down.tan();
let up = self.up.tan();
let idx = 1.0 / (right - left);
let idy = 1.0 / (down - up);
let sx = right + left;
let sy = down + up;
// For an infinite far plane instead, za = 0 and zb = znear
let za = -znear / (znear - zfar);
let zb = -(znear * zfar) / (znear - zfar);
na::Projective3::from_matrix_unchecked(
na::Matrix4::new(
2.0 * idx, 0.0, sx * idx, 0.0,
0.0, 2.0 * idy, sy * idy, 0.0,
0.0, 0.0, za, zb,
0.0, 0.0, -1.0, 0.0))
}
pub fn planes(&self) -> FrustumPlanes {
FrustumPlanes {
left: MDirection::from(
na::UnitQuaternion::from_axis_angle(&na::Vector3::y_axis(), self.left)
* -na::Vector3::x_axis(),
),
right: MDirection::from(
na::UnitQuaternion::from_axis_angle(&na::Vector3::y_axis(), self.right)
* na::Vector3::x_axis(),
),
down: MDirection::from(
na::UnitQuaternion::from_axis_angle(&na::Vector3::x_axis(), self.down)
* na::Vector3::y_axis(),
),
up: MDirection::from(
na::UnitQuaternion::from_axis_angle(&na::Vector3::x_axis(), self.up)
* -na::Vector3::y_axis(),
),
}
}
}
#[derive(Debug, Copy, Clone)]
pub struct FrustumPlanes {
left: MDirection<f32>,
right: MDirection<f32>,
down: MDirection<f32>,
up: MDirection<f32>,
}
impl FrustumPlanes {
pub fn contain(&self, point: &MPoint<f32>, radius: f32) -> bool {
for &plane in &[&self.left, &self.right, &self.down, &self.up] {
if plane.mip(point).asinh() < -radius {
return false;
}
}
true
}
}
#[cfg(test)]
mod tests {
use super::*;
use common::math::MIsometry;
use std::f32;
#[test]
fn planes_sanity() {
// 90 degree square
let planes = Frustum::from_vfov(f32::consts::FRAC_PI_4, 1.0).planes();
assert!(planes.contain(&MPoint::origin(), 0.1));
assert!(planes.contain(
&(MIsometry::translation_along(&-na::Vector3::z()) * MPoint::origin()),
0.0
));
assert!(!planes.contain(
&(MIsometry::translation_along(&na::Vector3::z()) * MPoint::origin()),
0.0
));
assert!(!planes.contain(
&(MIsometry::translation_along(&na::Vector3::x()) * MPoint::origin()),
0.0
));
assert!(!planes.contain(
&(MIsometry::translation_along(&na::Vector3::y()) * MPoint::origin()),
0.0
));
assert!(!planes.contain(
&(MIsometry::translation_along(&-na::Vector3::x()) * MPoint::origin()),
0.0
));
assert!(!planes.contain(
&(MIsometry::translation_along(&-na::Vector3::y()) * MPoint::origin()),
0.0
));
}
}
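The reversed-Z behavior of `projection` can be sanity-checked in isolation: a point on the near plane should land at depth 1.0 and a point on the far plane (the unit ball boundary) at depth 0.0. A minimal sketch, where the hypothetical `ndc_depth` helper reproduces only the z rows of the matrix:

```rust
// Depth after perspective divide, from the projection's z rows:
// clip_z = za * z_view + zb, clip_w = -z_view
// (view-space z is negative in front of the camera).
fn ndc_depth(znear: f32, view_z: f32) -> f32 {
    let zfar = 1.0f32;
    let za = -znear / (znear - zfar);
    let zb = -(znear * zfar) / (znear - zfar);
    (za * view_z + zb) / -view_z
}

fn main() {
    let znear = 0.01;
    // Near plane maps to depth 1.0, far plane (radius 1) to 0.0.
    assert!((ndc_depth(znear, -znear) - 1.0).abs() < 1e-4);
    assert!(ndc_depth(znear, -1.0).abs() < 1e-6);
    // Depth decreases monotonically with distance, i.e. reversed Z,
    // matching the GREATER depth test and the 0.0 depth clear in draw.rs.
    assert!(ndc_depth(znear, -0.1) > ndc_depth(znear, -0.5));
}
```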
================================================
FILE: client/src/graphics/gltf_mesh.rs
================================================
use std::{
borrow::Cow,
fs::{self, File},
io::Cursor,
mem,
path::{Path, PathBuf},
ptr,
};
use anyhow::{Context, Result, anyhow, bail};
use ash::vk;
use common::Anonymize;
use futures_util::future::{BoxFuture, FutureExt, try_join_all};
use lahar::{BufferRegionAlloc, DedicatedImage};
use tracing::{error, trace};
use super::{Base, Mesh, meshes::Vertex};
use crate::loader::{Cleanup, LoadCtx, LoadFuture, Loadable};
pub struct GlbFile {
pub path: PathBuf,
}
impl Loadable for GlbFile {
type Output = GltfScene;
fn load(self, ctx: &LoadCtx) -> LoadFuture<'_, Self::Output> {
Box::pin(self.load(ctx))
}
}
impl GlbFile {
async fn load(self, ctx: &LoadCtx) -> Result<GltfScene> {
let path = ctx
.cfg
.find_asset(&self.path)
.ok_or_else(|| anyhow!("{} not found", self.path.anonymize().display()))?;
let glb = gltf::Glb::from_reader(
File::open(&path).with_context(|| format!("opening {}", path.anonymize().display()))?,
)
.with_context(|| format!("reading {}", path.anonymize().display()))?;
let gltf = gltf::Document::from_json(
gltf::json::deserialize::from_slice(&glb.json).context("JSON parsing")?,
)
.context("GLTF parsing")?;
let buffer = glb
.bin
.as_ref()
.ok_or_else(|| anyhow!("missing binary payload"))?;
let scene = gltf
.default_scene()
.ok_or_else(|| anyhow!("no default scene"))?;
let identity = na::Matrix4::identity();
let meshes = try_join_all(
scene
.nodes()
.map(|node| load_node(ctx, buffer, &identity, node)),
)
.await?
.into_iter()
.flatten()
.collect();
Ok(GltfScene(meshes))
}
}
pub struct GltfScene(pub Vec<Mesh>);
impl Cleanup for GltfScene {
unsafe fn cleanup(self, gfx: &Base) {
unsafe {
for mesh in self.0 {
mesh.cleanup(gfx);
}
}
}
}
fn load_node<'a>(
ctx: &'a LoadCtx,
buffer: &'a [u8],
transform: &'a na::Matrix4<f32>,
node: gltf::Node<'a>,
) -> BoxFuture<'a, Result<Vec<Mesh>>> {
async move {
let transform = transform * na::Matrix4::from(node.transform().matrix());
let (mut local, children) = tokio::try_join!(
async {
if let Some(mesh) = node.mesh() {
Ok(load_mesh(ctx, buffer, &transform, &mesh).await?)
} else {
Ok(Vec::new())
}
},
try_join_all(
node.children()
.map(|child| load_node(ctx, buffer, &transform, child))
)
)?;
local.extend(children.into_iter().flatten());
Ok(local)
}
.boxed()
}
async fn load_mesh(
ctx: &LoadCtx,
buffer: &[u8],
transform: &na::Matrix4<f32>,
mesh: &gltf::Mesh<'_>,
) -> Result<Vec<Mesh>> {
try_join_all(
mesh.primitives()
.map(|x| load_primitive(ctx, buffer, transform, x)),
)
.await
}
async fn load_primitive(
ctx: &LoadCtx,
buffer: &[u8],
transform: &na::Matrix4<f32>,
prim: gltf::Primitive<'_>,
) -> Result<Mesh> {
let device = &*ctx.gfx.device;
let texcoord_index = prim
.material()
.pbr_metallic_roughness()
.base_color_texture()
.map(|x| x.tex_coord());
// Concurrent upload
// TODO: Don't leak resources on error
let (geom, color) = tokio::join!(
load_geom(ctx, buffer, &prim, transform, texcoord_index),
load_material(ctx, buffer, &prim)
);
let geom = geom?;
let color = color?;
unsafe {
let color_view = device
.create_image_view(
&vk::ImageViewCreateInfo::default()
.image(color.handle)
.view_type(vk::ImageViewType::TYPE_2D)
.format(vk::Format::R8G8B8A8_SRGB)
.subresource_range(vk::ImageSubresourceRange {
aspect_mask: vk::ImageAspectFlags::COLOR,
base_mip_level: 0,
level_count: 1,
base_array_layer: 0,
layer_count: 1,
}),
None,
)
.unwrap();
let pool = device
.create_descriptor_pool(
&vk::DescriptorPoolCreateInfo::default()
.max_sets(1)
.pool_sizes(&[vk::DescriptorPoolSize {
ty: vk::DescriptorType::COMBINED_IMAGE_SAMPLER,
descriptor_count: 1,
}]),
None,
)
.unwrap();
let ds = device
.allocate_descriptor_sets(
&vk::DescriptorSetAllocateInfo::default()
.descriptor_pool(pool)
.set_layouts(&[ctx.mesh_ds_layout]),
)
.unwrap()[0];
device.update_descriptor_sets(
&[vk::WriteDescriptorSet::default()
.dst_set(ds)
.dst_binding(0)
.descriptor_type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER)
.image_info(&[vk::DescriptorImageInfo {
sampler: vk::Sampler::null(),
image_view: color_view,
image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL,
}])],
&[],
);
Ok(Mesh {
vertices: geom.vertices,
indices: geom.indices,
index_count: geom.index_count,
pool,
ds,
color,
color_view,
})
}
}
struct Geometry {
vertices: BufferRegionAlloc,
indices: BufferRegionAlloc,
index_count: u32,
}
async fn load_geom(
ctx: &LoadCtx,
buffer: &[u8],
prim: &gltf::Primitive<'_>,
transform: &na::Matrix4<f32>,
texcoord_index: Option<u32>,
) -> Result<Geometry> {
let normal_transform = match transform.try_inverse() {
None => {
error!("non-invertible transform");
na::Matrix4::identity()
}
Some(x) => x.transpose(),
};
let prim = prim.reader(|x| {
if let gltf::buffer::Source::Bin = x.source() {
Some(buffer)
} else {
None
}
});
let positions = prim
.read_positions()
.ok_or_else(|| anyhow!("vertex positions missing"))?;
let mut texcoords = texcoord_index
.map(|i| -> Result<_> {
Ok(prim
.read_tex_coords(i)
.ok_or_else(|| anyhow!("texcoords missing"))?
.into_f32())
})
.transpose()?;
let normals = prim
.read_normals()
.ok_or_else(|| anyhow!("normals missing"))?;
let vertex_count = positions.len();
if vertex_count != normals.len() || texcoords.as_ref().is_some_and(|x| vertex_count != x.len())
{
bail!("inconsistent vertex attribute counts");
}
let byte_size = vertex_count * mem::size_of::<Vertex>();
let mut v_staging = ctx
.staging
.alloc(byte_size)
.await
.ok_or_else(|| anyhow!("too large"))?;
for ((pos, norm), storage) in positions
.zip(normals)
.zip(v_staging.chunks_exact_mut(mem::size_of::<Vertex>()))
{
let v = Vertex {
position: na::Point3::from_homogeneous(
transform * (na::Point3::from(pos)).to_homogeneous(),
)
.unwrap_or_else(na::Point3::origin),
texcoords: texcoords
.as_mut()
.map_or_else(na::zero, |x| x.next().unwrap().into()),
normal: na::Unit::new_normalize(
(normal_transform * na::Vector3::from(norm).to_homogeneous()).xyz(),
),
};
// write_unaligned accepts misaligned pointers
#[allow(clippy::cast_ptr_alignment)]
unsafe {
ptr::write_unaligned(storage.as_ptr() as *mut Vertex, v);
}
}
let indices = prim
.read_indices()
.ok_or_else(|| anyhow!("indices missing"))?
.into_u32();
let index_count = indices.len();
let mut i_staging = ctx
.staging
.alloc(index_count * 4)
.await
.ok_or_else(|| anyhow!("too large"))?;
for (idx, storage) in indices.zip(i_staging.chunks_exact_mut(4)) {
storage.copy_from_slice(&idx.to_ne_bytes());
}
let vert_alloc =
ctx.vertex_alloc
.lock()
.unwrap()
.alloc(&ctx.gfx.device, byte_size as vk::DeviceSize, 4);
let staging_buffer = ctx.staging.buffer();
let vert_buffer = vert_alloc.buffer;
let vert_src_offset = v_staging.offset();
let vert_dst_offset = vert_alloc.offset;
let vertex_upload = unsafe {
ctx.transfer.run(move |xf, cmd| {
xf.device.cmd_copy_buffer(
cmd,
staging_buffer,
vert_buffer,
&[vk::BufferCopy {
src_offset: vert_src_offset,
dst_offset: vert_dst_offset,
size: byte_size as vk::DeviceSize,
}],
);
xf.stages |= vk::PipelineStageFlags::VERTEX_INPUT;
xf.buffer_barriers.push(
vk::BufferMemoryBarrier::default()
.src_access_mask(vk::AccessFlags::TRANSFER_WRITE)
.dst_access_mask(vk::AccessFlags::VERTEX_ATTRIBUTE_READ)
.src_queue_family_index(xf.queue_family)
.dst_queue_family_index(xf.dst_queue_family)
.buffer(vert_buffer)
.offset(vert_dst_offset)
.size(byte_size as vk::DeviceSize),
);
})
};
let idx_alloc = ctx.index_alloc.lock().unwrap().alloc(
&ctx.gfx.device,
index_count as vk::DeviceSize * 4,
4,
);
let idx_buffer = idx_alloc.buffer;
let idx_src_offset = i_staging.offset();
let idx_dst_offset = idx_alloc.offset;
let index_upload = unsafe {
ctx.transfer.run(move |xf, cmd| {
xf.device.cmd_copy_buffer(
cmd,
staging_buffer,
idx_buffer,
&[vk::BufferCopy {
src_offset: idx_src_offset,
dst_offset: idx_dst_offset,
size: index_count as vk::DeviceSize * 4,
}],
);
xf.stages |= vk::PipelineStageFlags::VERTEX_INPUT;
xf.buffer_barriers.push(
vk::BufferMemoryBarrier::default()
.src_access_mask(vk::AccessFlags::TRANSFER_WRITE)
.dst_access_mask(vk::AccessFlags::INDEX_READ)
.src_queue_family_index(xf.queue_family)
.dst_queue_family_index(xf.dst_queue_family)
.buffer(idx_buffer)
.offset(idx_dst_offset)
.size(index_count as vk::DeviceSize * 4),
);
})
};
// Upload concurrently
let (r1, r2) = tokio::join!(vertex_upload, index_upload);
r1?;
r2?;
Ok(Geometry {
vertices: vert_alloc,
indices: idx_alloc,
index_count: index_count as u32,
})
}
async fn load_material(
ctx: &LoadCtx,
buffer: &[u8],
prim: &gltf::Primitive<'_>,
) -> Result<DedicatedImage> {
let device = &*ctx.gfx.device;
let color = match prim
.material()
.pbr_metallic_roughness()
.base_color_texture()
{
None => {
return load_solid_color(
ctx,
prim.material().pbr_metallic_roughness().base_color_factor(),
)
.await;
}
Some(x) => x,
};
let color_data = match color.texture().source().source() {
gltf::image::Source::Uri { uri, .. } => {
let path = ctx
.cfg
.find_asset(Path::new(uri))
.ok_or_else(|| anyhow!("texture {} not found", uri))?;
trace!(path = %path.anonymize().display(), "reading texture");
Cow::Owned(fs::read(&path).context("reading texture")?)
}
gltf::image::Source::View { view, .. } => {
match view.buffer().source() {
gltf::buffer::Source::Bin => {}
gltf::buffer::Source::Uri(_) => {
bail!("external buffers unsupported");
}
}
Cow::Borrowed(&buffer[view.offset()..view.offset() + view.length()])
}
};
let mut color_data = &color_data[..];
let mut color_reader = png::Decoder::new(Cursor::new(&mut color_data))
.read_info()
.context("decoding PNG header")?;
let (width, height) = {
let info = color_reader.info();
(info.width, info.height)
};
let mut color_staging = ctx
.staging
.alloc(width as usize * height as usize * 4)
.await
.ok_or_else(|| anyhow!("texture too large"))?;
color_reader
.next_frame(&mut color_staging)
.context("decoding PNG data")?;
let color = unsafe {
DedicatedImage::new(
device,
&ctx.gfx.memory_properties,
&vk::ImageCreateInfo::default()
.image_type(vk::ImageType::TYPE_2D)
.format(vk::Format::R8G8B8A8_SRGB)
.extent(vk::Extent3D {
width,
height,
depth: 1,
})
.mip_levels(1)
.array_layers(1)
.samples(vk::SampleCountFlags::TYPE_1)
.usage(vk::ImageUsageFlags::SAMPLED | vk::ImageUsageFlags::TRANSFER_DST),
)
};
let staging_buffer = ctx.staging.buffer();
let color_handle = color.handle;
let color_offset = color_staging.offset();
unsafe {
ctx.transfer
.run(move |xf, cmd| {
let range = vk::ImageSubresourceRange {
aspect_mask: vk::ImageAspectFlags::COLOR,
base_mip_level: 0,
level_count: 1,
base_array_layer: 0,
layer_count: 1,
};
xf.device.cmd_pipeline_barrier(
cmd,
vk::PipelineStageFlags::TOP_OF_PIPE,
vk::PipelineStageFlags::TRANSFER,
vk::DependencyFlags::default(),
&[],
&[],
&[vk::ImageMemoryBarrier::default()
.dst_access_mask(vk::AccessFlags::TRANSFER_WRITE)
.src_queue_family_index(vk::QUEUE_FAMILY_IGNORED)
.dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED)
.old_layout(vk::ImageLayout::UNDEFINED)
.new_layout(vk::ImageLayout::TRANSFER_DST_OPTIMAL)
.image(color_handle)
.subresource_range(range)],
);
xf.device.cmd_copy_buffer_to_image(
cmd,
staging_buffer,
color_handle,
vk::ImageLayout::TRANSFER_DST_OPTIMAL,
&[vk::BufferImageCopy {
buffer_offset: color_offset,
image_subresource: vk::ImageSubresourceLayers {
aspect_mask: vk::ImageAspectFlags::COLOR,
mip_level: 0,
base_array_layer: 0,
layer_count: 1,
},
image_extent: vk::Extent3D {
width,
height,
depth: 1,
},
..Default::default()
}],
);
xf.stages |= vk::PipelineStageFlags::FRAGMENT_SHADER;
xf.image_barriers.push(
vk::ImageMemoryBarrier::default()
.src_access_mask(vk::AccessFlags::TRANSFER_WRITE)
.dst_access_mask(vk::AccessFlags::SHADER_READ)
.src_queue_family_index(xf.queue_family)
.dst_queue_family_index(xf.dst_queue_family)
.old_layout(vk::ImageLayout::TRANSFER_DST_OPTIMAL)
.new_layout(vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL)
.image(color_handle)
.subresource_range(range),
);
})
.await?;
}
Ok(color)
}
async fn load_solid_color(ctx: &LoadCtx, rgba: [f32; 4]) -> Result<DedicatedImage> {
unsafe {
let image = DedicatedImage::new(
&ctx.gfx.device,
&ctx.gfx.memory_properties,
&vk::ImageCreateInfo::default()
.image_type(vk::ImageType::TYPE_2D)
.format(vk::Format::R8G8B8A8_SRGB)
.extent(vk::Extent3D {
width: 1,
height: 1,
depth: 1,
})
.mip_levels(1)
.array_layers(1)
.samples(vk::SampleCountFlags::TYPE_1)
.usage(vk::ImageUsageFlags::SAMPLED | vk::ImageUsageFlags::TRANSFER_DST),
);
let handle = image.handle;
ctx.transfer
.run(move |xf, cmd| {
let range = vk::ImageSubresourceRange {
aspect_mask: vk::ImageAspectFlags::COLOR,
base_mip_level: 0,
level_count: 1,
base_array_layer: 0,
layer_count: 1,
};
xf.device.cmd_pipeline_barrier(
cmd,
vk::PipelineStageFlags::TOP_OF_PIPE,
vk::PipelineStageFlags::TRANSFER,
vk::DependencyFlags::default(),
&[],
&[],
&[vk::ImageMemoryBarrier::default()
.dst_access_mask(vk::AccessFlags::TRANSFER_WRITE)
.src_queue_family_index(vk::QUEUE_FAMILY_IGNORED)
.dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED)
.old_layout(vk::ImageLayout::UNDEFINED)
.new_layout(vk::ImageLayout::TRANSFER_DST_OPTIMAL)
.image(handle)
.subresource_range(range)],
);
xf.device.cmd_clear_color_image(
cmd,
handle,
vk::ImageLayout::TRANSFER_DST_OPTIMAL,
&vk::ClearColorValue { float32: rgba },
&[range],
);
xf.stages |= vk::PipelineStageFlags::FRAGMENT_SHADER;
xf.image_barriers.push(
vk::ImageMemoryBarrier::default()
.src_access_mask(vk::AccessFlags::TRANSFER_WRITE)
.dst_access_mask(vk::AccessFlags::SHADER_READ)
.src_queue_family_index(xf.queue_family)
.dst_queue_family_index(xf.dst_queue_family)
.old_layout(vk::ImageLayout::TRANSFER_DST_OPTIMAL)
.new_layout(vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL)
.image(handle)
.subresource_range(range),
);
})
.await?;
Ok(image)
}
}
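`load_geom` above transforms normals by the inverse-transpose of the node transform rather than the transform itself. A standalone sketch of why, restricted to diagonal matrices so the inverse-transpose can be written by hand:

```rust
fn main() {
    // A surface tangent t and normal n for a 45-degree slope: t is
    // perpendicular to n.
    let t = [1.0f32, 1.0, 0.0];
    let n = [-1.0f32, 1.0, 0.0];
    // Non-uniform scale: x *= 2, i.e. diag(2, 1, 1).
    let scale = |v: [f32; 3]| [2.0 * v[0], v[1], v[2]];
    // Inverse-transpose of diag(2, 1, 1) is diag(0.5, 1, 1).
    let inv_transpose = |v: [f32; 3]| [0.5 * v[0], v[1], v[2]];
    let dot = |a: [f32; 3], b: [f32; 3]| a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    // Transforming the normal like a position breaks perpendicularity...
    assert!(dot(scale(t), scale(n)) != 0.0);
    // ...while the inverse-transpose preserves it.
    assert_eq!(dot(scale(t), inv_transpose(n)), 0.0);
}
```

The fallback to the identity on a non-invertible transform (as in `load_geom`) trades correct shading for robustness when an asset contains a degenerate node matrix.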
================================================
FILE: client/src/graphics/gui.rs
================================================
use yakui::{
Alignment, Color, align, colored_box, colored_box_container, label, pad, widgets::Pad,
};
use crate::Sim;
pub struct GuiState {
show_gui: bool,
}
impl GuiState {
pub fn new() -> Self {
GuiState { show_gui: true }
}
/// Toggles whether the GUI is shown
pub fn toggle_gui(&mut self) {
self.show_gui = !self.show_gui;
}
/// Prepare the GUI for rendering. This should be called between
/// Yakui::start and Yakui::finish.
pub fn run(&self, sim: &Sim) {
if !self.show_gui {
return;
}
align(Alignment::CENTER, || {
colored_box(Color::BLACK.with_alpha(0.9), [5.0, 5.0]);
});
align(Alignment::TOP_LEFT, || {
pad(Pad::all(8.0), || {
colored_box_container(Color::BLACK.with_alpha(0.7), || {
let material_count_string = if sim.cfg.gameplay_enabled {
sim.count_inventory_entities_matching_material(sim.selected_material())
.to_string()
} else {
"∞".to_string()
};
label(format!(
"Selected material: {:?} (×{})",
sim.selected_material(),
material_count_string
));
});
});
});
}
}
================================================
FILE: client/src/graphics/meshes.rs
================================================
use std::mem;
use ash::{Device, vk};
use lahar::{BufferRegionAlloc, DedicatedImage};
use memoffset::offset_of;
use vk_shader_macros::include_glsl;
use super::Base;
use common::defer;
const VERT: &[u32] = include_glsl!("shaders/mesh.vert");
const FRAG: &[u32] = include_glsl!("shaders/mesh.frag");
pub struct Meshes {
pipeline_layout: vk::PipelineLayout,
pipeline: vk::Pipeline,
}
impl Meshes {
#[allow(clippy::unneeded_field_pattern)] // Silence spurious warnings from offset_of
pub fn new(gfx: &Base, ds_layout: vk::DescriptorSetLayout) -> Self {
let device = &*gfx.device;
unsafe {
// Construct the shader modules
let vert = device
.create_shader_module(&vk::ShaderModuleCreateInfo::default().code(VERT), None)
.unwrap();
// Note that these only need to live until the pipeline itself is constructed
let v_guard = defer(|| device.destroy_shader_module(vert, None));
let frag = device
.create_shader_module(&vk::ShaderModuleCreateInfo::default().code(FRAG), None)
.unwrap();
let f_guard = defer(|| device.destroy_shader_module(frag, None));
// Define the outward-facing interface of the shaders, incl. uniforms, samplers, etc.
let pipeline_layout = device
.create_pipeline_layout(
&vk::PipelineLayoutCreateInfo::default()
.set_layouts(&[gfx.common_layout, ds_layout])
.push_constant_ranges(&[vk::PushConstantRange {
stage_flags: vk::ShaderStageFlags::VERTEX,
offset: 0,
size: 64,
}]),
None,
)
.unwrap();
let entry_point = cstr!("main").as_ptr();
let mut pipelines = device
.create_graphics_pipelines(
gfx.pipeline_cache,
&[vk::GraphicsPipelineCreateInfo::default()
.stages(&[
vk::PipelineShaderStageCreateInfo {
stage: vk::ShaderStageFlags::VERTEX,
module: vert,
p_name: entry_point,
..Default::default()
},
vk::PipelineShaderStageCreateInfo {
stage: vk::ShaderStageFlags::FRAGMENT,
module: frag,
p_name: entry_point,
..Default::default()
},
])
.vertex_input_state(
&vk::PipelineVertexInputStateCreateInfo::default()
.vertex_binding_descriptions(&[vk::VertexInputBindingDescription {
binding: 0,
stride: mem::size_of::<Vertex>() as u32,
input_rate: vk::VertexInputRate::VERTEX,
}])
.vertex_attribute_descriptions(&[
vk::VertexInputAttributeDescription {
location: 0,
binding: 0,
format: vk::Format::R32G32B32_SFLOAT,
offset: offset_of!(Vertex, position) as u32,
},
vk::VertexInputAttributeDescription {
location: 1,
binding: 0,
format: vk::Format::R32G32_SFLOAT,
offset: offset_of!(Vertex, texcoords) as u32,
},
vk::VertexInputAttributeDescription {
location: 2,
binding: 0,
format: vk::Format::R32G32B32_SFLOAT,
offset: offset_of!(Vertex, normal) as u32,
},
]),
)
.input_assembly_state(
&vk::PipelineInputAssemblyStateCreateInfo::default()
.topology(vk::PrimitiveTopology::TRIANGLE_LIST),
)
.viewport_state(
&vk::PipelineViewportStateCreateInfo::default()
.scissor_count(1)
.viewport_count(1),
)
.rasterization_state(
&vk::PipelineRasterizationStateCreateInfo::default()
.cull_mode(vk::CullModeFlags::BACK)
.front_face(vk::FrontFace::COUNTER_CLOCKWISE)
.polygon_mode(vk::PolygonMode::FILL)
.line_width(1.0),
)
.multisample_state(
&vk::PipelineMultisampleStateCreateInfo::default()
.rasterization_samples(vk::SampleCountFlags::TYPE_1),
)
.depth_stencil_state(
&vk::PipelineDepthStencilStateCreateInfo::default()
.depth_test_enable(true)
.depth_write_enable(true)
.depth_compare_op(vk::CompareOp::GREATER),
)
.color_blend_state(
&vk::PipelineColorBlendStateCreateInfo::default().attachments(&[
vk::PipelineColorBlendAttachmentState {
blend_enable: vk::TRUE,
src_color_blend_factor: vk::BlendFactor::ONE,
dst_color_blend_factor: vk::BlendFactor::ZERO,
color_blend_op: vk::BlendOp::ADD,
color_write_mask: vk::ColorComponentFlags::R
| vk::ColorComponentFlags::G
| vk::ColorComponentFlags::B,
..Default::default()
},
]),
)
.dynamic_state(
&vk::PipelineDynamicStateCreateInfo::default().dynamic_states(&[
vk::DynamicState::VIEWPORT,
vk::DynamicState::SCISSOR,
]),
)
.layout(pipeline_layout)
.render_pass(gfx.render_pass)
.subpass(0)],
None,
)
.unwrap()
.into_iter();
let pipeline = pipelines.next().unwrap();
gfx.set_name(pipeline, cstr!("sky"));
// Clean up the shaders explicitly, so the defer guards don't hold onto references we're
// moving into `Self` to be returned
v_guard.invoke();
f_guard.invoke();
Self {
pipeline_layout,
pipeline,
}
}
}
pub unsafe fn draw(
&mut self,
device: &Device,
common_ds: vk::DescriptorSet,
cmd: vk::CommandBuffer,
mesh: &Mesh,
transform: &na::Matrix4<f32>,
) {
unsafe {
device.cmd_bind_pipeline(cmd, vk::PipelineBindPoint::GRAPHICS, self.pipeline);
device.cmd_bind_descriptor_sets(
cmd,
vk::PipelineBindPoint::GRAPHICS,
self.pipeline_layout,
0,
&[common_ds, mesh.ds],
&[],
);
device.cmd_push_constants(
cmd,
self.pipeline_layout,
vk::ShaderStageFlags::VERTEX,
0,
&mem::transmute::<na::Matrix4<f32>, [u8; 64]>(*transform),
);
device.cmd_bind_vertex_buffers(
cmd,
0,
&[mesh.vertices.buffer],
&[mesh.vertices.offset],
);
device.cmd_bind_index_buffer(
cmd,
mesh.indices.buffer,
mesh.indices.offset,
vk::IndexType::UINT32,
);
device.cmd_draw_indexed(cmd, mesh.index_count, 1, 0, 0, 0);
}
}
pub unsafe fn destroy(&mut self, device: &Device) {
unsafe {
device.destroy_pipeline(self.pipeline, None);
device.destroy_pipeline_layout(self.pipeline_layout, None);
}
}
}
#[repr(C)]
pub struct Vertex {
pub position: na::Point3<f32>,
pub texcoords: na::Vector2<f32>,
pub normal: na::Unit<na::Vector3<f32>>,
}
#[derive(Copy, Clone)]
pub struct Mesh {
pub vertices: BufferRegionAlloc,
pub indices: BufferRegionAlloc,
pub index_count: u32,
pub pool: vk::DescriptorPool,
pub ds: vk::DescriptorSet,
// TODO: Make shareable
pub color: DedicatedImage,
pub color_view: vk::ImageView,
}
impl crate::loader::Cleanup for Mesh {
unsafe fn cleanup(mut self, gfx: &Base) {
unsafe {
let device = &*gfx.device;
device.destroy_descriptor_pool(self.pool, None);
device.destroy_image_view(self.color_view, None);
self.color.destroy(device);
}
}
}
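The draw path above pushes a model transform as a 64-byte push-constant range by transmuting an `na::Matrix4<f32>` (matching the `vk::PushConstantRange { offset: 0, size: 64 }` in the pipeline layout). A safe, standalone sketch of the same packing, using plain arrays instead of nalgebra and a hypothetical `matrix_push_bytes` helper:

```rust
/// Pack a column-major 4x4 f32 matrix into the 64-byte push-constant payload.
/// Sketch only; the real code transmutes an `na::Matrix4<f32>` directly, which
/// has the same column-major layout.
pub fn matrix_push_bytes(m: &[f32; 16]) -> [u8; 64] {
    let mut out = [0u8; 64];
    for (i, v) in m.iter().enumerate() {
        // Native endianness: push-constant bytes are consumed host-side by the driver.
        out[i * 4..(i + 1) * 4].copy_from_slice(&v.to_ne_bytes());
    }
    out
}

/// Column-major 4x4 identity, for demonstration.
pub fn identity() -> [f32; 16] {
    let mut m = [0.0f32; 16];
    for i in 0..4 {
        m[i * 4 + i] = 1.0;
    }
    m
}
```
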
================================================
FILE: client/src/graphics/mod.rs
================================================
#![allow(clippy::missing_safety_doc)] // Vulkan wrangling is categorically unsafe
mod base;
mod core;
mod draw;
mod fog;
mod frustum;
mod gltf_mesh;
mod gui;
mod meshes;
mod png_array;
pub mod voxels;
mod window;
#[cfg(test)]
mod tests;
pub use self::{
base::Base,
core::Core,
draw::Draw,
fog::Fog,
frustum::Frustum,
gltf_mesh::{GlbFile, GltfScene},
meshes::{Mesh, Meshes},
png_array::PngArray,
voxels::Voxels,
window::{EarlyWindow, Window},
};
unsafe fn as_bytes<T: Copy>(x: &T) -> &[u8] {
unsafe { std::slice::from_raw_parts(x as *const T as *const u8, std::mem::size_of::<T>()) }
}
#[repr(C)]
#[derive(Debug, Eq, PartialEq, Copy, Clone)]
pub struct VkDrawIndirectCommand {
pub vertex_count: u32,
pub instance_count: u32,
pub first_vertex: u32,
pub first_instance: u32,
}
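`as_bytes` above reinterprets any `Copy` value as its raw bytes, which is how `VkDrawIndirectCommand` values end up in GPU buffers. A safe sketch of the same idea specialized to this struct (the `to_bytes` helper is hypothetical, not part of the codebase):

```rust
#[repr(C)]
#[derive(Copy, Clone)]
pub struct DrawIndirectCommand {
    pub vertex_count: u32,
    pub instance_count: u32,
    pub first_vertex: u32,
    pub first_instance: u32,
}

impl DrawIndirectCommand {
    /// Serialize field by field. `#[repr(C)]` with four `u32`s guarantees the
    /// 16-byte, no-padding layout Vulkan expects for indirect draws, so this
    /// produces the same bytes as the unsafe `as_bytes` view.
    pub fn to_bytes(&self) -> [u8; 16] {
        let fields = [
            self.vertex_count,
            self.instance_count,
            self.first_vertex,
            self.first_instance,
        ];
        let mut out = [0u8; 16];
        for (i, v) in fields.iter().enumerate() {
            out[i * 4..(i + 1) * 4].copy_from_slice(&v.to_ne_bytes());
        }
        out
    }
}
```
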
================================================
FILE: client/src/graphics/png_array.rs
================================================
use std::{
fs::{self, File},
io::BufReader,
path::PathBuf,
};
use anyhow::{Context, anyhow, bail};
use ash::vk;
use common::Anonymize;
use lahar::DedicatedImage;
use tracing::trace;
use crate::loader::{LoadCtx, LoadFuture, Loadable};
pub struct PngArray {
pub path: PathBuf,
pub size: usize,
}
impl Loadable for PngArray {
type Output = DedicatedImage;
fn load(self, handle: &LoadCtx) -> LoadFuture<'_, Self::Output> {
Box::pin(async move {
let full_path = handle
.cfg
.find_asset(&self.path)
.ok_or_else(|| anyhow!("{} not found", self.path.anonymize().display()))?;
let mut paths = fs::read_dir(&full_path)
.with_context(|| format!("reading {}", full_path.anonymize().display()))?
.map(|x| x.map(|x| x.path()))
.collect::<Result<Vec<_>, _>>()
.with_context(|| format!("reading {}", full_path.anonymize().display()))?;
if paths.is_empty() {
bail!("{} is empty", full_path.anonymize().display());
}
if paths.len() < self.size {
bail!(
"{}: expected {} textures, found {}",
full_path.anonymize().display(),
self.size,
paths.len()
);
}
paths.sort();
paths.truncate(self.size);
let mut dims: Option<(u32, u32)> = None;
let mut mem = None;
for (i, path) in paths.iter().enumerate() {
trace!(layer=i, path=%path.anonymize().display(), "loading");
let file = File::open(path)
.with_context(|| format!("reading {}", path.anonymize().display()))?;
let decoder = png::Decoder::new(BufReader::new(file));
let mut reader = decoder
.read_info()
.with_context(|| format!("decoding {}", path.anonymize().display()))?;
let info = reader.info();
if let Some(dims) = dims {
if dims != (info.width, info.height) {
bail!(
"inconsistent dimensions: expected {}x{}, got {}x{}",
dims.0,
dims.1,
info.width,
info.height
);
}
} else {
dims = Some((info.width, info.height));
mem = Some(
handle
.staging
.alloc(info.width as usize * info.height as usize * 4 * self.size)
.await
.ok_or_else(|| {
anyhow!(
"{}: image array too large",
full_path.anonymize().display()
)
})?,
);
}
let mem = mem.as_mut().unwrap();
let step_size = info.width as usize * info.height as usize * 4;
reader
.next_frame(&mut mem[i * step_size..(i + 1) * step_size])
.with_context(|| format!("decoding {}", path.anonymize().display()))?;
}
let (width, height) = dims.unwrap();
let mem = mem.unwrap();
unsafe {
let image = DedicatedImage::new(
&handle.gfx.device,
&handle.gfx.memory_properties,
&vk::ImageCreateInfo::default()
.image_type(vk::ImageType::TYPE_2D)
.format(vk::Format::R8G8B8A8_SRGB)
.extent(vk::Extent3D {
width,
height,
depth: 1,
})
.mip_levels(1)
.array_layers(self.size as u32)
.samples(vk::SampleCountFlags::TYPE_1)
.usage(vk::ImageUsageFlags::SAMPLED | vk::ImageUsageFlags::TRANSFER_DST),
);
let range = vk::ImageSubresourceRange {
aspect_mask: vk::ImageAspectFlags::COLOR,
base_mip_level: 0,
level_count: 1,
base_array_layer: 0,
layer_count: self.size as u32,
};
let src = handle.staging.buffer();
let buffer_offset = mem.offset();
let dst = image.handle;
handle
.transfer
.run(move |xf, cmd| {
xf.device.cmd_pipeline_barrier(
cmd,
vk::PipelineStageFlags::TOP_OF_PIPE,
vk::PipelineStageFlags::TRANSFER,
vk::DependencyFlags::default(),
&[],
&[],
&[vk::ImageMemoryBarrier::default()
.dst_access_mask(vk::AccessFlags::TRANSFER_WRITE)
.src_queue_family_index(vk::QUEUE_FAMILY_IGNORED)
.dst_queue_family_index(vk::QUEUE_FAMILY_IGNORED)
.old_layout(vk::ImageLayout::UNDEFINED)
.new_layout(vk::ImageLayout::TRANSFER_DST_OPTIMAL)
.image(dst)
.subresource_range(range)],
);
xf.device.cmd_copy_buffer_to_image(
cmd,
src,
dst,
vk::ImageLayout::TRANSFER_DST_OPTIMAL,
&[vk::BufferImageCopy {
buffer_offset,
image_subresource: vk::ImageSubresourceLayers {
aspect_mask: vk::ImageAspectFlags::COLOR,
mip_level: 0,
base_array_layer: 0,
layer_count: range.layer_count,
},
image_extent: vk::Extent3D {
width,
height,
depth: 1,
},
..Default::default()
}],
);
xf.stages |= vk::PipelineStageFlags::FRAGMENT_SHADER;
xf.image_barriers.push(
vk::ImageMemoryBarrier::default()
.src_access_mask(vk::AccessFlags::TRANSFER_WRITE)
.dst_access_mask(vk::AccessFlags::SHADER_READ)
.src_queue_family_index(xf.queue_family)
.dst_queue_family_index(xf.dst_queue_family)
.old_layout(vk::ImageLayout::TRANSFER_DST_OPTIMAL)
.new_layout(vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL)
.image(dst)
.subresource_range(range),
);
})
.await?;
trace!(
width = width,
height = height,
path = %full_path.anonymize().display(),
"loaded array"
);
Ok(image)
}
})
}
}
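`PngArray` stages every layer into one contiguous allocation of `width * height * 4 * size` bytes, decoding frame `i` into the slice starting at `i * step_size`. A sketch of that offset arithmetic (helper names are illustrative, not from the codebase):

```rust
/// Byte size of one decoded RGBA8 layer.
pub fn layer_size(width: u32, height: u32) -> usize {
    width as usize * height as usize * 4
}

/// Byte range of layer `i` within the contiguous staging allocation,
/// mirroring `mem[i * step_size..(i + 1) * step_size]` in the loader above.
pub fn layer_range(width: u32, height: u32, i: usize) -> std::ops::Range<usize> {
    let step = layer_size(width, height);
    i * step..(i + 1) * step
}
```
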
================================================
FILE: client/src/graphics/tests.rs
================================================
use super::Base;
#[test]
#[ignore]
fn init_base() {
let _guard = common::tracing_guard();
Base::headless();
}
================================================
FILE: client/src/graphics/voxels/mod.rs
================================================
mod surface;
pub mod surface_extraction;
#[cfg(test)]
mod tests;
use std::{sync::Arc, time::Instant};
use ash::{Device, vk};
use lru_slab::LruSlab;
use metrics::histogram;
use tracing::warn;
use crate::{
Config, Loader, Sim,
graphics::{Base, Frustum},
};
use common::{
dodeca::{self, Vertex},
graph::NodeId,
math::{MIsometry, MPoint},
node::{Chunk, ChunkId, VoxelData},
};
use surface::Surface;
use surface_extraction::{DrawBuffer, ExtractTask, ScratchBuffer, SurfaceExtraction};
pub struct Voxels {
config: Arc<Config>,
surface_extraction: SurfaceExtraction,
extraction_scratch: ScratchBuffer,
surfaces: DrawBuffer,
states: LruSlab<SurfaceState>,
draw: Surface,
max_chunks: u32,
}
impl Voxels {
pub fn new(
gfx: &Base,
config: Arc<Config>,
loader: &mut Loader,
dimension: u32,
frames: u32,
) -> Self {
let max_faces = 3 * (dimension.pow(3) + dimension.pow(2));
let max_supported_chunks = gfx.limits.max_storage_buffer_range / (8 * max_faces);
let max_chunks = if MAX_CHUNKS > max_supported_chunks {
warn!(
"clamping max chunks to {} due to SSBO size limit",
max_supported_chunks
);
max_supported_chunks
} else {
MAX_CHUNKS
};
let surfaces = DrawBuffer::new(gfx, max_chunks, dimension);
let draw = Surface::new(gfx, loader, &surfaces);
let surface_extraction = SurfaceExtraction::new(gfx);
let extraction_scratch = surface_extraction::ScratchBuffer::new(
gfx,
&surface_extraction,
config.chunk_load_parallelism * frames,
dimension,
);
Self {
config,
surface_extraction,
extraction_scratch,
surfaces,
states: LruSlab::with_capacity(max_chunks),
draw,
max_chunks,
}
}
/// Determine what to render and stage chunk transforms
///
/// Surface extraction commands are written to `cmd`; the extracted surfaces are assumed to be
/// complete by the next frame, not the current one.
pub unsafe fn prepare(
&mut self,
device: &Device,
frame: &mut Frame,
sim: &mut Sim,
nearby_nodes: &[(NodeId, MIsometry<f32>)],
cmd: vk::CommandBuffer,
frustum: &Frustum,
) {
// Clean up after previous frame
for i in frame.extracted.drain(..) {
self.extraction_scratch.free(i);
}
for chunk in frame.drawn.drain(..) {
self.states.peek_mut(chunk).refcount -= 1;
}
// Determine what to load/render
let view = sim.view();
if !sim.graph.contains(view.node) {
// Graph is temporarily out of sync with the server; we don't know where we are, so
// there's no point trying to draw.
return;
}
let node_scan_started = Instant::now();
let frustum_planes = frustum.planes();
let local_to_view = view.local.inverse();
let mut extractions = Vec::new();
for &(node, ref node_transform) in nearby_nodes {
let node_to_view = local_to_view * node_transform;
let origin = node_to_view * MPoint::origin();
if !frustum_planes.contain(&origin, dodeca::BOUNDING_SPHERE_RADIUS) {
// Don't bother generating or drawing chunks from nodes that are wholly outside the
// frustum.
continue;
}
use Chunk::*;
for vertex in Vertex::iter() {
let chunk = ChunkId::new(node, vertex);
// Fetch existing chunk, or extract surface of new chunk
let &mut Populated {
ref mut surface,
ref mut old_surface,
ref voxels,
} = &mut sim.graph[chunk]
else {
continue;
};
if let Some(slot) = surface.or(*old_surface) {
// Render an already-extracted surface
self.states.get_mut(slot).refcount += 1;
frame.drawn.push(slot);
// Stage this chunk's transform for drawing
frame.surface.transforms_mut()[slot as usize] =
na::Matrix4::from(*node_transform) * vertex.chunk_to_node();
}
if let (None, &VoxelData::Dense(ref data)) = (&surface, voxels) {
// Extract a surface so it can be drawn in future frames
if frame.extracted.len() == self.config.chunk_load_parallelism as usize {
continue;
}
let removed = if self.states.len() == self.max_chunks {
let slot = self.states.lru().expect("full LRU table is nonempty");
if self.states.peek(slot).refcount != 0 {
warn!("MAX_CHUNKS is too small");
break;
}
Some((slot, self.states.remove(slot)))
} else {
None
};
let scratch_slot = self.extraction_scratch.alloc().expect(
"there are at least chunk_load_parallelism scratch slots per frame",
);
frame.extracted.push(scratch_slot);
let slot = self.states.insert(SurfaceState {
node,
chunk: vertex,
refcount: 0,
});
*surface = Some(slot);
let storage = self.extraction_scratch.storage(scratch_slot);
storage.copy_from_slice(&data[..]);
if let Some((lru_slot, lru)) = removed
&& let Populated {
ref mut surface,
ref mut old_surface,
..
} = sim.graph[lru.node].chunks[lru.chunk]
{
// Remove references to released slot IDs
if *surface == Some(lru_slot) {
*surface = None;
}
if *old_surface == Some(lru_slot) {
*old_surface = None;
}
}
let node_is_odd = sim.graph.depth(node) & 1 != 0;
extractions.push(ExtractTask {
index: scratch_slot,
indirect_offset: self.surfaces.indirect_offset(slot),
face_offset: self.surfaces.face_offset(slot),
draw_id: slot,
reverse_winding: vertex.parity() ^ node_is_odd,
});
}
}
}
unsafe {
self.extraction_scratch.extract(
device,
&self.surface_extraction,
self.surfaces.indirect_buffer(),
self.surfaces.face_buffer(),
cmd,
&extractions,
);
}
histogram!("frame.cpu.voxels.node_scan").record(node_scan_started.elapsed());
}
pub unsafe fn draw(
&mut self,
device: &Device,
loader: &Loader,
common_ds: vk::DescriptorSet,
frame: &Frame,
cmd: vk::CommandBuffer,
) {
unsafe {
let started = Instant::now();
if !self.draw.bind(
device,
loader,
self.surfaces.dimension(),
common_ds,
&frame.surface,
cmd,
) {
return;
}
for &chunk in &frame.drawn {
self.draw.draw(device, cmd, &self.surfaces, chunk);
}
histogram!("frame.cpu.voxels.draw").record(started.elapsed());
}
}
pub unsafe fn destroy(&mut self, device: &Device) {
unsafe {
self.surface_extraction.destroy(device);
self.extraction_scratch.destroy(device);
self.surfaces.destroy(device);
self.draw.destroy(device);
}
}
}
pub struct Frame {
surface: surface::Frame,
/// Scratch slots whose extractions were recorded this frame; freed at the start of the next
extracted: Vec<u32>,
drawn: Vec<u32>,
}
impl Frame {
pub unsafe fn destroy(&mut self, device: &Device) {
unsafe {
self.surface.destroy(device);
}
}
}
impl Frame {
pub fn new(gfx: &Base, ctx: &Voxels) -> Self {
Self {
surface: surface::Frame::new(gfx, ctx.states.capacity()),
extracted: Vec::new(),
drawn: Vec::new(),
}
}
}
/// Maximum number of concurrently drawn voxel chunks
const MAX_CHUNKS: u32 = 8192;
struct SurfaceState {
node: NodeId,
chunk: common::dodeca::Vertex,
refcount: u32,
}
================================================
FILE: client/src/graphics/voxels/surface.rs
================================================
use ash::{Device, vk};
use lahar::{DedicatedImage, DedicatedMapping};
use vk_shader_macros::include_glsl;
use super::surface_extraction::DrawBuffer;
use crate::{Asset, Loader, graphics::Base};
use common::{defer, world::Material};
const VERT: &[u32] = include_glsl!("shaders/voxels.vert");
const FRAG: &[u32] = include_glsl!("shaders/voxels.frag");
pub struct Surface {
static_ds_layout: vk::DescriptorSetLayout,
pipeline_layout: vk::PipelineLayout,
pipeline: vk::Pipeline,
descriptor_pool: vk::DescriptorPool,
ds: vk::DescriptorSet,
colors: Asset<DedicatedImage>,
colors_view: vk::ImageView,
}
impl Surface {
pub fn new(gfx: &Base, loader: &mut Loader, buffer: &DrawBuffer) -> Self {
let device = &*gfx.device;
unsafe {
// Construct the shader modules
let vert = device
.create_shader_module(&vk::ShaderModuleCreateInfo::default().code(VERT), None)
.unwrap();
// Note that these only need to live until the pipeline itself is constructed
let v_guard = defer(|| device.destroy_shader_module(vert, None));
let frag = device
.create_shader_module(&vk::ShaderModuleCreateInfo::default().code(FRAG), None)
.unwrap();
let f_guard = defer(|| device.destroy_shader_module(frag, None));
let static_ds_layout = device
.create_descriptor_set_layout(
&vk::DescriptorSetLayoutCreateInfo::default().bindings(&[
vk::DescriptorSetLayoutBinding {
binding: 0,
descriptor_type: vk::DescriptorType::STORAGE_BUFFER,
descriptor_count: 1,
stage_flags: vk::ShaderStageFlags::VERTEX,
..Default::default()
},
vk::DescriptorSetLayoutBinding {
binding: 1,
descriptor_type: vk::DescriptorType::COMBINED_IMAGE_SAMPLER,
descriptor_count: 1,
stage_flags: vk::ShaderStageFlags::FRAGMENT,
p_immutable_samplers: &gfx.linear_sampler,
..Default::default()
},
]),
None,
)
.unwrap();
let descriptor_pool = device
.create_descriptor_pool(
&vk::DescriptorPoolCreateInfo::default()
.max_sets(1)
.pool_sizes(&[
vk::DescriptorPoolSize {
ty: vk::DescriptorType::STORAGE_BUFFER,
descriptor_count: 1,
},
vk::DescriptorPoolSize {
ty: vk::DescriptorType::COMBINED_IMAGE_SAMPLER,
descriptor_count: 1,
},
]),
None,
)
.unwrap();
let ds = device
.allocate_descriptor_sets(
&vk::DescriptorSetAllocateInfo::default()
.descriptor_pool(descriptor_pool)
.set_layouts(&[static_ds_layout]),
)
.unwrap()[0];
device.update_descriptor_sets(
&[vk::WriteDescriptorSet::default()
.dst_set(ds)
.dst_binding(0)
.descriptor_type(vk::DescriptorType::STORAGE_BUFFER)
.buffer_info(&[vk::DescriptorBufferInfo {
buffer: buffer.face_buffer(),
offset: 0,
range: vk::WHOLE_SIZE,
}])],
&[],
);
// Define the outward-facing interface of the shaders, incl. uniforms, samplers, etc.
let pipeline_layout = device
.create_pipeline_layout(
&vk::PipelineLayoutCreateInfo::default()
.set_layouts(&[gfx.common_layout, static_ds_layout])
.push_constant_ranges(&[vk::PushConstantRange {
stage_flags: vk::ShaderStageFlags::VERTEX,
offset: 0,
size: 4,
}]),
None,
)
.unwrap();
let entry_point = cstr!("main").as_ptr();
let mut pipelines = device
.create_graphics_pipelines(
gfx.pipeline_cache,
&[vk::GraphicsPipelineCreateInfo::default()
.stages(&[
vk::PipelineShaderStageCreateInfo {
stage: vk::ShaderStageFlags::VERTEX,
module: vert,
p_name: entry_point,
..Default::default()
},
vk::PipelineShaderStageCreateInfo {
stage: vk::ShaderStageFlags::FRAGMENT,
module: frag,
p_name: entry_point,
..Default::default()
},
])
.vertex_input_state(
&vk::PipelineVertexInputStateCreateInfo::default()
.vertex_binding_descriptions(&[vk::VertexInputBindingDescription {
binding: 0,
stride: TRANSFORM_SIZE as u32,
input_rate: vk::VertexInputRate::INSTANCE,
}])
.vertex_attribute_descriptions(&[
vk::VertexInputAttributeDescription {
location: 0,
binding: 0,
format: vk::Format::R32G32B32A32_SFLOAT,
offset: 0,
},
vk::VertexInputAttributeDescription {
location: 1,
binding: 0,
format: vk::Format::R32G32B32A32_SFLOAT,
offset: 16,
},
vk::VertexInputAttributeDescription {
location: 2,
binding: 0,
format: vk::Format::R32G32B32A32_SFLOAT,
offset: 32,
},
vk::VertexInputAttributeDescription {
location: 3,
binding: 0,
format: vk::Format::R32G32B32A32_SFLOAT,
offset: 48,
},
]),
)
.input_assembly_state(
&vk::PipelineInputAssemblyStateCreateInfo::default()
.topology(vk::PrimitiveTopology::TRIANGLE_LIST),
)
.viewport_state(
&vk::PipelineViewportStateCreateInfo::default()
.scissor_count(1)
.viewport_count(1),
)
.rasterization_state(
&vk::PipelineRasterizationStateCreateInfo::default()
.cull_mode(vk::CullModeFlags::BACK)
.front_face(vk::FrontFace::COUNTER_CLOCKWISE)
.polygon_mode(vk::PolygonMode::FILL)
.line_width(1.0),
)
.multisample_state(
&vk::PipelineMultisampleStateCreateInfo::default()
.rasterization_samples(vk::SampleCountFlags::TYPE_1),
)
.depth_stencil_state(
&vk::PipelineDepthStencilStateCreateInfo::default()
.depth_test_enable(true)
.depth_write_enable(true)
.depth_compare_op(vk::CompareOp::GREATER),
)
.color_blend_state(
&vk::PipelineColorBlendStateCreateInfo::default().attachments(&[
vk::PipelineColorBlendAttachmentState {
blend_enable: vk::TRUE,
src_color_blend_factor: vk::BlendFactor::ONE,
dst_color_blend_factor: vk::BlendFactor::ZERO,
color_blend_op: vk::BlendOp::ADD,
color_write_mask: vk::ColorComponentFlags::R
| vk::ColorComponentFlags::G
| vk::ColorComponentFlags::B,
..Default::default()
},
]),
)
.dynamic_state(
&vk::PipelineDynamicStateCreateInfo::default().dynamic_states(&[
vk::DynamicState::VIEWPORT,
vk::DynamicState::SCISSOR,
]),
)
.layout(pipeline_layout)
.render_pass(gfx.render_pass)
.subpass(0)],
None,
)
.unwrap()
.into_iter();
let pipeline = pipelines.next().unwrap();
gfx.set_name(pipeline, cstr!("voxels"));
// Clean up the shaders explicitly, so the defer guards don't hold onto references we're
// moving into `Self` to be returned
v_guard.invoke();
f_guard.invoke();
let colors = loader.load(
"voxel materials",
crate::graphics::PngArray {
path: "materials".into(),
size: common::world::Material::COUNT - 1,
},
);
Self {
static_ds_layout,
pipeline_layout,
pipeline,
descriptor_pool,
ds,
colors,
colors_view: vk::ImageView::null(),
}
}
}
pub unsafe fn bind(
&mut self,
device: &Device,
loader: &Loader,
dimension: u32,
common_ds: vk::DescriptorSet,
frame: &Frame,
cmd: vk::CommandBuffer,
) -> bool {
unsafe {
if self.colors_view == vk::ImageView::null() {
if let Some(colors) = loader.get(self.colors) {
self.colors_view = device
.create_image_view(
&vk::ImageViewCreateInfo::default()
.image(colors.handle)
.view_type(vk::ImageViewType::TYPE_2D_ARRAY)
.format(vk::Format::R8G8B8A8_SRGB)
.subresource_range(vk::ImageSubresourceRange {
aspect_mask: vk::ImageAspectFlags::COLOR,
base_mip_level: 0,
level_count: 1,
base_array_layer: 0,
layer_count: (Material::COUNT - 1) as u32,
}),
None,
)
.unwrap();
device.update_descriptor_sets(
&[vk::WriteDescriptorSet::default()
.dst_set(self.ds)
.dst_binding(1)
.descriptor_type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER)
.image_info(&[vk::DescriptorImageInfo {
sampler: vk::Sampler::null(),
image_view: self.colors_view,
image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL,
}])],
&[],
);
} else {
return false;
}
}
device.cmd_bind_pipeline(cmd, vk::PipelineBindPoint::GRAPHICS, self.pipeline);
device.cmd_bind_descriptor_sets(
cmd,
vk::PipelineBindPoint::GRAPHICS,
self.pipeline_layout,
0,
&[common_ds, self.ds],
&[],
);
device.cmd_bind_vertex_buffers(cmd, 0, &[frame.transforms.buffer()], &[0]);
device.cmd_push_constants(
cmd,
self.pipeline_layout,
vk::ShaderStageFlags::VERTEX,
0,
&dimension.to_ne_bytes(),
);
true
}
}
pub unsafe fn draw(
&self,
device: &Device,
cmd: vk::CommandBuffer,
buffer: &DrawBuffer,
chunk: u32,
) {
unsafe {
device.cmd_draw_indirect(
cmd,
buffer.indirect_buffer(),
buffer.indirect_offset(chunk),
1,
16,
);
}
}
pub unsafe fn destroy(&mut self, device: &Device) {
unsafe {
device.destroy_pipeline(self.pipeline, None);
device.destroy_pipeline_layout(self.pipeline_layout, None);
device.destroy_descriptor_set_layout(self.static_ds_layout, None);
device.destroy_descriptor_pool(self.descriptor_pool, None);
if self.colors_view != vk::ImageView::null() {
device.destroy_image_view(self.colors_view, None);
}
}
}
}
pub struct Frame {
transforms: DedicatedMapping<[na::Matrix4<f32>]>,
}
impl Frame {
pub fn new(gfx: &Base, count: u32) -> Self {
unsafe {
let transforms = DedicatedMapping::zeroed_array(
&gfx.device,
&gfx.memory_properties,
vk::BufferUsageFlags::VERTEX_BUFFER | vk::BufferUsageFlags::TRANSFER_DST,
count as usize * TRANSFORM_SIZE as usize,
);
gfx.set_name(transforms.buffer(), cstr!("voxel transforms"));
Self { transforms }
}
}
pub fn transforms_mut(&mut self) -> &mut [na::Matrix4<f32>] {
&mut self.transforms
}
}
impl Frame {
pub unsafe fn destroy(&mut self, device: &Device) {
unsafe {
self.transforms.destroy(device);
}
}
}
// 4x4 f32 matrix
pub const TRANSFORM_SIZE: vk::DeviceSize = 64;
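The instanced vertex input above splits each 64-byte transform (`TRANSFORM_SIZE`) into four `R32G32B32A32_SFLOAT` attributes at offsets 0, 16, 32, and 48, because Vulkan has no mat4 vertex-attribute format. A sketch of that layout invariant (`column_offset` is an illustrative helper, not repo code):

```rust
/// Stride of one instance: a column-major 4x4 f32 matrix, as in the shipped
/// `TRANSFORM_SIZE` constant.
pub const TRANSFORM_SIZE: usize = 64;

/// Byte offset of column `i` (one vec4 attribute) within the per-instance
/// data, matching the `VertexInputAttributeDescription`s at locations 0..=3.
pub fn column_offset(i: usize) -> usize {
    assert!(i < 4, "a mat4 has exactly four vec4 columns");
    i * std::mem::size_of::<[f32; 4]>()
}
```
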
================================================
FILE: client/src/graphics/voxels/surface_extraction.rs
================================================
use std::ffi::c_char;
use std::mem;
use ash::{Device, vk};
use lahar::{DedicatedBuffer, DedicatedMapping};
use vk_shader_macros::include_glsl;
use crate::graphics::{Base, VkDrawIndirectCommand, as_bytes};
use common::{defer, world::Material};
const EXTRACT: &[u32] = include_glsl!("shaders/surface-extraction/extract.comp", target: vulkan1_1);
/// GPU-accelerated surface extraction from voxel chunks
pub struct SurfaceExtraction {
params_layout: vk::DescriptorSetLayout,
ds_layout: vk::DescriptorSetLayout,
pipeline_layout: vk::PipelineLayout,
extract: vk::Pipeline,
}
impl SurfaceExtraction {
pub fn new(gfx: &Base) -> Self {
let device = &*gfx.device;
unsafe {
let params_layout = device
.create_descriptor_set_layout(
&vk::DescriptorSetLayoutCreateInfo::default().bindings(&[
vk::DescriptorSetLayoutBinding {
binding: 0,
descriptor_type: vk::DescriptorType::UNIFORM_BUFFER,
descriptor_count: 1,
stage_flags: vk::ShaderStageFlags::COMPUTE,
..Default::default()
},
]),
None,
)
.unwrap();
let ds_layout = device
.create_descriptor_set_layout(
&vk::DescriptorSetLayoutCreateInfo::default().bindings(&[
vk::DescriptorSetLayoutBinding {
binding: 0,
descriptor_type: vk::DescriptorType::STORAGE_BUFFER,
descriptor_count: 1,
stage_flags: vk::ShaderStageFlags::COMPUTE,
..Default::default()
},
vk::DescriptorSetLayoutBinding {
binding: 1,
descriptor_type: vk::DescriptorType::STORAGE_BUFFER,
descriptor_count: 1,
stage_flags: vk::ShaderStageFlags::COMPUTE,
..Default::default()
},
vk::DescriptorSetLayoutBinding {
binding: 2,
descriptor_type: vk::DescriptorType::STORAGE_BUFFER,
descriptor_count: 1,
stage_flags: vk::ShaderStageFlags::COMPUTE,
..Default::default()
},
vk::DescriptorSetLayoutBinding {
binding: 3,
descriptor_type: vk::DescriptorType::STORAGE_BUFFER,
descriptor_count: 1,
stage_flags: vk::ShaderStageFlags::COMPUTE,
..Default::default()
},
]),
None,
)
.unwrap();
let pipeline_layout = device
.create_pipeline_layout(
&vk::PipelineLayoutCreateInfo::default()
.set_layouts(&[params_layout, ds_layout])
.push_constant_ranges(&[vk::PushConstantRange {
stage_flags: vk::ShaderStageFlags::COMPUTE,
offset: 0,
size: 4,
}]),
None,
)
.unwrap();
let extract = device
.create_shader_module(&vk::ShaderModuleCreateInfo::default().code(EXTRACT), None)
.unwrap();
let extract_guard = defer(|| device.destroy_shader_module(extract, None));
let specialization_map_entries = [
vk::SpecializationMapEntry {
constant_id: 0,
offset: 0,
size: 4,
},
vk::SpecializationMapEntry {
constant_id: 1,
offset: 4,
size: 4,
},
vk::SpecializationMapEntry {
constant_id: 2,
offset: 8,
size: 4,
},
];
let specialization = vk::SpecializationInfo::default()
.map_entries(&specialization_map_entries)
.data(as_bytes(&WORKGROUP_SIZE));
let p_name = c"main".as_ptr() as *const c_char;
let mut pipelines = device
.create_compute_pipelines(
gfx.pipeline_cache,
&[vk::ComputePipelineCreateInfo {
stage: vk::PipelineShaderStageCreateInfo {
stage: vk::ShaderStageFlags::COMPUTE,
module: extract,
p_name,
p_specialization_info: &specialization,
..Default::default()
},
layout: pipeline_layout,
..Default::default()
}],
None,
)
.unwrap()
.into_iter();
// Free shader modules now that the actual pipelines are built
extract_guard.invoke();
let extract = pipelines.next().unwrap();
gfx.set_name(extract, cstr!("extract"));
Self {
params_layout,
ds_layout,
pipeline_layout,
extract,
}
}
}
pub unsafe fn destroy(&mut self, device: &Device) {
unsafe {
device.destroy_descriptor_set_layout(self.params_layout, None);
device.destroy_descriptor_set_layout(self.ds_layout, None);
device.destroy_pipeline_layout(self.pipeline_layout, None);
device.destroy_pipeline(self.extract, None);
}
}
}
/// Scratch space for actually performing the extraction
pub struct ScratchBuffer {
dimension: u32,
params: DedicatedBuffer,
/// Size of a single entry in the voxel buffer
voxel_buffer_unit: vk::DeviceSize,
/// Size of a single entry in the state buffer
state_buffer_unit: vk::DeviceSize,
voxels_staging: DedicatedMapping<[Material]>,
voxels: DedicatedBuffer,
state: DedicatedBuffer,
descriptor_pool: vk::DescriptorPool,
params_ds: vk::DescriptorSet,
descriptor_sets: Vec<vk::DescriptorSet>,
free_slots: Vec<u32>,
concurrency: u32,
}
impl ScratchBuffer {
pub fn new(gfx: &Base, ctx: &SurfaceExtraction, concurrency: u32, dimension: u32) -> Self {
let device = &*gfx.device;
// Padded by 2 on each dimension so each voxel of interest has a full neighborhood
let voxel_buffer_unit = round_up(
mem::size_of::<Material>() as vk::DeviceSize * (dimension as vk::DeviceSize + 2).pow(3),
// Pad at least to multiples of 4 so the shaders can safely read in 32-bit units
gfx.limits.min_storage_buffer_offset_alignment.max(4),
);
let voxels_size = concurrency as vk::DeviceSize * voxel_buffer_unit;
let state_buffer_unit = round_up(4, gfx.limits.min_storage_buffer_offset_alignment);
unsafe {
let params = DedicatedBuffer::new(
device,
&gfx.memory_properties,
&vk::BufferCreateInfo::default()
.size(mem::size_of::<Params>() as vk::DeviceSize)
.usage(
vk::BufferUsageFlags::UNIFORM_BUFFER | vk::BufferUsageFlags::TRANSFER_DST,
)
.sharing_mode(vk::SharingMode::EXCLUSIVE),
vk::MemoryPropertyFlags::DEVICE_LOCAL,
);
gfx.set_name(params.handle, cstr!("surface extraction params"));
let voxels_staging = DedicatedMapping::zeroed_array(
device,
&gfx.memory_properties,
vk::BufferUsageFlags::TRANSFER_SRC,
(voxels_size / mem::size_of::<Material>() as vk::DeviceSize) as usize,
);
gfx.set_name(voxels_staging.buffer(), cstr!("voxels staging"));
let voxels = DedicatedBuffer::new(
device,
&gfx.memory_properties,
&vk::BufferCreateInfo::default()
.size(voxels_size)
.usage(
vk::BufferUsageFlags::STORAGE_BUFFER | vk::BufferUsageFlags::TRANSFER_DST,
)
.sharing_mode(vk::SharingMode::EXCLUSIVE),
vk::MemoryPropertyFlags::DEVICE_LOCAL,
);
gfx.set_name(voxels.handle, cstr!("voxels"));
let state = DedicatedBuffer::new(
device,
&gfx.memory_properties,
&vk::BufferCreateInfo::default()
.size(state_buffer_unit * vk::DeviceSize::from(concurrency))
.usage(
vk::BufferUsageFlags::STORAGE_BUFFER | vk::BufferUsageFlags::TRANSFER_DST,
)
.sharing_mode(vk::SharingMode::EXCLUSIVE),
vk::MemoryPropertyFlags::DEVICE_LOCAL,
);
gfx.set_name(state.handle, cstr!("surface extraction state"));
let descriptor_pool = device
.create_descriptor_pool(
&vk::DescriptorPoolCreateInfo::default()
.max_sets(concurrency + 1)
.pool_sizes(&[
vk::DescriptorPoolSize {
ty: vk::DescriptorType::UNIFORM_BUFFER,
descriptor_count: 1,
},
vk::DescriptorPoolSize {
ty: vk::DescriptorType::STORAGE_BUFFER,
descriptor_count: 4 * concurrency,
},
]),
None,
)
.unwrap();
let mut layouts = Vec::with_capacity(concurrency as usize + 1);
layouts.resize(concurrency as usize, ctx.ds_layout);
layouts.push(ctx.params_layout);
let mut descriptor_sets = device
.allocate_descriptor_sets(
&vk::DescriptorSetAllocateInfo::default()
.descriptor_pool(descriptor_pool)
.set_layouts(&layouts),
)
.unwrap();
let params_ds = descriptor_sets.pop().unwrap();
device.update_descriptor_sets(
&[vk::WriteDescriptorSet::default()
.dst_set(params_ds)
.dst_binding(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.buffer_info(&[vk::DescriptorBufferInfo {
buffer: params.handle,
offset: 0,
range: vk::WHOLE_SIZE,
}])],
&[],
);
Self {
dimension,
params,
voxel_buffer_unit,
state_buffer_unit,
voxels_staging,
voxels,
state,
descriptor_pool,
params_ds,
descriptor_sets,
free_slots: (0..concurrency).collect(),
concurrency,
}
}
}
pub fn alloc(&mut self) -> Option<u32> {
self.free_slots.pop()
}
pub fn free(&mut self, index: u32) {
debug_assert!(
!self.free_slots.contains(&index),
"double-free of surface extraction scratch slot"
);
self.free_slots.push(index);
}
/// Includes a one-voxel margin around the entire volume
pub fn storage(&mut self, index: u32) -> &mut [Material] {
let start = index as usize * (self.voxel_buffer_unit as usize / mem::size_of::<Material>());
let length = (self.dimension + 2).pow(3) as usize;
&mut self.voxels_staging[start..start + length]
}
pub unsafe fn extract(
&mut self,
device: &Device,
ctx: &SurfaceExtraction,
indirect_buffer: vk::Buffer,
face_buffer: vk::Buffer,
cmd: vk::CommandBuffer,
tasks: &[ExtractTask],
) {
unsafe {
// Prevent overlap with the last batch of work
device.cmd_pipeline_barrier(
cmd,
vk::PipelineStageFlags::COMPUTE_SHADER,
vk::PipelineStageFlags::TRANSFER,
Default::default(),
&[vk::MemoryBarrier {
src_access_mask: vk::AccessFlags::SHADER_READ,
dst_access_mask: vk::AccessFlags::TRANSFER_WRITE,
..Default::default()
}],
&[],
&[],
);
// HACKITY HACK: Queue submit synchronization validation thinks we're
// racing with the preceding chunk draws. Our logic to allocate unique
// ranges should be preventing this, so this may be a false positive.
// However, if that's true, why does the validation error only trigger a
// handful of times at startup? Perhaps we're freeing and reusing
// storage before the previous draw completes, and validation is somehow
// smart enough to notice?
device.cmd_pipeline_barrier(
cmd,
vk::PipelineStageFlags::VERTEX_SHADER,
vk::PipelineStageFlags::COMPUTE_SHADER,
Default::default(),
&[],
&[vk::BufferMemoryBarrier {
buffer: face_buffer,
src_access_mask: vk::AccessFlags::SHADER_READ,
dst_access_mask: vk::AccessFlags::SHADER_WRITE,
offset: 0,
size: vk::WHOLE_SIZE,
..Default::default()
}],
&[],
);
// Prepare shared state
device.cmd_update_buffer(
cmd,
self.params.handle,
0,
as_bytes(&Params {
dimension: self.dimension,
}),
);
device.cmd_fill_buffer(cmd, self.state.handle, 0, vk::WHOLE_SIZE, 0);
let voxel_count = (self.dimension + 2).pow(3) as usize;
let voxels_range =
voxel_count as vk::DeviceSize * mem::size_of::<Material>() as vk::DeviceSize;
let max_faces = 3 * (self.dimension.pow(3) + self.dimension.pow(2));
let dispatch = dispatch_sizes(self.dimension);
device.cmd_bind_descriptor_sets(
cmd,
vk::PipelineBindPoint::COMPUTE,
ctx.pipeline_layout,
0,
&[self.params_ds],
&[],
);
// Prepare each task
for task in tasks {
assert!(
task.index < self.concurrency,
"index {} out of bounds for concurrency {}",
task.index,
self.concurrency
);
let index = task.index as usize;
let voxels_offset = self.voxel_buffer_unit * index as vk::DeviceSize;
device.update_descriptor_sets(
&[
vk::WriteDescriptorSet::default()
.dst_set(self.descriptor_sets[index])
.dst_binding(0)
.descriptor_type(vk::DescriptorType::STORAGE_BUFFER)
.buffer_info(&[vk::DescriptorBufferInfo {
buffer: self.voxels.handle,
offset: voxels_offset,
range: voxels_range,
}]),
vk::WriteDescriptorSet::default()
.dst_set(self.descriptor_sets[index])
.dst_binding(1)
.descriptor_type(vk::DescriptorType::STORAGE_BUFFER)
.buffer_info(&[vk::DescriptorBufferInfo {
buffer: self.state.handle,
offset: self.state_buffer_unit * vk::DeviceSize::from(task.index),
range: 4,
}]),
vk::WriteDescriptorSet::default()
.dst_set(self.descriptor_sets[index])
.dst_binding(2)
.descriptor_type(vk::DescriptorType::STORAGE_BUFFER)
.buffer_info(&[vk::DescriptorBufferInfo {
buffer: indirect_buffer,
offset: task.indirect_offset,
range: INDIRECT_SIZE,
}]),
vk::WriteDescriptorSet::default()
.dst_set(self.descriptor_sets[index])
.dst_binding(3)
.descriptor_type(vk::DescriptorType::STORAGE_BUFFER)
.buffer_info(&[vk::DescriptorBufferInfo {
buffer: face_buffer,
offset: task.face_offset,
range: max_faces as vk::DeviceSize * FACE_SIZE,
}]),
],
&[],
);
device.cmd_copy_buffer(
cmd,
self.voxels_staging.buffer(),
self.voxels.handle,
&[vk::BufferCopy {
src_offset: voxels_offset,
dst_offset: voxels_offset,
size: voxels_range,
}],
);
device.cmd_update_buffer(
cmd,
indirect_buffer,
task.indirect_offset,
as_bytes(&VkDrawIndirectCommand {
vertex_count: 0,
instance_count: 1,
// Each extracted face is drawn as 6 vertices (two triangles)
first_vertex: (task.face_offset / FACE_SIZE) as u32 * 6,
first_instance: task.draw_id,
}),
)
}
device.cmd_pipeline_barrier(
cmd,
vk::PipelineStageFlags::TRANSFER,
vk::PipelineStageFlags::COMPUTE_SHADER,
Default::default(),
&[vk::MemoryBarrier {
src_access_mask: vk::AccessFlags::TRANSFER_WRITE,
dst_access_mask: vk::AccessFlags::SHADER_READ
| vk::AccessFlags::SHADER_WRITE
| vk::AccessFlags::UNIFORM_READ,
..Default::default()
}],
&[],
&[],
);
// Write faces to memory
device.cmd_bind_pipeline(cmd, vk::PipelineBindPoint::COMPUTE, ctx.extract);
for task in tasks {
device.cmd_push_constants(
cmd,
ctx.pipeline_layout,
vk::ShaderStageFlags::COMPUTE,
0,
&u32::from(task.reverse_winding).to_ne_bytes(),
);
device.cmd_bind_descriptor_sets(
cmd,
vk::PipelineBindPoint::COMPUTE,
ctx.pipeline_layout,
1,
&[self.descriptor_sets[task.index as usize]],
&[],
);
device.cmd_dispatch(cmd, dispatch.x, dispatch.y, dispatch.z);
}
device.cmd_pipeline_barrier(
cmd,
vk::PipelineStageFlags::COMPUTE_SHADER,
vk::PipelineStageFlags::VERTEX_SHADER | vk::PipelineStageFlags::DRAW_INDIRECT,
Default::default(),
&[vk::MemoryBarrier {
src_access_mask: vk::AccessFlags::SHADER_WRITE,
dst_access_mask: vk::AccessFlags::SHADER_READ
| vk::AccessFlags::INDIRECT_COMMAND_READ,
..Default::default()
}],
&[],
&[],
);
}
}
pub unsafe fn destroy(&mut self, device: &Device) {
unsafe {
device.destroy_descriptor_pool(self.descriptor_pool, None);
self.params.destroy(device);
self.voxels_staging.destroy(device);
self.voxels.destroy(device);
self.state.destroy(device);
}
}
}
/// Specifies a single chunk's worth of surface extraction work
#[derive(Debug, Copy, Clone)]
pub struct ExtractTask {
pub indirect_offset: vk::DeviceSize,
pub face_offset: vk::DeviceSize,
pub index: u32,
pub draw_id: u32,
pub reverse_winding: bool,
}
fn dispatch_sizes(dimension: u32) -> na::Vector3<u32> {
fn divide_rounding_up(x: u32, y: u32) -> u32 {
debug_assert!(x > 0 && y > 0);
(x - 1) / y + 1
}
// We add 1 to each dimension because we only look at negative-facing faces of the target voxel
na::Vector3::new(
// Extending the X axis accounts for 3 possible faces per voxel
divide_rounding_up((dimension + 1) * 3, WORKGROUP_SIZE[0]),
divide_rounding_up(dimension + 1, WORKGROUP_SIZE[1]),
divide_rounding_up(dimension + 1, WORKGROUP_SIZE[2]),
)
}
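As a standalone illustration of the dispatch math above (a sketch with a simplified tuple return and the workgroup size hard-coded to the `[4, 4, 4]` defined later in this file; not part of the original module):

```rust
// Sketch of dispatch_sizes: one compute invocation per candidate
// negative-facing face. The X axis is tripled to cover all 3 face
// orientations per voxel, and each axis is extended by 1 so the chunk's
// far boundary faces are included.
fn divide_rounding_up(x: u32, y: u32) -> u32 {
    (x - 1) / y + 1
}

fn dispatch_sizes(dimension: u32) -> (u32, u32, u32) {
    (
        divide_rounding_up((dimension + 1) * 3, 4),
        divide_rounding_up(dimension + 1, 4),
        divide_rounding_up(dimension + 1, 4),
    )
}

fn main() {
    // dimension = 16: X covers (16 + 1) * 3 = 51 columns -> 13 workgroups;
    // Y and Z cover 17 rows -> 5 workgroups each.
    assert_eq!(dispatch_sizes(16), (13, 5, 5));
}
```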
#[repr(C)]
#[derive(Copy, Clone)]
struct Params {
dimension: u32,
}
/// Manages storage for ready-to-render voxels
pub struct DrawBuffer {
indirect: DedicatedBuffer,
faces: DedicatedBuffer,
dimension: u32,
face_buffer_unit: vk::DeviceSize,
count: u32,
}
impl DrawBuffer {
/// Allocate a buffer suitable for rendering at most `count` chunks having `dimension` voxels
/// along each edge
pub fn new(gfx: &Base, count: u32, dimension: u32) -> Self {
let device = &*gfx.device;
let max_faces = 3 * (dimension.pow(3) + dimension.pow(2));
let face_buffer_unit = round_up(
max_faces as vk::DeviceSize * FACE_SIZE,
gfx.limits.min_storage_buffer_offset_alignment,
);
let face_buffer_size = count as vk::DeviceSize * face_buffer_unit;
unsafe {
let indirect = DedicatedBuffer::new(
device,
&gfx.memory_properties,
&vk::BufferCreateInfo::default()
.size(count as vk::DeviceSize * INDIRECT_SIZE)
.usage(
vk::BufferUsageFlags::STORAGE_BUFFER
| vk::BufferUsageFlags::INDIRECT_BUFFER
| vk::BufferUsageFlags::TRANSFER_DST,
)
.sharing_mode(vk::SharingMode::EXCLUSIVE),
vk::MemoryPropertyFlags::DEVICE_LOCAL,
);
gfx.set_name(indirect.handle, cstr!("indirect"));
let faces = DedicatedBuffer::new(
device,
&gfx.memory_properties,
&vk::BufferCreateInfo::default()
.size(face_buffer_size)
.usage(vk::BufferUsageFlags::STORAGE_BUFFER)
.sharing_mode(vk::SharingMode::EXCLUSIVE),
vk::MemoryPropertyFlags::DEVICE_LOCAL,
);
gfx.set_name(faces.handle, cstr!("faces"));
Self {
indirect,
faces,
dimension,
face_buffer_unit,
count,
}
}
}
/// Buffer containing face data
pub fn face_buffer(&self) -> vk::Buffer {
self.faces.handle
}
/// Buffer containing face counts for use with cmd_draw_indirect
pub fn indirect_buffer(&self) -> vk::Buffer {
self.indirect.handle
}
/// The offset into the face buffer at which a chunk's face data can be found
pub fn face_offset(&self, chunk: u32) -> vk::DeviceSize {
assert!(chunk < self.count);
vk::DeviceSize::from(chunk) * self.face_buffer_unit
}
/// The offset into the indirect buffer at which a chunk's indirect draw command can be found
pub fn indirect_offset(&self, chunk: u32) -> vk::DeviceSize {
assert!(chunk < self.count);
vk::DeviceSize::from(chunk) * INDIRECT_SIZE
}
/// Number of voxels along a chunk edge
pub fn dimension(&self) -> u32 {
self.dimension
}
pub unsafe fn destroy(&mut self, device: &Device) {
unsafe {
self.indirect.destroy(device);
self.faces.destroy(device);
}
}
}
// Size of the VkDrawIndirectCommand struct
const INDIRECT_SIZE: vk::DeviceSize = 16;
// Size of a single packed face record as written by the extraction shader
const FACE_SIZE: vk::DeviceSize = 8;
// Compute dispatch workgroup dimensions; must match the local size declared in extract.comp
const WORKGROUP_SIZE: [u32; 3] = [4, 4, 4];
fn round_up(value: vk::DeviceSize, alignment: vk::DeviceSize) -> vk::DeviceSize {
value.div_ceil(alignment) * alignment
}
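The buffer-sizing arithmetic above (the `3 * (n^3 + n^2)` face budget and alignment rounding) can be checked in isolation; this is an illustrative sketch with plain `u64` in place of `vk::DeviceSize`, not part of the module:

```rust
// Per-chunk face budget: for each of the 3 axes there are (n + 1) * n * n
// potential faces between adjacent voxels (including the chunk boundary),
// which equals 3 * (n^3 + n^2). `round_up` then pads each chunk's face
// range out to the device's storage-buffer offset alignment.
fn max_faces(n: u64) -> u64 {
    3 * (n.pow(3) + n.pow(2))
}

fn round_up(value: u64, alignment: u64) -> u64 {
    value.div_ceil(alignment) * alignment
}

fn main() {
    let n = 16;
    // Equivalent per-axis formulation of the face budget.
    assert_eq!(max_faces(n), 3 * (n + 1) * n * n);
    // With FACE_SIZE = 8 bytes and a 256-byte alignment (a common value of
    // min_storage_buffer_offset_alignment), the face range for a 16-voxel
    // chunk is already a multiple of 256 and is unchanged by rounding.
    assert_eq!(round_up(max_faces(n) * 8, 256), 104448);
}
```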
================================================
FILE: client/src/graphics/voxels/tests.rs
================================================
use std::{mem, sync::Arc};
use ash::vk;
use lahar::DedicatedMapping;
use renderdoc::{RenderDoc, V110};
use super::{SurfaceExtraction, surface_extraction};
use crate::graphics::{Base, VkDrawIndirectCommand};
use common::world::Material;
struct SurfaceExtractionTest {
gfx: Arc<Base>,
extract: SurfaceExtraction,
scratch: surface_extraction::ScratchBuffer,
indirect: DedicatedMapping<VkDrawIndirectCommand>,
surfaces: DedicatedMapping<[Surface]>,
cmd_pool: vk::CommandPool,
cmd: vk::CommandBuffer,
rd: Option<RenderDoc<V110>>,
}
impl SurfaceExtractionTest {
pub fn new() -> Self {
let gfx = Arc::new(Base::headless());
let extract = SurfaceExtraction::new(&gfx);
let scratch = surface_extraction::ScratchBuffer::new(&gfx, &extract, 1, DIMENSION as u32);
let device = &*gfx.device;
unsafe {
let indirect = DedicatedMapping::<VkDrawIndirectCommand>::zeroed(
device,
&gfx.memory_properties,
vk::BufferUsageFlags::STORAGE_BUFFER | vk::BufferUsageFlags::TRANSFER_DST,
);
let surfaces = DedicatedMapping::<[Surface]>::zeroed_array(
device,
&gfx.memory_properties,
vk::BufferUsageFlags::STORAGE_BUFFER,
3 * (DIMENSION.pow(3) + DIMENSION.pow(2)),
);
let cmd_pool = device
.create_command_pool(
&vk::CommandPoolCreateInfo::default()
.queue_family_index(gfx.queue_family)
.flags(vk::CommandPoolCreateFlags::RESET_COMMAND_BUFFER),
None,
)
.unwrap();
let cmd = device
.allocate_command_buffers(
&vk::CommandBufferAllocateInfo::default()
.command_pool(cmd_pool)
.command_buffer_count(1),
)
.unwrap()[0];
Self {
gfx,
extract,
scratch,
indirect,
surfaces,
cmd_pool,
cmd,
rd: RenderDoc::new().ok(),
}
}
}
fn run(&mut self) {
let device = &*self.gfx.device;
if let Some(ref mut rd) = self.rd {
rd.start_frame_capture(std::ptr::null(), std::ptr::null());
}
unsafe {
device
.begin_command_buffer(
self.cmd,
&vk::CommandBufferBeginInfo::default()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT),
)
.unwrap();
self.scratch.extract(
device,
&self.extract,
self.indirect.buffer(),
self.surfaces.buffer(),
self.cmd,
&[surface_extraction::ExtractTask {
indirect_offset: 0,
face_offset: 0,
index: 0,
draw_id: 0,
reverse_winding: false,
}],
);
device.end_command_buffer(self.cmd).unwrap();
device
.queue_submit(
self.gfx.queue,
&[vk::SubmitInfo::default().command_buffers(&[self.cmd])],
vk::Fence::null(),
)
.unwrap();
device.device_wait_idle().unwrap();
}
if let Some(ref mut rd) = self.rd {
rd.end_frame_capture(std::ptr::null(), std::ptr::null());
}
}
}
impl Drop for SurfaceExtractionTest {
fn drop(&mut self) {
let device = &*self.gfx.device;
unsafe {
self.extract.destroy(device);
self.scratch.destroy(device);
self.indirect.destroy(device);
self.surfaces.destroy(device);
device.destroy_command_pool(self.cmd_pool, None);
}
}
}
const DIMENSION: usize = 2;
#[repr(C)]
#[derive(Debug, Eq, PartialEq)]
struct Surface {
x: u8,
y: u8,
z: u8,
axis: u8,
mat: Material,
_padding: u8,
occlusion: u8,
}
#[test]
#[ignore]
fn surface_extraction() {
assert_eq!(mem::size_of::<Surface>(), 8);
let _guard = common::tracing_guard();
let mut test = SurfaceExtractionTest::new();
for x in test.scratch.storage(0) {
*x = Material::Void;
}
test.run();
assert_eq!(
test.indirect.vertex_count, 0,
"empty chunks have no surfaces"
);
for x in test.scratch.storage(0) {
*x = Material::Dirt;
}
test.run();
assert_eq!(
test.indirect.vertex_count, 0,
"solid chunks have no surfaces"
);
let storage = test.scratch.storage(0);
for x in &mut *storage {
*x = Material::Void;
}
for z in 0..((DIMENSION + 2) / 2) {
for y in 0..(DIMENSION + 2) {
for x in 0..(DIMENSION + 2) {
storage[x + y * (DIMENSION + 2) + z * (DIMENSION + 2).pow(2)] = Material::Dirt;
}
}
}
test.run();
assert_eq!(
test.indirect.vertex_count,
6 * DIMENSION.pow(2) as u32,
"half-solid chunks have n^2 surfaces"
);
let surfaces = &test.surfaces[..DIMENSION.pow(2)];
for expected in &[
Surface {
x: 0,
y: 0,
z: 1,
axis: 5,
mat: Material::Dirt,
_padding: 0,
occlusion: 0xFF,
},
Surface {
x: 1,
y: 0,
z: 1,
axis: 5,
mat: Material::Dirt,
_padding: 0,
occlusion: 0xFF,
},
Surface {
x: 0,
y: 1,
z: 1,
axis: 5,
mat: Material::Dirt,
_padding: 0,
occlusion: 0xFF,
},
Surface {
x: 1,
y: 1,
z: 1,
axis: 5,
mat: Material::Dirt,
_padding: 0,
occlusion: 0xFF,
},
] {
assert!(surfaces.contains(expected));
}
}
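The margin-volume indexing the test above relies on (`x + y * (DIMENSION + 2) + z * (DIMENSION + 2).pow(2)` over the padded storage slot) can be sketched standalone; `voxel_index` is an illustrative helper, not a name from the codebase:

```rust
// x-major linear index into a chunk volume padded with a one-voxel margin
// on every side, matching the layout exposed by ScratchBuffer::storage.
fn voxel_index(x: usize, y: usize, z: usize, dimension: usize) -> usize {
    let side = dimension + 2; // chunk edge plus a margin voxel on each side
    x + y * side + z * side * side
}

fn main() {
    let dimension = 2;
    // A 2-voxel chunk edge yields a 4^3 = 64 material volume.
    assert_eq!((dimension + 2_usize).pow(3), 64);
    // The first and last voxels of the padded volume map to the first and
    // last indices.
    assert_eq!(voxel_index(0, 0, 0, dimension), 0);
    assert_eq!(voxel_index(3, 3, 3, dimension), 63);
}
```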
================================================
FILE: client/src/graphics/window.rs
================================================
use std::sync::Arc;
use std::time::Instant;
use std::{f32, os::raw::c_char};
use ash::{khr, vk};
use lahar::DedicatedImage;
use raw_window_handle::{HasDisplayHandle, HasWindowHandle};
use tracing::{error, info};
use winit::event::KeyEvent;
use winit::event_loop::ActiveEventLoop;
use winit::keyboard::{KeyCode, PhysicalKey};
use winit::{
dpi::PhysicalSize,
event::{DeviceEvent, ElementState, MouseButton, WindowEvent},
window::{CursorGrabMode, Window as WinitWindow},
};
use super::gui::GuiState;
use super::{Base, Core, Draw, Frustum};
use crate::{Config, Sim};
/// OS window
pub struct EarlyWindow {
window: WinitWindow,
required_extensions: &'static [*const c_char],
}
impl EarlyWindow {
pub fn new(event_loop: &ActiveEventLoop) -> Self {
let mut attrs = WinitWindow::default_attributes();
attrs.title = "hypermine".into();
let window = event_loop.create_window(attrs).unwrap();
Self {
window,
required_extensions: ash_window::enumerate_required_extensions(
event_loop.display_handle().unwrap().as_raw(),
)
.expect("unsupported platform"),
}
}
/// Identify the Vulkan extensions needed to render to this window
pub fn required_extensions(&self) -> &'static [*const c_char] {
self.required_extensions
}
}
gitextract_t2xcel5r/
├── .github/
│   ├── dependabot.yml
│   └── workflows/
│       ├── package.yml
│       └── rust.yml
├── .gitignore
├── Cargo.toml
├── LICENSE-APACHE
├── LICENSE-ZLIB
├── README.md
├── assets/
│   ├── .gitattributes
│   └── character.glb
├── client/
│   ├── Cargo.toml
│   ├── benches/
│   │   └── surface_extraction.rs
│   ├── shaders/
│   │   ├── common.h
│   │   ├── fog.frag
│   │   ├── fullscreen.vert
│   │   ├── mesh.frag
│   │   ├── mesh.vert
│   │   ├── surface-extraction/
│   │   │   ├── extract.comp
│   │   │   └── surface.h
│   │   ├── voxels.frag
│   │   └── voxels.vert
│   └── src/
│       ├── config.rs
│       ├── graphics/
│       │   ├── base.rs
│       │   ├── core.rs
│       │   ├── draw.rs
│       │   ├── fog.rs
│       │   ├── frustum.rs
│       │   ├── gltf_mesh.rs
│       │   ├── gui.rs
│       │   ├── meshes.rs
│       │   ├── mod.rs
│       │   ├── png_array.rs
│       │   ├── tests.rs
│       │   ├── voxels/
│       │   │   ├── mod.rs
│       │   │   ├── surface.rs
│       │   │   ├── surface_extraction.rs
│       │   │   └── tests.rs
│       │   └── window.rs
│       ├── lahar_deprecated/
│       │   ├── condition.rs
│       │   ├── mod.rs
│       │   ├── ring_alloc.rs
│       │   ├── staging.rs
│       │   └── transfer.rs
│       ├── lib.rs
│       ├── loader.rs
│       ├── local_character_controller.rs
│       ├── main.rs
│       ├── metrics.rs
│       ├── net.rs
│       ├── prediction.rs
│       ├── sim.rs
│       └── worldgen_driver.rs
├── common/
│   ├── Cargo.toml
│   ├── benches/
│   │   └── bench.rs
│   └── src/
│       ├── character_controller/
│       │   ├── collision.rs
│       │   ├── mod.rs
│       │   └── vector_bounds.rs
│       ├── chunk_collision.rs
│       ├── chunk_ray_casting.rs
│       ├── chunks.rs
│       ├── codec.rs
│       ├── collision_math.rs
│       ├── cursor.rs
│       ├── dodeca.rs
│       ├── graph.rs
│       ├── graph_collision.rs
│       ├── graph_entities.rs
│       ├── graph_ray_casting.rs
│       ├── id.rs
│       ├── lib.rs
│       ├── margins.rs
│       ├── math.rs
│       ├── node.rs
│       ├── peer_traverser.rs
│       ├── proto.rs
│       ├── sim_config.rs
│       ├── traversal.rs
│       ├── voxel_math.rs
│       ├── world.rs
│       └── worldgen/
│           ├── horosphere.rs
│           ├── mod.rs
│           ├── plane.rs
│           └── terraingen.rs
├── docs/
│   ├── README.md
│   └── world_generation.md
├── save/
│   ├── Cargo.toml
│   ├── benches/
│   │   └── bench.rs
│   ├── gen-protos/
│   │   ├── Cargo.toml
│   │   └── src/
│   │       └── main.rs
│   ├── src/
│   │   ├── lib.rs
│   │   ├── protos.proto
│   │   └── protos.rs
│   └── tests/
│       ├── heavy.rs
│       └── tests.rs
├── server/
│   ├── Cargo.toml
│   └── src/
│       ├── config.rs
│       ├── input_queue.rs
│       ├── lib.rs
│       ├── main.rs
│       ├── postcard_helpers.rs
│       └── sim.rs
└── shell.nix
SYMBOL INDEX (1131 symbols across 73 files)
FILE: client/benches/surface_extraction.rs
function extract (line 12) | fn extract(bench: &mut Bencher) {
constant DIMENSION (line 71) | const DIMENSION: u32 = 16;
constant BATCH_SIZE (line 72) | const BATCH_SIZE: u32 = 16;
FILE: client/shaders/common.h
type Common (line 7) | uniform Common {
FILE: client/shaders/surface-extraction/surface.h
type Surface (line 5) | struct Surface {
function get_pos (line 13) | uvec3 get_pos(Surface s) {
function get_axis (line 25) | uint get_axis(Surface s) {
function get_mat (line 29) | uint get_mat(Surface s) {
function get_occlusion (line 33) | float get_occlusion(Surface s, uvec2 texcoords) {
function surface (line 37) | Surface surface(uvec3 pos, uint axis, bool reverse, uint mat, uvec4 occl...
FILE: client/src/config.rs
type Config (line 13) | pub struct Config {
method load (line 23) | pub fn load(dirs: &directories::ProjectDirs) -> Self {
method find_asset (line 91) | pub fn find_asset(&self, path: &Path) -> Option<PathBuf> {
type RawConfig (line 106) | struct RawConfig {
FILE: client/src/graphics/base.rs
type Base (line 16) | pub struct Base {
method new (line 60) | pub fn new(
method save_pipeline_cache (line 298) | pub fn save_pipeline_cache(&self) {
method set_name (line 319) | pub unsafe fn set_name<T: vk::Handle>(&self, object: T, name: &CStr) {
method headless (line 334) | pub fn headless() -> Self {
method drop (line 46) | fn drop(&mut self) {
constant COLOR_FORMAT (line 341) | pub const COLOR_FORMAT: vk::Format = vk::Format::B8G8R8A8_SRGB;
FILE: client/src/graphics/core.rs
type Core (line 14) | pub struct Core {
method new (line 39) | pub fn new(exts: &[*const c_char]) -> Self {
method drop (line 28) | fn drop(&mut self) {
function messenger_callback (line 126) | unsafe extern "system" fn messenger_callback(
FILE: client/src/graphics/draw.rs
type Draw (line 15) | pub struct Draw {
method new (line 63) | pub fn new(gfx: Arc<Base>, cfg: Arc<Config>) -> Self {
method configure (line 233) | pub fn configure(&mut self, cfg: &SimConfig) {
method wait (line 250) | pub unsafe fn wait(&mut self) {
method image_acquired (line 268) | pub fn image_acquired(&self) -> vk::Semaphore {
method draw (line 281) | pub unsafe fn draw(
method wait_idle (line 570) | pub fn wait_idle(&self) {
constant PIPELINE_DEPTH (line 59) | const PIPELINE_DEPTH: u32 = 2;
constant TIMESTAMPS_PER_FRAME (line 60) | const TIMESTAMPS_PER_FRAME: u32 = 3;
method drop (line 581) | fn drop(&mut self) {
type State (line 610) | struct State {
type Uniforms (line 642) | struct Uniforms {
FILE: client/src/graphics/fog.rs
constant VERT (line 7) | const VERT: &[u32] = include_glsl!("shaders/fullscreen.vert");
constant FRAG (line 8) | const FRAG: &[u32] = include_glsl!("shaders/fog.frag");
type Fog (line 10) | pub struct Fog {
method new (line 16) | pub fn new(gfx: &Base) -> Self {
method draw (line 126) | pub unsafe fn draw(
method destroy (line 146) | pub unsafe fn destroy(&mut self, device: &Device) {
function density (line 156) | pub fn density(distance: f32, transmission: f32, exponent: f32) -> f32 {
FILE: client/src/graphics/frustum.rs
type Frustum (line 4) | pub struct Frustum {
method from_vfov (line 13) | pub fn from_vfov(vfov: f32, aspect_ratio: f32) -> Self {
method projection (line 28) | pub fn projection(&self, znear: f32) -> na::Projective3<f32> {
method planes (line 51) | pub fn planes(&self) -> FrustumPlanes {
type FrustumPlanes (line 74) | pub struct FrustumPlanes {
method contain (line 82) | pub fn contain(&self, point: &MPoint<f32>, radius: f32) -> bool {
function planes_sanity (line 100) | fn planes_sanity() {
FILE: client/src/graphics/gltf_mesh.rs
type GlbFile (line 20) | pub struct GlbFile {
method load (line 33) | async fn load(self, ctx: &LoadCtx) -> Result<GltfScene> {
type Output (line 25) | type Output = GltfScene;
method load (line 27) | fn load(self, ctx: &LoadCtx) -> LoadFuture<'_, Self::Output> {
type GltfScene (line 69) | pub struct GltfScene(pub Vec<Mesh>);
method cleanup (line 72) | unsafe fn cleanup(self, gfx: &Base) {
function load_node (line 81) | fn load_node<'a>(
function load_mesh (line 110) | async fn load_mesh(
function load_primitive (line 123) | async fn load_primitive(
type Geometry (line 205) | struct Geometry {
function load_geom (line 211) | async fn load_geom(
function load_material (line 375) | async fn load_material(
function load_solid_color (line 516) | async fn load_solid_color(ctx: &LoadCtx, rgba: [f32; 4]) -> Result<Dedic...
FILE: client/src/graphics/gui.rs
type GuiState (line 7) | pub struct GuiState {
method new (line 12) | pub fn new() -> Self {
method toggle_gui (line 17) | pub fn toggle_gui(&mut self) {
method run (line 23) | pub fn run(&self, sim: &Sim) {
FILE: client/src/graphics/meshes.rs
constant VERT (line 11) | const VERT: &[u32] = include_glsl!("shaders/mesh.vert");
constant FRAG (line 12) | const FRAG: &[u32] = include_glsl!("shaders/mesh.frag");
type Meshes (line 14) | pub struct Meshes {
method new (line 21) | pub fn new(gfx: &Base, ds_layout: vk::DescriptorSetLayout) -> Self {
method draw (line 166) | pub unsafe fn draw(
method destroy (line 207) | pub unsafe fn destroy(&mut self, device: &Device) {
type Vertex (line 216) | pub struct Vertex {
type Mesh (line 223) | pub struct Mesh {
method cleanup (line 235) | unsafe fn cleanup(mut self, gfx: &Base) {
FILE: client/src/graphics/mod.rs
function as_bytes (line 31) | unsafe fn as_bytes<T: Copy>(x: &T) -> &[u8] {
type VkDrawIndirectCommand (line 37) | pub struct VkDrawIndirectCommand {
FILE: client/src/graphics/png_array.rs
type PngArray (line 15) | pub struct PngArray {
type Output (line 21) | type Output = DedicatedImage;
method load (line 23) | fn load(self, handle: &LoadCtx) -> LoadFuture<'_, Self::Output> {
FILE: client/src/graphics/tests.rs
function init_base (line 5) | fn init_base() {
FILE: client/src/graphics/voxels/mod.rs
type Voxels (line 28) | pub struct Voxels {
method new (line 39) | pub fn new(
method prepare (line 81) | pub unsafe fn prepare(
method draw (line 206) | pub unsafe fn draw(
method destroy (line 233) | pub unsafe fn destroy(&mut self, device: &Device) {
type Frame (line 243) | pub struct Frame {
method destroy (line 251) | pub unsafe fn destroy(&mut self, device: &Device) {
method new (line 259) | pub fn new(gfx: &Base, ctx: &Voxels) -> Self {
constant MAX_CHUNKS (line 269) | const MAX_CHUNKS: u32 = 8192;
type SurfaceState (line 271) | struct SurfaceState {
FILE: client/src/graphics/voxels/surface.rs
constant VERT (line 9) | const VERT: &[u32] = include_glsl!("shaders/voxels.vert");
constant FRAG (line 10) | const FRAG: &[u32] = include_glsl!("shaders/voxels.frag");
type Surface (line 12) | pub struct Surface {
method new (line 23) | pub fn new(gfx: &Base, loader: &mut Loader, buffer: &DrawBuffer) -> Se...
method bind (line 247) | pub unsafe fn bind(
method draw (line 315) | pub unsafe fn draw(
method destroy (line 333) | pub unsafe fn destroy(&mut self, device: &Device) {
type Frame (line 346) | pub struct Frame {
method new (line 351) | pub fn new(gfx: &Base, count: u32) -> Self {
method transforms_mut (line 364) | pub fn transforms_mut(&mut self) -> &mut [na::Matrix4<f32>] {
method destroy (line 370) | pub unsafe fn destroy(&mut self, device: &Device) {
constant TRANSFORM_SIZE (line 378) | pub const TRANSFORM_SIZE: vk::DeviceSize = 64;
FILE: client/src/graphics/voxels/surface_extraction.rs
constant EXTRACT (line 11) | const EXTRACT: &[u32] = include_glsl!("shaders/surface-extraction/extrac...
type SurfaceExtraction (line 14) | pub struct SurfaceExtraction {
method new (line 22) | pub fn new(gfx: &Base) -> Self {
method destroy (line 148) | pub unsafe fn destroy(&mut self, device: &Device) {
type ScratchBuffer (line 159) | pub struct ScratchBuffer {
method new (line 177) | pub fn new(gfx: &Base, ctx: &SurfaceExtraction, concurrency: u32, dime...
method alloc (line 295) | pub fn alloc(&mut self) -> Option<u32> {
method free (line 299) | pub fn free(&mut self, index: u32) {
method storage (line 308) | pub fn storage(&mut self, index: u32) -> &mut [Material] {
method extract (line 314) | pub unsafe fn extract(
method destroy (line 518) | pub unsafe fn destroy(&mut self, device: &Device) {
type ExtractTask (line 531) | pub struct ExtractTask {
function dispatch_sizes (line 539) | fn dispatch_sizes(dimension: u32) -> na::Vector3<u32> {
type Params (line 556) | struct Params {
type DrawBuffer (line 561) | pub struct DrawBuffer {
method new (line 572) | pub fn new(gfx: &Base, count: u32, dimension: u32) -> Self {
method face_buffer (line 620) | pub fn face_buffer(&self) -> vk::Buffer {
method indirect_buffer (line 625) | pub fn indirect_buffer(&self) -> vk::Buffer {
method face_offset (line 630) | pub fn face_offset(&self, chunk: u32) -> vk::DeviceSize {
method indirect_offset (line 636) | pub fn indirect_offset(&self, chunk: u32) -> vk::DeviceSize {
method dimension (line 642) | pub fn dimension(&self) -> u32 {
method destroy (line 646) | pub unsafe fn destroy(&mut self, device: &Device) {
constant INDIRECT_SIZE (line 655) | const INDIRECT_SIZE: vk::DeviceSize = 16;
constant FACE_SIZE (line 657) | const FACE_SIZE: vk::DeviceSize = 8;
constant WORKGROUP_SIZE (line 659) | const WORKGROUP_SIZE: [u32; 3] = [4, 4, 4];
function round_up (line 661) | fn round_up(value: vk::DeviceSize, alignment: vk::DeviceSize) -> vk::Dev...
FILE: client/src/graphics/voxels/tests.rs
type SurfaceExtractionTest (line 11) | struct SurfaceExtractionTest {
method new (line 23) | pub fn new() -> Self {
method run (line 74) | fn run(&mut self) {
method drop (line 123) | fn drop(&mut self) {
constant DIMENSION (line 135) | const DIMENSION: usize = 2;
type Surface (line 139) | struct Surface {
function surface_extraction (line 151) | fn surface_extraction() {
FILE: client/src/graphics/window.rs
type EarlyWindow (line 23) | pub struct EarlyWindow {
method new (line 29) | pub fn new(event_loop: &ActiveEventLoop) -> Self {
method required_extensions (line 43) | pub fn required_extensions(&self) -> &'static [*const c_char] {
type Window (line 49) | pub struct Window {
method new (line 68) | pub fn new(
method supports (line 105) | pub fn supports(&self, physical: vk::PhysicalDevice, queue_family_inde...
method init_rendering (line 113) | pub fn init_rendering(&mut self, gfx: Arc<Base>) {
method handle_device_event (line 124) | pub fn handle_device_event(&mut self, event: DeviceEvent) {
method handle_event (line 140) | pub fn handle_event(&mut self, event: WindowEvent, event_loop: &Active...
method handle_net (line 293) | fn handle_net(&mut self, msg: server::Message) {
method draw (line 320) | fn draw(&mut self) {
function number_key_to_index (line 388) | fn number_key_to_index(key: KeyCode) -> Option<usize> {
method drop (line 405) | fn drop(&mut self) {
type SwapchainMgr (line 414) | struct SwapchainMgr {
method new (line 421) | fn new(window: &Window, gfx: Arc<Base>, fallback_size: PhysicalSize<u3...
method update (line 467) | unsafe fn update(
method acquire_next_image (line 487) | unsafe fn acquire_next_image(&self, signal: vk::Semaphore) -> Result<(...
method queue_present (line 499) | unsafe fn queue_present(&self, index: u32) -> Result<bool, vk::Result> {
type SwapchainState (line 513) | struct SwapchainState {
method new (line 522) | unsafe fn new(
method drop (line 684) | fn drop(&mut self) {
type Frame (line 699) | struct Frame {
type InputState (line 713) | struct InputState {
method movement (line 727) | fn movement(&self) -> na::Vector3<f32> {
method roll (line 735) | fn roll(&self) -> f32 {
FILE: client/src/lahar_deprecated/condition.rs
type Condition (line 4) | pub struct Condition {
method new (line 10) | pub fn new() -> Self {
method register (line 22) | pub fn register(&mut self, cx: &mut Context, state: &mut State) {
method notify (line 31) | pub fn notify(&mut self) {
type State (line 43) | pub struct State(Option<u64>);
FILE: client/src/lahar_deprecated/ring_alloc.rs
type RingAlloc (line 4) | pub struct RingAlloc {
method new (line 17) | pub fn new() -> Self {
method alloc (line 28) | pub fn alloc(&mut self, capacity: usize, size: usize) -> Option<(usize...
method free (line 70) | pub fn free(&mut self, id: Id) {
type Id (line 80) | pub struct Id(u64);
function sanity (line 87) | fn sanity() {
FILE: client/src/lahar_deprecated/staging.rs
type StagingBuffer (line 17) | pub struct StagingBuffer {
method new (line 29) | pub fn new(
method buffer (line 52) | pub fn buffer(&self) -> vk::Buffer {
method capacity (line 57) | pub fn capacity(&self) -> usize {
method alloc (line 65) | pub fn alloc(&self, size: usize) -> impl Future<Output = Option<Alloc<...
method free (line 91) | fn free(&self, id: ring_alloc::Id) {
type State (line 23) | struct State {
method drop (line 99) | fn drop(&mut self) {
type Alloc (line 107) | pub struct Alloc<'a> {
function offset (line 114) | pub fn offset(&self) -> vk::DeviceSize {
function size (line 119) | pub fn size(&self) -> vk::DeviceSize {
type Target (line 125) | type Target = [u8];
method deref (line 127) | fn deref(&self) -> &[u8] {
method deref_mut (line 133) | fn deref_mut(&mut self) -> &mut [u8] {
method drop (line 139) | fn drop(&mut self) {
FILE: client/src/lahar_deprecated/transfer.rs
type TransferHandle (line 16) | pub struct TransferHandle {
method run (line 21) | pub unsafe fn run(
type TransferContext (line 34) | pub struct TransferContext {
type ShutDown (line 45) | pub struct ShutDown;
method fmt (line 48) | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
type Message (line 56) | struct Message {
type Reactor (line 61) | pub struct Reactor {
method new (line 76) | pub unsafe fn new(
method poll (line 116) | pub fn poll(&mut self) -> Result<(), Disconnected> {
method run_for (line 120) | pub fn run_for(&mut self, timeout: Duration) -> Result<(), Disconnecte...
method queue (line 165) | fn queue(&mut self) -> Result<(), Disconnected> {
method prepare (line 178) | fn prepare(&mut self, send: oneshot::Sender<()>) -> vk::CommandBuffer {
method flush (line 218) | fn flush(&mut self) {
method drop (line 261) | fn drop(&mut self) {
type Batch (line 282) | struct Batch {
type Disconnected (line 289) | pub struct Disconnected;
FILE: client/src/loader.rs
type Cleanup (line 25) | pub trait Cleanup {
method cleanup (line 26) | unsafe fn cleanup(self, gfx: &Base);
method cleanup (line 30) | unsafe fn cleanup(mut self, gfx: &Base) {
type Loadable (line 37) | pub trait Loadable: Send + 'static {
method load (line 39) | fn load(self, ctx: &LoadCtx) -> LoadFuture<'_, Self::Output>;
type LoadFuture (line 42) | pub type LoadFuture<'a, T> =
type Loader (line 45) | pub struct Loader {
method new (line 55) | pub fn new(cfg: Arc<Config>, gfx: Arc<Base>) -> Self {
method load (line 121) | pub fn load<L: Loadable>(&mut self, description: &'static str, x: L) -...
method make_queue (line 157) | pub fn make_queue<T: Loadable>(&mut self, capacity: usize) -> WorkQueu...
method drive (line 195) | pub fn drive(&mut self) {
method get (line 202) | pub fn get<T: 'static + Cleanup>(&self, handle: Asset<T>) -> Option<&T> {
method ctx (line 210) | pub fn ctx(&self) -> &LoadCtx {
method drop (line 216) | fn drop(&mut self) {
type Shared (line 223) | struct Shared {
type Message (line 228) | struct Message {
type LoadCtx (line 234) | pub struct LoadCtx {
method load (line 245) | async fn load<T: Loadable>(&self, x: T) -> Result<T::Output> {
method drop (line 251) | fn drop(&mut self) {
type AnyTable (line 261) | trait AnyTable: Downcast {
method finish (line 262) | fn finish(&mut self, index: u32, value: Box<dyn Any + Send>);
method cleanup (line 263) | fn cleanup(self: Box<Self>, gfx: &Base);
method finish (line 285) | fn finish(&mut self, index: u32, value: Box<dyn Any + Send>) {
method cleanup (line 289) | fn cleanup(self: Box<Self>, gfx: &Base) {
type Table (line 268) | struct Table<T> {
function new (line 273) | fn new() -> Self {
function alloc (line 277) | fn alloc(&mut self) -> u32 {
type Asset (line 299) | pub struct Asset<T: 'static> {
method clone (line 306) | fn clone(&self) -> Self {
type WorkQueue (line 319) | pub struct WorkQueue<T: Loadable> {
function load (line 329) | pub fn load(&mut self, x: T) -> Result<(), T> {
function poll (line 345) | pub fn poll(&mut self) -> Option<T::Output> {
method drop (line 353) | fn drop(&mut self) {
FILE: client/src/local_character_controller.rs
type LocalCharacterController (line 3) | pub struct LocalCharacterController {
method new (line 16) | pub fn new() -> Self {
method oriented_position (line 25) | pub fn oriented_position(&self) -> Position {
method orientation (line 32) | pub fn orientation(&self) -> na::UnitQuaternion<f32> {
method update_position (line 38) | pub fn update_position(
method look_free (line 56) | pub fn look_free(&mut self, delta_yaw: f32, delta_pitch: f32, delta_ro...
method look_level (line 65) | pub fn look_level(&mut self, delta_yaw: f32, delta_pitch: f32) {
method align_to_gravity (line 103) | pub fn align_to_gravity(&mut self) {
method horizontal_orientation (line 128) | pub fn horizontal_orientation(&mut self) -> na::UnitQuaternion<f32> {
method renormalize_orientation (line 143) | pub fn renormalize_orientation(&mut self) {
function assert_aligned_to_gravity (line 154) | fn assert_aligned_to_gravity(subject: &LocalCharacterController) {
function assert_yaw_and_pitch_correct (line 164) | fn assert_yaw_and_pitch_correct(
function look_level_and_horizontal_orientation_examples (line 177) | fn look_level_and_horizontal_orientation_examples() {
function align_to_gravity_examples (line 229) | fn align_to_gravity_examples() {
function update_position_example (line 285) | fn update_position_example() {
FILE: client/src/main.rs
function main (line 15) | fn main() {
type App (line 80) | struct App {
method resumed (line 89) | fn resumed(&mut self, event_loop: &ActiveEventLoop) {
method suspended (line 117) | fn suspended(&mut self, _event_loop: &ActiveEventLoop) {
method window_event (line 121) | fn window_event(
method device_event (line 133) | fn device_event(
method about_to_wait (line 145) | fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
method exiting (line 152) | fn exiting(&mut self, _event_loop: &ActiveEventLoop) {
FILE: client/src/metrics.rs
function init (line 10) | pub fn init() -> Arc<Recorder> {
type Recorder (line 18) | pub struct Recorder {
method report (line 23) | pub fn report(&self) {
type ArcRecorder (line 44) | struct ArcRecorder(Arc<Recorder>);
method describe_counter (line 47) | fn describe_counter(
method describe_gauge (line 56) | fn describe_gauge(
method describe_histogram (line 65) | fn describe_histogram(
method register_counter (line 74) | fn register_counter(
method register_gauge (line 82) | fn register_gauge(
method register_histogram (line 90) | fn register_histogram(
type Handle (line 102) | struct Handle {
method record (line 108) | fn record(&self, value: f64) {
function is_ready_for_profiling (line 135) | fn is_ready_for_profiling() -> bool {
function declare_ready_for_profiling (line 140) | pub fn declare_ready_for_profiling() {
FILE: client/src/net.rs
function spawn (line 15) | pub fn spawn(cfg: Arc<Config>) -> server::Handle {
function run (line 30) | async fn run(
function inner (line 56) | async fn inner(
function handle_outgoing (line 99) | async fn handle_outgoing(
function handle_unordered (line 112) | async fn handle_unordered(incoming: mpsc::UnboundedSender<Message>, conn...
type AcceptAnyCert (line 138) | struct AcceptAnyCert;
method verify_server_cert (line 141) | fn verify_server_cert(
method verify_tls12_signature (line 152) | fn verify_tls12_signature(
method verify_tls13_signature (line 162) | fn verify_tls13_signature(
method supported_verify_schemes (line 178) | fn supported_verify_schemes(&self) -> Vec<rustls::SignatureScheme> {
FILE: client/src/prediction.rs
type PredictedMotion (line 16) | pub struct PredictedMotion {
method new (line 25) | pub fn new(initial_position: Position) -> Self {
method push (line 37) | pub fn push(&mut self, cfg: &SimConfig, graph: &Graph, input: &Charact...
method reconcile (line 53) | pub fn reconcile(
method predicted_position (line 87) | pub fn predicted_position(&self) -> &Position {
method predicted_velocity (line 91) | pub fn predicted_velocity(&self) -> &na::Vector3<f32> {
method predicted_on_ground (line 95) | pub fn predicted_on_ground(&self) -> &bool {
function pos (line 106) | fn pos() -> Position {
function wraparound (line 114) | fn wraparound() {
FILE: client/src/sim.rs
constant MATERIAL_PALETTE (line 26) | const MATERIAL_PALETTE: [Material; 10] = [
type Sim (line 40) | pub struct Sim {
method new (line 85) | pub fn new(
method look (line 123) | pub fn look(&mut self, delta_yaw: f32, delta_pitch: f32, delta_roll: f...
method set_movement_input (line 133) | pub fn set_movement_input(&mut self, mut raw_movement_input: na::Vecto...
method toggle_no_clip (line 145) | pub fn toggle_no_clip(&mut self) {
method set_jump_held (line 152) | pub fn set_jump_held(&mut self, jump_held: bool) {
method set_jump_pressed_true (line 157) | pub fn set_jump_pressed_true(&mut self) {
method set_place_block_pressed_true (line 161) | pub fn set_place_block_pressed_true(&mut self) {
method looking_at (line 166) | pub fn looking_at(&self) -> Option<graph_ray_casting::GraphCastHit> {
method select_material (line 183) | pub fn select_material(&mut self, idx: usize) {
method next_material (line 188) | pub fn next_material(&mut self) {
method prev_material (line 193) | pub fn prev_material(&mut self) {
method pick_material (line 199) | pub fn pick_material(&mut self) {
method selected_material (line 211) | pub fn selected_material(&self) -> Material {
method get_any_inventory_entity_matching_material (line 216) | pub fn get_any_inventory_entity_matching_material(
method count_inventory_entities_matching_material (line 236) | pub fn count_inventory_entities_matching_material(&self, material: Mat...
method set_break_block_pressed_true (line 257) | pub fn set_break_block_pressed_true(&mut self) {
method cfg (line 261) | pub fn cfg(&self) -> &SimConfig {
method step (line 265) | pub fn step(&mut self, dt: Duration, net: &mut server::Handle) {
method handle_net (line 322) | pub fn handle_net(&mut self, msg: server::Message) {
method update_position (line 346) | fn update_position(&mut self, id: EntityId, new_pos: &Position) {
method update_character_state (line 362) | fn update_character_state(&mut self, id: EntityId, new_character_state...
method reconcile_prediction (line 376) | fn reconcile_prediction(&mut self, latest_input: u16) {
method handle_spawns (line 406) | fn handle_spawns(&mut self, msg: proto::Spawns) {
method spawn (line 459) | fn spawn(
method send_input (line 499) | fn send_input(&mut self, net: &mut server::Handle) {
method update_view_position (line 523) | fn update_view_position(&mut self) {
method view (line 561) | pub fn view(&self) -> Position {
method destroy (line 571) | fn destroy(&mut self, entity: Entity) {
method destroy_idless (line 581) | fn destroy_idless(&mut self, entity: Entity) {
method get_local_character_block_update (line 591) | fn get_local_character_block_update(&self) -> Option<BlockUpdate> {
FILE: client/src/worldgen_driver.rs
type WorldgenDriver (line 15) | pub struct WorldgenDriver {
method new (line 24) | pub fn new(chunk_load_parallelism: usize) -> Self {
method drive (line 32) | pub fn drive(&mut self, view: Position, chunk_generation_distance: f32...
method add_chunk_to_graph (line 84) | pub fn add_chunk_to_graph(
method apply_block_update (line 100) | pub fn apply_block_update(&mut self, graph: &mut Graph, block_update: ...
method apply_voxel_data (line 110) | pub fn apply_voxel_data(
type ChunkDesc (line 124) | struct ChunkDesc {
type LoadedChunk (line 129) | struct LoadedChunk {
type WorkQueue (line 135) | struct WorkQueue {
method new (line 144) | pub fn new(chunk_load_parallelism: usize) -> Self {
method load (line 174) | pub fn load(&mut self, x: ChunkDesc) -> bool {
method poll (line 187) | pub fn poll(&mut self) -> Option<LoadedChunk> {
FILE: common/benches/bench.rs
function build_graph (line 12) | fn build_graph(c: &mut Criterion) {
FILE: common/src/character_controller/collision.rs
function check_collision (line 14) | pub fn check_collision(
type CollisionContext (line 77) | pub struct CollisionContext<'a> {
type CollisionCheckingResult (line 82) | pub struct CollisionCheckingResult {
method stationary (line 97) | pub fn stationary() -> CollisionCheckingResult {
type Collision (line 106) | pub struct Collision {
FILE: common/src/character_controller/mod.rs
function run_character_step (line 22) | pub fn run_character_step(
function run_standard_character_step (line 58) | fn run_standard_character_step(
function run_no_clip_character_step (line 115) | fn run_no_clip_character_step(
function get_ground_normal (line 128) | fn get_ground_normal(
function is_ground (line 168) | fn is_ground(ctx: &CharacterControllerContext, normal: &na::UnitVector3<...
function apply_ground_controls (line 174) | fn apply_ground_controls(
function apply_air_controls (line 210) | fn apply_air_controls(ctx: &CharacterControllerContext, velocity: &mut n...
function apply_velocity (line 216) | fn apply_velocity(
function handle_collision (line 274) | fn handle_collision(
type CharacterControllerContext (line 327) | struct CharacterControllerContext<'a> {
FILE: common/src/character_controller/vector_bounds.rs
type BoundedVectors (line 11) | pub struct BoundedVectors {
method new (line 26) | pub fn new(displacement: na::Vector3<f32>, velocity: Option<na::Vector...
method displacement (line 38) | pub fn displacement(&self) -> &na::Vector3<f32> {
method scale_displacement (line 43) | pub fn scale_displacement(&mut self, scale_factor: f32) {
method velocity (line 48) | pub fn velocity(&self) -> Option<&na::Vector3<f32>> {
method bounds (line 53) | pub fn bounds(&self) -> &[VectorBound] {
method add_bound (line 60) | pub fn add_bound(&mut self, new_bound: VectorBound) {
method add_temp_bound (line 68) | pub fn add_temp_bound(&mut self, new_bound: VectorBound) {
method clear_temp_bounds (line 74) | pub fn clear_temp_bounds(&mut self) {
method apply_bound (line 79) | fn apply_bound(&mut self, new_bound: &VectorBound) {
type VectorBound (line 143) | pub struct VectorBound {
method new (line 160) | pub fn new(
method constrain_vector (line 174) | fn constrain_vector(&self, subject: &mut na::Vector3<f32>, error_margi...
method check_vector (line 186) | fn check_vector(&self, subject: &na::Vector3<f32>, error_margin: f32) ...
method get_self_constrained_with_bound (line 208) | fn get_self_constrained_with_bound(&self, bound: &VectorBound) -> Opti...
function vector_bound_group_example (line 232) | fn vector_bound_group_example() {
function constrain_vector_example (line 277) | fn constrain_vector_example() {
function get_self_constrained_with_bound_example (line 299) | fn get_self_constrained_with_bound_example() {
function assert_bounds_achieved (line 332) | fn assert_bounds_achieved(bounds: &BoundedVectors) {
function assert_collinear (line 338) | fn assert_collinear(v0: na::Vector3<f32>, v1: na::Vector3<f32>, epsilon:...
function unit_vector (line 347) | fn unit_vector(x: f32, y: f32, z: f32) -> na::UnitVector3<f32> {
FILE: common/src/chunk_collision.rs
type ChunkCastHit (line 9) | pub struct ChunkCastHit {
function chunk_sphere_cast (line 23) | pub fn chunk_sphere_cast(
function find_face_collision (line 75) | fn find_face_collision(
function find_edge_collision (line 154) | fn find_edge_collision(
function find_vertex_collision (line 227) | fn find_vertex_collision(
function voxel_is_solid (line 300) | fn voxel_is_solid(voxel_data: &VoxelData, layout: &ChunkLayout, coords: ...
type TestSphereCastContext (line 314) | struct TestSphereCastContext {
method new (line 321) | fn new(collider_radius: f32) -> Self {
method set_voxel (line 337) | fn set_voxel(&mut self, coords: [u8; 3], material: Material) {
function cast_with_test_ray (line 348) | fn cast_with_test_ray(
function chunk_sphere_cast_wrapper (line 382) | fn chunk_sphere_cast_wrapper(
function find_face_collision_wrapper (line 396) | fn find_face_collision_wrapper(
function find_edge_collision_wrapper (line 419) | fn find_edge_collision_wrapper(
function find_vertex_collision_wrapper (line 442) | fn find_vertex_collision_wrapper(
function test_face_collision (line 463) | fn test_face_collision(
function test_edge_collision (line 477) | fn test_edge_collision(
function test_vertex_collision (line 491) | fn test_vertex_collision(ctx: &TestSphereCastContext, ray: &Ray, tanh_di...
function assert_hits_exist_and_eq (line 502) | fn assert_hits_exist_and_eq(hit0: &Option<ChunkCastHit>, hit1: &Option<C...
function sanity_check_normal (line 513) | fn sanity_check_normal(ray: &Ray, hit: &ChunkCastHit) {
function chunk_sphere_cast_examples (line 531) | fn chunk_sphere_cast_examples() {
function face_collisions_one_sided (line 630) | fn face_collisions_one_sided() {
FILE: common/src/chunk_ray_casting.rs
type ChunkCastHit (line 9) | pub struct ChunkCastHit {
function chunk_ray_cast (line 28) | pub fn chunk_ray_cast(
function find_face_collision (line 54) | fn find_face_collision(
function voxel_is_solid (line 131) | fn voxel_is_solid(voxel_data: &VoxelData, layout: &ChunkLayout, coords: ...
type TestRayCastContext (line 145) | struct TestRayCastContext {
method new (line 151) | fn new() -> Self {
method set_voxel (line 166) | fn set_voxel(&mut self, coords: [u8; 3], material: Material) {
function cast_with_test_ray (line 177) | fn cast_with_test_ray(
function chunk_ray_cast_wrapper (line 211) | fn chunk_ray_cast_wrapper(
function test_face_collision (line 219) | fn test_face_collision(
function chunk_ray_cast_examples (line 237) | fn chunk_ray_cast_examples() {
function face_collisions_one_sided (line 296) | fn face_collisions_one_sided() {
FILE: common/src/chunks.rs
type Chunks (line 10) | pub struct Chunks<T> {
type Output (line 15) | type Output = T;
function index (line 16) | fn index(&self, v: Vertex) -> &T {
function index_mut (line 22) | fn index_mut(&mut self, v: Vertex) -> &mut T {
FILE: common/src/codec.rs
function send (line 4) | pub async fn send<T: Serialize + ?Sized>(stream: &mut quinn::SendStream,...
function recv (line 16) | pub async fn recv<T: DeserializeOwned>(stream: &mut quinn::RecvStream) -...
function send_whole (line 35) | pub async fn send_whole<T: Serialize + ?Sized>(
function recv_whole (line 45) | pub async fn recv_whole<T: DeserializeOwned>(
FILE: common/src/collision_math.rs
type Ray (line 6) | pub struct Ray {
method new (line 14) | pub fn new(position: MPoint<f32>, direction: MDirection<f32>) -> Ray {
method ray_point (line 23) | pub fn ray_point(&self, tanh_distance: f32) -> MVector<f32> {
method solve_sphere_plane_intersection (line 29) | pub fn solve_sphere_plane_intersection(
method solve_sphere_line_intersection (line 46) | pub fn solve_sphere_line_intersection(
method solve_sphere_point_intersection (line 66) | pub fn solve_sphere_point_intersection(
method solve_point_plane_intersection (line 89) | pub fn solve_point_plane_intersection(&self, plane_normal: &MDirection...
type Output (line 103) | type Output = Ray;
function mul (line 106) | fn mul(self, rhs: &Ray) -> Self::Output {
function solve_quadratic (line 120) | fn solve_quadratic(constant_term: f32, half_linear_term: f32, quadratic_...
function solve_sphere_plane_intersection_example (line 159) | fn solve_sphere_plane_intersection_example() {
function solve_sphere_plane_intersection_direct_hit (line 178) | fn solve_sphere_plane_intersection_direct_hit() {
function solve_sphere_plane_intersection_miss (line 192) | fn solve_sphere_plane_intersection_miss() {
function solve_sphere_plane_intersection_margin (line 204) | fn solve_sphere_plane_intersection_margin() {
function solve_sphere_line_intersection_example (line 217) | fn solve_sphere_line_intersection_example() {
function solve_sphere_line_intersection_direct_hit (line 241) | fn solve_sphere_line_intersection_direct_hit() {
function solve_sphere_line_intersection_miss (line 259) | fn solve_sphere_line_intersection_miss() {
function solve_sphere_line_intersection_margin (line 272) | fn solve_sphere_line_intersection_margin() {
function solve_sphere_line_intersection_precision (line 286) | fn solve_sphere_line_intersection_precision() {
function solve_sphere_point_intersection_example (line 308) | fn solve_sphere_point_intersection_example() {
function solve_sphere_point_intersection_direct_hit (line 338) | fn solve_sphere_point_intersection_direct_hit() {
function solve_sphere_point_intersection_miss (line 359) | fn solve_sphere_point_intersection_miss() {
function solve_sphere_point_intersection_margin (line 378) | fn solve_sphere_point_intersection_margin() {
function foo (line 398) | fn foo() {
function solve_quadratic_example (line 413) | fn solve_quadratic_example() {
FILE: common/src/cursor.rs
type Cursor (line 9) | pub struct Cursor {
method from_vertex (line 18) | pub fn from_vertex(node: NodeId, vertex: Vertex) -> Self {
method step (line 24) | pub fn step(self, graph: &Graph, dir: Dir) -> Option<Self> {
method canonicalize (line 53) | pub fn canonicalize(self, graph: &Graph) -> Option<ChunkId> {
type Dir (line 62) | pub enum Dir {
method iter (line 71) | pub fn iter() -> impl ExactSizeIterator<Item = Self> + Clone {
method vector (line 77) | pub fn vector(self) -> na::Vector3<isize> {
type Output (line 92) | type Output = Self;
method neg (line 93) | fn neg(self) -> Self::Output {
function neighbor_sanity (line 138) | fn neighbor_sanity() {
function cursor_identities (line 149) | fn cursor_identities() {
FILE: common/src/dodeca.rs
type Side (line 21) | pub enum Side {
constant VALUES (line 40) | pub const VALUES: [Self; 12] = [
method iter (line 55) | pub fn iter() -> impl ExactSizeIterator<Item = Self> {
method adjacent_to (line 63) | pub fn adjacent_to(self, other: Side) -> bool {
method normal (line 69) | pub fn normal(self) -> &'static MDirection<f32> {
method normal_f64 (line 75) | pub fn normal_f64(self) -> &'static MDirection<f64> {
method reflection (line 82) | pub fn reflection(self) -> &'static MIsometry<f32> {
method reflection_f64 (line 89) | pub fn reflection_f64(self) -> &'static MIsometry<f64> {
method is_facing (line 95) | pub fn is_facing(self, p: &MPoint<f32>) -> bool {
constant SIDE_COUNT (line 37) | pub const SIDE_COUNT: usize = Side::VALUES.len();
type Vertex (line 117) | pub enum Vertex {
constant VALUES (line 144) | pub const VALUES: [Self; 20] = [
method iter (line 167) | pub fn iter() -> impl ExactSizeIterator<Item = Self> {
method from_sides (line 173) | pub fn from_sides(sides: [Side; 3]) -> Option<Self> {
method canonical_sides (line 182) | pub fn canonical_sides(self) -> [Side; 3] {
method adjacent_vertices (line 200) | pub fn adjacent_vertices(self) -> [Vertex; 3] {
method chunk_axis_permutations (line 212) | pub fn chunk_axis_permutations(self) -> &'static [ChunkAxisPermutation...
method dual_vertices (line 219) | pub fn dual_vertices(
method chunk_to_node (line 240) | pub fn chunk_to_node(self) -> na::Matrix4<f32> {
method chunk_to_node_f64 (line 246) | pub fn chunk_to_node_f64(self) -> na::Matrix4<f64> {
method node_to_chunk (line 252) | pub fn node_to_chunk(self) -> na::Matrix4<f32> {
method node_to_chunk_f64 (line 258) | pub fn node_to_chunk_f64(self) -> na::Matrix4<f64> {
method dual_to_node (line 264) | pub fn dual_to_node(self) -> &'static MIsometry<f32> {
method dual_to_node_f64 (line 269) | pub fn dual_to_node_f64(self) -> &'static MIsometry<f64> {
method node_to_dual (line 274) | pub fn node_to_dual(self) -> &'static MIsometry<f32> {
method node_to_dual_f64 (line 279) | pub fn node_to_dual_f64(self) -> &'static MIsometry<f64> {
method dual_to_chunk_factor (line 286) | pub fn dual_to_chunk_factor() -> f32 {
method dual_to_chunk_factor_f64 (line 293) | pub fn dual_to_chunk_factor_f64() -> f64 {
method chunk_to_dual_factor (line 300) | pub fn chunk_to_dual_factor() -> f32 {
method chunk_to_dual_factor_f64 (line 307) | pub fn chunk_to_dual_factor_f64() -> f64 {
method chunk_bounding_sphere_center (line 313) | pub fn chunk_bounding_sphere_center(self) -> &'static MPoint<f32> {
method chunk_bounding_sphere_center_f64 (line 319) | pub fn chunk_bounding_sphere_center_f64(self) -> &'static MPoint<f64> {
method parity (line 324) | pub fn parity(self) -> bool {
constant VERTEX_COUNT (line 141) | pub const VERTEX_COUNT: usize = Vertex::VALUES.len();
constant BOUNDING_SPHERE_RADIUS_F64 (line 329) | pub const BOUNDING_SPHERE_RADIUS_F64: f64 = 1.2264568712514068;
constant BOUNDING_SPHERE_RADIUS (line 330) | pub const BOUNDING_SPHERE_RADIUS: f32 = BOUNDING_SPHERE_RADIUS_F64 as f32;
constant CHUNK_BOUNDING_SPHERE_RADIUS_F64 (line 332) | pub const CHUNK_BOUNDING_SPHERE_RADIUS_F64: f64 = BOUNDING_SPHERE_RADIUS...
constant CHUNK_BOUNDING_SPHERE_RADIUS (line 333) | pub const CHUNK_BOUNDING_SPHERE_RADIUS: f32 = CHUNK_BOUNDING_SPHERE_RADI...
function vertex_sides_consistent (line 578) | fn vertex_sides_consistent() {
function sides_to_vertex (line 595) | fn sides_to_vertex() {
function adjacent_chunk_axis_permutations (line 608) | fn adjacent_chunk_axis_permutations() {
function side_is_facing (line 659) | fn side_is_facing() {
function radius (line 667) | fn radius() {
function chunk_bounding_sphere (line 683) | fn chunk_bounding_sphere() {
function chunk_to_node (line 699) | fn chunk_to_node() {
function node_to_chunk (line 712) | fn node_to_chunk() {
FILE: common/src/graph.rs
type Graph (line 16) | pub struct Graph {
method new (line 22) | pub fn new(dimension: u8) -> Self {
method layout (line 32) | pub fn layout(&self) -> &ChunkLayout {
method len (line 38) | pub fn len(&self) -> u32 {
method contains (line 43) | pub fn contains(&self, node: NodeId) -> bool {
method canonicalize (line 50) | pub fn canonicalize(&self, mut chunk: ChunkId) -> Option<ChunkId> {
method parents (line 63) | pub fn parents(&self, node: NodeId) -> impl ExactSizeIterator<Item = (...
method neighbor (line 84) | pub fn neighbor(&self, node: NodeId, which: Side) -> Option<NodeId> {
method depth (line 90) | pub fn depth(&self, node: NodeId) -> u32 {
method normalize_transform (line 96) | pub fn normalize_transform(
method primary_parent_side (line 123) | pub fn primary_parent_side(&self, node: NodeId) -> Option<Side> {
method tree (line 132) | pub fn tree(&self) -> TreeIter<'_> {
method ensure_neighbor (line 138) | pub fn ensure_neighbor(&mut self, node: NodeId, side: Side) -> NodeId {
method is_parent_side (line 145) | fn is_parent_side(&self, node: NodeId, side: Side) -> bool {
method insert_child (line 152) | pub fn insert_child(&mut self, node: NodeId, side: Side) -> NodeId {
method hash_of (line 178) | pub fn hash_of(&self, node: NodeId) -> u128 {
method from_hash (line 183) | pub fn from_hash(&self, hash: u128) -> NodeId {
method populate_parents_of_subject (line 190) | fn populate_parents_of_subject(
method link_neighbors (line 229) | fn link_neighbors(&mut self, a: NodeId, b: NodeId, side: Side) {
type Output (line 240) | type Output = Node;
method index (line 243) | fn index(&self, node_id: NodeId) -> &Node {
method index_mut (line 250) | fn index_mut(&mut self, node_id: NodeId) -> &mut Node {
type NodeId (line 263) | pub struct NodeId(u128);
constant ROOT (line 266) | pub const ROOT: Self = Self(0);
type NodeContainer (line 269) | struct NodeContainer {
method new (line 278) | fn new(primary_parent_side: Option<Side>, depth: u32) -> Self {
type TreeIter (line 289) | pub struct TreeIter<'a> {
function new (line 296) | fn new(graph: &'a Graph) -> Self {
function next_node (line 310) | fn next_node(&mut self) -> Option<&NodeContainer> {
type Item (line 326) | type Item = (Side, NodeId);
method next (line 328) | fn next(&mut self) -> Option<Self::Item> {
function parent_child_relationships (line 343) | fn parent_child_relationships() {
function children_have_common_neighbor (line 363) | fn children_have_common_neighbor() {
function normalize_transform (line 390) | fn normalize_transform() {
function rebuild_from_tree (line 406) | fn rebuild_from_tree() {
function hash_consistency (line 421) | fn hash_consistency() {
FILE: common/src/graph_collision.rs
function sphere_cast (line 20) | pub fn sphere_cast(
type OutOfBounds (line 68) | pub struct OutOfBounds;
type GraphCastHit (line 72) | pub struct GraphCastHit {
type VoxelLocation (line 102) | struct VoxelLocation<'a> {
function new (line 114) | fn new(node_path: &[Side], vertex: Vertex, coords: [u8; 3]) -> VoxelLoca...
type SphereCastExampleTestCase (line 123) | struct SphereCastExampleTestCase<'a> {
function execute (line 148) | fn execute(self) {
function populate_voxel (line 235) | fn populate_voxel(graph: &mut Graph, dimension: u8, voxel_location: &Vox...
function get_voxel_chunk (line 258) | fn get_voxel_chunk(graph: &Graph, voxel_location: &VoxelLocation) -> Chu...
function sphere_cast_examples (line 273) | fn sphere_cast_examples() {
function sphere_cast_near_unloaded_chunk (line 402) | fn sphere_cast_near_unloaded_chunk() {
FILE: common/src/graph_entities.rs
type GraphEntities (line 7) | pub struct GraphEntities {
method new (line 12) | pub fn new() -> Self {
method get (line 18) | pub fn get(&self, node: NodeId) -> &[Entity] {
method insert (line 22) | pub fn insert(&mut self, node: NodeId, entity: Entity) {
method remove (line 28) | pub fn remove(&mut self, node: NodeId, entity: Entity) {
FILE: common/src/graph_ray_casting.rs
function ray_cast (line 20) | pub fn ray_cast(
type OutOfBounds (line 67) | pub struct OutOfBounds;
type GraphCastHit (line 71) | pub struct GraphCastHit {
FILE: common/src/lib.rs
method fmt (line 43) | fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
method sample (line 49) | fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> EntityId {
type Step (line 54) | pub type Step = i32;
function defer (line 56) | pub fn defer<F: FnOnce()>(f: F) -> Defer<F> {
type Defer (line 60) | pub struct Defer<F: FnOnce()>(Option<F>);
function new (line 63) | pub fn new(f: F) -> Self {
function invoke (line 67) | pub fn invoke(self) {}
function cancel (line 69) | pub fn cancel(mut self) {
method drop (line 75) | fn drop(&mut self) {
function sanitize_motion_input (line 83) | pub fn sanitize_motion_input(v: na::Vector3<f32>) -> na::Vector3<f32> {
function tracing_guard (line 90) | pub fn tracing_guard() -> tracing::dispatcher::DefaultGuard {
function init_tracing (line 95) | pub fn init_tracing() {
function tracing_subscriber (line 100) | fn tracing_subscriber() -> impl tracing::Subscriber {
type Anonymize (line 115) | pub trait Anonymize {
method anonymize (line 117) | fn anonymize(&self) -> Self::Output;
type Output (line 121) | type Output = std::path::PathBuf;
method anonymize (line 123) | fn anonymize(&self) -> Self::Output {
FILE: common/src/margins.rs
function fix_margins (line 13) | pub fn fix_margins(
function all_voxels_at_face (line 84) | fn all_voxels_at_face(
function initialize_margins (line 109) | pub fn initialize_margins(dimension: u8, voxels: &mut VoxelData) {
function reconcile_margin_voxels (line 139) | pub fn reconcile_margin_voxels(
function neighbor_axis_permutation (line 217) | fn neighbor_axis_permutation(vertex: Vertex, direction: ChunkDirection) ...
type CoordsWithMargins (line 226) | struct CoordsWithMargins(pub [u8; 3]);
method to_index (line 230) | pub fn to_index(self, chunk_size: u8) -> usize {
method margin_coord (line 238) | pub fn margin_coord(chunk_size: u8, sign: CoordSign) -> u8 {
method boundary_coord (line 246) | pub fn boundary_coord(chunk_size: u8, sign: CoordSign) -> u8 {
method from (line 256) | fn from(value: Coords) -> Self {
type Output (line 262) | type Output = u8;
method index (line 265) | fn index(&self, coord_axis: CoordAxis) -> &u8 {
method index_mut (line 272) | fn index_mut(&mut self, coord_axis: CoordAxis) -> &mut u8 {
type Output (line 278) | type Output = CoordsWithMargins;
method mul (line 280) | fn mul(self, rhs: CoordsWithMargins) -> Self::Output {
function test_fix_margins (line 296) | fn test_fix_margins() {
function test_initialize_margins (line 344) | fn test_initialize_margins() {
function test_reconcile_margin_voxels (line 361) | fn test_reconcile_margin_voxels() {
FILE: common/src/math.rs
type MVector (line 46) | pub struct MVector<N: Scalar>(na::Vector4<N>);
function normalized_point (line 56) | pub fn normalized_point(&self) -> MPoint<N> {
function normalized_direction (line 76) | pub fn normalized_direction(&self) -> MDirection<N> {
function mip (line 93) | pub fn mip(&self, other: &impl AsRef<MVector<N>>) -> N {
function minkowski_outer_product (line 101) | fn minkowski_outer_product(self, other: &Self) -> na::Matrix4<N> {
function cast (line 107) | pub fn cast<N2: RealField + Copy + SupersetOf<N>>(self) -> MVector<N2> {
function zero (line 113) | pub fn zero() -> Self {
function origin (line 119) | pub fn origin() -> Self {
function x (line 125) | pub fn x() -> Self {
function y (line 131) | pub fn y() -> Self {
function z (line 137) | pub fn z() -> Self {
function w (line 143) | pub fn w() -> Self {
function new (line 149) | pub fn new(x: N, y: N, z: N, w: N) -> Self {
function xyz (line 158) | pub fn xyz(self) -> na::Vector3<N> {
type Output (line 164) | type Output = N;
function index (line 166) | fn index(&self, i: usize) -> &Self::Output {
function index_mut (line 173) | fn index_mut(&mut self, i: usize) -> &mut Self::Output {
function from (line 180) | fn from(value: na::Vector4<N>) -> Self {
function from (line 190) | fn from(value: MVector<N>) -> na::Vector4<N> {
type Target (line 196) | type Target = na::coordinates::XYZW<N>;
function deref (line 198) | fn deref(&self) -> &Self::Target {
function deref_mut (line 205) | fn deref_mut(&mut self) -> &mut Self::Target {
function as_ref (line 212) | fn as_ref(&self) -> &MVector<N> {
type Output (line 218) | type Output = MVector<N>;
function add (line 220) | fn add(self, other: MVector<N>) -> Self::Output {
type Output (line 226) | type Output = MVector<N>;
function add (line 228) | fn add(self, other: &MVector<N>) -> Self::Output {
type Output (line 234) | type Output = MVector<N>;
function add (line 236) | fn add(self, other: MVector<N>) -> Self::Output {
type Output (line 242) | type Output = MVector<N>;
function add (line 244) | fn add(self, other: &MVector<N>) -> Self::Output {
function add_assign (line 251) | fn add_assign(&mut self, other: MVector<N>) {
function add_assign (line 258) | fn add_assign(&mut self, other: &MVector<N>) {
type Output (line 264) | type Output = MVector<N>;
function sub (line 266) | fn sub(self, other: MVector<N>) -> Self::Output {
type Output (line 272) | type Output = MVector<N>;
function sub (line 274) | fn sub(self, other: &MVector<N>) -> Self::Output {
type Output (line 280) | type Output = MVector<N>;
function sub (line 282) | fn sub(self, other: MVector<N>) -> Self::Output {
type Output (line 288) | type Output = MVector<N>;
function sub (line 290) | fn sub(self, other: &MVector<N>) -> Self::Output {
function sub_assign (line 297) | fn sub_assign(&mut self, other: MVector<N>) {
function sub_assign (line 304) | fn sub_assign(&mut self, other: &MVector<N>) {
type Output (line 310) | type Output = MVector<N>;
function neg (line 312) | fn neg(self) -> Self::Output {
type Output (line 318) | type Output = MVector<N>;
function neg (line 320) | fn neg(self) -> Self::Output {
type Output (line 326) | type Output = MVector<N>;
function mul (line 328) | fn mul(self, rhs: N) -> Self::Output {
type Output (line 334) | type Output = MVector<N>;
function mul (line 336) | fn mul(self, rhs: N) -> Self::Output {
function mul_assign (line 343) | fn mul_assign(&mut self, rhs: N) {
type Output (line 349) | type Output = MVector<N>;
function div (line 351) | fn div(self, rhs: N) -> Self::Output {
type Output (line 357) | type Output = MVector<N>;
function div (line 359) | fn div(self, rhs: N) -> Self::Output {
function div_assign (line 366) | fn div_assign(&mut self, rhs: N) {
type MPoint (line 376) | pub struct MPoint<N: Scalar>(MVector<N>);
function midpoint (line 380) | pub fn midpoint(&self, other: &Self) -> MPoint<N> {
function distance (line 387) | pub fn distance(&self, other: &Self) -> N {
function mip (line 400) | pub fn mip(&self, other: &impl AsRef<MVector<N>>) -> N {
function origin (line 406) | pub fn origin() -> Self {
function w (line 412) | pub fn w() -> Self {
function new_unchecked (line 419) | pub fn new_unchecked(x: N, y: N, z: N, w: N) -> Self {
function cast (line 425) | pub fn cast<N2: RealField + Copy + SupersetOf<N>>(self) -> MPoint<N2> {
function from (line 432) | fn from(value: MPoint<N>) -> MVector<N> {
function from (line 442) | fn from(value: MPoint<N>) -> na::Vector4<N> {
type Target (line 448) | type Target = na::coordinates::XYZW<N>;
function deref (line 450) | fn deref(&self) -> &Self::Target {
function as_ref (line 458) | fn as_ref(&self) -> &MVector<N> {
type MDirection (line 468) | pub struct MDirection<N: Scalar>(MVector<N>);
function x (line 473) | pub fn x() -> Self {
function y (line 479) | pub fn y() -> Self {
function z (line 485) | pub fn z() -> Self {
function mip (line 494) | pub fn mip(&self, other: &impl AsRef<MVector<N>>) -> N {
function new_unchecked (line 501) | pub fn new_unchecked(x: N, y: N, z: N, w: N) -> Self {
function cast (line 507) | pub fn cast<N2: RealField + Copy + SupersetOf<N>>(self) -> MDirection<N2> {
function from (line 517) | fn from(value: MDirection<N>) -> na::Vector4<N> {
function from (line 524) | fn from(value: MDirection<N>) -> MVector<N> {
function from (line 531) | fn from(value: na::UnitVector3<N>) -> Self {
type Target (line 537) | type Target = na::coordinates::XYZW<N>;
function deref (line 539) | fn deref(&self) -> &Self::Target {
function as_ref (line 547) | fn as_ref(&self) -> &MVector<N> {
type Output (line 553) | type Output = MDirection<N>;
function neg (line 555) | fn neg(self) -> Self::Output {
type Output (line 561) | type Output = MDirection<N>;
function neg (line 563) | fn neg(self) -> Self::Output {
type MIsometry (line 581) | pub struct MIsometry<N: Scalar>(na::Matrix4<N>);
function row (line 586) | pub fn row(&self, i: usize) -> na::MatrixView1x4<'_, N, na::U1, na::U4> {
function identity (line 592) | pub fn identity() -> Self {
function reflection (line 598) | pub fn reflection(normal: &MDirection<N>) -> Self {
function translation (line 612) | pub fn translation(a: &MPoint<N>, b: &MPoint<N>) -> MIsometry<N> {
function translation_along (line 656) | pub fn translation_along(v: &na::Vector3<N>) -> MIsometry<N> {
function from_columns_unchecked (line 683) | pub fn from_columns_unchecked(
function from_column_slice_unchecked (line 699) | pub fn from_column_slice_unchecked(data: &[N]) -> Self {
function inverse (line 710) | pub fn inverse(&self) -> Self {
function parity (line 722) | pub fn parity(&self) -> bool {
function renormalized (line 734) | pub fn renormalized(&self) -> MIsometry<N> {
function cast (line 771) | pub fn cast<N2: RealField + Copy + SupersetOf<N>>(self) -> MIsometry<N2> {
type Output (line 777) | type Output = N;
function index (line 779) | fn index(&self, ij: (usize, usize)) -> &Self::Output {
function from (line 786) | fn from(value: na::UnitQuaternion<N>) -> Self {
function from (line 793) | fn from(value: na::Rotation3<N>) -> Self {
function from (line 802) | fn from(value: MIsometry<N>) -> na::Matrix4<N> {
type Target (line 808) | type Target = na::coordinates::M4x4<N>;
function deref (line 810) | fn deref(&self) -> &Self::Target {
function as_ref (line 817) | fn as_ref(&self) -> &[[N; 4]; 4] {
type Output (line 823) | type Output = MIsometry<N>;
function mul (line 825) | fn mul(self, rhs: MIsometry<N>) -> Self::Output {
type Output (line 831) | type Output = MIsometry<N>;
function mul (line 833) | fn mul(self, rhs: &MIsometry<N>) -> Self::Output {
type Output (line 839) | type Output = MIsometry<N>;
function mul (line 841) | fn mul(self, rhs: MIsometry<N>) -> Self::Output {
type Output (line 847) | type Output = MIsometry<N>;
function mul (line 849) | fn mul(self, rhs: &MIsometry<N>) -> Self::Output {
function mul_assign (line 856) | fn mul_assign(&mut self, rhs: MIsometry<N>) {
function mul_assign (line 863) | fn mul_assign(&mut self, rhs: &MIsometry<N>) {
type Output (line 869) | type Output = MVector<N>;
function mul (line 871) | fn mul(self, rhs: MVector<N>) -> Self::Output {
type Output (line 877) | type Output = MVector<N>;
function mul (line 879) | fn mul(self, rhs: &MVector<N>) -> Self::Output {
type Output (line 885) | type Output = MVector<N>;
function mul (line 887) | fn mul(self, rhs: MVector<N>) -> Self::Output {
type Output (line 893) | type Output = MVector<N>;
function mul (line 895) | fn mul(self, rhs: &MVector<N>) -> Self::Output {
type Output (line 901) | type Output = MPoint<N>;
function mul (line 903) | fn mul(self, rhs: MPoint<N>) -> Self::Output {
type Output (line 909) | type Output = MPoint<N>;
function mul (line 911) | fn mul(self, rhs: &MPoint<N>) -> Self::Output {
type Output (line 917) | type Output = MPoint<N>;
function mul (line 919) | fn mul(self, rhs: MPoint<N>) -> Self::Output {
type Output (line 925) | type Output = MPoint<N>;
function mul (line 927) | fn mul(self, rhs: &MPoint<N>) -> Self::Output {
type Output (line 933) | type Output = MDirection<N>;
function mul (line 935) | fn mul(self, rhs: MDirection<N>) -> Self::Output {
type Output (line 941) | type Output = MDirection<N>;
function mul (line 943) | fn mul(self, rhs: &MDirection<N>) -> Self::Output {
type Output (line 949) | type Output = MDirection<N>;
function mul (line 951) | fn mul(self, rhs: MDirection<N>) -> Self::Output {
type Output (line 957) | type Output = MDirection<N>;
function mul (line 959) | fn mul(self, rhs: &MDirection<N>) -> Self::Output {
function sqr (line 966) | pub fn sqr<N: RealField + Copy>(x: N) -> N {
function project_to_plane (line 976) | pub fn project_to_plane<N: RealField + Copy>(
function rotation_between_axis (line 989) | pub fn rotation_between_axis<N: RealField + Copy>(
type PermuteXYZ (line 1000) | pub trait PermuteXYZ {
method tuv_to_xyz (line 1016) | fn tuv_to_xyz(self, t_axis: usize) -> Self;
method tuv_to_xyz (line 1020) | fn tuv_to_xyz(mut self, t_axis: usize) -> Self {
method tuv_to_xyz (line 1028) | fn tuv_to_xyz(self, t_axis: usize) -> Self {
method tuv_to_xyz (line 1034) | fn tuv_to_xyz(self, t_axis: usize) -> Self {
type Epsilon (line 1045) | type Epsilon = N;
function default_epsilon (line 1048) | fn default_epsilon() -> Self::Epsilon {
function abs_diff_eq (line 1053) | fn abs_diff_eq(&self, other: &Self, epsilon: Self::Epsilon) -> bool {
type Epsilon (line 1059) | type Epsilon = N;
function default_epsilon (line 1062) | fn default_epsilon() -> Self::Epsilon {
function abs_diff_eq (line 1067) | fn abs_diff_eq(&self, other: &Self, epsilon: Self::Epsilon) -> bool {
function reflect_example (line 1074) | fn reflect_example() {
function translate_example (line 1091) | fn translate_example() {
function translate_identity (line 1110) | fn translate_identity() {
function translate_equivalence (line 1124) | fn translate_equivalence() {
function translate_distance (line 1137) | fn translate_distance() {
function distance_example (line 1144) | fn distance_example() {
function distance_commutative (line 1152) | fn distance_commutative() {
function midpoint_distance (line 1159) | fn midpoint_distance() {
function renormalize_translation (line 1168) | fn renormalize_translation() {
function renormalize_reflection (line 1178) | fn renormalize_reflection() {
function renormalize_normalizes_matrix (line 1189) | fn renormalize_normalizes_matrix() {
function project_to_plane_example (line 1214) | fn project_to_plane_example() {
function rotation_between_axis_example (line 1226) | fn rotation_between_axis_example() {
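The `MVector`/`MPoint` entries above (`mip`, `distance`, `midpoint`, `origin`) suggest the hyperboloid model of hyperbolic space. As a hedged, self-contained sketch (not the crate's code), here is the Minkowski inner product with signature (+, +, +, −) that `mip` presumably computes, and the distance formula `MPoint::distance` likely derives from it; the coordinate layout `(x, y, z, w)` matches the `XYZW` deref target in the index:

```rust
// Hypothetical sketch: Minkowski inner product, signature (+, +, +, -).
fn mip(a: [f64; 4], b: [f64; 4]) -> f64 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2] - a[3] * b[3]
}

/// Distance between two points on the hyperboloid mip(p, p) == -1:
/// d(a, b) = acosh(-mip(a, b)). The clamp guards against rounding
/// pushing the argument slightly below 1.
fn distance(a: [f64; 4], b: [f64; 4]) -> f64 {
    (-mip(a, b)).max(1.0).acosh()
}

fn main() {
    let origin = [0.0, 0.0, 0.0, 1.0]; // analogous to MPoint::origin()
    assert!((mip(origin, origin) + 1.0).abs() < 1e-12); // on the hyperboloid
    assert!(distance(origin, origin).abs() < 1e-9);

    // A point translated hyperbolic distance t along x sits at
    // (sinh t, 0, 0, cosh t); its distance from the origin is t.
    let t = 0.5_f64;
    let p = [t.sinh(), 0.0, 0.0, t.cosh()];
    assert!((distance(origin, p) - t).abs() < 1e-9);
    println!("ok");
}
```

This also explains why the index separates `MPoint` (normalized, `mip(p, p) = -1`) from `MDirection` (spacelike, unit-norm) as distinct wrappers over `MVector`.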
FILE: common/src/node.rs
type ChunkId (line 18) | pub struct ChunkId {
method new (line 24) | pub fn new(node: NodeId, vertex: Vertex) -> Self {
method partial_node_state (line 32) | pub fn partial_node_state(&self, node_id: NodeId) -> &PartialNodeState {
method ensure_partial_node_state (line 38) | pub fn ensure_partial_node_state(&mut self, node_id: NodeId) {
method node_state (line 53) | pub fn node_state(&self, node_id: NodeId) -> &NodeState {
method ensure_node_state (line 59) | pub fn ensure_node_state(&mut self, node_id: NodeId) {
method get_relative_up (line 75) | pub fn get_relative_up(&self, position: &Position) -> Option<na::UnitVec...
method get_chunk_neighbor (line 86) | pub fn get_chunk_neighbor(
method get_block_neighbor (line 109) | pub fn get_block_neighbor(
method populate_chunk (line 137) | pub fn populate_chunk(&mut self, chunk: ChunkId, mut voxels: VoxelData) {
method get_material (line 170) | pub fn get_material(&self, chunk_id: ChunkId, coords: Coords) -> Option<...
method update_block (line 182) | pub fn update_block(&mut self, block_update: &BlockUpdate) -> bool {
type Output (line 215) | type Output = Chunk;
method index (line 218) | fn index(&self, chunk: ChunkId) -> &Chunk {
method index_mut (line 225) | fn index_mut(&mut self, chunk: ChunkId) -> &mut Chunk {
type Node (line 234) | pub struct Node {
type Chunk (line 246) | pub enum Chunk {
type VoxelData (line 281) | pub enum VoxelData {
method data_mut (line 295) | pub fn data_mut(&mut self, dimension: u8) -> &mut [Material] {
method get (line 305) | pub fn get(&self, index: usize) -> Material {
method is_solid (line 312) | pub fn is_solid(&self) -> bool {
method deserialize (line 321) | pub fn deserialize(serialized: &SerializedVoxelData, dimension: u8) ->...
method serialize (line 346) | pub fn serialize(&self, dimension: u8) -> SerializedVoxelData {
type ChunkLayout (line 367) | pub struct ChunkLayout {
method new (line 373) | pub fn new(dimension: u8) -> Self {
method dimension (line 382) | pub fn dimension(&self) -> u8 {
method dual_to_grid_factor (line 388) | pub fn dual_to_grid_factor(&self) -> f32 {
method dual_to_voxel (line 395) | pub fn dual_to_voxel(&self, dual_coord: f32) -> Option<u8> {
method grid_to_dual (line 408) | pub fn grid_to_dual(&self, grid_coord: u8) -> f32 {
method neighboring_voxels (line 414) | pub fn neighboring_voxels(&self, grid_coord: u8) -> impl Iterator<Item...
type VoxelAABB (line 420) | pub struct VoxelAABB {
method from_ray_segment_and_radius (line 429) | pub fn from_ray_segment_and_radius(
method grid_points (line 465) | pub fn grid_points(
method grid_lines (line 479) | pub fn grid_lines(&self, axis0: usize, axis1: usize) -> impl Iterator<...
method grid_planes (line 486) | pub fn grid_planes(&self, axis: usize) -> impl Iterator<Item = u8> + u...
function voxel_aabb_coverage (line 503) | fn voxel_aabb_coverage() {
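The `VoxelData` entries (`data_mut(dimension)`, `get(index)`, `is_solid`) suggest a common space optimization: a chunk of a single material stores only that material, and a full voxel array is allocated lazily on first mutation. The following is a hypothetical sketch under that assumption; the real type's array length (e.g. whether it includes margin voxels) is not checked here:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Material { Void, Stone }

// Hypothetical mirror of the Solid/Dense split the index suggests.
enum VoxelData {
    Solid(Material),
    Dense(Box<[Material]>),
}

impl VoxelData {
    fn get(&self, index: usize) -> Material {
        match self {
            VoxelData::Solid(m) => *m,
            VoxelData::Dense(d) => d[index],
        }
    }

    /// Promote to a dense array on first mutable access, as the
    /// `data_mut(dimension)` signature hints.
    fn data_mut(&mut self, dimension: u8) -> &mut [Material] {
        if let VoxelData::Solid(m) = *self {
            let len = usize::from(dimension).pow(3);
            *self = VoxelData::Dense(vec![m; len].into_boxed_slice());
        }
        match self {
            VoxelData::Dense(d) => d,
            VoxelData::Solid(_) => unreachable!(),
        }
    }

    fn is_solid(&self) -> bool {
        matches!(self, VoxelData::Solid(_))
    }
}

fn main() {
    let mut v = VoxelData::Solid(Material::Void);
    assert!(v.is_solid());
    assert_eq!(v.get(7), Material::Void); // any index reads the one material
    v.data_mut(4)[7] = Material::Stone;   // promotes to Dense with 4^3 slots
    assert!(!v.is_solid());
    assert_eq!(v.get(7), Material::Stone);
    println!("ok");
}
```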
FILE: common/src/peer_traverser.rs
function expect_peer_nodes (line 13) | pub fn expect_peer_nodes(graph: &Graph, base_node: NodeId) -> Vec<PeerNo...
function ensure_peer_nodes (line 19) | pub fn ensure_peer_nodes(graph: &mut Graph, base_node: NodeId) -> Vec<Pe...
function peer_nodes_impl (line 25) | fn peer_nodes_impl(mut graph: impl GraphRef, base_node: NodeId) -> Vec<P...
type PeerNode (line 78) | pub struct PeerNode {
method node (line 87) | pub fn node(&self) -> NodeId {
method peer_to_shared (line 93) | pub fn peer_to_shared(&self) -> impl ExactSizeIterator<Item = Side> + ...
method base_to_shared (line 99) | pub fn base_to_shared(&self) -> impl ExactSizeIterator<Item = Side> + ...
type GraphRef (line 165) | trait GraphRef: AsRef<Graph> {
method depth (line 166) | fn depth(&self, node: NodeId) -> u32;
method neighbor (line 167) | fn neighbor(&mut self, node: NodeId, side: Side) -> NodeId;
method parents (line 168) | fn parents(&self, node: NodeId) -> impl ExactSizeIterator<Item = (Side...
method child (line 171) | fn child(&mut self, node: NodeId, side: Side) -> Option<NodeId> {
method depth (line 193) | fn depth(&self, node: NodeId) -> u32 {
method neighbor (line 197) | fn neighbor(&mut self, node: NodeId, side: Side) -> NodeId {
method parents (line 201) | fn parents(&self, node: NodeId) -> impl ExactSizeIterator<Item = (Side...
method depth (line 212) | fn depth(&self, node: NodeId) -> u32 {
method neighbor (line 216) | fn neighbor(&mut self, node: NodeId, side: Side) -> NodeId {
method parents (line 220) | fn parents(&self, node: NodeId) -> impl ExactSizeIterator<Item = (Side...
type AssertingGraphRef (line 182) | struct AssertingGraphRef<'a> {
function as_ref (line 187) | fn as_ref(&self) -> &Graph {
type ExpandingGraphRef (line 207) | struct ExpandingGraphRef<'a> {
function as_ref (line 226) | fn as_ref(&self) -> &Graph {
function node_from_path (line 238) | fn node_from_path(
function peer_traverser_example (line 251) | fn peer_traverser_example() {
function peer_definition_holds (line 308) | fn peer_definition_holds() {
FILE: common/src/proto.rs
type ClientHello (line 9) | pub struct ClientHello {
type ServerHello (line 14) | pub struct ServerHello {
type Position (line 20) | pub struct Position {
method origin (line 26) | pub fn origin() -> Self {
type StateDelta (line 35) | pub struct StateDelta {
type CharacterState (line 44) | pub struct CharacterState {
type Spawns (line 51) | pub struct Spawns {
type Command (line 63) | pub struct Command {
type CharacterInput (line 70) | pub struct CharacterInput {
type BlockUpdate (line 79) | pub struct BlockUpdate {
type SerializedVoxelData (line 87) | pub struct SerializedVoxelData {
type Component (line 93) | pub enum Component {
type FreshNode (line 101) | pub struct FreshNode {
type Character (line 108) | pub struct Character {
type Inventory (line 114) | pub struct Inventory {
constant CONNECTION_LOST (line 121) | pub const CONNECTION_LOST: VarInt = VarInt::from_u32(0);
constant STREAM_ERROR (line 122) | pub const STREAM_ERROR: VarInt = VarInt::from_u32(1);
constant BAD_CLIENT_COMMAND (line 123) | pub const BAD_CLIENT_COMMAND: VarInt = VarInt::from_u32(2);
constant NAME_CONFLICT (line 124) | pub const NAME_CONFLICT: VarInt = VarInt::from_u32(3);
constant CLIENT_CLOSED_CONNECTION (line 125) | pub const CLIENT_CLOSED_CONNECTION: VarInt = VarInt::from_u32(4);
FILE: common/src/sim_config.rs
type SimConfigRaw (line 10) | pub struct SimConfigRaw {
type SimConfig (line 40) | pub struct SimConfig {
method from_raw (line 61) | pub fn from_raw(x: &SimConfigRaw) -> Self {
function meters_to_absolute (line 82) | fn meters_to_absolute(chunk_size: u8, voxel_size: f32) -> f32 {
type CharacterConfigRaw (line 94) | pub struct CharacterConfigRaw {
type CharacterConfig (line 124) | pub struct CharacterConfig {
method from_raw (line 152) | pub fn from_raw(x: &CharacterConfigRaw, meters_to_absolute: f32) -> Se...
FILE: common/src/traversal.rs
function ensure_nearby (line 15) | pub fn ensure_nearby(graph: &mut Graph, start: &Position, distance: f32) {
function nearby_nodes (line 47) | pub fn nearby_nodes(
type RayTraverser (line 100) | pub struct RayTraverser<'a> {
function new (line 115) | pub fn new(graph: &'a Graph, position: Position, ray: &'a Ray, radius: f...
function next (line 151) | pub fn next(&mut self, tanh_distance: f32) -> Option<(Option<ChunkId>, M...
function traversal_functions_example (line 238) | fn traversal_functions_example() {
FILE: common/src/voxel_math.rs
type CoordAxis (line 9) | pub enum CoordAxis {
method iter (line 22) | pub fn iter() -> impl ExactSizeIterator<Item = Self> {
method other_axes (line 27) | pub fn other_axes(self) -> [Self; 2] {
type Error (line 37) | type Error = CoordAxisOutOfBounds;
method try_from (line 39) | fn try_from(value: usize) -> Result<Self, Self::Error> {
type CoordAxisOutOfBounds (line 18) | pub struct CoordAxisOutOfBounds;
type CoordSign (line 53) | pub enum CoordSign {
method iter (line 60) | pub fn iter() -> impl ExactSizeIterator<Item = Self> {
type Output (line 66) | type Output = CoordSign;
method mul (line 68) | fn mul(self, rhs: Self) -> Self::Output {
method mul_assign (line 77) | fn mul_assign(&mut self, rhs: Self) {
type Coords (line 84) | pub struct Coords(pub [u8; 3]);
method to_index (line 88) | pub fn to_index(self, chunk_size: u8) -> usize {
method boundary_coord (line 96) | pub fn boundary_coord(chunk_size: u8, sign: CoordSign) -> u8 {
type Output (line 105) | type Output = u8;
method index (line 108) | fn index(&self, coord_axis: CoordAxis) -> &u8 {
method index_mut (line 115) | fn index_mut(&mut self, coord_axis: CoordAxis) -> &mut u8 {
type ChunkDirection (line 122) | pub struct ChunkDirection {
constant PLUS_X (line 128) | pub const PLUS_X: Self = ChunkDirection {
constant PLUS_Y (line 132) | pub const PLUS_Y: Self = ChunkDirection {
constant PLUS_Z (line 136) | pub const PLUS_Z: Self = ChunkDirection {
constant MINUS_X (line 140) | pub const MINUS_X: Self = ChunkDirection {
constant MINUS_Y (line 144) | pub const MINUS_Y: Self = ChunkDirection {
constant MINUS_Z (line 148) | pub const MINUS_Z: Self = ChunkDirection {
method iter (line 153) | pub fn iter() -> impl ExactSizeIterator<Item = ChunkDirection> {
type ChunkAxisPermutation (line 171) | pub struct ChunkAxisPermutation {
constant IDENTITY (line 176) | pub const IDENTITY: Self = ChunkAxisPermutation {
method from_permutation (line 183) | pub fn from_permutation(from: [Side; 3], to: [Side; 3]) -> Self {
type Output (line 200) | type Output = CoordAxis;
method index (line 202) | fn index(&self, index: CoordAxis) -> &Self::Output {
type Output (line 208) | type Output = Coords;
method mul (line 210) | fn mul(self, rhs: Coords) -> Self::Output {
type Output (line 220) | type Output = ChunkDirection;
method mul (line 222) | fn mul(self, rhs: ChunkDirection) -> Self::Output {
function coords_to_vector3 (line 236) | fn coords_to_vector3(coords: Coords) -> na::Vector3<i32> {
function coord_axis_to_vector3 (line 244) | fn coord_axis_to_vector3(coord_axis: CoordAxis) -> na::Vector3<i32> {
function chunk_direction_to_vector3 (line 250) | fn chunk_direction_to_vector3(chunk_direction: ChunkDirection) -> na::Ve...
function chunk_axis_permutation_to_matrix3 (line 256) | fn chunk_axis_permutation_to_matrix3(
function get_all_permutations (line 263) | fn get_all_permutations() -> Vec<(usize, usize, usize)> {
function test_chunk_axis_permutation (line 282) | fn test_chunk_axis_permutation() {
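`Coords::to_index(self, chunk_size)` implies a linearization of `(x, y, z)` voxel coordinates into a flat array. A minimal sketch, assuming plain row-major order (whether the real method also offsets for a one-voxel margin around the chunk is an assumption not verified here):

```rust
// Hypothetical sketch of Coords::to_index: row-major linearization.
fn to_index(coords: [u8; 3], chunk_size: u8) -> usize {
    let n = usize::from(chunk_size);
    let [x, y, z] = coords.map(usize::from);
    x + n * (y + n * z)
}

fn main() {
    assert_eq!(to_index([0, 0, 0], 12), 0);
    assert_eq!(to_index([1, 2, 3], 12), 1 + 12 * (2 + 12 * 3));
    // The maximal coordinate triple maps to the last slot of a size^3 array.
    assert_eq!(to_index([11, 11, 11], 12), 12usize.pow(3) - 1);
    println!("ok");
}
```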
FILE: common/src/world.rs
type Material (line 7) | pub enum Material {
constant COUNT (line 52) | pub const COUNT: usize = 40;
constant VALUES (line 54) | pub const VALUES: [Self; Self::COUNT] = [
type Error (line 110) | type Error = MaterialOutOfBounds;
method try_from (line 112) | fn try_from(value: u16) -> Result<Self, Self::Error> {
type MaterialOutOfBounds (line 99) | pub struct MaterialOutOfBounds;
method fmt (line 102) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
function u16_to_material_consistency_check (line 125) | fn u16_to_material_consistency_check() {
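The `Material` entries (`COUNT`, `VALUES: [Self; Self::COUNT]`, `TryFrom<u16>` with a `MaterialOutOfBounds` error) outline a standard pattern for decoding an enum from a wire integer: index into a table of all variants and reject out-of-range values. A sketch of that pattern with only a few placeholder variants (the real crate has 40):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Material { Void, Stone, Dirt }

#[derive(Debug, PartialEq)]
struct MaterialOutOfBounds;

impl Material {
    const COUNT: usize = 3; // 40 in the real crate
    const VALUES: [Self; Self::COUNT] = [Self::Void, Self::Stone, Self::Dirt];
}

impl TryFrom<u16> for Material {
    type Error = MaterialOutOfBounds;
    fn try_from(value: u16) -> Result<Self, Self::Error> {
        // Table lookup keeps the u16 <-> variant mapping in one place,
        // which is what a consistency-check test can then verify.
        Self::VALUES
            .get(usize::from(value))
            .copied()
            .ok_or(MaterialOutOfBounds)
    }
}

fn main() {
    assert_eq!(Material::try_from(1), Ok(Material::Stone));
    assert_eq!(Material::try_from(99), Err(MaterialOutOfBounds));
    println!("ok");
}
```

The `u16_to_material_consistency_check` test in the index presumably round-trips every variant through this table, which is exactly the invariant the `VALUES` array makes easy to state.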
FILE: common/src/worldgen/horosphere.rs
constant HOROSPHERES_ENABLED (line 20) | const HOROSPHERES_ENABLED: bool = true;
constant HOROSPHERE_SEED (line 23) | const HOROSPHERE_SEED: u64 = 6046133366614030452;
type HorosphereNode (line 30) | pub struct HorosphereNode {
method new (line 52) | pub fn new(graph: &Graph, node_id: NodeId) -> Option<HorosphereNode> {
method create_from_parents (line 64) | fn create_from_parents(graph: &Graph, node_id: NodeId) -> Option<Horos...
method maybe_create_fresh (line 97) | fn maybe_create_fresh(graph: &Graph, node_id: NodeId) -> Option<Horosp...
method tighten_region_bounds (line 126) | fn tighten_region_bounds(&mut self) {
method can_tighten_region_bound (line 136) | fn can_tighten_region_bound(&self, side: Side) -> bool {
method propagate (line 143) | fn propagate(&self, side: Side) -> Option<HorosphereNode> {
method average_with (line 157) | fn average_with(&mut self, other: HorosphereNode, other_weight: f32) {
method is_fresh (line 171) | fn is_fresh(&self, node_id: NodeId) -> bool {
method has_priority (line 177) | fn has_priority(&self, other: &HorosphereNode, node_id: NodeId) -> bool {
method should_generate (line 186) | pub fn should_generate(&self, graph: &Graph, node_id: NodeId) -> bool {
function is_horosphere_valid (line 217) | fn is_horosphere_valid(graph: &Graph, node_id: NodeId, horosphere: &Horo...
constant MAX_OWNED_HOROSPHERE_W (line 225) | const MAX_OWNED_HOROSPHERE_W: f32 = 5.9047837;
type HorosphereChunk (line 228) | pub struct HorosphereChunk {
method new (line 235) | pub fn new(horosphere_node: &HorosphereNode, vertex: Vertex) -> Self {
method generate (line 242) | pub fn generate(&self, voxels: &mut VoxelData, chunk_size: u8) {
type Horosphere (line 265) | pub struct Horosphere {
method contains_point (line 291) | pub fn contains_point(&self, point: &MPoint<f32>) -> bool {
method is_inside_half_space (line 296) | pub fn is_inside_half_space(&self, normal: &MDirection<f32>) -> bool {
method intersects_half_space (line 301) | pub fn intersects_half_space(&self, normal: &MDirection<f32>) -> bool {
method renormalize (line 307) | pub fn renormalize(&mut self) {
method new_random (line 313) | pub fn new_random(rng: &mut Pcg64Mcg, max_w: f32) -> Self {
type Output (line 338) | type Output = Horosphere;
function mul (line 340) | fn mul(self, rhs: Horosphere) -> Self::Output {
type NodeBoundedRegion (line 349) | struct NodeBoundedRegion {
method unbounded (line 357) | fn unbounded() -> Self {
method node_and_descendents (line 362) | fn node_and_descendents(graph: &Graph, node_id: NodeId) -> Self {
method intersect (line 371) | fn intersect(self, other: NodeBoundedRegion) -> NodeBoundedRegion {
method neighbor (line 381) | fn neighbor(self, neighbor_side: Side) -> NodeBoundedRegion {
method contains_node (line 403) | fn contains_node(self, path: impl Iterator<Item = Side>) -> bool {
method is_bounded_by (line 415) | fn is_bounded_by(self, side: Side) -> bool {
method add_bound (line 420) | fn add_bound(&mut self, side: Side) {
function test_max_owned_horosphere_w (line 432) | fn test_max_owned_horosphere_w() {
function node_bounded_region_intersect_example (line 458) | fn node_bounded_region_intersect_example() {
function node_bounded_region_neighbor_example (line 478) | fn node_bounded_region_neighbor_example() {
function node_bounded_region_node_and_descendents_example (line 502) | fn node_bounded_region_node_and_descendents_example() {
function node_bounded_region_contains_node_example (line 517) | fn node_bounded_region_contains_node_example() {
FILE: common/src/worldgen/mod.rs
type NodeStateKind (line 21) | enum NodeStateKind {
constant ROOT (line 30) | const ROOT: Self = Land;
method child (line 33) | fn child(self, side: Side) -> Self {
type NodeStateRoad (line 45) | enum NodeStateRoad {
constant ROOT (line 56) | const ROOT: Self = West;
method child (line 59) | fn child(self, side: Side) -> Self {
type PartialNodeState (line 72) | pub struct PartialNodeState {
method new (line 79) | pub fn new(graph: &Graph, node: NodeId) -> Self {
type NodeState (line 90) | pub struct NodeState {
method new (line 98) | pub fn new(graph: &Graph, node: NodeId) -> Self {
method up_direction (line 155) | pub fn up_direction(&self) -> MVector<f32> {
type ParentInfo (line 161) | struct ParentInfo<'a> {
type VoxelCoords (line 167) | struct VoxelCoords {
method new (line 173) | fn new(dimension: u8) -> Self {
type Item (line 182) | type Item = (u8, u8, u8);
method next (line 184) | fn next(&mut self) -> Option<Self::Item> {
type ChunkParams (line 203) | pub struct ChunkParams {
method new (line 224) | pub fn new(graph: &mut Graph, chunk: ChunkId) -> Self {
method chunk (line 245) | pub fn chunk(&self) -> Vertex {
method generate_voxels (line 250) | pub fn generate_voxels(&self) -> VoxelData {
method generate_terrain (line 276) | fn generate_terrain(&self, voxels: &mut VoxelData, rng: &mut Pcg64Mcg) {
method generate_road (line 356) | fn generate_road(&self, voxels: &mut VoxelData) {
method generate_road_support (line 386) | fn generate_road_support(&self, voxels: &mut VoxelData) {
method trussing_at (line 417) | fn trussing_at(&self, coords: na::Vector3<u8>) -> bool {
method generate_trees (line 441) | fn generate_trees(&self, voxels: &mut VoxelData, rng: &mut Pcg64Mcg) {
method voxel_neighbors (line 488) | fn voxel_neighbors(&self, coords: na::Vector3<u8>, voxels: &VoxelData)...
method neighbor (line 499) | fn neighbor(
constant TERRAIN_SMOOTHNESS (line 526) | const TERRAIN_SMOOTHNESS: f32 = 10.0;
type NeighborData (line 528) | struct NeighborData {
type EnviroFactors (line 534) | struct EnviroFactors {
method varied_from (line 541) | fn varied_from(parent: Self, spice: u64) -> Self {
method continue_from (line 553) | fn continue_from(a: Self, b: Self, ab: Self) -> Self {
function from (line 563) | fn from(envirofactors: EnviroFactors) -> Self {
type ChunkIncidentEnviroFactors (line 572) | struct ChunkIncidentEnviroFactors {
function chunk_incident_enviro_factors (line 583) | fn chunk_incident_enviro_factors(graph: &mut Graph, chunk: ChunkId) -> C...
function trilerp (line 610) | fn trilerp<N: na::RealField + Copy>(
function serp (line 632) | fn serp<N: na::RealField + Copy>(v0: N, v1: N, t: N, threshold: N) -> N {
function terracing_diff (line 648) | fn terracing_diff(elev_raw: f32, block: f32, scale: f32, strength: f32, ...
function voxel_center (line 656) | fn voxel_center(dimension: u8, voxel: na::Vector3<u8>) -> na::Vector3<f3...
function index (line 660) | fn index(dimension: u8, v: na::Vector3<u8>) -> usize {
function hash (line 668) | fn hash(a: u64, b: u64) -> u64 {
constant CHUNK_SIZE (line 680) | const CHUNK_SIZE: u8 = 12;
function chunk_indexing_origin (line 683) | fn chunk_indexing_origin() {
function chunk_indexing_absolute (line 692) | fn chunk_indexing_absolute() {
function check_chunk_incident_max_elevations (line 738) | fn check_chunk_incident_max_elevations() {
function check_trilerp (line 781) | fn check_trilerp() {
function check_voxel_iterable (line 875) | fn check_voxel_iterable() {
FILE: common/src/worldgen/plane.rs
type Plane (line 12) | pub struct Plane {
method from (line 19) | fn from(side: Side) -> Self {
method from (line 25) | fn from(normal: MDirection<f32>) -> Self {
method from (line 35) | fn from(x: na::UnitVector3<f32>) -> Self {
method scaled_normal (line 72) | pub fn scaled_normal(&self) -> &MVector<f32> {
method distance_to (line 77) | pub fn distance_to(&self, point: &MPoint<f32>) -> f32 {
method distance_to_chunk (line 87) | pub fn distance_to_chunk(&self, chunk: Vertex, coord: &na::Vector3<f32...
method update_exponent (line 92) | fn update_exponent(mut self) -> Self {
type Output (line 41) | type Output = Self;
method neg (line 42) | fn neg(self) -> Self {
type Output (line 51) | type Output = Plane;
method mul (line 53) | fn mul(self, rhs: Plane) -> Plane {
type Output (line 59) | type Output = Plane;
function mul (line 60) | fn mul(self, rhs: Plane) -> Plane {
function distance_sanity (line 119) | fn distance_sanity() {
function check_surface_flipped (line 140) | fn check_surface_flipped() {
function check_surface_on_plane (line 150) | fn check_surface_on_plane() {
function check_elevation_consistency (line 162) | fn check_elevation_consistency() {
function large_distances (line 199) | fn large_distances() {
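`Plane::distance_to(&self, point)` together with `scaled_normal` points at the standard hyperboloid-model formula for signed distance to a geodesic plane. A hedged sketch of that formula (whether the crate applies it exactly this way is an assumption): for a unit spacelike normal `n` with `mip(n, n) = 1` and a point `p` on the hyperboloid, the signed distance is `asinh(mip(p, n))`:

```rust
// Minkowski inner product, signature (+, +, +, -).
fn mip(a: [f64; 4], b: [f64; 4]) -> f64 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2] - a[3] * b[3]
}

/// Signed distance from a hyperboloid point to the plane with the
/// given unit spacelike normal: asinh(mip(p, n)).
fn distance_to_plane(point: [f64; 4], unit_normal: [f64; 4]) -> f64 {
    mip(point, unit_normal).asinh()
}

fn main() {
    let normal = [1.0, 0.0, 0.0, 0.0]; // plane x = 0 through the origin
    let origin = [0.0, 0.0, 0.0, 1.0];
    assert!(distance_to_plane(origin, normal).abs() < 1e-12);

    // A point translated distance t along the normal: (sinh t, 0, 0, cosh t).
    let t = 0.75_f64;
    let p = [t.sinh(), 0.0, 0.0, t.cosh()];
    assert!((distance_to_plane(p, normal) - t).abs() < 1e-9);
    println!("ok");
}
```

The sign of the result distinguishes the two half-spaces, which would explain the index's `Neg` impl for `Plane` (flipping the normal negates distances).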
FILE: common/src/worldgen/terraingen.rs
constant GENERAL_HIGH (line 3) | const GENERAL_HIGH: [VoronoiInfo; 113] = [
constant GENERAL_MED (line 119) | const GENERAL_MED: [VoronoiInfo; 113] = [
constant GENERAL_LOW (line 235) | const GENERAL_LOW: [VoronoiInfo; 113] = [
constant GENERAL_DEEP (line 351) | const GENERAL_DEEP: [VoronoiInfo; 113] = [
constant SURFACE_HIGH (line 467) | const SURFACE_HIGH: [VoronoiInfo; 113] = [
constant SURFACE_MED (line 583) | const SURFACE_MED: [VoronoiInfo; 113] = [
constant SURFACE_LOW (line 699) | const SURFACE_LOW: [VoronoiInfo; 113] = [
constant SURFACE_DEEP (line 815) | const SURFACE_DEEP: [VoronoiInfo; 113] = [
constant TERRAIN_SURFACE_THICKNESS (line 931) | const TERRAIN_SURFACE_THICKNESS: f32 = 0.2;
type VoronoiInfo (line 933) | pub struct VoronoiInfo {
method new (line 938) | pub const fn new(mat: Material, rain: f32, temp: f32) -> VoronoiInfo {
method terraingen_voronoi (line 955) | pub fn terraingen_voronoi(elev: f32, rain: f32, temp: f32, dist: f32) ...
FILE: save/benches/bench.rs
function save (line 8) | fn save(c: &mut Criterion) {
FILE: save/gen-protos/src/main.rs
function main (line 3) | fn main() -> Result<()> {
FILE: save/src/lib.rs
type Save (line 11) | pub struct Save {
method open (line 17) | pub fn open(path: &Path, default_chunk_size: u8) -> Result<Self, OpenE...
method meta (line 47) | pub fn meta(&self) -> &Meta {
method read (line 51) | pub fn read(&self) -> Result<Reader, DbError> {
method write (line 62) | pub fn write(&self) -> Result<WriterGuard, DbError> {
function init_meta_table (line 68) | fn init_meta_table(db: &Database, value: &Meta) -> Result<(), DbError> {
function dctx (line 85) | fn dctx() -> zstd::DCtx<'static> {
type Reader (line 92) | pub struct Reader {
method get_voxel_node (line 101) | pub fn get_voxel_node(&mut self, node_id: u128) -> Result<Option<Voxel...
method get_entity_node (line 111) | pub fn get_entity_node(&mut self, node_id: u128) -> Result<Option<Enti...
method get_character (line 121) | pub fn get_character(&mut self, name: &str) -> Result<Option<Character...
method get_all_voxel_node_ids (line 134) | pub fn get_all_voxel_node_ids(&self) -> Result<Vec<u128>, GetError> {
method get_all_entity_node_ids (line 144) | pub fn get_all_entity_node_ids(&self) -> Result<Vec<u128>, GetError> {
function decompress (line 152) | fn decompress(
type WriterGuard (line 174) | pub struct WriterGuard {
method get (line 179) | pub fn get(&mut self) -> Result<Writer<'_>, DbError> {
method commit (line 199) | pub fn commit(self) -> Result<(), DbError> {
function cctx (line 205) | fn cctx() -> zstd::CCtx<'static> {
type Writer (line 214) | pub struct Writer<'guard> {
function put_voxel_node (line 224) | pub fn put_voxel_node(&mut self, node_id: u128, state: &VoxelNode) -> Re...
function put_entity_node (line 230) | pub fn put_entity_node(&mut self, node_id: u128, state: &EntityNode) -> ...
function put_character (line 236) | pub fn put_character(&mut self, name: &str, character: &Character) -> Re...
function prepare (line 249) | fn prepare<T: prost::Message>(
constant META_TABLE (line 264) | const META_TABLE: TableDefinition<&[u8], &[u8]> = TableDefinition::new("...
constant VOXEL_NODE_TABLE (line 265) | const VOXEL_NODE_TABLE: TableDefinition<u128, &[u8]> = TableDefinition::...
constant ENTITY_NODE_TABLE (line 266) | const ENTITY_NODE_TABLE: TableDefinition<u128, &[u8]> = TableDefinition:...
constant CHARACTERS_BY_NAME_TABLE (line 267) | const CHARACTERS_BY_NAME_TABLE: TableDefinition<&str, &[u8]> =
type OpenError (line 271) | pub enum OpenError {
method from (line 283) | fn from(x: redb::Error) -> Self {
type GetError (line 289) | pub enum GetError {
method from (line 299) | fn from(x: redb::Error) -> Self {
method from (line 305) | fn from(x: redb::StorageError) -> Self {
type DbError (line 312) | pub struct DbError(Box<redb::Error>);
method from (line 315) | fn from(x: redb::Error) -> Self {
method from (line 321) | fn from(x: redb::StorageError) -> Self {
method from (line 327) | fn from(x: redb::CommitError) -> Self {
method from (line 333) | fn from(x: redb::TableError) -> Self {
FILE: save/src/protos.rs
type Meta (line 3) | pub struct Meta {
type Character (line 9) | pub struct Character {
type EntityNode (line 15) | pub struct EntityNode {
type VoxelNode (line 22) | pub struct VoxelNode {
type Chunk (line 28) | pub struct Chunk {
type ComponentType (line 38) | pub enum ComponentType {
method as_str_name (line 53) | pub fn as_str_name(&self) -> &'static str {
method from_str_name (line 62) | pub fn from_str_name(value: &str) -> ::core::option::Option<Self> {
FILE: save/tests/heavy.rs
function write (line 8) | fn write() {
FILE: save/tests/tests.rs
function persist_meta (line 6) | fn persist_meta() {
function persist_node (line 16) | fn persist_node() {
function persist_character (line 39) | fn persist_character() {
FILE: server/src/config.rs
type Config (line 14) | pub struct Config {
method load (line 25) | pub fn load(path: &Path) -> Result<Self> {
method default (line 35) | fn default() -> Self {
FILE: server/src/input_queue.rs
type InputQueue (line 23) | pub struct InputQueue {
method new (line 30) | pub fn new() -> Self {
method push (line 37) | pub fn push(&mut self, input: Command, now: Instant) {
method pop (line 49) | pub fn pop(&mut self, now: Instant, delay: Duration) -> Option<Command> {
FILE: server/src/lib.rs
type NetParams (line 25) | pub struct NetParams {
type Server (line 31) | pub struct Server {
method new (line 45) | pub fn new(net: Option<NetParams>, mut cfg: SimConfig, save: Save) -> ...
method connect (line 81) | pub fn connect(&mut self, hello: proto::ClientHello, mut backend: Hand...
method run (line 135) | pub async fn run(mut self) {
method handle_incoming (line 148) | fn handle_incoming(&self) -> mpsc::Receiver<quinn::Connection> {
method on_step (line 172) | fn on_step(&mut self) {
method on_client_event (line 227) | fn on_client_event(&mut self, client_id: ClientId, event: ClientEvent) {
method cleanup_client (line 250) | fn cleanup_client(&mut self, client: ClientId) {
method on_connect (line 256) | fn on_connect(&mut self, connection: quinn::Connection) {
method on_client (line 273) | fn on_client(&mut self, connection: quinn::Connection, hello: proto::C...
constant MAX_CLIENT_MSG_SIZE (line 318) | const MAX_CLIENT_MSG_SIZE: usize = 1 << 16;
function drive_recv (line 320) | async fn drive_recv(
function drive_send (line 352) | async fn drive_send(
function drive_send_unordered (line 373) | async fn drive_send_unordered(
type Client (line 388) | struct Client {
type ClientEvent (line 398) | enum ClientEvent {
type Unordered (line 403) | type Unordered = proto::StateDelta;
type Ordered (line 405) | type Ordered = Arc<proto::Spawns>;
type Handle (line 408) | pub struct Handle {
method loopback (line 414) | pub fn loopback() -> (Self, HandleBackend) {
type HandleBackend (line 430) | pub struct HandleBackend {
type Message (line 436) | pub enum Message {
FILE: server/src/main.rs
function main (line 15) | fn main() {
function run (line 26) | pub async fn run() -> Result<()> {
FILE: server/src/postcard_helpers.rs
function serialize (line 6) | pub fn serialize<T: serde::Serialize + ?Sized>(value: &T, vec: &mut Vec<...
type ExtendVec (line 10) | struct ExtendVec<'a>(&'a mut Vec<u8>);
type Output (line 13) | type Output = ();
function try_push (line 15) | fn try_push(&mut self, data: u8) -> Result<()> {
function finalize (line 20) | fn finalize(self) -> Result<()> {
function try_extend (line 24) | fn try_extend(&mut self, data: &[u8]) -> Result<()> {
type SaveEntity (line 31) | pub struct SaveEntity {
FILE: server/src/sim.rs
type Sim (line 32) | pub struct Sim {
method new (line 53) | pub fn new(cfg: Arc<SimConfig>, save: &save::Save) -> Self {
method save (line 81) | pub fn save(&mut self, save: &mut save::Save) -> Result<(), save::DbEr...
method load_all_entities (line 121) | fn load_all_entities(&mut self, save: &save::Save) -> anyhow::Result<(...
method load_entity (line 136) | fn load_entity(
method load_component (line 161) | fn load_component(
method load_all_voxels (line 236) | fn load_all_voxels(&mut self, save: &save::Save) -> anyhow::Result<()> {
method snapshot_node (line 259) | fn snapshot_node(&self, node: NodeId) -> save::EntityNode {
method snapshot_voxel_node (line 315) | fn snapshot_voxel_node(&self, node: NodeId) -> save::VoxelNode {
method activate_or_spawn_character (line 335) | pub fn activate_or_spawn_character(
method deactivate_character (line 384) | pub fn deactivate_character(&mut self, entity: Entity) {
method spawn (line 409) | fn spawn(&mut self, bundle: impl DynamicBundle) -> (EntityId, Entity) {
method command (line 434) | pub fn command(
method destroy (line 446) | pub fn destroy(&mut self, entity: Entity) {
method snapshot (line 459) | pub fn snapshot(&self) -> Spawns {
method step (line 498) | pub fn step(&mut self) -> (Option<Spawns>, StateDelta) {
method update_entity_node_ids (line 599) | fn update_entity_node_ids(&mut self) {
method new_id (line 643) | fn new_id(&mut self) -> EntityId {
method add_to_inventory (line 653) | fn add_to_inventory(&mut self, inventory_id: EntityId, entity_id: Enti...
method remove_from_inventory (line 666) | fn remove_from_inventory(&mut self, inventory_id: EntityId, entity_id:...
method attempt_block_update (line 683) | fn attempt_block_update(&mut self, subject: EntityId, block_update: Bl...
function dump_entity (line 732) | fn dump_entity(world: &hecs::World, entity: Entity) -> Vec<Component> {
type InactiveCharacter (line 754) | struct InactiveCharacter(pub Character);
type AccumulatedChanges (line 758) | struct AccumulatedChanges {
method is_empty (line 781) | fn is_empty(&self) -> bool {
method into_spawns (line 791) | fn into_spawns(self, step: Step, world: &hecs::World, graph: &Graph) -...
Condensed preview — 102 files, each showing path, character count, and a content snippet.
[
{
"path": ".github/dependabot.yml",
"chars": 368,
"preview": "version: 2\nupdates:\n- package-ecosystem: cargo\n directory: \"/\"\n schedule:\n interval: daily\n open-pull-requests-lim"
},
{
"path": ".github/workflows/package.yml",
"chars": 2350,
"preview": "name: Package\n\non:\n push:\n branches: ['master']\n\njobs:\n package-windows:\n name: Windows\n runs-on: windows-lat"
},
{
"path": ".github/workflows/rust.yml",
"chars": 1902,
"preview": "name: CI\n\non:\n push:\n branches: ['master']\n paths-ignore:\n - 'docs/**'\n pull_request:\n paths-ignore:\n "
},
{
"path": ".gitignore",
"chars": 122,
"preview": "/target\n**/*.rs.bk\n\n# IDEA workspace stuff\n/*.iml\n/.idea\n\n/.vscode/launch.json\n/.vscode/tasks.json\n\n/tarpaulin-report.ht"
},
{
"path": "Cargo.toml",
"chars": 537,
"preview": "[workspace]\nresolver = \"2\"\nmembers = [\"client\", \"server\", \"common\", \"save\", \"save/gen-protos\"]\n\n[workspace.dependencies]"
},
{
"path": "LICENSE-APACHE",
"chars": 11323,
"preview": "Apache License\n Version 2.0, January 2004\n http://www.apache.org/licens"
},
{
"path": "LICENSE-ZLIB",
"chars": 858,
"preview": "Copyright (c) 2020 Benjamin Saunders\n\nThis software is provided 'as-is', without any express or implied warranty. In\nno "
},
{
"path": "README.md",
"chars": 626,
"preview": "## Installation\n\nSee the [wiki](https://github.com/Ralith/hypermine/wiki) for instructions on how to build and run\n\n\n## "
},
{
"path": "assets/.gitattributes",
"chars": 127,
"preview": "*.png filter=lfs diff=lfs merge=lfs -text\n*.glb filter=lfs diff=lfs merge=lfs -text\n*.gltf filter=lfs diff=lfs merge=lfs"
},
{
"path": "assets/character.glb",
"chars": 131,
"preview": "version https://git-lfs.github.com/spec/v1\noid sha256:ee33342c11e0746b106031b2765ee86a0f1890b72fec7c0219815ae98f414e16\ns"
},
{
"path": "client/Cargo.toml",
"chars": 1513,
"preview": "[package]\nname = \"client\"\nversion = \"0.1.0\"\nauthors = [\"Benjamin Saunders <ben.e.saunders@gmail.com>\"]\nedition = \"2024\"\n"
},
{
"path": "client/benches/surface_extraction.rs",
"chars": 2175,
"preview": "use std::sync::Arc;\n\nuse ash::vk;\nuse bencher::{Bencher, benchmark_group, benchmark_main};\n\nuse client::graphics::{\n "
},
{
"path": "client/shaders/common.h",
"chars": 330,
"preview": "#ifndef COMMON_H\n#define COMMON_H\n\nconst float PI = 3.14159265;\nconst float INFINITY = 1.0 / 0.0;\n\nlayout(set = 0, bindi"
},
{
"path": "client/shaders/fog.frag",
"chars": 748,
"preview": "#version 450\n\n#include \"common.h\"\n\nlayout(location = 0) in vec2 texcoords;\n\nlayout(location = 0) out vec4 fog;\n\nlayout(i"
},
{
"path": "client/shaders/fullscreen.vert",
"chars": 205,
"preview": "#version 450\n\nlayout (location = 0) out vec2 texcoords;\n\nvoid main() {\n texcoords = vec2((gl_VertexIndex << 1) & 2, "
},
{
"path": "client/shaders/mesh.frag",
"chars": 248,
"preview": "#version 450\n\nlayout(location = 0) in vec2 texcoords;\nlayout(location = 1) in vec4 normal;\n\nlayout(location = 0) out vec"
},
{
"path": "client/shaders/mesh.vert",
"chars": 470,
"preview": "#version 450\n\n#include \"common.h\"\n\nlayout(location = 0) in vec3 position;\nlayout(location = 1) in vec2 texcoords;\nlayout"
},
{
"path": "client/shaders/surface-extraction/extract.comp",
"chars": 5464,
"preview": "#version 450\n\n#extension GL_KHR_shader_subgroup_ballot: enable\n#extension GL_KHR_shader_subgroup_arithmetic: enable\n\n#in"
},
{
"path": "client/shaders/surface-extraction/surface.h",
"chars": 1731,
"preview": "#ifndef SURFACE_EXTRACTION_SURFACE_H_\n#define SURFACE_EXTRACTION_SURFACE_H_\n\n// A face between a voxel and its neighbor "
},
{
"path": "client/shaders/voxels.frag",
"chars": 266,
"preview": "#version 450\n\nlayout(location = 0) in vec3 texcoords;\nlayout(location = 1) in float occlusion;\nlayout(location = 0) out "
},
{
"path": "client/shaders/voxels.vert",
"chars": 2512,
"preview": "#version 460\n\n#include \"common.h\"\n#include \"surface-extraction/surface.h\"\n\n// Maps from cube space ([0..1]^3) to local n"
},
{
"path": "client/src/config.rs",
"chars": 3487,
"preview": "use std::{\n env, fs, io,\n net::SocketAddr,\n path::{Path, PathBuf},\n sync::Arc,\n};\n\nuse serde::Deserialize;\nu"
},
{
"path": "client/src/graphics/base.rs",
"chars": 15163,
"preview": "//! Common state shared throughout the graphics system\n\nuse ash::ext::debug_utils;\nuse common::Anonymize;\nuse std::ffi::"
},
{
"path": "client/src/graphics/core.rs",
"chars": 7602,
"preview": "use std::ffi::CStr;\nuse std::os::raw::c_char;\nuse std::os::raw::c_void;\nuse std::ptr;\nuse std::slice;\n\nuse ash::ext::deb"
},
{
"path": "client/src/graphics/draw.rs",
"chars": 25077,
"preview": "use std::sync::Arc;\nuse std::time::Instant;\n\nuse ash::vk;\nuse common::traversal;\nuse lahar::Staged;\nuse metrics::histogr"
},
{
"path": "client/src/graphics/fog.rs",
"chars": 6633,
"preview": "use ash::{Device, vk};\nuse vk_shader_macros::include_glsl;\n\nuse super::Base;\nuse common::defer;\n\nconst VERT: &[u32] = in"
},
{
"path": "client/src/graphics/frustum.rs",
"chars": 4342,
"preview": "use common::math::{MDirection, MPoint};\n\n#[derive(Debug, Copy, Clone)]\npub struct Frustum {\n pub left: f32,\n pub r"
},
{
"path": "client/src/graphics/gltf_mesh.rs",
"chars": 19955,
"preview": "use std::{\n borrow::Cow,\n fs::{self, File},\n io::Cursor,\n mem,\n path::{Path, PathBuf},\n ptr,\n};\n\nuse a"
},
{
"path": "client/src/graphics/gui.rs",
"chars": 1423,
"preview": "use yakui::{\n Alignment, Color, align, colored_box, colored_box_container, label, pad, widgets::Pad,\n};\n\nuse crate::S"
},
{
"path": "client/src/graphics/meshes.rs",
"chars": 10105,
"preview": "use std::mem;\n\nuse ash::{Device, vk};\nuse lahar::{BufferRegionAlloc, DedicatedImage};\nuse memoffset::offset_of;\nuse vk_s"
},
{
"path": "client/src/graphics/mod.rs",
"chars": 839,
"preview": "#![allow(clippy::missing_safety_doc)] // Vulkan wrangling is categorically unsafe\n\nmod base;\nmod core;\nmod draw;\nmod fog"
},
{
"path": "client/src/graphics/png_array.rs",
"chars": 7987,
"preview": "use std::{\n fs::{self, File},\n io::BufReader,\n path::PathBuf,\n};\n\nuse anyhow::{Context, anyhow, bail};\nuse ash:"
},
{
"path": "client/src/graphics/tests.rs",
"chars": 119,
"preview": "use super::Base;\n\n#[test]\n#[ignore]\nfn init_base() {\n let _guard = common::tracing_guard();\n Base::headless();\n}\n"
},
{
"path": "client/src/graphics/voxels/mod.rs",
"chars": 9176,
"preview": "mod surface;\npub mod surface_extraction;\n\n#[cfg(test)]\nmod tests;\n\nuse std::{sync::Arc, time::Instant};\n\nuse ash::{Devic"
},
{
"path": "client/src/graphics/voxels/surface.rs",
"chars": 15784,
"preview": "use ash::{Device, vk};\nuse lahar::{DedicatedImage, DedicatedMapping};\nuse vk_shader_macros::include_glsl;\n\nuse super::su"
},
{
"path": "client/src/graphics/voxels/surface_extraction.rs",
"chars": 25676,
"preview": "use std::ffi::c_char;\nuse std::mem;\n\nuse ash::{Device, vk};\nuse lahar::{DedicatedBuffer, DedicatedMapping};\nuse vk_shade"
},
{
"path": "client/src/graphics/voxels/tests.rs",
"chars": 6274,
"preview": "use std::{mem, sync::Arc};\n\nuse ash::vk;\nuse lahar::DedicatedMapping;\nuse renderdoc::{RenderDoc, V110};\n\nuse super::{Sur"
},
{
"path": "client/src/graphics/window.rs",
"chars": 26801,
"preview": "use std::sync::Arc;\nuse std::time::Instant;\nuse std::{f32, os::raw::c_char};\n\nuse ash::{khr, vk};\nuse lahar::DedicatedIm"
},
{
"path": "client/src/lahar_deprecated/condition.rs",
"chars": 1247,
"preview": "use std::task::{Context, Waker};\n\n/// Manages tasks waiting on a single condition\npub struct Condition {\n wakers: Vec"
},
{
"path": "client/src/lahar_deprecated/mod.rs",
"chars": 412,
"preview": "//! This code is directly copied from https://github.com/Ralith/lahar/tree/fbc889a4538e2d3b6b519a6cb7a3538d7b3bfcdf\n//! "
},
{
"path": "client/src/lahar_deprecated/ring_alloc.rs",
"chars": 3457,
"preview": "use std::collections::VecDeque;\n\n/// State tracker for a ring buffer of contiguous variable-sized allocations with rando"
},
{
"path": "client/src/lahar_deprecated/staging.rs",
"chars": 3617,
"preview": "use std::future::Future;\nuse std::ops::{Deref, DerefMut};\nuse std::sync::{Arc, Mutex};\nuse std::task::Poll;\n\nuse ash::{D"
},
{
"path": "client/src/lahar_deprecated/transfer.rs",
"chars": 8864,
"preview": "use std::convert::TryFrom;\nuse std::fmt;\nuse std::future::Future;\nuse std::sync::Arc;\nuse std::thread;\nuse std::time::Du"
},
{
"path": "client/src/lib.rs",
"chars": 566,
"preview": "#![allow(clippy::new_without_default)]\n#![allow(clippy::needless_borrowed_reference)]\n\nmacro_rules! cstr {\n ($x:liter"
},
{
"path": "client/src/loader.rs",
"chars": 10549,
"preview": "use std::{\n any::{Any, TypeId},\n convert::TryFrom,\n marker::PhantomData,\n sync::{Arc, Mutex},\n};\n\nuse anyhow"
},
{
"path": "client/src/local_character_controller.rs",
"chars": 13307,
"preview": "use common::{math, math::MIsometry, proto::Position};\n\npub struct LocalCharacterController {\n /// The last extrapolat"
},
{
"path": "client/src/main.rs",
"chars": 4710,
"preview": "use std::{sync::Arc, thread};\n\nuse client::{Config, graphics, metrics, net};\nuse common::{Anonymize, proto};\nuse save::S"
},
{
"path": "client/src/metrics.rs",
"chars": 4082,
"preview": "use std::{\n collections::HashMap,\n sync::{Arc, Mutex, OnceLock, RwLock},\n time::Duration,\n};\n\nuse hdrhistogram:"
},
{
"path": "client/src/net.rs",
"chars": 6173,
"preview": "use std::{sync::Arc, thread};\n\nuse anyhow::{Result, anyhow};\nuse quinn::rustls;\nuse tokio::sync::mpsc;\n\nuse common::{\n "
},
{
"path": "client/src/prediction.rs",
"chars": 4900,
"preview": "use std::collections::VecDeque;\n\nuse common::{\n SimConfig, character_controller,\n graph::Graph,\n proto::{Charac"
},
{
"path": "client/src/sim.rs",
"chars": 22369,
"preview": "use std::time::Duration;\n\nuse fxhash::FxHashMap;\nuse hecs::Entity;\nuse tracing::{debug, error, trace};\n\nuse crate::{\n "
},
{
"path": "client/src/worldgen_driver.rs",
"chars": 6514,
"preview": "use std::time::Instant;\n\nuse common::{\n dodeca::{self, Vertex},\n graph::{Graph, NodeId},\n math::MPoint,\n nod"
},
{
"path": "common/Cargo.toml",
"chars": 964,
"preview": "[package]\nname = \"common\"\nversion = \"0.1.0\"\nauthors = [\"Benjamin Saunders <ben.e.saunders@gmail.com>\"]\nedition = \"2024\"\n"
},
{
"path": "common/benches/bench.rs",
"chars": 1996,
"preview": "use criterion::{Criterion, criterion_group, criterion_main};\n\nuse common::{\n dodeca::{Side, Vertex},\n graph::{Grap"
},
{
"path": "common/src/character_controller/collision.rs",
"chars": 4211,
"preview": "//! This module is used to encapsulate character collision checking for the character controller\n\nuse tracing::error;\n\nu"
},
{
"path": "common/src/character_controller/mod.rs",
"chars": 13728,
"preview": "mod collision;\nmod vector_bounds;\n\nuse std::mem::replace;\n\nuse tracing::warn;\n\nuse crate::{\n SimConfig,\n character"
},
{
"path": "common/src/character_controller/vector_bounds.rs",
"chars": 14873,
"preview": "//! This module is used to transform vectors to ensure that they fit constraints discovered during collision checking.\n\n"
},
{
"path": "common/src/chunk_collision.rs",
"chars": 20998,
"preview": "use crate::{\n collision_math::Ray,\n math::{MVector, PermuteXYZ},\n node::{ChunkLayout, VoxelAABB, VoxelData},\n "
},
{
"path": "common/src/chunk_ray_casting.rs",
"chars": 10598,
"preview": "use crate::{\n collision_math::Ray,\n math::{MVector, PermuteXYZ},\n node::{ChunkLayout, VoxelAABB, VoxelData},\n "
},
{
"path": "common/src/chunks.rs",
"chars": 603,
"preview": "use std::ops::{Index, IndexMut};\n\nuse crate::dodeca::Vertex;\n\n/// A table of chunks contained by a single node\n///\n/// E"
},
{
"path": "common/src/codec.rs",
"chars": 1737,
"preview": "use anyhow::{Result, bail};\nuse serde::{Serialize, de::DeserializeOwned};\n\npub async fn send<T: Serialize + ?Sized>(stre"
},
{
"path": "common/src/collision_math.rs",
"chars": 15885,
"preview": "use crate::math::{MDirection, MIsometry, MPoint, MVector};\n\n/// A ray in hyperbolic space. The position and direction mu"
},
{
"path": "common/src/cursor.rs",
"chars": 6233,
"preview": "use std::sync::LazyLock;\n\nuse crate::dodeca::{SIDE_COUNT, Side, Vertex};\nuse crate::graph::{Graph, NodeId};\nuse crate::n"
},
{
"path": "common/src/dodeca.rs",
"chars": 28848,
"preview": "//! Tools for processing the geometry of a right dodecahedron\n\nuse serde::{Deserialize, Serialize};\n\nuse crate::math::{M"
},
{
"path": "common/src/graph.rs",
"chars": 16064,
"preview": "#![allow(clippy::len_without_is_empty)]\n\nuse std::collections::VecDeque;\n\nuse blake3::Hasher;\nuse fxhash::{FxHashMap, Fx"
},
{
"path": "common/src/graph_collision.rs",
"chars": 17349,
"preview": "use crate::{\n chunk_collision::chunk_sphere_cast,\n collision_math::Ray,\n graph::Graph,\n math::MVector,\n n"
},
{
"path": "common/src/graph_entities.rs",
"chars": 985,
"preview": "use fxhash::FxHashMap;\nuse hecs::Entity;\n\nuse crate::graph::NodeId;\n\n#[derive(Default)]\npub struct GraphEntities {\n m"
},
{
"path": "common/src/graph_ray_casting.rs",
"chars": 2921,
"preview": "use crate::{\n chunk_ray_casting::chunk_ray_cast,\n collision_math::Ray,\n graph::Graph,\n node::{Chunk, ChunkId"
},
{
"path": "common/src/id.rs",
"chars": 1774,
"preview": "#[macro_export]\nmacro_rules! mkid {\n ($name:ident : $ty:ty) => {\n #[derive(Debug, Eq, PartialEq, Ord, PartialO"
},
{
"path": "common/src/lib.rs",
"chars": 3394,
"preview": "#![allow(clippy::needless_borrowed_reference)]\n\nuse rand::{\n Rng,\n distr::{Distribution, StandardUniform},\n};\n\n#[m"
},
{
"path": "common/src/margins.rs",
"chars": 17022,
"preview": "use crate::{\n dodeca::Vertex,\n graph::Graph,\n math::PermuteXYZ,\n node::{Chunk, ChunkId, VoxelData},\n voxe"
},
{
"path": "common/src/math.rs",
"chars": 42128,
"preview": "//! This module defines the a vector and matrix type for use in Minkowski space,\n//! allowing natural operations to be p"
},
{
"path": "common/src/node.rs",
"chars": 23939,
"preview": "/*the name of this module is pretty arbitrary at the moment*/\n\nuse std::ops::{Index, IndexMut};\n\nuse serde::{Deserialize"
},
{
"path": "common/src/peer_traverser.rs",
"chars": 13173,
"preview": "use std::sync::LazyLock;\n\nuse arrayvec::ArrayVec;\n\nuse crate::{\n dodeca::{SIDE_COUNT, Side},\n graph::{Graph, NodeI"
},
{
"path": "common/src/proto.rs",
"chars": 3374,
"preview": "use serde::{Deserialize, Serialize};\n\nuse crate::{\n EntityId, SimConfig, Step, dodeca, graph::NodeId, math::MIsometry"
},
{
"path": "common/src/sim_config.rs",
"chars": 8170,
"preview": "use std::time::Duration;\n\nuse serde::{Deserialize, Serialize};\n\nuse crate::{dodeca, math::MVector};\n\n/// Manually specif"
},
{
"path": "common/src/traversal.rs",
"chars": 10383,
"preview": "use std::collections::VecDeque;\n\nuse fxhash::FxHashSet;\n\nuse crate::{\n collision_math::Ray,\n dodeca::{self, Side, "
},
{
"path": "common/src/voxel_math.rs",
"chars": 9711,
"preview": "use std::ops::{Index, IndexMut};\n\nuse serde::{Deserialize, Serialize};\n\nuse crate::dodeca::Side;\n\n/// Represents a parti"
},
{
"path": "common/src/world.rs",
"chars": 3039,
"preview": "use serde::{Deserialize, Serialize};\n\n#[derive(\n Debug, Copy, Clone, Default, Eq, PartialEq, Ord, PartialOrd, Hash, S"
},
{
"path": "common/src/worldgen/horosphere.rs",
"chars": 23771,
"preview": "use libm::{cosf, sinf, sqrtf};\nuse rand::{Rng, SeedableRng};\nuse rand_distr::Poisson;\nuse rand_pcg::Pcg64Mcg;\n\nuse crate"
},
{
"path": "common/src/worldgen/mod.rs",
"chars": 31108,
"preview": "use horosphere::{HorosphereChunk, HorosphereNode};\nuse plane::Plane;\nuse rand::{Rng, SeedableRng, distr::Uniform};\nuse r"
},
{
"path": "common/src/worldgen/plane.rs",
"chars": 7267,
"preview": "use std::ops::{Mul, Neg};\n\nuse crate::{\n dodeca::{Side, Vertex},\n math::{MDirection, MIsometry, MPoint, MVector},\n"
},
{
"path": "common/src/worldgen/terraingen.rs",
"chars": 51579,
"preview": "use crate::{math, world::Material};\n\nconst GENERAL_HIGH: [VoronoiInfo; 113] = [\n VoronoiInfo::new(Material::ClayLoam,"
},
{
"path": "docs/README.md",
"chars": 6703,
"preview": "# Current outline\nThis is subject to change.\n* Introduction\n* How to play Hypermine\n * Controls\n * Config file\n "
},
{
"path": "docs/world_generation.md",
"chars": 9032,
"preview": "# World generation\nWorld generation in Hypermine is constrained the following principles:\n* Consistency: The contents of"
},
{
"path": "save/Cargo.toml",
"chars": 493,
"preview": "[package]\nname = \"save\"\nversion = \"0.1.0\"\nedition = \"2024\"\n\n# See more keys and their definitions at https://doc.rust-la"
},
{
"path": "save/benches/bench.rs",
"chars": 2936,
"preview": "use std::hint::black_box;\n\nuse save::Save;\n\nuse criterion::{BatchSize, BenchmarkId, Criterion, Throughput, criterion_gro"
},
{
"path": "save/gen-protos/Cargo.toml",
"chars": 120,
"preview": "[package]\nname = \"gen-protos\"\nversion = \"0.1.0\"\nedition = \"2024\"\npublish = false\n\n[dependencies]\nprost-build = \"0.14.3\"\n"
},
{
"path": "save/gen-protos/src/main.rs",
"chars": 290,
"preview": "use std::{io::Result, path::Path};\n\nfn main() -> Result<()> {\n let dir = Path::new(env!(\"CARGO_MANIFEST_DIR\"))\n "
},
{
"path": "save/src/lib.rs",
"chars": 10454,
"preview": "mod protos;\n\nuse std::path::Path;\n\nuse prost::Message;\nuse redb::{Database, ReadableDatabase, ReadableTable, TableDefini"
},
{
"path": "save/src/protos.proto",
"chars": 972,
"preview": "syntax = \"proto3\";\n\npackage protos;\n\nmessage Meta {\n // Number of voxels along the edge of a chunk\n uint32 chunk_s"
},
{
"path": "save/src/protos.rs",
"chars": 2634,
"preview": "// This file is @generated by prost-build.\n#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)]\npub struct Meta"
},
{
"path": "save/tests/heavy.rs",
"chars": 959,
"preview": "use std::time::Instant;\n\nuse save::Save;\n\nuse rand::{Rng, SeedableRng, rngs::SmallRng};\n\n#[test]\nfn write() {\n let mu"
},
{
"path": "save/tests/tests.rs",
"chars": 1670,
"preview": "use rand::{Rng, SeedableRng, rngs::SmallRng};\n\nuse save::Save;\n\n#[test]\nfn persist_meta() {\n let file = tempfile::Nam"
},
{
"path": "server/Cargo.toml",
"chars": 956,
"preview": "[package]\nname = \"server\"\nversion = \"0.1.0\"\nauthors = [\"Benjamin Saunders <ben.e.saunders@gmail.com>\"]\nedition = \"2024\"\n"
},
{
"path": "server/src/config.rs",
"chars": 1082,
"preview": "use std::{\n fs,\n net::{Ipv6Addr, SocketAddr},\n path::{Path, PathBuf},\n};\n\nuse anyhow::{Context, Result};\nuse se"
},
{
"path": "server/src/input_queue.rs",
"chars": 2475,
"preview": "use std::{\n collections::VecDeque,\n time::{Duration, Instant},\n};\n\nuse common::proto::Command;\n\n/// A jitter-toler"
},
{
"path": "server/src/lib.rs",
"chars": 15422,
"preview": "#![allow(clippy::needless_borrowed_reference)]\n\nextern crate nalgebra as na;\nmod input_queue;\nmod postcard_helpers;\nmod "
},
{
"path": "server/src/main.rs",
"chars": 2749,
"preview": "#![allow(clippy::needless_borrowed_reference)]\n\nmod config;\n\nuse std::{fs, net::UdpSocket, path::Path};\n\nuse anyhow::{Co"
},
{
"path": "server/src/postcard_helpers.rs",
"chars": 955,
"preview": "//! Postcard doesn't support serializing to an existing vec out of the box.\n//! See https://github.com/jamesmunns/postca"
},
{
"path": "server/src/sim.rs",
"chars": 32221,
"preview": "use std::sync::Arc;\n\nuse anyhow::Context;\nuse common::dodeca::{Side, Vertex};\nuse common::math::MIsometry;\nuse common::n"
},
{
"path": "shell.nix",
"chars": 951,
"preview": "let\n moz_overlay = import (builtins.fetchTarball\n \"https://github.com/mozilla/nixpkgs-mozilla/archive/9b11a87c0cc54e"
}
]
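Each entry in the condensed preview above is a JSON object with three keys: `path` (the file's path in the repository), `chars` (the full file's character count), and `preview` (a truncated snippet). A minimal sketch of consuming that schema, using a hypothetical two-entry sample rather than the full 102-entry array:

```python
import json

# Hypothetical sample mirroring the preview schema: path, chars, preview.
sample = """
[
  {"path": "save/src/lib.rs", "chars": 10454, "preview": "mod protos;..."},
  {"path": "server/src/sim.rs", "chars": 32221, "preview": "use std::sync::Arc;..."}
]
"""

entries = json.loads(sample)

# Index entries by file path for direct lookup.
by_path = {entry["path"]: entry for entry in entries}

# The per-entry "chars" counts sum to the total extracted size.
total_chars = sum(entry["chars"] for entry in entries)

print(by_path["save/src/lib.rs"]["chars"])  # 10454
print(total_chars)  # 42675
```

Note that `preview` is truncated, so `len(entry["preview"])` is generally much smaller than `entry["chars"]`; only the latter reflects the real file size.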
About this extraction
This page contains the full source code of the Ralith/hypermine GitHub repository, extracted and formatted as plain text. The extraction includes 102 files (793.1 KB), approximately 197.4k tokens, and a symbol index with 1131 extracted functions, classes, methods, constants, and types. Extracted by GitExtract.