Repository: TencentARC/FreeSplatter
Branch: main
Commit: c0446c44d9c6
Files: 64
Total size: 10.1 MB
Directory structure:
gitextract_e5wtol89/
├── .gitignore
├── LICENSE.txt
├── README.md
├── app.py
├── configs/
│   ├── freesplatter-object-2dgs.yaml
│   ├── freesplatter-object.yaml
│   └── freesplatter-scene.yaml
├── freesplatter/
│   ├── __init__.py
│   ├── hunyuan/
│   │   ├── __init__.py
│   │   ├── hunyuan3d_mvd_std_pipeline.py
│   │   └── utils.py
│   ├── models/
│   │   ├── __init__.py
│   │   ├── model.py
│   │   ├── renderer/
│   │   │   ├── __init__.py
│   │   │   ├── gaussian_renderer.py
│   │   │   └── gaussian_utils.py
│   │   ├── renderer_2dgs/
│   │   │   ├── __init__.py
│   │   │   ├── gaussian_renderer.py
│   │   │   └── gaussian_utils.py
│   │   └── transformer.py
│   ├── utils/
│   │   ├── __init__.py
│   │   ├── camera_util.py
│   │   ├── geometry_util.py
│   │   ├── infer_util.py
│   │   ├── mesh_optim.py
│   │   └── recon_util.py
│   └── webui/
│       ├── __init__.py
│       ├── camera_viewer/
│       │   ├── __init__.py
│       │   ├── utils.py
│       │   └── visualizer.py
│       ├── gradio_customgs/
│       │   ├── __init__.py
│       │   ├── customgs.py
│       │   ├── customgs.pyi
│       │   └── templates/
│       │       ├── component/
│       │       │   ├── Canvas3D-60a8d213.js
│       │       │   ├── Canvas3DGS-0fbc0d9a.js
│       │       │   ├── Index-f5583db3.js
│       │       │   ├── __vite-browser-external-2447137e.js
│       │       │   ├── index.js
│       │       │   ├── style.css
│       │       │   └── wrapper-6f348d45-19fa94bf.js
│       │       └── example/
│       │           ├── index.js
│       │           └── style.css
│       ├── gradio_custommodel3d/
│       │   ├── __init__.py
│       │   ├── custommodel3d.py
│       │   ├── custommodel3d.pyi
│       │   └── templates/
│       │       ├── component/
│       │       │   ├── Canvas3D-e42d3d6b.js
│       │       │   ├── Canvas3DGS-f5539f54.js
│       │       │   ├── Index-0bb1de05.js
│       │       │   ├── __vite-browser-external-2447137e.js
│       │       │   ├── index.js
│       │       │   ├── style.css
│       │       │   └── wrapper-6f348d45-f837cf34.js
│       │       └── example/
│       │           ├── index.js
│       │           └── style.css
│       ├── parameters.py
│       ├── runner.py
│       ├── shared_opts.py
│       ├── style.css
│       ├── tab_img_to_3d.py
│       ├── tab_instant3d.py
│       ├── tab_text_to_img_to_3d.py
│       ├── tab_views_to_3d.py
│       └── tab_views_to_scene.py
└── requirements.txt
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
eggs/
.eggs/
.vscode/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
.DS_Store
.env/
.trash/
ckpts/
logs/
data/
outputs/
figures*/
examples/
rebuttal/
apps/
*.sh
run.py
================================================
FILE: LICENSE.txt
================================================
Tencent is pleased to support the open source community by making FreeSplatter available.
Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved.
FreeSplatter IS NOT INTENDED FOR USE WITHIN THE EUROPEAN UNION.
For avoidance of doubts, FreeSplatter means the inference-enabling code, parameters, and weights of this model made publicly available by Tencent in accordance with the following License Terms.
License Terms of the FreeSplatter:
--------------------------------------------------------------------
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
================================================
FILE: README.md
================================================
<div align="center">
# FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction
<a href='https://bluestyle97.github.io/projects/freesplatter/'><img src='https://img.shields.io/badge/Project_Page-Website-red?logo=googlechrome&logoColor=white' alt='Project Page'></a>
<a href="https://arxiv.org/abs/2412.09573"><img src='https://img.shields.io/badge/arXiv-Paper-green?logo=arxiv&logoColor=white' alt='arXiv'></a>
<a href="https://huggingface.co/TencentARC/FreeSplatter"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Model_Card-Huggingface-orange"></a>
<a href="https://huggingface.co/spaces/TencentARC/FreeSplatter"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Gradio%20Demo-Huggingface-orange"></a> <br>
**ICCV 2025**
</div>
---
This repo is the official implementation of FreeSplatter, a feed-forward framework capable of generating high-quality 3D Gaussians from uncalibrated sparse-view images and recovering their camera parameters in mere seconds.
https://github.com/user-attachments/assets/0c73b693-9428-46bd-843c-132434b9686f
# ⚙️ Installation
We recommend using `Python>=3.10`, `PyTorch>=2.1.0`, and `CUDA>=12.1`.
```bash
conda create --name freesplatter python=3.10
conda activate freesplatter
pip install -U pip
# Install PyTorch and xformers
# You may need to install another xformers version if you use a different PyTorch version
pip install torch==2.4.0 torchvision==0.19.0 --index-url https://download.pytorch.org/whl/cu121
pip install xformers==0.0.27.post2
# Install other requirements
pip install -r requirements.txt
```
# 🤖 Pretrained Models
We provide the following pretrained models:
| Model | Description | #Params | Download |
| --- | --- | --- | --- |
| FreeSplatter-O | Object-level reconstruction model | 306M | [Download](https://huggingface.co/TencentARC/FreeSplatter/blob/main/freesplatter-object.safetensors) |
| FreeSplatter-O-2dgs | Object-level reconstruction model using [2DGS](https://surfsplatting.github.io/) (finetuned from FreeSplatter-O) | 306M | [Download](https://huggingface.co/TencentARC/FreeSplatter/blob/main/freesplatter-object-2dgs.safetensors) |
| FreeSplatter-S | Scene-level reconstruction model | 306M | [Download](https://huggingface.co/TencentARC/FreeSplatter/blob/main/freesplatter-scene.safetensors) |
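The checkpoints can also be fetched programmatically. Below is a minimal sketch using `huggingface_hub`; the `fetch_checkpoint` helper is ours for illustration and is not part of this repo:

```python
# Checkpoint filenames as listed in the table above
# (Hugging Face repo: TencentARC/FreeSplatter).
FREESPLATTER_CHECKPOINTS = {
    "FreeSplatter-O": "freesplatter-object.safetensors",
    "FreeSplatter-O-2dgs": "freesplatter-object-2dgs.safetensors",
    "FreeSplatter-S": "freesplatter-scene.safetensors",
}

def fetch_checkpoint(model_name: str) -> str:
    """Download one checkpoint from the Hub and return its local file path."""
    from huggingface_hub import hf_hub_download  # requires `huggingface_hub`
    return hf_hub_download(
        repo_id="TencentARC/FreeSplatter",
        filename=FREESPLATTER_CHECKPOINTS[model_name],
    )
```

Calling `fetch_checkpoint("FreeSplatter-O")` downloads the file into the local Hugging Face cache (if not already present) and returns its path.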
# 💫 Inference
We recommend starting a Gradio demo on your local machine; simply run:
```bash
python app.py
```
# ⚖️ License
FreeSplatter's code and models are licensed under the [Apache 2.0 License](LICENSE.txt) with additional restrictions to comply with Tencent's open-source policies. Note that the dependencies [Hunyuan3D-1](https://github.com/Tencent/Hunyuan3D-1) and [BRIAAI RMBG-2.0](https://huggingface.co/briaai/RMBG-2.0) are released under their own non-commercial licenses.
# :books: Citation
If you find our work useful for your research or applications, please cite using this BibTeX:
```BibTeX
@article{xu2024freesplatter,
title={FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction},
author={Xu, Jiale and Gao, Shenghua and Shan, Ying},
journal={arXiv preprint arXiv:2412.09573},
year={2024}
}
```
================================================
FILE: app.py
================================================
import os
if 'OMP_NUM_THREADS' not in os.environ:
    os.environ['OMP_NUM_THREADS'] = '16'
import torch
import gradio as gr
from functools import partial
from huggingface_hub import snapshot_download
from freesplatter.webui.runner import FreeSplatterRunner
from freesplatter.webui.tab_img_to_3d import create_interface_img_to_3d
from freesplatter.webui.tab_views_to_3d import create_interface_views_to_3d
from freesplatter.webui.tab_views_to_scene import create_interface_views_to_scene
os.makedirs('./ckpts/Hunyuan3D-1', exist_ok=True)
snapshot_download('tencent/Hunyuan3D-1', repo_type='model', local_dir='./ckpts/Hunyuan3D-1')
torch.set_grad_enabled(False)
device = torch.device('cuda')
runner = FreeSplatterRunner(device)
_HEADER_ = '''
# FreeSplatter 🤗 Gradio Demo
\n\nOfficial demo of the paper [FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction](https://arxiv.org/abs/2412.09573). [[Github]](https://github.com/TencentARC/FreeSplatter)
**FreeSplatter** is a feed-forward framework capable of generating high-quality 3D Gaussians from **uncalibrated** sparse-view images and recovering their camera parameters in mere seconds.
'''
_IMG_TO_3D_HELP_ = '''
💡💡💡**Usage Tips:**
- This demo supports various multi-view diffusion models, including [Hunyuan3D](https://github.com/Tencent/Hunyuan3D-1) Std and [Zero123++](https://github.com/SUDO-AI-3D/zero123plus) v1.1/v1.2. You can try different models to get the best result.
- Try clicking the \U0001f3b2\ufe0f button to use a different `Random seed` (default: 42) for diverse outputs.
- In most cases, using `2DGS` leads to better mesh geometry than `3DGS`. Please refer to [2DGS paper](https://arxiv.org/abs/2403.17888).
- You can adjust the views used for reconstruction to alleviate the blurry texture problem caused by multi-view inconsistency.
'''
_VIEWS_TO_3D_HELP_ = '''
💡💡💡**Usage Tips:**
- Our model assumes white-background input images with centered objects. Please enable the `Remove background` option if the images are not white-background.
- Our model assumes the same focal length for all input images; otherwise, the results may degrade.
'''
_VIEWS_TO_SCENE_HELP_ = '''
💡💡💡**Usage Tips:**
- While our model architecture makes no assumption on the number of input images, the current model was only trained on a two-view setting.
- The input images will be center-cropped and resized to 512x512.
- The visualized camera poses are normalized to make the baseline equal to 1.0.
'''
_CITE_ = r"""
If FreeSplatter is helpful, please help to ⭐ the <a href='https://github.com/TencentARC/FreeSplatter' target='_blank'>Github Repo</a>. Thanks! [](https://github.com/TencentARC/FreeSplatter)
---
📝 **Citation**
If you find our work useful for your research or applications, please cite using this bibtex:
```bibtex
@article{xu2024freesplatter,
title={FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction},
author={Xu, Jiale and Gao, Shenghua and Shan, Ying},
journal={arXiv preprint},
year={2024}
}
```
📋 **License**
Apache-2.0 LICENSE. Please refer to the [LICENSE file](https://huggingface.co/spaces/TencentARC/FreeSplatter/blob/main/LICENSE) for details.
📧 **Contact**
If you have any questions, feel free to open a discussion or contact us at <b>bluestyle928@gmail.com</b>.
"""
with gr.Blocks(analytics_enabled=False, title='FreeSplatter Demo', theme=gr.themes.Ocean()) as demo:
    gr.Markdown(_HEADER_)
    with gr.Tabs() as main_tabs:
        with gr.TabItem('Image-to-3D', id='tab_img_to_3d'):
            gr.Markdown(_IMG_TO_3D_HELP_)
            with gr.Tabs() as sub_tabs_img_to_3d:
                with gr.TabItem('Hunyuan3D Std', id='tab_hunyuan3d_std'):
                    _, var_img_to_3d_hunyuan3d_std = create_interface_img_to_3d(
                        runner.run_segmentation,
                        runner.run_img_to_3d,
                        model='Hunyuan3D Std')
                with gr.TabItem('Zero123++ v1.1', id='tab_zero123plus_v11'):
                    _, var_img_to_3d_zero123plus_v11 = create_interface_img_to_3d(
                        runner.run_segmentation,
                        runner.run_img_to_3d,
                        model='Zero123++ v1.1')
                with gr.TabItem('Zero123++ v1.2', id='tab_zero123plus_v12'):
                    _, var_img_to_3d_zero123plus_v12 = create_interface_img_to_3d(
                        runner.run_segmentation,
                        runner.run_img_to_3d,
                        model='Zero123++ v1.2')
        with gr.TabItem('Sparse-view Reconstruction (Object)', id='tab_views_to_3d'):
            gr.Markdown(_VIEWS_TO_3D_HELP_)
            _, var_views_to_3d = create_interface_views_to_3d(
                runner.run_views_to_3d)
        with gr.TabItem('Sparse-view Reconstruction (Scene)', id='tab_views_to_scene'):
            gr.Markdown(_VIEWS_TO_SCENE_HELP_)
            _, var_views_to_scene = create_interface_views_to_scene(
                runner.run_views_to_scene)
    gr.Markdown(_CITE_)

demo.queue().launch(
    share=False,
    server_name="0.0.0.0",
    server_port=41137,
    ssl_verify=False,
)
================================================
FILE: configs/freesplatter-object-2dgs.yaml
================================================
model:
  target: freesplatter.models.model.FreeSplatterModel
  params:
    transformer_config:
      target: freesplatter.models.transformer.Transformer
      params:
        patch_size: 8
        input_dim: 3
        inner_dim: 1024
        output_dim: 22
        depth: 24
        n_heads: 16
    renderer_config:
      sh_degree: 1
      img_height: 512
      img_width: 512
      scaling_activation_type: sigmoid
      scale_min_act: 0.0001
      scale_max_act: 0.02
      scale_multi_act: 0.1
      sh_residual: false
      use_2dgs: true
================================================
FILE: configs/freesplatter-object.yaml
================================================
model:
  target: freesplatter.models.model.FreeSplatterModel
  params:
    transformer_config:
      target: freesplatter.models.transformer.Transformer
      params:
        patch_size: 8
        input_dim: 3
        inner_dim: 1024
        output_dim: 23
        depth: 24
        n_heads: 16
    renderer_config:
      sh_degree: 1
      img_height: 512
      img_width: 512
      scaling_activation_type: sigmoid
      scale_min_act: 0.0001
      scale_max_act: 0.02
      scale_multi_act: 0.1
      sh_residual: false
================================================
FILE: configs/freesplatter-scene.yaml
================================================
model:
  target: freesplatter.models.model.FreeSplatterModel
  params:
    transformer_config:
      target: freesplatter.models.transformer.Transformer
      params:
        patch_size: 8
        input_dim: 3
        inner_dim: 1024
        output_dim: 23
        depth: 24
        n_heads: 16
    renderer_config:
      sh_degree: 1
      img_height: 512
      img_width: 512
      scaling_activation_type: sigmoid
      scale_min_act: 0.0001
      scale_max_act: 0.02
      scale_multi_act: 0.1
      bg_color: [0., 0., 0.]
      sh_residual: true
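All three configs above share a `target`/`params` layout, the usual instantiate-from-config convention: `target` is a dotted import path and `params` the keyword arguments. A minimal resolver for this convention might look like the sketch below (illustrative only; the repo's actual loader may differ):

```python
import importlib

def instantiate_from_config(config: dict):
    """Resolve the dotted `target` path to a class, then call it with `params`."""
    module_path, attr_name = config["target"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), attr_name)
    return cls(**config.get("params", {}))

# Illustrative usage with a stdlib target; the real configs point at
# freesplatter.models.model.FreeSplatterModel.
obj = instantiate_from_config({"target": "collections.OrderedDict"})
```

With the YAML files above parsed into dicts, `instantiate_from_config(cfg["model"])` would build the `FreeSplatterModel` with its nested transformer and renderer settings.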
================================================
FILE: freesplatter/__init__.py
================================================
================================================
FILE: freesplatter/hunyuan/__init__.py
================================================
================================================
FILE: freesplatter/hunyuan/hunyuan3d_mvd_std_pipeline.py
================================================
# Open Source Model Licensed under the Apache License Version 2.0 and Other Licenses of the Third-Party Components therein:
# The below Model in this distribution may have been modified by THL A29 Limited ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2024 THL A29 Limited.
# Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved.
# The below software and/or models in this distribution may have been
# modified by THL A29 Limited ("Tencent Modifications").
# All Tencent Modifications are Copyright (C) THL A29 Limited.
# Hunyuan 3D is licensed under the TENCENT HUNYUAN NON-COMMERCIAL LICENSE AGREEMENT
# except for the third-party components listed below.
# Hunyuan 3D does not impose any additional limitations beyond what is outlined
# in the respective licenses of these third-party components.
# Users must comply with all terms and conditions of original licenses of these third-party
# components and must ensure that the usage of the third party components adheres to
# all relevant laws and regulations.
# For avoidance of doubts, Hunyuan 3D means the large language models and
# their software and algorithms, including trained model weights, parameters (including
# optimizer states), machine-learning model code, inference-enabling code, training-enabling code,
# fine-tuning enabling code and other elements of the foregoing made publicly available
# by Tencent in accordance with TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT.
import inspect
import os
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
import numpy as np
from PIL import Image
import diffusers
from diffusers.image_processor import VaeImageProcessor
from diffusers.schedulers import KarrasDiffusionSchedulers
from diffusers.utils.torch_utils import randn_tensor
from diffusers.utils.import_utils import is_xformers_available
from diffusers.models.attention_processor import (
Attention,
AttnProcessor,
XFormersAttnProcessor,
AttnProcessor2_0
)
from diffusers import (
AutoencoderKL,
DDPMScheduler,
DiffusionPipeline,
EulerAncestralDiscreteScheduler,
UNet2DConditionModel,
ImagePipelineOutput
)
import transformers
from transformers import (
CLIPImageProcessor,
CLIPTextModel,
CLIPTokenizer,
CLIPVisionModelWithProjection,
CLIPTextModelWithProjection
)
from .utils import to_rgb_image, white_out_background, recenter_img
EXAMPLE_DOC_STRING = """
Examples:
```py
>>> import torch
>>> from diffusers import Hunyuan3d_MVD_XL_Pipeline
>>> pipe = Hunyuan3d_MVD_XL_Pipeline.from_pretrained(
... "Tencent-Hunyuan-3D/MVD-XL", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")
>>> img = Image.open("demo.png")
>>> res_img = pipe(img).images[0]
```
"""
def scale_latents(latents): return (latents - 0.22) * 0.75
def unscale_latents(latents): return (latents / 0.75) + 0.22
def scale_image(image): return (image - 0.5) / 0.5
def scale_image_2(image): return (image * 0.5) / 0.8
def unscale_image(image): return (image * 0.5) + 0.5
def unscale_image_2(image): return (image * 0.8) / 0.5
class ReferenceOnlyAttnProc(torch.nn.Module):
    def __init__(self, chained_proc, enabled=False, name=None):
        super().__init__()
        self.enabled = enabled
        self.chained_proc = chained_proc
        self.name = name

    def __call__(self, attn, hidden_states, encoder_hidden_states=None, attention_mask=None, mode="w", ref_dict=None):
        encoder_hidden_states = hidden_states if encoder_hidden_states is None else encoder_hidden_states
        if self.enabled:
            if mode == 'w':
                ref_dict[self.name] = encoder_hidden_states
            elif mode == 'r':
                encoder_hidden_states = torch.cat([encoder_hidden_states, ref_dict.pop(self.name)], dim=1)
            else:
                raise Exception(f"mode should not be {mode}")
        return self.chained_proc(attn, hidden_states, encoder_hidden_states, attention_mask)
class RefOnlyNoisedUNet(torch.nn.Module):
    def __init__(self, unet, scheduler) -> None:
        super().__init__()
        self.unet = unet
        self.scheduler = scheduler

        unet_attn_procs = dict()
        for name, _ in unet.attn_processors.items():
            if torch.__version__ >= '2.0':
                default_attn_proc = AttnProcessor2_0()
            elif is_xformers_available():
                default_attn_proc = XFormersAttnProcessor()
            else:
                default_attn_proc = AttnProcessor()
            unet_attn_procs[name] = ReferenceOnlyAttnProc(
                default_attn_proc, enabled=name.endswith("attn1.processor"), name=name
            )
        unet.set_attn_processor(unet_attn_procs)

    def __getattr__(self, name: str):
        try:
            return super().__getattr__(name)
        except AttributeError:
            return getattr(self.unet, name)

    def forward(
        self,
        sample: torch.FloatTensor,
        timestep: Union[torch.Tensor, float, int],
        encoder_hidden_states: torch.Tensor,
        cross_attention_kwargs: Optional[Dict[str, Any]] = None,
        class_labels: Optional[torch.Tensor] = None,
        down_block_res_samples: Optional[Tuple[torch.Tensor]] = None,
        mid_block_res_sample: Optional[Tuple[torch.Tensor]] = None,
        added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None,
        return_dict: bool = True,
        **kwargs
    ):
        dtype = self.unet.dtype

        # add the same level of noise to the conditioning latent
        cond_lat = cross_attention_kwargs['cond_lat']
        noise = torch.randn_like(cond_lat)
        noisy_cond_lat = self.scheduler.add_noise(cond_lat, noise, timestep.reshape(-1))
        noisy_cond_lat = self.scheduler.scale_model_input(noisy_cond_lat, timestep.reshape(-1))

        # "write" pass: run the UNet on the noisy conditioning latent to record
        # its self-attention hidden states into ref_dict
        ref_dict = {}
        _ = self.unet(
            noisy_cond_lat,
            timestep,
            encoder_hidden_states=encoder_hidden_states,
            class_labels=class_labels,
            cross_attention_kwargs=dict(mode="w", ref_dict=ref_dict),
            added_cond_kwargs=added_cond_kwargs,
            return_dict=return_dict,
            **kwargs
        )

        # "read" pass: denoise the actual sample while attending to the recorded
        # reference features
        res = self.unet(
            sample,
            timestep,
            encoder_hidden_states,
            class_labels=class_labels,
            cross_attention_kwargs=dict(mode="r", ref_dict=ref_dict),
            down_block_additional_residuals=[
                sample.to(dtype=dtype) for sample in down_block_res_samples
            ] if down_block_res_samples is not None else None,
            mid_block_additional_residual=(
                mid_block_res_sample.to(dtype=dtype)
                if mid_block_res_sample is not None else None),
            added_cond_kwargs=added_cond_kwargs,
            return_dict=return_dict,
            **kwargs
        )
        return res
class HunYuan3D_MVD_Std_Pipeline(diffusers.DiffusionPipeline):
    def __init__(
        self,
        vae: AutoencoderKL,
        unet: UNet2DConditionModel,
        scheduler: KarrasDiffusionSchedulers,
        feature_extractor_vae: CLIPImageProcessor,
        vision_processor: CLIPImageProcessor,
        vision_encoder: CLIPVisionModelWithProjection,
        vision_encoder_2: CLIPVisionModelWithProjection,
        ramping_coefficients: Optional[list] = None,
        add_watermarker: Optional[bool] = None,
        safety_checker=None,
    ):
        DiffusionPipeline.__init__(self)

        self.register_modules(
            vae=vae, unet=unet, scheduler=scheduler, safety_checker=None, feature_extractor_vae=feature_extractor_vae,
            vision_processor=vision_processor, vision_encoder=vision_encoder, vision_encoder_2=vision_encoder_2,
        )
        self.register_to_config(ramping_coefficients=ramping_coefficients)
        self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
        self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
        self.default_sample_size = self.unet.config.sample_size
        self.watermark = None
        self.prepare_init = False

    def prepare(self):
        assert isinstance(self.unet, UNet2DConditionModel), "unet should be UNet2DConditionModel"
        self.unet = RefOnlyNoisedUNet(self.unet, self.scheduler).eval()
        self.prepare_init = True

    def encode_image(self, image: torch.Tensor, scale_factor: bool = False):
        latent = self.vae.encode(image).latent_dist.sample()
        return (latent * self.vae.config.scaling_factor) if scale_factor else latent

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
    def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
        shape = (
            batch_size,
            num_channels_latents,
            int(height) // self.vae_scale_factor,
            int(width) // self.vae_scale_factor,
        )
        if isinstance(generator, list) and len(generator) != batch_size:
            raise ValueError(
                f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
                f" size of {batch_size}. Make sure the batch size matches the length of the generators."
            )
        if latents is None:
            latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
        else:
            latents = latents.to(device)

        # scale the initial noise by the standard deviation required by the scheduler
        latents = latents * self.scheduler.init_noise_sigma
        return latents
def _get_add_time_ids(
self, original_size, crops_coords_top_left, target_size, dtype, text_encoder_projection_dim=None
):
add_time_ids = list(original_size + crops_coords_top_left + target_size)
passed_add_embed_dim = (
self.unet.config.addition_time_embed_dim * len(add_time_ids) + text_encoder_projection_dim
)
expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features
if expected_add_embed_dim != passed_add_embed_dim:
raise ValueError(
f"Model expects an added time embedding vector of length {expected_add_embed_dim}, " \
f"but a vector of {passed_add_embed_dim} was created. The model has an incorrect config." \
f" Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`."
)
add_time_ids = torch.tensor([add_time_ids], dtype=dtype)
return add_time_ids
def prepare_extra_step_kwargs(self, generator, eta):
# prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
# eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
# eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
# and should be between [0, 1]
accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
extra_step_kwargs = {}
if accepts_eta: extra_step_kwargs["eta"] = eta
# check if the scheduler accepts generator
accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
if accepts_generator: extra_step_kwargs["generator"] = generator
return extra_step_kwargs
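`prepare_extra_step_kwargs` uses `inspect.signature` so that `eta` and `generator` are forwarded only to schedulers whose `step` method actually accepts them. A minimal standalone sketch of the same pattern (the helper name and dummy step functions below are illustrative, not part of the pipeline):

```python
import inspect

def filter_step_kwargs(step_fn, candidates):
    # Keep only the kwargs that step_fn's signature declares.
    params = set(inspect.signature(step_fn).parameters)
    return {k: v for k, v in candidates.items() if k in params}

# A DDIM-style step accepts eta and generator; a Euler-style step does not.
def ddim_step(model_output, timestep, sample, eta=0.0, generator=None):
    return sample

def euler_step(model_output, timestep, sample):
    return sample

kwargs = {"eta": 0.5, "generator": "rng"}
print(filter_step_kwargs(ddim_step, kwargs))   # both kwargs kept
print(filter_step_kwargs(euler_step, kwargs))  # both kwargs dropped
```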
@property
def guidance_scale(self):
return self._guidance_scale
@property
def interrupt(self):
return self._interrupt
@property
def do_classifier_free_guidance(self):
return self._guidance_scale > 1 and self.unet.config.time_cond_proj_dim is None
@torch.no_grad()
def __call__(
self,
image: Image.Image = None,
guidance_scale = 2.0,
output_type: Optional[str] = "pil",
num_inference_steps: int = 50,
return_dict: bool = True,
eta: float = 0.0,
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
crops_coords_top_left: Tuple[int, int] = (0, 0),
cross_attention_kwargs: Optional[Dict[str, Any]] = None,
latent: torch.Tensor = None,
guidance_curve = None,
**kwargs
):
if not self.prepare_init:
self.prepare()
here = dict(device=self.vae.device, dtype=self.vae.dtype)
batch_size = 1
num_images_per_prompt = 1
width, height = 512 * 2, 512 * 3
target_size = original_size = (height, width)
self._guidance_scale = guidance_scale
self._cross_attention_kwargs = cross_attention_kwargs
self._interrupt = False
device = self._execution_device
# Prepare timesteps
self.scheduler.set_timesteps(num_inference_steps, device=device)
timesteps = self.scheduler.timesteps
# Prepare latent variables
num_channels_latents = self.unet.config.in_channels
latents = self.prepare_latents(
batch_size * num_images_per_prompt,
num_channels_latents,
height,
width,
self.vae.dtype,
device,
generator,
latents=latent,
)
# Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
# Prepare added time ids & embeddings
text_encoder_projection_dim = 1280
add_time_ids = self._get_add_time_ids(
original_size,
crops_coords_top_left,
target_size,
dtype=self.vae.dtype,
text_encoder_projection_dim=text_encoder_projection_dim,
)
negative_add_time_ids = add_time_ids
# hw: preprocess
cond_image = recenter_img(image)
cond_image = to_rgb_image(cond_image)  # composite the recentered image onto a white background
image_vae = self.feature_extractor_vae(images=cond_image, return_tensors="pt").pixel_values.to(**here)
image_clip = self.vision_processor(images=cond_image, return_tensors="pt").pixel_values.to(**here)
# hw: get cond_lat from cond_img using vae
cond_lat = self.encode_image(image_vae, scale_factor=False)
negative_lat = self.encode_image(torch.zeros_like(image_vae), scale_factor=False)
cond_lat = torch.cat([negative_lat, cond_lat])
# hw: get visual global embedding using clip
global_embeds_1 = self.vision_encoder(image_clip, output_hidden_states=False).image_embeds.unsqueeze(-2)
global_embeds_2 = self.vision_encoder_2(image_clip, output_hidden_states=False).image_embeds.unsqueeze(-2)
global_embeds = torch.concat([global_embeds_1, global_embeds_2], dim=-1)
ramp = global_embeds.new_tensor(self.config.ramping_coefficients).unsqueeze(-1)
prompt_embeds = self.uc_text_emb.to(**here)
pooled_prompt_embeds = self.uc_text_emb_2.to(**here)
prompt_embeds = prompt_embeds + global_embeds * ramp
add_text_embeds = pooled_prompt_embeds
if self.do_classifier_free_guidance:
negative_prompt_embeds = torch.zeros_like(prompt_embeds)
negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds)
prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0)
add_time_ids = torch.cat([negative_add_time_ids, add_time_ids], dim=0)
prompt_embeds = prompt_embeds.to(device)
add_text_embeds = add_text_embeds.to(device)
add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1)
# Denoising loop
num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)
timestep_cond = None
self._num_timesteps = len(timesteps)
if guidance_curve is None:
guidance_curve = lambda t: guidance_scale
with self.progress_bar(total=num_inference_steps) as progress_bar:
for i, t in enumerate(timesteps):
if self.interrupt:
continue
# expand the latents if we are doing classifier free guidance
latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents
latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
# predict the noise residual
added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
noise_pred = self.unet(
latent_model_input,
t,
encoder_hidden_states=prompt_embeds,
timestep_cond=timestep_cond,
cross_attention_kwargs=dict(cond_lat=cond_lat),
added_cond_kwargs=added_cond_kwargs,
return_dict=False,
)[0]
# perform guidance
# cur_guidance_scale = self.guidance_scale
cur_guidance_scale = guidance_curve(t) # 1.5 + 2.5 * ((t/1000)**2)
if self.do_classifier_free_guidance:
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
noise_pred = noise_pred_uncond + cur_guidance_scale * (noise_pred_text - noise_pred_uncond)
# cur_guidance_scale_topleft = (cur_guidance_scale - 1.0) * 4 + 1.0
# noise_pred_top_left = noise_pred_uncond +
# cur_guidance_scale_topleft * (noise_pred_text - noise_pred_uncond)
# _, _, h, w = noise_pred.shape
# noise_pred[:, :, :h//3, :w//2] = noise_pred_top_left[:, :, :h//3, :w//2]
# compute the previous noisy sample x_t -> x_t-1
latents_dtype = latents.dtype
latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
# call the callback, if provided
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
progress_bar.update()
latents = unscale_latents(latents)
if output_type=="latent":
image = latents
else:
image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
image = unscale_image(unscale_image_2(image)).clamp(0, 1)
image = [
Image.fromarray((image[0]*255+0.5).clamp_(0, 255).permute(1, 2, 0).cpu().numpy().astype("uint8")),
# self.image_processor.postprocess(image, output_type=output_type)[0],
cond_image.resize((512, 512))
]
if not return_dict: return (image,)
return ImagePipelineOutput(images=image)
def save_pretrained(self, save_directory):
# uc_text_emb.pt and uc_text_emb_2.pt are inferenced and saved in advance
super().save_pretrained(save_directory)
torch.save(self.uc_text_emb, os.path.join(save_directory, "uc_text_emb.pt"))
torch.save(self.uc_text_emb_2, os.path.join(save_directory, "uc_text_emb_2.pt"))
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
# uc_text_emb.pt and uc_text_emb_2.pt are inferenced and saved in advance
pipeline = super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
pipeline.uc_text_emb = torch.load(os.path.join(pretrained_model_name_or_path, "uc_text_emb.pt"))
pipeline.uc_text_emb_2 = torch.load(os.path.join(pretrained_model_name_or_path, "uc_text_emb_2.pt"))
return pipeline
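The denoising loop above combines the unconditional and conditional noise predictions with the standard classifier-free guidance formula, `uncond + scale * (text - uncond)`, where `scale` comes from `guidance_curve(t)`. A NumPy sketch of just that combination step (array names are illustrative):

```python
import numpy as np

def apply_cfg(noise_uncond, noise_text, scale):
    # Extrapolate from the unconditional prediction toward the
    # conditional one; scale > 1 amplifies the conditioning signal.
    return noise_uncond + scale * (noise_text - noise_uncond)

uncond = np.zeros(4)
text = np.ones(4)
print(apply_cfg(uncond, text, 1.0))  # scale 1 recovers the conditional prediction
print(apply_cfg(uncond, text, 2.0))  # scale 2 overshoots past it
```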
================================================
FILE: freesplatter/hunyuan/utils.py
================================================
# Open Source Model Licensed under the Apache License Version 2.0 and Other Licenses of the Third-Party Components therein:
# The below Model in this distribution may have been modified by THL A29 Limited ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2024 THL A29 Limited.
# Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved.
# The below software and/or models in this distribution may have been
# modified by THL A29 Limited ("Tencent Modifications").
# All Tencent Modifications are Copyright (C) THL A29 Limited.
# Hunyuan 3D is licensed under the TENCENT HUNYUAN NON-COMMERCIAL LICENSE AGREEMENT
# except for the third-party components listed below.
# Hunyuan 3D does not impose any additional limitations beyond what is outlined
# in the respective licenses of these third-party components.
# Users must comply with all terms and conditions of original licenses of these third-party
# components and must ensure that the usage of the third party components adheres to
# all relevant laws and regulations.
# For avoidance of doubts, Hunyuan 3D means the large language models and
# their software and algorithms, including trained model weights, parameters (including
# optimizer states), machine-learning model code, inference-enabling code, training-enabling code,
# fine-tuning enabling code and other elements of the foregoing made publicly available
# by Tencent in accordance with TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT.
import numpy as np
from PIL import Image
def to_rgb_image(maybe_rgba: Image.Image):
'''
Convert a PIL.Image to RGB mode, compositing any alpha channel onto a white background.
maybe_rgba: PIL.Image
return: PIL.Image
'''
if maybe_rgba.mode == 'RGB':
return maybe_rgba
elif maybe_rgba.mode == 'RGBA':
rgba = maybe_rgba
img = np.full([rgba.size[1], rgba.size[0], 3], 255, dtype=np.uint8)  # all-white canvas
img = Image.fromarray(img, 'RGB')
img.paste(rgba, mask=rgba.getchannel('A'))
return img
else:
raise ValueError("Unsupported image type.", maybe_rgba.mode)
def white_out_background(pil_img, is_gray_fg=True):
data = pil_img.getdata()
new_data = []
# clamp near-white foreground pixels to light gray so they stay distinguishable from the background
for r, g, b, a in data:
if a < 16:
new_data.append((255, 255, 255, 0)) # make the background white and fully transparent
else:
is_white = is_gray_fg and (r>235) and (g>235) and (b>235)
new_r = 235 if is_white else r
new_g = 235 if is_white else g
new_b = 235 if is_white else b
new_data.append((new_r, new_g, new_b, a))
pil_img.putdata(new_data)
return pil_img
def recenter_img(img, size=512, color=(255,255,255)):
img = white_out_background(img)
mask = np.array(img)[..., 3]
image = np.array(img)[..., :3]
H, W, C = image.shape
coords = np.nonzero(mask)
x_min, x_max = coords[0].min(), coords[0].max()
y_min, y_max = coords[1].min(), coords[1].max()
h = x_max - x_min
w = y_max - y_min
if h == 0 or w == 0:
raise ValueError('recenter_img: foreground mask is empty')
roi = image[x_min:x_max, y_min:y_max]
border_ratio = 0.15 # 0.2
pad_h = int(h * border_ratio)
pad_w = int(w * border_ratio)
result_tmp = np.full((h + pad_h, w + pad_w, C), color, dtype=np.uint8)
result_tmp[pad_h // 2: pad_h // 2 + h, pad_w // 2: pad_w // 2 + w] = roi
cur_h, cur_w = result_tmp.shape[:2]
side = max(cur_h, cur_w)
result = np.full((side, side, C), color, dtype=np.uint8)
result[(side-cur_h)//2:(side-cur_h)//2+cur_h, (side-cur_w)//2:(side - cur_w)//2+cur_w,:] = result_tmp
result = Image.fromarray(result)
return result.resize((size, size), Image.LANCZOS) if size else result
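`to_rgb_image` composites an RGBA input onto white by pasting it with its own alpha channel as the mask. A self-contained sketch of that compositing step using Pillow (the synthetic test image is illustrative):

```python
import numpy as np
from PIL import Image

# Build a 64x64 RGBA image: transparent background, opaque red square.
rgba = np.zeros((64, 64, 4), dtype=np.uint8)
rgba[16:48, 16:48] = [255, 0, 0, 255]
img = Image.fromarray(rgba, "RGBA")

# Composite onto a white canvas, as to_rgb_image does.
white = Image.new("RGB", img.size, (255, 255, 255))
white.paste(img, mask=img.getchannel("A"))

out = np.array(white)
print(out[0, 0])    # transparent background becomes white
print(out[32, 32])  # opaque foreground keeps its color
```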
================================================
FILE: freesplatter/models/__init__.py
================================================
================================================
FILE: freesplatter/models/model.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.transforms import v2
from einops import rearrange
from freesplatter.models.transformer import Transformer
from freesplatter.utils.infer_util import instantiate_from_config
from freesplatter.utils.recon_util import estimate_focal, fast_pnp
C0 = 0.28209479177387814
def RGB2SH(rgb):
return (rgb - 0.5) / C0
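`RGB2SH` maps colors into zeroth-order spherical-harmonics DC coefficients by centering at 0.5 and dividing by the constant SH basis value `C0`. The inverse mapping (`sh_to_rgb` below is a hypothetical helper, not defined in this file) recovers the color exactly:

```python
import numpy as np

# Zeroth-order SH basis constant; RGB2SH stores colors as DC coefficients.
C0 = 0.28209479177387814

def rgb_to_sh(rgb):
    return (rgb - 0.5) / C0

def sh_to_rgb(sh):
    return sh * C0 + 0.5

rgb = np.array([0.2, 0.5, 0.9])
sh = rgb_to_sh(rgb)
print(np.allclose(sh_to_rgb(sh), rgb))  # the mapping is an exact inverse
```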
class FreeSplatterModel(nn.Module):
def __init__(
self,
transformer_config=None,
renderer_config=None,
use_2dgs=False,
sh_residual=False,
):
super().__init__()
self.sh_dim = (renderer_config.sh_degree + 1) ** 2 * 3
self.sh_residual = sh_residual
self.use_2dgs = use_2dgs
self.transformer = instantiate_from_config(transformer_config)
if not use_2dgs:
from .renderer.gaussian_renderer import GaussianRenderer
else:
from .renderer_2dgs.gaussian_renderer import GaussianRenderer
self.gs_renderer = GaussianRenderer(renderer_config=renderer_config)
self.register_buffer('pp', torch.tensor([256, 256], dtype=torch.float32), persistent=False)
def forward_gaussians(self, images, **kwargs):
"""
images: B x N x 3 x H x W
"""
gaussians = self.transformer(images) # B x N x H x W x C
if self.sh_residual:
residual = torch.zeros_like(gaussians)
sh = RGB2SH(rearrange(images, 'b n c h w -> b n h w c'))
residual[..., 3:6] = sh
gaussians = gaussians + residual
gaussians = rearrange(gaussians, 'b n h w c -> b (n h w) c')
return gaussians
def forward_renderer(self, gaussians, c2ws, fxfycxcy, **kwargs):
"""
gaussians: B x K x 14
c2ws: B x N x 4 x 4
fxfycxcy: B x N x 4
"""
render_results = self.gs_renderer.render(gaussians, fxfycxcy, c2ws, **kwargs)
return render_results
@torch.inference_mode()
def estimate_focals(
self,
images,
masks=None,
use_first_focal=False,
):
"""
Estimate the focal lengths of N input images.
images: N x 3 x H x W
masks: N x 1 x H x W
"""
assert images.ndim == 4
N, _, H, W = images.shape
assert H == W, "Non-square images are not supported."
pp = self.pp.to(images)
# pp = torch.tensor([W/2, H/2]).to(images)
focals = []
for i in range(N):
if use_first_focal and i > 0:
break
images_input = torch.cat([images[i:], images[:i]], dim=0)
gaussians = self.forward_gaussians(images_input.unsqueeze(0)) # 1 x (N x H x W) x 14
points = rearrange(gaussians[0, :H*W, :3], '(h w) c -> h w c', h=H, w=W)
mask = masks[i] if masks is not None else None
focal = estimate_focal(points, pp=pp, mask=mask)
focals.append(focal)
focals = torch.stack(focals).to(images)
focals = focals.mean().reshape(1).repeat(N)
return focals
@torch.inference_mode()
def estimate_poses(
self,
images,
gaussians=None,
masks=None,
focals=None,
use_first_focal=True,
opacity_threshold=5e-2,
pnp_iter=20,
):
"""
Estimate the camera poses of N input images.
images: N x 3 x H x W
gaussians: K x 14 or 1 x K x 14
masks: N x 1 x H x W
focals: N
"""
assert images.ndim == 4
N, _, H, W = images.shape
assert H == W, "Non-square images are not supported."
# predict gaussians from images
if gaussians is None:
gaussians = self.forward_gaussians(images.unsqueeze(0)) # 1 x (N x H x W) x 14
else:
if gaussians.ndim == 2:
gaussians = gaussians.unsqueeze(0)
assert gaussians.shape[1] == N * H * W
points = gaussians[..., :3].reshape(1, N, H, W, 3).squeeze(0) # N x H x W x 3
opacities = gaussians[..., 3+self.sh_dim].reshape(1, N, H, W).squeeze(0)
opacities = torch.sigmoid(opacities) # N x H x W
# estimate focals if not provided
if focals is None:
focals = self.estimate_focals(images, masks=masks, use_first_focal=use_first_focal)
# run PnP
c2ws = []
for i in range(N):
pts3d = points[i].float().detach().cpu().numpy()
# If masks are not provided, we use Gaussian opacities
if masks is None:
mask = (opacities[i] > opacity_threshold).detach().cpu().numpy()
else:
mask = masks[i].reshape(H, W).bool().detach().cpu().numpy()
focal = focals[i].item()
_, c2w = fast_pnp(pts3d, mask, focal=focal, niter_PnP=pnp_iter)
c2ws.append(torch.from_numpy(c2w))
c2ws = torch.stack(c2ws, dim=0).to(images)
return c2ws, focals
================================================
FILE: freesplatter/models/renderer/__init__.py
================================================
================================================
FILE: freesplatter/models/renderer/gaussian_renderer.py
================================================
import torch
from .gaussian_utils import render, GaussianModel
class GaussianRenderer:
def __init__(self, renderer_config=None):
if 'scaling_activation_type' not in renderer_config:
renderer_config['scaling_activation_type'] = 'exp'
if 'scale_min_act' not in renderer_config:
renderer_config['scale_min_act'] = 1
renderer_config['scale_max_act'] = 1
renderer_config['scale_multi_act'] = 0.1
self.gaussian_model = GaussianModel(sh_degree=renderer_config.sh_degree,
scaling_activation_type=renderer_config.scaling_activation_type,
scale_min_act=renderer_config.scale_min_act,
scale_max_act=renderer_config.scale_max_act,
scale_multi_act=renderer_config.scale_multi_act)
self.img_height = renderer_config.img_height
self.img_width = renderer_config.img_width
self.bg_color = renderer_config.bg_color if 'bg_color' in renderer_config else (1.0, 1.0, 1.0)
def render(self, latent, output_fxfycxcy, output_c2ws, rescale=None, render_size=None):
if render_size is None:
img_height, img_width = self.img_height, self.img_width
else:
img_height, img_width = render_size
if rescale is None:
rescale = torch.ones(latent.shape[0]).to(latent)
shs_dim = (self.gaussian_model.sh_degree + 1) ** 2 * 3
xyz, features, opacity, scaling, rotation = latent.split([3, shs_dim, 1, 3, 4], dim=-1)
features = features.reshape(features.shape[0], -1, shs_dim//3, 3)
bs, vs = output_fxfycxcy.shape[:2]
images = torch.zeros(bs, vs, 3, img_height, img_width, dtype=torch.float32, device=output_c2ws.device)
alphas = torch.zeros(bs, vs, 1, img_height, img_width, dtype=torch.float32, device=output_c2ws.device)
depths = torch.zeros(bs, vs, 1, img_height, img_width, dtype=torch.float32, device=output_c2ws.device)
for idx in range(bs):
pc = self.gaussian_model.set_data(xyz[idx], features[idx], scaling[idx], rotation[idx], opacity[idx], rescale[idx])
for vidx in range(vs):
render_results = render(pc, img_height, img_width, output_c2ws[idx, vidx], output_fxfycxcy[idx, vidx], self.bg_color)
image = render_results['render']
alpha = render_results['alpha']
depth = render_results['depth']
images[idx, vidx] = image
alphas[idx, vidx] = alpha
depths[idx, vidx] = depth
results = {'image': images, 'alpha': alphas, 'depth': depths}
return results
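The `latent.split([3, shs_dim, 1, 3, 4], dim=-1)` call above unpacks each Gaussian's channel layout: for `sh_degree = 0` that is 14 channels in total. A NumPy sketch of the same split (the sample `latent` array is illustrative):

```python
import numpy as np

# For sh_degree = 0 each Gaussian is a 14-channel vector:
# xyz (3) + SH features (3) + opacity (1) + scaling (3) + rotation quaternion (4).
sh_degree = 0
shs_dim = (sh_degree + 1) ** 2 * 3
latent = np.random.randn(5, 3 + shs_dim + 1 + 3 + 4)

# Split indices mirror latent.split([3, shs_dim, 1, 3, 4], dim=-1) above.
xyz, feats, opacity, scaling, rotation = np.split(
    latent, [3, 3 + shs_dim, 4 + shs_dim, 7 + shs_dim], axis=-1)
print(xyz.shape, feats.shape, opacity.shape, scaling.shape, rotation.shape)
```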
================================================
FILE: freesplatter/models/renderer/gaussian_utils.py
================================================
"""
Gaussian Splatting.
Partially borrowed from https://github.com/graphdeco-inria/gaussian-splatting.
"""
import os
import torch
from torch import nn
import numpy as np
from diff_gaussian_rasterization import (
GaussianRasterizationSettings,
GaussianRasterizer,
)
from plyfile import PlyData, PlyElement
from scipy.spatial.transform import Rotation as R
def strip_lowerdiag(L):
uncertainty = torch.zeros((L.shape[0], 6), dtype=torch.float, device=L.device)
uncertainty[:, 0] = L[:, 0, 0]
uncertainty[:, 1] = L[:, 0, 1]
uncertainty[:, 2] = L[:, 0, 2]
uncertainty[:, 3] = L[:, 1, 1]
uncertainty[:, 4] = L[:, 1, 2]
uncertainty[:, 5] = L[:, 2, 2]
return uncertainty
def strip_symmetric(sym):
return strip_lowerdiag(sym)
def build_rotation(r):
norm = torch.sqrt(
r[:, 0] * r[:, 0] + r[:, 1] * r[:, 1] + r[:, 2] * r[:, 2] + r[:, 3] * r[:, 3]
)
q = r / norm[:, None]
R = torch.zeros((q.size(0), 3, 3), device=r.device)
r = q[:, 0]
x = q[:, 1]
y = q[:, 2]
z = q[:, 3]
R[:, 0, 0] = 1 - 2 * (y * y + z * z)
R[:, 0, 1] = 2 * (x * y - r * z)
R[:, 0, 2] = 2 * (x * z + r * y)
R[:, 1, 0] = 2 * (x * y + r * z)
R[:, 1, 1] = 1 - 2 * (x * x + z * z)
R[:, 1, 2] = 2 * (y * z - r * x)
R[:, 2, 0] = 2 * (x * z - r * y)
R[:, 2, 1] = 2 * (y * z + r * x)
R[:, 2, 2] = 1 - 2 * (x * x + y * y)
return R
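`build_rotation` converts unit quaternions in `(w, x, y, z)` order into rotation matrices. A single-quaternion NumPy sketch of the same formula, checked against two known rotations:

```python
import numpy as np

def quat_to_rotmat(q):
    # Unit quaternion (w, x, y, z) -> 3x3 rotation matrix,
    # using the same closed-form entries as build_rotation above.
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

identity = quat_to_rotmat(np.array([1.0, 0.0, 0.0, 0.0]))
# 90-degree rotation about z sends the x-axis to the y-axis.
rot_z_90 = quat_to_rotmat(np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]))
print(rot_z_90 @ np.array([1.0, 0.0, 0.0]))
```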
def build_scaling_rotation(s, r):
L = torch.zeros((s.shape[0], 3, 3), dtype=torch.float, device=s.device)
R = build_rotation(r)
L[:, 0, 0] = s[:, 0]
L[:, 1, 1] = s[:, 1]
L[:, 2, 2] = s[:, 2]
L = R @ L
return L
class Camera(nn.Module):
def __init__(self, C2W, fxfycxcy, h, w):
"""
C2W: 4x4 camera-to-world matrix; opencv convention
fxfycxcy: 4
"""
super().__init__()
self.C2W = C2W.float()
self.W2C = self.C2W.inverse()
self.znear = 0.01
self.zfar = 100.0
self.h = h
self.w = w
fx, fy, cx, cy = fxfycxcy[0], fxfycxcy[1], fxfycxcy[2], fxfycxcy[3]
# intrinsics are normalized by image size, so tan(fov / 2) = 1 / (2 * fx)
self.tanfovX = 1 / (2 * fx)
self.tanfovY = 1 / (2 * fy)
self.fovX = 2 * torch.atan(self.tanfovX)
self.fovY = 2 * torch.atan(self.tanfovY)
self.shiftX = 2 * cx - 1
self.shiftY = 2 * cy - 1
def getProjectionMatrix(znear, zfar, fovX, fovY, shiftX, shiftY):
tanHalfFovY = torch.tan((fovY / 2))
tanHalfFovX = torch.tan((fovX / 2))
top = tanHalfFovY * znear
bottom = -top
right = tanHalfFovX * znear
left = -right
P = torch.zeros(4, 4, dtype=torch.float32, device=fovX.device)
z_sign = 1.0
P[0, 0] = 2.0 * znear / (right - left)
P[1, 1] = 2.0 * znear / (top - bottom)
P[0, 2] = (right + left) / (right - left) + shiftX
P[1, 2] = (top + bottom) / (top - bottom) + shiftY
P[3, 2] = z_sign
P[2, 2] = z_sign * zfar / (zfar - znear)
P[2, 3] = -(zfar * znear) / (zfar - znear)
return P
self.world_view_transform = self.W2C.transpose(0, 1)
self.projection_matrix = getProjectionMatrix(
znear=self.znear, zfar=self.zfar, fovX=self.fovX, fovY=self.fovY, shiftX=self.shiftX, shiftY=self.shiftY
).transpose(0, 1)
self.full_proj_transform = (
self.world_view_transform.unsqueeze(0).bmm(
self.projection_matrix.unsqueeze(0)
)
).squeeze(0)
self.camera_center = self.C2W[:3, 3]
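Because the `Camera` class takes intrinsics normalized by image size (`fx = fx_pixels / W`), the field of view follows directly as `fov = 2 * atan(1 / (2 * fx))`. A quick sanity check of that relation (the helper name is illustrative):

```python
import math

def fov_from_normalized_focal(fx):
    # fov = 2 * atan(1 / (2 * fx)) for a focal length normalized by image width
    return 2 * math.atan(1.0 / (2.0 * fx))

print(math.degrees(fov_from_normalized_focal(0.5)))  # fx = 0.5 gives a 90-degree FOV
print(math.degrees(fov_from_normalized_focal(1.0)))  # fx = 1.0 gives roughly 53.13 degrees
```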
class GaussianModel:
def setup_functions(self, scaling_activation_type='sigmoid', scale_min_act=0.001, scale_max_act=0.3, scale_multi_act=0.1):
def build_covariance_from_scaling_rotation(scaling, scaling_modifier, rotation):
L = build_scaling_rotation(scaling_modifier * scaling, rotation)
actual_covariance = L @ L.transpose(1, 2)
symm = strip_symmetric(actual_covariance)
return symm
if scaling_activation_type == 'exp':
self.scaling_activation = torch.exp
elif scaling_activation_type == 'softplus':
self.scaling_activation = torch.nn.functional.softplus
self.scale_multi_act = scale_multi_act
elif scaling_activation_type == 'sigmoid':
self.scale_min_act = scale_min_act
self.scale_max_act = scale_max_act
self.scaling_activation = torch.sigmoid
else:
raise NotImplementedError
self.scaling_activation_type = scaling_activation_type
self.rotation_activation = torch.nn.functional.normalize
self.opacity_activation = torch.sigmoid
self.feature_activation = torch.sigmoid
self.covariance_activation = build_covariance_from_scaling_rotation
def __init__(self, sh_degree: int, scaling_activation_type='exp', scale_min_act=0.001, scale_max_act=0.3, scale_multi_act=0.1):
self.sh_degree = sh_degree
self._xyz = torch.empty(0)
self._features_dc = torch.empty(0)
if self.sh_degree > 0:
self._features_rest = torch.empty(0)
else:
self._features_rest = None
self._scaling = torch.empty(0)
self._rotation = torch.empty(0)
self._opacity = torch.empty(0)
self.setup_functions(scaling_activation_type=scaling_activation_type, scale_min_act=scale_min_act, scale_max_act=scale_max_act, scale_multi_act=scale_multi_act)
def set_data(self, xyz, features, scaling, rotation, opacity, rescale=None):
self._xyz = xyz
self._features_dc = features[:, 0, :].contiguous() if self.sh_degree == 0 else features[:, 0:1, :].contiguous()
if self.sh_degree > 0:
self._features_rest = features[:, 1:, :].contiguous()
else:
self._features_rest = None
self._scaling = scaling
self._rotation = rotation
self._opacity = opacity
if rescale is None:
rescale = torch.ones(1).to(xyz)
self._rescale = rescale
return self
def to(self, device):
self._xyz = self._xyz.to(device)
self._features_dc = self._features_dc.to(device)
if self.sh_degree > 0:
self._features_rest = self._features_rest.to(device)
self._scaling = self._scaling.to(device)
self._rotation = self._rotation.to(device)
self._opacity = self._opacity.to(device)
return self
@property
def get_scaling(self):
if self.scaling_activation_type == 'exp':
scales = self.scaling_activation(self._scaling)
elif self.scaling_activation_type == 'softplus':
scales = self.scaling_activation(self._scaling) * self.scale_multi_act
elif self.scaling_activation_type == 'sigmoid':
scales = self.scale_min_act + (self.scale_max_act - self.scale_min_act) * self.scaling_activation(self._scaling)
scales = scales * self._rescale
return scales
@property
def get_rotation(self):
return self.rotation_activation(self._rotation)
@property
def get_xyz(self):
xyz = self._xyz * self._rescale
return xyz
@property
def get_features(self):
if self.sh_degree > 0:
features_dc = self._features_dc
features_rest = self._features_rest
return torch.cat((features_dc, features_rest), dim=1)
else:
return self.feature_activation(self._features_dc)
@property
def get_opacity(self):
return self.opacity_activation(self._opacity)
def get_covariance(self, scaling_modifier=1):
return self.covariance_activation(
self.get_scaling, scaling_modifier, self._rotation
)
def construct_list_of_attributes(self, num_rest=0):
l = ['x', 'y', 'z']
# the 3 DC feature channels
for i in range(3):
l.append('f_dc_{}'.format(i))
for i in range(num_rest):
l.append('f_rest_{}'.format(i))
l.append('opacity')
for i in range(self._scaling.shape[1]):
l.append('scale_{}'.format(i))
for i in range(self._rotation.shape[1]):
l.append('rot_{}'.format(i))
return l
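`construct_list_of_attributes` determines the per-vertex property layout of the saved PLY files. A standalone sketch of the resulting attribute names for `sh_degree = 0` with 3D scaling and quaternion rotation (the helper name is illustrative):

```python
# PLY attribute layout written by save_ply for sh_degree = 0:
# position, SH DC features, opacity, log-scales, rotation quaternion.
def ply_attribute_names(num_rest=0, num_scale=3, num_rot=4):
    names = ['x', 'y', 'z']
    names += [f'f_dc_{i}' for i in range(3)]
    names += [f'f_rest_{i}' for i in range(num_rest)]
    names.append('opacity')
    names += [f'scale_{i}' for i in range(num_scale)]
    names += [f'rot_{i}' for i in range(num_rot)]
    return names

names = ply_attribute_names()
print(len(names))  # 14 attributes per Gaussian
print(names)
```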
def save_ply_vis(self, path):
os.makedirs(os.path.dirname(path), exist_ok=True)
xyzs = self._xyz.detach().cpu().numpy()
f_dc = self._features_dc.detach().flatten(start_dim=1).contiguous().cpu().numpy()
opacities = self._opacity.detach().cpu().numpy()
scales = torch.log(self.get_scaling)
scales = scales.detach().cpu().numpy()
rot_mat_vis = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
xyzs = xyzs @ rot_mat_vis.T
rotations = self._rotation.detach().cpu().numpy()
rotations = R.from_quat(rotations[:, [1,2,3,0]]).as_matrix()
rotations = rot_mat_vis @ rotations
rotations = R.from_matrix(rotations).as_quat()[:, [3,0,1,2]]
dtype_full = [(attribute, 'f4') for attribute in self.construct_list_of_attributes(0)]
elements = np.empty(xyzs.shape[0], dtype=dtype_full)
attributes = np.concatenate((xyzs, f_dc, opacities, scales, rotations), axis=1)
elements[:] = list(map(tuple, attributes))
el = PlyElement.describe(elements, 'vertex')
PlyData([el]).write(path)
def save_ply(self, path):
os.makedirs(os.path.dirname(path), exist_ok=True)
xyzs = self._xyz.detach().cpu().numpy()
f_dc = self._features_dc.detach().flatten(start_dim=1).contiguous().cpu().numpy()
if self.sh_degree > 0:
f_rest = self._features_rest.detach().flatten(start_dim=1).contiguous().cpu().numpy()
else:
f_rest = np.zeros((f_dc.shape[0], 0), dtype=f_dc.dtype)
opacities = self._opacity.detach().cpu().numpy()
scales = torch.log(self.get_scaling)
scales = scales.detach().cpu().numpy()
rotations = self._rotation.detach().cpu().numpy()
dtype_full = [(attribute, 'f4') for attribute in self.construct_list_of_attributes(f_rest.shape[-1])]
elements = np.empty(xyzs.shape[0], dtype=dtype_full)
attributes = np.concatenate((xyzs, f_dc, f_rest, opacities, scales, rotations), axis=1)
elements[:] = list(map(tuple, attributes))
el = PlyElement.describe(elements, "vertex")
PlyData([el]).write(path)
# def load_ply(self, path):
# plydata = PlyData.read(path)
# xyz = np.stack((np.asarray(plydata.elements[0]["x"]),
# np.asarray(plydata.elements[0]["y"]),
# np.asarray(plydata.elements[0]["z"])), axis=1)
# opacities = np.asarray(plydata.elements[0]["opacity"])[..., np.newaxis]
# features_dc = np.zeros((xyz.shape[0], 3, 1))
# features_dc[:, 0, 0] = np.asarray(plydata.elements[0]["f_dc_0"])
# features_dc[:, 1, 0] = np.asarray(plydata.elements[0]["f_dc_1"])
# features_dc[:, 2, 0] = np.asarray(plydata.elements[0]["f_dc_2"])
# scale_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("scale_")]
# scale_names = sorted(scale_names, key = lambda x: int(x.split('_')[-1]))
# scales = np.zeros((xyz.shape[0], len(scale_names)))
# for idx, attr_name in enumerate(scale_names):
# scales[:, idx] = np.asarray(plydata.elements[0][attr_name])
# rot_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("rot")]
# rot_names = sorted(rot_names, key=lambda x: int(x.split("_")[-1]))
# rots = np.zeros((xyz.shape[0], len(rot_names)))
# for idx, attr_name in enumerate(rot_names):
# rots[:, idx] = np.asarray(plydata.elements[0][attr_name])
# self._xyz = torch.from_numpy(xyz.astype(np.float32))
# self._features_dc = torch.from_numpy(features_dc.astype(np.float32)).transpose(1, 2).contiguous()
# self._opacity = torch.from_numpy(opacities.astype(np.float32)).contiguous()
# self._scaling = torch.from_numpy(scales.astype(np.float32)).contiguous()
# self._rotation = torch.from_numpy(rots.astype(np.float32)).contiguous()
def render(
pc: GaussianModel,
height: int,
width: int,
C2W: torch.Tensor,
fxfycxcy: torch.Tensor,
bg_color=(1.0, 1.0, 1.0),
scaling_modifier=1.0,
):
"""
Render the scene.
"""
screenspace_points = (
torch.zeros_like(
pc.get_xyz, dtype=pc.get_xyz.dtype, requires_grad=True, device="cuda"
)
+ 0
)
try:
screenspace_points.retain_grad()
except Exception:
pass
viewpoint_camera = Camera(C2W=C2W, fxfycxcy=fxfycxcy, h=height, w=width)
bg_color = torch.tensor(list(bg_color), dtype=torch.float32, device=C2W.device)
raster_settings = GaussianRasterizationSettings(
image_height=int(viewpoint_camera.h),
image_width=int(viewpoint_camera.w),
tanfovx=viewpoint_camera.tanfovX,
tanfovy=viewpoint_camera.tanfovY,
bg=bg_color,
scale_modifier=scaling_modifier,
viewmatrix=viewpoint_camera.world_view_transform,
projmatrix=viewpoint_camera.full_proj_transform,
sh_degree=pc.sh_degree,
campos=viewpoint_camera.camera_center,
prefiltered=False,
debug=False,
)
rasterizer = GaussianRasterizer(raster_settings=raster_settings)
means3D = pc.get_xyz
means2D = screenspace_points
opacity = pc.get_opacity
scales = pc.get_scaling
rotations = pc.get_rotation
shs = pc.get_features
rendered_image, _, rendered_depth, rendered_alpha = rasterizer(
means3D=means3D,
means2D=means2D,
shs=None if pc.sh_degree == 0 else shs,
colors_precomp=shs if pc.sh_degree == 0 else None,
opacities=opacity,
scales=scales,
rotations=rotations,
cov3D_precomp=None,
)
return {
"render": rendered_image,
"alpha": rendered_alpha,
"depth": rendered_depth,
}
================================================
FILE: freesplatter/models/renderer_2dgs/__init__.py
================================================
================================================
FILE: freesplatter/models/renderer_2dgs/gaussian_renderer.py
================================================
import torch
from .gaussian_utils import render, GaussianModel
class GaussianRenderer:
def __init__(self, renderer_config=None):
if 'scaling_activation_type' not in renderer_config:
renderer_config['scaling_activation_type'] = 'exp'
if 'scale_min_act' not in renderer_config:
renderer_config['scale_min_act'] = 1
renderer_config['scale_max_act'] = 1
renderer_config['scale_multi_act'] = 0.1
self.gaussian_model = GaussianModel(sh_degree=renderer_config.sh_degree,
scaling_activation_type=renderer_config.scaling_activation_type,
scale_min_act=renderer_config.scale_min_act,
scale_max_act=renderer_config.scale_max_act,
scale_multi_act=renderer_config.scale_multi_act)
self.img_height = renderer_config.img_height
self.img_width = renderer_config.img_width
self.bg_color = renderer_config.bg_color if 'bg_color' in renderer_config else (1.0, 1.0, 1.0)
def render(self, latent, output_fxfycxcy, output_c2ws, render_size=None):
if render_size is None:
img_height, img_width = self.img_height, self.img_width
else:
img_height, img_width = render_size
shs_dim = (self.gaussian_model.sh_degree + 1) ** 2 * 3
xyz, features, opacity, scaling, rotation = latent.split([3, shs_dim, 1, 2, 4], dim=-1)
features = features.reshape(features.shape[0], -1, shs_dim//3, 3)
bs, vs = output_fxfycxcy.shape[:2]
images = torch.zeros(bs, vs, 3, img_height, img_width, dtype=torch.float32, device=output_c2ws.device)
alphas = torch.zeros(bs, vs, 1, img_height, img_width, dtype=torch.float32, device=output_c2ws.device)
depths = torch.zeros(bs, vs, 1, img_height, img_width, dtype=torch.float32, device=output_c2ws.device)
surf_normals = torch.zeros(bs, vs, 3, img_height, img_width, dtype=torch.float32, device=output_c2ws.device)
rend_normals = torch.zeros(bs, vs, 3, img_height, img_width, dtype=torch.float32, device=output_c2ws.device)
dists = torch.zeros(bs, vs, 1, img_height, img_width, dtype=torch.float32, device=output_c2ws.device)
for idx in range(bs):
pc = self.gaussian_model.set_data(xyz[idx], features[idx], scaling[idx], rotation[idx], opacity[idx])
for vidx in range(vs):
render_results = render(pc, img_height, img_width, output_c2ws[idx, vidx], output_fxfycxcy[idx, vidx], self.bg_color)
image = render_results['render']
alpha = render_results['alpha']
depth = render_results['depth']
surf_normal = render_results['surf_normal']
rend_normal = render_results['rend_normal']
dist = render_results['dist']
images[idx, vidx] = image
alphas[idx, vidx] = alpha
depths[idx, vidx] = depth
surf_normals[idx, vidx] = surf_normal
rend_normals[idx, vidx] = rend_normal
dists[idx, vidx] = dist
results = {'image': images, 'alpha': alphas, 'depth': depths, 'surf_normals': surf_normals, 'rend_normals': rend_normals, 'dists': dists}
return results
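For reference, the `latent.split([3, shs_dim, 1, 2, 4], dim=-1)` in `render()` above implies the following per-Gaussian channel budget. This is an illustrative sketch only; the numbers assume `sh_degree=0` (a single RGB color per Gaussian, no higher-order spherical harmonics):

```python
# Per-Gaussian channel layout decoded by render() above, assuming sh_degree=0.
sh_degree = 0
shs_dim = (sh_degree + 1) ** 2 * 3  # 3: one RGB color, no higher-order SH

channels = {
    'xyz': 3,             # position
    'features': shs_dim,  # color / SH coefficients
    'opacity': 1,
    'scaling': 2,         # 2D Gaussians: two tangent-plane scales (vs. 3 in 3DGS)
    'rotation': 4,        # quaternion
}
latent_dim = sum(channels.values())
print(latent_dim)  # 13
```

With higher `sh_degree`, only `shs_dim` grows; the other channels stay fixed.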
================================================
FILE: freesplatter/models/renderer_2dgs/gaussian_utils.py
================================================
"""
Gaussian Splatting.
Partially borrowed from https://github.com/graphdeco-inria/gaussian-splatting.
"""
import os
import torch
from torch import nn
import numpy as np
from diff_surfel_rasterization import (
GaussianRasterizationSettings,
GaussianRasterizer,
)
from plyfile import PlyData, PlyElement
from scipy.spatial.transform import Rotation as R
def strip_lowerdiag(L):
uncertainty = torch.zeros((L.shape[0], 6), dtype=torch.float, device=L.device)
uncertainty[:, 0] = L[:, 0, 0]
uncertainty[:, 1] = L[:, 0, 1]
uncertainty[:, 2] = L[:, 0, 2]
uncertainty[:, 3] = L[:, 1, 1]
uncertainty[:, 4] = L[:, 1, 2]
uncertainty[:, 5] = L[:, 2, 2]
return uncertainty
def strip_symmetric(sym):
return strip_lowerdiag(sym)
def build_rotation(r):
norm = torch.sqrt(
r[:, 0] * r[:, 0] + r[:, 1] * r[:, 1] + r[:, 2] * r[:, 2] + r[:, 3] * r[:, 3]
)
q = r / norm[:, None]
R = torch.zeros((q.size(0), 3, 3), device=r.device)
r = q[:, 0]
x = q[:, 1]
y = q[:, 2]
z = q[:, 3]
R[:, 0, 0] = 1 - 2 * (y * y + z * z)
R[:, 0, 1] = 2 * (x * y - r * z)
R[:, 0, 2] = 2 * (x * z + r * y)
R[:, 1, 0] = 2 * (x * y + r * z)
R[:, 1, 1] = 1 - 2 * (x * x + z * z)
R[:, 1, 2] = 2 * (y * z - r * x)
R[:, 2, 0] = 2 * (x * z - r * y)
R[:, 2, 1] = 2 * (y * z + r * x)
R[:, 2, 2] = 1 - 2 * (x * x + y * y)
return R
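`build_rotation` assumes scalar-first `(w, x, y, z)` quaternion ordering. As a sanity check, here is a NumPy re-derivation of the same formula (an illustration, not repo code), verified against two rotations with known matrices:

```python
import numpy as np

def quat_to_rotmat(q):
    # Same formula as build_rotation above, (w, x, y, z) ordering, NumPy version.
    q = q / np.linalg.norm(q)
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

# Identity quaternion -> identity matrix.
assert np.allclose(quat_to_rotmat(np.array([1.0, 0.0, 0.0, 0.0])), np.eye(3))

# 90 degrees about +z: w = cos(45 deg), z = sin(45 deg); maps the x-axis to the y-axis.
R90 = quat_to_rotmat(np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]))
assert np.allclose(R90 @ np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```

Note that SciPy's `Rotation.from_quat` uses the scalar-last `(x, y, z, w)` convention, which is why `save_ply_vis` below reorders components with `[:, [1,2,3,0]]` before calling it.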
def build_scaling_rotation(s, r):
L = torch.zeros((s.shape[0], 3, 3), dtype=torch.float, device=s.device)
R = build_rotation(r)
L[:, 0, 0] = s[:, 0]
L[:, 1, 1] = s[:, 1]
L[:, 2, 2] = s[:, 2]
L = R @ L
return L
def build_covariance_from_scaling_rotation(scaling, scaling_modifier, rotation):
L = build_scaling_rotation(scaling_modifier * scaling, rotation)
actual_covariance = L @ L.transpose(1, 2)
symm = strip_symmetric(actual_covariance)
return symm
def depths_to_points(view, depthmap):
c2w = (view.world_view_transform.T).inverse()
W, H = view.w, view.h
ndc2pix = torch.tensor([
[W / 2, 0, 0, (W) / 2],
[0, H / 2, 0, (H) / 2],
[0, 0, 0, 1]]).float().cuda().T
projection_matrix = c2w.T @ view.full_proj_transform
intrins = (projection_matrix @ ndc2pix)[:3,:3].T
grid_x, grid_y = torch.meshgrid(torch.arange(W, device='cuda').float(), torch.arange(H, device='cuda').float(), indexing='xy')
points = torch.stack([grid_x, grid_y, torch.ones_like(grid_x)], dim=-1).reshape(-1, 3)
rays_d = points @ intrins.inverse().T @ c2w[:3,:3].T
rays_o = c2w[:3,3]
points = depthmap.reshape(-1, 1) * rays_d + rays_o
return points
def depth_to_normal(view, depth):
"""
view: view camera
depth: depthmap
"""
points = depths_to_points(view, depth).reshape(*depth.shape[1:], 3)
output = torch.zeros_like(points)
dx = points[2:, 1:-1] - points[:-2, 1:-1]
dy = points[1:-1, 2:] - points[1:-1, :-2]
normal_map = torch.nn.functional.normalize(torch.cross(dx, dy, dim=-1), dim=-1)
output[1:-1, 1:-1, :] = normal_map
return output
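The normal estimate above is the cross product of central differences of the unprojected point map. A minimal NumPy illustration of the same idea (assumed setup, not repo code) on a synthetic planar point grid, where every interior normal should point along the z-axis:

```python
import numpy as np

# Synthetic point map on the plane z = 1 (H x W grid of 3D points).
H, W = 8, 8
ys, xs = np.meshgrid(np.arange(H, dtype=float), np.arange(W, dtype=float), indexing='ij')
points = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3)

# Central differences over interior pixels, as in depth_to_normal above.
dx = points[2:, 1:-1] - points[:-2, 1:-1]  # vertical neighbors
dy = points[1:-1, 2:] - points[1:-1, :-2]  # horizontal neighbors
n = np.cross(dx, dy)
n = n / np.linalg.norm(n, axis=-1, keepdims=True)

# Every interior normal is along z (up to sign) for a z = const plane.
assert np.allclose(np.abs(n[..., 2]), 1.0)
```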
class Camera(nn.Module):
def __init__(self, C2W, fxfycxcy, h, w):
"""
C2W: 4x4 camera-to-world matrix; opencv convention
fxfycxcy: (4,) intrinsics (fx, fy, cx, cy), normalized by image size
"""
super().__init__()
self.C2W = C2W.float()
self.W2C = self.C2W.inverse()
self.znear = 0.01
self.zfar = 100.0
self.h = h
self.w = w
fx, fy, cx, cy = fxfycxcy[0], fxfycxcy[1], fxfycxcy[2], fxfycxcy[3]
# intrinsics are normalized by image size, so tan(fov / 2) = 1 / (2 * f)
self.tanfovX = 1 / (2 * fx)
self.tanfovY = 1 / (2 * fy)
self.fovX = 2 * torch.atan(self.tanfovX)
self.fovY = 2 * torch.atan(self.tanfovY)
self.shiftX = 2 * cx - 1
self.shiftY = 2 * cy - 1
def getProjectionMatrix(znear, zfar, fovX, fovY, shiftX, shiftY):
tanHalfFovY = torch.tan((fovY / 2))
tanHalfFovX = torch.tan((fovX / 2))
top = tanHalfFovY * znear
bottom = -top
right = tanHalfFovX * znear
left = -right
P = torch.zeros(4, 4, dtype=torch.float32, device=fovX.device)
z_sign = 1.0
P[0, 0] = 2.0 * znear / (right - left)
P[1, 1] = 2.0 * znear / (top - bottom)
P[0, 2] = (right + left) / (right - left) + shiftX
P[1, 2] = (top + bottom) / (top - bottom) + shiftY
P[3, 2] = z_sign
P[2, 2] = z_sign * zfar / (zfar - znear)
P[2, 3] = -(zfar * znear) / (zfar - znear)
return P
self.world_view_transform = self.W2C.transpose(0, 1)
self.projection_matrix = getProjectionMatrix(
znear=self.znear, zfar=self.zfar, fovX=self.fovX, fovY=self.fovY, shiftX=self.shiftX, shiftY=self.shiftY
).transpose(0, 1)
self.full_proj_transform = (
self.world_view_transform.unsqueeze(0).bmm(
self.projection_matrix.unsqueeze(0)
)
).squeeze(0)
self.camera_center = self.C2W[:3, 3]
class GaussianModel:
def setup_functions(self, scaling_activation_type='sigmoid', scale_min_act=0.001, scale_max_act=0.3, scale_multi_act=0.1):
if scaling_activation_type == 'exp':
self.scaling_activation = torch.exp
elif scaling_activation_type == 'softplus':
self.scaling_activation = torch.nn.functional.softplus
self.scale_multi_act = scale_multi_act
elif scaling_activation_type == 'sigmoid':
self.scale_min_act = scale_min_act
self.scale_max_act = scale_max_act
self.scaling_activation = torch.sigmoid
else:
raise NotImplementedError
self.scaling_activation_type = scaling_activation_type
self.rotation_activation = torch.nn.functional.normalize
self.opacity_activation = torch.sigmoid
self.feature_activation = torch.sigmoid
self.covariance_activation = build_covariance_from_scaling_rotation
def __init__(self, sh_degree: int, scaling_activation_type='exp', scale_min_act=0.001, scale_max_act=0.3, scale_multi_act=0.1):
self.sh_degree = sh_degree
self._xyz = torch.empty(0)
self._features_dc = torch.empty(0)
if self.sh_degree > 0:
self._features_rest = torch.empty(0)
else:
self._features_rest = None
self._scaling = torch.empty(0)
self._rotation = torch.empty(0)
self._opacity = torch.empty(0)
self.setup_functions(scaling_activation_type=scaling_activation_type, scale_min_act=scale_min_act, scale_max_act=scale_max_act, scale_multi_act=scale_multi_act)
def set_data(self, xyz, features, scaling, rotation, opacity):
self._xyz = xyz
self._features_dc = features[:, 0, :].contiguous() if self.sh_degree == 0 else features[:, 0:1, :].contiguous()
if self.sh_degree > 0:
self._features_rest = features[:, 1:, :].contiguous()
else:
self._features_rest = None
self._scaling = scaling
self._rotation = rotation
self._opacity = opacity
return self
def to(self, device):
self._xyz = self._xyz.to(device)
self._features_dc = self._features_dc.to(device)
if self.sh_degree > 0:
self._features_rest = self._features_rest.to(device)
self._scaling = self._scaling.to(device)
self._rotation = self._rotation.to(device)
self._opacity = self._opacity.to(device)
return self
@property
def get_scaling(self):
if self.scaling_activation_type == 'exp':
scales = self.scaling_activation(self._scaling)
elif self.scaling_activation_type == 'softplus':
scales = self.scaling_activation(self._scaling) * self.scale_multi_act
elif self.scaling_activation_type == 'sigmoid':
scales = self.scale_min_act + (self.scale_max_act - self.scale_min_act) * self.scaling_activation(self._scaling)
return scales
@property
def get_rotation(self):
return self.rotation_activation(self._rotation)
@property
def get_xyz(self):
return self._xyz
@property
def get_features(self):
if self.sh_degree > 0:
features_dc = self._features_dc
features_rest = self._features_rest
return torch.cat((features_dc, features_rest), dim=1)
else:
return self.feature_activation(self._features_dc)
@property
def get_opacity(self):
return self.opacity_activation(self._opacity)
def get_covariance(self, scaling_modifier=1):
return self.covariance_activation(
self.get_scaling, scaling_modifier, self._rotation
)
def construct_list_of_attributes(self, num_rest=0):
l = ['x', 'y', 'z']
# All channels except the 3 DC
for i in range(3):
l.append('f_dc_{}'.format(i))
for i in range(num_rest):
l.append('f_rest_{}'.format(i))
l.append('opacity')
for i in range(self._scaling.shape[1]):
l.append('scale_{}'.format(i))
for i in range(self._rotation.shape[1]):
l.append('rot_{}'.format(i))
return l
def save_ply_vis(self, path):
os.makedirs(os.path.dirname(path), exist_ok=True)
xyzs = self._xyz.detach().cpu().numpy()
f_dc = self._features_dc.detach().flatten(start_dim=1).contiguous().cpu().numpy()
opacities = self._opacity.detach().cpu().numpy()
scales = torch.log(self.get_scaling)
scales = scales.detach().cpu().numpy()
rot_mat_vis = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
xyzs = xyzs @ rot_mat_vis.T
rotations = self._rotation.detach().cpu().numpy()
rotations = R.from_quat(rotations[:, [1,2,3,0]]).as_matrix()
rotations = rot_mat_vis @ rotations
rotations = R.from_matrix(rotations).as_quat()[:, [3,0,1,2]]
dtype_full = [(attribute, 'f4') for attribute in self.construct_list_of_attributes(0)]
elements = np.empty(xyzs.shape[0], dtype=dtype_full)
attributes = np.concatenate((xyzs, f_dc, opacities, scales, rotations), axis=1)
elements[:] = list(map(tuple, attributes))
el = PlyElement.describe(elements, 'vertex')
PlyData([el]).write(path)
def save_ply(self, path):
os.makedirs(os.path.dirname(path), exist_ok=True)
xyzs = self._xyz.detach().cpu().numpy()
f_dc = self._features_dc.detach().flatten(start_dim=1).contiguous().cpu().numpy()
if self.sh_degree > 0:
f_rest = self._features_rest.detach().flatten(start_dim=1).contiguous().cpu().numpy()
else:
f_rest = np.zeros((f_dc.shape[0], 0), dtype=f_dc.dtype)
opacities = self._opacity.detach().cpu().numpy()
scales = torch.log(self.get_scaling)
scales = scales.detach().cpu().numpy()
rotations = self._rotation.detach().cpu().numpy()
dtype_full = [(attribute, 'f4') for attribute in self.construct_list_of_attributes(f_rest.shape[-1])]
elements = np.empty(xyzs.shape[0], dtype=dtype_full)
attributes = np.concatenate((xyzs, f_dc, f_rest, opacities, scales, rotations), axis=1)
elements[:] = list(map(tuple, attributes))
el = PlyElement.describe(elements, "vertex")
PlyData([el]).write(path)
# def load_ply(self, path):
# plydata = PlyData.read(path)
# xyz = np.stack((np.asarray(plydata.elements[0]["x"]),
# np.asarray(plydata.elements[0]["y"]),
# np.asarray(plydata.elements[0]["z"])), axis=1)
# opacities = np.asarray(plydata.elements[0]["opacity"])[..., np.newaxis]
# features_dc = np.zeros((xyz.shape[0], 3, 1))
# features_dc[:, 0, 0] = np.asarray(plydata.elements[0]["f_dc_0"])
# features_dc[:, 1, 0] = np.asarray(plydata.elements[0]["f_dc_1"])
# features_dc[:, 2, 0] = np.asarray(plydata.elements[0]["f_dc_2"])
# scale_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("scale_")]
# scale_names = sorted(scale_names, key = lambda x: int(x.split('_')[-1]))
# scales = np.zeros((xyz.shape[0], len(scale_names)))
# for idx, attr_name in enumerate(scale_names):
# scales[:, idx] = np.asarray(plydata.elements[0][attr_name])
# rot_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("rot")]
# rot_names = sorted(rot_names, key=lambda x: int(x.split("_")[-1]))
# rots = np.zeros((xyz.shape[0], len(rot_names)))
# for idx, attr_name in enumerate(rot_names):
# rots[:, idx] = np.asarray(plydata.elements[0][attr_name])
# self._xyz = torch.from_numpy(xyz.astype(np.float32))
# self._features_dc = torch.from_numpy(features_dc.astype(np.float32)).transpose(1, 2).contiguous()
# self._opacity = torch.from_numpy(opacities.astype(np.float32)).contiguous()
# self._scaling = torch.from_numpy(scales.astype(np.float32)).contiguous()
# self._rotation = torch.from_numpy(rots.astype(np.float32)).contiguous()
def render(
pc: GaussianModel,
height: int,
width: int,
C2W: torch.Tensor,
fxfycxcy: torch.Tensor,
bg_color=(1.0, 1.0, 1.0),
scaling_modifier=1.0,
):
"""
Render the scene.
"""
screenspace_points = (
torch.zeros_like(
pc.get_xyz, dtype=pc.get_xyz.dtype, requires_grad=True, device="cuda"
)
+ 0
)
try:
screenspace_points.retain_grad()
except Exception:
pass
viewpoint_camera = Camera(C2W=C2W, fxfycxcy=fxfycxcy, h=height, w=width)
bg_color = torch.tensor(list(bg_color), dtype=torch.float32, device=C2W.device)
raster_settings = GaussianRasterizationSettings(
image_height=int(viewpoint_camera.h),
image_width=int(viewpoint_camera.w),
tanfovx=viewpoint_camera.tanfovX,
tanfovy=viewpoint_camera.tanfovY,
bg=bg_color,
scale_modifier=scaling_modifier,
viewmatrix=viewpoint_camera.world_view_transform,
projmatrix=viewpoint_camera.full_proj_transform,
sh_degree=pc.sh_degree,
campos=viewpoint_camera.camera_center,
prefiltered=False,
debug=False,
)
rasterizer = GaussianRasterizer(raster_settings=raster_settings)
means3D = pc.get_xyz
means2D = screenspace_points
opacity = pc.get_opacity
scales = pc.get_scaling
rotations = pc.get_rotation
shs = pc.get_features
rendered_image, _, allmap = rasterizer(
means3D=means3D,
means2D=means2D,
shs=None if pc.sh_degree == 0 else shs,
colors_precomp=shs if pc.sh_degree == 0 else None,
opacities=opacity,
scales=scales,
rotations=rotations,
cov3D_precomp=None,
)
# additional regularizations
render_alpha = allmap[1:2]
# get normal map
# transform normal from view space to world space
render_normal = allmap[2:5]
render_normal = (render_normal.permute(1, 2, 0) @ (viewpoint_camera.world_view_transform[:3, :3].T)).permute(2, 0, 1)
# get median depth map
render_depth_median = allmap[5:6]
render_depth_median = torch.nan_to_num(render_depth_median, 0, 0)
# get expected depth map
render_depth_expected = allmap[0:1]
render_depth_expected = (render_depth_expected / render_alpha)
render_depth_expected = torch.nan_to_num(render_depth_expected, 0, 0)
# get depth distortion map
render_dist = allmap[6:7]
# pseudo surface attributes
# surf depth is either median or expected depth, selected by setting depth_ratio to 1 or 0
# for bounded scenes, use median depth, i.e., depth_ratio = 1;
# for unbounded scenes, use expected depth, i.e., depth_ratio = 0, to reduce disk aliasing.
depth_ratio = 0.0
surf_depth = render_depth_expected * (1 - depth_ratio) + depth_ratio * render_depth_median
# assume the depth points form the 'surface' and generate a pseudo surface normal for regularization.
surf_normal = depth_to_normal(viewpoint_camera, surf_depth)
surf_normal = surf_normal.permute(2, 0, 1)
# remember to multiply with accum_alpha since render_normal is unnormalized.
surf_normal = surf_normal * (render_alpha).detach()
return {
"render": rendered_image,
"depth": surf_depth,
"alpha": render_alpha,
'surf_normal': surf_normal,
'rend_normal': render_normal,
'dist': render_dist,
}
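The `Camera` class above works with intrinsics normalized by image size, using the relation `tan(fov / 2) = 1 / (2 * f)`. A standalone check of that relation and its inverse (illustrative helper names, not part of the repo):

```python
import math

def fov_to_normalized_focal(fov_deg):
    # focal length normalized by image size: f = 0.5 / tan(fov / 2)
    return 0.5 / math.tan(math.radians(fov_deg) / 2)

def normalized_focal_to_fov(f):
    # inverse relation used by Camera: tan(fov / 2) = 1 / (2 * f)
    return math.degrees(2 * math.atan(1 / (2 * f)))

f = fov_to_normalized_focal(60.0)
print(round(f, 4))                        # 0.866
print(round(normalized_focal_to_fov(f)))  # 60
```

Because the intrinsics are normalized, the same `fxfycxcy` vector works at any render resolution, which is what lets `render()` accept an arbitrary `render_size`.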
================================================
FILE: freesplatter/models/transformer.py
================================================
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from inspect import isfunction
from einops import rearrange, repeat
import xformers.ops as xops
def exists(val):
return val is not None
def default(val, d):
if exists(val):
return val
return d() if isfunction(d) else d
class CrossAttention(nn.Module):
def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.):
super().__init__()
inner_dim = dim_head * heads
context_dim = default(context_dim, query_dim)
self.heads = heads
self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
self.to_out = nn.Sequential(
nn.Linear(inner_dim, query_dim, bias=False),
nn.Dropout(dropout)
)
def forward(self, x, context=None, mask=None):
h = self.heads
q = self.to_q(x)
context = default(context, x)
k = self.to_k(context)
v = self.to_v(context)
q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
out = xops.memory_efficient_attention(q, k, v)
out = rearrange(out, '(b h) n d -> b n (h d)', h=h)
return self.to_out(out)
class BasicTransformerBlock(nn.Module):
def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True):
super().__init__()
self.self_attn = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout)
self.ff = nn.Sequential(
nn.Linear(dim, dim*4, bias=False),
nn.GELU(),
nn.Linear(dim*4, dim, bias=False),
)
self.norm1 = nn.LayerNorm(dim, bias=False)
self.norm2 = nn.LayerNorm(dim, bias=False)
def forward(self, x, context=None):
before_sa = self.norm1(x)
x = x + self.self_attn(before_sa)
x = self.ff(self.norm2(x)) + x
return x
class Transformer(nn.Module):
def __init__(
self,
image_size=512,
patch_size=8,
input_dim=3,
inner_dim=1024,
output_dim=14,
n_heads=16,
depth=24,
dropout=0.,
):
super().__init__()
self.patch_size = patch_size
self.input_dim = input_dim
self.inner_dim = inner_dim
self.output_dim = output_dim
self.patchify = nn.Conv2d(input_dim, inner_dim, kernel_size=patch_size, stride=patch_size, padding=0, bias=False)
num_patches = (image_size // patch_size) ** 2
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, inner_dim))
self.ref_embed = nn.Parameter(torch.zeros(1, 1, inner_dim))
self.src_embed = nn.Parameter(torch.zeros(1, 1, inner_dim))
self.blocks = nn.ModuleList(
[BasicTransformerBlock(inner_dim, n_heads, inner_dim//n_heads, dropout=dropout)
for _ in range(depth)]
)
self.norm = nn.LayerNorm(inner_dim, bias=False)
self.unpatchify = nn.Linear(inner_dim, patch_size ** 2 * output_dim, bias=True)
nn.init.trunc_normal_(self.pos_embed, std=.02)
nn.init.trunc_normal_(self.ref_embed, std=.02)
nn.init.trunc_normal_(self.src_embed, std=.02)
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
nn.init.trunc_normal_(m.weight, std=.02)
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.weight, 1.0)
if m.bias is not None:
nn.init.constant_(m.bias, 0)
def interpolate_pos_encoding(self, x, w, h):
npatch = x.shape[-2]
N = self.pos_embed.shape[-2]
if npatch == N and w == h:
return self.pos_embed
patch_pos_embed = self.pos_embed
dim = x.shape[-1]
w0 = w // self.patch_size
h0 = h // self.patch_size
# we add a small number to avoid floating point error in the interpolation
# see discussion at https://github.com/facebookresearch/dino/issues/8
w0, h0 = w0 + 0.1, h0 + 0.1
patch_pos_embed = F.interpolate(
patch_pos_embed.reshape(1, int(math.sqrt(N)), int(math.sqrt(N)), dim).permute(0, 3, 1, 2).contiguous(),
scale_factor=(w0 / math.sqrt(N), h0 / math.sqrt(N)),
mode='bicubic',
)
assert int(w0) == patch_pos_embed.shape[-2] and int(h0) == patch_pos_embed.shape[-1]
patch_pos_embed = patch_pos_embed.permute(0, 2, 3, 1).view(1, -1, dim).contiguous()
return patch_pos_embed
def forward(self, images):
"""
images: (B, N, C, H, W)
"""
B, N, _, H, W = images.shape
# patchify
images = rearrange(images, 'b n c h w -> (b n) c h w')
tokens = self.patchify(images)
tokens = rearrange(tokens, 'bn c h w -> bn (h w) c')
# add pos encodings
tokens = rearrange(tokens, '(b n) hw c -> b n hw c', b=B)
tokens = tokens + self.interpolate_pos_encoding(tokens, W, H).unsqueeze(1)
view_embeds = torch.cat([self.ref_embed, self.src_embed.repeat(1, N-1, 1)], dim=1)
tokens = tokens + view_embeds.unsqueeze(2)
# transformer
tokens = rearrange(tokens, 'b n hw c -> b (n hw) c')
x = tokens
for layer in self.blocks:
x = layer(x)
# unpatchify
x = self.norm(x)
x = self.unpatchify(x)
x = rearrange(x, 'b (n h w) c -> b n h w c', n=N, h=H//self.patch_size, w=W//self.patch_size)
x = rearrange(x, 'b n h w (p q c) -> b n (h p) (w q) c', p=self.patch_size, q=self.patch_size)
out = x
return out
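The patchify/unpatchify bookkeeping in `Transformer.forward` reduces to plain arithmetic. A small sketch using the default constructor arguments above:

```python
# Token bookkeeping for the Transformer above, with its default hyperparameters.
image_size, patch_size, output_dim = 512, 8, 14

patches_per_side = image_size // patch_size   # 64 patches per image side
tokens_per_view = patches_per_side ** 2       # 4096 tokens per view after patchify
per_token_out = patch_size ** 2 * output_dim  # 896 values unpatchified per token

# Unpatchify restores full resolution: 64 patches * 8 px = 512 px per side.
assert patches_per_side * patch_size == image_size
print(tokens_per_view, per_token_out)  # 4096 896
```

With N input views, the self-attention sequence length is `N * tokens_per_view`, which is why the view embeddings (`ref_embed`/`src_embed`) are needed to disambiguate views after the tokens are flattened into one sequence.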
================================================
FILE: freesplatter/utils/__init__.py
================================================
================================================
FILE: freesplatter/utils/camera_util.py
================================================
import torch
import numpy as np
def normalize_vecs(vectors: torch.Tensor) -> torch.Tensor:
"""
Normalize vector lengths.
"""
return vectors / (torch.norm(vectors, dim=-1, keepdim=True))
def blender_to_opencv(camera_matrix: torch.Tensor):
"""
Convert Blender World-to-Camera matrix into OpenCV space by flipping y and z axes
Blender camera system: x-right, y-up, z-backward
OpenCV camera system: x-right, y-down, z-forward
"""
flip_yz = torch.tensor([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
if camera_matrix.ndim == 3:
flip_yz = flip_yz.unsqueeze(0)
camera_matrix_opencv = torch.matmul(flip_yz.to(camera_matrix), camera_matrix)
return camera_matrix_opencv
def pad_camera_extrinsics_4x4(extrinsics):
if extrinsics.shape[-2] == 4:
return extrinsics
padding = torch.tensor([[0, 0, 0, 1]]).to(extrinsics)
if extrinsics.ndim == 3:
padding = padding.unsqueeze(0).repeat(extrinsics.shape[0], 1, 1)
extrinsics = torch.cat([extrinsics, padding], dim=-2)
return extrinsics
def create_camera_to_world(camera_position: torch.Tensor, look_at: torch.Tensor = None, up_world: torch.Tensor = None, camera_system: str = 'opencv'):
"""
Create OpenCV or OpenGL camera extrinsics from camera locations and look-at position.
camera_position: (M, 3) or (3,)
look_at: (3)
up_world: (3)
return: (M, 3, 4) or (3, 4)
"""
# by default, looking at the origin and world up is z-axis
if look_at is None:
look_at = torch.tensor([0, 0, 0], dtype=torch.float32)
if up_world is None:
up_world = torch.tensor([0, 0, 1], dtype=torch.float32)
if camera_position.ndim == 2:
look_at = look_at.unsqueeze(0).repeat(camera_position.shape[0], 1)
up_world = up_world.unsqueeze(0).repeat(camera_position.shape[0], 1)
assert camera_system in ['opencv', 'opengl']
if camera_system == 'opencv':
# OpenCV camera: z-forward, x-right, y-down
z_axis = look_at - camera_position
z_axis = normalize_vecs(z_axis).float()
x_axis = torch.cross(z_axis, up_world)
x_axis = normalize_vecs(x_axis).float()
y_axis = torch.cross(z_axis, x_axis)
y_axis = normalize_vecs(y_axis).float()
else:
# OpenGL camera: z-backward, x-right, y-up
z_axis = camera_position - look_at
z_axis = normalize_vecs(z_axis).float()
x_axis = torch.cross(up_world, z_axis)
x_axis = normalize_vecs(x_axis).float()
y_axis = torch.cross(z_axis, x_axis)
y_axis = normalize_vecs(y_axis).float()
extrinsics = torch.stack([x_axis, y_axis, z_axis, camera_position], dim=-1)
extrinsics = pad_camera_extrinsics_4x4(extrinsics)
return extrinsics
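A NumPy sketch of the 'opencv' branch above (an illustrative re-implementation, not repo code): a camera at (0, -2, 0) looking at the origin with world-up +z should come out with x-right, y-down, z-forward axes:

```python
import numpy as np

def look_at_opencv(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    # Mirrors the 'opencv' branch of create_camera_to_world above.
    z = target - eye
    z = z / np.linalg.norm(z)
    x = np.cross(z, up)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    y = y / np.linalg.norm(y)
    c2w = np.eye(4)
    c2w[:3, 0], c2w[:3, 1], c2w[:3, 2], c2w[:3, 3] = x, y, z, eye
    return c2w

c2w = look_at_opencv(np.array([0.0, -2.0, 0.0]))
assert np.allclose(c2w[:3, 2], [0.0, 1.0, 0.0])   # z-axis: toward the origin
assert np.allclose(c2w[:3, 1], [0.0, 0.0, -1.0])  # y-axis: world-down (y-down camera)
```

This (0, -2, 0) viewpoint is exactly the canonical camera position that `normalize_cameras` below maps the canonical view onto.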
def FOV_to_intrinsics(fov, device='cpu'):
"""
Creates a 3x3 camera intrinsics matrix from the camera field of view, specified in degrees.
Note that the intrinsics are normalized by image size rather than expressed in pixel units.
Assumes the principal point is at the image center.
"""
focal_length = 0.5 / np.tan(np.deg2rad(fov) * 0.5)
intrinsics = torch.tensor([[focal_length, 0, 0.5], [0, focal_length, 0.5], [0, 0, 1]], device=device)
return intrinsics
def normalize_cameras(extrinsics, camera_position: torch.Tensor = None, camera_system: str = 'opencv', canonical_index=0):
"""
Normalize the camera at `canonical_index` to the canonical camera position, and transform the other cameras accordingly.
extrinsics: (N, 4, 4)
"""
if camera_position is None:
camera_position = torch.tensor([[0, -2, 0]]).float()
assert camera_system in ['opencv', 'opengl']
canonical_distance = camera_position.norm()
# compute conditional camera distances
cond_extrinsic = extrinsics[canonical_index]
cond_camera_distance = cond_extrinsic[:3, 3].norm(dim=-1, keepdim=False)
# scale camera distances
scale = canonical_distance / cond_camera_distance
extrinsics[:, :3, 3] = extrinsics[:, :3, 3] * scale
# rotate all cameras
canonical_extrinsic = create_camera_to_world(camera_position, camera_system=camera_system).to(extrinsics)
transform_matrix = torch.matmul(canonical_extrinsic, torch.linalg.inv(extrinsics[canonical_index:canonical_index+1]))
normalized_extrinsics = torch.matmul(transform_matrix, extrinsics)
return normalized_extrinsics, scale
================================================
FILE: freesplatter/utils/geometry_util.py
================================================
import torch
import torch.nn.functional as F
from einops import rearrange
# --- Intrinsics Transformations ---
def normalize_intrinsics(intrinsics, image_shape):
'''Normalize an intrinsics matrix given the image shape'''
intrinsics = intrinsics.clone()
intrinsics[..., 0, :] /= image_shape[1]
intrinsics[..., 1, :] /= image_shape[0]
return intrinsics
def unnormalize_intrinsics(intrinsics, image_shape):
'''Unnormalize an intrinsics matrix given the image shape'''
intrinsics = intrinsics.clone()
intrinsics[..., 0, :] *= image_shape[1]
intrinsics[..., 1, :] *= image_shape[0]
return intrinsics
# --- Projections ---
def homogenize_points(points):
"""Append a '1' along the final dimension of the tensor (i.e. convert xyz->xyz1)"""
return torch.cat([points, torch.ones_like(points[..., :1])], dim=-1)
def normalize_homogenous_points(points):
"""Normalize the point vectors"""
return points / points[..., -1:]
def pixel_space_to_camera_space(pixel_space_points, depth, intrinsics):
"""
Convert pixel space points to camera space points.
Args:
pixel_space_points (torch.Tensor): Pixel space points with shape (h, w, 2)
depth (torch.Tensor): Depth map with shape (b, v, h, w, 1)
intrinsics (torch.Tensor): Camera intrinsics with shape (b, v, 3, 3)
Returns:
torch.Tensor: Camera space points with shape (b, v, h, w, 3).
"""
pixel_space_points = homogenize_points(pixel_space_points)
camera_space_points = torch.einsum('b v i j , h w j -> b v h w i', intrinsics.inverse(), pixel_space_points)
camera_space_points = camera_space_points * depth
return camera_space_points
def camera_space_to_world_space(camera_space_points, c2w):
"""
Convert camera space points to world space points.
Args:
camera_space_points (torch.Tensor): Camera space points with shape (b, v, h, w, 3)
c2w (torch.Tensor): Camera to world extrinsics matrix with shape (b, v, 4, 4)
Returns:
torch.Tensor: World space points with shape (b, v, h, w, 3).
"""
camera_space_points = homogenize_points(camera_space_points)
world_space_points = torch.einsum('b v i j , b v h w j -> b v h w i', c2w, camera_space_points)
return world_space_points[..., :3]
def camera_space_to_pixel_space(camera_space_points, intrinsics):
"""
Convert camera space points to pixel space points.
Args:
camera_space_points (torch.Tensor): Camera space points with shape (b, v1, v2, h, w, 3)
intrinsics (torch.Tensor): Camera intrinsics with shape (b, v2, 3, 3)
Returns:
torch.Tensor: Pixel space points with shape (b, v1, v2, h, w, 2).
"""
camera_space_points = normalize_homogenous_points(camera_space_points)
pixel_space_points = torch.einsum('b u i j , b v u h w j -> b v u h w i', intrinsics, camera_space_points)
return pixel_space_points[..., :2]
def world_space_to_camera_space(world_space_points, c2w):
"""
Convert world space points to camera space points.
Args:
world_space_points (torch.Tensor): World space points with shape (b, v1, h, w, 3)
c2w (torch.Tensor): Camera to world extrinsics matrix with shape (b, v2, 4, 4)
Returns:
torch.Tensor: Camera space points with shape (b, v1, v2, h, w, 3).
"""
world_space_points = homogenize_points(world_space_points)
camera_space_points = torch.einsum('b u i j , b v h w j -> b v u h w i', c2w.inverse(), world_space_points)
return camera_space_points[..., :3]
def unproject_depth(depth, intrinsics, c2w):
"""
Turn the depth map into a 3D point cloud in world space
Args:
depth: (b, v, h, w, 1)
intrinsics: (b, v, 3, 3)
c2w: (b, v, 4, 4)
Returns:
torch.Tensor: World space points with shape (b, v, h, w, 3).
"""
# Compute indices of pixels
h, w = depth.shape[-3], depth.shape[-2]
x_grid, y_grid = torch.meshgrid(
torch.arange(w, device=depth.device, dtype=torch.float32),
torch.arange(h, device=depth.device, dtype=torch.float32),
indexing='xy'
) # (h, w), (h, w)
# Compute coordinates of pixels in camera space
pixel_space_points = torch.stack((x_grid, y_grid), dim=-1) # (..., h, w, 2)
camera_points = pixel_space_to_camera_space(pixel_space_points, depth, intrinsics) # (..., h, w, 3)
# Convert points to world space
world_points = camera_space_to_world_space(camera_points, c2w) # (..., h, w, 3)
return world_points
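A pinhole round-trip check of the unprojection math above, in NumPy (illustration only; the intrinsics values are made up): unprojecting a pixel at a known depth and reprojecting it should return the same pixel.

```python
import numpy as np

K = np.array([[500.0, 0.0, 256.0],
              [0.0, 500.0, 256.0],
              [0.0, 0.0, 1.0]])  # pixel-unit intrinsics (hypothetical values)

u, v, depth = 300.0, 200.0, 2.5

# Unproject: X_cam = depth * K^-1 [u, v, 1]^T (what pixel_space_to_camera_space does).
x_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
assert np.isclose(x_cam[2], depth)  # z equals the depth by construction

# Reproject: divide by z, then apply K (what camera_space_to_pixel_space does).
uv = (K @ (x_cam / x_cam[2]))[:2]
assert np.allclose(uv, [u, v])
```

`calculate_in_frustum_mask` below relies on exactly this round trip, plus a depth comparison, to decide which pixels in one view are visible in another.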
@torch.no_grad()
def calculate_in_frustum_mask(depth_1, intrinsics_1, c2w_1, depth_2, intrinsics_2, c2w_2, depth_tolerance=1e-1):
"""
A function that takes in the depth, intrinsics and c2w matrices of two sets
of views, and then works out which of the pixels in the first set of views
has a direct corresponding pixel in any of views in the second set
Args:
depth_1: (b, v1, h, w)
intrinsics_1: (b, v1, 3, 3)
c2w_1: (b, v1, 4, 4)
depth_2: (b, v2, h, w)
intrinsics_2: (b, v2, 3, 3)
c2w_2: (b, v2, 4, 4)
Returns:
torch.Tensor: Mask with shape (b, v1, h, w).
"""
_, v1, h, w = depth_1.shape
_, v2, _, _ = depth_2.shape
# unnormalize intrinsics if needed
if intrinsics_1[0, 0, 0, 2] < 1:
intrinsics_1 = unnormalize_intrinsics(intrinsics_1, (h, w))
if intrinsics_2[0, 0, 0, 2] < 1:
intrinsics_2 = unnormalize_intrinsics(intrinsics_2, (h, w))
# Unproject the depth to get the 3D points in world space
points_3d = unproject_depth(depth_1[..., None], intrinsics_1, c2w_1) # (b, v1, h, w, 3)
# Project the 3D points into the pixel space of all the second views simultaneously
camera_points = world_space_to_camera_space(points_3d, c2w_2) # (b, v1, v2, h, w, 3)
points_2d = camera_space_to_pixel_space(camera_points, intrinsics_2) # (b, v1, v2, h, w, 2)
# Calculate the depth of each point
rendered_depth = camera_points[..., 2] # (b, v1, v2, h, w)
# We use three conditions to determine if a point should be masked
# Condition 1: Check if the points are in the frustum of any of the v2 views
in_frustum_mask = (
(points_2d[..., 0] > 0) &
(points_2d[..., 0] < w) &
(points_2d[..., 1] > 0) &
(points_2d[..., 1] < h)
) # (b, v1, v2, h, w)
in_frustum_mask = in_frustum_mask.any(dim=-3) # (b, v1, h, w)
# Condition 2: Check if the points have non-zero (i.e. valid) depth in the input view
non_zero_depth = depth_1 > 1e-6
# Condition 3: Check if the points have matching depth to any of the v2 views.
# F.grid_sample expects input coordinates normalized to [-1, 1], so we normalize first
points_2d[..., 0] /= w
points_2d[..., 1] /= h
points_2d = points_2d * 2 - 1
matching_depth = torch.ones_like(rendered_depth, dtype=torch.bool)
for b in range(depth_1.shape[0]):
for i in range(v1):
for j in range(v2):
depth = rearrange(depth_2[b, j], 'h w -> 1 1 h w')
coords = rearrange(points_2d[b, i, j], 'h w c -> 1 h w c')
sampled_depths = F.grid_sample(depth, coords, align_corners=False)[0, 0]
matching_depth[b, i, j] = torch.isclose(rendered_depth[b, i, j], sampled_depths, atol=depth_tolerance)
matching_depth = matching_depth.any(dim=-3) # (..., v1, h, w)
mask = in_frustum_mask & non_zero_depth & matching_depth
return mask
================================================
FILE: freesplatter/utils/infer_util.py
================================================
import os
import importlib
import imageio
import torch
import rembg
import numpy as np
import PIL.Image
from PIL import Image
from typing import Any
from torchvision import transforms
def instantiate_from_config(config):
if not "target" in config:
if config == '__is_first_stage__':
return None
elif config == "__is_unconditional__":
return None
raise KeyError("Expected key `target` to instantiate.")
return get_obj_from_str(config["target"])(**config.get("params", dict()))
def get_obj_from_str(string, reload=False):
module, cls = string.rsplit(".", 1)
if reload:
module_imp = importlib.import_module(module)
importlib.reload(module_imp)
return getattr(importlib.import_module(module, package=None), cls)
def resize_without_crop(pil_image, target_width, target_height):
resized_image = pil_image.resize((target_width, target_height), Image.LANCZOS)
return np.array(resized_image)[:, :, :3]
@torch.inference_mode()
def numpy2pytorch(imgs):
h = torch.from_numpy(np.stack(imgs, axis=0)).float() / 255.0 * 2.0 - 1.0
h = h.movedim(-1, 1)
return h
@torch.inference_mode()
def remove_background(
image: PIL.Image.Image,
rembg: Any = None,
force: bool = False,
**rembg_kwargs,
) -> PIL.Image.Image:
do_remove = True
if image.mode == "RGBA" and image.getextrema()[3][0] < 255:
do_remove = False
do_remove = do_remove or force
if do_remove:
W, H = image.size
k = (256.0 / float(H * W)) ** 0.5
feed = resize_without_crop(image, int(64 * round(W * k)), int(64 * round(H * k)))
feed = numpy2pytorch([feed]).to(device=rembg.device, dtype=torch.float32)
alpha = rembg(feed)[0][0]
alpha = torch.nn.functional.interpolate(alpha, size=(H, W), mode="bilinear")
alpha = alpha.squeeze().clamp(0, 1)
alpha = (alpha * 255).cpu().data.numpy().astype(np.uint8)
alpha = Image.fromarray(alpha)
no_bg_image = Image.new("RGBA", alpha.size, (0, 0, 0, 0))
no_bg_image.paste(image, mask=alpha)
image = no_bg_image
return image
# NOTE: this second `remove_background` definition overrides the one above at
# import time; it expects a BiRefNet-style segmentation model passed as `rembg`.
@torch.inference_mode()
def remove_background(
image: PIL.Image.Image,
rembg: Any = None,
force: bool = False,
**rembg_kwargs,
) -> PIL.Image.Image:
do_remove = True
if image.mode == "RGBA" and image.getextrema()[3][0] < 255:
do_remove = False
do_remove = do_remove or force
if do_remove:
transform_image = transforms.Compose([
transforms.Resize((1024, 1024)),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
image = image.convert('RGB')
input_images = transform_image(image).unsqueeze(0).to(rembg.device)
with torch.no_grad():
preds = rembg(input_images)[-1].sigmoid().cpu()
pred = preds[0].squeeze()
pred_pil = transforms.ToPILImage()(pred)
mask = pred_pil.resize(image.size)
image.putalpha(mask)
return image
# def remove_background(
# image: PIL.Image.Image,
# rembg_session: Any = None,
# force: bool = False,
# **rembg_kwargs,
# ) -> PIL.Image.Image:
# do_remove = True
# if image.mode == "RGBA" and image.getextrema()[3][0] < 255:
# do_remove = False
# do_remove = do_remove or force
# if do_remove:
# image = rembg.remove(image, session=rembg_session, **rembg_kwargs)
# return image
def resize_foreground(
image: PIL.Image.Image,
ratio: float,
) -> PIL.Image.Image:
image = np.array(image)
assert image.shape[-1] == 4
alpha = np.where(image[..., 3] > 0)
y1, y2, x1, x2 = (
alpha[0].min(),
alpha[0].max(),
alpha[1].min(),
alpha[1].max(),
)
# crop the foreground (bbox max is inclusive, so extend the slice end by 1)
fg = image[y1:y2+1, x1:x2+1]
# pad to square
size = max(fg.shape[0], fg.shape[1])
ph0, pw0 = (size - fg.shape[0]) // 2, (size - fg.shape[1]) // 2
ph1, pw1 = size - fg.shape[0] - ph0, size - fg.shape[1] - pw0
new_image = np.pad(
fg,
((ph0, ph1), (pw0, pw1), (0, 0)),
mode="constant",
constant_values=((0, 0), (0, 0), (0, 0)),
)
# compute padding according to the ratio
new_size = int(new_image.shape[0] / ratio)
# pad to size, double side
ph0, pw0 = (new_size - size) // 2, (new_size - size) // 2
ph1, pw1 = new_size - size - ph0, new_size - size - pw0
new_image = np.pad(
new_image,
((ph0, ph1), (pw0, pw1), (0, 0)),
mode="constant",
constant_values=((0, 0), (0, 0), (0, 0)),
)
new_image = Image.fromarray(new_image)
return new_image
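`resize_foreground` pads in two stages: first to a square, then again so the foreground occupies `ratio` of the final side. A hypothetical mini-reimplementation of just that padding arithmetic (the helper name `pad_to_ratio` is illustrative):

```python
import numpy as np

# Two-stage padding as in resize_foreground: square-pad the cropped
# foreground, then pad so it fills `ratio` of the output side.
def pad_to_ratio(fg: np.ndarray, ratio: float) -> np.ndarray:
    size = max(fg.shape[0], fg.shape[1])
    ph0, pw0 = (size - fg.shape[0]) // 2, (size - fg.shape[1]) // 2
    ph1, pw1 = size - fg.shape[0] - ph0, size - fg.shape[1] - pw0
    square = np.pad(fg, ((ph0, ph1), (pw0, pw1), (0, 0)))
    new_size = int(size / ratio)          # final side so that size/new_size ~= ratio
    p0 = (new_size - size) // 2
    p1 = new_size - size - p0
    return np.pad(square, ((p0, p1), (p0, p1), (0, 0)))

fg = np.ones((60, 40, 4), dtype=np.uint8)   # 60x40 RGBA foreground
out = pad_to_ratio(fg, ratio=0.75)
print(out.shape)  # (80, 80, 4): 60px foreground on an 80px canvas
```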
def rgba_to_white_background(image: PIL.Image.Image) -> tuple[torch.Tensor, torch.Tensor]:
image = np.asarray(image, dtype=np.float32) / 255.0
image = torch.from_numpy(image).movedim(2, 0).float()
image, alpha = image.split([3, 1], dim=0)
image = image * alpha + torch.ones_like(image) * (1 - alpha)
return image, alpha
def save_video(
frames: torch.Tensor,
output_path: str,
fps: int = 30,
) -> None:
# images: (N, C, H, W)
frames = [(frame.permute(1, 2, 0).cpu().numpy() * 255).astype(np.uint8) for frame in frames]
writer = imageio.get_writer(output_path, mode='I', fps=fps, codec='libx264')
for frame in frames:
writer.append_data(frame)
writer.close()
================================================
FILE: freesplatter/utils/mesh_optim.py
================================================
from typing import *
import numpy as np
import torch
import utils3d
import nvdiffrast.torch as dr
from tqdm import tqdm
import trimesh
import trimesh.visual
import xatlas
import cv2
from PIL import Image
import fast_simplification
def parametrize_mesh(vertices: np.ndarray, faces: np.ndarray):
"""
Parametrize a mesh into texture space using xatlas.
Args:
vertices (np.ndarray): Vertices of the mesh. Shape (V, 3).
faces (np.ndarray): Faces of the mesh. Shape (F, 3).
Returns:
Remapped vertices, faces, and per-vertex UV coordinates.
"""
vmapping, indices, uvs = xatlas.parametrize(vertices, faces)
vertices = vertices[vmapping]
faces = indices
return vertices, faces, uvs
def bake_texture(
vertices: np.ndarray,
faces: np.ndarray,
uvs: np.ndarray,
observations: List[np.ndarray],
masks: List[np.ndarray],
extrinsics: List[np.ndarray],
intrinsics: List[np.ndarray],
texture_size: int = 2048,
near: float = 0.1,
far: float = 10.0,
mode: Literal['fast', 'opt'] = 'opt',
lambda_tv: float = 1e-2,
verbose: bool = False,
):
"""
Bake texture to a mesh from multiple observations.
Args:
vertices (np.ndarray): Vertices of the mesh. Shape (V, 3).
faces (np.ndarray): Faces of the mesh. Shape (F, 3).
uvs (np.ndarray): UV coordinates of the mesh. Shape (V, 2).
observations (List[np.ndarray]): List of observations. Each observation is a 2D image of shape (H, W, 3).
masks (List[np.ndarray]): List of masks. Each mask is a 2D image of shape (H, W).
extrinsics (List[np.ndarray]): List of 4x4 extrinsic matrices.
intrinsics (List[np.ndarray]): List of 3x3 intrinsic matrices.
texture_size (int): Size of the texture.
near (float): Near plane of the camera.
far (float): Far plane of the camera.
mode (Literal['fast', 'opt']): Mode of texture baking.
lambda_tv (float): Weight of total variation loss in optimization.
verbose (bool): Whether to print progress.
"""
vertices = torch.tensor(vertices).float().cuda()
faces = torch.tensor(faces.astype(np.int32)).cuda()
uvs = torch.tensor(uvs).float().cuda()
observations = [torch.tensor(obs).float().cuda() for obs in observations]
masks = [torch.tensor(m>1e-2).bool().cuda() for m in masks]
views = [utils3d.torch.extrinsics_to_view(torch.tensor(extr).float().cuda()) for extr in extrinsics]
projections = [utils3d.torch.intrinsics_to_perspective(torch.tensor(intr).float().cuda(), near, far) for intr in intrinsics]
if mode == 'fast':
texture = torch.zeros((texture_size * texture_size, 3), dtype=torch.float32).cuda()
texture_weights = torch.zeros((texture_size * texture_size), dtype=torch.float32).cuda()
rastctx = utils3d.torch.RastContext(backend='cuda')
for observation, obs_mask, view, projection in tqdm(zip(observations, masks, views, projections), total=len(observations), disable=not verbose, desc='Texture baking (fast)'):
with torch.no_grad():
rast = utils3d.torch.rasterize_triangle_faces(
rastctx, vertices[None], faces, observation.shape[1], observation.shape[0], uv=uvs[None], view=view, projection=projection
)
uv_map = rast['uv'][0].detach().flip(0)
# use the mask of the current view (masks[0] would apply the first view's mask to every observation)
mask = rast['mask'][0].detach().bool() & obs_mask
# nearest neighbor interpolation
uv_map = (uv_map * texture_size).floor().long().clamp(0, texture_size - 1)  # guard against uv == 1.0 indexing out of bounds
obs = observation[mask]
uv_map = uv_map[mask]
idx = uv_map[:, 0] + (texture_size - uv_map[:, 1] - 1) * texture_size
texture = texture.scatter_add(0, idx.view(-1, 1).expand(-1, 3), obs)
texture_weights = texture_weights.scatter_add(0, idx, torch.ones((obs.shape[0]), dtype=torch.float32, device=texture.device))
mask = texture_weights > 0
texture[mask] /= texture_weights[mask][:, None]
texture = np.clip(texture.reshape(texture_size, texture_size, 3).cpu().numpy() * 255, 0, 255).astype(np.uint8)
# inpaint
mask = (texture_weights == 0).cpu().numpy().astype(np.uint8).reshape(texture_size, texture_size)
texture = cv2.inpaint(texture, mask, 3, cv2.INPAINT_TELEA)
elif mode == 'opt':
rastctx = utils3d.torch.RastContext(backend='cuda')
observations = [obs.flip(0) for obs in observations]
masks = [m.flip(0) for m in masks]
_uv = []
_uv_dr = []
for observation, view, projection in tqdm(zip(observations, views, projections), total=len(views), disable=not verbose, desc='Texture baking (opt): UV'):
with torch.no_grad():
rast = utils3d.torch.rasterize_triangle_faces(
rastctx, vertices[None], faces, observation.shape[1], observation.shape[0], uv=uvs[None], view=view, projection=projection
)
_uv.append(rast['uv'].detach())
_uv_dr.append(rast['uv_dr'].detach())
texture = torch.nn.Parameter(torch.zeros((1, texture_size, texture_size, 3), dtype=torch.float32).cuda())
optimizer = torch.optim.Adam([texture], betas=(0.5, 0.9), lr=1e-2)
def exp_anealing(optimizer, step, total_steps, start_lr, end_lr):
return start_lr * (end_lr / start_lr) ** (step / total_steps)
def cosine_anealing(optimizer, step, total_steps, start_lr, end_lr):
return end_lr + 0.5 * (start_lr - end_lr) * (1 + np.cos(np.pi * step / total_steps))
def tv_loss(texture):
return torch.nn.functional.l1_loss(texture[:, :-1, :, :], texture[:, 1:, :, :]) + \
torch.nn.functional.l1_loss(texture[:, :, :-1, :], texture[:, :, 1:, :])
total_steps = 2500
with tqdm(total=total_steps, disable=not verbose, desc='Texture baking (opt): optimizing') as pbar:
for step in range(total_steps):
optimizer.zero_grad()
selected = np.random.randint(0, len(views))
uv, uv_dr, observation, mask = _uv[selected], _uv_dr[selected], observations[selected], masks[selected]
render = dr.texture(texture, uv, uv_dr)[0]
loss = torch.nn.functional.l1_loss(render[mask], observation[mask])
if lambda_tv > 0:
loss += lambda_tv * tv_loss(texture)
loss.backward()
optimizer.step()
# annealing
optimizer.param_groups[0]['lr'] = cosine_anealing(optimizer, step, total_steps, 1e-2, 1e-5)
pbar.set_postfix({'loss': loss.item()})
pbar.update()
texture = np.clip(texture[0].flip(0).detach().cpu().numpy() * 255, 0, 255).astype(np.uint8)
mask = 1 - utils3d.torch.rasterize_triangle_faces(
rastctx, (uvs * 2 - 1)[None], faces, texture_size, texture_size
)['mask'][0].detach().cpu().numpy().astype(np.uint8)
texture = cv2.inpaint(texture, mask, 3, cv2.INPAINT_TELEA)
else:
raise ValueError(f'Unknown mode: {mode}')
return texture
def optimize_mesh(
mesh,
images: torch.Tensor,
masks: torch.Tensor,
extrinsics: torch.Tensor,
intrinsics: torch.Tensor,
simplify: float = 0.95,
texture_size: int = 1024,
verbose: bool = False,
) -> trimesh.Trimesh:
"""
Convert a generated asset to a glb file.
Args:
mesh: Extracted mesh.
simplify (float): Ratio of faces to remove in simplification.
texture_size (int): Size of the texture.
verbose (bool): Whether to print progress.
"""
vertices = np.array(mesh.vertices).astype(float)
faces = np.array(mesh.faces).astype(int)
# mesh simplification
max_faces = 50000
mesh_reduction = max(1 - max_faces / faces.shape[0], simplify)
vertices, faces = fast_simplification.simplify(
vertices, faces, target_reduction=mesh_reduction)
# parametrize mesh
vertices, faces, uvs = parametrize_mesh(vertices, faces)
# bake texture
images = [images[i].cpu().numpy() for i in range(len(images))]
masks = [masks[i].cpu().numpy() for i in range(len(masks))]
extrinsics = [extrinsics[i].cpu().numpy() for i in range(len(extrinsics))]
intrinsics = [intrinsics[i].cpu().numpy() for i in range(len(intrinsics))]
texture = bake_texture(
vertices.astype(float), faces, uvs,  # faces stay integer; bake_texture casts them to int32 itself
images, masks, extrinsics, intrinsics,
texture_size=texture_size,
mode='opt',
lambda_tv=0.01,
verbose=verbose
)
texture = Image.fromarray(texture)
# rotate mesh
vertices = vertices.astype(float) @ np.array([[-1, 0, 0], [0, 0, 1], [0, 1, 0]]).astype(float)
mesh = trimesh.Trimesh(vertices, faces, visual=trimesh.visual.TextureVisuals(uv=uvs, image=texture))
return mesh
================================================
FILE: freesplatter/utils/recon_util.py
================================================
import cv2
import math
import scipy
import numpy as np
import torch
import open3d as o3d
from tqdm import tqdm
from .camera_util import create_camera_to_world
###############################################################################
# Camera Trajectory
###############################################################################
def fibonacci_sampling_on_sphere(num_samples=1):
points = []
phi = np.pi * (3.0 - np.sqrt(5.0)) # golden angle in radians
for i in range(num_samples):
y = 1 - (i / max(float(num_samples - 1), 1.0)) * 2  # y goes from 1 to -1 (guard against num_samples == 1)
radius = np.sqrt(1 - y * y) # radius at y
theta = phi * i # golden angle increment
x = np.cos(theta) * radius
z = np.sin(theta) * radius
points.append([x, y, z])
points = np.array(points)
return points
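A quick standalone check of the Fibonacci-sphere sampler above (vectorized re-statement; the helper name `fib_sphere` is illustrative): every sample should lie on the unit sphere.

```python
import numpy as np

# Fibonacci sampling on the unit sphere: y sweeps 1 -> -1 while the
# azimuth advances by the golden angle each step.
def fib_sphere(n: int) -> np.ndarray:
    phi = np.pi * (3.0 - np.sqrt(5.0))   # golden angle in radians
    i = np.arange(n)
    y = 1 - (i / (n - 1)) * 2            # y goes from 1 to -1
    r = np.sqrt(1 - y * y)               # ring radius at height y
    theta = phi * i
    return np.stack([np.cos(theta) * r, y, np.sin(theta) * r], axis=-1)

pts = fib_sphere(64)
print(np.abs(np.linalg.norm(pts, axis=-1) - 1.0).max())  # ~0: all points are unit-norm
```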
def get_fibonacci_cameras(N=20, radius=2.0, device='cuda'):
def normalize_vecs(vectors):
return vectors / (torch.norm(vectors, dim=-1, keepdim=True))
t = torch.linspace(0, 1, N).reshape(-1, 1)
cam_pos = fibonacci_sampling_on_sphere(N)
cam_pos = torch.from_numpy(cam_pos).float().to(device)
cam_pos = cam_pos * radius
forward_vector = normalize_vecs(-cam_pos)
up_vector = torch.tensor([0, 0, 1], dtype=torch.float,
device=device).reshape(-1).expand_as(forward_vector)
right_vector = normalize_vecs(torch.cross(forward_vector, up_vector, dim=-1))
up_vector = normalize_vecs(torch.cross(right_vector, forward_vector, dim=-1))
rotate = torch.stack(
(right_vector, -up_vector, forward_vector), dim=-1)
rotation_matrix = torch.eye(4, device=device).unsqueeze(0).repeat(forward_vector.shape[0], 1, 1)
rotation_matrix[:, :3, :3] = rotate
translation_matrix = torch.eye(4, device=device).unsqueeze(0).repeat(forward_vector.shape[0], 1, 1)
translation_matrix[:, :3, 3] = cam_pos
cam2world = translation_matrix @ rotation_matrix
return cam2world
def get_circular_cameras(N=120, elevation=0, radius=2.0, normalize=True, device='cuda'):
camera_positions = []
for i in range(N):
azimuth = 2 * np.pi * i / N - np.pi / 2
x = radius * np.cos(elevation) * np.cos(azimuth)
y = radius * np.cos(elevation) * np.sin(azimuth)
z = radius * np.sin(elevation)
camera_positions.append([x, y, z])
camera_positions = np.array(camera_positions)
camera_positions = torch.from_numpy(camera_positions).float()
c2ws = create_camera_to_world(camera_positions, camera_system='opencv')
if normalize:
c2ws_first = create_camera_to_world(torch.tensor([0, -2, 0]), camera_system='opencv').unsqueeze(0)
c2ws = torch.linalg.inv(c2ws_first) @ c2ws
return c2ws
###############################################################################
# TSDF Fusion
###############################################################################
def rgbd_to_mesh(images, depths, c2ws, fov, mesh_path, cam_elev_thr=0):
voxel_length = 2 * 2.0 / 512.0
sdf_trunc = 2 * 0.02
color_type = o3d.pipelines.integration.TSDFVolumeColorType.RGB8
volume = o3d.pipelines.integration.ScalableTSDFVolume(
voxel_length=voxel_length,
sdf_trunc=sdf_trunc,
color_type=color_type,
)
for i in tqdm(range(c2ws.shape[0])):
camera_to_world = c2ws[i]
world_to_camera = np.linalg.inv(camera_to_world)
camera_position = camera_to_world[:3, 3]
# camera_elevation = np.rad2deg(np.arcsin(camera_position[2]))
camera_elevation = np.rad2deg(np.arcsin(camera_position[2] / np.linalg.norm(camera_position)))
if camera_elevation < cam_elev_thr:
continue
color_image = o3d.geometry.Image(np.ascontiguousarray(images[i]))
depth_image = o3d.geometry.Image(np.ascontiguousarray(depths[i]))
rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
color_image, depth_image, depth_scale=1.0, depth_trunc=4.0, convert_rgb_to_intensity=False
)
camera_intrinsics = o3d.camera.PinholeCameraIntrinsic()
fx = fy = images[i].shape[1] / 2. / np.tan(np.deg2rad(fov / 2.0))
cx = cy = images[i].shape[1] / 2.
h = images[i].shape[0]
w = images[i].shape[1]
camera_intrinsics.set_intrinsics(
w, h, fx, fy, cx, cy
)
volume.integrate(
rgbd_image,
camera_intrinsics,
world_to_camera,
)
fused_mesh = volume.extract_triangle_mesh()
triangle_clusters, cluster_n_triangles, cluster_area = (
fused_mesh.cluster_connected_triangles())
triangle_clusters = np.asarray(triangle_clusters)
cluster_n_triangles = np.asarray(cluster_n_triangles)
cluster_area = np.asarray(cluster_area)
triangles_to_remove = cluster_n_triangles[triangle_clusters] < 500
fused_mesh.remove_triangles_by_mask(triangles_to_remove)
fused_mesh.remove_unreferenced_vertices()
fused_mesh = fused_mesh.filter_smooth_simple(number_of_iterations=2)
fused_mesh = fused_mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh(mesh_path, fused_mesh)
###############################################################################
# Visualization
###############################################################################
def viewmatrix(lookdir, up, position):
"""Construct lookat view matrix."""
vec2 = normalize(lookdir)
vec0 = normalize(np.cross(up, vec2))
vec1 = normalize(np.cross(vec2, vec0))
m = np.stack([vec0, vec1, vec2, position], axis=1)
return m
def normalize(x):
"""Normalization helper function."""
return x / np.linalg.norm(x)
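A sanity sketch for the `viewmatrix` helper above (re-stated standalone; `lookat` is an illustrative name): the three stacked basis vectors should form an orthonormal frame.

```python
import numpy as np

# Lookat frame as built by viewmatrix: vec2 = view direction,
# vec0 = right (up x vec2), vec1 = recomputed up (vec2 x vec0).
def lookat(lookdir, up, position):
    vec2 = lookdir / np.linalg.norm(lookdir)
    vec0 = np.cross(up, vec2); vec0 /= np.linalg.norm(vec0)
    vec1 = np.cross(vec2, vec0); vec1 /= np.linalg.norm(vec1)
    return np.stack([vec0, vec1, vec2, position], axis=1)

m = lookat(np.array([0.0, 0.0, -1.0]), np.array([0.0, 1.0, 0.0]), np.zeros(3))
R = m[:, :3]
print(np.allclose(R.T @ R, np.eye(3)))  # True: rotation part is orthonormal
```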
def generate_interpolated_path(poses, n_interp, spline_degree=5,
smoothness=.03, rot_weight=.1):
"""Creates a smooth spline path between input keyframe camera poses.
Spline is calculated with poses in format (position, lookat-point, up-point).
Args:
poses: (n, 3, 4) array of input pose keyframes.
n_interp: returned path will have n_interp * (n - 1) total poses.
spline_degree: polynomial degree of B-spline.
smoothness: parameter for spline smoothing, 0 forces exact interpolation.
rot_weight: relative weighting of rotation/translation in spline solve.
Returns:
Array of new camera poses with shape (n_interp * (n - 1), 3, 4).
"""
def poses_to_points(poses, dist):
"""Converts from pose matrices to (position, lookat, up) format."""
pos = poses[:, :3, -1]
lookat = poses[:, :3, -1] - dist * poses[:, :3, 2]
up = poses[:, :3, -1] + dist * poses[:, :3, 1]
return np.stack([pos, lookat, up], 1)
def points_to_poses(points):
"""Converts from (position, lookat, up) format to pose matrices."""
return np.array([viewmatrix(p - l, u - p, p) for p, l, u in points])
def interp(points, n, k, s):
"""Runs multidimensional B-spline interpolation on the input points."""
sh = points.shape
pts = np.reshape(points, (sh[0], -1))
k = min(k, sh[0] - 1)
tck, _ = scipy.interpolate.splprep(pts.T, k=k, s=s)
u = np.linspace(0, 1, n, endpoint=False)
new_points = np.array(scipy.interpolate.splev(u, tck))
new_points = np.reshape(new_points.T, (n, sh[1], sh[2]))
return new_points
points = poses_to_points(poses, dist=rot_weight)
new_points = interp(points,
n_interp * (points.shape[0] - 1),
k=spline_degree,
s=smoothness)
return points_to_poses(new_points)
###############################################################################
# Camera Estimation
###############################################################################
def xy_grid(W, H, device=None, origin=(0, 0), unsqueeze=None, cat_dim=-1, homogeneous=False, **arange_kw):
""" Output a (H,W,2) array of int32
with output[j,i,0] = i + origin[0]
output[j,i,1] = j + origin[1]
"""
if device is None:
# numpy
arange, meshgrid, stack, ones = np.arange, np.meshgrid, np.stack, np.ones
else:
# torch
arange = lambda *a, **kw: torch.arange(*a, device=device, **kw)
meshgrid, stack = torch.meshgrid, torch.stack
ones = lambda *a: torch.ones(*a, device=device)
tw, th = [arange(o, o + s, **arange_kw) for s, o in zip((W, H), origin)]
grid = meshgrid(tw, th, indexing='xy')
if homogeneous:
grid = grid + (ones((H, W)),)
if unsqueeze is not None:
grid = (grid[0].unsqueeze(unsqueeze), grid[1].unsqueeze(unsqueeze))
if cat_dim is not None:
grid = stack(grid, cat_dim)
return grid
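The numpy path of `xy_grid` above, sketched standalone (`xy_grid_np` is an illustrative name): `output[j, i] = (i, j)` plus the origin offset.

```python
import numpy as np

# xy-indexed meshgrid: grid[j, i] = (i + origin_x, j + origin_y).
def xy_grid_np(W, H, origin=(0, 0)):
    xs = np.arange(origin[0], origin[0] + W)
    ys = np.arange(origin[1], origin[1] + H)
    gx, gy = np.meshgrid(xs, ys, indexing='xy')
    return np.stack([gx, gy], axis=-1)

g = xy_grid_np(4, 3)
print(g.shape, g[2, 1])  # (3, 4, 2) [1 2]
```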
def estimate_focal(pts3d, pp=None, mask=None, min_focal=0., max_focal=np.inf):
"""
Reprojection method, for when the absolute depth is known:
1) estimate the camera focal using a robust estimator
2) reproject points onto true rays, minimizing a certain error
"""
H, W, THREE = pts3d.shape
assert THREE == 3
if pp is None:
pp = torch.tensor([W/2, H/2]).to(pts3d)
# centered pixel grid
pixels = xy_grid(W, H, device=pts3d.device).view(-1, 2) - pp.view(1, 2) # (HW, 2)
pts3d = pts3d.view(H*W, 3).contiguous() # (HW, 3)
# mask points if provided
if mask is not None:
mask = mask.to(pts3d.device).ravel().bool()
assert len(mask) == pts3d.shape[0]
pts3d = pts3d[mask]
pixels = pixels[mask]
# weiszfeld
# init focal with l2 closed form
# we try to find focal = argmin Sum | pixel - focal * (x,y)/z|
xy_over_z = (pts3d[..., :2] / pts3d[..., 2:3]).nan_to_num(posinf=0, neginf=0) # homogeneous (x,y,1)
dot_xy_px = (xy_over_z * pixels).sum(dim=-1)
dot_xy_xy = xy_over_z.square().sum(dim=-1)
focal = dot_xy_px.mean(dim=0) / dot_xy_xy.mean(dim=0)
# iterative re-weighted least-squares (Weiszfeld)
for _ in range(10):
# re-weighting by inverse of distance
dis = (pixels - focal.view(-1, 1) * xy_over_z).norm(dim=-1)
# print(dis.nanmean(-1))
w = dis.clip(min=1e-8).reciprocal()
# update the scaling with the new weights
focal = (w * dot_xy_px).mean(dim=0) / (w * dot_xy_xy).mean(dim=0)
focal_base = max(H, W) / (2 * np.tan(np.deg2rad(60) / 2)) # size / 1.1547005383792515
focal = focal.clip(min=min_focal*focal_base, max=max_focal*focal_base)
return focal.ravel()
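The closed-form focal initialization above can be checked on synthetic data: project random 3D points with a known focal (principal point already removed), then recover it as `mean(pixel · xy/z) / mean(|xy/z|²)`. With noise-free data the estimate is exact, so the IRLS refinement would leave it unchanged.

```python
import numpy as np

# Synthetic reprojection check of the l2 closed-form focal estimate.
rng = np.random.default_rng(0)
true_focal = 350.0
pts = rng.uniform(-1, 1, size=(500, 3))
pts[:, 2] += 3.0                                 # keep all points in front of the camera
xy_over_z = pts[:, :2] / pts[:, 2:3]             # homogeneous (x/z, y/z)
pixels = true_focal * xy_over_z                  # centered pixel coordinates
focal = (xy_over_z * pixels).sum(-1).mean() / (xy_over_z ** 2).sum(-1).mean()
print(round(focal, 3))  # 350.0
```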
def fast_pnp(pts3d, mask, focal=None, pp=None, niter_PnP=10):
"""
Estimate camera pose (and focal length, if unknown) with RANSAC-PnP.
Inputs:
pts3d: (H, W, 3) per-pixel 3D points
mask: (H, W) boolean validity mask
focal: scalar focal length in pixels, or None to search a geometric range
pp: principal point (cx, cy); defaults to the image center
"""
H, W, _ = pts3d.shape
pixels = np.mgrid[:W, :H].T.astype(float)
if focal is None:
S = max(W, H)
tentative_focals = np.geomspace(S/2, S*3, 21)
else:
tentative_focals = [focal]
if pp is None:
pp = (W/2, H/2)
best = (0,)  # (score, R, T, focal)
for focal in tentative_focals:
K = np.float32([(focal, 0, pp[0]), (0, focal, pp[1]), (0, 0, 1)])
success, R, T, inliers = cv2.solvePnPRansac(pts3d[mask], pixels[mask], K, None,
iterationsCount=niter_PnP, reprojectionError=5, flags=cv2.SOLVEPNP_SQPNP)
if not success:
continue
score = len(inliers)
if success and score > best[0]:
best = score, R, T, focal
if not best[0]:
return None
_, R, T, best_focal = best
R = cv2.Rodrigues(R)[0] # world to cam
world2cam = np.eye(4).astype(float)
world2cam[:3, :3] = R
world2cam[:3, 3] = T.reshape(3)
cam2world = np.linalg.inv(world2cam)
return best_focal, cam2world
================================================
FILE: freesplatter/webui/__init__.py
================================================
================================================
FILE: freesplatter/webui/camera_viewer/__init__.py
================================================
================================================
FILE: freesplatter/webui/camera_viewer/utils.py
================================================
import os
import numpy as np
from PIL import Image
def load_image(fpath, sz=256):
img = Image.open(fpath)
img = img.resize((sz, sz))
return np.asarray(img)[:, :, :3]
def spherical_to_cartesian(sph):
theta, azimuth, radius = sph
return np.array([
radius * np.sin(theta) * np.cos(azimuth),
radius * np.sin(theta) * np.sin(azimuth),
radius * np.cos(theta),
])
def cartesian_to_spherical(xyz):
xy = xyz[0]**2 + xyz[1]**2
radius = np.sqrt(xy + xyz[2]**2)
theta = np.arctan2(np.sqrt(xy), xyz[2])
azimuth = np.arctan2(xyz[1], xyz[0])
return np.array([theta, azimuth, radius])
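A round-trip sketch for the two conversions above (re-stated standalone; `sph_to_cart`/`cart_to_sph` are illustrative names): spherical → cartesian → spherical should reproduce `(theta, azimuth, radius)`.

```python
import numpy as np

def sph_to_cart(sph):
    theta, azimuth, radius = sph
    return np.array([
        radius * np.sin(theta) * np.cos(azimuth),
        radius * np.sin(theta) * np.sin(azimuth),
        radius * np.cos(theta),
    ])

def cart_to_sph(xyz):
    xy = xyz[0] ** 2 + xyz[1] ** 2
    return np.array([
        np.arctan2(np.sqrt(xy), xyz[2]),   # polar angle theta
        np.arctan2(xyz[1], xyz[0]),        # azimuth
        np.sqrt(xy + xyz[2] ** 2),         # radius
    ])

sph = np.array([0.7, 1.2, 2.5])
print(np.allclose(cart_to_sph(sph_to_cart(sph)), sph))  # True
```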
def elu_to_c2w(eye, lookat, up):
if isinstance(eye, list):
eye = np.array(eye)
if isinstance(lookat, list):
lookat = np.array(lookat)
if isinstance(up, list):
up = np.array(up)
l = eye - lookat
if np.linalg.norm(l) < 1e-8:
l[-1] = 1
l = l / np.linalg.norm(l)
s = np.cross(l, up)
if np.linalg.norm(s) < 1e-8:
s[0] = 1
s = s / np.linalg.norm(s)
uu = np.cross(s, l)
rot = np.eye(3)
rot[0, :] = -s
rot[1, :] = uu
rot[2, :] = l
c2w = np.eye(4)
c2w[:3, :3] = rot.T
c2w[:3, 3] = eye
return c2w
def c2w_to_elu(c2w):
w2c = np.linalg.inv(c2w)
eye = c2w[:3, 3]
lookat_dir = -w2c[2, :3]
lookat = eye + lookat_dir
up = w2c[1, :3]
return eye, lookat, up
def qvec_to_rotmat(qvec):
return np.array([
[
1 - 2 * qvec[2]**2 - 2 * qvec[3]**2,
2 * qvec[1] * qvec[2] - 2 * qvec[0] * qvec[3],
2 * qvec[3] * qvec[1] + 2 * qvec[0] * qvec[2]
], [
2 * qvec[1] * qvec[2] + 2 * qvec[0] * qvec[3],
1 - 2 * qvec[1]**2 - 2 * qvec[3]**2,
2 * qvec[2] * qvec[3] - 2 * qvec[0] * qvec[1]
], [
2 * qvec[3] * qvec[1] - 2 * qvec[0] * qvec[2],
2 * qvec[2] * qvec[3] + 2 * qvec[0] * qvec[1],
1 - 2 * qvec[1]**2 - 2 * qvec[2]**2
]
])
def rotmat(a, b):
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
v = np.cross(a, b)
c = np.dot(a, b)
# handle exception for the opposite direction input
if c < -1 + 1e-10:
return rotmat(a + np.random.uniform(-1e-2, 1e-2, 3), b)
s = np.linalg.norm(v)
kmat = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
return np.eye(3) + kmat + kmat.dot(kmat) * ((1 - c) / (s ** 2 + 1e-10))
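A standalone sketch of `rotmat` above (Rodrigues formula built from the cross product; `rot_between` is an illustrative name): the returned matrix should rotate `a` onto the direction of `b`.

```python
import numpy as np

# Rodrigues rotation taking unit vector a to unit vector b:
# R = I + K + K^2 * (1 - cos) / sin^2, with K the cross-product matrix of a x b.
def rot_between(a, b):
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = np.dot(a, b)
    s = np.linalg.norm(v)
    k = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + k + k @ k * ((1 - c) / (s ** 2 + 1e-10))

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 1.0])
R = rot_between(a, b)
print(np.allclose(R @ a, b / np.linalg.norm(b)))  # True
```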
def recenter_cameras(c2ws):
is_list = False
if isinstance(c2ws, list):
is_list = True
c2ws = np.stack(c2ws)
center = c2ws[..., :3, -1].mean(axis=0)
c2ws[..., :3, -1] = c2ws[..., :3, -1] - center
if is_list:
c2ws = [ c2w for c2w in c2ws ]
return c2ws
def rescale_cameras(c2ws, scale):
is_list = False
if isinstance(c2ws, list):
is_list = True
c2ws = np.stack(c2ws)
c2ws[..., :3, -1] *= scale
if is_list:
c2ws = [ c2w for c2w in c2ws ]
return c2ws
================================================
FILE: freesplatter/webui/camera_viewer/visualizer.py
================================================
import os
from PIL import Image
import plotly.graph_objects as go
import numpy as np
def calc_cam_cone_pts_3d(c2w, fov_deg, zoom = 1.0):
fov_rad = np.deg2rad(fov_deg)
cam_x = c2w[0, -1]
cam_y = c2w[1, -1]
cam_z = c2w[2, -1]
corn1 = [np.tan(fov_rad / 2.0), np.tan(fov_rad / 2.0), -1.0]
corn2 = [-np.tan(fov_rad / 2.0), np.tan(fov_rad / 2.0), -1.0]
corn3 = [-np.tan(fov_rad / 2.0), -np.tan(fov_rad / 2.0), -1.0]
corn4 = [np.tan(fov_rad / 2.0), -np.tan(fov_rad / 2.0), -1.0]
corn1 = np.dot(c2w[:3, :3], corn1)
corn2 = np.dot(c2w[:3, :3], corn2)
corn3 = np.dot(c2w[:3, :3], corn3)
corn4 = np.dot(c2w[:3, :3], corn4)
# Now attach as offset to actual 3D camera position:
corn1 = np.array(corn1) / np.linalg.norm(corn1, ord=2) * zoom
corn_x1 = cam_x + corn1[0]
corn_y1 = cam_y + corn1[1]
corn_z1 = cam_z + corn1[2]
corn2 = np.array(corn2) / np.linalg.norm(corn2, ord=2) * zoom
corn_x2 = cam_x + corn2[0]
corn_y2 = cam_y + corn2[1]
corn_z2 = cam_z + corn2[2]
corn3 = np.array(corn3) / np.linalg.norm(corn3, ord=2) * zoom
corn_x3 = cam_x + corn3[0]
corn_y3 = cam_y + corn3[1]
corn_z3 = cam_z + corn3[2]
corn4 = np.array(corn4) / np.linalg.norm(corn4, ord=2) * zoom
corn_x4 = cam_x + corn4[0]
corn_y4 = cam_y + corn4[1]
corn_z4 = cam_z + corn4[2]
xs = [cam_x, corn_x1, corn_x2, corn_x3, corn_x4]
ys = [cam_y, corn_y1, corn_y2, corn_y3, corn_y4]
zs = [cam_z, corn_z1, corn_z2, corn_z3, corn_z4]
return np.array([xs, ys, zs]).T
class CameraVisualizer:
def __init__(self, poses, legends, colors, images=None, mesh_path=None, pc_path=None, camera_x=1.0):
self._fig = None
self._camera_x = camera_x
self._poses = poses
self._legends = legends
self._colors = colors
self._raw_images = None
self._bit_images = None
self._image_colorscale = None
if images is not None:
self._raw_images = images
self._bit_images = []
self._image_colorscale = []
for img in images:
if img is None:
self._bit_images.append(None)
self._image_colorscale.append(None)
continue
bit_img, colorscale = self.encode_image(img)
self._bit_images.append(bit_img)
self._image_colorscale.append(colorscale)
self._mesh = None
if mesh_path is not None and os.path.exists(mesh_path):
import trimesh
self._mesh = trimesh.load(mesh_path, force='mesh')
self._pc = None
if pc_path is not None and os.path.exists(pc_path):
self._pc = np.load(pc_path)
def encode_image(self, raw_image):
'''
:param raw_image (H, W, 3) array of uint8 in [0, 255].
'''
# https://stackoverflow.com/questions/60685749/python-plotly-how-to-add-an-image-to-a-3d-scatter-plot
dum_img = Image.fromarray(np.ones((3, 3, 3), dtype='uint8')).convert('P', palette='WEB')
idx_to_color = np.array(dum_img.getpalette()).reshape((-1, 3))
bit_image = Image.fromarray(raw_image).convert('P', palette='WEB', dither=None)
# bit_image = Image.fromarray(raw_image.clip(0, 254)).convert(
# 'P', palette='WEB', dither=None)
colorscale = [
[i / 255.0, 'rgb({}, {}, {})'.format(*rgb)] for i, rgb in enumerate(idx_to_color)]
return bit_image, colorscale
def update_figure(
self,
scene_bounds,
height=720,
line_width=10,
base_radius=0.0,
zoom_scale=1.0,
fov_deg=50.,
mesh_z_shift=0.0,
mesh_scale=1.0,
show_background=False,
show_grid=False,
show_ticklabels=False,
y_up=False,
):
fig = go.Figure()
for i in range(len(self._poses)):
pose = self._poses[i]
clr = np.array([self._colors[i], self._colors[i]])
legend = self._legends[i]
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (2, 3), (3, 4), (4, 1)]
if isinstance(fov_deg, float) or len(fov_deg) == 1:
fov = fov_deg
else:
fov = fov_deg[i]
cone = calc_cam_cone_pts_3d(pose, fov)
radius = np.linalg.norm(pose[:3, -1])
if self._bit_images and self._bit_images[i]:
raw_image = self._raw_images[i]
bit_image = self._bit_images[i]
colorscale = self._image_colorscale[i]
(H, W, C) = raw_image.shape
z = np.zeros((H, W)) + base_radius
scale = np.linalg.norm(cone[1] - cone[2]) / 2
(x, y) = np.meshgrid(np.linspace(-scale, scale, W), np.linspace(scale, -scale, H) * H / W)
xyz = np.concatenate([x[..., None], y[..., None], z[..., None]], axis=-1)
rot_xyz = np.matmul(xyz, pose[:3, :3].T) + pose[:3, -1]
offset = cone[2] - rot_xyz[0, 0, :]
rot_xyz += offset.reshape((1, 1, 3))
x, y, z = rot_xyz[:, :, 0], rot_xyz[:, :, 1], rot_xyz[:, :, 2]
fig.add_trace(go.Surface(
x=x, y=y, z=z,
surfacecolor=bit_image,
cmin=0,
cmax=255,
colorscale=colorscale,
showscale=False,
lighting_diffuse=1.0,
lighting_ambient=1.0,
lighting_fresnel=1.0,
lighting_roughness=1.0,
# lighting_specular=0.3))
lighting_specular=0,
showlegend=False))
for (j, edge) in enumerate(edges):
(x1, x2) = (cone[edge[0], 0], cone[edge[1], 0])
(y1, y2) = (cone[edge[0], 1], cone[edge[1], 1])
(z1, z2) = (cone[edge[0], 2], cone[edge[1], 2])
fig.add_trace(go.Scatter3d(
x=[x1, x2],
y=[y1, y2],
z=[z1, z2],
mode='lines',
line=dict(color=clr, width=line_width),
showlegend=False))
# Add label.
if cone[0, 2] < 0:
fig.add_trace(go.Scatter3d(
x=[cone[0, 0]], y=[cone[0, 1]], z=[cone[0, 2] - 0.05], showlegend=False,
mode='text', text=legend, textposition='bottom center'))
else:
fig.add_trace(go.Scatter3d(
x=[cone[0, 0]], y=[cone[0, 1]], z=[cone[0, 2] + 0.05], showlegend=False,
mode='text', text=legend, textposition='top center'))
# look at the center of scene
fig.update_layout(
height=height,
autosize=True,
hovermode=False,
margin=go.layout.Margin(l=0, r=0, b=0, t=0),
showlegend=True,
legend=dict(
yanchor='bottom',
y=0.01,
xanchor='right',
x=0.99,
),
scene=dict(
aspectmode='manual',
aspectratio=dict(x=1, y=1, z=1),
camera=dict(
eye=dict(x=1.5, y=1.5, z=1.0),
center=dict(x=0.0, y=0.0, z=0.0),
up=dict(x=0.0, y=0.0, z=1.0)),
xaxis_title='X',
yaxis_title='Z' if y_up else 'Y',
zaxis_title='Y' if y_up else 'Z',
xaxis=dict(
range=[-scene_bounds, scene_bounds],
showticklabels=show_ticklabels,
showgrid=show_grid,
zeroline=False,
showbackground=show_background,
showspikes=False,
showline=False,
ticks=''),
yaxis=dict(
range=[-scene_bounds, scene_bounds],
showticklabels=show_ticklabels,
showgrid=show_grid,
zeroline=False,
showbackground=show_background,
showspikes=False,
showline=False,
ticks=''),
zaxis=dict(
range=[-scene_bounds, scene_bounds],
showticklabels=show_ticklabels,
showgrid=show_grid,
zeroline=False,
showbackground=show_background,
showspikes=False,
showline=False,
ticks='')
)
)
self._fig = fig
return fig
================================================
FILE: freesplatter/webui/gradio_customgs/__init__.py
================================================
from .customgs import CustomGS
__all__ = ['CustomGS']
================================================
FILE: freesplatter/webui/gradio_customgs/customgs.py
================================================
"""gr.Model3D() component."""
from __future__ import annotations
from pathlib import Path
from typing import Callable
from gradio_client.documentation import document
from gradio.components.base import Component
from gradio.data_classes import FileData
from gradio.events import Events
class CustomGS(Component):
"""
Creates a component that allows users to upload or view 3D model files (.obj, .glb, .stl, .gltf, .splat, or .ply).
Demos: model3D
Guides: how-to-use-3D-model-component
"""
EVENTS = [Events.change, Events.upload, Events.edit, Events.clear]
data_model = FileData
def __init__(
self,
value: str | Callable | None = None,
*,
clear_color: tuple[float, float, float, float] | None = None,
camera_position: tuple[
int | float | None, int | float | None, int | float | None
] = (
None,
None,
None,
),
zoom_speed: float = 1,
pan_speed: float = 1,
height: int | str | None = None,
label: str | None = None,
show_label: bool | None = None,
every: float | None = None,
container: bool = True,
scale: int | None = None,
min_width: int = 160,
interactive: bool | None = None,
visible: bool = True,
elem_id: str | None = None,
elem_classes: list[str] | str | None = None,
render: bool = True,
):
"""
Parameters:
value: path to (.obj, .glb, .stl, .gltf, .splat, or .ply) file to show in model3D viewer. If callable, the function will be called whenever the app loads to set the initial value of the component.
clear_color: background color of scene, should be a tuple of 4 floats between 0 and 1 representing RGBA values.
camera_position: initial camera position of scene, provided as a tuple of `(alpha, beta, radius)`. Each value is optional. If provided, `alpha` and `beta` should be in degrees reflecting the angular position along the longitudinal and latitudinal axes, respectively. Radius corresponds to the distance from the center of the object to the camera.
zoom_speed: the speed of zooming in and out of the scene when the cursor wheel is rotated or when screen is pinched on a mobile device. Should be a positive float, increase this value to make zooming faster, decrease to make it slower. Affects the wheelPrecision property of the camera.
pan_speed: the speed of panning the scene when the cursor is dragged or when the screen is dragged on a mobile device. Should be a positive float, increase this value to make panning faster, decrease to make it slower. Affects the panSensibility property of the camera.
height: The height of the model3D component, specified in pixels if a number is passed, or in CSS units if a string is passed.
interactive: if True, will allow users to upload a file; if False, can only be used to display files. If not provided, this is inferred based on whether the component is used as an input or output.
label: The label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.
show_label: if True, will display label.
every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
container: If True, will place the component in a container - providing some extra padding around the border.
scale: relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.
min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
visible: If False, component will be hidden.
elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
render: If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.
"""
self.clear_color = clear_color or [0, 0, 0, 0]
self.camera_position = camera_position
self.height = height
self.zoom_speed = zoom_speed
self.pan_speed = pan_speed
super().__init__(
label=label,
every=every,
show_label=show_label,
container=container,
scale=scale,
min_width=min_width,
interactive=interactive,
visible=visible,
elem_id=elem_id,
elem_classes=elem_classes,
render=render,
value=value,
)
def preprocess(self, payload: FileData | None) -> str | None:
"""
Parameters:
payload: the uploaded file as an instance of `FileData`.
Returns:
Passes the uploaded file as a {str} filepath to the function.
"""
if payload is None:
return payload
return payload.path
def postprocess(self, value: str | Path | None) -> FileData | None:
"""
Parameters:
value: Expects function to return a {str} or {pathlib.Path} filepath of type (.obj, .glb, .stl, .gltf, .splat, or .ply)
Returns:
The uploaded file as an instance of `FileData`.
"""
if value is None:
return value
return FileData(path=str(value), orig_name=Path(value).name)
def process_example(self, input_data: str | Path | None) -> str:
return Path(input_data).name if input_data else ""
def example_inputs(self):
# TODO: Use permanent link
return "https://raw.githubusercontent.com/gradio-app/gradio/main/demo/model3D/files/Fox.gltf"
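The `preprocess`/`postprocess` pair above simply converts between a filepath string and Gradio's `FileData` wrapper. A dependency-free sketch of that round trip, standing in a plain dict for `FileData` so it runs without gradio installed (`postprocess_sketch`/`preprocess_sketch` are hypothetical names):

```python
from pathlib import Path

def postprocess_sketch(value):
    """Mimics CustomGS.postprocess: filepath -> FileData-like record."""
    if value is None:
        return None
    return {"path": str(value), "orig_name": Path(value).name}

def preprocess_sketch(payload):
    """Mimics CustomGS.preprocess: FileData-like record -> filepath str."""
    if payload is None:
        return None
    return payload["path"]

record = postprocess_sketch("/tmp/scene.ply")
print(record["orig_name"])      # scene.ply
print(preprocess_sketch(record))  # /tmp/scene.ply
```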
================================================
FILE: freesplatter/webui/gradio_customgs/customgs.pyi
================================================
"""gr.Model3D() component."""
from __future__ import annotations
from pathlib import Path
from typing import Callable
from gradio_client.documentation import document
from gradio.components.base import Component
from gradio.data_classes import FileData
from gradio.events import Events
from gradio.events import Dependency
class CustomGS(Component):
"""
Creates a component that allows users to upload or view 3D model files (.obj, .glb, .stl, .gltf, .splat, or .ply).
Demos: model3D
Guides: how-to-use-3D-model-component
"""
EVENTS = [Events.change, Events.upload, Events.edit, Events.clear]
data_model = FileData
def __init__(
self,
value: str | Callable | None = None,
*,
clear_color: tuple[float, float, float, float] | None = None,
camera_position: tuple[
int | float | None, int | float | None, int | float | None
] = (
None,
None,
None,
),
zoom_speed: float = 1,
pan_speed: float = 1,
height: int | str | None = None,
label: str | None = None,
show_label: bool | None = None,
every: float | None = None,
container: bool = True,
scale: int | None = None,
min_width: int = 160,
interactive: bool | None = None,
visible: bool = True,
elem_id: str | None = None,
elem_classes: list[str] | str | None = None,
render: bool = True,
):
"""
Parameters:
value: path to (.obj, .glb, .stl, .gltf, .splat, or .ply) file to show in model3D viewer. If callable, the function will be called whenever the app loads to set the initial value of the component.
clear_color: background color of scene, should be a tuple of 4 floats between 0 and 1 representing RGBA values.
camera_position: initial camera position of scene, provided as a tuple of `(alpha, beta, radius)`. Each value is optional. If provided, `alpha` and `beta` should be in degrees reflecting the angular position along the longitudinal and latitudinal axes, respectively. Radius corresponds to the distance from the center of the object to the camera.
zoom_speed: the speed of zooming in and out of the scene when the cursor wheel is rotated or when screen is pinched on a mobile device. Should be a positive float, increase this value to make zooming faster, decrease to make it slower. Affects the wheelPrecision property of the camera.
pan_speed: the speed of panning the scene when the cursor is dragged or when the screen is dragged on a mobile device. Should be a positive float, increase this value to make panning faster, decrease to make it slower. Affects the panSensibility property of the camera.
height: The height of the model3D component, specified in pixels if a number is passed, or in CSS units if a string is passed.
interactive: if True, will allow users to upload a file; if False, can only be used to display files. If not provided, this is inferred based on whether the component is used as an input or output.
label: The label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.
show_label: if True, will display label.
every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
container: If True, will place the component in a container - providing some extra padding around the border.
scale: relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.
min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
visible: If False, component will be hidden.
elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
render: If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.
"""
self.clear_color = clear_color or [0, 0, 0, 0]
self.camera_position = camera_position
self.height = height
self.zoom_speed = zoom_speed
self.pan_speed = pan_speed
super().__init__(
label=label,
every=every,
show_label=show_label,
container=container,
scale=scale,
min_width=min_width,
interactive=interactive,
visible=visible,
elem_id=elem_id,
elem_classes=elem_classes,
render=render,
value=value,
)
def preprocess(self, payload: FileData | None) -> str | None:
"""
Parameters:
payload: the uploaded file as an instance of `FileData`.
Returns:
Passes the uploaded file as a {str} filepath to the function.
"""
if payload is None:
return payload
return payload.path
def postprocess(self, value: str | Path | None) -> FileData | None:
"""
Parameters:
value: Expects function to return a {str} or {pathlib.Path} filepath of type (.obj, .glb, .stl, .gltf, .splat, or .ply)
Returns:
The uploaded file as an instance of `FileData`.
"""
if value is None:
return value
return FileData(path=str(value), orig_name=Path(value).name)
def process_example(self, input_data: str | Path | None) -> str:
return Path(input_data).name if input_data else ""
def example_inputs(self):
# TODO: Use permanent link
return "https://raw.githubusercontent.com/gradio-app/gradio/main/demo/model3D/files/Fox.gltf"
from typing import Callable, Literal, Sequence, Any, TYPE_CHECKING
from gradio.blocks import Block
if TYPE_CHECKING:
from gradio.components import Timer
def change(self,
fn: Callable[..., Any] | None = None,
inputs: Block | Sequence[Block] | set[Block] | None = None,
outputs: Block | Sequence[Block] | None = None,
api_name: str | None | Literal[False] = None,
scroll_to_output: bool = False,
show_progress: Literal["full", "minimal", "hidden"] = "full",
queue: bool | None = None,
batch: bool = False,
max_batch_size: int = 4,
preprocess: bool = True,
postprocess: bool = True,
cancels: dict[str, Any] | list[dict[str, Any]] | None = None,
every: Timer | float | None = None,
trigger_mode: Literal["once", "multiple", "always_last"] | None = None,
js: str | None = None,
concurrency_limit: int | None | Literal["default"] = "default",
concurrency_id: str | None = None,
show_api: bool = True,
) -> Dependency:
"""
Parameters:
fn: the function to call when this event is triggered. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
inputs: list of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
outputs: list of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
api_name: defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, the endpoint will be exposed in the api docs as an unnamed endpoint, although this behavior will be changed in Gradio 4.0. If set to a string, the endpoint will be exposed in the api docs with the given name.
scroll_to_output: if True, will scroll to output component on completion
show_progress: how to show the progress animation while event is running: "full" shows a spinner which covers the output component area as well as a runtime display in the upper right corner, "minimal" only shows the runtime display, "hidden" shows no progress animation at all
queue: if True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
batch: if True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
max_batch_size: maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
preprocess: if False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
postprocess: if False, will not run postprocessing of component data before returning 'fn' output to the browser.
cancels: a list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another component's .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.
every: continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.
trigger_mode: if "once" (default for all events except `.change()`) would not allow any submissions while an event is pending. If set to "multiple", unlimited submissions are allowed while pending, and "always_last" (default for `.change()` and `.key_up()` events) would allow a second submission after the pending event is complete.
js: optional frontend js method to run before running 'fn'. Input arguments for js method are values of 'inputs' and 'outputs', return should be a list of values for output components.
concurrency_limit: if set, this is the maximum number of this event that can be running simultaneously. Can be set to None to mean no concurrency_limit (any number of this event can be running simultaneously). Set to "default" to use the default concurrency limit (defined by the `default_concurrency_limit` parameter in `Blocks.queue()`, which itself is 1 by default).
concurrency_id: if set, this is the id of the concurrency group. Events with the same concurrency_id will be limited by the lowest set concurrency_limit.
show_api: whether to show this event in the "view API" page of the Gradio app, or in the ".view_api()" method of the Gradio clients. Unlike setting api_name to False, setting show_api to False will still allow downstream apps as well as the Clients to use this event. If fn is None, show_api will automatically be set to False.
"""
...
def upload(self,
fn: Callable[..., Any] | None = None,
inputs: Block | Sequence[Block] | set[Block] | None = None,
outputs: Block | Sequence[Block] | None = None,
api_name: str | None | Literal[False] = None,
scroll_to_output: bool = False,
show_progress: Literal["full", "minimal", "hidden"] = "full",
queue: bool | None = None,
batch: bool = False,
max_batch_size: int = 4,
preprocess: bool = True,
postprocess: bool = True,
cancels: dict[str, Any] | list[dict[str, Any]] | None = None,
every: Timer | float | None = None,
trigger_mode: Literal["once", "multiple", "always_last"] | None = None,
js: str | None = None,
concurrency_limit: int | None | Literal["default"] = "default",
concurrency_id: str | None = None,
show_api: bool = True,
) -> Dependency:
"""
Parameters:
fn: the function to call when this event is triggered. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
inputs: list of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
outputs: list of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
api_name: defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, the endpoint will be exposed in the api docs as an unnamed endpoint, although this behavior will be changed in Gradio 4.0. If set to a string, the endpoint will be exposed in the api docs with the given name.
scroll_to_output: if True, will scroll to output component on completion
show_progress: how to show the progress animation while event is running: "full" shows a spinner which covers the output component area as well as a runtime display in the upper right corner, "minimal" only shows the runtime display, "hidden" shows no progress animation at all
queue: if True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
batch: if True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
max_batch_size: maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
preprocess: if False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
postprocess: if False, will not run postprocessing of component data before returning 'fn' output to the browser.
cancels: a list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another component's .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.
every: continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.
trigger_mode: if "once" (default for all events except `.change()`) would not allow any submissions while an event is pending. If set to "multiple", unlimited submissions are allowed while pending, and "always_last" (default for `.change()` and `.key_up()` events) would allow a second submission after the pending event is complete.
js: optional frontend js method to run before running 'fn'. Input arguments for js method are values of 'inputs' and 'outputs', return should be a list of values for output components.
concurrency_limit: if set, this is the maximum number of this event that can be running simultaneously. Can be set to None to mean no concurrency_limit (any number of this event can be running simultaneously). Set to "default" to use the default concurrency limit (defined by the `default_concurrency_limit` parameter in `Blocks.queue()`, which itself is 1 by default).
concurrency_id: if set, this is the id of the concurrency group. Events with the same concurrency_id will be limited by the lowest set concurrency_limit.
show_api: whether to show this event in the "view API" page of the Gradio app, or in the ".view_api()" method of the Gradio clients. Unlike setting api_name to False, setting show_api to False will still allow downstream apps as well as the Clients to use this event. If fn is None, show_api will automatically be set to False.
"""
...
def edit(self,
fn: Callable[..., Any] | None = None,
inputs: Block | Sequence[Block] | set[Block] | None = None,
outputs: Block | Sequence[Block] | None = None,
api_name: str | None | Literal[False] = None,
scroll_to_output: bool = False,
show_progress: Literal["full", "minimal", "hidden"] = "full",
queue: bool | None = None,
batch: bool = False,
max_batch_size: int = 4,
preprocess: bool = True,
postprocess: bool = True,
cancels: dict[str, Any] | list[dict[str, Any]] | None = None,
every: Timer | float | None = None,
trigger_mode: Literal["once", "multiple", "always_last"] | None = None,
js: str | None = None,
concurrency_limit: int | None | Literal["default"] = "default",
concurrency_id: str | None = None,
show_api: bool = True,
) -> Dependency:
"""
Parameters:
fn: the function to call when this event is triggered. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
inputs: list of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
outputs: list of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
api_name: defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, the endpoint will be exposed in the api docs as an unnamed endpoint, although this behavior will be changed in Gradio 4.0. If set to a string, the endpoint will be exposed in the api docs with the given name.
scroll_to_output: if True, will scroll to output component on completion
show_progress: how to show the progress animation while event is running: "full" shows a spinner which covers the output component area as well as a runtime display in the upper right corner, "minimal" only shows the runtime display, "hidden" shows no progress animation at all
queue: if True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
batch: if True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
max_batch_size: maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
preprocess: if False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
postprocess: if False, will not run postprocessing of component data before returning 'fn' output to the browser.
cancels: a list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another component's .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.
every: continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.
trigger_mode: if "once" (default for all events except `.change()`) would not allow any submissions while an event is pending. If set to "multiple", unlimited submissions are allowed while pending, and "always_last" (default for `.change()` and `.key_up()` events) would allow a second submission after the pending event is complete.
js: optional frontend js method to run before running 'fn'. Input arguments for js method are values of 'inputs' and 'outputs', return should be a list of values for output components.
concurrency_limit: if set, this is the maximum number of this event that can be running simultaneously. Can be set to None to mean no concurrency_limit (any number of this event can be running simultaneously). Set to "default" to use the default concurrency limit (defined by the `default_concurrency_limit` parameter in `Blocks.queue()`, which itself is 1 by default).
concurrency_id: if set, this is the id of the concurrency group. Events with the same concurrency_id will be limited by the lowest set concurrency_limit.
show_api: whether to show this event in the "view API" page of the Gradio app, or in the ".view_api()" method of the Gradio clients. Unlike setting api_name to False, setting show_api to False will still allow downstream apps as well as the Clients to use this event. If fn is None, show_api will automatically be set to False.
"""
...
def clear(self,
fn: Callable[..., Any] | None = None,
inputs: Block | Sequence[Block] | set[Block] | None = None,
outputs: Block | Sequence[Block] | None = None,
api_name: str | None | Literal[False] = None,
scroll_to_output: bool = False,
show_progress: Literal["full", "minimal", "hidden"] = "full",
queue: bool | None = None,
batch: bool = False,
max_batch_size: int = 4,
preprocess: bool = True,
postprocess: bool = True,
cancels: dict[str, Any] | list[dict[str, Any]] | None = None,
every: Timer | float | None = None,
trigger_mode: Literal["once", "multiple", "always_last"] | None = None,
js: str | None = None,
concurrency_limit: int | None | Literal["default"] = "default",
concurrency_id: str | None = None,
show_api: bool = True,
) -> Dependency:
"""
Parameters:
fn: the function to call when this event is triggered. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
inputs: list of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
outputs: list of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
api_name: defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, the endpoint will be exposed in the api docs as an unnamed endpoint, although this behavior will be changed in Gradio 4.0. If set to a string, the endpoint will be exposed in the api docs with the given name.
scroll_to_output: if True, will scroll to output component on completion
show_progress: how to show the progress animation while event is running: "full" shows a spinner which covers the output component area as well as a runtime display in the upper right corner, "minimal" only shows the runtime display, "hidden" shows no progress animation at all
queue: if True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
batch: if True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
max_batch_size: maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
preprocess: if False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
postprocess: if False, will not run postprocessing of component data before returning 'fn' output to the browser.
cancels: a list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another component's .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.
every: continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.
trigger_mode: if "once" (default for all events except `.change()`) would not allow any submissions while an event is pending. If set to "multiple", unlimited submissions are allowed while pending, and "always_last" (default for `.change()` and `.key_up()` events) would allow a second submission after the pending event is complete.
js: optional frontend js method to run before running 'fn'. Input arguments for js method are values of 'inputs' and 'outputs', return should be a list of values for output components.
concurrency_limit: if set, this is the maximum number of this event that can be running simultaneously. Can be set to None to mean no concurrency_limit (any number of this event can be running simultaneously). Set to "default" to use the default concurrency limit (defined by the `default_concurrency_limit` parameter in `Blocks.queue()`, which itself is 1 by default).
concurrency_id: if set, this is the id of the concurrency group. Events with the same concurrency_id will be limited by the lowest set concurrency_limit.
show_api: whether to show this event in the "view API" page of the Gradio app, or in the ".view_api()" method of the Gradio clients. Unlike setting api_name to False, setting show_api to False will still allow downstream apps as well as the Clients to use this event. If fn is None, show_api will automatically be set to False.
"""
...
================================================
FILE: freesplatter/webui/gradio_customgs/templates/component/Canvas3D-60a8d213.js
================================================
import { c as Gr, g as av, r as sv } from "./Index-f5583db3.js";
function cv(an, ln) {
for (var Be = 0; Be < ln.length; Be++) {
const A = ln[Be];
if (typeof A != "string" && !Array.isArray(A)) {
for (const f in A)
if (f !== "default" && !(f in an)) {
const V = Object.getOwnPropertyDescriptor(A, f);
V && Object.defineProperty(an, f, V.get ? V : {
enumerable: !0,
get: () => A[f]
});
}
}
}
return Object.freeze(Object.defineProperty(an, Symbol.toStringTag, { value: "Module" }));
}
var Yc = { exports: {} }, of;
function sf() {
return of || (of = 1, function(an, ln) {
(function(Be, A) {
an.exports = A();
})(typeof self < "u" ? self : typeof Gr < "u" ? Gr : Gr, function() {
return function(Be) {
var A = {};
function f(V) {
if (A[V])
return A[V].exports;
var _ = A[V] = { i: V, l: !1, exports: {} };
return Be[V].call(_.exports, _, _.exports, f), _.l = !0, _.exports;
}
return f.m = Be, f.c = A, f.d = function(V, _, C) {
f.o(V, _) || Object.defineProperty(V, _, { enumerable: !0, get: C });
}, f.r = function(V) {
typeof Symbol < "u" && Symbol.toStringTag && Object.defineProperty(V, Symbol.toStringTag, { value: "Module" }), Object.defineProperty(V, "__esModule", { value: !0 });
}, f.t = function(V, _) {
if (1 & _ && (V = f(V)), 8 & _ || 4 & _ && typeof V == "object" && V && V.__esModule)
return V;
var C = /* @__PURE__ */ Object.create(null);
if (f.r(C), Object.defineProperty(C, "default", { enumerable: !0, value: V }), 2 & _ && typeof V != "string")
for (var u in V)
f.d(C, u, (function(I) {
return V[I];
}).bind(null, u));
return C;
}, f.n = function(V) {
var _ = V && V.__esModule ? function() {
return V.default;
} : function() {
return V;
};
return f.d(_, "a", _), _;
}, f.o = function(V, _) {
return Object.prototype.hasOwnProperty.call(V, _);
}, f.p = "", f(f.s = 169);
}([function(Be, A, f) {
f.d(A, "d", function() {
return O;
}), f.d(A, "e", function() {
return x;
}), f.d(A, "f", function() {
return m;
}), f.d(A, "b", function() {
return c;
}), f.d(A, "a", function() {
return T;
}), f.d(A, "c", function() {
return E;
});
var V = f(14), _ = f(28), C = f(44), u = f(11), I = f(74), O = function() {
function g(l, h) {
l === void 0 && (l = 0), h === void 0 && (h = 0), this.x = l, this.y = h;
}
return g.prototype.toString = function() {
return "{X: " + this.x + " Y: " + this.y + "}";
}, g.prototype.getClassName = function() {
return "Vector2";
}, g.prototype.getHashCode = function() {
var l = 0 | this.x;
return l = 397 * l ^ (0 | this.y);
}, g.prototype.toArray = function(l, h) {
return h === void 0 && (h = 0), l[h] = this.x, l[h + 1] = this.y, this;
}, g.prototype.fromArray = function(l, h) {
return h === void 0 && (h = 0), g.FromArrayToRef(l, h, this), this;
}, g.prototype.asArray = function() {
var l = new Array();
return this.toArray(l, 0), l;
}, g.prototype.copyFrom = function(l) {
return this.x = l.x, this.y = l.y, this;
}, g.prototype.copyFromFloats = function(l, h) {
return this.x = l, this.y = h, this;
}, g.prototype.set = function(l, h) {
return this.copyFromFloats(l, h);
}, g.prototype.add = function(l) {
return new g(this.x + l.x, this.y + l.y);
}, g.prototype.addToRef = function(l, h) {
return h.x = this.x + l.x, h.y = this.y + l.y, this;
}, g.prototype.addInPlace = function(l) {
return this.x += l.x, this.y += l.y, this;
}, g.prototype.addVector3 = function(l) {
return new g(this.x + l.x, this.y + l.y);
}, g.prototype.subtract = function(l) {
return new g(this.x - l.x, this.y - l.y);
}, g.prototype.subtractToRef = function(l, h) {
return h.x = this.x - l.x, h.y = this.y - l.y, this;
}, g.prototype.subtractInPlace = function(l) {
return this.x -= l.x, this.y -= l.y, this;
}, g.prototype.multiplyInPlace = function(l) {
return this.x *= l.x, this.y *= l.y, this;
}, g.prototype.multiply = function(l) {
return new g(this.x * l.x, this.y * l.y);
}, g.prototype.multiplyToRef = function(l, h) {
return h.x = this.x * l.x, h.y = this.y * l.y, this;
}, g.prototype.multiplyByFloats = function(l, h) {
return new g(this.x * l, this.y * h);
}, g.prototype.divide = function(l) {
return new g(this.x / l.x, this.y / l.y);
}, g.prototype.divideToRef = function(l, h) {
return h.x = this.x / l.x, h.y = this.y / l.y, this;
}, g.prototype.divideInPlace = function(l) {
return this.divideToRef(l, this);
}, g.prototype.negate = function() {
return new g(-this.x, -this.y);
}, g.prototype.negateInPlace = function() {
return this.x *= -1, this.y *= -1, this;
}, g.prototype.negateToRef = function(l) {
return l.copyFromFloats(-1 * this.x, -1 * this.y);
}, g.prototype.scaleInPlace = function(l) {
return this.x *= l, this.y *= l, this;
}, g.prototype.scale = function(l) {
var h = new g(0, 0);
return this.scaleToRef(l, h), h;
}, g.prototype.scaleToRef = function(l, h) {
return h.x = this.x * l, h.y = this.y * l, this;
}, g.prototype.scaleAndAddToRef = function(l, h) {
return h.x += this.x * l, h.y += this.y * l, this;
}, g.prototype.equals = function(l) {
return l && this.x === l.x && this.y === l.y;
}, g.prototype.equalsWithEpsilon = function(l, h) {
return h === void 0 && (h = _.a), l && V.a.WithinEpsilon(this.x, l.x, h) && V.a.WithinEpsilon(this.y, l.y, h);
}, g.prototype.floor = function() {
return new g(Math.floor(this.x), Math.floor(this.y));
}, g.prototype.fract = function() {
return new g(this.x - Math.floor(this.x), this.y - Math.floor(this.y));
}, g.prototype.length = function() {
return Math.sqrt(this.x * this.x + this.y * this.y);
}, g.prototype.lengthSquared = function() {
return this.x * this.x + this.y * this.y;
}, g.prototype.normalize = function() {
var l = this.length();
return l === 0 || (this.x /= l, this.y /= l), this;
}, g.prototype.clone = function() {
return new g(this.x, this.y);
}, g.Zero = function() {
return new g(0, 0);
}, g.One = function() {
return new g(1, 1);
}, g.FromArray = function(l, h) {
return h === void 0 && (h = 0), new g(l[h], l[h + 1]);
}, g.FromArrayToRef = function(l, h, v) {
v.x = l[h], v.y = l[h + 1];
}, g.CatmullRom = function(l, h, v, b, D) {
var w = D * D, N = D * w;
return new g(0.5 * (2 * h.x + (-l.x + v.x) * D + (2 * l.x - 5 * h.x + 4 * v.x - b.x) * w + (-l.x + 3 * h.x - 3 * v.x + b.x) * N), 0.5 * (2 * h.y + (-l.y + v.y) * D + (2 * l.y - 5 * h.y + 4 * v.y - b.y) * w + (-l.y + 3 * h.y - 3 * v.y + b.y) * N));
}, g.Clamp = function(l, h, v) {
var b = l.x;
b = (b = b > v.x ? v.x : b) < h.x ? h.x : b;
var D = l.y;
return new g(b, D = (D = D > v.y ? v.y : D) < h.y ? h.y : D);
}, g.Hermite = function(l, h, v, b, D) {
var w = D * D, N = D * w, M = 2 * N - 3 * w + 1, U = -2 * N + 3 * w, X = N - 2 * w + D, j = N - w;
return new g(l.x * M + v.x * U + h.x * X + b.x * j, l.y * M + v.y * U + h.y * X + b.y * j);
}, g.Lerp = function(l, h, v) {
return new g(l.x + (h.x - l.x) * v, l.y + (h.y - l.y) * v);
}, g.Dot = function(l, h) {
return l.x * h.x + l.y * h.y;
}, g.Normalize = function(l) {
var h = l.clone();
return h.normalize(), h;
}, g.Minimize = function(l, h) {
return new g(l.x < h.x ? l.x : h.x, l.y < h.y ? l.y : h.y);
}, g.Maximize = function(l, h) {
return new g(l.x > h.x ? l.x : h.x, l.y > h.y ? l.y : h.y);
}, g.Transform = function(l, h) {
var v = g.Zero();
return g.TransformToRef(l, h, v), v;
}, g.TransformToRef = function(l, h, v) {
var b = h.m, D = l.x * b[0] + l.y * b[4] + b[12], w = l.x * b[1] + l.y * b[5] + b[13];
v.x = D, v.y = w;
}, g.PointInTriangle = function(l, h, v, b) {
var D = 0.5 * (-v.y * b.x + h.y * (-v.x + b.x) + h.x * (v.y - b.y) + v.x * b.y), w = D < 0 ? -1 : 1, N = (h.y * b.x - h.x * b.y + (b.y - h.y) * l.x + (h.x - b.x) * l.y) * w, M = (h.x * v.y - h.y * v.x + (h.y - v.y) * l.x + (v.x - h.x) * l.y) * w;
return N > 0 && M > 0 && N + M < 2 * D * w;
}, g.Distance = function(l, h) {
return Math.sqrt(g.DistanceSquared(l, h));
}, g.DistanceSquared = function(l, h) {
var v = l.x - h.x, b = l.y - h.y;
return v * v + b * b;
}, g.Center = function(l, h) {
var v = l.add(h);
return v.scaleInPlace(0.5), v;
}, g.DistanceOfPointFromSegment = function(l, h, v) {
var b = g.DistanceSquared(h, v);
if (b === 0)
return g.Distance(l, h);
var D = v.subtract(h), w = Math.max(0, Math.min(1, g.Dot(l.subtract(h), D) / b)), N = h.add(D.multiplyByFloats(w, w));
return g.Distance(l, N);
}, g;
}(), x = function() {
function g(l, h, v) {
l === void 0 && (l = 0), h === void 0 && (h = 0), v === void 0 && (v = 0), this._isDirty = !0, this._x = l, this._y = h, this._z = v;
}
return Object.defineProperty(g.prototype, "x", { get: function() {
return this._x;
}, set: function(l) {
this._x = l, this._isDirty = !0;
}, enumerable: !1, configurable: !0 }), Object.defineProperty(g.prototype, "y", { get: function() {
return this._y;
}, set: function(l) {
this._y = l, this._isDirty = !0;
}, enumerable: !1, configurable: !0 }), Object.defineProperty(g.prototype, "z", { get: function() {
return this._z;
}, set: function(l) {
this._z = l, this._isDirty = !0;
}, enumerable: !1, configurable: !0 }), g.prototype.toString = function() {
return "{X: " + this._x + " Y:" + this._y + " Z:" + this._z + "}";
}, g.prototype.getClassName = function() {
return "Vector3";
}, g.prototype.getHashCode = function() {
var l = 0 | this._x;
return l = 397 * (l = 397 * l ^ (0 | this._y)) ^ (0 | this._z);
}, g.prototype.asArray = function() {
var l = [];
return this.toArray(l, 0), l;
}, g.prototype.toArray = function(l, h) {
return h === void 0 && (h = 0), l[h] = this._x, l[h + 1] = this._y, l[h + 2] = this._z, this;
}, g.prototype.fromArray = function(l, h) {
return h === void 0 && (h = 0), g.FromArrayToRef(l, h, this), this;
}, g.prototype.toQuaternion = function() {
return c.RotationYawPitchRoll(this._y, this._x, this._z);
}, g.prototype.addInPlace = function(l) {
return this.addInPlaceFromFloats(l._x, l._y, l._z);
}, g.prototype.addInPlaceFromFloats = function(l, h, v) {
return this.x += l, this.y += h, this.z += v, this;
}, g.prototype.add = function(l) {
return new g(this._x + l._x, this._y + l._y, this._z + l._z);
}, g.prototype.addToRef = function(l, h) {
return h.copyFromFloats(this._x + l._x, this._y + l._y, this._z + l._z);
}, g.prototype.subtractInPlace = function(l) {
return this.x -= l._x, this.y -= l._y, this.z -= l._z, this;
}, g.prototype.subtract = function(l) {
return new g(this._x - l._x, this._y - l._y, this._z - l._z);
}, g.prototype.subtractToRef = function(l, h) {
return this.subtractFromFloatsToRef(l._x, l._y, l._z, h);
}, g.prototype.subtractFromFloats = function(l, h, v) {
return new g(this._x - l, this._y - h, this._z - v);
}, g.prototype.subtractFromFloatsToRef = function(l, h, v, b) {
return b.copyFromFloats(this._x - l, this._y - h, this._z - v);
}, g.prototype.negate = function() {
return new g(-this._x, -this._y, -this._z);
}, g.prototype.negateInPlace = function() {
return this.x *= -1, this.y *= -1, this.z *= -1, this;
}, g.prototype.negateToRef = function(l) {
return l.copyFromFloats(-1 * this._x, -1 * this._y, -1 * this._z);
}, g.prototype.scaleInPlace = function(l) {
return this.x *= l, this.y *= l, this.z *= l, this;
}, g.prototype.scale = function(l) {
return new g(this._x * l, this._y * l, this._z * l);
}, g.prototype.scaleToRef = function(l, h) {
return h.copyFromFloats(this._x * l, this._y * l, this._z * l);
}, g.prototype.scaleAndAddToRef = function(l, h) {
return h.addInPlaceFromFloats(this._x * l, this._y * l, this._z * l);
}, g.prototype.projectOnPlane = function(l, h) {
var v = g.Zero();
return this.projectOnPlaneToRef(l, h, v), v;
}, g.prototype.projectOnPlaneToRef = function(l, h, v) {
var b = l.normal, D = l.d, w = S.Vector3[0];
this.subtractToRef(h, w), w.normalize();
var N = g.Dot(w, b), M = -(g.Dot(h, b) + D) / N, U = w.scaleInPlace(M);
h.addToRef(U, v);
}, g.prototype.equals = function(l) {
return l && this._x === l._x && this._y === l._y && this._z === l._z;
}, g.prototype.equalsWithEpsilon = function(l, h) {
return h === void 0 && (h = _.a), l && V.a.WithinEpsilon(this._x, l._x, h) && V.a.WithinEpsilon(this._y, l._y, h) && V.a.WithinEpsilon(this._z, l._z, h);
}, g.prototype.equalsToFloats = function(l, h, v) {
return this._x === l && this._y === h && this._z === v;
}, g.prototype.multiplyInPlace = function(l) {
return this.x *= l._x, this.y *= l._y, this.z *= l._z, this;
}, g.prototype.multiply = function(l) {
return this.multiplyByFloats(l._x, l._y, l._z);
}, g.prototype.multiplyToRef = function(l, h) {
return h.copyFromFloats(this._x * l._x, this._y * l._y, this._z * l._z);
}, g.prototype.multiplyByFloats = function(l, h, v) {
return new g(this._x * l, this._y * h, this._z * v);
}, g.prototype.divide = function(l) {
return new g(this._x / l._x, this._y / l._y, this._z / l._z);
}, g.prototype.divideToRef = function(l, h) {
return h.copyFromFloats(this._x / l._x, this._y / l._y, this._z / l._z);
}, g.prototype.divideInPlace = function(l) {
return this.divideToRef(l, this);
}, g.prototype.minimizeInPlace = function(l) {
return this.minimizeInPlaceFromFloats(l._x, l._y, l._z);
}, g.prototype.maximizeInPlace = function(l) {
return this.maximizeInPlaceFromFloats(l._x, l._y, l._z);
}, g.prototype.minimizeInPlaceFromFloats = function(l, h, v) {
return l < this._x && (this.x = l), h < this._y && (this.y = h), v < this._z && (this.z = v), this;
}, g.prototype.maximizeInPlaceFromFloats = function(l, h, v) {
return l > this._x && (this.x = l), h > this._y && (this.y = h), v > this._z && (this.z = v), this;
}, g.prototype.isNonUniformWithinEpsilon = function(l) {
var h = Math.abs(this._x), v = Math.abs(this._y);
if (!V.a.WithinEpsilon(h, v, l))
return !0;
var b = Math.abs(this._z);
return !V.a.WithinEpsilon(h, b, l) || !V.a.WithinEpsilon(v, b, l);
}, Object.defineProperty(g.prototype, "isNonUniform", { get: function() {
var l = Math.abs(this._x);
return l !== Math.abs(this._y) || l !== Math.abs(this._z);
}, enumerable: !1, configurable: !0 }), g.prototype.floor = function() {
return new g(Math.floor(this._x), Math.floor(this._y), Math.floor(this._z));
}, g.prototype.fract = function() {
return new g(this._x - Math.floor(this._x), this._y - Math.floor(this._y), this._z - Math.floor(this._z));
}, g.prototype.length = function() {
return Math.sqrt(this._x * this._x + this._y * this._y + this._z * this._z);
}, g.prototype.lengthSquared = function() {
return this._x * this._x + this._y * this._y + this._z * this._z;
}, g.prototype.normalize = function() {
return this.normalizeFromLength(this.length());
}, g.prototype.reorderInPlace = function(l) {
var h = this;
return (l = l.toLowerCase()) === "xyz" || (S.Vector3[0].copyFrom(this), ["x", "y", "z"
gitextract_e5wtol89/
├── .gitignore
├── LICENSE.txt
├── README.md
├── app.py
├── configs/
│   ├── freesplatter-object-2dgs.yaml
│   ├── freesplatter-object.yaml
│   └── freesplatter-scene.yaml
├── freesplatter/
│   ├── __init__.py
│   ├── hunyuan/
│   │   ├── __init__.py
│   │   ├── hunyuan3d_mvd_std_pipeline.py
│   │   └── utils.py
│   ├── models/
│   │   ├── __init__.py
│   │   ├── model.py
│   │   ├── renderer/
│   │   │   ├── __init__.py
│   │   │   ├── gaussian_renderer.py
│   │   │   └── gaussian_utils.py
│   │   ├── renderer_2dgs/
│   │   │   ├── __init__.py
│   │   │   ├── gaussian_renderer.py
│   │   │   └── gaussian_utils.py
│   │   └── transformer.py
│   ├── utils/
│   │   ├── __init__.py
│   │   ├── camera_util.py
│   │   ├── geometry_util.py
│   │   ├── infer_util.py
│   │   ├── mesh_optim.py
│   │   └── recon_util.py
│   └── webui/
│       ├── __init__.py
│       ├── camera_viewer/
│       │   ├── __init__.py
│       │   ├── utils.py
│       │   └── visualizer.py
│       ├── gradio_customgs/
│       │   ├── __init__.py
│       │   ├── customgs.py
│       │   ├── customgs.pyi
│       │   └── templates/
│       │       ├── component/
│       │       │   ├── Canvas3D-60a8d213.js
│       │       │   ├── Canvas3DGS-0fbc0d9a.js
│       │       │   ├── Index-f5583db3.js
│       │       │   ├── __vite-browser-external-2447137e.js
│       │       │   ├── index.js
│       │       │   ├── style.css
│       │       │   └── wrapper-6f348d45-19fa94bf.js
│       │       └── example/
│       │           ├── index.js
│       │           └── style.css
│       ├── gradio_custommodel3d/
│       │   ├── __init__.py
│       │   ├── custommodel3d.py
│       │   ├── custommodel3d.pyi
│       │   └── templates/
│       │       ├── component/
│       │       │   ├── Canvas3D-e42d3d6b.js
│       │       │   ├── Canvas3DGS-f5539f54.js
│       │       │   ├── Index-0bb1de05.js
│       │       │   ├── __vite-browser-external-2447137e.js
│       │       │   ├── index.js
│       │       │   ├── style.css
│       │       │   └── wrapper-6f348d45-f837cf34.js
│       │       └── example/
│       │           ├── index.js
│       │           └── style.css
│       ├── parameters.py
│       ├── runner.py
│       ├── shared_opts.py
│       ├── style.css
│       ├── tab_img_to_3d.py
│       ├── tab_instant3d.py
│       ├── tab_text_to_img_to_3d.py
│       ├── tab_views_to_3d.py
│       └── tab_views_to_scene.py
└── requirements.txt
SYMBOL INDEX (1580 symbols across 37 files)
FILE: freesplatter/hunyuan/hunyuan3d_mvd_std_pipeline.py
function scale_latents (line 81) | def scale_latents(latents): return (latents - 0.22) * 0.75
function unscale_latents (line 82) | def unscale_latents(latents): return (latents / 0.75) + 0.22
function scale_image (line 83) | def scale_image(image): return (image - 0.5) / 0.5
function scale_image_2 (line 84) | def scale_image_2(image): return (image * 0.5) / 0.8
function unscale_image (line 85) | def unscale_image(image): return (image * 0.5) + 0.5
function unscale_image_2 (line 86) | def unscale_image_2(image): return (image * 0.8) / 0.5
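The six one-liners above come in inverse pairs (latents, images, and the second image scaling). Copying them verbatim, a quick round-trip check confirms each pair undoes the other:

```python
# copied verbatim from hunyuan3d_mvd_std_pipeline.py (lines 81-86)
def scale_latents(latents): return (latents - 0.22) * 0.75
def unscale_latents(latents): return (latents / 0.75) + 0.22
def scale_image(image): return (image - 0.5) / 0.5
def unscale_image(image): return (image * 0.5) + 0.5
def scale_image_2(image): return (image * 0.5) / 0.8
def unscale_image_2(image): return (image * 0.8) / 0.5

x = 0.37
assert abs(unscale_latents(scale_latents(x)) - x) < 1e-9
assert abs(unscale_image(scale_image(x)) - x) < 1e-9
assert abs(unscale_image_2(scale_image_2(x)) - x) < 1e-9
```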
class ReferenceOnlyAttnProc (line 91) | class ReferenceOnlyAttnProc(torch.nn.Module):
method __init__ (line 92) | def __init__(self, chained_proc, enabled=False, name=None):
method __call__ (line 98) | def __call__(self, attn, hidden_states, encoder_hidden_states=None, at...
class RefOnlyNoisedUNet (line 107) | class RefOnlyNoisedUNet(torch.nn.Module):
method __init__ (line 108) | def __init__(self, unet, scheduler) -> None:
method __getattr__ (line 123) | def __getattr__(self, name: str):
method forward (line 129) | def forward(
class HunYuan3D_MVD_Std_Pipeline (line 185) | class HunYuan3D_MVD_Std_Pipeline(diffusers.DiffusionPipeline):
method __init__ (line 186) | def __init__(
method prepare (line 212) | def prepare(self):
method encode_image (line 217) | def encode_image(self, image: torch.Tensor, scale_factor: bool = False):
method prepare_latents (line 222) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
method _get_add_time_ids (line 244) | def _get_add_time_ids(
method prepare_extra_step_kwargs (line 264) | def prepare_extra_step_kwargs(self, generator, eta):
method guidance_scale (line 280) | def guidance_scale(self):
method interrupt (line 284) | def interrupt(self):
method do_classifier_free_guidance (line 288) | def do_classifier_free_guidance(self):
method __call__ (line 292) | def __call__(
method save_pretrained (line 459) | def save_pretrained(self, save_directory):
method from_pretrained (line 466) | def from_pretrained(cls, pretrained_model_name_or_path, *model_args, *...
FILE: freesplatter/hunyuan/utils.py
function to_rgb_image (line 26) | def to_rgb_image(maybe_rgba: Image.Image):
function white_out_background (line 43) | def white_out_background(pil_img, is_gray_fg=True):
function recenter_img (line 59) | def recenter_img(img, size=512, color=(255,255,255)):
FILE: freesplatter/models/model.py
function RGB2SH (line 13) | def RGB2SH(rgb):
class FreeSplatterModel (line 17) | class FreeSplatterModel(nn.Module):
method __init__ (line 18) | def __init__(
method forward_gaussians (line 41) | def forward_gaussians(self, images, **kwargs):
method forward_renderer (line 56) | def forward_renderer(self, gaussians, c2ws, fxfycxcy, **kwargs):
method estimate_focals (line 67) | def estimate_focals(
method estimate_poses (line 102) | def estimate_poses(
FILE: freesplatter/models/renderer/gaussian_renderer.py
class GaussianRenderer (line 6) | class GaussianRenderer:
method __init__ (line 7) | def __init__(self, renderer_config=None):
method render (line 24) | def render(self, latent, output_fxfycxcy, output_c2ws, rescale=None, r...
FILE: freesplatter/models/renderer/gaussian_utils.py
function strip_lowerdiag (line 19) | def strip_lowerdiag(L):
function strip_symmetric (line 31) | def strip_symmetric(sym):
function build_rotation (line 35) | def build_rotation(r):
function build_scaling_rotation (line 61) | def build_scaling_rotation(s, r):
class Camera (line 73) | class Camera(nn.Module):
method __init__ (line 74) | def __init__(self, C2W, fxfycxcy, h, w):
class GaussianModel (line 130) | class GaussianModel:
method setup_functions (line 131) | def setup_functions(self, scaling_activation_type='sigmoid', scale_min...
method __init__ (line 156) | def __init__(self, sh_degree: int, scaling_activation_type='exp', scal...
method set_data (line 169) | def set_data(self, xyz, features, scaling, rotation, opacity, rescale=...
method to (line 184) | def to(self, device):
method get_scaling (line 195) | def get_scaling(self):
method get_rotation (line 206) | def get_rotation(self):
method get_xyz (line 210) | def get_xyz(self):
method get_features (line 215) | def get_features(self):
method get_opacity (line 224) | def get_opacity(self):
method get_covariance (line 227) | def get_covariance(self, scaling_modifier=1):
method construct_list_of_attributes (line 232) | def construct_list_of_attributes(self, num_rest=0):
method save_ply_vis (line 246) | def save_ply_vis(self, path):
method save_ply (line 270) | def save_ply(self, path):
function render (line 325) | def render(
FILE: freesplatter/models/renderer_2dgs/gaussian_renderer.py
class GaussianRenderer (line 6) | class GaussianRenderer:
method __init__ (line 7) | def __init__(self, renderer_config=None):
method render (line 24) | def render(self, latent, output_fxfycxcy, output_c2ws, render_size=None):
FILE: freesplatter/models/renderer_2dgs/gaussian_utils.py
function strip_lowerdiag (line 19) | def strip_lowerdiag(L):
function strip_symmetric (line 31) | def strip_symmetric(sym):
function build_rotation (line 35) | def build_rotation(r):
function build_scaling_rotation (line 61) | def build_scaling_rotation(s, r):
function build_covariance_from_scaling_rotation (line 73) | def build_covariance_from_scaling_rotation(scaling, scaling_modifier, ro...
function depths_to_points (line 80) | def depths_to_points(view, depthmap):
function depth_to_normal (line 98) | def depth_to_normal(view, depth):
class Camera (line 112) | class Camera(nn.Module):
method __init__ (line 113) | def __init__(self, C2W, fxfycxcy, h, w):
class GaussianModel (line 169) | class GaussianModel:
method setup_functions (line 170) | def setup_functions(self, scaling_activation_type='sigmoid', scale_min...
method __init__ (line 190) | def __init__(self, sh_degree: int, scaling_activation_type='exp', scal...
method set_data (line 203) | def set_data(self, xyz, features, scaling, rotation, opacity):
method to (line 215) | def to(self, device):
method get_scaling (line 226) | def get_scaling(self):
method get_rotation (line 236) | def get_rotation(self):
method get_xyz (line 240) | def get_xyz(self):
method get_features (line 244) | def get_features(self):
method get_opacity (line 253) | def get_opacity(self):
method get_covariance (line 256) | def get_covariance(self, scaling_modifier=1):
method construct_list_of_attributes (line 261) | def construct_list_of_attributes(self, num_rest=0):
method save_ply_vis (line 275) | def save_ply_vis(self, path):
method save_ply (line 299) | def save_ply(self, path):
function render (line 354) | def render(
FILE: freesplatter/models/transformer.py
function exists (line 10) | def exists(val):
function default (line 14) | def default(val, d):
class CrossAttention (line 20) | class CrossAttention(nn.Module):
method __init__ (line 21) | def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, ...
method forward (line 37) | def forward(self, x, context=None, mask=None):
class BasicTransformerBlock (line 53) | class BasicTransformerBlock(nn.Module):
method __init__ (line 54) | def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None,...
method forward (line 68) | def forward(self, x, context=None):
class Transformer (line 75) | class Transformer(nn.Module):
method __init__ (line 76) | def __init__(
method _init_weights (line 114) | def _init_weights(self, m):
method interpolate_pos_encoding (line 124) | def interpolate_pos_encoding(self, x, w, h):
method forward (line 145) | def forward(self, images):
FILE: freesplatter/utils/camera_util.py
function normalize_vecs (line 5) | def normalize_vecs(vectors: torch.Tensor) -> torch.Tensor:
function blender_to_opencv (line 12) | def blender_to_opencv(camera_matrix: torch.Tensor):
function pad_camera_extrinsics_4x4 (line 25) | def pad_camera_extrinsics_4x4(extrinsics):
function create_camera_to_world (line 35) | def create_camera_to_world(camera_position: torch.Tensor, look_at: torch...
function FOV_to_intrinsics (line 76) | def FOV_to_intrinsics(fov, device='cpu'):
function normalize_cameras (line 87) | def normalize_cameras(extrinsics, camera_position: torch.Tensor = None, ...
FILE: freesplatter/utils/geometry_util.py
function normalize_intrinsics (line 8) | def normalize_intrinsics(intrinsics, image_shape):
function unnormalize_intrinsics (line 16) | def unnormalize_intrinsics(intrinsics, image_shape):
function homogenize_points (line 26) | def homogenize_points(points):
function normalize_homogenous_points (line 31) | def normalize_homogenous_points(points):
function pixel_space_to_camera_space (line 36) | def pixel_space_to_camera_space(pixel_space_points, depth, intrinsics):
function camera_space_to_world_space (line 54) | def camera_space_to_world_space(camera_space_points, c2w):
function camera_space_to_pixel_space (line 70) | def camera_space_to_pixel_space(camera_space_points, intrinsics):
function world_space_to_camera_space (line 86) | def world_space_to_camera_space(world_space_points, c2w):
function unproject_depth (line 102) | def unproject_depth(depth, intrinsics, c2w):
function calculate_in_frustum_mask (line 134) | def calculate_in_frustum_mask(depth_1, intrinsics_1, c2w_1, depth_2, int...
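The helper names above (`pixel_space_to_camera_space` / `camera_space_to_pixel_space`) suggest the standard pinhole camera convention. A hedged, scalar sketch of that convention follows; the actual implementations in `geometry_util.py` operate on batched tensors and may differ in layout, so treat this as illustrative only:

```python
# Pinhole model sketch (assumed convention, not the repo's exact code):
# intrinsics are focal lengths (fx, fy) and principal point (cx, cy).
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with depth z into camera space."""
    return ((u - cx) / fx * depth, (v - cy) / fy * depth, depth)

def camera_to_pixel(x, y, z, fx, fy, cx, cy):
    """Project a camera-space point back onto the image plane."""
    return (fx * x / z + cx, fy * y / z + cy)

# Round trip: projecting a lifted pixel recovers the original coordinates.
x, y, z = pixel_to_camera(100.0, 50.0, 2.0, 500.0, 500.0, 320.0, 240.0)
u, v = camera_to_pixel(x, y, z, 500.0, 500.0, 320.0, 240.0)
```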
FILE: freesplatter/utils/infer_util.py
function instantiate_from_config (line 13) | def instantiate_from_config(config):
function get_obj_from_str (line 23) | def get_obj_from_str(string, reload=False):
function resize_without_crop (line 31) | def resize_without_crop(pil_image, target_width, target_height):
function numpy2pytorch (line 37) | def numpy2pytorch(imgs):
function remove_background (line 44) | def remove_background(
function remove_background (line 72) | def remove_background(
function resize_foreground (line 114) | def resize_foreground(
function rgba_to_white_background (line 155) | def rgba_to_white_background(image: PIL.Image.Image) -> torch.Tensor:
function save_video (line 163) | def save_video(
FILE: freesplatter/utils/mesh_optim.py
function parametrize_mesh (line 15) | def parametrize_mesh(vertices: np.array, faces: np.array):
function bake_texture (line 31) | def bake_texture(
function optimize_mesh (line 153) | def optimize_mesh(
FILE: freesplatter/utils/recon_util.py
function fibonacci_sampling_on_sphere (line 16) | def fibonacci_sampling_on_sphere(num_samples=1):
function get_fibonacci_cameras (line 33) | def get_fibonacci_cameras(N=20, radius=2.0, device='cuda'):
function get_circular_cameras (line 61) | def get_circular_cameras(N=120, elevation=0, radius=2.0, normalize=True,...
function rgbd_to_mesh (line 83) | def rgbd_to_mesh(images, depths, c2ws, fov, mesh_path, cam_elev_thr=0):
function viewmatrix (line 143) | def viewmatrix(lookdir, up, position):
function normalize (line 152) | def normalize(x):
function generate_interpolated_path (line 157) | def generate_interpolated_path(poses, n_interp, spline_degree=5,
function xy_grid (line 207) | def xy_grid(W, H, device=None, origin=(0, 0), unsqueeze=None, cat_dim=-1...
function estimate_focal (line 232) | def estimate_focal(pts3d, pp=None, mask=None, min_focal=0., max_focal=np...
function fast_pnp (line 279) | def fast_pnp(pts3d, mask, focal=None, pp=None, niter_PnP=10):
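`fibonacci_sampling_on_sphere` refers to a well-known scheme for near-uniform point distribution on a sphere. A generic sketch of that scheme is below; the repo's exact ordering, axes, and offsets may differ:

```python
import math

def fibonacci_sphere(n):
    """Near-uniform samples on the unit sphere via the golden angle
    (illustrative; not a copy of recon_util.fibonacci_sampling_on_sphere)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    pts = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # z strata uniform in (-1, 1)
        r = math.sqrt(1.0 - z * z)             # radius of the z-slice
        theta = golden * i                     # spiral azimuth
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts
```

Every returned point lies on the unit sphere by construction, since x² + y² = 1 − z².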
FILE: freesplatter/webui/camera_viewer/utils.py
function load_image (line 6) | def load_image(fpath, sz=256):
function spherical_to_cartesian (line 12) | def spherical_to_cartesian(sph):
function cartesian_to_spherical (line 23) | def cartesian_to_spherical(xyz):
function elu_to_c2w (line 33) | def elu_to_c2w(eye, lookat, up):
function c2w_to_elu (line 65) | def c2w_to_elu(c2w):
function qvec_to_rotmat (line 76) | def qvec_to_rotmat(qvec):
function rotmat (line 94) | def rotmat(a, b):
function recenter_cameras (line 106) | def recenter_cameras(c2ws):
function rescale_cameras (line 122) | def rescale_cameras(c2ws, scale):
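The `spherical_to_cartesian` / `cartesian_to_spherical` pair above implements a standard coordinate conversion. The formulas below are a textbook sketch under an assumed z-up, polar-angle convention; the repo's version may use a different axis convention (e.g. elevation instead of polar angle), so this is illustrative only:

```python
import math

def sph_to_cart(theta, phi, r):
    # theta: azimuth around +z, phi: polar angle from +z, r: radius
    # (assumed convention, not necessarily the repo's)
    return (r * math.sin(phi) * math.cos(theta),
            r * math.sin(phi) * math.sin(theta),
            r * math.cos(phi))

def cart_to_sph(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    return (math.atan2(y, x), math.acos(z / r), r)

# Round trip recovers the original angles for phi in (0, pi).
t, p, r = cart_to_sph(*sph_to_cart(0.7, 1.1, 2.0))
```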
FILE: freesplatter/webui/camera_viewer/visualizer.py
function calc_cam_cone_pts_3d (line 8) | def calc_cam_cone_pts_3d(c2w, fov_deg, zoom = 1.0):
class CameraVisualizer (line 52) | class CameraVisualizer:
method __init__ (line 54) | def __init__(self, poses, legends, colors, images=None, mesh_path=None...
method encode_image (line 90) | def encode_image(self, raw_image):
method update_figure (line 108) | def update_figure(
FILE: freesplatter/webui/gradio_customgs/customgs.py
class CustomGS (line 15) | class CustomGS(Component):
method __init__ (line 27) | def __init__(
method preprocess (line 94) | def preprocess(self, payload: FileData | None) -> str | None:
method postprocess (line 105) | def postprocess(self, value: str | Path | None) -> FileData | None:
method process_example (line 116) | def process_example(self, input_data: str | Path | None) -> str:
method example_inputs (line 119) | def example_inputs(self):
FILE: freesplatter/webui/gradio_customgs/customgs.pyi
class CustomGS (line 16) | class CustomGS(Component):
method __init__ (line 28) | def __init__(
method preprocess (line 95) | def preprocess(self, payload: FileData | None) -> str | None:
method postprocess (line 106) | def postprocess(self, value: str | Path | None) -> FileData | None:
method process_example (line 117) | def process_example(self, input_data: str | Path | None) -> str:
method example_inputs (line 120) | def example_inputs(self):
method change (line 129) | def change(self,
method upload (line 174) | def upload(self,
method edit (line 219) | def edit(self,
method clear (line 264) | def clear(self,
FILE: freesplatter/webui/gradio_customgs/templates/component/Canvas3D-60a8d213.js
function cv (line 2) | function cv(an, ln) {
function sf (line 19) | function sf() {
function V (line 64927) | function V(_) {
function O (line 64966) | function O() {
function O (line 64978) | function O(x) {
function m (line 66144) | function m() {
function C (line 66432) | function C(O, x, m, c) {
function u (line 66440) | function u() {
function O (line 66469) | function O() {
function C (line 66527) | function C(I, O) {
function ae (line 66603) | function ae(ee) {
function ae (line 66665) | function ae(ee) {
function ae (line 66705) | function ae(ee) {
function ae (line 66724) | function ae(ee) {
function ae (line 66759) | function ae(ee) {
function ae (line 66798) | function ae(ee) {
function ae (line 66826) | function ae(ee) {
function ae (line 66852) | function ae(ee) {
function ae (line 66882) | function ae(ee) {
function ae (line 66910) | function ae(ee) {
function ae (line 66936) | function ae(ee) {
function ae (line 66958) | function ae(ee) {
function ae (line 67042) | function ae(ee, K) {
function ae (line 67096) | function ae(ee) {
function ae (line 67125) | function ae(ee) {
function ae (line 67150) | function ae(ee) {
function ae (line 67160) | function ae(ee) {
function ae (line 67179) | function ae(ee) {
function ae (line 67199) | function ae(ee) {
function ae (line 67334) | function ae(ee) {
function ae (line 67477) | function ae(ee) {
function ae (line 67499) | function ae(ee) {
function ae (line 67521) | function ae(ee) {
function W (line 67598) | function W() {
function W (line 67704) | function W(q) {
function W (line 68100) | function W() {
function W (line 68219) | function W() {
function W (line 68312) | function W(q) {
function q (line 68392) | function q() {
function q (line 68421) | function q() {
function I (line 68532) | function I() {
function I (line 68561) | function I(O) {
function u (line 68735) | function u() {
function vv (line 68915) | function vv(an) {
function yv (line 68932) | function yv(an, ln) {
function bv (line 68935) | function bv(an, ln, Be) {
class Ev (line 69002) | class Ev extends uv {
method constructor (line 69003) | constructor(ln) {
method reset_camera_position (line 69014) | get reset_camera_position() {
FILE: freesplatter/webui/gradio_customgs/templates/component/Canvas3DGS-0fbc0d9a.js
class X (line 2) | class X {
method constructor (line 3) | constructor(U = 0, Q = 0, F = 0) {
method equals (line 6) | equals(U) {
method add (line 9) | add(U) {
method subtract (line 12) | subtract(U) {
method multiply (line 15) | multiply(U) {
method divide (line 18) | divide(U) {
method cross (line 21) | cross(U) {
method dot (line 25) | dot(U) {
method lerp (line 28) | lerp(U, Q) {
method magnitude (line 31) | magnitude() {
method distanceTo (line 34) | distanceTo(U) {
method normalize (line 37) | normalize() {
method flat (line 41) | flat() {
method clone (line 44) | clone() {
method toString (line 47) | toString() {
method One (line 50) | static One(U = 1) {
class z (line 54) | class z {
method constructor (line 55) | constructor(U = 0, Q = 0, F = 0, l = 1) {
method equals (line 58) | equals(U) {
method normalize (line 61) | normalize() {
method multiply (line 65) | multiply(U) {
method inverse (line 69) | inverse() {
method apply (line 73) | apply(U) {
method flat (line 77) | flat() {
method clone (line 80) | clone() {
method FromEuler (line 83) | static FromEuler(U) {
method toEuler (line 87) | toEuler() {
method FromMatrix3 (line 95) | static FromMatrix3(U) {
method FromAxisAngle (line 113) | static FromAxisAngle(U, Q) {
method LookRotation (line 117) | static LookRotation(U) {
method toString (line 126) | toString() {
class gU (line 130) | class gU {
method constructor (line 131) | constructor() {
class nU (line 144) | class nU {
method constructor (line 145) | constructor(U = 1, Q = 0, F = 0, l = 0, Z = 0, t = 1, d = 0, B = 0, n ...
method equals (line 148) | equals(U) {
method multiply (line 158) | multiply(U) {
method clone (line 162) | clone() {
method determinant (line 166) | determinant() {
method invert (line 170) | invert() {
method Compose (line 177) | static Compose(U, Q, F) {
method toString (line 181) | toString() {
class jU (line 185) | class jU extends Event {
method constructor (line 186) | constructor(U) {
class OU (line 190) | class OU extends Event {
method constructor (line 191) | constructor(U) {
class LU (line 195) | class LU extends Event {
method constructor (line 196) | constructor(U) {
class NU (line 200) | class NU extends gU {
method constructor (line 201) | constructor() {
method _updateMatrix (line 213) | _updateMatrix() {
method position (line 216) | get position() {
method position (line 219) | set position(U) {
method rotation (line 222) | get rotation() {
method rotation (line 225) | set rotation(U) {
method scale (line 228) | get scale() {
method scale (line 231) | set scale(U) {
method forward (line 234) | get forward() {
method transform (line 238) | get transform() {
class VU (line 242) | class VU {
method constructor (line 243) | constructor(U = 1, Q = 0, F = 0, l = 0, Z = 1, t = 0, d = 0, B = 0, n ...
method equals (line 246) | equals(U) {
method multiply (line 256) | multiply(U) {
method clone (line 260) | clone() {
method Eye (line 264) | static Eye(U = 1) {
method Diagonal (line 267) | static Diagonal(U) {
method RotationFromQuaternion (line 270) | static RotationFromQuaternion(U) {
method RotationFromEuler (line 273) | static RotationFromEuler(U) {
method toString (line 277) | toString() {
class FU (line 281) | class FU {
method constructor (line 282) | constructor(U = 0, Q = null, F = null, l = null, Z = null) {
method Deserialize (line 309) | static Deserialize(U) {
method vertexCount (line 315) | get vertexCount() {
method positions (line 318) | get positions() {
method rotations (line 321) | get rotations() {
method scales (line 324) | get scales() {
method colors (line 327) | get colors() {
method selection (line 330) | get selection() {
class WU (line 335) | class WU {
method SplatToPLY (line 336) | static SplatToPLY(U, Q) {
class ZU (line 365) | class ZU extends NU {
method constructor (line 366) | constructor(U = void 0) {
method saveToFile (line 375) | saveToFile(U = null, Q = null) {
method data (line 398) | get data() {
method selected (line 401) | get selected() {
method selected (line 404) | set selected(U) {
method colorTransforms (line 407) | get colorTransforms() {
method colorTransformsMap (line 410) | get colorTransformsMap() {
class PU (line 414) | class PU {
method constructor (line 415) | constructor() {
method fx (line 425) | get fx() {
method fx (line 428) | set fx(U) {
method fy (line 431) | get fy() {
method fy (line 434) | set fy(U) {
method near (line 437) | get near() {
method near (line 440) | set near(U) {
method far (line 443) | get far() {
method far (line 446) | set far(U) {
method width (line 449) | get width() {
method height (line 452) | get height() {
method projectionMatrix (line 455) | get projectionMatrix() {
method viewMatrix (line 458) | get viewMatrix() {
method viewProj (line 461) | get viewProj() {
class BU (line 465) | class BU {
method constructor (line 466) | constructor(U = 0, Q = 0, F = 0, l = 0) {
method equals (line 469) | equals(U) {
method add (line 472) | add(U) {
method subtract (line 475) | subtract(U) {
method multiply (line 478) | multiply(U) {
method dot (line 481) | dot(U) {
method lerp (line 484) | lerp(U, Q) {
method magnitude (line 487) | magnitude() {
method distanceTo (line 490) | distanceTo(U) {
method normalize (line 493) | normalize() {
method flat (line 497) | flat() {
method clone (line 500) | clone() {
method toString (line 503) | toString() {
class _U (line 507) | class _U extends NU {
method constructor (line 508) | constructor(U = void 0) {
method data (line 516) | get data() {
class qU (line 520) | class qU extends gU {
method constructor (line 521) | constructor() {
method saveToFile (line 543) | saveToFile(U = null, Q = null) {
method objects (line 574) | get objects() {
function GU (line 578) | async function GU(g, U) {
function EU (line 584) | async function EU(g, U) {
class $U (line 611) | class $U {
method LoadAsync (line 612) | static async LoadAsync(U, Q, F, l = !1) {
method LoadFromFileAsync (line 616) | static async LoadFromFileAsync(U, Q, F) {
method LoadFromArrayBuffer (line 629) | static LoadFromArrayBuffer(U, Q) {
class UF (line 634) | class UF {
method LoadAsync (line 635) | static async LoadAsync(U, Q, F, l = "", Z = !1) {
method LoadFromFileAsync (line 641) | static async LoadFromFileAsync(U, Q, F, l = "") {
method LoadFromArrayBuffer (line 654) | static LoadFromArrayBuffer(U, Q, F = "") {
method _ParsePLYBuffer (line 658) | static _ParsePLYBuffer(U, Q) {
function FF (line 762) | function FF(g, U, Q) {
function XU (line 775) | function XU(g, U, Q) {
class QF (line 782) | class QF {
method constructor (line 783) | constructor(U, Q) {
method renderer (line 811) | get renderer() {
method scene (line 814) | get scene() {
method camera (line 817) | get camera() {
method program (line 820) | get program() {
method passes (line 823) | get passes() {
method started (line 826) | get started() {
function V (line 841) | function V(A) {
function u (line 851) | function u() {
function w (line 856) | function w(A) {
function M (line 863) | function M(A) {
function i (line 875) | function i(A, e, a, W) {
function O (line 895) | function O(A, e, a = {}) {
function oU (line 927) | function oU(A) {
method constructor (line 1041) | constructor(A) {
method constructor (line 1045) | constructor(A) {
function o (line 1075) | function o(J) {
method fromWireType (line 1082) | fromWireType(W) {
method toWireType (line 1099) | toWireType(W, o) {
method destructorFunction (line 1146) | destructorFunction(W) {
method destructorFunction (line 1164) | destructorFunction(m) {
function e (line 1186) | function e(W, o) {
function CU (line 1208) | function CU() {
class BF (line 1237) | class BF {
method constructor (line 1238) | constructor(U) {
method offsets (line 1332) | get offsets() {
method data (line 1335) | get data() {
method width (line 1338) | get width() {
method height (line 1341) | get height() {
method transforms (line 1344) | get transforms() {
method transformsWidth (line 1347) | get transformsWidth() {
method transformsHeight (line 1350) | get transformsHeight() {
method transformIndices (line 1353) | get transformIndices() {
method transformIndicesWidth (line 1356) | get transformIndicesWidth() {
method transformIndicesHeight (line 1359) | get transformIndicesHeight() {
method colorTransforms (line 1362) | get colorTransforms() {
method colorTransformsWidth (line 1365) | get colorTransformsWidth() {
method colorTransformsHeight (line 1368) | get colorTransformsHeight() {
method colorTransformIndices (line 1371) | get colorTransformIndices() {
method colorTransformIndicesWidth (line 1374) | get colorTransformIndicesWidth() {
method colorTransformIndicesHeight (line 1377) | get colorTransformIndicesHeight() {
method positions (line 1380) | get positions() {
method rotations (line 1383) | get rotations() {
method scales (line 1386) | get scales() {
method vertexCount (line 1389) | get vertexCount() {
method needsRebuild (line 1392) | get needsRebuild() {
method updating (line 1395) | get updating() {
class rU (line 1399) | class rU {
method constructor (line 1400) | constructor(U = 0, Q = 0, F = 0, l = 255) {
method flat (line 1403) | flat() {
method flatNorm (line 1406) | flatNorm() {
method toHexString (line 1409) | toHexString() {
method toString (line 1412) | toString() {
class SU (line 1416) | class SU extends QF {
method constructor (line 1417) | constructor(U, Q) {
method renderData (line 1475) | get renderData() {
method depthIndex (line 1478) | get depthIndex() {
method chunks (line 1481) | get chunks() {
method splatTexture (line 1484) | get splatTexture() {
method outlineThickness (line 1487) | get outlineThickness() {
method outlineThickness (line 1490) | set outlineThickness(U) {
method outlineColor (line 1493) | get outlineColor() {
method outlineColor (line 1496) | set outlineColor(U) {
method _getVertexSource (line 1499) | _getVertexSource() {
method _getFragmentSource (line 1607) | _getFragmentSource() {
class nF (line 1645) | class nF {
method constructor (line 1646) | constructor(U = 1) {
method dispose (line 1657) | dispose() {
class VF (line 1660) | class VF {
method constructor (line 1661) | constructor(U = null, Q = null) {
method canvas (line 1690) | get canvas() {
method gl (line 1693) | get gl() {
method renderProgram (line 1696) | get renderProgram() {
method backgroundColor (line 1699) | get backgroundColor() {
method backgroundColor (line 1702) | set backgroundColor(U) {
class ZF (line 1706) | class ZF {
method constructor (line 1707) | constructor(U, Q, F = 0.5, l = 0.5, Z = 5, t = !0, d = new X()) {
function hF (line 1795) | function hF(g) {
function oF (line 1812) | function oF(g, U) {
function IF (line 1815) | function IF(g, U, Q) {
class rF (line 1883) | class rF extends AF {
method constructor (line 1884) | constructor(U) {
FILE: freesplatter/webui/gradio_customgs/templates/component/Index-f5583db3.js
function xl (line 20) | function xl(e) {
function Jl (line 214) | function Jl(e) {
function Ql (line 241) | function Ql(e, t, n) {
class bo (line 275) | class bo extends Ml {
method constructor (line 276) | constructor(t) {
function fs (line 315) | function fs(e) {
function cs (line 375) | function cs(e, t, n) {
class Fr (line 381) | class Fr extends Yl {
method constructor (line 382) | constructor(t) {
function ei (line 414) | function ei(e) {
function Ts (line 439) | function Ts(e) {
function As (line 594) | function As(e, t, n) {
class Pt (line 621) | class Pt extends _s {
method constructor (line 622) | constructor(t) {
function Gs (line 657) | function Gs(e) {
function js (line 752) | function js(e) {
function qs (line 762) | function qs(e, t, n) {
class zs (line 790) | class zs extends Bs {
method constructor (line 791) | constructor(t) {
function Js (line 807) | function Js(e) {
class Qs (line 824) | class Qs extends Vs {
method constructor (line 825) | constructor(t) {
function ra (line 840) | function ra(e) {
class yo (line 857) | class yo extends Ys {
method constructor (line 858) | constructor(t) {
function fa (line 873) | function fa(e) {
class ca (line 890) | class ca extends ia {
method constructor (line 891) | constructor(t) {
function ga (line 906) | function ga(e) {
method constructor (line 924) | constructor(t) {
function ka (line 939) | function ka(e) {
class Ta (line 956) | class Ta extends ba {
method constructor (line 957) | constructor(t) {
function Pa (line 972) | function Pa(e) {
class Eo (line 989) | class Eo extends Aa {
method constructor (line 990) | constructor(t) {
function Ra (line 1005) | function Ra(e) {
method constructor (line 1023) | constructor(t) {
function Xa (line 1362) | function Xa(e) {
function Wa (line 1382) | function Wa(e) {
function ui (line 1402) | function ui(e) {
function Za (line 1431) | function Za(e) {
function xa (line 1500) | function xa(e, t, n) {
class Ja (line 1515) | class Ja extends Ga {
method constructor (line 1516) | constructor(t) {
function ci (line 1527) | function ci(e, t, n) {
function Ft (line 1532) | function Ft(e, t) {
function _i (line 1611) | function _i(e) {
function hi (line 1683) | function hi(e) {
function eu (line 1708) | function eu(e) {
function tu (line 1793) | function tu(e) {
function nu (line 1866) | function nu(e) {
function di (line 1917) | function di(e) {
function ru (line 1927) | async function ru() {
function iu (line 1930) | async function iu() {
function ou (line 1933) | function ou(e, t, n) {
class lu (line 1995) | class lu extends Qa {
method constructor (line 1996) | constructor(t) {
function mi (line 2010) | function mi(e, t, n) {
function Bo (line 2013) | function Bo(e, t, n) {
function rr (line 2016) | function rr(e) {
function au (line 2042) | async function au(e, t) {
function uu (line 2076) | function uu(e) {
function pi (line 2083) | async function pi(e) {
function cu (line 2096) | function cu(e, t, n, r) {
function _u (line 2126) | function _u(e, t) {
function hu (line 2131) | async function hu(e, t, n, r = pu) {
function du (line 2149) | async function du(e, t) {
class jr (line 2161) | class jr {
method constructor (line 2162) | constructor({
function mu (line 2177) | function mu(e, t) {
function gi (line 2851) | function gi(e, t, n, r) {
function bi (line 2871) | function bi(e, t) {
function gu (line 2874) | function gu(e, t, n) {
function bu (line 2902) | async function bu(e, t) {
function wi (line 2913) | function wi(e, t, n) {
function kr (line 2918) | async function kr(e, t = void 0, n = [], r = !1, i = void 0) {
function wu (line 2965) | function wu(e, t) {
function vi (line 2969) | async function vi(e, t, n) {
function Tr (line 2986) | async function Tr(e, t, n) {
function ir (line 3058) | function ir(e, t) {
function ot (line 3165) | function ot() {
function vu (line 3167) | function vu(e) {
function yu (line 3170) | function yu(e) {
function Eu (line 3173) | function Eu(e) {
function Su (line 3176) | function Su(e, t) {
function ku (line 3179) | function ku(e, ...t) {
function Lo (line 3191) | function Lo(e) {
function Tu (line 3196) | function Tu(e) {
function Au (line 3208) | function Au(e, t) {
function Yt (line 3213) | function Yt(e, t = ot) {
function It (line 3239) | function It(e, t, n) {
function Ei (line 3270) | function Ei(e) {
function Ar (line 3273) | function Ar(e, t, n, r) {
function Si (line 3291) | function Si(e, t = {}) {
function Bu (line 3325) | function Bu(e) {
function Hu (line 3331) | function Hu(e) {
function Nu (line 3334) | function Nu(e) {
function Lu (line 3339) | function Lu(e) {
function Ou (line 3342) | function Ou(e) {
function Xt (line 3345) | function Xt(e, t) {
function Du (line 3348) | function Du(e, t, n) {
function Mu (line 3353) | function Mu(e, t) {
function Ru (line 3359) | function Ru(e) {
function ki (line 3364) | function ki(e) {
function Oo (line 3367) | function Oo(e, t) {
function Uu (line 3374) | function Uu(e, t) {
function Fu (line 3377) | function Fu(e, t, n) {
function Tt (line 3385) | function Tt(e, t, n) {
function Hn (line 3407) | function Hn(e, t) {
function or (line 3426) | function or(e, t, n) {
function Ti (line 3444) | function Ti(e) {
function zu (line 3447) | function zu(e) {
function Do (line 3450) | function Do(e) {
function Mo (line 3453) | function Mo(e) {
function Ro (line 3456) | function Ro(e) {
function Uo (line 3459) | function Uo(e) {
function Fo (line 3462) | function Fo(e) {
function Vu (line 3465) | function Vu(e) {
function Go (line 3468) | function Go(e) {
function jo (line 3471) | function jo(e) {
function Cr (line 3474) | function Cr(e) {
function Wu (line 3478) | function Wu(e) {
function xu (line 3570) | function xu(e) {
function Ju (line 3588) | function Ju(e) {
function Bi (line 3592) | function Bi(e) {
function Xo (line 3598) | function Xo(e) {
function Yu (line 3638) | function Yu(e) {
function Ci (line 3652) | function Ci(e) {
function Ku (line 3656) | function Ku(e) {
function $u (line 5096) | function $u(e, t) {
function ef (line 5112) | function ef(e) {
function L (line 5135) | function L(e, t) {
function Zo (line 5216) | function Zo(e, t) {
function e (line 5241) | function e(t, n) {
function Ir (line 5691) | function Ir(e) {
function pf (line 5694) | function pf(e) {
function gf (line 5697) | function gf(e) {
function xo (line 5700) | function xo(e) {
function bf (line 5703) | function bf(e) {
function Lr (line 5706) | function Lr(e) {
function wf (line 5715) | function wf(e, t) {
function sr (line 5724) | function sr(e, t) {
function vf (line 5731) | function vf(e) {
function Jo (line 5734) | function Jo(e, t, n, r) {
function Qo (line 5738) | function Qo(e, t, n) {
function qr (line 5742) | function qr(e, t, n, r, i) {
function yf (line 5745) | function yf(e, t) {
function Ef (line 5749) | function Ef(e, t) {
function Sf (line 5752) | function Sf(e, t) {
function zr (line 5758) | function zr() {
function t (line 5782) | function t(n, r, i) {
function t (line 5794) | function t(n, r, i, o) {
function t (line 5803) | function t(n, r, i) {
function t (line 5812) | function t(n, r) {
function Cf (line 5821) | function Cf(e) {
function Hf (line 5827) | function Hf(e) {
function mn (line 5830) | function mn(e, t, n, r, i, o, l) {
function Nf (line 5928) | function Nf(e, t) {
function Pf (line 5933) | function Pf(e, t) {
function ur (line 5938) | function ur(e) {
function If (line 5952) | function If(e) {
function e (line 5987) | function e(t, n, r, i) {
function Of (line 6092) | function Of(e, t) {
function Mf (line 6127) | function Mf(e) {
function Ko (line 6130) | function Ko(e) {
function Rf (line 6133) | function Rf(e, t) {
function Uf (line 6139) | function Uf(e) {
function Ff (line 6149) | function Ff(e, ...t) {
function Gf (line 6158) | function Gf(e, t) {
function $o (line 6161) | function $o(e) {
function jf (line 6164) | function jf(e) {
function Or (line 6170) | function Or(e) {
function qf (line 6178) | function qf(e, t) {
function el (line 6184) | function el(e) {
function Ct (line 6235) | function Ct() {
function Di (line 6249) | function Di(e) {
function Pn (line 6252) | function Pn(e, t = Ct().fallbackLocale) {
function mt (line 6256) | function mt() {
function Fi (line 6409) | function Fi(e) {
function wc (line 6445) | function wc(e) {
function Ut (line 6489) | function Ut(e) {
function vc (line 6492) | function vc(e) {
function yc (line 6498) | function yc(e, t, n) {
class Ec (line 6526) | class Ec extends hc {
method constructor (line 6527) | constructor(t) {
function Dc (line 6562) | function Dc(e) {
function Mc (line 6739) | function Mc(e) {
function Rc (line 6767) | function Rc(e) {
function ji (line 6870) | function ji(e) {
function Uc (line 6915) | function Uc(e) {
function qi (line 6951) | function qi(e) {
function Fc (line 6961) | function Fc(e, t, n) {
function Gc (line 6976) | function Gc(e, t, n) {
class jc (line 7099) | class jc extends Sc {
method constructor (line 7100) | constructor(t) {
method paste_clipboard (line 7127) | get paste_clipboard() {
method open_file_upload (line 7130) | get open_file_upload() {
method load_files (line 7133) | get load_files() {
function hl (line 7138) | function hl() {
function Vc (line 7141) | function Vc(e) {
function dl (line 7145) | function dl(e, t) {
function ml (line 7151) | function ml(e) {
function eh (line 7157) | async function eh(e) {
function Yc (line 7202) | function Yc(e) {
function Kc (line 7292) | function Kc(e) {
function $c (line 7325) | function $c(e) {
function e0 (line 7398) | function e0(e) {
function t0 (line 7449) | function t0(e) {
function n0 (line 7484) | function n0(e, t, n) {
class r0 (line 7531) | class r0 extends Xc {
method constructor (line 7532) | constructor(t) {
function Wi (line 7556) | function Wi(e) {
function Zi (line 7594) | function Zi(e) {
function xi (line 7632) | function xi(e) {
function _0 (line 7669) | function _0(e) {
function h0 (line 7703) | function h0(e) {
function d0 (line 7777) | function d0(e, t, n) {
class m0 (line 7796) | class m0 extends i0 {
method constructor (line 7797) | constructor(t) {
function H0 (line 7834) | function H0(e) {
function N0 (line 7892) | function N0(e) {
function P0 (line 7941) | function P0(e) {
function I0 (line 8016) | function I0(e) {
function L0 (line 8081) | function L0(e) {
function O0 (line 8132) | function O0(e) {
function D0 (line 8184) | function D0(e) {
function M0 (line 8194) | async function M0() {
function R0 (line 8197) | async function R0() {
function U0 (line 8200) | function U0(e, t, n) {
class F0 (line 8262) | class F0 extends p0 {
method constructor (line 8263) | constructor(t) {
function St (line 8277) | function St(e) {
function Z0 (line 8299) | function Z0(e) {
function x0 (line 8339) | function x0(e, t, n) {
class J0 (line 8359) | class J0 extends G0 {
method constructor (line 8360) | constructor(t) {
function to (line 8395) | function to(e, t, n) {
function no (line 8399) | function no(e, t, n) {
function u_ (line 8403) | function u_(e) {
function f_ (line 8459) | function f_(e) {
function ro (line 8554) | function ro(e) {
function c_ (line 8574) | function c_(e) {
function __ (line 8589) | function __(e) {
function h_ (line 8619) | function h_(e) {
function io (line 8659) | function io(e) {
function d_ (line 8688) | function d_(e) {
function m_ (line 8712) | function m_(e) {
function oo (line 8743) | function oo(e) {
function lo (line 8764) | function lo(e) {
function p_ (line 8796) | function p_(e) {
function g_ (line 8827) | function g_(e) {
function so (line 8853) | function so(e) {
function ao (line 8893) | function ao(e) {
function b_ (line 8935) | function b_(e) {
function uo (line 8949) | function uo(e) {
function fo (line 8971) | function fo(e) {
function co (line 8985) | function co(e) {
function _o (line 9011) | function _o(e) {
function ho (line 9042) | function ho(e) {
function w_ (line 9067) | function w_(e) {
function v_ (line 9179) | async function v_(e, t = !0) {
function y_ (line 9195) | function y_(e, t, n) {
class Bl (line 9274) | class Bl extends Q0 {
method constructor (line 9275) | constructor(t) {
function I_ (line 9319) | function I_(e) {
function L_ (line 9382) | function L_(e, t, n) {
class th (line 9388) | class th extends E_ {
method constructor (line 9389) | constructor(t) {
function R_ (line 9412) | function R_(e) {
function U_ (line 9497) | function U_(e) {
function F_ (line 9582) | function F_(e) {
function G_ (line 9616) | function G_(e) {
function j_ (line 9752) | function j_(e) {
function q_ (line 9802) | function q_(e) {
function z_ (line 9871) | function z_(e) {
function V_ (line 9891) | function V_(e) {
function X_ (line 9963) | function X_(e) {
function W_ (line 9996) | function W_(e, t, n) {
class nh (line 10031) | class nh extends O_ {
method constructor (line 10032) | constructor(t) {
FILE: freesplatter/webui/gradio_customgs/templates/component/wrapper-6f348d45-19fa94bf.js
function z (line 2) | function z(s) {
function gt (line 5) | function gt(s) {
function Oe (line 33) | function Oe(s) {
function vt (line 36) | function vt() {
function Qe (line 39) | function Qe(s) {
function St (line 42) | function St(s, e) {
function wt (line 105) | function wt(s, e) {
function Je (line 118) | function Je(s, e, t, r, i) {
function et (line 122) | function et(s, e) {
function Ot (line 126) | function Ot(s) {
function Ee (line 129) | function Ee(s) {
method constructor (line 161) | constructor(e) {
method add (line 172) | add(e) {
method [ue] (line 180) | [ue]() {
method constructor (line 214) | constructor(e, t, r) {
method extensionName (line 223) | static get extensionName() {
method offer (line 232) | offer() {
method accept (line 243) | accept(e) {
method cleanup (line 251) | cleanup() {
method acceptAsServer (line 268) | acceptAsServer(e) {
method acceptAsClient (line 281) | acceptAsClient(e) {
method normalizeParams (line 300) | normalizeParams(e) {
method decompress (line 344) | decompress(e, t, r) {
method compress (line 359) | compress(e, t, r) {
method _decompress (line 374) | _decompress(e, t, r) {
method _compress (line 404) | _compress(e, t, r) {
function Ut (line 425) | function Ut(s) {
function st (line 428) | function st(s) {
function Bt (line 435) | function Bt(s) {
function Wt (line 582) | function Wt(s) {
function be (line 585) | function be(s) {
method constructor (line 647) | constructor(e = {}) {
method _write (line 658) | _write(e, t, r) {
method consume (line 670) | consume(e) {
method startLoop (line 698) | startLoop(e) {
method getInfo (line 731) | getInfo() {
method getPayloadLength16 (line 845) | getPayloadLength16() {
method getPayloadLength64 (line 858) | getPayloadLength64() {
method haveLength (line 878) | haveLength() {
method getMask (line 894) | getMask() {
method getData (line 908) | getData(e) {
method decompress (line 932) | decompress(e, t) {
method dataMessage (line 961) | dataMessage() {
method controlMessage (line 989) | controlMessage(e) {
function g (line 1024) | function g(s, e, t, r, i) {
method constructor (line 1040) | constructor(e, t, r) {
method frame (line 1064) | static frame(e, t) {
method close (line 1083) | close(e, t, r, i) {
method ping (line 1119) | ping(e, t, r) {
method pong (line 1143) | pong(e, t, r) {
method send (line 1175) | send(e, t, r) {
method dispatch (line 1228) | dispatch(e, t, r, i) {
method dequeue (line 1254) | dequeue() {
method enqueue (line 1266) | enqueue(e) {
method sendFrame (line 1276) | sendFrame(e, t) {
class B (line 1282) | class B {
method constructor (line 1289) | constructor(e) {
method target (line 1295) | get target() {
method type (line 1301) | get type() {
class Y (line 1307) | class Y extends B {
method constructor (line 1321) | constructor(e, t = {}) {
method code (line 1327) | get code() {
method reason (line 1333) | get reason() {
method wasClean (line 1339) | get wasClean() {
class le (line 1346) | class le extends B {
method constructor (line 1356) | constructor(e, t = {}) {
method error (line 1362) | get error() {
method message (line 1368) | get message() {
class xe (line 1374) | class xe extends B {
method constructor (line 1383) | constructor(e, t = {}) {
method data (line 1389) | get data() {
method addEventListener (line 1407) | addEventListener(s, e, t = {}) {
method removeEventListener (line 1452) | removeEventListener(s, e) {
function Z (line 1467) | function Z(s, e, t) {
function k (line 1471) | function k(s, e, t) {
function ss (line 1474) | function ss(s) {
function rs (line 1537) | function rs(s) {
method constructor (line 1569) | constructor(e, t, r) {
method binaryType (line 1579) | get binaryType() {
method binaryType (line 1582) | set binaryType(e) {
method bufferedAmount (line 1588) | get bufferedAmount() {
method extensions (line 1594) | get extensions() {
method isPaused (line 1600) | get isPaused() {
method onclose (line 1607) | get onclose() {
method onerror (line 1614) | get onerror() {
method onopen (line 1621) | get onopen() {
method onmessage (line 1628) | get onmessage() {
method protocol (line 1634) | get protocol() {
method readyState (line 1640) | get readyState() {
method url (line 1646) | get url() {
method setSocket (line 1663) | setSocket(e, t, r) {
method emitClose (line 1678) | emitClose() {
method close (line 1705) | close(e, t) {
method pause (line 1728) | pause() {
method ping (line 1739) | ping(e, t, r) {
method pong (line 1756) | pong(e, t, r) {
method resume (line 1770) | resume() {
method send (line 1788) | send(e, t, r) {
method terminate (line 1809) | terminate() {
method get (line 1865) | get() {
method set (line 1871) | set(e) {
function ht (line 1886) | function ht(s, e, t, r) {
function ee (line 2051) | function ee(s, e) {
function bs (line 2054) | function bs(s) {
function xs (line 2057) | function xs(s) {
function b (line 2060) | function b(s, e, t) {
function ve (line 2065) | function ve(s, e, t) {
function ks (line 2077) | function ks(s, e) {
function ws (line 2081) | function ws() {
function Os (line 2085) | function Os(s) {
function Ye (line 2089) | function Ye() {
function Cs (line 2092) | function Cs(s, e) {
function Ts (line 2095) | function Ts(s) {
function Ls (line 2099) | function Ls(s) {
function ct (line 2102) | function ct(s) {
function ut (line 2105) | function ut() {
function fe (line 2111) | function fe(s) {
function dt (line 2114) | function dt() {
function _t (line 2118) | function _t() {
function Ps (line 2123) | function Ps(s) {
class As (line 2152) | class As extends Us {
method constructor (line 2179) | constructor(e, t) {
method address (line 2232) | address() {
method close (line 2244) | close(e) {
method shouldHandle (line 2268) | shouldHandle(e) {
method handleUpgrade (line 2286) | handleUpgrade(e, t, r, i) {
method completeUpgrade (line 2374) | completeUpgrade(e, t, r, i, n, o, l) {
function js (line 2410) | function js(s, e) {
function G (line 2418) | function G(s) {
function Ze (line 2421) | function Ze() {
function H (line 2424) | function H(s, e, t, r) {
function R (line 2438) | function R(s, e, t, r, i) {
FILE: freesplatter/webui/gradio_customgs/templates/example/index.js
function w (line 15) | function w(a) {
function h (line 78) | function h(a, e, n) {
class E (line 84) | class E extends u {
method constructor (line 85) | constructor(e) {
FILE: freesplatter/webui/gradio_custommodel3d/custommodel3d.py
class CustomModel3D (line 15) | class CustomModel3D(Component):
method __init__ (line 27) | def __init__(
method preprocess (line 94) | def preprocess(self, payload: FileData | None) -> str | None:
method postprocess (line 105) | def postprocess(self, value: str | Path | None) -> FileData | None:
method process_example (line 116) | def process_example(self, input_data: str | Path | None) -> str:
method example_inputs (line 119) | def example_inputs(self):
FILE: freesplatter/webui/gradio_custommodel3d/custommodel3d.pyi
class CustomModel3D (line 16) | class CustomModel3D(Component):
method __init__ (line 28) | def __init__(
method preprocess (line 95) | def preprocess(self, payload: FileData | None) -> str | None:
method postprocess (line 106) | def postprocess(self, value: str | Path | None) -> FileData | None:
method process_example (line 117) | def process_example(self, input_data: str | Path | None) -> str:
method example_inputs (line 120) | def example_inputs(self):
method change (line 129) | def change(self,
method upload (line 174) | def upload(self,
method edit (line 219) | def edit(self,
method clear (line 264) | def clear(self,
FILE: freesplatter/webui/gradio_custommodel3d/templates/component/Canvas3D-e42d3d6b.js
function cv (line 2) | function cv(an, ln) {
function sf (line 19) | function sf() {
function V (line 64927) | function V(_) {
function O (line 64966) | function O() {
function O (line 64978) | function O(x) {
function m (line 66144) | function m() {
function C (line 66432) | function C(O, x, m, c) {
function u (line 66440) | function u() {
function O (line 66469) | function O() {
function C (line 66527) | function C(I, O) {
function ae (line 66603) | function ae(ee) {
function ae (line 66665) | function ae(ee) {
function ae (line 66705) | function ae(ee) {
function ae (line 66724) | function ae(ee) {
function ae (line 66759) | function ae(ee) {
function ae (line 66798) | function ae(ee) {
function ae (line 66826) | function ae(ee) {
function ae (line 66852) | function ae(ee) {
function ae (line 66882) | function ae(ee) {
function ae (line 66910) | function ae(ee) {
function ae (line 66936) | function ae(ee) {
function ae (line 66958) | function ae(ee) {
function ae (line 67042) | function ae(ee, K) {
function ae (line 67096) | function ae(ee) {
function ae (line 67125) | function ae(ee) {
function ae (line 67150) | function ae(ee) {
function ae (line 67160) | function ae(ee) {
function ae (line 67179) | function ae(ee) {
function ae (line 67199) | function ae(ee) {
function ae (line 67334) | function ae(ee) {
function ae (line 67477) | function ae(ee) {
function ae (line 67499) | function ae(ee) {
function ae (line 67521) | function ae(ee) {
function W (line 67598) | function W() {
function W (line 67704) | function W(q) {
function W (line 68100) | function W() {
function W (line 68219) | function W() {
function W (line 68312) | function W(q) {
function q (line 68392) | function q() {
function q (line 68421) | function q() {
function I (line 68532) | function I() {
function I (line 68561) | function I(O) {
function u (line 68735) | function u() {
function vv (line 68915) | function vv(an) {
function yv (line 68932) | function yv(an, ln) {
function bv (line 68935) | function bv(an, ln, Be) {
class Ev (line 69006) | class Ev extends uv {
method constructor (line 69007) | constructor(ln) {
method reset_camera_position (line 69018) | get reset_camera_position() {
FILE: freesplatter/webui/gradio_custommodel3d/templates/component/Canvas3DGS-f5539f54.js
class S (line 2) | class S {
method constructor (line 3) | constructor(U = 0, l = 0, F = 0) {
method equals (line 6) | equals(U) {
method add (line 9) | add(U) {
method subtract (line 12) | subtract(U) {
method multiply (line 15) | multiply(U) {
method cross (line 18) | cross(U) {
method dot (line 22) | dot(U) {
method lerp (line 25) | lerp(U, l) {
method magnitude (line 28) | magnitude() {
method distanceTo (line 31) | distanceTo(U) {
method normalize (line 34) | normalize() {
method flat (line 38) | flat() {
method clone (line 41) | clone() {
method toString (line 44) | toString() {
method One (line 47) | static One(U = 1) {
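The minified class `S` above, judging from its method list (`add`, `cross`, `dot`, `lerp`, `magnitude`, `normalize`, …), is a 3D vector type. A minimal Python sketch of the same interface — all names and semantics are inferred from the method names, not from the bundled source:

```python
import math


class Vec3:
    """Minimal 3-vector mirroring the operations the minified class exposes."""

    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z

    def add(self, o):
        return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)

    def subtract(self, o):
        return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)

    def multiply_scalar(self, s):
        return Vec3(self.x * s, self.y * s, self.z * s)

    def dot(self, o):
        return self.x * o.x + self.y * o.y + self.z * o.z

    def cross(self, o):
        return Vec3(
            self.y * o.z - self.z * o.y,
            self.z * o.x - self.x * o.z,
            self.x * o.y - self.y * o.x,
        )

    def lerp(self, o, t):
        # Linear interpolation: self + t * (o - self)
        return self.add(o.subtract(self).multiply_scalar(t))

    def magnitude(self):
        return math.sqrt(self.dot(self))

    def normalize(self):
        # Assumes a non-zero vector
        return self.multiply_scalar(1.0 / self.magnitude())

    def flat(self):
        return [self.x, self.y, self.z]
```

These are the standard vector operations a splat viewer needs for camera and transform math.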
class K (line 51) | class K {
method constructor (line 52) | constructor(U = 0, l = 0, F = 0, Q = 1) {
method equals (line 55) | equals(U) {
method normalize (line 58) | normalize() {
method multiply (line 62) | multiply(U) {
method inverse (line 66) | inverse() {
method apply (line 70) | apply(U) {
method flat (line 74) | flat() {
method clone (line 77) | clone() {
method FromEuler (line 80) | static FromEuler(U) {
method toEuler (line 84) | toEuler() {
method FromMatrix3 (line 92) | static FromMatrix3(U) {
method FromAxisAngle (line 110) | static FromAxisAngle(U, l) {
method toString (line 114) | toString() {
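Class `K` is evidently a quaternion (`FromEuler`, `FromAxisAngle`, `inverse`, `apply`). A hedged Python sketch of the axis-angle constructor and vector rotation, using the standard Hamilton product with `w` stored last — the actual minified implementation's conventions are not verified here:

```python
import math


class Quat:
    """Unit quaternion (x, y, z, w) for 3D rotations."""

    def __init__(self, x=0.0, y=0.0, z=0.0, w=1.0):
        self.x, self.y, self.z, self.w = x, y, z, w

    @staticmethod
    def from_axis_angle(axis, angle):
        # axis: unit (x, y, z) tuple; angle in radians
        s = math.sin(angle / 2.0)
        return Quat(axis[0] * s, axis[1] * s, axis[2] * s, math.cos(angle / 2.0))

    def multiply(self, o):
        # Hamilton product self * o
        return Quat(
            self.w * o.x + self.x * o.w + self.y * o.z - self.z * o.y,
            self.w * o.y - self.x * o.z + self.y * o.w + self.z * o.x,
            self.w * o.z + self.x * o.y - self.y * o.x + self.z * o.w,
            self.w * o.w - self.x * o.x - self.y * o.y - self.z * o.z,
        )

    def inverse(self):
        # For a unit quaternion the inverse is the conjugate
        return Quat(-self.x, -self.y, -self.z, self.w)

    def apply(self, v):
        # Rotate vector v = (x, y, z) via q * v * q^-1
        p = Quat(v[0], v[1], v[2], 0.0)
        r = self.multiply(p).multiply(self.inverse())
        return (r.x, r.y, r.z)
```

Gaussian-splat renderers store per-splat rotations exactly like this, as unit quaternions converted to rotation matrices at draw time.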
class gU (line 118) | class gU {
method constructor (line 119) | constructor() {
class BU (line 132) | class BU {
method constructor (line 133) | constructor(U = 1, l = 0, F = 0, Q = 0, Z = 0, d = 1, V = 0, B = 0, t ...
method equals (line 136) | equals(U) {
method multiply (line 146) | multiply(U) {
method clone (line 150) | clone() {
method determinant (line 154) | determinant() {
method invert (line 158) | invert() {
method Compose (line 165) | static Compose(U, l, F) {
method toString (line 169) | toString() {
class jU (line 173) | class jU extends Event {
method constructor (line 174) | constructor(U) {
class OU (line 178) | class OU extends Event {
method constructor (line 179) | constructor(U) {
class LU (line 183) | class LU extends Event {
method constructor (line 184) | constructor(U) {
class rU (line 188) | class rU extends gU {
method constructor (line 189) | constructor() {
method _updateMatrix (line 199) | _updateMatrix() {
method position (line 202) | get position() {
method position (line 205) | set position(U) {
method rotation (line 208) | get rotation() {
method rotation (line 211) | set rotation(U) {
method scale (line 214) | get scale() {
method scale (line 217) | set scale(U) {
method forward (line 220) | get forward() {
method transform (line 224) | get transform() {
class dU (line 228) | class dU {
method constructor (line 229) | constructor(U = 1, l = 0, F = 0, Q = 0, Z = 1, d = 0, V = 0, B = 0, t ...
method equals (line 232) | equals(U) {
method multiply (line 242) | multiply(U) {
method clone (line 246) | clone() {
method Eye (line 250) | static Eye(U = 1) {
method Diagonal (line 253) | static Diagonal(U) {
method RotationFromQuaternion (line 256) | static RotationFromQuaternion(U) {
method RotationFromEuler (line 259) | static RotationFromEuler(U) {
method toString (line 263) | toString() {
class _ (line 267) | class _ {
method constructor (line 268) | constructor(U = 0, l = null, F = null, Q = null, Z = null) {
method Deserialize (line 295) | static Deserialize(U) {
method vertexCount (line 301) | get vertexCount() {
method positions (line 304) | get positions() {
method rotations (line 307) | get rotations() {
method scales (line 310) | get scales() {
method colors (line 313) | get colors() {
method selection (line 316) | get selection() {
class cU (line 321) | class cU {
method SplatToPLY (line 322) | static SplatToPLY(U, l) {
class tU (line 351) | class tU extends rU {
method constructor (line 352) | constructor(U = void 0) {
method saveToFile (line 361) | saveToFile(U = null, l = null) {
method data (line 384) | get data() {
method selected (line 387) | get selected() {
method selected (line 390) | set selected(U) {
class PU (line 394) | class PU {
method constructor (line 395) | constructor() {
method fx (line 405) | get fx() {
method fx (line 408) | set fx(U) {
method fy (line 411) | get fy() {
method fy (line 414) | set fy(U) {
method near (line 417) | get near() {
method near (line 420) | set near(U) {
method far (line 423) | get far() {
method far (line 426) | set far(U) {
method width (line 429) | get width() {
method height (line 432) | get height() {
method projectionMatrix (line 435) | get projectionMatrix() {
method viewMatrix (line 438) | get viewMatrix() {
method viewProj (line 441) | get viewProj() {
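Class `PU` is a camera model: focal lengths `fx`/`fy`, clip planes `near`/`far`, and derived `projectionMatrix`, `viewMatrix`, and `viewProj` (their product). A sketch of how a projection matrix is typically built from such pinhole intrinsics, assuming an OpenGL-style clip space with z in [-1, 1] — the bundle's actual sign and z-range conventions are unverified:

```python
def projection_matrix(fx, fy, width, height, near, far):
    """GL-style perspective projection from pinhole intrinsics.

    Returns a 4x4 row-major matrix (column-vector convention): focal
    lengths are normalized by the viewport so x/y land in [-1, 1], and
    the third row maps camera-space depth into the [-1, 1] clip range.
    """
    return [
        [2.0 * fx / width, 0.0, 0.0, 0.0],
        [0.0, 2.0 * fy / height, 0.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]
```

`viewProj` would then be this matrix composed with the world-to-camera view matrix; the splat sorter uses that product to compute per-splat depth.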
class QU (line 445) | class QU {
method constructor (line 446) | constructor(U = 0, l = 0, F = 0, Q = 0) {
method equals (line 449) | equals(U) {
method add (line 452) | add(U) {
method subtract (line 455) | subtract(U) {
method multiply (line 458) | multiply(U) {
method dot (line 461) | dot(U) {
method lerp (line 464) | lerp(U, l) {
method magnitude (line 467) | magnitude() {
method distanceTo (line 470) | distanceTo(U) {
method normalize (line 473) | normalize() {
method flat (line 477) | flat() {
method clone (line 480) | clone() {
method toString (line 483) | toString() {
class _U (line 487) | class _U extends rU {
method constructor (line 488) | constructor(U = void 0) {
method data (line 496) | get data() {
class qU (line 500) | class qU extends gU {
method constructor (line 501) | constructor() {
method saveToFile (line 523) | saveToFile(U = null, l = null) {
method objects (line 554) | get objects() {
function NU (line 558) | async function NU(r, U) {
function GU (line 564) | async function GU(r, U) {
class $U (line 591) | class $U {
method LoadAsync (line 592) | static async LoadAsync(U, l, F, Q = !1) {
method LoadFromFileAsync (line 596) | static async LoadFromFileAsync(U, l, F) {
class UF (line 611) | class UF {
method LoadAsync (line 612) | static async LoadAsync(U, l, F, Q = "", Z = !1) {
method LoadFromFileAsync (line 619) | static async LoadFromFileAsync(U, l, F, Q = "") {
method _ParsePLYBuffer (line 633) | static _ParsePLYBuffer(U, l) {
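`_ParsePLYBuffer` suggests the loader class `UF` reads Gaussian splats from PLY files. The PLY header grammar is standard (a `format` line, then `element`/`property` declarations up to `end_header`); a minimal Python header parser illustrating the first step such a loader performs — the helper name and return shape here are illustrative, not the bundle's API:

```python
def parse_ply_header(data: bytes):
    """Parse a PLY header.

    Returns (format_name, {element: (count, [(type, name), ...])}, body_offset),
    where body_offset is the byte position where element data begins.
    """
    marker = b"end_header\n"
    end = data.index(marker) + len(marker)
    header = data[:end].decode("ascii")

    fmt, elements, current = None, {}, None
    for line in header.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "format":
            fmt = parts[1]  # e.g. "ascii" or "binary_little_endian"
        elif parts[0] == "element":
            current = parts[1]
            elements[current] = (int(parts[2]), [])
        elif parts[0] == "property" and current is not None:
            # Keep (type, name); list properties keep their value type
            elements[current][1].append((parts[-2], parts[-1]))
    return fmt, elements, end
```

A splat loader would then read `count` fixed-size records per element from `data[body_offset:]` according to the declared property types.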
function FF (line 737) | function FF(r, U, l) {
function EU (line 750) | function EU(r, U, l) {
class QF (line 757) | class QF {
method constructor (line 758) | constructor(U, l) {
method renderer (line 786) | get renderer() {
method scene (line 789) | get scene() {
method camera (line 792) | get camera() {
method program (line 795) | get program() {
method passes (line 798) | get passes() {
method started (line 801) | get started() {
function R (line 816) | function R(n) {
function Y (line 826) | function Y() {
function u (line 831) | function u(n) {
function j (line 838) | function j(n) {
function a (line 850) | function a(n, A, e, W) {
function f (line 870) | function f(n, A, e = {}) {
function IU (line 902) | function IU(n) {
method constructor (line 1016) | constructor(n) {
method constructor (line 1020) | constructor(n) {
function h (line 1050) | function h(b) {
method fromWireType (line 1057) | fromWireType(W) {
method toWireType (line 1074) | toWireType(W, h) {
method destructorFunction (line 1121) | destructorFunction(W) {
method destructorFunction (line 1139) | destructorFunction(m) {
function A (line 1161) | function A(W, h) {
function mU (line 1183) | function mU() {
class BF (line 1212) | class BF {
method constructor (line 1213) | constructor(U) {
method offsets (line 1279) | get offsets() {
method data (line 1282) | get data() {
method width (line 1285) | get width() {
method height (line 1288) | get height() {
method transforms (line 1291) | get transforms() {
method transformsWidth (line 1294) | get transformsWidth() {
method transformsHeight (line 1297) | get transformsHeight() {
method transformIndices (line 1300) | get transformIndices() {
method transformIndicesWidth (line 1303) | get transformIndicesWidth() {
method transformIndicesHeight (line 1306) | get transformIndicesHeight() {
method positions (line 1309) | get positions() {
method rotations (line 1312) | get rotations() {
method scales (line 1315) | get scales() {
method vertexCount (line 1318) | get vertexCount() {
method needsRebuild (line 1321) | get needsRebuild() {
method updating (line 1324) | get updating() {
class XU (line 1328) | class XU {
method constructor (line 1329) | constructor(U = 0, l = 0, F = 0, Q = 255) {
method flat (line 1332) | flat() {
method flatNorm (line 1335) | flatNorm() {
method toHexString (line 1338) | toHexString() {
method toString (line 1341) | toString() {
class SU (line 1345) | class SU extends QF {
method constructor (line 1346) | constructor(U, l) {
method renderData (line 1404) | get renderData() {
method depthIndex (line 1407) | get depthIndex() {
method chunks (line 1410) | get chunks() {
method splatTexture (line 1413) | get splatTexture() {
method outlineThickness (line 1416) | get outlineThickness() {
method outlineThickness (line 1419) | set outlineThickness(U) {
method outlineColor (line 1422) | get outlineColor() {
method outlineColor (line 1425) | set outlineColor(U) {
method _getVertexSource (line 1428) | _getVertexSource() {
method _getFragmentSource (line 1524) | _getFragmentSource() {
class VF (line 1562) | class VF {
method constructor (line 1563) | constructor(U = 1) {
method dispose (line 1574) | dispose() {
class ZF (line 1577) | class ZF {
method constructor (line 1578) | constructor(U = null, l = null) {
method canvas (line 1607) | get canvas() {
method gl (line 1610) | get gl() {
method renderProgram (line 1613) | get renderProgram() {
method backgroundColor (line 1616) | get backgroundColor() {
method backgroundColor (line 1619) | set backgroundColor(U) {
class nF (line 1623) | class nF {
method constructor (line 1624) | constructor(U, l, F = 0.5, Q = 0.5, Z = 5, d = !0, V = new S()) {
function sF (line 1712) | function sF(r) {
function IF (line 1729) | function IF(r, U) {
function JF (line 1732) | function JF(r, U, l) {
class oF (line 1800) | class oF extends AF {
method constructor (line 1801) | constructor(U) {
FILE: freesplatter/webui/gradio_custommodel3d/templates/component/Index-0bb1de05.js
function Wl (line 20) | function Wl(e) {
function Zl (line 214) | function Zl(e) {
function Jl (line 241) | function Jl(e, t, n) {
class bo (line 275) | class bo extends Ml {
method constructor (line 276) | constructor(t) {
function us (line 315) | function us(e) {
function fs (line 375) | function fs(e, t, n) {
class Fr (line 381) | class Fr extends Ql {
method constructor (line 382) | constructor(t) {
function ei (line 414) | function ei(e) {
function ks (line 439) | function ks(e) {
function Ts (line 594) | function Ts(e, t, n) {
class Tt (line 621) | class Tt extends cs {
method constructor (line 622) | constructor(t) {
function Fs (line 657) | function Fs(e) {
function Gs (line 752) | function Gs(e) {
function qs (line 762) | function qs(e, t, n) {
class js (line 790) | class js extends As {
method constructor (line 791) | constructor(t) {
function Zs (line 807) | function Zs(e) {
class Js (line 824) | class Js extends zs {
method constructor (line 825) | constructor(t) {
function na (line 840) | function na(e) {
class yo (line 857) | class yo extends Qs {
method constructor (line 858) | constructor(t) {
function ua (line 873) | function ua(e) {
class fa (line 890) | class fa extends ra {
method constructor (line 891) | constructor(t) {
function pa (line 906) | function pa(e) {
method constructor (line 924) | constructor(t) {
function Sa (line 939) | function Sa(e) {
class ka (line 956) | class ka extends ga {
method constructor (line 957) | constructor(t) {
function Na (line 972) | function Na(e) {
class Eo (line 989) | class Eo extends Ta {
method constructor (line 990) | constructor(t) {
function Da (line 1005) | function Da(e) {
method constructor (line 1023) | constructor(t) {
function Va (line 1362) | function Va(e) {
function Xa (line 1382) | function Xa(e) {
function ui (line 1402) | function ui(e) {
function xa (line 1431) | function xa(e) {
function Wa (line 1500) | function Wa(e, t, n) {
class Za (line 1515) | class Za extends Fa {
method constructor (line 1516) | constructor(t) {
function ci (line 1527) | function ci(e, t, n) {
function Ft (line 1532) | function Ft(e, t) {
function _i (line 1610) | function _i(e) {
function Ka (line 1680) | function Ka(e) {
function $a (line 1765) | function $a(e) {
function eu (line 1838) | function eu(e) {
function hi (line 1889) | function hi(e) {
function tu (line 1899) | async function tu() {
function nu (line 1902) | async function nu() {
function ru (line 1905) | function ru(e, t, n) {
class iu (line 1963) | class iu extends Ja {
method constructor (line 1964) | constructor(t) {
function di (line 1978) | function di(e, t, n) {
function Bo (line 1981) | function Bo(e, t, n) {
function rr (line 1984) | function rr(e) {
function lu (line 2010) | async function lu(e, t) {
function su (line 2044) | function su(e) {
function mi (line 2051) | async function mi(e) {
function uu (line 2064) | function uu(e, t, n, r) {
function fu (line 2094) | function fu(e, t) {
function cu (line 2099) | async function cu(e, t, n, r = du) {
function _u (line 2117) | async function _u(e, t) {
class qr (line 2129) | class qr {
method constructor (line 2130) | constructor({
function hu (line 2145) | function hu(e, t) {
function pi (line 2819) | function pi(e, t, n, r) {
function gi (line 2839) | function gi(e, t) {
function mu (line 2842) | function mu(e, t, n) {
function pu (line 2870) | async function pu(e, t) {
function bi (line 2881) | function bi(e, t, n) {
function kr (line 2886) | async function kr(e, t = void 0, n = [], r = !1, i = void 0) {
function gu (line 2933) | function gu(e, t) {
function vi (line 2937) | async function vi(e, t, n) {
function Tr (line 2954) | async function Tr(e, t, n) {
function ir (line 3026) | function ir(e, t) {
function ot (line 3133) | function ot() {
function bu (line 3135) | function bu(e) {
function vu (line 3138) | function vu(e) {
function wu (line 3141) | function wu(e) {
function yu (line 3144) | function yu(e, t) {
function Eu (line 3147) | function Eu(e, ...t) {
function Lo (line 3159) | function Lo(e) {
function Su (line 3164) | function Su(e) {
function ku (line 3176) | function ku(e, t) {
function Zt (line 3181) | function Zt(e, t = ot) {
function It (line 3207) | function It(e, t, n) {
function yi (line 3238) | function yi(e) {
function Ar (line 3241) | function Ar(e, t, n, r) {
function Ei (line 3259) | function Ei(e, t = {}) {
function Tu (line 3293) | function Tu(e) {
function Bu (line 3299) | function Bu(e) {
function Cu (line 3302) | function Cu(e) {
function Pu (line 3307) | function Pu(e) {
function Iu (line 3310) | function Iu(e) {
function jt (line 3313) | function jt(e, t) {
function Lu (line 3316) | function Lu(e, t, n) {
function Ou (line 3321) | function Ou(e, t) {
function Mu (line 3327) | function Mu(e) {
function Si (line 3332) | function Si(e) {
function Oo (line 3335) | function Oo(e, t) {
function Du (line 3342) | function Du(e, t) {
function Ru (line 3345) | function Ru(e, t, n) {
function At (line 3353) | function At(e, t, n) {
function Hn (line 3375) | function Hn(e, t) {
function or (line 3394) | function or(e, t, n) {
function ki (line 3412) | function ki(e) {
function qu (line 3415) | function qu(e) {
function Mo (line 3418) | function Mo(e) {
function Do (line 3421) | function Do(e) {
function Ro (line 3424) | function Ro(e) {
function Uo (line 3427) | function Uo(e) {
function Fo (line 3430) | function Fo(e) {
function ju (line 3433) | function ju(e) {
function Go (line 3436) | function Go(e) {
function qo (line 3439) | function qo(e) {
function Cr (line 3442) | function Cr(e) {
function Vu (line 3446) | function Vu(e) {
function xu (line 3538) | function xu(e) {
function Wu (line 3556) | function Wu(e) {
function Ai (line 3560) | function Ai(e) {
function Xo (line 3566) | function Xo(e) {
function Ju (line 3606) | function Ju(e) {
function Bi (line 3620) | function Bi(e) {
function Qu (line 3624) | function Qu(e) {
function Yu (line 5064) | function Yu(e, t) {
function Ku (line 5080) | function Ku(e) {
function O (line 5103) | function O(e, t) {
function Wo (line 5184) | function Wo(e, t) {
function e (line 5209) | function e(t, n) {
function Ir (line 5659) | function Ir(e) {
function df (line 5662) | function df(e) {
function mf (line 5665) | function mf(e) {
function Zo (line 5668) | function Zo(e) {
function pf (line 5671) | function pf(e) {
function Lr (line 5674) | function Lr(e) {
function gf (line 5683) | function gf(e, t) {
function sr (line 5692) | function sr(e, t) {
function bf (line 5699) | function bf(e) {
function Jo (line 5702) | function Jo(e, t, n, r) {
function Qo (line 5706) | function Qo(e, t, n) {
function jr (line 5710) | function jr(e, t, n, r, i) {
function vf (line 5713) | function vf(e, t) {
function wf (line 5717) | function wf(e, t) {
function yf (line 5720) | function yf(e, t) {
function zr (line 5726) | function zr() {
function t (line 5750) | function t(n, r, i) {
function t (line 5762) | function t(n, r, i, o) {
function t (line 5771) | function t(n, r, i) {
function t (line 5780) | function t(n, r) {
function Af (line 5789) | function Af(e) {
function Bf (line 5795) | function Bf(e) {
function _n (line 5798) | function _n(e, t, n, r, i, o, l) {
function Cf (line 5896) | function Cf(e, t) {
function Hf (line 5901) | function Hf(e, t) {
function ur (line 5906) | function ur(e) {
function Nf (line 5920) | function Nf(e) {
function e (line 5955) | function e(t, n, r, i) {
function If (line 6060) | function If(e, t) {
function Of (line 6095) | function Of(e) {
function Ko (line 6098) | function Ko(e) {
function Mf (line 6101) | function Mf(e, t) {
function Df (line 6107) | function Df(e) {
function Rf (line 6117) | function Rf(e, ...t) {
function Uf (line 6126) | function Uf(e, t) {
function $o (line 6129) | function $o(e) {
function Ff (line 6132) | function Ff(e) {
function Or (line 6138) | function Or(e) {
function Gf (line 6146) | function Gf(e, t) {
function el (line 6152) | function el(e) {
function Ht (line 6203) | function Ht() {
function Oi (line 6217) | function Oi(e) {
function Pn (line 6220) | function Pn(e, t = Ht().fallbackLocale) {
function mt (line 6224) | function mt() {
function Ui (line 6377) | function Ui(e) {
function gc (line 6413) | function gc(e) {
function Ut (line 6457) | function Ut(e) {
function bc (line 6460) | function bc(e) {
function vc (line 6466) | function vc(e, t, n) {
class wc (line 6494) | class wc extends cc {
method constructor (line 6495) | constructor(t) {
function Lc (line 6530) | function Lc(e) {
function Oc (line 6707) | function Oc(e) {
function Mc (line 6735) | function Mc(e) {
function Gi (line 6838) | function Gi(e) {
function Dc (line 6883) | function Dc(e) {
function qi (line 6919) | function qi(e) {
function Rc (line 6929) | function Rc(e, t, n) {
function Uc (line 6944) | function Uc(e, t, n) {
class Fc (line 7067) | class Fc extends yc {
method constructor (line 7068) | constructor(t) {
method paste_clipboard (line 7095) | get paste_clipboard() {
method open_file_upload (line 7098) | get open_file_upload() {
method load_files (line 7101) | get load_files() {
function hl (line 7106) | function hl() {
function jc (line 7109) | function jc(e) {
function dl (line 7113) | function dl(e, t) {
function ml (line 7119) | function ml(e) {
function K_ (line 7125) | async function K_(e) {
function Jc (line 7170) | function Jc(e) {
function Qc (line 7260) | function Qc(e) {
function Yc (line 7293) | function Yc(e) {
function Kc (line 7366) | function Kc(e) {
function $c (line 7417) | function $c(e) {
function e0 (line 7452) | function e0(e, t, n) {
class t0 (line 7499) | class t0 extends zc {
method constructor (line 7500) | constructor(t) {
function Xi (line 7524) | function Xi(e) {
function xi (line 7562) | function xi(e) {
function Wi (line 7600) | function Wi(e) {
function f0 (line 7637) | function f0(e) {
function c0 (line 7671) | function c0(e) {
function _0 (line 7745) | function _0(e, t, n) {
class h0 (line 7764) | class h0 extends n0 {
method constructor (line 7765) | constructor(t) {
function C0 (line 7802) | function C0(e) {
function H0 (line 7857) | function H0(e) {
function N0 (line 7906) | function N0(e) {
function P0 (line 7983) | function P0(e) {
function I0 (line 8048) | function I0(e) {
function L0 (line 8099) | function L0(e) {
function Ji (line 8151) | function Ji(e) {
function O0 (line 8161) | async function O0() {
function M0 (line 8164) | async function M0() {
function D0 (line 8167) | function D0(e, t, n) {
class R0 (line 8232) | class R0 extends d0 {
method constructor (line 8233) | constructor(t) {
function St (line 8247) | function St(e) {
function X0 (line 8269) | function X0(e) {
function x0 (line 8309) | function x0(e, t, n) {
class W0 (line 8329) | class W0 extends U0 {
method constructor (line 8330) | constructor(t) {
function to (line 8365) | function to(e, t, n) {
function no (line 8369) | function no(e, t, n) {
function s_ (line 8373) | function s_(e) {
function a_ (line 8429) | function a_(e) {
function ro (line 8524) | function ro(e) {
function u_ (line 8544) | function u_(e) {
function f_ (line 8559) | function f_(e) {
function c_ (line 8589) | function c_(e) {
function io (line 8629) | function io(e) {
function __ (line 8658) | function __(e) {
function h_ (line 8682) | function h_(e) {
function oo (line 8713) | function oo(e) {
function lo (line 8734) | function lo(e) {
function d_ (line 8766) | function d_(e) {
function m_ (line 8797) | function m_(e) {
function so (line 8823) | function so(e) {
function ao (line 8863) | function ao(e) {
function p_ (line 8905) | function p_(e) {
function uo (line 8919) | function uo(e) {
function fo (line 8941) | function fo(e) {
function co (line 8955) | function co(e) {
function _o (line 8981) | function _o(e) {
function ho (line 9012) | function ho(e) {
function g_ (line 9037) | function g_(e) {
function b_ (line 9149) | async function b_(e, t = !0) {
function v_ (line 9165) | function v_(e, t, n) {
class Al (line 9244) | class Al extends Z0 {
method constructor (line 9245) | constructor(t) {
function N_ (line 9289) | function N_(e) {
function P_ (line 9352) | function P_(e, t, n) {
class $_ (line 9358) | class $_ extends w_ {
method constructor (line 9359) | constructor(t) {
function M_ (line 9382) | function M_(e) {
function D_ (line 9467) | function D_(e) {
function R_ (line 9552) | function R_(e) {
function U_ (line 9586) | function U_(e) {
function F_ (line 9722) | function F_(e) {
function G_ (line 9772) | function G_(e) {
function q_ (line 9841) | function q_(e) {
function j_ (line 9861) | function j_(e) {
function z_ (line 9933) | function z_(e) {
function V_ (line 9966) | function V_(e, t, n) {
class eh (line 10001) | class eh extends I_ {
method constructor (line 10002) | constructor(t) {
FILE: freesplatter/webui/gradio_custommodel3d/templates/component/wrapper-6f348d45-f837cf34.js
function z (line 2) | function z(s) {
function gt (line 5) | function gt(s) {
function Oe (line 33) | function Oe(s) {
function vt (line 36) | function vt() {
function Qe (line 39) | function Qe(s) {
function St (line 42) | function St(s, e) {
function wt (line 105) | function wt(s, e) {
function Je (line 118) | function Je(s, e, t, r, i) {
function et (line 122) | function et(s, e) {
function Ot (line 126) | function Ot(s) {
function Ee (line 129) | function Ee(s) {
method constructor (line 161) | constructor(e) {
method add (line 172) | add(e) {
method [ue] (line 180) | [ue]() {
method constructor (line 214) | constructor(e, t, r) {
method extensionName (line 223) | static get extensionName() {
method offer (line 232) | offer() {
method accept (line 243) | accept(e) {
method cleanup (line 251) | cleanup() {
method acceptAsServer (line 268) | acceptAsServer(e) {
method acceptAsClient (line 281) | acceptAsClient(e) {
method normalizeParams (line 300) | normalizeParams(e) {
method decompress (line 344) | decompress(e, t, r) {
method compress (line 359) | compress(e, t, r) {
method _decompress (line 374) | _decompress(e, t, r) {
method _compress (line 404) | _compress(e, t, r) {
function Ut (line 425) | function Ut(s) {
function st (line 428) | function st(s) {
function Bt (line 435) | function Bt(s) {
function Wt (line 582) | function Wt(s) {
function be (line 585) | function be(s) {
method constructor (line 647) | constructor(e = {}) {
method _write (line 658) | _write(e, t, r) {
method consume (line 670) | consume(e) {
method startLoop (line 698) | startLoop(e) {
method getInfo (line 731) | getInfo() {
method getPayloadLength16 (line 845) | getPayloadLength16() {
method getPayloadLength64 (line 858) | getPayloadLength64() {
method haveLength (line 878) | haveLength() {
method getMask (line 894) | getMask() {
method getData (line 908) | getData(e) {
method decompress (line 932) | decompress(e, t) {
method dataMessage (line 961) | dataMessage() {
method controlMessage (line 989) | controlMessage(e) {
function g (line 1024) | function g(s, e, t, r, i) {
method constructor (line 1040) | constructor(e, t, r) {
method frame (line 1064) | static frame(e, t) {
method close (line 1083) | close(e, t, r, i) {
method ping (line 1119) | ping(e, t, r) {
method pong (line 1143) | pong(e, t, r) {
method send (line 1175) | send(e, t, r) {
method dispatch (line 1228) | dispatch(e, t, r, i) {
method dequeue (line 1254) | dequeue() {
method enqueue (line 1266) | enqueue(e) {
method sendFrame (line 1276) | sendFrame(e, t) {
class B (line 1282) | class B {
method constructor (line 1289) | constructor(e) {
method target (line 1295) | get target() {
method type (line 1301) | get type() {
class Y (line 1307) | class Y extends B {
method constructor (line 1321) | constructor(e, t = {}) {
method code (line 1327) | get code() {
method reason (line 1333) | get reason() {
method wasClean (line 1339) | get wasClean() {
class le (line 1346) | class le extends B {
method constructor (line 1356) | constructor(e, t = {}) {
method error (line 1362) | get error() {
method message (line 1368) | get message() {
class xe (line 1374) | class xe extends B {
method constructor (line 1383) | constructor(e, t = {}) {
method data (line 1389) | get data() {
method addEventListener (line 1407) | addEventListener(s, e, t = {}) {
method removeEventListener (line 1452) | removeEventListener(s, e) {
function Z (line 1467) | function Z(s, e, t) {
function k (line 1471) | function k(s, e, t) {
function ss (line 1474) | function ss(s) {
function rs (line 1537) | function rs(s) {
method constructor (line 1569) | constructor(e, t, r) {
method binaryType (line 1579) | get binaryType() {
method binaryType (line 1582) | set binaryType(e) {
method bufferedAmount (line 1588) | get bufferedAmount() {
method extensions (line 1594) | get extensions() {
method isPaused (line 1600) | get isPaused() {
method onclose (line 1607) | get onclose() {
method onerror (line 1614) | get onerror() {
method onopen (line 1621) | get onopen() {
method onmessage (line 1628) | get onmessage() {
method protocol (line 1634) | get protocol() {
method readyState (line 1640) | get readyState() {
method url (line 1646) | get url() {
method setSocket (line 1663) | setSocket(e, t, r) {
method emitClose (line 1678) | emitClose() {
method close (line 1705) | close(e, t) {
method pause (line 1729) | pause() {
method ping (line 1740) | ping(e, t, r) {
method pong (line 1757) | pong(e, t, r) {
method resume (line 1771) | resume() {
method send (line 1789) | send(e, t, r) {
method terminate (line 1810) | terminate() {
method get (line 1867) | get() {
method set (line 1873) | set(e) {
function ht (line 1888) | function ht(s, e, t, r) {
function ee (line 2053) | function ee(s, e) {
function bs (line 2056) | function bs(s) {
function xs (line 2059) | function xs(s) {
function b (line 2062) | function b(s, e, t) {
function ve (line 2067) | function ve(s, e, t) {
function ks (line 2079) | function ks(s, e) {
function ws (line 2083) | function ws() {
function Os (line 2087) | function Os(s) {
function Ye (line 2091) | function Ye() {
function Cs (line 2094) | function Cs(s, e) {
function Ts (line 2097) | function Ts(s) {
function Ls (line 2101) | function Ls(s) {
function ct (line 2104) | function ct(s) {
function ut (line 2107) | function ut() {
function fe (line 2113) | function fe(s) {
function dt (line 2116) | function dt() {
function _t (line 2120) | function _t() {
function Ps (line 2125) | function Ps(s) {
class As (line 2154) | class As extends Us {
method constructor (line 2181) | constructor(e, t) {
method address (line 2234) | address() {
method close (line 2246) | close(e) {
method shouldHandle (line 2270) | shouldHandle(e) {
method handleUpgrade (line 2288) | handleUpgrade(e, t, r, i) {
method completeUpgrade (line 2376) | completeUpgrade(e, t, r, i, n, o, l) {
function js (line 2412) | function js(s, e) {
function G (line 2420) | function G(s) {
function Ze (line 2423) | function Ze() {
function H (line 2426) | function H(s, e, t, r) {
function R (line 2440) | function R(s, e, t, r, i) {
FILE: freesplatter/webui/gradio_custommodel3d/templates/example/index.js
function w (line 15) | function w(a) {
function h (line 78) | function h(a, e, n) {
class E (line 84) | class E extends u {
method constructor (line 85) | constructor(e) {
FILE: freesplatter/webui/parameters.py
function parse_3d_args (line 134) | def parse_3d_args(args, kwargs):
function parse_2d_args (line 143) | def parse_2d_args(args, kwargs):
function parse_retex_args (line 149) | def parse_retex_args(args, kwargs):
function parse_stablessdnerf_args (line 163) | def parse_stablessdnerf_args(args, kwargs):
FILE: freesplatter/webui/runner.py
function inv_sigmoid (line 33) | def inv_sigmoid(x: torch.Tensor) -> torch.Tensor:
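`inv_sigmoid` is the standard logit function, the inverse of the logistic sigmoid — useful when storing Gaussian opacities, which are kept in pre-sigmoid space. The repository version operates on `torch.Tensor`; a scalar sketch of the same math:

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def inv_sigmoid(x: float) -> float:
    # logit: inverse of the logistic sigmoid, defined for 0 < x < 1
    return math.log(x / (1.0 - x))
```

With an `opacity_threshold` in (0, 1), applying `inv_sigmoid` converts it into the raw-logit space in which the model's opacities are stored, so thresholding can happen before the sigmoid is applied.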
function save_gaussian (line 37) | def save_gaussian(latent, gs_vis_path, model, opacity_threshold=None, pa...
class FreeSplatterRunner (line 73) | class FreeSplatterRunner:
method __init__ (line 74) | def __init__(self, device):
method run_segmentation (line 147) | def run_segmentation(
method run_img_to_3d (line 159) | def run_img_to_3d(
method run_views_to_3d (line 236) | def run_views_to_3d(
method run_freesplatter_object (line 285) | def run_freesplatter_object(
method visualize_cameras_object (line 406) | def visualize_cameras_object(
method run_views_to_scene (line 445) | def run_views_to_scene(
method run_freesplatter_scene (line 476) | def run_freesplatter_scene(
method visualize_cameras_scene (line 538) | def visualize_cameras_scene(
FILE: freesplatter/webui/shared_opts.py
function create_prompt_opts (line 7) | def create_prompt_opts(var_dict):
function create_generate_bar (line 15) | def create_generate_bar(var_dict, text='Generate', variant='primary', se...
function create_base_opts (line 38) | def create_base_opts(var_dict,
function create_auxiliary_prompt_opts (line 66) | def create_auxiliary_prompt_opts(var_dict, aux_prompt='', aux_negative_p...
function create_batch_size_opts (line 77) | def create_batch_size_opts(var_dict,
function create_loss_sliders (line 109) | def create_loss_sliders(var_dict,
function create_optimization_opts (line 139) | def create_optimization_opts(var_dict,
function create_stablessdnerf_opts (line 179) | def create_stablessdnerf_opts(
function create_superres_opts (line 206) | def create_superres_opts(
function on_select (line 239) | def on_select(evt: gr.SelectData):
function create_mesh_input (line 244) | def create_mesh_input(var_dict, cache_dir, preproc_api, render_bs=8, api...
function create_send_buttons (line 268) | def create_send_buttons(var_dict):
function set_seed (line 279) | def set_seed(seed):
function send_to_click (line 284) | def send_to_click(*inputs, target_tab_ids=None):
FILE: freesplatter/webui/tab_img_to_3d.py
function create_interface_img_to_3d (line 8) | def create_interface_img_to_3d(segmentation_api, freesplatter_api, model...
FILE: freesplatter/webui/tab_instant3d.py
function create_interface_instant3d (line 8) | def create_interface_instant3d(
FILE: freesplatter/webui/tab_text_to_img_to_3d.py
function create_interface_text_to_img_to_3d (line 8) | def create_interface_text_to_img_to_3d(sd_api, examples=None, advanced=T...
FILE: freesplatter/webui/tab_views_to_3d.py
function create_interface_views_to_3d (line 10) | def create_interface_views_to_3d(freesplatter_api):
FILE: freesplatter/webui/tab_views_to_scene.py
function create_interface_views_to_scene (line 10) | def create_interface_views_to_scene(freesplatter_api):
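The index above lists `inv_sigmoid(x: torch.Tensor) -> torch.Tensor` in `freesplatter/webui/runner.py` (line 33) but not its body. A minimal sketch of the conventional inverse-sigmoid (logit), written in plain Python floats so it stands alone — the name and signature come from the index, but the actual `runner.py` implementation may differ:

```python
import math

def inv_sigmoid(x: float) -> float:
    """Inverse of the logistic sigmoid: log(x / (1 - x)).

    Maps a value in (0, 1) back to an unbounded logit.
    (The repository version operates on torch.Tensor rather than float.)
    """
    return math.log(x / (1.0 - x))

def sigmoid(x: float) -> float:
    """Logistic sigmoid, used here only to check the round trip."""
    return 1.0 / (1.0 + math.exp(-x))

# Round trip: sigmoid followed by inv_sigmoid recovers the input.
print(inv_sigmoid(0.5))                    # 0.0 by symmetry
print(round(inv_sigmoid(sigmoid(2.0)), 6))  # 2.0
```

Gaussian-splatting pipelines commonly store opacities through a sigmoid, so an inverse sigmoid is the natural way to map an opacity threshold back into raw parameter space — which would fit the `opacity_threshold` parameter of `save_gaussian` listed just below it, though that connection is an inference, not something the index confirms.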
Condensed preview — 64 files, each showing path, character count, and a content snippet. Download the .json file for the full structured content (10,880K chars).
[
{
"path": ".gitignore",
"chars": 362,
"preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
},
{
"path": "LICENSE.txt",
"chars": 9607,
"preview": "Tencent is pleased to support the open source community by making FreeSplatter available. \n\nCopyright (C) 2024 THL A29 L"
},
{
"path": "README.md",
"chars": 3146,
"preview": "<div align=\"center\">\n \n# FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction\n\n<a href='https:/"
},
{
"path": "app.py",
"chars": 5263,
"preview": "import os\nif 'OMP_NUM_THREADS' not in os.environ:\n os.environ['OMP_NUM_THREADS'] = '16'\nimport torch\nimport gradio as"
},
{
"path": "configs/freesplatter-object-2dgs.yaml",
"chars": 539,
"preview": "model:\n target: freesplatter.models.model.FreeSplatterModel\n params:\n transformer_config:\n target: freesplatte"
},
{
"path": "configs/freesplatter-object.yaml",
"chars": 521,
"preview": "model:\n target: freesplatter.models.model.FreeSplatterModel\n params:\n transformer_config:\n target: freesplatte"
},
{
"path": "configs/freesplatter-scene.yaml",
"chars": 549,
"preview": "model:\n target: freesplatter.models.model.FreeSplatterModel\n params:\n transformer_config:\n target: freesplatte"
},
{
"path": "freesplatter/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "freesplatter/hunyuan/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "freesplatter/hunyuan/hunyuan3d_mvd_std_pipeline.py",
"chars": 20253,
"preview": "# Open Source Model Licensed under the Apache License Version 2.0 and Other Licenses of the Third-Party Components there"
},
{
"path": "freesplatter/hunyuan/utils.py",
"chars": 3775,
"preview": "# Open Source Model Licensed under the Apache License Version 2.0 and Other Licenses of the Third-Party Components there"
},
{
"path": "freesplatter/models/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "freesplatter/models/model.py",
"chars": 5076,
"preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torchvision.transforms import v2\nfrom einops imp"
},
{
"path": "freesplatter/models/renderer/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "freesplatter/models/renderer/gaussian_renderer.py",
"chars": 2799,
"preview": "import torch\n\nfrom .gaussian_utils import render, GaussianModel\n\n\nclass GaussianRenderer:\n def __init__(self, rendere"
},
{
"path": "freesplatter/models/renderer/gaussian_utils.py",
"chars": 14149,
"preview": "\"\"\"\nGaussian Splatting.\nPartially borrowed from https://github.com/graphdeco-inria/gaussian-splatting.\n\"\"\"\n\n\nimport os\ni"
},
{
"path": "freesplatter/models/renderer_2dgs/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "freesplatter/models/renderer_2dgs/gaussian_renderer.py",
"chars": 3404,
"preview": "import torch\n\nfrom .gaussian_utils import render, GaussianModel\n\n\nclass GaussianRenderer:\n def __init__(self, rendere"
},
{
"path": "freesplatter/models/renderer_2dgs/gaussian_utils.py",
"chars": 16676,
"preview": "\"\"\"\nGaussian Splatting.\nPartially borrowed from https://github.com/graphdeco-inria/gaussian-splatting.\n\"\"\"\n\n\nimport os\ni"
},
{
"path": "freesplatter/models/transformer.py",
"chars": 5903,
"preview": "import math\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom inspect import isfunction\nfrom einop"
},
{
"path": "freesplatter/utils/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "freesplatter/utils/camera_util.py",
"chars": 4544,
"preview": "import torch\nimport numpy as np\n\n\ndef normalize_vecs(vectors: torch.Tensor) -> torch.Tensor:\n \"\"\"\n Normalize vecto"
},
{
"path": "freesplatter/utils/geometry_util.py",
"chars": 7550,
"preview": "import torch\nimport torch.nn.functional as F\nfrom einops import rearrange\n\n\n# --- Intrinsics Transformations ---\n\ndef no"
},
{
"path": "freesplatter/utils/infer_util.py",
"chars": 5411,
"preview": "import os\nimport importlib\nimport imageio\nimport torch\nimport rembg\nimport numpy as np\nimport PIL.Image\nfrom PIL import "
},
{
"path": "freesplatter/utils/mesh_optim.py",
"chars": 8854,
"preview": "from typing import *\nimport numpy as np\nimport torch\nimport utils3d\nimport nvdiffrast.torch as dr\nfrom tqdm import tqdm\n"
},
{
"path": "freesplatter/utils/recon_util.py",
"chars": 11817,
"preview": "import cv2\nimport math\nimport scipy\nimport numpy as np\nimport torch\nimport open3d as o3d\nfrom tqdm import tqdm\n\nfrom .ca"
},
{
"path": "freesplatter/webui/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "freesplatter/webui/camera_viewer/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "freesplatter/webui/camera_viewer/utils.py",
"chars": 2896,
"preview": "import os\nimport numpy as np\nfrom PIL import Image\n\n\ndef load_image(fpath, sz=256):\n img = Image.open(fpath)\n img "
},
{
"path": "freesplatter/webui/camera_viewer/visualizer.py",
"chars": 8908,
"preview": "import os\n\nfrom PIL import Image\nimport plotly.graph_objects as go\nimport numpy as np\n\n\ndef calc_cam_cone_pts_3d(c2w, fo"
},
{
"path": "freesplatter/webui/gradio_customgs/__init__.py",
"chars": 55,
"preview": "from .customgs import CustomGS\n\n__all__ = ['CustomGS']\n"
},
{
"path": "freesplatter/webui/gradio_customgs/customgs.py",
"chars": 6567,
"preview": "\"\"\"gr.Model3D() component.\"\"\"\n\nfrom __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Callable\n"
},
{
"path": "freesplatter/webui/gradio_customgs/customgs.pyi",
"chars": 28658,
"preview": "\"\"\"gr.Model3D() component.\"\"\"\n\nfrom __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Callable\n"
},
{
"path": "freesplatter/webui/gradio_customgs/templates/component/Canvas3D-60a8d213.js",
"chars": 4627687,
"preview": "import { c as Gr, g as av, r as sv } from \"./Index-f5583db3.js\";\nfunction cv(an, ln) {\n for (var Be = 0; Be < ln.length"
},
{
"path": "freesplatter/webui/gradio_customgs/templates/component/Canvas3DGS-0fbc0d9a.js",
"chars": 212804,
"preview": "import { r as KU } from \"./Index-f5583db3.js\";\nclass X {\n constructor(U = 0, Q = 0, F = 0) {\n this.x = U, this.y = Q"
},
{
"path": "freesplatter/webui/gradio_customgs/templates/component/Index-f5583db3.js",
"chars": 242623,
"preview": "const {\n SvelteComponent: Ml,\n assign: Rl,\n create_slot: Ul,\n detach: Fl,\n element: Gl,\n get_all_dirty_from_scope:"
},
{
"path": "freesplatter/webui/gradio_customgs/templates/component/__vite-browser-external-2447137e.js",
"chars": 41,
"preview": "const e = {};\nexport {\n e as default\n};\n"
},
{
"path": "freesplatter/webui/gradio_customgs/templates/component/index.js",
"chars": 163,
"preview": "import { E as s, a as l, M as o, I as d } from \"./Index-f5583db3.js\";\nexport {\n s as BaseExample,\n l as BaseModel3D,\n "
},
{
"path": "freesplatter/webui/gradio_customgs/templates/component/style.css",
"chars": 15223,
"preview": ".block.svelte-1t38q2d{position:relative;margin:0;box-shadow:var(--block-shadow);border-width:var(--block-border-width);b"
},
{
"path": "freesplatter/webui/gradio_customgs/templates/component/wrapper-6f348d45-19fa94bf.js",
"chars": 78062,
"preview": "import S from \"./__vite-browser-external-2447137e.js\";\nfunction z(s) {\n return s && s.__esModule && Object.prototype.ha"
},
{
"path": "freesplatter/webui/gradio_customgs/templates/example/index.js",
"chars": 1641,
"preview": "const {\n SvelteComponent: u,\n append: c,\n attr: d,\n detach: g,\n element: v,\n init: o,\n insert: r,\n noop: f,\n sa"
},
{
"path": "freesplatter/webui/gradio_customgs/templates/example/style.css",
"chars": 61,
"preview": ".gallery.svelte-1gecy8w{padding:var(--size-1) var(--size-2)}\n"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/__init__.py",
"chars": 70,
"preview": "from .custommodel3d import CustomModel3D\n\n__all__ = ['CustomModel3D']\n"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/custommodel3d.py",
"chars": 6498,
"preview": "\"\"\"gr.Model3D() component.\"\"\"\n\nfrom __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Callable\n"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/custommodel3d.pyi",
"chars": 28589,
"preview": "\"\"\"gr.Model3D() component.\"\"\"\n\nfrom __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Callable\n"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/templates/component/Canvas3D-e42d3d6b.js",
"chars": 4627897,
"preview": "import { c as Gr, g as av, r as sv } from \"./Index-0bb1de05.js\";\nfunction cv(an, ln) {\n for (var Be = 0; Be < ln.length"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/templates/component/Canvas3DGS-f5539f54.js",
"chars": 207916,
"preview": "import { r as KU } from \"./Index-0bb1de05.js\";\nclass S {\n constructor(U = 0, l = 0, F = 0) {\n this.x = U, this.y = l"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/templates/component/Index-0bb1de05.js",
"chars": 241484,
"preview": "const {\n SvelteComponent: Ml,\n assign: Dl,\n create_slot: Rl,\n detach: Ul,\n element: Fl,\n get_all_dirty_from_scope:"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/templates/component/__vite-browser-external-2447137e.js",
"chars": 41,
"preview": "const e = {};\nexport {\n e as default\n};\n"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/templates/component/index.js",
"chars": 163,
"preview": "import { E as s, a as l, M as o, I as d } from \"./Index-0bb1de05.js\";\nexport {\n s as BaseExample,\n l as BaseModel3D,\n "
},
{
"path": "freesplatter/webui/gradio_custommodel3d/templates/component/style.css",
"chars": 15200,
"preview": ".block.svelte-1t38q2d{position:relative;margin:0;box-shadow:var(--block-shadow);border-width:var(--block-border-width);b"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/templates/component/wrapper-6f348d45-f837cf34.js",
"chars": 78109,
"preview": "import S from \"./__vite-browser-external-2447137e.js\";\nfunction z(s) {\n return s && s.__esModule && Object.prototype.ha"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/templates/example/index.js",
"chars": 1641,
"preview": "const {\n SvelteComponent: u,\n append: c,\n attr: d,\n detach: g,\n element: v,\n init: o,\n insert: r,\n noop: f,\n sa"
},
{
"path": "freesplatter/webui/gradio_custommodel3d/templates/example/style.css",
"chars": 61,
"preview": ".gallery.svelte-1gecy8w{padding:var(--size-1) var(--size-2)}\n"
},
{
"path": "freesplatter/webui/parameters.py",
"chars": 4943,
"preview": "from collections import OrderedDict\n\n\nnerf_mesh_defaults = OrderedDict([\n ('prompt', None),\n ('negative_prompt', N"
},
{
"path": "freesplatter/webui/runner.py",
"chars": 22763,
"preview": "import os\nimport json\nimport uuid\nimport time\nimport rembg\nimport numpy as np\nimport trimesh\nimport torch\nimport fpsampl"
},
{
"path": "freesplatter/webui/shared_opts.py",
"chars": 14979,
"preview": "import random\nimport gradio as gr\nfrom functools import partial\nfrom .gradio_custommodel3d import CustomModel3D\n\n\ndef cr"
},
{
"path": "freesplatter/webui/style.css",
"chars": 568,
"preview": ".force-hide-container {\n margin: 0;\n box-shadow: none;\n --block-border-width: 0;\n background: transparent;\n "
},
{
"path": "freesplatter/webui/tab_img_to_3d.py",
"chars": 7264,
"preview": "import random\nimport gradio as gr\nfrom functools import partial\nfrom .gradio_custommodel3d import CustomModel3D\nfrom .gr"
},
{
"path": "freesplatter/webui/tab_instant3d.py",
"chars": 2125,
"preview": "import gradio as gr\nfrom functools import partial\nfrom .gradio_custommodel3d import CustomModel3D\nfrom .shared_opts impo"
},
{
"path": "freesplatter/webui/tab_text_to_img_to_3d.py",
"chars": 3996,
"preview": "import gradio as gr\nfrom functools import partial\nfrom .shared_opts import create_base_opts, create_generate_bar, create"
},
{
"path": "freesplatter/webui/tab_views_to_3d.py",
"chars": 4795,
"preview": "import os\nimport glob\nimport gradio as gr\nfrom functools import partial\nfrom PIL import Image\nfrom .gradio_custommodel3d"
},
{
"path": "freesplatter/webui/tab_views_to_scene.py",
"chars": 2845,
"preview": "import os\nimport glob\nimport gradio as gr\nfrom functools import partial\nfrom PIL import Image\nfrom .gradio_custommodel3d"
},
{
"path": "requirements.txt",
"chars": 619,
"preview": "pytorch-lightning==2.4.0\neinops\nplotly\nomegaconf\ntrimesh\nrembg\ngradio==5.5.0\nhuggingface_hub[cli]==0.26.2\ntransformers=="
}
]
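The condensed preview above is a flat JSON array of objects with `path`, `chars`, and `preview` fields. As a sketch of how the downloadable .json could be summarized, here is a short script run against a hypothetical two-entry sample that follows the same schema (the real download has 64 entries):

```python
import json

# Hypothetical two-entry sample mirroring the schema shown above
# (path, chars, preview); values copied from the preview entries.
sample = json.loads("""
[
  {"path": "freesplatter/webui/runner.py", "chars": 22763, "preview": "import os..."},
  {"path": "requirements.txt", "chars": 619, "preview": "pytorch-lightning==2.4.0..."}
]
""")

# Total extracted characters and the largest file by character count.
total_chars = sum(entry["chars"] for entry in sample)
largest = max(sample, key=lambda entry: entry["chars"])

print(total_chars)      # 23382
print(largest["path"])  # freesplatter/webui/runner.py
```

The same two lines scale unchanged to the full 64-entry array, which is convenient for deciding which files are worth feeding to a context-limited model.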
About this extraction
This page contains the full source code of the TencentARC/FreeSplatter GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 64 files (10.1 MB, approximately 2.7M tokens) and includes a symbol index of 1580 extracted functions, classes, methods, constants, and types.
Extracted by GitExtract, built by Nikandr Surkov.