Full Code of droid-dataset/droid for AI

Repository: droid-dataset/droid
Branch: main
Commit: 33ae6a67274f
Files: 151
Total size: 18.6 MB

Directory structure:
droid/

├── .docker/
│   ├── README.md
│   ├── laptop/
│   │   ├── Dockerfile.laptop
│   │   ├── Dockerfile.laptop_fr3
│   │   ├── Dockerfile.laptop_panda
│   │   ├── docker-compose-laptop-data-upload.yaml
│   │   ├── docker-compose-laptop.yaml
│   │   └── entrypoint.sh
│   └── nuc/
│       ├── Dockerfile.nuc
│       ├── Dockerfile.nuc_fr3
│       ├── Dockerfile.nuc_panda
│       └── docker-compose-nuc.yaml
├── .github/
│   └── workflows/
│       ├── build_container_laptop_fr3.yaml
│       ├── build_container_laptop_panda.yaml
│       ├── build_container_nuc_fr3.yaml
│       ├── build_container_nuc_panda.yaml
│       ├── pages.yaml
│       └── pre-commit.yaml
├── .gitignore
├── .gitmodules
├── .pre-commit-config.yaml
├── Makefile
├── README.md
├── config/
│   ├── fr3/
│   │   ├── franka_hardware.yaml
│   │   └── franka_panda.yaml
│   └── panda/
│       ├── franka_hardware.yaml
│       └── franka_panda.yaml
├── docs/
│   ├── Gemfile
│   ├── _config.yml
│   ├── _sass/
│   │   └── color_schemes/
│   │       └── droid.scss
│   ├── contribution-guidelines.md
│   ├── example-workflows/
│   │   ├── calibrating-cameras.md
│   │   ├── data-collection.md
│   │   ├── example-workflows.md
│   │   ├── teleoperation.md
│   │   └── training-policies.md
│   ├── hardware-setup/
│   │   ├── assembly.md
│   │   ├── hardware-setup.md
│   │   └── shopping-list.md
│   ├── index.md
│   ├── software-setup/
│   │   ├── docker.md
│   │   ├── host-installation.md
│   │   └── software-setup.md
│   └── the-droid-dataset.md
├── droid/
│   ├── __init__.py
│   ├── calibration/
│   │   ├── __init__.py
│   │   ├── calibration_info.json
│   │   └── calibration_utils.py
│   ├── camera_utils/
│   │   ├── __init__.py
│   │   ├── camera_readers/
│   │   │   ├── __init__.py
│   │   │   └── zed_camera.py
│   │   ├── info.py
│   │   ├── recording_readers/
│   │   │   ├── mp4_reader.py
│   │   │   └── svo_reader.py
│   │   └── wrappers/
│   │       ├── __init__.py
│   │       ├── multi_camera_wrapper.py
│   │       └── recorded_multi_camera_wrapper.py
│   ├── controllers/
│   │   └── oculus_controller.py
│   ├── data_loading/
│   │   ├── data_loader.py
│   │   ├── dataset.py
│   │   ├── tf_data_loader.py
│   │   └── trajectory_sampler.py
│   ├── data_processing/
│   │   ├── data_transforms.py
│   │   └── timestep_processing.py
│   ├── evaluation/
│   │   ├── eval_launcher.py
│   │   ├── eval_launcher_robomimic.py
│   │   ├── policy_wrapper.py
│   │   └── rt1_wrapper.py
│   ├── franka/
│   │   ├── __init__.py
│   │   ├── launch_gripper.sh
│   │   ├── launch_robot.sh
│   │   └── robot.py
│   ├── misc/
│   │   ├── parameters.py
│   │   ├── pointcloud_utils.py
│   │   ├── server_interface.py
│   │   ├── subprocess_utils.py
│   │   ├── time.py
│   │   ├── transformations.py
│   │   └── version_control/
│   │       ├── 1_0.json
│   │       ├── 1_1.json
│   │       └── loader.py
│   ├── plotting/
│   │   ├── __init__.py
│   │   ├── analysis_func.py
│   │   ├── misc.py
│   │   └── text.py
│   ├── postprocessing/
│   │   ├── __init__.py
│   │   ├── parse.py
│   │   ├── schema.py
│   │   ├── stages.py
│   │   └── util/
│   │       ├── __init__.py
│   │       ├── svo2depth.py
│   │       ├── svo2mp4.py
│   │       └── validate.py
│   ├── robot_env.py
│   ├── robot_ik/
│   │   ├── arm.py
│   │   ├── franka/
│   │   │   ├── fr3.xml
│   │   │   ├── mesh/
│   │   │   │   ├── finger.obj
│   │   │   │   ├── hand.obj
│   │   │   │   ├── link0.obj
│   │   │   │   ├── link1.obj
│   │   │   │   ├── link2.obj
│   │   │   │   ├── link3.obj
│   │   │   │   ├── link4.obj
│   │   │   │   ├── link5.obj
│   │   │   │   ├── link6.obj
│   │   │   │   └── link7.obj
│   │   │   └── panda.xml
│   │   └── robot_ik_solver.py
│   ├── training/
│   │   ├── model_trainer.py
│   │   └── models/
│   │       └── policy_network.py
│   ├── trajectory_utils/
│   │   ├── misc.py
│   │   ├── trajectory_reader.py
│   │   └── trajectory_writer.py
│   └── user_interface/
│       ├── __init__.py
│       ├── data_collector.py
│       ├── eval_gui.py
│       ├── gui.py
│       ├── gui_info.json
│       ├── gui_parameters.py
│       ├── misc.py
│       └── text.py
├── pyproject.toml
├── scripts/
│   ├── README.md
│   ├── convert/
│   │   ├── svo_to_depth.py
│   │   ├── svo_to_mp4.py
│   │   └── to_tfrecord.py
│   ├── evaluation/
│   │   ├── evaluate_policy.py
│   │   ├── evaluate_rt1.py
│   │   └── results.ipynb
│   ├── labeling/
│   │   ├── label_data.py
│   │   └── task_label_filepath.json
│   ├── main.py
│   ├── postprocess.py
│   ├── server/
│   │   ├── launch_server.sh
│   │   └── run_server.py
│   ├── setup/
│   │   ├── README.md
│   │   ├── intro.txt
│   │   ├── laptop_setup.sh
│   │   └── nuc_setup.sh
│   ├── tests/
│   │   ├── calibrate_cameras.py
│   │   ├── collect_trajectory.py
│   │   ├── memory_leak.py
│   │   └── replay_trajectory.py
│   ├── training/
│   │   ├── sanity_check/
│   │   │   ├── image_obs.py
│   │   │   └── state_obs.py
│   │   ├── sweepable_train_policy [wip].py
│   │   └── train_policy.py
│   └── visualizations/
│       ├── create_plots.py
│       ├── visualize_data.py
│       ├── visualize_day.py
│       └── visualize_trajectory.py
└── setup.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .docker/README.md
================================================
# Overview

To simplify the setup and deployment of DROID across different machines, we supply Dockerfiles for both the control server (NUC) and the client machine (laptop). The directory structure is broken down as follows:

    ├── nuc                                      # directory for nuc docker setup files
    ├──── Dockerfile.nuc                         # nuc image definition
    ├──── docker-compose-nuc.yaml                # nuc container deployment settings
    ├── laptop                                   # directory for laptop docker setup files
    ├──── Dockerfile.laptop                      # laptop image definition
    ├──── docker-compose-laptop.yaml             # laptop container deployment settings
    ├──── entrypoint.sh                          # script that is run on entrypoint of Docker container

We recognise that some users may not already be familiar with Docker, Dockerfile syntax, or Docker Compose configuration files. We point users towards the following resources on these topics:

* [Docker Overview](https://docs.docker.com/get-started/overview/)
* [Dockerfile Reference](https://docs.docker.com/engine/reference/builder/)
* [Docker Compose Overview](https://docs.docker.com/compose/)

# NUC Setup
To set up the control server on your NUC, run `sudo ./nuc_setup.sh` from this [directory](https://github.com/droid-dataset/droid/tree/main/scripts/setup). Running through all the steps in this script installs host system dependencies and ensures the control server runs automatically in a Docker container each time your machine boots.

Further details on these steps can be found in the README.md in the `scripts/setup` directory.

# Laptop Setup  
To set up the user client on your laptop, run `sudo ./laptop_setup.sh` from this [directory](https://github.com/droid-dataset/droid/tree/main/scripts/setup). Running through all the steps in this script installs host system dependencies and ensures the user client can be run in a Docker container.

Further details can be found in the [README.md](https://github.com/droid-dataset/droid/tree/main/scripts/setup/README.md).
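
For reference, the sketch below drives Docker Compose from Python to bring up the NUC control-server container. This is a minimal sketch, not the documented workflow: the exported values mirror the defaults baked into the Dockerfiles, while `LAPTOP_IP` and the use of `subprocess` are illustrative assumptions.

```python
import os
import subprocess

# Sketch only: bring up the control-server stack defined in
# .docker/nuc/docker-compose-nuc.yaml. Robot parameters mirror the
# defaults in Dockerfile.nuc_panda; LAPTOP_IP is an assumed placeholder.
env = dict(
    os.environ,
    ROBOT_TYPE="panda",         # or "fr3"
    LIBFRANKA_VERSION="0.9.0",  # 0.10.0 for the fr3
    ROBOT_IP="172.16.0.1",
    NUC_IP="172.16.0.2",
    LAPTOP_IP="192.168.1.50",   # illustrative; use your laptop's address
    ROOT_DIR=os.getcwd(),       # repo root, mounted into the container
)
subprocess.run(
    ["docker", "compose", "-f", ".docker/nuc/docker-compose-nuc.yaml", "up", "-d"],
    env=env,
    check=True,
)
```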



================================================
FILE: .docker/laptop/Dockerfile.laptop
================================================
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04

# build args
ARG ROBOT_TYPE=${ROBOT_TYPE}
ARG LIBFRANKA_VERSION=${LIBFRANKA_VERSION}
ARG ROBOT_IP=${ROBOT_IP}
ARG NUC_IP=${NUC_IP}
ARG NUC_ROBOT_CONFIG_DIR=/app/config/${ROBOT_TYPE}
ARG NUC_OCULUS_DIR=/app/droid/oculus_reader
ARG NUC_POLYMETIS_DIR=/app/droid/fairo/polymetis
ARG NUC_POLYMETIS_CONFIG_DIR=${NUC_POLYMETIS_DIR}/polymetis/conf

# runtime env vars
ENV ROBOT_TYPE=${ROBOT_TYPE}
ENV LIBFRANKA_VERSION=${LIBFRANKA_VERSION}
ENV ROBOT_IP=${ROBOT_IP}
ENV NUC_IP=${NUC_IP}
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES \
    ${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}compute,video,utility

# copy project code to container
COPY . /app
WORKDIR /app

# base system installations
RUN apt-get update && \
    apt-get install -y software-properties-common build-essential sudo git curl wget python3-pip libspdlog-dev \
    libeigen3-dev lsb-release ffmpeg libsm6 libxext6 zstd && \
    apt-get upgrade -y

# install miniconda 
RUN wget \
    https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh
ENV PATH /root/miniconda3/bin:$PATH

# create conda environment
RUN conda create -n "robot" python=3.7
SHELL ["conda", "run", "-n", "robot", "/bin/bash", "-c"]

# install the zed sdk
ARG UBUNTU_RELEASE_YEAR=22
ARG ZED_SDK_MAJOR=4
ARG ZED_SDK_MINOR=0
ARG CUDA_MAJOR=12
ARG CUDA_MINOR=1

RUN echo "Europe/Paris" > /etc/localtime ; echo "CUDA Version ${CUDA_MAJOR}.${CUDA_MINOR}.0" > /usr/local/cuda/version.txt

# setup the ZED SDK
RUN apt-get update -y || true ; apt-get install --no-install-recommends lsb-release wget less udev sudo zstd build-essential cmake python3 python3-pip libpng-dev libgomp1 -y && \ 
    python3 -m pip install numpy opencv-python && \
    wget -q -O ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run https://download.stereolabs.com/zedsdk/${ZED_SDK_MAJOR}.${ZED_SDK_MINOR}/cu${CUDA_MAJOR}${CUDA_MINOR%.*}/ubuntu${UBUNTU_RELEASE_YEAR} && \
    chmod +x ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run && \
    ./ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run -- silent skip_tools skip_cuda && \
    ln -sf /lib/x86_64-linux-gnu/libusb-1.0.so.0 /usr/lib/x86_64-linux-gnu/libusb-1.0.so && \
    rm ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run && \
    rm -rf /var/lib/apt/lists/*
RUN conda install -c conda-forge libstdcxx-ng requests # required for pyzed
RUN python /usr/local/zed/get_python_api.py && python -m pip install --ignore-installed /app/pyzed-4.0-cp37-cp37m-linux_x86_64.whl

# install oculus reader
RUN add-apt-repository universe && \
    apt-get update -y && \ 
    apt-get install -y android-tools-fastboot
RUN pip3 install -e $NUC_OCULUS_DIR && \
    apt install -y android-tools-adb

# python environment setup
RUN pip3 install -e . && \
    pip3 install dm-robotics-moma==0.5.0 --no-deps && \
    pip3 install dm-robotics-transformations==0.5.0 --no-deps && \
    pip3 install dm-robotics-agentflow==0.5.0 --no-deps && \
    pip3 install dm-robotics-geometry==0.5.0 --no-deps && \
    pip3 install dm-robotics-manipulation==0.5.0 --no-deps && \
    pip3 install dm-robotics-controllers==0.5.0 --no-deps


# using miniconda instead of anaconda so overwrite sh scripts
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;

# set absolute paths
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's|~|/root|g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's|~|/root|g' {} \;

# set polymetis config files
RUN cp ${NUC_ROBOT_CONFIG_DIR}/franka_hardware.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_client/franka_hardware.yaml && \
    cp ${NUC_ROBOT_CONFIG_DIR}/franka_panda.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_model/franka_panda.yaml

# start the server
RUN chmod +x /app/.docker/laptop/entrypoint.sh
ENTRYPOINT ["/app/.docker/laptop/entrypoint.sh"]


================================================
FILE: .docker/laptop/Dockerfile.laptop_fr3
================================================
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04

# set robot parameters
ENV ROBOT_TYPE=fr3
ENV LIBFRANKA_VERSION=0.10.0
ENV ROBOT_IP=172.16.0.1 
ENV NUC_IP=172.16.0.2
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES \
    ${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}compute,video,utility

# set directory structure
ARG NUC_ROBOT_CONFIG_DIR=/app/config/${ROBOT_TYPE}
ARG NUC_OCULUS_DIR=/app/droid/oculus_reader
ARG NUC_POLYMETIS_DIR=/app/droid/fairo/polymetis
ARG NUC_POLYMETIS_CONFIG_DIR=${NUC_POLYMETIS_DIR}/polymetis/conf

# copy project code to container
COPY . /app
WORKDIR /app

# base system installations
RUN apt-get update && \
    apt-get install -y software-properties-common build-essential sudo git curl wget python3-pip libspdlog-dev \
    libeigen3-dev lsb-release ffmpeg libsm6 libxext6 zstd && \
    apt-get upgrade -y

# install miniconda 
RUN wget \
    https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh
ENV PATH /root/miniconda3/bin:$PATH

# create conda environment
RUN conda create -n "robot" python=3.7
SHELL ["conda", "run", "-n", "robot", "/bin/bash", "-c"]

# install the zed sdk
ARG UBUNTU_RELEASE_YEAR=22
ARG ZED_SDK_MAJOR=4
ARG ZED_SDK_MINOR=0
ARG CUDA_MAJOR=12
ARG CUDA_MINOR=1

RUN echo "Europe/Paris" > /etc/localtime ; echo "CUDA Version ${CUDA_MAJOR}.${CUDA_MINOR}.0" > /usr/local/cuda/version.txt

# setup the ZED SDK
RUN apt-get update -y || true ; apt-get install --no-install-recommends lsb-release wget less udev sudo zstd build-essential cmake python3 python3-pip libpng-dev libgomp1 -y && \ 
    python3 -m pip install numpy opencv-python && \
    wget -q -O ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run https://download.stereolabs.com/zedsdk/${ZED_SDK_MAJOR}.${ZED_SDK_MINOR}/cu${CUDA_MAJOR}${CUDA_MINOR%.*}/ubuntu${UBUNTU_RELEASE_YEAR} && \
    chmod +x ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run && \
    ./ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run -- silent skip_tools skip_cuda && \
    ln -sf /lib/x86_64-linux-gnu/libusb-1.0.so.0 /usr/lib/x86_64-linux-gnu/libusb-1.0.so && \
    rm ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run && \
    rm -rf /var/lib/apt/lists/*
RUN conda install -c conda-forge libstdcxx-ng requests # required for pyzed
RUN python /usr/local/zed/get_python_api.py && python -m pip install --ignore-installed /app/pyzed-4.0-cp37-cp37m-linux_x86_64.whl

# install oculus reader
RUN add-apt-repository universe && \
    apt-get update -y && \ 
    apt-get install -y android-tools-fastboot
RUN pip3 install -e $NUC_OCULUS_DIR && \
    apt install -y android-tools-adb

# python environment setup
RUN pip3 install -e . && \
    pip3 install dm-robotics-moma==0.5.0 --no-deps && \
    pip3 install dm-robotics-transformations==0.5.0 --no-deps && \
    pip3 install dm-robotics-agentflow==0.5.0 --no-deps && \
    pip3 install dm-robotics-geometry==0.5.0 --no-deps && \
    pip3 install dm-robotics-manipulation==0.5.0 --no-deps && \
    pip3 install dm-robotics-controllers==0.5.0 --no-deps


# using miniconda instead of anaconda so overwrite sh scripts
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;

# set absolute paths
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's|~|/root|g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's|~|/root|g' {} \;

# set polymetis config files
RUN cp ${NUC_ROBOT_CONFIG_DIR}/franka_hardware.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_client/franka_hardware.yaml && \
    cp ${NUC_ROBOT_CONFIG_DIR}/franka_panda.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_model/franka_panda.yaml

# start the server
RUN chmod +x /app/.docker/laptop/entrypoint.sh
ENTRYPOINT ["/app/.docker/laptop/entrypoint.sh"]


================================================
FILE: .docker/laptop/Dockerfile.laptop_panda
================================================
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04

# set robot parameters
ENV ROBOT_TYPE=panda
ENV LIBFRANKA_VERSION=0.9.0
ENV ROBOT_IP=172.16.0.1 
ENV NUC_IP=172.16.0.2
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES \
    ${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}compute,video,utility

# set directory structure
ARG NUC_ROBOT_CONFIG_DIR=/app/config/${ROBOT_TYPE}
ARG NUC_OCULUS_DIR=/app/droid/oculus_reader
ARG NUC_POLYMETIS_DIR=/app/droid/fairo/polymetis
ARG NUC_POLYMETIS_CONFIG_DIR=${NUC_POLYMETIS_DIR}/polymetis/conf

# copy project code to container
COPY . /app
WORKDIR /app

# base system installations
RUN apt-get update && \
    apt-get install -y software-properties-common build-essential sudo git curl wget python3-pip libspdlog-dev \
    libeigen3-dev lsb-release ffmpeg libsm6 libxext6 zstd && \
    apt-get upgrade -y

# install miniconda 
RUN wget \
    https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh
ENV PATH /root/miniconda3/bin:$PATH

# create conda environment
RUN conda create -n "robot" python=3.7
SHELL ["conda", "run", "-n", "robot", "/bin/bash", "-c"]

# install the zed sdk
ARG UBUNTU_RELEASE_YEAR=22
ARG ZED_SDK_MAJOR=4
ARG ZED_SDK_MINOR=0
ARG CUDA_MAJOR=12
ARG CUDA_MINOR=1

RUN echo "Europe/Paris" > /etc/localtime ; echo "CUDA Version ${CUDA_MAJOR}.${CUDA_MINOR}.0" > /usr/local/cuda/version.txt

# setup the ZED SDK
RUN apt-get update -y || true ; apt-get install --no-install-recommends lsb-release wget less udev sudo zstd build-essential cmake python3 python3-pip libpng-dev libgomp1 -y && \ 
    python3 -m pip install numpy opencv-python && \
    wget -q -O ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run https://download.stereolabs.com/zedsdk/${ZED_SDK_MAJOR}.${ZED_SDK_MINOR}/cu${CUDA_MAJOR}${CUDA_MINOR%.*}/ubuntu${UBUNTU_RELEASE_YEAR} && \
    chmod +x ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run && \
    ./ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run -- silent skip_tools skip_cuda && \
    ln -sf /lib/x86_64-linux-gnu/libusb-1.0.so.0 /usr/lib/x86_64-linux-gnu/libusb-1.0.so && \
    rm ZED_SDK_Linux_Ubuntu${UBUNTU_RELEASE_YEAR}.run && \
    rm -rf /var/lib/apt/lists/*

RUN conda install -c conda-forge libstdcxx-ng requests # required for pyzed
RUN python /usr/local/zed/get_python_api.py && python -m pip install --ignore-installed /app/pyzed-4.0-cp37-cp37m-linux_x86_64.whl

# install oculus reader
RUN add-apt-repository universe && \
    apt-get update -y && \ 
    apt-get install -y android-tools-fastboot
RUN pip3 install -e $NUC_OCULUS_DIR && \
    apt install -y android-tools-adb && \
    rm -rf /var/lib/apt/lists/*

# python environment setup
RUN pip3 install -e . && \
    pip3 install dm-robotics-moma==0.5.0 --no-deps && \
    pip3 install dm-robotics-transformations==0.5.0 --no-deps && \
    pip3 install dm-robotics-agentflow==0.5.0 --no-deps && \
    pip3 install dm-robotics-geometry==0.5.0 --no-deps && \
    pip3 install dm-robotics-manipulation==0.5.0 --no-deps && \
    pip3 install dm-robotics-controllers==0.5.0 --no-deps


# using miniconda instead of anaconda so overwrite sh scripts
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;

# set absolute paths
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's|~|/root|g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's|~|/root|g' {} \;

# set polymetis config files
RUN cp ${NUC_ROBOT_CONFIG_DIR}/franka_hardware.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_client/franka_hardware.yaml && \
    cp ${NUC_ROBOT_CONFIG_DIR}/franka_panda.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_model/franka_panda.yaml

# start the server
RUN chmod +x /app/.docker/laptop/entrypoint.sh
ENTRYPOINT ["/app/.docker/laptop/entrypoint.sh"]


================================================
FILE: .docker/laptop/docker-compose-laptop-data-upload.yaml
================================================
version: "3"

services: 
  laptop_setup:
    image: ghcr.io/droid-dataset/droid_laptop:${ROBOT_TYPE}
    environment:
      ROOT_DIR: ${ROOT_DIR}
      DISPLAY: ${DISPLAY}
      XAUTHORITY: ${DOCKER_XAUTH}
      ROBOT_TYPE: ${ROBOT_TYPE}
      LIBFRANKA_VERSION: ${LIBFRANKA_VERSION}
      NUC_IP: ${NUC_IP}
      ROBOT_IP: ${ROBOT_IP}
      LAPTOP_IP: ${LAPTOP_IP}
      NVIDIA_VISIBLE_DEVICES: all
      ANDROID_ADB_SERVER_ADDRESS: host.docker.internal
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
      - ${DOCKER_XAUTH}:${DOCKER_XAUTH}
      - ${ROOT_DIR}/droid/misc/parameters.py:/app/droid/misc/parameters.py
      - ${ROOT_DIR}/droid/calibration/calibration_info.json:/app/droid/calibration/calibration_info.json
      - ${ROOT_DIR}/data:/app/data
      - ${ROOT_DIR}/cache:/app/cache
      - ${ROOT_DIR}/droid-credentials.json:/app/droid-credentials.json
      - ${ROOT_DIR}/scripts/postprocess.py:/app/scripts/postprocess.py
    build: 
      context: ../../
      dockerfile: .docker/laptop/Dockerfile.laptop
    devices:
      - "/dev:/dev"
    runtime: nvidia
    privileged: true
    network_mode: "host"
    command: python /app/scripts/postprocess.py


================================================
FILE: .docker/laptop/docker-compose-laptop.yaml
================================================
version: "3"

services: 
  laptop_setup:
    image: ghcr.io/droid-dataset/droid_laptop:${ROBOT_TYPE}
    environment:
      ROOT_DIR: ${ROOT_DIR}
      DISPLAY: ${DISPLAY}
      XAUTHORITY: ${DOCKER_XAUTH}
      ROBOT_TYPE: ${ROBOT_TYPE}
      LIBFRANKA_VERSION: ${LIBFRANKA_VERSION}
      NUC_IP: ${NUC_IP}
      ROBOT_IP: ${ROBOT_IP}
      LAPTOP_IP: ${LAPTOP_IP}
      NVIDIA_VISIBLE_DEVICES: all
      ANDROID_ADB_SERVER_ADDRESS: host.docker.internal
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
      - ${DOCKER_XAUTH}:${DOCKER_XAUTH}
      - ${ROOT_DIR}/droid/misc/parameters.py:/app/droid/misc/parameters.py
      - ${ROOT_DIR}/droid/calibration/calibration_info.json:/app/droid/calibration/calibration_info.json
      - ${ROOT_DIR}/data:/app/data
      - ${ROOT_DIR}/cache:/app/cache
    build: 
      context: ../../
      dockerfile: .docker/laptop/Dockerfile.laptop
      args:
        ROOT_DIR: ${ROOT_DIR}
        ROBOT_TYPE: ${ROBOT_TYPE}
        NUC_IP: ${NUC_IP}
        ROBOT_IP: ${ROBOT_IP}
    devices:
      - "/dev:/dev"
    runtime: nvidia
    privileged: true
    network_mode: "host"
    command: python /app/scripts/main.py


================================================
FILE: .docker/laptop/entrypoint.sh
================================================
#!/bin/bash

# activate conda
source ~/miniconda3/bin/activate
conda activate robot

# run user command
exec "$@"


================================================
FILE: .docker/nuc/Dockerfile.nuc
================================================
# pull ubuntu base image
FROM ubuntu:bionic

# build args
ARG ROBOT_TYPE=${ROBOT_TYPE}
ARG LIBFRANKA_VERSION=${LIBFRANKA_VERSION}
ARG ROBOT_IP=${ROBOT_IP} 
ARG NUC_IP=${NUC_IP}
ARG NUC_POLYMETIS_DIR=/app/droid/fairo/polymetis
ARG NUC_ROBOT_CONFIG_DIR=/app/config/${ROBOT_TYPE}
ARG NUC_POLYMETIS_CONFIG_DIR=${NUC_POLYMETIS_DIR}/polymetis/conf

# runtime env vars
ENV ROBOT_TYPE=${ROBOT_TYPE}
ENV LIBFRANKA_VERSION=${LIBFRANKA_VERSION}
ENV ROBOT_IP=${ROBOT_IP} 
ENV NUC_IP=${NUC_IP}

# base system installations
RUN apt-get update && \
    apt-get install -y software-properties-common build-essential sudo git curl wget python3-pip libspdlog-dev \
    libeigen3-dev lsb-release ffmpeg libsm6 libxext6 && \
    apt-get upgrade -y 

# install latest cmake version
RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null && \
    apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main' && \  
    apt-get update && \
    apt-get install -y cmake

# copy project code to container
COPY . /app
WORKDIR /app

# install miniconda 
RUN wget \
    https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh
ENV PATH /root/miniconda3/bin:$PATH

# setup polymetis conda env
WORKDIR ${NUC_POLYMETIS_DIR}
RUN conda env create -f ./polymetis/environment.yml && \
    conda run -n polymetis-local pip install -e ./polymetis
SHELL ["conda", "run", "-n", "polymetis-local", "/bin/bash", "-c"]

# build libfranka and polymetis
RUN ./scripts/build_libfranka.sh ${LIBFRANKA_VERSION} && \
    mkdir -p ./polymetis/build && \
    cd ./polymetis/build && \
    cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_FRANKA=ON -DBUILD_TESTS=ON -DBUILD_DOCS=ON && \
    make -j

# set polymetis config files
WORKDIR /app
RUN cp ${NUC_ROBOT_CONFIG_DIR}/franka_hardware.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_client/franka_hardware.yaml && \
    cp ${NUC_ROBOT_CONFIG_DIR}/franka_panda.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_model/franka_panda.yaml

# python environment setup 
RUN pip3 install -e . && \
    pip3 install dm-robotics-moma==0.5.0 --no-deps && \
    pip3 install dm-robotics-transformations==0.5.0 --no-deps && \
    pip3 install dm-robotics-agentflow==0.5.0 --no-deps && \
    pip3 install dm-robotics-geometry==0.5.0 --no-deps && \
    pip3 install dm-robotics-manipulation==0.5.0 --no-deps && \
    pip3 install dm-robotics-controllers==0.5.0 --no-deps

# using miniconda instead of anaconda so overwrite sh scripts
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;

# set absolute paths
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's|~|/root|g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's|~|/root|g' {} \;

# start the server
RUN chmod +x /app/scripts/server/launch_server.sh
ENTRYPOINT ["/app/scripts/server/launch_server.sh"]



================================================
FILE: .docker/nuc/Dockerfile.nuc_fr3
================================================
# pull ubuntu base image
FROM ubuntu:bionic

# set robot parameters
ENV ROBOT_TYPE=fr3
ENV LIBFRANKA_VERSION=0.10.0
ENV ROBOT_IP=172.16.0.1 
ENV NUC_IP=172.16.0.2

# set directory structure
ARG NUC_POLYMETIS_DIR=/app/droid/fairo/polymetis
ARG NUC_ROBOT_CONFIG_DIR=/app/config/${ROBOT_TYPE}
ARG NUC_POLYMETIS_CONFIG_DIR=${NUC_POLYMETIS_DIR}/polymetis/conf

# base system installations
RUN apt-get update && \
    apt-get install -y software-properties-common build-essential sudo git curl wget python3-pip libspdlog-dev \
    libeigen3-dev lsb-release ffmpeg libsm6 libxext6 && \
    apt-get upgrade -y

# install latest cmake version
RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null && \
    apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main' && \  
    apt-get update && \
    apt-get install -y cmake

# copy project code to container
COPY . /app
WORKDIR /app

# install miniconda 
RUN wget \
    https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh
ENV PATH /root/miniconda3/bin:$PATH

# setup polymetis conda env
WORKDIR ${NUC_POLYMETIS_DIR}
RUN conda env create -f ./polymetis/environment.yml && \
    conda run -n polymetis-local pip install -e ./polymetis
SHELL ["conda", "run", "-n", "polymetis-local", "/bin/bash", "-c"]

# build libfranka and polymetis
RUN ./scripts/build_libfranka.sh ${LIBFRANKA_VERSION} && \
    mkdir -p ./polymetis/build && \
    cd ./polymetis/build && \
    cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_FRANKA=ON -DBUILD_TESTS=ON -DBUILD_DOCS=ON && \
    make -j

# set polymetis config files
WORKDIR /app
RUN cp ${NUC_ROBOT_CONFIG_DIR}/franka_hardware.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_client/franka_hardware.yaml && \
    cp ${NUC_ROBOT_CONFIG_DIR}/franka_panda.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_model/franka_panda.yaml

# python environment setup 
RUN pip3 install -e . && \
    pip3 install dm-robotics-moma==0.5.0 --no-deps && \
    pip3 install dm-robotics-transformations==0.5.0 --no-deps && \
    pip3 install dm-robotics-agentflow==0.5.0 --no-deps && \
    pip3 install dm-robotics-geometry==0.5.0 --no-deps && \
    pip3 install dm-robotics-manipulation==0.5.0 --no-deps && \
    pip3 install dm-robotics-controllers==0.5.0 --no-deps

# using miniconda instead of anaconda so overwrite sh scripts
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;

# set absolute paths
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's|~|/root|g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's|~|/root|g' {} \;

# start the server
RUN chmod +x /app/scripts/server/launch_server.sh
ENTRYPOINT ["/app/scripts/server/launch_server.sh"]



================================================
FILE: .docker/nuc/Dockerfile.nuc_panda
================================================
# pull ubuntu base image
FROM ubuntu:bionic

# set robot parameters
ENV ROBOT_TYPE=panda
ENV LIBFRANKA_VERSION=0.9.0
ENV ROBOT_IP=172.16.0.1 
ENV NUC_IP=172.16.0.2

# set directory structure
ARG NUC_POLYMETIS_DIR=/app/droid/fairo/polymetis
ARG NUC_ROBOT_CONFIG_DIR=/app/config/${ROBOT_TYPE}
ARG NUC_POLYMETIS_CONFIG_DIR=${NUC_POLYMETIS_DIR}/polymetis/conf

# base system installations
RUN apt-get update && \
    apt-get install -y software-properties-common build-essential sudo git curl wget python3-pip libspdlog-dev \
    libeigen3-dev lsb-release ffmpeg libsm6 libxext6 && \
    apt-get upgrade -y

# install latest cmake version
RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null && \
    apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main' && \  
    apt-get update && \
    apt-get install -y cmake

# copy project code to container
COPY . /app
WORKDIR /app

# install miniconda 
RUN wget \
    https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh
ENV PATH /root/miniconda3/bin:$PATH

# setup polymetis conda env
WORKDIR ${NUC_POLYMETIS_DIR}
RUN conda env create -f ./polymetis/environment.yml && \
    conda run -n polymetis-local pip install -e ./polymetis
SHELL ["conda", "run", "-n", "polymetis-local", "/bin/bash", "-c"]

# build libfranka and polymetis
RUN ./scripts/build_libfranka.sh ${LIBFRANKA_VERSION} && \
    mkdir -p ./polymetis/build && \
    cd ./polymetis/build && \
    cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_FRANKA=ON -DBUILD_TESTS=ON -DBUILD_DOCS=ON && \
    make -j

# set polymetis config files
WORKDIR /app
RUN cp ${NUC_ROBOT_CONFIG_DIR}/franka_hardware.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_client/franka_hardware.yaml && \
    cp ${NUC_ROBOT_CONFIG_DIR}/franka_panda.yaml ${NUC_POLYMETIS_CONFIG_DIR}/robot_model/franka_panda.yaml

# python environment setup 
RUN pip3 install -e . && \
    pip3 install dm-robotics-moma==0.5.0 --no-deps && \
    pip3 install dm-robotics-transformations==0.5.0 --no-deps && \
    pip3 install dm-robotics-agentflow==0.5.0 --no-deps && \
    pip3 install dm-robotics-geometry==0.5.0 --no-deps && \
    pip3 install dm-robotics-manipulation==0.5.0 --no-deps && \
    pip3 install dm-robotics-controllers==0.5.0 --no-deps

# using miniconda instead of anaconda so overwrite sh scripts
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's/anaconda/miniconda/g' {} \;

# set absolute paths
RUN find /app/droid/franka -type f -name "launch_*.sh" -exec sed -i 's|~|/root|g' {} \;
RUN find /app/scripts/server -type f -name "launch_server.sh" -exec sed -i 's|~|/root|g' {} \;

# start the server
RUN chmod +x /app/scripts/server/launch_server.sh
ENTRYPOINT ["/app/scripts/server/launch_server.sh"]



================================================
FILE: .docker/nuc/docker-compose-nuc.yaml
================================================
version: "3"

services: 
  setup_nuc:
    image: ghcr.io/droid-dataset/droid_nuc:${ROBOT_TYPE}
    environment:
      ROBOT_TYPE: ${ROBOT_TYPE}
      LIBFRANKA_VERSION: ${LIBFRANKA_VERSION}
      NUC_IP: ${NUC_IP}
      ROBOT_IP: ${ROBOT_IP}
      LAPTOP_IP: ${LAPTOP_IP}
    volumes:
      - ${ROOT_DIR}/droid/misc/parameters.py:/app/droid/misc/parameters.py
      - ${ROOT_DIR}/config/${ROBOT_TYPE}/franka_hardware.yaml:/app/droid/fairo/polymetis/polymetis/conf/robot_client/franka_hardware.yaml
      - ${ROOT_DIR}/config/${ROBOT_TYPE}/franka_panda.yaml:/app/droid/fairo/polymetis/polymetis/conf/robot_model/franka_panda.yaml
    build: 
      context: ../../
      dockerfile: .docker/nuc/Dockerfile.nuc
      args:
        ROBOT_TYPE: ${ROBOT_TYPE}
        LIBFRANKA_VERSION: ${LIBFRANKA_VERSION}
        NUC_IP: ${NUC_IP}
        ROBOT_IP: ${ROBOT_IP}
        LAPTOP_IP: ${LAPTOP_IP}
    devices:
      - "/dev:/dev"
    privileged: true
    network_mode: "host"
    cap_add:
      - SYS_NICE
    ulimits:
      rtprio: 99
      memlock: 102400
    deploy:
      restart_policy: 
        condition: any



================================================
FILE: .github/workflows/build_container_laptop_fr3.yaml
================================================
name: build laptop fr3 container and host on ghcr.io

on:
  push:
    branches: ['main']
    paths:
      - .gitmodules
      - droid/**
      - .docker/laptop/*
      - .github/workflows/build_container_laptop_fr3.yaml
  workflow_dispatch:

env:
  REGISTRY: ghcr.io
  ORG_NAME: droid-dataset
  IMAGE_NAME: droid_laptop

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:

      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          submodules: recursive
      
      - name: Free Space
        run: |
          sudo rm -rf /opt/ghc
          sudo rm -rf /opt/hostedtoolcache
          sudo rm -rf /usr/share/dotnet
          sudo rm -rf "$AGENT_TOOLSDIRECTORY"
          sudo rm -rf ${GITHUB_WORKSPACE}/.git

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          context: .
          file: .docker/laptop/Dockerfile.laptop_fr3
          push: true
          no-cache: true
          tags: ${{ env.REGISTRY }}/${{ env.ORG_NAME }}/${{ env.IMAGE_NAME }}:fr3


================================================
FILE: .github/workflows/build_container_laptop_panda.yaml
================================================
name: build laptop panda container and host on ghcr.io

on:
  push:
    branches: ['main']
    paths:
      - .gitmodules
      - droid/**
      - .docker/laptop/*
      - .github/workflows/build_container_laptop_panda.yaml
  workflow_dispatch:

env:
  REGISTRY: ghcr.io
  ORG_NAME: droid-dataset
  IMAGE_NAME: droid_laptop

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:

      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          submodules: recursive

      - name: Free Space
        run: |
          sudo rm -rf /opt/ghc
          sudo rm -rf /opt/hostedtoolcache
          sudo rm -rf /usr/share/dotnet
          sudo rm -rf "$AGENT_TOOLSDIRECTORY"
          sudo rm -rf ${GITHUB_WORKSPACE}/.git

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          context: .
          file: .docker/laptop/Dockerfile.laptop_panda
          push: true
          no-cache: true
          tags: ${{ env.REGISTRY }}/${{ env.ORG_NAME }}/${{ env.IMAGE_NAME }}:panda


================================================
FILE: .github/workflows/build_container_nuc_fr3.yaml
================================================
name: build nuc fr3 container and host on ghcr.io

on:
  push:
    branches: ['main']
    paths:
      - .gitmodules
      - droid/**
      - .docker/nuc/*
      - .github/workflows/build_container_nuc_fr3.yaml
  workflow_dispatch:

env:
  REGISTRY: ghcr.io
  ORG_NAME: droid-dataset
  IMAGE_NAME: droid_nuc

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          submodules: recursive

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          context: .
          file: .docker/nuc/Dockerfile.nuc_fr3
          push: true
          no-cache: true
          tags: ${{ env.REGISTRY }}/${{ env.ORG_NAME }}/${{ env.IMAGE_NAME }}:fr3


================================================
FILE: .github/workflows/build_container_nuc_panda.yaml
================================================
name: build nuc panda container and host on ghcr.io

on:
  push:
    branches: ['main']
    paths:
      - .gitmodules
      - droid/**
      - .docker/nuc/*
      - .github/workflows/build_container_nuc_panda.yaml
  workflow_dispatch:

env:
  REGISTRY: ghcr.io
  ORG_NAME: droid-dataset
  IMAGE_NAME: droid_nuc

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          submodules: recursive

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          context: .
          file: .docker/nuc/Dockerfile.nuc_panda
          push: true
          no-cache: true
          tags: ${{ env.REGISTRY }}/${{ env.ORG_NAME }}/${{ env.IMAGE_NAME }}:panda


================================================
FILE: .github/workflows/pages.yaml
================================================
name: Deploy Documentation

on:
  push:
    branches: 
      - "main"
    paths:
      - "docs/**"
  workflow_dispatch:

permissions:
  contents: read
  pages: write
  id-token: write

# Allow one concurrent deployment
concurrency:
  group: "pages"
  cancel-in-progress: true

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: docs
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.1' # Not needed with a .ruby-version file
          bundler-cache: true # runs 'bundle install' and caches installed gems automatically
          cache-version: 0 # Increment this number if you need to re-download cached gems
          working-directory: '${{ github.workspace }}/docs'
      - name: Setup Pages
        id: pages
        uses: actions/configure-pages@v4
      - name: Build with Jekyll
        run: bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
        env:
          JEKYLL_ENV: production
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: "docs/_site/"

  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4


================================================
FILE: .github/workflows/pre-commit.yaml
================================================
name: pre-commit

on:
  workflow_dispatch:
  schedule:
    - cron: "0 9 * * 1"
  pull_request:
  push:
    branches: [main]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
      with:
        fetch-depth: 0
    - uses: actions/setup-python@v2
    - uses: pre-commit/action@v2.0.3
      with:
        token: ${{ secrets.GITHUB_TOKEN }}
        extra_args: --files


================================================
FILE: .gitignore
================================================
*.DS_Store


================================================
FILE: .gitmodules
================================================
[submodule "droid/oculus_reader"]
	path = droid/oculus_reader
	url = git@github.com:rail-berkeley/oculus_reader.git
[submodule "droid/fairo"]
	path = droid/fairo
	url = https://github.com/facebookresearch/fairo.git


================================================
FILE: .pre-commit-config.yaml
================================================
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
exclude: ".git"
default_stages:
  - commit

repos:
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.0.252
    hooks:
      - id: ruff
        args: [ --fix, --exit-non-zero-on-fix ]

  - repo: https://github.com/timothycrosley/isort
    rev: 5.12.0
    hooks:
      - id: isort

  - repo: https://github.com/psf/black
    rev: 23.1.0
    hooks:
      - id: black

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-added-large-files
        args: ["--maxkb=40000"]
      - id: check-ast
      - id: check-case-conflict
      - id: check-merge-conflict
      - id: check-toml
      - id: check-yaml
      - id: end-of-file-fixer
      - id: trailing-whitespace
      - id: debug-statements


================================================
FILE: Makefile
================================================
.PHONY: help check autoformat
.DEFAULT: help

# Generates a useful overview/help message for various make features - add to this as necessary!
help:
	@echo "make check"
	@echo "    Run code style and linting (black, ruff) *without* changing files!"
	@echo "make autoformat"
	@echo "    Run code styling (black, ruff) and update in place - committing with pre-commit also does this."

check:
	black --check .
	ruff check --show-source .

autoformat:
	black .
	ruff check --fix --show-fixes .


================================================
FILE: README.md
================================================
# The DROID Robot Platform

This repository contains the code for setting up your DROID robot platform and using it to collect teleoperated demonstration data. This platform was used to collect the [DROID dataset](https://droid-dataset.github.io), a large, in-the-wild dataset of robot manipulations.

If you are interested in using the DROID dataset for training robot policies, please check out our [policy learning repo](https://github.com/droid-dataset/droid_policy_learning).
For more information about DROID, please see the following links: 

[**[Homepage]**](https://droid-dataset.github.io)   [**[Documentation]**](https://droid-dataset.github.io/droid)   [**[Paper]**](https://arxiv.org/abs/2403.12945)   [**[Dataset Visualizer]**](https://droid-dataset.github.io/dataset.html).

![](https://droid-dataset.github.io/droid/assets/index/droid_teaser.jpg)

---------
## Setup Guide

We assembled a step-by-step guide for setting up the DROID robot platform in our [developer documentation](https://droid-dataset.github.io/droid).
This guide has been used to set up 18 DROID robot platforms over the course of the DROID dataset collection. Please refer to the steps in this guide for setting up your own robot. Specifically, you can follow these key steps:

1. [Hardware Assembly and Setup](https://droid-dataset.github.io/droid/docs/hardware-setup)
2. [Software Installation and Setup](https://droid-dataset.github.io/droid/docs/software-setup)
3. [Example Workflows to collect data or calibrate cameras](https://droid-dataset.github.io/droid/docs/example-workflows)

If you encounter issues during setup, please raise them as issues in this GitHub repo.


================================================
FILE: config/fr3/franka_hardware.yaml
================================================
hz: 1000
use_real_time: true
exec: franka_panda_client

robot_client:
  _target_: polymetis.robot_client.executable_robot_client.ExecutableRobotClient
  use_real_time: ${use_real_time}
  metadata_cfg:
    _target_: polymetis.robot_client.metadata.RobotClientMetadata
    default_Kq: [40, 30, 50, 25, 35, 25, 10]
    default_Kqd: [4, 6, 5, 5, 3, 2, 1]
    default_Kx: [400, 400, 400, 15, 15, 15]
    default_Kxd: [37, 37, 37, 2, 2, 2]
    hz: ${hz}
    robot_model_cfg: ${robot_model}
  executable_cfg:
    robot_ip: "172.16.0.1"
    control_ip: ${ip}
    control_port: ${port}
    readonly: false
    mock: false
    use_real_time: ${use_real_time}
    hz: ${hz}
    num_dofs: ${robot_model.num_dofs}
    exec: ${exec}
    robot_client_metadata_path: ???

    limit_rate: true
    lpf_cutoff_frequency: 100

    limits: 
      # bounding box of the workspace
      cartesian_pos_upper:
        - 1.0
        - 1.0
        - 1.0
      cartesian_pos_lower:
        - -1.0
        - -1.0
        - -1.0

      # the remaining limits are set to the original franka limits minus a margin
      joint_pos_upper: #margin: 0.1 rad
        - 2.65
        - 1.68
        - 2.80
        - -0.16
        - 2.70
        - 4.40
        - 2.90
      joint_pos_lower: #margin: 0.1 rad
        - -2.65
        - -1.68
        - -2.80
        - -2.95
        - -2.70
        - 0.45
        - -2.90
      joint_vel: #margin: 0.1 rad/s
        - 2.075
        - 2.075
        - 2.075
        - 2.075
        - 2.51
        - 2.51
        - 2.51
      elbow_vel: 2.075      #margin: 0.1 rad/s
      joint_torques:  #margin: 1N for first 4 joints, 0.5N for last 3 joints
        - 86.0
        - 86.0
        - 86.0
        - 86.0
        - 11.5
        - 11.5
        - 11.5

    collision_behavior:
      lower_torque: [40.0, 40.0, 40.0, 40.0, 40.0, 40.0, 40.0]
      upper_torque: [40.0, 40.0, 40.0, 40.0, 40.0, 40.0, 40.0]
      lower_force: [40.0, 40.0, 40.0, 40.0, 40.0, 40.0]
      upper_force: [40.0, 40.0, 40.0, 40.0, 40.0, 40.0]

    safety_controller:
      is_active: true
      margins: # margin from hard safety limits at which safety controllers start to kick in
        cartesian_pos: 0.05
        joint_pos: 0.2
        joint_vel: 0.5
      stiffness:
        cartesian_pos: 200.0
        joint_pos: 50.0
        joint_vel: 20.0
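
As a rough illustration of how the `limits` and `safety_controller` blocks above compose, here is a minimal, hypothetical pre-dispatch check. The file path and helper function are assumptions for the sketch (the Hydra-style interpolations such as `${hz}` are simply left unresolved by `yaml.safe_load`); actual enforcement happens inside the Polymetis robot client.

```python
import yaml

# Sketch: reject joint targets that come within the safety-controller margin
# of the soft joint-position limits declared in franka_hardware.yaml.
with open("config/fr3/franka_hardware.yaml") as f:
    cfg = yaml.safe_load(f)["robot_client"]["executable_cfg"]

limits = cfg["limits"]
margin = cfg["safety_controller"]["margins"]["joint_pos"]  # 0.2 rad

def joints_ok(q):
    # True if every joint stays clear of the soft limits by at least `margin`.
    return all(
        lo + margin <= qi <= hi - margin
        for qi, lo, hi in zip(q, limits["joint_pos_lower"], limits["joint_pos_upper"])
    )

print(joints_ok([0.0, -0.5, 0.0, -1.8, 0.0, 1.8, 0.0]))  # True for these fr3 bounds
```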


================================================
FILE: config/fr3/franka_panda.yaml
================================================
# @package _group_
robot_description_path: "franka_panda/panda_arm.urdf"
controlled_joints:  [0, 1, 2, 3, 4, 5, 6]
num_dofs: 7
ee_link_idx: 7
ee_link_name: panda_link8
rest_pose: [-0.13935425877571106, -0.020481698215007782, -0.05201413854956627, -2.0691256523132324, 0.05058913677930832, 2.0028650760650635, -0.9167874455451965]
joint_limits_low: [-2.7437, -1.7837, -2.9007, -3.0421, -2.8065, 0.5445, -3.0159]
joint_limits_high: [2.7437, 1.7837, 2.9007, -0.1518, 2.8065, 4.5169, 3.0159]
joint_damping: [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
torque_limits: [87., 87., 87., 87., 12., 12., 12.]


================================================
FILE: config/panda/franka_hardware.yaml
================================================
hz: 1000
use_real_time: true
exec: franka_panda_client

robot_client:
  _target_: polymetis.robot_client.executable_robot_client.ExecutableRobotClient
  use_real_time: ${use_real_time}
  metadata_cfg:
    _target_: polymetis.robot_client.metadata.RobotClientMetadata
    default_Kq: [40, 30, 50, 25, 35, 25, 10]
    default_Kqd: [4, 6, 5, 5, 3, 2, 1]
    default_Kx: [400, 400, 400, 15, 15, 15]
    default_Kxd: [37, 37, 37, 2, 2, 2]
    hz: ${hz}
    robot_model_cfg: ${robot_model}
  executable_cfg:
    robot_ip: "172.16.0.2"
    control_ip: ${ip}
    control_port: ${port}
    readonly: false
    mock: false
    use_real_time: ${use_real_time}
    hz: ${hz}
    num_dofs: ${robot_model.num_dofs}
    exec: ${exec}
    robot_client_metadata_path: ???

    limit_rate: true
    lpf_cutoff_frequency: 100

    limits: 
      # bounding box of the workspace
      cartesian_pos_upper:
        - 1.0
        - 1.0
        - 1.0
      cartesian_pos_lower:
        - -1.0
        - -1.0
        - -1.0

      # the remaining limits are set to the original franka limits minus a margin
      joint_pos_upper: #margin: 0.1 rad
        - 2.80
        - 1.66
        - 2.80
        - -0.17
        - 2.80
        - 3.65
        - 2.80
      joint_pos_lower: #margin: 0.1 rad
        - -2.80
        - -1.66
        - -2.80
        - -2.97
        - -2.80
        - 0.08
        - -2.80
      joint_vel: #margin: 0.1 rad/s
        - 2.075
        - 2.075
        - 2.075
        - 2.075
        - 2.51
        - 2.51
        - 2.51
      elbow_vel: 2.075      #margin: 0.1 rad/s
      joint_torques:  #margin: 1N for first 4 joints, 0.5N for last 3 joints
        - 86.0
        - 86.0
        - 86.0
        - 86.0
        - 11.5
        - 11.5
        - 11.5

    collision_behavior:
      lower_torque: [40.0, 40.0, 40.0, 40.0, 40.0, 40.0, 40.0]
      upper_torque: [40.0, 40.0, 40.0, 40.0, 40.0, 40.0, 40.0]
      lower_force: [40.0, 40.0, 40.0, 40.0, 40.0, 40.0]
      upper_force: [40.0, 40.0, 40.0, 40.0, 40.0, 40.0]

    safety_controller:
      is_active: true
      margins: # margin from hard safety limits at which safety controllers start to kick in
        cartesian_pos: 0.05
        joint_pos: 0.2
        joint_vel: 0.5
      stiffness:
        cartesian_pos: 200.0
        joint_pos: 50.0
        joint_vel: 20.0


================================================
FILE: config/panda/franka_panda.yaml
================================================
# @package _group_
robot_description_path: "franka_panda/panda_arm.urdf"
controlled_joints:  [0, 1, 2, 3, 4, 5, 6]
num_dofs: 7
ee_link_idx: 7
ee_link_name: panda_link8
rest_pose: [-0.13935425877571106, -0.020481698215007782, -0.05201413854956627, -2.0691256523132324, 0.05058913677930832, 2.0028650760650635, -0.9167874455451965]
joint_limits_low: [-2.8973, -1.7628, -2.8973, -3.0718, -2.8973, -0.0175, -2.8973]
joint_limits_high: [2.8973, 1.7628, 2.8973, -0.0698, 2.8973, 3.7525, 2.8973]
joint_damping: [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
torque_limits: [87., 87., 87., 87., 12., 12., 12.]
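
As a small, hedged illustration of how the joint-limit fields above might be consumed, the sketch below clamps a candidate joint command into the declared bounds (assuming PyYAML and NumPy; the perturbed target is invented for the example):

```python
import numpy as np
import yaml

# Sketch: clamp a candidate joint command to the limits in franka_panda.yaml.
with open("config/panda/franka_panda.yaml") as f:
    model = yaml.safe_load(f)

lo = np.array(model["joint_limits_low"])
hi = np.array(model["joint_limits_high"])

q_target = np.array(model["rest_pose"]) + 0.3  # candidate command near the rest pose
q_safe = np.clip(q_target, lo, hi)             # element-wise clamp to the limits
print(q_safe)
```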


================================================
FILE: docs/Gemfile
================================================
source 'https://rubygems.org'

gem "jekyll", "~> 4.3.3" # installed by `gem jekyll`
# gem "webrick"        # required when using Ruby >= 3 and Jekyll <= 4.2.2

gem "just-the-docs", "0.7.0" # pinned to the current release
# gem "just-the-docs"        # always download the latest release


================================================
FILE: docs/_config.yml
================================================
title: DROID Docs
description: Developer documentation for recreating the DROID platform setup.
theme: just-the-docs

url: https://droid-dataset.github.io/droid # this url not being used right now
color_scheme: droid # TODO: update name to DROID

aux_links:
  Official GitHub Repository: https://github.com/droid-dataset/droid


================================================
FILE: docs/_sass/color_schemes/droid.scss
================================================
@import "./color_schemes/light";

// add custom colours here
$cardinal-red: #8C1515; // can be updated; assumes the primary author's university color scheme

$link-color: $cardinal-red;
$btn-primary-color: $cardinal-red;


================================================
FILE: docs/contribution-guidelines.md
================================================
---
layout: default
title: Contribution Guidelines
nav_order: 6
---

# Contribution Guidelines

Community contributions to broader robotics data collection efforts are welcome. The platform developed in the DROID project can be used to curate datasets for community efforts such as [Open X-Embodiment](https://robotics-transformer-x.github.io/).

To contribute to the Open X-Embodiment project, consider enrolling your dataset using the following [form](https://docs.google.com/forms/d/e/1FAIpQLSeYinS_Y5Bf1ufTnlROULVquD4gw6xY_wUBssfVYkHNaPp4LQ/viewform) and/or reaching out to the project via email at <open-x-embodiment@googlegroups.com>.


================================================
FILE: docs/example-workflows/calibrating-cameras.md
================================================
---
layout: default
title: Calibrating Cameras
parent: Example Workflows
nav_order: 2
---

# Prerequisites

This guide assumes you have already set up the DROID application software to run on your host machine or through Docker. Proceed with this guide having launched the control server on the NUC and the GUI application on the laptop.

# Calibrating Cameras

The GUI will let you know if any of your cameras have yet to be calibrated, or if any of your cameras haven’t been calibrated in a while (this is to make sure you don’t accidentally move them and forget). Since the hand camera is relatively fixed, it is okay to calibrate it significantly less frequently than the other cameras, but it is still good practice to calibrate it every now and then.

Useful Information:
* If calibration is successful, you will be brought back to the calibration hub. If it is unsuccessful, the GUI will inform you as such.
* During calibration, you will see a visualization of the pose estimation of the charuco board (see the sketch after this list). Stable, green boxes along with stable axis lines are good! When these are not present, a calibration is likely to fail. If things start to significantly jumble during the automated calibration procedure, feel free to press B and try again.
* Because the board is heavy, the robot may move differently when the board is attached to the gripper.
* **WARNING:** If an A button click is taking a really long time, the camera might have failed. You can confirm this by looking in the terminal for a line that resembles “can’t claim interface…”. The solution is simply to close the GUI and load it up again. Your previous calibration info will not be lost.
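
For intuition, here is a minimal sketch of the kind of charuco pose estimation being visualized, written against the classic `opencv-contrib-python` aruco API. The board geometry, camera intrinsics, and image path are illustrative assumptions, not DROID’s actual calibration parameters.

```python
import cv2
import numpy as np

# Illustrative charuco pose estimation; a stable rvec/tvec across frames is
# what the "stable boxes and axis lines" in the GUI correspond to.
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.CharucoBoard_create(5, 7, 0.04, 0.03, aruco_dict)  # assumed layout (m)
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
dist = np.zeros(5)

img = cv2.imread("frame.png")  # assumed image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
if ids is not None:
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
    if n >= 4:  # need several charuco corners for a stable pose
        ok, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(
            ch_corners, ch_ids, board, K, dist, None, None
        )
        if ok:
            cv2.drawFrameAxes(img, K, dist, rvec, tvec, 0.1)  # the axis overlay
```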

## Mounting Calibration Board

Please follow the instructions in the assembly guide for mounting the calibration board.

## Calibrating a 3rd-Person Camera

<iframe src="https://drive.google.com/file/d/1kcE4YzeJYJLbKanY2N8R66tiCqWjk6ij/preview" width="640" height="480" allow="autoplay"></iframe>

## Calibrating a Hand-Mounted Camera

<iframe src="https://drive.google.com/file/d/1Raqb35VDrr4YvSBaola5kSR-QFNKZ32w/preview" width="640" height="480" allow="autoplay"></iframe>


================================================
FILE: docs/example-workflows/data-collection.md
================================================
---
layout: default
title: Collecting Data
parent: Example Workflows
nav_order: 3
---

# Prerequisites

**Important:** Before proceeding please ensure you have followed the camera calibration guide and calibrated your cameras. 

This guide assumes you have already set up the DROID application software to run on your host machine or through Docker. Proceed with this guide having launched the control server on the NUC and the GUI application on the laptop.

# Collecting Data

## Using the GUI

* The GUI will start by prompting you to enter your name.
	* Note 1: Your trajectories will be associated with the name you type in. So, make sure to be consistent, and type in your full name! The login page will not let you pass until you have done so.
	* Note 2: Press shift to see the camera feed. Anytime you see a camera feed, confirm that there are exactly 6 images on screen (each of the 3 stereo cameras produces a left and a right image). If there is anything different, halt data collection, unplug and replug the 3 camera wires, and restart the GUI.

* After this, you will be presented with the task configuration page. This is where you enter all the tasks that are currently doable from the scene you have created for your robot. You may select from the predefined tasks using the checkboxes, or enter your own tasks in the text box. For your convenience, use the shift button to toggle between this page and the camera feed. This can be useful for checking what is in view of your cameras.
	* In the upper left corner are 3 tabs:
	* Task Ideas: A Google Doc of task ideas for inspiration, split by room type (e.g. bedroom, kitchen, etc.)
	* Preferred Tasks: We will be collecting demonstrations for 20 abstract tasks, and 20 specific tasks as a backup. These latter 20 tasks are specific instances of the 20 abstract tasks. We ask that around once every couple of days, you collect some trajectories for as many of these tasks as possible. Clicking this tab will bring you to a page that lists all of the specific tasks and allows you to keep track of which ones you have collected data for.
	* Franka Website: This will bring you to your Franka’s website, where you can lock / unlock robot joints, as well as activate FCI mode.

* In the lower right corner are another 3 tabs:
	* Collect Trajectory: This will bring you to the requested task page, where you will be prompted with your task for the next trajectory. You may press A to begin the trajectory, or B to sample a new task if it’s necessary.
	* Calibrate: This will bring you to the camera calibration page. Click the camera that you would like to calibrate. Note that because we need to turn the cameras on in high resolution, button clicks may take a while here. See the section below for more information on camera calibration.
	* Practice: This will allow you to collect a trajectory without the data being saved. You can use this tool however you see fit.

* Periodically, the GUI will prompt you with desired scene changes. Proceed with them as instructed. When the scene changes involve altering a camera position, make sure to recalibrate the associated camera!

* Miscellaneous Notes:
	* Finish trajectories in such a way that the robot can be reset (e.g. nothing in the gripper, as it will be dropped).
	* Try to create scenes with as many available tasks as possible.
	* Although we want you to stick to the requested tasks, use your best judgment.
	* At any time, hold 'Return' for 5 seconds to reset the robot.

# Uploading Data

Instructions for uploading data can be found in [this](https://github.com/droid-dataset/droid/tree/main/scripts) readme.



================================================
FILE: docs/example-workflows/example-workflows.md
================================================
---
layout: default
title: Example Workflows
nav_order: 5
has_children: true
has_toc: false
permalink: /docs/example-workflows
---

## 🤖 Welcome to the DROID example workflows guides

The goal of the workflow guides is to enable users/developers to operate the DROID platform once both the hardware and software have been configured. Guides within this section cover the following topics:

1. [Teleoperating the Robot](https://droid-dataset.github.io/droid/example-workflows/teleoperation.html): a guide on using the Oculus Quest 2 to teleoperate the robot. 
2. [Calibrating Cameras](https://droid-dataset.github.io/droid/example-workflows/calibrating-cameras.html): a guide on calibrating the extrinsic parameters of cameras.
3. [Collecting Data](https://droid-dataset.github.io/droid/example-workflows/data-collection.html): a guide on using the GUI to collect task demonstrations.
4. [Training & Evaluating Policies](https://droid-dataset.github.io/droid/example-workflows/training-policies.html): a guide on training and evaluating policies using the DROID platform.


================================================
FILE: docs/example-workflows/teleoperation.md
================================================
---
layout: default
title: Teleoperating the Robot
parent: Example Workflows
nav_order: 1
---

# Prerequisites

This guide assumes you have already set up the DROID application software to run on your host machine or through Docker. Proceed with this guide having launched the control server on the NUC and the GUI application on the laptop.

# Teleoperating the Robot

* To teleoperate the robot, your Oculus controller should be plugged into your laptop, and the permissions prompt should be accepted (done by putting the headset on and clicking “Accept”). Also, the controller has to be in view of the headset cameras. It detects the pose of the handle via infrared stickers on the handle.
* To control the robot, we will use only the right controller. If you would like to use the left controller for teleoperation instead, change [this](https://github.com/droid-dataset/droid/blob/5f2f96b5cf9d95dde67fda21a8ab776683aeeae7/droid/controllers/oculus_controller.py#L16) parameter.
* Teleoperation works by applying changes in the Oculus handle’s pose to the robot gripper’s pose (see the sketch below). The trigger on the front of the controller is used to control the gripper. Actions are only applied while the trigger on the side of the controller is being held.
* It is important for intuitive control that the controller’s definition of forward is aligned with the direction of the robot. The controller defines “forward” on the first step where the side button of the controller is held. At any point, you can redefine the forward direction by pressing down on the joystick until you hear a click. At some point, try changing the definition of forward to get a better feel for its purpose.
* To practice, select `Practice` from the GUI application.
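
To make the mechanics concrete, here is a minimal, illustrative sketch of delta-pose teleoperation, positions only. The names below are hypothetical; the real implementation in `droid/controllers/oculus_controller.py` also handles orientation and the forward-direction alignment described above.

```python
import numpy as np

def teleop_step(controller_pos, prev_controller_pos, gripper_target, side_trigger_held):
    """Apply the change in the controller's position to the gripper target.

    Hypothetical helper: all positions are (x, y, z) arrays expressed in a
    shared frame. Mirroring the behavior above, actions are only applied
    while the side trigger is held.
    """
    if not side_trigger_held:
        return np.asarray(gripper_target)
    delta = np.asarray(controller_pos) - np.asarray(prev_controller_pos)
    return np.asarray(gripper_target) + delta
```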

<iframe src="https://drive.google.com/file/d/1Rg10T5rVaK9m_0BYXq2EkVdNgLpZnj3P/preview" width="640" height="480" allow="autoplay"></iframe>



================================================
FILE: docs/example-workflows/training-policies.md
================================================
---
layout: default
title: Training Policies
parent: Example Workflows
nav_order: 4
---

# Training Policies

* Please follow the guide in our [policy learning repo](https://github.com/droid-dataset/droid_policy_learning) to learn how to train and evaluate policies for your own DROID setup!



================================================
FILE: docs/hardware-setup/assembly.md
================================================
---
layout: default
title: Assembly
parent: Hardware Setup
nav_order: 2
---

This assembly guide starts from constructing the standing desk, before progressing to mounting the robot on the desk and installing various other components to the platform. There are a number of design options/variations for the mobile platform; these are discussed under each of the major headings.

<details open markdown="block">
  <summary>
    Table of contents
  </summary>
  {: .text-delta }
1. TOC
{:toc}
</details>

## Building the Desk

It is expected that you have procured a standing desk similar to the one listed in the shopping list. The desk you procured should be accompanied by an assembly guide; please follow this guide to construct your standing desk.

**Important:** at this stage, don't attach any extras such as trays and/or cable management items to the bottom of the desk as we will be installing components to the underside of the table in future sections of the guide.

<!---
### Optional: Custom Frame and Wheels

In some instances it may be beneficial to perform additional modifications on your standing desk. A possible modification is adding a custom frame and wheel system to increase the overall stability of the platform... (Peter to complete UoE mod details here). 

<TODO: complete this section>
-->

## Mounting Robot on the Desk

### Option 1: Breadboard Base

A flexible option for mounting the robot on the desk is to bolt a set of rails to a breadboard and subsequently mount the robot on these rails. The main advantage of this form of mount is that it is possible to reposition the rails and, with them, the mounting position of the robot.

The first thing we will do is construct the rails upon which the robot will be mounted:

* Take a 1 foot aluminum bar, and slide the bracket circled in red in the below image (yours may look different) into the left and right side of it.
* Put two of the screws (1 cm long, 5mm wide) into a corner bracket and loosely screw one side into the bracket within the 1 foot aluminum bar on either side.
* Use this technique to create a rail that looks like the rail in the third image below, and another that is a mirrored version of it (i.e. one corner bracket in the upper right and one in the lower right)

<p>
<img src="../assets/hardware-setup/breadboard_base1.jpg" alt="image" width="45%" height="auto"> 
<img src="../assets/hardware-setup/breadboard_base2.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/breadboard_base3.jpg" alt="image" width="45%" height="auto">
</p>


Next we will mount these rails on to our breadboard:

* Screw the aluminum bars into the breadboard in the configuration shown below. Unfortunately it’s not possible to achieve perfect symmetry when bolting the rails to the breadboard. Feel free to nudge the centering a row or column over for diversity (e.g. configure it so there are two holes on the right instead of the left).
    * Make sure there is one hole between the bars at the top, and 5 holes between the bars at the bottom
    * **Hint:** to keep the screws from colliding when screwing them in, lift up the bar before screwing so the screw sits at the bottom. (Insert picture)
* Slide two brackets into the top levels of each aluminum bar, we will be using these to attach the Franka.

<p>
<img src="../assets/hardware-setup/breadboard_base4.jpg" alt="image" width="45%" height="auto"> 
<img src="../assets/hardware-setup/breadboard_base5.jpg" alt="image" width="45%" height="auto">
</p>

<img src="../assets/hardware-setup/breadboard_base6.jpg" alt="image" width="45%" height="auto">


Now we mount the robot:

* Align the brackets with the Franka screw holes, and loosely screw the robot to the base.
    * If you’re using an FR3, use the [⅝” long screws](https://www.mcmaster.com/92949A539/) to mount the robot instead of the original screws.

* Once everything is loosely screwed in, tighten ALL SCREWS until everything is entirely fixed!

<img src="../assets/hardware-setup/breadboard_base7.jpg" alt="image" width="45%" height="auto">

Next place the breadboard on your standing desk:

* Center the breadboard as much as you can on top of the desk. It should look roughly like the below image (minus the wires on the robot):
    * Important: Make sure that the Franka is facing the side without any desk protrusions. An example of a desk protrusion is the height adjuster on the recommended desk (seen on the back, left side of the desk below).

<img src="../assets/hardware-setup/mobile_base1.jpg" alt="image" width="45%" height="auto">

* Use four clamps (two on either side) to connect the breadboard to the desk.

<p>
<img src="../assets/hardware-setup/mobile_base2.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/mobile_base3.jpg" alt="image" width="45%" height="auto">
</p>

* Place the Franka power box, laptop, and NUC in the following configuration:

<p>
<img src="../assets/hardware-setup/mobile_base4.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/mobile_base5.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/mobile_base6.jpg" alt="image" width="45%" height="auto">
</p>

<!---
### Option 2: Fixed Mount Plate

<TODO: complete this section>
--->

## Mounting Additional Components on the Desk

A requirement of the DROID platform is that it can easily be transported from one location to the next. In this section, we outline configurations for components on the desk in a way that enables greater ease of transportation. Some important points to this end: 

* Throughout this section, take time to organize loose wiring with zip ties, velcro strips, and/or sticky hooks in order to keep your desk organized; it is important that wiring be as permanent as possible.
* Getting rid of as much loose wiring and making things as organized as possible during this step will make your life much easier later on!
* Cameras are mounted to the sides of the desk, so organize your wiring such that you keep space open for moving the camera mounting locations around!
* When this doc discusses the location under the desk, we will define forward using the robot. For example “Attach X to the back right side of the desk” means with respect to the direction that the robot is facing.
* Be generous with the amount of industrial strength velcro you use on power boxes; they can be heavy!
* Scroll down to see what the end product should look like. It would be a good idea to frequently come back to these images and compare.


### Option 1: Configuring Components on Underside of Table 

* Use velcro to attach the large power strip to the back left of the desk.
* Make sure you face the protruding wiring towards the back of the desk.
* Use velcro to attach the standing desk power box to the desk, organize the loose wiring, and plug it into the power strip:

<img src="../assets/hardware-setup/miscellaneous_wiring1.jpg" alt="image" width="45%" height="auto">

* Use velcro to attach the Franka emergency stop (the white button) to the bottom of the desk. Attach it on the right hand side directly under the Franka towards the outside of the desk so it is easily accessible. Face the protruding wire towards the center of the desk to keep it out of the way:

<p>
<img src="../assets/hardware-setup/miscellaneous_wiring2.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/miscellaneous_wiring3.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/miscellaneous_wiring4.jpg" alt="image" width="45%" height="auto">
</p>

* Connect the emergency stop wire to the X3 port on the Franka

<img src="../assets/hardware-setup/miscellaneous_wiring5.jpg" alt="image" width="45%" height="auto">

* Use Velcro to attach the power unit for the Robotiq gripper. Face the built-in wire towards the Franka. Plug the other side in and connect it to the power strip.

<p>
<img src="../assets/hardware-setup/miscellaneous_wiring6.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/miscellaneous_wiring7.jpg" alt="image" width="45%" height="auto">
</p>

* Use Velcro to attach the power unit for the NUC directly under it (middle right side of the desk). Face the built-in cable towards the outside of the desk. Plug in the other side and connect it to the power strip.

<p>
<img src="../assets/hardware-setup/miscellaneous_wiring8.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/miscellaneous_wiring9.jpg" alt="image" width="45%" height="auto">
</p>

* Connect the Franka power box power cable to the power strip:

<img src="../assets/hardware-setup/miscellaneous_wiring10.jpg" alt="image" width="45%" height="auto">

* Connect the cable from the X1 port on the Franka to the power box

<p>
<img src="../assets/hardware-setup/miscellaneous_wiring11.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/miscellaneous_wiring12.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/miscellaneous_wiring13.jpg" alt="image" width="45%" height="auto">
</p>

* Use Velcro to attach the Ethernet switch under the desk on the back right hand side of the desk (under the Franka control box).
* Connect and organize the Ethernet switch power cable, the switch-to-Franka-power-box Ethernet cable, the switch-to-NUC Ethernet cable, and the switch-to-laptop Ethernet cable.

<p>
<img src="../assets/hardware-setup/miscellaneous_wiring14.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/miscellaneous_wiring15.jpg" alt="image" width="45%" height="auto">
</p>

* If you’re using our recommended laptop, use velcro to attach the power box to the bottom of the desk on the back left side, with the built-in cable facing the robot. Plug in the detachable cable and connect it to the power strip.

<img src="../assets/hardware-setup/miscellaneous_wiring16.jpg" alt="image" width="45%" height="auto">

* You’re done! Your setup should look something like the photos below. Ignore the wires going down the robot and the hand camera; you’ll set those up later.
* Before proceeding, please set up your robot with the Franka software as we will need to put it into zero gravity mode for the next step. Once you’ve done this, familiarize yourself with the text in the “Powering Up Franka” section of this document.

<p>
<img src="../assets/hardware-setup/miscellaneous_wiring17.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/miscellaneous_wiring18.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/miscellaneous_wiring19.jpg" alt="image" width="45%" height="auto">
</p>

<!---
### Option 2: Configuring Components on Custom Shelving

<TODO: determine if we actually want to add this to the docs>
--->


## Mounting Hand Camera on Robot 

In this section, we will specify how to mount the Zed Mini camera on the Franka robot arm. It will be easier to perform this part of the assembly with the Franka robot arm in a position that gives you easy access to the last link of the robot (often referred to as link8).


* Secure the camera in the custom mount specified in the shopping list: insert a nut into the hole on the back of the mount, insert the 10mm screw on the other side, and tighten the screw until the attachment is tight.
	
<img src="../assets/hardware-setup/camera_mount1.jpg" alt="image" width="45%" height="auto">

* Remove the back two screws in the Franka wrist:

<img src="../assets/hardware-setup/camera_mount2.jpg" alt="image" width="45%" height="auto">

* Use 30mm screws to attach the hand camera to the gripper (**Important:** these are different from the default screws that come with the arm).

<img src="../assets/hardware-setup/camera_mount3.jpg" alt="image" width="45%" height="auto">

## Mounting Robotiq Gripper on Robot

In this section, we will first prepare the Robotiq gripper wiring as it is non-trivial. Following this we will specify how to mount the gripper on the arm.

### Preparing Wires

* You should have a thick cable that has 5 exposed wires colored red, black, white, green, and silver. We are going to connect them to ports as demonstrated in the below images. 

<p>
<img src="../assets/hardware-setup/robotiq_wiring1.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/robotiq_wiring2.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/robotiq_wiring3.jpg" alt="image" width="45%" height="auto">
</p>

* To connect these wires into these ports, you need to cut off some of the rubber to expose ~5mm of metal thread. Then, unscrew the bolts up top to create some space for the metal threads. Jam the metal threads into the empty space, and screw the bolts tight again on top of them.

* Connect the wires as follows:

<p>
<img src="../assets/hardware-setup/robotiq_wiring4.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/robotiq_wiring5.jpg" alt="image" width="45%" height="auto">
</p>

* Cover these wires with plastic of some sort to keep things from pulling on them. Then, use a Velcro strip to tightly bind everything together so that there is no pressure on the loose open wires. Otherwise, they will come loose and the gripper will stop working. Make sure to leave the outlet uncovered.

<p>
<img src="../assets/hardware-setup/robotiq_wiring6.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/robotiq_wiring7.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/robotiq_wiring8.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/robotiq_wiring9.jpg" alt="image" width="45%" height="auto">
</p>

* Try connecting the gripper to the AC cable. Make sure that the gripper light shines red.

### Mount Procedure

* To start, you should collect these screws from the box (4 x 10mm, 4 x 30mm)

<img src="../assets/hardware-setup/robotiq_attach1.jpg" alt="image" width="45%" height="auto">

* Align the gripper mount with the screw holes on the Franka wrist, with the protruding wiring facing the right of the robot. The USB port on the camera should be facing the same direction and parallel to the protruding wire.
* Use the four smaller screws to connect the gripper mount to the Franka wrist.

<img src="../assets/hardware-setup/robotiq_attach2.jpg" alt="image" width="45%" height="auto">

* Align the Robotiq gripper with the metal pins on the wrist mount similar to the picture above. Then, use the long screws to attach the gripper in all four corners.

<p>
<img src="../assets/hardware-setup/robotiq_attach3.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/robotiq_attach4.jpg" alt="image" width="45%" height="auto">
</p>

* Place a small piece of thick Velcro (soft side) over the light on the gripper. Otherwise, it will shine into the camera.
* **Important:** The gripper is rotated 180 degrees in the photos below for a better view.

<p>
<img src="../assets/hardware-setup/robotiq_attach5.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/robotiq_attach6.jpg" alt="image" width="45%" height="auto">
</p>

## Robot Cable Management

In this section, we will outline how to manage cables on the Franka robot arm. You’ll want to put the Franka robot arm into zero gravity mode for this section. It is also strongly recommended that you use the 12 foot Zed Mini wire that came with the camera.

Throughout this section we’ll be strapping the wires to the robot with the velcro strips. Use the best length for each connection, and be aware that for some connections you may need to connect two strips together:

<img src="../assets/hardware-setup/arm_wire1.jpg" alt="image" width="45%" height="auto">


It's important to keep the following points in mind as we work through this section:

* Leave enough slack between each wire-to-robot connection for any possible joint configuration. Joint limits are frequently reached, so do this CAREFULLY and TEST meticulously to make sure there’s enough slack! Otherwise you could damage the equipment.
* Keep the wires organized. We do not want them catching onto things during data collection!
* When you strap the Velcro straps, strap them as tight as possible! You also may want to put zip ties on either side of the wire to keep it from slipping around. Otherwise, you will compromise the amount of slack you allotted each joint.
* After you attach each strap, you may want to move the robot around to extreme positions to make sure there’s never tension on the wire.
* As you move along the wire, use zip ties about every 5cm!

* When you plug in the camera wire, make sure:
    * You use the long Zed wire that came with it! Other wires will not support fast enough data transfer.
    * When plugging the wire into the camera, the side with the arrow MUST face the side with the lenses. Otherwise you will get a segmentation fault when reading the camera, even though it is a USB-C connection and plugs in either way!
    * It’s a bit hard to see the arrows in the pictures below, but if you look at the ZED Mini wire in real life you’ll see what I am pointing at.

<img src="../assets/hardware-setup/arm_wire2.jpg" alt="image" width="45%" height="auto">

* Follow the pictures below to see the joint positions to move to before adding a tight velcro strap at the specified location.
* The first picture visualizes the neutral joint position, the second picture shows the extreme joint position that we need enough slack for, and the third picture shows the wire velcro-strapped with enough slack for that extreme joint position.

<p>
<img src="../assets/hardware-setup/arm_wire3.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/arm_wire4.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/arm_wire5.jpg" alt="image" width="45%" height="auto">
</p>

* This second velcro strap is just to manage loose wire along a joint.

<img src="../assets/hardware-setup/arm_wire6.jpg" alt="image" width="45%" height="auto">

* Visualization of the second joint extreme, and the next velcro strap location.

<img src="../assets/hardware-setup/arm_wire7.jpg" alt="image" width="45%" height="auto">

* Visualization of the third joint extreme, and the next velcro strap location.

<img src="../assets/hardware-setup/arm_wire8.jpg" alt="image" width="45%" height="auto">

* Visualization of the fourth joint extreme, and the next velcro strap location.

<img src="../assets/hardware-setup/arm_wire9.jpg" alt="image" width="45%" height="auto">

* Visualization of the final wiring:

<p>
<img src="../assets/hardware-setup/arm_wire10.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/arm_wire11.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/arm_wire12.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/arm_wire13.jpg" alt="image" width="45%" height="auto">
</p>

* Now, we will connect these wires to the rest of the mobile base.
* Organize the Robotiq wire slack left over under the desk. If you bought our suggested desk, you can use the wire holders that came with the desk to hold the slack:

<p>
<img src="../assets/hardware-setup/arm_wire14.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/arm_wire15.jpg" alt="image" width="45%" height="auto">
</p>


* Plug in the AC adapter to the power slot you left open in the Robotiq Gripper Cable Management section.

<p>
<img src="../assets/hardware-setup/arm_wire16.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/arm_wire17.jpg" alt="image" width="45%" height="auto">
</p>

* Organize the Zed Mini wire under the desk, so it only protrudes near the laptop as follows:

<img src="../assets/hardware-setup/arm_wire18.jpg" alt="image" width="45%" height="auto">

## Mounting Third-Person Cameras

In this section, we will clamp mounts for third-person cameras to the desk.

* Clamp a camera stand to either side of the desk such that the camera attached to the stand can be positioned with a view of the scene. Take the following additional points into consideration when clamping the stand to the desk.
	* The clamp and camera position of the stands should be randomized as much as possible during data collection. Don't fix the stand to a single position each time.
	* The robot arm can shake the table slightly as it moves. Whenever setting up the camera stand, tighten the clamp and all joints as much as possible to prevent the camera from shaking.

<p>
<img src="../assets/hardware-setup/third_person1.png" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/third_person2.jpg" alt="image" width="45%" height="auto">
</p>

## General Electrical Wiring

### Power Cable Wiring

### Ethernet Cable Wiring

### Camera Wiring
 
* To keep the Alienware laptop PCI boards from getting overloaded, it is important to connect the cameras in a very specific manner. Otherwise, you will get a Segmentation Fault.
* Plug all cameras directly into the laptop ports, without the use of any Many2One USB converters OR extension cords!
* In other words, directly plug two cameras in through the USB ports on the right side of the computer, and one directly in through the usb port on the back side of the computer.
* This should leave one USB-C port open for the oculus wiring (see the enumeration check below).
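
After wiring the cameras this way, a quick enumeration check can confirm that all three cameras are visible over USB. This is a hedged sketch, assuming the ZED SDK and its Python bindings (`pyzed`) are installed:

```python
import pyzed.sl as sl

# List every ZED device currently visible over USB.
devices = sl.Camera.get_device_list()
print(f"Detected {len(devices)} ZED camera(s)")
for dev in devices:
    print(f"  serial={dev.serial_number} model={dev.camera_model}")

# DROID uses three cameras: 2x Zed 2 plus 1x Zed Mini.
if len(devices) != 3:
    print("Missing cameras: recheck the direct USB connections described above.")
```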


## Mounting Calibration Board

In this section we will outline how to mount the calibration board.


### Option 1: Using Magnets

* Split the long magnets into three groups of 6 stacked magnets, and put a sticky tab (provided with them) on each magnet stack.
* Attach the stacks to the gripper (NOT THE STICKY SIDE!), aligning them along the metal screws as visualized below (again, the sticky side should be pointing away from the gripper!). Be careful to evenly space the magnets so the screws are in the middle.
* **Important:** Sticky adhesive not visualized in this photo

<p>
<img src="../assets/hardware-setup/calibration_board1.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/calibration_board2.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/calibration_board3.jpg" alt="image" width="45%" height="auto">
</p>

* Firmly press the Charuco board into the sticky side of the magnets, and hold it there for 30 seconds.

<p>
<img src="../assets/hardware-setup/calibration_board4.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/calibration_board5.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/calibration_board6.jpg" alt="image" width="45%" height="auto">
</p>

* Try moving the gripper around manually to make sure the board stays on during movement.
* **Important:** The board dents easily. Always be very careful not to drop it.
* During data collection, you can stick the charuco board to the standing desk (for transportation) using the magnets.
* Optionally, place 3 singular magnets in sequence near the other end of the board. This can be used to stick the board to the leg of the standing desk during transportation.

<p>
<img src="../assets/hardware-setup/calibration_board7.jpg" alt="image" width="45%" height="auto">
<img src="../assets/hardware-setup/calibration_board8.jpg" alt="image" width="45%" height="auto">
</p>



================================================
FILE: docs/hardware-setup/hardware-setup.md
================================================
---
layout: default
title: Hardware Setup
nav_order: 2
has_children: true
has_toc: false
permalink: /docs/hardware-setup
---

## 🔨 Welcome to the DROID hardware guides

The goals of the hardware guides are to: 

1. Procure all the required hardware components
2. Assemble the hardware components into a functional platform

### Step 1: Procure Hardware Components

The first step to replicate the DROID platform is to procure the required hardware. In the [shopping list](https://droid-dataset.github.io/droid/hardware-setup/shopping-list), we provide an exhaustive list of required components to assemble the platform along with a list of suppliers which you can reference when procuring items. 

### Step 2: Assemble Hardware Components

Once you have acquired the required hardware components, the [assembly guide](https://droid-dataset.github.io/droid/hardware-setup/assembly) will walk you through assembling these components into a functional data collection platform. This includes mounting the robot, cameras, and gripper, as well as a general cable management guide. 

### Final Result

By the end of both guides you should have a platform that resembles the image at the bottom of this page. If/when you get to this point, congratulations: you are now ready to progress to the [software setup](https://droid-dataset.github.io/droid/docs/software-setup).

<img src="../assets/hardware-setup/droid_setup.jpg" alt="image" width="90%" height="auto"> 



================================================
FILE: docs/hardware-setup/shopping-list.md
================================================
---
layout: default
title: Shopping List
parent: Hardware Setup
nav_order: 1
---

# Shopping List

## Approximate Total Cost: ~~$20,000~~ $46,000 (updated ~Sept 2025)

## Robot:

| Component | Quantity | Approximate Component Cost | Suppliers |
| --------- | -------- | -------------------------- | --------- |
| Franka Emika Panda (or) Franka Research 3 | 1 | ~~$10,500~~ $35,000 (after tariffs in the US; the $10k was most likely fake news in the first place) | [Franka Robotics](https://franka.de/franka-research-3#request-a-quote-fr) |

## Gripper:

| Component | Quantity | Approximate Component Cost | Suppliers |
| --------- | -------- | -------------------------- | --------- |
| 2F-85 Robotiq Gripper | 1 | ~~$4,700~~ $6,000 | [Robotiq Official Site](https://robotiq.com/products/2f85-140-adaptive-robot-gripper) |
| Power Supply | 1 | $13 | [Amazon US](https://www.amazon.com/SHNITPWR-Converter-Transformer-100-240V-5-5x2-5mm/dp/B07SDRDV5B/ref=sr_1_5?gclid=Cj0KCQjwkOqZBhDNARIsAACsbfLAkpfvkq7tgnpHcJb2H4Eg7Q8Df5htlSW5inOVLktYBrb2sOOB500aAsY5EALw_wcB&hvadid=174221512760&hvdev=c&hvlocphy=9031969&hvnetw=g&hvqmt=e&hvrand=18374790677584297399&hvtargid=kwd-16464769873&hydadcr=19109_9441157&keywords=24v%2B2a%2Bpower%2Bsupply&qid=1664826676&qu=eyJxc2MiOiIzLjg2IiwicXNhIjoiMy41NyIsInFzcCI6IjMuNDIifQ%3D%3D&sr=8-5&th=1) |

## Computers:

| Component | Quantity | Approximate Component Cost | Suppliers |
| --------- | -------- | -------------------------- | --------- |
| NUC | 1 | $500 | [Amazon US](https://www.amazon.com/NUC11PAHi7-Mainsteam-Barebone%EF%BC%8CIntel-i7-1165G7-Components/dp/B09BKRRT2Y/ref=sr_1_3?crid=MA71JDIBHYBB&keywords=NUC%2B11%2Bi7&qid=1669688069&sprefix=nuc%2B11%2Bi7%2Caps%2C192&sr=8-3&th=1) |
| Razer Blade 15 - QHD 240 Hz - GeForce RTX 4070 - Black  | 1 | ~~$2,400~~ (no longer available, just buy a decent Linux laptop or tower depending on whether you want to easily move the setup around; beefy onboard GPU only required if you don't want to run remote inference, but maybe get at least a small on-board NVIDIA GPU to be safe) | ~~[Razer](https://www.razer.com/gaming-laptops/Razer-Blade-15/RZ09-0485ZED3-R3U1)~~ |
| Oculus Quest | 1 | $249 | [Meta Store](https://www.meta.com/us/quest/products/quest-2/) |
| Monitor and Keyboard (no specific specification) | 1 | $100 |  |

## Cameras:

| Component | Quantity | Approximate Component Cost | Suppliers |
| --------- | -------- | -------------------------- | --------- |
| Zed 2 | 2 | $449 | [StereoLabs](https://store.stereolabs.com/en-gb/products/zed-2?_gl=1*1corjfm*_ga*MTE0NDI2NjE3LjE2NDk5NjcwMjM.*_ga_LQLTWBS792*MTY1MDQwMjQ2NC40LjAuMTY1MDQwMjQ2NC42MA..&_ga=2.198936172.278922144.1650231923-114426617.1649967023) |
| Zed Mini | 1 | $400 | [StereoLabs](https://store.stereolabs.com/products/zed-mini?_ga=2.58413544.391576102.1670373235-292338975.1668109772&_gl=1*1ep4wi8*_ga*MjkyMzM4OTc1LjE2NjgxMDk3NzI.*_ga_LQLTWBS792*MTY3MDcwODcxNC4xMS4xLjE2NzA3MDkzMjIuNjAuMC4w) |
| ULANZI Camera Mount | 2 | $80 | [Amazon US](https://www.amazon.com/Flexible-Adjustable-Articulated-Rotatable-Aluminum/dp/B08LV7GZVB/ref=sr_1_2?crid=2HACR1JZEBPRV&keywords=camera%2Bmount%2Bsturdy&qid=1670995104&s=electronics&sprefix=camera%2Bmount%2Bsturd%2Celectronics%2C183&sr=1-2&ufe=app_do%3Aamzn1.fos.006c50ae-5d4c-4777-9bc0-4513d670b6bc&th=1) |
| Charuco Board | 1 | $114 | [Calib.io](https://calib.io/products/charuco-targets?variant=9400454807599) |
| Camera Mount Files | 1 | $0 ($2 for print materials) | [STL files](https://drive.google.com/drive/folders/1k56XVdlfrXCX4iOlFlTlkoTh-2Px6CyD) |

## Mobile Base:

| Component | Quantity | Approximate Component Cost | Suppliers |
| --------- | -------- | -------------------------- | --------- |
| Screw Kit | 1 | $27 | [Amazon US](https://www.amazon.com/dp/B083SGJ7BD?ref_=cm_sw_r_cp_ud_dp_AQE4AW6MK6QAQ00SE417&th=1) |
| Standing Desk | 1 | $130 | [Amazon US](https://www.amazon.com/SHW-Electric-Height-Adjustable-Computer/dp/B08668Y49C/ref=mp_s_a_1_13?crid=21JB1RSHEAYFJ&keywords=40%2Brolling%2Bstanding%2Bdesk&qid=1644355171&sprefix=40%2Brolling%2Bstanding%2Bdesk%2Caps%2C298&sr=8-13&th=1) |
| Breadboard | 1 | $100 | [Newport](https://www.newport.com/p/SA2-13) |
| Clamps | 4 | $12 | [MSCDirect](https://www.mscdirect.com/product/details/67105742?cid=ppc-google-New+-+Clamping%2C+Workholding+%26+Positioning+-+PLA_sxxmVRNtt___164124448499_c_S&mkwid=sxxmVRNtt%7Cdc&pcrid=164124448499&rd=k&product_id=67105742&gclid=CjwKCAjw8e7mBRBsEiwAPVxxiGVA1SyGmWreg4kKH1lWPj3AiQYlmnwmHUBFVUUj3xBSkY_9NFEHZxoC_XYQAvD_BwE) |
| Aluminum Bars (12 inches) | 2 | $0.35 (per inch) | [8020](https://8020.net/1010.html) |
| Screws + T-Nuts | 12 | $0.66 | [8020](https://8020.net/3393.html) |
| Screws (For FR3 Only) | 1 | $10 | [McMaster-Carr](https://www.mcmaster.com/92949A539/) |

## Cords:

| Component | Quantity | Approximate Component Cost | Suppliers |
| --------- | -------- | -------------------------- | --------- |
| USB Hub | 1 | $10 | [Amazon US](https://www.amazon.com/BYEASY-Portable-Applicable-MacBook-Notebook/dp/B07FH7XJCD/ref=mp_s_a_1_1_sspa?crid=13LJFDB63AIUY&keywords=many%2Bto%2Bone%2Busb&qid=1674798390&sprefix=many%2Bto%2Bone%2Busb%2Caps%2C689&sr=8-1-spons&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUE1TVhTSDhZS0dXRTImZW5jcnlwdGVkSWQ9QTA0Njg0MjVSMEpFNzBUMzBOQ1cmZW5jcnlwdGVkQWRJZD1BMDc3NDUxOTFSTFdQOFNMQjFWMEcmd2lkZ2V0TmFtZT1zcF9waG9uZV9zZWFyY2hfYXRmJmFjdGlvbj1jbGlja1JlZGlyZWN0JmRvTm90TG9nQ2xpY2s9dHJ1ZQ&th=1) |
| Ethernet Cable | 3 | $5 | [Amazon US](https://www.amazon.com/AmazonBasics-Cat-6-Gigabit-Ethernet-Internet/dp/B089MGH8T5/ref=sr_1_1_sspa?crid=NNVN8X9N4SR4&keywords=short+ethernet+cable&qid=1669691593&sprefix=short+ethernet+cable+.%2Caps%2C227&sr=8-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEyMEtMVEtCWE5ZUERLJmVuY3J5cHRlZElkPUEwOTk4Njk5SlQ1UUdBQkRBMkU3JmVuY3J5cHRlZEFkSWQ9QTAyMDk5NDkxMzdOTlRaM1c4VFlWJndpZGdldE5hbWU9c3BfYXRmJmFjdGlvbj1jbGlja1JlZGlyZWN0JmRvTm90TG9nQ2xpY2s9dHJ1ZQ==) |

## Miscellaneous:

| Component | Quantity | Approximate Component Cost | Suppliers |
| --------- | -------- | -------------------------- | --------- |
| Magnets | 1 | $20 | [Amazon US](https://www.amazon.com/Neodymium-Magnets-Double-sided-Adhesive-Rare-Earth/dp/B078KTLWQ9/ref=sr_1_6?crid=1IOM068UZ4RDK&keywords=long%2Bmagnet&qid=1670802719&sprefix=long%2Bmagne%2Caps%2C133&sr=8-6&th=1) |
| Industrial Strength Velcro | 1 | $20 | [Amazon US](https://www.amazon.com/VELCRO-Brand-Industrial-Fasteners-VEL-30838-USA/dp/B0B74YZVSN/ref=sr_1_5?crid=3U9Y5NFUBJ3IH&keywords=industrial+velcro&qid=1669691954&sprefix=industtrial+velcro%2Caps%2C138&sr=8-5) |
| Cord Wraps | 1 | $18 | [Amazon US](https://www.amazon.com/VELCRO-Brand-Industrial-Fasteners-VEL-30838-USA/dp/B0B74YZVSN/ref=sr_1_5?crid=3U9Y5NFUBJ3IH&keywords=industrial+velcro&qid=1669691954&sprefix=industtrial+velcro%2Caps%2C138&sr=8-5) |
| Cord Hanger | 1 | $18 | [Amazon US](https://www.amazon.com/VELCRO-Brand-Industrial-Fasteners-VEL-30838-USA/dp/B0B74YZVSN/ref=sr_1_5?crid=3U9Y5NFUBJ3IH&keywords=industrial+velcro&qid=1669691954&sprefix=industtrial+velcro%2Caps%2C138&sr=8-5) |
| Zip Ties | 1 | $25 | [Amazon US](https://www.amazon.com/inch-Tensile-Strength-Resistant-Self-locking/dp/B09VL2RDH9/ref=sr_1_2_sspa?crid=3VSNR598R3WIB&keywords=zip+ties&qid=1669693628&sprefix=zip+tie%2Caps%2C205&sr=8-2-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUExOU42T1k4Q0VYUk5HJmVuY3J5cHRlZElkPUEwNDM4NjAyMUpFTDVCV0lUMVZLTCZlbmNyeXB0ZWRBZElkPUEwMDIxNDY2MTZMVUpWSEtPNFNHMSZ3aWRnZXROYW1lPXNwX2F0ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=) |
| Duct Tape | 1 | $8 | [Amazon US](https://www.amazon.com/Original-Strength-Duck-Silver-394475/dp/B0000DH4ME/ref=sr_1_3?crid=1BATBX7XS37IH&keywords=duct%2Btape&qid=1669693579&sprefix=duct%2Btape%2Caps%2C135&sr=8-3&th=1) |
| Cable Ties | 1 | $6 | [Amazon US](https://www.amazon.com/dp/B08TTPX4KB?ref_=cm_sw_r_cp_ud_dp_E8TXV87317XD0AXHRXZ8&th=1) |
| Power Strip | 1 | $22 | [Amazon US](https://www.amazon.com/Alestor-Protector-Outlets-Extension-Essentials/dp/B08P5LRY37/ref=mp_s_a_1_8?crid=3F3BGM9NP2U08&keywords=power%2Bstrip&qid=1674789403&sprefix=power%2Bst%2Caps%2C182&sr=8-8&th=1) |
| Extension Cord | 1 | $11 | [Amazon US](https://www.amazon.com/AmazonBasics-Extension-Cord-feet-Black/dp/B075BCD1LP/ref=mp_s_a_1_5?crid=13A81TEC0APYS&keywords=3%2Bprong%2Bextension%2Bcord&qid=1674789516&sprefix=3%2Bprong%2B%2Caps%2C376&sr=8-5&th=1) |
| Ethernet Switch | 1 | $22 | [Amazon US](https://www.amazon.com/NETGEAR-Gigabit-Ethernet-Unmanaged-1000Mbps/dp/B00KFD0SMC/ref=mp_s_a_1_9?crid=25PREBQW9BS0N&keywords=netgear+ethernet+splitter&qid=1674794661&sprefix=netgear+ethern%2Caps%2C136&sr=8-9) |


================================================
FILE: docs/index.md
================================================
---
layout: default
title: Introduction
nav_order: 1
---

# 🤖 **D**istributed **RO**bot **I**nteraction **D**ataset

![](./assets/index/droid_teaser.jpg)


## 👋 Welcome to the DROID Developer Documentation

The goal of this documentation site is to enable robotics researchers to:

1. Replicate the hardware of the DROID data collection platform.
2. Configure software to make the DROID data collection platform operational.
3. Onboard developers/users on using the platform and contributing data to the DROID dataset.

The guides for accomplishing these goals are split into the following high-level sections:

* 🔨 [**Hardware Setup:**](https://droid-dataset.github.io/droid/docs/hardware-setup) a list of the required hardware (with links to suppliers) and a guide for assembling the platform.
* 🖥️ [**Software Setup:**](https://droid-dataset.github.io/droid/docs/software-setup) guides on configuring device settings and software. 
* 📈 [**The DROID Dataset:**](https://droid-dataset.github.io/droid/the-droid-dataset) an in-the-wild robot manipulation dataset.
* 🤖 [**Example Workflows:**](https://droid-dataset.github.io/droid/docs/example-workflows) tutorials for common workflows (e.g. data collection, policy deployment).
* 📖 [**Contribution Guidelines:**](https://droid-dataset.github.io/droid/contribution-guidelines) guidelines for contributing data to DROID. 





================================================
FILE: docs/software-setup/docker.md
================================================
---
layout: default
title: Running Application through Docker
has_children: false
nav_order: 1
parent: Software Setup
---


This guide commences by outlining the configuration of your Franka robot arm. Following this, we detail the configuration of the Oculus Quest. We then outline the software setup for the NUC, which interfaces directly with the Franka control unit, and finally the laptop, which runs the GUI application and client software.

<details open markdown="block">
  <summary>
    Table of contents
  </summary>
  {: .text-delta }
1. TOC
{:toc}
</details>

# Configuring the Franka robot

## Franka World Account

A Franka World account is used to manage Franka robot arms and associated firmware updates. You may log in to Franka World or register an account at the following [portal](https://franka.world/). 

Thankfully Franka Robotics have provided a tutorial on registering and synchronizing software for your Franka robot on Franka World. Please follow this tutorial to set up your robot arm:

<iframe width="560" height="315" src="https://www.youtube.com/embed/lF6HwaFnGaQ?si=IfZgSJzYRVEyCJzO" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

Importantly, after the above tutorial, it is necessary to ensure that the Franka Control Interface (FCI) feature is installed on your robot. Further information on this interface, which Franka Robotics exposes, can be found in the guide below:

<iframe width="560" height="315" src="https://www.youtube.com/embed/91wFDNHVXI4?si=NzGwr9n7X5klrWHW" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Operating the Robot

Please refer to the Franka Robotics [Getting Started Guide](https://frankaemika.github.io/docs/getting_started.html) to perform the initial setup and operation of your robot arm. At this point it is worthwhile noting the details of how to activate and deactivate FCI for your robot; these details can be found [here](https://frankaemika.github.io/docs/getting_started.html#preparing-the-robot-for-fci-usage-in-desk). 


## Updating Inertia Parameters for Robotiq Gripper

In order to accurately represent the robot model with the Robotiq gripper we need to update the end-effector configuration. From the Desk UI you may access this option under Settings > End-Effector as outlined [here](https://facebookresearch.github.io/fairo/polymetis/troubleshooting.html#end-effector-configuration). 

Please add the following [configuration file](https://github.com/frankaemika/external_gripper_example/blob/master/panda_with_robotiq_gripper_example/config/endeffector-config.json) via the UI.

# Configuring the Oculus Quest

The Oculus will be used to run an application that tracks the pose of hand-held controllers which we use to teleoperate the robot.

The first step to setting up your Oculus Quest is to associate a Meta account with the device. You may create an account at the following [portal](https://secure.oculus.com/sign-up/). Once you've created a Meta account you also need to register as a Meta developer, a guide for accomplishing this can be found [here](https://developers.facebook.com/docs/development/register/).

Once you have a meta developer account, you are ready to begin enabling developer mode on your Oculus device. Developer mode is required to run a custom application for the Oculus Quest which we use as part of the DROID platform. For instructions on enabling developer mode on your Oculus device please see the following [guide](https://developer.oculus.com/documentation/native/android/mobile-device-setup/).

Later in this guide we will detail how to run the required application on the Oculus, for now the device setup is complete.

# Configuring the NUC

The NUC device is used to run the polymetis controller manager server. This server manages translating commands from the user client to the robot client. A high-level schematic outlining the controller manager is given below, and further details can also be found [here](https://facebookresearch.github.io/fairo/polymetis/overview.html):

<img src="../assets/software-setup/polymetis_controller_manager.png" alt="image" width="90%" height="auto">

To gain an understanding of the various components being installed, it is worthwhile to work through this section. If you are less concerned with understanding individual software dependencies and their function, using the automated install script is more time-efficient.

## Booting with Ubuntu 22.04

We will install Ubuntu 22.04 by flashing a USB drive with Ubuntu 22.04 and using this flashed USB drive as the install medium.

You may use the following [application](https://etcher.balena.io/) to flash a USB drive with the Ubuntu OS image. Download this application on a device other than the laptop/workstation as we don't want to include any unnecessary applications on this device. Follow the official [Ubuntu Installation guide](https://ubuntu.com/tutorials/install-ubuntu-desktop#1-overview) to set up the USB flash drive and install the Ubuntu OS (22.04).

When installing Ubuntu, set the computer password to `robot` and enable the `Log in automatically` setting.

## Ubuntu Pro Token

In order to simplify the process of performing an RT patch of the Ubuntu kernel, we leverage Ubuntu Pro. Generate an Ubuntu Pro token by following section 3, "Get your free Ubuntu pro subscription", of the following [guide](https://ubuntu.com/pro/tutorial). We will use this token to activate Ubuntu Pro and enable an RT-patched kernel on our machine.

## Cloning Codebase

To clone the codebase including submodule dependencies run:

```bash
git clone git@github.com:droid-dataset/droid.git
cd droid 
git submodule update --init --recursive
```

## Configure Parameters

Complete the following parameters in `./droid/misc/parameters.py` (an illustrative sketch of these entries follows this list). 

* Start by specifying static IP addresses for all the machines on the platform (NUC, laptop, control unit). 

* Add the sudo password for the machine you are using (only relevant for the NUC machine). 

* Set the type and serial number of your robot. The serial number for your robot can be found on the control box unit. 

* If this is your first time setting up the platform you can ignore the camera ids as you will need to load the GUI to generate these. 

* Set the Charuco board parameters to match the board you are using.

* Provide an Ubuntu Pro token (required to automate rt-patch of kernel on the NUC). 

Complete the following parameters in `./config/<robot_type>/franka_hardware.yaml`

* Specify the `robot_ip` parameter. 
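
For orientation, the sketch below shows the kinds of values being set. All names and values are illustrative placeholders; edit the fields that already exist in your copy of `parameters.py` rather than copying these.

```python
# Illustrative sketch only -- placeholder names and values, not the real file.

# Static IPs for the machines on the platform.
nuc_ip = "172.16.0.2"
laptop_ip = "172.16.0.3"
robot_ip = "172.16.0.4"         # the Franka control unit

sudo_password = "robot"         # only relevant on the NUC
robot_type = "fr3"              # "panda" or "fr3"
robot_serial_number = "..."     # printed on the control box unit

# Charuco board parameters -- match the board you purchased.
charuco_board_size = (8, 6)     # squares in each direction
charuco_square_length = 0.03    # meters
charuco_marker_length = 0.022   # meters

ubuntu_pro_token = "..."        # used to automate the RT kernel patch on the NUC
```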


## Run Setup Script

To complete the device setup execute the setup script with elevated privileges through running the following:

```bash
sudo ./scripts/setup/nuc_setup.sh
```

# Configuring the Laptop/Workstation 

The laptop/workstation device manages the execution of the Oculus Quest application, camera data and user client code including a GUI application. Recalling the Polymetis diagram from earlier, all the user client code is run on this device. 

<img src="../assets/software-setup/polymetis_controller_manager.png" alt="image" width="90%" height="auto">

## Booting with Ubuntu 22.04

We will install Ubuntu 22.04 by flashing a USB drive with Ubuntu 22.04 and using this flashed USB drive as the install medium.

You may use the following [application](https://etcher.balena.io/) to flash a USB drive with the Ubuntu OS image. Download this application on a device other than the laptop/workstation as we don't want to include any unnecessary applications on this device. Follow the official [Ubuntu Installation guide](https://ubuntu.com/tutorials/install-ubuntu-desktop#1-overview) to set up the USB flash drive and install the Ubuntu OS (22.04).

When installing Ubuntu, set the computer password to `robot` and enable the `Log in automatically` setting.

## Cloning Codebase    

To clone the codebase including submodule dependencies run:

```bash
git clone git@github.com:droid-dataset/droid.git                
cd droid
git submodule update --init --recursive
```

## Configure Parameters

Complete the following parameters in `./droid/misc/parameters.py`. These are the same fields described in the NUC section above.

* Specify static IP addresses for all the machines on the platform (NUC, laptop, control unit).

* Add the sudo password for the machine you are using (only relevant for the NUC machine).

* Set the type and serial number of your robot. The serial number for your robot can be found on the control box unit.

* If this is your first time setting up the platform you can ignore the camera ids as you will need to load the GUI to generate these.

* Set the Charuco board parameters to match the board you are using.

* Provide an Ubuntu Pro token (required to automate rt-patch of kernel on the NUC).

Complete the following parameters in `./config/<robot_type>/franka_hardware.yaml`

* Specify the `robot_ip` parameter.

## Run Setup Script

To complete the device setup execute the setup script with elevated privileges through running the following:

```bash
sudo ./scripts/setup/laptop_setup.sh
```

# Testing/Validating Entire Setup

In order to validate your device setup, we will attempt to collect a trajectory. Start by bringing up the control server on the NUC by running the corresponding Docker container. This can be accomplished by running the setup script, which spins up the control server container. Alternatively, you may run the container directly with the command below.


```bash
cd .docker/nuc 
docker compose -f docker-compose-nuc.yaml up
```

**Note:** you want to ensure that environment variables referenced in the docker compose file are exported in your shell (e.g. via `export VAR=value`) so they are available to the container.

Next, run the test script for collecting a trajectory on the laptop by running the test Docker container:

```bash
cd .docker/laptop
docker compose -f docker-compose-laptop.yaml run laptop_setup python scripts/test/collect_trajectory.py
```

**Note:** you want to ensure that environment variables referenced in the docker compose file are exported in your shell (e.g. via `export VAR=value`) so they are available to the container.


================================================
FILE: docs/software-setup/host-installation.md
================================================
---
layout: default
title: Running Application on Host
has_children: false
nav_order: 2
parent: Software Setup
---

This guide commences by outlining the configuration of your Franka robot arm. Following this, we detail the configuration of the Oculus Quest. We then outline the software setup for the NUC, which interfaces directly with the Franka control unit, and finally the laptop, which runs the GUI application and client software.

<details open markdown="block">
  <summary>
    Table of contents
  </summary>
  {: .text-delta }
1. TOC
{:toc}
</details>

# Configuring the Franka robot

## Franka World Account

A Franka World account is used to manage Franka robot arms and associated firmware updates. You may log in to Franka World or register an account at the following [portal](https://franka.world/). 

Thankfully Franka Robotics have provided a tutorial on registering and synchronizing software for your Franka robot on Franka World. Please follow this tutorial to set up your robot arm:

<iframe width="560" height="315" src="https://www.youtube.com/embed/lF6HwaFnGaQ?si=IfZgSJzYRVEyCJzO" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

Importantly, after the above tutorial, it is necessary to ensure that the Franka Control Interface (FCI) feature is installed on your robot. Further information on this interface, which Franka Robotics exposes, can be found in the guide below:

<iframe width="560" height="315" src="https://www.youtube.com/embed/91wFDNHVXI4?si=NzGwr9n7X5klrWHW" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Operating the Robot

Please refer to the Franka Robotics [Getting Started Guide](https://frankaemika.github.io/docs/getting_started.html) to perform the initial setup and operation of your robot arm. At this point it is worthwhile noting the details of how to activate and deactivate FCI for your robot; these details can be found [here](https://frankaemika.github.io/docs/getting_started.html#preparing-the-robot-for-fci-usage-in-desk). 


## Updating Inertia Parameters for Robotiq Gripper

In order to accurately represent the robot model with the Robotiq gripper we need to update the end-effector configuration. From the Desk UI you may access this option under Settings > End-Effector as outlined [here](https://facebookresearch.github.io/fairo/polymetis/troubleshooting.html#end-effector-configuration). 

Please add the following [configuration file](https://github.com/frankaemika/external_gripper_example/blob/master/panda_with_robotiq_gripper_example/config/endeffector-config.json) via the UI.

# Configuring the Oculus Quest

The Oculus will be used to run an application that tracks the pose of hand-held controllers which we use to teleoperate the robot.

The first step to setting up your Oculus Quest is to associate a Meta account with the device. You may create an account at the following [portal](https://secure.oculus.com/sign-up/). Once you've created a Meta account you also need to register as a Meta developer, a guide for accomplishing this can be found [here](https://developers.facebook.com/docs/development/register/).

Once you have a Meta developer account, you are ready to enable developer mode on your Oculus device. Developer mode is required to run the custom application we use as part of the DROID platform. For instructions on enabling developer mode on your Oculus device, please see the following [guide](https://developer.oculus.com/documentation/native/android/mobile-device-setup/).

Later in this guide we will detail how to run the required application on the Oculus; for now, the device setup is complete.

# Configuring the NUC

The NUC device is used to run the Polymetis controller manager server. This server translates commands from the user client to the robot client. A high-level schematic outlining the controller manager is given below; further details can also be found [here](https://facebookresearch.github.io/fairo/polymetis/overview.html):

<img src="../assets/software-setup/polymetis_controller_manager.png" alt="image" width="90%" height="auto">

## Booting with Ubuntu 18.04

We will install Ubuntu 18.04 by flashing a USB drive with the Ubuntu 18.04 image and using this flashed drive as the install medium.

You may use the following [application](https://etcher.balena.io/) to flash a USB drive with the Ubuntu OS image. Download this application on a device other than the NUC (e.g. a personal laptop), as we don't want to include any unnecessary applications on the NUC. Follow the official [Ubuntu Installation guide](https://ubuntu.com/tutorials/install-ubuntu-desktop#1-overview) to set up the USB flash drive and install the Ubuntu OS (18.04) on the NUC.

When installing Ubuntu, set the computer password to `robot` and enable the `Log in automatically` setting.

## Configuring Static IP Address

To set the static IP for the NUC, enter Settings > Network and add a wired connection (by clicking the + option under the Wired heading). Click the IPv4 tab and, under IPv4 Method, select Manual. Set the Address to 172.16.0.2 and the Netmask to 255.255.255.0. Click Apply to create this static IP address.

## Performing RT Patch of Kernel

In order to ensure that the real-time priorities set in the libfranka codebase are respected, we need to apply a Real-Time (RT) patch to the default kernel. For this section we will be following the [official guide](https://frankaemika.github.io/docs/installation_linux.html#setting-up-the-real-time-kernel) provided by Franka Robotics.

To ensure you replicate the same setup as the original DROID platform implementation, start by installing dependencies with:

```bash
sudo apt-get install build-essential bc curl ca-certificates gnupg2 libssl-dev lsb-release libelf-dev bison flex dwarves zstd libncurses-dev
```

Following the installation of dependencies, download the kernel source and RT patch that have been tested with the DROID platform:

```bash
curl -SLO https://mirrors.edge.kernel.org/pub/linux/kernel/v5.x/linux-5.4.70.tar.xz
curl -SLO https://mirrors.edge.kernel.org/pub/linux/kernel/projects/rt/5.4/older/patch-5.4.70-rt40.patch.xz
xz -d *.xz
```

Follow the official Franka Robotics guide from the [Compiling the kernel](https://frankaemika.github.io/docs/installation_linux.html#compiling-the-kernel) section onwards, using the kernel source and patch downloaded above.

## CPU Frequency Scaling

Most CPUs are configured by default to use a lower operating frequency. We wish to override this setting to increase the operating frequency and, as a result, reduce latency when interfacing with libfranka. Follow this [section](https://frankaemika.github.io/docs/troubleshooting.html#disabling-cpu-frequency-scaling) of the official Franka Robotics guides to disable CPU frequency scaling.


## Configuring Python Virtual Environment (Conda)

We will use Conda as the package manager for our python virtual environment. You can install Conda from the following [link](https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html). Once you've installed Conda, ensure that you have configured git on the NUC so you can pull software packages from GitHub. A guide on configuring git is provided [here](https://docs.github.com/en/get-started/getting-started-with-git/set-up-git#setting-up-git).

Next, we will set up the Polymetis library. The following [guide](https://facebookresearch.github.io/fairo/polymetis/installation.html#for-advanced-users-developers) details how to build Polymetis; we will be following it.

The first step of the linked guide requires you to clone the fairo repository. We have included a pinned version of the library with the DROID main repository as a git submodule. Clone the DROID repository and submodules with the following command:

```bash
git clone git@github.com:droid-dataset/droid.git
cd droid
# sync and pull submodules
git submodule sync
git submodule update --init --recursive
```

You should now find the fairo repository at `droid/fairo`; from this point, continue with steps 2 and 3 of the Polymetis guide.

In step 4, when building libfranka from source, it is crucial to pin the libfranka version to one that is compatible with your robot; see the table below for robot hardware and libfranka version compatibility:

| Robot | libfranka Version |
| --------- | -------- |
| Franka Emika Panda | 0.9.0 |
| Franka Research 3 | 0.10.0 |

In order to install a specific libfranka version (e.g. 0.10.0) using the linked guide, you can run the `build_libfranka` script as follows:

```bash
./scripts/build_libfranka.sh 0.10.0
```

When building Polymetis from source with CMake, be sure to set all settings to `ON`:

```bash
cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_FRANKA=ON -DBUILD_TESTS=ON -DBUILD_DOCS=ON
make -j
```

Now that Polymetis is built, we will install the remaining dependencies into our conda virtual environment. From the root directory of this repository, with the python virtual environment created during the Polymetis build activated, run the following commands:

```bash
pip install -e .

# Done like this to avoid dependency issues
pip install dm-robotics-moma==0.5.0 --no-deps
pip install dm-robotics-transformations==0.5.0 --no-deps
pip install dm-robotics-agentflow==0.5.0 --no-deps
pip install dm-robotics-geometry==0.5.0 --no-deps
pip install dm-robotics-manipulation==0.5.0 --no-deps
pip install dm-robotics-controllers==0.5.0 --no-deps
```

The next step is to ensure that the project config files reference the robot you are using. First we will update the robot client config file, and then the robot model config file. Links to the required config files are below:

| Robot | Config Files |
| --------- | -------- |
| Franka Emika Panda | [config files](https://drive.google.com/drive/folders/1wXTQQbFKjd9ed3yKxB4td9GzA_XrR7Xk) |
| Franka Research 3 | [config files](https://drive.google.com/drive/folders/178-MJTAVV0m5_RDs2ScUNcYameGDA0Eg) |

We will start by updating the robot client config file. Go into `fairo/polymetis/polymetis/conf/robot_client` and delete the existing `franka_hardware.yaml` file. Replace it with `franka_hardware[robot_name].yaml` from the linked config files folder and rename it to `franka_hardware.yaml`. Finally, edit this `franka_hardware.yaml` file and set `executable_cfg.robot_ip` to match your Franka control box's IP address.

Next we will update the robot model config file. Go into `fairo/polymetis/polymetis/conf/robot_model` and delete the existing `franka_panda.yaml` file. Replace it with `franka_panda[robot_name].yaml` from the linked config files folder and rename it to `franka_panda.yaml`.

Update the IP parameters in `droid/misc/parameters.py`: set `robot_ip` to match the IP address of your robot and `nuc_ip` to match the IP address of your NUC. Also set `sudo_password` to match your machine's sudo password (sudo access is required to launch the robot). Finally, update the `robot_type` parameter to `panda` or `fr3` depending on which robot you are using.
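
For reference, a minimal sketch of what these edits might look like is shown below; the robot IP is a placeholder that depends on your network, while the NUC IP and password follow the values chosen earlier in this guide:

```python
# droid/misc/parameters.py (excerpt) -- placeholder values, edit to match your setup
robot_ip = "172.16.0.4"   # placeholder: IP address of your Franka control box
nuc_ip = "172.16.0.2"     # the static IP configured for the NUC above
sudo_password = "robot"   # the password chosen during the Ubuntu install
robot_type = "fr3"        # or "panda", depending on your robot
```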

If you chose to install Miniconda instead of Anaconda in previous steps of this guide, you will need to make the following edits:

In `droid/franka`, change the word `anaconda` to `miniconda` in the scripts `launch_gripper.sh` and `launch_robot.sh`, and change the paths to be absolute. Repeat for the `launch_server.sh` file in `scripts/server`.

## Optional: Configure Headless Server

When you have validated your entire setup you may wish to make the NUC a headless server such that you don't need to manually run commands to start the control server. 

From the terminal enter `crontab -e` and add the following line to the bottom of the file:

```
@reboot bash [YOUR PATH TO DROID]/scripts/server/launch_server.sh
```

You may remove any displays associated with your NUC once you have configured it to run as a headless server. 

# Configuring the Laptop/Workstation 

The laptop/workstation manages the execution of the Oculus Quest application, the camera data, and the user client code, including a GUI application. Recalling the Polymetis diagram from earlier, all of the user client code runs on this device.
<img src="../assets/software-setup/polymetis_controller_manager.png" alt="image" width="90%" height="auto">

## Booting with Ubuntu 22.04

We will install Ubuntu 22.04 by flashing a USB drive with the Ubuntu 22.04 image and using this flashed drive as the install medium.

You may use the following [application](https://etcher.balena.io/) to flash a USB drive with the Ubuntu OS image. Download this application on a device other than the laptop/workstation, as we don't want to include any unnecessary applications on this device. Follow the official [Ubuntu Installation guide](https://ubuntu.com/tutorials/install-ubuntu-desktop#1-overview) to set up the USB flash drive and install the Ubuntu OS (22.04).

When installing Ubuntu, set the computer password `robot` and enable the `Log in automatically` setting.

## Configuring Static IP Address

To set the static IP for the laptop, enter Settings > Network and add a wired connection (by clicking the + option under the Wired heading). Click the IPv4 tab and, under IPv4 Method, select Manual. Set the Address to 172.16.0.1 and the Netmask to 255.255.255.0. Click Apply to create this static IP address.

## Configuring Python Virtual Environment (Conda)

We will use Conda as the package manager for our python virtual environment. You can install Conda from the following [link](https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html). Once you've installed Conda, ensure that you have configured git on the laptop/workstation so you can pull software packages from GitHub. A guide on configuring git is provided [here](https://docs.github.com/en/get-started/getting-started-with-git/set-up-git#setting-up-git).

Clone the DROID repository and submodules with the following command:

```bash
git clone git@github.com:droid-dataset/droid.git
cd droid
git submodule sync
git submodule update --init --recursive
```

Next, create and activate a conda environment called `robot` with python 3.7 by running the following commands:

```bash
conda create -n "robot" python=3.7
conda activate robot
```

Ensure that GCC is installed by running the following commands:

```bash
sudo apt update
sudo apt install build-essential
```

Verify your installation by running the following (you should see a version number printed):

```bash
gcc --version
```

Next we will install the ZED SDK and Python API, which are required to interface with the ZED cameras. Follow the official guide outlined [here](https://www.stereolabs.com/docs/installation/linux), adhering to the instructions below during your installation:

* Ensure you have the robot conda environment activated
* Enter yes when prompted to install CUDA
* Enter yes to everything else during the installation procedure, with the exception of optimizing the ZED models, which is optional

Once you have restarted your machine post-installation, activate the robot conda environment and test that you can import pyzed by running:

```bash
python -c "import pyzed"
```
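
If the import succeeds, you can also enumerate the connected cameras; this mirrors how the DROID code discovers cameras in `droid/camera_utils/camera_readers/zed_camera.py`:

```python
import pyzed.sl as sl

# Print the serial number of every connected ZED camera; these serial
# numbers are used as camera IDs in droid/misc/parameters.py.
for cam in sl.Camera.get_device_list():
    print(cam.serial_number)
```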

The software for the Oculus application is included under `droid/oculus_reader` as a git submodule. Start by installing the oculus reader dependencies into the current virtual environment by running:

```bash
pip install -e ./droid/oculus_reader
```

Next, install the Android Debug Bridge, a software dependency required to interface with your Oculus Quest, by running:

```bash
sudo apt install android-tools-adb
```

Finally, from the root directory of the project repository, with the python virtual environment activated, run the following commands to install the remaining dependencies:

```bash
pip install -e .

# Done like this to avoid dependency issues
pip install dm-robotics-moma==0.5.0 --no-deps
pip install dm-robotics-transformations==0.5.0 --no-deps
pip install dm-robotics-agentflow==0.5.0 --no-deps
pip install dm-robotics-geometry==0.5.0 --no-deps
pip install dm-robotics-manipulation==0.5.0 --no-deps
pip install dm-robotics-controllers==0.5.0 --no-deps
```

Update the IP parameters in `droid/misc/parameters.py`: set `robot_ip` to match the IP address of your robot and `nuc_ip` to match the IP address of your NUC. In addition, set `robot_serial_number` to match your robot's serial number (found on the Franka website under Settings -> Franka World -> Control S/N). For the `robot_type` variable, enter 'fr3' or 'panda' depending on which Franka robot you are using. Update the Charuco board parameters to match the board you are using. Finally, you will need to set the camera IDs in this parameters file; further details on how to accomplish this are provided later in the guide.
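
As a rough illustration, the camera IDs in `parameters.py` are the ZED serial numbers (the same values printed by the `pyzed` snippet above); the serial numbers below are placeholders:

```python
# droid/misc/parameters.py (excerpt) -- placeholder serial numbers
hand_camera_id = "12345678"      # wrist-mounted ZED camera
varied_camera_1_id = "23456789"  # first exterior ZED camera
varied_camera_2_id = "34567890"  # second exterior ZED camera
```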

If you chose to install Miniconda instead of Anaconda in previous steps of this guide, you will need to make the following edits:

In `droid/franka`, change the word `anaconda` to `miniconda` in the scripts `launch_gripper.sh` and `launch_robot.sh`, and change the paths to be absolute. Repeat for the `launch_server.sh` file in `scripts/server`.


# Testing/Validating Entire Setup

In order to validate your device setup, we will attempt to collect a trajectory. Start the control server on the NUC by running the following commands:

```bash
conda activate polymetis-local
python scripts/server/run_server.py
```

With the control server running on the NUC, run the script for testing trajectory collection with:

```bash
conda activate robot
python scripts/test/collect_trajectory.py
```


================================================
FILE: docs/software-setup/software-setup.md
================================================
---
layout: default
title: Software Setup
has_children: true
has_toc: false
nav_order: 3
permalink: /docs/software-setup
---

## 🖥️  Welcome to the DROID software guides.

The goals of the software guides are to:

1. Configure the Franka robot for DROID application software
2. Configure the Oculus Quest 2 for DROID application software
3. Set up a Polymetis control server on the NUC 
4. Set up a user client and GUI on the laptop/workstation
5. Validate the entire platform setup

There are two methods of installing the software used in DROID: you can run the application software directly on the host machine, or through Docker. Below we provide a brief description of each; importantly, it is only necessary to perform one of the two installations, so please choose one based on the outlined considerations. We strongly recommend the Docker installation, as it requires less manual configuration and decouples most of the DROID application config from your host machine configuration.

### Installation Method 1: Docker Installation
Running DROID software through Docker requires fewer installation steps and allows machines to easily be repurposed for other sets of software, as the application software is containerised. Running the application software through Docker does, however, require an understanding of Docker and the management of Docker containers.

### Installation Method 2: Host Installation
Running DROID software directly on the host machine requires more installation steps, but is worthwhile where machines are dedicated to the DROID setup, as it forgoes the need to launch and manage Docker containers.


================================================
FILE: docs/the-droid-dataset.md
================================================
---
layout: default
title: The DROID Dataset
nav_order: 4
---

## 🔍 Exploring the Dataset

It is possible to interactively explore the DROID dataset via the following [interactive dataset visualizer](https://droid-dataset.github.io/dataset.html). This is a great way to start understanding the dataset and is a highly recommended starting point.

<a href="https://droid-dataset.github.io/dataset.html"><img src="./assets/the-droid-dataset/droid_data_visualizer.png" alt="image" width="90%" height="auto"></a>

Additionally, we provide a [Dataset Colab](https://colab.research.google.com/drive/1b4PPH4XGht4Jve2xPKMCh-AXXAQziNQa?usp=sharing) that demonstrates how to load and visualize samples from our dataset.

## 📈 Using the Dataset

The DROID dataset is hosted within a Google Cloud Bucket and is offered in two formats:

1. [RLDS](https://github.com/google-research/rlds): ideal for dataloading for the purpose of training policies
2. Raw Data: ideal for those who wish to manipulate the raw data, use high-resolution images, or use stereo/depth information.

To download the dataset, install the [gsutil package](https://cloud.google.com/storage/docs/gsutil_install). We provide the following download options:
```
# Full DROID dataset in RLDS (1.7TB)
gsutil -m cp -r gs://gresearch/robotics/droid <path_to_your_target_dir>

# Example 100 episodes from the DROID dataset in RLDS for debugging (2GB)
gsutil -m cp -r gs://gresearch/robotics/droid_100 <path_to_your_target_dir>

# Raw DROID dataset in stereo HD, stored as MP4 videos (8.7TB)
gsutil -m cp -r gs://gresearch/robotics/droid_raw <path_to_your_target_dir>

# Raw DROID dataset, non-stereo HD video only (5.6TB, excluding stereo video & raw SVO cam files)
gsutil -m rsync -r -x ".*SVO.*|.*stereo.*\.mp4$" "gs://gresearch/robotics/droid_raw" <path_to_your_target_dir>
```

### Accessing RLDS Dataset

We provide a [Dataset Colab](https://colab.research.google.com/drive/1b4PPH4XGht4Jve2xPKMCh-AXXAQziNQa?usp=sharing) that walks you through the process of loading and visualizing a few samples from the DROID dataset. 

We also provide an example of a "training-ready" data loader that allows for efficient loading of DROID data for policy training (in PyTorch and JAX), including parallelized loading, normalization and augmentation in our [policy learning repo](https://github.com/droid-dataset/droid_policy_learning/blob/master/examples/droid_dataloader.py).
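
For reference, a minimal sketch of loading the downloaded RLDS data with `tensorflow_datasets` is shown below (the Colab above walks through this in full); the local path is a placeholder:

```python
import tensorflow_datasets as tfds

# Point tfds at the downloaded directory containing dataset_info.json
# (for droid_100 this may be a versioned subdirectory, e.g. .../droid_100/1.0.0).
builder = tfds.builder_from_directory(builder_dir="<path_to_your_target_dir>/droid_100/1.0.0")
dataset = builder.as_dataset(split="train")

for episode in dataset.take(1):
    for step in episode["steps"]:
        image = step["observation"]["exterior_image_1_left"]
        action = step["action"]  # 7-dim robot action (see schema below)
```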


## 📝 Dataset Schema

The following fields are contained in every RLDS episode:
```python
DROID = {
    "episode_metadata": {
        "recording_folderpath": tf.Text,  # path to the folder of recordings
        "file_path": tf.Text,  # path to the original data file
    },
    "steps": {
        "is_first": tf.Scalar(dtype=bool),  # true on first step of the episode
        "is_last": tf.Scalar(dtype=bool),  # true on last step of the episode
        "is_terminal": tf.Scalar(dtype=bool),  # true on last step of the episode if it is a terminal step, True for demos

        "language_instruction": tf.Text,  # language instruction
        "language_instruction_2": tf.Text,  # alternative language instruction
        "language_instruction_3": tf.Text,  # alternative language instruction
        "observation": {
            "gripper_position": tf.Tensor(1, dtype=float64),  # gripper position state
            "cartesian_position": tf.Tensor(6, dtype=float64),  # robot Cartesian state
            "joint_position": tf.Tensor(7, dtype=float64),  # joint position state
            "wrist_image_left": tf.Image(180, 320, 3, dtype=uint8),  # wrist camera RGB left viewpoint
            "exterior_image_1_left": tf.Image(180, 320, 3, dtype=uint8),  # exterior camera 1 left viewpoint
            "exterior_image_2_left": tf.Image(180, 320, 3, dtype=uint8),  # exterior camera 2 left viewpoint
        },
        "action_dict": {
            "gripper_position": tf.Tensor(1, dtype=float64),  # commanded gripper position
            "gripper_velocity": tf.Tensor(1, dtype=float64),  # commanded gripper velocity
            "cartesian_position": tf.Tensor(6, dtype=float64),  # commanded Cartesian position
            "cartesian_velocity": tf.Tensor(6, dtype=float64),  # commanded Cartesian velocity
            "joint_position": tf.Tensor(7, dtype=float64),  # commanded joint position
            "joint_velocity": tf.Tensor(7, dtype=float64),  # commanded joint velocity
        },
        "discount": tf.Scalar(dtype=float32),  # discount if provided, default to 1
        "reward": tf.Scalar(dtype=float32),  # reward if provided, 1 on final step for demos
        "action": tf.Tensor(7, dtype=float64),  # robot action, consists of [6x joint velocities, 1x gripper position]
    },
}
```

### Accessing Raw Data

You can download the raw DROID data using the gsutil command listed above. It contains full-HD stereo videos for all three cameras, alongside all other information contained in the RLDS dataset. Concretely, each episode folder contains the following:
```
episode:
   |
   |---- metadata_*.json: Episode metadata like building ID, data collector ID etc.
   |---- trajectory.h5: All low-dimensional information like action and proprioception trajectories.
   |---- recordings:
             |
             |---- MP4:
             |      |
             |      |---- *.mp4: High-res video of single (left) camera view.
             |      |---- *-stereo.mp4: High-res video of concatenated stereo camera views.
             |
             |---- SVO:
                    |
                    |---- *.svo: Raw ZED SVO file with encoded camera recording information (contains some additional metadata)

```
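
To inspect the low-dimensional data, a minimal sketch using `h5py` is shown below; the episode path is a placeholder, and listing the keys is the safest way to discover the exact layout:

```python
import h5py

# Open one episode's trajectory file and print every group/dataset it
# contains, along with dataset shapes where available.
with h5py.File("<path_to_episode>/trajectory.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```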

**Note**: We realized that we missed 20% of episodes when face-blurring & copying the *raw* DROID data to the release bucket and are working on uploading the remainder of the dataset. This only affects the *raw* version of DROID, the RLDS version is complete. This should be fixed within a few days -- please reach out to pertsch@berkeley.edu if you have any concerns in the meantime!

## 📄 Data Analysis and Further Information
Please consult the [paper](https://droid-dataset.github.io/paper.pdf) for detailed data analysis and further information about the dataset.


================================================
FILE: droid/__init__.py
================================================


================================================
FILE: droid/calibration/__init__.py
================================================


================================================
FILE: droid/calibration/calibration_info.json
================================================
{"26405488_left": {"pose": [0.1113325872574514, 0.8207481722451926, 0.04550905451430457, -1.64633974845365, -0.17857016318886476, -2.937663676632542], "timestamp": 1673597556.0704472}, "26405488_right": {"pose": [-0.009417838799177, 0.7972668750161092, 0.0693151223699266, -1.6473025257925253, -0.18917295438381165, -2.942218897492042], "timestamp": 1673597557.3687217}, "19824535_left": {"pose": [-0.07528310648512061, 0.02872136764193708, 0.023316565592628957, -0.3255773557130956, 0.006247432431090383, -1.562874530416053], "timestamp": 1680893804.7454631}, "19824535_right": {"pose": [-0.06961440752114842, -0.035396599227355253, 0.020410786906711484, -0.31886273624702377, 0.007822018427507738, -1.552143062181081], "timestamp": 1680893807.0068362}, "29838012_left": {"pose": [0.41548840480128696, 0.4775470950869088, 0.4114322255308952, -2.2160410202605263, -0.02257694787387754, -2.963723863071196], "timestamp": 1686263782.2506583}, "29838012_right": {"pose": [0.294022371612491, 0.45672117609819446, 0.4111009411005134, -2.2079948175639927, -0.01940119744356439, -2.959776211707233], "timestamp": 1686263784.2242827}, "23404442_left": {"pose": [0.25955495037528126, -0.5001018266363574, 0.436881899751338, -2.2773456296459424, 0.3361897716248827, -0.5369007045203406], "timestamp": 1686263149.2820735}, "23404442_right": {"pose": [0.3520388119867355, -0.5577297192334658, 0.4018245547798702, -2.279806362246719, 0.34639742666205775, -0.5475215836690435], "timestamp": 1686263151.7758524}}


================================================
FILE: droid/calibration/calibration_utils.py
================================================
import json
import os
import time
from collections import defaultdict

import cv2
import matplotlib.pyplot as plt
import numpy as np
from cv2 import aruco
from scipy.spatial.transform import Rotation as R

from droid.misc.parameters import *
from droid.misc.transformations import *

# Create Board #
CHARUCO_BOARD = aruco.CharucoBoard_create(
    squaresX=CHARUCOBOARD_COLCOUNT,
    squaresY=CHARUCOBOARD_ROWCOUNT,
    squareLength=CHARUCOBOARD_CHECKER_SIZE,
    markerLength=CHARUCOBOARD_MARKER_SIZE,
    dictionary=ARUCO_DICT,
)

# Detector Params
detector_params = cv2.aruco.DetectorParameters_create()
detector_params.cornerRefinementMethod = cv2.aruco.CORNER_REFINE_SUBPIX
calib_flags = cv2.CALIB_USE_INTRINSIC_GUESS + cv2.CALIB_FIX_PRINCIPAL_POINT + cv2.CALIB_FIX_FOCAL_LENGTH

# Prepare Calibration Info #
dir_path = os.path.dirname(os.path.realpath(__file__))
calib_info_filepath = os.path.join(dir_path, "calibration_info.json")


def load_calibration_info(keep_time=False):
    if not os.path.isfile(calib_info_filepath):
        return {}
    with open(calib_info_filepath, "r") as jsonFile:
        calibration_info = json.load(jsonFile)
    if not keep_time:
        calibration_info = {key: data["pose"] for key, data in calibration_info.items()}
    return calibration_info


def update_calibration_info(cam_id, transformation):
    calibration_info = load_calibration_info(keep_time=True)
    calibration_info[cam_id] = {"pose": list(transformation), "timestamp": time.time()}

    with open(calib_info_filepath, "w") as jsonFile:
        json.dump(calibration_info, jsonFile)


def check_calibration_info(required_ids, time_threshold=3600):
    calibration_info = load_calibration_info(keep_time=True)
    calibration_ids = list(calibration_info.keys())
    info_dict = {"missing": [], "old": []}

    for cam_id in required_ids:
        if cam_id not in calibration_ids:
            info_dict["missing"].append(cam_id)
            continue
        time_passed = time.time() - calibration_info[cam_id]["timestamp"]
        if time_passed > time_threshold:
            info_dict["old"].append(cam_id)

    return info_dict


def visualize_calibration(calibration_dict):
    shapes = [".", "o", "v", "^", "s", "x", "D", "h", "<", ">", "8", "1", "2", "3"]
    assert len(calibration_dict) < (len(shapes) - 1)
    plt.clf()

    axes = plt.subplot(111, projection="3d")
    axes.plot(0, 0, 0, "*", label="Robot Base")

    for view_id in calibration_dict:
        curr_shape = shapes.pop(0)
        pose = calibration_dict[view_id]
        angle = [int(d * 180 / np.pi) for d in pose[3:]]
        label = "{0}: {1}".format(view_id, angle)
        axes.plot(pose[0], pose[1], pose[2], curr_shape, label=label)

    plt.legend(loc="center right", bbox_to_anchor=(2, 0.5))
    plt.title("Calibration Visualization")
    plt.show()


def calibration_traj(t, pos_scale=0.1, angle_scale=0.2, hand_camera=False):
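    # Sweep the end-effector through a periodic, sinusoid-based pose offset so
    # the calibration target is seen from varied viewpoints; for the hand
    # (wrist-mounted) camera the axes are remapped and the angles damped.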
    x = -np.abs(np.sin(3 * t)) * pos_scale
    y = -0.8 * np.sin(2 * t) * pos_scale
    z = 0.5 * np.sin(4 * t) * pos_scale
    a = -np.sin(4 * t) * angle_scale
    b = np.sin(3 * t) * angle_scale
    c = np.sin(2 * t) * angle_scale
    if hand_camera:
        value = np.array([z, y, -x, c / 1.5, b / 1.5, -a / 1.5])
    else:
        value = np.array([x, y, z, a, b, c])
    return value


class CharucoDetector:
    def __init__(
        self,
        intrinsics_dict,
        inlier_error_threshold=3.0,
        reprojection_error_threshold=3.0,
        num_img_threshold=10,
        num_corner_threshold=10,
    ):
        # Set Parameters
        self.inlier_error_threshold = inlier_error_threshold
        self.reprojection_error_threshold = reprojection_error_threshold
        self.num_img_threshold = num_img_threshold
        self.num_corner_threshold = num_corner_threshold
        self.intrinsic_params = {}
        self._intrinsics_dict = intrinsics_dict
        self._readings_dict = defaultdict(list)
        self._pose_dict = defaultdict(list)
        self._curr_cam_id = None

    def process_image(self, image):
        if image.shape[2] == 4:
            gray = cv2.cvtColor(image, cv2.COLOR_BGRA2GRAY)
        elif image.shape[2] == 3:
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        else:
            raise ValueError
        img_size = image.shape[:2]

        # Find Aruco Markers In Image #
        corners, ids, rejected = aruco.detectMarkers(image=gray, dictionary=ARUCO_DICT, parameters=detector_params)

        corners, ids, _, _ = cv2.aruco.refineDetectedMarkers(
            gray,
            CHARUCO_BOARD,
            corners,
            ids,
            rejected,
            parameters=detector_params,
            **self._intrinsics_dict[self._curr_cam_id],
        )

        # Find Charuco Corners #
        if len(corners) == 0:
            return None

        num_corners_found, charuco_corners, charuco_ids = aruco.interpolateCornersCharuco(
            markerCorners=corners, markerIds=ids, image=gray, board=CHARUCO_BOARD, **self.intrinsic_params
        )

        if num_corners_found < self.num_corner_threshold:
            return None

        return corners, charuco_corners, charuco_ids, img_size

    def add_sample(self, cam_id, image, pose):
        readings = self.process_image(image)
        if readings is None:
            return
        self._readings_dict[cam_id].append(readings)
        self._pose_dict[cam_id].append(pose)

    def calculate_target_to_cam(self, readings, train=True):
        init_corners_all = []  # Corners discovered in all images processed
        init_ids_all = []  # Aruco ids corresponding to corners discovered
        fixed_image_size = readings[0][3]

        # Process Readings #
        init_successes = []
        for i in range(len(readings)):
            corners, charuco_corners, charuco_ids, img_size = readings[i]
            assert img_size == fixed_image_size
            init_corners_all.append(charuco_corners)
            init_ids_all.append(charuco_ids)
            init_successes.append(i)

        # First Pass: Find Outliers #
        threshold = self.num_img_threshold if train else 5
        if len(init_successes) < threshold:
            return None
        # print('Not enough points round 1')
        # print('Num Points: ', len(init_successes))
        # return None

        calibration_error, cameraMatrix, distCoeffs, rvecs, tvecs, stdIntrinsics, stdExtrinsics, perViewErrors = (
            aruco.calibrateCameraCharucoExtended(
                charucoCorners=init_corners_all,
                charucoIds=init_ids_all,
                board=CHARUCO_BOARD,
                imageSize=fixed_image_size,
                flags=calib_flags,
                **self._intrinsics_dict[self._curr_cam_id],
            )
        )

        # Remove Outliers #
        threshold = self.num_img_threshold if train else 5
        final_corners_all = [
            init_corners_all[i] for i in range(len(perViewErrors)) if perViewErrors[i] <= self.inlier_error_threshold
        ]
        final_ids_all = [
            init_ids_all[i] for i in range(len(perViewErrors)) if perViewErrors[i] <= self.inlier_error_threshold
        ]
        final_successes = [
            init_successes[i] for i in range(len(perViewErrors)) if perViewErrors[i] <= self.inlier_error_threshold
        ]
        if len(final_successes) < threshold:
            return None
        # print('Not enough points round 2')
        # print('Num Points: ', len(final_successes))
        # print('Error Mean: ', perViewErrors.mean())
        # print('Error Std: ', perViewErrors.std())
        # return None

        # Second Pass: Calculate Finalized Extrinsics #
        calibration_error, cameraMatrix, distCoeffs, rvecs, tvecs = aruco.calibrateCameraCharuco(
            charucoCorners=final_corners_all,
            charucoIds=final_ids_all,
            board=CHARUCO_BOARD,
            imageSize=fixed_image_size,
            flags=calib_flags,
            **self._intrinsics_dict[self._curr_cam_id],
        )

        # Return Transformation #
        if calibration_error > self.reprojection_error_threshold:
            return None
        # print('Failed Calibration Threshold')
        # print('Calibration Error: ', calibration_error)
        # return None

        rmats = [R.from_rotvec(rvec.flatten()).as_matrix() for rvec in rvecs]
        tvecs = [tvec.flatten() for tvec in tvecs]

        return rmats, tvecs, final_successes

    def augment_image(self, cam_id, image, visualize=False, visual_type=["markers", "axes"]):
        if type(visual_type) != list:
            visual_type = [visual_type]
        assert all([t in ["markers", "charuco", "axes"] for t in visual_type])
        if image.shape[2] == 4:
            image = cv2.cvtColor(image, cv2.COLOR_BGRA2BGR)
        self._curr_cam_id = cam_id

        image = np.copy(image)
        readings = self.process_image(image)

        if readings is None:
            if visualize:
                cv2.imshow("Charuco board: {0}".format(cam_id), image)
                cv2.waitKey(20)
            return image

        corners, charuco_corners, charuco_ids, image_size = readings

        # Outline the aruco markers found in our query image
        if "markers" in visual_type:
            image = aruco.drawDetectedMarkers(image=image, corners=corners)

        # Draw the Charuco board we've detected to show our calibrator the board was properly detected
        if "charuco" in visual_type:
            image = aruco.drawDetectedCornersCharuco(image=image, charucoCorners=charuco_corners, charucoIds=charuco_ids)

        if "axes" in visual_type:
            calibration_error, cameraMatrix, distCoeffs, rvecs, tvecs = aruco.calibrateCameraCharuco(
                charucoCorners=[charuco_corners],
                charucoIds=[charuco_ids],
                board=CHARUCO_BOARD,
                imageSize=image_size,
                flags=calib_flags,
                **self._intrinsics_dict[self._curr_cam_id],
            )
            cv2.drawFrameAxes(image, cameraMatrix, distCoeffs, rvecs[0], tvecs[0], 0.1)

        # Visualize
        if visualize:
            cv2.imshow("Charuco board: {0}".format(cam_id), image)
            cv2.waitKey(20)

        return image


class ThirdPersonCameraCalibrator(CharucoDetector):
    def __init__(
        self, intrinsics_dict, lin_error_threshold=1e-3, rot_error_threshold=1e-2, train_percentage=0.7, **kwargs
    ):
        self.lin_error_threshold = lin_error_threshold
        self.rot_error_threshold = rot_error_threshold
        self.train_percentage = train_percentage
        super().__init__(intrinsics_dict, **kwargs)

    def calibrate(self, cam_id):
        return self._calibrate_cam_to_base(cam_id=cam_id)

    def _calibrate_cam_to_base(self, cam_id=None, readings=None, gripper_poses=None, target2cam_results=None):
        # Get Calibration Data #
        if cam_id is not None:
            readings, gripper_poses = self._readings_dict[cam_id], self._pose_dict[cam_id]
            self._curr_cam_id = cam_id

        # Get Target2Cam Transformation #
        if target2cam_results is None:
            target2cam_results = self.calculate_target_to_cam(readings)
        if target2cam_results is None:
            return None

        R_target2cam, t_target2cam, successes = target2cam_results
        gripper_poses = np.array(gripper_poses)[successes]

        # Calculate Appropriate Transformations #
        t_base2gripper = [
            -R.from_euler("xyz", pose[3:6]).inv().as_matrix() @ np.array(pose[:3]) for pose in gripper_poses
        ]
        R_base2gripper = [R.from_euler("xyz", pose[3:6]).inv().as_matrix() for pose in gripper_poses]

        # Perform Calibration #
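        # Eye-to-hand setup: the camera is fixed in the scene, so we feed the
        # inverted gripper poses (base->gripper) into the gripper2base slots;
        # cv2.calibrateHandEye then returns the camera->base transform.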
        rmat, pos = cv2.calibrateHandEye(
            R_gripper2base=R_base2gripper,
            t_gripper2base=t_base2gripper,
            R_target2cam=R_target2cam,
            t_target2cam=t_target2cam,
            method=4,
        )

        # Return Pose #
        pos = pos.flatten()
        angle = R.from_matrix(rmat).as_euler("xyz")
        pose = np.concatenate([pos, angle])

        return pose

    def _calibrate_gripper_to_target(self, cam_id=None, readings=None, gripper_poses=None, target2cam_results=None):
        # Get Calibration Data #
        if cam_id is not None:
            readings, gripper_poses = self._readings_dict[cam_id], self._pose_dict[cam_id]
            self._curr_cam_id = cam_id

        # Get Target2Cam Transformation #
        if target2cam_results is None:
            target2cam_results = self.calculate_target_to_cam(readings)
        if target2cam_results is None:
            return None

        R_target2cam, t_target2cam, successes = target2cam_results
        gripper_poses = np.array(gripper_poses)[successes]

        # Calculate Appropriate Transformations #
        t_base2gripper = [
            -R.from_euler("xyz", pose[3:6]).inv().as_matrix() @ np.array(pose[:3]) for pose in gripper_poses
        ]
        R_base2gripper = [R.from_euler("xyz", pose[3:6]).inv().as_matrix() for pose in gripper_poses]

        # Perform Calibration #
        rmat, pos = cv2.calibrateHandEye(
            R_gripper2base=R_target2cam,
            t_gripper2base=t_target2cam,
            R_target2cam=R_base2gripper,
            t_target2cam=t_base2gripper,
            method=4,
        )

        # Return Pose #
        pos = pos.flatten()
        angle = R.from_matrix(rmat).as_euler("xyz")
        pose = np.concatenate([pos, angle])

        return pose

    def _calculate_gripper_to_base(self, train_readings, train_gripper_poses, eval_readings=None):
        if eval_readings is None:
            eval_readings = train_readings

        # Get Eval Target2Cam Transformations #
        eval_results = self.calculate_target_to_cam(eval_readings, train=False)
        if eval_results is None:
            return None
        eval_R_target2cam, eval_t_target2cam, eval_successes = eval_results
        rmats, tvecs = [], []

        # Get Train Target2Cam Transformations #
        train_results = self.calculate_target_to_cam(train_readings)
        if train_results is None:
            return None

        # Use Training Data For Calibrations #
        gripper2target = self._calibrate_gripper_to_target(
            gripper_poses=train_gripper_poses, target2cam_results=train_results
        )
        R_gripper2target = R.from_euler("xyz", gripper2target[3:]).as_matrix()
        t_gripper2target = np.array(gripper2target[:3])

        cam2base = self._calibrate_cam_to_base(gripper_poses=train_gripper_poses, target2cam_results=train_results)
        R_cam2base = R.from_euler("xyz", cam2base[3:]).as_matrix()
        t_cam2base = np.array(cam2base[:3])

        # Calculate Gripper2Base #
        for i in range(len(eval_R_target2cam)):
            R_gripper2cam = eval_R_target2cam[i] @ R_gripper2target
            t_gripper2cam = eval_R_target2cam[i] @ t_gripper2target + eval_t_target2cam[i]

            R_gripper2base = R_cam2base @ R_gripper2cam
            t_gripper2base = R_cam2base @ t_gripper2cam + t_cam2base

            rmats.append(R_gripper2base)
            tvecs.append(t_gripper2base)

        # Return Poses #
        eulers = np.array([R.from_matrix(rmat).as_euler("xyz") for rmat in rmats])
        eval_poses = np.concatenate([np.array(tvecs), eulers], axis=1)

        return eval_poses, eval_successes

    def is_calibration_accurate(self, cam_id):
        # Set Camera #
        self._curr_cam_id = cam_id

        # Split Into Train / Test #
        readings = self._readings_dict[cam_id]
        if len(readings) == 0:
            return False
        poses = np.array(self._pose_dict[cam_id])
        ind = np.random.choice(len(readings), size=len(readings), replace=False)
        num_train = int(len(readings) * self.train_percentage)

        train_ind, test_ind = ind[:num_train], ind[num_train:]
        train_poses, test_poses = poses[train_ind], poses[test_ind]
        train_readings = [readings[i] for i in train_ind]
        test_readings = [readings[i] for i in test_ind]

        # Calculate Approximate Gripper2Base Transformations #
        results = self._calculate_gripper_to_base(train_readings, train_poses, eval_readings=test_readings)
        if results is None:
            return False
        approx_poses, successes = results
        test_poses = np.array(test_poses)[successes]

        # Calculate Per Dimension Error #
        pose_error = np.array([pose_diff(pose, approx_pose) for pose, approx_pose in zip(test_poses, approx_poses)])
        lin_error = np.linalg.norm(pose_error[:, :3], axis=0) ** 2 / pose_error.shape[0]
        rot_error = np.linalg.norm(pose_error[:, 3:6], axis=0) ** 2 / pose_error.shape[0]

        # Check Calibration Error #
        lin_success = np.all(lin_error < self.lin_error_threshold)
        rot_success = np.all(rot_error < self.rot_error_threshold)

        # print('Pose Std: ', poses.std(axis=0))
        # print('Lin Error: ', lin_error)
        # print('Rot Error: ', rot_error)

        return lin_success and rot_success


class HandCameraCalibrator(CharucoDetector):
    def __init__(self, camera, lin_error_threshold=1e-3, rot_error_threshold=1e-2, train_percentage=0.7, **kwargs):
        self.lin_error_threshold = lin_error_threshold
        self.rot_error_threshold = rot_error_threshold
        self.train_percentage = train_percentage
        super().__init__(camera, **kwargs)

    def calibrate(self, cam_id):
        return self._calibrate_cam_to_gripper(cam_id=cam_id)

    def _calibrate_cam_to_gripper(self, cam_id=None, readings=None, gripper_poses=None, target2cam_results=None):
        # Get Calibration Data #
        if cam_id is not None:
            readings, gripper_poses = self._readings_dict[cam_id], self._pose_dict[cam_id]
            self._curr_cam_id = cam_id

        # Get Target2Cam Transformation #
        if target2cam_results is None:
            target2cam_results = self.calculate_target_to_cam(readings)
        if target2cam_results is None:
            return None

        R_target2cam, t_target2cam, successes = target2cam_results
        gripper_poses = np.array(gripper_poses)[successes]

        # Calculate Appropriate Transformations #
        t_gripper2base = [np.array(pose[:3]) for pose in gripper_poses]
        R_gripper2base = [R.from_euler("xyz", pose[3:6]).as_matrix() for pose in gripper_poses]

        # Perform Calibration #
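        # Eye-in-hand setup: the camera is rigidly mounted on the gripper, so
        # the standard calibrateHandEye call returns the camera->gripper transform.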
        rmat, pos = cv2.calibrateHandEye(
            R_gripper2base=R_gripper2base,
            t_gripper2base=t_gripper2base,
            R_target2cam=R_target2cam,
            t_target2cam=t_target2cam,
            method=4,
        )

        # Return Pose #
        pos = pos.flatten()
        angle = R.from_matrix(rmat).as_euler("xyz")
        pose = np.concatenate([pos, angle])

        return pose

    def _calibrate_base_to_target(self, cam_id=None, readings=None, gripper_poses=None, target2cam_results=None):
        # Get Calibration Data #
        if cam_id is not None:
            readings, gripper_poses = self._readings_dict[cam_id], self._pose_dict[cam_id]
            self._curr_cam_id = cam_id

        # Get Target2Cam Transformation #
        if target2cam_results is None:
            target2cam_results = self.calculate_target_to_cam(readings)
        if target2cam_results is None:
            return None

        R_target2cam, t_target2cam, successes = target2cam_results
        gripper_poses = np.array(gripper_poses)[successes]

        # Calculate Appropriate Transformations #
        t_gripper2base = [np.array(pose[:3]) for pose in gripper_poses]
        R_gripper2base = [R.from_euler("xyz", pose[3:6]).as_matrix() for pose in gripper_poses]

        # Perform Calibration #
        rmat, pos = cv2.calibrateHandEye(
            R_gripper2base=R_target2cam,
            t_gripper2base=t_target2cam,
            R_target2cam=R_gripper2base,
            t_target2cam=t_gripper2base,
            method=4,
        )

        # Return Pose #
        pos = pos.flatten()
        angle = R.from_matrix(rmat).as_euler("xyz")
        pose = np.concatenate([pos, angle])

        return pose

    def _calculate_gripper_to_base(self, train_readings, train_gripper_poses, eval_readings=None):
        if eval_readings is None:
            eval_readings = train_readings

        # Get Eval Target2Cam Transformations #
        eval_results = self.calculate_target_to_cam(eval_readings, train=False)
        if eval_results is None:
            return None
        eval_R_target2cam, eval_t_target2cam, eval_successes = eval_results
        rmats, tvecs = [], []

        # Get Train Target2Cam Transformations #
        train_results = self.calculate_target_to_cam(train_readings)
        if train_results is None:
            return None

        # Use Training Data For Calibrations #
        base2target = self._calibrate_base_to_target(gripper_poses=train_gripper_poses, target2cam_results=train_results)
        R_base2target = R.from_euler("xyz", base2target[3:]).as_matrix()
        t_base2target = np.array(base2target[:3])

        cam2gripper = self._calibrate_cam_to_gripper(gripper_poses=train_gripper_poses, target2cam_results=train_results)
        R_cam2gripper = R.from_euler("xyz", cam2gripper[3:]).as_matrix()
        t_cam2gripper = np.array(cam2gripper[:3])

        # Calculate Gripper2Base #
        for i in range(len(eval_R_target2cam)):
            R_base2cam = eval_R_target2cam[i] @ R_base2target
            t_base2cam = eval_R_target2cam[i] @ t_base2target + eval_t_target2cam[i]

            R_base2gripper = R_cam2gripper @ R_base2cam
            t_base2gripper = R_cam2gripper @ t_base2cam + t_cam2gripper

            R_gripper2base = R.from_matrix(R_base2gripper).inv().as_matrix()
            t_gripper2base = -R_gripper2base @ t_base2gripper

            rmats.append(R_gripper2base)
            tvecs.append(t_gripper2base)

        # Return Poses #
        eulers = np.array([R.from_matrix(rmat).as_euler("xyz") for rmat in rmats])
        eval_poses = np.concatenate([np.array(tvecs), eulers], axis=1)

        return eval_poses, eval_successes

    def is_calibration_accurate(self, cam_id):
        # Set Camera #
        self._curr_cam_id = cam_id

        # Split Into Train / Test #
        readings = self._readings_dict[cam_id]
        if len(readings) == 0:
            return False
        poses = np.array(self._pose_dict[cam_id])
        ind = np.random.choice(len(readings), size=len(readings), replace=False)
        num_train = int(len(readings) * self.train_percentage)

        train_ind, test_ind = ind[:num_train], ind[num_train:]
        train_poses, test_poses = poses[train_ind], poses[test_ind]
        train_readings = [readings[i] for i in train_ind]
        test_readings = [readings[i] for i in test_ind]

        # Calculate Approximate Gripper2Base Transformations #
        results = self._calculate_gripper_to_base(train_readings, train_poses, eval_readings=test_readings)
        if results is None:
            return False
        approx_poses, successes = results
        test_poses = np.array(test_poses)[successes]

        # Calculate Per Dimension Error #
        pose_error = np.array([pose_diff(pose, approx_pose) for pose, approx_pose in zip(test_poses, approx_poses)])
        lin_error = np.linalg.norm(pose_error[:, :3], axis=0) ** 2 / pose_error.shape[0]
        rot_error = np.linalg.norm(pose_error[:, 3:6], axis=0) ** 2 / pose_error.shape[0]

        # Check Calibration Error #
        lin_success = np.all(lin_error < self.lin_error_threshold)
        rot_success = np.all(rot_error < self.rot_error_threshold)

        # print('Pose Std: ', poses.std(axis=0))
        # print('Lin Error: ', lin_error)
        # print('Rot Error: ', rot_error)

        return lin_success and rot_success


================================================
FILE: droid/camera_utils/__init__.py
================================================


================================================
FILE: droid/camera_utils/camera_readers/__init__.py
================================================


================================================
FILE: droid/camera_utils/camera_readers/zed_camera.py
================================================
from copy import deepcopy

import cv2
import numpy as np

from droid.misc.parameters import hand_camera_id
from droid.misc.time import time_ms

try:
    import pyzed.sl as sl
except ModuleNotFoundError:
    print("WARNING: You have not setup the ZED cameras, and currently cannot use them")


def gather_zed_cameras():
    all_zed_cameras = []
    try:
        cameras = sl.Camera.get_device_list()
    except NameError:
        return []

    for cam in cameras:
        cam = ZedCamera(cam)
        all_zed_cameras.append(cam)

    return all_zed_cameras


resize_func_map = {"cv2": cv2.resize, None: None}

standard_params = dict(
    depth_minimum_distance=0.1, camera_resolution=sl.RESOLUTION.HD720, depth_stabilization=False, camera_fps=60, camera_image_flip=sl.FLIP_MODE.OFF
)

advanced_params = dict(
    depth_minimum_distance=0.1, camera_resolution=sl.RESOLUTION.HD2K, depth_stabilization=False, camera_fps=15, camera_image_flip=sl.FLIP_MODE.OFF
)


class ZedCamera:
    def __init__(self, camera):
        # Save Parameters #
        self.serial_number = str(camera.serial_number)
        self.is_hand_camera = self.serial_number == hand_camera_id
        self.high_res_calibration = False
        self.current_mode = None
        self._current_params = None
        self._extriniscs = {}

        # Open Camera #
        print("Opening Zed: ", self.serial_number)

    def enable_advanced_calibration(self):
        self.high_res_calibration = True

    def disable_advanced_calibration(self):
        self.high_res_calibration = False

    def set_reading_parameters(
        self,
        image=True,
        depth=False,
        pointcloud=False,
        concatenate_images=False,
        resolution=(0, 0),
        resize_func=None,
    ):
        # Non-Permanent Values #
        self.traj_image = image
        self.traj_concatenate_images = concatenate_images
        self.traj_resolution = resolution

        # Permanent Values #
        self.depth = depth
        self.pointcloud = pointcloud
        self.resize_func = resize_func_map[resize_func]

    ### Camera Modes ###
    def set_calibration_mode(self):
        # Set Parameters #
        self.image = True
        self.concatenate_images = False
        self.skip_reading = False
        self.zed_resolution = sl.Resolution(0, 0)
        self.resizer_resolution = (0, 0)

        # Set Mode #
        change_settings_1 = (self.high_res_calibration) and (self._current_params != advanced_params)
        change_settings_2 = (not self.high_res_calibration) and (self._current_params != standard_params)
        if change_settings_1:
            self._configure_camera(advanced_params)
        if change_settings_2:
            self._configure_camera(standard_params)
        self.current_mode = "calibration"

    def set_trajectory_mode(self):
        # Set Parameters #
        self.image = self.traj_image
        self.concatenate_images = self.traj_concatenate_images
        self.skip_reading = not any([self.image, self.depth, self.pointcloud])

        if self.resize_func is None:
            self.zed_resolution = sl.Resolution(*self.traj_resolution)
            self.resizer_resolution = (0, 0)
        else:
            self.zed_resolution = sl.Resolution(0, 0)
            self.resizer_resolution = self.traj_resolution

        # Set Mode #
        change_settings = self._current_params != standard_params
        if change_settings:
            self._configure_camera(standard_params)
        self.current_mode = "trajectory"

    def _configure_camera(self, init_params):
        # Close Existing Camera #
        self.disable_camera()

        # Initialize Readers #
        self._cam = sl.Camera()
        self._sbs_img = sl.Mat()
        self._left_img = sl.Mat()
        self._right_img = sl.Mat()
        self._left_depth = sl.Mat()
        self._right_depth = sl.Mat()
        self._left_pointcloud = sl.Mat()
        self._right_pointcloud = sl.Mat()
        self._runtime = sl.RuntimeParameters()

        # Open Camera #
        self._current_params = init_params
        sl_params = sl.InitParameters(**init_params)
        sl_params.set_from_serial_number(int(self.serial_number))
        sl_params.camera_image_flip = sl.FLIP_MODE.OFF
        status = self._cam.open(sl_params)
        if status != sl.ERROR_CODE.SUCCESS:
            raise RuntimeError("Camera Failed To Open")

        # Save Intrinsics #
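        # Rough capture latency estimate: ~2.5 frame periods, converted to milliseconds.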
        self.latency = int(2.5 * (1e3 / sl_params.camera_fps))
        calib_params = self._cam.get_camera_information().camera_configuration.calibration_parameters
        self._intrinsics = {
            self.serial_number + "_left": self._process_intrinsics(calib_params.left_cam),
            self.serial_number + "_right": self._process_intrinsics(calib_params.right_cam),
        }

    ### Calibration Utilities ###
    def _process_intrinsics(self, params):
        intrinsics = {}
        intrinsics["cameraMatrix"] = np.array([[params.fx, 0, params.cx], [0, params.fy, params.cy], [0, 0, 1]])
        intrinsics["distCoeffs"] = np.array(list(params.disto))
        return intrinsics

    def get_intrinsics(self):
        return deepcopy(self._intrinsics)

    ### Recording Utilities ###
    def start_recording(self, filename):
        assert filename.endswith(".svo")
        recording_param = sl.RecordingParameters(filename, sl.SVO_COMPRESSION_MODE.H265)
        err = self._cam.enable_recording(recording_param)
        assert err == sl.ERROR_CODE.SUCCESS

    def stop_recording(self):
        self._cam.disable_recording()

    ### Basic Camera Utilities ###
    def _process_frame(self, frame):
        frame = deepcopy(frame.get_data())
        if self.resizer_resolution == (0, 0):
            return frame
        return self.resize_func(frame, self.resizer_resolution)

    def read_camera(self):
        # Skip if Read Unnecessary #
        if self.skip_reading:
            return {}, {}

        # Read Camera #
        timestamp_dict = {self.serial_number + "_read_start": time_ms()}
        err = self._cam.grab(self._runtime)
        if err != sl.ERROR_CODE.SUCCESS:
            return None
        timestamp_dict[self.serial_number + "_read_end"] = time_ms()

        # Benchmark Latency #
        received_time = self._cam.get_timestamp(sl.TIME_REFERENCE.IMAGE).get_milliseconds()
        timestamp_dict[self.serial_number + "_frame_received"] = received_time
        timestamp_dict[self.serial_number + "_estimated_capture"] = received_time - self.latency

        # Return Data #
        data_dict = {}

        if self.image:
            if self.concatenate_images:
                self._cam.retrieve_image(self._sbs_img, sl.VIEW.SIDE_BY_SIDE, resolution=self.zed_resolution)
                data_dict["image"] = {self.serial_number: self._process_frame(self._sbs_img)}
            else:
                self._cam.retrieve_image(self._left_img, sl.VIEW.LEFT, resolution=self.zed_resolution)
                self._cam.retrieve_image(self._right_img, sl.VIEW.RIGHT, resolution=self.zed_resolution)
                data_dict["image"] = {
                    self.serial_number + "_left": self._process_frame(self._left_img),
                    self.serial_number + "_right": self._process_frame(self._right_img),
                }
        # if self.depth:
        # 	self._cam.retrieve_measure(self._left_depth, sl.MEASURE.DEPTH, resolution=self.resolution)
        # 	self._cam.retrieve_measure(self._right_depth, sl.MEASURE.DEPTH_RIGHT, resolution=self.resolution)
        # 	data_dict['depth'] = {
        # 		self.serial_number + '_left': self._left_depth.get_data().copy(),
        # 		self.serial_number + '_right': self._right_depth.get_data().copy()}
        # if self.pointcloud:
        # 	self._cam.retrieve_measure(self._left_pointcloud, sl.MEASURE.XYZRGBA, resolution=self.resolution)
        # 	self._cam.retrieve_measure(self._right_pointcloud, sl.MEASURE.XYZRGBA_RIGHT, resolution=self.resolution)
        # 	data_dict['pointcloud'] = {
        # 		self.serial_number + '_left': self._left_pointcloud.get_data().copy(),
        # 		self.serial_number + '_right': self._right_pointcloud.get_data().copy()}

        return data_dict, timestamp_dict

    def disable_camera(self):
        if self.current_mode == "disabled":
            return
        if hasattr(self, "_cam"):
            self._current_params = None
            self._cam.close()
        self.current_mode = "disabled"

    def is_running(self):
        return self.current_mode != "disabled"
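
The intrinsics saved in _configure_camera are OpenCV-style calibration entries keyed by "<serial>_left" / "<serial>_right". A minimal consumption sketch (not part of the repository; the matrix values and image below are hypothetical stand-ins):

import cv2
import numpy as np

# Hypothetical entry shaped like ZedCamera.get_intrinsics()["<serial>_left"]
intrinsics = {
    "cameraMatrix": np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]]),
    "distCoeffs": np.zeros(5),
}
image = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a captured left frame

# Undistort a raw frame with the per-eye calibration
undistorted = cv2.undistort(image, intrinsics["cameraMatrix"], intrinsics["distCoeffs"])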


================================================
FILE: droid/camera_utils/info.py
================================================
from droid.misc.parameters import *

camera_type_dict = {
    hand_camera_id: 0,
    varied_camera_1_id: 1,
    varied_camera_2_id: 1,
}

camera_type_to_string_dict = {
    0: "hand_camera",
    1: "varied_camera",
    2: "fixed_camera",
}

camera_name_dict = {
    hand_camera_id: "Hand Camera",
    varied_camera_1_id: "Varied Camera #1",
    varied_camera_2_id: "Varied Camera #2",
}


def get_camera_name(cam_id):
    if cam_id in camera_name_dict:
        return camera_name_dict[cam_id]
    return cam_id


def get_camera_type(cam_id):
    if cam_id not in camera_type_dict:
        return None
    type_int = camera_type_dict[cam_id]
    type_str = camera_type_to_string_dict[type_int]
    return type_str
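
A short sketch (not repository code) of how these lookups degrade for unknown IDs; "12345678" is a hypothetical serial number:

from droid.camera_utils.info import get_camera_name, get_camera_type

print(get_camera_type("12345678"))  # type string if the ID is configured in parameters.py, else None
print(get_camera_name("12345678"))  # display name if configured, else the raw ID is echoed back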


================================================
FILE: droid/camera_utils/recording_readers/mp4_reader.py
================================================
import json
import os
from copy import deepcopy

import cv2

resize_func_map = {"cv2": cv2.resize, None: None}


class MP4Reader:
    def __init__(self, filepath, serial_number):
        # Save Parameters #
        self.serial_number = serial_number
        self._index = 0

        # Open Video Reader #
        self._mp4_reader = cv2.VideoCapture(filepath)
        if not self._mp4_reader.isOpened():
            raise RuntimeError(f"Corrupted MP4 File: {filepath}")

        # Load Recording Timestamps #
        timestamp_filepath = filepath[:-4] + "_timestamps.json"
        if not os.path.isfile(timestamp_filepath):
            self._recording_timestamps = []
        else:
            with open(timestamp_filepath, "r") as jsonFile:
                self._recording_timestamps = json.load(jsonFile)

    def set_reading_parameters(
        self,
        image=True,
        concatenate_images=False,
        resolution=(0, 0),
        resize_func=None,
    ):
        # Save Parameters #
        self.image = image
        self.concatenate_images = concatenate_images
        self.resolution = resolution
        self.resize_func = resize_func_map[resize_func]
        self.skip_reading = not image
        if self.skip_reading:
            return

    def get_frame_resolution(self):
        width = self._mp4_reader.get(cv2.CAP_PROP_FRAME_WIDTH)
        height = self._mp4_reader.get(cv2.CAP_PROP_FRAME_HEIGHT)
        return (width, height)

    def get_frame_count(self):
        if self.skip_reading:
            return 0
        frame_count = int(self._mp4_reader.get(cv2.CAP_PROP_FRAME_COUNT))
        return frame_count

    def set_frame_index(self, index):
        if self.skip_reading:
            return

        if index < self._index:
            # CAP_PROP_POS_FRAMES is the index of the frame the next read() will return
            self._mp4_reader.set(cv2.CAP_PROP_POS_FRAMES, index)
            self._index = index

        while self._index < index:
            self.read_camera(ignore_data=True)

    def _process_frame(self, frame):
        frame = deepcopy(frame)
        if self.resolution == (0, 0):
            return frame
        return self.resize_func(frame, self.resolution)
        # return cv2.resize(frame, self.resolution)#, interpolation=cv2.INTER_AREA)

    def read_camera(self, ignore_data=False, correct_timestamp=None, return_timestamp=False):
        # Skip if Read Unnecessary #
        if self.skip_reading:
            return {}

        # Read Camera #
        success, frame = self._mp4_reader.read()
        try:
            received_time = self._recording_timestamps[self._index]
        except IndexError:
            received_time = None

        self._index += 1
        if not success:
            return None
        if ignore_data:
            return None

        # Check Image Timestamp #
        timestamps_given = (received_time is not None) and (correct_timestamp is not None)
        if timestamps_given and (correct_timestamp != received_time):
            print("Timestamps did not match...")
            return None

        # Return Data #
        data_dict = {}

        if self.concatenate_images:
            data_dict["image"] = {self.serial_number: self._process_frame(frame)}
        else:
            single_width = frame.shape[1] // 2
            data_dict["image"] = {
                self.serial_number + "_left": self._process_frame(frame[:, :single_width, :]),
                self.serial_number + "_right": self._process_frame(frame[:, single_width:, :]),
            }

        if return_timestamp:
            return data_dict, received_time
        return data_dict

    def disable_camera(self):
        if hasattr(self, "_mp4_reader"):
            self._mp4_reader.release()
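
A minimal driving loop for MP4Reader (a sketch, not repository code); the path and serial number are hypothetical, and a sibling *_timestamps.json file is assumed to exist:

from droid.camera_utils.recording_readers.mp4_reader import MP4Reader

reader = MP4Reader("/data/session_0/MP4/12345678.mp4", "12345678")  # hypothetical recording
reader.set_reading_parameters(image=True, resolution=(128, 128), resize_func="cv2")

for _ in range(reader.get_frame_count()):
    out = reader.read_camera(return_timestamp=True)
    if out is None:  # decode failure or timestamp mismatch
        break
    data_dict, received_time = out
    left = data_dict["image"]["12345678_left"]  # resized left half of the side-by-side frame

reader.disable_camera()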


================================================
FILE: droid/camera_utils/recording_readers/svo_reader.py
================================================
from copy import deepcopy

import cv2

try:
    import pyzed.sl as sl
except ModuleNotFoundError:
    print("WARNING: You have not setup the ZED cameras, and currently cannot use them")

resize_func_map = {"cv2": cv2.resize, None: None}


class SVOReader:
    def __init__(self, filepath, serial_number):
        # Save Parameters #
        self.serial_number = serial_number
        self._index = 0

        # Initialize Readers #
        self._sbs_img = sl.Mat()
        self._left_img = sl.Mat()
        self._right_img = sl.Mat()
        self._left_depth = sl.Mat()
        self._right_depth = sl.Mat()
        self._left_pointcloud = sl.Mat()
        self._right_pointcloud = sl.Mat()

        # Set SVO path for playback
        init_parameters = sl.InitParameters()
        init_parameters.camera_image_flip = sl.FLIP_MODE.OFF
        init_parameters.set_from_svo_file(filepath)

        # Open the ZED
        self._cam = sl.Camera()
        status = self._cam.open(init_parameters)
        if status != sl.ERROR_CODE.SUCCESS:
            raise RuntimeError("Zed Error: " + repr(status))

    def set_reading_parameters(
        self,
        image=True,
        depth=False,
        pointcloud=False,
        concatenate_images=False,
        resolution=(0, 0),
        resize_func=None,
    ):
        # Save Parameters #
        self.image = image
        self.depth = depth
        self.pointcloud = pointcloud
        self.concatenate_images = concatenate_images
        self.resize_func = resize_func_map[resize_func]

        if self.resize_func is None:
            self.zed_resolution = sl.Resolution(*resolution)
            self.resizer_resolution = (0, 0)
        else:
            self.zed_resolution = sl.Resolution(0, 0)
            self.resizer_resolution = resolution

        self.skip_reading = not any([image, depth, pointcloud])
        if self.skip_reading:
            return

    def get_frame_resolution(self):
        camera_info = self._cam.get_camera_information().camera_configuration
        width = camera_info.resolution.width
        height = camera_info.resolution.height
        return (width, height)

    def get_frame_count(self):
        if self.skip_reading:
            return 0
        return self._cam.get_svo_number_of_frames()

    def set_frame_index(self, index):
        if self.skip_reading:
            return

        if index < self._index:
            self._cam.set_svo_position(index)
            self._index = index

        while self._index < index:
            self.read_camera(ignore_data=True)

    def _process_frame(self, frame):
        frame = deepcopy(frame.get_data())
        if self.resizer_resolution == (0, 0):
            return frame
        return self.resize_func(frame, self.resizer_resolution)

    def read_camera(self, ignore_data=False, correct_timestamp=None, return_timestamp=False):
        # Skip if Read Unnecessary #
        if self.skip_reading:
            return {}

        # Read Camera #
        self._index += 1
        err = self._cam.grab()
        if err != sl.ERROR_CODE.SUCCESS:
            return None
        if ignore_data:
            return None

        # Check Image Timestamp #
        received_time = self._cam.get_timestamp(sl.TIME_REFERENCE.IMAGE).get_milliseconds()
        timestamp_error = (correct_timestamp is not None) and (correct_timestamp != received_time)

        if timestamp_error:
            print("Timestamps did not match...")
            return None

        # Return Data #
        data_dict = {}

        if self.image:
            if self.concatenate_images:
                self._cam.retrieve_image(self._sbs_img, sl.VIEW.SIDE_BY_SIDE, resolution=self.zed_resolution)
                data_dict["image"] = {self.serial_number: self._process_frame(self._sbs_img)}
            else:
                self._cam.retrieve_image(self._left_img, sl.VIEW.LEFT, resolution=self.zed_resolution)
                self._cam.retrieve_image(self._right_img, sl.VIEW.RIGHT, resolution=self.zed_resolution)
                data_dict["image"] = {
                    self.serial_number + "_left": self._process_frame(self._left_img),
                    self.serial_number + "_right": self._process_frame(self._right_img),
                }
        # if self.depth:
        # 	self._cam.retrieve_measure(self._left_depth, sl.MEASURE.DEPTH, resolution=self.resolution)
        # 	self._cam.retrieve_measure(self._right_depth, sl.MEASURE.DEPTH_RIGHT, resolution=self.resolution)
        # 	data_dict['depth'] = {
        # 		self.serial_number + '_left': self._left_depth.get_data().copy(),
        # 		self.serial_number + '_right': self._right_depth.get_data().copy()}
        # if self.pointcloud:
        # 	self._cam.retrieve_measure(self._left_pointcloud, sl.MEASURE.XYZRGBA, resolution=self.resolution)
        # 	self._cam.retrieve_measure(self._right_pointcloud, sl.MEASURE.XYZRGBA_RIGHT, resolution=self.resolution)
        # 	data_dict['pointcloud'] = {
        # 		self.serial_number + '_left': self._left_pointcloud.get_data().copy(),
        # 		self.serial_number + '_right': self._right_pointcloud.get_data().copy()}

        if return_timestamp:
            return data_dict, received_time
        return data_dict

    def disable_camera(self):
        if hasattr(self, "_cam"):
            self._cam.close()
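
SVOReader mirrors MP4Reader but decodes through the ZED SDK and can cross-check each frame against a timestamp recorded at collection time. A sketch (not repository code), assuming pyzed is installed and the hypothetical path exists:

from droid.camera_utils.recording_readers.svo_reader import SVOReader

reader = SVOReader("/data/session_0/SVO/12345678.svo", "12345678")  # hypothetical recording
reader.set_reading_parameters(image=True, concatenate_images=True, resolution=(0, 0))

print(reader.get_frame_resolution(), reader.get_frame_count())
frame = reader.read_camera()  # {"image": {"12345678": <side-by-side frame>}} at native resolution
reader.disable_camera()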


================================================
FILE: droid/camera_utils/wrappers/__init__.py
================================================


================================================
FILE: droid/camera_utils/wrappers/multi_camera_wrapper.py
================================================
import os
import random
from collections import defaultdict

from droid.camera_utils.camera_readers.zed_camera import gather_zed_cameras
from droid.camera_utils.info import get_camera_type


class MultiCameraWrapper:
    def __init__(self, camera_kwargs={}):
        # Open Cameras #
        zed_cameras = gather_zed_cameras()
        self.camera_dict = {cam.serial_number: cam for cam in zed_cameras}

        # Set Correct Parameters #
        for cam_id in self.camera_dict.keys():
            cam_type = get_camera_type(cam_id)
            curr_cam_kwargs = camera_kwargs.get(cam_type, {})
            self.camera_dict[cam_id].set_reading_parameters(**curr_cam_kwargs)

        # Launch Camera #
        self.set_trajectory_mode()

    ### Calibration Functions ###
    def get_camera(self, camera_id):
        return self.camera_dict[camera_id]

    def enable_advanced_calibration(self):
        for cam in self.camera_dict.values():
            cam.enable_advanced_calibration()

    def disable_advanced_calibration(self):
        for cam in self.camera_dict.values():
            cam.disable_advanced_calibration()

    def set_calibration_mode(self, cam_id):
        # If High Res Calibration, Only One Can Run #
        close_all = any([cam.high_res_calibration for cam in self.camera_dict.values()])

        if close_all:
            for curr_cam_id in self.camera_dict:
                if curr_cam_id != cam_id:
                    self.camera_dict[curr_cam_id].disable_camera()

        self.camera_dict[cam_id].set_calibration_mode()

    def set_trajectory_mode(self):
        # If High Res Calibration, Close All #
        close_all = any(
            [cam.high_res_calibration and cam.current_mode == "calibration" for cam in self.camera_dict.values()]
        )

        if close_all:
            for cam in self.camera_dict.values():
                cam.disable_camera()

        # Put All Cameras In Trajectory Mode #
        for cam in self.camera_dict.values():
            cam.set_trajectory_mode()

    ### Data Storing Functions ###
    def start_recording(self, recording_folderpath):
        subdir = os.path.join(recording_folderpath, "SVO")
        if not os.path.isdir(subdir):
            os.makedirs(subdir)
        for cam in self.camera_dict.values():
            filepath = os.path.join(subdir, cam.serial_number + ".svo")
            cam.start_recording(filepath)

    def stop_recording(self):
        for cam in self.camera_dict.values():
            cam.stop_recording()

    ### Basic Camera Functions ###
    def read_cameras(self):
        full_obs_dict = defaultdict(dict)
        full_timestamp_dict = {}

        # Read Cameras In Randomized Order #
        all_cam_ids = list(self.camera_dict.keys())
        random.shuffle(all_cam_ids)

        for cam_id in all_cam_ids:
            if not self.camera_dict[cam_id].is_running():
                continue
            data_dict, timestamp_dict = self.camera_dict[cam_id].read_camera()

            for key in data_dict:
                full_obs_dict[key].update(data_dict[key])
            full_timestamp_dict.update(timestamp_dict)

        return full_obs_dict, full_timestamp_dict

    def disable_cameras(self):
        for camera in self.camera_dict.values():
            camera.disable_camera()
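
A collection-time sketch (not repository code); the kwargs keys follow ZedCamera.set_reading_parameters, the type strings come from camera_utils/info.py, and the recording folder is hypothetical:

from droid.camera_utils.wrappers.multi_camera_wrapper import MultiCameraWrapper

camera_kwargs = {
    "hand_camera": {"image": True, "resolution": (128, 128), "resize_func": "cv2"},
    "varied_camera": {"image": True, "resolution": (128, 128), "resize_func": "cv2"},
}

wrapper = MultiCameraWrapper(camera_kwargs=camera_kwargs)
wrapper.start_recording("/data/session_0")  # SVO files land in /data/session_0/SVO/
obs_dict, timestamp_dict = wrapper.read_cameras()  # merged image + timing dicts across cameras
wrapper.stop_recording()
wrapper.disable_cameras()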


================================================
FILE: droid/camera_utils/wrappers/recorded_multi_camera_wrapper.py
================================================
import glob
import random
from collections import defaultdict

from droid.camera_utils.info import get_camera_type
from droid.camera_utils.recording_readers.mp4_reader import MP4Reader
from droid.camera_utils.recording_readers.svo_reader import SVOReader


class RecordedMultiCameraWrapper:
    def __init__(self, recording_folderpath, camera_kwargs={}):
        # Save Camera Info #
        self.camera_kwargs = camera_kwargs

        # Open Camera Readers #
        svo_filepaths = glob.glob(recording_folderpath + "/*.svo")
        mp4_filepaths = glob.glob(recording_folderpath + "/*.mp4")
        all_filepaths = svo_filepaths + mp4_filepaths

        self.camera_dict = {}
        for f in all_filepaths:
            serial_number = f.split("/")[-1][:-4]

            if f.endswith(".svo"):
                Reader = SVOReader
            elif f.endswith(".mp4"):
                Reader = MP4Reader
            else:
                raise ValueError(f"Unrecognized recording file type: {f}")

            self.camera_dict[serial_number] = Reader(f, serial_number)

    def read_cameras(self, index=None, camera_type_dict={}, timestamp_dict={}):
        full_obs_dict = defaultdict(dict)

        # Read Cameras In Randomized Order #
        all_cam_ids = list(self.camera_dict.keys())
        random.shuffle(all_cam_ids)

        for cam_id in all_cam_ids:
            cam_type = camera_type_dict[cam_id]
            curr_cam_kwargs = self.camera_kwargs.get(cam_type, {})
            self.camera_dict[cam_id].set_reading_parameters(**curr_cam_kwargs)

            timestamp = timestamp_dict.get(cam_id + "_frame_received", None)
            if index is not None:
                self.camera_dict[cam_id].set_frame_index(index)

            data_dict = self.camera_dict[cam_id].read_camera(correct_timestamp=timestamp)

            # Process Returned Data #
            if data_dict is None:
                return None
            for key in data_dict:
                full_obs_dict[key].update(data_dict[key])

        return full_obs_dict

    def disable_cameras(self):
        for camera in self.camera_dict.values():
            camera.disable_camera()
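
A playback sketch (not repository code); read_cameras needs a serial-to-type map, and can optionally verify frames against "<serial>_frame_received" timestamps saved during collection. Paths and serials here are hypothetical:

from droid.camera_utils.wrappers.recorded_multi_camera_wrapper import RecordedMultiCameraWrapper

wrapper = RecordedMultiCameraWrapper(
    "/data/session_0/SVO",  # folder of <serial>.svo / <serial>.mp4 files
    camera_kwargs={"varied_camera": {"image": True, "resolution": (128, 128), "resize_func": "cv2"}},
)
obs = wrapper.read_cameras(index=0, camera_type_dict={"12345678": "varied_camera"})
wrapper.disable_cameras()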


================================================
FILE: droid/controllers/oculus_controller.py
================================================
import time

import numpy as np
from oculus_reader.reader import OculusReader

from droid.misc.subprocess_utils import run_threaded_command
from droid.misc.transformations import add_angles, euler_to_quat, quat_diff, quat_to_euler, rmat_to_quat


def vec_to_reorder_mat(vec):
    X = np.zeros((len(vec), len(vec)))
    for i in range(X.shape[0]):
        ind = int(abs(vec[i])) - 1
        X[i, ind] = np.sign(vec[i])
    return X


class VRPolicy:
    def __init__(
        self,
        right_controller: bool = True,
        max_lin_vel: float = 1,
        max_rot_vel: float = 1,
        max_gripper_vel: float = 1,
        spatial_coeff: float = 1,
        pos_action_gain: float = 5,
        rot_action_gain: float = 2,
        gripper_action_gain: float = 3,
        rmat_reorder: list = [-2, -1, -3, 4],
    ):
        self.oculus_reader = OculusReader()
        self.vr_to_global_mat = np.eye(4)
        self.max_lin_vel = max_lin_vel
        self.max_rot_vel = max_rot_vel
        self.max_gripper_vel = max_gripper_vel
        self.spatial_coeff = spatial_coeff
        self.pos_action_gain = pos_action_gain
        self.rot_action_gain = rot_action_gain
        self.gripper_action_gain = gripper_action_gain
        self.global_to_env_mat = vec_to_reorder_mat(rmat_reorder)
        self.controller_id = "r" if right_controller else "l"
        self.reset_orientation = True
        self.reset_state()

        # Start State Listening Thread #
        run_threaded_command(self._update_internal_state)

    def reset_state(self):
        self._state = {
            "poses": {},
            "buttons": {"A": False, "B": False, "X": False, "Y": False},
            "movement_enabled": False,
            "controller_on": True,
        }
        self.update_sensor = True
        self.reset_origin = True
        self.robot_origin = None
        self.vr_origin = None
        self.vr_state = None

    def _update_internal_state(self, num_wait_sec=5, hz=50):
        last_read_time = time.time()
        while True:
            # Regulate Read Frequency #
            time.sleep(1 / hz)

            # Read Controller
            time_since_read = time.time() - last_read_time
            poses, buttons = self.oculus_reader.get_transformations_and_buttons()
            self._state["controller_on"] = time_since_read < num_wait_sec
            if poses == {}:
                continue

            # Determine Control Pipeline #
            toggled = self._state["movement_enabled"] != buttons[self.controller_id.upper() + "G"]
            self.update_sensor = self.update_sensor or buttons[self.controller_id.upper() + "G"]
            self.reset_orientation = self.reset_orientation or buttons[self.controller_id.upper() + "J"]
            self.reset_origin = self.reset_origin or toggled

            # Save Info #
            self._state["poses"] = poses
            self._state["buttons"] = buttons
            self._state["movement_enabled"] = buttons[self.controller_id.upper() + "G"]
            self._state["controller_on"] = True
            last_read_time = time.time()

            # Update Definition Of "Forward" #
            stop_updating = self._state["buttons"][self.controller_id.upper() + "J"] or self._state["movement_enabled"]
            if self.reset_orientation:
                rot_mat = np.asarray(self._state["poses"][self.controller_id])
                if stop_updating:
                    self.reset_orientation = False
                # Try to invert the rotation matrix; if that fails, fall back to the identity
                try:
                    rot_mat = np.linalg.inv(rot_mat)
                except np.linalg.LinAlgError:
                    print(f"exception for rot mat: {rot_mat}")
                    rot_mat = np.eye(4)
                    self.reset_orientation = True
                self.vr_to_global_mat = rot_mat

    def _process_reading(self):
        rot_mat = np.asarray(self._state["poses"][self.controller_id])
        rot_mat = self.global_to_env_mat @ self.vr_to_global_mat @ rot_mat
        vr_pos = self.spatial_coeff * rot_mat[:3, 3]
        vr_quat = rmat_to_quat(rot_mat[:3, :3])
        vr_gripper = self._state["buttons"]["rightTrig" if self.controller_id == "r" else "leftTrig"][0]

        self.vr_state = {"pos": vr_pos, "quat": vr_quat, "gripper": vr_gripper}

    def _limit_velocity(self, lin_vel, rot_vel, gripper_vel):
        """Scales down the linear and angular magnitudes of the action"""
        lin_vel_norm = np.linalg.norm(lin_vel)
        rot_vel_norm = np.linalg.norm(rot_vel)
        gripper_vel_norm = np.linalg.norm(gripper_vel)
        if lin_vel_norm > self.max_lin_vel:
            lin_vel = lin_vel * self.max_lin_vel / lin_vel_norm
        if rot_vel_norm > self.max_rot_vel:
            rot_vel = rot_vel * self.max_rot_vel / rot_vel_norm
        if gripper_vel_norm > self.max_gripper_vel:
            gripper_vel = gripper_vel * self.max_gripper_vel / gripper_vel_norm
        return lin_vel, rot_vel, gripper_vel

    def _calculate_action(self, state_dict, include_info=False):
        # Read Sensor #
        if self.update_sensor:
            self._process_reading()
            self.update_sensor = False

        # Read Observation
        robot_pos = np.array(state_dict["cartesian_position"][:3])
        robot_euler = state_dict["cartesian_position"][3:]
        robot_quat = euler_to_quat(robot_euler)
        robot_gripper = state_dict["gripper_position"]

        # Reset Origin On Release #
        if self.reset_origin:
            self.robot_origin = {"pos": robot_pos, "quat": robot_quat}
            self.vr_origin = {"pos": self.vr_state["pos"], "quat": self.vr_state["quat"]}
            self.reset_origin = False

        # Calculate Positional Action #
        robot_pos_offset = robot_pos - self.robot_origin["pos"]
        target_pos_offset = self.vr_state["pos"] - self.vr_origin["pos"]
        pos_action = target_pos_offset - robot_pos_offset

        # Calculate Euler Action #
        robot_quat_offset = quat_diff(robot_quat, self.robot_origin["quat"])
        target_quat_offset = quat_diff(self.vr_state["quat"], self.vr_origin["quat"])
        quat_action = quat_diff(target_quat_offset, robot_quat_offset)
        euler_action = quat_to_euler(quat_action)

        # Calculate Gripper Action #
        gripper_action = (self.vr_state["gripper"] * 1.5) - robot_gripper

        # Calculate Desired Pose #
        target_pos = pos_action + robot_pos
        target_euler = add_angles(euler_action, robot_euler)
        target_cartesian = np.concatenate([target_pos, target_euler])
        target_gripper = self.vr_state["gripper"]

        # Scale Appropriately #
        pos_action *= self.pos_action_gain
        euler_action *= self.rot_action_gain
        gripper_action *= self.gripper_action_gain
        lin_vel, rot_vel, gripper_vel = self._limit_velocity(pos_action, euler_action, gripper_action)

        # Prepare Return Values #
        info_dict = {"target_cartesian_position": target_cartesian, "target_gripper_position": target_gripper}
        action = np.concatenate([lin_vel, rot_vel, [gripper_vel]])
        action = action.clip(-1, 1)

        # Return #
        if include_info:
            return action, info_dict
        else:
            return action

    def get_info(self):
        return {
            "success": self._state["buttons"]["A"] if self.controller_id == 'r' else self._state["buttons"]["X"],
            "failure": self._state["buttons"]["B"] if self.controller_id == 'r' else self._state["buttons"]["Y"],
            "movement_enabled": self._state["movement_enabled"],
            "controller_on": self._state["controller_on"],
        }

    def forward(self, obs_dict, include_info=False):
        if self._state["poses"] == {}:
            action = np.zeros(7)
            if include_info:
                return action, {}
            else:
                return action
        return self._calculate_action(obs_dict["robot_state"], include_info=include_info)
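
The default rmat_reorder=[-2, -1, -3, 4] encodes a signed axis permutation: entry i names the 1-indexed input axis (sign = flip) routed to output axis i. A worked sketch (not repository code) of the matrix it yields:

from droid.controllers.oculus_controller import vec_to_reorder_mat

mat = vec_to_reorder_mat([-2, -1, -3, 4])
print(mat)
# [[ 0. -1.  0.  0.]   output x = -input y
#  [-1.  0.  0.  0.]   output y = -input x
#  [ 0.  0. -1.  0.]   output z = -input z
#  [ 0.  0.  0.  1.]]  homogeneous row passes through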


================================================
FILE: droid/data_loading/data_loader.py
================================================
from torch.utils.data import DataLoader
from torch.utils.data.datapipes.iter import Shuffler

from droid.data_loading.dataset import TrajectoryDataset
from droid.data_loading.trajectory_sampler import TrajectorySampler, generate_train_test_split


def create_dataloader(
    data_folderpaths,
    recording_prefix="MP4",
    batch_size=32,
    num_workers=6,
    buffer_size=1000,
    prefetch_factor=2,
    traj_loading_kwargs={},
    timestep_filtering_kwargs={},
    camera_kwargs={},
    image_transform_kwargs={},
):
    traj_sampler = TrajectorySampler(
        data_folderpaths,
        recording_prefix=recording_prefix,
        traj_loading_kwargs=traj_loading_kwargs,
        timestep_filtering_kwargs=timestep_filtering_kwargs,
        image_transform_kwargs=image_transform_kwargs,
        camera_kwargs=camera_kwargs,
    )
    dataset = TrajectoryDataset(traj_sampler)
    shuffled_dataset = Shuffler(dataset, buffer_size=buffer_size)
    dataloader = DataLoader(
        shuffled_dataset, batch_size=batch_size, num_workers=num_workers, prefetch_factor=prefetch_factor
    )

    return dataloader


def create_train_test_data_loader(data_loader_kwargs={}, data_processing_kwargs={}, camera_kwargs={}):
    # Generate Train / Test Split #
    data_filtering_kwargs = data_loader_kwargs.pop("data_filtering_kwargs", {})
    train_folderpaths, test_folderpaths = generate_train_test_split(**data_filtering_kwargs)

    # Create Train / Test Dataloaders #
    train_dataloader = create_dataloader(
        train_folderpaths, **data_loader_kwargs, **data_processing_kwargs, camera_kwargs=camera_kwargs
    )
    test_dataloader = create_dataloader(
        test_folderpaths, **data_loader_kwargs, **data_processing_kwargs, camera_kwargs=camera_kwargs
    )

    return train_dataloader, test_dataloader


# This keeps only one shuffle buffer in memory at a time, but it is currently not working
class TrainTestDataloader:
    def __init__(self, data_loader_kwargs={}, data_processing_kwargs={}, camera_kwargs={}):
        # Generate Train / Test Split #
        data_filtering_kwargs = data_loader_kwargs.pop("data_filtering_kwargs", {})
        self.train_folderpaths, self.test_folderpaths = generate_train_test_split(**data_filtering_kwargs)

        # Create Dataloader Generator #
        self.generate_dataloader = lambda folderpaths: create_dataloader(
            folderpaths, **data_loader_kwargs, **data_processing_kwargs, camera_kwargs=camera_kwargs
        )

    def set_train_mode(self):
        self.dataloader = self.generate_dataloader(self.train_folderpaths)

    def set_test_mode(self):
        self.dataloader = self.generate_dataloader(self.test_folderpaths)

    def __iter__(self):
        return iter(self.dataloader)
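
A wiring sketch for the torch loader (folder paths hypothetical); each batch is a collated timestep dict emitted by TrajectorySampler:

from droid.data_loading.data_loader import create_dataloader

dataloader = create_dataloader(
    ["/data/success/2023-01-01"],  # hypothetical trajectory folders
    recording_prefix="MP4",
    batch_size=32,
    num_workers=6,
)
batch = next(iter(dataloader))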


================================================
FILE: droid/data_loading/dataset.py
================================================
import torch
from torch.utils.data import IterableDataset


class TrajectoryDataset(IterableDataset):
    def __init__(self, trajectory_sampler):
        self._trajectory_sampler = trajectory_sampler

    def _refresh_generator(self):
        worker_info = torch.utils.data.get_worker_info()
        new_samples = self._trajectory_sampler.fetch_samples(worker_info=worker_info)
        self._sample_generator = iter(new_samples)

    def __iter__(self):
        self._refresh_generator()
        while True:
            try:
                yield next(self._sample_generator)
            except StopIteration:
                self._refresh_generator()
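
Because _refresh_generator is called whenever the sample generator runs dry, iterating over TrajectoryDataset never terminates; consumers must bound it themselves. A sketch (not repository code; the folder path is hypothetical):

import itertools

from droid.data_loading.dataset import TrajectoryDataset
from droid.data_loading.trajectory_sampler import TrajectorySampler

sampler = TrajectorySampler(["/data/success/2023-01-01"])  # hypothetical trajectory folders
dataset = TrajectoryDataset(sampler)
first_five = list(itertools.islice(iter(dataset), 5))  # bound the endless stream explicitly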


================================================
FILE: droid/data_loading/tf_data_loader.py
================================================
from functools import partial
from typing import Dict

import tensorflow as tf


def get_type_spec(path: str) -> Dict[str, tf.TensorSpec]:
    """Get a type spec from a tfrecord file.

    Args:
        path (str): Path to a single tfrecord file.

    Returns:
        dict: A dictionary mapping feature names to tf.TensorSpecs.
    """
    data = next(iter(tf.data.TFRecordDataset(path))).numpy()
    example = tf.train.Example()
    example.ParseFromString(data)

    out = {}
    for key, value in example.features.feature.items():
        data = value.bytes_list.value[0]
        tensor_proto = tf.make_tensor_proto([])  # empty TensorProto to parse the serialized bytes into
        tensor_proto.ParseFromString(data)
        dtype = tf.dtypes.as_dtype(tensor_proto.dtype)
        shape = [d.size for d in tensor_proto.tensor_shape.dim]
        shape[0] = None  # first dimension is trajectory length, which is variable
        out[key] = tf.TensorSpec(shape=shape, dtype=dtype)

    return out


def get_tf_dataloader(
    path: str,
    *,
    batch_size: int,
    shuffle_buffer_size: int = 25000,
    cache: bool = False,
) -> tf.data.Dataset:
    # get the tfrecord files
    paths = tf.io.gfile.glob(tf.io.gfile.join(path, "*.tfrecord"))

    # extract the type spec
    type_spec = get_type_spec(paths[0])

    # read the tfrecords (yields raw serialized examples)
    dataset = tf.data.TFRecordDataset(paths, num_parallel_reads=tf.data.AUTOTUNE)

    # decode the examples (yields trajectories)
    dataset = dataset.map(partial(_decode_example, type_spec=type_spec), num_parallel_calls=tf.data.AUTOTUNE)

    # cache all the dataloading (uses a lot of memory)
    if cache:
        dataset = dataset.cache()

    # do any trajectory-level transforms here (e.g. filtering, goal relabeling)

    # unbatch to get individual transitions
    dataset = dataset.unbatch()

    # process each transition
    dataset = dataset.map(_process_transition, num_parallel_calls=tf.data.AUTOTUNE)

    # do any transition-level transformations here (e.g. augmentations)

    # shuffle the dataset
    dataset = dataset.shuffle(shuffle_buffer_size)

    dataset = dataset.repeat()

    # batch the dataset
    dataset = dataset.batch(batch_size, num_parallel_calls=tf.data.AUTOTUNE)

    # always prefetch last
    dataset = dataset.prefetch(tf.data.AUTOTUNE)

    return dataset


def _decode_example(example_proto: tf.Tensor, type_spec: Dict[str, tf.TensorSpec]) -> Dict[str, tf.Tensor]:
    features = {key: tf.io.FixedLenFeature([], tf.string) for key in type_spec.keys()}
    parsed_features = tf.io.parse_single_example(example_proto, features)
    parsed_tensors = {key: tf.io.parse_tensor(parsed_features[key], spec.dtype) for key, spec in type_spec.items()}

    for key in parsed_tensors:
        parsed_tensors[key] = tf.ensure_shape(parsed_tensors[key], type_spec[key].shape)

    return parsed_tensors


def _process_transition(transition: Dict[str, tf.Tensor]) -> Dict[str, tf.Tensor]:
    for key in transition:
        if "image" in key:
            transition[key] = tf.io.decode_jpeg(transition[key])

            # convert to float and normalize to [-1, 1]
            transition[key] = tf.cast(transition[key], tf.float32) / 127.5 - 1.0
    return transition


if __name__ == "__main__":
    ### EXAMPLE USAGE ###
    import tqdm

    dataset = get_tf_dataloader(
        "/tmp/franka_tfrecord_test/train",
        batch_size=8,
        shuffle_buffer_size=1,
    )
    with tqdm.tqdm() as pbar:
        for batch in dataset.as_numpy_iterator():
            for key, value in batch.items():
                pbar.write(f"{key}: {value.shape}")
            images = batch["observation/image/20521388_right"]
            pbar.update(len(images))
            # for image in images:
            #     plt.imshow(image / 2 + 0.5)
            #     plt.savefig("test.png")
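
get_type_spec above assumes each tfrecord feature is a bytes_list holding one serialized TensorProto, which is exactly what tf.io.serialize_tensor emits. A writer sketch (not repository code) that produces records this loader can parse; the trajectory layout is an assumption:

import tensorflow as tf

def write_trajectory(path, trajectory):
    # trajectory: dict of key -> array with a leading (variable) time dimension (assumed layout);
    # image keys would additionally need JPEG-encoded bytes to satisfy _process_transition
    feature = {
        key: tf.train.Feature(bytes_list=tf.train.BytesList(value=[tf.io.serialize_tensor(value).numpy()]))
        for key, value in trajectory.items()
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    with tf.io.TFRecordWriter(path) as writer:
        writer.write(example.SerializeToString())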


================================================
FILE: droid/data_loading/trajectory_sampler.py
================================================
import os

import h5py
import numpy as np

from droid.data_processing.timestep_processing import TimestepProcesser
from droid.trajectory_utils.misc import load_trajectory


def crawler(dirname, filter_func=None):
    subfolders = [f.path for f in os.scandir(dirname) if f.is_dir()]
    traj_files = [f.path for f in os.scandir(dirname) if (f.is_file() and "trajectory.h5" in f.path)]

    if len(traj_files):
        # Only Save Desired Data #
        if filter_func is None:
            use_data = True
        else:
            hdf5_file = h5py.File(traj_files[0], "r")
            use_data = filter_func(hdf5_file.attrs)
            hdf5_file.close()

        if use_data:
            return [dirname]

    all_folderpaths = []
    for child_dirname in subfolders:
        child_paths = crawler(child_dirname, filter_func=filter_func)
        all_folderpaths.extend(child_paths)

    return all_folderpaths


def generate_train_test_split(filter_func=None, remove_failures=True, train_p=0.9):
    # Collect And Split #
    all_folderpaths = collect_data_folderpaths(filter_func=filter_func, remove_failures=remove_failures)

    # Split Into Train / Test #
    num_train_traj = int(train_p * len(all_folderpaths))
    all_i
SYMBOL INDEX (509 symbols across 53 files)

FILE: droid/calibration/calibration_utils.py
  function load_calibration_info (line 34) | def load_calibration_info(keep_time=False):
  function update_calibration_info (line 44) | def update_calibration_info(cam_id, transformation):
  function check_calibration_info (line 52) | def check_calibration_info(required_ids, time_threshold=3600):
  function visualize_calibration (line 68) | def visualize_calibration(calibration_dict):
  function calibration_traj (line 88) | def calibration_traj(t, pos_scale=0.1, angle_scale=0.2, hand_camera=False):
  class CharucoDetector (line 102) | class CharucoDetector:
    method __init__ (line 103) | def __init__(
    method process_image (line 122) | def process_image(self, image):
    method add_sample (line 157) | def add_sample(self, cam_id, image, pose):
    method calculate_target_to_cam (line 164) | def calculate_target_to_cam(self, readings, train=True):
    method augment_image (line 238) | def augment_image(self, cam_id, image, visualize=False, visual_type=["...
  class ThirdPersonCameraCalibrator (line 284) | class ThirdPersonCameraCalibrator(CharucoDetector):
    method __init__ (line 285) | def __init__(
    method calibrate (line 293) | def calibrate(self, cam_id):
    method _calibrate_cam_to_base (line 296) | def _calibrate_cam_to_base(self, cam_id=None, readings=None, gripper_p...
    method _calibrate_gripper_to_target (line 333) | def _calibrate_gripper_to_target(self, cam_id=None, readings=None, gri...
    method _calculate_gripper_to_base (line 370) | def _calculate_gripper_to_base(self, train_readings, train_gripper_pos...
    method is_calibration_accurate (line 414) | def is_calibration_accurate(self, cam_id):
  class HandCameraCalibrator (line 454) | class HandCameraCalibrator(CharucoDetector):
    method __init__ (line 455) | def __init__(self, camera, lin_error_threshold=1e-3, rot_error_thresho...
    method calibrate (line 461) | def calibrate(self, cam_id):
    method _calibrate_cam_to_gripper (line 464) | def _calibrate_cam_to_gripper(self, cam_id=None, readings=None, grippe...
    method _calibrate_base_to_target (line 499) | def _calibrate_base_to_target(self, cam_id=None, readings=None, grippe...
    method _calculate_gripper_to_base (line 534) | def _calculate_gripper_to_base(self, train_readings, train_gripper_pos...
    method is_calibration_accurate (line 579) | def is_calibration_accurate(self, cam_id):

FILE: droid/camera_utils/camera_readers/zed_camera.py
  function gather_zed_cameras (line 15) | def gather_zed_cameras():
  class ZedCamera (line 40) | class ZedCamera:
    method __init__ (line 41) | def __init__(self, camera):
    method enable_advanced_calibration (line 53) | def enable_advanced_calibration(self):
    method disable_advanced_calibration (line 56) | def disable_advanced_calibration(self):
    method set_reading_parameters (line 59) | def set_reading_parameters(
    method set_calibration_mode (line 79) | def set_calibration_mode(self):
    method set_trajectory_mode (line 96) | def set_trajectory_mode(self):
    method _configure_camera (line 115) | def _configure_camera(self, init_params):
    method _process_intrinsics (line 148) | def _process_intrinsics(self, params):
    method get_intrinsics (line 154) | def get_intrinsics(self):
    method start_recording (line 158) | def start_recording(self, filename):
    method stop_recording (line 164) | def stop_recording(self):
    method _process_frame (line 168) | def _process_frame(self, frame):
    method read_camera (line 174) | def read_camera(self):
    method disable_camera (line 220) | def disable_camera(self):
    method is_running (line 228) | def is_running(self):

FILE: droid/camera_utils/info.py
  function get_camera_name (line 22) | def get_camera_name(cam_id):
  function get_camera_type (line 28) | def get_camera_type(cam_id):

FILE: droid/camera_utils/recording_readers/mp4_reader.py
  class MP4Reader (line 10) | class MP4Reader:
    method __init__ (line 11) | def __init__(self, filepath, serial_number):
    method set_reading_parameters (line 29) | def set_reading_parameters(
    method get_frame_resolution (line 45) | def get_frame_resolution(self):
    method get_frame_count (line 50) | def get_frame_count(self):
    method set_frame_index (line 56) | def set_frame_index(self, index):
    method _process_frame (line 67) | def _process_frame(self, frame):
    method read_camera (line 74) | def read_camera(self, ignore_data=False, correct_timestamp=None, retur...
    method disable_camera (line 114) | def disable_camera(self):

FILE: droid/camera_utils/recording_readers/svo_reader.py
  class SVOReader (line 13) | class SVOReader:
    method __init__ (line 14) | def __init__(self, filepath, serial_number):
    method set_reading_parameters (line 39) | def set_reading_parameters(
    method get_frame_resolution (line 66) | def get_frame_resolution(self):
    method get_frame_count (line 72) | def get_frame_count(self):
    method set_frame_index (line 77) | def set_frame_index(self, index):
    method _process_frame (line 88) | def _process_frame(self, frame):
    method read_camera (line 94) | def read_camera(self, ignore_data=False, correct_timestamp=None, retur...
    method disable_camera (line 146) | def disable_camera(self):

FILE: droid/camera_utils/wrappers/multi_camera_wrapper.py
  class MultiCameraWrapper (line 9) | class MultiCameraWrapper:
    method __init__ (line 10) | def __init__(self, camera_kwargs={}):
    method get_camera (line 25) | def get_camera(self, camera_id):
    method enable_advanced_calibration (line 28) | def enable_advanced_calibration(self):
    method disable_advanced_calibration (line 32) | def disable_advanced_calibration(self):
    method set_calibration_mode (line 36) | def set_calibration_mode(self, cam_id):
    method set_trajectory_mode (line 47) | def set_trajectory_mode(self):
    method start_recording (line 62) | def start_recording(self, recording_folderpath):
    method stop_recording (line 70) | def stop_recording(self):
    method read_cameras (line 75) | def read_cameras(self):
    method disable_cameras (line 94) | def disable_cameras(self):

FILE: droid/camera_utils/wrappers/recorded_multi_camera_wrapper.py
  class RecordedMultiCameraWrapper (line 10) | class RecordedMultiCameraWrapper:
    method __init__ (line 11) | def __init__(self, recording_folderpath, camera_kwargs={}):
    method read_cameras (line 35) | def read_cameras(self, index=None, camera_type_dict={}, timestamp_dict...
    method disable_cameras (line 61) | def disable_cameras(self):

FILE: droid/controllers/oculus_controller.py
  function vec_to_reorder_mat (line 10) | def vec_to_reorder_mat(vec):
  class VRPolicy (line 18) | class VRPolicy:
    method __init__ (line 19) | def __init__(
    method reset_state (line 48) | def reset_state(self):
    method _update_internal_state (line 61) | def _update_internal_state(self, num_wait_sec=5, hz=50):
    method _process_reading (line 102) | def _process_reading(self):
    method _limit_velocity (line 111) | def _limit_velocity(self, lin_vel, rot_vel, gripper_vel):
    method _calculate_action (line 124) | def _calculate_action(self, state_dict, include_info=False):
    method get_info (line 179) | def get_info(self):
    method forward (line 187) | def forward(self, obs_dict, include_info=False):

FILE: droid/data_loading/data_loader.py
  function create_dataloader (line 8) | def create_dataloader(
  function create_train_test_data_loader (line 37) | def create_train_test_data_loader(data_loader_kwargs={}, data_processing...
  class TrainTestDataloader (line 54) | class TrainTestDataloader:
    method __init__ (line 55) | def __init__(self, data_loader_kwargs={}, data_processing_kwargs={}, c...
    method set_train_mode (line 65) | def set_train_mode(self):
    method set_test_mode (line 68) | def set_test_mode(self):
    method __iter__ (line 71) | def __iter__(self):

FILE: droid/data_loading/dataset.py
  class TrajectoryDataset (line 5) | class TrajectoryDataset(IterableDataset):
    method __init__ (line 6) | def __init__(self, trajectory_sampler):
    method _refresh_generator (line 9) | def _refresh_generator(self):
    method __iter__ (line 14) | def __iter__(self):

FILE: droid/data_loading/tf_data_loader.py
  function get_type_spec (line 7) | def get_type_spec(path: str) -> Dict[str, tf.TensorSpec]:
  function get_tf_dataloader (line 33) | def get_tf_dataloader(
  function _decode_example (line 80) | def _decode_example(example_proto: tf.Tensor, type_spec: Dict[str, tf.Te...
  function _process_transition (line 91) | def _process_transition(transition: Dict[str, tf.Tensor]) -> Dict[str, t...

FILE: droid/data_loading/trajectory_sampler.py
  function crawler (line 10) | def crawler(dirname, filter_func=None):
  function generate_train_test_split (line 34) | def generate_train_test_split(filter_func=None, remove_failures=True, tr...
  function collect_data_folderpaths (line 54) | def collect_data_folderpaths(filter_func=None, remove_failures=True):
  class TrajectorySampler (line 68) | class TrajectorySampler:
    method __init__ (line 69) | def __init__(
    method fetch_samples (line 86) | def fetch_samples(self, worker_info=None):

FILE: droid/data_processing/data_transforms.py
  class ImageTransformer (line 5) | class ImageTransformer:
    method __init__ (line 6) | def __init__(
    method forward (line 38) | def forward(self, timestep):

FILE: droid/data_processing/timestep_processing.py
  class TimestepProcesser (line 11) | class TimestepProcesser:
    method __init__ (line 12) | def __init__(
    method forward (line 37) | def forward(self, timestep):

FILE: droid/evaluation/eval_launcher.py
  function eval_launcher (line 14) | def eval_launcher(variant, run_id, exp_id):

FILE: droid/evaluation/eval_launcher_robomimic.py
  function eval_launcher (line 20) | def eval_launcher(variant, run_id, exp_id):
  function get_goal_im (line 156) | def get_goal_im(variant, run_id, exp_id):

FILE: droid/evaluation/policy_wrapper.py
  function converter_helper (line 10) | def converter_helper(data, batchify=True):
  function np_dict_to_torch_dict (line 23) | def np_dict_to_torch_dict(np_dict, batchify=True):
  class PolicyWrapper (line 40) | class PolicyWrapper:
    method __init__ (line 41) | def __init__(self, policy, timestep_filtering_kwargs, image_transform_...
    method forward (line 53) | def forward(self, observation):
  class PolicyWrapperRobomimic (line 69) | class PolicyWrapperRobomimic:
    method __init__ (line 70) | def __init__(self, policy, timestep_filtering_kwargs, image_transform_...
    method convert_raw_extrinsics_to_Twc (line 83) | def convert_raw_extrinsics_to_Twc(self, raw_data):
    method forward (line 98) | def forward(self, observation):
    method reset (line 141) | def reset(self):
  class FrameStackWrapper (line 146) | class FrameStackWrapper:
    method __init__ (line 152) | def __init__(self, num_frames):
    method _set_initial_obs_history (line 167) | def _set_initial_obs_history(self, init_obs):
    method reset (line 183) | def reset(self):
    method get_obs_history (line 186) | def get_obs_history(self):
    method add_obs (line 198) | def add_obs(self, obs):

FILE: droid/evaluation/rt1_wrapper.py
  function resize (line 13) | def resize(image):
  class RT1Policy (line 19) | class RT1Policy(GoalCondPolicy):
    method __init__ (line 20) | def __init__(self, checkpoint_path, goal_images=None, language_instruc...
    method _run_dummy_inference (line 34) | def _run_dummy_inference(self):
    method _normalize_task_name (line 41) | def _normalize_task_name(self, task_name):
    method compute_embedding (line 54) | def compute_embedding(self, task_name):
    method forward (line 61) | def forward(self, observation):
    method load_goal_imgs (line 94) | def load_goal_imgs(self, img_dict):
    method load_lang (line 100) | def load_lang(self, text):

FILE: droid/franka/robot.py
  class FrankaRobot (line 18) | class FrankaRobot:
    method launch_controller (line 19) | def launch_controller(self):
    method launch_robot (line 35) | def launch_robot(self):
    method kill_controller (line 42) | def kill_controller(self):
    method update_command (line 46) | def update_command(self, command, action_space="cartesian_velocity", g...
    method update_pose (line 54) | def update_pose(self, command, velocity=False, blocking=False):
    method update_joints (line 77) | def update_joints(self, command, velocity=False, blocking=False, carte...
    method update_gripper (line 117) | def update_gripper(self, command, velocity=True, blocking=False):
    method add_noise_to_joints (line 125) | def add_noise_to_joints(self, original_joints, cartesian_noise):
    method get_joint_positions (line 144) | def get_joint_positions(self):
    method get_joint_velocities (line 147) | def get_joint_velocities(self):
    method get_gripper_position (line 150) | def get_gripper_position(self):
    method get_ee_pose (line 153) | def get_ee_pose(self):
    method get_robot_state (line 158) | def get_robot_state(self):
    method adaptive_time_to_go (line 184) | def adaptive_time_to_go(self, desired_joint_position, t_min=0, t_max=4):
    method create_action_dict (line 191) | def create_action_dict(self, action, action_space, gripper_action_spac...

FILE: droid/misc/pointcloud_utils.py
  function numpy_to_o3d (line 12) | def numpy_to_o3d(numpy_pcd):
  function o3d_to_numpy (line 27) | def o3d_to_numpy(o3d_pcd):
  function rgbd_to_pcd (line 34) | def rgbd_to_pcd(color, depth, camera_matrix):
  function visualize_pointcloud (line 56) | def visualize_pointcloud(pcd):
  function transform_pointcloud (line 62) | def transform_pointcloud(pcd, cam2base):
  function pairwise_registration (line 70) | def pairwise_registration(source, target):
  function full_registration (line 93) | def full_registration(pcds):
  function combine_pointclouds (line 118) | def combine_pointclouds(o3d_pcd_dict, cam2base_dict=None, reference_key=...

FILE: droid/misc/server_interface.py
  function attempt_n_times (line 7) | def attempt_n_times(function_list, max_attempts, sleep_time=0.1):
  class ServerInterface (line 23) | class ServerInterface:
    method __init__ (line 24) | def __init__(self, ip_address="127.0.0.1", launch=True):
    method establish_connection (line 32) | def establish_connection(self):
    method launch_controller (line 36) | def launch_controller(self):
    method launch_robot (line 39) | def launch_robot(self):
    method kill_controller (line 42) | def kill_controller(self):
    method update_command (line 45) | def update_command(self, command, action_space="cartesian_velocity", g...
    method create_action_dict (line 49) | def create_action_dict(self, command, action_space="cartesian_velocity"):
    method update_pose (line 53) | def update_pose(self, command, velocity=True, blocking=False):
    method update_joints (line 56) | def update_joints(self, command, velocity=True, blocking=False, cartes...
    method update_gripper (line 61) | def update_gripper(self, command, velocity=True, blocking=False):
    method get_ee_pose (line 64) | def get_ee_pose(self):
    method get_joint_positions (line 67) | def get_joint_positions(self):
    method get_joint_velocities (line 70) | def get_joint_velocities(self):
    method get_gripper_state (line 73) | def get_gripper_state(self):
    method get_robot_state (line 76) | def get_robot_state(self):

FILE: droid/misc/subprocess_utils.py
  function run_terminal_command (line 6) | def run_terminal_command(command):
  function run_threaded_command (line 14) | def run_threaded_command(command, args=(), daemon=True):
  function run_multiprocessed_command (line 21) | def run_multiprocessed_command(command, args=()):

FILE: droid/misc/time.py
  function time_ms (line 4) | def time_ms():

FILE: droid/misc/transformations.py
  function quat_to_euler (line 6) | def quat_to_euler(quat, degrees=False):
  function euler_to_quat (line 11) | def euler_to_quat(euler, degrees=False):
  function rmat_to_euler (line 15) | def rmat_to_euler(rot_mat, degrees=False):
  function euler_to_rmat (line 20) | def euler_to_rmat(euler, degrees=False):
  function rmat_to_quat (line 24) | def rmat_to_quat(rot_mat, degrees=False):
  function quat_to_rmat (line 29) | def quat_to_rmat(quat, degrees=False):
  function quat_diff (line 34) | def quat_diff(target, source):
  function angle_diff (line 39) | def angle_diff(target, source, degrees=False):
  function pose_diff (line 46) | def pose_diff(target, source, degrees=False):
  function add_quats (line 54) | def add_quats(delta, source):
  function add_angles (line 59) | def add_angles(delta, source, degrees=False):
  function add_poses (line 66) | def add_poses(delta, source, degrees=False):
  function change_pose_frame (line 74) | def change_pose_frame(pose, frame, degrees=False):

FILE: droid/misc/version_control/loader.py
  function load_version_info (line 8) | def load_version_info(version_number):

FILE: droid/plotting/analysis_func.py
  function analysis_func (line 23) | def analysis_func(hdf5_filepath, hdf5_file=None):

FILE: droid/plotting/misc.py
  function data_crawler (line 17) | def data_crawler(dirname, func_list=None, ignore_failure=True):
  function task_mapper (line 40) | def task_mapper(task_description):
  function grab_3rd_person_extrinsics (line 47) | def grab_3rd_person_extrinsics(camera_extrinsics, camera_type_dict):
  function estimate_pos_angle_density (line 63) | def estimate_pos_angle_density(pose_list):
  function get_bucket_index (line 91) | def get_bucket_index(timestamp):

FILE: droid/postprocessing/parse.py
  function parse_datetime (line 19) | def parse_datetime(date_str: str, mode="day") -> datetime:
  function parse_user (line 26) | def parse_user(
  function parse_existing_metadata (line 47) | def parse_existing_metadata(trajectory_dir: str):
  function parse_data_directory (line 57) | def parse_data_directory(data_dir: str, lab_agnostic: bool = False, proc...
  function parse_timestamp (line 72) | def parse_timestamp(trajectory_dir: Path) -> str:
  function parse_trajectory (line 95) | def parse_trajectory(

FILE: droid/postprocessing/schema.py
  function get_uuid (line 12) | def get_uuid(*, uuid: str, **_) -> str:
  function get_lab (line 16) | def get_lab(*, lab: str, **_) -> str:
  function get_user (line 20) | def get_user(*, user: str, **_) -> str:
  function get_user_id (line 24) | def get_user_id(*, user_id: str, **_) -> str:
  function get_date (line 28) | def get_date(*, timestamp: str, **_) -> str:
  function get_timestamp (line 32) | def get_timestamp(*, timestamp: str, **_) -> str:
  function get_hdf5_path (line 36) | def get_hdf5_path(*, hdf5_path: str, **_) -> str:
  function get_building (line 40) | def get_building(*, attrs: Dict, **_) -> str:
  function get_scene_id (line 44) | def get_scene_id(*, attrs: Dict, **_) -> str:
  function get_success (line 48) | def get_success(*, attrs: Dict, **_) -> Optional[bool]:
  function get_robot_serial (line 52) | def get_robot_serial(*, attrs: Dict, **_) -> str:
  function get_droid_version (line 56) | def get_droid_version(*, attrs: Dict, **_) -> str:
  function get_current_task (line 60) | def get_current_task(*, attrs: Dict, **_) -> str:
  function get_trajectory_length (line 64) | def get_trajectory_length(*, trajectory_length: int, **_) -> int:
  function get_wrist_cam_serial (line 68) | def get_wrist_cam_serial(*, ctype2extrinsics: Dict, **_) -> str:
  function get_ext1_cam_serial (line 72) | def get_ext1_cam_serial(*, ctype2extrinsics: Dict, **_) -> str:
  function get_ext2_cam_serial (line 76) | def get_ext2_cam_serial(*, ctype2extrinsics: Dict, **_) -> str:
  function get_wrist_cam_extrinsics (line 80) | def get_wrist_cam_extrinsics(*, ctype2extrinsics: Dict, **_) -> List[flo...
  function get_ext1_cam_extrinsics (line 84) | def get_ext1_cam_extrinsics(*, ctype2extrinsics: Dict, **_) -> List[float]:
  function get_ext2_cam_extrinsics (line 88) | def get_ext2_cam_extrinsics(*, ctype2extrinsics: Dict, **_) -> List[float]:
  function get_path_placeholder (line 92) | def get_path_placeholder(**_) -> None:
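
Example — every getter above takes keyword-only arguments plus a `**_` catch-all, which lets one flat record be splatted into each getter while unused fields are silently ignored. A small illustration of that schema pattern (the SCHEMA mapping and record below are hypothetical):

    from typing import Callable, Dict

    def get_uuid(*, uuid: str, **_) -> str:
        return uuid

    def get_lab(*, lab: str, **_) -> str:
        return lab

    SCHEMA: Dict[str, Callable] = {"uuid": get_uuid, "lab": get_lab}

    record = {"uuid": "abc-123", "lab": "IRIS", "unused": "ignored"}
    metadata = {key: fn(**record) for key, fn in SCHEMA.items()}
    print(metadata)  # {'uuid': 'abc-123', 'lab': 'IRIS'}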

FILE: droid/postprocessing/stages.py
  function run_indexing (line 36) | def run_indexing(
  function run_processing (line 119) | def run_processing(
  function run_upload (line 236) | def run_upload(

FILE: droid/postprocessing/util/svo2depth.py
  function export_depth (line 18) | def export_depth(
  function convert_depths (line 97) | def convert_depths(

FILE: droid/postprocessing/util/svo2mp4.py
  function export_mp4 (line 17) | def export_mp4(svo_file: Path, mp4_dir: Path, stereo_view: str = "left",...
  function convert_mp4s (line 101) | def convert_mp4s(

FILE: droid/postprocessing/util/validate.py
  function validate_user2id (line 19) | def validate_user2id(registered_lab_members: Dict[str, Dict[str, str]]) ...
  function validate_day_dir (line 33) | def validate_day_dir(day_dir: Path) -> bool:
  function validate_svo_existence (line 41) | def validate_svo_existence(trajectory_dir: Path) -> bool:
  function validate_metadata_record (line 59) | def validate_metadata_record(metadata_record: Dict) -> bool:

FILE: droid/robot_env.py
  class RobotEnv (line 15) | class RobotEnv(gym.Env):
    method __init__ (line 16) | def __init__(self, action_space="cartesian_velocity", gripper_action_s...
    method step (line 49) | def step(self, action):
    method reset (line 65) | def reset(self, randomize=False):
    method update_robot (line 75) | def update_robot(self, action, action_space="cartesian_velocity", grip...
    method create_action_dict (line 84) | def create_action_dict(self, action):
    method read_cameras (line 87) | def read_cameras(self):
    method get_state (line 90) | def get_state(self):
    method get_camera_extrinsics (line 97) | def get_camera_extrinsics(self, state_dict):
    method get_observation (line 108) | def get_observation(self):
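
Example — RobotEnv subclasses gym.Env, so a rollout follows the familiar reset/step loop. A hypothetical sketch, assuming the classic 4-tuple step return and a 7-dim action (6-DoF cartesian velocity plus gripper); neither detail is confirmed by the listing above.

    import numpy as np
    from droid.robot_env import RobotEnv

    env = RobotEnv(action_space="cartesian_velocity")
    obs = env.reset(randomize=False)
    for _ in range(100):
        action = np.zeros(7)  # assumed action layout
        obs, reward, done, info = env.step(action)  # assumed gym 4-tuple
        if done:
            break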

FILE: droid/robot_ik/arm.py
  class RobotArm (line 11) | class RobotArm(robot_arm.RobotArm):
    method _build (line 12) | def _build(self, model_file):
    method _create_body (line 15) | def _create_body(self):
    method name (line 23) | def name(self) -> str:
    method joints (line 27) | def joints(self):
    method actuators (line 32) | def actuators(self):
    method mjcf_model (line 37) | def mjcf_model(self):
    method update_state (line 41) | def update_state(self, physics: mjcf.Physics, qpos: np.ndarray, qvel: ...
    method set_joint_angles (line 45) | def set_joint_angles(self, physics: mjcf.Physics, qpos: np.ndarray) ->...
    method base_site (line 49) | def base_site(self) -> types.MjcfElement:
    method wrist_site (line 53) | def wrist_site(self) -> types.MjcfElement:
    method initialize_episode (line 56) | def initialize_episode(self, physics: mjcf.Physics, random_state: np.r...
  class FrankaArm (line 62) | class FrankaArm(RobotArm):
    method _build (line 63) | def _build(self):

FILE: droid/robot_ik/robot_ik_solver.py
  class RobotIKSolver (line 8) | class RobotIKSolver:
    method __init__ (line 9) | def __init__(self):
    method cartesian_velocity_to_joint_velocity (line 44) | def cartesian_velocity_to_joint_velocity(self, cartesian_velocity, rob...
    method gripper_velocity_to_delta (line 59) | def gripper_velocity_to_delta(self, gripper_velocity):
    method cartesian_velocity_to_delta (line 69) | def cartesian_velocity_to_delta(self, cartesian_velocity):
    method joint_velocity_to_delta (line 88) | def joint_velocity_to_delta(self, joint_velocity):
    method gripper_delta_to_velocity (line 103) | def gripper_delta_to_velocity(self, gripper_delta):
    method cartesian_delta_to_velocity (line 106) | def cartesian_delta_to_velocity(self, cartesian_delta):
    method joint_delta_to_velocity (line 116) | def joint_delta_to_velocity(self, joint_delta):
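
Example — the paired *_to_delta / *_to_velocity names imply a fixed control interval that scales between instantaneous velocities and per-step deltas. A toy illustration under that assumption (CONTROL_HZ is hypothetical):

    import numpy as np

    CONTROL_HZ = 15  # assumed control frequency
    CONTROL_DT = 1.0 / CONTROL_HZ

    def cartesian_velocity_to_delta(cartesian_velocity):
        # Integrate a velocity command over one control step.
        return np.asarray(cartesian_velocity) * CONTROL_DT

    def cartesian_delta_to_velocity(cartesian_delta):
        # Inverse mapping: the velocity that realizes a delta in one step.
        return np.asarray(cartesian_delta) / CONTROL_DT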

FILE: droid/training/model_trainer.py
  function exp_launcher (line 18) | def exp_launcher(variant, run_id, exp_id):
  class ModelTrainer (line 56) | class ModelTrainer:
    method __init__ (line 57) | def __init__(
    method compute_loss (line 85) | def compute_loss(self, batch, test=False):
    method save_policy (line 91) | def save_policy(self, epoch):
    method train_batch (line 95) | def train_batch(self, batch):
    method test_batch (line 107) | def test_batch(self, batch):
    method train_epoch (line 110) | def train_epoch(self, epoch):
    method test_epoch (line 124) | def test_epoch(self, epoch, test_batches=100):
    method prepare_logdir (line 138) | def prepare_logdir(self):
    method output_diagnostics (line 153) | def output_diagnostics(self, epoch):
    method update_plots (line 179) | def update_plots(self, epoch):
    method train (line 190) | def train(self):

FILE: droid/training/models/policy_network.py
  class Residual (line 7) | class Residual(nn.Module):
    method __init__ (line 8) | def __init__(self, in_channels, num_hiddens, num_residual_hiddens):
    method forward (line 24) | def forward(self, x):
  class ResidualStack (line 28) | class ResidualStack(nn.Module):
    method __init__ (line 29) | def __init__(self, in_channels, num_hiddens, num_residual_layers, num_...
    method forward (line 36) | def forward(self, x):
  class Encoder (line 42) | class Encoder(nn.Module):
    method __init__ (line 43) | def __init__(self, in_channels, num_hiddens, num_residual_layers, num_...
    method forward (line 61) | def forward(self, inputs):
  class ImagePolicy (line 79) | class ImagePolicy(nn.Module):
    method __init__ (line 80) | def __init__(
    method create_camera_encoder (line 115) | def create_camera_encoder(self, input_dim):
    method create_fully_connected (line 141) | def create_fully_connected(self, input_dim, output_dim, num_layers, nu...
    method initialize_networks (line 162) | def initialize_networks(self, timestep):
    method compute_loss (line 197) | def compute_loss(self, timestep):
    method encode_timestep (line 205) | def encode_timestep(self, timestep):
    method forward (line 252) | def forward(self, timestep):
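
Example — Residual and ResidualStack follow the standard VQ-VAE-style residual CNN pattern, and the listed constructor arguments match it. A sketch of the residual block under that assumption:

    import torch
    from torch import nn

    class Residual(nn.Module):
        def __init__(self, in_channels, num_hiddens, num_residual_hiddens):
            super().__init__()
            # ReLU -> 3x3 conv -> ReLU -> 1x1 conv, added back to the input.
            self._block = nn.Sequential(
                nn.ReLU(),
                nn.Conv2d(in_channels, num_residual_hiddens,
                          kernel_size=3, padding=1, bias=False),
                nn.ReLU(),
                nn.Conv2d(num_residual_hiddens, num_hiddens,
                          kernel_size=1, bias=False),
            )

        def forward(self, x):
            return x + self._block(x)

    x = torch.randn(1, 64, 32, 32)
    print(Residual(64, 64, 32)(x).shape)  # torch.Size([1, 64, 32, 32])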

FILE: droid/trajectory_utils/misc.py
  function collect_trajectory (line 19) | def collect_trajectory(
  function calibrate_camera (line 133) | def calibrate_camera(
  function replay_trajectory (line 275) | def replay_trajectory(
  function load_trajectory (line 318) | def load_trajectory(
  function visualize_timestep (line 394) | def visualize_timestep(timestep, max_width=1000, max_height=500, aspect_...
  function visualize_trajectory (line 446) | def visualize_trajectory(

FILE: droid/trajectory_utils/trajectory_reader.py
  function create_video_file (line 7) | def create_video_file(suffix=".mp4", byte_contents=None):
  function get_hdf5_length (line 20) | def get_hdf5_length(hdf5_file, keys_to_ignore=[]):
  function load_hdf5_to_dict (line 42) | def load_hdf5_to_dict(hdf5_file, index, keys_to_ignore=[]):
  class TrajectoryReader (line 60) | class TrajectoryReader:
    method __init__ (line 61) | def __init__(self, filepath, read_images=True):
    method length (line 69) | def length(self):
    method read_timestep (line 72) | def read_timestep(self, index=None, keys_to_ignore=[]):
    method _uncompress_images (line 96) | def _uncompress_images(self):
    method close (line 115) | def close(self):
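
Example — a hypothetical read loop over a saved trajectory.h5, based only on the method names above; the timestep structure (a nested dict mirroring the HDF5 layout) is an assumption.

    from droid.trajectory_utils.trajectory_reader import TrajectoryReader

    reader = TrajectoryReader("/path/to/trajectory.h5", read_images=True)
    for i in range(reader.length()):
        timestep = reader.read_timestep(index=i)
        # ... consume the (assumed) nested observation/action dict ...
    reader.close()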

FILE: droid/trajectory_utils/trajectory_writer.py
  function write_dict_to_hdf5 (line 14) | def write_dict_to_hdf5(hdf5_file, data_dict, keys_to_ignore=["image", "d...
  class TrajectoryWriter (line 47) | class TrajectoryWriter:
    method __init__ (line 48) | def __init__(self, filepath, metadata=None, exists_ok=False, save_imag...
    method write_timestep (line 68) | def write_timestep(self, timestep):
    method _update_metadata (line 73) | def _update_metadata(self, metadata):
    method _write_from_queue (line 77) | def _write_from_queue(self, writer, queue):
    method _update_video_files (line 86) | def _update_video_files(self, timestep):
    method create_video_file (line 107) | def create_video_file(self, video_id, suffix):
    method close (line 112) | def close(self, metadata=None):
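
Example — the mirror-image write path; write_timestep presumably queues nested timestep dicts that a background worker flushes to HDF5 (the class exposes _write_from_queue). A usage sketch with a hypothetical episode:

    from droid.trajectory_utils.trajectory_writer import TrajectoryWriter

    episode = [{"observation": {"timestamp": i}, "action": [0.0] * 7}
               for i in range(10)]  # hypothetical timesteps

    writer = TrajectoryWriter("/tmp/trajectory.h5", metadata={"user": "test"})
    for timestep in episode:
        writer.write_timestep(timestep)
    writer.close(metadata={"success": True})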

FILE: droid/user_interface/data_collector.py
  class DataCollecter (line 18) | class DataCollecter:
    method __init__ (line 19) | def __init__(self, env, controller, policy=None, save_data=True, save_...
    method reset_robot (line 49) | def reset_robot(self, randomize=False):
    method get_user_feedback (line 54) | def get_user_feedback(self):
    method enable_advanced_calibration (line 58) | def enable_advanced_calibration(self):
    method disable_advanced_calibration (line 62) | def disable_advanced_calibration(self):
    method set_calibration_mode (line 66) | def set_calibration_mode(self, cam_id):
    method set_trajectory_mode (line 69) | def set_trajectory_mode(self):
    method collect_trajectory (line 72) | def collect_trajectory(self, info=None, practice=False, reset_robot=Tr...
    method calibrate_camera (line 116) | def calibrate_camera(self, cam_id, reset_robot=True):
    method check_calibration_info (line 131) | def check_calibration_info(self, remove_hand_camera=False):
    method get_gui_imgs (line 137) | def get_gui_imgs(self, obs):
    method get_camera_feed (line 148) | def get_camera_feed(self):
    method change_trajectory_status (line 158) | def change_trajectory_status(self, success=False):

FILE: droid/user_interface/eval_gui.py
  class Condition (line 33) | class Condition:
  class GoalCondPolicy (line 38) | class GoalCondPolicy(ABC):
    method __init__ (line 39) | def __init__(self):
    method load_goal_imgs (line 43) | def load_goal_imgs(self, img_dict):
    method load_lang (line 51) | def load_lang(self, text):
  class EvalGUI (line 58) | class EvalGUI(ctk.CTk):
    method __init__ (line 60) | def __init__(self, policy, env=None, eval_dir=None, fullscreen=False):
    method show_frame (line 150) | def show_frame(self, frame_id, refresh_page=True, wait=False):
    method swap_img_order (line 171) | def swap_img_order(self, i, j):
    method set_img (line 174) | def set_img(self, i, widget=None, width=None, height=None, use_camera_...
    method get_goal_img_snapshots (line 187) | def get_goal_img_snapshots(self, idxs=[]):
    method fetch_goal_directories (line 206) | def fetch_goal_directories(self):
    method update_time_index (line 217) | def update_time_index(self):
    method robot_reset (line 222) | def robot_reset(self, event):
    method refresh_enter_variable (line 229) | def refresh_enter_variable(self):
    method listen_for_robot_info (line 233) | def listen_for_robot_info(self):
    method update_camera_feed (line 253) | def update_camera_feed(self, sleep=0.05):
  class EvalConfigurationPage (line 262) | class EvalConfigurationPage(ctk.CTkFrame):
    method __init__ (line 263) | def __init__(self, parent, controller):
    method toggle_randomize_btn (line 336) | def toggle_randomize_btn(self):
    method toggle_save_btn (line 339) | def toggle_save_btn(self):
    method update_goal_radio_btns (line 342) | def update_goal_radio_btns(self):
    method goal_img_changed (line 357) | def goal_img_changed(self):
    method load_goal_imgs_from_dir (line 361) | def load_goal_imgs_from_dir(self):
    method place_image_gc_elements (line 376) | def place_image_gc_elements(self):
    method forget_image_gc_elements (line 382) | def forget_image_gc_elements(self):
    method toggle_text_box (line 388) | def toggle_text_box(self):
    method toggle_capture_goal (line 394) | def toggle_capture_goal(self):
    method moniter_keys (line 400) | def moniter_keys(self, event):
    method practice_robot (line 409) | def practice_robot(self):
    method eval_robot (line 413) | def eval_robot(self):
  class CaptureGoal (line 425) | class CaptureGoal(ctk.CTkFrame):
    method __init__ (line 426) | def __init__(self, parent, controller):
    method is_page_inactive (line 506) | def is_page_inactive(self):
    method press_A (line 511) | def press_A(self, event):
    method press_B (line 516) | def press_B(self, event):
    method update_image_grid (line 521) | def update_image_grid(self, i):
    method update_camera_feed (line 527) | def update_camera_feed(self, i, w_coeff=1.0, h_coeff=1.0):
    method moniter_keys (line 541) | def moniter_keys(self, event):
    method initialize_page (line 552) | def initialize_page(self):
    method collect_trajectory (line 578) | def collect_trajectory(self):
    method update_timer (line 589) | def update_timer(self, start_time):
    method end_trajectory (line 611) | def end_trajectory(self):
    method set_home_frame (line 626) | def set_home_frame(self, frame):
    method set_mode (line 629) | def set_mode(self, mode):
    method edit_trajectory (line 636) | def edit_trajectory(self, save):

FILE: droid/user_interface/gui.py
  class RobotGUI (line 26) | class RobotGUI(tk.Tk):
    method __init__ (line 27) | def __init__(self, robot=None, fullscreen=False, right_controller=True):
    method show_frame (line 106) | def show_frame(self, frame_id, refresh_page=True, wait=False):
    method swap_img_order (line 127) | def swap_img_order(self, i, j):
    method set_img (line 130) | def set_img(self, i, widget=None, width=None, height=None, use_camera_...
    method update_time_index (line 143) | def update_time_index(self):
    method robot_reset (line 148) | def robot_reset(self, event):
    method refresh_enter_variable (line 155) | def refresh_enter_variable(self):
    method listen_for_robot_info (line 159) | def listen_for_robot_info(self):
    method update_camera_feed (line 179) | def update_camera_feed(self, sleep=0.05):
  class LoginPage (line 189) | class LoginPage(tk.Frame):
    method __init__ (line 190) | def __init__(self, parent, controller):
    method click_yes (line 267) | def click_yes(self):
    method click_no (line 270) | def click_no(self):
    method check_completeness (line 273) | def check_completeness(self):
  class RobotResetPage (line 293) | class RobotResetPage(tk.Frame):
    method __init__ (line 294) | def __init__(self, parent, controller):
    method launch_page (line 304) | def launch_page(self):
    method set_home_frame (line 308) | def set_home_frame(self, frame):
  class CanRobotResetPage (line 312) | class CanRobotResetPage(tk.Frame):
    method __init__ (line 313) | def __init__(self, parent, controller):
    method set_next_page (line 327) | def set_next_page(self, page):
    method moniter_keys (line 330) | def moniter_keys(self, event):
  class ControllerOffPage (line 337) | class ControllerOffPage(tk.Frame):
    method __init__ (line 338) | def __init__(self, parent, controller):
    method moniter_keys (line 349) | def moniter_keys(self, event):
  class CalibrationPage (line 355) | class CalibrationPage(tk.Frame):
    method __init__ (line 356) | def __init__(self, parent, controller):
    method change_calibration_mode (line 428) | def change_calibration_mode(self, advanced_on):
    method calibrate_camera (line 438) | def calibrate_camera(self, cam_id):
    method initialize_page (line 446) | def initialize_page(self):
    method exit_page (line 460) | def exit_page(self):
  class IncompleteCalibration (line 465) | class IncompleteCalibration(tk.Frame):
    method __init__ (line 466) | def __init__(self, parent, controller):
  class OldCalibration (line 492) | class OldCalibration(tk.Frame):
    method __init__ (line 493) | def __init__(self, parent, controller):
  class OldScene (line 532) | class OldScene(tk.Frame):
    method __init__ (line 533) | def __init__(self, parent, controller):
  class PreferredTasksPage (line 572) | class PreferredTasksPage(tk.Frame):
    method __init__ (line 573) | def __init__(self, parent, controller):
    method moniter_keys (line 622) | def moniter_keys(self, event):
    method initialize_page (line 631) | def initialize_page(self):
  class SceneConfigurationPage (line 635) | class SceneConfigurationPage(tk.Frame):
    method __init__ (line 636) | def __init__(self, parent, controller):
    method moniter_keys (line 756) | def moniter_keys(self, event):
    method finish_setup (line 774) | def finish_setup(self):
    method mark_new_scene (line 802) | def mark_new_scene(self):
    method get_new_tasks (line 807) | def get_new_tasks(self):
    method practice_robot (line 813) | def practice_robot(self):
    method initialize_page (line 817) | def initialize_page(self):
  class RequestedBehaviorPage (line 822) | class RequestedBehaviorPage(tk.Frame):
    method __init__ (line 823) | def __init__(self, parent, controller):
    method change_trajectory_status (line 899) | def change_trajectory_status(self, success):
    method resample (line 905) | def resample(self, e):
    method initialize_page (line 910) | def initialize_page(self):
    method sample_new_task (line 917) | def sample_new_task(self):
    method sample_compositional_task (line 927) | def sample_compositional_task(self):
    method sample_single_task (line 932) | def sample_single_task(self):
    method get_task_weight (line 945) | def get_task_weight(self, task):
    method start_trajectory (line 950) | def start_trajectory(self, event):
    method keep_last_task (line 955) | def keep_last_task(self):
  class SceneChangesPage (line 959) | class SceneChangesPage(tk.Frame):
    method __init__ (line 960) | def __init__(self, parent, controller):
    method show_camera_feed (line 1004) | def show_camera_feed(self, event):
    method sample_change (line 1011) | def sample_change(self):
    method initialize_page (line 1022) | def initialize_page(self):
  class CameraPage (line 1027) | class CameraPage(tk.Frame):
    method __init__ (line 1028) | def __init__(self, parent, controller):
    method update_image_grid (line 1111) | def update_image_grid(self, i):
    method update_camera_feed (line 1125) | def update_camera_feed(self, i, w_coeff=1.0, h_coeff=1.0):
    method moniter_keys (line 1139) | def moniter_keys(self, event):
    method initialize_page (line 1150) | def initialize_page(self):
    method collect_trajectory (line 1179) | def collect_trajectory(self):
    method update_timer (line 1185) | def update_timer(self, start_time):
    method end_trajectory (line 1207) | def end_trajectory(self):
    method set_home_frame (line 1235) | def set_home_frame(self, frame):
    method set_mode (line 1238) | def set_mode(self, mode):
    method edit_trajectory (line 1241) | def edit_trajectory(self, save):
  class EnlargedImagePage (line 1249) | class EnlargedImagePage(tk.Frame):
    method __init__ (line 1250) | def __init__(self, parent, controller):
    method set_image_index (line 1267) | def set_image_index(self, img_index):
    method return_to_camera_grid (line 1270) | def return_to_camera_grid(self, e):
    method update_camera_feed (line 1275) | def update_camera_feed(self):
  class CalibrateCamera (line 1286) | class CalibrateCamera(tk.Frame):
    method __init__ (line 1287) | def __init__(self, parent, controller, num_views=2):
    method press_A (line 1337) | def press_A(self, event):
    method press_B (line 1348) | def press_B(self, event):
    method collect_trajectory (line 1356) | def collect_trajectory(self):
    method end_trajectory (line 1368) | def end_trajectory(self, success):
    method set_camera_id (line 1384) | def set_camera_id(self, cam_id):
    method update_camera_feed (line 1399) | def update_camera_feed(self, i, w_coeff=1.0, h_coeff=1.0):

FILE: droid/user_interface/misc.py
  function load_gui_info (line 11) | def load_gui_info():
  function update_gui_info (line 19) | def update_gui_info(user=None, building=None, scene_id=None):
  function generate_scene_id (line 35) | def generate_scene_id():

FILE: scripts/convert/svo_to_depth.py
  class DROIDConversionConfig (line 60) | class DROIDConversionConfig:
  function postprocess (line 87) | def postprocess(cfg: DROIDConversionConfig) -> None:

FILE: scripts/convert/svo_to_mp4.py
  class DROIDConversionConfig (line 53) | class DROIDConversionConfig:
  function postprocess (line 78) | def postprocess(cfg: DROIDConversionConfig) -> None:

FILE: scripts/convert/to_tfrecord.py
  function flatten (line 97) | def flatten(x):
  function tensor_feature (line 108) | def tensor_feature(value):
  function resize_and_encode (line 112) | def resize_and_encode(image, size):
  function create_tfrecord (line 144) | def create_tfrecord(paths, output_path, tqdm_func, global_tqdm):
  function main (line 178) | def main(_):
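
Example — tensor_feature is a common TFRecord idiom: serialize an arbitrary tensor into a bytes feature. A sketch of that idiom as it would plug into create_tfrecord (this is the standard pattern, not necessarily the repository's exact code):

    import tensorflow as tf

    def tensor_feature(value):
        # Serialize any tensor to bytes and wrap it as a tf.train.Feature.
        return tf.train.Feature(
            bytes_list=tf.train.BytesList(
                value=[tf.io.serialize_tensor(value).numpy()]
            )
        )

    example = tf.train.Example(features=tf.train.Features(
        feature={"action": tensor_feature(tf.constant([0.0] * 7))}
    ))
    print(len(example.SerializeToString()), "bytes")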

FILE: scripts/evaluation/evaluate_rt1.py
  function main (line 6) | def main():

FILE: scripts/labeling/label_data.py
  function load_task_info (line 15) | def load_task_info():
  function update_task_label (line 23) | def update_task_label(folderpath, traj_id):
  function label_trajectory (line 32) | def label_trajectory(
  function check_trajectory (line 103) | def check_trajectory(
  function filter_func (line 175) | def filter_func(h5_metadata):

FILE: scripts/postprocess.py
  class DROIDUploadConfig (line 55) | class DROIDUploadConfig:
  function postprocess (line 80) | def postprocess(cfg: DROIDUploadConfig) -> None:

FILE: scripts/tests/memory_leak.py
  function example_script (line 13) | def example_script():
  function traj_sampling_script (line 80) | def traj_sampling_script():
  function load_random_traj_script (line 84) | def load_random_traj_script():
  function camera_wrapper_script (line 98) | def camera_wrapper_script():
  function single_reader_script (line 113) | def single_reader_script():

FILE: scripts/training/train_policy.py
  function filter_func (line 10) | def filter_func(h5_metadata, put_in_only=False):
Condensed preview — 151 files, each showing path, character count, and a content snippet; the full structured content totals 20,007K characters.
[
  {
    "path": ".docker/README.md",
    "chars": 2149,
    "preview": "# Overview\n\nIn order to simplify the setup and deployment of DROID across different machines we supply Dockerfiles for b"
  },
  {
    "path": ".docker/laptop/Dockerfile.laptop",
    "chars": 4106,
    "preview": "FROM nvidia/cuda:12.1.0-devel-ubuntu22.04\n\n# build args\nARG ROBOT_TYPE=${ROBOT_TYPE}\nARG LIBFRANKA_VERSION=${LIBFRANKA_V"
  },
  {
    "path": ".docker/laptop/Dockerfile.laptop_fr3",
    "chars": 3982,
    "preview": "FROM nvidia/cuda:12.1.0-devel-ubuntu22.04\n\n# set robot parameters\nENV ROBOT_TYPE=fr3\nENV LIBFRANKA_VERSION=0.10.0\nENV RO"
  },
  {
    "path": ".docker/laptop/Dockerfile.laptop_panda",
    "chars": 4021,
    "preview": "FROM nvidia/cuda:12.1.0-devel-ubuntu22.04\n\n# set robot parameters\nENV ROBOT_TYPE=panda\nENV LIBFRANKA_VERSION=0.9.0\nENV R"
  },
  {
    "path": ".docker/laptop/docker-compose-laptop-data-upload.yaml",
    "chars": 1175,
    "preview": "version: \"3\"\n\nservices: \n  laptop_setup:\n    image: ghcr.io/droid-dataset/droid_laptop:${ROBOT_TYPE}\n    environment:\n  "
  },
  {
    "path": ".docker/laptop/docker-compose-laptop.yaml",
    "chars": 1158,
    "preview": "version: \"3\"\n\nservices: \n  laptop_setup:\n    image: ghcr.io/droid-dataset/droid_laptop:${ROBOT_TYPE}\n    environment:\n  "
  },
  {
    "path": ".docker/laptop/entrypoint.sh",
    "chars": 114,
    "preview": "#!/bin/bash\n\n# activate conda\nsource ~/miniconda3/bin/activate\nconda activate robot\n\n# run user command\nexec \"$@\"\n"
  },
  {
    "path": ".docker/nuc/Dockerfile.nuc",
    "chars": 3176,
    "preview": "# pull ubuntu base image\nFROM ubuntu:bionic\n\n# build args\nARG ROBOT_TYPE=${ROBOT_TYPE}\nARG LIBFRANKA_VERSION=${LIBFRANKA"
  },
  {
    "path": ".docker/nuc/Dockerfile.nuc_fr3",
    "chars": 3049,
    "preview": "# pull ubuntu base image\nFROM ubuntu:bionic\n\n# set robot parameters\nENV ROBOT_TYPE=fr3\nENV LIBFRANKA_VERSION=0.10.0\nENV "
  },
  {
    "path": ".docker/nuc/Dockerfile.nuc_panda",
    "chars": 3050,
    "preview": "# pull ubuntu base image\nFROM ubuntu:bionic\n\n# set robot parameters\nENV ROBOT_TYPE=panda\nENV LIBFRANKA_VERSION=0.9.0\nENV"
  },
  {
    "path": ".docker/nuc/docker-compose-nuc.yaml",
    "chars": 1110,
    "preview": "version: \"3\"\n\nservices: \n  setup_nuc:\n    image: ghcr.io/droid-dataset/droid_nuc:${ROBOT_TYPE}\n    environment:\n      RO"
  },
  {
    "path": ".github/workflows/build_container_laptop_fr3.yaml",
    "chars": 1535,
    "preview": "name: build laptop fr3 container and host on ghcr.io\n\non:\n  push:\n    branches: ['main']\n    paths:\n      - .gitmodules\n"
  },
  {
    "path": ".github/workflows/build_container_laptop_panda.yaml",
    "chars": 1537,
    "preview": "name: build laptop panda container and host on ghcr.io\n\non:\n  push:\n    branches: ['main']\n    paths:\n      - .gitmodule"
  },
  {
    "path": ".github/workflows/build_container_nuc_fr3.yaml",
    "chars": 1262,
    "preview": "name: build nuc fr3 container and host on ghcr.io\n\non:\n  push:\n    branches: ['main']\n    paths:\n      - .gitmodules\n   "
  },
  {
    "path": ".github/workflows/build_container_nuc_panda.yaml",
    "chars": 1270,
    "preview": "name: build nuc panda container and host on ghcr.io\n\non:\n  push:\n    branches: ['main']\n    paths:\n      - .gitmodules\n "
  },
  {
    "path": ".github/workflows/pages.yaml",
    "chars": 1472,
    "preview": "name: Deploy Documentation\n\non:\n  push:\n    branches: \n      - \"main\"\n    paths:\n      - \"docs/**\"\n  workflow_dispatch:\n"
  },
  {
    "path": ".github/workflows/pre-commit.yaml",
    "chars": 406,
    "preview": "name: pre-commit\n\non:\n  workflow_dispatch:\n  schedule:\n    - cron: \"0 9 * * 1\"\n  pull_request:\n  push:\n    branches: [ma"
  },
  {
    "path": ".gitignore",
    "chars": 11,
    "preview": "*.DS_Store\n"
  },
  {
    "path": ".gitmodules",
    "chars": 215,
    "preview": "[submodule \"droid/oculus_reader\"]\n\tpath = droid/oculus_reader\n\turl = git@github.com:rail-berkeley/oculus_reader.git\n[sub"
  },
  {
    "path": ".pre-commit-config.yaml",
    "chars": 867,
    "preview": "# See https://pre-commit.com for more information\n# See https://pre-commit.com/hooks.html for more hooks\nexclude: \".git\""
  },
  {
    "path": "Makefile",
    "chars": 491,
    "preview": ".PHONY: help check autoformat\n.DEFAULT: help\n\n# Generates a useful overview/help message for various make features - add"
  },
  {
    "path": "README.md",
    "chars": 1676,
    "preview": "# The DROID Robot Platform\n\nThis repository contains the code for setting up your DROID robot platform and using it to c"
  },
  {
    "path": "config/fr3/franka_hardware.yaml",
    "chars": 2324,
    "preview": "hz: 1000\nuse_real_time: true\nexec: franka_panda_client\n\nrobot_client:\n  _target_: polymetis.robot_client.executable_robo"
  },
  {
    "path": "config/fr3/franka_panda.yaml",
    "chars": 590,
    "preview": "# @package _group_\nrobot_description_path: \"franka_panda/panda_arm.urdf\"\ncontrolled_joints:  [0, 1, 2, 3, 4, 5, 6]\nnum_d"
  },
  {
    "path": "config/panda/franka_hardware.yaml",
    "chars": 2324,
    "preview": "hz: 1000\nuse_real_time: true\nexec: franka_panda_client\n\nrobot_client:\n  _target_: polymetis.robot_client.executable_robo"
  },
  {
    "path": "config/panda/franka_panda.yaml",
    "chars": 591,
    "preview": "# @package _group_\nrobot_description_path: \"franka_panda/panda_arm.urdf\"\ncontrolled_joints:  [0, 1, 2, 3, 4, 5, 6]\nnum_d"
  },
  {
    "path": "docs/Gemfile",
    "chars": 287,
    "preview": "source 'https://rubygems.org'\n\ngem \"jekyll\", \"~> 4.3.3\" # installed by `gem jekyll`\n# gem \"webrick\"        # required wh"
  },
  {
    "path": "docs/_config.yml",
    "chars": 327,
    "preview": "title: DROID Docs\ndescription: Developer documentation for recreating the DROID platform setup.\ntheme: just-the-docs\n\nur"
  },
  {
    "path": "docs/_sass/color_schemes/droid.scss",
    "chars": 216,
    "preview": "@import \"./color_schemes/light\";\n\n// add custom colours here\n$cardinal-red: #8C1515; // can be updated assumed primary a"
  },
  {
    "path": "docs/contribution-guidelines.md",
    "chars": 724,
    "preview": "---\nlayout: default\ntitle: Contribution Guidelines\nnav_order: 6\n---\n\n# Contribution Guidelines\n\nCommunity contributions "
  },
  {
    "path": "docs/example-workflows/calibrating-cameras.md",
    "chars": 2131,
    "preview": "---\nlayout: default\ntitle: Calibrating Cameras\nparent: Example Workflows\nnav_order: 2\n---\n\n# Prerequisites\n\nThis guide a"
  },
  {
    "path": "docs/example-workflows/data-collection.md",
    "chars": 3574,
    "preview": "---\nlayout: default\ntitle: Collecting Data\nparent: Example Workflows\nnav_order: 3\n---\n\n# Prerequisites\n\n**Important:** B"
  },
  {
    "path": "docs/example-workflows/example-workflows.md",
    "chars": 1061,
    "preview": "---\nlayout: default\ntitle: Example Workflows\nnav_order: 5\nhas_children: true\nhas_toc: false\npermalink: /docs/example-wor"
  },
  {
    "path": "docs/example-workflows/teleoperation.md",
    "chars": 1872,
    "preview": "---\nlayout: default\ntitle: Teleoperating the Robot\nparent: Example Workflows\nnav_order: 1\n---\n\n# Prerequisites\n\nThis gui"
  },
  {
    "path": "docs/example-workflows/training-policies.md",
    "chars": 293,
    "preview": "---\nlayout: default\ntitle: Training Policies\nparent: Example Workflows\nnav_order: 4\n---\n\n# Training Policies\n\n* Please f"
  },
  {
    "path": "docs/hardware-setup/assembly.md",
    "chars": 23472,
    "preview": "---\nlayout: default\ntitle: Assembly\nparent: Hardware Setup\nnav_order: 2\n---\n\nThis assembly guide starts from constructin"
  },
  {
    "path": "docs/hardware-setup/hardware-setup.md",
    "chars": 1448,
    "preview": "---\nlayout: default\ntitle: Hardware Setup\nnav_order: 2\nhas_children: true\nhas_toc: false\npermalink: /docs/hardware-setup"
  },
  {
    "path": "docs/hardware-setup/shopping-list.md",
    "chars": 8593,
    "preview": "---\nlayout: default\ntitle: Shopping List\nparent: Hardware Setup\nnav_order: 1\n---\n\n# Shopping List\n\n## Approximate Total "
  },
  {
    "path": "docs/index.md",
    "chars": 1367,
    "preview": "---\nlayout: default\ntitle: Introduction\nnav_order: 1\n---\n\n# 🤖 **D**istributed **RO**bot **I**nteraction **D**ataset\n\n![]"
  },
  {
    "path": "docs/software-setup/docker.md",
    "chars": 10226,
    "preview": "---\nlayout: default\ntitle: Running Application through Docker\nhas_children: false\nnav_order: 1\nparent: Software Setup\n--"
  },
  {
    "path": "docs/software-setup/host-installation.md",
    "chars": 18244,
    "preview": "---\nlayout: default\ntitle: Running Application on Host\nhas_children: false\nnav_order: 2\nparent: Software Setup\n---\n\nThis"
  },
  {
    "path": "docs/software-setup/software-setup.md",
    "chars": 1646,
    "preview": "---\nlayout: default\ntitle: Software Setup\nhas_children: true\nhas_toc: false\nnav_order: 3\npermalink: /docs/software-setup"
  },
  {
    "path": "docs/the-droid-dataset.md",
    "chars": 6673,
    "preview": "---\nlayout: default\ntitle: The DROID Dataset\nnav_order: 4\n---\n\n## 🔍 Exploring the Dataset\n\nIt is possible to interactive"
  },
  {
    "path": "droid/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "droid/calibration/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "droid/calibration/calibration_info.json",
    "chars": 1498,
    "preview": "{\"26405488_left\": {\"pose\": [0.1113325872574514, 0.8207481722451926, 0.04550905451430457, -1.64633974845365, -0.178570163"
  },
  {
    "path": "droid/calibration/calibration_utils.py",
    "chars": 23903,
    "preview": "import json\nimport os\nimport time\nfrom collections import defaultdict\n\nimport cv2\nimport matplotlib.pyplot as plt\nimport"
  },
  {
    "path": "droid/camera_utils/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "droid/camera_utils/camera_readers/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "droid/camera_utils/camera_readers/zed_camera.py",
    "chars": 8518,
    "preview": "from copy import deepcopy\n\nimport cv2\nimport numpy as np\n\nfrom droid.misc.parameters import hand_camera_id\nfrom droid.mi"
  },
  {
    "path": "droid/camera_utils/info.py",
    "chars": 713,
    "preview": "from droid.misc.parameters import *\n\ncamera_type_dict = {\n    hand_camera_id: 0,\n    varied_camera_1_id: 1,\n    varied_c"
  },
  {
    "path": "droid/camera_utils/recording_readers/mp4_reader.py",
    "chars": 3691,
    "preview": "import json\nimport os\nfrom copy import deepcopy\n\nimport cv2\n\nresize_func_map = {\"cv2\": cv2.resize, None: None}\n\n\nclass M"
  },
  {
    "path": "droid/camera_utils/recording_readers/svo_reader.py",
    "chars": 5329,
    "preview": "from copy import deepcopy\n\nimport cv2\n\ntry:\n    import pyzed.sl as sl\nexcept ModuleNotFoundError:\n    print(\"WARNING: Yo"
  },
  {
    "path": "droid/camera_utils/wrappers/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "droid/camera_utils/wrappers/multi_camera_wrapper.py",
    "chars": 3299,
    "preview": "import os\nimport random\nfrom collections import defaultdict\n\nfrom droid.camera_utils.camera_readers.zed_camera import ga"
  },
  {
    "path": "droid/camera_utils/wrappers/recorded_multi_camera_wrapper.py",
    "chars": 2219,
    "preview": "import glob\nimport random\nfrom collections import defaultdict\n\nfrom droid.camera_utils.info import get_camera_type\nfrom "
  },
  {
    "path": "droid/controllers/oculus_controller.py",
    "chars": 8037,
    "preview": "import time\n\nimport numpy as np\nfrom oculus_reader.reader import OculusReader\n\nfrom droid.misc.subprocess_utils import r"
  },
  {
    "path": "droid/data_loading/data_loader.py",
    "chars": 2692,
    "preview": "from torch.utils.data import DataLoader\nfrom torch.utils.data.datapipes.iter import Shuffler\n\nfrom droid.data_loading.da"
  },
  {
    "path": "droid/data_loading/dataset.py",
    "chars": 652,
    "preview": "import torch\nfrom torch.utils.data import IterableDataset\n\n\nclass TrajectoryDataset(IterableDataset):\n    def __init__(s"
  },
  {
    "path": "droid/data_loading/tf_data_loader.py",
    "chars": 3827,
    "preview": "from functools import partial\nfrom typing import Dict\n\nimport tensorflow as tf\n\n\ndef get_type_spec(path: str) -> Dict[st"
  },
  {
    "path": "droid/data_loading/trajectory_sampler.py",
    "chars": 3770,
    "preview": "import os\n\nimport h5py\nimport numpy as np\n\nfrom droid.data_processing.timestep_processing import TimestepProcesser\nfrom "
  },
  {
    "path": "droid/data_processing/data_transforms.py",
    "chars": 1584,
    "preview": "import cv2\nfrom torchvision import transforms as T\n\n\nclass ImageTransformer:\n    def __init__(\n        self, remove_alph"
  },
  {
    "path": "droid/data_processing/timestep_processing.py",
    "chars": 5301,
    "preview": "from collections import defaultdict\nfrom copy import deepcopy\nfrom itertools import chain\n\nimport numpy as np\n\nfrom droi"
  },
  {
    "path": "droid/evaluation/eval_launcher.py",
    "chars": 2878,
    "preview": "import json\nimport os\n\nimport numpy as np\nimport torch\n\nfrom droid.controllers.oculus_controller import VRPolicy\nfrom dr"
  },
  {
    "path": "droid/evaluation/eval_launcher_robomimic.py",
    "chars": 9469,
    "preview": "import json\nimport os\nimport numpy as np\nimport torch\nfrom collections import OrderedDict\nfrom copy import deepcopy\n\nfro"
  },
  {
    "path": "droid/evaluation/policy_wrapper.py",
    "chars": 8667,
    "preview": "import numpy as np\nimport torch\nfrom collections import deque\n\nfrom droid.data_processing.timestep_processing import Tim"
  },
  {
    "path": "droid/evaluation/rt1_wrapper.py",
    "chars": 4100,
    "preview": "from pathlib import Path\nfrom PIL import Image\nimport tensorflow as tf\nimport numpy as np\nfrom tf_agents.policies import"
  },
  {
    "path": "droid/franka/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "droid/franka/launch_gripper.sh",
    "chars": 180,
    "preview": "source ~/anaconda3/etc/profile.d/conda.sh\nconda activate polymetis-local\npkill -9 gripper\nchmod a+rw /dev/ttyUSB0\nlaunch"
  },
  {
    "path": "droid/franka/launch_robot.sh",
    "chars": 163,
    "preview": "source ~/anaconda3/etc/profile.d/conda.sh\nconda activate polymetis-local\npkill -9 run_server\npkill -9 franka_panda_cl\nla"
  },
  {
    "path": "droid/franka/robot.py",
    "chars": 11102,
    "preview": "# ROBOT SPECIFIC IMPORTS\nimport os\nimport time\n\nimport grpc\nimport numpy as np\nimport torch\nfrom polymetis import Grippe"
  },
  {
    "path": "droid/misc/parameters.py",
    "chars": 566,
    "preview": "import os\nfrom cv2 import aruco\n\n# Robot Params #\nnuc_ip = \"\"\nrobot_ip = \"\"\nlaptop_ip = \"\"\nsudo_password = \"\"\nrobot_type"
  },
  {
    "path": "droid/misc/pointcloud_utils.py",
    "chars": 5625,
    "preview": "import cv2\nimport numpy as np\nimport open3d as o3d\nfrom scipy.spatial.transform import Rotation as R\n\n# VOXEL_SIZE = 0.0"
  },
  {
    "path": "droid/misc/server_interface.py",
    "chars": 2621,
    "preview": "import time\n\nimport numpy as np\nimport zerorpc\n\n\ndef attempt_n_times(function_list, max_attempts, sleep_time=0.1):\n    i"
  },
  {
    "path": "droid/misc/subprocess_utils.py",
    "chars": 595,
    "preview": "import multiprocessing\nimport subprocess\nimport threading\n\n\ndef run_terminal_command(command):\n    process = subprocess."
  },
  {
    "path": "droid/misc/time.py",
    "chars": 68,
    "preview": "import time\n\n\ndef time_ms():\n    return time.time_ns() // 1_000_000\n"
  },
  {
    "path": "droid/misc/transformations.py",
    "chars": 2417,
    "preview": "import numpy as np\nfrom scipy.spatial.transform import Rotation as R\n\n\n### Conversions ###\ndef quat_to_euler(quat, degre"
  },
  {
    "path": "droid/misc/version_control/1_0.json",
    "chars": 400,
    "preview": "{\"DROID Version\": \"1.0\",\n\"oculus_params\": {\n\t\"saving_target_info\": false,\n\t\"pos_action_gain\": 3,\n\t\"rot_action_gain\": 3\n\t"
  },
  {
    "path": "droid/misc/version_control/1_1.json",
    "chars": 380,
    "preview": "{\"DROID Version\": \"1.1\",\n\"oculus_params\": {\n\t\"saving_target_info\": true,\n\t\"pos_action_gain\": 5,\n\t\"rot_action_gain\": 2\n},"
  },
  {
    "path": "droid/misc/version_control/loader.py",
    "chars": 506,
    "preview": "import json\nimport os\n\n# Prepare Filepath #\ndir_path = os.path.dirname(os.path.realpath(__file__))\n\n\ndef load_version_in"
  },
  {
    "path": "droid/plotting/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "droid/plotting/analysis_func.py",
    "chars": 2504,
    "preview": "import os\nfrom collections import defaultdict\n\nimport h5py\nimport numpy as np\n\nfrom droid.plotting.misc import *\nfrom dr"
  },
  {
    "path": "droid/plotting/misc.py",
    "chars": 3657,
    "preview": "import datetime\nimport os\nimport time\n\nimport h5py\nimport numpy as np\nfrom dateutil.relativedelta import relativedelta\nf"
  },
  {
    "path": "droid/plotting/text.py",
    "chars": 1928,
    "preview": "user_to_lab = {\n    \"Alexander Khazatsky\": \"IRIS\",\n    \"Sasha Khazatsky\": \"IRIS\",\n    \"Ethan Foster\": \"IRIS\",\n    \"Emma "
  },
  {
    "path": "droid/postprocessing/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "droid/postprocessing/parse.py",
    "chars": 5381,
    "preview": "\"\"\"\nparse.py\n\nCore parsing logic -- takes a path to a raw demonstration directory (comprised of `trajectory.h5` and the "
  },
  {
    "path": "droid/postprocessing/schema.py",
    "chars": 4161,
    "preview": "\"\"\"\nschema.py\n\nSchema & processing functions for extracting and formatting metadata from an individual DROID trajectory "
  },
  {
    "path": "droid/postprocessing/stages.py",
    "chars": 13196,
    "preview": "\"\"\"\nstages.py\n\nFunctions capturing logic for the various postprocessing stages:\n    - Stage 1 :: \"Indexing\"  -->  Quickl"
  },
  {
    "path": "droid/postprocessing/util/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "droid/postprocessing/util/svo2depth.py",
    "chars": 5606,
    "preview": "\"\"\"\nsvo2mp4.py\n\nUtility scripts for using the ZED Python SDK and FFMPEG to convert raw `.svo` files to `.mp4` files (inc"
  },
  {
    "path": "droid/postprocessing/util/svo2mp4.py",
    "chars": 6274,
    "preview": "\"\"\"\nsvo2mp4.py\n\nUtility scripts for using the ZED Python SDK and FFMPEG to convert raw `.svo` files to `.mp4` files (inc"
  },
  {
    "path": "droid/postprocessing/util/validate.py",
    "chars": 2710,
    "preview": "\"\"\"\nvalidate.py\n\nHelper functions for validating almost every part of the postprocessing pipeline; the general principle"
  },
  {
    "path": "droid/robot_env.py",
    "chars": 4887,
    "preview": "from copy import deepcopy\n\nimport gym\nimport numpy as np\n\nfrom droid.calibration.calibration_utils import load_calibrati"
  },
  {
    "path": "droid/robot_ik/arm.py",
    "chars": 2259,
    "preview": "import os\n\nimport numpy as np\nfrom dm_control import mjcf\nfrom dm_robotics.moma.models import types\nfrom dm_robotics.mom"
  },
  {
    "path": "droid/robot_ik/franka/fr3.xml",
    "chars": 5593,
    "preview": "<mujoco>\n  <compiler angle=\"radian\" meshdir=\"./mesh\" texturedir=\"./texture\" />\n  <asset>\n    <mesh name=\"link0\" file=\"li"
  },
  {
    "path": "droid/robot_ik/franka/mesh/finger.obj",
    "chars": 65311,
    "preview": "# Blender 3.1.0\n# www.blender.org\nmtllib finger.mtl\no node1\nv -0.000900 0.009423 0.048367\nv -0.003461 0.009426 0.045455\n"
  },
  {
    "path": "droid/robot_ik/franka/mesh/hand.obj",
    "chars": 737320,
    "preview": "# Blender 3.1.0\n# www.blender.org\nmtllib hand.mtl\no node1\nv 0.000201 -0.096509 -0.019288\nv 0.001418 -0.093994 -0.020067\n"
  },
  {
    "path": "droid/robot_ik/franka/mesh/link0.obj",
    "chars": 2287454,
    "preview": "# Blender 3.1.0\n# www.blender.org\nmtllib link0.mtl\no node3\nv 0.038582 -0.041082 0.091530\nv 0.039578 -0.039506 0.099507\nv"
  },
  {
    "path": "droid/robot_ik/franka/mesh/link1.obj",
    "chars": 1237475,
    "preview": "# Blender 3.1.0\n# www.blender.org\nmtllib link1.mtl\no node0\nv 0.012617 -0.117214 -0.033973\nv 0.029343 -0.121457 0.005389\n"
  },
  {
    "path": "droid/robot_ik/franka/mesh/link2.obj",
    "chars": 1251845,
    "preview": "# Blender 3.1.0\n# www.blender.org\nmtllib link2.mtl\no node0\nv -0.049679 -0.194000 -0.003474\nv -0.046306 -0.194000 -0.0031"
  },
  {
    "path": "droid/robot_ik/franka/mesh/link3.obj",
    "chars": 1417908,
    "preview": "# Blender 3.1.0\n# www.blender.org\nmtllib link3.mtl\no node3\nv 0.051797 0.000500 0.045648\nv 0.047808 0.000500 0.042709\nv 0"
  },
  {
    "path": "droid/robot_ik/franka/mesh/link4.obj",
    "chars": 1462434,
    "preview": "# Blender 3.1.0\n# www.blender.org\nmtllib link4.mtl\no node0\nv -0.065453 0.008937 0.000500\nv -0.063289 0.007335 0.000500\nv"
  },
  {
    "path": "droid/robot_ik/franka/mesh/link5.obj",
    "chars": 2032965,
    "preview": "# Blender 3.1.0\n# www.blender.org\nmtllib link5.mtl\no node2\nv 0.023728 0.115957 -0.037857\nv 0.022891 0.115613 -0.037172\nv"
  },
  {
    "path": "droid/robot_ik/franka/mesh/link6.obj",
    "chars": 2273095,
    "preview": "# Blender 3.1.0\n# www.blender.org\nmtllib link6.mtl\no node0_001\nv -0.036565 -0.030980 -0.014800\nv 0.047505 -0.004112 -0.0"
  },
  {
    "path": "droid/robot_ik/franka/mesh/link7.obj",
    "chars": 1270360,
    "preview": "# Blender 3.1.0\n# www.blender.org\nmtllib link7.mtl\no node7\nv -0.038168 -0.034776 0.082568\nv -0.036436 -0.036503 0.089827"
  },
  {
    "path": "droid/robot_ik/franka/panda.xml",
    "chars": 5594,
    "preview": "<mujoco>\n  <compiler angle=\"radian\" meshdir=\"./mesh\" texturedir=\"./texture\" />\n  <asset>\n    <mesh name=\"link0\" file=\"li"
  },
  {
    "path": "droid/robot_ik/robot_ik_solver.py",
    "chars": 4737,
    "preview": "import numpy as np\nfrom dm_control import mjcf\nfrom dm_robotics.moma.effectors import arm_effector, cartesian_6d_velocit"
  },
  {
    "path": "droid/training/model_trainer.py",
    "chars": 6566,
    "preview": "import csv\nimport json\nimport os\nimport shutil\nimport time\nfrom collections import OrderedDict, defaultdict\n\nimport matp"
  },
  {
    "path": "droid/training/models/policy_network.py",
    "chars": 8639,
    "preview": "import torch\nimport torch.utils.data\nfrom torch import nn\nfrom torch.nn import functional as F\n\n\nclass Residual(nn.Modul"
  },
  {
    "path": "droid/trajectory_utils/misc.py",
    "chars": 17426,
    "preview": "import time\nfrom collections import defaultdict\nfrom copy import deepcopy\n\nimport cv2\nimport numpy as np\nfrom PIL import"
  },
  {
    "path": "droid/trajectory_utils/trajectory_reader.py",
    "chars": 3572,
    "preview": "import tempfile\n\nimport h5py\nimport imageio\n\n\ndef create_video_file(suffix=\".mp4\", byte_contents=None):\n    # Create Tem"
  },
  {
    "path": "droid/trajectory_utils/trajectory_writer.py",
    "chars": 4636,
    "preview": "import os\nimport tempfile\nfrom collections import defaultdict\nfrom copy import deepcopy\nfrom queue import Empty, Queue\n\n"
  },
  {
    "path": "droid/user_interface/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "droid/user_interface/data_collector.py",
    "chars": 6491,
    "preview": "import os\nimport time\nfrom copy import deepcopy\nfrom datetime import date\n\nimport cv2\nimport h5py\n\nimport droid.trajecto"
  },
  {
    "path": "droid/user_interface/eval_gui.py",
    "chars": 23380,
    "preview": "from abc import ABC, abstractmethod\nfrom datetime import date\nimport numpy as np\n\n\nfrom tkinter import *\nfrom droid.user"
  },
  {
    "path": "droid/user_interface/gui.py",
    "chars": 51187,
    "preview": "# Tkinter Imports #\nimport math\nimport random\nimport threading\nimport time\nimport tkinter as tk\nimport webbrowser\n\n# Fun"
  },
  {
    "path": "droid/user_interface/gui_info.json",
    "chars": 111,
    "preview": "{\"user\": \"Ethan Foster\", \"building\": \"Gates\", \"scene_id_timestamp\": 1686263566.9465497, \"scene_id\": 4823049285}"
  },
  {
    "path": "droid/user_interface/gui_parameters.py",
    "chars": 989,
    "preview": "# IMPORTANT VARIABLES #\nnew_task_prob = 0.1\ncompositional_task_prob = 0.1\nscene_change_prob = 0.2\nmove_robot_frequency ="
  },
  {
    "path": "droid/user_interface/misc.py",
    "chars": 958,
    "preview": "import json\nimport os\nimport time\n\nimport numpy as np\n\ndir_path = os.path.dirname(os.path.realpath(__file__))\ngui_info_f"
  },
  {
    "path": "droid/user_interface/text.py",
    "chars": 6775,
    "preview": "all_tasks = {\n    \"Articulated Tasks\": [\n        \"Press button (ex: light switch, elevators, microwave button)\",\n       "
  },
  {
    "path": "pyproject.toml",
    "chars": 1475,
    "preview": "[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n\n[project]\nname = \"droid\"\nauthors = [\n "
  },
  {
    "path": "scripts/README.md",
    "chars": 10329,
    "preview": "# Postprocessing Instructions\n\nRunning `scripts/postprocess.py` will iterate through all folders/files dumped as part of"
  },
  {
    "path": "scripts/convert/svo_to_depth.py",
    "chars": 8066,
    "preview": "\"\"\"\nScript for extracting MP4 data from SVO files.\n\nPerforms the following:\n    - Checks for \"cached\" uploads in `DROID/"
  },
  {
    "path": "scripts/convert/svo_to_mp4.py",
    "chars": 7382,
    "preview": "\"\"\"\nScript for extracting MP4 data from SVO files.\n\nPerforms the following:\n    - Checks for \"cached\" uploads in `DROID/"
  },
  {
    "path": "scripts/convert/to_tfrecord.py",
    "chars": 8478,
    "preview": "import os\n\nimport numpy as np\nimport tensorflow as tf\nimport tqdm\nfrom absl import app, flags, logging\nfrom tqdm_multipr"
  },
  {
    "path": "scripts/evaluation/evaluate_policy.py",
    "chars": 2666,
    "preview": "from droid.evaluation.eval_launcher_robomimic import eval_launcher, get_goal_im\nimport matplotlib.pyplot as plt\nimport o"
  },
  {
    "path": "scripts/evaluation/evaluate_rt1.py",
    "chars": 613,
    "preview": "import argparse\nfrom droid.user_interface.eval_gui import EvalGUI\nfrom droid.evaluation.rt1_wrapper import RT1Policy\n\n\nd"
  },
  {
    "path": "scripts/evaluation/results.ipynb",
    "chars": 4915007,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n "
  },
  {
    "path": "scripts/labeling/label_data.py",
    "chars": 7290,
    "preview": "import json\nimport os\nfrom collections import defaultdict\n\nfrom droid.camera_utils.wrappers.recorded_multi_camera_wrappe"
  },
  {
    "path": "scripts/labeling/task_label_filepath.json",
    "chars": 37339,
    "preview": "{\"Fri_Mar__3_16:33:25_2023\": false, \"Fri_Mar__3_15:31:24_2023\": false, \"Fri_Mar__3_16:33:59_2023\": true, \"Fri_Mar__3_16:"
  },
  {
    "path": "scripts/main.py",
    "chars": 1138,
    "preview": "from droid.controllers.oculus_controller import VRPolicy\nfrom droid.robot_env import RobotEnv\nfrom droid.user_interface."
  },
  {
    "path": "scripts/postprocess.py",
    "chars": 7352,
    "preview": "\"\"\"\npostprocess.py\n\nCore script for processing & uploading collected demonstration data to the DROID Amazon S3 bucket.\n\n"
  },
  {
    "path": "scripts/server/launch_server.sh",
    "chars": 178,
    "preview": "#!/bin/bash\nsource ~/anaconda3/etc/profile.d/conda.sh\nconda activate polymetis-local\ncd $( cd -- \"$( dirname -- \"${BASH_"
  },
  {
    "path": "scripts/server/run_server.py",
    "chars": 202,
    "preview": "import zerorpc\n\nfrom droid.franka.robot import FrankaRobot\n\nif __name__ == \"__main__\":\n    robot_client = FrankaRobot()\n"
  },
  {
    "path": "scripts/setup/README.md",
    "chars": 1868,
    "preview": "# Overview\n\nThe scripts in this directory can be run to setup the NUC and laptop for the DROID project. Before running t"
  },
  {
    "path": "scripts/setup/intro.txt",
    "chars": 3282,
    "preview": " .----------------.  .----------------.  .----------------.  .----------------.                     \n| .--------------. "
  },
  {
    "path": "scripts/setup/laptop_setup.sh",
    "chars": 6634,
    "preview": "#!/bin/bash\n\n# Function to display devices and ask for confirmation\nfunction confirm_devices {\n    devices=$(adb devices"
  },
  {
    "path": "scripts/setup/nuc_setup.sh",
    "chars": 4538,
    "preview": "#!/bin/bash\n\n# print out nice ascii art\nascii=$(cat ./intro.txt)\necho \"$ascii\"\n\n\necho \"Welcome to the DROID setup proces"
  },
  {
    "path": "scripts/tests/calibrate_cameras.py",
    "chars": 298,
    "preview": "from droid.controllers.oculus_controller import VRPolicy\nfrom droid.robot_env import RobotEnv\nfrom droid.trajectory_util"
  },
  {
    "path": "scripts/tests/collect_trajectory.py",
    "chars": 279,
    "preview": "from droid.controllers.oculus_controller import VRPolicy\nfrom droid.robot_env import RobotEnv\nfrom droid.trajectory_util"
  },
  {
    "path": "scripts/tests/memory_leak.py",
    "chars": 5444,
    "preview": "import glob\nimport random\n\nimport matplotlib.pyplot as plt\nimport psutil\nimport pyzed.sl as sl\nfrom tqdm import tqdm\n\nfr"
  },
  {
    "path": "scripts/tests/replay_trajectory.py",
    "chars": 408,
    "preview": "from droid.robot_env import RobotEnv\nfrom droid.trajectory_utils.misc import replay_trajectory\n\ntrajectory_folderpath = "
  },
  {
    "path": "scripts/training/sanity_check/image_obs.py",
    "chars": 1965,
    "preview": "from droid.training.model_trainer import exp_launcher\n\nvariant = dict(\n    exp_name=\"sanity_check_image_obs\",\n    use_gp"
  },
  {
    "path": "scripts/training/sanity_check/state_obs.py",
    "chars": 1977,
    "preview": "from droid.training.model_trainer import exp_launcher\n\nvariant = dict(\n    exp_name=\"sanity_check_state_obs\",\n    use_gp"
  },
  {
    "path": "scripts/training/sweepable_train_policy [wip].py",
    "chars": 2309,
    "preview": "from droid.training.model_trainer import exp_launcher\n\n# import rlkit.util.hyperparameter as hyp\n\n\"\"\"\nTHIS IS A WORK IN "
  },
  {
    "path": "scripts/training/train_policy.py",
    "chars": 2219,
    "preview": "import json\n\nfrom droid.training.model_trainer import exp_launcher\n\ntask_label_filepath = \"/home/sasha/DROID/scripts/lab"
  },
  {
    "path": "scripts/visualizations/create_plots.py",
    "chars": 4303,
    "preview": "import os\n\nimport matplotlib.pyplot as plt\n\nfrom droid.plotting.analysis_func import *\nfrom droid.plotting.misc import *"
  },
  {
    "path": "scripts/visualizations/visualize_data.py",
    "chars": 584,
    "preview": "from torch.utils.data.datapipes.iter import Shuffler\n\nfrom droid.data_loading.dataset import TrajectoryDataset\nfrom droi"
  },
  {
    "path": "scripts/visualizations/visualize_day.py",
    "chars": 709,
    "preview": "import random\n\nfrom droid.data_loading.trajectory_sampler import crawler\nfrom droid.trajectory_utils.misc import visuali"
  },
  {
    "path": "scripts/visualizations/visualize_trajectory.py",
    "chars": 524,
    "preview": "from droid.trajectory_utils.misc import visualize_trajectory\n\ntrajectory_folderpath = \"/home/sasha/DROID/data/success/20"
  },
  {
    "path": "setup.py",
    "chars": 186,
    "preview": "\"\"\"\nsetup.py\nPEP 621 switches most of Packaging to `pyproject.toml` -- yet keep a \"dummy\" setup.py for external code tha"
  }
]

About this extraction

This page contains the full source code of the droid-dataset/droid GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 151 files (18.6 MB), approximately 4.9M tokens, and a symbol index with 509 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
