Repository: aws-deepracer-community/deepracer-for-cloud
Branch: master
Commit: 6f4d1c768a4f
Files: 101
Total size: 363.0 KB
Directory structure:
gitextract_rez3pf4j/
├── .github/
│ └── workflows/
│ └── syntax-check.yml
├── .gitignore
├── LICENSE
├── README.md
├── bin/
│ ├── activate.sh
│ ├── autorun.sh
│ ├── detect.sh
│ ├── init.sh
│ ├── module/
│ │ ├── droa.sh
│ │ └── summary.sh
│ ├── prepare-mac.sh
│ ├── prepare.sh
│ ├── runonce.sh
│ └── scripts_wrapper.sh
├── defaults/
│ ├── debug-reward_function.py
│ ├── dependencies.json
│ ├── docker-daemon.json
│ ├── hyperparameters.json
│ ├── model_metadata.json
│ ├── model_metadata_cont.json
│ ├── model_metadata_sac.json
│ ├── reward_function.py
│ ├── template-run.env
│ ├── template-system.env
│ └── template-worker.env
├── docker/
│ ├── docker-compose-aws.yml
│ ├── docker-compose-cwlog.yml
│ ├── docker-compose-endpoint.yml
│ ├── docker-compose-eval-swarm.yml
│ ├── docker-compose-eval.yml
│ ├── docker-compose-keys.yml
│ ├── docker-compose-local-xorg-wsl.yml
│ ├── docker-compose-local-xorg.yml
│ ├── docker-compose-local.yml
│ ├── docker-compose-metrics.yml
│ ├── docker-compose-mount.yml
│ ├── docker-compose-robomaker-multi.yml
│ ├── docker-compose-robomaker-scripts.yml
│ ├── docker-compose-simapp.yml
│ ├── docker-compose-training-swarm.yml
│ ├── docker-compose-training.yml
│ ├── docker-compose-webviewer-swarm.yml
│ ├── docker-compose-webviewer.yml
│ └── metrics/
│ ├── configuration.env
│ ├── grafana/
│ │ └── provisioning/
│ │ ├── dashboards/
│ │ │ ├── dashboard.yml
│ │ │ └── deepracer-training-template.json
│ │ └── datasources/
│ │ └── influxdb.yml
│ └── telegraf/
│ └── etc/
│ └── telegraf.conf
├── docs/
│ ├── _config.yml
│ ├── docker.md
│ ├── droa.md
│ ├── head-to-head.md
│ ├── index.md
│ ├── installation.md
│ ├── mac.md
│ ├── metrics.md
│ ├── multi_gpu.md
│ ├── multi_run.md
│ ├── multi_worker.md
│ ├── opengl.md
│ ├── reference.md
│ ├── video.md
│ └── windows.md
├── requirements.txt
├── scripts/
│ ├── droa/
│ │ ├── __init__.py
│ │ ├── auth.py
│ │ ├── delete_model.py
│ │ ├── download_logs.py
│ │ ├── get_model.py
│ │ ├── import_model.py
│ │ └── list_models.py
│ ├── evaluation/
│ │ ├── prepare-config.py
│ │ ├── start.sh
│ │ └── stop.sh
│ ├── log-analysis/
│ │ ├── start.sh
│ │ └── stop.sh
│ ├── metrics/
│ │ ├── start.sh
│ │ └── stop.sh
│ ├── training/
│ │ ├── increment.sh
│ │ ├── prepare-config.py
│ │ ├── start.sh
│ │ └── stop.sh
│ ├── upload/
│ │ ├── download-model.sh
│ │ ├── increment.sh
│ │ ├── prepare-config.py
│ │ ├── upload-car.sh
│ │ └── upload-model.sh
│ └── viewer/
│ ├── index.template.html
│ ├── start.sh
│ └── stop.sh
└── utils/
├── Dockerfile.gpu-detect
├── cuda-check-tf.py
├── cuda-check.sh
├── download-car-model.py
├── evaluate.sh
├── sample-createspot.sh
├── setup-xorg.sh
├── start-local-browser.sh
├── start-xorg.sh
├── timed-stop.sh
└── upload-rotate.sh
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/workflows/syntax-check.yml
================================================
name: Syntax Check
on:
pull_request:
branches:
- master
- dev
jobs:
bash-syntax:
name: Bash Syntax Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check bash scripts in bin/
run: |
find bin/ -name '*.sh' | sort | while read -r f; do
bash -n "$f" && echo "OK: $f"
done
- name: Check bash scripts in scripts/
run: |
find scripts/ -name '*.sh' | sort | while read -r f; do
bash -n "$f" && echo "OK: $f"
done
python-syntax:
name: Python Syntax Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.x'
- name: Check Python scripts in scripts/
run: |
find scripts/ -name '*.py' | sort | while read -r f; do
python3 -m py_compile "$f" && echo "OK: $f"
done
docker-compose-syntax:
name: Docker Compose Syntax Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check docker-compose YAML files
run: |
find docker/ -name 'docker-compose*.yml' | sort | while read -r f; do
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])) or True; print("OK: " + sys.argv[1])' "$f" \
|| { echo "FAIL: $f"; exit 1; }
done
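The same checks can be run locally before pushing. A minimal self-contained sketch of what `bash -n` and `py_compile` do, using throwaway files rather than the repo's own scripts:

```shell
# bash -n parses a script without executing it; py_compile does the same for Python.
printf 'echo "hello"\n' > /tmp/ok.sh
bash -n /tmp/ok.sh && echo "OK: /tmp/ok.sh"
printf 'if true; then\n' > /tmp/bad.sh        # unterminated "if": syntax error
bash -n /tmp/bad.sh 2>/dev/null || echo "FAIL: /tmp/bad.sh"
printf 'print("hello")\n' > /tmp/ok.py
python3 -m py_compile /tmp/ok.py && echo "OK: /tmp/ok.py"
```

Note that `bash -n` only catches parse errors, not runtime failures, which is why the workflow treats it as a syntax check rather than a test.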
================================================
FILE: .gitignore
================================================
.vscode/
.venv/
custom_files/
logs/
docker/volumes/
recording/
recording
/*.env
/*.bak
/*.tar
/*.json
DONE
data/
tmp/
autorun.s3url
nohup.out
/*.sh
_
experiments/
# Python
__pycache__/
*.py[cod]
*.pyo
*.pyd
================================================
FILE: LICENSE
================================================
Copyright 2019-2023 AWS DeepRacer Community. All Rights Reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
================================================
FILE: README.md
================================================
# DeepRacer-For-Cloud
Provides a quick and easy way to get up and running with a DeepRacer training environment using a cloud virtual machine or a local computer, such as [AWS EC2 Accelerated Computing instances](https://aws.amazon.com/ec2/instance-types/?nc1=h_ls#Accelerated_Computing) or the Azure [N-Series Virtual Machines](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu).
DRfC runs on Ubuntu 22.04 and 24.04. GPU acceleration requires an NVIDIA GPU, preferably with more than 8GB of VRAM. ARM64/Graviton instances (AWS Graviton, Apple Silicon) are also supported for CPU-only training.
**Experimental:** macOS is supported in CPU-only mode via [Colima](https://github.com/abiosoft/colima), on both AWS Mac EC2 instances and local Mac hardware (Intel and Apple Silicon). See [docs/mac.md](docs/mac.md) for setup instructions.
## Introduction
DeepRacer-For-Cloud (DRfC) started as an extension of the work done by Alex (https://github.com/alexschultz/deepracer-for-dummies), which is again a wrapper around the amazing work done by Chris (https://github.com/crr0004/deepracer). With the introduction of the second generation Deepracer Console the repository has been split up. This repository contains the scripts needed to *run* the training, but depends on Docker Hub to provide pre-built docker images. All the under-the-hood building capabilities are in the [Deepracer Simapp](https://github.com/aws-deepracer-community/deepracer-simapp) repository.
As of December 2025 the original DeepRacer service in the AWS console is no longer available, and is replaced by [DeepRacer-on-AWS](https://aws.amazon.com/solutions/implementations/deepracer-on-aws/) which you can install in your own AWS environment. DeepRacer-For-Cloud is independent of any AWS service, so it is not directly impacted by this change.
## Main Features
DRfC supports a wide set of features to ensure that you can focus on creating the best model:
* User-friendly
* Based on the continuously updated community [Robomaker](https://github.com/aws-deepracer-community/deepracer-simapp) container, supporting a wide range of CPU and GPU setups.
* Wide set of scripts (`dr-*`) enables effortless training.
* Detection of your AWS DeepRacer Console models; allows upload of a locally trained model to any of them.
* Modes
* Time Trial
* Object Avoidance
* Head-to-Bot
* Training
* Multiple Robomaker instances per Sagemaker (N:1) to improve training progress.
* Multiple training sessions in parallel - each being (N:1) if hardware supports it - to test out things in parallel.
* Connect multiple nodes together (Swarm-mode only) to combine the powers of multiple computers/instances.
* Evaluation
* Evaluate independently from training.
* Save evaluation run to MP4 file in S3.
* Logging
* Training metrics and trace files are stored to S3.
* Optional integration with AWS CloudWatch.
* Optional exposure of Robomaker internal log-files.
* Technology
* Supports both Docker Swarm (used for connecting multiple nodes together) and Docker Compose
## Tech Stack
DRfC is built on top of the [AWS DeepRacer Simapp](https://github.com/aws-deepracer-community/deepracer-simapp) — a single Docker image used for three purposes:
* **Robomaker** — one or more containers providing robotics simulation via ROS and Gazebo
* **Sagemaker** — container running the model training job
* **RL Coach** — container that bootstraps the Sagemaker container using the Sagemaker SDK and Sagemaker Local
### Core Technologies
| Component | Version |
|-----------|---------|
| Ubuntu | 24.04 |
| Python | 3.12 |
| TensorFlow | 2.20 |
| CUDA | 12.6 (GPU only) |
| Redis | 8.6.1 |
| ROS | 2 Jazzy |
| Gazebo | Harmonic |
## Recommended AWS Instance Types
| Use case | Instance type | Notes |
|----------|--------------|-------|
| GPU | `g4dn.2xlarge` | NVIDIA T4, fastest training |
| Intel CPU | `c7i.2xlarge` | Latest Intel CPU generation, cost-effective CPU training |
| ARM CPU (Graviton) | `c8g.2xlarge` | AWS Graviton4, best price/performance for CPU |
### Images
Pre-built images are available on [Docker Hub](https://hub.docker.com/repository/docker/awsdeepracercommunity/deepracer-simapp) as `awsdeepracercommunity/deepracer-simapp:<VERSION>-cpu` (CPU) and `awsdeepracercommunity/deepracer-simapp:<VERSION>-gpu` (CUDA GPU). Both support OpenGL acceleration.
During installation DRfC will automatically pull the latest image based on whether you have a GPU or CPU installation.
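For example, a manual pull looks like this (the tag below is illustrative only; check Docker Hub for the current `<VERSION>`):

```shell
# Illustrative only: substitute a real <VERSION> tag from Docker Hub.
DR_SIMAPP_SOURCE=awsdeepracercommunity/deepracer-simapp
DR_SIMAPP_VERSION=5.3.3-gpu   # hypothetical tag for this sketch
echo "docker pull ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}"
```

These two variables match the `DR_SIMAPP_SOURCE` and `DR_SIMAPP_VERSION` entries that `system.env` expects.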
## Documentation
Full documentation can be found on the [Deepracer-for-Cloud GitHub Pages](https://aws-deepracer-community.github.io/deepracer-for-cloud).
For importing and managing models via the community [DeepRacer on AWS (DRoA)](https://aws.amazon.com/solutions/implementations/deepracer-on-aws/) console, see the [DRoA integration guide](docs/droa.md).
## Support
* For general support it is suggested to join the [AWS DeepRacer Community](https://deepracing.io/). The Community Slack has a channel #dr-training-local where the community provides active support.
* Create a GitHub issue if you find an actual code issue, or if the documentation needs updating.
================================================
FILE: bin/activate.sh
================================================
#!/usr/bin/env bash
# Portable readlink -f: BSD readlink (macOS) does not support -f.
_realpath() {
if command -v realpath >/dev/null 2>&1; then
realpath "$1"
elif command -v grealpath >/dev/null 2>&1; then
grealpath "$1"
else
readlink -f "$1"
fi
}
# Portable version comparison: sort -V is GNU-only; macOS ships BSD sort.
verlte() {
local v1 v2
v1="$1" v2="$2"
# Split into numeric fields and compare segment by segment.
IFS='.' read -r -a a1 <<< "$v1"
IFS='.' read -r -a a2 <<< "$v2"
local i
for (( i=0; i<${#a1[@]} || i<${#a2[@]}; i++ )); do
local n1=${a1[$i]:-0} n2=${a2[$i]:-0}
if (( n1 < n2 )); then return 0; fi
if (( n1 > n2 )); then return 1; fi
done
return 0
}
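A standalone sketch exercising the comparison above (the function is duplicated here so it runs outside `activate.sh`):

```shell
# Standalone copy of verlte: returns 0 (true) when $1 <= $2, comparing
# dot-separated numeric segments; missing segments are treated as 0.
verlte() {
  local v1="$1" v2="$2" i
  IFS='.' read -r -a a1 <<< "$v1"
  IFS='.' read -r -a a2 <<< "$v2"
  for (( i=0; i<${#a1[@]} || i<${#a2[@]}; i++ )); do
    local n1=${a1[$i]:-0} n2=${a2[$i]:-0}
    if (( n1 < n2 )); then return 0; fi
    if (( n1 > n2 )); then return 1; fi
  done
  return 0
}
verlte 5.2 5.3   && echo "5.2 <= 5.3"
verlte 5.3 5.3.0 && echo "5.3 <= 5.3.0"
verlte 5.10 5.9  || echo "5.10 > 5.9"
```

The numeric segment comparison is the point: a plain string sort would order "5.10" before "5.9", which is exactly the failure mode this function avoids.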
# Find a free /24 subnet in 192.168.200-254/24 that doesn't conflict with
# existing Docker networks or host routes.
function find_free_subnet() {
local USED NW_IDS
NW_IDS=$(docker network ls -q 2>/dev/null)
USED=$(
{ [[ -n "$NW_IDS" ]] && docker network inspect $NW_IDS \
--format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null; \
if ip route show 2>/dev/null | grep -q .; then
ip route show 2>/dev/null | awk '{print $1}' | grep -E '^[0-9]+\.'
else
# macOS: parse inet routes from netstat
netstat -rn -f inet 2>/dev/null | awk 'NR>4 && $1~/^[0-9]/{print $1}'
fi; } | sort -u
)
for j in $(seq 200 254); do
local CANDIDATE="192.168.${j}.0/24"
if ! echo "$USED" | grep -qF "$CANDIDATE"; then
echo "$CANDIDATE"
return 0
fi
done
return 1
}
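The scan logic above can be sketched with a mocked "used" list, so it runs without docker or `ip`/`netstat`: the first candidate not present in `USED` wins.

```shell
# Mocked subnet scan: 192.168.200.0/24 and .201 are "taken", so .202 is chosen.
USED="192.168.200.0/24
192.168.201.0/24"
for j in $(seq 200 254); do
  CANDIDATE="192.168.${j}.0/24"
  if ! echo "$USED" | grep -qF "$CANDIDATE"; then
    echo "$CANDIDATE"
    break
  fi
done
```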
# Create the sagemaker-local Docker network with the required compose labels.
function _create_sagemaker_network() {
local NW_SUBNET=$(find_free_subnet)
local SWARM_FLAGS
[[ "${DR_DOCKER_STYLE,,}" == "swarm" ]] && SWARM_FLAGS="-d overlay --attachable --scope swarm"
docker network create "$SAGEMAKER_NW" $SWARM_FLAGS \
${NW_SUBNET:+--subnet=$NW_SUBNET} \
--label com.docker.compose.network=sagemaker-local \
--label com.docker.compose.project=sagemaker-local >/dev/null 2>&1
}
function dr-update-env {
local _saved_experiment="${DR_EXPERIMENT_NAME:-}"
if [[ -f "$DIR/system.env" ]]; then
LINES=$(grep -v '^#' $DIR/system.env)
for l in $LINES; do
env_var=$(echo $l | cut -f1 -d\=)
env_val=$(echo $l | cut -f2 -d\=)
eval "export $env_var=$env_val"
done
else
echo "File system.env does not exist."
return 1
fi
# Restore DR_EXPERIMENT_NAME if it was pre-set (e.g. via -e flag) so it takes
# precedence over any value in system.env.
if [[ -n "$_saved_experiment" ]]; then
export DR_EXPERIMENT_NAME="$_saved_experiment"
fi
if [[ ! -z $DR_EXPERIMENT_NAME ]]; then
if [[ ! -d "$DIR/experiments" ]]; then
echo "Experiments directory $DIR/experiments does not exist."
return 1
fi
if [[ ! -d "$DIR/experiments/$DR_EXPERIMENT_NAME" ]]; then
echo "Experiment directory $DIR/experiments/$DR_EXPERIMENT_NAME does not exist."
return 1
fi
export DR_CONFIG="$DIR/experiments/$DR_EXPERIMENT_NAME/run.env"
fi
if [[ -f "$DR_CONFIG" ]]; then
LINES=$(grep -v '^#' $DR_CONFIG)
for l in $LINES; do
env_var=$(echo $l | cut -f1 -d\=)
env_val=$(echo $l | cut -f2 -d\=)
eval "export $env_var=$env_val"
done
else
echo "File ${DR_CONFIG} does not exist."
return 1
fi
if [[ -z "${DR_RUN_ID}" ]]; then
export DR_RUN_ID=0
fi
if [[ "${DR_DOCKER_STYLE,,}" == "swarm" ]]; then
export DR_ROBOMAKER_TRAIN_PORT=$(expr 8080 + $DR_RUN_ID)
export DR_ROBOMAKER_EVAL_PORT=$(expr 8180 + $DR_RUN_ID)
export DR_ROBOMAKER_GUI_PORT=$(expr 5900 + $DR_RUN_ID)
else
export DR_ROBOMAKER_TRAIN_PORT="8080-8089"
export DR_ROBOMAKER_EVAL_PORT="8080-8089"
export DR_ROBOMAKER_GUI_PORT="5901-5920"
fi
  # Set the default region so that things also work in non-default regions.
export AWS_DEFAULT_REGION=${DR_AWS_APP_REGION}
}
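The `grep`/`cut` env-file parsing used twice in `dr-update-env` can be sketched standalone (file path and `DEMO_*` variable names are hypothetical):

```shell
# Sketch of the env-file loading above: skip comment lines, split each
# remaining line on "=" and export the pair.
cat > /tmp/demo-run.env <<'EOF'
# comment lines are skipped
DEMO_WORLD_NAME=reinvent_base
DEMO_RUN_ID=0
EOF
for l in $(grep -v '^#' /tmp/demo-run.env); do
  env_var=$(echo "$l" | cut -f1 -d=)
  env_val=$(echo "$l" | cut -f2 -d=)
  export "$env_var=$env_val"
done
echo "$DEMO_WORLD_NAME"
```

As in the original, word-splitting in the `for` loop means values containing spaces are not supported, and `cut -f2` drops anything after a second `=`.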
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-}")" >/dev/null 2>&1 && pwd)"
DIR="$(dirname $SCRIPT_DIR)"
export DR_DIR=$DIR
# Parse arguments: -e <experiment-name> or a positional config file path
_DR_OPT_EXPERIMENT=""
OPTIND=1
while getopts ":e:" _opt; do
case $_opt in
e) _DR_OPT_EXPERIMENT="$OPTARG" ;;
\?) break ;;
esac
done
shift $(( OPTIND - 1 ))
unset _opt OPTIND
if [[ -n "$_DR_OPT_EXPERIMENT" ]]; then
export DR_EXPERIMENT_NAME="$_DR_OPT_EXPERIMENT"
fi
unset _DR_OPT_EXPERIMENT
EXPERIMENT_FLAG="$( grep DR_EXPERIMENT_NAME $DIR/system.env | grep -v \#)"
if [[ -f "$1" ]]; then
export DR_CONFIG=$(_realpath "$1")
dr-update-env || return 1
elif [[ -n "${DR_EXPERIMENT_NAME:-}" ]] || [[ -n "$EXPERIMENT_FLAG" ]]; then
dr-update-env || return 1
elif [[ -f "$DIR/run.env" ]]; then
export DR_CONFIG="$DIR/run.env"
dr-update-env || return 1
else
echo "No configuration file."
return 1
fi
## Activate Python virtual environment
if [[ -f "${DR_DIR}/.venv/bin/activate" ]]; then
source "${DR_DIR}/.venv/bin/activate"
else
echo "WARNING: Python venv not found at ${DR_DIR}/.venv. Run bin/prepare.sh to create it."
fi
# Check if Docker runs -- if not, then start it.
if [[ "$(type service 2>/dev/null)" ]]; then
service docker status >/dev/null || sudo service docker start
fi
## Check if WSL2
if [[ -f /proc/version ]] && grep -qi Microsoft /proc/version && grep -q "WSL2" /proc/version; then
IS_WSL2="yes"
fi
# Check if we will use Docker Swarm or Docker Compose
# If not defined then use Swarm
if [[ -z "${DR_DOCKER_STYLE}" ]]; then
export DR_DOCKER_STYLE="swarm"
fi
if [[ "${DR_DOCKER_STYLE,,}" == "swarm" ]]; then
export DR_DOCKER_FILE_SEP="-c"
SWARM_NODE=$(docker node inspect self | jq .[0].ID -r)
SWARM_NODE_UPDATE=$(docker node update --label-add Sagemaker=true $SWARM_NODE)
else
export DR_DOCKER_FILE_SEP="-f"
fi
# Check if sagemaker-local network has required compose label; recreate if missing
SAGEMAKER_NW='sagemaker-local'
if ! docker network ls --format '{{.Name}}' | grep -q "^${SAGEMAKER_NW}$"; then
echo "Network $SAGEMAKER_NW does not exist. Creating."
_create_sagemaker_network
else
NW_LABEL_NETWORK=$(docker network inspect "$SAGEMAKER_NW" --format '{{index .Labels "com.docker.compose.network"}}')
if [[ "$NW_LABEL_NETWORK" != "sagemaker-local" ]]; then
echo "Network $SAGEMAKER_NW is missing required label."
NW_CONTAINERS=$(docker network inspect "$SAGEMAKER_NW" --format '{{len .Containers}}')
if [[ "${NW_CONTAINERS:-0}" -gt 0 ]]; then
dr-stop-all
fi
docker network rm "$SAGEMAKER_NW" >/dev/null 2>&1
_create_sagemaker_network
echo "Network $SAGEMAKER_NW recreated with required labels."
fi
fi
# Check if CUDA_VISIBLE_DEVICES is configured.
if [[ -n "${CUDA_VISIBLE_DEVICES}" ]]; then
  echo "WARNING: You have CUDA_VISIBLE_DEVICES defined. This will no longer work as"
  echo "         expected. To control GPU assignment use DR_ROBOMAKER_CUDA_DEVICES"
  echo "         and DR_SAGEMAKER_CUDA_DEVICES and rlcoach v5.0.1 or later."
fi
# Check if a Minio image tag is configured for local installs.
if [ "${DR_CLOUD,,}" == "local" ] && [ -z "${DR_MINIO_IMAGE}" ]; then
echo "WARNING: You have not configured DR_MINIO_IMAGE in system.env."
echo " System will default to tag RELEASE.2022-10-24T18-35-07Z"
export DR_MINIO_IMAGE="RELEASE.2022-10-24T18-35-07Z"
fi
# Prepare the docker compose files depending on parameters
if [[ "${DR_CLOUD,,}" == "azure" ]]; then
export DR_LOCAL_S3_ENDPOINT_URL="http://localhost:9000"
export DR_MINIO_URL="http://minio:9000"
DR_LOCAL_PROFILE_ENDPOINT_URL="--profile $DR_LOCAL_S3_PROFILE --endpoint-url $DR_LOCAL_S3_ENDPOINT_URL"
DR_TRAIN_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml"
DR_EVAL_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml"
DR_MINIO_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local.yml"
elif [[ "${DR_CLOUD,,}" == "local" ]]; then
export DR_LOCAL_S3_ENDPOINT_URL="http://localhost:9000"
export DR_MINIO_URL="http://minio:9000"
DR_LOCAL_PROFILE_ENDPOINT_URL="--profile $DR_LOCAL_S3_PROFILE --endpoint-url $DR_LOCAL_S3_ENDPOINT_URL"
DR_TRAIN_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml"
DR_EVAL_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml"
DR_MINIO_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local.yml"
elif [[ "${DR_CLOUD,,}" == "remote" ]]; then
export DR_LOCAL_S3_ENDPOINT_URL="$DR_REMOTE_MINIO_URL"
export DR_MINIO_URL="$DR_REMOTE_MINIO_URL"
DR_LOCAL_PROFILE_ENDPOINT_URL="--profile $DR_LOCAL_S3_PROFILE --endpoint-url $DR_LOCAL_S3_ENDPOINT_URL"
DR_TRAIN_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml"
DR_EVAL_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml"
DR_MINIO_COMPOSE_FILE=""
elif [[ "${DR_CLOUD,,}" == "aws" ]]; then
DR_LOCAL_PROFILE_ENDPOINT_URL=""
DR_TRAIN_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-aws.yml"
DR_EVAL_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-aws.yml"
else
DR_LOCAL_PROFILE_ENDPOINT_URL=""
DR_TRAIN_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml"
DR_EVAL_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval.yml"
fi
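The net effect of the branches above is a flag string later passed to `docker compose` or `docker stack deploy`. A sketch with mocked values (`/opt/drfc` is a hypothetical install path):

```shell
# Mocked assembly of the compose-file flag string built above.
DR_DOCKER_FILE_SEP="-f"                 # "-c" in swarm mode
DIR="/opt/drfc"                         # hypothetical install dir
DR_TRAIN_COMPOSE_FILE="$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml"
DR_TRAIN_COMPOSE_FILE="$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml"
echo "docker compose $DR_TRAIN_COMPOSE_FILE up -d"
```

Later blocks append further `-f file.yml` (or `-c file.yml`) pairs to the same string, which is why the separator is parameterized rather than hard-coded.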
# Add host X support for Linux and WSL2
if [[ "${DR_HOST_X,,}" == "true" ]]; then
if [[ "$IS_WSL2" == "yes" ]]; then
# Check if package x11-server-utils is installed
if ! command -v xset &> /dev/null; then
echo "WARNING: Package x11-server-utils is not installed. Please install it to enable X11 support."
fi
if [[ "${DR_DOCKER_STYLE,,}" == "swarm" && "${DR_USE_GUI,,}" == "true" ]]; then
echo "WARNING: Cannot use GUI in Swarm mode. Please switch to Compose mode."
fi
DR_TRAIN_COMPOSE_FILE="$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local-xorg-wsl.yml"
DR_EVAL_COMPOSE_FILE="$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local-xorg-wsl.yml"
else
DR_TRAIN_COMPOSE_FILE="$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local-xorg.yml"
DR_EVAL_COMPOSE_FILE="$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local-xorg.yml"
fi
fi
# Prevent docker swarm services from restarting
if [[ "${DR_DOCKER_STYLE,,}" == "swarm" ]]; then
DR_TRAIN_COMPOSE_FILE="$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training-swarm.yml"
DR_EVAL_COMPOSE_FILE="$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval-swarm.yml"
fi
# Enable logs in CloudWatch
if [[ "${DR_CLOUD_WATCH_ENABLE,,}" == "true" ]]; then
DR_TRAIN_COMPOSE_FILE="$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-cwlog.yml"
DR_EVAL_COMPOSE_FILE="$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-cwlog.yml"
fi
# Enable local simapp mount
if [[ -d "${DR_ROBOMAKER_MOUNT_SIMAPP_DIR}" ]]; then
DR_TRAIN_COMPOSE_FILE="$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-simapp.yml"
DR_EVAL_COMPOSE_FILE="$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-simapp.yml"
fi
# Enable local scripts mount
if [[ -d "${DR_ROBOMAKER_MOUNT_SCRIPTS_DIR}" ]]; then
DR_TRAIN_COMPOSE_FILE="$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-robomaker-scripts.yml"
DR_EVAL_COMPOSE_FILE="$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-robomaker-scripts.yml"
fi
## Check if we have an AWS IAM assumed role, or if we need to set specific credentials.
## On macOS/Darwin, IMDS is not reachable from inside the Colima VM, so always use
## explicit keys from the configured AWS profile.
if [[ "$(uname -s)" != "Darwin" ]] && [ "${DR_CLOUD,,}" == "aws" ] && [ $(aws --output json sts get-caller-identity 2>/dev/null | jq '.Arn' | awk /assumed-role/ | wc -l) -gt 0 ]; then
export DR_LOCAL_S3_AUTH_MODE="role"
else
export DR_LOCAL_ACCESS_KEY_ID=$(aws --profile $DR_LOCAL_S3_PROFILE configure get aws_access_key_id | xargs)
export DR_LOCAL_SECRET_ACCESS_KEY=$(aws --profile $DR_LOCAL_S3_PROFILE configure get aws_secret_access_key | xargs)
if [[ -z "${DR_LOCAL_ACCESS_KEY_ID}" || -z "${DR_LOCAL_SECRET_ACCESS_KEY}" ]]; then
echo "ERROR: AWS credentials not found in profile '${DR_LOCAL_S3_PROFILE}'."
echo " Run: aws configure --profile ${DR_LOCAL_S3_PROFILE}"
return 1
fi
DR_TRAIN_COMPOSE_FILE="$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-keys.yml"
DR_EVAL_COMPOSE_FILE="$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-keys.yml"
export DR_UPLOAD_PROFILE="--profile $DR_UPLOAD_S3_PROFILE"
export DR_LOCAL_S3_AUTH_MODE="profile"
fi
export DR_TRAIN_COMPOSE_FILE
export DR_EVAL_COMPOSE_FILE
export DR_LOCAL_PROFILE_ENDPOINT_URL
if [[ -n "${DR_MINIO_COMPOSE_FILE}" ]]; then
export MINIO_UID=$(id -u)
export MINIO_USERNAME=$(id -u -n)
export MINIO_GID=$(id -g)
export MINIO_GROUPNAME=$(id -g -n)
if [[ "${DR_DOCKER_STYLE,,}" == "swarm" ]]; then
if [ "$DR_DOCKER_MAJOR_VERSION" -gt 24 ]; then
DETACH_FLAG="--detach=true"
fi
docker stack deploy $DR_MINIO_COMPOSE_FILE $DETACH_FLAG s3
else
docker compose $DR_MINIO_COMPOSE_FILE -p s3 up -d
fi
fi
## Version check
if [[ -z "$DR_SIMAPP_SOURCE" || -z "$DR_SIMAPP_VERSION" ]]; then
DEFAULT_SIMAPP_VERSION=$(jq -r '.containers.simapp | select (.!=null)' $DIR/defaults/dependencies.json)
echo "ERROR: Variable DR_SIMAPP_SOURCE or DR_SIMAPP_VERSION not defined."
echo ""
echo "As of version 5.3 the variables DR_SIMAPP_SOURCE and DR_SIMAPP_VERSION are required in system.env."
echo "To continue to use the separate Sagemaker, Robomaker and RL Coach images, run 'git checkout legacy'."
echo ""
echo "Please add the following lines to your system.env file:"
echo "DR_SIMAPP_SOURCE=awsdeepracercommunity/deepracer-simapp"
echo "DR_SIMAPP_VERSION=${DEFAULT_SIMAPP_VERSION}-gpu"
return
fi
DEPENDENCY_VERSION=$(jq -r '.master_version | select (.!=null)' $DIR/defaults/dependencies.json)
SIMAPP_VER=$(docker inspect ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION} 2>/dev/null | jq -r .[].Config.Labels.version)
if [ -z "$SIMAPP_VER" ]; then SIMAPP_VER=$SIMAPP_VERSION; fi
if [ -z "$SIMAPP_VER" ]; then
# Image not pulled -- fall back to checking the configured version tag
SIMAPP_VER=$(echo ${DR_SIMAPP_VERSION} | grep -oP '^\d+\.\d+(\.\d+)?')
fi
if [ -n "$SIMAPP_VER" ] && ! verlte $DEPENDENCY_VERSION $SIMAPP_VER; then
echo "WARNING: Incompatible version of Deepracer Simapp. Expected >$DEPENDENCY_VERSION. Got $SIMAPP_VER."
fi
# Get Docker version
DOCKER_VERSION=$(docker --version | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1)
DR_DOCKER_MAJOR_VERSION=$(echo $DOCKER_VERSION | cut -d. -f1)
export DR_DOCKER_MAJOR_VERSION
## Create a dr-local-aws command
alias dr-local-aws='aws $DR_LOCAL_PROFILE_ENDPOINT_URL'
source $SCRIPT_DIR/scripts_wrapper.sh
source $SCRIPT_DIR/module/summary.sh
source $SCRIPT_DIR/module/droa.sh
function dr-update {
dr-update-env
}
function dr-reload {
source $DIR/bin/activate.sh $DR_CONFIG
}
## Show summary after activation if not in quiet mode and if in interactive shell
[[ $- == *i* && "${DR_QUIET_ACTIVATE,,}" != "true" ]] && dr-summary
================================================
FILE: bin/autorun.sh
================================================
#!/usr/bin/env bash
## This is the default autorun script; it runs automatically after init.sh completes.
## It downloads your configured run.env, system.env and any custom container requests.
INSTALL_DIR_TEMP="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." >/dev/null 2>&1 && pwd)"
## Retrieve the S3 location name passed to the instance in its user data at launch;
## assumed to be the first line of the file.
S3_LOCATION=$(awk 'NR==1 {print; exit}' $INSTALL_DIR_TEMP/autorun.s3url)
source $INSTALL_DIR_TEMP/bin/activate.sh
## get the updated run.env and system.env files and any others you stashed in s3
aws s3 sync s3://$S3_LOCATION $INSTALL_DIR_TEMP
## get the right docker containers, if needed
SYSENV="$INSTALL_DIR_TEMP/system.env"
SAGEMAKER_IMAGE=$(cat $SYSENV | grep DR_SAGEMAKER_IMAGE | sed 's/.*=//')
ROBOMAKER_IMAGE=$(cat $SYSENV | grep DR_ROBOMAKER_IMAGE | sed 's/.*=//')
docker pull awsdeepracercommunity/deepracer-sagemaker:$SAGEMAKER_IMAGE
docker pull awsdeepracercommunity/deepracer-robomaker:$ROBOMAKER_IMAGE
dr-reload
date | tee $INSTALL_DIR_TEMP/DONE-AUTORUN
## start training
cd $INSTALL_DIR_TEMP/scripts/training
./start.sh
================================================
FILE: bin/detect.sh
================================================
#!/usr/bin/env bash
## What am I?
if [[ -f /var/run/cloud-init/instance-data.json ]]; then
# We have a cloud-init environment (Azure or AWS).
CLOUD_NAME=$(jq -r '.v1."cloud-name"' /var/run/cloud-init/instance-data.json)
if [[ "${CLOUD_NAME}" == "azure" ]]; then
export CLOUD_NAME
export CLOUD_INSTANCETYPE=$(jq -r '.ds."meta_data".imds.compute."vmSize"' /var/run/cloud-init/instance-data.json)
elif [[ "${CLOUD_NAME}" == "aws" ]]; then
export CLOUD_NAME
export CLOUD_INSTANCETYPE=$(jq -r '.ds."meta-data"."instance-type"' /var/run/cloud-init/instance-data.json)
else
export CLOUD_NAME=local
fi
else
export CLOUD_NAME=local
fi
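The jq lookups above can be exercised against a mocked cloud-init file (requires `jq`; the file path and instance type here are illustrative, not real instance metadata):

```shell
# Mocked /var/run/cloud-init/instance-data.json for an AWS instance.
cat > /tmp/instance-data.json <<'EOF'
{"v1": {"cloud-name": "aws"},
 "ds": {"meta-data": {"instance-type": "g4dn.2xlarge"}}}
EOF
CLOUD_NAME=$(jq -r '.v1."cloud-name"' /tmp/instance-data.json)
CLOUD_INSTANCETYPE=$(jq -r '.ds."meta-data"."instance-type"' /tmp/instance-data.json)
echo "$CLOUD_NAME $CLOUD_INSTANCETYPE"
```

Note the key spelling difference the script handles: Azure nests under `meta_data` (underscore) while AWS uses `meta-data` (hyphen).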
================================================
FILE: bin/init.sh
================================================
#!/usr/bin/env bash
trap ctrl_c INT
function ctrl_c() {
echo "Requested to stop."
exit 1
}
# Portable sed -i: BSD sed (macOS) requires an explicit empty-string backup suffix.
if sed --version 2>/dev/null | grep -q GNU; then
sedi() { sed -i "$@"; }
else
sedi() { sed -i '' "$@"; }
fi
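A standalone sketch of the portable wrapper above (duplicated here so it runs outside `init.sh`; the file and placeholder are made up for the demo):

```shell
# Standalone copy of the portable in-place sed wrapper above: GNU sed takes
# "sed -i", BSD sed (macOS) needs an explicit empty backup suffix "sed -i ''".
if sed --version 2>/dev/null | grep -q GNU; then
  sedi() { sed -i "$@"; }
else
  sedi() { sed -i '' "$@"; }
fi
printf 'DR_CLOUD=<CLOUD_REPLACE>\n' > /tmp/demo-system.env
sedi "s/<CLOUD_REPLACE>/local/g" /tmp/demo-system.env
cat /tmp/demo-system.env
```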
# Find a free /24 subnet in 192.168.200-254/24 that doesn't conflict with
# existing Docker networks or host routes.
function find_free_subnet() {
local USED NW_IDS
NW_IDS=$(docker network ls -q 2>/dev/null)
USED=$(
{ [[ -n "$NW_IDS" ]] && docker network inspect $NW_IDS \
--format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null; \
if ip route show 2>/dev/null | grep -q .; then
ip route show 2>/dev/null | awk '{print $1}' | grep -E '^[0-9]+\.'
else
# macOS: parse inet routes from netstat
netstat -rn -f inet 2>/dev/null | awk 'NR>4 && $1~/^[0-9]/{print $1}'
fi; } | sort -u
)
for j in $(seq 200 254); do
local CANDIDATE="192.168.${j}.0/24"
if ! echo "$USED" | grep -qF "$CANDIDATE"; then
echo "$CANDIDATE"
return 0
fi
done
return 1
}
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" >/dev/null 2>&1 && pwd)"
INSTALL_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")/.." >/dev/null 2>&1 && pwd)"
if [[ "$INSTALL_DIR" == *\ * ]]; then
echo "Deepracer-for-Cloud cannot be installed in path with spaces. Exiting."
exit 1
fi
OPT_ARCH="gpu"
OPT_CLOUD=""
OPT_STYLE="swarm"
while getopts ":m:c:a:s:" opt; do
case $opt in
a)
OPT_ARCH="$OPTARG"
;;
m)
OPT_MOUNT="$OPTARG"
;;
c)
OPT_CLOUD="$OPTARG"
;;
s)
OPT_STYLE="$OPTARG"
;;
\?)
echo "Invalid option -$OPTARG" >&2
exit 1
;;
esac
done
# Check if the cloud type is set; if not, try to detect it. If detection fails, default to local.
if [[ -z "$OPT_CLOUD" ]]; then
source $SCRIPT_DIR/detect.sh
OPT_CLOUD=$CLOUD_NAME
echo "Detected cloud type to be $CLOUD_NAME"
fi
# Check GPU
if [ "$OPT_ARCH" = "gpu" ]; then
if GPUS="$(docker run --rm --gpus all --pull=missing \
nvcr.io/nvidia/cuda:12.6.3-base-ubuntu24.04 \
bash -lc 'nvidia-smi -L | wc -l')" ; then
if [ "${GPUS:-0}" -ge 1 ]; then
echo "Detected ${GPUS} GPU(s) inside docker."
else
echo "No GPU detected in docker. Using CPU"
OPT_ARCH="cpu"
fi
else
echo "Failed to run GPU test container. Using CPU"
OPT_ARCH="cpu"
fi
fi
cd $INSTALL_DIR
# create directory structure for docker volumes
mkdir -p $INSTALL_DIR/data $INSTALL_DIR/data/minio $INSTALL_DIR/data/minio/bucket
mkdir -p $INSTALL_DIR/data/logs $INSTALL_DIR/data/analysis $INSTALL_DIR/data/scripts $INSTALL_DIR/tmp
sudo mkdir -p /tmp/sagemaker
sudo chmod -R g+w /tmp/sagemaker
# create symlink to current user's home .aws directory
# NOTE: AWS cli must be installed for this to work
# https://docs.aws.amazon.com/cli/latest/userguide/install-linux-al2017.html
mkdir -p $(eval echo "~${USER}")/.aws $INSTALL_DIR/docker/volumes/
ln -sf $(eval echo "~${USER}")/.aws $INSTALL_DIR/docker/volumes/
# copy rewardfunctions
mkdir -p $INSTALL_DIR/custom_files
cp $INSTALL_DIR/defaults/hyperparameters.json $INSTALL_DIR/custom_files/
cp $INSTALL_DIR/defaults/model_metadata.json $INSTALL_DIR/custom_files/
cp $INSTALL_DIR/defaults/reward_function.py $INSTALL_DIR/custom_files/
cp $INSTALL_DIR/defaults/template-system.env $INSTALL_DIR/system.env
cp $INSTALL_DIR/defaults/template-run.env $INSTALL_DIR/run.env
if [[ "${OPT_CLOUD}" == "aws" ]]; then
IMDS_TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
AWS_EC2_AVAIL_ZONE=$(curl -s -H "X-aws-ec2-metadata-token: $IMDS_TOKEN" http://169.254.169.254/latest/meta-data/placement/availability-zone)
AWS_REGION="$(echo $AWS_EC2_AVAIL_ZONE | sed 's/[a-z]$//')"
sedi "s/<AWS_DR_BUCKET>/not-defined/g" $INSTALL_DIR/system.env
sedi "s/<LOCAL_PROFILE>/default/g" $INSTALL_DIR/system.env
elif [[ "${OPT_CLOUD}" == "remote" ]]; then
AWS_REGION="us-east-1"
sedi "s/<LOCAL_PROFILE>/minio/g" $INSTALL_DIR/system.env
sedi "s/<AWS_DR_BUCKET>/not-defined/g" $INSTALL_DIR/system.env
echo "Please run 'aws configure --profile minio' to set the credentials"
echo "Please define DR_REMOTE_MINIO_URL in system.env to point to remote minio instance."
else
AWS_REGION="us-east-1"
MINIO_PROFILE="minio"
sedi "s/<LOCAL_PROFILE>/$MINIO_PROFILE/g" $INSTALL_DIR/system.env
sedi "s/<AWS_DR_BUCKET>/not-defined/g" $INSTALL_DIR/system.env
aws configure --profile $MINIO_PROFILE get aws_access_key_id >/dev/null 2>/dev/null
if [[ "$?" -ne 0 ]]; then
echo "Creating default minio credentials in AWS profile '$MINIO_PROFILE'"
aws configure --profile $MINIO_PROFILE set aws_access_key_id $(openssl rand -base64 12)
aws configure --profile $MINIO_PROFILE set aws_secret_access_key $(openssl rand -base64 12)
aws configure --profile $MINIO_PROFILE set region us-east-1
fi
fi
sedi "s/<AWS_DR_BUCKET_ROLE>/to-be-defined/g" $INSTALL_DIR/system.env
sedi "s/<CLOUD_REPLACE>/$OPT_CLOUD/g" $INSTALL_DIR/system.env
sedi "s/<REGION_REPLACE>/$AWS_REGION/g" $INSTALL_DIR/system.env
if [[ "${OPT_ARCH}" == "gpu" ]]; then
SAGEMAKER_TAG="gpu"
else
SAGEMAKER_TAG="cpu"
fi
#set proxys if required
for arg in "$@"; do
IFS='=' read -ra part <<<"$arg"
if [ "${part[0]}" == "--http_proxy" ] || [ "${part[0]}" == "--https_proxy" ] || [ "${part[0]}" == "--no_proxy" ]; then
var=${part[0]:2}=${part[1]}
args="${args} --build-arg ${var}"
fi
done
# Download docker images. Change to build statements if locally built images are desired.
SIMAPP_VERSION=$(jq -r '.containers.simapp | select (.!=null)' $INSTALL_DIR/defaults/dependencies.json)
sedi "s/<SIMAPP_VERSION_TAG>/$SIMAPP_VERSION-$SAGEMAKER_TAG/g" $INSTALL_DIR/system.env
docker pull awsdeepracercommunity/deepracer-simapp:$SIMAPP_VERSION-$SAGEMAKER_TAG
# create the network sagemaker-local if it doesn't exist
SAGEMAKER_NW='sagemaker-local'
if [[ "${OPT_STYLE}" == "swarm" ]]; then
docker node ls >/dev/null 2>/dev/null
if [ $? -eq 0 ]; then
echo "Swarm exists. Exiting."
exit 1
fi
docker swarm init
if [ $? -ne 0 ]; then
if ip route 2>/dev/null | grep -q default; then
DEFAULT_IFACE=$(ip route | grep default | awk '{print $5}')
DEFAULT_IP=$(ip addr show $DEFAULT_IFACE | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
else
# macOS fallback
DEFAULT_IFACE=$(route -n get default 2>/dev/null | awk '/interface:/{print $2}')
DEFAULT_IP=$(ipconfig getifaddr "$DEFAULT_IFACE" 2>/dev/null)
fi
if [ -z "$DEFAULT_IP" ]; then
echo "Could not determine default IP address. Exiting."
exit 1
fi
echo "Error when creating swarm, trying again with advertise address $DEFAULT_IP."
docker swarm init --advertise-addr $DEFAULT_IP
if [ $? -ne 0 ]; then
echo "Could not create swarm. Exiting."
exit 1
fi
fi
SWARM_NODE=$(docker node inspect self | jq .[0].ID -r)
docker node update --label-add Sagemaker=true $SWARM_NODE >/dev/null 2>/dev/null
docker node update --label-add Robomaker=true $SWARM_NODE >/dev/null 2>/dev/null
NW_SUBNET=$(find_free_subnet)
docker network ls | grep -q $SAGEMAKER_NW && docker network rm $SAGEMAKER_NW >/dev/null 2>&1
docker network create $SAGEMAKER_NW -d overlay --attachable --scope swarm \
${NW_SUBNET:+--subnet=$NW_SUBNET} \
--label com.docker.compose.network=sagemaker-local \
--label com.docker.compose.project=sagemaker-local
elif [[ "${OPT_STYLE}" == "compose" ]]; then
NW_SUBNET=$(find_free_subnet)
docker network ls | grep -q $SAGEMAKER_NW || \
docker network create $SAGEMAKER_NW ${NW_SUBNET:+--subnet=$NW_SUBNET}
else
echo "Unknown docker style ${OPT_STYLE}. Exiting."
exit 1
fi
sedi "s/<DOCKER_STYLE>/${OPT_STYLE}/g" $INSTALL_DIR/system.env
# ensure our variables are set on startup - not for local setup.
if [[ "${OPT_CLOUD}" != "local" ]]; then
touch "$HOME/.profile"
NUM_IN_PROFILE=$(grep -c "$INSTALL_DIR/bin/activate.sh" "$HOME/.profile" || true)
if [ "$NUM_IN_PROFILE" -eq 0 ]; then
echo "source $INSTALL_DIR/bin/activate.sh" >>$HOME/.profile
fi
fi
# mark as done
date | tee $INSTALL_DIR/DONE
## Optional autorun feature
# if using automation scripts to auto-configure and run,
# you must pass the S3 training location to this instance (written to $INSTALL_DIR/autorun.s3url) for this to work
if [[ -f "$INSTALL_DIR/autorun.s3url" ]]; then
## read in first line; the first line is always assumed to be the training location regardless of what else is in the file
TRAINING_LOC=$(awk 'NR==1 {print; exit}' $INSTALL_DIR/autorun.s3url)
#get bucket name
TRAINING_BUCKET=${TRAINING_LOC%%/*}
#get prefix; handle the case where no prefix is given and a bare bucket is passed
if [[ "$TRAINING_LOC" == *"/"* ]]; then
TRAINING_PREFIX=${TRAINING_LOC#*/}
else
TRAINING_PREFIX=""
fi
##check if a custom autorun script exists in the s3 training bucket; if not, use the default in this repo
if aws s3api head-object --bucket "$TRAINING_BUCKET" --key "$TRAINING_PREFIX/autorun.sh" >/dev/null 2>&1; then
echo "Custom autorun.sh found in S3; downloading it."
aws s3 cp "s3://$TRAINING_LOC/autorun.sh" $INSTALL_DIR/bin/autorun.sh
else
echo "No custom autorun.sh in S3; using the local copy."
fi
chmod +x $INSTALL_DIR/bin/autorun.sh
bash -c "source $INSTALL_DIR/bin/autorun.sh"
fi
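# The bucket/prefix split above uses bash parameter expansion. A minimal,
# standalone sketch of the same logic (split_s3_location and the sample
# locations are illustrative, not part of the repo):

```shell
# ${loc%%/*} keeps everything before the first '/'; ${loc#*/} drops through it.
split_s3_location() {
  loc="$1"
  bucket="${loc%%/*}"
  prefix=""
  case "$loc" in */*) prefix="${loc#*/}" ;; esac
  printf '%s %s\n' "$bucket" "$prefix"
}

split_s3_location "my-bucket/models/run1"   # -> my-bucket models/run1
split_s3_location "my-bucket"               # -> my-bucket (empty prefix)
```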
================================================
FILE: bin/module/droa.sh
================================================
#!/usr/bin/env bash
# DRoA (DeepRacer on AWS) shell functions.
# Sourced by bin/activate.sh alongside scripts_wrapper.sh and summary.sh.
function droa-list-models {
dr-update-env && python3 "${DR_DIR}/scripts/droa/list_models.py" "$@"
}
function droa-get-model {
dr-update-env && python3 "${DR_DIR}/scripts/droa/get_model.py" "$@"
}
function droa-download-logs {
dr-update-env && python3 "${DR_DIR}/scripts/droa/download_logs.py" "$@"
}
function droa-delete-model {
dr-update-env && python3 "${DR_DIR}/scripts/droa/delete_model.py" "$@"
}
function droa-import-model {
dr-update-env && python3 "${DR_DIR}/scripts/droa/import_model.py" "$@"
}
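# Each wrapper above forwards its arguments with "$@". A small sketch of why
# that matters (the wrap helper is hypothetical): "$@" preserves each argument's
# word boundaries, so user flags pass through to python3 unmangled, unlike $*.

```shell
# Forward every argument intact; printf emits one line per argument.
wrap() {
  printf '%s\n' "$@"
}

wrap "one arg" two   # two lines: 'one arg' then 'two'
```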
================================================
FILE: bin/module/summary.sh
================================================
function dr-summary {
# ANSI colour palette
local RST='\033[0m'
local BOLD='\033[1m'
local DIM='\033[2m'
local C_BORDER='\033[38;5;33m' # blue
local C_HEADER='\033[38;5;39m' # bright blue
local C_KEY='\033[38;5;250m' # light grey
local C_VAL='\033[38;5;222m' # amber
local C_OK='\033[38;5;82m' # green
local C_WARN='\033[38;5;220m' # yellow
local C_ERR='\033[38;5;196m' # red
local C_SECTION='\033[38;5;75m' # sky blue
# ── dynamic width / height ──────────────────────────────────────────────
local TERM_W WIDE=false W
TERM_W=$(tput cols 2>/dev/null || echo 80)
TERM_H=$(tput lines 2>/dev/null || echo 24)
_dr_lines=0 # running line counter (non-local so helpers can increment)
if [[ $TERM_W -ge 120 ]]; then
W=118 # total box width = W+2 = 120
WIDE=true
else
W=$(( TERM_W - 2 ))
[[ $W -lt 78 ]] && W=78
fi
# Two-column content widths: │ space WL space │ space WR space │ = WL+WR+7 = W+2
local WL=$(( (W - 5) / 2 ))
local WR=$(( W - 5 - WL ))
# ── helpers ───────────────────────────────────────────────────────────────
_dr_hline() {
local L="$1" M="$2" R="$3"
printf "${C_BORDER}${L}"; printf "${M}%.0s" $(seq 1 $W); printf "${R}${RST}\n"
(( ++_dr_lines ))
}
_dr_row() {
local text="$1"
local plain; plain=$(echo -e "$text" | sed 's/\x1b\[[0-9;]*m//g')
local pad=$(( W - ${#plain} - 2 ))
[[ $pad -lt 0 ]] && pad=0
printf "${C_BORDER}│${RST} %b%-*s ${C_BORDER}│${RST}\n" "$text" "$pad" ""
(( ++_dr_lines ))
}
_dr_blank() { _dr_row ""; }
_dr_section() {
_dr_hline "├" "─" "┤"
local label=" ${BOLD}${C_SECTION}$1${RST}"
[[ -n "${2:-}" ]] && label+="${DIM} $2${RST}"
_dr_row "$label"
_dr_hline "├" "─" "┤"
}
_dr_kv() {
local k="$1" v="$2" s="${3:-}"
local vc="$C_VAL"
[[ "$s" == "ok" ]] && vc="$C_OK"
[[ "$s" == "warn" ]] && vc="$C_WARN"
[[ "$s" == "err" ]] && vc="$C_ERR"
_dr_row " ${C_KEY}$(printf '%-22s' "$k")${RST} ${vc}${v}${RST}"
}
_dr_hline_2col() { # L M1 SEP M2 R
local L="$1" M1="$2" SEP="$3" M2="$4" R="$5"
local LD=$(( WL + 2 )) RD=$(( WR + 2 ))
printf "${C_BORDER}${L}"
printf "${M1}%.0s" $(seq 1 $LD)
printf "${SEP}"
printf "${M2}%.0s" $(seq 1 $RD)
printf "${R}${RST}\n"
(( ++_dr_lines ))
}
_dr_row_2col() {
local lt="$1" rt="${2:-}"
local lp; lp=$(echo -e "$lt" | sed 's/\x1b\[[0-9;]*m//g')
local rp; rp=$(echo -e "$rt" | sed 's/\x1b\[[0-9;]*m//g')
local lpad=$(( WL - ${#lp} )) rpad=$(( WR - ${#rp} ))
[[ $lpad -lt 0 ]] && lpad=0
[[ $rpad -lt 0 ]] && rpad=0
printf "${C_BORDER}│${RST} %b%-*s ${C_BORDER}│${RST} %b%-*s ${C_BORDER}│${RST}\n" \
"$lt" "$lpad" "" "$rt" "$rpad" ""
(( ++_dr_lines ))
}
# ── spinner (shown while pre-compute phase runs) ─────────────────────────
local _dr_spinner_pid=""
if [[ -t 1 ]]; then
{ (
local frames=('⠋' '⠙' '⠹' '⠸' '⠼' '⠴' '⠦' '⠧' '⠇' '⠏') i=0
while true; do
printf '\r \033[38;5;33m%s\033[0m \033[2mLoading DeepRacer-for-Cloud...\033[0m' \
"${frames[i]}" >/dev/tty 2>/dev/null
(( i = (i + 1) % ${#frames[@]} ))
sleep 0.12
done
) & } 2>/dev/null
_dr_spinner_pid=$!
fi
# ── pre-compute git branch / update status ───────────────────────────────
local _git_branch _git_update_available=false
_git_branch=$(git -C "$DR_DIR" rev-parse --abbrev-ref HEAD 2>/dev/null || true)
timeout 5 git -C "$DR_DIR" fetch --quiet origin 2>/dev/null || true
local _local_hash _remote_hash
_local_hash=$(git -C "$DR_DIR" rev-parse HEAD 2>/dev/null || true)
_remote_hash=$(git -C "$DR_DIR" rev-parse '@{u}' 2>/dev/null || true)
if [[ -n "$_local_hash" && -n "$_remote_hash" && "$_local_hash" != "$_remote_hash" ]]; then
_git_update_available=true
fi
# ── pre-compute dynamic values ────────────────────────────────────────────
local cloud_val="${DR_CLOUD:-n/a}"
[[ "${DR_CLOUD,,}" == "aws" ]] && cloud_val="aws"
[[ "${DR_CLOUD,,}" == "remote" ]] && cloud_val="remote"
local s3_color
if aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3api head-bucket \
--bucket "${DR_LOCAL_S3_BUCKET}" >/dev/null 2>&1; then
s3_color="${C_OK}"
else
s3_color="${C_ERR}"
fi
local nvidia_runtime
if docker info --format '{{json .Runtimes}}' 2>/dev/null | grep -q '"nvidia"'; then
nvidia_runtime="${C_OK}available${RST}"
else
nvidia_runtime="${C_WARN}not found${RST}"
fi
# ── stop spinner and clear its line before rendering ─────────────────────
if [[ -n "${_dr_spinner_pid:-}" ]]; then
kill "$_dr_spinner_pid" 2>/dev/null
disown "$_dr_spinner_pid" 2>/dev/null
wait "$_dr_spinner_pid" 2>/dev/null
printf '\r\033[K' >/dev/tty 2>/dev/null || true
fi
# ── header ────────────────────────────────────────────────────────────────
echo; (( ++_dr_lines ))
_dr_hline "╭" "─" "╮"
_dr_row " ${BOLD}${C_HEADER}DeepRacer for Cloud — Environment Summary${RST}"
local _meta_row
if [[ -n "${DR_EXPERIMENT_NAME:-}" ]]; then
_meta_row=" ${DIM}Experiment: ${RST}${C_VAL}${DR_EXPERIMENT_NAME}${RST}"
else
local _rel_config
_rel_config=$(realpath --relative-to="${PWD}" "${DR_CONFIG}" 2>/dev/null || basename "${DR_CONFIG}")
_meta_row=" ${DIM}Config: ${RST}${C_VAL}${_rel_config}${RST}"
fi
local _branch_row="${DIM} Branch: ${RST}${C_VAL}${_git_branch:-unknown}${RST}"
if [[ "$_git_update_available" == true ]]; then
_branch_row+=" ${C_WARN}⬆ update available — run 'git pull'${RST}"
fi
_dr_row "${_meta_row}${_branch_row}"
# ── system config + run config ────────────────────────────────────────────
if [[ "$WIDE" == true ]]; then
local CKW=18 # key column width in 2-col mode
_dr_hline_2col "├" "─" "┬" "─" "┤"
_dr_row_2col \
" ${BOLD}${C_SECTION}System Configuration${RST}" \
" ${BOLD}${C_SECTION}Run Configuration${RST}${DIM} ID: ${DR_RUN_ID:-0}${RST}"
_dr_hline_2col "├" "─" "┼" "─" "┤"
local lrows=() rrows=()
lrows+=(" ${C_KEY}$(printf '%-*s' $CKW 'Docker style')${RST} ${C_VAL}${DR_DOCKER_STYLE:-swarm}${RST}")
lrows+=(" ${C_KEY}$(printf '%-*s' $CKW 'Cloud / Bucket')${RST} ${DIM}${cloud_val}${RST} ${s3_color}${DR_LOCAL_S3_BUCKET:-n/a}${RST}")
lrows+=(" ${C_KEY}$(printf '%-*s' $CKW 'Workers')${RST} ${C_VAL}${DR_WORKERS:-1}${RST}")
lrows+=(" ${C_KEY}$(printf '%-*s' $CKW 'NVIDIA runtime')${RST} ${nvidia_runtime}")
rrows+=(" ${C_KEY}$(printf '%-*s' $CKW 'Model prefix')${RST} ${C_VAL}${DR_LOCAL_S3_MODEL_PREFIX:-n/a}${RST}")
rrows+=(" ${C_KEY}$(printf '%-*s' $CKW 'Race type')${RST} ${C_VAL}${DR_RACE_TYPE:-n/a}${RST}")
rrows+=(" ${C_KEY}$(printf '%-*s' $CKW 'World / track')${RST} ${C_VAL}${DR_WORLD_NAME:-n/a}${RST}")
rrows+=(" ${C_KEY}$(printf '%-*s' $CKW 'Car name')${RST} ${C_VAL}${DR_CAR_NAME:-n/a}${RST}")
local max_r=$(( ${#lrows[@]} > ${#rrows[@]} ? ${#lrows[@]} : ${#rrows[@]} ))
for (( i=0; i<max_r; i++ )); do
_dr_row_2col "${lrows[$i]:-}" "${rrows[$i]:-}"
done
_dr_hline_2col "├" "─" "┴" "─" "┤"
else
_dr_section "System Configuration"
_dr_row " ${C_KEY}$(printf '%-22s' 'Docker style')${RST} ${C_VAL}${DR_DOCKER_STYLE:-swarm}${RST}"
_dr_row " ${C_KEY}$(printf '%-22s' 'Cloud / Bucket')${RST} ${DIM}${cloud_val}${RST} ${s3_color}${DR_LOCAL_S3_BUCKET:-n/a}${RST}"
_dr_kv "Workers" "${DR_WORKERS:-1}"
_dr_row " ${C_KEY}$(printf '%-22s' 'NVIDIA runtime')${RST} ${nvidia_runtime}"
_dr_section "Run Configuration" "ID: ${DR_RUN_ID:-0}"
_dr_kv "Model prefix" "${DR_LOCAL_S3_MODEL_PREFIX:-n/a}"
_dr_kv "Race type" "${DR_RACE_TYPE:-n/a}"
_dr_kv "World / track" "${DR_WORLD_NAME:-n/a}"
_dr_kv "Car name" "${DR_CAR_NAME:-n/a}"
fi
# ── simapp version check (used inline in docker images section) ───────────
local simapp_update_available=false _required_simapp_ver=""
_required_simapp_ver=$(jq -r '.containers.simapp | select (.!=null)' "$DR_DIR/defaults/dependencies.json" 2>/dev/null || true)
if [[ -n "$_required_simapp_ver" && -n "${DR_SIMAPP_VERSION:-}" ]]; then
local _configured_simapp_ver
_configured_simapp_ver=$(echo "${DR_SIMAPP_VERSION}" | grep -oP '^\d+\.\d+(\.\d+)?')
if [[ -n "$_configured_simapp_ver" ]] && ! verlte "$_required_simapp_ver" "$_configured_simapp_ver"; then
simapp_update_available=true
fi
fi
# ── docker images ─────────────────────────────────────────────────────────
if [[ "$WIDE" == true ]]; then
# 2-col closing line already drawn; just add section label row
local label=" ${BOLD}${C_SECTION}Configured Docker Images${RST}"
_dr_row "$label"
_dr_hline "├" "─" "┤"
else
_dr_section "Configured Docker Images"
fi
local simapp_img="${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}"
local simapp_disp="${simapp_img/awsdeepracercommunity/[a-d-c]}"
local simapp_id; simapp_id=$(docker image inspect "$simapp_img" --format '{{slice .Id 7 19}}' 2>/dev/null)
local analysis_img="awsdeepracercommunity/deepracer-analysis:${DR_ANALYSIS_IMAGE:-cpu}"
local analysis_disp="${analysis_img/awsdeepracercommunity/[a-d-c]}"
local analysis_id; analysis_id=$(docker image inspect "$analysis_img" --format '{{slice .Id 7 19}}' 2>/dev/null)
local minio_img="" minio_disp="" minio_id=""
if [[ "${DR_CLOUD,,}" == "local" || "${DR_CLOUD,,}" == "azure" ]]; then
minio_img="minio/minio:${DR_MINIO_IMAGE:-latest}"
minio_disp="$minio_img"
minio_id=$(docker image inspect "$minio_img" --format '{{slice .Id 7 19}}' 2>/dev/null)
if [[ -z "$minio_id" ]]; then
minio_id=$(docker images minio/minio --format '{{slice .ID 0 12}}' 2>/dev/null | head -1)
fi
fi
local _simapp_upd_note=""
[[ "$simapp_update_available" == true ]] && _simapp_upd_note=" ${C_WARN}⬆ update available (→ ${_required_simapp_ver})${RST}"
if [[ "$WIDE" == true ]]; then
local IKW=14
if [[ -n "$simapp_id" ]]; then
_dr_row " ${C_KEY}$(printf '%-*s' $IKW 'SimApp')${RST} ${C_OK}${simapp_disp}${RST} ${DIM}ID: ${simapp_id} ✓ local${RST}${_simapp_upd_note}"
else
_dr_row " ${C_KEY}$(printf '%-*s' $IKW 'SimApp')${RST} ${C_WARN}${simapp_disp} (not pulled)${RST}${_simapp_upd_note}"
fi
if [[ -n "$analysis_id" ]]; then
_dr_row " ${C_KEY}$(printf '%-*s' $IKW 'Analysis')${RST} ${C_OK}${analysis_disp}${RST} ${DIM}ID: ${analysis_id} ✓ local${RST}"
else
_dr_row " ${C_KEY}$(printf '%-*s' $IKW 'Analysis')${RST} ${C_WARN}${analysis_disp} (not pulled)${RST}"
fi
if [[ -n "$minio_img" ]]; then
if [[ -n "$minio_id" ]]; then
_dr_row " ${C_KEY}$(printf '%-*s' $IKW 'MinIO')${RST} ${C_OK}${minio_disp}${RST} ${DIM}ID: ${minio_id} ✓ local${RST}"
else
_dr_row " ${C_KEY}$(printf '%-*s' $IKW 'MinIO')${RST} ${C_WARN}${minio_disp} (not pulled)${RST}"
fi
fi
else
if [[ -n "$simapp_id" ]]; then
_dr_kv "SimApp" "${simapp_disp}" "ok"
_dr_row " ${DIM}$(printf '%22s' '') ID: ${simapp_id} ✓ local${RST}${_simapp_upd_note}"
else
_dr_kv "SimApp" "${simapp_disp} (not pulled)${_simapp_upd_note}" "warn"
fi
if [[ -n "$analysis_id" ]]; then
_dr_kv "Analysis" "${analysis_disp}" "ok"
_dr_row " ${DIM}$(printf '%22s' '') ID: ${analysis_id} ✓ local${RST}"
else
_dr_kv "Analysis" "${analysis_disp} (not pulled)" "warn"
fi
if [[ -n "$minio_img" ]]; then
if [[ -n "$minio_id" ]]; then
_dr_kv "MinIO" "${minio_disp}" "ok"
_dr_row " ${DIM}$(printf '%22s' '') ID: ${minio_id} ✓ local${RST}"
else
_dr_kv "MinIO" "${minio_disp} (not pulled)" "warn"
fi
fi
fi
# ── services and containers ───────────────────────────────────────────────
_dr_section "DeepRacer Services And Containers"
local found_any=false
if [[ "${DR_DOCKER_STYLE,,}" == "swarm" ]]; then
local stack_lines
stack_lines=$(docker stack ls --format '{{.Name}}\t{{.Services}}' 2>/dev/null || true)
if [[ -n "$stack_lines" ]]; then
found_any=true
_dr_row " ${DIM}Swarm stacks:${RST}"
while IFS=$'\t' read -r stname stsvcs; do
_dr_row " ${C_KEY}$(printf '%-30s' "$stname")${RST} ${C_VAL}${stsvcs} service(s)${RST}"
done <<< "$stack_lines"
fi
local svc_lines
svc_lines=$(docker service ls --format '{{.Name}}\t{{.Replicas}}\t{{.Image}}' 2>/dev/null \
| grep -i '^deepracer' || true)
if [[ -n "$svc_lines" ]]; then
found_any=true
_dr_row " ${DIM}Swarm services:${RST}"
while IFS=$'\t' read -r sname sreplicas simage; do
local desired actual
desired=$(echo "$sreplicas" | cut -d'/' -f2)
actual=$(echo "$sreplicas" | cut -d'/' -f1)
local rep_color="$C_OK"
[[ "$actual" != "$desired" ]] && rep_color="$C_WARN"
local simage_disp="${simage/awsdeepracercommunity/[a-d-c]}"
_dr_row " ${C_KEY}$(printf '%-30s' "$sname")${RST} ${rep_color}$(printf '%-8s' "$sreplicas")${RST} ${DIM}${simage_disp}${RST}"
done <<< "$svc_lines"
fi
local container_lines
container_lines=$(docker ps --format '{{.Names}}\t{{.Status}}\t{{.Image}}' 2>/dev/null \
| while IFS=$'\t' read -r cn cs ci; do
if echo "$cn" | grep -qiE '^deepracer|robomaker|sagemaker|minio|rl_coach|analysis' \
|| [[ "$ci" == "$simapp_img"* ]]; then
printf '%s\t%s\n' "$cn" "$cs"
fi
done)
if [[ -n "$container_lines" ]]; then
found_any=true
local n_ctrs; n_ctrs=$(echo "$container_lines" | wc -l)
_dr_row " ${DIM}Containers:${RST}"
# 3 lines reserved for footer (blank row + closing hline + trailing newline)
if (( _dr_lines + n_ctrs + 3 > TERM_H )); then
_dr_row " ${DIM}${n_ctrs} container(s) running ${C_WARN}(terminal too short to list)${RST}"
else
while IFS=$'\t' read -r cname cstatus; do
local status_color="$C_OK"
[[ "$cstatus" != Up* ]] && status_color="$C_WARN"
_dr_row " ${C_KEY}$(printf '%-30s' "$cname")${RST} ${status_color}${cstatus}${RST}"
done <<< "$container_lines"
fi
fi
else
local proj_lines
proj_lines=$(docker compose ls --format json 2>/dev/null \
| jq -r '.[] | select(.Name | test("deepracer|s3"; "i")) | "\(.Name)\t\(.Status)"' 2>/dev/null || true)
if [[ -n "$proj_lines" ]]; then
found_any=true
_dr_row " ${DIM}Compose projects:${RST}"
while IFS=$'\t' read -r pname pstatus; do
local pstatus_color="$C_OK"
[[ "$pstatus" != *running* ]] && pstatus_color="$C_WARN"
_dr_row " ${C_KEY}$(printf '%-30s' "$pname")${RST} ${pstatus_color}${pstatus}${RST}"
done <<< "$proj_lines"
fi
local container_lines
container_lines=$(docker ps --format '{{.Names}}\t{{.Status}}\t{{.Image}}' 2>/dev/null \
| while IFS=$'\t' read -r cn cs ci; do
if echo "$cn" | grep -qiE '^deepracer|robomaker|sagemaker|minio|rl_coach|analysis' \
|| [[ "$ci" == "$simapp_img"* ]]; then
printf '%s\t%s\n' "$cn" "$cs"
fi
done)
if [[ -n "$container_lines" ]]; then
found_any=true
local n_ctrs; n_ctrs=$(echo "$container_lines" | wc -l)
_dr_row " ${DIM}Compose services:${RST}"
if (( _dr_lines + n_ctrs + 3 > TERM_H )); then
_dr_row " ${DIM}${n_ctrs} container(s) running ${C_WARN}(terminal too short to list)${RST}"
else
while IFS=$'\t' read -r cname cstatus; do
local status_color="$C_OK"
[[ "$cstatus" != Up* ]] && status_color="$C_WARN"
_dr_row " ${C_KEY}$(printf '%-30s' "$cname")${RST} ${status_color}${cstatus}${RST}"
done <<< "$container_lines"
fi
fi
fi
if [[ "$found_any" == false ]]; then
_dr_row " ${C_WARN}No DeepRacer-related services or containers running.${RST}"
fi
# ── footer ────────────────────────────────────────────────────────────────
_dr_blank
_dr_hline "╰" "─" "╯"
echo
}
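# The _dr_row helpers above strip ANSI escapes before measuring visible width,
# so padding ignores colour codes. A standalone sketch of that technique
# (strip_ansi is an illustrative helper, not part of the module):

```shell
# Remove ANSI SGR sequences so string length reflects visible characters only.
strip_ansi() {
  printf '%s' "$1" | sed $'s/\x1b\\[[0-9;]*m//g'
}

text=$'\033[1mBold\033[0m'
plain=$(strip_ansi "$text")
echo "${#plain}"   # 4 visible characters, not 12
```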
================================================
FILE: bin/prepare-mac.sh
================================================
#!/usr/bin/env bash
set -euo pipefail
trap ctrl_c INT
function ctrl_c() {
echo "Requested to stop."
exit 1
}
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
## Only allow macOS
if [[ "$(uname -s)" != "Darwin" ]]; then
echo "ERROR: This script is for macOS only. Use prepare.sh for Linux."
exit 1
fi
MACOS_VERSION=$(sw_vers -productVersion)
MACOS_MAJOR=$(echo "$MACOS_VERSION" | cut -d. -f1)
# Supported: Monterey (12), Ventura (13), Sonoma (14), Sequoia (15)
SUPPORTED_MACOS_MAJOR=(12 13 14 15)
VERSION_OK=false
for V in "${SUPPORTED_MACOS_MAJOR[@]}"; do
if [[ "${MACOS_MAJOR}" -eq "$V" ]]; then
VERSION_OK=true
break
fi
done
if [[ "$VERSION_OK" != true ]]; then
echo "WARNING: macOS ${MACOS_VERSION} is not a tested version."
echo " Supported: Monterey (12), Ventura (13), Sonoma (14), Sequoia (15)"
fi
echo "Detected macOS ${MACOS_VERSION}"
## macOS does not support NVIDIA GPUs -- always CPU
ARCH="cpu"
echo "macOS does not support NVIDIA GPUs. Using CPU mode."
## Detect Apple Silicon vs Intel
CPU_ARCH=$(uname -m)
if [[ "${CPU_ARCH}" == "arm64" ]]; then
echo "Apple Silicon (arm64) detected."
BREW_PREFIX="/opt/homebrew"
else
echo "Intel (x86_64) detected."
BREW_PREFIX="/usr/local"
fi
## Install Homebrew if not present
if ! command -v brew >/dev/null 2>&1; then
echo "Installing Homebrew..."
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
eval "$("${BREW_PREFIX}/bin/brew" shellenv)"
fi
## Update Homebrew
brew update
## Install required packages (no awscli here -- installed separately below)
brew install jq python3 git screen bash
## Set Homebrew bash as the default shell if not already
BREW_BASH="${BREW_PREFIX}/bin/bash"
if ! grep -qF "${BREW_BASH}" /etc/shells; then
echo "Adding ${BREW_BASH} to /etc/shells..."
echo "${BREW_BASH}" | sudo tee -a /etc/shells
fi
if [[ "$(dscl . -read /Users/"$(id -un)" UserShell | awk '{print $2}')" != "${BREW_BASH}" ]]; then
echo "Setting default shell to ${BREW_BASH}..."
sudo chsh -s "${BREW_BASH}" "$(id -un)"
fi
## Ensure bash 5 + Homebrew PATH are set up for all SSH sessions.
## Sets PATH first so --login re-entry is safe (BASH_VERSINFO guard prevents looping).
BASH_PROFILE="${HOME}/.bash_profile"
BOOTSTRAP_MARKER="# drfc-bash5-bootstrap"
if ! grep -qF "${BOOTSTRAP_MARKER}" "${BASH_PROFILE}" 2>/dev/null; then
cat >> "${BASH_PROFILE}" <<EOF
${BOOTSTRAP_MARKER}
eval "\$(${BREW_PREFIX}/bin/brew shellenv)"
export PATH="/usr/local/bin:\$PATH" # AWS CLI v2
if [ -x "${BREW_BASH}" ] && [ "\${BASH_VERSINFO[0]:-0}" -lt 5 ]; then
exec "${BREW_BASH}" --login
fi
EOF
echo "Added bash 5 + PATH bootstrap to ${BASH_PROFILE}."
fi
## Install boto3 and pyyaml
if pip3 install boto3 pyyaml --break-system-packages 2>/dev/null; then
echo "boto3 and pyyaml installed."
else
pip3 install boto3 pyyaml
fi
## Install AWS CLI v2 via official pkg installer (avoids Homebrew Python conflicts)
if command -v aws >/dev/null 2>&1; then
echo "AWS CLI already installed: $(aws --version 2>&1)"
else
echo "Installing AWS CLI v2 via official installer..."
TMP_PKG=$(mktemp /tmp/AWSCLIV2.XXXXXX.pkg)
curl -fsSL "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "${TMP_PKG}"
sudo installer -pkg "${TMP_PKG}" -target /
rm -f "${TMP_PKG}"
echo "AWS CLI installed: $(aws --version 2>&1)"
fi
## Detect cloud
# detect.sh relies on cloud-init which is typically absent on macOS.
# Fall back to probing the AWS Instance Metadata Service (IMDSv2).
CLOUD_NAME="local"
if [[ -f /var/run/cloud-init/instance-data.json ]]; then
source "$DIR/detect.sh"
else
if IMDS_TOKEN=$(curl -s --connect-timeout 2 \
-X PUT "http://169.254.169.254/latest/api/token" \
-H "X-aws-ec2-metadata-token-ttl-seconds: 21600" 2>/dev/null) \
&& [[ -n "${IMDS_TOKEN}" ]]; then
CLOUD_NAME="aws"
CLOUD_INSTANCETYPE=$(curl -s --connect-timeout 2 \
-H "X-aws-ec2-metadata-token: ${IMDS_TOKEN}" \
"http://169.254.169.254/latest/meta-data/instance-type" 2>/dev/null || echo "unknown")
export CLOUD_NAME
export CLOUD_INSTANCETYPE
else
export CLOUD_NAME
fi
fi
echo "Detected cloud type ${CLOUD_NAME}"
## Install Docker CLI and Colima (headless Docker runtime for macOS)
## Colima is preferred over Docker Desktop for headless/EC2 use.
if brew list --formula colima &>/dev/null; then
echo "Colima already installed."
else
brew install colima
fi
if command -v docker >/dev/null 2>&1; then
echo "Docker CLI already installed."
else
brew install docker
fi
## Install docker-compose v2 (as CLI plugin)
if brew list --formula docker-compose &>/dev/null; then
echo "docker-compose already installed."
else
brew install docker-compose
fi
## Register docker-compose as a Docker CLI plugin
mkdir -p "${HOME}/.docker/cli-plugins"
ln -sfn "$(brew --prefix)/opt/docker-compose/bin/docker-compose" \
"${HOME}/.docker/cli-plugins/docker-compose"
## Start Colima if not already running
if colima status 2>/dev/null | grep -q "Running"; then
echo "Colima is already running."
else
echo "Starting Colima..."
if [[ "${CPU_ARCH}" == "arm64" ]] && [[ "${MACOS_MAJOR}" -ge 13 ]]; then
# Apple Silicon + macOS 13+: use Virtualization.framework (vz) for much
# lower hypervisor overhead vs QEMU. virtiofs gives better I/O than sshfs.
colima start --cpu 8 --memory 12 --disk 60 \
--vm-type vz --mount-type virtiofs
elif [[ "${CPU_ARCH}" == "arm64" ]]; then
colima start --cpu 8 --memory 12 --disk 60 --mount-type virtiofs
else
# Intel Mac
colima start --cpu 4 --memory 8 --disk 60 --mount-type virtiofs
fi
fi
## Ensure docker socket is reachable
if ! docker info >/dev/null 2>&1; then
echo "ERROR: Docker is not reachable. Check that Colima is running: colima status"
exit 1
fi
echo "Docker is available via Colima."
## Create /tmp/sagemaker inside the Colima VM.
## On macOS, Docker runs inside Colima's Linux VM so bind-mounts must exist there,
## not on the macOS host. /tmp persists across colima stop/start but not colima delete.
colima ssh -- sudo mkdir -p /tmp/sagemaker
colima ssh -- sudo chmod -R ug+w /tmp/sagemaker
echo "/tmp/sagemaker created inside Colima VM."
## Ensure Colima auto-starts on login (launchd)
if ! launchctl list 2>/dev/null | grep -q "com.abiosoft.colima.default"; then
brew services start colima || true
fi
## Completion message
echo ""
echo "First stage done. Log out and back in, then run init.sh -c ${CLOUD_NAME} -a ${ARCH}"
echo ""
echo "Notes:"
echo " - Log out and back in for the new default shell (bash 5) to take effect."
echo " - Colima must be running before using DeepRacer-for-Cloud."
echo " Start it manually with: colima start"
echo " - On Apple Silicon (arm64), amd64/x86_64 container images require"
echo " Rosetta 2. Install it with: softwareupdate --install-rosetta"
echo " Then restart Colima with: colima start --arch x86_64"
echo " - No reboot is required."
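# The .bash_profile bootstrap written above avoids an exec loop through its
# BASH_VERSINFO guard. A sketch of the guard logic in isolation (needs_reexec
# and the version values are illustrative):

```shell
# Re-exec into bash 5 only when the running major version is older; once
# bash 5 is active the condition is false, so no exec loop occurs.
needs_reexec() {
  [ "${1:-0}" -lt 5 ]
}

needs_reexec 3 && echo "would exec bash 5"
needs_reexec 5 || echo "already on bash 5"
```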
================================================
FILE: bin/prepare.sh
================================================
#!/usr/bin/env bash
set -euo pipefail
trap ctrl_c INT
function ctrl_c() {
echo "Requested to stop."
exit 1
}
export DEBIAN_FRONTEND=noninteractive
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
# Only allow supported Ubuntu versions
. /etc/os-release
SUPPORTED_VERSIONS=("22.04" "24.04" "24.10" "25.04" "25.10")
DISTRIBUTION=${ID}${VERSION_ID//./}
UBUNTU_MAJOR_VERSION=$(echo $VERSION_ID | cut -d. -f1)
UBUNTU_MINOR_VERSION=$(echo $VERSION_ID | cut -d. -f2)
if [[ "$ID" == "ubuntu" ]]; then
VERSION_OK=false
for V in "${SUPPORTED_VERSIONS[@]}"; do
if [[ "$VERSION_ID" == "$V" ]]; then
VERSION_OK=true
break
fi
done
if [[ "$VERSION_OK" != true ]]; then
echo "ERROR: Ubuntu $VERSION_ID is not a supported version. Supported versions: ${SUPPORTED_VERSIONS[*]}"
exit 1
fi
fi
## Check if WSL2
IS_WSL2=""
if grep -qi Microsoft /proc/version && grep -q "WSL2" /proc/version; then
IS_WSL2="yes"
fi
# Remove needrestart on Ubuntu 22.04 and later (not on WSL2) to avoid interactive restart prompts
if [[ "${ID}" == "ubuntu" && ${UBUNTU_MAJOR_VERSION} -ge 22 && -z "${IS_WSL2}" ]]; then
sudo apt remove -y needrestart || true
fi
## Patch system
sudo apt update && sudo apt-mark hold grub-pc && sudo apt -y -o \
DPkg::options::="--force-confdef" -o DPkg::options::="--force-confold" -qq upgrade
## Install required packages
sudo apt install --no-install-recommends -y jq python3-boto3 python3-venv screen git curl
## Install AWS CLI
if [[ "${ID}" == "ubuntu" && ( ${UBUNTU_MAJOR_VERSION} -eq 22 ) ]]; then
sudo apt install -y awscli
else
if command -v snap >/dev/null 2>&1; then
sudo snap install aws-cli --classic
else
echo "WARNING: snap not available, AWS CLI not installed"
fi
fi
## Create Python virtual environment
VENV_DIR="${DIR}/../.venv"
if [[ ! -d "${VENV_DIR}" ]]; then
echo "Creating Python virtual environment at ${VENV_DIR}"
python3 -m venv --prompt drfc "${VENV_DIR}"
fi
echo "Installing Python requirements into virtual environment"
"${VENV_DIR}/bin/pip" install --quiet -r "${DIR}/../requirements.txt"
## Detect cloud
source $DIR/detect.sh
echo "Detected cloud type ${CLOUD_NAME}"
## Do I have a GPU
GPUS=0
if [[ -z "${IS_WSL2}" ]]; then
GPUS=$(lspci | awk '/NVIDIA/ && ( /VGA/ || /3D controller/ ) ' | wc -l)
else
if [[ -f /usr/lib/wsl/lib/nvidia-smi ]]; then
GPUS=$(nvidia-smi --query-gpu=name --format=csv,noheader | wc -l)
fi
fi
if [ "${GPUS:-0}" -eq 0 ]; then
ARCH="cpu"
echo "No NVIDIA GPU detected. Will not install drivers."
else
ARCH="gpu"
fi
## Adding Nvidia Drivers
if [[ "${ARCH}" == "gpu" && -z "${IS_WSL2}" ]]; then
DRIVER_OK=false
# Find all installed nvidia-driver-XXX packages (status 'ii'), extract version, and check if >= 525
for PKG in $(dpkg -l | awk '$1 == "ii" && /nvidia-driver-[0-9]+/ {print $2}'); do
DRIVER_VER=$(echo "${PKG}" | sed -E 's/nvidia-driver-([0-9]+).*/\1/')
if [[ ${DRIVER_VER} -ge 560 ]]; then
echo "NVIDIA driver ${DRIVER_VER} already installed."
DRIVER_OK=true
break
fi
done
if [[ "${DRIVER_OK}" != true ]]; then
# Try to install the highest available driver >= 560
HIGHEST_DRIVER=$(apt-cache search --names-only '^nvidia-driver-[0-9]+$' | awk '{print $1}' | grep -oE '[0-9]+$' | awk '$1 >= 560' | sort -nr | head -n1)
if [[ -n "${HIGHEST_DRIVER}" ]]; then
sudo apt install -y "nvidia-driver-${HIGHEST_DRIVER}" --no-install-recommends -o Dpkg::Options::="--force-overwrite"
elif apt-cache show nvidia-driver-560-server &>/dev/null; then
sudo apt install -y nvidia-driver-560-server --no-install-recommends -o Dpkg::Options::="--force-overwrite"
else
echo "No supported NVIDIA driver >= 560 found for this Ubuntu version."
exit 1
fi
fi
fi
## Installing Docker
sudo apt install -y --no-install-recommends docker.io docker-buildx docker-compose-v2
## Install Nvidia Docker Container
if [[ "${ARCH}" == "gpu" ]]; then
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg &&
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list |
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' |
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update && sudo apt install -y --no-install-recommends nvidia-docker2 nvidia-container-runtime
if [ -f "/etc/docker/daemon.json" ]; then
echo "Altering /etc/docker/daemon.json with default-runtime nvidia."
cat /etc/docker/daemon.json | jq 'del(."default-runtime") + {"default-runtime": "nvidia"}' | sudo tee /etc/docker/daemon.json
else
echo "Creating /etc/docker/daemon.json with default-runtime nvidia."
sudo cp "${DIR}/../defaults/docker-daemon.json" /etc/docker/daemon.json
fi
fi
## Enable and start docker
if [[ -n "${IS_WSL2}" ]]; then
sudo service docker restart
else
sudo systemctl enable docker
sudo systemctl restart docker
fi
## Ensure user can run docker
sudo usermod -a -G docker "$(id -un)"
## Reboot to load driver -- continue install if in cloud-init
CLOUD_INIT=$(pstree -s $BASHPID | awk '/cloud-init/' | wc -l)
if [[ "${CLOUD_INIT}" -ne 0 ]]; then
echo "Rebooting in 5 seconds. Will continue with install."
cd "${DIR}"
./runonce.sh "./init.sh -c ${CLOUD_NAME} -a ${ARCH}"
sleep 5s
sudo shutdown -r +1
elif [[ -n "${IS_WSL2}" || "${ARCH}" == "cpu" ]]; then
echo "First stage done. Log out, then log back in and run init.sh -c ${CLOUD_NAME} -a ${ARCH}"
echo "Note: You may need to log out and back in for docker group membership to take effect."
else
echo "First stage done. Please reboot and run init.sh -c ${CLOUD_NAME} -a ${ARCH}"
echo "Note: Reboot is required for NVIDIA drivers and docker group membership to take effect."
fi
================================================
FILE: bin/runonce.sh
================================================
#!/usr/bin/env bash
if [[ $# -eq 0 ]]; then
echo "Schedules a command to be run after the next reboot."
echo "Usage: $(basename $0) <command>"
echo " $(basename $0) -p <path> <command>"
echo " $(basename $0) -r <command>"
else
REMOVE=0
COMMAND=${!#}
SCRIPTPATH=$PATH
while getopts ":r:p:" optionName; do
case "$optionName" in
r)
REMOVE=1
COMMAND=$OPTARG
;;
p) SCRIPTPATH=$OPTARG ;;
esac
done
SCRIPT="${HOME}/.$(basename $0)_$(echo $COMMAND | sed 's/[^a-zA-Z0-9_]/_/g')"
if [[ ! -f $SCRIPT ]]; then
echo "#!/usr/bin/env bash" >$SCRIPT
echo "PATH=$SCRIPTPATH" >>$SCRIPT
echo "cd $(pwd)" >>$SCRIPT
echo "logger -t $(basename $0) -p local3.info \"COMMAND=$COMMAND ; USER=\$(whoami) ($(logname)) ; PWD=$(pwd) ; PATH=\$PATH\"" >>$SCRIPT
echo "$COMMAND | logger -t $(basename $0) -p local3.info" >>$SCRIPT
echo "$0 -r \"$(echo $COMMAND | sed 's/\"/\\\"/g')\"" >>$SCRIPT
chmod +x $SCRIPT
fi
CRONTAB="${HOME}/.$(basename $0)_temp_crontab_$RANDOM"
ENTRY="@reboot $SCRIPT"
echo "$(crontab -l 2>/dev/null)" | grep -v "$ENTRY" | grep -v "^# DO NOT EDIT THIS FILE - edit the master and reinstall.$" | grep -v "^# ([^ ]* installed on [^)]*)$" | grep -v "^# (Cron version [^$]*\$[^$]*\$)$" >$CRONTAB
if [[ $REMOVE -eq 0 ]]; then
echo "$ENTRY" >>$CRONTAB
fi
crontab $CRONTAB
rm $CRONTAB
if [[ $REMOVE -ne 0 ]]; then
rm $SCRIPT
fi
fi
================================================
FILE: bin/scripts_wrapper.sh
================================================
#!/usr/bin/env bash
function _dr_is_macos {
[[ "$(uname -s)" == "Darwin" ]]
}
if ! declare -F _realpath >/dev/null 2>&1; then
function _realpath {
if command -v realpath >/dev/null 2>&1; then
realpath "$1"
elif command -v grealpath >/dev/null 2>&1; then
grealpath "$1"
elif ! _dr_is_macos && readlink -f / >/dev/null 2>&1; then
readlink -f "$1"
else
python3 -c 'import os, sys; print(os.path.realpath(sys.argv[1]))' "$1"
fi
}
fi
export -f _realpath
function _dr_require_colima {
if ! _dr_is_macos; then
return 0
fi
if ! command -v colima >/dev/null 2>&1; then
echo "ERROR: Colima is required on macOS. Run bin/prepare-mac.sh or install and start Colima." >&2
return 1
fi
if ! colima status 2>/dev/null | grep -qi "running"; then
echo "ERROR: Colima is not running. Start it with: colima start" >&2
return 1
fi
}
function _dr_ensure_sagemaker_dir {
if _dr_is_macos; then
_dr_require_colima || return 1
colima ssh -- sudo mkdir -p /tmp/sagemaker
colima ssh -- sudo chmod -R ug+w /tmp/sagemaker
elif [ ! -d /tmp/sagemaker ]; then
sudo mkdir -p /tmp/sagemaker
sudo chmod -R g+w /tmp/sagemaker
fi
}
function _dr_runtime_cat {
if _dr_is_macos; then
_dr_require_colima || return 1
colima ssh -- sudo cat "$1"
else
sudo cat "$1"
fi
}
function _dr_find_sagemaker_compose_files {
local compose_service_name="$1"
if _dr_is_macos; then
_dr_require_colima || return 1
colima ssh -- sudo env COMPOSE_SERVICE_NAME="$compose_service_name" sh -lc 'find /tmp/sagemaker -name docker-compose.yaml -exec grep -l -- "$COMPOSE_SERVICE_NAME" {} +'
else
sudo find /tmp/sagemaker -name docker-compose.yaml -exec grep -l -- "$compose_service_name" {} +
fi
}
function _dr_compose_file_matches_run {
local compose_file="$1"
local compose_content
compose_content=$(_dr_runtime_cat "$compose_file" 2>/dev/null) || return 1
grep -Fq "RUN_ID=${DR_RUN_ID}" <<<"$compose_content" && grep -Fq "${DR_LOCAL_S3_MODEL_PREFIX}" <<<"$compose_content"
}
function dr-upload-custom-files {
CUSTOM_TARGET="s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_CUSTOM_FILES_PREFIX}/"
echo "Uploading files to $CUSTOM_TARGET"
if [[ -z $DR_EXPERIMENT_NAME ]]; then
aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync $DR_DIR/custom_files/ $CUSTOM_TARGET
else
aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync $DR_DIR/experiments/$DR_EXPERIMENT_NAME/custom_files/ $CUSTOM_TARGET
fi
}
function dr-upload-model {
dr-update-env && ${DR_DIR}/scripts/upload/upload-model.sh "$@"
}
function dr-download-model {
dr-update-env && ${DR_DIR}/scripts/upload/download-model.sh "$@"
}
function dr-upload-car-zip {
dr-update-env && ${DR_DIR}/scripts/upload/upload-car.sh "$@"
}
function dr-list-aws-models {
echo "Due to changes in the AWS DeepRacer Console, this command is no longer available."
}
function dr-set-upload-model {
echo "Due to changes in the AWS DeepRacer Console, this command is no longer available."
}
function dr-increment-upload-model {
dr-update-env && ${DR_DIR}/scripts/upload/increment.sh "$@" && dr-update-env
}
function dr-download-custom-files {
CUSTOM_TARGET="s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_CUSTOM_FILES_PREFIX}/"
echo "Downloading files from $CUSTOM_TARGET"
if [[ -z $DR_EXPERIMENT_NAME ]]; then
aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync $CUSTOM_TARGET $DR_DIR/custom_files/
else
aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync $CUSTOM_TARGET $DR_DIR/experiments/$DR_EXPERIMENT_NAME/custom_files/
fi
}
function dr-start-training {
dr-update-env
$DR_DIR/scripts/training/start.sh "$@"
}
function dr-increment-training {
dr-update-env && ${DR_DIR}/scripts/training/increment.sh "$@" && dr-update-env
}
function dr-stop-training {
bash -c "cd $DR_DIR/scripts/training && ./stop.sh"
}
function dr-start-evaluation {
dr-update-env
$DR_DIR/scripts/evaluation/start.sh "$@"
}
function dr-stop-evaluation {
bash -c "cd $DR_DIR/scripts/evaluation && ./stop.sh"
}
function dr-stop-all {
# Step 1: Stop all stacks (swarm) or all compose projects (compose)
if [[ "${DR_DOCKER_STYLE,,}" == "swarm" ]]; then
docker stack ls --format '{{.Name}}' | while read -r STACK; do
echo "Removing stack: $STACK"
docker stack rm "$STACK"
done
else
while IFS=$'\t' read -r NAME CONFIGS; do
echo "Stopping compose project: $NAME"
local CONFIG_FLAGS
CONFIG_FLAGS=$(echo "$CONFIGS" | tr ',' '\n' | sed 's/^/-f /' | tr '\n' ' ')
docker compose $CONFIG_FLAGS -p "$NAME" down
done < <(docker compose ls --format json 2>/dev/null \
| jq -r '.[] | [.Name, .ConfigFiles] | @tsv')
fi
# Step 2: Stop the s3/minio stack if still running
if [[ "${DR_DOCKER_STYLE,,}" == "swarm" ]]; then
if docker stack ls --format '{{.Name}}' | grep -qx 's3'; then
echo "Removing stack: s3"
docker stack rm s3
fi
else
if docker compose ls --format json 2>/dev/null | jq -e '.[] | select(.Name == "s3")' >/dev/null 2>&1; then
echo "Stopping compose project: s3"
docker compose -p s3 down
fi
fi
echo "Waiting 10 seconds for stacks and services to stop..."
sleep 10
# Step 3: Stop any remaining containers still attached to sagemaker-local
local REMAINING
REMAINING=$(docker network inspect sagemaker-local --format '{{json .Containers}}' 2>/dev/null \
| jq -r 'keys[] | select(test("^[0-9a-f]{64}$"))' 2>/dev/null)
if [[ -n "$REMAINING" ]]; then
echo "Stopping remaining containers on sagemaker-local:"
echo "$REMAINING" | while read -r CONTAINER_ID; do
local CONTAINER_NAME
CONTAINER_NAME=$(docker inspect --format '{{.Name}}' "$CONTAINER_ID" | sed 's|^/||')
echo " Stopping: $CONTAINER_NAME"
docker stop "$CONTAINER_ID"
done
fi
}
function dr-start-tournament {
echo "Tournaments are no longer supported. Use Head-to-Model evaluation instead."
}
function dr-start-loganalysis {
bash -c "cd $DR_DIR/scripts/log-analysis && ./start.sh"
}
function dr-stop-loganalysis {
LOG_ANALYSIS_ID=$(docker ps | awk '/deepracer-analysis/ { print $1 }')
if [ -n "$LOG_ANALYSIS_ID" ]; then
bash -c "cd $DR_DIR/scripts/log-analysis && ./stop.sh"
else
echo "Log-analysis is not running."
fi
}
function dr-logs-sagemaker {
local OPTIND
OPT_TIME="--since 5m"
while getopts ":w:a" opt; do
case $opt in
w)
OPT_WAIT=$OPTARG
;;
a)
OPT_TIME=""
;;
\?)
echo "Invalid option -$OPTARG" >&2
;;
esac
done
SAGEMAKER_CONTAINER=$(dr-find-sagemaker)
if [[ -z "$SAGEMAKER_CONTAINER" ]]; then
if [[ -n "$OPT_WAIT" ]]; then
WAIT_TIME=$OPT_WAIT
echo "Waiting up to $WAIT_TIME seconds for Sagemaker to start up..."
until [ -n "$SAGEMAKER_CONTAINER" ]; do
sleep 1
((WAIT_TIME--))
if [ "$WAIT_TIME" -lt 1 ]; then
echo "Sagemaker is not running."
return 1
fi
SAGEMAKER_CONTAINER=$(dr-find-sagemaker)
done
else
echo "Sagemaker is not running."
return 1
fi
fi
if [[ "$TERM_PROGRAM" == "vscode" ]]; then
echo "VS Code terminal detected. Displaying Sagemaker logs inline."
docker logs $OPT_TIME -f $SAGEMAKER_CONTAINER
elif [[ "${DR_HOST_X,,}" == "true" && -n "$DISPLAY" ]]; then
if [ -x "$(command -v gnome-terminal)" ]; then
gnome-terminal --tab --title "DR-${DR_RUN_ID}: Sagemaker - ${SAGEMAKER_CONTAINER}" -- /usr/bin/env bash -c "docker logs $OPT_TIME -f ${SAGEMAKER_CONTAINER}" 2>/dev/null
echo "Sagemaker container $SAGEMAKER_CONTAINER logs opened in separate gnome-terminal. "
elif [ -x "$(command -v x-terminal-emulator)" ]; then
x-terminal-emulator -e /bin/sh -c "docker logs $OPT_TIME -f ${SAGEMAKER_CONTAINER}" 2>/dev/null
echo "Sagemaker container $SAGEMAKER_CONTAINER logs opened in separate terminal. "
else
echo 'Could not find a terminal emulator. Displaying inline.'
docker logs $OPT_TIME -f $SAGEMAKER_CONTAINER
fi
else
docker logs $OPT_TIME -f $SAGEMAKER_CONTAINER
fi
}
function dr-find-sagemaker {
STACK_NAME="deepracer-$DR_RUN_ID"
SAGEMAKER_CONTAINERS=$(docker ps | awk ' /simapp/ { print $1 } ' | xargs)
if [[ -n "$SAGEMAKER_CONTAINERS" ]]; then
for CONTAINER in $SAGEMAKER_CONTAINERS; do
CONTAINER_NAME=$(docker ps --format '{{.Names}}' --filter id=$CONTAINER)
CONTAINER_PREFIX=$(echo $CONTAINER_NAME | perl -n -e'/(.*)-(algo-(.)-(.*))/; print $1')
COMPOSE_SERVICE_NAME=$(echo $CONTAINER_NAME | perl -n -e'/(.*)-(algo-(.)-(.*))/; print $2')
if [[ -n "$COMPOSE_SERVICE_NAME" ]]; then
COMPOSE_FILES=$(_dr_find_sagemaker_compose_files "$COMPOSE_SERVICE_NAME")
for COMPOSE_FILE in $COMPOSE_FILES; do
if _dr_compose_file_matches_run "$COMPOSE_FILE"; then
echo $CONTAINER
fi
done
fi
done
fi
}
function dr-logs-robomaker {
OPT_REPLICA=1
OPT_EVAL=""
local OPTIND
OPT_TIME="--since 5m"
while getopts ":w:n:ea" opt; do
case $opt in
w)
OPT_WAIT=$OPTARG
;;
n)
OPT_REPLICA=$OPTARG
;;
e)
OPT_EVAL="-e"
;;
a)
OPT_TIME=""
;;
\?)
echo "Invalid option -$OPTARG" >&2
;;
esac
done
ROBOMAKER_CONTAINER=$(dr-find-robomaker -n ${OPT_REPLICA} ${OPT_EVAL})
if [[ -z "$ROBOMAKER_CONTAINER" ]]; then
if [[ -n "$OPT_WAIT" ]]; then
WAIT_TIME=$OPT_WAIT
echo "Waiting up to $WAIT_TIME seconds for Robomaker #${OPT_REPLICA} to start up..."
until [ -n "$ROBOMAKER_CONTAINER" ]; do
sleep 1
((WAIT_TIME--))
if [ "$WAIT_TIME" -lt 1 ]; then
echo "Robomaker #${OPT_REPLICA} is not running."
return 1
fi
ROBOMAKER_CONTAINER=$(dr-find-robomaker -n ${OPT_REPLICA} ${OPT_EVAL})
done
else
echo "Robomaker #${OPT_REPLICA} is not running."
return 1
fi
fi
if [[ "$TERM_PROGRAM" == "vscode" ]]; then
echo "VS Code terminal detected. Displaying Robomaker #${OPT_REPLICA} logs inline."
docker logs $OPT_TIME -f $ROBOMAKER_CONTAINER
elif [[ "${DR_HOST_X,,}" == "true" && -n "$DISPLAY" ]]; then
if [ -x "$(command -v gnome-terminal)" ]; then
gnome-terminal --tab --title "DR-${DR_RUN_ID}: Robomaker #${OPT_REPLICA} - ${ROBOMAKER_CONTAINER}" -- /usr/bin/env bash -c "docker logs $OPT_TIME -f ${ROBOMAKER_CONTAINER}" 2>/dev/null
echo "Robomaker #${OPT_REPLICA} ($ROBOMAKER_CONTAINER) logs opened in separate gnome-terminal. "
elif [ -x "$(command -v x-terminal-emulator)" ]; then
x-terminal-emulator -e /bin/sh -c "docker logs $OPT_TIME -f ${ROBOMAKER_CONTAINER}" 2>/dev/null
echo "Robomaker #${OPT_REPLICA} ($ROBOMAKER_CONTAINER) logs opened in separate terminal. "
else
echo 'Could not find a terminal emulator. Displaying inline.'
docker logs $OPT_TIME -f $ROBOMAKER_CONTAINER
fi
else
docker logs $OPT_TIME -f $ROBOMAKER_CONTAINER
fi
}
function dr-find-robomaker {
local OPTIND
OPT_PREFIX="deepracer"
while getopts ":n:e" opt; do
case $opt in
n)
OPT_REPLICA=$OPTARG
;;
e)
OPT_PREFIX="deepracer-eval"
;;
\?)
echo "Invalid option -$OPTARG" >&2
;;
esac
done
if [[ "${DR_DOCKER_STYLE,,}" == "swarm" ]]; then
ROBOMAKER_ID=$(docker ps | grep "${OPT_PREFIX}-${DR_RUN_ID}_robomaker.${OPT_REPLICA}" | cut -f1 -d' ' | head -1)
else
ROBOMAKER_ID=$(docker ps | grep "${OPT_PREFIX}-${DR_RUN_ID}-robomaker-${OPT_REPLICA}" | cut -f1 -d' ' | head -1)
fi
if [ -n "$ROBOMAKER_ID" ]; then
echo $ROBOMAKER_ID
fi
}
function dr-get-robomaker-stats {
local OPTIND
OPT_REPLICA=1
while getopts ":n:" opt; do
case $opt in
n)
OPT_REPLICA=$OPTARG
;;
\?)
echo "Invalid option -$OPTARG" >&2
;;
esac
done
ROBOMAKER_ID=$(dr-find-robomaker -n "$OPT_REPLICA")
if [ -n "$ROBOMAKER_ID" ]; then
echo "Showing statistics for Robomaker #$OPT_REPLICA - container $ROBOMAKER_ID"
docker exec -ti $ROBOMAKER_ID bash -c "gz stats"
else
echo "Robomaker #$OPT_REPLICA is not running."
fi
}
function dr-logs-loganalysis {
LOG_ANALYSIS_ID=$(docker ps | awk '/deepracer-analysis/ { print $1 }')
if [ -n "$LOG_ANALYSIS_ID" ]; then
docker logs -f $LOG_ANALYSIS_ID
else
echo "Log-analysis is not running."
fi
}
function dr-url-loganalysis {
LOG_ANALYSIS_ID=$(docker ps --filter "name=deepracer-analysis" --format "{{.ID}}" | head -1)
if [ -n "$LOG_ANALYSIS_ID" ]; then
URL=$(docker logs "$LOG_ANALYSIS_ID" 2>&1 | grep -oE 'http://127\.0\.0\.1:[0-9]+[^ ]*token=[a-f0-9]+' | tail -1)
if [ -n "$URL" ]; then
echo "${URL/127.0.0.1/localhost}"
else
echo "Jupyter URL not found yet. Try again in a moment."
fi
else
echo "Log-analysis is not running."
fi
}
function dr-view-stream {
${DR_DIR}/utils/start-local-browser.sh "$@"
}
function dr-start-viewer {
$DR_DIR/scripts/viewer/start.sh "$@"
}
function dr-stop-viewer {
$DR_DIR/scripts/viewer/stop.sh "$@"
}
function dr-update-viewer {
$DR_DIR/scripts/viewer/stop.sh "$@"
$DR_DIR/scripts/viewer/start.sh "$@"
}
function dr-start-metrics {
$DR_DIR/scripts/metrics/start.sh "$@"
}
function dr-stop-metrics {
$DR_DIR/scripts/metrics/stop.sh "$@"
}
================================================
FILE: defaults/debug-reward_function.py
================================================
import math
import numpy
import time
class Reward:
'''
Debugging reward function used to track the performance of local training.
Prints the Real-Time-Factor (RTF) as well as the number of
steps per second (sim-time) that the system is able to deliver.
'''
def __init__(self, verbose=False, track_time=False):
self.verbose = verbose
self.track_time = track_time
if track_time:
TIME_WINDOW=10
self.time = numpy.zeros([TIME_WINDOW, 2])
if verbose:
print("Initializing Reward Class")
def get_time(self):
wall_time_incr = numpy.max(self.time[:,0]) - numpy.min(self.time[:,0])
sim_time_incr = numpy.max(self.time[:,1]) - numpy.min(self.time[:,1])
rtf = sim_time_incr / wall_time_incr
fps = (self.time.shape[0] - 1) / sim_time_incr
return rtf, fps
def record_time(self, steps, sim_time=0.0):
index = int(steps) % self.time.shape[0]
self.time[index,0] = time.time()
self.time[index,1] = sim_time
def reward_function(self, params):
# Read input parameters
steps = params["steps"]
if self.track_time:
self.record_time(steps, sim_time=params.get("sim_time", 0.0))
if self.track_time:
if steps >= self.time.shape[0]:
rtf, fps = self.get_time()
print("TIME: s: {}, rtf: {}, fps: {}".format(int(steps), round(rtf, 2), round(fps, 2)))
return 1.0
reward_object = Reward(verbose=False, track_time=True)
def reward_function(params):
return reward_object.reward_function(params)
================================================
FILE: defaults/dependencies.json
================================================
{
"master_version": "6.0",
"containers": {
"simapp": "6.0.4"
}
}
================================================
FILE: defaults/docker-daemon.json
================================================
{
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia"
}
================================================
FILE: defaults/hyperparameters.json
================================================
{
"batch_size": 64,
"beta_entropy": 0.01,
"discount_factor": 0.99,
"e_greedy_value": 0.05,
"epsilon_steps": 10000,
"exploration_type": "categorical",
"loss_type": "huber",
"lr": 0.0003,
"num_episodes_between_training": 20,
"num_epochs": 5,
"stack_size": 1,
"term_cond_avg_score": 350.0,
"term_cond_max_episodes": 1000,
"sac_alpha": 0.2
}
================================================
FILE: defaults/model_metadata.json
================================================
{
"action_space": [
{
"steering_angle": -30,
"speed": 0.6
},
{
"steering_angle": -15,
"speed": 0.6
},
{
"steering_angle": 0,
"speed": 0.6
},
{
"steering_angle": 15,
"speed": 0.6
},
{
"steering_angle": 30,
"speed": 0.6
}
],
"sensor": ["FRONT_FACING_CAMERA"],
"neural_network": "DEEP_CONVOLUTIONAL_NETWORK_SHALLOW",
"training_algorithm": "clipped_ppo",
"action_space_type": "discrete",
"version": "5"
}
================================================
FILE: defaults/model_metadata_cont.json
================================================
{
"action_space": {
"speed": {
"high": 2,
"low": 1
},
"steering_angle": {
"high": 30,
"low": -30
}
},
"sensor": [
"FRONT_FACING_CAMERA"
],
"neural_network": "DEEP_CONVOLUTIONAL_NETWORK_SHALLOW",
"training_algorithm": "clipped_ppo",
"action_space_type": "continuous",
"version": "5"
}
================================================
FILE: defaults/model_metadata_sac.json
================================================
{
"action_space": {"speed": {"high": 2, "low": 1}, "steering_angle": {"high": 30, "low": -30}},
"sensor": ["FRONT_FACING_CAMERA"],
"neural_network": "DEEP_CONVOLUTIONAL_NETWORK_SHALLOW",
"training_algorithm": "sac",
"action_space_type": "continuous",
"version": "4"
}
================================================
FILE: defaults/reward_function.py
================================================
def reward_function(params):
'''
Example of penalize steering, which helps mitigate zig-zag behaviors
'''
# Read input parameters
distance_from_center = params['distance_from_center']
track_width = params['track_width']
steering = abs(params['steering_angle']) # Only need the absolute steering angle
# Calculate 3 markers that are farther and farther away from the center line
marker_1 = 0.1 * track_width
marker_2 = 0.25 * track_width
marker_3 = 0.5 * track_width
# Give higher reward if the car is closer to center line and vice versa
if distance_from_center <= marker_1:
reward = 1
elif distance_from_center <= marker_2:
reward = 0.5
elif distance_from_center <= marker_3:
reward = 0.1
else:
reward = 1e-3 # likely crashed / close to off track
# Steering penalty threshold, change the number based on your action space setting
ABS_STEERING_THRESHOLD = 15
# Penalize reward if the car is steering too much
if steering > ABS_STEERING_THRESHOLD:
reward *= 0.8
return float(reward)
================================================
FILE: defaults/template-run.env
================================================
DR_RUN_ID=0
DR_WORLD_NAME=reinvent_base
DR_RACE_TYPE=TIME_TRIAL
DR_CAR_NAME=FastCar
DR_CAR_BODY_SHELL_TYPE=deepracer
DR_CAR_COLOR=Red
DR_DISPLAY_NAME=$DR_CAR_NAME
DR_RACER_NAME=$DR_CAR_NAME
DR_ENABLE_DOMAIN_RANDOMIZATION=False
DR_EVAL_NUMBER_OF_TRIALS=3
DR_EVAL_IS_CONTINUOUS=True
DR_EVAL_MAX_RESETS=100
DR_EVAL_OFF_TRACK_PENALTY=5.0
DR_EVAL_COLLISION_PENALTY=5.0
DR_EVAL_SAVE_MP4=False
DR_EVAL_CHECKPOINT=last
DR_EVAL_OPP_S3_MODEL_PREFIX=rl-deepracer-sagemaker
DR_EVAL_OPP_CAR_BODY_SHELL_TYPE=deepracer
DR_EVAL_OPP_CAR_NAME=FasterCar
DR_EVAL_OPP_DISPLAY_NAME=$DR_EVAL_OPP_CAR_NAME
DR_EVAL_OPP_RACER_NAME=$DR_EVAL_OPP_CAR_NAME
DR_EVAL_DEBUG_REWARD=False
DR_EVAL_RESET_BEHIND_DIST=1.0
DR_EVAL_REVERSE_DIRECTION=False
#DR_EVAL_RTF=1.0
DR_TRAIN_CHANGE_START_POSITION=True
DR_TRAIN_REVERSE_DIRECTION=False
DR_TRAIN_ALTERNATE_DRIVING_DIRECTION=False
DR_TRAIN_START_POSITION_OFFSET=0.0
DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST=0.05
DR_TRAIN_MULTI_CONFIG=False
DR_TRAIN_MIN_EVAL_TRIALS=5
DR_TRAIN_BEST_MODEL_METRIC=progress
#DR_TRAIN_RTF=1.0
#DR_TRAIN_MAX_STEPS_PER_ITERATION=10000
DR_LOCAL_S3_MODEL_PREFIX=rl-deepracer-sagemaker
DR_LOCAL_S3_PRETRAINED=False
DR_LOCAL_S3_PRETRAINED_PREFIX=rl-sagemaker-pretrained
DR_LOCAL_S3_PRETRAINED_CHECKPOINT=last
DR_LOCAL_S3_CUSTOM_FILES_PREFIX=custom_files
DR_LOCAL_S3_TRAINING_PARAMS_FILE=training_params.yaml
DR_LOCAL_S3_EVAL_PARAMS_FILE=evaluation_params.yaml
DR_LOCAL_S3_MODEL_METADATA_KEY=$DR_LOCAL_S3_CUSTOM_FILES_PREFIX/model_metadata.json
DR_LOCAL_S3_HYPERPARAMETERS_KEY=$DR_LOCAL_S3_CUSTOM_FILES_PREFIX/hyperparameters.json
DR_LOCAL_S3_REWARD_KEY=$DR_LOCAL_S3_CUSTOM_FILES_PREFIX/reward_function.py
DR_LOCAL_S3_METRICS_PREFIX=$DR_LOCAL_S3_MODEL_PREFIX/metrics
DR_UPLOAD_S3_PREFIX=$DR_LOCAL_S3_MODEL_PREFIX-1
DR_OA_NUMBER_OF_OBSTACLES=6
DR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES=2.0
DR_OA_RANDOMIZE_OBSTACLE_LOCATIONS=False
DR_OA_IS_OBSTACLE_BOT_CAR=False
DR_OA_OBSTACLE_TYPE=box_obstacle
DR_OA_OBJECT_POSITIONS=
DR_H2B_IS_LANE_CHANGE=False
DR_H2B_LOWER_LANE_CHANGE_TIME=3.0
DR_H2B_UPPER_LANE_CHANGE_TIME=5.0
DR_H2B_LANE_CHANGE_DISTANCE=1.0
DR_H2B_NUMBER_OF_BOT_CARS=3
DR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS=2.0
DR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS=False
DR_H2B_BOT_CAR_SPEED=0.2
DR_H2B_BOT_CAR_PENALTY=5.0
================================================
FILE: defaults/template-system.env
================================================
DR_CLOUD=<CLOUD_REPLACE>
DR_AWS_APP_REGION=<REGION_REPLACE>
DR_UPLOAD_S3_PROFILE=default
DR_UPLOAD_S3_BUCKET=<AWS_DR_BUCKET>
DR_LOCAL_S3_BUCKET=bucket
DR_LOCAL_S3_PROFILE=<LOCAL_PROFILE>
DR_GUI_ENABLE=False
DR_KINESIS_STREAM_NAME=
DR_CAMERA_MAIN_ENABLE=True
DR_CAMERA_SUB_ENABLE=False
DR_CAMERA_KVS_ENABLE=True
DR_ENABLE_EXTRA_KVS_OVERLAY=False
DR_SIMAPP_SOURCE=awsdeepracercommunity/deepracer-simapp
DR_SIMAPP_VERSION=<SIMAPP_VERSION_TAG>
DR_MINIO_IMAGE=latest
DR_ANALYSIS_IMAGE=cpu
DR_WORKERS=1
DR_ROBOMAKER_MOUNT_LOGS=False
# DR_ROBOMAKER_MOUNT_SIMAPP_DIR=
# DR_ROBOMAKER_MOUNT_SCRIPTS_DIR=${DR_DIR}/data/scripts
DR_CLOUD_WATCH_ENABLE=False
DR_CLOUD_WATCH_LOG_STREAM_PREFIX=
DR_DOCKER_STYLE=<DOCKER_STYLE>
DR_HOST_X=False
DR_WEBVIEWER_PORT=8100
DR_QUIET_ACTIVATE=False
# DR_DISPLAY=:99
# DR_REMOTE_MINIO_URL=http://mynas:9000
# DR_ROBOMAKER_CUDA_DEVICES=0
# DR_SAGEMAKER_CUDA_DEVICES=0
# DR_EXPERIMENT_NAME=
# DR_TELEGRAF_HOST=telegraf
# DR_TELEGRAF_PORT=8092
## DRoA Integration
# DR_DROA_URL=https://xxxx.cloudfront.net
# DR_DROA_USERNAME=user@example.com
================================================
FILE: defaults/template-worker.env
================================================
DR_WORLD_NAME=reInvent2019_track
DR_RACE_TYPE=TIME_TRIAL
DR_CAR_COLOR=Blue
DR_ENABLE_DOMAIN_RANDOMIZATION=False
DR_TRAIN_CHANGE_START_POSITION=True
DR_TRAIN_ALTERNATE_DRIVING_DIRECTION=False
DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST=0.05
DR_TRAIN_START_POSITION_OFFSET=0.0
DR_OA_NUMBER_OF_OBSTACLES=6
DR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES=2.0
DR_OA_RANDOMIZE_OBSTACLE_LOCATIONS=False
DR_OA_IS_OBSTACLE_BOT_CAR=False
DR_OA_OBSTACLE_TYPE=box_obstacle
DR_OA_OBJECT_POSITIONS=
DR_H2B_IS_LANE_CHANGE=False
DR_H2B_LOWER_LANE_CHANGE_TIME=3.0
DR_H2B_UPPER_LANE_CHANGE_TIME=5.0
DR_H2B_LANE_CHANGE_DISTANCE=1.0
DR_H2B_NUMBER_OF_BOT_CARS=3
DR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS=2.0
DR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS=False
DR_H2B_BOT_CAR_SPEED=0.2
================================================
FILE: docker/docker-compose-aws.yml
================================================
version: '3.7'
services:
rl_coach:
environment:
- AWS_METADATA_SERVICE_TIMEOUT=3
- AWS_METADATA_SERVICE_NUM_ATTEMPTS=5
robomaker:
environment:
- AWS_METADATA_SERVICE_TIMEOUT=3
- AWS_METADATA_SERVICE_NUM_ATTEMPTS=5
================================================
FILE: docker/docker-compose-cwlog.yml
================================================
version: '3.7'
services:
rl_coach:
logging:
driver: awslogs
options:
awslogs-group: '/deepracer-for-cloud'
awslogs-create-group: 'true'
awslogs-region: ${DR_AWS_APP_REGION}
tag: "${DR_CLOUD_WATCH_LOG_STREAM_PREFIX}{{.Name}}"
robomaker:
logging:
driver: awslogs
options:
awslogs-group: '/deepracer-for-cloud'
awslogs-create-group: 'true'
awslogs-region: ${DR_AWS_APP_REGION}
tag: "${DR_CLOUD_WATCH_LOG_STREAM_PREFIX}{{.Name}}"
================================================
FILE: docker/docker-compose-endpoint.yml
================================================
version: '3.7'
services:
rl_coach:
environment:
- S3_ENDPOINT_URL=${DR_MINIO_URL}
robomaker:
environment:
- S3_ENDPOINT_URL=${DR_MINIO_URL}
================================================
FILE: docker/docker-compose-eval-swarm.yml
================================================
version: '3.7'
services:
rl_coach:
deploy:
restart_policy:
condition: none
placement:
constraints: [node.labels.Sagemaker == true ]
robomaker:
deploy:
restart_policy:
condition: none
replicas: 1
placement:
constraints: [node.labels.Robomaker == true ]
environment:
- DOCKER_REPLICA_SLOT={{.Task.Slot}}
================================================
FILE: docker/docker-compose-eval.yml
================================================
version: '3.7'
networks:
default:
external: true
name: sagemaker-local
services:
rl_coach:
image: ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}
command: ["/bin/bash", "-c", "echo No work for coach in Evaluation Mode"]
robomaker:
image: ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}
command: ["${ROBOMAKER_COMMAND:-}"]
ports:
- "${DR_ROBOMAKER_EVAL_PORT}:8080"
environment:
- CUDA_VISIBLE_DEVICES=${DR_ROBOMAKER_CUDA_DEVICES:-}
- DEBUG_REWARD=${DR_EVAL_DEBUG_REWARD}
- WORLD_NAME=${DR_WORLD_NAME}
- MODEL_S3_PREFIX=${DR_LOCAL_S3_MODEL_PREFIX}
- MODEL_S3_BUCKET=${DR_LOCAL_S3_BUCKET}
- APP_REGION=${DR_AWS_APP_REGION}
- S3_YAML_NAME=${DR_CURRENT_PARAMS_FILE}
- KINESIS_VIDEO_STREAM_NAME=${DR_KINESIS_STREAM_NAME}
- ENABLE_KINESIS=${DR_CAMERA_KVS_ENABLE}
- ENABLE_GUI=${DR_GUI_ENABLE}
- ROLLOUT_IDX=0
- RTF_OVERRIDE=${DR_EVAL_RTF:-}
- ROS_MASTER_URI=http://localhost:11311/
- ROS_IP=127.0.0.1
- GAZEBO_ARGS=${DR_GAZEBO_ARGS:-}
- GAZEBO_RENDER_ENGINE=${DR_GAZEBO_RENDER_ENGINE:-ogre2}
- TELEGRAF_HOST=${DR_TELEGRAF_HOST:-}
- TELEGRAF_PORT=${DR_TELEGRAF_PORT:-}
init: true
================================================
FILE: docker/docker-compose-keys.yml
================================================
version: '3.7'
services:
rl_coach:
environment:
- AWS_ACCESS_KEY_ID=${DR_LOCAL_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${DR_LOCAL_SECRET_ACCESS_KEY}
robomaker:
environment:
- AWS_ACCESS_KEY_ID=${DR_LOCAL_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${DR_LOCAL_SECRET_ACCESS_KEY}
================================================
FILE: docker/docker-compose-local-xorg-wsl.yml
================================================
version: '3.7'
services:
robomaker:
environment:
- DISPLAY
- USE_EXTERNAL_X=${DR_HOST_X}
- QT_X11_NO_MITSHM=1
- LD_LIBRARY_PATH=/usr/lib/wsl/lib
volumes:
- '/tmp/.X11-unix/:/tmp/.X11-unix'
- '/mnt/wslg:/mnt/wslg'
- '/usr/lib/wsl:/usr/lib/wsl'
devices:
- /dev/dxg
================================================
FILE: docker/docker-compose-local-xorg.yml
================================================
version: '3.7'
services:
robomaker:
environment:
- DISPLAY
- USE_EXTERNAL_X=${DR_HOST_X}
- XAUTHORITY=/root/.Xauthority
- QT_X11_NO_MITSHM=1
- NVIDIA_DRIVER_CAPABILITIES=all
volumes:
- '/tmp/.X11-unix/:/tmp/.X11-unix'
- '${XAUTHORITY}:/root/.Xauthority'
================================================
FILE: docker/docker-compose-local.yml
================================================
version: '3.7'
networks:
default:
external: true
name: sagemaker-local
services:
minio:
image: minio/minio:${DR_MINIO_IMAGE}
ports:
- "9000:9000"
- "9001:9001"
command: server /data --console-address ":9001"
environment:
- MINIO_ROOT_USER=${DR_LOCAL_ACCESS_KEY_ID}
- MINIO_ROOT_PASSWORD=${DR_LOCAL_SECRET_ACCESS_KEY}
- MINIO_UID
- MINIO_GID
- MINIO_USERNAME
- MINIO_GROUPNAME
volumes:
- ${DR_DIR}/data/minio:/data
================================================
FILE: docker/docker-compose-metrics.yml
================================================
version: '3.7'
networks:
default:
external: true
name: sagemaker-local
services:
telegraf:
image: telegraf:1.18-alpine
volumes:
- ./metrics/telegraf/etc/telegraf.conf:/etc/telegraf/telegraf.conf:ro
depends_on:
- influxdb
links:
- influxdb
ports:
- '127.0.0.1:8125:8125/udp'
- '127.0.0.1:8092:8092/udp'
influxdb:
image: influxdb:1.8-alpine
env_file: ./metrics/configuration.env
ports:
- '127.0.0.1:8886:8086'
volumes:
- influxdb_data:/var/lib/influxdb
grafana:
image: grafana/grafana:10.4.2
depends_on:
- influxdb
env_file: ./metrics/configuration.env
links:
- influxdb
ports:
- '3000:3000'
volumes:
- grafana_data:/var/lib/grafana
- ./metrics/grafana/provisioning/:/etc/grafana/provisioning/
volumes:
grafana_data: {}
influxdb_data: {}
================================================
FILE: docker/docker-compose-mount.yml
================================================
version: '3.7'
services:
robomaker:
volumes:
- "${DR_MOUNT_DIR}:/root/.ros/log"
================================================
FILE: docker/docker-compose-robomaker-multi.yml
================================================
version: '3.7'
services:
robomaker:
volumes:
- "${DR_DIR}/tmp/comms.${DR_RUN_ID}:/mnt/comms"
================================================
FILE: docker/docker-compose-robomaker-scripts.yml
================================================
version: '3.7'
services:
robomaker:
volumes:
- '${DR_ROBOMAKER_MOUNT_SCRIPTS_DIR}:/scripts'
================================================
FILE: docker/docker-compose-simapp.yml
================================================
version: '3.7'
services:
robomaker:
volumes:
- '${DR_ROBOMAKER_MOUNT_SIMAPP_DIR}:/opt/simapp'
================================================
FILE: docker/docker-compose-training-swarm.yml
================================================
version: '3.7'
services:
rl_coach:
deploy:
restart_policy:
condition: none
placement:
constraints: [node.labels.Sagemaker == true ]
robomaker:
deploy:
restart_policy:
condition: none
replicas: ${DR_WORKERS}
placement:
constraints: [node.labels.Robomaker == true ]
environment:
- DOCKER_REPLICA_SLOT={{.Task.Slot}}
================================================
FILE: docker/docker-compose-training.yml
================================================
version: "3.7"
networks:
default:
external: true
name: sagemaker-local
services:
rl_coach:
image: ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}
command: ["source /root/sagemaker-venv/bin/activate && python3 /opt/ml/code/rl_coach/start.py"]
working_dir: "/opt/ml/code/"
environment:
- RUN_ID=${DR_RUN_ID}
- AWS_REGION=${DR_AWS_APP_REGION}
- SAGEMAKER_IMAGE=${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}
- PRETRAINED=${DR_LOCAL_S3_PRETRAINED}
- PRETRAINED_S3_PREFIX=${DR_LOCAL_S3_PRETRAINED_PREFIX}
- PRETRAINED_S3_BUCKET=${DR_LOCAL_S3_BUCKET}
- PRETRAINED_CHECKPOINT=${DR_LOCAL_S3_PRETRAINED_CHECKPOINT}
- MODEL_S3_PREFIX=${DR_LOCAL_S3_MODEL_PREFIX}
- MODEL_S3_BUCKET=${DR_LOCAL_S3_BUCKET}
- HYPERPARAMETER_FILE_S3_KEY=${DR_LOCAL_S3_HYPERPARAMETERS_KEY}
- MODELMETADATA_FILE_S3_KEY=${DR_LOCAL_S3_MODEL_METADATA_KEY}
- CUDA_VISIBLE_DEVICES=${DR_SAGEMAKER_CUDA_DEVICES:-}
- MAX_MEMORY_STEPS=${DR_TRAIN_MAX_STEPS_PER_ITERATION:-}
- TELEGRAF_HOST=${DR_TELEGRAF_HOST:-}
- TELEGRAF_PORT=${DR_TELEGRAF_PORT:-}
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/tmp/sagemaker:/tmp/sagemaker"
robomaker:
image: ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}
command: ["${ROBOMAKER_COMMAND:-}"]
ports:
- "${DR_ROBOMAKER_TRAIN_PORT}:8080"
- "${DR_ROBOMAKER_GUI_PORT}:5900"
environment:
- WORLD_NAME=${DR_WORLD_NAME}
- SAGEMAKER_SHARED_S3_PREFIX=${DR_LOCAL_S3_MODEL_PREFIX}
- SAGEMAKER_SHARED_S3_BUCKET=${DR_LOCAL_S3_BUCKET}
- APP_REGION=${DR_AWS_APP_REGION}
- S3_YAML_NAME=${DR_CURRENT_PARAMS_FILE}
- KINESIS_VIDEO_STREAM_NAME=${DR_KINESIS_STREAM_NAME}
- ENABLE_KINESIS=${DR_CAMERA_KVS_ENABLE}
- ENABLE_GUI=${DR_GUI_ENABLE}
- CUDA_VISIBLE_DEVICES=${DR_ROBOMAKER_CUDA_DEVICES:-}
- MULTI_CONFIG
- RTF_OVERRIDE=${DR_TRAIN_RTF:-}
- ROS_MASTER_URI=http://localhost:11311/
- ROS_IP=127.0.0.1
- GAZEBO_ARGS=${DR_GAZEBO_ARGS:-}
- GAZEBO_RENDER_ENGINE=${DR_GAZEBO_RENDER_ENGINE:-ogre2}
- TELEGRAF_HOST=${DR_TELEGRAF_HOST:-}
- TELEGRAF_PORT=${DR_TELEGRAF_PORT:-}
init: true
================================================
FILE: docker/docker-compose-webviewer-swarm.yml
================================================
version: '3.7'
networks:
default:
external: true
name: sagemaker-local
services:
proxy:
deploy:
restart_policy:
condition: none
replicas: 1
placement:
constraints: [node.labels.Sagemaker == true ]
================================================
FILE: docker/docker-compose-webviewer.yml
================================================
version: '3.7'
networks:
default:
external: true
name: sagemaker-local
services:
proxy:
image: nginx
ports:
- "${DR_WEBVIEWER_PORT}:80"
volumes:
- ${DR_VIEWER_HTML}:/usr/share/nginx/html/index.html
- ${DR_NGINX_CONF}:/etc/nginx/conf.d/default.conf
================================================
FILE: docker/metrics/configuration.env
================================================
# Grafana options
GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=admin
GF_INSTALL_PLUGINS=
# InfluxDB options
INFLUXDB_DB=influx
INFLUXDB_ADMIN_USER=admin
INFLUXDB_ADMIN_PASSWORD=admin
================================================
FILE: docker/metrics/grafana/provisioning/dashboards/dashboard.yml
================================================
apiVersion: 1
providers:
- name: 'Default'
folder: ''
options:
path: /etc/grafana/provisioning/dashboards
================================================
FILE: docker/metrics/grafana/provisioning/dashboards/deepracer-training-template.json
================================================
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "datasource",
"uid": "grafana"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"limit": 100,
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 2,
"id": 1,
"links": [],
"panels": [
{
"datasource": {},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 9,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "reward"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 11,
"w": 24,
"x": 0,
"y": 0
},
"id": 6,
"options": {
"legend": {
"calcs": [
"min",
"mean",
"max",
"lastNotNull"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true,
"sortBy": "Max",
"sortDesc": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"alias": "$tag_model training reward",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"reward"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "phase",
"operator": "=",
"value": "training"
}
]
},
{
"alias": "$tag_model complete lap reward",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "B",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"reward"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "status",
"operator": "=",
"value": "Lap complete"
}
]
},
{
"alias": "$tag_model eval reward",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "C",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"reward"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "phase",
"operator": "=",
"value": "evaluation"
}
]
}
],
"title": "Reward",
"type": "timeseries"
},
{
"datasource": {},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "points",
"fillOpacity": 3,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 4,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": ".*eval progress moving average"
},
"properties": [
{
"id": "custom.drawStyle",
"value": "line"
},
{
"id": "custom.showPoints",
"value": "never"
},
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byRegexp",
"options": ".*training progress moving average"
},
"properties": [
{
"id": "custom.drawStyle",
"value": "line"
},
{
"id": "custom.showPoints",
"value": "never"
},
{
"id": "color",
"value": {
"fixedColor": "blue",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 10,
"w": 24,
"x": 0,
"y": 11
},
"id": 4,
"options": {
"legend": {
"calcs": [
"min",
"mean",
"max"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"alias": "$tag_model training progress",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"progress"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "phase",
"operator": "=",
"value": "training"
}
]
},
{
"alias": "$tag_model eval progress",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "B",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"progress"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "phase",
"operator": "=",
"value": "evaluation"
}
]
},
{
"alias": "$tag_model eval progress moving average",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "C",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"progress"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
30
],
"type": "moving_average"
}
]
],
"tags": [
{
"key": "phase",
"operator": "=",
"value": "evaluation"
}
]
},
{
"alias": "$tag_model training progress moving average",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "D",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"progress"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
30
],
"type": "moving_average"
}
]
],
"tags": [
{
"key": "phase",
"operator": "=",
"value": "training"
}
]
}
],
"title": "Progress",
"type": "timeseries"
},
{
"datasource": {},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "points",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 4,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"decimals": 3,
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "ms"
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": ".*eval lap moving average"
},
"properties": [
{
"id": "custom.drawStyle",
"value": "line"
},
{
"id": "custom.showPoints",
"value": "never"
},
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
},
{
"id": "custom.lineWidth",
"value": 2
}
]
},
{
"matcher": {
"id": "byRegexp",
"options": ".*training lap moving average"
},
"properties": [
{
"id": "custom.drawStyle",
"value": "line"
},
{
"id": "custom.showPoints",
"value": "never"
},
{
"id": "color",
"value": {
"fixedColor": "blue",
"mode": "fixed"
}
},
{
"id": "custom.lineWidth",
"value": 2
}
]
}
]
},
"gridPos": {
"h": 10,
"w": 24,
"x": 0,
"y": 21
},
"id": 2,
"options": {
"legend": {
"calcs": [
"min",
"mean",
"max",
"lastNotNull"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"alias": "$tag_model training lap ",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"elapsed_time"
],
"type": "field"
},
{
"params": [],
"type": "min"
}
]
],
"tags": [
{
"key": "status",
"operator": "=",
"value": "Lap complete"
},
{
"condition": "AND",
"key": "phase",
"operator": "=",
"value": "training"
}
]
},
{
"alias": "$tag_model eval lap ",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "B",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"elapsed_time"
],
"type": "field"
},
{
"params": [],
"type": "min"
}
]
],
"tags": [
{
"key": "status",
"operator": "=",
"value": "Lap complete"
},
{
"condition": "AND",
"key": "phase",
"operator": "=",
"value": "evaluation"
}
]
},
{
"alias": "$tag_model eval lap moving average",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "C",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"elapsed_time"
],
"type": "field"
},
{
"params": [],
"type": "min"
},
{
"params": [
30
],
"type": "moving_average"
}
]
],
"tags": [
{
"key": "status",
"operator": "=",
"value": "Lap complete"
},
{
"condition": "AND",
"key": "phase",
"operator": "=",
"value": "evaluation"
}
]
},
{
"alias": "$tag_model training lap moving average",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "dr_training_episodes",
"orderByTime": "ASC",
"policy": "default",
"refId": "D",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"elapsed_time"
],
"type": "field"
},
{
"params": [],
"type": "min"
},
{
"params": [
30
],
"type": "moving_average"
}
]
],
"tags": [
{
"key": "status",
"operator": "=",
"value": "Lap complete"
},
{
"condition": "AND",
"key": "phase",
"operator": "=",
"value": "training"
}
]
}
],
"title": "Training Complete Lap times",
"type": "timeseries"
},
{
"datasource": {},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "points",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": ".*entropy moving average"
},
"properties": [
{
"id": "custom.drawStyle",
"value": "line"
},
{
"id": "custom.showPoints",
"value": "never"
},
{
"id": "color",
"value": {
"fixedColor": "blue",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 11,
"w": 24,
"x": 0,
"y": 31
},
"id": 7,
"options": {
"legend": {
"calcs": [
"min",
"mean",
"max",
"lastNotNull"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"alias": "$tag_model entropy",
"datasource": {},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"measurement": "dr_sagemaker_epochs",
"orderByTime": "ASC",
"policy": "default",
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"entropy"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": []
},
{
"alias": "$tag_model entropy moving average",
"datasource": {},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"model"
],
"type": "tag"
}
],
"hide": false,
"measurement": "dr_sagemaker_epochs",
"orderByTime": "ASC",
"policy": "default",
"refId": "B",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"entropy"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
10
],
"type": "moving_average"
}
]
],
"tags": []
}
],
"title": "Epoch",
"type": "timeseries"
}
],
"refresh": "10s",
"schemaVersion": 39,
"tags": [],
"templating": {
"list": []
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "DeepRacer Training template",
"uid": "adke0lwv5zwg0e",
"version": 1,
"weekStart": ""
}
================================================
FILE: docker/metrics/grafana/provisioning/datasources/influxdb.yml
================================================
# config file version
apiVersion: 1
# list of datasources that should be deleted from the database
deleteDatasources:
- name: Influxdb
orgId: 1
# list of datasources to insert/update depending on
# what's available in the database
datasources:
# <string, required> name of the datasource. Required
- name: InfluxDB
# <string, required> datasource type. Required
type: influxdb
# <string, required> access mode. direct or proxy. Required
access: proxy
# <int> org id. will default to orgId 1 if not specified
orgId: 1
# <string> url
url: http://influxdb:8086
# <string> database password, if used
password: "admin"
# <string> database user, if used
user: "admin"
# <string> database name, if used
database: "influx"
# <bool> enable/disable basic auth
basicAuth: false
# withCredentials:
# <bool> mark as default datasource. Max one per org
isDefault: true
# <map> fields that will be converted to json and stored in json_data
jsonData:
timeInterval: "5s"
# graphiteVersion: "1.1"
# tlsAuth: false
# tlsAuthWithCACert: false
# # <string> json object of data that will be encrypted.
# secureJsonData:
# tlsCACert: "..."
# tlsClientCert: "..."
# tlsClientKey: "..."
version: 1
# <bool> allow users to edit datasources from the UI.
editable: false
================================================
FILE: docker/metrics/telegraf/etc/telegraf.conf
================================================
# Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.
# Plugins must be declared in here to be active.
# To deactivate a plugin, comment out the name and any variables.
# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
# file would generate.
# Global tags can be specified here in key="value" format.
[global_tags]
# dc = "us-east-1" # will tag all metrics with dc=us-east-1
# rack = "1a"
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
interval = "5s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
## Telegraf will cache metric_buffer_limit metrics for each output, and will
## flush this buffer on a successful write.
metric_buffer_limit = 10000
## Flush the buffer whenever full, regardless of flush_interval.
flush_buffer_when_full = true
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Default flushing interval for all outputs. You shouldn't set this below
## interval. Maximum flush_interval will be flush_interval + flush_jitter
flush_interval = "1s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## Run telegraf in debug mode
debug = false
## Run telegraf in quiet mode
quiet = false
## Override default hostname, if empty use os.Hostname()
hostname = ""
###############################################################################
# OUTPUTS #
###############################################################################
# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
# The full HTTP or UDP endpoint URL for your InfluxDB instance.
# Multiple urls can be specified but it is assumed that they are part of the same
# cluster, this means that only ONE of the urls will be written to each interval.
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://influxdb:8086"] # required
# The target database for metrics (telegraf will create it if not exists)
database = "influx" # required
# Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
# note: using second precision greatly helps InfluxDB compression
precision = "s"
## Write timeout (for the InfluxDB client), formatted as a string.
## If not provided, will default to 5s. 0s means no timeout (not recommended).
timeout = "5s"
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
# Set the user agent for HTTP POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
# Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
# udp_payload = 512
###############################################################################
# INPUTS #
###############################################################################
# Statsd Server
[[inputs.statsd]]
## Protocol, must be "tcp", "udp4", "udp6" or "udp" (default=udp)
protocol = "udp"
## MaxTCPConnection - applicable when protocol is set to tcp (default=250)
max_tcp_connections = 250
## Enable TCP keep alive probes (default=false)
tcp_keep_alive = false
## Specifies the keep-alive period for an active network connection.
## Only applies to TCP sockets and will be ignored if tcp_keep_alive is false.
## Defaults to the OS configuration.
# tcp_keep_alive_period = "2h"
## Address and port to host UDP listener on
service_address = ":8125"
## The following configuration options control when telegraf clears its cache
## of previous values. If set to false, then telegraf will only clear its
## cache when the daemon is restarted.
## Reset gauges every interval (default=true)
delete_gauges = true
## Reset counters every interval (default=true)
delete_counters = true
## Reset sets every interval (default=true)
delete_sets = true
## Reset timings & histograms every interval (default=true)
delete_timings = true
## Percentiles to calculate for timing & histogram stats
percentiles = [90]
## separator to use between elements of a statsd metric
metric_separator = "_"
## Parses tags in the datadog statsd format
## http://docs.datadoghq.com/guides/dogstatsd/
parse_data_dog_tags = false
## Statsd data translation templates, more info can be read here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#graphite
# templates = [
# "cpu.* measurement*"
# ]
## Number of UDP messages allowed to queue up, once filled,
## the statsd server will start dropping packets
allowed_pending_messages = 10000
## Number of timing/histogram values to track per-measurement in the
## calculation of percentiles. Raising this limit increases the accuracy
## of percentiles but also increases the memory usage and cpu time.
percentile_limit = 1000
## Maximum socket buffer size in bytes, once the buffer fills up, metrics
## will start dropping. Defaults to the OS default.
# read_buffer_size = 65535
# Read metrics about cpu usage
[[inputs.cpu]]
## Whether to report per-cpu stats or not
percpu = true
## Whether to report total system cpu stats or not
totalcpu = true
## Comment this line if you want the raw CPU time metrics
fielddrop = ["time_*"]
# Read metrics about disk usage by mount point
[[inputs.disk]]
## By default, telegraf gather stats for all mountpoints.
## Setting mountpoints will restrict the stats to the specified mountpoints.
# mount_points = ["/"]
## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
## present on /run, /var/run, /dev/shm or /dev).
ignore_fs = ["tmpfs", "devtmpfs"]
# Read metrics about disk IO by device
[[inputs.diskio]]
## By default, telegraf will gather stats for all devices including
## disk partitions.
## Setting devices will restrict the stats to the specified devices.
# devices = ["sda", "sdb"]
## Uncomment the following line if you need disk serial numbers.
# skip_serial_number = false
# Get kernel statistics from /proc/stat
[[inputs.kernel]]
# no configuration
# Read metrics about memory usage
[[inputs.mem]]
# no configuration
# Get the number of processes and group them by status
[[inputs.processes]]
# no configuration
# Read metrics about swap memory usage
[[inputs.swap]]
# no configuration
# Read metrics about system load & uptime
[[inputs.system]]
# no configuration
# Read metrics about network interface usage
[[inputs.net]]
# collect data only about specific interfaces
# interfaces = ["eth0"]
[[inputs.netstat]]
# no configuration
[[inputs.interrupts]]
# no configuration
[[inputs.linux_sysctl_fs]]
# no configuration
[[inputs.socket_listener]]
service_address = "udp://:8092"
================================================
FILE: docs/_config.yml
================================================
---
theme: jekyll-theme-slate
markdown: GFM
name: Deepracer-for-Cloud
plugins:
- jekyll-relative-links
relative_links:
enabled: true
collections: false
================================================
FILE: docs/docker.md
================================================
# About the Docker setup
DRfC supports running Docker in two modes, `swarm` and `compose` - this behaviour is configured in `system.env` through `DR_DOCKER_STYLE`.
## Swarm Mode
Docker Swarm mode is the default. Docker Swarm makes it possible to connect multiple hosts together to spread the load -- esp. useful if one wants to run multiple Robomaker workers, but can also be useful locally if one has two computers that each are not powerful enough to run DeepRacer.
In Swarm mode DRfC creates Stacks, using `docker stack`. During operations one can check running stacks through `docker stack ls`, and the services within a stack through `docker stack services <name>`.
DRfC is installed only on the manager (the first installed host). Swarm workers are 'dumb' and do not need to have DRfC installed.
### Key features
* Allows user to connect multiple computers on the same network. (In AWS the instances must be connected on same VPC, and instances must be allowed to communicate.)
* Supports [multiple Robomaker workers](multi_worker.md)
* Supports [running multiple parallel experiments](multi_run.md)
### Limitations
* The Sagemaker container can only be run on the manager.
* Docker images are downloaded from Docker Hub. Locally built images are allowed only if they have a unique tag that does not exist in Docker Hub. If you have multiple Docker nodes, ensure that they all have the image available.
### Connecting Workers
* On the manager run `docker swarm join-token manager`.
* On the worker run the command that was displayed on the manager `docker swarm join --token <token> <ip>:<port>`.
### Ports
Docker Swarm will automatically put a load-balancer in front of all replicas in a service. This means that the ROS Web View, which provides a video stream of the DeepRacer during training, will be load balanced - sharing one port (`8080`). If you have multiple workers (even across multiple hosts) then press F5 to cycle through them.
## Compose Mode
In Compose mode DRfC creates projects, using `docker compose`. During operations one can list running projects through `docker compose ls`, and the containers within a project through `docker compose ps`.
### Key features
* Supports [multiple Robomaker workers](multi_worker.md)
* Supports [running multiple parallel experiments](multi_run.md)
* Supports [GPU Accelerated OpenGL for Robomaker](opengl.md)
### Limitations
* Workload cannot be spread across multiple hosts.
### Ports
When using Docker Compose the different Robomaker workers require unique ports for the ROS Web View and VNC. Docker assigns these dynamically. Use `docker ps` to see which container has been assigned which ports.
================================================
FILE: docs/droa.md
================================================
# DeepRacer on AWS (DRoA) Integration
[DeepRacer on AWS](https://aws.amazon.com/solutions/implementations/deepracer-on-aws/) is the community-hosted replacement for the original AWS DeepRacer console. DRfC includes a set of `droa-*` commands that let you manage models in your DRoA installation directly from the command line.
## Prerequisites
### Install DRoA
Follow the [DeepRacer on AWS installation guide](https://github.com/aws-deepracer-community/deepracer-on-aws) to deploy DRoA into your own AWS account.
### Configure DRfC
In `system.env` set:
```bash
DR_DROA_URL=https://<your-droa-domain> # e.g. https://deepracer.aws.example.com
DR_DROA_USERNAME=<your-droa-email>
```
`DR_DROA_URL` is the base URL of your DRoA deployment. At runtime, DRfC fetches `<DR_DROA_URL>/env.js` to discover the region, Cognito pools, API endpoint, and upload bucket automatically — no additional AWS config required.
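As a rough sketch of this discovery step, the snippet below fetches and parses `env.js`. It assumes the file assigns a single JSON object literal (e.g. `window.env = {...};`); the actual format of the deployed file may differ.

```python
import json
import re
import urllib.request

def parse_env_js(body):
    """Extract the first {...} object literal from a JS config file
    and parse it as JSON. Assumes the object uses JSON syntax."""
    match = re.search(r"\{.*\}", body, re.DOTALL)
    if not match:
        raise ValueError("no config object found in env.js")
    return json.loads(match.group(0))

def discover_droa_config(base_url):
    """Fetch <base_url>/env.js and return the deployment settings."""
    with urllib.request.urlopen(f"{base_url}/env.js") as resp:
        return parse_env_js(resp.read().decode("utf-8"))
```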
> **Security**: never store your DRoA password in `system.env`. All commands prompt for it interactively (or accept `--password` on the CLI). Credentials are cached in `~/.droa-cache/` for the duration of the session token.
### Python environment
Run `bin/prepare.sh` to create the `.venv` virtual environment and install the required Python packages (`boto3`, `pyyaml`, `requests`, `deepracer-utils`). After `source bin/activate.sh` the venv is active and all `droa-*` commands are available.
---
## Commands
### `droa-list-models`
List all models in your DRoA installation, sorted newest-first.
```
droa-list-models [--json]
```
Output columns: `modelId`, `name`, `status`, `trainingStatus`, `createdAt`.
| Status | Meaning |
|--------|---------|
| `IMPORTING` | Import in progress |
| `READY` | Available for evaluation |
| `TRAINING` | Training job running |
| `ERROR` | Import or training failed |
| `DELETING` | Deletion in progress |
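Since `--json` produces machine-readable output, the listing can be filtered by status in a few lines. The field names below follow the documented columns, but the exact JSON shape is an assumption:

```python
import json

# Hypothetical `droa-list-models --json` output (shape assumed).
listing = '''
[
  {"modelId": "m-1", "name": "speedy", "status": "READY"},
  {"modelId": "m-2", "name": "wobbly", "status": "ERROR"},
  {"modelId": "m-3", "name": "fresh",  "status": "IMPORTING"}
]
'''

models = json.loads(listing)
# Only READY models can be evaluated; READY and ERROR can be deleted.
evaluable = [m["name"] for m in models if m["status"] == "READY"]
deletable = [m["modelId"] for m in models if m["status"] in ("READY", "ERROR")]
```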
---
### `droa-get-model`
Show details of a single model.
```
droa-get-model <modelId> [--verbose] [--summary] [--json]
```
| Flag | Description |
|------|-------------|
| *(none)* | Identity, car config, training config, metadata |
| `--verbose` | Adds action space and reward function source |
| `--summary` | Adds mean training metrics (reward, progress) via DeepRacer Utils |
| `--json` | Raw JSON output |
---
### `droa-download-logs`
Download training or evaluation logs for a model.
```
droa-download-logs <modelId> [--asset-type TRAINING_LOGS|EVALUATION_LOGS|PHYSICAL_CAR_MODEL|VIRTUAL_MODEL|VIDEOS]
[--evaluation-id <id>]
[--output <file>]
[--summary]
```
| Flag | Description |
|------|-------------|
| `--asset-type` | Asset type (default: `TRAINING_LOGS`) |
| `--evaluation-id` | Required when `--asset-type EVALUATION_LOGS` |
| `--output` / `-o` | Output file path (default: derived from the presigned URL filename) |
| `--summary` | Print DeepRacer Utils stability summary after download (TRAINING_LOGS only) |
The command polls until the asset is ready (up to 5 minutes for `VIRTUAL_MODEL`).
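The polling behaviour can be sketched as a generic wait loop. `check_ready` stands in for the DRoA asset-status call (a hypothetical placeholder), and the 300-second default mirrors the 5-minute `VIRTUAL_MODEL` limit:

```python
import time

def wait_for_asset(check_ready, timeout=300, interval=5,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll check_ready() until it returns a truthy value (e.g. the
    presigned download URL) or `timeout` seconds elapse."""
    deadline = clock() + timeout
    while clock() < deadline:
        result = check_ready()
        if result:
            return result
        sleep(interval)
    raise TimeoutError(f"asset not ready within {timeout}s")
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting.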
---
### `droa-delete-model`
Delete a model. Only models with status `READY` or `ERROR` can be deleted.
```
droa-delete-model <modelId> [-y/--yes]
```
Without `--yes`, you are shown the model name and status and must type the model name to confirm. Deletion is asynchronous — the model transitions to `DELETING` status.
---
### `droa-import-model`
Import a locally trained DRFC model into DRoA.
```
droa-import-model (--model-prefix <prefix> | --model-dir <dir>)
[--model-name <name>]
[--model-description <text>]
[--best | --checkpoint <step>]
```
#### Source options
| Option | Description |
|--------|-------------|
| `--model-prefix` | Pull directly from local MinIO S3 (`DR_LOCAL_S3_BUCKET`). Defaults `--model-name` to the prefix. |
| `--model-dir` | Use a pre-assembled local directory containing all required model files. |
#### Checkpoint selection (`--model-prefix` only)
| Flag | Behaviour |
|------|-----------|
| *(none)* | Use the last checkpoint |
| `--best` | Use the best checkpoint |
| `--checkpoint STEP` | Use the checkpoint at the given training step |
#### What happens
1. Model files are pulled from MinIO (path-style S3, using `DR_MINIO_URL` and `DR_LOCAL_S3_PROFILE`).
2. `training_params.yaml` is copied from the bucket (`training_params_1.yaml` preferred for multi-worker runs). If missing, it is generated from `DR_*` environment variables.
3. `WORLD_NAME` direction suffixes (`_cw`, `_ccw`) are stripped and `TRACK_DIRECTION_CLOCKWISE` is added — required by DRoA's track validation.
4. Files are uploaded to the DRoA S3 transit bucket and the import API is called.
#### Required files (when using `--model-dir`)
- `model_metadata.json`
- `reward_function.py`
- `training_params.yaml`
- `hyperparameters.json`
---
## Environment variables reference
| Variable | Location | Description |
|----------|----------|-------------|
| `DR_DROA_URL` | `system.env` | Base URL of your DRoA deployment |
| `DR_DROA_USERNAME` | `system.env` | DRoA login email |
| `DR_MINIO_URL` | `system.env` | MinIO endpoint URL (e.g. `http://minio:9000`) |
| `DR_LOCAL_S3_PROFILE` | `system.env` | boto3 AWS profile name for MinIO access |
| `DR_LOCAL_S3_BUCKET` | `run.env` | Local S3 bucket name |
| `DR_LOCAL_S3_MODEL_PREFIX` | `run.env` | Default model prefix for `--model-prefix` |
All `droa-*` commands also accept `--url`, `--username`, and `--password` flags to override the environment variables.
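Putting the variables together, a minimal configuration sketch might look like the fragment below. All values are placeholders — use the URL, bucket and prefix of your own deployment:

```shell
# system.env (placeholder values)
DR_DROA_URL=https://droa.example.com
DR_DROA_USERNAME=racer@example.com
DR_MINIO_URL=http://minio:9000
DR_LOCAL_S3_PROFILE=minio

# run.env (placeholder values)
DR_LOCAL_S3_BUCKET=bucket
DR_LOCAL_S3_MODEL_PREFIX=my-model
```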
================================================
FILE: docs/head-to-head.md
================================================
# Head-to-Head Race (Beta)
It is possible to run a head-to-head race between two models, similar to the bracket races
run by AWS in the Virtual Circuit to determine the winner of the head-to-bot races.
This replaces the "Tournament Mode".
## Introduction
The concept is that you have two models racing each other, one Purple and one Orange car. One car
is powered by your primary configured model, and the second car is powered by the model in `DR_EVAL_OPP_S3_MODEL_PREFIX`.
## Configuration
### run.env
Configure `run.env` with the following parameters:
* `DR_RACE_TYPE` should be `HEAD_TO_MODEL`.
* `DR_EVAL_OPP_S3_MODEL_PREFIX` will be the S3 prefix for the secondary model.
* `DR_EVAL_OPP_CAR_NAME` is the display name of this model.
Metrics, traces and videos will be stored in each model's prefix.
## Run
Run the race with `dr-start-evaluation`; one race will be run.
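Combining the three parameters above, a `run.env` fragment for a head-to-head race could look like this (the prefix and car name are just example values):

```shell
# run.env (example values)
DR_RACE_TYPE=HEAD_TO_MODEL
DR_EVAL_OPP_S3_MODEL_PREFIX=rl-models/opponent-v3
DR_EVAL_OPP_CAR_NAME=OrangeBot
```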
================================================
FILE: docs/index.md
================================================
# Introduction
Provides a quick and easy way to get up and running with a DeepRacer training environment in AWS or Azure, using either the Azure [N-Series Virtual Machines](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu) or [AWS EC2 Accelerated Computing instances](https://aws.amazon.com/ec2/instance-types/?nc1=h_ls#Accelerated_Computing), or locally on your own desktop or server.
DeepRacer-For-Cloud (DRfC) started as an extension of the work done by Alex (https://github.com/alexschultz/deepracer-for-dummies), which is in turn a wrapper around the amazing work done by Chris (https://github.com/crr0004/deepracer). With the introduction of the second-generation DeepRacer console, the repository has been split up. This repository contains the scripts needed to *run* the training, but depends on Docker Hub to provide pre-built docker images. All the under-the-hood building capabilities have been moved to my [Deepracer Build](https://github.com/aws-deepracer-community/deepracer) repository.
# Main Features
DRfC supports a wide set of features to ensure that you can focus on creating the best model:
* User-friendly
* Based on the continuously updated community [Robomaker](https://github.com/aws-deepracer-community/deepracer-simapp) and [Sagemaker](https://github.com/aws-deepracer-community/deepracer-sagemaker-container) containers, supporting a wide range of CPU and GPU setups.
* Wide set of scripts (`dr-*`) enables effortless training.
* Detection of your AWS DeepRacer Console models; allows upload of a locally trained model to any of them.
* Modes
* Time Trial
* Object Avoidance
* Head-to-Bot
* Training
* Multiple Robomaker instances per Sagemaker (N:1) to improve training progress.
* Multiple training sessions in parallel - each being (N:1) if hardware supports it - to test out things in parallel.
* Connect multiple nodes together (Swarm-mode only) to combine the powers of multiple computers/instances.
* Evaluation
* Evaluate independently from training.
* Save evaluation run to MP4 file in S3.
* Logging
* Training metrics and trace files are stored to S3.
* Optional integration with AWS CloudWatch.
* Optional exposure of Robomaker internal log-files.
* Technology
* Supports both Docker Swarm (used for connecting multiple nodes together) and Docker Compose (used to support OpenGL)
# Documentation
* [Initial Installation](installation.md)
* [DeepRacer on AWS (DRoA) Integration](droa.md)
* [Reference](reference.md)
* [Using multiple Robomaker workers](multi_worker.md)
* [Managing experiments and running multiple parallel experiments](multi_run.md)
* [GPU Accelerated OpenGL for Robomaker](opengl.md)
* [Having multiple GPUs in one Computer](multi_gpu.md)
* [Installing on Windows](windows.md)
* [Run a Head-to-Head Race](head-to-head.md)
* [Watching the car](video.md)
# Support
* For general support it is suggested to join the [AWS DeepRacer Community](https://deepracing.io/). The community Slack has a channel #dr-training-local where the community provides active support.
* Create a GitHub issue if you find an actual code issue, or where updates to documentation would be required.
================================================
FILE: docs/installation.md
================================================
# Installing Deepracer-for-Cloud
## Requirements
Depending on your needs as well as specific needs of the cloud platform you can configure your VM to your liking. Both CPU-only as well as GPU systems are supported.
**AWS**:
* EC2 instance of type G3, G4, P2 or P3 - recommendation is g4dn.2xlarge - for GPU enabled training. C5 or M6 types - recommendation is c5.2xlarge - for CPU training.
* Ubuntu 20.04
* Minimum 30 GB, preferred 40 GB of OS disk.
* Ephemeral Drive connected
* Minimum of 8 GB GPU-RAM if running with GPU.
* Recommended at least 6 VCPUs
* S3 bucket. Preferably in the same region as the EC2 instance.
* The internal `sagemaker-local` docker network runs by default on `192.168.2.0/24`. Ensure that your AWS VPC does not overlap with this subnet.
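To check which subnet the `sagemaker-local` network is actually using, you can inspect it with the Docker CLI once DRfC has been initialised (the network only exists after the stack has been set up):

```shell
# Print the subnet of the internal docker network
docker network inspect sagemaker-local \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```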
**Azure**:
* N-Series VM that comes with NVIDIA Graphics Adapter - recommendation is NC6_Standard
* Ubuntu 20.04
* Standard 30 GB OS drive is sufficient to get started.
* Recommended to add an additional 32 GB data disk if you want to use the Log Analysis container.
* Minimum 8 GB GPU-RAM
* Recommended at least 6 VCPUs
* Storage Account with one Blob container configured for Access Key authentication.
**Local**:
* A modern, comparatively powerful, Intel-based system.
* Ubuntu 20.04; other Linux distros are likely to work.
* 4 core-CPU, equivalent to 8 vCPUs; the more the better.
* NVIDIA Graphics adapter with minimum 8 GB RAM for Sagemaker to run GPU. Robomaker enabled GPU instances need ~1 GB each.
* System RAM + GPU RAM should be at least 32 GB.
* Running DRfC Ubuntu 20.04 on Windows using Windows Subsystem for Linux 2 is possible. See [Installing on Windows](windows.md)
## Installation
The package comes with preparation and setup scripts that allow a turn-key setup for a fresh virtual machine.
```shell
git clone https://github.com/aws-deepracer-community/deepracer-for-cloud.git
```
**For cloud setup** execute:
```shell
cd deepracer-for-cloud && ./bin/prepare.sh
```
This will prepare the VM by partitioning additional drives as well as installing all prerequisites. After a reboot it will continue by running `./bin/init.sh`, setting up the full repository and downloading the core Docker images. Depending on your environment this may take up to 30 minutes. The scripts will create a file `DONE` once completed.
The installation script will adapt `.profile` to ensure that all settings are applied on login. Otherwise run the activation with `source bin/activate.sh`.
**For local install** it is recommended *not* to run the `bin/prepare.sh` script; it may make more changes than you want. Rather, ensure that all prerequisites are set up and run `bin/init.sh` directly.
See also the [following article](https://awstip.com/deepracer-for-cloud-drfc-local-setup-3c6418b2c75a) for guidance.
The Init Script takes a few parameters:
| Variable | Description |
|----------|-------------|
| `-c <cloud>` | Sets the cloud version to be configured, automatically updates the `DR_CLOUD` parameter in `system.env`. Options are `azure`, `aws` or `local`. Default is `local` |
| `-a <arch>` | Sets the architecture to be configured. Either `cpu` or `gpu`. Default is `gpu`. |
## Environment Setup
The initialization script will attempt to auto-detect your environment (`Azure`, `AWS` or `Local`), and store the outcome in the `DR_CLOUD` parameter in `system.env`. You can also pass in a `-c <cloud>` parameter to override it, e.g. if you want to run the minio-based `local` mode in the cloud.
The main difference between the modes lies in the authentication mechanism and the type of storage being configured. The next chapters will review each type of environment on its own.
### AWS
In AWS it is possible to set up authentication to S3 in two ways: Integrated sign-on using [IAM Roles](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) or using access keys.
#### IAM Role
To use IAM Roles you need:
* An empty S3 bucket in the same region as the EC2 instance.
* An IAM Role that has permissions to:
  * Access both the *new* S3 bucket as well as the DeepRacer bucket.
  * AmazonVPCReadOnlyAccess
  * AmazonKinesisVideoStreamsFullAccess if you want to stream to Kinesis
  * CloudWatch
* An EC2 instance with the defined IAM Role assigned.
* Configure `system.env` as follows:
  * `DR_LOCAL_S3_PROFILE=default`
  * `DR_LOCAL_S3_BUCKET=<bucketname>`
  * `DR_UPLOAD_S3_PROFILE=default`
  * `DR_UPLOAD_S3_BUCKET=<your-aws-deepracer-bucket>`
* Run `dr-update` for the configuration to take effect.
#### Manual setup
For access with an IAM user you need:
* An empty S3 bucket in the same region as the EC2 instance.
* A real AWS IAM user set up with access keys:
  * The user should have permissions to access the *new* bucket as well as the dedicated DeepRacer S3 bucket.
  * Use `aws configure` to configure this into the default profile.
* Configure `system.env` as follows:
  * `DR_LOCAL_S3_PROFILE=default`
  * `DR_LOCAL_S3_BUCKET=<bucketname>`
  * `DR_UPLOAD_S3_PROFILE=default`
  * `DR_UPLOAD_S3_BUCKET=<your-aws-deepracer-bucket>`
* Run `dr-update` for the configuration to take effect.
### Azure
Minio has deprecated the gateway feature that exposed an Azure Blob Storage as an S3 bucket. Azure mode now sets up minio in the same way as in local mode.
If you want to use awscli (`aws`) to manually move files then use `aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 ...`, as this will set both `--profile` and `--endpoint-url` parameters to match your configuration.
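As an illustration of what that helper variable contains, the sketch below assigns an assumed value; in a real setup the variable is derived from your configuration when activating, so you never set it by hand:

```shell
# Hypothetical value — activate.sh derives the real one from your settings
DR_LOCAL_PROFILE_ENDPOINT_URL="--profile minio --endpoint-url http://localhost:9000"

# The variable expands into both parameters on the aws command line:
cmd="aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 ls s3://mybucket/"
echo "$cmd"
# → aws --profile minio --endpoint-url http://localhost:9000 s3 ls s3://mybucket/
```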
### Local
Local mode runs a minio server that hosts the data in the `docker/volumes` directory. It is otherwise command-compatible with the Azure setup, as the data is accessible via Minio rather than via native S3.
In Local mode the script-set requires the following:
* Configure the Minio credentials with `aws configure --profile minio`. The default configuration uses the `minio` profile to access Minio. You can choose any username and password, but the username must be at least 3 characters long, and the password at least 8.
* A real AWS IAM user configured with `aws configure` to enable upload of models into AWS DeepRacer.
* Configure `system.env` as follows:
  * `DR_LOCAL_S3_PROFILE=minio`
  * `DR_LOCAL_S3_BUCKET=<bucketname>`
  * `DR_UPLOAD_S3_PROFILE=default`
  * `DR_UPLOAD_S3_BUCKET=<your-aws-deepracer-bucket>`
* Run `dr-update` for the configuration to take effect.
## First Run
For the first run the following final steps are needed. This creates a training run with all default values:
* Define your custom files in `custom_files/` - samples can be found in `defaults/`, which you must copy over:
  * `hyperparameters.json` - defining the training hyperparameters
  * `model_metadata.json` - defining the action space and sensors
  * `reward_function.py` - defining the reward function
* Upload the files into the bucket with `dr-upload-custom-files`. This will also start minio if required.
* Start training with `dr-start-training`

After a while you will see the sagemaker logs on the screen.
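The steps above can be sketched as a short shell session (run from the DRfC root after `source bin/activate.sh`):

```shell
# Copy the three sample files from defaults/ and adjust them to your needs
mkdir -p custom_files
cp defaults/hyperparameters.json defaults/model_metadata.json defaults/reward_function.py custom_files/

# Push them to the bucket (starts minio if required) and begin training
dr-upload-custom-files
dr-start-training
```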
## Troubleshooting
Here are some hints for troubleshooting specific issues you may encounter
### Local training troubleshooting
| Issue | Troubleshooting hint |
|------------- | ---------------------|
Get messages like "Sagemaker is not running" | Run `docker ps -a` to see if the containers are running, or if they stopped due to errors. If this happens after a fresh install, try restarting the system.
Check docker errors for specific container | Run `docker logs -f <containerid>`
Get message "Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on interface <your_interface> ..." when running `./bin/init.sh -c local -a cpu` | It means you have multiple IP addresses and need to specify one within `./bin/init.sh`.<br> If you don't care which one to use, you can get the first one by running ```ifconfig \| grep $(route \| awk '/^default/ {print $8}') -a1 \| grep -o -P '(?<=inet ).*(?= netmask)'```.<br> Edit `./bin/init.sh`, locate the line `docker swarm init`, and change it to `docker swarm init --advertise-addr <your_IP>`.<br> Rerun `./bin/init.sh -c local -a cpu`
I don't have any of the `dr-*` commands | Run `source bin/activate.sh`.
================================================
FILE: docs/mac.md
================================================
# Running DeepRacer-for-Cloud on macOS
DRfC can be run on macOS, both on AWS Mac EC2 instances (mac1/mac2 family) and on local Mac hardware (Intel or Apple Silicon). Because macOS does not support NVIDIA GPUs, training always runs in **CPU mode**.
---
## Architecture overview
On macOS, Docker containers run inside a lightweight Linux VM managed by [Colima](https://github.com/abiosoft/colima) rather than directly on the host. This has a few implications you should be aware of:
| Concern | Impact |
|---|---|
| **No NVIDIA GPU** | Always `cpu` architecture; training is slower than a GPU instance |
| **Colima VM filesystem** | Bind-mount paths (e.g. `/tmp/sagemaker`) must exist inside the VM, not on the macOS host |
| **IMDS not reachable from VM** | IAM role credentials are not automatically available inside containers; explicit AWS keys must be configured |
| **BSD userland** | `sed`, `grep`, `sort`, `readlink` differ from GNU; key shell entrypoints use portable path handling, but custom scripts should still avoid assuming GNU-only flags |
| **bash 3.2 ships with macOS** | A modern bash 5 must be installed via Homebrew and set as the login shell |
---
## Option 1: AWS Mac EC2 instance
AWS offers bare-metal Mac instances (`mac1.metal` for Intel, `mac2.metal` / `mac2-m2.metal` for Apple Silicon). These run macOS natively and support EC2 features like IAM roles, S3, and instance metadata — with the IMDS caveat noted above.
### Prerequisites
* A Mac EC2 instance running macOS Monterey (12) or later
* An IAM role or IAM user with permissions for S3 (and optionally STS, CloudWatch)
* An S3 bucket in the same region as the instance
### Step 1 — Clone the repository
```bash
git clone https://github.com/aws-deepracer-community/deepracer-for-cloud.git
cd deepracer-for-cloud
```
### Step 2 — Run prepare-mac.sh
```bash
bash bin/prepare-mac.sh
```
This script will:
1. Verify macOS version compatibility
2. Install [Homebrew](https://brew.sh) if not present
3. Install required packages: `jq`, `python3`, `git`, `screen`, `bash`
4. Install bash 5 and set it as the default login shell
5. Add a `~/.bash_profile` bootstrap so bash 5 is used even when SSH starts `/bin/bash` (3.2)
6. Install the AWS CLI v2 via the official `.pkg` installer (avoids Homebrew Python conflicts)
7. Install [Colima](https://github.com/abiosoft/colima) and the Docker CLI
8. Start Colima (4 vCPUs, 8 GB RAM, 60 GB disk — adjust as needed)
9. Create `/tmp/sagemaker` inside the Colima VM
10. Install a launchd agent so Colima auto-starts on login
After the script completes, **log out and back in** so the new default shell takes effect.
### Step 3 — Configure AWS credentials
Because containers run inside Colima's Linux VM, they cannot reach the EC2 Instance Metadata Service at `169.254.169.254`. You must provide explicit AWS credentials:
```bash
aws configure --profile default
```
Enter an Access Key ID and Secret Access Key for an IAM user (or long-term credentials). The profile name must match `DR_LOCAL_S3_PROFILE` in `system.env` (default: `default` for AWS cloud setups).
> **Tip:** Create a dedicated IAM user with a policy scoped to your S3 bucket rather than using root or overly broad credentials.
### Step 4 — Run init.sh
```bash
bin/init.sh -c aws -a cpu
```
This sets up the directory structure, configures `system.env` and `run.env`, and pulls the Docker images. Image pulls may take a while depending on bandwidth.
### Step 5 — Activate and train
```bash
source bin/activate.sh
dr-upload-custom-files
dr-start-training -q
```
---
## Option 2: Local Mac (desktop/laptop)
Running DRfC locally on a Mac works for development and small-scale training. Performance is limited by CPU speed and memory.
### Differences from EC2
* No IAM role — configure an IAM user with `aws configure`
* `DR_CLOUD` should be set to `local` in `system.env`, which uses a local MinIO container as the S3 backend
* Colima memory and CPU limits should be tuned to your machine (leave headroom for the macOS host)
### Recommended Colima sizing
| Mac | Recommended Colima config |
|---|---|
| M1/M2/M3 with 16 GB RAM | `--cpu 6 --memory 10 --disk 60` |
| M1/M2/M3 with 32 GB RAM | `--cpu 10 --memory 20 --disk 60` |
| Intel with 16 GB RAM | `--cpu 4 --memory 8 --disk 60` |
To change the sizing after initial setup:
```bash
colima stop
colima start --cpu 6 --memory 10 --disk 60
```
### Apple Silicon (arm64) and container image architecture
The DRfC SimApp images are built for `amd64` (x86_64). On Apple Silicon, Colima runs them via emulation. This works but is slower. To enable it:
```bash
# Install Rosetta 2 if not already present
softwareupdate --install-rosetta
# Start Colima with x86_64 architecture
colima stop
colima start --arch x86_64 --cpu 4 --memory 8 --disk 60
```
> Note: Once Colima is started with `--arch x86_64`, it stays in that mode until deleted. You cannot mix architectures in the same Colima instance.
### Installation steps
```bash
git clone https://github.com/aws-deepracer-community/deepracer-for-cloud.git
cd deepracer-for-cloud
bash bin/prepare-mac.sh
# Log out and back in
bin/init.sh -c local -a cpu
source bin/activate.sh
dr-upload-custom-files
dr-start-training -q
```
---
## Known limitations
| Limitation | Notes |
|---|---|
| CPU-only training | No NVIDIA GPU support on macOS |
| IMDS not reachable from containers | Must use explicit AWS keys; IAM role auto-rotation does not work inside containers |
| `/tmp/sagemaker` must exist in Colima VM | Created automatically by `prepare-mac.sh` and `dr-start-training`; recreate manually after `colima delete` with `colima ssh -- sudo mkdir -p /tmp/sagemaker && colima ssh -- sudo chmod -R a+w /tmp/sagemaker` |
| Colima iptables rules reset on restart | Not relevant with the explicit-keys approach |
| `brew services` fails headlessly | Colima is started via a launchd plist instead |
---
## Troubleshooting
**`bash: ${VAR,,}: bad substitution`**
You are running bash 3.2 (macOS built-in). Run `prepare-mac.sh` to install bash 5, then log out and back in.
**`No configuration file.`** when sourcing `activate.sh`
`init.sh` has not been run yet, or `run.env` does not exist. Run `bin/init.sh` first.
**`docker: command not found`**
Homebrew PATH is not set. Ensure `~/.bash_profile` contains `eval "$(brew shellenv)"` and re-source it.
**`NoCredentialsError: Unable to locate credentials`**
Containers cannot reach IMDS. Run `aws configure --profile default` (or the profile matching `DR_LOCAL_S3_PROFILE`) on the host.
**Colima fails to start**
Check `colima status` and `colima start` output. On freshly allocated Mac EC2 instances the full macOS desktop session may still be initialising — wait a minute and retry.
================================================
FILE: docs/metrics.md
================================================
# Realtime Metrics
It is possible to collect and visualise real-time metrics using the optional telegraf/influxdb/grafana stack.
```mermaid
flowchart TD
A(Robomaker) --> B(Telegraf)
B --> C(InfluxDB)
C --> D(Grafana)
```
When enabled the Robomaker containers will send UDP metrics to Telegraf, which enriches and stores the metrics in the InfluxDB timeseries database container.
Grafana provides a presentation layer for interactive dashboards.
## Initial config and start-up
To enable the feature, simply uncomment the `DR_TELEGRAF_HOST` and `DR_TELEGRAF_PORT` lines in `system.env`. In most cases the default values should work without modification.
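After uncommenting, `system.env` will contain two active lines along these lines (the values below are assumptions for illustration — keep the defaults already present in your own `system.env`):

```shell
# system.env — metrics endpoint (example values; use your file's defaults)
DR_TELEGRAF_HOST=localhost
DR_TELEGRAF_PORT=8092
```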
Start the metrics docker stack using `dr-start-metrics`.
Once running, Grafana should be accessible via a web browser on port 3000, e.g. `http://localhost:3000`.
The default username is `admin`, password `admin`. You will be prompted to set your own password on first login.
*Note: Grafana can take 60-90 seconds to perform initial internal setup the first time it is started. The web UI will not be available until this is complete. You can check the status by viewing the grafana container logs if necessary.*
The metrics stack will remain running until stopped (`dr-stop-metrics`) or the machine is rebooted. It does not need to be restarted in between training runs and should automatically pick up metrics from new models.
## Using the dashboards
A template dashboard is provided to show how to access basic deepracer metrics. You can use this dashboard as a base to build your own more customised dashboards.
After connecting to the Grafana Web UI with a browser use the menu to browse to the Dashboards section.
The template dashboard called `DeepRacer Training template` should be visible, showing graphs of reward, progress, and completed lap times.
As this is an automatically provisioned dashboard you are not able to save changes to it, however you can copy it by clicking on the small cog icon to enter the dashboard settings page, and then clicking `Save as` to make an editable copy.
A full user guide on how to work the dashboards is available on the [Grafana website](https://grafana.com/docs/grafana/latest/dashboards/use-dashboards/).
================================================
FILE: docs/multi_gpu.md
================================================
# Training on a Computer with more than one GPU
In some cases you may end up having a computer with more than one GPU. This is common on a workstation
that has one GPU for general graphics (e.g. GTX 10-series, RTX 20-series) as well as a data center GPU
such as a Tesla K40, K80 or M40.
In this setting things can get a bit chaotic, as DeepRacer will 'greedily' put any workload on any GPU - which will
lead to Out-of-Memory errors somewhere down the road.
## Checking available GPUs
You can use Tensorflow to get an overview of the available devices by running `utils/cuda-check.sh`.
It will say something like:
```
2020-07-04 12:25:55.179580: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-07-04 12:25:55.547206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties:
name: GeForce GTX 1650 major: 7 minor: 5 memoryClockRate(GHz): 1.68
pciBusID: 0000:04:00.0
totalMemory: 3.82GiB freeMemory: 3.30GiB
2020-07-04 12:25:55.732066: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 1 with properties:
name: Tesla M40 24GB major: 5 minor: 2 memoryClockRate(GHz): 1.112
pciBusID: 0000:81:00.0
totalMemory: 22.41GiB freeMemory: 22.30GiB
2020-07-04 12:25:55.732141: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0, 1
2020-07-04 12:25:56.745647: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-04 12:25:56.745719: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0 1
2020-07-04 12:25:56.745732: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N N
2020-07-04 12:25:56.745743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 1: N N
2020-07-04 12:25:56.745973: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 195 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1650, pci bus id: 0000:04:00.0, compute capability: 7.5)
2020-07-04 12:25:56.750352: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 1147 MB memory) -> physical GPU (device: 1, name: Tesla M40 24GB, pci bus id: 0000:81:00.0, compute capability: 5.2)
2020-07-04 12:25:56.774305: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0, 1
2020-07-04 12:25:56.774408: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-04 12:25:56.774425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0 1
2020-07-04 12:25:56.774436: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N N
2020-07-04 12:25:56.774446: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 1: N N
2020-07-04 12:25:56.774551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/device:GPU:0 with 195 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1650, pci bus id: 0000:04:00.0, compute capability: 7.5)
2020-07-04 12:25:56.774829: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/device:GPU:1 with 1147 MB memory) -> physical GPU (device: 1, name: Tesla M40 24GB, pci bus id: 0000:81:00.0, compute capability: 5.2)
['/device:GPU:0', '/device:GPU:1']
```
In this case the CUDA device #0 is the GTX 1650 and the CUDA device #1 is the Tesla M40.
### Selecting Device
To control the CUDA device assignment for Sagemaker and Robomaker, set the following two variables in `system.env`:
```
DR_ROBOMAKER_CUDA_DEVICES=0
DR_SAGEMAKER_CUDA_DEVICES=1
```
The number is the CUDA number of the GPU you want the containers to use.
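As a quick cross-check, `nvidia-smi -L` lists the GPUs the driver sees. Note that CUDA may enumerate devices in a different order than `nvidia-smi` (CUDA defaults to fastest-first rather than PCI bus order), so the output of `utils/cuda-check.sh` remains the authoritative mapping:

```shell
# List GPUs known to the NVIDIA driver, with their index and UUID
nvidia-smi -L
```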
================================================
FILE: docs/multi_run.md
================================================
# Managing Experiments
## Experiment sub-directories
When iterating on a model you typically need different reward functions, action spaces, hyperparameters, and track settings across runs. By default DRfC stores all of this in `run.env` and `custom_files/` at the root of the installation, which can become difficult to manage over time.
The **experiment sub-directory** feature lets you keep every config and custom file for a training run in its own folder under `experiments/`. DRfC then picks up those files automatically when you activate with the experiment name.
### Directory structure
```
deepracer-for-cloud/
├── experiments/
│ ├── sprint-v1/
│ │ ├── run.env
│ │ ├── worker-2.env # optional – multi-worker only
│ │ └── custom_files/
│ │ ├── reward_function.py
│ │ ├── model_metadata.json
│ │ └── hyperparameters.json
│ └── sprint-v2/
│ ├── run.env
│ └── custom_files/
│ └── ...
├── system.env
└── ...
```
The `experiments/` directory is excluded from git (via `.gitignore`) to avoid committing sensitive configuration and credentials.
### Setting up your first experiment
1. Create the directory structure (run from the DRfC root):
```bash
mkdir -p experiments/sprint-v1/custom_files
```
2. Copy your current run configuration into the experiment:
```bash
cp run.env experiments/sprint-v1/
cp custom_files/* experiments/sprint-v1/custom_files/
```
If you are using multiple workers, copy the worker env files too:
```bash
cp worker-*.env experiments/sprint-v1/
```
3. Activate with the experiment name using the `-e` flag:
```bash
source bin/activate.sh -e sprint-v1
```
### Activating an experiment
There are two ways to select an experiment:
**Option A — `-e` flag (recommended)**
Pass the experiment name when sourcing the activation script. This takes precedence over anything in `system.env`:
```bash
source bin/activate.sh -e sprint-v1
```
**Option B — `DR_EXPERIMENT_NAME` in `system.env`**
Uncomment and set the variable in `system.env`:
```
DR_EXPERIMENT_NAME=sprint-v1
```
Then run `dr-update` or re-source `bin/activate.sh`. Use this option if you want the experiment to persist across shell sessions automatically.
When `DR_EXPERIMENT_NAME` is set (by either method), DRfC will:
- Load `run.env` from `experiments/<name>/run.env`
- Load `worker-N.env` from `experiments/<name>/worker-N.env` (multi-worker)
- Sync `custom_files` to/from `experiments/<name>/custom_files/`
- Show `Experiment: <name>` in `dr-summary`
If the experiment directory does not exist, activation will abort with an error.
### Iterating to a new experiment
Copy the entire experiment folder to a new name and update the model prefix in `run.env`:
```bash
cp -av experiments/sprint-v1 experiments/sprint-v2
```
Edit `experiments/sprint-v2/run.env` to update `DR_LOCAL_S3_MODEL_PREFIX` (and `DR_LOCAL_S3_PRETRAINED_PREFIX` if you want to continue training from the previous experiment's model), then activate the new experiment:
```bash
source bin/activate.sh -e sprint-v2
```
### Custom files upload and download
`dr-upload-custom-files` and `dr-download-custom-files` are experiment-aware. When an experiment is active they sync against `experiments/<name>/custom_files/` instead of the root `custom_files/` directory.
---
# Running Multiple Parallel Experiments
It is possible to run multiple experiments on one computer in parallel. This is possible both in `swarm` and `compose` mode, and is controlled by `DR_RUN_ID` in `run.env`.
The feature works by creating unique prefixes to the container names:
* In Swarm mode this is done through defining a stack name (default: deepracer-0)
* In Compose mode this is done through adding a project name.
## Suggested way to use the feature
By default `run.env` is loaded when DRfC is activated - but it is possible to load a separate configuration through `source bin/activate.sh <filename>`, or through `source bin/activate.sh -e <experiment-name>` when using experiment sub-directories.
The best way to use this feature is to have a bash-shell per experiment, and to load a separate configuration per shell.
After activating one can control each experiment independently through using the `dr-*` commands.
If using local or Azure mode, the S3 / Minio instance is shared and runs only once.
================================================
FILE: docs/multi_worker.md
================================================
# Using multiple Robomaker workers
One way to accelerate training is to launch multiple Robomaker workers that feed into one Sagemaker instance.
The number of workers is configured by setting `DR_WORKERS` in `system.env` to the desired number of workers. The result is that the episodes (hyperparameter `num_episodes_between_training`) will be divided over the number of workers. The theoretical maximum number of workers equals `num_episodes_between_training`.
The training can be started as normal.
## How many workers do I need?
One Robomaker worker requires 2-4 vCPUs. Tests show that a `c5.4xlarge` instance can run 3 workers plus Sagemaker without a drop in performance. Using OpenGL images reduces the number of vCPUs required per worker.
To avoid issues with the position from which evaluations are run, ensure that `(num_episodes_between_training / DR_WORKERS) * DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST = 1.0`.
Example: With 3 workers set `num_episodes_between_training: 30` and `DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST=0.1`.
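As a quick sanity check of that relationship, the coverage can be computed in the shell (values from the example above; this is plain arithmetic, not a DRfC command):

```shell
# Verify that (episodes / workers) * advance_dist covers the full track (= 1.0).
EPISODES=30
WORKERS=3
ADVANCE_DIST=0.1
awk -v e="$EPISODES" -v w="$WORKERS" -v d="$ADVANCE_DIST" \
    'BEGIN { printf "coverage=%.2f\n", (e / w) * d }'
# prints coverage=1.00
```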
Note: Sagemaker will stop collecting experience once you have reached 10,000 steps (3-layer CNN) in an iteration. For longer tracks with 600-1000 steps per completed episode this defines the upper bound for the number of workers and episodes per iteration.
## Training with different parameters for each worker
It is also possible to use different configurations between workers, such as different tracks (`DR_WORLD_NAME`). To enable this, set `DR_TRAIN_MULTI_CONFIG=True` in `run.env`, then make copies of `defaults/template-worker.env` in the main deepracer-for-cloud directory named `worker-2.env`, `worker-3.env`, etc. (So alongside `run.env` you should have `worker-2.env`, `worker-3.env`, etc.; `run.env` is still used for worker 1.) Modify the worker env files with your desired changes, which can be more than just the world name. These additional worker env files are only used when training with multiple workers.
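A minimal example of such a worker file, using values taken from `defaults/template-worker.env` (in practice you would copy the template and edit it rather than write the file by hand):

```shell
# Hypothetical example: a second worker that trains on a different track.
# See defaults/template-worker.env for the full set of overridable settings.
cat > worker-2.env <<'EOF'
DR_WORLD_NAME=reInvent2019_track
DR_RACE_TYPE=TIME_TRIAL
DR_CAR_COLOR=Blue
DR_ENABLE_DOMAIN_RANDOMIZATION=False
EOF
```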
## Watching the streams
If you want to watch the streams and are in `compose` mode, you can use the script `utils/start-local-browser.sh` to dynamically create an HTML page that displays the KVS streams from all workers at once.
================================================
FILE: docs/opengl.md
================================================
# GPU Accelerated OpenGL for Robomaker
One way to improve performance, especially of Robomaker, is to enable GPU-accelerated OpenGL. OpenGL can significantly improve Gazebo performance even on GPUs that lack the memory, or are too old, to support Tensorflow.
## Desktop
On an Ubuntu desktop running Unity hardly any additional steps are required.
* Ensure that a recent Nvidia driver is installed and is running.
* Ensure that nvidia-docker is installed; review `bin/prepare.sh` for steps if you do not want to directly run the script.
* Configure DRfC using the following settings in `system.env`:
* `DR_HOST_X=True`; uses the local X server rather than starting one within the docker container.
* `DR_DISPLAY`; set to the value of your running X server, if not set then `DISPLAY` will be used.
Before running `dr-start-training`/`dr-start-evaluation` ensure that `DR_DISPLAY`/`DISPLAY` and `XAUTHORITY` are defined.
Check that OpenGL is working by looking for `gzserver` in `nvidia-smi`.
If `DR_GUI_ENABLE=True` then the Gazebo UI, rviz and rqt will open up in separate windows. (With multiple workers it can get crowded...)
### Remote connection to Desktop
If you want to start training or evaluation via SSH (e.g. to increment the training whilst you are on the go) there are a few steps to do:
* Ensure that you are actually logged in to the local machine (desktop session is running).
* In the SSH terminal:
* Ensure `DR_DISPLAY` is configured in `system.env`. Otherwise run `export DISPLAY=:1`. [*]
* Run `export XAUTHORITY=/run/user/$(id -u)/gdm/Xauthority` to let X know where the X magic cookie is.
* Run `source bin/activate.sh` as normal.
* Run your `dr-start-training` or `dr-start-evaluation` command.
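The steps above can be condensed into a copy-paste session (the display number and cookie path are examples for a GDM-based desktop; yours may differ):

```shell
# Remote SSH session sketch: point X clients at the local desktop session.
export DISPLAY=:1                                      # only needed if DR_DISPLAY is not set in system.env
export XAUTHORITY="/run/user/$(id -u)/gdm/Xauthority"  # X magic cookie location under GDM

# Then activate and start training as usual:
#   source bin/activate.sh
#   dr-start-training
```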
*Remark*: Setting `DISPLAY` will lead to certain commands (e.g. `dr-logs-sagemaker`) opening in a terminal window on the desktop, rather than the output being shown in the SSH terminal. Use of `DR_DISPLAY` is recommended to avoid this.
## Headless Server
OpenGL acceleration also works on a headless server with a GPU, e.g. an EC2 instance, or a local computer with a display-less GPU (e.g. Tesla K40, K80, M40).
The same applies to a desktop computer where you are not logged in; in that case also disconnect any monitor cables to avoid conflicts.
* Ensure that a Nvidia driver and nvidia-docker is installed; review `bin/prepare.sh` for steps if you do not want to directly run the script.
* Setup an X-server on the host. `utils/setup-xorg.sh` is a basic installation script.
* Configure DRfC using the following settings in `system.env`:
* `DR_HOST_X=True`; uses the local X server rather than starting one within the docker container.
* `DR_DISPLAY`; the X display that the headless X server will start on. (Default is `:99`, avoid using `:0` or `:1` as it may conflict with other X servers.)
Start up the X server with `utils/start-xorg.sh`.
If `DR_GUI_ENABLE=True` then a VNC server will be started on port 5900 so that you can connect and interact with the Gazebo UI.
Check that OpenGL is working by looking for `gzserver` in `nvidia-smi`.
## WSL2 on Windows 11
OpenGL is also supported in WSL2 on Windows 11. By default an Xwayland server is started in Ubuntu 22.04.
To enable OpenGL acceleration perform the following steps:
* Install `x11-xserver-utils` with `sudo apt install x11-xserver-utils`.
* Configure DRfC using the following settings in `system.env`:
* `DR_HOST_X=True`; uses the local X server rather than starting one within the docker container.
* `DR_DISPLAY=:0`; the Xwayland starts on :0 by default.
If you want to interact with the Gazebo UI, set `DR_DOCKER_STYLE=compose` and `DR_GUI_ENABLE=True` in `system.env`.
================================================
FILE: docs/reference.md
================================================
# Deepracer-for-Cloud Reference
## Environment Variables
The scripts assume that two files are populated with the required values: `system.env`, containing constant configuration values, and `run.env`, with run-specific values. Which values go into which file does not really matter.
| Variable | Description |
|----------|-------------|
| `DR_RUN_ID` | Used if you have multiple independent training jobs on a single DRfC instance. This is an advanced configuration; generally you should leave this as the default `0`.|
| `DR_WORLD_NAME` | Defines the track to be used.|
| `DR_RACE_TYPE` | Valid options are `TIME_TRIAL`, `OBJECT_AVOIDANCE`, and `HEAD_TO_BOT`.|
| `DR_CAR_COLOR` | Valid options are `Black`, `Grey`, `Blue`, `Red`, `Orange`, `White`, and `Purple`.|
| `DR_CAR_NAME` | Display name of car; shows in Deepracer Console when uploading.|
| `DR_ENABLE_DOMAIN_RANDOMIZATION` | If `True`, this cycles through different environment colors and lighting each episode. This is typically used to make your model more robust and generalized instead of tightly aligned with the simulator.|
| `DR_UPLOAD_S3_PREFIX` | Prefix of the target location. (Typically starts with `DeepRacer-SageMaker-RoboMaker-comm-`.)|
| `DR_EVAL_NUMBER_OF_TRIALS` | How many laps to complete for evaluation simulations.|
| `DR_EVAL_IS_CONTINUOUS` | If `False`, the evaluation trial ends if your car goes off track or collides. If `True`, your car takes the penalty times configured in the parameters below and continues the trial.|
| `DR_EVAL_OFF_TRACK_PENALTY` | Number of seconds penalty time added for an off track during evaluation. Only takes effect if `DR_EVAL_IS_CONTINUOUS` is set to True.|
| `DR_EVAL_COLLISION_PENALTY` | Number of seconds penalty time added for a collision during evaluation. Only takes effect if `DR_EVAL_IS_CONTINUOUS` is set to True.|
| `DR_EVAL_SAVE_MP4` | Set to `True` to save MP4 of an evaluation run. |
| `DR_EVAL_REVERSE_DIRECTION` | Set to `True` to reverse the direction in which the car traverses the track.|
| `DR_TRAIN_CHANGE_START_POSITION` | Determines if the racer shall round-robin the starting position during training sessions. (Recommended to be `True` for initial training.)|
| `DR_TRAIN_ALTERNATE_DRIVING_DIRECTION` | `True` or `False`. If `True`, the car will alternate driving between clockwise and counter-clockwise each episode.|
| `DR_TRAIN_START_POSITION_OFFSET` | Used to control where to start the training from on first episode.|
| `DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST` | How far to advance the start position each episode in round robin. 0.05 is 5% of the track. It is generally best to pick a value such that the episodes of one iteration distribute evenly around the track; for example, with 20 episodes per iteration, 0.05, 0.10 or 0.20 would be good.|
| `DR_TRAIN_MULTI_CONFIG` | `True` or `False`. This is used if you want to use different run.env configurations for each worker in a multi worker training run. See multi config documentation for more details on how to set this up.|
| `DR_TRAIN_MIN_EVAL_TRIALS` | The minimum number of evaluation trials run between each training iteration. Evaluations continue as long as policy training is occurring, so more trials than this may run. This establishes the minimum, and is generally useful if you want to speed up training, especially when using GPU sagemaker containers.|
| `DR_TRAIN_REVERSE_DIRECTION` | Set to `True` to reverse the direction in which the car traverses the track. |
| `DR_TRAIN_BEST_MODEL_METRIC` | Can be used to control which model is kept as the "best" model. Set to `progress` to select the model with the highest evaluation completion percentage, set to `reward` to select the model with the highest evaluation reward.|
| `DR_TRAIN_MAX_STEPS_PER_ITERATION` | Can be used to control the max number of steps per iteration to use for learning, the excess steps will be discarded to avoid out-of-memory situations, default is 10000. |
| `DR_LOCAL_S3_PRETRAINED` | Determines if training or evaluation shall be based on the model created in a previous session, held in `s3://{DR_LOCAL_S3_BUCKET}/{DR_LOCAL_S3_PRETRAINED_PREFIX}`, accessible with credentials held in profile `{DR_LOCAL_S3_PROFILE}`.|
| `DR_LOCAL_S3_PRETRAINED_PREFIX` | Prefix of pretrained model within S3 bucket.|
| `DR_LOCAL_S3_MODEL_PREFIX` | Prefix of model within S3 bucket.|
| `DR_LOCAL_S3_BUCKET` | Name of S3 bucket which will be used during the session.|
| `DR_LOCAL_S3_CUSTOM_FILES_PREFIX` | Prefix of configuration files within S3 bucket.|
| `DR_LOCAL_S3_TRAINING_PARAMS_FILE` | Name of YAML file that holds parameters sent to the robomaker container for configuration during training. Filename is relative to `s3://{DR_LOCAL_S3_BUCKET}/{DR_LOCAL_S3_MODEL_PREFIX}`.|
| `DR_LOCAL_S3_EVAL_PARAMS_FILE` | Name of YAML file that holds parameters sent to the robomaker container for configuration during evaluations. Filename is relative to `s3://{DR_LOCAL_S3_BUCKET}/{DR_LOCAL_S3_MODEL_PREFIX}`.|
| `DR_LOCAL_S3_MODEL_METADATA_KEY` | Location where the `model_metadata.json` file is stored.|
| `DR_LOCAL_S3_HYPERPARAMETERS_KEY` | Location where the `hyperparameters.json` file is stored.|
| `DR_LOCAL_S3_REWARD_KEY` | Location where the `reward_function.py` file is stored.|
| `DR_LOCAL_S3_METRICS_PREFIX` | Location where the metrics will be stored.|
| `DR_OA_NUMBER_OF_OBSTACLES` | For Object Avoidance, the number of obstacles on the track.|
| `DR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES` | Minimum distance in meters between obstacles.|
| `DR_OA_RANDOMIZE_OBSTACLE_LOCATIONS` | If True, obstacle locations will randomly change after each episode.|
| `DR_OA_IS_OBSTACLE_BOT_CAR` | If True, obstacles will appear as a stationary car instead of a box.|
| `DR_OA_OBJECT_POSITIONS` | Positions of boxes on the track. Tuples consisting of progress (fraction [0..1]) and inside or outside lane (-1 or 1). Example: `"0.23,-1;0.46,1"`|
| `DR_H2B_IS_LANE_CHANGE` | If True, bot cars will change lanes based on configuration.|
| `DR_H2B_LOWER_LANE_CHANGE_TIME` | Minimum time in seconds before car will change lanes.|
| `DR_H2B_UPPER_LANE_CHANGE_TIME` | Maximum time in seconds before car will change lanes.|
| `DR_H2B_LANE_CHANGE_DISTANCE` | Distance in meters over which the bot car changes lanes.|
| `DR_H2B_NUMBER_OF_BOT_CARS` | Number of bot cars on the track.|
| `DR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS` | Minimum distance between bot cars.|
| `DR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS` | If True, bot car locations will randomly change after each episode.|
| `DR_H2B_BOT_CAR_SPEED` | How fast the bot cars go in meters per second.|
| `DR_CLOUD` | Can be `azure`, `aws`, `local` or `remote`; determines how the storage will be configured.|
| `DR_AWS_APP_REGION` | (AWS only) Region for other AWS resources (e.g. Kinesis) |
| `DR_UPLOAD_S3_PROFILE` | AWS Cli profile to be used that holds the 'real' S3 credentials needed to upload a model into AWS DeepRacer.|
| `DR_UPLOAD_S3_BUCKET` | Name of the AWS DeepRacer bucket where models will be uploaded. (Typically starts with `aws-deepracer-`.)|
| `DR_LOCAL_S3_PROFILE` | Name of AWS profile with credentials to be used. Stored in `~/.aws/credentials` unless AWS IAM Roles are used.|
| `DR_GUI_ENABLE` | Enable or disable the Gazebo GUI in Robomaker |
| `DR_KINESIS_STREAM_NAME` | Kinesis stream name. Used if you actually publish to the AWS KVS service. Leave blank if you do not want this. |
| `DR_KINESIS_STREAM_ENABLE` | Enable or disable the Kinesis stream. If `True`, frames are published both to an AWS KVS stream (if a stream name is set) and to the topic `/racecar/deepracer/kvs_stream`. Leave `True` if you want to watch the car racing.|
| `DR_SAGEMAKER_IMAGE` | Determines which sagemaker image will be used for training.|
| `DR_ROBOMAKER_IMAGE` | Determines which robomaker image will be used for training or evaluation.|
| `DR_MINIO_IMAGE` | Determines which Minio image will be used. |
| `DR_COACH_IMAGE` | Determines which coach image will be used for training.|
| `DR_WORKERS` | Number of Robomaker workers to be used for training. See additional documentation for more information about this feature.|
| `DR_ROBOMAKER_MOUNT_LOGS` | True to get logs mounted to `$DR_DIR/data/logs/robomaker/$DR_LOCAL_S3_MODEL_PREFIX`|
| `DR_ROBOMAKER_MOUNT_SIMAPP_DIR` | Path to the altered Robomaker bundle, e.g. `/home/ubuntu/deepracer-simapp/bundle`.|
| `DR_CLOUD_WATCH_ENABLE` | Send log files to AWS CloudWatch.|
| `DR_CLOUD_WATCH_LOG_STREAM_PREFIX` | Add a prefix to the CloudWatch log stream name.|
| `DR_DOCKER_STYLE` | Valid options are `Swarm` and `Compose`. Use `Compose` for OpenGL-optimized containers.|
| `DR_HOST_X` | Uses the host X-windows server, rather than starting one inside of Robomaker. Required for OpenGL images.|
| `DR_WEBVIEWER_PORT` | Port for the web-viewer proxy which enables the streaming of all robomaker workers at once.|
| `CUDA_VISIBLE_DEVICES` | Used in multi-GPU configurations. See additional documentation for more information about this feature.|
| `DR_TELEGRAF_HOST` | The hostname to send real-time metrics to. Uncommenting this enables real-time metrics collection using Telegraf. The telegraf/influxdb/grafana compose stack must already be running (use `dr-start-metrics`), and the value should usually be `telegraf` to send metrics to the telegraf container.|
| `DR_TELEGRAF_PORT` | The UDP port to send real-time metrics to. Should usually remain 8092.|
| `DR_QUIET_ACTIVATE` | Set to `True` to suppress the environment summary dashboard that is displayed when sourcing `bin/activate.sh` in an interactive shell. Defaults to `False`.|
| `DR_EXPERIMENT_NAME` | Optional. When set, DRfC loads `run.env`, `worker-N.env`, and `custom_files/` from `experiments/<name>/` instead of the repository root. Can be set here or passed via `source bin/activate.sh -e <name>`. See [Managing Experiments](multi_run.md).|
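For orientation, a minimal `run.env` using a handful of the variables above might look like this (loosely based on `defaults/template-run.env`; the model prefix is a placeholder):

```shell
# Illustrative minimal run.env (values are examples, not recommendations).
DR_RUN_ID=0
DR_WORLD_NAME=reinvent_base
DR_RACE_TYPE=TIME_TRIAL
DR_CAR_NAME=FastCar
DR_LOCAL_S3_MODEL_PREFIX=rl-deepracer-1   # placeholder model prefix
DR_LOCAL_S3_PRETRAINED=False
```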
## Commands
| Command | Description |
|---------|-------------|
| `dr-update` | Loads in all scripts and environment variables again.|
| `dr-reload` | Re-sources `bin/activate.sh` with the current configuration file.|
| `dr-summary` | Displays the environment summary dashboard (cloud config, Docker images, running services and containers). Runs automatically on interactive shell activation unless `DR_QUIET_ACTIVATE=True`.|
| `dr-update-env` | Loads in all environment variables from `system.env` and `run.env`.|
| `dr-upload-custom-files` | Uploads changed configuration files from `custom_files/` into `s3://{DR_LOCAL_S3_BUCKET}/custom_files`.|
| `dr-download-custom-files` | Downloads changed configuration files from `s3://{DR_LOCAL_S3_BUCKET}/custom_files` into `custom_files/`.|
| `dr-start-training` | Starts a training session in the local VM based on current configuration.|
| `dr-increment-training` | Updates the configuration, setting the current model prefix as the pretrained prefix and incrementing the model's serial number.|
| `dr-stop-training` | Stops the current local training session. Uploads log files.|
| `dr-start-evaluation` | Starts an evaluation session in the local VM based on current configuration.|
| `dr-stop-evaluation` | Stops the current local evaluation session. Uploads log files.|
| `dr-start-loganalysis` | Starts a Jupyter log-analysis container, available on port 8888.|
| `dr-stop-loganalysis` | Stops the Jupyter log-analysis container.|
| `dr-start-viewer` | Starts an NGINX proxy to stream all the robomaker streams; accessible remotely.|
| `dr-stop-viewer` | Stops the NGINX proxy.|
| `dr-logs-sagemaker` | Displays the logs from the running Sagemaker container.|
| `dr-logs-robomaker` | Displays the logs from the running Robomaker container.|
| `dr-list-aws-models` | Lists the models that are currently stored in your AWS DeepRacer S3 bucket. |
| `dr-set-upload-model` | Updates the `run.env` with the prefix and name of your selected model. |
| `dr-upload-model` | Uploads the model defined in `DR_LOCAL_S3_MODEL_PREFIX` to `s3://{DR_UPLOAD_S3_BUCKET}/{DR_UPLOAD_S3_PREFIX}`. |
},
{
"path": "docs/docker.md",
"chars": 2642,
"preview": "# About the Docker setup\n\nDRfC supports running Docker in to modes `swarm` and `compose` - this behaviour is configured "
},
{
"path": "docs/droa.md",
"chars": 5557,
"preview": "# DeepRacer on AWS (DRoA) Integration\n\n[DeepRacer on AWS](https://aws.amazon.com/solutions/implementations/deepracer-on-"
},
{
"path": "docs/head-to-head.md",
"chars": 874,
"preview": "# Head-to-Head Race (Beta)\n\nIt is possible to run a head-to-head race, similar to the races in the brackets \nrun by AWS "
},
{
"path": "docs/index.md",
"chars": 3177,
"preview": "# Introduction\n\nProvides a quick and easy way to get up and running with a DeepRacer training environment in AWS or Azur"
},
{
"path": "docs/installation.md",
"chars": 8234,
"preview": "# Installing Deepracer-for-Cloud\n\n## Requirements\n\nDepending on your needs as well as specific needs of the cloud platfo"
},
{
"path": "docs/mac.md",
"chars": 6755,
"preview": "# Running DeepRacer-for-Cloud on macOS\n\nDRfC can be run on macOS, both on AWS Mac EC2 instances (mac1/mac2 family) and o"
},
{
"path": "docs/metrics.md",
"chars": 2236,
"preview": "# Realtime Metrics\n\nIt is possible to collect and visualise real-time metrics using the optional telegraf/influxdb/grafa"
},
{
"path": "docs/multi_gpu.md",
"chars": 3852,
"preview": "# Training on a Computer with more than one GPU\n\nIn some cases you might end up with having a computer with more than on"
},
{
"path": "docs/multi_run.md",
"chars": 4408,
"preview": "# Managing Experiments\n\n## Experiment sub-directories\n\nWhen iterating on a model you typically need different reward fun"
},
{
"path": "docs/multi_worker.md",
"chars": 2199,
"preview": "# Using multiple Robomaker workers\n\nOne way to accelerate training is to launch multiple Robomaker workers that feed int"
},
{
"path": "docs/opengl.md",
"chars": 3712,
"preview": "# GPU Accelerated OpenGL for Robomaker\n\nOne way to improve performance, especially of Robomaker, is to enable GPU-accele"
},
{
"path": "docs/reference.md",
"chars": 12152,
"preview": "# Deepracer-for-Cloud Reference\n\n## Environment Variables\n\nThe scripts assume that two files `system.env` containing con"
},
{
"path": "docs/video.md",
"chars": 2083,
"preview": "# Watching the car\n\nThere are multiple ways to watch the car during training and evaluation. The ports and 'features' de"
},
{
"path": "docs/windows.md",
"chars": 3547,
"preview": "# Installing on Windows\n\n## Prerequisites\n\nThe basic installation steps to get a NVIDIA GPU / CUDA enabled Ubuntu subsys"
},
{
"path": "requirements.txt",
"chars": 37,
"preview": "boto3\npyyaml\nrequests\ndeepracer-utils"
},
{
"path": "scripts/droa/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "scripts/droa/auth.py",
"chars": 13701,
"preview": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: A"
},
{
"path": "scripts/droa/delete_model.py",
"chars": 5329,
"preview": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: A"
},
{
"path": "scripts/droa/download_logs.py",
"chars": 8042,
"preview": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: A"
},
{
"path": "scripts/droa/get_model.py",
"chars": 8549,
"preview": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: A"
},
{
"path": "scripts/droa/import_model.py",
"chars": 25842,
"preview": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: A"
},
{
"path": "scripts/droa/list_models.py",
"chars": 4963,
"preview": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: A"
},
{
"path": "scripts/evaluation/prepare-config.py",
"chars": 8501,
"preview": "#!/usr/bin/python3\n\nimport boto3\nfrom datetime import datetime\nimport sys\nimport os \nimport time\nimport json\nimport io\ni"
},
{
"path": "scripts/evaluation/start.sh",
"chars": 3548,
"preview": "#!/usr/bin/env bash\n\nsource $DR_DIR/bin/scripts_wrapper.sh\n\nusage() {\n echo \"Usage: $0 [-q] [-c]\"\n echo \" -q "
},
{
"path": "scripts/evaluation/stop.sh",
"chars": 362,
"preview": "#!/usr/bin/env bash\n\nSTACK_NAME=\"deepracer-eval-$DR_RUN_ID\"\n\n# Check if we will use Docker Swarm or Docker Compose\nif [["
},
{
"path": "scripts/log-analysis/start.sh",
"chars": 1087,
"preview": "#!/usr/bin/env bash\n\nif docker ps --filter \"name=deepracer-analysis\" --format \"{{.Names}}\" | grep -q \"^deepracer-analysi"
},
{
"path": "scripts/log-analysis/stop.sh",
"chars": 197,
"preview": "#!/usr/bin/env bash\n\necho \"Stopping log-analysis container...\"\nif docker stop deepracer-analysis > /dev/null 2>&1; then\n"
},
{
"path": "scripts/metrics/start.sh",
"chars": 131,
"preview": "#!/usr/bin/env bash\n\nCOMPOSE_FILES=./docker/docker-compose-metrics.yml\n\ndocker compose -f $COMPOSE_FILES -p deepracer-me"
},
{
"path": "scripts/metrics/stop.sh",
"chars": 130,
"preview": "#!/usr/bin/env bash\n\nCOMPOSE_FILES=./docker/docker-compose-metrics.yml\n\ndocker compose -f $COMPOSE_FILES -p deepracer-me"
},
{
"path": "scripts/training/increment.sh",
"chars": 2972,
"preview": "#!/usr/bin/env bash\n\nusage() {\n echo \"Usage: $0 [-f] [-w] [-p <model-prefix>] [-d <delimiter>]\"\n echo \"\"\n echo "
},
{
"path": "scripts/training/prepare-config.py",
"chars": 13699,
"preview": "#!/usr/bin/python3\n\nfrom datetime import datetime\nimport boto3\nimport sys\nimport os \nimport time\nimport json\nimport io\ni"
},
{
"path": "scripts/training/start.sh",
"chars": 7969,
"preview": "#!/usr/bin/env bash\n\nsource $DR_DIR/bin/scripts_wrapper.sh\n\nusage() {\n echo \"Usage: $0 [-w] [-q | -s | -r [n] | -a ] [-"
},
{
"path": "scripts/training/stop.sh",
"chars": 1874,
"preview": "#!/usr/bin/env bash\nsource $DR_DIR/bin/scripts_wrapper.sh\n\nSTACK_NAME=\"deepracer-$DR_RUN_ID\"\n\nSAGEMAKER_CONTAINERS=$(dr-"
},
{
"path": "scripts/upload/download-model.sh",
"chars": 3633,
"preview": "#!/usr/bin/env bash\n\nusage() {\n echo \"Usage: $0 [-f] [-w] [-d] -s <source-prefix> -t <target-prefix>\"\n echo \" -f"
},
{
"path": "scripts/upload/increment.sh",
"chars": 2853,
"preview": "#!/usr/bin/env bash\n\nusage() {\n echo \"Usage: $0 [-f] [-w] [-p <model-prefix>] [-d <delimiter>]\"\n echo \"\"\n echo "
},
{
"path": "scripts/upload/prepare-config.py",
"chars": 3868,
"preview": "#!/usr/bin/python3\n\nimport boto3\nimport sys\nimport os \nimport time\nimport json\nimport io\nimport yaml\n\nconfig = {}\nconfig"
},
{
"path": "scripts/upload/upload-car.sh",
"chars": 2384,
"preview": "#!/usr/bin/env bash\n\nusage() {\n echo \"Usage: $0 [-L] [-f]\"\n echo \" -f Force. Do not ask for confirmat"
},
{
"path": "scripts/upload/upload-model.sh",
"chars": 8798,
"preview": "#!/usr/bin/env bash\n\nusage() {\n echo \"Usage: $0 [-f] [-w] [-d] [-b] [-1] [-i] [-I] [-L] [-c <checkpoint>] [-p <model-pr"
},
{
"path": "scripts/viewer/index.template.html",
"chars": 17371,
"preview": "<!DOCTYPE html>\r\n<html lang=\"en\">\r\n\r\n<head>\r\n <title>DR-$DR_RUN_ID - $DR_LOCAL_S3_MODEL_PREFIX</title>\r\n <style>\r\n"
},
{
"path": "scripts/viewer/start.sh",
"chars": 3908,
"preview": "#!/usr/bin/env bash\n\nusage() {\n echo \"Usage: $0 [-t topic] [-w width] [-h height] [-q quality] -b [browser-command] -p "
},
{
"path": "scripts/viewer/stop.sh",
"chars": 445,
"preview": "#!/usr/bin/env bash\n\nSTACK_NAME=\"deepracer-$DR_RUN_ID-viewer\"\nCOMPOSE_FILES=$DR_DIR/docker/docker-compose-webviewer.yml\n"
},
{
"path": "utils/Dockerfile.gpu-detect",
"chars": 298,
"preview": "FROM nvcr.io/nvidia/cuda:12.6.3-base-ubuntu24.04\nRUN apt-get update && apt-get install -y --no-install-recommends wget p"
},
{
"path": "utils/cuda-check-tf.py",
"chars": 393,
"preview": "from tensorflow.python.client import device_lib\nimport tensorflow as tf\n\ndef get_available_gpus():\n local_device_prot"
},
{
"path": "utils/cuda-check.sh",
"chars": 290,
"preview": "#!/usr/bin/env bash\n\nCONTAINER_ID=$(docker create --rm -ti -e CUDA_VISIBLE_DEVICES --name cuda-check awsdeepracercommuni"
},
{
"path": "utils/download-car-model.py",
"chars": 4818,
"preview": "#!/usr/bin/env python3\n\"\"\"\nThis script checks for model files in an S3 bucket, downloads, and renames them based on a sp"
},
{
"path": "utils/evaluate.sh",
"chars": 2149,
"preview": "#!/usr/bin/env bash\n\n# This script evaluates DeepRacer models by managing the evaluation process.\n# It requires one argu"
},
{
"path": "utils/sample-createspot.sh",
"chars": 6611,
"preview": "#!/usr/bin/env bash\n\n## This is sample code that will generally show you how to launch a spot instance on aws and lever"
},
{
"path": "utils/setup-xorg.sh",
"chars": 988,
"preview": "#!/usr/bin/env bash\n\nset -e\n\n# Script to install basic X-Windows on a headless instance (e.g. in EC2)\n\n# Script shall ru"
},
{
"path": "utils/start-local-browser.sh",
"chars": 2010,
"preview": "#!/usr/bin/env bash\n\nsource $DR_DIR/bin/scripts_wrapper.sh\n\nusage() {\n echo \"Usage: $0 [-t topic] [-w width] [-h height"
},
{
"path": "utils/start-xorg.sh",
"chars": 1478,
"preview": "#!/usr/bin/env bash\n\nset -e\n\n# Script shall run as user, not root. Sudo will be used when needed.\nif [[ $EUID == 0 ]]; t"
},
{
"path": "utils/timed-stop.sh",
"chars": 201,
"preview": "#!/usr/bin/env bash\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" >/dev/null 2>&1 && pwd)\"\nDR_DIR=\"$(dirname $SCRIP"
},
{
"path": "utils/upload-rotate.sh",
"chars": 4587,
"preview": "#!/usr/bin/env bash\n# This script uploads the latest DeepRacer model and activates the necessary environment.\n# It proce"
}
]
About this extraction
This file contains the full source code of the aws-deepracer-community/deepracer-for-cloud GitHub repository, extracted and formatted as plain text: 101 files (363.0 KB), approximately 100.3k tokens, and a symbol index with 50 extracted functions, classes, methods, constants, and types.