[
  {
    "path": ".github/workflows/syntax-check.yml",
    "content": "name: Syntax Check\n\non:\n  pull_request:\n    branches:\n      - master\n      - dev\n\njobs:\n  bash-syntax:\n    name: Bash Syntax Check\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n\n      - name: Check bash scripts in bin/\n        run: |\n          find bin/ -name '*.sh' | sort | while read -r f; do\n            bash -n \"$f\" && echo \"OK: $f\" || { echo \"FAIL: $f\"; exit 1; }\n          done\n\n      - name: Check bash scripts in scripts/\n        run: |\n          find scripts/ -name '*.sh' | sort | while read -r f; do\n            bash -n \"$f\" && echo \"OK: $f\" || { echo \"FAIL: $f\"; exit 1; }\n          done\n\n  python-syntax:\n    name: Python Syntax Check\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n\n      - uses: actions/setup-python@v5\n        with:\n          python-version: '3.x'\n\n      - name: Check Python scripts in scripts/\n        run: |\n          find scripts/ -name '*.py' | sort | while read -r f; do\n            python3 -m py_compile \"$f\" && echo \"OK: $f\" || { echo \"FAIL: $f\"; exit 1; }\n          done\n\n  docker-compose-syntax:\n    name: Docker Compose Syntax Check\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n\n      - name: Check docker-compose YAML files\n        run: |\n          find docker/ -name 'docker-compose*.yml' | sort | while read -r f; do\n            python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print(\"OK: \" + sys.argv[1])' \"$f\" \\\n              || { echo \"FAIL: $f\"; exit 1; }\n          done\n"
  },
  {
    "path": ".gitignore",
    "content": ".vscode/\n.venv/\ncustom_files/\nlogs/\ndocker/volumes/\nrecording/\nrecording\n/*.env\n/*.bak\n/*.tar\n/*.json\nDONE\ndata/\ntmp/\nautorun.s3url\nnohup.out\n/*.sh\n_\nexperiments/\n\n\n# Python\n__pycache__/\n*.py[cod]\n*.pyo\n*.pyd\n"
  },
  {
    "path": "LICENSE",
    "content": "Copyright 2019-2023 AWS DeepRacer Community. All Rights Reserved.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this\nsoftware and associated documentation files (the \"Software\"), to deal in the Software\nwithout restriction, including without limitation the rights to use, copy, modify,\nmerge, publish, distribute, sublicense, and/or sell copies of the Software, and to\npermit persons to whom the Software is furnished to do so.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\nINCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\nPARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\nHOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\nOF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\nSOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# DeepRacer-For-Cloud\nProvides a quick and easy way to get up and running with a DeepRacer training environment using a cloud virtual machine or a local computer, such as [AWS EC2 Accelerated Computing instances](https://aws.amazon.com/ec2/instance-types/?nc1=h_ls#Accelerated_Computing) or the Azure [N-Series Virtual Machines](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu).\n\nDRfC runs on Ubuntu 22.04 and 24.04. GPU acceleration requires an NVIDIA GPU, preferably with more than 8GB of VRAM. ARM64/Graviton instances (AWS Graviton, Apple Silicon) are also supported for CPU-only training.\n\n**Experimental:** macOS is supported in CPU-only mode via [Colima](https://github.com/abiosoft/colima), on both AWS Mac EC2 instances and local Mac hardware (Intel and Apple Silicon). See [docs/mac.md](docs/mac.md) for setup instructions.\n\n## Introduction\n\nDeepRacer-For-Cloud (DRfC) started as an extension of the work done by Alex (https://github.com/alexschultz/deepracer-for-dummies), which is in turn a wrapper around the amazing work done by Chris (https://github.com/crr0004/deepracer). With the introduction of the second generation DeepRacer Console the repository has been split up. This repository contains the scripts needed to *run* the training, but depends on Docker Hub to provide pre-built docker images. All the under-the-hood building capabilities are in the [Deepracer Simapp](https://github.com/aws-deepracer-community/deepracer-simapp) repository.\n\nAs of December 2025 the original DeepRacer service in the AWS console is no longer available, and has been replaced by [DeepRacer-on-AWS](https://aws.amazon.com/solutions/implementations/deepracer-on-aws/) which you can install in your own AWS environment. 
DeepRacer-For-Cloud is independent of any AWS service, so it is not directly impacted by this change.\n\n## Main Features\n\nDRfC supports a wide set of features to ensure that you can focus on creating the best model:\n* User-friendly\n\t* Based on the continuously updated community [Robomaker](https://github.com/aws-deepracer-community/deepracer-simapp) container, supporting a wide range of CPU and GPU setups.\n\t* Wide set of scripts (`dr-*`) enables effortless training.\n\t* Detection of your AWS DeepRacer Console models; allows upload of a locally trained model to any of them.\n* Modes\n\t* Time Trial\n\t* Object Avoidance\n\t* Head-to-Bot\n* Training\n\t* Multiple Robomaker instances per Sagemaker (N:1) to improve training progress.\n\t* Multiple training sessions in parallel - each being (N:1) if hardware supports it - to test out multiple things at once.\n\t* Connect multiple nodes together (Swarm-mode only) to combine the powers of multiple computers/instances.\n* Evaluation\n\t* Evaluate independently from training.\n\t* Save evaluation run to MP4 file in S3.\n* Logging\n\t* Training metrics and trace files are stored to S3.\n\t* Optional integration with AWS CloudWatch.\n\t* Optional exposure of Robomaker internal log-files.\n* Technology\n\t* Supports both Docker Swarm (used for connecting multiple nodes together) and Docker Compose.\n\n## Tech Stack\n\nDRfC is built on top of the [AWS DeepRacer Simapp](https://github.com/aws-deepracer-community/deepracer-simapp) — a single Docker image used for three purposes:\n\n* **Robomaker** — one or more containers providing robotics simulation via ROS and Gazebo\n* **Sagemaker** — container running the model training job\n* **RL Coach** — container that bootstraps the Sagemaker container using the Sagemaker SDK and Sagemaker Local\n\n### Core Technologies\n\n| Component | Version |\n|-----------|---------|\n| Ubuntu | 24.04 |\n| Python | 3.12 |\n| TensorFlow | 2.20 |\n| CUDA | 12.6 (GPU only) |\n| Redis | 8.6.1 |\n| ROS 
| 2 Jazzy |\n| Gazebo | Harmonic |\n\n## Recommended AWS Instance Types\n\n| Use case | Instance type | Notes |\n|----------|--------------|-------|\n| GPU | `g4dn.2xlarge` | NVIDIA T4, fastest training |\n| Intel CPU | `c7i.2xlarge` | Latest Intel CPU generation, cost-effective CPU training |\n| ARM CPU (Graviton) | `c8g.2xlarge` | AWS Graviton4, best price/performance for CPU |\n\n### Images\n\nPre-built images are available on [Docker Hub](https://hub.docker.com/repository/docker/awsdeepracercommunity/deepracer-simapp) as `awsdeepracercommunity/deepracer-simapp:<VERSION>-cpu` (CPU) and `awsdeepracercommunity/deepracer-simapp:<VERSION>-gpu` (CUDA GPU). Both support OpenGL acceleration.\n\nDuring installation DRfC will automatically pull the latest image based on whether you have a GPU or CPU installation.\n\n## Documentation\n\nFull documentation can be found on the [Deepracer-for-Cloud GitHub Pages](https://aws-deepracer-community.github.io/deepracer-for-cloud).\n\nFor importing and managing models via the community [DeepRacer on AWS (DRoA)](https://aws.amazon.com/solutions/implementations/deepracer-on-aws/) console, see the [DRoA integration guide](docs/droa.md).\n\n## Support\n\n* For general support it is suggested to join the [AWS DeepRacer Community](https://deepracing.io/). The Community Slack has a channel #dr-training-local where the community provides active support.\n* Create a GitHub issue if you find an actual code issue, or where updates to documentation would be required.\n"
  },
  {
    "path": "bin/activate.sh",
    "content": "#!/usr/bin/env bash\n\n# Portable readlink -f: BSD readlink (macOS) does not support -f.\n_realpath() {\n  if command -v realpath >/dev/null 2>&1; then\n    realpath \"$1\"\n  elif command -v grealpath >/dev/null 2>&1; then\n    grealpath \"$1\"\n  else\n    readlink -f \"$1\"\n  fi\n}\n\n# Portable version comparison: sort -V is GNU-only; macOS ships BSD sort.\nverlte() {\n  local v1 v2\n  v1=\"$1\" v2=\"$2\"\n  # Split into numeric fields and compare segment by segment.\n  IFS='.' read -r -a a1 <<< \"$v1\"\n  IFS='.' read -r -a a2 <<< \"$v2\"\n  local i\n  for (( i=0; i<${#a1[@]} || i<${#a2[@]}; i++ )); do\n    local n1=${a1[$i]:-0} n2=${a2[$i]:-0}\n    if (( n1 < n2 )); then return 0; fi\n    if (( n1 > n2 )); then return 1; fi\n  done\n  return 0\n}\n\n# Find a free /24 subnet in 192.168.200-254/24 that doesn't conflict with\n# existing Docker networks or host routes.\nfunction find_free_subnet() {\n  local USED NW_IDS\n  NW_IDS=$(docker network ls -q 2>/dev/null)\n  USED=$(\n    { [[ -n \"$NW_IDS\" ]] && docker network inspect $NW_IDS \\\n          --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null; \\\n      if ip route show 2>/dev/null | grep -q .; then\n          ip route show 2>/dev/null | awk '{print $1}' | grep -E '^[0-9]+\\.'\n      else\n          # macOS: parse inet routes from netstat\n          netstat -rn -f inet 2>/dev/null | awk 'NR>4 && $1~/^[0-9]/{print $1}'\n      fi; } | sort -u\n  )\n  for j in $(seq 200 254); do\n    local CANDIDATE=\"192.168.${j}.0/24\"\n    if ! 
echo \"$USED\" | grep -qF \"$CANDIDATE\"; then\n      echo \"$CANDIDATE\"\n      return 0\n    fi\n  done\n  return 1\n}\n\n# Create the sagemaker-local Docker network with the required compose labels.\nfunction _create_sagemaker_network() {\n  local NW_SUBNET=$(find_free_subnet)\n  local SWARM_FLAGS\n  [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]] && SWARM_FLAGS=\"-d overlay --attachable --scope swarm\"\n  docker network create \"$SAGEMAKER_NW\" $SWARM_FLAGS \\\n    ${NW_SUBNET:+--subnet=$NW_SUBNET} \\\n    --label com.docker.compose.network=sagemaker-local \\\n    --label com.docker.compose.project=sagemaker-local >/dev/null 2>&1\n}\n\nfunction dr-update-env {\n\n  local _saved_experiment=\"${DR_EXPERIMENT_NAME:-}\"\n\n  if [[ -f \"$DIR/system.env\" ]]; then\n    LINES=$(grep -v '^#' $DIR/system.env)\n    for l in $LINES; do\n      env_var=$(echo $l | cut -f1 -d\\=)\n      env_val=$(echo $l | cut -f2 -d\\=)\n      eval \"export $env_var=$env_val\"\n    done\n  else\n    echo \"File system.env does not exist.\"\n    return 1\n  fi\n\n  # Restore DR_EXPERIMENT_NAME if it was pre-set (e.g. via -e flag) so it takes\n  # precedence over any value in system.env.\n  if [[ -n \"$_saved_experiment\" ]]; then\n    export DR_EXPERIMENT_NAME=\"$_saved_experiment\"\n  fi\n\n  if [[ ! -z $DR_EXPERIMENT_NAME ]]; then\n    if [[ ! -d \"$DIR/experiments\" ]]; then\n      echo \"Experiments directory $DIR/experiments does not exist.\"\n      return 1\n    fi\n    if [[ ! 
-d \"$DIR/experiments/$DR_EXPERIMENT_NAME\" ]]; then\n      echo \"Experiment directory $DIR/experiments/$DR_EXPERIMENT_NAME does not exist.\"\n      return 1\n    fi\n    export DR_CONFIG=\"$DIR/experiments/$DR_EXPERIMENT_NAME/run.env\"\n  fi\n\n  if [[ -f \"$DR_CONFIG\" ]]; then\n    LINES=$(grep -v '^#' $DR_CONFIG)\n    for l in $LINES; do\n      env_var=$(echo $l | cut -f1 -d\\=)\n      env_val=$(echo $l | cut -f2 -d\\=)\n      eval \"export $env_var=$env_val\"\n    done\n  else\n    echo \"File ${DR_CONFIG} does not exist.\"\n    return 1\n  fi\n\n  if [[ -z \"${DR_RUN_ID}\" ]]; then\n    export DR_RUN_ID=0\n  fi\n\n  if [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n    export DR_ROBOMAKER_TRAIN_PORT=$(expr 8080 + $DR_RUN_ID)\n    export DR_ROBOMAKER_EVAL_PORT=$(expr 8180 + $DR_RUN_ID)\n    export DR_ROBOMAKER_GUI_PORT=$(expr 5900 + $DR_RUN_ID)\n  else\n    export DR_ROBOMAKER_TRAIN_PORT=\"8080-8089\"\n    export DR_ROBOMAKER_EVAL_PORT=\"8080-8089\"\n    export DR_ROBOMAKER_GUI_PORT=\"5901-5920\"\n  fi\n\n  # Setting the default region to ensure that things work also in the\n  # non default regions.\n  export AWS_DEFAULT_REGION=${DR_AWS_APP_REGION}\n\n}\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]:-}\")\" >/dev/null 2>&1 && pwd)\"\nDIR=\"$(dirname $SCRIPT_DIR)\"\nexport DR_DIR=$DIR\n\n# Parse arguments: -e <experiment-name> or a positional config file path\n_DR_OPT_EXPERIMENT=\"\"\nOPTIND=1\nwhile getopts \":e:\" _opt; do\n  case $_opt in\n    e) _DR_OPT_EXPERIMENT=\"$OPTARG\" ;;\n    \\?) 
break ;;\n  esac\ndone\nshift $(( OPTIND - 1 ))\nunset _opt OPTIND\n\nif [[ -n \"$_DR_OPT_EXPERIMENT\" ]]; then\n  export DR_EXPERIMENT_NAME=\"$_DR_OPT_EXPERIMENT\"\nfi\nunset _DR_OPT_EXPERIMENT\n\nEXPERIMENT_FLAG=\"$( grep DR_EXPERIMENT_NAME $DIR/system.env | grep -v \\#)\"\n\nif [[ -f \"$1\" ]]; then\n  export DR_CONFIG=$(_realpath \"$1\")\n  dr-update-env || return 1\nelif [[ -n \"${DR_EXPERIMENT_NAME:-}\" ]] || [[ -n \"$EXPERIMENT_FLAG\" ]]; then\n  dr-update-env || return 1\nelif [[ -f \"$DIR/run.env\" ]]; then\n  export DR_CONFIG=\"$DIR/run.env\"\n  dr-update-env || return 1\nelse\n  echo \"No configuration file.\"\n  return 1\nfi\n\n## Activate Python virtual environment\nif [[ -f \"${DR_DIR}/.venv/bin/activate\" ]]; then\n  source \"${DR_DIR}/.venv/bin/activate\"\nelse\n  echo \"WARNING: Python venv not found at ${DR_DIR}/.venv. Run bin/prepare.sh to create it.\"\nfi\n\n# Check if Docker runs -- if not, then start it.\nif [[ \"$(type service 2>/dev/null)\" ]]; then\n  service docker status >/dev/null || sudo service docker start\nfi\n\n## Check if WSL2\nif [[ -f /proc/version ]] && grep -qi Microsoft /proc/version && grep -q \"WSL2\" /proc/version; then\n    IS_WSL2=\"yes\"\nfi\n\n# Check if we will use Docker Swarm or Docker Compose\n# If not defined then use Swarm\nif [[ -z \"${DR_DOCKER_STYLE}\" ]]; then\n  export DR_DOCKER_STYLE=\"swarm\"\nfi\n\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n  export DR_DOCKER_FILE_SEP=\"-c\"\n  SWARM_NODE=$(docker node inspect self | jq .[0].ID -r)\n  SWARM_NODE_UPDATE=$(docker node update --label-add Sagemaker=true $SWARM_NODE)\nelse\n  export DR_DOCKER_FILE_SEP=\"-f\"\nfi\n\n# Check if sagemaker-local network has required compose label; recreate if missing\nSAGEMAKER_NW='sagemaker-local'\n\nif ! docker network ls --format '{{.Name}}' | grep -q \"^${SAGEMAKER_NW}$\"; then\n  echo \"Network $SAGEMAKER_NW does not exist. 
Creating.\"\n  _create_sagemaker_network\nelse\n  NW_LABEL_NETWORK=$(docker network inspect \"$SAGEMAKER_NW\" --format '{{index .Labels \"com.docker.compose.network\"}}')\n  if [[ \"$NW_LABEL_NETWORK\" != \"sagemaker-local\" ]]; then\n    echo \"Network $SAGEMAKER_NW is missing required label.\"\n    NW_CONTAINERS=$(docker network inspect \"$SAGEMAKER_NW\" --format '{{len .Containers}}')\n    if [[ \"${NW_CONTAINERS:-0}\" -gt 0 ]]; then\n      dr-stop-all\n    fi\n    docker network rm \"$SAGEMAKER_NW\" >/dev/null 2>&1\n    _create_sagemaker_network\n    echo \"Network $SAGEMAKER_NW recreated with required labels.\"\n  fi\nfi\n\n# Check if CUDA_VISIBLE_DEVICES is configured.\nif [[ -n \"${CUDA_VISIBLE_DEVICES}\" ]]; then\n  echo \"WARNING: You have CUDA_VISIBLE_DEVICES defined. This will no longer work as\"\n  echo \"         expected. To control GPU assignment use DR_ROBOMAKER_CUDA_DEVICES\"\n  echo \"         and DR_SAGEMAKER_CUDA_DEVICES and rlcoach v5.0.1 or later.\"\nfi\n\n# Check if DR_MINIO_IMAGE is configured.\nif [ \"${DR_CLOUD,,}\" == \"local\" ] && [ -z \"${DR_MINIO_IMAGE}\" ]; then\n  echo \"WARNING: You have not configured DR_MINIO_IMAGE in system.env.\"\n  echo \"         System will default to tag RELEASE.2022-10-24T18-35-07Z\"\n  export DR_MINIO_IMAGE=\"RELEASE.2022-10-24T18-35-07Z\"\nfi\n\n# Prepare the docker compose files depending on parameters\nif [[ \"${DR_CLOUD,,}\" == \"azure\" ]]; then\n  export DR_LOCAL_S3_ENDPOINT_URL=\"http://localhost:9000\"\n  export DR_MINIO_URL=\"http://minio:9000\"\n  DR_LOCAL_PROFILE_ENDPOINT_URL=\"--profile $DR_LOCAL_S3_PROFILE --endpoint-url $DR_LOCAL_S3_ENDPOINT_URL\"\n  DR_TRAIN_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml\"\n  DR_EVAL_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml\"\n  DR_MINIO_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP 
$DIR/docker/docker-compose-local.yml\"\nelif [[ \"${DR_CLOUD,,}\" == \"local\" ]]; then\n  export DR_LOCAL_S3_ENDPOINT_URL=\"http://localhost:9000\"\n  export DR_MINIO_URL=\"http://minio:9000\"\n  DR_LOCAL_PROFILE_ENDPOINT_URL=\"--profile $DR_LOCAL_S3_PROFILE --endpoint-url $DR_LOCAL_S3_ENDPOINT_URL\"\n  DR_TRAIN_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml\"\n  DR_EVAL_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml\"\n  DR_MINIO_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local.yml\"\nelif [[ \"${DR_CLOUD,,}\" == \"remote\" ]]; then\n  export DR_LOCAL_S3_ENDPOINT_URL=\"$DR_REMOTE_MINIO_URL\"\n  export DR_MINIO_URL=\"$DR_REMOTE_MINIO_URL\"\n  DR_LOCAL_PROFILE_ENDPOINT_URL=\"--profile $DR_LOCAL_S3_PROFILE --endpoint-url $DR_LOCAL_S3_ENDPOINT_URL\"\n  DR_TRAIN_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml\"\n  DR_EVAL_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-endpoint.yml\"\n  DR_MINIO_COMPOSE_FILE=\"\"\nelif [[ \"${DR_CLOUD,,}\" == \"aws\" ]]; then\n  DR_LOCAL_PROFILE_ENDPOINT_URL=\"\"\n  DR_TRAIN_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-aws.yml\"\n  DR_EVAL_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval.yml $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-aws.yml\"\nelse\n  DR_LOCAL_PROFILE_ENDPOINT_URL=\"\"\n  DR_TRAIN_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training.yml\"\n  DR_EVAL_COMPOSE_FILE=\"$DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval.yml\"\nfi\n\n# Add host X support for Linux and WSL2\nif [[ \"${DR_HOST_X,,}\" == \"true\" ]]; then\n  if [[ \"$IS_WSL2\" == \"yes\" ]]; then\n  \n  
  # Check if package x11-xserver-utils is installed\n    if ! command -v xset &> /dev/null; then\n      echo \"WARNING: Package x11-xserver-utils is not installed. Please install it to enable X11 support.\"\n    fi\n  \n    if [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" && \"${DR_USE_GUI,,}\" == \"true\" ]]; then\n      echo \"WARNING: Cannot use GUI in Swarm mode. Please switch to Compose mode.\"\n    fi\n\n    DR_TRAIN_COMPOSE_FILE=\"$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local-xorg-wsl.yml\"\n    DR_EVAL_COMPOSE_FILE=\"$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local-xorg-wsl.yml\"\n  else\n    DR_TRAIN_COMPOSE_FILE=\"$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local-xorg.yml\"\n    DR_EVAL_COMPOSE_FILE=\"$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-local-xorg.yml\"\n  fi\nfi\n\n# Prevent docker swarm services from restarting\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n  DR_TRAIN_COMPOSE_FILE=\"$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-training-swarm.yml\"\n  DR_EVAL_COMPOSE_FILE=\"$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-eval-swarm.yml\"\nfi\n\n# Enable logs in CloudWatch\nif [[ \"${DR_CLOUD_WATCH_ENABLE,,}\" == \"true\" ]]; then\n  DR_TRAIN_COMPOSE_FILE=\"$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-cwlog.yml\"\n  DR_EVAL_COMPOSE_FILE=\"$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-cwlog.yml\"\nfi\n\n# Enable local simapp mount\nif [[ -d \"${DR_ROBOMAKER_MOUNT_SIMAPP_DIR}\" ]]; then\n  DR_TRAIN_COMPOSE_FILE=\"$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-simapp.yml\"\n  DR_EVAL_COMPOSE_FILE=\"$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-simapp.yml\"\nfi\n\n# Enable local scripts mount\nif [[ -d \"${DR_ROBOMAKER_MOUNT_SCRIPTS_DIR}\" ]]; then\n  DR_TRAIN_COMPOSE_FILE=\"$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP 
$DIR/docker/docker-compose-robomaker-scripts.yml\"\n  DR_EVAL_COMPOSE_FILE=\"$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-robomaker-scripts.yml\"\nfi\n\n## Check if we have an AWS IAM assumed role, or if we need to set specific credentials.\n## On macOS/Darwin, IMDS is not reachable from inside the Colima VM, so always use\n## explicit keys from the configured AWS profile.\nif [[ \"$(uname -s)\" != \"Darwin\" ]] && [ \"${DR_CLOUD,,}\" == \"aws\" ] && [ $(aws --output json sts get-caller-identity 2>/dev/null | jq '.Arn' | awk /assumed-role/ | wc -l) -gt 0 ]; then\n  export DR_LOCAL_S3_AUTH_MODE=\"role\"\nelse\n  export DR_LOCAL_ACCESS_KEY_ID=$(aws --profile $DR_LOCAL_S3_PROFILE configure get aws_access_key_id | xargs)\n  export DR_LOCAL_SECRET_ACCESS_KEY=$(aws --profile $DR_LOCAL_S3_PROFILE configure get aws_secret_access_key | xargs)\n  if [[ -z \"${DR_LOCAL_ACCESS_KEY_ID}\" || -z \"${DR_LOCAL_SECRET_ACCESS_KEY}\" ]]; then\n    echo \"ERROR: AWS credentials not found in profile '${DR_LOCAL_S3_PROFILE}'.\"\n    echo \"       Run: aws configure --profile ${DR_LOCAL_S3_PROFILE}\"\n    return 1\n  fi\n  DR_TRAIN_COMPOSE_FILE=\"$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-keys.yml\"\n  DR_EVAL_COMPOSE_FILE=\"$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DIR/docker/docker-compose-keys.yml\"\n  export DR_UPLOAD_PROFILE=\"--profile $DR_UPLOAD_S3_PROFILE\"\n  export DR_LOCAL_S3_AUTH_MODE=\"profile\"\nfi\n\nexport DR_TRAIN_COMPOSE_FILE\nexport DR_EVAL_COMPOSE_FILE\nexport DR_LOCAL_PROFILE_ENDPOINT_URL\n\nif [[ -n \"${DR_MINIO_COMPOSE_FILE}\" ]]; then\n  export MINIO_UID=$(id -u)\n  export MINIO_USERNAME=$(id -u -n)\n  export MINIO_GID=$(id -g)\n  export MINIO_GROUPNAME=$(id -g -n)\n  if [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n\n    # DR_DOCKER_MAJOR_VERSION is exported further down in this script; derive it\n    # here as well so the detach check works on first activation.\n    if [ -z \"$DR_DOCKER_MAJOR_VERSION\" ]; then\n      DR_DOCKER_MAJOR_VERSION=$(docker --version | grep -oE '[0-9]+\\.[0-9]+\\.[0-9]+' | head -1 | cut -d. -f1)\n    fi\n\n    if [ \"$DR_DOCKER_MAJOR_VERSION\" -gt 24 ]; then\n      DETACH_FLAG=\"--detach=true\"\n    fi\n\n    docker stack deploy $DR_MINIO_COMPOSE_FILE $DETACH_FLAG s3\n  else\n    docker compose 
$DR_MINIO_COMPOSE_FILE -p s3 up -d\n  fi\n\nfi\n\n## Version check\nif [[ -z \"$DR_SIMAPP_SOURCE\" || -z \"$DR_SIMAPP_VERSION\" ]]; then\n  DEFAULT_SIMAPP_VERSION=$(jq -r '.containers.simapp | select (.!=null)' $DIR/defaults/dependencies.json)\n  echo \"ERROR: Variable DR_SIMAPP_SOURCE or DR_SIMAPP_VERSION not defined.\"\n  echo \"\"\n  echo \"As of version 5.3 the variables DR_SIMAPP_SOURCE and DR_SIMAPP_VERSION are required in system.env.\"\n  echo \"To continue to use the separate Sagemaker, Robomaker and RL Coach images, run 'git checkout legacy'.\"\n  echo \"\"\n  echo \"Please add the following lines to your system.env file:\"\n  echo \"DR_SIMAPP_SOURCE=awsdeepracercommunity/deepracer-simapp\"\n  echo \"DR_SIMAPP_VERSION=${DEFAULT_SIMAPP_VERSION}-gpu\"\n  return\nfi\n\nDEPENDENCY_VERSION=$(jq -r '.master_version  | select (.!=null)' $DIR/defaults/dependencies.json)\n\nSIMAPP_VER=$(docker inspect ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION} 2>/dev/null | jq -r .[].Config.Labels.version)\nif [ -z \"$SIMAPP_VER\" ]; then\n  # Image not pulled -- fall back to checking the configured version tag\n  SIMAPP_VER=$(echo ${DR_SIMAPP_VERSION} | grep -oE '^[0-9]+\\.[0-9]+(\\.[0-9]+)?')\nfi\nif [ -n \"$SIMAPP_VER\" ] && ! verlte $DEPENDENCY_VERSION $SIMAPP_VER; then\n  echo \"WARNING: Incompatible version of Deepracer Simapp. Expected >=$DEPENDENCY_VERSION. Got $SIMAPP_VER.\"\nfi\n\n# Get Docker version\nDOCKER_VERSION=$(docker --version | grep -oE '[0-9]+\\.[0-9]+\\.[0-9]+' | head -1)\nDR_DOCKER_MAJOR_VERSION=$(echo $DOCKER_VERSION | cut -d. 
-f1)\nexport DR_DOCKER_MAJOR_VERSION\n\n## Create a dr-local-aws command\nalias dr-local-aws='aws $DR_LOCAL_PROFILE_ENDPOINT_URL'\n\nsource $SCRIPT_DIR/scripts_wrapper.sh\nsource $SCRIPT_DIR/module/summary.sh\nsource $SCRIPT_DIR/module/droa.sh\n\nfunction dr-update {\n  dr-update-env\n}\n\nfunction dr-reload {\n  source $DIR/bin/activate.sh $DR_CONFIG\n}\n\n## Show summary after activation if not in quiet mode and if in interactive shell\n[[ $- == *i* && \"${DR_QUIET_ACTIVATE,,}\" != \"true\" ]] && dr-summary\n"
  },
  {
    "path": "bin/autorun.sh",
    "content": "#!/usr/bin/env bash\n\n## this is the default autorun script\n## it runs automatically after init.sh completes.\n## this script downloads your configured run.env, system.env and any custom container requests\n\nINSTALL_DIR_TEMP=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")/..\" >/dev/null 2>&1 && pwd)\"\n\n## retrieve the s3_location name you sent the instance in user data launch\n## assumed to be the first line of the file\nS3_LOCATION=$(awk 'NR==1 {print; exit}' $INSTALL_DIR_TEMP/autorun.s3url)\n\nsource $INSTALL_DIR_TEMP/bin/activate.sh\n\n## get the updated run.env and system.env files and any others you stashed in s3\naws s3 sync s3://$S3_LOCATION $INSTALL_DIR_TEMP\n\n## get the right docker containers, if needed\nSYSENV=\"$INSTALL_DIR_TEMP/system.env\"\nSAGEMAKER_IMAGE=$(cat $SYSENV | grep DR_SAGEMAKER_IMAGE | sed 's/.*=//')\nROBOMAKER_IMAGE=$(cat $SYSENV | grep DR_ROBOMAKER_IMAGE | sed 's/.*=//')\n\ndocker pull awsdeepracercommunity/deepracer-sagemaker:$SAGEMAKER_IMAGE\ndocker pull awsdeepracercommunity/deepracer-robomaker:$ROBOMAKER_IMAGE\n\ndr-reload\n\ndate | tee $INSTALL_DIR_TEMP/DONE-AUTORUN\n\n## start training\ncd $INSTALL_DIR_TEMP/scripts/training\n./start.sh\n"
  },
  {
    "path": "bin/detect.sh",
    "content": "#!/usr/bin/env bash\n\n## What am I?\nif [[ -f /var/run/cloud-init/instance-data.json ]]; then\n    # We have a cloud-init environment (Azure or AWS).\n    CLOUD_NAME=$(jq -r '.v1.\"cloud-name\"' /var/run/cloud-init/instance-data.json)\n    if [[ \"${CLOUD_NAME}\" == \"azure\" ]]; then\n        export CLOUD_NAME\n        export CLOUD_INSTANCETYPE=$(jq -r '.ds.\"meta_data\".imds.compute.\"vmSize\"' /var/run/cloud-init/instance-data.json)\n    elif [[ \"${CLOUD_NAME}\" == \"aws\" ]]; then\n        export CLOUD_NAME\n        export CLOUD_INSTANCETYPE=$(jq -r '.ds.\"meta-data\".\"instance-type\"' /var/run/cloud-init/instance-data.json)\n    else\n        export CLOUD_NAME=local\n    fi\nelse\n    export CLOUD_NAME=local\nfi\n"
  },
  {
    "path": "bin/init.sh",
    "content": "#!/usr/bin/env bash\n\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n    echo \"Requested to stop.\"\n    exit 1\n}\n\n# Portable sed -i: BSD sed (macOS) requires an explicit empty-string backup suffix.\nif sed --version 2>/dev/null | grep -q GNU; then\n    sedi() { sed -i \"$@\"; }\nelse\n    sedi() { sed -i '' \"$@\"; }\nfi\n\n# Find a free /24 subnet in 192.168.200-254/24 that doesn't conflict with\n# existing Docker networks or host routes.\nfunction find_free_subnet() {\n    local USED NW_IDS\n    NW_IDS=$(docker network ls -q 2>/dev/null)\n    USED=$(\n        { [[ -n \"$NW_IDS\" ]] && docker network inspect $NW_IDS \\\n              --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null; \\\n          if ip route show 2>/dev/null | grep -q .; then\n              ip route show 2>/dev/null | awk '{print $1}' | grep -E '^[0-9]+\\.'\n          else\n              # macOS: parse inet routes from netstat\n              netstat -rn -f inet 2>/dev/null | awk 'NR>4 && $1~/^[0-9]/{print $1}'\n          fi; } | sort -u\n    )\n    for j in $(seq 200 254); do\n        local CANDIDATE=\"192.168.${j}.0/24\"\n        if ! echo \"$USED\" | grep -qF \"$CANDIDATE\"; then\n            echo \"$CANDIDATE\"\n            return 0\n        fi\n    done\n    return 1\n}\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]:-$0}\")\" >/dev/null 2>&1 && pwd)\"\nINSTALL_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]:-$0}\")/..\" >/dev/null 2>&1 && pwd)\"\n\nif [[ \"$INSTALL_DIR\" == *\\ * ]]; then\n    echo \"Deepracer-for-Cloud cannot be installed in a path with spaces. 
Exiting.\"\n    exit 1\nfi\n\nOPT_ARCH=\"gpu\"\nOPT_CLOUD=\"\"\nOPT_STYLE=\"swarm\"\n\nwhile getopts \":m:c:a:s:\" opt; do\n    case $opt in\n    a)\n        OPT_ARCH=\"$OPTARG\"\n        ;;\n    m)\n        OPT_MOUNT=\"$OPTARG\"\n        ;;\n    c)\n        OPT_CLOUD=\"$OPTARG\"\n        ;;\n    s)\n        OPT_STYLE=\"$OPTARG\"\n        ;;\n    \\?)\n        echo \"Invalid option -$OPTARG\" >&2\n        exit 1\n        ;;\n    esac\ndone\n\n# Check if cloud type is set, if not try to detect it. If detection fails, default to local.\nif [[ -z \"$OPT_CLOUD\" ]]; then\n    source $SCRIPT_DIR/detect.sh\n    OPT_CLOUD=$CLOUD_NAME\n    echo \"Detected cloud type to be $CLOUD_NAME\"\nfi\n\n# Check GPU\nif [ \"$OPT_ARCH\" = \"gpu\" ]; then\n    if GPUS=\"$(docker run --rm --gpus all --pull=missing \\\n        nvcr.io/nvidia/cuda:12.6.3-base-ubuntu24.04 \\\n        bash -lc 'nvidia-smi -L | wc -l')\" ; then\n\n        if [ \"${GPUS:-0}\" -ge 1 ]; then\n            echo \"Detected ${GPUS} GPU(s) inside docker.\"\n        else\n            echo \"No GPU detected in docker. Using CPU\"\n            OPT_ARCH=\"cpu\"\n        fi\n    else\n        echo \"Failed to run GPU test container. 
Using CPU\"\n        OPT_ARCH=\"cpu\"\n    fi\nfi\n\ncd $INSTALL_DIR\n\n# create directory structure for docker volumes\nmkdir -p $INSTALL_DIR/data $INSTALL_DIR/data/minio $INSTALL_DIR/data/minio/bucket\nmkdir -p $INSTALL_DIR/data/logs $INSTALL_DIR/data/analysis $INSTALL_DIR/data/scripts $INSTALL_DIR/tmp\nsudo mkdir -p /tmp/sagemaker\nsudo chmod -R g+w /tmp/sagemaker\n\n# create symlink to current user's home .aws directory\n# NOTE: AWS cli must be installed for this to work\n# https://docs.aws.amazon.com/cli/latest/userguide/install-linux-al2017.html\nmkdir -p $(eval echo \"~${USER}\")/.aws $INSTALL_DIR/docker/volumes/\nln -sf $(eval echo \"~${USER}\")/.aws $INSTALL_DIR/docker/volumes/\n\n# copy rewardfunctions\nmkdir -p $INSTALL_DIR/custom_files\ncp $INSTALL_DIR/defaults/hyperparameters.json $INSTALL_DIR/custom_files/\ncp $INSTALL_DIR/defaults/model_metadata.json $INSTALL_DIR/custom_files/\ncp $INSTALL_DIR/defaults/reward_function.py $INSTALL_DIR/custom_files/\n\ncp $INSTALL_DIR/defaults/template-system.env $INSTALL_DIR/system.env\ncp $INSTALL_DIR/defaults/template-run.env $INSTALL_DIR/run.env\nif [[ \"${OPT_CLOUD}\" == \"aws\" ]]; then\n    IMDS_TOKEN=$(curl -s -X PUT \"http://169.254.169.254/latest/api/token\" -H \"X-aws-ec2-metadata-token-ttl-seconds: 21600\")\n    AWS_EC2_AVAIL_ZONE=$(curl -s -H \"X-aws-ec2-metadata-token: $IMDS_TOKEN\" http://169.254.169.254/latest/meta-data/placement/availability-zone)\n    AWS_REGION=\"$(echo $AWS_EC2_AVAIL_ZONE | sed 's/[a-z]$//')\"\n    sedi \"s/<AWS_DR_BUCKET>/not-defined/g\" $INSTALL_DIR/system.env\n    sedi \"s/<LOCAL_PROFILE>/default/g\" $INSTALL_DIR/system.env\nelif [[ \"${OPT_CLOUD}\" == \"remote\" ]]; then\n    AWS_REGION=\"us-east-1\"\n    sedi \"s/<LOCAL_PROFILE>/minio/g\" $INSTALL_DIR/system.env\n    sedi \"s/<AWS_DR_BUCKET>/not-defined/g\" $INSTALL_DIR/system.env\n    echo \"Please run 'aws configure --profile minio' to set the credentials\"\n    echo \"Please define DR_REMOTE_MINIO_URL in system.env to point 
to remote minio instance.\"\nelse\n    AWS_REGION=\"us-east-1\"\n    MINIO_PROFILE=\"minio\"\n    sedi \"s/<LOCAL_PROFILE>/$MINIO_PROFILE/g\" $INSTALL_DIR/system.env\n    sedi \"s/<AWS_DR_BUCKET>/not-defined/g\" $INSTALL_DIR/system.env\n\n    aws configure --profile $MINIO_PROFILE get aws_access_key_id >/dev/null 2>/dev/null\n\n    if [[ \"$?\" -ne 0 ]]; then\n        echo \"Creating default minio credentials in AWS profile '$MINIO_PROFILE'\"\n        aws configure --profile $MINIO_PROFILE set aws_access_key_id $(openssl rand -base64 12)\n        aws configure --profile $MINIO_PROFILE set aws_secret_access_key $(openssl rand -base64 12)\n        aws configure --profile $MINIO_PROFILE set region us-east-1\n    fi\nfi\nsedi \"s/<AWS_DR_BUCKET_ROLE>/to-be-defined/g\" $INSTALL_DIR/system.env\nsedi \"s/<CLOUD_REPLACE>/$OPT_CLOUD/g\" $INSTALL_DIR/system.env\nsedi \"s/<REGION_REPLACE>/$AWS_REGION/g\" $INSTALL_DIR/system.env\n\nif [[ \"${OPT_ARCH}\" == \"gpu\" ]]; then\n    SAGEMAKER_TAG=\"gpu\"\nelse\n    SAGEMAKER_TAG=\"cpu\"\nfi\n\n# set proxies if required\nfor arg in \"$@\"; do\n    IFS='=' read -ra part <<<\"$arg\"\n    if [ \"${part[0]}\" == \"--http_proxy\" ] || [ \"${part[0]}\" == \"--https_proxy\" ] || [ \"${part[0]}\" == \"--no_proxy\" ]; then\n        var=${part[0]:2}=${part[1]}\n        args=\"${args} --build-arg ${var}\"\n    fi\ndone\n\n# Download docker images. Change to build statements if locally built images are desired.\nSIMAPP_VERSION=$(jq -r '.containers.simapp | select (.!=null)' $INSTALL_DIR/defaults/dependencies.json)\nsedi \"s/<SIMAPP_VERSION_TAG>/$SIMAPP_VERSION-$SAGEMAKER_TAG/g\" $INSTALL_DIR/system.env\ndocker pull awsdeepracercommunity/deepracer-simapp:$SIMAPP_VERSION-$SAGEMAKER_TAG\n\n# create the network sagemaker-local if it doesn't exist\nSAGEMAKER_NW='sagemaker-local'\n\nif [[ \"${OPT_STYLE}\" == \"swarm\" ]]; then\n\n    docker node ls >/dev/null 2>/dev/null\n    if [ $? -eq 0 ]; then\n        echo \"Swarm exists. 
Exiting.\"\n        exit 1\n    fi\n\n    docker swarm init\n    if [ $? -ne 0 ]; then\n\n        if ip route 2>/dev/null | grep -q default; then\n            DEFAULT_IFACE=$(ip route | grep default | awk '{print $5}')\n            DEFAULT_IP=$(ip addr show $DEFAULT_IFACE | grep \"inet\\b\" | awk '{print $2}' | cut -d/ -f1)\n        else\n            # macOS fallback\n            DEFAULT_IFACE=$(route -n get default 2>/dev/null | awk '/interface:/{print $2}')\n            DEFAULT_IP=$(ipconfig getifaddr \"$DEFAULT_IFACE\" 2>/dev/null)\n        fi\n\n        if [ -z \"$DEFAULT_IP\" ]; then\n            echo \"Could not determine default IP address. Exiting.\"\n            exit 1\n        fi\n\n        echo \"Error when creating swarm, trying again with advertise address $DEFAULT_IP.\"\n        docker swarm init --advertise-addr $DEFAULT_IP\n        if [ $? -ne 0 ]; then\n            echo \"Could not create swarm. Exiting.\"\n            exit 1\n        fi\n    fi\n\n    SWARM_NODE=$(docker node inspect self | jq .[0].ID -r)\n    docker node update --label-add Sagemaker=true $SWARM_NODE >/dev/null 2>/dev/null\n    docker node update --label-add Robomaker=true $SWARM_NODE >/dev/null 2>/dev/null\n    NW_SUBNET=$(find_free_subnet)\n    docker network ls | grep -q $SAGEMAKER_NW && docker network rm $SAGEMAKER_NW >/dev/null 2>&1\n    docker network create $SAGEMAKER_NW -d overlay --attachable --scope swarm \\\n        ${NW_SUBNET:+--subnet=$NW_SUBNET} \\\n        --label com.docker.compose.network=sagemaker-local \\\n        --label com.docker.compose.project=sagemaker-local\n\nelif [[ \"${OPT_STYLE}\" == \"compose\" ]]; then\n\n    NW_SUBNET=$(find_free_subnet)\n    docker network ls | grep -q $SAGEMAKER_NW || \\\n        docker network create $SAGEMAKER_NW ${NW_SUBNET:+--subnet=$NW_SUBNET}\n\nelse\n    echo \"Unknown docker style ${OPT_STYLE}. 
Exiting.\"\n    exit 1\nfi\nsedi \"s/<DOCKER_STYLE>/${OPT_STYLE}/g\" $INSTALL_DIR/system.env\n\n# ensure our variables are set on startup - not for local setup.\nif [[ \"${OPT_CLOUD}\" != \"local\" ]]; then\n    touch \"$HOME/.profile\"\n    NUM_IN_PROFILE=$(grep -c \"$INSTALL_DIR/bin/activate.sh\" \"$HOME/.profile\" || true)\n    if [ \"$NUM_IN_PROFILE\" -eq 0 ]; then\n        echo \"source $INSTALL_DIR/bin/activate.sh\" >>$HOME/.profile\n    fi\nfi\n\n# mark as done\ndate | tee $INSTALL_DIR/DONE\n\n## Optional autorun feature\n# if using automation scripts to auto configure and run\n# you must pass s3_training_location.txt to this instance in order for this to work\nif [[ -f \"$INSTALL_DIR/autorun.s3url\" ]]; then\n    ## read in first line; the first line is always assumed to be the training location regardless of what else is in the file\n    TRAINING_LOC=$(awk 'NR==1 {print; exit}' $INSTALL_DIR/autorun.s3url)\n\n    # get bucket name\n    TRAINING_BUCKET=${TRAINING_LOC%%/*}\n    # get prefix; minor exception handling in case there is no prefix and a root bucket is passed\n    if [[ \"$TRAINING_LOC\" == *\"/\"* ]]; then\n        TRAINING_PREFIX=${TRAINING_LOC#*/}\n    else\n        TRAINING_PREFIX=\"\"\n    fi\n\n    ## check if custom autorun script exists in s3 training bucket. If not, use default in this repo\n    aws s3api head-object --bucket $TRAINING_BUCKET --key $TRAINING_PREFIX/autorun.sh || not_exist=true\n    if [ -n \"${not_exist:-}\" ]; then\n        echo \"custom autorun.sh does not exist in S3, using local copy\"\n    else\n        echo \"custom autorun.sh exists in S3, using it\"\n        aws s3 cp s3://$TRAINING_LOC/autorun.sh $INSTALL_DIR/bin/autorun.sh\n    fi\n    chmod +x $INSTALL_DIR/bin/autorun.sh\n    bash -c \"source $INSTALL_DIR/bin/autorun.sh\"\nfi\n"
  },
  {
    "path": "bin/module/droa.sh",
    "content": "#!/usr/bin/env bash\n# DRoA (DeepRacer on AWS) shell functions.\n# Sourced by bin/activate.sh alongside scripts_wrapper.sh and summary.sh.\n\nfunction droa-list-models {\n  dr-update-env && python3 \"${DR_DIR}/scripts/droa/list_models.py\" \"$@\"\n}\n\nfunction droa-get-model {\n  dr-update-env && python3 \"${DR_DIR}/scripts/droa/get_model.py\" \"$@\"\n}\n\nfunction droa-download-logs {\n  dr-update-env && python3 \"${DR_DIR}/scripts/droa/download_logs.py\" \"$@\"\n}\n\nfunction droa-delete-model {\n  dr-update-env && python3 \"${DR_DIR}/scripts/droa/delete_model.py\" \"$@\"\n}\n\nfunction droa-import-model {\n  dr-update-env && python3 \"${DR_DIR}/scripts/droa/import_model.py\" \"$@\"\n}\n"
  },
  {
    "path": "bin/module/summary.sh",
    "content": "function dr-summary {\n  # ANSI colour palette\n  local RST='\\033[0m'\n  local BOLD='\\033[1m'\n  local DIM='\\033[2m'\n\n  local C_BORDER='\\033[38;5;33m'      # blue\n  local C_HEADER='\\033[38;5;39m'      # bright blue\n  local C_KEY='\\033[38;5;250m'        # light grey\n  local C_VAL='\\033[38;5;222m'        # amber\n  local C_OK='\\033[38;5;82m'          # green\n  local C_WARN='\\033[38;5;220m'       # yellow\n  local C_ERR='\\033[38;5;196m'        # red\n  local C_SECTION='\\033[38;5;75m'     # sky blue\n\n  # ── dynamic width / height ──────────────────────────────────────────────\n  local TERM_W WIDE=false W\n  TERM_W=$(tput cols 2>/dev/null || echo 80)\n  TERM_H=$(tput lines 2>/dev/null || echo 24)\n  _dr_lines=0   # running line counter (non-local so helpers can increment)\n  if [[ $TERM_W -ge 120 ]]; then\n    W=118   # total box width = W+2 = 120\n    WIDE=true\n  else\n    W=$(( TERM_W - 2 ))\n    [[ $W -lt 78 ]] && W=78\n  fi\n  # Two-column content widths: │ space WL space │ space WR space │ = WL+WR+7 = W+2\n  local WL=$(( (W - 5) / 2 ))\n  local WR=$(( W - 5 - WL ))\n\n  # ── helpers ───────────────────────────────────────────────────────────────\n  _dr_hline() {\n    local L=\"$1\" M=\"$2\" R=\"$3\"\n    printf \"${C_BORDER}${L}\"; printf \"${M}%.0s\" $(seq 1 $W); printf \"${R}${RST}\\n\"\n    (( ++_dr_lines ))\n  }\n  _dr_row() {\n    local text=\"$1\"\n    local plain; plain=$(echo -e \"$text\" | sed 's/\\x1b\\[[0-9;]*m//g')\n    local pad=$(( W - ${#plain} - 2 ))\n    [[ $pad -lt 0 ]] && pad=0\n    printf \"${C_BORDER}│${RST} %b%-*s ${C_BORDER}│${RST}\\n\" \"$text\" \"$pad\" \"\"\n    (( ++_dr_lines ))\n  }\n  _dr_blank() { _dr_row \"\"; }\n  _dr_section() {\n    _dr_hline \"├\" \"─\" \"┤\"\n    local label=\" ${BOLD}${C_SECTION}$1${RST}\"\n    [[ -n \"${2:-}\" ]] && label+=\"${DIM}  $2${RST}\"\n    _dr_row \"$label\"\n    _dr_hline \"├\" \"─\" \"┤\"\n  }\n  _dr_kv() {\n    local k=\"$1\" v=\"$2\" s=\"${3:-}\"\n    local 
vc=\"$C_VAL\"\n    [[ \"$s\" == \"ok\"   ]] && vc=\"$C_OK\"\n    [[ \"$s\" == \"warn\" ]] && vc=\"$C_WARN\"\n    [[ \"$s\" == \"err\"  ]] && vc=\"$C_ERR\"\n    _dr_row \" ${C_KEY}$(printf '%-22s' \"$k\")${RST} ${vc}${v}${RST}\"\n  }\n  _dr_hline_2col() {  # L M1 SEP M2 R\n    local L=\"$1\" M1=\"$2\" SEP=\"$3\" M2=\"$4\" R=\"$5\"\n    local LD=$(( WL + 2 )) RD=$(( WR + 2 ))\n    printf \"${C_BORDER}${L}\"\n    printf \"${M1}%.0s\" $(seq 1 $LD)\n    printf \"${SEP}\"\n    printf \"${M2}%.0s\" $(seq 1 $RD)\n    printf \"${R}${RST}\\n\"\n    (( ++_dr_lines ))\n  }\n  _dr_row_2col() {\n    local lt=\"$1\" rt=\"${2:-}\"\n    local lp; lp=$(echo -e \"$lt\" | sed 's/\\x1b\\[[0-9;]*m//g')\n    local rp; rp=$(echo -e \"$rt\" | sed 's/\\x1b\\[[0-9;]*m//g')\n    local lpad=$(( WL - ${#lp} )) rpad=$(( WR - ${#rp} ))\n    [[ $lpad -lt 0 ]] && lpad=0\n    [[ $rpad -lt 0 ]] && rpad=0\n    printf \"${C_BORDER}│${RST} %b%-*s ${C_BORDER}│${RST} %b%-*s ${C_BORDER}│${RST}\\n\" \\\n      \"$lt\" \"$lpad\" \"\" \"$rt\" \"$rpad\" \"\"\n    (( ++_dr_lines ))\n  }\n\n  # ── spinner (shown while pre-compute phase runs) ─────────────────────────\n  local _dr_spinner_pid=\"\"\n  if [[ -t 1 ]]; then\n    { (\n      local frames=('⠋' '⠙' '⠹' '⠸' '⠼' '⠴' '⠦' '⠧' '⠇' '⠏') i=0\n      while true; do\n        printf '\\r  \\033[38;5;33m%s\\033[0m  \\033[2mLoading DeepRacer-for-Cloud...\\033[0m' \\\n          \"${frames[i]}\" >/dev/tty 2>/dev/null\n        # cycle through the full frame set, not just the first four glyphs\n        (( i = (i + 1) % ${#frames[@]} ))\n        sleep 0.12\n      done\n    ) & } 2>/dev/null\n    _dr_spinner_pid=$!\n  fi\n\n  # ── pre-compute git branch / update status ───────────────────────────────\n  local _git_branch _git_update_available=false\n  _git_branch=$(git -C \"$DR_DIR\" rev-parse --abbrev-ref HEAD 2>/dev/null || true)\n  timeout 5 git -C \"$DR_DIR\" fetch --quiet origin 2>/dev/null || true\n  local _local_hash _remote_hash\n  _local_hash=$(git -C \"$DR_DIR\" rev-parse HEAD 2>/dev/null || true)\n  _remote_hash=$(git -C \"$DR_DIR\" rev-parse '@{u}' 
2>/dev/null || true)\n  if [[ -n \"$_local_hash\" && -n \"$_remote_hash\" && \"$_local_hash\" != \"$_remote_hash\" ]]; then\n    _git_update_available=true\n  fi\n\n  # ── pre-compute dynamic values ────────────────────────────────────────────\n  local cloud_val=\"${DR_CLOUD:-n/a}\"\n  [[ \"${DR_CLOUD,,}\" == \"aws\" ]] && cloud_val=\"aws\"\n  [[ \"${DR_CLOUD,,}\" == \"remote\" ]] && cloud_val=\"remote\"\n\n  local s3_color\n  if aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3api head-bucket \\\n      --bucket \"${DR_LOCAL_S3_BUCKET}\" >/dev/null 2>&1; then\n    s3_color=\"${C_OK}\"\n  else\n    s3_color=\"${C_ERR}\"\n  fi\n\n  local nvidia_runtime\n  if docker info --format '{{json .Runtimes}}' 2>/dev/null | grep -q '\"nvidia\"'; then\n    nvidia_runtime=\"${C_OK}available${RST}\"\n  else\n    nvidia_runtime=\"${C_WARN}not found${RST}\"\n  fi\n\n  # ── stop spinner and clear its line before rendering ─────────────────────\n  if [[ -n \"${_dr_spinner_pid:-}\" ]]; then\n    kill \"$_dr_spinner_pid\" 2>/dev/null\n    disown \"$_dr_spinner_pid\" 2>/dev/null\n    wait \"$_dr_spinner_pid\" 2>/dev/null\n    printf '\\r\\033[K' >/dev/tty 2>/dev/null || true\n  fi\n\n  # ── header ────────────────────────────────────────────────────────────────\n  echo; (( ++_dr_lines ))\n  _dr_hline \"╭\" \"─\" \"╮\"\n  _dr_row \" ${BOLD}${C_HEADER}DeepRacer for Cloud  —  Environment Summary${RST}\"\n  local _meta_row\n  if [[ -n \"${DR_EXPERIMENT_NAME:-}\" ]]; then\n    _meta_row=\" ${DIM}Experiment: ${RST}${C_VAL}${DR_EXPERIMENT_NAME}${RST}\"\n  else\n    local _rel_config\n    _rel_config=$(realpath --relative-to=\"${PWD}\" \"${DR_CONFIG}\" 2>/dev/null || basename \"${DR_CONFIG}\")\n    _meta_row=\" ${DIM}Config: ${RST}${C_VAL}${_rel_config}${RST}\"\n  fi\n  local _branch_row=\"${DIM}  Branch: ${RST}${C_VAL}${_git_branch:-unknown}${RST}\"\n  if [[ \"$_git_update_available\" == true ]]; then\n    _branch_row+=\"  ${C_WARN}⬆ update available — run 'git pull'${RST}\"\n  fi\n  _dr_row 
\"${_meta_row}${_branch_row}\"\n\n  # ── system config + run config ────────────────────────────────────────────\n  if [[ \"$WIDE\" == true ]]; then\n    local CKW=18  # key column width in 2-col mode\n    _dr_hline_2col \"├\" \"─\" \"┬\" \"─\" \"┤\"\n    _dr_row_2col \\\n      \" ${BOLD}${C_SECTION}System Configuration${RST}\" \\\n      \" ${BOLD}${C_SECTION}Run Configuration${RST}${DIM}  ID: ${DR_RUN_ID:-0}${RST}\"\n    _dr_hline_2col \"├\" \"─\" \"┼\" \"─\" \"┤\"\n\n    local lrows=() rrows=()\n    lrows+=(\" ${C_KEY}$(printf '%-*s' $CKW 'Docker style')${RST} ${C_VAL}${DR_DOCKER_STYLE:-swarm}${RST}\")\n    lrows+=(\" ${C_KEY}$(printf '%-*s' $CKW 'Cloud / Bucket')${RST} ${DIM}${cloud_val}${RST}  ${s3_color}${DR_LOCAL_S3_BUCKET:-n/a}${RST}\")\n    lrows+=(\" ${C_KEY}$(printf '%-*s' $CKW 'Workers')${RST} ${C_VAL}${DR_WORKERS:-1}${RST}\")\n    lrows+=(\" ${C_KEY}$(printf '%-*s' $CKW 'NVIDIA runtime')${RST} ${nvidia_runtime}\")\n\n    rrows+=(\" ${C_KEY}$(printf '%-*s' $CKW 'Model prefix')${RST} ${C_VAL}${DR_LOCAL_S3_MODEL_PREFIX:-n/a}${RST}\")\n    rrows+=(\" ${C_KEY}$(printf '%-*s' $CKW 'Race type')${RST} ${C_VAL}${DR_RACE_TYPE:-n/a}${RST}\")\n    rrows+=(\" ${C_KEY}$(printf '%-*s' $CKW 'World / track')${RST} ${C_VAL}${DR_WORLD_NAME:-n/a}${RST}\")\n    rrows+=(\" ${C_KEY}$(printf '%-*s' $CKW 'Car name')${RST} ${C_VAL}${DR_CAR_NAME:-n/a}${RST}\")\n\n    local max_r=$(( ${#lrows[@]} > ${#rrows[@]} ? 
${#lrows[@]} : ${#rrows[@]} ))\n    for (( i=0; i<max_r; i++ )); do\n      _dr_row_2col \"${lrows[$i]:-}\" \"${rrows[$i]:-}\"\n    done\n    _dr_hline_2col \"├\" \"─\" \"┴\" \"─\" \"┤\"\n  else\n    _dr_section \"System Configuration\"\n    _dr_row \" ${C_KEY}$(printf '%-22s' 'Docker style')${RST} ${C_VAL}${DR_DOCKER_STYLE:-swarm}${RST}\"\n    _dr_row \" ${C_KEY}$(printf '%-22s' 'Cloud / Bucket')${RST} ${DIM}${cloud_val}${RST}  ${s3_color}${DR_LOCAL_S3_BUCKET:-n/a}${RST}\"\n    _dr_kv \"Workers\"        \"${DR_WORKERS:-1}\"\n    _dr_row \" ${C_KEY}$(printf '%-22s' 'NVIDIA runtime')${RST} ${nvidia_runtime}\"\n    _dr_section \"Run Configuration\" \"ID: ${DR_RUN_ID:-0}\"\n    _dr_kv \"Model prefix\"   \"${DR_LOCAL_S3_MODEL_PREFIX:-n/a}\"\n    _dr_kv \"Race type\"      \"${DR_RACE_TYPE:-n/a}\"\n    _dr_kv \"World / track\"  \"${DR_WORLD_NAME:-n/a}\"\n    _dr_kv \"Car name\"       \"${DR_CAR_NAME:-n/a}\"\n  fi\n\n  # ── simapp version check (used inline in docker images section) ───────────\n  local simapp_update_available=false _required_simapp_ver=\"\"\n  _required_simapp_ver=$(jq -r '.containers.simapp | select (.!=null)' \"$DR_DIR/defaults/dependencies.json\" 2>/dev/null || true)\n  if [[ -n \"$_required_simapp_ver\" && -n \"${DR_SIMAPP_VERSION:-}\" ]]; then\n    local _configured_simapp_ver\n    # use POSIX ERE ([0-9]) instead of grep -P, which is unavailable on macOS grep\n    _configured_simapp_ver=$(echo \"${DR_SIMAPP_VERSION}\" | grep -oE '^[0-9]+\\.[0-9]+(\\.[0-9]+)?')\n    if [[ -n \"$_configured_simapp_ver\" ]] && ! 
verlte \"$_required_simapp_ver\" \"$_configured_simapp_ver\"; then\n      simapp_update_available=true\n    fi\n  fi\n\n  # ── docker images ─────────────────────────────────────────────────────────\n  if [[ \"$WIDE\" == true ]]; then\n    # 2-col closing line already drawn; just add section label row\n    local label=\" ${BOLD}${C_SECTION}Configured Docker Images${RST}\"\n    _dr_row \"$label\"\n    _dr_hline \"├\" \"─\" \"┤\"\n  else\n    _dr_section \"Configured Docker Images\"\n  fi\n\n  local simapp_img=\"${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}\"\n  local simapp_disp=\"${simapp_img/awsdeepracercommunity/[a-d-c]}\"\n  local simapp_id; simapp_id=$(docker image inspect \"$simapp_img\" --format '{{slice .Id 7 19}}' 2>/dev/null)\n\n  local analysis_img=\"awsdeepracercommunity/deepracer-analysis:${DR_ANALYSIS_IMAGE:-cpu}\"\n  local analysis_disp=\"${analysis_img/awsdeepracercommunity/[a-d-c]}\"\n  local analysis_id; analysis_id=$(docker image inspect \"$analysis_img\" --format '{{slice .Id 7 19}}' 2>/dev/null)\n\n  local minio_img=\"\" minio_disp=\"\" minio_id=\"\"\n  if [[ \"${DR_CLOUD,,}\" == \"local\" || \"${DR_CLOUD,,}\" == \"azure\" ]]; then\n    minio_img=\"minio/minio:${DR_MINIO_IMAGE:-latest}\"\n    minio_disp=\"$minio_img\"\n    minio_id=$(docker image inspect \"$minio_img\" --format '{{slice .Id 7 19}}' 2>/dev/null)\n    if [[ -z \"$minio_id\" ]]; then\n      minio_id=$(docker images minio/minio --format '{{slice .ID 0 12}}' 2>/dev/null | head -1)\n    fi\n  fi\n\n  local _simapp_upd_note=\"\"\n  [[ \"$simapp_update_available\" == true ]] && _simapp_upd_note=\"  ${C_WARN}⬆ update available (→ ${_required_simapp_ver})${RST}\"\n\n  if [[ \"$WIDE\" == true ]]; then\n    local IKW=14\n    if [[ -n \"$simapp_id\" ]]; then\n      _dr_row \" ${C_KEY}$(printf '%-*s' $IKW 'SimApp')${RST} ${C_OK}${simapp_disp}${RST}  ${DIM}ID: ${simapp_id}  ✓ local${RST}${_simapp_upd_note}\"\n    else\n      _dr_row \" ${C_KEY}$(printf '%-*s' $IKW 'SimApp')${RST} 
${C_WARN}${simapp_disp}  (not pulled)${RST}${_simapp_upd_note}\"\n    fi\n    if [[ -n \"$analysis_id\" ]]; then\n      _dr_row \" ${C_KEY}$(printf '%-*s' $IKW 'Analysis')${RST} ${C_OK}${analysis_disp}${RST}  ${DIM}ID: ${analysis_id}  ✓ local${RST}\"\n    else\n      _dr_row \" ${C_KEY}$(printf '%-*s' $IKW 'Analysis')${RST} ${C_WARN}${analysis_disp}  (not pulled)${RST}\"\n    fi\n    if [[ -n \"$minio_img\" ]]; then\n      if [[ -n \"$minio_id\" ]]; then\n        _dr_row \" ${C_KEY}$(printf '%-*s' $IKW 'MinIO')${RST} ${C_OK}${minio_disp}${RST}  ${DIM}ID: ${minio_id}  ✓ local${RST}\"\n      else\n        _dr_row \" ${C_KEY}$(printf '%-*s' $IKW 'MinIO')${RST} ${C_WARN}${minio_disp}  (not pulled)${RST}\"\n      fi\n    fi\n  else\n    if [[ -n \"$simapp_id\" ]]; then\n      _dr_kv \"SimApp\" \"${simapp_disp}\" \"ok\"\n      _dr_row \" ${DIM}$(printf '%22s' '') ID: ${simapp_id}  ✓ local${RST}${_simapp_upd_note}\"\n    else\n      _dr_kv \"SimApp\" \"${simapp_disp}  (not pulled)${_simapp_upd_note}\" \"warn\"\n    fi\n    if [[ -n \"$analysis_id\" ]]; then\n      _dr_kv \"Analysis\" \"${analysis_disp}\" \"ok\"\n      _dr_row \" ${DIM}$(printf '%22s' '') ID: ${analysis_id}  ✓ local${RST}\"\n    else\n      _dr_kv \"Analysis\" \"${analysis_disp}  (not pulled)\" \"warn\"\n    fi\n    if [[ -n \"$minio_img\" ]]; then\n      if [[ -n \"$minio_id\" ]]; then\n        _dr_kv \"MinIO\" \"${minio_disp}\" \"ok\"\n        _dr_row \" ${DIM}$(printf '%22s' '') ID: ${minio_id}  ✓ local${RST}\"\n      else\n        _dr_kv \"MinIO\" \"${minio_disp}  (not pulled)\" \"warn\"\n      fi\n    fi\n  fi\n\n  # ── services and containers ───────────────────────────────────────────────\n  _dr_section \"DeepRacer Services And Containers\"\n  local found_any=false\n\n  if [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n    local stack_lines\n    stack_lines=$(docker stack ls --format '{{.Name}}\\t{{.Services}}' 2>/dev/null || true)\n    if [[ -n \"$stack_lines\" ]]; then\n      found_any=true\n  
    _dr_row \" ${DIM}Swarm stacks:${RST}\"\n      while IFS=$'\\t' read -r stname stsvcs; do\n        _dr_row \" ${C_KEY}$(printf '%-30s' \"$stname\")${RST} ${C_VAL}${stsvcs} service(s)${RST}\"\n      done <<< \"$stack_lines\"\n    fi\n\n    local svc_lines\n    svc_lines=$(docker service ls --format '{{.Name}}\\t{{.Replicas}}\\t{{.Image}}' 2>/dev/null \\\n      | grep -i '^deepracer' || true)\n    if [[ -n \"$svc_lines\" ]]; then\n      found_any=true\n      _dr_row \" ${DIM}Swarm services:${RST}\"\n      while IFS=$'\\t' read -r sname sreplicas simage; do\n        local desired actual\n        desired=$(echo \"$sreplicas\" | cut -d'/' -f2)\n        actual=$(echo \"$sreplicas\" | cut -d'/' -f1)\n        local rep_color=\"$C_OK\"\n        [[ \"$actual\" != \"$desired\" ]] && rep_color=\"$C_WARN\"\n        local simage_disp=\"${simage/awsdeepracercommunity/[a-d-c]}\"\n        _dr_row \" ${C_KEY}$(printf '%-30s' \"$sname\")${RST} ${rep_color}$(printf '%-8s' \"$sreplicas\")${RST} ${DIM}${simage_disp}${RST}\"\n      done <<< \"$svc_lines\"\n    fi\n\n    local container_lines\n    container_lines=$(docker ps --format '{{.Names}}\\t{{.Status}}\\t{{.Image}}' 2>/dev/null \\\n      | while IFS=$'\\t' read -r cn cs ci; do\n          if echo \"$cn\" | grep -qiE '^deepracer|robomaker|sagemaker|minio|rl_coach|analysis' \\\n             || [[ \"$ci\" == \"$simapp_img\"* ]]; then\n            printf '%s\\t%s\\n' \"$cn\" \"$cs\"\n          fi\n        done)\n    if [[ -n \"$container_lines\" ]]; then\n      found_any=true\n      local n_ctrs; n_ctrs=$(echo \"$container_lines\" | wc -l)\n      _dr_row \" ${DIM}Containers:${RST}\"\n      # 3 lines reserved for footer (blank row + closing hline + trailing newline)\n      if (( _dr_lines + n_ctrs + 3 > TERM_H )); then\n        _dr_row \"   ${DIM}${n_ctrs} container(s) running  ${C_WARN}(terminal too short to list)${RST}\"\n      else\n        while IFS=$'\\t' read -r cname cstatus; do\n          local status_color=\"$C_OK\"\n         
 [[ \"$cstatus\" != Up* ]] && status_color=\"$C_WARN\"\n          _dr_row \" ${C_KEY}$(printf '%-30s' \"$cname\")${RST} ${status_color}${cstatus}${RST}\"\n        done <<< \"$container_lines\"\n      fi\n    fi\n  else\n    local proj_lines\n    proj_lines=$(docker compose ls --format json 2>/dev/null \\\n      | jq -r '.[] | select(.Name | test(\"deepracer|s3\"; \"i\")) | \"\\(.Name)\\t\\(.Status)\"' 2>/dev/null || true)\n    if [[ -n \"$proj_lines\" ]]; then\n      found_any=true\n      _dr_row \" ${DIM}Compose projects:${RST}\"\n      while IFS=$'\\t' read -r pname pstatus; do\n        local pstatus_color=\"$C_OK\"\n        [[ \"$pstatus\" != *running* ]] && pstatus_color=\"$C_WARN\"\n        _dr_row \" ${C_KEY}$(printf '%-30s' \"$pname\")${RST} ${pstatus_color}${pstatus}${RST}\"\n      done <<< \"$proj_lines\"\n    fi\n\n    local container_lines\n    container_lines=$(docker ps --format '{{.Names}}\\t{{.Status}}\\t{{.Image}}' 2>/dev/null \\\n      | while IFS=$'\\t' read -r cn cs ci; do\n          if echo \"$cn\" | grep -qiE '^deepracer|robomaker|sagemaker|minio|rl_coach|analysis' \\\n             || [[ \"$ci\" == \"$simapp_img\"* ]]; then\n            printf '%s\\t%s\\n' \"$cn\" \"$cs\"\n          fi\n        done)\n    if [[ -n \"$container_lines\" ]]; then\n      found_any=true\n      local n_ctrs; n_ctrs=$(echo \"$container_lines\" | wc -l)\n      _dr_row \" ${DIM}Compose services:${RST}\"\n      if (( _dr_lines + n_ctrs + 3 > TERM_H )); then\n        _dr_row \"   ${DIM}${n_ctrs} container(s) running  ${C_WARN}(terminal too short to list)${RST}\"\n      else\n        while IFS=$'\\t' read -r cname cstatus; do\n          local status_color=\"$C_OK\"\n          [[ \"$cstatus\" != Up* ]] && status_color=\"$C_WARN\"\n          _dr_row \" ${C_KEY}$(printf '%-30s' \"$cname\")${RST} ${status_color}${cstatus}${RST}\"\n        done <<< \"$container_lines\"\n      fi\n    fi\n  fi\n\n  if [[ \"$found_any\" == false ]]; then\n    _dr_row \"  ${C_WARN}No 
DeepRacer-related services or containers running.${RST}\"\n  fi\n\n  # ── footer ────────────────────────────────────────────────────────────────\n  _dr_blank\n  _dr_hline \"╰\" \"─\" \"╯\"\n  echo\n}\n"
  },
  {
    "path": "bin/prepare-mac.sh",
    "content": "#!/usr/bin/env bash\n\nset -euo pipefail\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n    echo \"Requested to stop.\"\n    exit 1\n}\n\nDIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" >/dev/null 2>&1 && pwd)\"\n\n## Only allow macOS\nif [[ \"$(uname -s)\" != \"Darwin\" ]]; then\n    echo \"ERROR: This script is for macOS only. Use prepare.sh for Linux.\"\n    exit 1\nfi\n\nMACOS_VERSION=$(sw_vers -productVersion)\nMACOS_MAJOR=$(echo \"$MACOS_VERSION\" | cut -d. -f1)\n\n# Supported: Monterey (12), Ventura (13), Sonoma (14), Sequoia (15)\nSUPPORTED_MACOS_MAJOR=(12 13 14 15)\nVERSION_OK=false\nfor V in \"${SUPPORTED_MACOS_MAJOR[@]}\"; do\n    if [[ \"${MACOS_MAJOR}\" -eq \"$V\" ]]; then\n        VERSION_OK=true\n        break\n    fi\ndone\n\nif [[ \"$VERSION_OK\" != true ]]; then\n    echo \"WARNING: macOS ${MACOS_VERSION} is not a tested version.\"\n    echo \"         Supported: Monterey (12), Ventura (13), Sonoma (14), Sequoia (15)\"\nfi\n\necho \"Detected macOS ${MACOS_VERSION}\"\n\n## macOS does not support NVIDIA GPUs -- always CPU\nARCH=\"cpu\"\necho \"macOS does not support NVIDIA GPUs. Using CPU mode.\"\n\n## Detect Apple Silicon vs Intel\nCPU_ARCH=$(uname -m)\nif [[ \"${CPU_ARCH}\" == \"arm64\" ]]; then\n    echo \"Apple Silicon (arm64) detected.\"\n    BREW_PREFIX=\"/opt/homebrew\"\nelse\n    echo \"Intel (x86_64) detected.\"\n    BREW_PREFIX=\"/usr/local\"\nfi\n\n## Install Homebrew if not present\nif ! command -v brew >/dev/null 2>&1; then\n    echo \"Installing Homebrew...\"\n    /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"\n    eval \"$(\"${BREW_PREFIX}/bin/brew\" shellenv)\"\nfi\n\n## Update Homebrew\nbrew update\n\n## Install required packages (no awscli here -- installed separately below)\nbrew install jq python3 git screen bash\n\n## Set Homebrew bash as the default shell if not already\nBREW_BASH=\"${BREW_PREFIX}/bin/bash\"\nif ! 
grep -qF \"${BREW_BASH}\" /etc/shells; then\n    echo \"Adding ${BREW_BASH} to /etc/shells...\"\n    echo \"${BREW_BASH}\" | sudo tee -a /etc/shells\nfi\nif [[ \"$(dscl . -read /Users/\"$(id -un)\" UserShell | awk '{print $2}')\" != \"${BREW_BASH}\" ]]; then\n    echo \"Setting default shell to ${BREW_BASH}...\"\n    sudo chsh -s \"${BREW_BASH}\" \"$(id -un)\"\nfi\n\n## Ensure bash 5 + Homebrew PATH are set up for all SSH sessions.\n## Sets PATH first so --login re-entry is safe (BASH_VERSINFO guard prevents looping).\nBASH_PROFILE=\"${HOME}/.bash_profile\"\nBOOTSTRAP_MARKER=\"# drfc-bash5-bootstrap\"\nif ! grep -qF \"${BOOTSTRAP_MARKER}\" \"${BASH_PROFILE}\" 2>/dev/null; then\n    cat >> \"${BASH_PROFILE}\" <<EOF\n\n${BOOTSTRAP_MARKER}\neval \"\\$(${BREW_PREFIX}/bin/brew shellenv)\"\nexport PATH=\"/usr/local/bin:\\$PATH\"  # AWS CLI v2\nif [ -x \"${BREW_BASH}\" ] && [ \"\\${BASH_VERSINFO[0]:-0}\" -lt 5 ]; then\n    exec \"${BREW_BASH}\" --login\nfi\nEOF\n    echo \"Added bash 5 + PATH bootstrap to ${BASH_PROFILE}.\"\nfi\n\n## Install boto3 and pyyaml\nif pip3 install boto3 pyyaml --break-system-packages 2>/dev/null; then\n    echo \"boto3 and pyyaml installed.\"\nelse\n    pip3 install boto3 pyyaml\nfi\n\n## Install AWS CLI v2 via official pkg installer (avoids Homebrew Python conflicts)\nif command -v aws >/dev/null 2>&1; then\n    echo \"AWS CLI already installed: $(aws --version 2>&1)\"\nelse\n    echo \"Installing AWS CLI v2 via official installer...\"\n    TMP_PKG=$(mktemp /tmp/AWSCLIV2.XXXXXX.pkg)\n    curl -fsSL \"https://awscli.amazonaws.com/AWSCLIV2.pkg\" -o \"${TMP_PKG}\"\n    sudo installer -pkg \"${TMP_PKG}\" -target /\n    rm -f \"${TMP_PKG}\"\n    echo \"AWS CLI installed: $(aws --version 2>&1)\"\nfi\n\n## Detect cloud\n# detect.sh relies on cloud-init which is typically absent on macOS.\n# Fall back to probing the AWS Instance Metadata Service (IMDSv2).\nCLOUD_NAME=\"local\"\nif [[ -f /var/run/cloud-init/instance-data.json ]]; then\n    source 
\"$DIR/detect.sh\"\nelse\n    if IMDS_TOKEN=$(curl -s --connect-timeout 2 \\\n            -X PUT \"http://169.254.169.254/latest/api/token\" \\\n            -H \"X-aws-ec2-metadata-token-ttl-seconds: 21600\" 2>/dev/null) \\\n        && [[ -n \"${IMDS_TOKEN}\" ]]; then\n        CLOUD_NAME=\"aws\"\n        CLOUD_INSTANCETYPE=$(curl -s --connect-timeout 2 \\\n            -H \"X-aws-ec2-metadata-token: ${IMDS_TOKEN}\" \\\n            \"http://169.254.169.254/latest/meta-data/instance-type\" 2>/dev/null || echo \"unknown\")\n        export CLOUD_NAME\n        export CLOUD_INSTANCETYPE\n    else\n        export CLOUD_NAME\n    fi\nfi\necho \"Detected cloud type ${CLOUD_NAME}\"\n\n## Install Docker CLI and Colima (headless Docker runtime for macOS)\n## Colima is preferred over Docker Desktop for headless/EC2 use.\n\nif brew list --formula colima &>/dev/null; then\n    echo \"Colima already installed.\"\nelse\n    brew install colima\nfi\n\nif command -v docker >/dev/null 2>&1; then\n    echo \"Docker CLI already installed.\"\nelse\n    brew install docker\nfi\n\n## Install docker-compose v2 (as CLI plugin)\nif brew list --formula docker-compose &>/dev/null; then\n    echo \"docker-compose already installed.\"\nelse\n    brew install docker-compose\nfi\n\n## Register docker-compose as a Docker CLI plugin\nmkdir -p \"${HOME}/.docker/cli-plugins\"\nln -sfn \"$(brew --prefix)/opt/docker-compose/bin/docker-compose\" \\\n    \"${HOME}/.docker/cli-plugins/docker-compose\"\n\n## Start Colima if not already running\nif colima status 2>/dev/null | grep -q \"Running\"; then\n    echo \"Colima is already running.\"\nelse\n    echo \"Starting Colima...\"\n    if [[ \"${CPU_ARCH}\" == \"arm64\" ]] && [[ \"${MACOS_MAJOR}\" -ge 13 ]]; then\n        # Apple Silicon + macOS 13+: use Virtualization.framework (vz) for much\n        # lower hypervisor overhead vs QEMU. 
virtiofs gives better I/O than sshfs.\n        colima start --cpu 8 --memory 12 --disk 60 \\\n            --vm-type vz --mount-type virtiofs\n    elif [[ \"${CPU_ARCH}\" == \"arm64\" ]]; then\n        colima start --cpu 8 --memory 12 --disk 60 --mount-type virtiofs\n    else\n        # Intel Mac\n        colima start --cpu 4 --memory 8 --disk 60 --mount-type virtiofs\n    fi\nfi\n\n## Ensure docker socket is reachable\nif ! docker info >/dev/null 2>&1; then\n    echo \"ERROR: Docker is not reachable. Check that Colima is running: colima status\"\n    exit 1\nfi\necho \"Docker is available via Colima.\"\n\n## Create /tmp/sagemaker inside the Colima VM.\n## On macOS, Docker runs inside Colima's Linux VM so bind-mounts must exist there,\n## not on the macOS host. /tmp persists across colima stop/start but not colima delete.\ncolima ssh -- sudo mkdir -p /tmp/sagemaker\ncolima ssh -- sudo chmod -R ug+w /tmp/sagemaker\necho \"/tmp/sagemaker created inside Colima VM.\"\n\n## Ensure Colima auto-starts on login (launchd)\nif ! launchctl list 2>/dev/null | grep -q \"com.abiosoft.colima.default\"; then\n    brew services start colima || true\nfi\n\n## Completion message\necho \"\"\necho \"First stage done. Log out and back in, then run init.sh -c ${CLOUD_NAME} -a ${ARCH}\"\necho \"\"\necho \"Notes:\"\necho \"  - Log out and back in for the new default shell (bash 5) to take effect.\"\necho \"  - Colima must be running before using DeepRacer-for-Cloud.\"\necho \"    Start it manually with: colima start\"\necho \"  - On Apple Silicon (arm64), amd64/x86_64 container images require\"\necho \"    Rosetta 2. Install it with: softwareupdate --install-rosetta\"\necho \"    Then restart Colima with: colima start --arch x86_64\"\necho \"  - No reboot is required.\"\n"
  },
  {
    "path": "bin/prepare.sh",
    "content": "#!/usr/bin/env bash\n\nset -euo pipefail\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n    echo \"Requested to stop.\"\n    exit 1\n}\n\nexport DEBIAN_FRONTEND=noninteractive\nDIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" >/dev/null 2>&1 && pwd)\"\n\n# Only allow supported Ubuntu versions\n. /etc/os-release\nSUPPORTED_VERSIONS=(\"22.04\" \"24.04\" \"24.10\" \"25.04\" \"25.10\")\nDISTRIBUTION=${ID}${VERSION_ID//./}\nUBUNTU_MAJOR_VERSION=$(echo $VERSION_ID | cut -d. -f1)\nUBUNTU_MINOR_VERSION=$(echo $VERSION_ID | cut -d. -f2)\nif [[ \"$ID\" == \"ubuntu\" ]]; then\n    VERSION_OK=false\n    for V in \"${SUPPORTED_VERSIONS[@]}\"; do\n        if [[ \"$VERSION_ID\" == \"$V\" ]]; then\n            VERSION_OK=true\n            break\n        fi\n    done\n    if [[ \"$VERSION_OK\" != true ]]; then\n        echo \"ERROR: Ubuntu $VERSION_ID is not a supported version. Supported versions: ${SUPPORTED_VERSIONS[*]}\"\n        exit 1\n    fi\nfi\n\n## Check if WSL2\nIS_WSL2=\"\"\nif grep -qi Microsoft /proc/version && grep -q \"WSL2\" /proc/version; then\n    IS_WSL2=\"yes\"\nfi\n\n# Remove needrestart in all Ubuntu 2x.04/2x.10+ (future-proof)\nif [[ \"${ID}\" == \"ubuntu\" && ${UBUNTU_MAJOR_VERSION} -ge 22 && -z \"${IS_WSL2}\" ]]; then\n    sudo apt remove -y needrestart || true\nfi\n\n## Patch system\nsudo apt update && sudo apt-mark hold grub-pc && sudo apt -y -o \\\n    DPkg::options::=\"--force-confdef\" -o DPkg::options::=\"--force-confold\" -qq upgrade\n\n## Install required packages\nsudo apt install --no-install-recommends -y jq python3-boto3 python3-venv screen git curl\n\n## Install AWS CLI\nif [[ \"${ID}\" == \"ubuntu\" && ( ${UBUNTU_MAJOR_VERSION} -eq 22 ) ]]; then\n    sudo apt install -y awscli\nelse\n    if command -v snap >/dev/null 2>&1; then\n        sudo snap install aws-cli --classic\n    else\n        echo \"WARNING: snap not available, AWS CLI not installed\"\n    fi\nfi\n\n## Create Python virtual environment\nVENV_DIR=\"${DIR}/../.venv\"\nif 
[[ ! -d \"${VENV_DIR}\" ]]; then\n    echo \"Creating Python virtual environment at ${VENV_DIR}\"\n    python3 -m venv --prompt drfc \"${VENV_DIR}\"\nfi\necho \"Installing Python requirements into virtual environment\"\n\"${VENV_DIR}/bin/pip\" install --quiet -r \"${DIR}/../requirements.txt\"\n\n## Detect cloud\nsource $DIR/detect.sh\necho \"Detected cloud type ${CLOUD_NAME}\"\n\n## Do I have a GPU\nGPUS=0\nif [[ -z \"${IS_WSL2}\" ]]; then\n    GPUS=$(lspci | awk '/NVIDIA/ && ( /VGA/ || /3D controller/ ) ' | wc -l)\nelse\n    if [[ -f /usr/lib/wsl/lib/nvidia-smi ]]; then\n        GPUS=$(nvidia-smi --query-gpu=name --format=csv,noheader | wc -l)\n    fi\nfi\nif [ $? -ne 0 ] || [ $GPUS -eq 0 ]; then\n    ARCH=\"cpu\"\n    echo \"No NVIDIA GPU detected. Will not install drivers.\"\nelse\n    ARCH=\"gpu\"\nfi\n\n## Adding Nvidia Drivers\nif [[ \"${ARCH}\" == \"gpu\" && -z \"${IS_WSL2}\" ]]; then\n    DRIVER_OK=false\n    # Find all installed nvidia-driver-XXX packages (status 'ii'), extract version, and check if >= 560\n    for PKG in $(dpkg -l | awk '$1 == \"ii\" && /nvidia-driver-[0-9]+/ {print $2}'); do\n        DRIVER_VER=$(echo \"${PKG}\" | sed -E 's/nvidia-driver-([0-9]+).*/\\1/')\n        if [[ ${DRIVER_VER} -ge 560 ]]; then\n            echo \"NVIDIA driver ${DRIVER_VER} already installed.\"\n            DRIVER_OK=true\n            break\n        fi\n    done\n    if [[ \"${DRIVER_OK}\" != true ]]; then\n        # Try to install the highest available driver >= 560\n        HIGHEST_DRIVER=$(apt-cache search --names-only '^nvidia-driver-[0-9]+$' | awk '{print $1}' | grep -oE '[0-9]+$' | awk '$1 >= 560' | sort -nr | head -n1)\n        if [[ -n \"${HIGHEST_DRIVER}\" ]]; then\n            sudo apt install -y \"nvidia-driver-${HIGHEST_DRIVER}\" --no-install-recommends -o Dpkg::Options::=\"--force-overwrite\"\n        elif apt-cache show nvidia-driver-560-server &>/dev/null; then\n            sudo apt install -y nvidia-driver-560-server --no-install-recommends -o 
Dpkg::Options::=\"--force-overwrite\"\n        else\n            echo \"No supported NVIDIA driver >= 560 found for this Ubuntu version.\"\n            exit 1\n        fi\n    fi\nfi\n\n## Installing Docker\nsudo apt install -y --no-install-recommends docker.io docker-buildx docker-compose-v2\n\n## Install Nvidia Docker Container\nif [[ \"${ARCH}\" == \"gpu\" ]]; then\n    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg &&\n        curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list |\n        sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' |\n            sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list\n\n    sudo apt update && sudo apt install -y --no-install-recommends nvidia-docker2 nvidia-container-runtime\n    if [ -f \"/etc/docker/daemon.json\" ]; then\n        echo \"Altering /etc/docker/daemon.json with default-runtime nvidia.\"\n        cat /etc/docker/daemon.json | jq 'del(.\"default-runtime\") + {\"default-runtime\": \"nvidia\"}' | sudo tee /etc/docker/daemon.json\n    else\n        echo \"Creating /etc/docker/daemon.json with default-runtime nvidia.\"\n        sudo cp \"${DIR}/../defaults/docker-daemon.json\" /etc/docker/daemon.json\n    fi\nfi\n\n## Enable and start docker\nif [[ -n \"${IS_WSL2}\" ]]; then\n    sudo service docker restart\nelse\n    sudo systemctl enable docker\n    sudo systemctl restart docker\nfi\n\n## Ensure user can run docker\nsudo usermod -a -G docker \"$(id -un)\"\n\n## Reboot to load driver -- continue install if in cloud-init\nCLOUD_INIT=$(pstree -s $BASHPID | awk /cloud-init/ | wc -l)\n\nif [[ \"${CLOUD_INIT}\" -ne 0 ]]; then\n    echo \"Rebooting in 5 seconds. 
Will continue with install.\"\n    cd \"${DIR}\"\n    ./runonce.sh \"./init.sh -c ${CLOUD_NAME} -a ${ARCH}\"\n    sleep 5s\n    sudo shutdown -r +1\nelif [[ -n \"${IS_WSL2}\" || \"${ARCH}\" == \"cpu\" ]]; then\n    echo \"First stage done. Log out, then log back in and run init.sh -c ${CLOUD_NAME} -a ${ARCH}\"\n    echo \"Note: You may need to log out and back in for docker group membership to take effect.\"\nelse\n    echo \"First stage done. Please reboot and run init.sh -c ${CLOUD_NAME} -a ${ARCH}\"\n    echo \"Note: Reboot is required for NVIDIA drivers and docker group membership to take effect.\"\nfi\n"
  },
  {
    "path": "bin/runonce.sh",
    "content": "#!/usr/bin/env bash\n\nif [[ $# -eq 0 ]]; then\n    echo \"Schedules a command to be run after the next reboot.\"\n    echo \"Usage: $(basename $0) <command>\"\n    echo \"       $(basename $0) -p <path> <command>\"\n    echo \"       $(basename $0) -r <command>\"\nelse\n    REMOVE=0\n    COMMAND=${!#}\n    SCRIPTPATH=$PATH\n\n    while getopts \":r:p:\" optionName; do\n        case \"$optionName\" in\n        r)\n            REMOVE=1\n            COMMAND=$OPTARG\n            ;;\n        p) SCRIPTPATH=$OPTARG ;;\n        esac\n    done\n\n    SCRIPT=\"${HOME}/.$(basename $0)_$(echo $COMMAND | sed 's/[^a-zA-Z0-9_]/_/g')\"\n\n    if [[ ! -f $SCRIPT ]]; then\n        echo \"PATH=$SCRIPTPATH\" >>$SCRIPT\n        echo \"cd $(pwd)\" >>$SCRIPT\n        echo \"logger -t $(basename $0) -p local3.info \\\"COMMAND=$COMMAND ; USER=\\$(whoami) ($(logname)) ; PWD=$(pwd) ; PATH=\\$PATH\\\"\" >>$SCRIPT\n        echo \"$COMMAND | logger -t $(basename $0) -p local3.info\" >>$SCRIPT\n        echo \"$0 -r \\\"$(echo $COMMAND | sed 's/\\\"/\\\\\\\"/g')\\\"\" >>$SCRIPT\n        chmod +x $SCRIPT\n    fi\n\n    CRONTAB=\"${HOME}/.$(basename $0)_temp_crontab_$RANDOM\"\n    ENTRY=\"@reboot $SCRIPT\"\n\n    echo \"$(crontab -l 2>/dev/null)\" | grep -v \"$ENTRY\" | grep -v \"^# DO NOT EDIT THIS FILE - edit the master and reinstall.$\" | grep -v \"^# ([^ ]* installed on [^)]*)$\" | grep -v \"^# (Cron version [^$]*\\$[^$]*\\$)$\" >$CRONTAB\n\n    if [[ $REMOVE -eq 0 ]]; then\n        echo \"$ENTRY\" >>$CRONTAB\n    fi\n\n    crontab $CRONTAB\n    rm $CRONTAB\n\n    if [[ $REMOVE -ne 0 ]]; then\n        rm $SCRIPT\n    fi\nfi\n"
  },
  {
    "path": "bin/scripts_wrapper.sh",
    "content": "#!/usr/bin/env bash\n\nfunction _dr_is_macos {\n  [[ \"$(uname -s)\" == \"Darwin\" ]]\n}\n\nif ! declare -F _realpath >/dev/null 2>&1; then\n  function _realpath {\n    if command -v realpath >/dev/null 2>&1; then\n      realpath \"$1\"\n    elif command -v grealpath >/dev/null 2>&1; then\n      grealpath \"$1\"\n    elif ! _dr_is_macos && readlink -f / >/dev/null 2>&1; then\n      readlink -f \"$1\"\n    else\n      python3 -c 'import os, sys; print(os.path.realpath(sys.argv[1]))' \"$1\"\n    fi\n  }\nfi\nexport -f _realpath\n\nfunction _dr_require_colima {\n  if ! _dr_is_macos; then\n    return 0\n  fi\n\n  if ! command -v colima >/dev/null 2>&1; then\n    echo \"ERROR: Colima is required on macOS. Run bin/prepare-mac.sh or install and start Colima.\" >&2\n    return 1\n  fi\n\n  if ! colima status 2>/dev/null | grep -qi \"running\"; then\n    echo \"ERROR: Colima is not running. Start it with: colima start\" >&2\n    return 1\n  fi\n}\n\nfunction _dr_ensure_sagemaker_dir {\n  if _dr_is_macos; then\n    _dr_require_colima || return 1\n    colima ssh -- sudo mkdir -p /tmp/sagemaker\n    colima ssh -- sudo chmod -R ug+w /tmp/sagemaker\n  elif [ ! 
-d /tmp/sagemaker ]; then\n    sudo mkdir -p /tmp/sagemaker\n    sudo chmod -R g+w /tmp/sagemaker\n  fi\n}\n\nfunction _dr_runtime_cat {\n  if _dr_is_macos; then\n    _dr_require_colima || return 1\n    colima ssh -- sudo cat \"$1\"\n  else\n    sudo cat \"$1\"\n  fi\n}\n\nfunction _dr_find_sagemaker_compose_files {\n  local compose_service_name=\"$1\"\n\n  if _dr_is_macos; then\n    _dr_require_colima || return 1\n    colima ssh -- sudo env COMPOSE_SERVICE_NAME=\"$compose_service_name\" sh -lc 'find /tmp/sagemaker -name docker-compose.yaml -exec grep -l -- \"$COMPOSE_SERVICE_NAME\" {} +'\n  else\n    sudo find /tmp/sagemaker -name docker-compose.yaml -exec grep -l -- \"$compose_service_name\" {} +\n  fi\n}\n\nfunction _dr_compose_file_matches_run {\n  local compose_file=\"$1\"\n  local compose_content\n\n  compose_content=$(_dr_runtime_cat \"$compose_file\" 2>/dev/null) || return 1\n  grep -Fq \"RUN_ID=${DR_RUN_ID}\" <<<\"$compose_content\" && grep -Fq \"${DR_LOCAL_S3_MODEL_PREFIX}\" <<<\"$compose_content\"\n}\n\nfunction dr-upload-custom-files {\n  eval CUSTOM_TARGET=$(echo s3://$DR_LOCAL_S3_BUCKET/$DR_LOCAL_S3_CUSTOM_FILES_PREFIX/)\n  echo \"Uploading files to $CUSTOM_TARGET\"\n  if [[ -z $DR_EXPERIMENT_NAME ]]; then\n    aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync $DR_DIR/custom_files/ $CUSTOM_TARGET\n  else\n    aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync $DR_DIR/experiments/$DR_EXPERIMENT_NAME/custom_files/ $CUSTOM_TARGET\n  fi\n}\n\nfunction dr-upload-model {\n  dr-update-env && ${DR_DIR}/scripts/upload/upload-model.sh \"$@\"\n}\n\nfunction dr-download-model {\n  dr-update-env && ${DR_DIR}/scripts/upload/download-model.sh \"$@\"\n}\n\nfunction dr-upload-car-zip {\n  dr-update-env && ${DR_DIR}/scripts/upload/upload-car.sh \"$@\"\n}\n\nfunction dr-list-aws-models {\n  echo \"Due to changes in AWS DeepRacer Console this command is no longer available.\"\n}\n\nfunction dr-set-upload-model {\n  echo \"Due to changes in AWS DeepRacer Console this command is no 
longer available.\"\n}\n\nfunction dr-increment-upload-model {\n  dr-update-env && ${DR_DIR}/scripts/upload/increment.sh \"$@\" && dr-update-env\n}\n\nfunction dr-download-custom-files {\n  eval CUSTOM_TARGET=$(echo s3://$DR_LOCAL_S3_BUCKET/$DR_LOCAL_S3_CUSTOM_FILES_PREFIX/)\n  echo \"Downloading files from $CUSTOM_TARGET\"\n  if [[ -z $DR_EXPERIMENT_NAME ]]; then\n    aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync $CUSTOM_TARGET $DR_DIR/custom_files/\n  else\n    aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync $CUSTOM_TARGET $DR_DIR/experiments/$DR_EXPERIMENT_NAME/custom_files/\n  fi\n}\n\nfunction dr-start-training {\n  dr-update-env\n  $DR_DIR/scripts/training/start.sh \"$@\"\n}\n\nfunction dr-increment-training {\n  dr-update-env && ${DR_DIR}/scripts/training/increment.sh \"$@\" && dr-update-env\n}\n\nfunction dr-stop-training {\n  bash -c \"cd $DR_DIR/scripts/training && ./stop.sh\"\n}\n\nfunction dr-start-evaluation {\n  dr-update-env\n  $DR_DIR/scripts/evaluation/start.sh \"$@\"\n}\n\nfunction dr-stop-evaluation {\n  bash -c \"cd $DR_DIR/scripts/evaluation && ./stop.sh\"\n}\n\nfunction dr-stop-all {\n  # Step 1: Stop all stacks (swarm) or all compose projects (compose)\n  if [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n    docker stack ls --format '{{.Name}}' | while read -r STACK; do\n      echo \"Removing stack: $STACK\"\n      docker stack rm \"$STACK\"\n    done\n  else\n    while IFS=$'\\t' read -r NAME CONFIGS; do\n      echo \"Stopping compose project: $NAME\"\n      local CONFIG_FLAGS\n      CONFIG_FLAGS=$(echo \"$CONFIGS\" | tr ',' '\\n' | sed 's/^/-f /' | tr '\\n' ' ')\n      docker compose $CONFIG_FLAGS -p \"$NAME\" down\n    done < <(docker compose ls --format json 2>/dev/null \\\n      | jq -r '.[] | [.Name, .ConfigFiles] | @tsv')\n  fi\n\n  # Step 2: Stop the s3/minio stack if still running\n  if [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n    if docker stack ls --format '{{.Name}}' | grep -qx 's3'; then\n      echo \"Removing stack: s3\"\n 
     docker stack rm s3\n    fi\n  else\n    if docker compose ls --format json 2>/dev/null | jq -e '.[] | select(.Name == \"s3\")' >/dev/null 2>&1; then\n      echo \"Stopping compose project: s3\"\n      docker compose -p s3 down\n    fi\n  fi\n  echo \"Waiting 10 seconds for stacks and services to stop...\"\n  sleep 10\n  # Step 3: Stop any remaining containers still attached to sagemaker-local\n  local REMAINING\n  REMAINING=$(docker network inspect sagemaker-local --format '{{json .Containers}}' 2>/dev/null \\\n    | jq -r 'keys[] | select(test(\"^[0-9a-f]{64}$\"))' 2>/dev/null)\n  if [[ -n \"$REMAINING\" ]]; then\n    echo \"Stopping remaining containers on sagemaker-local:\"\n    echo \"$REMAINING\" | while read -r CONTAINER_ID; do\n      local CONTAINER_NAME\n      CONTAINER_NAME=$(docker inspect --format '{{.Name}}' \"$CONTAINER_ID\" | sed 's|^/||')\n      echo \"  Stopping: $CONTAINER_NAME\"\n      docker stop \"$CONTAINER_ID\"\n    done\n  fi\n}\n\nfunction dr-start-tournament {\n  echo \"Tournaments are no longer supported. 
Use Head-to-Model evaluation instead.\"\n}\n\nfunction dr-start-loganalysis {\n  bash -c \"cd $DR_DIR/scripts/log-analysis && ./start.sh\"\n}\n\nfunction dr-stop-loganalysis {\n  eval LOG_ANALYSIS_ID=$(docker ps | awk ' /deepracer-analysis/ { print $1 }')\n  if [ -n \"$LOG_ANALYSIS_ID\" ]; then\n    bash -c \"cd $DR_DIR/scripts/log-analysis && ./stop.sh\"\n  else\n    echo \"Log-analysis is not running.\"\n  fi\n\n}\n\nfunction dr-logs-sagemaker {\n\n  local OPTIND\n  OPT_TIME=\"--since 5m\"\n\n  while getopts \":w:a\" opt; do\n    case $opt in\n    w)\n      OPT_WAIT=$OPTARG\n      ;;\n    a)\n      OPT_TIME=\"\"\n      ;;\n    \\?)\n      echo \"Invalid option -$OPTARG\" >&2\n      ;;\n    esac\n  done\n\n  SAGEMAKER_CONTAINER=$(dr-find-sagemaker)\n\n  if [[ -z \"$SAGEMAKER_CONTAINER\" ]]; then\n    if [[ -n \"$OPT_WAIT\" ]]; then\n      WAIT_TIME=$OPT_WAIT\n      echo \"Waiting up to $WAIT_TIME seconds for Sagemaker to start up...\"\n      until [ -n \"$SAGEMAKER_CONTAINER\" ]; do\n        sleep 1\n        ((WAIT_TIME--))\n        if [ \"$WAIT_TIME\" -lt 1 ]; then\n          echo \"Sagemaker is not running.\"\n          return 1\n        fi\n        SAGEMAKER_CONTAINER=$(dr-find-sagemaker)\n      done\n    else\n      echo \"Sagemaker is not running.\"\n      return 1\n    fi\n  fi\n\n  if [[ \"$TERM_PROGRAM\" == \"vscode\" ]]; then\n    echo \"VS Code terminal detected. Displaying Sagemaker logs inline.\"\n    docker logs $OPT_TIME -f $SAGEMAKER_CONTAINER\n  elif [[ \"${DR_HOST_X,,}\" == \"true\" && -n \"$DISPLAY\" ]]; then\n    if [ -x \"$(command -v gnome-terminal)\" ]; then\n      gnome-terminal --tab --title \"DR-${DR_RUN_ID}: Sagemaker - ${SAGEMAKER_CONTAINER}\" -- /usr/bin/env bash -c \"docker logs $OPT_TIME -f ${SAGEMAKER_CONTAINER}\" 2>/dev/null\n      echo \"Sagemaker container $SAGEMAKER_CONTAINER logs opened in separate gnome-terminal. 
\"\n    elif [ -x \"$(command -v x-terminal-emulator)\" ]; then\n      x-terminal-emulator -e /bin/sh -c \"docker logs $OPT_TIME -f ${SAGEMAKER_CONTAINER}\" 2>/dev/null\n      echo \"Sagemaker container $SAGEMAKER_CONTAINER logs opened in separate terminal. \"\n    else\n      echo 'Could not find a terminal emulator. Displaying inline.'\n      docker logs $OPT_TIME -f $SAGEMAKER_CONTAINER\n    fi\n  else\n    docker logs $OPT_TIME -f $SAGEMAKER_CONTAINER\n  fi\n\n}\n\nfunction dr-find-sagemaker {\n\n  STACK_NAME=\"deepracer-$DR_RUN_ID\"\n  SAGEMAKER_CONTAINERS=$(docker ps | awk ' /simapp/ { print $1 } ' | xargs)\n\n  if [[ -n \"$SAGEMAKER_CONTAINERS\" ]]; then\n      for CONTAINER in $SAGEMAKER_CONTAINERS; do\n          CONTAINER_NAME=$(docker ps --format '{{.Names}}' --filter id=$CONTAINER)\n          CONTAINER_PREFIX=$(echo $CONTAINER_NAME | perl -n -e'/(.*)-(algo-(.)-(.*))/; print $1')\n          COMPOSE_SERVICE_NAME=$(echo $CONTAINER_NAME | perl -n -e'/(.*)-(algo-(.)-(.*))/; print $2')\n\n          if [[ -n \"$COMPOSE_SERVICE_NAME\" ]]; then\n                COMPOSE_FILES=$(_dr_find_sagemaker_compose_files \"$COMPOSE_SERVICE_NAME\")\n              for COMPOSE_FILE in $COMPOSE_FILES; do\n                  if _dr_compose_file_matches_run \"$COMPOSE_FILE\"; then\n                      echo $CONTAINER\n                  fi\n              done\n          fi\n      done\n  fi\n\n}\n\nfunction dr-logs-robomaker {\n\n  OPT_REPLICA=1\n  OPT_EVAL=\"\"\n  local OPTIND\n  OPT_TIME=\"--since 5m\"\n\n  while getopts \":w:n:ea\" opt; do\n    case $opt in\n    w)\n      OPT_WAIT=$OPTARG\n      ;;\n    n)\n      OPT_REPLICA=$OPTARG\n      ;;\n    e)\n      OPT_EVAL=\"-e\"\n      ;;\n    a)\n      OPT_TIME=\"\"\n      ;;\n    \\?)\n      echo \"Invalid option -$OPTARG\" >&2\n      ;;\n    esac\n  done\n\n  ROBOMAKER_CONTAINER=$(dr-find-robomaker -n ${OPT_REPLICA} ${OPT_EVAL})\n\n  if [[ -z \"$ROBOMAKER_CONTAINER\" ]]; then\n    if [[ -n \"$OPT_WAIT\" ]]; then\n      
WAIT_TIME=$OPT_WAIT\n      echo \"Waiting up to $WAIT_TIME seconds for Robomaker #${OPT_REPLICA} to start up...\"\n      until [ -n \"$ROBOMAKER_CONTAINER\" ]; do\n        sleep 1\n        ((WAIT_TIME--))\n        if [ \"$WAIT_TIME\" -lt 1 ]; then\n          echo \"Robomaker #${OPT_REPLICA} is not running.\"\n          return 1\n        fi\n        ROBOMAKER_CONTAINER=$(dr-find-robomaker -n ${OPT_REPLICA} ${OPT_EVAL})\n      done\n    else\n      echo \"Robomaker #${OPT_REPLICA} is not running.\"\n      return 1\n    fi\n  fi\n\n  if [[ \"$TERM_PROGRAM\" == \"vscode\" ]]; then\n    echo \"VS Code terminal detected. Displaying Robomaker #${OPT_REPLICA} logs inline.\"\n    docker logs $OPT_TIME -f $ROBOMAKER_CONTAINER\n  elif [[ \"${DR_HOST_X,,}\" == \"true\" && -n \"$DISPLAY\" ]]; then\n    if [ -x \"$(command -v gnome-terminal)\" ]; then\n      gnome-terminal --tab --title \"DR-${DR_RUN_ID}: Robomaker #${OPT_REPLICA} - ${ROBOMAKER_CONTAINER}\" -- /usr/bin/env bash -c \"docker logs $OPT_TIME -f ${ROBOMAKER_CONTAINER}\" 2>/dev/null\n      echo \"Robomaker #${OPT_REPLICA} ($ROBOMAKER_CONTAINER) logs opened in separate gnome-terminal. \"\n    elif [ -x \"$(command -v x-terminal-emulator)\" ]; then\n      x-terminal-emulator -e /bin/sh -c \"docker logs $OPT_TIME -f ${ROBOMAKER_CONTAINER}\" 2>/dev/null\n      echo \"Robomaker #${OPT_REPLICA} ($ROBOMAKER_CONTAINER) logs opened in separate terminal. \"\n    else\n      echo 'Could not find a terminal emulator. 
Displaying inline.'\n      docker logs $OPT_TIME -f $ROBOMAKER_CONTAINER\n    fi\n  else\n    docker logs $OPT_TIME -f $ROBOMAKER_CONTAINER\n  fi\n\n}\n\nfunction dr-find-robomaker {\n\n  local OPTIND\n\n  OPT_PREFIX=\"deepracer\"\n\n  while getopts \":n:e\" opt; do\n    case $opt in\n    n)\n      OPT_REPLICA=$OPTARG\n      ;;\n    e)\n      OPT_PREFIX=\"deepracer-eval\"\n      ;;\n    \\?)\n      echo \"Invalid option -$OPTARG\" >&2\n      ;;\n    esac\n  done\n\n  if [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n    eval ROBOMAKER_ID=$(docker ps | grep \"${OPT_PREFIX}-${DR_RUN_ID}_robomaker.${OPT_REPLICA}\" | cut -f1 -d\\  | head -1)\n  else\n    eval ROBOMAKER_ID=$(docker ps | grep \"${OPT_PREFIX}-${DR_RUN_ID}-robomaker-${OPT_REPLICA}\" | cut -f1 -d\\  | head -1)\n  fi\n\n  if [ -n \"$ROBOMAKER_ID\" ]; then\n    echo $ROBOMAKER_ID\n  fi\n}\n\nfunction dr-get-robomaker-stats {\n\n  local OPTIND\n  OPT_REPLICA=1\n\n  while getopts \":n:\" opt; do\n    case $opt in\n    n)\n      OPT_REPLICA=$OPTARG\n      ;;\n    \\?)\n      echo \"Invalid option -$OPTARG\" >&2\n      ;;\n    esac\n  done\n\n  eval ROBOMAKER_ID=$(dr-find-robomaker -n $OPT_REPLICA)\n  if [ -n \"$ROBOMAKER_ID\" ]; then\n    echo \"Showing statistics for Robomaker #$OPT_REPLICA - container $ROBOMAKER_ID\"\n    docker exec -ti $ROBOMAKER_ID bash -c \"gz stats\"\n  else\n    echo \"Robomaker #$OPT_REPLICA is not running.\"\n  fi\n}\n\nfunction dr-logs-loganalysis {\n  eval LOG_ANALYSIS_ID=$(docker ps | awk ' /deepracer-analysis/ { print $1 }')\n  if [ -n \"$LOG_ANALYSIS_ID\" ]; then\n    docker logs -f $LOG_ANALYSIS_ID\n  else\n    echo \"Log-analysis is not running.\"\n  fi\n\n}\n\nfunction dr-url-loganalysis {\n  LOG_ANALYSIS_ID=$(docker ps --filter \"name=deepracer-analysis\" --format \"{{.ID}}\" | head -1)\n  if [ -n \"$LOG_ANALYSIS_ID\" ]; then\n    URL=$(docker logs \"$LOG_ANALYSIS_ID\" 2>&1 | grep -oE 'http://127\\.0\\.0\\.1:[0-9]+[^ ]*token=[a-f0-9]+' | tail -1)\n    if [ -n \"$URL\" ]; then\n     
 echo \"${URL/127.0.0.1/localhost}\"\n    else\n      echo \"Jupyter URL not found yet. Try again in a moment.\"\n    fi\n  else\n    echo \"Log-analysis is not running.\"\n  fi\n}\n\nfunction dr-view-stream {\n  ${DR_DIR}/utils/start-local-browser.sh \"$@\"\n}\n\nfunction dr-start-viewer {\n  $DR_DIR/scripts/viewer/start.sh \"$@\"\n}\n\nfunction dr-stop-viewer {\n  $DR_DIR/scripts/viewer/stop.sh \"$@\"\n}\n\nfunction dr-update-viewer {\n  $DR_DIR/scripts/viewer/stop.sh \"$@\"\n  $DR_DIR/scripts/viewer/start.sh \"$@\"\n}\n\nfunction dr-start-metrics {\n  $DR_DIR/scripts/metrics/start.sh \"$@\"\n}\n\nfunction dr-stop-metrics {\n  $DR_DIR/scripts/metrics/stop.sh \"$@\"\n}"
  },
  {
    "path": "defaults/debug-reward_function.py",
    "content": "import math\r\nimport numpy\r\nimport time\r\n\r\nclass Reward:\r\n\r\n    '''\r\n    Debugging reward function to be used to track performance of local training.\r\n    Will print out the Real-Time-Factor (RTF), as well as how many \r\n    steps-per-second (sim-time) that the system is able to deliver.\r\n    '''\r\n\r\n    def __init__(self, verbose=False, track_time=False):\r\n        self.verbose = verbose\r\n        self.track_time = track_time\r\n\r\n        if track_time:\r\n            TIME_WINDOW=10\r\n            self.time = numpy.zeros([TIME_WINDOW, 2])\r\n\r\n        if verbose:\r\n            print(\"Initializing Reward Class\")\r\n\r\n    def get_time(self):\r\n\r\n        wall_time_incr = numpy.max(self.time[:,0]) - numpy.min(self.time[:,0])\r\n        sim_time_incr = numpy.max(self.time[:,1]) - numpy.min(self.time[:,1])\r\n        \r\n        rtf = sim_time_incr / wall_time_incr\r\n        fps = (self.time.shape[0] - 1) / sim_time_incr\r\n\r\n        return rtf, fps\r\n    \r\n    def record_time(self, steps, sim_time=0.0):\r\n\r\n        index = int(steps) % self.time.shape[0]\r\n        self.time[index,0] = time.time()\r\n        self.time[index,1] = sim_time\r\n\r\n    def reward_function(self, params):\r\n\r\n        # Read input parameters\r\n        steps = params[\"steps\"]\r\n\r\n        if self.track_time:\r\n            self.record_time(steps, sim_time=params.get(\"sim_time\", 0.0))\r\n\r\n        if self.track_time:\r\n            if steps >= self.time.shape[0]:\r\n                rtf, fps = self.get_time()\r\n                print(\"TIME: s: {}, rtf: {}, fps:{}\".format(int(steps), round(rtf, 2), round(fps, 2) ))\r\n\r\n        return 1.0\r\n\r\n\r\nreward_object = Reward(verbose=False, track_time=True)\r\n\r\ndef reward_function(params):\r\n    return reward_object.reward_function(params)\r\n"
  },
  {
    "path": "defaults/dependencies.json",
    "content": "{\n    \"master_version\": \"6.0\",\n    \"containers\": {\n        \"simapp\": \"6.0.4\"\n    }\n}\n"
  },
  {
    "path": "defaults/docker-daemon.json",
    "content": "{\n    \"runtimes\": {\n        \"nvidia\": {\n            \"path\": \"nvidia-container-runtime\",\n            \"runtimeArgs\": []\n        }\n    },\n    \"default-runtime\": \"nvidia\"\n}"
  },
  {
    "path": "defaults/hyperparameters.json",
    "content": "{\n    \"batch_size\": 64,\n    \"beta_entropy\": 0.01,\n    \"discount_factor\": 0.99,\n    \"e_greedy_value\": 0.05,\n    \"epsilon_steps\": 10000,\n    \"exploration_type\": \"categorical\",\n    \"loss_type\": \"huber\",\n    \"lr\": 0.0003,\n    \"num_episodes_between_training\": 20,\n    \"num_epochs\": 5,\n    \"stack_size\": 1,\n    \"term_cond_avg_score\": 350.0,\n    \"term_cond_max_episodes\": 1000,\n    \"sac_alpha\": 0.2\n  }"
  },
  {
    "path": "defaults/model_metadata.json",
    "content": "{\n    \"action_space\": [\n        {\n            \"steering_angle\": -30,\n            \"speed\": 0.6\n        },\n        {\n            \"steering_angle\": -15,\n            \"speed\": 0.6\n        },\n        {\n            \"steering_angle\": 0,\n            \"speed\": 0.6\n        },\n        {\n            \"steering_angle\": 15,\n            \"speed\": 0.6\n        },\n        {\n            \"steering_angle\": 30,\n            \"speed\": 0.6\n        }\n    ],\n    \"sensor\": [\"FRONT_FACING_CAMERA\"],\n    \"neural_network\": \"DEEP_CONVOLUTIONAL_NETWORK_SHALLOW\",\n    \"training_algorithm\": \"clipped_ppo\", \n    \"action_space_type\": \"discrete\",\n    \"version\": \"5\"\n}\n"
  },
  {
    "path": "defaults/model_metadata_cont.json",
    "content": "{\n    \"action_space\": {\n        \"speed\": {\n            \"high\": 2,\n            \"low\": 1\n        },\n        \"steering_angle\": {\n            \"high\": 30,\n            \"low\": -30\n        }\n    },\n    \"sensor\": [\n        \"FRONT_FACING_CAMERA\"\n    ],\n    \"neural_network\": \"DEEP_CONVOLUTIONAL_NETWORK_SHALLOW\",\n    \"training_algorithm\": \"clipped_ppo\",\n    \"action_space_type\": \"continuous\",\n    \"version\": \"5\"\n}"
  },
  {
    "path": "defaults/model_metadata_sac.json",
    "content": "{\n    \"action_space\": {\"speed\": {\"high\": 2, \"low\": 1}, \"steering_angle\": {\"high\": 30, \"low\": -30}},\n    \"sensor\": [\"FRONT_FACING_CAMERA\"],\n    \"neural_network\": \"DEEP_CONVOLUTIONAL_NETWORK_SHALLOW\",\n    \"training_algorithm\": \"sac\", \n    \"action_space_type\": \"continuous\",\n    \"version\": \"4\"\n}\n"
  },
  {
    "path": "defaults/reward_function.py",
    "content": "def reward_function(params):\n    '''\n    Example of penalizing steering, which helps mitigate zig-zag behaviors\n    '''\n    \n    # Read input parameters\n    distance_from_center = params['distance_from_center']\n    track_width = params['track_width']\n    steering = abs(params['steering_angle']) # Only need the absolute steering angle\n\n    # Calculate 3 markers that are farther and farther away from the center line\n    marker_1 = 0.1 * track_width\n    marker_2 = 0.25 * track_width\n    marker_3 = 0.5 * track_width\n\n    # Give higher reward if the car is closer to center line and vice versa\n    if distance_from_center <= marker_1:\n        reward = 1\n    elif distance_from_center <= marker_2:\n        reward = 0.5\n    elif distance_from_center <= marker_3:\n        reward = 0.1\n    else:\n        reward = 1e-3  # likely crashed/ close to off track\n\n    # Steering penalty threshold, change the number based on your action space setting\n    ABS_STEERING_THRESHOLD = 15\n\n    # Penalize reward if the car is steering too much\n    if steering > ABS_STEERING_THRESHOLD:\n        reward *= 0.8\n\n    return float(reward)\n"
  },
  {
    "path": "defaults/template-run.env",
    "content": "DR_RUN_ID=0\nDR_WORLD_NAME=reinvent_base\nDR_RACE_TYPE=TIME_TRIAL\nDR_CAR_NAME=FastCar\nDR_CAR_BODY_SHELL_TYPE=deepracer\nDR_CAR_COLOR=Red\nDR_DISPLAY_NAME=$DR_CAR_NAME\nDR_RACER_NAME=$DR_CAR_NAME\nDR_ENABLE_DOMAIN_RANDOMIZATION=False\nDR_EVAL_NUMBER_OF_TRIALS=3\nDR_EVAL_IS_CONTINUOUS=True\nDR_EVAL_MAX_RESETS=100\nDR_EVAL_OFF_TRACK_PENALTY=5.0\nDR_EVAL_COLLISION_PENALTY=5.0\nDR_EVAL_SAVE_MP4=False\nDR_EVAL_CHECKPOINT=last\nDR_EVAL_OPP_S3_MODEL_PREFIX=rl-deepracer-sagemaker\nDR_EVAL_OPP_CAR_BODY_SHELL_TYPE=deepracer\nDR_EVAL_OPP_CAR_NAME=FasterCar\nDR_EVAL_OPP_DISPLAY_NAME=$DR_EVAL_OPP_CAR_NAME\nDR_EVAL_OPP_RACER_NAME=$DR_EVAL_OPP_CAR_NAME\nDR_EVAL_DEBUG_REWARD=False\nDR_EVAL_RESET_BEHIND_DIST=1.0\nDR_EVAL_REVERSE_DIRECTION=False\n#DR_EVAL_RTF=1.0\nDR_TRAIN_CHANGE_START_POSITION=True\nDR_TRAIN_REVERSE_DIRECTION=False\nDR_TRAIN_ALTERNATE_DRIVING_DIRECTION=False\nDR_TRAIN_START_POSITION_OFFSET=0.0\nDR_TRAIN_ROUND_ROBIN_ADVANCE_DIST=0.05\nDR_TRAIN_MULTI_CONFIG=False\nDR_TRAIN_MIN_EVAL_TRIALS=5\nDR_TRAIN_BEST_MODEL_METRIC=progress\n#DR_TRAIN_RTF=1.0\n#DR_TRAIN_MAX_STEPS_PER_ITERATION=10000\nDR_LOCAL_S3_MODEL_PREFIX=rl-deepracer-sagemaker\nDR_LOCAL_S3_PRETRAINED=False\nDR_LOCAL_S3_PRETRAINED_PREFIX=rl-sagemaker-pretrained\nDR_LOCAL_S3_PRETRAINED_CHECKPOINT=last\nDR_LOCAL_S3_CUSTOM_FILES_PREFIX=custom_files\nDR_LOCAL_S3_TRAINING_PARAMS_FILE=training_params.yaml\nDR_LOCAL_S3_EVAL_PARAMS_FILE=evaluation_params.yaml\nDR_LOCAL_S3_MODEL_METADATA_KEY=$DR_LOCAL_S3_CUSTOM_FILES_PREFIX/model_metadata.json\nDR_LOCAL_S3_HYPERPARAMETERS_KEY=$DR_LOCAL_S3_CUSTOM_FILES_PREFIX/hyperparameters.json\nDR_LOCAL_S3_REWARD_KEY=$DR_LOCAL_S3_CUSTOM_FILES_PREFIX/reward_function.py\nDR_LOCAL_S3_METRICS_PREFIX=$DR_LOCAL_S3_MODEL_PREFIX/metrics\nDR_UPLOAD_S3_PREFIX=$DR_LOCAL_S3_MODEL_PREFIX-1\nDR_OA_NUMBER_OF_OBSTACLES=6\nDR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES=2.0\nDR_OA_RANDOMIZE_OBSTACLE_LOCATIONS=False\nDR_OA_IS_OBSTACLE_BOT_CAR=False\nDR_OA_OBSTACLE_TYPE=box_obstacle\nDR_OA_OBJECT_P
OSITIONS=\nDR_H2B_IS_LANE_CHANGE=False\nDR_H2B_LOWER_LANE_CHANGE_TIME=3.0\nDR_H2B_UPPER_LANE_CHANGE_TIME=5.0\nDR_H2B_LANE_CHANGE_DISTANCE=1.0\nDR_H2B_NUMBER_OF_BOT_CARS=3\nDR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS=2.0\nDR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS=False\nDR_H2B_BOT_CAR_SPEED=0.2\nDR_H2B_BOT_CAR_PENALTY=5.0"
  },
  {
    "path": "defaults/template-system.env",
    "content": "DR_CLOUD=<CLOUD_REPLACE>\nDR_AWS_APP_REGION=<REGION_REPLACE>\nDR_UPLOAD_S3_PROFILE=default\nDR_UPLOAD_S3_BUCKET=<AWS_DR_BUCKET>\nDR_LOCAL_S3_BUCKET=bucket\nDR_LOCAL_S3_PROFILE=<LOCAL_PROFILE>\nDR_GUI_ENABLE=False\nDR_KINESIS_STREAM_NAME=\nDR_CAMERA_MAIN_ENABLE=True\nDR_CAMERA_SUB_ENABLE=False\nDR_CAMERA_KVS_ENABLE=True\nDR_ENABLE_EXTRA_KVS_OVERLAY=False\nDR_SIMAPP_SOURCE=awsdeepracercommunity/deepracer-simapp\nDR_SIMAPP_VERSION=<SIMAPP_VERSION_TAG>\nDR_MINIO_IMAGE=latest\nDR_ANALYSIS_IMAGE=cpu\nDR_WORKERS=1\nDR_ROBOMAKER_MOUNT_LOGS=False\n# DR_ROBOMAKER_MOUNT_SIMAPP_DIR=\n# DR_ROBOMAKER_MOUNT_SCRIPTS_DIR=${DR_DIR}/data/scripts\nDR_CLOUD_WATCH_ENABLE=False\nDR_CLOUD_WATCH_LOG_STREAM_PREFIX=\nDR_DOCKER_STYLE=<DOCKER_STYLE>\nDR_HOST_X=False\nDR_WEBVIEWER_PORT=8100\nDR_QUIET_ACTIVATE=False\n# DR_DISPLAY=:99\n# DR_REMOTE_MINIO_URL=http://mynas:9000\n# DR_ROBOMAKER_CUDA_DEVICES=0\n# DR_SAGEMAKER_CUDA_DEVICES=0\n# DR_EXPERIMENT_NAME=\n# DR_TELEGRAF_HOST=telegraf\n# DR_TELEGRAF_PORT=8092\n## DRoA Integration\n# DR_DROA_URL=https://xxxx.cloudfront.net\n# DR_DROA_USERNAME=user@example.com\n"
  },
  {
    "path": "defaults/template-worker.env",
    "content": "DR_WORLD_NAME=reInvent2019_track\nDR_RACE_TYPE=TIME_TRIAL\nDR_CAR_COLOR=Blue\nDR_ENABLE_DOMAIN_RANDOMIZATION=False\nDR_TRAIN_CHANGE_START_POSITION=True\nDR_TRAIN_ALTERNATE_DRIVING_DIRECTION=False\nDR_TRAIN_ROUND_ROBIN_ADVANCE_DIST=0.05\nDR_TRAIN_START_POSITION_OFFSET=0.0\nDR_OA_NUMBER_OF_OBSTACLES=6\nDR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES=2.0\nDR_OA_RANDOMIZE_OBSTACLE_LOCATIONS=False\nDR_OA_IS_OBSTACLE_BOT_CAR=False\nDR_OA_OBSTACLE_TYPE=box_obstacle\nDR_OA_OBJECT_POSITIONS=\nDR_H2B_IS_LANE_CHANGE=False\nDR_H2B_LOWER_LANE_CHANGE_TIME=3.0\nDR_H2B_UPPER_LANE_CHANGE_TIME=5.0\nDR_H2B_LANE_CHANGE_DISTANCE=1.0\nDR_H2B_NUMBER_OF_BOT_CARS=3\nDR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS=2.0\nDR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS=False\nDR_H2B_BOT_CAR_SPEED=0.2\n"
  },
  {
    "path": "docker/docker-compose-aws.yml",
    "content": "version: '3.7'\n\nservices:\n  rl_coach:\n    environment:\n      - AWS_METADATA_SERVICE_TIMEOUT=3\n      - AWS_METADATA_SERVICE_NUM_ATTEMPTS=5\n  robomaker:\n    environment:\n      - AWS_METADATA_SERVICE_TIMEOUT=3\n      - AWS_METADATA_SERVICE_NUM_ATTEMPTS=5\n"
  },
  {
    "path": "docker/docker-compose-cwlog.yml",
    "content": "version: '3.7'\n\nservices:\n  rl_coach:\n    logging:\n      driver: awslogs\n      options:\n        awslogs-group: '/deepracer-for-cloud'\n        awslogs-create-group: 'true'\n        awslogs-region: ${DR_AWS_APP_REGION}\n        tag: \"${DR_CLOUD_WATCH_LOG_STREAM_PREFIX}{{.Name}}\"\n  robomaker:\n    logging:\n      driver: awslogs\n      options:\n        awslogs-group: '/deepracer-for-cloud'\n        awslogs-create-group: 'true' \n        awslogs-region: ${DR_AWS_APP_REGION}\n        tag: \"${DR_CLOUD_WATCH_LOG_STREAM_PREFIX}{{.Name}}\""
  },
  {
    "path": "docker/docker-compose-endpoint.yml",
    "content": "version: '3.7'\n\nservices:\n  rl_coach:\n    environment:\n      - S3_ENDPOINT_URL=${DR_MINIO_URL}\n  robomaker:\n    environment:\n      - S3_ENDPOINT_URL=${DR_MINIO_URL}\n"
  },
  {
    "path": "docker/docker-compose-eval-swarm.yml",
    "content": "version: '3.7'\n\nservices:\n  rl_coach:\n    deploy:\n      restart_policy:\n        condition: none\n      placement:\n        constraints: [node.labels.Sagemaker == true ]\n  robomaker:\n    deploy:\n      restart_policy:\n        condition: none\n      replicas: 1\n      placement:\n        constraints: [node.labels.Robomaker == true ]\n    environment:\n        - DOCKER_REPLICA_SLOT={{.Task.Slot}}"
  },
  {
    "path": "docker/docker-compose-eval.yml",
    "content": "version: '3.7'\n\nnetworks:\n  default:\n    external: true\n    name: sagemaker-local\n\nservices:\n  rl_coach:\n    image: ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}\n    command: [\"/bin/bash\", \"-c\", \"echo No work for coach in Evaluation Mode\"]\n  robomaker:\n    image: ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}\n    command: [\"${ROBOMAKER_COMMAND:-}\"]\n    ports:\n      - \"${DR_ROBOMAKER_EVAL_PORT}:8080\"\n    environment:\n      - CUDA_VISIBLE_DEVICES=${DR_ROBOMAKER_CUDA_DEVICES:-}\n      - DEBUG_REWARD=${DR_EVAL_DEBUG_REWARD}\n      - WORLD_NAME=${DR_WORLD_NAME}\n      - MODEL_S3_PREFIX=${DR_LOCAL_S3_MODEL_PREFIX}\n      - MODEL_S3_BUCKET=${DR_LOCAL_S3_BUCKET}      \n      - APP_REGION=${DR_AWS_APP_REGION}\n      - S3_YAML_NAME=${DR_CURRENT_PARAMS_FILE}\n      - KINESIS_VIDEO_STREAM_NAME=${DR_KINESIS_STREAM_NAME}\n      - ENABLE_KINESIS=${DR_CAMERA_KVS_ENABLE}\n      - ENABLE_GUI=${DR_GUI_ENABLE}\n      - ROLLOUT_IDX=0\n      - RTF_OVERRIDE=${DR_EVAL_RTF:-}\n      - ROS_MASTER_URI=http://localhost:11311/\n      - ROS_IP=127.0.0.1\n      - GAZEBO_ARGS=${DR_GAZEBO_ARGS:-}\n      - GAZEBO_RENDER_ENGINE=${DR_GAZEBO_RENDER_ENGINE:-ogre2}\n      - TELEGRAF_HOST=${DR_TELEGRAF_HOST:-}\n      - TELEGRAF_PORT=${DR_TELEGRAF_PORT:-}\n    init: true"
  },
  {
    "path": "docker/docker-compose-keys.yml",
    "content": "version: '3.7'\n\nservices:\n  rl_coach:\n    environment:\n      - AWS_ACCESS_KEY_ID=${DR_LOCAL_ACCESS_KEY_ID}\n      - AWS_SECRET_ACCESS_KEY=${DR_LOCAL_SECRET_ACCESS_KEY}\n  robomaker:\n    environment:\n      - AWS_ACCESS_KEY_ID=${DR_LOCAL_ACCESS_KEY_ID}\n      - AWS_SECRET_ACCESS_KEY=${DR_LOCAL_SECRET_ACCESS_KEY}\n"
  },
  {
    "path": "docker/docker-compose-local-xorg-wsl.yml",
    "content": "version: '3.7'\n\nservices:\n  robomaker:\n    environment:\n      - DISPLAY\n      - USE_EXTERNAL_X=${DR_HOST_X}\n      - QT_X11_NO_MITSHM=1\n      - LD_LIBRARY_PATH=/usr/lib/wsl/lib\n    volumes:\n      - '/tmp/.X11-unix/:/tmp/.X11-unix'\n      - '/mnt/wslg:/mnt/wslg'\n      - '/usr/lib/wsl:/usr/lib/wsl'\n    devices:\n      - /dev/dxg\n"
  },
  {
    "path": "docker/docker-compose-local-xorg.yml",
    "content": "version: '3.7'\n\nservices:\n  robomaker:\n    environment:\n      - DISPLAY\n      - USE_EXTERNAL_X=${DR_HOST_X}\n      - XAUTHORITY=/root/.Xauthority\n      - QT_X11_NO_MITSHM=1\n      - NVIDIA_DRIVER_CAPABILITIES=all\n    volumes:\n      - '/tmp/.X11-unix/:/tmp/.X11-unix'\n      - '${XAUTHORITY}:/root/.Xauthority'"
  },
  {
    "path": "docker/docker-compose-local.yml",
    "content": "version: '3.7'\n\nnetworks:\n  default:\n    external: true\n    name: sagemaker-local\n\nservices:\n  minio:\n    image: minio/minio:${DR_MINIO_IMAGE}\n    ports:\n      - \"9000:9000\"\n      - \"9001:9001\"\n    command: server /data --console-address \":9001\"\n    environment:\n      - MINIO_ROOT_USER=${DR_LOCAL_ACCESS_KEY_ID}\n      - MINIO_ROOT_PASSWORD=${DR_LOCAL_SECRET_ACCESS_KEY}\n      - MINIO_UID\n      - MINIO_GID\n      - MINIO_USERNAME\n      - MINIO_GROUPNAME\n    volumes:\n      - ${DR_DIR}/data/minio:/data\n"
  },
  {
    "path": "docker/docker-compose-metrics.yml",
    "content": "version: '3.7'\n\nnetworks:\n  default:\n    external: true\n    name: sagemaker-local\n\nservices:\n  telegraf:\n    image: telegraf:1.18-alpine\n    volumes:\n      - ./metrics/telegraf/etc/telegraf.conf:/etc/telegraf/telegraf.conf:ro\n    depends_on:\n      - influxdb\n    links:\n      - influxdb\n    ports:\n      - '127.0.0.1:8125:8125/udp'\n      - '127.0.0.1:8092:8092/udp'\n\n  influxdb:\n    image: influxdb:1.8-alpine\n    env_file: ./metrics/configuration.env\n    ports:\n      - '127.0.0.1:8886:8086'\n    volumes:\n      - influxdb_data:/var/lib/influxdb\n\n  grafana:\n    image: grafana/grafana:10.4.2\n    depends_on:\n      - influxdb\n    env_file: ./metrics/configuration.env\n    links:\n      - influxdb\n    ports:\n      - '3000:3000'\n    volumes:\n      - grafana_data:/var/lib/grafana\n      - ./metrics/grafana/provisioning/:/etc/grafana/provisioning/\n\nvolumes:\n  grafana_data: {}\n  influxdb_data: {}\n"
  },
  {
    "path": "docker/docker-compose-mount.yml",
    "content": "version: '3.7'\n\nservices:\n  robomaker:\n    volumes:\n      - \"${DR_MOUNT_DIR}:/root/.ros/log\"\n"
  },
  {
    "path": "docker/docker-compose-robomaker-multi.yml",
    "content": "version: '3.7'\n\nservices:\n  robomaker:\n    volumes:\n      - \"${DR_DIR}/tmp/comms.${DR_RUN_ID}:/mnt/comms\"\n"
  },
  {
    "path": "docker/docker-compose-robomaker-scripts.yml",
    "content": "version: '3.7'\n\nservices:\n  robomaker:\n    volumes:\n      - '${DR_ROBOMAKER_MOUNT_SCRIPTS_DIR}:/scripts'"
  },
  {
    "path": "docker/docker-compose-simapp.yml",
    "content": "version: '3.7'\n\nservices:\n  robomaker:\n    volumes:\n      - '${DR_ROBOMAKER_MOUNT_SIMAPP_DIR}:/opt/simapp'"
  },
  {
    "path": "docker/docker-compose-training-swarm.yml",
    "content": "version: '3.7'\n\nservices:\n  rl_coach:\n    deploy:\n      restart_policy:\n        condition: none\n      placement:\n        constraints: [node.labels.Sagemaker == true]\n  robomaker:\n    deploy:\n      restart_policy:\n        condition: none\n      replicas: ${DR_WORKERS}\n      placement:\n        constraints: [node.labels.Robomaker == true]\n    environment:\n      - DOCKER_REPLICA_SLOT={{.Task.Slot}}"
  },
  {
    "path": "docker/docker-compose-training.yml",
    "content": "version: \"3.7\"\n\nnetworks:\n  default:\n    external: true\n    name: sagemaker-local\n\nservices:\n  rl_coach:\n    image: ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}\n    command: [\"source /root/sagemaker-venv/bin/activate && python3 /opt/ml/code/rl_coach/start.py\"]\n    working_dir: \"/opt/ml/code/\"\n    environment:\n      - RUN_ID=${DR_RUN_ID}\n      - AWS_REGION=${DR_AWS_APP_REGION}\n      - SAGEMAKER_IMAGE=${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}\n      - PRETRAINED=${DR_LOCAL_S3_PRETRAINED}\n      - PRETRAINED_S3_PREFIX=${DR_LOCAL_S3_PRETRAINED_PREFIX}\n      - PRETRAINED_S3_BUCKET=${DR_LOCAL_S3_BUCKET}\n      - PRETRAINED_CHECKPOINT=${DR_LOCAL_S3_PRETRAINED_CHECKPOINT}\n      - MODEL_S3_PREFIX=${DR_LOCAL_S3_MODEL_PREFIX}\n      - MODEL_S3_BUCKET=${DR_LOCAL_S3_BUCKET}\n      - HYPERPARAMETER_FILE_S3_KEY=${DR_LOCAL_S3_HYPERPARAMETERS_KEY}\n      - MODELMETADATA_FILE_S3_KEY=${DR_LOCAL_S3_MODEL_METADATA_KEY}\n      - CUDA_VISIBLE_DEVICES=${DR_SAGEMAKER_CUDA_DEVICES:-}\n      - MAX_MEMORY_STEPS=${DR_TRAIN_MAX_STEPS_PER_ITERATION:-}\n      - TELEGRAF_HOST=${DR_TELEGRAF_HOST:-}\n      - TELEGRAF_PORT=${DR_TELEGRAF_PORT:-}\n\n    volumes:\n      - \"/var/run/docker.sock:/var/run/docker.sock\"\n      - \"/tmp/sagemaker:/tmp/sagemaker\"\n  robomaker:\n    image: ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}\n    command: [\"${ROBOMAKER_COMMAND:-}\"]\n    ports:\n      - \"${DR_ROBOMAKER_TRAIN_PORT}:8080\"\n      - \"${DR_ROBOMAKER_GUI_PORT}:5900\"\n    environment:\n      - WORLD_NAME=${DR_WORLD_NAME}\n      - SAGEMAKER_SHARED_S3_PREFIX=${DR_LOCAL_S3_MODEL_PREFIX}\n      - SAGEMAKER_SHARED_S3_BUCKET=${DR_LOCAL_S3_BUCKET}\n      - APP_REGION=${DR_AWS_APP_REGION}\n      - S3_YAML_NAME=${DR_CURRENT_PARAMS_FILE}\n      - KINESIS_VIDEO_STREAM_NAME=${DR_KINESIS_STREAM_NAME}\n      - ENABLE_KINESIS=${DR_CAMERA_KVS_ENABLE}\n      - ENABLE_GUI=${DR_GUI_ENABLE}\n      - CUDA_VISIBLE_DEVICES=${DR_ROBOMAKER_CUDA_DEVICES:-}\n      - MULTI_CONFIG\n      - 
RTF_OVERRIDE=${DR_TRAIN_RTF:-}\n      - ROS_MASTER_URI=http://localhost:11311/\n      - ROS_IP=127.0.0.1\n      - GAZEBO_ARGS=${DR_GAZEBO_ARGS:-}\n      - GAZEBO_RENDER_ENGINE=${DR_GAZEBO_RENDER_ENGINE:-ogre2}\n      - TELEGRAF_HOST=${DR_TELEGRAF_HOST:-}\n      - TELEGRAF_PORT=${DR_TELEGRAF_PORT:-}\n    init: true\n"
  },
  {
    "path": "docker/docker-compose-webviewer-swarm.yml",
    "content": "version: '3.7'\n\nnetworks:\n  default:\n    external: true\n    name: sagemaker-local\n\nservices:\n  proxy:\n    deploy:\n      restart_policy:\n        condition: none\n      replicas: 1\n      placement:\n        constraints: [node.labels.Sagemaker == true ]\n"
  },
  {
    "path": "docker/docker-compose-webviewer.yml",
    "content": "version: '3.7'\n\nnetworks:\n  default:\n    external: true\n    name: sagemaker-local\n\nservices:\n  proxy:\n    image: nginx\n    ports:\n      - \"${DR_WEBVIEWER_PORT}:80\"\n    volumes:\n      - ${DR_VIEWER_HTML}:/usr/share/nginx/html/index.html\n      - ${DR_NGINX_CONF}:/etc/nginx/conf.d/default.conf\n"
  },
  {
    "path": "docker/metrics/configuration.env",
    "content": "# Grafana options\nGF_SECURITY_ADMIN_USER=admin\nGF_SECURITY_ADMIN_PASSWORD=admin\nGF_INSTALL_PLUGINS=\n\n# InfluxDB options\nINFLUXDB_DB=influx\nINFLUXDB_ADMIN_USER=admin\nINFLUXDB_ADMIN_PASSWORD=admin\n"
  },
  {
    "path": "docker/metrics/grafana/provisioning/dashboards/dashboard.yml",
    "content": "apiVersion: 1\n\nproviders:\n- name: 'Default'\n  folder: ''\n  options:\n    path: /etc/grafana/provisioning/dashboards\n"
  },
  {
    "path": "docker/metrics/grafana/provisioning/dashboards/deepracer-training-template.json",
    "content": "{\n  \"annotations\": {\n    \"list\": [\n      {\n        \"builtIn\": 1,\n        \"datasource\": {\n          \"type\": \"datasource\",\n          \"uid\": \"grafana\"\n        },\n        \"enable\": true,\n        \"hide\": true,\n        \"iconColor\": \"rgba(0, 211, 255, 1)\",\n        \"limit\": 100,\n        \"name\": \"Annotations & Alerts\",\n        \"type\": \"dashboard\"\n      }\n    ]\n  },\n  \"editable\": true,\n  \"fiscalYearStartMonth\": 0,\n  \"graphTooltip\": 2,\n  \"id\": 1,\n  \"links\": [],\n  \"panels\": [\n    {\n      \"datasource\": {},\n      \"fieldConfig\": {\n        \"defaults\": {\n          \"color\": {\n            \"mode\": \"palette-classic\"\n          },\n          \"custom\": {\n            \"axisBorderShow\": false,\n            \"axisCenteredZero\": false,\n            \"axisColorMode\": \"text\",\n            \"axisLabel\": \"\",\n            \"axisPlacement\": \"auto\",\n            \"barAlignment\": 0,\n            \"drawStyle\": \"line\",\n            \"fillOpacity\": 9,\n            \"gradientMode\": \"none\",\n            \"hideFrom\": {\n              \"legend\": false,\n              \"tooltip\": false,\n              \"viz\": false\n            },\n            \"insertNulls\": false,\n            \"lineInterpolation\": \"linear\",\n            \"lineWidth\": 1,\n            \"pointSize\": 5,\n            \"scaleDistribution\": {\n              \"type\": \"linear\"\n            },\n            \"showPoints\": \"auto\",\n            \"spanNulls\": false,\n            \"stacking\": {\n              \"group\": \"A\",\n              \"mode\": \"none\"\n            },\n            \"thresholdsStyle\": {\n              \"mode\": \"off\"\n            }\n          },\n          \"mappings\": [],\n          \"thresholds\": {\n            \"mode\": \"absolute\",\n            \"steps\": [\n              {\n                \"color\": \"green\",\n                \"value\": null\n              },\n              
{\n                \"color\": \"red\",\n                \"value\": 80\n              }\n            ]\n          }\n        },\n        \"overrides\": [\n          {\n            \"matcher\": {\n              \"id\": \"byName\",\n              \"options\": \"reward\"\n            },\n            \"properties\": [\n              {\n                \"id\": \"color\",\n                \"value\": {\n                  \"fixedColor\": \"red\",\n                  \"mode\": \"fixed\"\n                }\n              }\n            ]\n          }\n        ]\n      },\n      \"gridPos\": {\n        \"h\": 11,\n        \"w\": 24,\n        \"x\": 0,\n        \"y\": 0\n      },\n      \"id\": 6,\n      \"options\": {\n        \"legend\": {\n          \"calcs\": [\n            \"min\",\n            \"mean\",\n            \"max\",\n            \"lastNotNull\"\n          ],\n          \"displayMode\": \"table\",\n          \"placement\": \"bottom\",\n          \"showLegend\": true,\n          \"sortBy\": \"Max\",\n          \"sortDesc\": true\n        },\n        \"tooltip\": {\n          \"mode\": \"single\",\n          \"sort\": \"none\"\n        }\n      },\n      \"targets\": [\n        {\n          \"alias\": \"$tag_model training reward\",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n            \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          
\"refId\": \"A\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"reward\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"mean\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"phase\",\n              \"operator\": \"=\",\n              \"value\": \"training\"\n            }\n          ]\n        },\n        {\n          \"alias\": \"$tag_model complete lap reward\",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n            \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"B\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"reward\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"mean\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"status\",\n              \"operator\": \"=\",\n              \"value\": \"Lap complete\"\n            }\n          ]\n        },\n        {\n          
\"alias\": \"$tag_model eval reward\",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n            \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"C\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"reward\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"mean\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"phase\",\n              \"operator\": \"=\",\n              \"value\": \"evaluation\"\n            }\n          ]\n        }\n      ],\n      \"title\": \"Reward\",\n      \"type\": \"timeseries\"\n    },\n    {\n      \"datasource\": {},\n      \"fieldConfig\": {\n        \"defaults\": {\n          \"color\": {\n            \"mode\": \"palette-classic\"\n          },\n          \"custom\": {\n            \"axisBorderShow\": false,\n            \"axisCenteredZero\": false,\n            \"axisColorMode\": \"text\",\n            \"axisLabel\": \"\",\n            \"axisPlacement\": \"auto\",\n            \"barAlignment\": 0,\n            \"drawStyle\": \"points\",\n            \"fillOpacity\": 3,\n            \"gradientMode\": \"none\",\n            
\"hideFrom\": {\n              \"legend\": false,\n              \"tooltip\": false,\n              \"viz\": false\n            },\n            \"insertNulls\": false,\n            \"lineInterpolation\": \"linear\",\n            \"lineWidth\": 1,\n            \"pointSize\": 4,\n            \"scaleDistribution\": {\n              \"type\": \"linear\"\n            },\n            \"showPoints\": \"auto\",\n            \"spanNulls\": false,\n            \"stacking\": {\n              \"group\": \"A\",\n              \"mode\": \"none\"\n            },\n            \"thresholdsStyle\": {\n              \"mode\": \"off\"\n            }\n          },\n          \"mappings\": [],\n          \"thresholds\": {\n            \"mode\": \"absolute\",\n            \"steps\": [\n              {\n                \"color\": \"green\",\n                \"value\": null\n              },\n              {\n                \"color\": \"red\",\n                \"value\": 80\n              }\n            ]\n          }\n        },\n        \"overrides\": [\n          {\n            \"matcher\": {\n              \"id\": \"byRegexp\",\n              \"options\": \".*eval progress moving average\"\n            },\n            \"properties\": [\n              {\n                \"id\": \"custom.drawStyle\",\n                \"value\": \"line\"\n              },\n              {\n                \"id\": \"custom.showPoints\",\n                \"value\": \"never\"\n              },\n              {\n                \"id\": \"color\",\n                \"value\": {\n                  \"fixedColor\": \"orange\",\n                  \"mode\": \"fixed\"\n                }\n              }\n            ]\n          },\n          {\n            \"matcher\": {\n              \"id\": \"byRegexp\",\n              \"options\": \".*training progress moving average\"\n            },\n            \"properties\": [\n              {\n                \"id\": \"custom.drawStyle\",\n                \"value\": 
\"line\"\n              },\n              {\n                \"id\": \"custom.showPoints\",\n                \"value\": \"never\"\n              },\n              {\n                \"id\": \"color\",\n                \"value\": {\n                  \"fixedColor\": \"blue\",\n                  \"mode\": \"fixed\"\n                }\n              }\n            ]\n          }\n        ]\n      },\n      \"gridPos\": {\n        \"h\": 10,\n        \"w\": 24,\n        \"x\": 0,\n        \"y\": 11\n      },\n      \"id\": 4,\n      \"options\": {\n        \"legend\": {\n          \"calcs\": [\n            \"min\",\n            \"mean\",\n            \"max\"\n          ],\n          \"displayMode\": \"table\",\n          \"placement\": \"bottom\",\n          \"showLegend\": true\n        },\n        \"tooltip\": {\n          \"mode\": \"single\",\n          \"sort\": \"none\"\n        }\n      },\n      \"targets\": [\n        {\n          \"alias\": \"$tag_model training progress\",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n            \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"A\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"progress\"\n                ],\n                \"type\": \"field\"\n              
},\n              {\n                \"params\": [],\n                \"type\": \"mean\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"phase\",\n              \"operator\": \"=\",\n              \"value\": \"training\"\n            }\n          ]\n        },\n        {\n          \"alias\": \"$tag_model eval progress\",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n            \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"B\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"progress\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"mean\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"phase\",\n              \"operator\": \"=\",\n              \"value\": \"evaluation\"\n            }\n          ]\n        },\n        {\n          \"alias\": \"$tag_model eval progress moving average\",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n            \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                
\"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"C\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"progress\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"mean\"\n              },\n              {\n                \"params\": [\n                  30\n                ],\n                \"type\": \"moving_average\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"phase\",\n              \"operator\": \"=\",\n              \"value\": \"evaluation\"\n            }\n          ]\n        },\n        {\n          \"alias\": \"$tag_model training progress moving average\",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n            \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": 
\"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"D\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"progress\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"mean\"\n              },\n              {\n                \"params\": [\n                  30\n                ],\n                \"type\": \"moving_average\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"phase\",\n              \"operator\": \"=\",\n              \"value\": \"training\"\n            }\n          ]\n        }\n      ],\n      \"title\": \"Progress\",\n      \"type\": \"timeseries\"\n    },\n    {\n      \"datasource\": {},\n      \"fieldConfig\": {\n        \"defaults\": {\n          \"color\": {\n            \"mode\": \"palette-classic\"\n          },\n          \"custom\": {\n            \"axisBorderShow\": false,\n            \"axisCenteredZero\": false,\n            \"axisColorMode\": \"text\",\n            \"axisLabel\": \"\",\n            \"axisPlacement\": \"auto\",\n            \"barAlignment\": 0,\n            \"drawStyle\": \"points\",\n            \"fillOpacity\": 0,\n            \"gradientMode\": \"none\",\n            \"hideFrom\": {\n              \"legend\": false,\n              \"tooltip\": false,\n              \"viz\": false\n            },\n            \"insertNulls\": false,\n            \"lineInterpolation\": \"linear\",\n            \"lineWidth\": 1,\n            \"pointSize\": 4,\n            \"scaleDistribution\": {\n              \"type\": \"linear\"\n            },\n            \"showPoints\": \"auto\",\n            \"spanNulls\": false,\n            \"stacking\": {\n              \"group\": \"A\",\n              
\"mode\": \"none\"\n            },\n            \"thresholdsStyle\": {\n              \"mode\": \"off\"\n            }\n          },\n          \"decimals\": 3,\n          \"mappings\": [],\n          \"thresholds\": {\n            \"mode\": \"absolute\",\n            \"steps\": [\n              {\n                \"color\": \"green\",\n                \"value\": null\n              },\n              {\n                \"color\": \"red\",\n                \"value\": 80\n              }\n            ]\n          },\n          \"unit\": \"ms\"\n        },\n        \"overrides\": [\n          {\n            \"matcher\": {\n              \"id\": \"byRegexp\",\n              \"options\": \".*eval lap moving average\"\n            },\n            \"properties\": [\n              {\n                \"id\": \"custom.drawStyle\",\n                \"value\": \"line\"\n              },\n              {\n                \"id\": \"custom.showPoints\",\n                \"value\": \"never\"\n              },\n              {\n                \"id\": \"color\",\n                \"value\": {\n                  \"fixedColor\": \"orange\",\n                  \"mode\": \"fixed\"\n                }\n              },\n              {\n                \"id\": \"custom.lineWidth\",\n                \"value\": 2\n              }\n            ]\n          },\n          {\n            \"matcher\": {\n              \"id\": \"byRegexp\",\n              \"options\": \".*training lap moving average\"\n            },\n            \"properties\": [\n              {\n                \"id\": \"custom.drawStyle\",\n                \"value\": \"line\"\n              },\n              {\n                \"id\": \"custom.showPoints\",\n                \"value\": \"never\"\n              },\n              {\n                \"id\": \"color\",\n                \"value\": {\n                  \"fixedColor\": \"blue\",\n                  \"mode\": \"fixed\"\n                }\n              },\n             
 {\n                \"id\": \"custom.lineWidth\",\n                \"value\": 2\n              }\n            ]\n          }\n        ]\n      },\n      \"gridPos\": {\n        \"h\": 10,\n        \"w\": 24,\n        \"x\": 0,\n        \"y\": 21\n      },\n      \"id\": 2,\n      \"options\": {\n        \"legend\": {\n          \"calcs\": [\n            \"min\",\n            \"mean\",\n            \"max\",\n            \"lastNotNull\"\n          ],\n          \"displayMode\": \"table\",\n          \"placement\": \"bottom\",\n          \"showLegend\": true\n        },\n        \"tooltip\": {\n          \"mode\": \"single\",\n          \"sort\": \"none\"\n        }\n      },\n      \"targets\": [\n        {\n          \"alias\": \"$tag_model training lap \",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n            \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"A\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"elapsed_time\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"min\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"status\",\n              
\"operator\": \"=\",\n              \"value\": \"Lap complete\"\n            },\n            {\n              \"condition\": \"AND\",\n              \"key\": \"phase\",\n              \"operator\": \"=\",\n              \"value\": \"training\"\n            }\n          ]\n        },\n        {\n          \"alias\": \"$tag_model eval lap \",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n            \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"B\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"elapsed_time\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"min\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"status\",\n              \"operator\": \"=\",\n              \"value\": \"Lap complete\"\n            },\n            {\n              \"condition\": \"AND\",\n              \"key\": \"phase\",\n              \"operator\": \"=\",\n              \"value\": \"evaluation\"\n            }\n          ]\n        },\n        {\n          \"alias\": \"$tag_model eval lap moving average\",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n 
           \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"C\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"elapsed_time\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"min\"\n              },\n              {\n                \"params\": [\n                  30\n                ],\n                \"type\": \"moving_average\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"status\",\n              \"operator\": \"=\",\n              \"value\": \"Lap complete\"\n            },\n            {\n              \"condition\": \"AND\",\n              \"key\": \"phase\",\n              \"operator\": \"=\",\n              \"value\": \"evaluation\"\n            }\n          ]\n        },\n        {\n          \"alias\": \"$tag_model training lap moving average\",\n          \"datasource\": {\n            \"type\": \"influxdb\",\n            \"uid\": \"${DS_INFLUXDB}\"\n          },\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n             
 \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_training_episodes\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"D\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"elapsed_time\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"min\"\n              },\n              {\n                \"params\": [\n                  30\n                ],\n                \"type\": \"moving_average\"\n              }\n            ]\n          ],\n          \"tags\": [\n            {\n              \"key\": \"status\",\n              \"operator\": \"=\",\n              \"value\": \"Lap complete\"\n            },\n            {\n              \"condition\": \"AND\",\n              \"key\": \"phase\",\n              \"operator\": \"=\",\n              \"value\": \"training\"\n            }\n          ]\n        }\n      ],\n      \"title\": \"Training Complete Lap times\",\n      \"type\": \"timeseries\"\n    },\n    {\n      \"datasource\": {},\n      \"description\": \"\",\n      \"fieldConfig\": {\n        \"defaults\": {\n          \"color\": {\n            \"mode\": \"palette-classic\"\n          },\n          \"custom\": {\n            \"axisBorderShow\": false,\n            \"axisCenteredZero\": false,\n            \"axisColorMode\": \"text\",\n            \"axisLabel\": \"\",\n            \"axisPlacement\": \"auto\",\n            \"barAlignment\": 0,\n            \"drawStyle\": \"points\",\n            \"fillOpacity\": 0,\n            
\"gradientMode\": \"none\",\n            \"hideFrom\": {\n              \"legend\": false,\n              \"tooltip\": false,\n              \"viz\": false\n            },\n            \"insertNulls\": false,\n            \"lineInterpolation\": \"linear\",\n            \"lineWidth\": 1,\n            \"pointSize\": 5,\n            \"scaleDistribution\": {\n              \"type\": \"linear\"\n            },\n            \"showPoints\": \"auto\",\n            \"spanNulls\": false,\n            \"stacking\": {\n              \"group\": \"A\",\n              \"mode\": \"none\"\n            },\n            \"thresholdsStyle\": {\n              \"mode\": \"off\"\n            }\n          },\n          \"mappings\": [],\n          \"thresholds\": {\n            \"mode\": \"absolute\",\n            \"steps\": [\n              {\n                \"color\": \"green\",\n                \"value\": null\n              },\n              {\n                \"color\": \"red\",\n                \"value\": 80\n              }\n            ]\n          }\n        },\n        \"overrides\": [\n          {\n            \"matcher\": {\n              \"id\": \"byRegexp\",\n              \"options\": \".*entropy moving average\"\n            },\n            \"properties\": [\n              {\n                \"id\": \"custom.drawStyle\",\n                \"value\": \"line\"\n              },\n              {\n                \"id\": \"custom.showPoints\",\n                \"value\": \"never\"\n              },\n              {\n                \"id\": \"color\",\n                \"value\": {\n                  \"fixedColor\": \"blue\",\n                  \"mode\": \"fixed\"\n                }\n              }\n            ]\n          }\n        ]\n      },\n      \"gridPos\": {\n        \"h\": 11,\n        \"w\": 24,\n        \"x\": 0,\n        \"y\": 31\n      },\n      \"id\": 7,\n      \"options\": {\n        \"legend\": {\n          \"calcs\": [\n            \"min\",\n            
\"mean\",\n            \"max\",\n            \"lastNotNull\"\n          ],\n          \"displayMode\": \"table\",\n          \"placement\": \"bottom\",\n          \"showLegend\": true\n        },\n        \"tooltip\": {\n          \"mode\": \"single\",\n          \"sort\": \"none\"\n        }\n      },\n      \"targets\": [\n        {\n          \"alias\": \"$tag_model entropy\",\n          \"datasource\": {},\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            },\n            {\n              \"params\": [\n                \"none\"\n              ],\n              \"type\": \"fill\"\n            }\n          ],\n          \"measurement\": \"dr_sagemaker_epochs\",\n          \"orderByTime\": \"ASC\",\n          \"policy\": \"default\",\n          \"refId\": \"A\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"entropy\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"mean\"\n              }\n            ]\n          ],\n          \"tags\": []\n        },\n        {\n          \"alias\": \"$tag_model entropy moving average\",\n          \"datasource\": {},\n          \"groupBy\": [\n            {\n              \"params\": [\n                \"$__interval\"\n              ],\n              \"type\": \"time\"\n            },\n            {\n              \"params\": [\n                \"model\"\n              ],\n              \"type\": \"tag\"\n            }\n          ],\n          \"hide\": false,\n          \"measurement\": \"dr_sagemaker_epochs\",\n          \"orderByTime\": \"ASC\",\n          
\"policy\": \"default\",\n          \"refId\": \"B\",\n          \"resultFormat\": \"time_series\",\n          \"select\": [\n            [\n              {\n                \"params\": [\n                  \"entropy\"\n                ],\n                \"type\": \"field\"\n              },\n              {\n                \"params\": [],\n                \"type\": \"mean\"\n              },\n              {\n                \"params\": [\n                  10\n                ],\n                \"type\": \"moving_average\"\n              }\n            ]\n          ],\n          \"tags\": []\n        }\n      ],\n      \"title\": \"Epoch\",\n      \"type\": \"timeseries\"\n    }\n  ],\n  \"refresh\": \"10s\",\n  \"schemaVersion\": 39,\n  \"tags\": [],\n  \"templating\": {\n    \"list\": []\n  },\n  \"time\": {\n    \"from\": \"now-1h\",\n    \"to\": \"now\"\n  },\n  \"timepicker\": {},\n  \"timezone\": \"\",\n  \"title\": \"DeepRacer Training template\",\n  \"uid\": \"adke0lwv5zwg0e\",\n  \"version\": 1,\n  \"weekStart\": \"\"\n}"
  },
  {
    "path": "docker/metrics/grafana/provisioning/datasources/influxdb.yml",
    "content": "# config file version\napiVersion: 1\n\n# list of datasources that should be deleted from the database\ndeleteDatasources:\n  - name: Influxdb\n    orgId: 1\n\n# list of datasources to insert/update depending\n# whats available in the database\ndatasources:\n  # <string, required> name of the datasource. Required\n- name: InfluxDB\n  # <string, required> datasource type. Required\n  type: influxdb\n  # <string, required> access mode. direct or proxy. Required\n  access: proxy\n  # <int> org id. will default to orgId 1 if not specified\n  orgId: 1\n  # <string> url\n  url: http://influxdb:8086\n  # <string> database password, if used\n  password: \"admin\"\n  # <string> database user, if used\n  user: \"admin\"\n  # <string> database name, if used\n  database: \"influx\"\n  # <bool> enable/disable basic auth\n  basicAuth: false\n#  withCredentials:\n  # <bool> mark as default datasource. Max one per org\n  isDefault: true\n  # <map> fields that will be converted to json and stored in json_data\n  jsonData:\n    timeInterval: \"5s\"\n#     graphiteVersion: \"1.1\"\n#     tlsAuth: false\n#     tlsAuthWithCACert: false\n#  # <string> json object of data that will be encrypted.\n#  secureJsonData:\n#    tlsCACert: \"...\"\n#    tlsClientCert: \"...\"\n#    tlsClientKey: \"...\"\n  version: 1\n  # <bool> allow users to edit datasources from the UI.\n  editable: false\n"
  },
  {
    "path": "docker/metrics/telegraf/etc/telegraf.conf",
    "content": "# Telegraf configuration\n\n# Telegraf is entirely plugin driven. All metrics are gathered from the\n# declared inputs, and sent to the declared outputs.\n\n# Plugins must be declared in here to be active.\n# To deactivate a plugin, comment out the name and any variables.\n\n# Use 'telegraf -config telegraf.conf -test' to see what metrics a config\n# file would generate.\n\n# Global tags can be specified here in key=\"value\" format.\n[global_tags]\n  # dc = \"us-east-1\" # will tag all metrics with dc=us-east-1\n  # rack = \"1a\"\n\n# Configuration for telegraf agent\n[agent]\n  ## Default data collection interval for all inputs\n  interval = \"5s\"\n  ## Rounds collection interval to 'interval'\n  ## ie, if interval=\"10s\" then always collect on :00, :10, :20, etc.\n  round_interval = true\n\n  ## Telegraf will cache metric_buffer_limit metrics for each output, and will\n  ## flush this buffer on a successful write.\n  metric_buffer_limit = 10000\n  ## Flush the buffer whenever full, regardless of flush_interval.\n  flush_buffer_when_full = true\n\n  ## Collection jitter is used to jitter the collection by a random amount.\n  ## Each plugin will sleep for a random time within jitter before collecting.\n  ## This can be used to avoid many plugins querying things like sysfs at the\n  ## same time, which can have a measurable effect on the system.\n  collection_jitter = \"0s\"\n\n  ## Default flushing interval for all outputs. You shouldn't set this below\n  ## interval. Maximum flush_interval will be flush_interval + flush_jitter\n  flush_interval = \"1s\"\n  ## Jitter the flush interval by a random amount. 
This is primarily to avoid\n  ## large write spikes for users running a large number of telegraf instances.\n  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s\n  flush_jitter = \"0s\"\n\n  ## Run telegraf in debug mode\n  debug = false\n  ## Run telegraf in quiet mode\n  quiet = false\n  ## Override default hostname, if empty use os.Hostname()\n  hostname = \"\"\n\n\n###############################################################################\n#                                  OUTPUTS                                    #\n###############################################################################\n\n# Configuration for influxdb server to send metrics to\n[[outputs.influxdb]]\n  # The full HTTP or UDP endpoint URL for your InfluxDB instance.\n  # Multiple urls can be specified but it is assumed that they are part of the same\n  # cluster, this means that only ONE of the urls will be written to each interval.\n  # urls = [\"udp://localhost:8089\"] # UDP endpoint example\n  urls = [\"http://influxdb:8086\"] # required\n  # The target database for metrics (telegraf will create it if not exists)\n  database = \"influx\" # required\n  # Precision of writes, valid values are \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", \"h\".\n  # note: using second precision greatly helps InfluxDB compression\n  precision = \"s\"\n\n  ## Write timeout (for the InfluxDB client), formatted as a string.\n  ## If not provided, will default to 5s. 
0s means no timeout (not recommended).\n  timeout = \"5s\"\n  # username = \"telegraf\"\n  # password = \"metricsmetricsmetricsmetrics\"\n  # Set the user agent for HTTP POSTs (can be useful for log differentiation)\n  # user_agent = \"telegraf\"\n  # Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)\n  # udp_payload = 512\n\n\n###############################################################################\n#                                  INPUTS                                     #\n###############################################################################\n# Statsd Server\n[[inputs.statsd]]\n  ## Protocol, must be \"tcp\", \"udp4\", \"udp6\" or \"udp\" (default=udp)\n  protocol = \"udp\"\n\n  ## MaxTCPConnection - applicable when protocol is set to tcp (default=250)\n  max_tcp_connections = 250\n\n  ## Enable TCP keep alive probes (default=false)\n  tcp_keep_alive = false\n\n  ## Specifies the keep-alive period for an active network connection.\n  ## Only applies to TCP sockets and will be ignored if tcp_keep_alive is false.\n  ## Defaults to the OS configuration.\n  # tcp_keep_alive_period = \"2h\"\n\n  ## Address and port to host UDP listener on\n  service_address = \":8125\"\n\n  ## The following configuration options control when telegraf clears it's cache\n  ## of previous values. 
If set to false, then telegraf will only clear it's\n  ## cache when the daemon is restarted.\n  ## Reset gauges every interval (default=true)\n  delete_gauges = true\n  ## Reset counters every interval (default=true)\n  delete_counters = true\n  ## Reset sets every interval (default=true)\n  delete_sets = true\n  ## Reset timings & histograms every interval (default=true)\n  delete_timings = true\n\n  ## Percentiles to calculate for timing & histogram stats\n  percentiles = [90]\n\n  ## separator to use between elements of a statsd metric\n  metric_separator = \"_\"\n\n  ## Parses tags in the datadog statsd format\n  ## http://docs.datadoghq.com/guides/dogstatsd/\n  parse_data_dog_tags = false\n\n  ## Statsd data translation templates, more info can be read here:\n  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#graphite\n  # templates = [\n  #     \"cpu.* measurement*\"\n  # ]\n\n  ## Number of UDP messages allowed to queue up, once filled,\n  ## the statsd server will start dropping packets\n  allowed_pending_messages = 10000\n\n  ## Number of timing/histogram values to track per-measurement in the\n  ## calculation of percentiles. Raising this limit increases the accuracy\n  ## of percentiles but also increases the memory usage and cpu time.\n  percentile_limit = 1000\n\n  ## Maximum socket buffer size in bytes, once the buffer fills up, metrics\n  ## will start dropping.  
Defaults to the OS default.\n  # read_buffer_size = 65535\n\n# Read metrics about cpu usage\n[[inputs.cpu]]\n  ## Whether to report per-cpu stats or not\n  percpu = true\n  ## Whether to report total system cpu stats or not\n  totalcpu = true\n  ## Comment this line if you want the raw CPU time metrics\n  fielddrop = [\"time_*\"]\n\n\n# Read metrics about disk usage by mount point\n[[inputs.disk]]\n  ## By default, telegraf gather stats for all mountpoints.\n  ## Setting mountpoints will restrict the stats to the specified mountpoints.\n  # mount_points = [\"/\"]\n\n  ## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually\n  ## present on /run, /var/run, /dev/shm or /dev).\n  ignore_fs = [\"tmpfs\", \"devtmpfs\"]\n\n\n# Read metrics about disk IO by device\n[[inputs.diskio]]\n  ## By default, telegraf will gather stats for all devices including\n  ## disk partitions.\n  ## Setting devices will restrict the stats to the specified devices.\n  # devices = [\"sda\", \"sdb\"]\n  ## Uncomment the following line if you need disk serial numbers.\n  # skip_serial_number = false\n\n\n# Get kernel statistics from /proc/stat\n[[inputs.kernel]]\n  # no configuration\n\n\n# Read metrics about memory usage\n[[inputs.mem]]\n  # no configuration\n\n\n# Get the number of processes and group them by status\n[[inputs.processes]]\n  # no configuration\n\n\n# Read metrics about swap memory usage\n[[inputs.swap]]\n  # no configuration\n\n\n# Read metrics about system load & uptime\n[[inputs.system]]\n  # no configuration\n\n# Read metrics about network interface usage\n[[inputs.net]]\n  # collect data only about specific interfaces\n  # interfaces = [\"eth0\"]\n\n\n[[inputs.netstat]]\n  # no configuration\n\n[[inputs.interrupts]]\n  # no configuration\n\n[[inputs.linux_sysctl_fs]]\n  # no configuration\n\n[[inputs.socket_listener]]\n  service_address = \"udp://:8092\""
  },
  {
    "path": "docs/_config.yml",
    "content": "---\ntheme: jekyll-theme-slate\nmarkdown: GFM\nname: Deepracer-for-Cloud\nplugins:\n  - jekyll-relative-links\nrelative_links:\n  enabled:     true\n  collections: false"
  },
  {
    "path": "docs/docker.md",
    "content": "# About the Docker setup\n\nDRfC supports running Docker in to modes `swarm` and `compose` - this behaviour is configured in `system.env` through `DR_DOCKER_STYLE`.\n\n## Swarm Mode\n\nDocker Swarm mode is the default. Docker Swarm makes it possible to connect multiple hosts together to spread the load -- esp. useful if one wants to run multiple Robomaker workers, but can also be useful locally if one has two computers that each are not powerful enough to run DeepRacer.\n\nIn Swarm mode DRfC creates Stacks, using `docker stack`. During operations one can check running stacks through `docker stack ls`, and running services through `docker stack <id> ls`.\n\nDRfC is installed only on the manager. (The first installed host.) Swarm workers are 'dumb' and do not need to have DRfC installed.\n\n### Key features\n\n* Allows user to connect multiple computers on the same network. (In AWS the instances must be connected on same VPC, and instances must be allowed to communicate.)\n* Supports [multiple Robomaker workers](multi_worker.md)\n* Supports [running multiple parallel experiments](multi_run.md)\n\n### Limitations\n\n* The Sagemaker container can only be run on the manager.\n* Docker images are downloaded from Docker Hub. Locally built images are allowed only if they have a unique tag, not in Docker Hub. If you have multiple Docker nodes ensure that they all have the image available.\n\n### Connecting Workers\n\n* On the manager run `docker swarm join-token manager`.\n* On the worker run the command that was displayed on the manager `docker swarm join --token <token> <ip>:<port>`.\n\n### Ports\n\nDocker Swarm will automatically put a load-balancer in front of all replicas in a service. This means that the ROS Web View, which provides a video stream of the DeepRacer during training, will be load balanced - sharing one port (`8080`). If you have multiple workers (even across multiple hosts) then press F5 to cycle through them. 
\n\n## Compose Mode\n\nIn Compose mode DRfC creates Services, using `docker compose`. During operations one can check running stacks through `docker service ls`, and running services through `docker service ps`.\n\n### Key features\n\n* Supports [multiple Robomaker workers](multi_worker.md)\n* Supports [running multiple parallel experiments](multi_run.md)\n* Supports [GPU Accelerated OpenGL for Robomaker](opengl.md)\n\n### Limitations\n\n* Workload cannot be spread across multiple hosts.\n\n### Ports\n\nIn the case of using Docker Compose the different Robomaker worker will require unique ports for ROS Web Vew and VNC. Docker will assign these dynamically. Use `docker ps` to see which container has been assigned which ports.\n"
  },
  {
    "path": "docs/droa.md",
    "content": "# DeepRacer on AWS (DRoA) Integration\n\n[DeepRacer on AWS](https://aws.amazon.com/solutions/implementations/deepracer-on-aws/) is the community-hosted replacement for the original AWS DeepRacer console. DRfC includes a set of `droa-*` commands that let you manage models in your DRoA installation directly from the command line.\n\n## Prerequisites\n\n### Install DRoA\n\nFollow the [DeepRacer on AWS installation guide](https://github.com/aws-deepracer-community/deepracer-on-aws) to deploy DRoA into your own AWS account.\n\n### Configure DRfC\n\nIn `system.env` set:\n\n```bash\nDR_DROA_URL=https://<your-droa-domain>   # e.g. https://deepracer.aws.example.com\nDR_DROA_USERNAME=<your-droa-email>\n```\n\n`DR_DROA_URL` is the base URL of your DRoA deployment. At runtime, DRfC fetches `<DR_DROA_URL>/env.js` to discover the region, Cognito pools, API endpoint, and upload bucket automatically — no additional AWS config required.\n\n> **Security**: never store your DRoA password in `system.env`. All commands prompt for it interactively (or accept `--password` on the CLI). Credentials are cached in `~/.droa-cache/` for the duration of the session token.\n\n### Python environment\n\nRun `bin/prepare.sh` to create the `.venv` virtual environment and install the required Python packages (`boto3`, `pyyaml`, `requests`, `deepracer-utils`). 
After `source bin/activate.sh` the venv is active and all `droa-*` commands are available.\n\n---\n\n## Commands\n\n### `droa-list-models`\n\nList all models in your DRoA installation, sorted newest-first.\n\n```\ndroa-list-models [--json]\n```\n\nOutput columns: `modelId`, `name`, `status`, `trainingStatus`, `createdAt`.\n\n| Status | Meaning |\n|--------|---------|\n| `IMPORTING` | Import in progress |\n| `READY` | Available for evaluation |\n| `TRAINING` | Training job running |\n| `ERROR` | Import or training failed |\n| `DELETING` | Deletion in progress |\n\n---\n\n### `droa-get-model`\n\nShow details of a single model.\n\n```\ndroa-get-model <modelId> [--verbose] [--summary] [--json]\n```\n\n| Flag | Description |\n|------|-------------|\n| *(none)* | Identity, car config, training config, metadata |\n| `--verbose` | Adds action space and reward function source |\n| `--summary` | Adds mean training metrics (reward, progress) via DeepRacer Utils |\n| `--json` | Raw JSON output |\n\n---\n\n### `droa-download-logs`\n\nDownload training or evaluation logs for a model.\n\n```\ndroa-download-logs <modelId> [--asset-type TRAINING_LOGS|EVALUATION_LOGS|PHYSICAL_CAR_MODEL|VIRTUAL_MODEL|VIDEOS]\n                              [--evaluation-id <id>]\n                              [--output <file>]\n                              [--summary]\n```\n\n| Flag | Description |\n|------|-------------|\n| `--asset-type` | Asset type (default: `TRAINING_LOGS`) |\n| `--evaluation-id` | Required when `--asset-type EVALUATION_LOGS` |\n| `--output` / `-o` | Output file path (default: derived from the presigned URL filename) |\n| `--summary` | Print DeepRacer Utils stability summary after download (TRAINING_LOGS only) |\n\nThe command polls until the asset is ready (up to 5 minutes for `VIRTUAL_MODEL`).\n\n---\n\n### `droa-delete-model`\n\nDelete a model. 
Only models with status `READY` or `ERROR` can be deleted.\n\n```\ndroa-delete-model <modelId> [-y/--yes]\n```\n\nWithout `--yes`, you are shown the model name and status and must type the model name to confirm. Deletion is asynchronous — the model transitions to `DELETING` status.\n\n---\n\n### `droa-import-model`\n\nImport a locally trained DRFC model into DRoA.\n\n```\ndroa-import-model (--model-prefix <prefix> | --model-dir <dir>)\n                  [--model-name <name>]\n                  [--model-description <text>]\n                  [--best | --checkpoint <step>]\n```\n\n#### Source options\n\n| Option | Description |\n|--------|-------------|\n| `--model-prefix` | Pull directly from local MinIO S3 (`DR_LOCAL_S3_BUCKET`). Defaults `--model-name` to the prefix. |\n| `--model-dir` | Use a pre-assembled local directory containing all required model files. |\n\n#### Checkpoint selection (`--model-prefix` only)\n\n| Flag | Behaviour |\n|------|-----------|\n| *(none)* | Use the last checkpoint |\n| `--best` | Use the best checkpoint |\n| `--checkpoint STEP` | Use the checkpoint at the given training step |\n\n#### What happens\n\n1. Model files are pulled from MinIO (path-style S3, using `DR_MINIO_URL` and `DR_LOCAL_S3_PROFILE`).\n2. `training_params.yaml` is copied from the bucket (`training_params_1.yaml` preferred for multi-worker runs). If missing, it is generated from `DR_*` environment variables.\n3. `WORLD_NAME` direction suffixes (`_cw`, `_ccw`) are stripped and `TRACK_DIRECTION_CLOCKWISE` is added — required by DRoA's track validation.\n4. 
Files are uploaded to the DRoA S3 transit bucket and the import API is called.\n\n#### Required files (when using `--model-dir`)\n\n- `model_metadata.json`\n- `reward_function.py`\n- `training_params.yaml`\n- `hyperparameters.json`\n\n---\n\n## Environment variables reference\n\n| Variable | Location | Description |\n|----------|----------|-------------|\n| `DR_DROA_URL` | `system.env` | Base URL of your DRoA deployment |\n| `DR_DROA_USERNAME` | `system.env` | DRoA login email |\n| `DR_MINIO_URL` | `system.env` | MinIO endpoint URL (e.g. `http://minio:9000`) |\n| `DR_LOCAL_S3_PROFILE` | `system.env` | boto3 AWS profile name for MinIO access |\n| `DR_LOCAL_S3_BUCKET` | `run.env` | Local S3 bucket name |\n| `DR_LOCAL_S3_MODEL_PREFIX` | `run.env` | Default model prefix for `--model-prefix` |\n\nAll `droa-*` commands also accept `--url`, `--username`, and `--password` flags to override the environment variables.\n"
  },
  {
    "path": "docs/head-to-head.md",
    "content": "# Head-to-Head Race (Beta)\n\nIt is possible to run a head-to-head race, similar to the races in the brackets \nrun by AWS in the Virtual Circuits to  determine the winner of the head-to-bot races.\n\nThis replaces the \"Tournament Mode\".\n\n## Introduction\n\nThe concept is that you have two models racing each other, one Purple and one Orange Car. One car\nis powered by our primary configured model, and the second car is powered by the model in `DR_EVAL_OPP_S3_MODEL_PREFIX`\n\n## Configuration\n\n### run.env\n\nConfigure `run.env` with the following parameters:\n* `DR_RACE_TYPE` should be `HEAD_TO_MODEL`.\n* `DR_EVAL_OPP_S3_MODEL_PREFIX` will be the S3 prefix for the secondary model.\n* `DR_EVAL_OPP_CAR_NAME` is the display name of this model.\n\nMetrics, Traces and Videos will be stored in each models' prefix.\n\n## Run\n\nRun the race with `dr-start-evaluation`; one race will be run. "
  },
  {
    "path": "docs/index.md",
    "content": "# Introduction\n\nProvides a quick and easy way to get up and running with a DeepRacer training environment in AWS or Azure, using either the Azure [N-Series Virtual Machines](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu) or [AWS EC2 Accelerated Computing instances](https://aws.amazon.com/ec2/instance-types/?nc1=h_ls#Accelerated_Computing), or locally on your own desktop or server.\n\nDeepRacer-For-Cloud (DRfC) started as an extension of the work done by Alex (https://github.com/alexschultz/deepracer-for-dummies), which is again a wrapper around the amazing work done by Chris (https://github.com/crr0004/deepracer). With the introduction of the second generation Deepracer Console the repository has been split up. This repository contains the scripts needed to *run* the training, but depends on Docker Hub to provide pre-built docker images. All the under-the-hood building capabilities have been moved to my [Deepracer Build](https://github.com/aws-deepracer-community/deepracer) repository.\n\n# Main Features\n\nDRfC supports a wide set of features to ensure that you can focus on creating the best model:\n* User-friendly\n\t* Based on the continously updated community [Robomaker](https://github.com/aws-deepracer-community/deepracer-simapp) and [Sagemaker](https://github.com/aws-deepracer-community/deepracer-sagemaker-container) containers, supporting a wide range of CPU and GPU setups.\n\t* Wide set of scripts (`dr-*`) enables effortless training.\n\t* Detection of your AWS DeepRacer Console models; allows upload of a locally trained model to any of them.\n* Modes\n\t* Time Trial\n\t* Object Avoidance\n\t* Head-to-Bot\n* Training\n\t* Multiple Robomaker instances per Sagemaker (N:1) to improve training progress.\n\t* Multiple training sessions in parallel - each being (N:1) if hardware supports it - to test out things in parallel.\n\t* Connect multiple nodes together (Swarm-mode only) to combine the powers of multiple 
computers/instances.\n* Evaluation\n\t* Evaluate independently from training.\n\t* Save evaluation run to MP4 file in S3.\n* Logging\n\t* Training metrics and trace files are stored to S3.\n\t* Optional integration with AWS CloudWatch.\n\t* Optional exposure of Robomaker internal log-files.\n* Technology\n\t* Supports both Docker Swarm (used for connecting multiple nodes together) and Docker Compose (used to support OpenGL)\n\n# Documentation\n\n* [Initial Installation](installation.md)\n* [DeepRacer on AWS (DRoA) Integration](droa.md)\n* [Reference](reference.md)\n* [Using multiple Robomaker workers](multi_worker.md)\n* [Managing experiments and running multiple parallel experiments](multi_run.md)\n* [GPU Accelerated OpenGL for Robomaker](opengl.md)\n* [Having multiple GPUs in one Computer](multi_gpu.md)\n* [Installing on Windows](windows.md)\n* [Run a Head-to-Head Race](head-to-head.md)\n* [Watching the car](video.md)\n\n# Support\n\n* For general support it is suggested to join the [AWS DeepRacing Community](https://deepracing.io/). The Community Slack has a channel #dr-training-local where the community provides active support.\n* Create a GitHub issue if you find an actual code issue, or where updates to documentation would be required.\n"
  },
  {
    "path": "docs/installation.md",
    "content": "# Installing Deepracer-for-Cloud\n\n## Requirements\n\nDepending on your needs as well as specific needs of the cloud platform you can configure your VM to your liking. Both CPU-only as well as GPU systems are supported.\n\n**AWS**:\n\n* EC2 instance of type G3, G4, P2 or P3 - recommendation is g4dn.2xlarge - for GPU enabled training. C5 or M6 types - recommendation is c5.2xlarge - for CPU training.\n  * Ubuntu 20.04\n  * Minimum 30 GB, preferred 40 GB of OS disk.\n  * Ephemeral Drive connected\n  * Minimum of 8 GB GPU-RAM if running with GPU.\n  * Recommended at least 6 VCPUs\n* S3 bucket. Preferrably in same region as EC2 instance.\n* The internal `sagemaker-local` docker network runs by default on `192.168.2.0/24`. Ensure that your AWS IPC does not overlap with this subnet.\n\n**Azure**:\n\n* N-Series VM that comes with NVIDIA Graphics Adapter - recommendation is NC6_Standard\n  * Ubuntu 20.04\n  * Standard 30 GB OS drive is sufficient to get started.\n  * Recommended to add an additional 32 GB data disk if you want to use the Log Analysis container.\n  * Minimum 8 GB GPU-RAM\n  * Recommended at least 6 VCPUs\n* Storage Account with one Blob container configured for Access Key authentication.\n\n**Local**:\n\n* A modern, comparatively powerful, Intel based system.\n  * Ubuntu 20.04, other Linux-dristros likely to work.\n  * 4 core-CPU, equivalent to 8 vCPUs; the more the better.\n  * NVIDIA Graphics adapter with minimum 8 GB RAM for Sagemaker to run GPU. Robomaker enabled GPU instances need ~1 GB each.\n  * System RAM + GPU RAM should be at least 32 GB.\n* Running DRfC Ubuntu 20.04 on Windows using Windows Subsystem for Linux 2 is possible. 
See [Installing on Windows](windows.md)\n\n## Installation\n\nThe package comes with preparation and setup scripts that allow a turn-key setup for a fresh virtual machine.\n\n```shell\ngit clone https://github.com/aws-deepracer-community/deepracer-for-cloud.git\n```\n\n**For cloud setup** execute:\n\n```shell\ncd deepracer-for-cloud && ./bin/prepare.sh\n```\n\nThis will prepare the VM by partitioning additional drives as well as installing all prerequisites. After a reboot it will continue by running `./bin/init.sh`, setting up the full repository and downloading the core Docker images. Depending on your environment this may take up to 30 minutes. The scripts will create a file `DONE` once completed.\n\nThe installation script will adapt `.profile` to ensure that all settings are applied on login. Otherwise run the activation with `source bin/activate.sh`.\n\n**For local install** it is recommended *not* to run the `bin/prepare.sh` script; it might make more changes than you want. Rather, ensure that all prerequisites are set up and run `bin/init.sh` directly.\n\nSee also the [following article](https://awstip.com/deepracer-for-cloud-drfc-local-setup-3c6418b2c75a) for guidance.\n\nThe Init Script takes a few parameters:\n\n| Variable | Description |\n|----------|-------------|\n| `-c <cloud>` | Sets the cloud version to be configured, automatically updates the `DR_CLOUD` parameter in `system.env`. Options are `azure`, `aws` or `local`. Default is `local`. |\n| `-a <arch>` | Sets the architecture to be configured. Either `cpu` or `gpu`. Default is `gpu`. |\n\n## Environment Setup\n\nThe initialization script will attempt to auto-detect your environment (`Azure`, `AWS` or `Local`), and store the outcome in the `DR_CLOUD` parameter in `system.env`. You can also pass in a `-c <cloud>` parameter to override it, e.g. 
if you want to run the minio-based `local` mode in the cloud.\n\nThe main difference between the modes lies in the authentication mechanism and the type of storage being configured. The next chapters will review each type of environment on its own.\n\n### AWS\n\nIn AWS it is possible to set up authentication to S3 in two ways: Integrated sign-on using [IAM Roles](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) or using access keys.\n\n#### IAM Role\n\nTo use IAM Roles you will need:\n\n* An empty S3 bucket in the same region as the EC2 instance.\n* An IAM Role that has permissions to:\n  * Access both the *new* S3 bucket as well as the DeepRacer bucket.\n  * AmazonVPCReadOnlyAccess\n  * AmazonKinesisVideoStreamsFullAccess if you want to stream to Kinesis\n  * CloudWatch\n* An EC2 instance with the defined IAM Role assigned.\n* Configure `system.env` as follows:\n  * `DR_LOCAL_S3_PROFILE=default`\n  * `DR_LOCAL_S3_BUCKET=<bucketname>`\n  * `DR_UPLOAD_S3_PROFILE=default`\n  * `DR_UPLOAD_S3_BUCKET=<your-aws-deepracer-bucket>`\n* Run `dr-update` for the configuration to take effect.\n\n#### Manual setup\n\nFor access with an IAM user you will need:\n\n* An empty S3 bucket in the same region as the EC2 instance.\n* A real AWS IAM user set up with access keys:\n  * The user should have permissions to access the *new* bucket as well as the dedicated DeepRacer S3 bucket.\n  * Use `aws configure` to configure this into the default profile.\n* Configure `system.env` as follows:\n  * `DR_LOCAL_S3_PROFILE=default`\n  * `DR_LOCAL_S3_BUCKET=<bucketname>`\n  * `DR_UPLOAD_S3_PROFILE=default`\n  * `DR_UPLOAD_S3_BUCKET=<your-aws-deepracer-bucket>`\n* Run `dr-update` for the configuration to take effect.\n\n### Azure\n\nMinio has deprecated the gateway feature that exposed Azure Blob Storage as an S3 bucket. 
Azure mode now sets up minio in the same way as in local mode.\n\nIf you want to use awscli (`aws`) to manually move files then use `aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 ...`, as this will set both the `--profile` and `--endpoint-url` parameters to match your configuration.\n\n### Local\n\nLocal mode runs a minio server that hosts the data in the `docker/volumes` directory. It is otherwise command-compatible with the Azure setup, as the data is accessible via Minio and not via native S3.\n\nIn Local mode the script-set requires the following:\n\n* Configure the Minio credentials with `aws configure --profile minio`. The default configuration uses the `minio` profile for Minio access. You can choose any username and password, but the username must be at least 3 characters long and the password at least 8.\n* A real AWS IAM user configured with `aws configure` to enable upload of models into AWS DeepRacer.\n* Configure `system.env` as follows:\n  * `DR_LOCAL_S3_PROFILE=minio`\n  * `DR_LOCAL_S3_BUCKET=<bucketname>`\n  * `DR_UPLOAD_S3_PROFILE=default`\n  * `DR_UPLOAD_S3_BUCKET=<your-aws-deepracer-bucket>`\n* Run `dr-update` for the configuration to take effect.\n\n## First Run\n\nFor the first run the following final steps are needed. This creates a training run with all default values.\n\n* Define your custom files in `custom_files/` - samples can be found in `defaults`, which you must copy over:\n  * `hyperparameters.json` - defining the training hyperparameters\n  * `model_metadata.json` - defining the action space and sensors\n  * `reward_function.py` - defining the reward function\n* Upload the files into the bucket with `dr-upload-custom-files`. 
This will also start minio if required.\n* Start training with `dr-start-training`.\n\nAfter a while you will see the Sagemaker logs on the screen.\n\n## Troubleshooting\n\nHere are some hints for troubleshooting specific issues you may encounter.\n\n### Local training troubleshooting\n\n| Issue        | Troubleshooting hint |\n|------------- | ---------------------|\nGet messages like \"Sagemaker is not running\" | Run `docker ps -a` to see if the containers are running or if they stopped due to some errors. If running after a fresh install, try restarting the system.\nCheck docker errors for specific container | Run `docker logs -f <containerid>`\nGet message \"Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on interface <your_interface> ...\" when running `./bin/init.sh -c local -a cpu` | It means you have multiple IP addresses and you need to specify one within `./bin/init.sh`.<br> If you don't care which one to use, you can get the first one by running ```ifconfig \| grep $(route \| awk '/^default/ {print $8}') -a1 \| grep -o -P '(?<=inet ).*(?= netmask)'```.<br> Edit `./bin/init.sh`, locate the line `docker swarm init` and change it to `docker swarm init --advertise-addr <your_IP>`.<br> Rerun `./bin/init.sh -c local -a cpu`.\nI don't have any of the `dr-*` commands | Run `source bin/activate.sh`.\n"
  },
  {
    "path": "docs/mac.md",
    "content": "# Running DeepRacer-for-Cloud on macOS\n\nDRfC can be run on macOS, both on AWS Mac EC2 instances (mac1/mac2 family) and on local Mac hardware (Intel or Apple Silicon). Because macOS does not support NVIDIA GPUs, training always runs in **CPU mode**.\n\n---\n\n## Architecture overview\n\nOn macOS, Docker containers run inside a lightweight Linux VM managed by [Colima](https://github.com/abiosoft/colima) rather than directly on the host. This has a few implications you should be aware of:\n\n| Concern | Impact |\n|---|---|\n| **No NVIDIA GPU** | Always `cpu` architecture; training is slower than a GPU instance |\n| **Colima VM filesystem** | Bind-mount paths (e.g. `/tmp/sagemaker`) must exist inside the VM, not on the macOS host |\n| **IMDS not reachable from VM** | IAM role credentials are not automatically available inside containers; explicit AWS keys must be configured |\n| **BSD userland** | `sed`, `grep`, `sort`, `readlink` differ from GNU; key shell entrypoints use portable path handling, but custom scripts should still avoid assuming GNU-only flags |\n| **bash 3.2 ships with macOS** | A modern bash 5 must be installed via Homebrew and set as the login shell |\n\n---\n\n## Option 1: AWS Mac EC2 instance\n\nAWS offers bare-metal Mac instances (`mac1.metal` for Intel, `mac2.metal` / `mac2-m2.metal` for Apple Silicon). These run macOS natively and support EC2 features like IAM roles, S3, and instance metadata — with the IMDS caveat noted above.\n\n### Prerequisites\n\n* A Mac EC2 instance running macOS Monterey (12) or later\n* An IAM role or IAM user with permissions for S3 (and optionally STS, CloudWatch)\n* An S3 bucket in the same region as the instance\n\n### Step 1 — Clone the repository\n\n```bash\ngit clone https://github.com/aws-deepracer-community/deepracer-for-cloud.git\ncd deepracer-for-cloud\n```\n\n### Step 2 — Run prepare-mac.sh\n\n```bash\nbash bin/prepare-mac.sh\n```\n\nThis script will:\n\n1. 
Verify macOS version compatibility\n2. Install [Homebrew](https://brew.sh) if not present\n3. Install required packages: `jq`, `python3`, `git`, `screen`, `bash`\n4. Install bash 5 and set it as the default login shell\n5. Add a `~/.bash_profile` bootstrap so bash 5 is used even when SSH starts `/bin/bash` (3.2)\n6. Install the AWS CLI v2 via the official `.pkg` installer (avoids Homebrew Python conflicts)\n7. Install [Colima](https://github.com/abiosoft/colima) and the Docker CLI\n8. Start Colima (4 vCPUs, 8 GB RAM, 60 GB disk — adjust as needed)\n9. Create `/tmp/sagemaker` inside the Colima VM\n10. Install a launchd agent so Colima auto-starts on login\n\nAfter the script completes, **log out and back in** so the new default shell takes effect.\n\n### Step 3 — Configure AWS credentials\n\nBecause containers run inside Colima's Linux VM, they cannot reach the EC2 Instance Metadata Service at `169.254.169.254`. You must provide explicit AWS credentials:\n\n```bash\naws configure --profile default\n```\n\nEnter an Access Key ID and Secret Access Key for an IAM user (or long-term credentials). The profile name must match `DR_LOCAL_S3_PROFILE` in `system.env` (default: `default` for AWS cloud setups).\n\n> **Tip:** Create a dedicated IAM user with a policy scoped to your S3 bucket rather than using root or overly broad credentials.\n\n### Step 4 — Run init.sh\n\n```bash\nbin/init.sh -c aws -a cpu\n```\n\nThis sets up the directory structure, configures `system.env` and `run.env`, and pulls the Docker images. Image pulls may take a while depending on bandwidth.\n\n### Step 5 — Activate and train\n\n```bash\nsource bin/activate.sh\ndr-upload-custom-files\ndr-start-training -q\n```\n\n---\n\n## Option 2: Local Mac (desktop/laptop)\n\nRunning DRfC locally on a Mac works for development and small-scale training. 
Performance is limited by CPU speed and memory.\n\n### Differences from EC2\n\n* No IAM role — configure an IAM user with `aws configure`\n* `DR_CLOUD` should be set to `local` in `system.env`, which uses a local MinIO container as the S3 backend\n* Colima memory and CPU limits should be tuned to your machine (leave headroom for the macOS host)\n\n### Recommended Colima sizing\n\n| Mac | Recommended Colima config |\n|---|---|\n| M1/M2/M3 with 16 GB RAM | `--cpu 6 --memory 10 --disk 60` |\n| M1/M2/M3 with 32 GB RAM | `--cpu 10 --memory 20 --disk 60` |\n| Intel with 16 GB RAM | `--cpu 4 --memory 8 --disk 60` |\n\nTo change the sizing after initial setup:\n\n```bash\ncolima stop\ncolima start --cpu 6 --memory 10 --disk 60\n```\n\n### Apple Silicon (arm64) and container image architecture\n\nThe DRfC SimApp images are built for `amd64` (x86_64). On Apple Silicon, Colima runs them via emulation. This works but is slower. To enable it:\n\n```bash\n# Install Rosetta 2 if not already present\nsoftwareupdate --install-rosetta\n\n# Start Colima with x86_64 architecture\ncolima stop\ncolima start --arch x86_64 --cpu 4 --memory 8 --disk 60\n```\n\n> Note: Once Colima is started with `--arch x86_64`, it stays in that mode until deleted. 
You cannot mix architectures in the same Colima instance.\n\n### Installation steps\n\n```bash\ngit clone https://github.com/aws-deepracer-community/deepracer-for-cloud.git\ncd deepracer-for-cloud\nbash bin/prepare-mac.sh\n# Log out and back in\nbin/init.sh -c local -a cpu\nsource bin/activate.sh\ndr-upload-custom-files\ndr-start-training -q\n```\n\n---\n\n## Known limitations\n\n| Limitation | Notes |\n|---|---|\n| CPU-only training | No NVIDIA GPU support on macOS |\n| IMDS not reachable from containers | Must use explicit AWS keys; IAM role auto-rotation does not work inside containers |\n| `/tmp/sagemaker` must exist in Colima VM | Created automatically by `prepare-mac.sh` and `dr-start-training`; recreate manually after `colima delete` with `colima ssh -- sudo mkdir -p /tmp/sagemaker && colima ssh -- sudo chmod -R a+w /tmp/sagemaker` |\n| Colima iptables rules reset on restart | Not relevant with the explicit-keys approach |\n| `brew services` fails headlessly | Colima is started via a launchd plist instead |\n\n---\n\n## Troubleshooting\n\n**`bash: ${VAR,,}: bad substitution`**  \nYou are running bash 3.2 (macOS built-in). Run `prepare-mac.sh` to install bash 5, then log out and back in.\n\n**`No configuration file.`** when sourcing `activate.sh`  \n`init.sh` has not been run yet, or `run.env` does not exist. Run `bin/init.sh` first.\n\n**`docker: command not found`**  \nHomebrew PATH is not set. Ensure `~/.bash_profile` contains `eval \"$(brew shellenv)\"` and re-source it.\n\n**`NoCredentialsError: Unable to locate credentials`**  \nContainers cannot reach IMDS. Run `aws configure --profile default` (or the profile matching `DR_LOCAL_S3_PROFILE`) on the host.\n\n**Colima fails to start**  \nCheck `colima status` and `colima start` output. On freshly allocated Mac EC2 instances the full macOS desktop session may still be initialising — wait a minute and retry.\n"
  },
  {
    "path": "docs/metrics.md",
    "content": "# Realtime Metrics\n\nIt is possible to collect and visualise real-time metrics using the optional telegraf/influxdb/grafana stack.\n\n```mermaid\nflowchart TD\n    A(Robomaker) --> B(Telegraf)\n    B --> C(InfluxDB)\n    C --> D(Grafana)\n```\n\nWhen enabled the Robomaker containers will send UDP metrics to Telegraf, which enriches and stores the metrics in the InfluxDB timeseries database container.\n\nGrafana provides a presentation layer for interactive dashboards.\n\n## Initial config and start-up\n\nTo enable the feature simply uncomment the lines in system.env for `DR_TELEGRAF_HOST` and `DR_TELEGRAF_PORT`. In most cases the default values should work without modification.\n\nStart the metrics docker stack using `dr-start-metrics`. \n\nOnce running Grafana should be accessible via a web browser on port 3000, e.g http://localhost:3000\nThe default username is `admin`, password `admin`. You will be prompted to set your own password on first login.\n\n*Note: Grafana can take 60-90 seconds to perform initial internal setup the first time it is started. The web UI will not be available until this is complete. You can check the status by viewing the grafana container logs if necessary.*\n\nThe metrics stack will remain running until stopped (`dr-stop-metrics`) or the machine is rebooted. It does not need to be restarted in between training runs and should automatically pick up metrics from new models. \n\n## Using the dashboards\n\nA template dashboard is provided to show how to access basic deepracer metrics. You can use this dashboard as a base to build your own more customised dashboards.\n\nAfter connecting to the Grafana Web UI with a browser use the menu to browse to the Dashboards section. \n\nThe template dashboard called `DeepRacer Training template` should be visible, showing graphs of reward, progress, and completed lap times. 
\n\nAs this is an automatically provisioned dashboard you are not able to save changes to it; however, you can copy it by clicking on the small cog icon to enter the dashboard settings page, and then clicking `Save as` to make an editable copy.\n\nA full user guide on how to work with the dashboards is available on the [Grafana website](https://grafana.com/docs/grafana/latest/dashboards/use-dashboards/).\n\n"
  },
  {
    "path": "docs/multi_gpu.md",
    "content": "# Training on a Computer with more than one GPU\n\nIn some cases you might end up with having a computer with more than one GPU. This may be common on a workstation\nwhich may have one GPU for general graphics (e.g. GTX 10-series, RTX 20-series), as well as a data center GPU \nlike a Tesla K40, K80 or M40.\n\nIn this setting it can get a bit chaotic as DeepRacer will 'greedily' put any workload on any GPU - which will \nlead to Out-of-Memory somewhere down the road.\n\n## Checking available GPUs\n\nYou can use Tensorflow to give you an overview of available devices running `utils/cuda-check.sh`.\n\nIt will say something like:\n```\n2020-07-04 12:25:55.179580: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\n2020-07-04 12:25:55.547206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties: \nname: GeForce GTX 1650 major: 7 minor: 5 memoryClockRate(GHz): 1.68\npciBusID: 0000:04:00.0\ntotalMemory: 3.82GiB freeMemory: 3.30GiB\n2020-07-04 12:25:55.732066: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 1 with properties: \nname: Tesla M40 24GB major: 5 minor: 2 memoryClockRate(GHz): 1.112\npciBusID: 0000:81:00.0\ntotalMemory: 22.41GiB freeMemory: 22.30GiB\n2020-07-04 12:25:55.732141: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0, 1\n2020-07-04 12:25:56.745647: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:\n2020-07-04 12:25:56.745719: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977]      0 1 \n2020-07-04 12:25:56.745732: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0:   N N \n2020-07-04 12:25:56.745743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 1:   N N \n2020-07-04 12:25:56.745973: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow 
device (/job:localhost/replica:0/task:0/device:GPU:0 with 195 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1650, pci bus id: 0000:04:00.0, compute capability: 7.5)\n2020-07-04 12:25:56.750352: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 1147 MB memory) -> physical GPU (device: 1, name: Tesla M40 24GB, pci bus id: 0000:81:00.0, compute capability: 5.2)\n2020-07-04 12:25:56.774305: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0, 1\n2020-07-04 12:25:56.774408: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:\n2020-07-04 12:25:56.774425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977]      0 1 \n2020-07-04 12:25:56.774436: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0:   N N \n2020-07-04 12:25:56.774446: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 1:   N N \n2020-07-04 12:25:56.774551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/device:GPU:0 with 195 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1650, pci bus id: 0000:04:00.0, compute capability: 7.5)\n2020-07-04 12:25:56.774829: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/device:GPU:1 with 1147 MB memory) -> physical GPU (device: 1, name: Tesla M40 24GB, pci bus id: 0000:81:00.0, compute capability: 5.2)\n['/device:GPU:0', '/device:GPU:1']\n```\nIn this case the CUDA device #0 is the GTX 1650 and the CUDA device #1 is the Tesla M40.\n\n### Selecting Device\n\nTo control the CUDA assignment for Sagemaker and Robomaker, set the following two variables in `system.env`:\n\n```\nDR_ROBOMAKER_CUDA_DEVICES=0\nDR_SAGEMAKER_CUDA_DEVICES=1\n```\n\nThe number is the CUDA device number of the GPU you want the containers to use.\n\n"
  },
  {
    "path": "docs/multi_run.md",
    "content": "# Managing Experiments\n\n## Experiment sub-directories\n\nWhen iterating on a model you typically need different reward functions, action spaces, hyperparameters, and track settings across runs. By default DRfC stores all of this in `run.env` and `custom_files/` at the root of the installation, which can become difficult to manage over time.\n\nThe **experiment sub-directory** feature lets you keep every config and custom file for a training run in its own folder under `experiments/`. DRfC then picks up those files automatically when you activate with the experiment name.\n\n### Directory structure\n\n```\ndeepracer-for-cloud/\n├── experiments/\n│   ├── sprint-v1/\n│   │   ├── run.env\n│   │   ├── worker-2.env          # optional – multi-worker only\n│   │   └── custom_files/\n│   │       ├── reward_function.py\n│   │       ├── model_metadata.json\n│   │       └── hyperparameters.json\n│   └── sprint-v2/\n│       ├── run.env\n│       └── custom_files/\n│           └── ...\n├── system.env\n└── ...\n```\n\nThe `experiments/` directory is excluded from git (via `.gitignore`) to avoid committing sensitive configuration and credentials.\n\n### Setting up your first experiment\n\n1. Create the directory structure (run from the DRfC root):\n\n    ```bash\n    mkdir -p experiments/sprint-v1/custom_files\n    ```\n\n2. Copy your current run configuration into the experiment:\n\n    ```bash\n    cp run.env experiments/sprint-v1/\n    cp custom_files/* experiments/sprint-v1/custom_files/\n    ```\n\n    If you are using multiple workers, copy the worker env files too:\n\n    ```bash\n    cp worker-*.env experiments/sprint-v1/\n    ```\n\n3. Activate with the experiment name using the `-e` flag:\n\n    ```bash\n    source bin/activate.sh -e sprint-v1\n    ```\n\n### Activating an experiment\n\nThere are two ways to select an experiment:\n\n**Option A — `-e` flag (recommended)**\n\nPass the experiment name when sourcing the activation script. 
This takes precedence over anything in `system.env`:\n\n```bash\nsource bin/activate.sh -e sprint-v1\n```\n\n**Option B — `DR_EXPERIMENT_NAME` in `system.env`**\n\nUncomment and set the variable in `system.env`:\n\n```\nDR_EXPERIMENT_NAME=sprint-v1\n```\n\nThen run `dr-update` or re-source `bin/activate.sh`. Use this option if you want the experiment to persist across shell sessions automatically.\n\nWhen `DR_EXPERIMENT_NAME` is set (by either method), DRfC will:\n- Load `run.env` from `experiments/<name>/run.env`\n- Load `worker-N.env` from `experiments/<name>/worker-N.env` (multi-worker)\n- Sync `custom_files` to/from `experiments/<name>/custom_files/`\n- Show `Experiment: <name>` in `dr-summary`\n\nIf the experiment directory does not exist, activation will abort with an error.\n\n### Iterating to a new experiment\n\nCopy the entire experiment folder to a new name and update the model prefix in `run.env`:\n\n```bash\ncp -av experiments/sprint-v1 experiments/sprint-v2\n```\n\nEdit `experiments/sprint-v2/run.env` to update `DR_LOCAL_S3_MODEL_PREFIX` (and `DR_LOCAL_S3_PRETRAINED_PREFIX` if you want to continue training from the previous experiment's model), then activate the new experiment:\n\n```bash\nsource bin/activate.sh -e sprint-v2\n```\n\n### Custom files upload and download\n\n`dr-upload-custom-files` and `dr-download-custom-files` are experiment-aware. When an experiment is active they sync against `experiments/<name>/custom_files/` instead of the root `custom_files/` directory.\n\n---\n\n# Running Multiple Parallel Experiments\n\nIt is possible to run multiple experiments on one computer in parallel. 
This is possible both in `swarm` and `compose` mode, and is controlled by `DR_RUN_ID` in `run.env`.\n\nThe feature works by adding unique prefixes to the container names:\n* In Swarm mode this is done through defining a stack name (default: deepracer-0)\n* In Compose mode this is done through adding a project name.\n\n## Suggested way to use the feature\n\nBy default `run.env` is loaded when DRfC is activated - but it is possible to load a separate configuration through `source bin/activate.sh <filename>`, or through `source bin/activate.sh -e <experiment-name>` when using experiment sub-directories.\n\nThe best way to use this feature is to have one bash shell per experiment, and to load a separate configuration per shell.\n\nAfter activating, one can control each experiment independently using the `dr-*` commands.\n\nIf using local or Azure, the S3/Minio instance is shared and runs only once.\n"
  },
  {
    "path": "docs/multi_worker.md",
    "content": "# Using multiple Robomaker workers\n\nOne way to accelerate training is to launch multiple Robomaker workers that feed into one Sagemaker instance.\n\nThe number of workers is configured through setting `system.env` `DR_WORKERS` to the desired number of workers. The result is that the number of episodes (hyperparameter `num_episodes_between_training`) will be divivided over the number of workers. The theoretical maximum number of workers equals `num_episodes_between_training`.\n\nThe training can be started as normal.\n\n## How many workers do I need?\n\nOne Robomaker worker requires 2-4 vCPUs. Tests show that a `c5.4xlarge` instance can run 3 workers and the Sagemaker without a drop in performance. Using OpenGL images reduces the number of vCPUs required per worker.\n\nTo avoid issues with the position from which evaluations are run ensure that `( num_episodes_between_training / DR_WORKERS) * DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST = 1.0`. \n\nExample: With 3 workers set `num_episodes_between_training: 30` and `DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST=0.1`.\n\nNote; Sagemaker will stop collecting experiences once you have reached 10.000 steps (3-layer CNN) in an iteration. For longer tracks with 600-1000 steps per completed episodes this will define the upper bound for the number of workers and episodes per iteration.\n\n## Training with different parameters for each worker\n\nIt is also possible to use different configurations between workers, such as different tracks (WORLD_NAME).  To enable, set DR_TRAIN_MULTI_CONFIG=True inside run.env, then make copies of defaults/template-worker.env in the main deepracer-for-cloud directory with format worker-2.env, worker-3.env, etc.  (So alongside run.env, you should have woker-2.env, worker-3.env, etc.  run.env is still used for worker 1)  Modify the worker env files with your desired changes, which can be more than just the world_name.  
These additional worker env files are only used if you are training with multiple workers.\n\n## Watching the streams\n\nIf you want to watch the streams and are in `compose` mode, you can use the script `utils/start-local-browser.sh` to dynamically create an HTML page that streams the KVS streams from all workers at once.\n"
  },
  {
    "path": "docs/opengl.md",
    "content": "# GPU Accelerated OpenGL for Robomaker\n\nOne way to improve performance, especially of Robomaker, is to enable GPU-accelerated OpenGL. OpenGL can significantly improve Gazebo performance, even where the GPU does not have enough GPU RAM, or is too old, to support Tensorflow.\n\n## Desktop \n\nOn a Ubuntu desktop running Unity there are hardly any additional steps required.\n\n* Ensure that a recent Nvidia driver is installed and is running.\n* Ensure that nvidia-docker is installed; review `bin/prepare.sh` for steps if you do not want to directly run the script.\n* Configure DRfC using the following settings in `system.env`:\n    * `DR_HOST_X=True`; uses the local X server rather than starting one within the docker container.\n    * `DR_DISPLAY`; set to the value of your running X server, if not set then `DISPLAY` will be used.\n\nBefore running `dr-start-training`/`dr-start-evaluation` ensure that `DR_DISPLAY`/`DISPLAY` and `XAUTHORITY` are defined.\n\nCheck that OpenGL is working by looking for `gzserver` in `nvidia-smi`.\n\nIf `DR_GUI_ENABLE=True` then the Gazebo UI, rviz and rqt will open up in separate windows. (With multiple workers it can get crowded...)\n\n### Remote connection to Desktop \n\nIf you want to start training or evaluation via SSH (e.g. to increment the training whilst you are on the go) there are a few steps to do:\n* Ensure that you are actually logged in to the local machine (desktop session is running).\n* In the SSH terminal:\n    * Ensure `DR_DISPLAY` is configured in `system.env`. Otherwise run `export DISPLAY=:1`. [*]\n    * Run `export XAUTHORITY=/run/user/$(id -u)/gdm/Xauthority` to let X know where the X magic cookie is.\n    * Run `source bin/activate.sh` as normal.\n    * Run your `dr-start-training` or `dr-start-evaluation` command. \n\n*Remark*: Setting `DISPLAY` will lead to certain commands (e.g. 
`dr-logs-sagemaker`) starting in a terminal window on the desktop, rather than the output being shown in the SSH terminal.\nUse of `DR_DISPLAY` is recommended to avoid this.\n\n## Headless Server\n\nOpenGL acceleration also works on a headless server with a GPU, e.g. an EC2 instance, or a local computer with a display-less GPU (e.g. Tesla K40, K80, M40).\n\nThis also applies to a desktop computer where you are not logged in. In this case, also disconnect any monitor cables to avoid conflicts.\n\n* Ensure that an Nvidia driver and nvidia-docker are installed; review `bin/prepare.sh` for steps if you do not want to directly run the script.\n* Set up an X server on the host. `utils/setup-xorg.sh` is a basic installation script.\n* Configure DRfC using the following settings in `system.env`:\n    * `DR_HOST_X=True`; uses the local X server rather than starting one within the docker container.\n    * `DR_DISPLAY`; the X display that the headless X server will start on. (Default is `:99`; avoid using `:0` or `:1` as it may conflict with other X servers.)\n\nStart up the X server with `utils/start-xorg.sh`.\n\nIf `DR_GUI_ENABLE=True` then a VNC server will be started on port 5900 so that you can connect and interact with the Gazebo UI.\n\nCheck that OpenGL is working by looking for `gzserver` in `nvidia-smi`.\n\n## WSL2 on Windows 11\n\nOpenGL is also supported in WSL2 on Windows 11. By default an Xwayland server is started in Ubuntu 22.04.\n\nTo enable OpenGL acceleration perform the following steps:\n* Install x11-xserver-utils with `sudo apt install x11-xserver-utils`.\n* Configure DRfC using the following settings in `system.env`:\n    * `DR_HOST_X=True`; uses the local X server rather than starting one within the docker container.\n    * `DR_DISPLAY=:0`; Xwayland starts on :0 by default.\n\nIf you want to interact with the Gazebo UI, set `DR_DOCKER_STYLE=compose` and `DR_GUI_ENABLE=True` in `system.env`.\n"
  },
  {
    "path": "docs/reference.md",
    "content": "# Deepracer-for-Cloud Reference\n\n## Environment Variables\n\nThe scripts assume that two files `system.env` containing constant configuration values and  `run.env` with run specific values is populated with the required values. Which values go into which file is not really important.\n\n| Variable | Description |\n|----------|-------------|\n| `DR_RUN_ID` | Used if you have multiple independent training jobs only a single DRfC instance. This is an advanced configuration and generally you should just leave this as the default `0`.|\n| `DR_WORLD_NAME` | Defines the track to be used.|\n| `DR_RACE_TYPE` | Valid options are `TIME_TRIAL`, `OBJECT_AVOIDANCE`, and `HEAD_TO_BOT`.|\n| `DR_CAR_COLOR` | Valid options are `Black`, `Grey`, `Blue`, `Red`, `Orange`, `White`, and `Purple`.|\n| `DR_CAR_NAME` | Display name of car; shows in Deepracer Console when uploading.|\n| `DR_ENABLE_DOMAIN_RANDOMIZATION` | If `True`, this cycles through different environment colors and lighting each episode.  This is typically used to make your model more robust and generalized instead of tightly aligned with the simulator|\n| `DR_UPLOAD_S3_PREFIX` | Prefix of the target location. (Typically starts with `DeepRacer-SageMaker-RoboMaker-comm-`|\n| `DR_EVAL_NUMBER_OF_TRIALS` | How many laps to complete for evaluation simulations.|\n| `DR_EVAL_IS_CONTINUOUS` | If False, your evaluation trial will end if you car goes off track or is in a collision. If True, your car will take the penalty times as configured in those parameters, but continue evaluating the trial.|\n| `DR_EVAL_OFF_TRACK_PENALTY` | Number of seconds penalty time added for an off track during evaluation.  Only takes effect if `DR_EVAL_IS_CONTINUOUS` is set to True.|\n| `DR_EVAL_COLLISION_PENALTY` | Number of seconds penalty time added for a collision during evaluation.  Only takes effect if `DR_EVAL_IS_CONTINUOUS` is set to True.|\n| `DR_EVAL_SAVE_MP4` | Set to `True` to save MP4 of an evaluation run. 
|\n| `DR_EVAL_REVERSE_DIRECTION` | Set to `True` to reverse the direction in which the car traverses the track.|\n| `DR_TRAIN_CHANGE_START_POSITION` | Determines if the racer shall round-robin the starting position during training sessions. (Recommended to be `True` for initial training.)|\n| `DR_TRAIN_ALTERNATE_DRIVING_DIRECTION` | `True` or `False`. If `True`, the car will alternate driving between clockwise and counter-clockwise each episode.|\n| `DR_TRAIN_START_POSITION_OFFSET` | Used to control where to start the training from on the first episode.|\n| `DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST` | How far to progress each episode in round robin. 0.05 is 5% of the track. Generally best to pick values that align with your total number of episodes to allow for even distribution around the track. For example, with 20 episodes per iteration, .05 or .10 or .20 would be good.|\n| `DR_TRAIN_MULTI_CONFIG` | `True` or `False`. This is used if you want to use different run.env configurations for each worker in a multi-worker training run. See the multi-config documentation for more details on how to set this up.|\n| `DR_TRAIN_MIN_EVAL_TRIALS` | The minimum number of evaluation trials run between each training iteration. Evaluations will continue as long as policy training is occurring and may be more than this number. This establishes the minimum, and is generally useful if you want to speed up training, especially when using GPU Sagemaker containers.|\n| `DR_TRAIN_REVERSE_DIRECTION` | Set to `True` to reverse the direction in which the car traverses the track. |\n| `DR_TRAIN_BEST_MODEL_METRIC` | Can be used to control which model is kept as the \"best\" model. 
Set to `progress` to select the model with the highest evaluation completion percentage, or to `reward` to select the model with the highest evaluation reward.|\n| `DR_TRAIN_MAX_STEPS_PER_ITERATION` | Can be used to control the maximum number of steps per iteration used for learning; excess steps are discarded to avoid out-of-memory situations. Default is 10000. |\n| `DR_LOCAL_S3_PRETRAINED` | Determines if training or evaluation shall be based on the model created in a previous session, held in `s3://{DR_LOCAL_S3_BUCKET}/{LOCAL_S3_PRETRAINED_PREFIX}`, accessible by credentials held in profile `{DR_LOCAL_S3_PROFILE}`.|\n| `DR_LOCAL_S3_PRETRAINED_PREFIX` | Prefix of the pretrained model within the S3 bucket.|\n| `DR_LOCAL_S3_MODEL_PREFIX` | Prefix of the model within the S3 bucket.|\n| `DR_LOCAL_S3_BUCKET` | Name of the S3 bucket which will be used during the session.|\n| `DR_LOCAL_S3_CUSTOM_FILES_PREFIX` | Prefix of configuration files within the S3 bucket.|\n| `DR_LOCAL_S3_TRAINING_PARAMS_FILE` | Name of the YAML file that holds parameters sent to the Robomaker container for configuration during training. Filename is relative to `s3://{DR_LOCAL_S3_BUCKET}/{LOCAL_S3_PRETRAINED_PREFIX}`.|\n| `DR_LOCAL_S3_EVAL_PARAMS_FILE` | Name of the YAML file that holds parameters sent to the Robomaker container for configuration during evaluations. 
Filename is relative to `s3://{DR_LOCAL_S3_BUCKET}/{LOCAL_S3_PRETRAINED_PREFIX}`.|\n| `DR_LOCAL_S3_MODEL_METADATA_KEY` | Location where the `model_metadata.json` file is stored.|\n| `DR_LOCAL_S3_HYPERPARAMETERS_KEY` | Location where the `hyperparameters.json` file is stored.|\n| `DR_LOCAL_S3_REWARD_KEY` | Location where the `reward_function.py` file is stored.|\n| `DR_LOCAL_S3_METRICS_PREFIX` | Location where the metrics will be stored.|\n| `DR_OA_NUMBER_OF_OBSTACLES` | For Object Avoidance, the number of obstacles on the track.|\n| `DR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES` | Minimum distance in meters between obstacles.|\n| `DR_OA_RANDOMIZE_OBSTACLE_LOCATIONS` | If True, obstacle locations will randomly change after each episode.|\n| `DR_OA_IS_OBSTACLE_BOT_CAR` | If True, obstacles will appear as stationary cars instead of boxes.|\n| `DR_OA_OBJECT_POSITIONS` | Positions of boxes on the track. Tuples consisting of progress (fraction [0..1]) and inside or outside lane (-1 or 1). Example: `\"0.23,-1;0.46,1\"`|\n| `DR_H2B_IS_LANE_CHANGE` | If True, bot cars will change lanes based on configuration.|\n| `DR_H2B_LOWER_LANE_CHANGE_TIME` | Minimum time in seconds before a bot car will change lanes.|\n| `DR_H2B_UPPER_LANE_CHANGE_TIME` | Maximum time in seconds before a bot car will change lanes.|\n| `DR_H2B_LANE_CHANGE_DISTANCE` | Distance in meters over which a bot car will change lanes.|\n| `DR_H2B_NUMBER_OF_BOT_CARS` | Number of bot cars on the track.|\n| `DR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS` | Minimum distance between bot cars.|\n| `DR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS` | If True, bot car locations will randomly change after each episode.|\n| `DR_H2B_BOT_CAR_SPEED` | How fast the bot cars go in meters per second.|\n| `DR_CLOUD` | Can be `azure`, `aws`, `local` or `remote`; determines how the storage will be configured.|\n| `DR_AWS_APP_REGION` | (AWS only) Region for other AWS resources (e.g. 
Kinesis) |\n| `DR_UPLOAD_S3_PROFILE` | AWS CLI profile to be used that holds the 'real' S3 credentials needed to upload a model into AWS DeepRacer.|\n| `DR_UPLOAD_S3_BUCKET` | Name of the AWS DeepRacer bucket where models will be uploaded. (Typically starts with `aws-deepracer-`.)|\n| `DR_LOCAL_S3_PROFILE` | Name of the AWS profile with credentials to be used. Stored in `~/.aws/credentials` unless AWS IAM Roles are used.|\n| `DR_GUI_ENABLE` | Enable or disable the Gazebo GUI in Robomaker.|\n| `DR_KINESIS_STREAM_NAME` | Kinesis stream name. Used if you actually publish to the AWS KVS service. Leave blank if you do not want this. |\n| `DR_KINESIS_STREAM_ENABLE` | Enable or disable the Kinesis stream. If True, publishes both to an AWS KVS stream (if a stream name is set) and to the topic `/racecar/deepracer/kvs_stream`. Leave True if you want to watch the car racing. |\n| `DR_SAGEMAKER_IMAGE` | Determines which Sagemaker image will be used for training.|\n| `DR_ROBOMAKER_IMAGE` | Determines which Robomaker image will be used for training or evaluation.|\n| `DR_MINIO_IMAGE` | Determines which Minio image will be used. |\n| `DR_COACH_IMAGE` | Determines which coach image will be used for training.|\n| `DR_WORKERS` | Number of Robomaker workers to be used for training. See additional documentation for more information about this feature.|\n| `DR_ROBOMAKER_MOUNT_LOGS` | True to get logs mounted to `$DR_DIR/data/logs/robomaker/$DR_LOCAL_S3_MODEL_PREFIX`.|\n| `DR_ROBOMAKER_MOUNT_SIMAPP_DIR` | Path to the altered Robomaker bundle, e.g. `/home/ubuntu/deepracer-simapp/bundle`.|\n| `DR_CLOUD_WATCH_ENABLE` | Send log files to AWS CloudWatch.|\n| `DR_CLOUD_WATCH_LOG_STREAM_PREFIX` | Add a prefix to the CloudWatch log stream name.|\n| `DR_DOCKER_STYLE` | Valid options are `Swarm` and `Compose`. Use Compose for OpenGL-optimized containers.|\n| `DR_HOST_X` | Uses the host X-windows server, rather than starting one inside of Robomaker. 
Required for OpenGL images.|\n| `DR_WEBVIEWER_PORT` | Port for the web-viewer proxy which enables the streaming of all Robomaker workers at once.|\n| `CUDA_VISIBLE_DEVICES` | Used in multi-GPU configurations. See additional documentation for more information about this feature.|\n| `DR_TELEGRAF_HOST` | The hostname to send real-time metrics to. Uncommenting this will enable real-time metrics collection using Telegraf. The telegraf/influxdb/grafana compose stack must already be running (use `dr-start-metrics`) for this to work, and it should usually be set to `telegraf` to send metrics to the telegraf container.|\n| `DR_TELEGRAF_PORT` | Defines the UDP port to send real-time metrics to. Should usually remain set as 8092.|\n| `DR_QUIET_ACTIVATE` | Set to `True` to suppress the environment summary dashboard that is displayed when sourcing `bin/activate.sh` in an interactive shell. Defaults to `False`.|\n| `DR_EXPERIMENT_NAME` | Optional. When set, DRfC loads `run.env`, `worker-N.env`, and `custom_files/` from `experiments/<name>/` instead of the repository root. Can be set here or passed via `source bin/activate.sh -e <name>`. See [Managing Experiments](multi_run.md).|\n\n## Commands\n\n| Command | Description |\n|---------|-------------|\n| `dr-update` | Loads in all scripts and environment variables again.|\n| `dr-reload` | Re-sources `bin/activate.sh` with the current configuration file.|\n| `dr-summary` | Displays the environment summary dashboard (cloud config, Docker images, running services and containers). 
Runs automatically on interactive shell activation unless `DR_QUIET_ACTIVATE=True`.|\n| `dr-update-env` | Loads in all environment variables from `system.env` and `run.env`.|\n| `dr-upload-custom-files` | Uploads changed configuration files from `custom_files/` into `s3://{DR_LOCAL_S3_BUCKET}/custom_files`.|\n| `dr-download-custom-files` | Downloads changed configuration files from `s3://{DR_LOCAL_S3_BUCKET}/custom_files` into `custom_files/`.|\n| `dr-start-training` | Starts a training session in the local VM based on current configuration.|\n| `dr-increment-training` | Updates configuration, setting the current model prefix to pretrained, and incrementing a serial.|\n| `dr-stop-training` | Stops the current local training session. Uploads log files.|\n| `dr-start-evaluation` | Starts an evaluation session in the local VM based on current configuration.|\n| `dr-stop-evaluation` | Stops the current local evaluation session. Uploads log files.|\n| `dr-start-loganalysis` | Starts a Jupyter log-analysis container, available on port 8888.|\n| `dr-stop-loganalysis` | Stops the Jupyter log-analysis container.|\n| `dr-start-viewer` | Starts an NGINX proxy to stream all the Robomaker streams; accessible remotely.|\n| `dr-stop-viewer` | Stops the NGINX proxy.|\n| `dr-logs-sagemaker` | Displays the logs from the running Sagemaker container.|\n| `dr-logs-robomaker` | Displays the logs from the running Robomaker container.|\n| `dr-list-aws-models` | Lists the models that are currently stored in your AWS DeepRacer S3 bucket. |\n| `dr-set-upload-model` | Updates the `run.env` with the prefix and name of your selected model. |\n| `dr-upload-model` | Uploads the model defined in `DR_LOCAL_S3_MODEL_PREFIX` to the AWS DeepRacer S3 prefix defined in `DR_UPLOAD_S3_PREFIX`. |\n| `dr-download-model` | Downloads a file from a 'real' S3 location into a local prefix of choice. |\n"
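The `DR_OA_OBJECT_POSITIONS` string format is compact; as an illustrative sketch (the helper name is hypothetical, not part of DRfC), it can be parsed like this:

```python
# Hypothetical helper: parse the DR_OA_OBJECT_POSITIONS string format,
# e.g. "0.23,-1;0.46,1". Each ';'-separated entry is "<progress>,<lane>",
# where progress is a fraction of the track [0..1] and lane is
# -1 (inside) or 1 (outside).
def parse_object_positions(value: str) -> list[tuple[float, int]]:
    positions = []
    for entry in value.split(";"):
        if not entry:
            continue  # tolerate an empty or trailing ';'
        progress, lane = entry.split(",")
        positions.append((float(progress), int(lane)))
    return positions


print(parse_object_positions("0.23,-1;0.46,1"))  # [(0.23, -1), (0.46, 1)]
```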
  },
  {
    "path": "docs/video.md",
    "content": "# Watching the car\n\nThere are multiple ways to watch the car during training and evaluation. The available ports and features differ between the docker modes (swarm vs. compose) and between training and evaluation.\n\n## Training using Viewer\n\nDRfC has a built-in viewer that supports showing the video stream from up to 6 workers on one webpage.\n\nThe viewer can be started with `dr-start-viewer` and is available on `http://localhost:8100` or `http://127.0.0.1:8100`. The viewer must be updated with `dr-update-viewer` if training is restarted, as it needs to connect to the new containers.\n\nIt is also possible to automatically start/update the viewer using the `-v` flag to `dr-start-training`.\n\n## ROS Stream Viewer\n\nThe ROS Stream Viewer is a built-in ROS feature that will stream any ROS topic that publishes ROS Image messages. The viewer starts automatically.\n\n### Ports\n\n| Docker Mode | Training | Evaluation | Comment |\n| -------- | -------- | -------- | -------- |\n| swarm | 8080 + `DR_RUN_ID` | 8180 + `DR_RUN_ID` | Default 8080/8180. Multiple workers share one port; press F5 to cycle between them. |\n| compose | 8080-8089 | 8080-8089 | Each worker gets a unique port. |\n\n### Topics\n\n| Topic | Description |\n| -------- | -------- |\n| `/racecar/camera/zed/rgb/image_rect_color` | In-car video stream. This is used for inference. |\n| `/racecar/main_camera/zed/rgb/image_rect_color` | Camera following the car. Stream without overlay. |\n| `/sub_camera/zed/rgb/image_rect_color` | Top view of the track. |\n| `/racecar/deepracer/kvs_stream` | Camera following the car. Stream with overlay. Different overlay in Training and Evaluation. |\n| `/racecar/deepracer/main_camera_stream` | Same as `kvs_stream`; topic used for MP4 production. 
Only active in Evaluation if `DR_EVAL_SAVE_MP4=True` | \n\n## Saving Evaluation to File\n\nDuring evaluation (`dr-start-evaluation`), if `DR_EVAL_SAVE_MP4=True` then three MP4 files are created in the S3 bucket's MP4 folder. They contain the in-car camera, top-camera and the camera following the car."
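The swarm-mode port scheme in the Ports table maps directly from `DR_RUN_ID`; a minimal sketch (the helper name is illustrative, not part of DRfC):

```python
# Illustrative helper: compute the swarm-mode ROS stream ports for a run.
# Per the Ports table, training streams start at 8080 and evaluation
# streams at 8180, each offset by DR_RUN_ID.
def stream_ports(run_id: int = 0) -> dict:
    return {"training": 8080 + run_id, "evaluation": 8180 + run_id}


print(stream_ports(0))  # {'training': 8080, 'evaluation': 8180}
print(stream_ports(1))  # {'training': 8081, 'evaluation': 8181}
```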
  },
  {
    "path": "docs/windows.md",
    "content": "# Installing on Windows\n\n## Prerequisites\n\nThe basic installation steps to get an NVIDIA GPU / CUDA-enabled Ubuntu subsystem on Windows can be found in the [CUDA on WSL User Guide](https://docs.nvidia.com/cuda/wsl-user-guide/index.html). Ensure your Windows installation has an updated [NVIDIA CUDA-enabled driver](https://developer.nvidia.com/cuda/wsl/download) that will work with WSL.\n\nThe following instructions assume that you have a basic working WSL using the default Ubuntu distribution.\n\n\n## Additional steps\n\nThe typical `bin/prepare.sh` script will not work for an Ubuntu WSL installation, so alternative steps are required.\n\n### Adding required packages\n\nInstall additional packages with the following command:\n\n```\nsudo apt-get install jq awscli python3-boto3 docker-compose\n```\n\n### Install and configure docker and nvidia-docker\n```\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -\nsudo add-apt-repository \"deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable\"\nsudo apt-get update && sudo apt-get install -y --no-install-recommends docker-ce docker-ce-cli containerd.io\n\ndistribution=$(. /etc/os-release;echo $ID$VERSION_ID)\ncurl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -\ncurl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list\n\n# Write to a temporary file first; piping tee back into the file being read can truncate it mid-read.\nsudo jq 'del(.\"default-runtime\") + {\"default-runtime\": \"nvidia\"}' /etc/docker/daemon.json > /tmp/daemon.json && sudo mv /tmp/daemon.json /etc/docker/daemon.json\nsudo usermod -a -G docker $(id -un)\n```\n\n\n### Install DRfC\n\nYou can now run `bin/init.sh -a gpu -c local` to set up DRfC, and follow the typical DRfC startup instructions.\n\n## Known Issues\n\n* `init.sh` is not able to detect the GPU given differences in the Nvidia drivers and the WSL2 Linux kernel. 
You need to manually set the GPU image in `system.env`.\n* Docker does not start automatically when you launch Ubuntu. Start it manually with `sudo service docker start`.\n\n     You can also configure the service to start automatically using the Windows Task Scheduler:\n     \n     *1)* Create a new file at /etc/init-wsl (sudo vi /etc/init-wsl) with the following contents.\n     \n     ```\n     #!/bin/sh\n     service docker start\n     ```\n \n     *2)* Make the script executable: `sudo chmod +x /etc/init-wsl`\n       \n     *3)* Open Task Scheduler in Windows 10\n       \n     - On the left, click the **Task Scheduler Library** option, and then on the right, click **Create Task**\n          \n     - In the **General** tab, enter the name **WSL Startup**, and select the **Run whether user is logged on or not** and **Run with highest privileges** options.\n         \n     - In the **Trigger** tab, click New ... > Begin the task: **At startup** > OK\n        \n     - In the **Actions** tab, click New ... > Action: **Start a program**\n          \n       program/script: **wsl**\n          \n       add arguments: **-u root /etc/init-wsl**\n          \n     - Click OK to exit\n          \n     *4)* You can run the task manually to confirm, or reboot Windows; docker should now start automatically.\n\n* Video streams may not load using the localhost address. To access the HTML video streams from your Windows browser, you may need to use the IP address of the WSL VM. From a WSL terminal, determine your IP address with the command `ip addr` and look for **eth0** then **inet** (e.g. ip = 172.29.38.21). Then from your Windows browser (Edge, Chrome, etc.) navigate to **ip:8080** (e.g. 172.29.38.21:8080)\n     \n"
  },
  {
    "path": "requirements.txt",
    "content": "boto3\npyyaml\nrequests\ndeepracer-utils"
  },
  {
    "path": "scripts/droa/__init__.py",
    "content": ""
  },
  {
    "path": "scripts/droa/auth.py",
    "content": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"\nShared authentication and configuration utilities for DRoA (DeepRacer on AWS) scripts.\n\nProvides:\n  fetch_env_config(site_url)           — fetch and parse <site_url>/env.js\n  authenticate(...)                    — Cognito User Pool sign-in → ID token\n  get_aws_credentials(...)             — Identity Pool → temporary AWS credentials\n  load_droa_config(args)               — resolve config from env vars + CLI args\n  build_auth(url, credentials, region) — prepare botocore SigV4 request signing using AWSRequest/SigV4Auth\n  add_common_args(parser)              — add shared CLI flags to an argparse parser\n\"\"\"\n\nimport datetime\nimport json\nimport hashlib\nimport os\nimport re\nimport sys\nimport uuid\n\nimport boto3\nimport requests\nimport requests.auth\nfrom botocore.auth import SigV4Auth\nfrom botocore.awsrequest import AWSRequest\n\n\n# ---------------------------------------------------------------------------\n# Config discovery\n# ---------------------------------------------------------------------------\n\ndef fetch_env_config(site_url: str) -> dict:\n    \"\"\"Fetch <site_url>/env.js and parse the window.EnvironmentConfig object.\"\"\"\n    env_js_url = site_url.rstrip(\"/\") + \"/env.js\"\n    response = requests.get(env_js_url, timeout=10)\n    if not response.ok:\n        raise RuntimeError(\n            f\"Could not fetch env.js from {env_js_url}: \"\n            f\"{response.status_code} {response.reason}\"\n        )\n    match = re.search(\n        r\"window\\.EnvironmentConfig\\s*=\\s*(\\{.+\\})\\s*;\", response.text, re.DOTALL)\n    if not match:\n        raise RuntimeError(f\"Could not find EnvironmentConfig in {env_js_url}\")\n    raw = match.group(1)\n    try:\n        config = json.loads(raw)\n    except json.JSONDecodeError:\n        # Convert JS object literal to strict JSON\n    
    js = raw\n        js = re.sub(r'([{,]\\s*)([A-Za-z_]\\w*)\\s*:', r'\\1\"\\2\":', js)\n        js = re.sub(r\"'([^']*)'\", r'\"\\1\"', js)\n        js = re.sub(r',(\\s*})', r'\\1', js)\n        try:\n            config = json.loads(js)\n        except json.JSONDecodeError as exc:\n            raise RuntimeError(\n                f\"Could not parse EnvironmentConfig from {env_js_url}.\\n\"\n                f\"Parse error: {exc}\\nRaw content:\\n{raw}\"\n            ) from exc\n    return config\n\n\n# ---------------------------------------------------------------------------\n# Cognito authentication\n# ---------------------------------------------------------------------------\n\ndef authenticate(region: str, client_id: str, username: str, password: str) -> str:\n    \"\"\"Sign in to Cognito User Pool and return an ID token.\"\"\"\n    client = boto3.client(\"cognito-idp\", region_name=region)\n    response = client.initiate_auth(\n        AuthFlow=\"USER_PASSWORD_AUTH\",\n        AuthParameters={\"USERNAME\": username, \"PASSWORD\": password},\n        ClientId=client_id,\n    )\n    result = response.get(\"AuthenticationResult\") or {}\n    id_token = result.get(\"IdToken\")\n    if not id_token:\n        raise RuntimeError(\"Authentication failed – no ID token in response.\")\n    return id_token\n\n\ndef get_aws_credentials(\n    region: str,\n    user_pool_id: str,\n    identity_pool_id: str,\n    id_token: str,\n) -> dict:\n    \"\"\"Exchange a Cognito ID token for temporary STS credentials via Identity Pool.\"\"\"\n    cognito_identity = boto3.client(\"cognito-identity\", region_name=region)\n    login_key = f\"cognito-idp.{region}.amazonaws.com/{user_pool_id}\"\n\n    identity_response = cognito_identity.get_id(\n        IdentityPoolId=identity_pool_id,\n        Logins={login_key: id_token},\n    )\n    identity_id = identity_response[\"IdentityId\"]\n\n    creds_response = cognito_identity.get_credentials_for_identity(\n        
IdentityId=identity_id,\n        Logins={login_key: id_token},\n    )\n    creds = creds_response[\"Credentials\"]\n    if os.environ.get(\"DR_DROA_DEBUG\"):\n        sts = boto3.client(\n            \"sts\",\n            region_name=region,\n            aws_access_key_id=creds_response[\"Credentials\"][\"AccessKeyId\"],\n            aws_secret_access_key=creds_response[\"Credentials\"][\"SecretKey\"],\n            aws_session_token=creds_response[\"Credentials\"][\"SessionToken\"],\n        )\n        identity = sts.get_caller_identity()\n        print(\n            f\"  STS identity: Account={identity['Account']} Arn={identity['Arn']}\", file=sys.stderr)\n\n    return {\n        \"access_key\": creds[\"AccessKeyId\"],\n        \"secret_key\": creds[\"SecretKey\"],\n        \"session_token\": creds[\"SessionToken\"],\n        \"expiry\": creds[\"Expiration\"],\n    }\n\n\n# ---------------------------------------------------------------------------\n# Credential cache\n# ---------------------------------------------------------------------------\n\ndef _credential_cache_path(identity_pool_id: str, username: str) -> str:\n    key = hashlib.sha256(\n        f\"{identity_pool_id}:{username}\".encode()).hexdigest()[:16]\n    cache_dir = os.path.expanduser(\"~/.droa-cache\")\n    os.makedirs(cache_dir, mode=0o700, exist_ok=True)\n    return os.path.join(cache_dir, f\"{key}.json\")\n\n\ndef load_cached_credentials(identity_pool_id: str, username: str) -> dict | None:\n    \"\"\"Return cached AWS credentials if they have more than 60 seconds of validity left.\"\"\"\n    path = _credential_cache_path(identity_pool_id, username)\n    if not os.path.exists(path):\n        return None\n    try:\n        with open(path) as f:\n            data = json.load(f)\n        expiry = datetime.datetime.fromisoformat(data[\"expiry\"])\n        if expiry.tzinfo is None:\n            expiry = expiry.replace(tzinfo=datetime.timezone.utc)\n        if expiry <= 
datetime.datetime.now(tz=datetime.timezone.utc) + datetime.timedelta(seconds=60):\n            return None\n        return {k: v for k, v in data.items() if k != \"expiry\"}\n    except (KeyError, ValueError, json.JSONDecodeError, OSError):\n        return None\n\n\ndef save_credentials_to_cache(identity_pool_id: str, username: str, credentials: dict) -> None:\n    \"\"\"Save AWS credentials (including 'expiry') to a 0600 cache file.\"\"\"\n    expiry = credentials.get(\"expiry\")\n    if expiry is None:\n        return\n    path = _credential_cache_path(identity_pool_id, username)\n    try:\n        data = {\n            k: v for k, v in credentials.items() if k != \"expiry\"\n        }\n        data[\"expiry\"] = expiry.isoformat() if hasattr(\n            expiry, \"isoformat\") else str(expiry)\n        with open(path, \"w\") as f:\n            json.dump(data, f)\n        os.chmod(path, 0o600)\n    except OSError:\n        pass  # Non-fatal\n\n\n# ---------------------------------------------------------------------------\n# Config resolution\n# ---------------------------------------------------------------------------\n\nclass DRoAConfig:\n    \"\"\"Resolved DRoA endpoint configuration.\"\"\"\n\n    def __init__(\n        self,\n        region: str,\n        user_pool_id: str,\n        client_id: str,\n        identity_pool_id: str,\n        api_endpoint: str,\n        upload_bucket: str,\n        site_url: str | None = None,\n    ) -> None:\n        self.region = region\n        self.user_pool_id = user_pool_id\n        self.client_id = client_id\n        self.identity_pool_id = identity_pool_id\n        self.api_endpoint = api_endpoint.rstrip(\"/\")\n        self.upload_bucket = upload_bucket\n        self.site_url = site_url\n\n\ndef load_droa_config(args) -> \"DRoAConfig\":\n    \"\"\"\n    Resolve DRoA configuration in priority order:\n      1. Explicit CLI override flags on ``args``\n      2. 
env.js fetched from --url / DR_DROA_URL environment variable\n\n    Exits with a descriptive error if any required value is missing.\n    \"\"\"\n    site_url = getattr(args, \"url\", None) or os.environ.get(\"DR_DROA_URL\")\n    env: dict = {}\n    if site_url:\n        env = fetch_env_config(site_url)\n        print(f\"Loaded configuration from {site_url}/env.js\", file=sys.stderr)\n\n    region = getattr(args, \"region\", None) or env.get(\"region\")\n    user_pool_id = getattr(args, \"user_pool_id\", None) or env.get(\"userPoolId\")\n    client_id = getattr(args, \"user_pool_client_id\",\n                        None) or env.get(\"userPoolClientId\")\n    identity_pool_id = getattr(\n        args, \"identity_pool_id\", None) or env.get(\"identityPoolId\")\n    api_endpoint = getattr(args, \"api_endpoint\",\n                           None) or env.get(\"apiEndpointUrl\")\n    upload_bucket = getattr(args, \"upload_bucket\",\n                            None) or env.get(\"uploadBucketName\")\n\n    missing = [\n        name for name, val in [\n            (\"region\", region),\n            (\"user-pool-id\", user_pool_id),\n            (\"user-pool-client-id\", client_id),\n            (\"identity-pool-id\", identity_pool_id),\n            (\"api-endpoint\", api_endpoint),\n            (\"upload-bucket\", upload_bucket),\n        ]\n        if not val\n    ]\n    if missing:\n        print(\n            f\"Error: could not resolve: {', '.join(missing)}.\\n\"\n            \"Set DR_DROA_URL in system.env or pass --url.\",\n            file=sys.stderr,\n        )\n        sys.exit(1)\n\n    return DRoAConfig(\n        region=region,\n        user_pool_id=user_pool_id,\n        client_id=client_id,\n        identity_pool_id=identity_pool_id,\n        api_endpoint=api_endpoint,\n        upload_bucket=upload_bucket,\n        site_url=site_url,\n    )\n\n\n# ---------------------------------------------------------------------------\n# SigV4 auth helper\n# 
---------------------------------------------------------------------------\n\ndef build_auth(url: str, credentials: dict, region: str, site_url: str | None = None) -> requests.auth.AuthBase:\n    \"\"\"Create a requests AuthBase that SigV4-signs each request via botocore.\"\"\"\n    origin = site_url.rstrip(\"/\") if site_url else None\n    session = boto3.Session(\n        aws_access_key_id=credentials[\"access_key\"],\n        aws_secret_access_key=credentials[\"secret_key\"],\n        aws_session_token=credentials[\"session_token\"],\n        region_name=region,\n    )\n    frozen_creds = session.get_credentials().get_frozen_credentials()\n\n    class _Auth(requests.auth.AuthBase):\n        def __call__(self, r: requests.PreparedRequest) -> requests.PreparedRequest:\n            body = r.body or b\"\"\n            if isinstance(body, str):\n                body = body.encode(\"utf-8\")\n\n            sign_headers = {\n                \"accept\": \"*/*\",\n                \"accept-encoding\": \"gzip, deflate\",\n                \"accept-language\": \"en-US,en;q=0.9,de-DE;q=0.8,de;q=0.7\",\n                \"amz-sdk-invocation-id\": str(uuid.uuid4()),\n                \"amz-sdk-request\": \"attempt=1; max=3\",\n                \"cache-control\": \"no-cache\",\n                \"pragma\": \"no-cache\",\n                \"sec-ch-ua\": '\"Microsoft Edge\";v=\"147\", \"Not.A/Brand\";v=\"8\", \"Chromium\";v=\"147\"',\n                \"sec-ch-ua-mobile\": \"?0\",\n                \"sec-ch-ua-platform\": '\"Windows\"',\n                \"sec-fetch-dest\": \"empty\",\n                \"sec-fetch-mode\": \"cors\",\n                \"sec-fetch-site\": \"cross-site\",\n                \"user-agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/147.0.0.0 Safari/537.36 Edg/147.0.0.0\",\n                \"x-amz-content-sha256\": hashlib.sha256(body).hexdigest(),\n                \"x-amz-user-agent\": \"aws-sdk-js/1.0.0 ua/2.0 
os/Windows#NT-10.0 lang/js md/browser#Microsoft-Edge_147.0.0.0\",\n            }\n            if origin:\n                sign_headers[\"origin\"] = origin\n                sign_headers[\"referer\"] = origin + \"/\"\n            aws_request = AWSRequest(\n                method=r.method, url=r.url, data=body, headers=sign_headers)\n            SigV4Auth(frozen_creds, \"execute-api\",\n                      region).add_auth(aws_request)\n            r.headers.update(dict(aws_request.headers))\n\n            if os.environ.get(\"DR_DROA_DEBUG\"):\n                print(\"\\n--- DEBUG: signed request ---\", file=sys.stderr)\n                print(f\"  {r.method} {r.url}\", file=sys.stderr)\n                print(\n                    f\"  (access key: ***{credentials['access_key'][-4:]})\", file=sys.stderr)\n                for k, v in sorted(r.headers.items()):\n                    display = v[:40] + \\\n                        \"...\" if k.lower() == \"x-amz-security-token\" and len(v) > 40 else v\n                    print(f\"  {k}: {display}\", file=sys.stderr)\n                print(\"-----------------------------\\n\", file=sys.stderr)\n            return r\n\n    return _Auth()\n\n\n# ---------------------------------------------------------------------------\n# Shared argparse helpers\n# ---------------------------------------------------------------------------\n\ndef add_common_args(parser) -> None:\n    \"\"\"Add shared DRoA connection/auth arguments to an argparse parser.\"\"\"\n    parser.add_argument(\n        \"--url\",\n        help=\"DeepRacer on AWS site URL (defaults to DR_DROA_URL env var). 
\"\n             \"All AWS config is read automatically from <url>/env.js.\",\n    )\n    parser.add_argument(\"--region\", help=\"Override: AWS region\")\n    parser.add_argument(\n        \"--user-pool-id\", help=\"Override: Cognito User Pool ID\")\n    parser.add_argument(\"--user-pool-client-id\",\n                        help=\"Override: Cognito App Client ID\")\n    parser.add_argument(\"--identity-pool-id\",\n                        help=\"Override: Cognito Identity Pool ID\")\n    parser.add_argument(\n        \"--api-endpoint\", help=\"Override: API Gateway base URL\")\n    parser.add_argument(\"--upload-bucket\",\n                        help=\"Override: S3 upload bucket name\")\n    parser.add_argument(\n        \"--username\",\n        help=\"Cognito username / email (defaults to DR_DROA_USERNAME env var)\",\n    )\n    parser.add_argument(\n        \"--password\", help=\"Cognito password (prompted if omitted)\")\n"
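The JS-object-literal fallback in `fetch_env_config` can be exercised standalone; this sketch repeats the same three regex passes from `auth.py` (quote bare keys, convert single quotes, strip trailing commas) in a hypothetical helper:

```python
import json
import re


def js_object_to_json(raw: str) -> dict:
    """Normalize a relaxed JS object literal into strict JSON and parse it.

    Mirrors the fallback path in fetch_env_config(); the function name
    itself is illustrative.
    """
    js = re.sub(r'([{,]\s*)([A-Za-z_]\w*)\s*:', r'\1"\2":', raw)  # quote bare keys
    js = re.sub(r"'([^']*)'", r'"\1"', js)                        # single -> double quotes
    js = re.sub(r',(\s*})', r'\1', js)                            # drop trailing comma
    return json.loads(js)


print(js_object_to_json("{region: 'eu-west-1', userPoolId: 'abc',}"))
# {'region': 'eu-west-1', 'userPoolId': 'abc'}
```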
  },
  {
    "path": "scripts/droa/delete_model.py",
    "content": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"\nDelete a model from DeepRacer on AWS (DRoA).\n\nSends DELETE /models/{modelId}.  Before issuing the request the script fetches\nthe model record and verifies its status — the API only permits deletion when\nthe model is in READY or ERROR state.  Any other status results in a clear\nerror message without making the DELETE call.\n\nDeletion is asynchronous on the server: the model status transitions to\nDELETING immediately, then S3 artifacts, training records and evaluation\nrecords are removed in the background.  If any step fails the server reverts\nthe status to ERROR for manual cleanup.\n\nUsage examples\n--------------\n  # Interactive confirmation (shows model name and current status):\n  python delete_model.py 2w7R6h2PNexQ9kC\n\n  # Skip confirmation prompt (use in scripts):\n  python delete_model.py 2w7R6h2PNexQ9kC --yes\n\n  # Override site URL and username on the command line:\n  python delete_model.py 2w7R6h2PNexQ9kC --url https://my.droa.example.com --username alice\n\nAuthentication\n--------------\nCredentials are obtained via the Cognito Identity Pool embedded in the DRoA\nsite's /env.js.  
A password prompt is shown on the first call; subsequent\ncalls within the credential lifetime (~1 h) reuse a cache stored in\n~/.droa-cache/.\n\nThe site URL is read from DR_DROA_URL and the username from DR_DROA_USERNAME\n(both set in system.env), or supplied via --url / --username.\n\nDeletable status values\n-----------------------\n  READY   Model trained successfully and ready for use\n  ERROR   Model encountered an error during a previous operation\n\nAll other statuses (TRAINING, EVALUATING, IMPORTING, QUEUED, STOPPING,\nSUBMITTING, DELETING) are rejected by the API with HTTP 400.\n\"\"\"\n\nimport argparse\nimport getpass\nimport os\nimport sys\n\nimport requests\n\nfrom auth import (\n    add_common_args, authenticate, build_auth, get_aws_credentials, load_droa_config,\n    load_cached_credentials, save_credentials_to_cache,\n)\n\n_DELETABLE_STATUSES = {\"READY\", \"ERROR\"}\n\n\ndef fetch_model(cfg, credentials: dict, model_id: str) -> dict:\n    url = f\"{cfg.api_endpoint}/models/{model_id}\"\n    response = requests.get(\n        url, auth=build_auth(url, credentials, cfg.region, cfg.site_url), timeout=30\n    )\n    if not response.ok:\n        raise RuntimeError(\n            f\"API error fetching model: {response.status_code} {response.reason}\\n{response.text}\"\n        )\n    data = response.json()\n    return data.get(\"model\", data)\n\n\ndef delete_model(cfg, credentials: dict, model_id: str) -> None:\n    url = f\"{cfg.api_endpoint}/models/{model_id}\"\n    response = requests.delete(\n        url, auth=build_auth(url, credentials, cfg.region, cfg.site_url), timeout=30\n    )\n    if not response.ok:\n        raise RuntimeError(\n            f\"API error: {response.status_code} {response.reason}\\n{response.text}\"\n        )\n\n\ndef parse_args() -> argparse.Namespace:\n    parser = argparse.ArgumentParser(\n        description=\"Delete a model from DeepRacer on AWS.\",\n        epilog=(\n            \"examples:\\n\"\n            \"  
%(prog)s 2w7R6h2PNexQ9kC\\n\"\n            \"  %(prog)s 2w7R6h2PNexQ9kC --yes\"\n        ),\n        formatter_class=argparse.RawDescriptionHelpFormatter,\n    )\n    add_common_args(parser)\n    parser.add_argument(\"model_id\", help=\"Model ID to delete\")\n    parser.add_argument(\n        \"-y\", \"--yes\", action=\"store_true\", help=\"Skip confirmation prompt\"\n    )\n    return parser.parse_args()\n\n\ndef main() -> None:\n    args = parse_args()\n\n    username = args.username or os.environ.get(\"DR_DROA_USERNAME\")\n    if not username:\n        print(\"Error: --username or DR_DROA_USERNAME required.\", file=sys.stderr)\n        sys.exit(1)\n\n    cfg = load_droa_config(args)\n\n    credentials = load_cached_credentials(cfg.identity_pool_id, username)\n    if credentials:\n        print(\"Using cached credentials.\", file=sys.stderr)\n    else:\n        password = args.password or getpass.getpass(\n            f\"Password for {username}: \")\n        id_token = authenticate(cfg.region, cfg.client_id, username, password)\n        credentials = get_aws_credentials(\n            cfg.region, cfg.user_pool_id, cfg.identity_pool_id, id_token)\n        save_credentials_to_cache(cfg.identity_pool_id, username, credentials)\n\n    model = fetch_model(cfg, credentials, args.model_id)\n    name = model.get(\"name\", args.model_id)\n    status = model.get(\"status\", \"UNKNOWN\")\n\n    if status not in _DELETABLE_STATUSES:\n        print(\n            f\"Error: model '{name}' has status {status} and cannot be deleted.\\n\"\n            \"Only models with status READY or ERROR may be deleted.\",\n            file=sys.stderr,\n        )\n        sys.exit(1)\n\n    if not args.yes:\n        print(f\"Model name : {name}\")\n        print(f\"Model ID   : {args.model_id}\")\n        print(f\"Status     : {status}\")\n        print()\n        confirm = input(\n            \"Type the model name to confirm deletion: \"\n        ).strip()\n        if confirm != name:\n            print(\"Aborted.\")\n            sys.exit(0)\n\n    print(f\"Deleting model '{name}' ({args.model_id})...\")\n    delete_model(cfg, credentials, args.model_id)\n    print(\"Delete request accepted. The model will be removed shortly (status → DELETING).\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "scripts/droa/download_logs.py",
    "content": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"\nDownload logs or assets for a model from DeepRacer on AWS (DRoA).\n\nCalls GET /models/{modelId}/getasset to obtain a presigned S3 URL, then\ndownloads the file.  For VIRTUAL_MODEL the server packages the artifact\nasynchronously — the script polls until the URL is ready (up to\nPOLL_TIMEOUT seconds).  All other asset types return a URL immediately\nor a 400 error if the underlying job has not yet completed.\n\nUsage examples\n--------------\n  # Download training logs (default):\n  python download_logs.py 2w7R6h2PNexQ9kC\n\n  # Download and immediately print a training stability summary:\n  python download_logs.py 2w7R6h2PNexQ9kC --summary\n\n  # Download training logs to a specific file:\n  python download_logs.py 2w7R6h2PNexQ9kC -o training.tar.gz\n\n  # Download evaluation logs (evaluation ID required):\n  python download_logs.py 2w7R6h2PNexQ9kC --asset-type EVALUATION_LOGS --evaluation-id <evalId>\n\n  # Download virtual model artifact (polls until packaging completes):\n  python download_logs.py 2w7R6h2PNexQ9kC --asset-type VIRTUAL_MODEL\n\n  # Override site URL and username on the command line:\n  python download_logs.py 2w7R6h2PNexQ9kC --url https://my.droa.example.com --username alice\n\nAsset types\n-----------\n  TRAINING_LOGS       Logs from the training job (job must be COMPLETED or FAILED)\n  EVALUATION_LOGS     Logs from an evaluation run (requires --evaluation-id;\n                      job must be COMPLETED or FAILED)\n  PHYSICAL_CAR_MODEL  Physical car model artifact\n  VIRTUAL_MODEL       Virtual model package (packaged asynchronously; script polls)\n  VIDEOS              Evaluation video recordings\n\nAuthentication\n--------------\nCredentials are obtained via the Cognito Identity Pool embedded in the DRoA\nsite's /env.js.  
A password prompt is shown on the first call; subsequent\ncalls within the credential lifetime (~1 h) reuse a cache stored in\n~/.droa-cache/.\n\nThe site URL is read from DR_DROA_URL and the username from DR_DROA_USERNAME\n(both set in system.env), or supplied via --url / --username.\n\"\"\"\n\nimport argparse\nimport getpass\nimport os\nimport sys\nimport time\nfrom urllib.parse import urlparse\n\nimport requests\n\nfrom auth import (\n    add_common_args, authenticate, build_auth, get_aws_credentials, load_droa_config,\n    load_cached_credentials, save_credentials_to_cache,\n)\n\nASSET_TYPES = [\"TRAINING_LOGS\", \"EVALUATION_LOGS\",\n               \"PHYSICAL_CAR_MODEL\", \"VIRTUAL_MODEL\", \"VIDEOS\"]\nPOLL_INTERVAL = 5    # seconds between status checks\nPOLL_TIMEOUT = 300  # seconds before giving up (packaging can take a while)\n\n\ndef get_asset_url(cfg, credentials, model_id, asset_type, evaluation_id=None):\n    \"\"\"Call GET /models/{modelId}/getasset, polling while status is QUEUED.\"\"\"\n    url = f\"{cfg.api_endpoint}/models/{model_id}/getasset\"\n    params = {\"assetType\": asset_type}\n    if evaluation_id:\n        params[\"evaluationId\"] = evaluation_id\n\n    deadline = time.monotonic() + POLL_TIMEOUT\n    while True:\n        response = requests.get(\n            url, params=params,\n            auth=build_auth(url, credentials, cfg.region, cfg.site_url),\n            timeout=30,\n        )\n        if not response.ok:\n            raise RuntimeError(\n                f\"API error: {response.status_code} {response.reason}\\n{response.text}\"\n            )\n        data = response.json()\n        if data.get(\"url\"):\n            return data[\"url\"]\n        # Only VIRTUAL_MODEL returns status:QUEUED while packaging\n        status = data.get(\"status\", \"UNKNOWN\")\n        if status != \"QUEUED\":\n            raise RuntimeError(\n                f\"No URL returned and unexpected status '{status}'. 
\"\n                f\"The asset may not be available yet.\"\n            )\n        if time.monotonic() > deadline:\n            raise RuntimeError(\n                f\"Timed out waiting for asset after {POLL_TIMEOUT}s.\")\n        print(\n            f\"  Packaging in progress (status: {status}) — retrying in {POLL_INTERVAL}s...\", file=sys.stderr)\n        time.sleep(POLL_INTERVAL)\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(\n        description=\"Download logs/assets from a DeepRacer on AWS model.\",\n        epilog=(\n            \"examples:\\n\"\n            \"  %(prog)s 2w7R6h2PNexQ9kC\\n\"\n            \"  %(prog)s 2w7R6h2PNexQ9kC --asset-type EVALUATION_LOGS --evaluation-id <evalId>\\n\"\n            \"  %(prog)s 2w7R6h2PNexQ9kC --asset-type VIRTUAL_MODEL -o model.tar.gz\"\n        ),\n        formatter_class=argparse.RawDescriptionHelpFormatter,\n    )\n    add_common_args(parser)\n    parser.add_argument(\"model_id\", help=\"Model ID\")\n    parser.add_argument(\n        \"--asset-type\", default=\"TRAINING_LOGS\", choices=ASSET_TYPES,\n        help=\"Asset type to download (default: TRAINING_LOGS)\",\n    )\n    parser.add_argument(\n        \"--evaluation-id\", default=None,\n        help=\"Evaluation ID — required when --asset-type is EVALUATION_LOGS\",\n    )\n    parser.add_argument(\n        \"--output\", \"-o\", default=None,\n        help=\"Output file path (default: derived from the presigned URL filename)\",\n    )\n    parser.add_argument(\n        \"--summary\", action=\"store_true\",\n        help=\"After downloading, load the archive with DeepRacer Utils and print a training stability summary (TRAINING_LOGS only)\",\n    )\n    return parser.parse_args()\n\n\ndef main():\n    args = parse_args()\n\n    username = args.username or os.environ.get(\"DR_DROA_USERNAME\")\n    if not username:\n        print(\"Error: --username or DR_DROA_USERNAME required.\", file=sys.stderr)\n        sys.exit(1)\n\n    if args.asset_type == 
\"EVALUATION_LOGS\" and not args.evaluation_id:\n        print(\"Error: --evaluation-id is required for EVALUATION_LOGS.\",\n              file=sys.stderr)\n        sys.exit(1)\n\n    cfg = load_droa_config(args)\n\n    credentials = load_cached_credentials(cfg.identity_pool_id, username)\n    if credentials:\n        print(\"Using cached credentials.\", file=sys.stderr)\n    else:\n        password = args.password or getpass.getpass(\n            f\"Password for {username}: \")\n        id_token = authenticate(cfg.region, cfg.client_id, username, password)\n        credentials = get_aws_credentials(\n            cfg.region, cfg.user_pool_id, cfg.identity_pool_id, id_token)\n        save_credentials_to_cache(cfg.identity_pool_id, username, credentials)\n\n    print(\n        f\"Requesting {args.asset_type} for model {args.model_id}...\", file=sys.stderr)\n    presigned_url = get_asset_url(\n        cfg, credentials, args.model_id, args.asset_type, args.evaluation_id\n    )\n\n    dl_response = requests.get(presigned_url, timeout=120, stream=True)\n    if not dl_response.ok:\n        raise RuntimeError(\n            f\"Download failed: {dl_response.status_code} {dl_response.reason}\")\n\n    out_path = args.output\n    if not out_path:\n        url_filename = os.path.basename(urlparse(presigned_url).path)\n        out_path = url_filename or f\"{args.model_id}_{args.asset_type.lower()}.bin\"\n\n    with open(out_path, \"wb\") as f:\n        for chunk in dl_response.iter_content(chunk_size=65536):\n            f.write(chunk)\n\n    print(f\"Downloaded to: {out_path}\", file=sys.stderr)\n\n    if args.summary:\n        if args.asset_type != \"TRAINING_LOGS\":\n            print(\n                f\"Warning: --summary is only supported for TRAINING_LOGS, skipping.\",\n                file=sys.stderr,\n            )\n        else:\n            try:\n                from deepracer.logs import DeepRacerLog, TarFileHandler\n            except ImportError:\n                
print(\n                    \"Error: deepracer-utils is not installed. \"\n                    \"Run: pip install deepracer-utils\",\n                    file=sys.stderr,\n                )\n                sys.exit(1)\n            print(file=sys.stderr)\n            fh = TarFileHandler(archive_path=out_path)\n            log = DeepRacerLog(filehandler=fh, verbose=True)\n            log.load_training_trace()\n            log.stability.print_summary()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "scripts/droa/get_model.py",
    "content": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"\nGet details of a specific model from DeepRacer on AWS (DRoA).\n\nRetrieves the full model record from GET /models/{modelId} and prints it in\na human-readable key-value format.  Use --json for machine-readable output.\n\nUsage examples\n--------------\n  # Basic summary (status, training config, sensors, hyperparameters):\n  python get_model.py 2w7R6h2PNexQ9kC\n\n  # Include reward function, action space and Metrics URL:\n  python get_model.py 2w7R6h2PNexQ9kC --verbose\n\n  # Print DeepRacer Utils training metrics summary:\n  python get_model.py 2w7R6h2PNexQ9kC --summary\n\n  # Raw JSON (suitable for piping to jq):\n  python get_model.py 2w7R6h2PNexQ9kC --json | jq .status\n\n  # Override site URL and username on the command line:\n  python get_model.py 2w7R6h2PNexQ9kC --url https://my.droa.example.com --username alice\n\nAuthentication\n--------------\nCredentials are obtained via the Cognito Identity Pool embedded in the DRoA\nsite's /env.js.  
A password prompt is shown on the first call; subsequent\ncalls within the credential lifetime (~1 h) reuse a cache stored in\n~/.droa-cache/.\n\nThe site URL is read from DR_DROA_URL and the username from DR_DROA_USERNAME\n(both set in system.env), or supplied via --url / --username.\n\nModel status values\n-------------------\n  DELETING  ERROR  EVALUATING  IMPORTING  QUEUED  READY\n  STOPPING  SUBMITTING  TRAINING\n\nTraining status values\n----------------------\n  CANCELED  COMPLETED  FAILED  IN_PROGRESS  INITIALIZING  QUEUED  STOPPING\n\"\"\"\n\nimport argparse\nimport getpass\nimport json\nimport os\nimport sys\n\nimport requests\n\nfrom auth import (\n    add_common_args, authenticate, build_auth, get_aws_credentials, load_droa_config,\n    load_cached_credentials, save_credentials_to_cache,\n)\n\n\ndef get_model(cfg, credentials: dict, model_id: str) -> dict:\n    url = f\"{cfg.api_endpoint}/models/{model_id}\"\n    response = requests.get(\n        url, auth=build_auth(url, credentials, cfg.region, cfg.site_url), timeout=30\n    )\n    if not response.ok:\n        raise RuntimeError(\n            f\"API error: {response.status_code} {response.reason}\\n{response.text}\"\n        )\n    data = response.json()\n    return data.get(\"model\", data)\n\n\ndef _fmt_bytes(n) -> str:\n    if n is None:\n        return \"\"\n    for unit, threshold in ((\"GB\", 1024**3), (\"MB\", 1024**2), (\"KB\", 1024)):\n        if n >= threshold:\n            return f\"{n / threshold:.1f} {unit}\"\n    return f\"{n} B\"\n\n\ndef _kv(key: str, value, indent: int = 0) -> None:\n    if value is None or value == \"\":\n        return\n    pad = \"  \" * indent\n    print(f\"{pad}{key:<22}: {value}\")\n\n\ndef print_model(model: dict, verbose: bool = False) -> None:\n    _kv(\"Model ID\", model.get(\"modelId\"))\n    _kv(\"Name\", model.get(\"name\"))\n    _kv(\"Description\", model.get(\"description\"))\n    _kv(\"Status\", model.get(\"status\"))\n    _kv(\"Training Status\", 
model.get(\"trainingStatus\"))\n    created = (model.get(\"createdAt\") or \"\")[:19].replace(\"T\", \" \")\n    _kv(\"Created At\", created)\n    _kv(\"File Size\", _fmt_bytes(model.get(\"fileSizeInBytes\")))\n    _kv(\"Packaging Status\", model.get(\"packagingStatus\"))\n    if model.get(\"importErrorMessage\"):\n        _kv(\"Import Error\", model[\"importErrorMessage\"])\n\n    car = model.get(\"carCustomization\") or {}\n    if car:\n        print()\n        print(\"Car Customization\")\n        _kv(\"Color\", car.get(\"carColor\"), indent=1)\n        _kv(\"Shell\", car.get(\"carShell\"), indent=1)\n\n    tc = model.get(\"trainingConfig\") or {}\n    if tc:\n        print()\n        print(\"Training Config\")\n        track = tc.get(\"trackConfig\") or {}\n        _kv(\"Track\", track.get(\"trackId\"), indent=1)\n        _kv(\"Direction\", track.get(\"trackDirection\"), indent=1)\n        _kv(\"Race Type\", tc.get(\"raceType\"), indent=1)\n        _kv(\"Max Time (min)\", tc.get(\"maxTimeInMinutes\"), indent=1)\n\n    meta = model.get(\"metadata\") or {}\n    if meta:\n        print()\n        print(\"Metadata\")\n        _kv(\"Algorithm\", meta.get(\"agentAlgorithm\"), indent=1)\n        sensors = meta.get(\"sensors\") or {}\n        _kv(\"Camera\", sensors.get(\"camera\"), indent=1)\n        _kv(\"Lidar\", sensors.get(\"lidar\"), indent=1)\n        hp = meta.get(\"hyperparameters\") or {}\n        if hp:\n            print(\"  Hyperparameters\")\n            for k, v in hp.items():\n                _kv(k, v, indent=2)\n        if verbose:\n            action_space = meta.get(\"actionSpace\") or {}\n            if action_space:\n                print()\n                print(\"Action Space\")\n                cont = action_space.get(\"continous\") or {}\n                disc = action_space.get(\"discrete\") or []\n                if cont:\n                    _kv(\"Type\", \"continuous\", indent=1)\n                    _kv(\"Speed range\",\n                    
    f\"{cont.get('lowSpeed')} – {cont.get('highSpeed')} m/s\", indent=1)\n                    _kv(\"Steering range\",\n                        f\"{cont.get('lowSteeringAngle')}° – {cont.get('highSteeringAngle')}°\", indent=1)\n                elif disc:\n                    _kv(\"Type\", f\"discrete ({len(disc)} actions)\", indent=1)\n                    for i, a in enumerate(disc):\n                        _kv(f\"Action {i}\", f\"speed={a.get('speed')} m/s, steering={a.get('steeringAngle')}°\", indent=2)\n            rf = meta.get(\"rewardFunction\")\n            if rf:\n                print()\n                print(\"Reward Function\")\n                print(rf)\n\n    if verbose:\n        metrics_url = model.get(\"trainingMetricsUrl\")\n        if metrics_url:\n            print()\n            _kv(\"Metrics URL\", metrics_url)\n    video_url = model.get(\"trainingVideoStreamUrl\")\n    if video_url:\n        print()\n        _kv(\"Video Stream URL\", video_url)\n\n\ndef parse_args() -> argparse.Namespace:\n    parser = argparse.ArgumentParser(\n        description=\"Get details of a model in DeepRacer on AWS.\",\n        epilog=(\n            \"examples:\\n\"\n            \"  %(prog)s 2w7R6h2PNexQ9kC\\n\"\n            \"  %(prog)s 2w7R6h2PNexQ9kC --verbose\\n\"\n            \"  %(prog)s 2w7R6h2PNexQ9kC --summary\\n\"\n            \"  %(prog)s 2w7R6h2PNexQ9kC --json | jq .status\"\n        ),\n        formatter_class=argparse.RawDescriptionHelpFormatter,\n    )\n    add_common_args(parser)\n    parser.add_argument(\"model_id\", help=\"Model ID to retrieve\")\n    parser.add_argument(\n        \"--json\", dest=\"output_json\", action=\"store_true\",\n        help=\"Output raw JSON instead of formatted view\",\n    )\n    parser.add_argument(\n        \"-v\", \"--verbose\", action=\"store_true\",\n        help=\"Also print reward function, action space, and Metrics URL\",\n    )\n    parser.add_argument(\n        \"--summary\", action=\"store_true\",\n        
help=\"Load training metrics via DeepRacer Utils and print a mean summary\",\n    )\n    return parser.parse_args()\n\n\ndef main() -> None:\n    args = parse_args()\n\n    username = args.username or os.environ.get(\"DR_DROA_USERNAME\")\n    if not username:\n        print(\"Error: --username or DR_DROA_USERNAME required.\", file=sys.stderr)\n        sys.exit(1)\n\n    cfg = load_droa_config(args)\n\n    credentials = load_cached_credentials(cfg.identity_pool_id, username)\n    if credentials:\n        print(\"Using cached credentials.\", file=sys.stderr)\n    else:\n        password = args.password or getpass.getpass(\n            f\"Password for {username}: \")\n        id_token = authenticate(cfg.region, cfg.client_id, username, password)\n        credentials = get_aws_credentials(\n            cfg.region, cfg.user_pool_id, cfg.identity_pool_id, id_token)\n        save_credentials_to_cache(cfg.identity_pool_id, username, credentials)\n    model = get_model(cfg, credentials, args.model_id)\n    if args.output_json:\n        print(json.dumps(model, indent=2, default=str))\n    else:\n        print_model(model, verbose=args.verbose)\n\n    if args.summary:\n        metrics_url = model.get(\"trainingMetricsUrl\")\n        if not metrics_url:\n            print(\"Error: no trainingMetricsUrl available for this model.\",\n                  file=sys.stderr)\n            sys.exit(1)\n        try:\n            from deepracer.logs import TrainingMetrics\n        except ImportError:\n            print(\n                \"Error: deepracer-utils is not installed. \"\n                \"Run: pip install deepracer-utils\",\n                file=sys.stderr,\n            )\n            sys.exit(1)\n        print()\n        tm = TrainingMetrics(None, url=metrics_url)\n        print(tm.getSummary(method=\"mean\",\n              summary_index=[\"r-i\", \"master_iteration\"]))\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "scripts/droa/import_model.py",
    "content": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"\nImport a locally trained DRFC model into DeepRacer on AWS (DRoA).\n\nTwo source modes\n----------------\n--model-dir DIR\n    Upload from a pre-assembled local directory.  The directory must contain\n    at minimum: model_metadata.json, reward_function.py, training_params.yaml,\n    hyperparameters.json.  All files are uploaded as-is preserving relative\n    paths.\n\n--model-prefix PREFIX\n    Pull the model from the DRFC local S3 bucket (MinIO), assemble the correct\n    upload structure, and generate training_params.yaml from DR_* environment\n    variables — replicating what scripts/upload/upload-model.sh and\n    scripts/upload/prepare-config.py do.  If omitted, DR_LOCAL_S3_MODEL_PREFIX\n    is used as the prefix.\n\nCheckpoint selection (--model-prefix mode only)\n-----------------------------------------------\n  Default          last tested checkpoint  (last_checkpoint in deepracer_checkpoints.json)\n  --best           best checkpoint         (best_checkpoint)\n  --checkpoint N   specific checkpoint step number\n\nFlow\n----\n  1. Authenticate with Cognito User Pool  →  ID token\n  2. Exchange ID token via Identity Pool  →  temporary AWS credentials\n  3. (--model-prefix) Download from local S3 into a temp dir;\n     generate training_params.yaml from DR_* env vars\n  4. Upload assembled directory to the DRoA upload S3 bucket\n  5. 
POST /importmodel  →  modelId\n\nUsage examples\n--------------\n  # Upload from a pre-assembled local directory:\n  python import_model.py --model-dir /tmp/my-model --model-name my-model\n\n  # Pull current model from local MinIO (uses DR_LOCAL_S3_MODEL_PREFIX):\n  python import_model.py --model-prefix rl-deepracer-sagemaker --model-name my-model\n\n  # Pull with best checkpoint:\n  python import_model.py --model-prefix rl-deepracer-sagemaker --model-name my-model --best\n\nAuthentication\n--------------\n  DR_DROA_URL and DR_DROA_USERNAME (system.env) or --url / --username.\n  Credential cache: ~/.droa-cache/\n\nEnvironment variables (--model-prefix mode)\n-------------------------------------------\n  DR_LOCAL_S3_BUCKET       Local S3 bucket name\n  DR_LOCAL_S3_MODEL_PREFIX Default model prefix (overridden by --model-prefix)\n  DR_MINIO_URL             MinIO endpoint URL (e.g. http://minio:9000)\n  DR_LOCAL_S3_PROFILE      AWS profile name for local S3 access (default: \"default\")\n  DR_*                     Training config variables used to build training_params.yaml\n\"\"\"\n\nimport argparse\nimport getpass\nimport json\nimport os\nimport re\nimport sys\nimport tempfile\nimport uuid\nfrom pathlib import Path\n\nimport boto3\nimport requests\nimport yaml\n\nfrom auth import (\n    add_common_args, authenticate, build_auth, get_aws_credentials, load_droa_config,\n    load_cached_credentials, save_credentials_to_cache,\n)\n\n\nEXCLUDED_FILES = {\".DS_Store\", \"Thumbs.db\", \"desktop.ini\", \"._.DS_Store\"}\n\n# Required files when validating a user-supplied --model-dir\nREQUIRED_FILES_DIR = {\n    \"model_metadata.json\",\n    \"reward_function.py\",\n    \"training_params.yaml\",\n    \"hyperparameters.json\",\n}\n\n\n# ---------------------------------------------------------------------------\n# Content-type helper\n# ---------------------------------------------------------------------------\n\ndef _content_type(file_path):\n    name = file_path.name\n    
ext = file_path.suffix.lower()\n    if name == \"done\":\n        return \"text/plain\"\n    mapping = {\n        \".meta\": \"application/octet-stream\",\n        \".ckpt\": \"application/octet-stream\",\n        \".pb\": \"application/octet-stream\",\n        \".ready\": \"text/plain\",\n        \".json\": \"application/json\",\n        \".yaml\": \"application/x-yaml\",\n        \".yml\": \"application/x-yaml\",\n        \".py\": \"text/x-python\",\n        \".data\": \"application/octet-stream\",\n        \".index\": \"application/octet-stream\",\n    }\n    return mapping.get(ext, \"application/octet-stream\")\n\n\n# ---------------------------------------------------------------------------\n# Local S3 client (MinIO via DR_MINIO_URL + DR_LOCAL_S3_PROFILE)\n# ---------------------------------------------------------------------------\n\ndef _local_s3_client():\n    \"\"\"Return a boto3 S3 client pointed at the local MinIO instance.\"\"\"\n    profile = os.environ.get(\"DR_LOCAL_S3_PROFILE\", \"default\")\n    endpoint = os.environ.get(\"DR_MINIO_URL\")  # e.g. 
http://minio:9000\n    session = boto3.Session(profile_name=profile)\n    kwargs = {\"endpoint_url\": endpoint} if endpoint else {}\n    return session.client(\"s3\", **kwargs)\n\n\ndef _s3_cp_down(s3, bucket, key, local_path):\n    \"\"\"Download a single S3 object to local_path.\"\"\"\n    print(f\"    s3 cp  s3://{bucket}/{key}  →  {local_path}\")\n    Path(local_path).parent.mkdir(parents=True, exist_ok=True)\n    s3.download_file(bucket, key, str(local_path))\n\n\ndef _s3_sync_down(s3, bucket, prefix, local_dir, include_pattern=None):\n    \"\"\"Download all objects under prefix into local_dir, optionally filtered.\"\"\"\n    print(f\"    s3 sync  s3://{bucket}/{prefix}  →  {local_dir}\")\n    paginator = s3.get_paginator(\"list_objects_v2\")\n    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):\n        for obj in page.get(\"Contents\", []):\n            key = obj[\"Key\"]\n            name = key[len(prefix):].lstrip(\"/\")\n            if not name:\n                continue\n            if include_pattern and not any(name.startswith(p) for p in include_pattern):\n                continue\n            dest = Path(local_dir) / name\n            dest.parent.mkdir(parents=True, exist_ok=True)\n            s3.download_file(bucket, key, str(dest))\n\n\n# ---------------------------------------------------------------------------\n# training_params.yaml generation  (replicates prepare-config.py)\n# ---------------------------------------------------------------------------\n\ndef _build_training_params(work_dir, target_bucket, target_prefix):\n    \"\"\"Generate training_params.yaml from DR_* env vars into work_dir.\"\"\"\n    e = os.environ.get\n    cfg = {\n        \"AWS_REGION\":                  e(\"DR_AWS_APP_REGION\", \"us-east-1\"),\n        \"JOB_TYPE\":                    \"TRAINING\",\n        \"METRICS_S3_BUCKET\":           target_bucket,\n        \"METRICS_S3_OBJECT_KEY\":       f\"{target_prefix}/TrainingMetrics.json\",\n        
\"MODEL_METADATA_FILE_S3_KEY\":  f\"{target_prefix}/model/model_metadata.json\",\n        \"REWARD_FILE_S3_KEY\":          f\"{target_prefix}/reward_function.py\",\n        \"SAGEMAKER_SHARED_S3_BUCKET\":  target_bucket,\n        \"SAGEMAKER_SHARED_S3_PREFIX\":  target_prefix,\n        \"BODY_SHELL_TYPE\":             e(\"DR_CAR_BODY_SHELL_TYPE\", \"deepracer\"),\n        \"CAR_NAME\":                    e(\"DR_CAR_NAME\", \"MyCar\"),\n        \"RACE_TYPE\":                   e(\"DR_RACE_TYPE\", \"TIME_TRIAL\"),\n        # DRoA TrackId has no direction suffix; strip _cw/_ccw that DRFC appends\n        \"WORLD_NAME\":                  re.sub(r'_(cw|ccw)$', '', e(\"DR_WORLD_NAME\", \"LGSWide\")),\n        \"DISPLAY_NAME\":                e(\"DR_DISPLAY_NAME\", \"racer1\"),\n        \"RACER_NAME\":                  e(\"DR_RACER_NAME\", \"racer1\"),\n        \"ALTERNATE_DRIVING_DIRECTION\": e(\"DR_TRAIN_ALTERNATE_DRIVING_DIRECTION\",\n                                         e(\"DR_ALTERNATE_DRIVING_DIRECTION\", \"false\")),\n        \"CHANGE_START_POSITION\":       e(\"DR_TRAIN_CHANGE_START_POSITION\",\n                                         e(\"DR_CHANGE_START_POSITION\", \"true\")),\n        \"ROUND_ROBIN_ADVANCE_DIST\":    e(\"DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST\", \"0.05\"),\n        \"START_POSITION_OFFSET\":       e(\"DR_TRAIN_START_POSITION_OFFSET\", \"0.00\"),\n        \"ENABLE_DOMAIN_RANDOMIZATION\": e(\"DR_ENABLE_DOMAIN_RANDOMIZATION\", \"false\"),\n        \"MIN_EVAL_TRIALS\":             e(\"DR_TRAIN_MIN_EVAL_TRIALS\", \"5\"),\n    }\n\n    if cfg[\"BODY_SHELL_TYPE\"] == \"deepracer\":\n        cfg[\"CAR_COLOR\"] = e(\"DR_CAR_COLOR\", \"Red\")\n\n    race_type = cfg[\"RACE_TYPE\"]\n    if race_type == \"OBJECT_AVOIDANCE\":\n        cfg[\"NUMBER_OF_OBSTACLES\"] = e(\"DR_OA_NUMBER_OF_OBSTACLES\", \"6\")\n        cfg[\"MIN_DISTANCE_BETWEEN_OBSTACLES\"] = e(\n            \"DR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES\", \"2.0\")\n        
cfg[\"RANDOMIZE_OBSTACLE_LOCATIONS\"] = e(\n            \"DR_OA_RANDOMIZE_OBSTACLE_LOCATIONS\", \"True\")\n        cfg[\"IS_OBSTACLE_BOT_CAR\"] = e(\"DR_OA_IS_OBSTACLE_BOT_CAR\", \"false\")\n        positions_str = e(\"DR_OA_OBJECT_POSITIONS\", \"\")\n        if positions_str:\n            positions = positions_str.split(\";\")\n            cfg[\"OBJECT_POSITIONS\"] = positions\n            cfg[\"NUMBER_OF_OBSTACLES\"] = str(len(positions))\n\n    if race_type == \"HEAD_TO_BOT\":\n        cfg[\"IS_LANE_CHANGE\"] = e(\"DR_H2B_IS_LANE_CHANGE\", \"False\")\n        cfg[\"LOWER_LANE_CHANGE_TIME\"] = e(\n            \"DR_H2B_LOWER_LANE_CHANGE_TIME\", \"3.0\")\n        cfg[\"UPPER_LANE_CHANGE_TIME\"] = e(\n            \"DR_H2B_UPPER_LANE_CHANGE_TIME\", \"5.0\")\n        cfg[\"LANE_CHANGE_DISTANCE\"] = e(\"DR_H2B_LANE_CHANGE_DISTANCE\", \"1.0\")\n        cfg[\"NUMBER_OF_BOT_CARS\"] = e(\"DR_H2B_NUMBER_OF_BOT_CARS\", \"0\")\n        cfg[\"MIN_DISTANCE_BETWEEN_BOT_CARS\"] = e(\n            \"DR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS\", \"2.0\")\n        cfg[\"RANDOMIZE_BOT_CAR_LOCATIONS\"] = e(\n            \"DR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS\", \"False\")\n        cfg[\"BOT_CAR_SPEED\"] = e(\"DR_H2B_BOT_CAR_SPEED\", \"0.2\")\n\n    # TRACK_DIRECTION_CLOCKWISE: infer from the raw DR_WORLD_NAME (which still carries\n    # the _cw/_ccw suffix) before that suffix is stripped for WORLD_NAME above.\n    raw_world = e(\"DR_WORLD_NAME\", \"LGSWide\")\n    if raw_world.endswith(\"_cw\"):\n        cfg[\"TRACK_DIRECTION_CLOCKWISE\"] = True\n    elif raw_world.endswith(\"_ccw\"):\n        cfg[\"TRACK_DIRECTION_CLOCKWISE\"] = False\n    else:\n        reverse = e(\"DR_TRAIN_REVERSE_DIRECTION\",\n                    \"False\").lower() in (\"true\", \"1\", \"yes\")\n        cfg[\"TRACK_DIRECTION_CLOCKWISE\"] = not reverse\n\n    out = Path(work_dir) / \"training_params.yaml\"\n    with open(out, \"w\") as fh:\n        yaml.dump(cfg, fh, default_flow_style=False,\n                  
default_style=\"'\", explicit_start=True)\n    return out\n\n\n# ---------------------------------------------------------------------------\n# Pull from local S3 and assemble upload structure\n# ---------------------------------------------------------------------------\n\ndef _build_from_s3_prefix(model_prefix, checkpoint_mode, checkpoint_num,\n                          target_bucket, target_prefix):\n    \"\"\"\n    Download model files from local DRFC S3 into a temp directory and return\n    its path.  The caller is responsible for cleanup.\n\n    checkpoint_mode: 'last' | 'best' | 'number'\n    checkpoint_num:  step number (int) when mode == 'number'\n    target_bucket / target_prefix: DRoA upload destination — needed to bake\n    correct paths into training_params.yaml.\n    \"\"\"\n    local_bucket = os.environ.get(\"DR_LOCAL_S3_BUCKET\", \"bucket\")\n\n    work = Path(tempfile.mkdtemp(prefix=\"droa-import-\"))\n    model_dir = work / \"model\"\n    model_dir.mkdir()\n    ip_dir = work / \"ip\"\n    ip_dir.mkdir()\n    metrics_dir = work / \"metrics\"\n    metrics_dir.mkdir()\n\n    print(f\"Pulling model from s3://{local_bucket}/{model_prefix}\")\n    s3 = _local_s3_client()\n\n    # --- metadata files ---\n    # model_metadata.json must be at the root of the upload prefix (API reads it there)\n    # also keep a copy inside model/ so sagemaker-artifacts structure is preserved\n    _s3_cp_down(s3, local_bucket, f\"{model_prefix}/model/model_metadata.json\",\n                work / \"model_metadata.json\")\n    (model_dir / \"model_metadata.json\").write_bytes((work /\n                                                     \"model_metadata.json\").read_bytes())\n    _s3_cp_down(s3, local_bucket, f\"{model_prefix}/ip/hyperparameters.json\",\n                ip_dir / \"hyperparameters.json\")\n\n    # reward_function.py: try model root first, then DR_LOCAL_S3_REWARD_KEY\n    local_reward_key = os.environ.get(\"DR_LOCAL_S3_REWARD_KEY\",\n                           
           f\"{model_prefix}/reward_function.py\")\n    try:\n        _s3_cp_down(s3, local_bucket, f\"{model_prefix}/reward_function.py\",\n                    work / \"reward_function.py\")\n    except Exception:\n        _s3_cp_down(s3, local_bucket, local_reward_key,\n                    work / \"reward_function.py\")\n\n    # metrics\n    metrics_prefix = os.environ.get(\"DR_LOCAL_S3_METRICS_PREFIX\",\n                                    f\"{model_prefix}/metrics\")\n    _s3_sync_down(s3, local_bucket, metrics_prefix, metrics_dir)\n\n    # --- checkpoint index ---\n    _s3_cp_down(s3, local_bucket, f\"{model_prefix}/model/deepracer_checkpoints.json\",\n                model_dir / \"deepracer_checkpoints.json\")\n    with open(model_dir / \"deepracer_checkpoints.json\") as fh:\n        ckpt_index = json.load(fh)\n\n    if checkpoint_mode == \"best\":\n        ckpt_entry = ckpt_index.get(\n            \"best_checkpoint\", ckpt_index.get(\"last_checkpoint\"))\n        print(\"Using best checkpoint.\")\n    elif checkpoint_mode == \"number\":\n        # List model/ prefix and find the matching .ckpt.index key\n        paginator = s3.get_paginator(\"list_objects_v2\")\n        match = None\n        for page in paginator.paginate(Bucket=local_bucket,\n                                       Prefix=f\"{model_prefix}/model/\"):\n            for obj in page.get(\"Contents\", []):\n                fname = obj[\"Key\"].split(\"/\")[-1]\n                if fname.startswith(f\"{checkpoint_num}_Step-\") and fname.endswith(\".ckpt.index\"):\n                    match = fname[:-len(\".index\")]  # strip .index → .ckpt\n                    break\n            if match:\n                break\n        if not match:\n            raise RuntimeError(\n                f\"No checkpoint found for step {checkpoint_num} \"\n                f\"in s3://{local_bucket}/{model_prefix}/model/\"\n            )\n        ckpt_entry = {\"name\": match}\n        print(f\"Using checkpoint 
{match}.\")\n    else:\n        ckpt_entry = ckpt_index.get(\"last_checkpoint\")\n        print(\"Using last checkpoint.\")\n\n    if not ckpt_entry:\n        raise RuntimeError(\n            \"Could not determine checkpoint from deepracer_checkpoints.json\")\n\n    ckpt_file = ckpt_entry[\"name\"]       # e.g. \"500_Step-500.ckpt\"\n    ckpt_step = ckpt_file.split(\"_\")[0]  # e.g. \"500\"\n    print(f\"Checkpoint: {ckpt_file}\")\n\n    # Download checkpoint model files (prefix-filtered sync)\n    _s3_sync_down(\n        s3, local_bucket, f\"{model_prefix}/model/\", model_dir,\n        include_pattern=[f\"{ckpt_step}_Step-\", f\"model_{ckpt_step}.pb\"],\n    )\n\n    # Write .coach_checkpoint\n    (model_dir / \".coach_checkpoint\").write_text(ckpt_file)\n\n    # Rewrite deepracer_checkpoints.json to reference only chosen checkpoint\n    new_ckpt_json = {\"last_checkpoint\": ckpt_entry,\n                     \"best_checkpoint\": ckpt_entry}\n    with open(model_dir / \"deepracer_checkpoints.json\", \"w\") as fh:\n        json.dump(new_ckpt_json, fh)\n\n    # --- training_params.yaml: copy from bucket, generate only if missing ---\n    # Multi-worker training produces training_params_1.yaml, training_params_2.yaml, …\n    # We prefer _1 (worker 1 is canonical), then the plain name, then generate.\n    tp_dst = work / \"training_params.yaml\"\n    tp_candidates = [\n        f\"{model_prefix}/training_params_1.yaml\",\n        f\"{model_prefix}/training_params.yaml\",\n    ]\n    tp_found = False\n    for tp_key in tp_candidates:\n        try:\n            _s3_cp_down(s3, local_bucket, tp_key, tp_dst)\n            print(f\"Using {tp_key.split('/')[-1]} from bucket.\")\n            tp_found = True\n            break\n        except Exception:\n            pass\n    if not tp_found:\n        print(\"training_params.yaml not found in bucket — generating from DR_* env vars.\")\n        _build_training_params(work, target_bucket, target_prefix)\n\n    # Normalise 
training_params.yaml for DRoA:\n    # 1. Patch all S3 bucket/prefix fields to the DRoA upload destination\n    #    (the file from the bucket still references the original DRFC paths).\n    # 2. WORLD_NAME must not have a _cw/_ccw suffix (DRoA TrackId has none)\n    # 3. TRACK_DIRECTION_CLOCKWISE must be present (DRFC never wrote it)\n    with open(tp_dst) as fh:\n        tp_data = yaml.safe_load(fh) or {}\n    changed = False\n    # Always overwrite the S3 destination fields regardless of where the file came from\n    tp_data[\"METRICS_S3_BUCKET\"] = target_bucket\n    tp_data[\"METRICS_S3_OBJECT_KEY\"] = f\"{target_prefix}/TrainingMetrics.json\"\n    tp_data[\"MODEL_METADATA_FILE_S3_KEY\"] = f\"{target_prefix}/model/model_metadata.json\"\n    tp_data[\"REWARD_FILE_S3_KEY\"] = f\"{target_prefix}/reward_function.py\"\n    tp_data[\"SAGEMAKER_SHARED_S3_BUCKET\"] = target_bucket\n    tp_data[\"SAGEMAKER_SHARED_S3_PREFIX\"] = target_prefix\n    changed = True\n    # Strip direction suffix from WORLD_NAME if present\n    world_raw = tp_data.get(\"WORLD_NAME\", \"\")\n    world_clean = re.sub(r'_(cw|ccw)$', '', world_raw)\n    if world_clean != world_raw:\n        tp_data[\"WORLD_NAME\"] = world_clean\n        changed = True\n        print(\n            f\"    Stripped direction suffix from WORLD_NAME: {world_raw} → {world_clean}\")\n    # Infer TRACK_DIRECTION_CLOCKWISE if missing\n    if \"TRACK_DIRECTION_CLOCKWISE\" not in tp_data:\n        # Prefer DR_WORLD_NAME env var which still carries the suffix\n        dr_world = os.environ.get(\"DR_WORLD_NAME\", world_raw)\n        if dr_world.endswith(\"_cw\") or world_raw.endswith(\"_cw\"):\n            tp_data[\"TRACK_DIRECTION_CLOCKWISE\"] = True\n        elif dr_world.endswith(\"_ccw\") or world_raw.endswith(\"_ccw\"):\n            tp_data[\"TRACK_DIRECTION_CLOCKWISE\"] = False\n        else:\n            reverse = os.environ.get(\n                \"DR_TRAIN_REVERSE_DIRECTION\", \"False\").lower() in (\"true\", \"1\", 
\"yes\")\n            tp_data[\"TRACK_DIRECTION_CLOCKWISE\"] = not reverse\n        changed = True\n        print(\n            f\"    Set TRACK_DIRECTION_CLOCKWISE={tp_data['TRACK_DIRECTION_CLOCKWISE']}\")\n    if changed:\n        with open(tp_dst, \"w\") as fh:\n            yaml.dump(tp_data, fh, default_flow_style=False)\n\n    return work\n\n\n# ---------------------------------------------------------------------------\n# Upload to DRoA S3\n# ---------------------------------------------------------------------------\n\ndef upload_model_folder(cfg, model_dir, credentials, validate_required=True, s3_prefix=None):\n    \"\"\"Upload all eligible files from model_dir to the DRoA S3 bucket.\n\n    If ``s3_prefix`` is provided the files are uploaded under that exact prefix\n    (important when training_params.yaml already references that prefix).\n    Otherwise a new UUID-based prefix is generated.\n    \"\"\"\n    if validate_required:\n        root = Path(model_dir)\n        missing = {f for f in REQUIRED_FILES_DIR if not (root / f).is_file()}\n        if missing:\n            raise ValueError(\n                f\"Missing required model files at root of model dir: {', '.join(sorted(missing))}\")\n\n    if s3_prefix is None:\n        s3_prefix = f\"uploads/models/{uuid.uuid4()}\"\n    s3 = boto3.client(\n        \"s3\",\n        region_name=cfg.region,\n        aws_access_key_id=credentials[\"access_key\"],\n        aws_secret_access_key=credentials[\"secret_key\"],\n        aws_session_token=credentials[\"session_token\"],\n    )\n\n    for file_path in Path(model_dir).rglob(\"*\"):\n        if not file_path.is_file():\n            continue\n        if file_path.name in EXCLUDED_FILES:\n            continue\n        if file_path.suffix.lower() in {\".gz\", \".zip\"}:\n            continue\n        relative = file_path.relative_to(model_dir)\n        s3_key = f\"{s3_prefix}/{relative}\"\n        print(f\"    Uploading: {relative}\")\n        s3.upload_file(\n      
      Filename=str(file_path),\n            Bucket=cfg.upload_bucket,\n            Key=s3_key,\n            ExtraArgs={\"ContentType\": _content_type(file_path)},\n        )\n\n    print(\n        f\"[3/4] Uploaded model files to s3://{cfg.upload_bucket}/{s3_prefix}\")\n    return s3_prefix\n\n\n# ---------------------------------------------------------------------------\n# DRoA API call\n# ---------------------------------------------------------------------------\n\ndef call_import_model_api(cfg, s3_path, model_name, model_description, credentials):\n    \"\"\"POST /importmodel and return the created modelId.\"\"\"\n    url = f\"{cfg.api_endpoint}/importmodel\"\n    payload = {\n        \"s3Bucket\": cfg.upload_bucket,\n        \"s3Path\": s3_path,\n        \"modelName\": model_name,\n    }\n    if model_description:\n        payload[\"modelDescription\"] = model_description\n\n    response = requests.post(\n        url,\n        json=payload,\n        auth=build_auth(url, credentials, cfg.region, cfg.site_url),\n        headers={\"Content-Type\": \"application/json\"},\n        timeout=30,\n    )\n    if not response.ok:\n        raise RuntimeError(\n            f\"API call failed: {response.status_code} {response.reason}\\n{response.text}\"\n        )\n    model_id = response.json().get(\"modelId\")\n    if not model_id:\n        raise RuntimeError(f\"Unexpected API response: {response.text}\")\n    print(f\"[4/4] Import job created. 
modelId: {model_id}\")\n    return model_id\n\n\n# ---------------------------------------------------------------------------\n# CLI\n# ---------------------------------------------------------------------------\n\ndef parse_args():\n    parser = argparse.ArgumentParser(\n        description=\"Import a locally trained DRFC model into DeepRacer on AWS.\",\n        epilog=(\n            \"examples:\\n\"\n            \"  %(prog)s --model-dir /tmp/my-model --model-name my-model\\n\"\n            \"  %(prog)s --model-prefix rl-deepracer-sagemaker\\n\"\n            \"  %(prog)s --model-prefix rl-deepracer-sagemaker --model-name my-model --best\"\n        ),\n        formatter_class=argparse.RawDescriptionHelpFormatter,\n    )\n    add_common_args(parser)\n\n    src = parser.add_mutually_exclusive_group(required=True)\n    src.add_argument(\n        \"--model-dir\", type=Path,\n        help=\"Pre-assembled local directory containing model files\",\n    )\n    src.add_argument(\n        \"--model-prefix\",\n        help=\"DRFC local S3 model prefix to pull from (default: DR_LOCAL_S3_MODEL_PREFIX)\",\n    )\n\n    parser.add_argument(\n        \"--model-name\", default=None,\n        help=\"Name for the imported model (default: --model-prefix or directory name)\",\n    )\n    parser.add_argument(\"--model-description\", default=None,\n                        help=\"Optional model description\")\n\n    ckpt = parser.add_mutually_exclusive_group()\n    ckpt.add_argument(\n        \"--best\", action=\"store_true\",\n        help=\"(--model-prefix) Use best checkpoint instead of last\",\n    )\n    ckpt.add_argument(\n        \"--checkpoint\", type=int, metavar=\"STEP\",\n        help=\"(--model-prefix) Use specific checkpoint step number\",\n    )\n\n    return parser.parse_args()\n\n\ndef main():\n    args = parse_args()\n\n    username = args.username or os.environ.get(\"DR_DROA_USERNAME\")\n    if not username:\n        print(\"Error: --username or DR_DROA_USERNAME 
required.\", file=sys.stderr)\n        sys.exit(1)\n\n    if args.model_dir and not args.model_dir.is_dir():\n        print(\n            f\"Error: --model-dir '{args.model_dir}' is not a directory.\", file=sys.stderr)\n        sys.exit(1)\n\n    # Derive model name from source if not given explicitly\n    if not args.model_name:\n        if args.model_prefix:\n            args.model_name = args.model_prefix\n        elif args.model_dir:\n            args.model_name = args.model_dir.name\n        else:\n            print(\n                \"Error: --model-name is required when source cannot be inferred.\", file=sys.stderr)\n            sys.exit(1)\n\n    cfg = load_droa_config(args)\n\n    credentials = load_cached_credentials(cfg.identity_pool_id, username)\n    if credentials:\n        print(\"[1/4] Using cached credentials.\")\n    else:\n        password = args.password or getpass.getpass(\n            f\"Password for {username}: \")\n        print(\"[1/4] Authenticating with Cognito User Pool...\")\n        id_token = authenticate(cfg.region, cfg.client_id, username, password)\n        print(\"[2/4] Obtaining temporary AWS credentials...\")\n        credentials = get_aws_credentials(\n            cfg.region, cfg.user_pool_id, cfg.identity_pool_id, id_token)\n        save_credentials_to_cache(cfg.identity_pool_id, username, credentials)\n        print(\"[2/4] Credentials obtained.\")\n\n    temp_dir = None\n    upload_prefix = None\n    try:\n        if args.model_dir:\n            source_dir = args.model_dir\n            validate = True\n        else:\n            model_prefix = args.model_prefix or os.environ.get(\n                \"DR_LOCAL_S3_MODEL_PREFIX\")\n            if not model_prefix:\n                print(\n                    \"Error: --model-prefix or DR_LOCAL_S3_MODEL_PREFIX required.\", file=sys.stderr)\n                sys.exit(1)\n            # Generate the upload prefix now so training_params.yaml can reference it\n            upload_prefix 
= f\"uploads/models/{uuid.uuid4()}\"\n            checkpoint_mode = \"best\" if args.best else (\n                \"number\" if args.checkpoint else \"last\")\n            print(\"[3/4] Pulling model from local S3...\")\n            temp_dir = _build_from_s3_prefix(\n                model_prefix, checkpoint_mode, args.checkpoint,\n                cfg.upload_bucket, upload_prefix,\n            )\n            source_dir = temp_dir\n            validate = False\n\n        print(\"[3/4] Uploading to DRoA S3...\")\n        s3_path = upload_model_folder(\n            cfg, source_dir, credentials, validate_required=validate,\n            s3_prefix=upload_prefix if not args.model_dir else None)\n        model_id = call_import_model_api(\n            cfg, s3_path, args.model_name, args.model_description, credentials\n        )\n    finally:\n        if temp_dir:\n            import shutil\n            shutil.rmtree(temp_dir, ignore_errors=True)\n\n    print(\n        f\"\\nDone. Model '{args.model_name}' is being imported (id: {model_id})\")\n    print(\"Check the DeepRacer on AWS console or use: droa-get-model \" + model_id)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "scripts/droa/list_models.py",
    "content": "#!/usr/bin/env python3\n# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"\nList all models in DeepRacer on AWS (DRoA).\n\nFetches the full paginated model list from GET /models and prints it as a\ntable.  Use --json for machine-readable output.\n\nUsage examples\n--------------\n  # Print table of all models:\n  python list_models.py\n\n  # Raw JSON (suitable for piping to jq):\n  python list_models.py --json | jq '[.[] | {id: .modelId, name: .name}]'\n\n  # Override site URL and username on the command line:\n  python list_models.py --url https://my.droa.example.com --username alice\n\nTable columns\n-------------\n  Model ID        15-char alphanumeric model identifier\n  Name            Model name (up to 64 chars)\n  Status          Current model lifecycle status\n  Training        Training job status\n  Created At      Creation timestamp (UTC, second precision)\n\nAuthentication\n--------------\nCredentials are obtained via the Cognito Identity Pool embedded in the DRoA\nsite's /env.js.  
A password prompt is shown on the first call; subsequent\ncalls within the credential lifetime (~1 h) reuse a cache stored in\n~/.droa-cache/.\n\nThe site URL is read from DR_DROA_URL and the username from DR_DROA_USERNAME\n(both set in system.env), or supplied via --url / --username.\n\nModel status values\n-------------------\n  DELETING  ERROR  EVALUATING  IMPORTING  QUEUED  READY\n  STOPPING  SUBMITTING  TRAINING\n\nTraining status values\n----------------------\n  CANCELED  COMPLETED  FAILED  IN_PROGRESS  INITIALIZING  QUEUED  STOPPING\n\"\"\"\n\nimport argparse\nimport getpass\nimport json\nimport os\nimport sys\n\nimport requests\n\nfrom auth import (\n    add_common_args, authenticate, build_auth, get_aws_credentials, load_droa_config,\n    load_cached_credentials, save_credentials_to_cache,\n)\n\n\ndef list_models(cfg, credentials: dict) -> list:\n    \"\"\"Fetch all models, auto-paginating via token.\"\"\"\n    url = f\"{cfg.api_endpoint}/models\"\n    models = []\n    token = None\n    while True:\n        params = {\"token\": token} if token else {}\n        response = requests.get(\n            url, params=params, auth=build_auth(url, credentials, cfg.region, cfg.site_url), timeout=30\n        )\n        if not response.ok:\n            raise RuntimeError(\n                f\"API error: {response.status_code} {response.reason}\\n\"\n                f\"Headers: {dict(response.headers)}\\n\"\n                f\"Body: {response.text}\"\n            )\n        data = response.json()\n        models.extend(data.get(\"models\", []))\n        token = data.get(\"token\")\n        if not token:\n            break\n    return models\n\n\ndef parse_args() -> argparse.Namespace:\n    parser = argparse.ArgumentParser(\n        description=\"List models in DeepRacer on AWS.\",\n        epilog=(\n            \"examples:\\n\"\n            \"  %(prog)s\\n\"\n            \"  %(prog)s --json | jq '[.[] | {id: .modelId, name: .name}]'\"\n        ),\n        
formatter_class=argparse.RawDescriptionHelpFormatter,\n    )\n    add_common_args(parser)\n    parser.add_argument(\n        \"--json\", dest=\"output_json\", action=\"store_true\",\n        help=\"Output raw JSON instead of a table\",\n    )\n    return parser.parse_args()\n\n\ndef main() -> None:\n    args = parse_args()\n\n    username = args.username or os.environ.get(\"DR_DROA_USERNAME\")\n    if not username:\n        print(\"Error: --username or DR_DROA_USERNAME required.\", file=sys.stderr)\n        sys.exit(1)\n\n    cfg = load_droa_config(args)\n\n    credentials = load_cached_credentials(cfg.identity_pool_id, username)\n    if credentials:\n        print(\"Using cached credentials.\", file=sys.stderr)\n    else:\n        password = args.password or getpass.getpass(\n            f\"Password for {username}: \")\n        id_token = authenticate(cfg.region, cfg.client_id, username, password)\n        credentials = get_aws_credentials(\n            cfg.region, cfg.user_pool_id, cfg.identity_pool_id, id_token)\n        save_credentials_to_cache(cfg.identity_pool_id, username, credentials)\n    models = sorted(\n        list_models(cfg, credentials),\n        key=lambda m: m.get(\"createdAt\") or \"\",\n        reverse=True,\n    )\n\n    if args.output_json:\n        print(json.dumps(models, indent=2, default=str))\n        return\n\n    if not models:\n        print(\"No models found.\")\n        return\n\n    id_w, name_w, status_w, tstatus_w = 15, 40, 16, 16\n    header = (\n        f\"{'Model ID':<{id_w}}  {'Name':<{name_w}}  \"\n        f\"{'Status':<{status_w}}  {'Training':<{tstatus_w}}  Created At\"\n    )\n    print(header)\n    print(\"-\" * (id_w + name_w + status_w + tstatus_w + 30))\n    for m in models:\n        created = (m.get(\"createdAt\") or \"\")[:19].replace(\"T\", \" \")\n        print(\n            f\"{m.get('modelId', ''):<{id_w}}  \"\n            f\"{m.get('name', ''):<{name_w}}  \"\n            f\"{m.get('status', ''):<{status_w}}  
\"\n            f\"{m.get('trainingStatus', ''):<{tstatus_w}}  \"\n            f\"{created}\"\n        )\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "scripts/evaluation/prepare-config.py",
    "content": "#!/usr/bin/python3\n\nimport boto3\nfrom datetime import datetime\nimport sys\nimport os \nimport time\nimport json\nimport io\nimport yaml\n\ndef str2bool(v):\n  return v.lower() in (\"yes\", \"true\", \"t\", \"1\")\n\neval_time = datetime.now().strftime('%Y%m%d%H%M%S')\n\nconfig = {}\nconfig['CAR_COLOR'] = []\nconfig['BODY_SHELL_TYPE'] = []\nconfig['RACER_NAME'] = []\nconfig['DISPLAY_NAME'] = []\nconfig['MODEL_S3_PREFIX'] = []\nconfig['MODEL_S3_BUCKET'] = []\nconfig['SIMTRACE_S3_PREFIX'] = []\nconfig['SIMTRACE_S3_BUCKET'] = []\nconfig['KINESIS_VIDEO_STREAM_NAME'] = []\nconfig['METRICS_S3_BUCKET'] = []\nconfig['METRICS_S3_OBJECT_KEY'] = []\nconfig['MP4_S3_BUCKET'] = []\nconfig['MP4_S3_OBJECT_PREFIX'] = []\n\n# Basic configuration; including all buckets etc.\nconfig['AWS_REGION'] = os.environ.get('DR_AWS_APP_REGION', 'us-east-1')\nconfig['JOB_TYPE'] = 'EVALUATION'\nconfig['KINESIS_VIDEO_STREAM_NAME'] = os.environ.get('DR_KINESIS_STREAM_NAME', '')\nconfig['ROBOMAKER_SIMULATION_JOB_ACCOUNT_ID'] = os.environ.get('', 'Dummy')\n\ns3_container_endpoint_url = os.environ.get('DR_MINIO_URL', None)\nif s3_container_endpoint_url is not None:\n    config['S3_ENDPOINT_URL'] = s3_container_endpoint_url\n\nconfig['MODEL_S3_PREFIX'].append(os.environ.get('DR_LOCAL_S3_MODEL_PREFIX', 'rl-deepracer-sagemaker'))\nconfig['MODEL_S3_BUCKET'].append(os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket'))\nconfig['SIMTRACE_S3_BUCKET'].append(os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket'))\nconfig['SIMTRACE_S3_PREFIX'].append(\n    '{}/evaluation-{}'.format(os.environ.get('DR_LOCAL_S3_MODEL_PREFIX', 'rl-deepracer-sagemaker'), eval_time)\n)\n\n# Metrics\nconfig['METRICS_S3_BUCKET'].append(os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket'))\nmetrics_prefix = os.environ.get('DR_LOCAL_S3_METRICS_PREFIX', None)\nif metrics_prefix is not None:\n    config['METRICS_S3_OBJECT_KEY'].append('{}/evaluation/evaluation-{}.json'.format(metrics_prefix, eval_time))\nelse:\n    
config['METRICS_S3_OBJECT_KEY'].append('DeepRacer-Metrics/EvaluationMetrics-{}.json'.format(eval_time))\n\n# MP4 configuration / save\nsave_mp4 = str2bool(os.environ.get(\"DR_EVAL_SAVE_MP4\", \"False\"))\nif save_mp4:\n    config['MP4_S3_BUCKET'].append(os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket'))\n    config['MP4_S3_OBJECT_PREFIX'].append('{}/{}'.format(os.environ.get('DR_LOCAL_S3_MODEL_PREFIX', 'bucket'),'mp4'))\n\n# Checkpoint\nconfig['EVAL_CHECKPOINT'] = os.environ.get('DR_EVAL_CHECKPOINT', 'last')\n\n# Car and training\nbody_shell_type = os.environ.get('DR_CAR_BODY_SHELL_TYPE', 'deepracer')\nconfig['BODY_SHELL_TYPE'].append(body_shell_type)\nconfig['CAR_COLOR'].append(os.environ.get('DR_CAR_COLOR', 'Red'))\nconfig['DISPLAY_NAME'].append(os.environ.get('DR_DISPLAY_NAME', 'racer1'))\nconfig['RACER_NAME'].append(os.environ.get('DR_RACER_NAME', 'racer1'))\n\nconfig['RACE_TYPE'] = os.environ.get('DR_RACE_TYPE', 'TIME_TRIAL')\nconfig['WORLD_NAME'] = os.environ.get('DR_WORLD_NAME', 'LGSWide')\nconfig['NUMBER_OF_TRIALS'] = os.environ.get('DR_EVAL_NUMBER_OF_TRIALS', '5')\nconfig['ENABLE_DOMAIN_RANDOMIZATION'] = os.environ.get('DR_ENABLE_DOMAIN_RANDOMIZATION', 'false')\nconfig['RESET_BEHIND_DIST'] = os.environ.get('DR_EVAL_RESET_BEHIND_DIST', '1.0')\n\nconfig['IS_CONTINUOUS'] = os.environ.get('DR_EVAL_IS_CONTINUOUS', 'True')\nconfig['NUMBER_OF_RESETS'] = os.environ.get('DR_EVAL_MAX_RESETS', '0')\n\nconfig['OFF_TRACK_PENALTY'] = os.environ.get('DR_EVAL_OFF_TRACK_PENALTY', '5.0')\nconfig['COLLISION_PENALTY'] = os.environ.get('DR_COLLISION_PENALTY', '5.0')\n\nconfig['CAMERA_MAIN_ENABLE'] = os.environ.get('DR_CAMERA_MAIN_ENABLE', 'True')\nconfig['CAMERA_SUB_ENABLE'] = os.environ.get('DR_CAMERA_SUB_ENABLE', 'True')\nconfig['REVERSE_DIR'] = os.environ.get('DR_EVAL_REVERSE_DIRECTION', 'False')\nconfig['ENABLE_EXTRA_KVS_OVERLAY'] = os.environ.get('DR_ENABLE_EXTRA_KVS_OVERLAY', 'False')\n\n# Object Avoidance\nif config['RACE_TYPE'] == 'OBJECT_AVOIDANCE':\n
    config['NUMBER_OF_OBSTACLES'] = os.environ.get('DR_OA_NUMBER_OF_OBSTACLES', '6')\n    config['MIN_DISTANCE_BETWEEN_OBSTACLES'] = os.environ.get('DR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES', '2.0')\n    config['RANDOMIZE_OBSTACLE_LOCATIONS'] = os.environ.get('DR_OA_RANDOMIZE_OBSTACLE_LOCATIONS', 'True')\n    config['IS_OBSTACLE_BOT_CAR'] = os.environ.get('DR_OA_IS_OBSTACLE_BOT_CAR', 'false')\n    config['OBSTACLE_TYPE'] = os.environ.get('DR_OA_OBSTACLE_TYPE', 'box_obstacle')\n\n    object_position_str = os.environ.get('DR_OA_OBJECT_POSITIONS', \"\")\n    if object_position_str != \"\":\n        object_positions = object_position_str.split(\";\")\n        config['OBJECT_POSITIONS'] = object_positions\n        config['NUMBER_OF_OBSTACLES'] = str(len(object_positions))\n\n# Head to Bot\nif config['RACE_TYPE'] == 'HEAD_TO_BOT':\n    config['IS_LANE_CHANGE'] = os.environ.get('DR_H2B_IS_LANE_CHANGE', 'False')\n    config['LOWER_LANE_CHANGE_TIME'] = os.environ.get('DR_H2B_LOWER_LANE_CHANGE_TIME', '3.0')\n    config['UPPER_LANE_CHANGE_TIME'] = os.environ.get('DR_H2B_UPPER_LANE_CHANGE_TIME', '5.0')\n    config['LANE_CHANGE_DISTANCE'] = os.environ.get('DR_H2B_LANE_CHANGE_DISTANCE', '1.0')\n    config['NUMBER_OF_BOT_CARS'] = os.environ.get('DR_H2B_NUMBER_OF_BOT_CARS', '0')\n    config['MIN_DISTANCE_BETWEEN_BOT_CARS'] = os.environ.get('DR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS', '2.0')\n    config['RANDOMIZE_BOT_CAR_LOCATIONS'] = os.environ.get('DR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS', 'False')\n    config['BOT_CAR_SPEED'] = os.environ.get('DR_H2B_BOT_CAR_SPEED', '0.2')\n    config['PENALTY_SECONDS'] = os.environ.get('DR_H2B_BOT_CAR_PENALTY', '2.0')\n\n# Head to Model\nif config['RACE_TYPE'] == 'HEAD_TO_MODEL':\n    config['MODEL_S3_PREFIX'].append(os.environ.get('DR_EVAL_OPP_S3_MODEL_PREFIX', 'rl-deepracer-sagemaker'))\n    config['MODEL_S3_BUCKET'].append(os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket'))\n
    config['SIMTRACE_S3_BUCKET'].append(os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket'))\n    config['SIMTRACE_S3_PREFIX'].append(os.environ.get('DR_EVAL_OPP_S3_MODEL_PREFIX', 'rl-deepracer-sagemaker'))\n\n    # Metrics\n    config['METRICS_S3_BUCKET'].append(os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket'))\n    metrics_prefix = os.environ.get('DR_EVAL_OPP_S3_METRICS_PREFIX', '{}/{}'.format(os.environ.get('DR_EVAL_OPP_S3_MODEL_PREFIX', 'rl-deepracer-sagemaker'),'metrics'))\n    config['METRICS_S3_OBJECT_KEY'].append('{}/EvaluationMetrics-{}.json'.format(metrics_prefix, str(round(time.time()))))\n\n    # MP4 configuration / save\n    save_mp4 = str2bool(os.environ.get(\"DR_EVAL_SAVE_MP4\", \"False\"))\n    if save_mp4:\n        config['MP4_S3_BUCKET'].append(os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket'))\n        config['MP4_S3_OBJECT_PREFIX'].append('{}/{}'.format(os.environ.get('DR_EVAL_OPP_MODEL_PREFIX', 'bucket'),'mp4'))\n\n    # Car and training\n    config['DISPLAY_NAME'].append(os.environ.get('DR_EVAL_OPP_DISPLAY_NAME', 'racer1'))\n    config['RACER_NAME'].append(os.environ.get('DR_EVAL_OPP_RACER_NAME', 'racer1'))\n\n    body_shell_type = os.environ.get('DR_EVAL_OPP_CAR_BODY_SHELL_TYPE', 'deepracer')\n    config['BODY_SHELL_TYPE'].append(body_shell_type)\n    config['VIDEO_JOB_TYPE'] = 'EVALUATION'\n    config['CAR_COLOR'] = ['Purple', 'Orange']\n    config['MODEL_NAME'] = config['DISPLAY_NAME']\n\n# S3 Setup / write and upload file\ns3_local_endpoint_url = os.environ.get('DR_LOCAL_S3_ENDPOINT_URL', None)\ns3_region = config['AWS_REGION']\ns3_bucket = config['MODEL_S3_BUCKET'][0]\ns3_prefix = config['MODEL_S3_PREFIX'][0]\ns3_mode = os.environ.get('DR_LOCAL_S3_AUTH_MODE','profile')\nif s3_mode == 'profile':\n    s3_profile = os.environ.get('DR_LOCAL_S3_PROFILE', 'default')\nelse: # mode is 'role'\n
    s3_profile = None\ns3_yaml_name = os.environ.get('DR_LOCAL_S3_EVAL_PARAMS_FILE', 'eval_params.yaml')\nyaml_key = os.path.normpath(os.path.join(s3_prefix, s3_yaml_name))\n\nsession = boto3.session.Session(profile_name=s3_profile)\ns3_client = session.client('s3', region_name=s3_region, endpoint_url=s3_local_endpoint_url)\n\nlocal_yaml_path = os.path.abspath(os.path.join(os.environ.get('DR_DIR'),'tmp', 'eval-params-' + str(round(time.time())) + '.yaml'))\n\nwith open(local_yaml_path, 'w') as yaml_file:\n    yaml.dump(config, yaml_file, default_flow_style=False, default_style='\\'', explicit_start=True)\n\ns3_client.upload_file(Bucket=s3_bucket, Key=yaml_key, Filename=local_yaml_path)\n"
  },
  {
    "path": "scripts/evaluation/start.sh",
    "content": "#!/usr/bin/env bash\n\nsource $DR_DIR/bin/scripts_wrapper.sh\n\nusage() {\n  echo \"Usage: $0 [-q] [-c]\"\n  echo \"       -q        Quiet - does not start log tracing.\"\n  echo \"       -c        Clone - copies model into new prefix before evaluating.\"\n  exit 1\n}\n\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n  echo \"Requested to stop.\"\n  exit 1\n}\n\nwhile getopts \":qch\" opt; do\n  case $opt in\n  q)\n    OPT_QUIET=\"QUIET\"\n    ;;\n  c)\n    OPT_CLONE=\"CLONE\"\n    ;;\n  h)\n    usage\n    ;;\n  \\?)\n    echo \"Invalid option -$OPTARG\" >&2\n    usage\n    ;;\n  esac\ndone\n\n## Check if WSL2\nif [[ -f /proc/version ]] && grep -qi Microsoft /proc/version && grep -q \"WSL2\" /proc/version; then\n    IS_WSL2=\"yes\"\nfi\n\n# set evaluation specific environment variables\nSTACK_NAME=\"deepracer-eval-$DR_RUN_ID\"\nSTACK_CONTAINERS=$(docker stack ps $STACK_NAME 2>/dev/null | wc -l)\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n  if [[ \"$STACK_CONTAINERS\" -gt 1 ]]; then\n    echo \"ERROR: Processes running in stack $STACK_NAME. Stop evaluation with dr-stop-evaluation.\"\n    exit 1\n  fi\nfi\n\n# Ensure Sagemaker's folder is there\n_dr_ensure_sagemaker_dir\n\necho \"Evaluation of model s3://$DR_LOCAL_S3_BUCKET/$DR_LOCAL_S3_MODEL_PREFIX starting.\"\necho \"Using image ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}\"\necho \"\"\n\n# clone if required\nif [ -n \"$OPT_CLONE\" ]; then\n  echo \"Cloning model into s3://$DR_LOCAL_S3_BUCKET/${DR_LOCAL_S3_MODEL_PREFIX}-E\"\n  aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync s3://$DR_LOCAL_S3_BUCKET/$DR_LOCAL_S3_MODEL_PREFIX/model s3://$DR_LOCAL_S3_BUCKET/${DR_LOCAL_S3_MODEL_PREFIX}-E/model\n  aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync s3://$DR_LOCAL_S3_BUCKET/$DR_LOCAL_S3_MODEL_PREFIX/ip s3://$DR_LOCAL_S3_BUCKET/${DR_LOCAL_S3_MODEL_PREFIX}-E/ip\n  export DR_LOCAL_S3_MODEL_PREFIX=${DR_LOCAL_S3_MODEL_PREFIX}-E\nfi\n\n# set evaluation specific environment variables\nS3_PATH=\"s3://$DR_LOCAL_S3_BUCKET/$DR_LOCAL_S3_MODEL_PREFIX\"\n\nexport ROBOMAKER_COMMAND=\"/opt/ml/code/run.sh run evaluation.launch.py\"\nexport DR_CURRENT_PARAMS_FILE=${DR_LOCAL_S3_EVAL_PARAMS_FILE}\n\nif [[ \"${DR_ROBOMAKER_MOUNT_LOGS,,}\" == \"true\" ]]; then\n  COMPOSE_FILES=\"$DR_EVAL_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DR_DIR/docker/docker-compose-mount.yml\"\n  export DR_MOUNT_DIR=\"$DR_DIR/data/logs/robomaker/$DR_LOCAL_S3_MODEL_PREFIX\"\n  mkdir -p $DR_MOUNT_DIR\nelse\n  COMPOSE_FILES=\"$DR_EVAL_COMPOSE_FILE\"\nfi\n\necho \"Creating Robomaker configuration in $S3_PATH/$DR_CURRENT_PARAMS_FILE\"\npython3 $DR_DIR/scripts/evaluation/prepare-config.py\n\n# Check if we are using Host X -- ensure variables are populated\nif [[ \"${DR_HOST_X,,}\" == \"true\" ]]; then\n  if [[ -n \"$DR_DISPLAY\" ]]; then\n    ROBO_DISPLAY=$DR_DISPLAY\n  else\n    ROBO_DISPLAY=$DISPLAY\n  fi\n\n  if ! DISPLAY=$ROBO_DISPLAY timeout 1s xset q &>/dev/null; then\n    echo \"No X Server running on display $ROBO_DISPLAY. Exiting\"\n    exit 1\n  fi\n\n  if [[ -z \"$XAUTHORITY\" && \"$IS_WSL2\" != \"yes\" ]]; then\n    export XAUTHORITY=~/.Xauthority\n    if [[ ! -f \"$XAUTHORITY\" ]]; then\n      echo \"No XAUTHORITY defined. .Xauthority does not exist. Stopping.\"\n      exit 1\n    fi\n  fi\nfi\n\n# Check if we will use Docker Swarm or Docker Compose\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n\n  if [ \"$DR_DOCKER_MAJOR_VERSION\" -gt 24 ]; then\n    DETACH_FLAG=\"--detach=true\"\n  fi\n\n  DISPLAY=$ROBO_DISPLAY docker stack deploy $COMPOSE_FILES $DETACH_FLAG $STACK_NAME\nelse\n  DISPLAY=$ROBO_DISPLAY docker compose $COMPOSE_FILES -p $STACK_NAME up -d\nfi\n\n# Request to be quiet. Quitting here.\nif [ -n \"$OPT_QUIET\" ]; then\n  exit 0\nfi\n\n# Trigger requested log-file\ndr-logs-robomaker -w 15 -e\n"
  },
  {
    "path": "scripts/evaluation/stop.sh",
    "content": "#!/usr/bin/env bash\n\nSTACK_NAME=\"deepracer-eval-$DR_RUN_ID\"\n\n# Check if we will use Docker Swarm or Docker Compose\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n    docker stack rm $STACK_NAME\nelse\n    COMPOSE_FILES=$(echo ${DR_EVAL_COMPOSE_FILE} | cut -f1-2 -d\\ )\n    export DR_CURRENT_PARAMS_FILE=\"\"\n    docker compose $COMPOSE_FILES -p $STACK_NAME down\nfi\n"
  },
  {
    "path": "scripts/log-analysis/start.sh",
    "content": "#!/usr/bin/env bash\n\nif docker ps --filter \"name=deepracer-analysis\" --format \"{{.Names}}\" | grep -q \"^deepracer-analysis$\"; then\n  echo \"Log-analysis is already running. Use dr-url-loganalysis to get the URL.\"\n  exit 0\nfi\n\necho \"Starting log-analysis container (image: awsdeepracercommunity/deepracer-analysis:${DR_ANALYSIS_IMAGE})...\"\ndocker run --rm -d -p \"8888:8888\" \\\n-v $DR_DIR/data/logs:/workspace/logs \\\n-v $DR_DIR/docker/volumes/.aws:/home/ubuntu/.aws \\\n-v $DR_DIR/data/analysis:/workspace/analysis \\\n-v $DR_DIR/data/minio:/workspace/minio \\\n--name deepracer-analysis \\\n--network sagemaker-local \\\n awsdeepracercommunity/deepracer-analysis:$DR_ANALYSIS_IMAGE > /dev/null\n\necho \"Waiting for Jupyter to start...\"\nfor i in $(seq 1 30); do\n  URL=$(docker logs deepracer-analysis 2>&1 | grep -oE 'http://127\\.0\\.0\\.1:[0-9]+[^ ]*token=[a-f0-9]+' | tail -1)\n  if [ -n \"$URL\" ]; then\n    echo \"Log-analysis is running. Open in browser:\"\n    echo \"  ${URL/127.0.0.1/localhost}\"\n    exit 0\n  fi\n  sleep 1\ndone\necho \"Log-analysis started. Use dr-url-loganalysis to get the URL once ready.\""
  },
  {
    "path": "scripts/log-analysis/stop.sh",
    "content": "#!/usr/bin/env bash\n\necho \"Stopping log-analysis container...\"\nif docker stop deepracer-analysis > /dev/null 2>&1; then\n  echo \"Log-analysis stopped.\"\nelse\n  echo \"Log-analysis is not running.\"\nfi\n"
  },
  {
    "path": "scripts/metrics/start.sh",
    "content": "#!/usr/bin/env bash\n\nCOMPOSE_FILES=./docker/docker-compose-metrics.yml\n\ndocker compose -f $COMPOSE_FILES -p deepracer-metrics up -d"
  },
  {
    "path": "scripts/metrics/stop.sh",
    "content": "#!/usr/bin/env bash\n\nCOMPOSE_FILES=./docker/docker-compose-metrics.yml\n\ndocker compose -f $COMPOSE_FILES -p deepracer-metrics down"
  },
  {
    "path": "scripts/training/increment.sh",
"content": "#!/usr/bin/env bash\n\nusage() {\n    echo \"Usage: $0 [-f] [-w] [-p <model-prefix>] [-d <delimiter>]\"\n    echo \"\"\n    echo \"Command will set the current model to be the pre-trained model and increment a numerical suffix.\"\n    echo \"-p model  Sets the to-be name to be <model-prefix> rather than auto-incrementing the previous model.\"\n    echo \"-d delim  Delimiter in model-name (e.g. '-' in 'test-model-1')\"\n    echo \"-f        Force. Ask for no confirmations.\"\n    echo \"-w        Wipe the S3 prefix to ensure that two models are not mixed.\"\n    exit 1\n}\n\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n    echo \"Requested to stop.\"\n    exit 1\n}\n\nOPT_DELIM='-'\n\nwhile getopts \":fwp:d:\" opt; do\n    case $opt in\n\n    f)\n        OPT_FORCE=\"True\"\n        ;;\n    p)\n        OPT_PREFIX=\"$OPTARG\"\n        ;;\n    w)\n        OPT_WIPE=\"--delete\"\n        ;;\n    d)\n        OPT_DELIM=\"$OPTARG\"\n        ;;\n    h)\n        usage\n        ;;\n    \\?)\n        echo \"Invalid option -$OPTARG\" >&2\n        usage\n        ;;\n    esac\ndone\n\nCONFIG_FILE=$DR_CONFIG\necho \"Configuration file $CONFIG_FILE will be updated.\"\n\n## Read in data\nCURRENT_RUN_MODEL=$(grep -e \"^DR_LOCAL_S3_MODEL_PREFIX\" ${CONFIG_FILE} | awk '{split($0,a,\"=\"); print a[2] }')\nCURRENT_RUN_MODEL_NUM=$(echo \"${CURRENT_RUN_MODEL}\" |\n    awk -v DELIM=\"${OPT_DELIM}\" '{ n=split($0,a,DELIM); if (a[n] ~ /^[0-9]+$/) print a[n]; else print \"\"; }')\nif [[ -n ${OPT_PREFIX} ]]; then\n    NEW_RUN_MODEL=\"${OPT_PREFIX}\"\nelse\n    if [[ -z ${CURRENT_RUN_MODEL_NUM} ]]; then\n        NEW_RUN_MODEL=\"${CURRENT_RUN_MODEL}${OPT_DELIM}1\"\n    else\n        NEW_RUN_MODEL_NUM=$(echo \"${CURRENT_RUN_MODEL_NUM} + 1\" | bc)\n        NEW_RUN_MODEL=$(echo $CURRENT_RUN_MODEL | sed \"s/${CURRENT_RUN_MODEL_NUM}\\$/${NEW_RUN_MODEL_NUM}/\")\n    fi\nfi\n\nif [[ -n \"${NEW_RUN_MODEL}\" ]]; then\n    echo \"Incrementing model from ${CURRENT_RUN_MODEL} to ${NEW_RUN_MODEL}\"\n 
   if [[ -z \"${OPT_FORCE}\" ]]; then\n        read -r -p \"Are you sure? [y/N] \" response\n        if [[ ! \"$response\" =~ ^([yY][eE][sS]|[yY])$ ]]; then\n            echo \"Aborting.\"\n            exit 1\n        fi\n    fi\n    sed -i.bak -re \"s/(DR_LOCAL_S3_PRETRAINED_PREFIX=).*$/\\1$CURRENT_RUN_MODEL/g; s/(DR_LOCAL_S3_PRETRAINED=).*$/\\1True/g; ; s/(DR_LOCAL_S3_MODEL_PREFIX=).*$/\\1$NEW_RUN_MODEL/g\" \"$CONFIG_FILE\" && echo \"Done.\"\nelse\n    echo \"Error in determining new model. Aborting.\"\n    exit 1\nfi\n\nif [[ -n \"${OPT_WIPE}\" ]]; then\n    MODEL_DIR_S3=$(aws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 ls s3://${DR_LOCAL_S3_BUCKET}/${NEW_RUN_MODEL})\n    if [[ -n \"${MODEL_DIR_S3}\" ]]; then\n        echo \"The new model's S3 prefix s3://${DR_LOCAL_S3_BUCKET}/${NEW_RUN_MODEL} exists. Will wipe.\"\n    fi\n    if [[ -z \"${OPT_FORCE}\" ]]; then\n        read -r -p \"Are you sure? [y/N] \" response\n        if [[ ! \"$response\" =~ ^([yY][eE][sS]|[yY])$ ]]; then\n            echo \"Aborting.\"\n            exit 1\n        fi\n    fi\n    aws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 rm s3://${DR_LOCAL_S3_BUCKET}/${NEW_RUN_MODEL} --recursive\nfi\n"
  },
  {
    "path": "scripts/training/prepare-config.py",
"content": "#!/usr/bin/python3\n\nfrom datetime import datetime\nimport boto3\nimport json\nimport os\nimport yaml\n\ntrain_time = datetime.now().strftime('%Y%m%d%H%M%S')\n\nconfig = {}\nconfig['AWS_REGION'] = os.environ.get('DR_AWS_APP_REGION', 'us-east-1')\nconfig['JOB_TYPE'] = 'TRAINING'\nconfig['KINESIS_VIDEO_STREAM_NAME'] = os.environ.get('DR_KINESIS_STREAM_NAME', '')\nconfig['METRICS_S3_BUCKET'] = os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket')\n\ns3_container_endpoint_url = os.environ.get('DR_MINIO_URL', None)\nif s3_container_endpoint_url is not None:\n    config['S3_ENDPOINT_URL'] = s3_container_endpoint_url\n\nmetrics_prefix = os.environ.get('DR_LOCAL_S3_METRICS_PREFIX', None)\nif metrics_prefix is not None:\n    config['METRICS_S3_OBJECT_KEY'] = '{}/TrainingMetrics.json'.format(metrics_prefix)\nelse:\n    config['METRICS_S3_OBJECT_KEY'] = 'DeepRacer-Metrics/TrainingMetrics-{}.json'.format(train_time)\n\nconfig['MODEL_METADATA_FILE_S3_KEY'] = os.environ.get('DR_LOCAL_S3_MODEL_METADATA_KEY', 'custom_files/model_metadata.json')\nconfig['REWARD_FILE_S3_KEY'] = os.environ.get('DR_LOCAL_S3_REWARD_KEY', 'custom_files/reward_function.py')\nconfig['ROBOMAKER_SIMULATION_JOB_ACCOUNT_ID'] = os.environ.get('', 'Dummy')\nconfig['NUM_WORKERS'] = os.environ.get('DR_WORKERS', 1)\nconfig['SAGEMAKER_SHARED_S3_BUCKET'] = os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket')\nconfig['SAGEMAKER_SHARED_S3_PREFIX'] = os.environ.get('DR_LOCAL_S3_MODEL_PREFIX', 'rl-deepracer-sagemaker')\nconfig['SIMTRACE_S3_BUCKET'] = os.environ.get('DR_LOCAL_S3_BUCKET', 'bucket')\nconfig['SIMTRACE_S3_PREFIX'] = os.environ.get('DR_LOCAL_S3_MODEL_PREFIX', 'rl-deepracer-sagemaker')\nconfig['TRAINING_JOB_ARN'] = 'arn:Dummy'\n\n# Car and training\nconfig['BODY_SHELL_TYPE'] = os.environ.get('DR_CAR_BODY_SHELL_TYPE', 'deepracer')\nconfig['CAR_COLOR'] = os.environ.get('DR_CAR_COLOR', 'Red')\nconfig['CAR_NAME'] = os.environ.get('DR_CAR_NAME', 
'MyCar')\nconfig['RACE_TYPE'] = os.environ.get('DR_RACE_TYPE', 'TIME_TRIAL')\nconfig['WORLD_NAME'] = os.environ.get('DR_WORLD_NAME', 'LGSWide')\nconfig['DISPLAY_NAME'] = os.environ.get('DR_DISPLAY_NAME', 'racer1')\nconfig['RACER_NAME'] = os.environ.get('DR_RACER_NAME', 'racer1')\n\nconfig['REVERSE_DIR'] = os.environ.get('DR_TRAIN_REVERSE_DIRECTION', False)\nconfig['ALTERNATE_DRIVING_DIRECTION'] = os.environ.get('DR_TRAIN_ALTERNATE_DRIVING_DIRECTION', os.environ.get('DR_ALTERNATE_DRIVING_DIRECTION', 'false'))\nconfig['CHANGE_START_POSITION'] = os.environ.get('DR_TRAIN_CHANGE_START_POSITION', os.environ.get('DR_CHANGE_START_POSITION', 'true'))\nconfig['ROUND_ROBIN_ADVANCE_DIST'] = os.environ.get('DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST', '0.05')\nconfig['START_POSITION_OFFSET'] = os.environ.get('DR_TRAIN_START_POSITION_OFFSET', '0.00')\nconfig['ENABLE_DOMAIN_RANDOMIZATION'] = os.environ.get('DR_ENABLE_DOMAIN_RANDOMIZATION', 'false')\nconfig['MIN_EVAL_TRIALS'] = os.environ.get('DR_TRAIN_MIN_EVAL_TRIALS', '5')\nconfig['CAMERA_MAIN_ENABLE'] = os.environ.get('DR_CAMERA_MAIN_ENABLE', 'True')\nconfig['CAMERA_SUB_ENABLE'] = os.environ.get('DR_CAMERA_SUB_ENABLE', 'True')\nconfig['BEST_MODEL_METRIC'] = os.environ.get('DR_TRAIN_BEST_MODEL_METRIC', 'progress')\nconfig['ENABLE_EXTRA_KVS_OVERLAY'] = os.environ.get('DR_ENABLE_EXTRA_KVS_OVERLAY', 'False')\n\n# Object Avoidance\nif config['RACE_TYPE'] == 'OBJECT_AVOIDANCE':\n    config['NUMBER_OF_OBSTACLES'] = os.environ.get('DR_OA_NUMBER_OF_OBSTACLES', '6')\n    config['MIN_DISTANCE_BETWEEN_OBSTACLES'] = os.environ.get('DR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES', '2.0')\n    config['RANDOMIZE_OBSTACLE_LOCATIONS'] = os.environ.get('DR_OA_RANDOMIZE_OBSTACLE_LOCATIONS', 'True')\n    config['IS_OBSTACLE_BOT_CAR'] = os.environ.get('DR_OA_IS_OBSTACLE_BOT_CAR', 'false')\n    config['OBSTACLE_TYPE'] = os.environ.get('DR_OA_OBSTACLE_TYPE', 'box_obstacle')\n\n    object_position_str = os.environ.get('DR_OA_OBJECT_POSITIONS', \"\")\n    if 
object_position_str != \"\":\n        object_positions = []\n        for o in object_position_str.split(\";\"):\n            object_positions.append(o)\n        config['OBJECT_POSITIONS'] = object_positions\n        config['NUMBER_OF_OBSTACLES'] = str(len(object_positions))\n\n# Head to Bot\nif config['RACE_TYPE'] == 'HEAD_TO_BOT':\n    config['IS_LANE_CHANGE'] = os.environ.get('DR_H2B_IS_LANE_CHANGE', 'False')\n    config['LOWER_LANE_CHANGE_TIME'] = os.environ.get('DR_H2B_LOWER_LANE_CHANGE_TIME', '3.0')\n    config['UPPER_LANE_CHANGE_TIME'] = os.environ.get('DR_H2B_UPPER_LANE_CHANGE_TIME', '5.0')\n    config['LANE_CHANGE_DISTANCE'] = os.environ.get('DR_H2B_LANE_CHANGE_DISTANCE', '1.0')\n    config['NUMBER_OF_BOT_CARS'] = os.environ.get('DR_H2B_NUMBER_OF_BOT_CARS', '0')\n    config['MIN_DISTANCE_BETWEEN_BOT_CARS'] = os.environ.get('DR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS', '2.0')\n    config['RANDOMIZE_BOT_CAR_LOCATIONS'] = os.environ.get('DR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS', 'False')\n    config['BOT_CAR_SPEED'] = os.environ.get('DR_H2B_BOT_CAR_SPEED', '0.2')\n    config['PENALTY_SECONDS'] = os.environ.get('DR_H2B_BOT_CAR_PENALTY', '2.0')\n\ns3_local_endpoint_url = os.environ.get('DR_LOCAL_S3_ENDPOINT_URL', None)\ns3_region = config['AWS_REGION']\ns3_bucket = config['SAGEMAKER_SHARED_S3_BUCKET']\ns3_prefix = config['SAGEMAKER_SHARED_S3_PREFIX']\ns3_mode = os.environ.get('DR_LOCAL_S3_AUTH_MODE','profile')\nif s3_mode == 'profile':\n    s3_profile = os.environ.get('DR_LOCAL_S3_PROFILE', 'default')\nelse: # mode is 'role'\n    s3_profile = None\ns3_yaml_name = os.environ.get('DR_LOCAL_S3_TRAINING_PARAMS_FILE', 'training_params.yaml')\n\nsession = boto3.session.Session(profile_name=s3_profile)\ns3_client = session.client('s3', region_name=s3_region, endpoint_url=s3_local_endpoint_url)\n\nyaml_key = os.path.normpath(os.path.join(s3_prefix, s3_yaml_name))\nlocal_yaml_path = 
os.path.abspath(os.path.join(os.environ.get('DR_DIR'),'tmp', 'training-params-' + train_time + '.yaml'))\n\nwith open(local_yaml_path, 'w') as yaml_file:\n    yaml.dump(config, yaml_file, default_flow_style=False, default_style='\\'', explicit_start=True)\n\n# Copy the reward function to the s3 prefix bucket for compatibility with DeepRacer console.\nreward_function_key = os.path.normpath(os.path.join(s3_prefix, \"reward_function.py\"))\ncopy_source = {\n    'Bucket': s3_bucket,\n    'Key': config['REWARD_FILE_S3_KEY']\n}\ns3_client.copy(copy_source, Bucket=s3_bucket, Key=reward_function_key)\n\n# Training with different configurations on each worker (aka Multi Config training)\nconfig['MULTI_CONFIG'] = os.environ.get('DR_TRAIN_MULTI_CONFIG', 'False')\nnum_workers = int(config['NUM_WORKERS'])\n\nif config['MULTI_CONFIG'] == \"True\" and num_workers > 1:\n\n    multi_config = {}\n    multi_config['multi_config'] = [None] * num_workers\n\n    for i in range(1,num_workers+1,1):\n        if i == 1:\n            # copy training_params to training_params_1\n            s3_yaml_name_list = s3_yaml_name.split('.')\n            s3_yaml_name_temp = s3_yaml_name_list[0] + \"_%d.yaml\" % i\n\n            #upload additional training params files\n            yaml_key = os.path.normpath(os.path.join(s3_prefix, s3_yaml_name_temp))\n            s3_client.upload_file(Bucket=s3_bucket, Key=yaml_key, Filename=local_yaml_path)\n\n            # Store in multi_config array\n            multi_config['multi_config'][i - 1] = {'config_file': s3_yaml_name_temp,\n                                                             'world_name': config['WORLD_NAME']}\n\n        else:  # i >= 2\n            #read in additional configuration file.  
format of file must be worker#-run.env\n            if os.environ.get('DR_EXPERIMENT_NAME'):\n                location = os.path.abspath(os.path.join(os.environ.get('DR_DIR'),'experiments', os.environ.get('DR_EXPERIMENT_NAME'),'worker-{}.env'.format(i)))\n            else:\n                location = os.path.abspath(os.path.join(os.environ.get('DR_DIR'),'worker-{}.env'.format(i)))\n            with open(location, 'r') as fh:\n                vars_dict = dict(\n                    tuple(line.split('='))\n                    for line in fh.read().splitlines() if not line.startswith('#')\n                    )\n\n            # Reset parameters for the configuration of this worker number\n            os.environ.update(vars_dict)\n\n            # Update car and training parameters\n            config.update({'WORLD_NAME': os.environ.get('DR_WORLD_NAME')})\n            config.update({'RACE_TYPE': os.environ.get('DR_RACE_TYPE')})\n            config.update({'CAR_COLOR': os.environ.get('DR_CAR_COLOR')})\n            config.update({'BODY_SHELL_TYPE': os.environ.get('DR_CAR_BODY_SHELL_TYPE')})\n            config.update({'ALTERNATE_DRIVING_DIRECTION': os.environ.get('DR_TRAIN_ALTERNATE_DRIVING_DIRECTION')})\n            config.update({'CHANGE_START_POSITION': os.environ.get('DR_TRAIN_CHANGE_START_POSITION')})\n            config.update({'ROUND_ROBIN_ADVANCE_DIST': os.environ.get('DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST')})\n            config.update({'ENABLE_DOMAIN_RANDOMIZATION': os.environ.get('DR_ENABLE_DOMAIN_RANDOMIZATION')})\n            config.update({'START_POSITION_OFFSET': os.environ.get('DR_TRAIN_START_POSITION_OFFSET', '0.00')})\n            config.update({'REVERSE_DIR': os.environ.get('DR_TRAIN_REVERSE_DIRECTION', False)})\n            config.update({'CAMERA_MAIN_ENABLE': os.environ.get('DR_CAMERA_MAIN_ENABLE', 'True')})\n            config.update({'CAMERA_SUB_ENABLE': os.environ.get('DR_CAMERA_SUB_ENABLE', 'True')})  \n            
config.update({'ENABLE_EXTRA_KVS_OVERLAY': os.environ.get('DR_ENABLE_EXTRA_KVS_OVERLAY', 'False')})\n\n            \n            # Update Object Avoidance parameters\n            if config['RACE_TYPE'] == 'OBJECT_AVOIDANCE':\n                config.update({'NUMBER_OF_OBSTACLES': os.environ.get('DR_OA_NUMBER_OF_OBSTACLES')})\n                config.update({'MIN_DISTANCE_BETWEEN_OBSTACLES': os.environ.get('DR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES')})\n                config.update({'RANDOMIZE_OBSTACLE_LOCATIONS': os.environ.get('DR_OA_RANDOMIZE_OBSTACLE_LOCATIONS')})\n                config.update({'IS_OBSTACLE_BOT_CAR': os.environ.get('DR_OA_IS_OBSTACLE_BOT_CAR')})\n                config.update({'OBSTACLE_TYPE': os.environ.get('DR_OA_OBSTACLE_TYPE', 'box_obstacle')})\n\n                object_position_str = os.environ.get('DR_OA_OBJECT_POSITIONS', \"\")\n                if object_position_str != \"\":\n                    object_positions = []\n                    for o in object_position_str.replace('\"','').split(\";\"):\n                        object_positions.append(o)\n                    config.update({'OBJECT_POSITIONS': object_positions})\n                    config.update({'NUMBER_OF_OBSTACLES': str(len(object_positions))})\n                else:\n                    config.pop('OBJECT_POSITIONS',[])\n            else:\n                config.pop('NUMBER_OF_OBSTACLES', None)\n                config.pop('MIN_DISTANCE_BETWEEN_OBSTACLES', None)\n                config.pop('RANDOMIZE_OBSTACLE_LOCATIONS', None)\n                config.pop('IS_OBSTACLE_BOT_CAR', None)\n                config.pop('OBJECT_POSITIONS',[])\n\n            # Update Head to Bot parameters\n            if config['RACE_TYPE'] == 'HEAD_TO_BOT':\n                config.update({'IS_LANE_CHANGE': os.environ.get('DR_H2B_IS_LANE_CHANGE')})\n                config.update({'LOWER_LANE_CHANGE_TIME': os.environ.get('DR_H2B_LOWER_LANE_CHANGE_TIME')})\n                
config.update({'UPPER_LANE_CHANGE_TIME': os.environ.get('DR_H2B_UPPER_LANE_CHANGE_TIME')})\n                config.update({'LANE_CHANGE_DISTANCE': os.environ.get('DR_H2B_LANE_CHANGE_DISTANCE')})\n                config.update({'NUMBER_OF_BOT_CARS': os.environ.get('DR_H2B_NUMBER_OF_BOT_CARS')})\n                config.update({'MIN_DISTANCE_BETWEEN_BOT_CARS': os.environ.get('DR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS')})\n                config.update({'RANDOMIZE_BOT_CAR_LOCATIONS': os.environ.get('DR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS')})\n                config.update({'BOT_CAR_SPEED': os.environ.get('DR_H2B_BOT_CAR_SPEED')})\n                config.update({'PENALTY_SECONDS': os.environ.get('DR_H2B_BOT_CAR_PENALTY')})\n            else:\n                config.pop('IS_LANE_CHANGE', None)\n                config.pop('LOWER_LANE_CHANGE_TIME', None)\n                config.pop('UPPER_LANE_CHANGE_TIME', None)\n                config.pop('LANE_CHANGE_DISTANCE', None)\n                config.pop('NUMBER_OF_BOT_CARS', None)\n                config.pop('MIN_DISTANCE_BETWEEN_BOT_CARS', None)\n                config.pop('RANDOMIZE_BOT_CAR_LOCATIONS', None)\n                config.pop('BOT_CAR_SPEED', None)\n\n            #split string s3_yaml_name, insert the worker number, and add back on the .yaml extension\n            s3_yaml_name_list = s3_yaml_name.split('.')\n            s3_yaml_name_temp = s3_yaml_name_list[0] + \"_%d.yaml\" % i\n\n            #upload additional training params files\n            yaml_key = os.path.normpath(os.path.join(s3_prefix, s3_yaml_name_temp))\n            local_yaml_path = os.path.abspath(os.path.join(os.environ.get('DR_DIR'),'tmp', 'training-params-' + train_time + '-' + str(i) + '.yaml'))\n            with open(local_yaml_path, 'w') as yaml_file:\n                yaml.dump(config, yaml_file, default_flow_style=False, default_style='\\'', explicit_start=True)\n            s3_client.upload_file(Bucket=s3_bucket, Key=yaml_key, 
Filename=local_yaml_path)\n\n            # Store in multi_config array\n            multi_config['multi_config'][i - 1] = {'config_file': s3_yaml_name_temp,\n                                                             'world_name': config['WORLD_NAME']}\n\n    print(json.dumps(multi_config))\n\nelse:\n    s3_client.upload_file(Bucket=s3_bucket, Key=yaml_key, Filename=local_yaml_path)\n"
  },
  {
    "path": "scripts/training/start.sh",
    "content": "#!/usr/bin/env bash\n\nsource $DR_DIR/bin/scripts_wrapper.sh\n\nusage() {\n  echo \"Usage: $0 [-w] [-q | -s | -r [n] | -a ] [-v]\"\n  echo \"       -w        Wipes the target AWS DeepRacer model structure before upload.\"\n  echo \"       -q        Do not output / follow a log when starting.\"\n  echo \"       -a        Follow all Sagemaker and Robomaker logs.\"\n  echo \"       -s        Follow Sagemaker logs (default).\"\n  echo \"       -v        Updates the viewer webpage.\"\n  echo \"       -r [n]    Follow Robomaker logs for worker n (default worker 0 / replica 1).\"\n  exit 1\n}\n\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n  echo \"Requested to stop.\"\n  exit 1\n}\n\nOPT_DISPLAY=\"SAGEMAKER\"\n\nwhile getopts \":whqsavr:\" opt; do\n  case $opt in\n  w)\n    OPT_WIPE=\"WIPE\"\n    ;;\n  q)\n    OPT_QUIET=\"QUIET\"\n    ;;\n  s)\n    OPT_DISPLAY=\"SAGEMAKER\"\n    ;;\n  a)\n    OPT_DISPLAY=\"ALL\"\n    ;;\n  r) # Check if value is in numeric format.\n    OPT_DISPLAY=\"ROBOMAKER\"\n    if [[ $OPTARG =~ ^[0-9]+$ ]]; then\n      OPT_ROBOMAKER=$OPTARG\n    else\n      OPT_ROBOMAKER=0\n      ((OPTIND--))\n    fi\n    ;;\n  v)\n    OPT_VIEWER=\"VIEWER\"\n    ;;\n  h)\n    usage\n    ;;\n  \\?)\n    echo \"Invalid option -$OPTARG\" >&2\n    usage\n    ;;\n  esac\ndone\n\n## Check if WSL2\nif [[ -f /proc/version ]] && grep -qi Microsoft /proc/version && grep -q \"WSL2\" /proc/version; then\n    IS_WSL2=\"yes\"\nfi\n\n# Ensure Sagemaker's folder is there\n_dr_ensure_sagemaker_dir\n\n# set evaluation specific environment variables\nSTACK_NAME=\"deepracer-$DR_RUN_ID\"\nSTACK_CONTAINERS=$(docker stack ps $STACK_NAME 2>/dev/null | wc -l)\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n  if [[ \"$STACK_CONTAINERS\" -gt 1 ]]; then\n    echo \"ERROR: Processes running in stack $STACK_NAME. 
Stop training with dr-stop-training.\"\n    exit 1\n  fi\nfi\n\n# Check if metadata-files are available\nWORK_DIR=${DR_DIR}/tmp/start/\nmkdir -p ${WORK_DIR}\nrm -f ${WORK_DIR}/*\n\nREWARD_FILE=\"\"\nMETADATA_FILE=\"\"\nHYPERPARAM_FILE=\"\"\n\naws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 cp s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_REWARD_KEY} ${WORK_DIR} --no-progress >/dev/null 2>&1\naws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 cp s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_MODEL_METADATA_KEY} ${WORK_DIR} --no-progress >/dev/null 2>&1\naws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 cp s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_HYPERPARAMETERS_KEY} ${WORK_DIR} --no-progress >/dev/null 2>&1\n\nif [ -f \"${WORK_DIR}/$(basename \"$DR_LOCAL_S3_REWARD_KEY\")\" ]; then\n  REWARD_FILE=$(_realpath \"${WORK_DIR}/$(basename \"$DR_LOCAL_S3_REWARD_KEY\")\")\nfi\n\nif [ -f \"${WORK_DIR}/$(basename \"$DR_LOCAL_S3_MODEL_METADATA_KEY\")\" ]; then\n  METADATA_FILE=$(_realpath \"${WORK_DIR}/$(basename \"$DR_LOCAL_S3_MODEL_METADATA_KEY\")\")\nfi\n\nif [ -f \"${WORK_DIR}/$(basename \"$DR_LOCAL_S3_HYPERPARAMETERS_KEY\")\" ]; then\n  HYPERPARAM_FILE=$(_realpath \"${WORK_DIR}/$(basename \"$DR_LOCAL_S3_HYPERPARAMETERS_KEY\")\")\nfi\n\nif [ -n \"$METADATA_FILE\" ] && [ -n \"$REWARD_FILE\" ] && [ -n \"$HYPERPARAM_FILE\" ]; then\n  echo \"Training of model s3://$DR_LOCAL_S3_BUCKET/$DR_LOCAL_S3_MODEL_PREFIX starting.\"\n  echo \"Using configuration files:\"\n  echo \"   s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_REWARD_KEY}\"\n  echo \"   s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_MODEL_METADATA_KEY}\"\n  echo \"   s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_HYPERPARAMETERS_KEY}\"\n  echo \"Using image ${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}\"\n  echo \"\"\nelse\n  echo \"Training aborted. 
Configuration files were not found.\"\n  echo \"Manually check that the following files exist:\"\n  echo \"   s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_REWARD_KEY}\"\n  echo \"   s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_MODEL_METADATA_KEY}\"\n  echo \"   s3://${DR_LOCAL_S3_BUCKET}/${DR_LOCAL_S3_HYPERPARAMETERS_KEY}\"\n  echo \"You might have to run dr-upload-custom-files.\"\n  exit 1\nfi\n\n# Check if model path exists.\nS3_PATH=\"s3://$DR_LOCAL_S3_BUCKET/$DR_LOCAL_S3_MODEL_PREFIX\"\n\nS3_FILES=$(aws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 ls ${S3_PATH} | wc -l)\nif [[ \"$S3_FILES\" -gt 0 ]]; then\n  if [[ -z $OPT_WIPE ]]; then\n    echo \"Selected path $S3_PATH exists. Delete it, or use -w option. Exiting.\"\n    exit 1\n  else\n    echo \"Wiping path $S3_PATH.\"\n    aws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 rm --recursive ${S3_PATH}\n  fi\nfi\n\n# Base compose file\nif [ \"${DR_ROBOMAKER_MOUNT_LOGS,,}\" = \"true\" ]; then\n  COMPOSE_FILES=\"$DR_TRAIN_COMPOSE_FILE $DR_DOCKER_FILE_SEP $DR_DIR/docker/docker-compose-mount.yml\"\n  export DR_MOUNT_DIR=\"$DR_DIR/data/logs/robomaker/$DR_LOCAL_S3_MODEL_PREFIX\"\n  mkdir -p $DR_MOUNT_DIR\nelse\n  COMPOSE_FILES=\"$DR_TRAIN_COMPOSE_FILE\"\nfi\n\nexport DR_CURRENT_PARAMS_FILE=${DR_LOCAL_S3_TRAINING_PARAMS_FILE}\n\nWORKER_CONFIG=$(python3 $DR_DIR/scripts/training/prepare-config.py)\n\nif [ \"$DR_WORKERS\" -gt 1 ]; then\n  echo \"Starting $DR_WORKERS workers\"\n\n  if [[ \"${DR_DOCKER_STYLE,,}\" != \"swarm\" ]]; then\n    mkdir -p $DR_DIR/tmp/comms.$DR_RUN_ID\n    rm -rf $DR_DIR/tmp/comms.$DR_RUN_ID/*\n    COMPOSE_FILES=\"$COMPOSE_FILES $DR_DOCKER_FILE_SEP $DR_DIR/docker/docker-compose-robomaker-multi.yml\"\n  fi\n\n  if [ \"$DR_TRAIN_MULTI_CONFIG\" == \"True\" ]; then\n    export MULTI_CONFIG=$WORKER_CONFIG\n    echo \"Multi-config training, creating multiple Robomaker configurations in $S3_PATH\"\n  else\n    echo \"Creating Robomaker configuration in $S3_PATH/$DR_LOCAL_S3_TRAINING_PARAMS_FILE\"\n  fi\n  export ROBOMAKER_COMMAND=\"/opt/ml/code/run.sh multi distributed_training.launch.py\"\n\nelse\n  export ROBOMAKER_COMMAND=\"/opt/ml/code/run.sh run distributed_training.launch.py\"\n  echo \"Creating Robomaker configuration in $S3_PATH/$DR_LOCAL_S3_TRAINING_PARAMS_FILE\"\nfi\n\n# Check if we are using Host X -- ensure variables are populated\nif [[ \"${DR_HOST_X,,}\" == \"true\" ]]; then\n  if [[ -n \"$DR_DISPLAY\" ]]; then\n    ROBO_DISPLAY=$DR_DISPLAY\n  else\n    ROBO_DISPLAY=$DISPLAY\n  fi\n\n  if ! DISPLAY=$ROBO_DISPLAY timeout 1s xset q &>/dev/null; then\n    echo \"No X Server running on display $ROBO_DISPLAY. Exiting\"\n    exit 1\n  fi\n\n  if [[ -z \"$XAUTHORITY\" && \"$IS_WSL2\" != \"yes\" ]]; then\n    export XAUTHORITY=~/.Xauthority\n    if [[ ! -f \"$XAUTHORITY\" ]]; then\n      echo \"No XAUTHORITY defined. .Xauthority does not exist. Stopping.\"\n      exit 1\n    fi\n  fi\n\nfi\n\n# Check if we will use Docker Swarm or Docker Compose\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n  ROBOMAKER_NODES=$(docker node ls --format '{{.ID}}' | xargs docker inspect | jq '.[] | select (.Spec.Labels.Robomaker == \"true\") | .ID' | wc -l)\n  if [[ \"$ROBOMAKER_NODES\" -eq 0 ]]; then\n    echo \"ERROR: No Swarm Nodes labelled for placement of Robomaker. Please add Robomaker node.\"\n    echo \"       Example: docker node update --label-add Robomaker=true $(docker node inspect self | jq .[0].ID -r)\"\n    exit 1\n  fi\n\n  SAGEMAKER_NODES=$(docker node ls --format '{{.ID}}' | xargs docker inspect | jq '.[] | select (.Spec.Labels.Sagemaker == \"true\") | .ID' | wc -l)\n  if [[ \"$SAGEMAKER_NODES\" -eq 0 ]]; then\n    echo \"ERROR: No Swarm Nodes labelled for placement of Sagemaker. 
Please add Sagemaker node.\"\n    echo \"       Example: docker node update --label-add Sagemaker=true $(docker node inspect self | jq .[0].ID -r)\"\n    exit 1\n  fi\n\n  if [ \"$DR_DOCKER_MAJOR_VERSION\" -gt 24 ]; then\n    DETACH_FLAG=\"--detach=true\"\n  fi\n\n  DISPLAY=$ROBO_DISPLAY docker stack deploy $COMPOSE_FILES $DETACH_FLAG $STACK_NAME\n\nelse\n  DISPLAY=$ROBO_DISPLAY docker compose $COMPOSE_FILES -p $STACK_NAME up -d --scale robomaker=$DR_WORKERS\nfi\n\n# Viewer\nif [ -n \"$OPT_VIEWER\" ]; then\n  (\n    sleep 5\n    dr-update-viewer\n  )\nfi\n\n# Request to be quiet. Quitting here.\nif [ -n \"$OPT_QUIET\" ]; then\n  exit 0\nfi\n\n# Trigger requested log-file\nif [[ \"${OPT_DISPLAY,,}\" == \"all\" && -n \"${DISPLAY}\" && \"${DR_HOST_X,,}\" == \"true\" ]]; then\n  dr-logs-sagemaker -w 15\n  if [ \"${DR_WORKERS}\" -gt 1 ]; then\n    for i in $(seq 1 ${DR_WORKERS}); do\n      dr-logs-robomaker -w 15 -n $i\n    done\n  else\n    dr-logs-robomaker -w 15\n  fi\nelif [[ \"${OPT_DISPLAY,,}\" == \"robomaker\" ]]; then\n  dr-logs-robomaker -w 15 -n $OPT_ROBOMAKER\nelif [[ \"${OPT_DISPLAY,,}\" == \"sagemaker\" ]]; then\n  dr-logs-sagemaker -w 15\nfi\n"
  },
  {
    "path": "scripts/training/stop.sh",
    "content": "#!/usr/bin/env bash\nsource $DR_DIR/bin/scripts_wrapper.sh\n\nSTACK_NAME=\"deepracer-$DR_RUN_ID\"\n\nSAGEMAKER_CONTAINERS=$(dr-find-sagemaker)\n\nif [[ -n \"$SAGEMAKER_CONTAINERS\" ]]; then\n    for CONTAINER in $SAGEMAKER_CONTAINERS; do\n        CONTAINER_NAME=$(docker ps --format '{{.Names}}' --filter id=$CONTAINER)\n        if [[ -n \"$CONTAINER_NAME\" ]]; then\n            echo Found Sagemaker as $CONTAINER_NAME\n            if _dr_is_macos; then\n                echo \"Stopping container $CONTAINER_NAME\"\n                docker stop $CONTAINER || true\n                docker container rm $CONTAINER -v >/dev/null 2>&1 || true\n            else\n                COMPOSE_SERVICE_NAME=$(echo $CONTAINER_NAME | perl -n -e'/(.*)-(algo-(.)-(.*))/; print $2')\n                if [[ -n \"$COMPOSE_SERVICE_NAME\" ]]; then\n                    COMPOSE_FILES=$(_dr_find_sagemaker_compose_files \"$COMPOSE_SERVICE_NAME\")\n                    for COMPOSE_FILE in $COMPOSE_FILES; do\n                        if _dr_compose_file_matches_run \"$COMPOSE_FILE\"; then\n                            if [ \"$DR_DOCKER_MAJOR_VERSION\" -gt 24 ]; then\n                                sudo sed -i '/^version:/d' $COMPOSE_FILE\n                            fi\n\n                            echo \"Stopping service $COMPOSE_SERVICE_NAME\"\n                            sudo docker compose -f $COMPOSE_FILE stop $COMPOSE_SERVICE_NAME\n                            docker container rm $CONTAINER -v >/dev/null 2>&1 || true\n                        fi\n                    done\n                fi\n            fi\n        fi\n    done\nfi\n\n# Check if we will use Docker Swarm or Docker Compose\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n    docker stack rm $STACK_NAME\nelse\n    COMPOSE_FILES=$(echo ${DR_TRAIN_COMPOSE_FILE} | cut -f1-2 -d\\ )\n    export DR_CURRENT_PARAMS_FILE=\"\"\n    docker compose $COMPOSE_FILES -p $STACK_NAME down\nfi\n"
  },
  {
    "path": "scripts/upload/download-model.sh",
"content": "#!/usr/bin/env bash\n\nusage() {\n  echo \"Usage: $0 [-f] [-w] [-d] [-c] -s <source-url> -t <target-prefix>\"\n  echo \"       -f                Force download. No confirmation question.\"\n  echo \"       -w                Wipes the target AWS DeepRacer model structure before download.\"\n  echo \"       -d                Dry-Run mode. Does not perform any write or delete operations on target.\"\n  echo \"       -c                Copy config files into custom_files.\"\n  echo \"       -s source-url     Downloads model from specified S3 URL (s3://bucket/prefix).\"\n  echo \"       -t target-prefix  Downloads model into specified prefix in local storage.\"\n  exit 1\n}\n\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n  echo \"Requested to stop.\"\n  exit 1\n}\n\nwhile getopts \"s:t:fwcdh\" opt; do\n  case $opt in\n  f)\n    OPT_FORCE=\"True\"\n    ;;\n  c)\n    OPT_CONFIG=\"Config\"\n    ;;\n  d)\n    OPT_DRYRUN=\"--dryrun\"\n    ;;\n  w)\n    OPT_WIPE=\"--delete\"\n    ;;\n  t)\n    OPT_TARGET=\"$OPTARG\"\n    ;;\n  s)\n    OPT_SOURCE=\"$OPTARG\"\n    ;;\n  h)\n    usage\n    ;;\n  \\?)\n    echo \"Invalid option -$OPTARG\" >&2\n    usage\n    ;;\n  esac\ndone\n\nif [[ -n \"${OPT_DRYRUN}\" ]]; then\n  echo \"*** DRYRUN MODE ***\"\nfi\n\nSOURCE_S3_URL=\"${OPT_SOURCE}\"\n\nif [[ -z \"${SOURCE_S3_URL}\" ]]; then\n  echo \"No source URL to download model from.\"\n  exit 1\nfi\n\nTARGET_S3_BUCKET=${DR_LOCAL_S3_BUCKET}\nTARGET_S3_PREFIX=${OPT_TARGET}\nif [[ -z \"${TARGET_S3_PREFIX}\" ]]; then\n  echo \"No target prefix defined. 
Exiting.\"\n  exit 1\nfi\n\nSOURCE_REWARD_FILE_S3_KEY=\"${SOURCE_S3_URL}/reward_function.py\"\nSOURCE_HYPERPARAM_FILE_S3_KEY=\"${SOURCE_S3_URL}/ip/hyperparameters.json\"\nSOURCE_METADATA_S3_KEY=\"${SOURCE_S3_URL}/model/model_metadata.json\"\n\nWORK_DIR=${DR_DIR}/tmp/download\nmkdir -p ${WORK_DIR} && rm -rf ${WORK_DIR} && mkdir -p ${WORK_DIR}/config ${WORK_DIR}/full\n\n# Check if metadata-files are available\nREWARD_FILE=\"\"\nMETADATA_FILE=\"\"\nHYPERPARAM_FILE=\"\"\n\naws ${DR_UPLOAD_PROFILE} s3 cp \"${SOURCE_REWARD_FILE_S3_KEY}\" ${WORK_DIR}/config/ --no-progress >/dev/null\naws ${DR_UPLOAD_PROFILE} s3 cp \"${SOURCE_METADATA_S3_KEY}\" ${WORK_DIR}/config/ --no-progress >/dev/null\naws ${DR_UPLOAD_PROFILE} s3 cp \"${SOURCE_HYPERPARAM_FILE_S3_KEY}\" ${WORK_DIR}/config/ --no-progress >/dev/null\n\nif [ -f \"${WORK_DIR}/config/$(basename \"$SOURCE_REWARD_FILE_S3_KEY\")\" ]; then\n  REWARD_FILE=$(_realpath \"${WORK_DIR}/config/$(basename \"$SOURCE_REWARD_FILE_S3_KEY\")\")\nfi\n\nif [ -f \"${WORK_DIR}/config/$(basename \"$SOURCE_METADATA_S3_KEY\")\" ]; then\n  METADATA_FILE=$(_realpath \"${WORK_DIR}/config/$(basename \"$SOURCE_METADATA_S3_KEY\")\")\nfi\n\nif [ -f \"${WORK_DIR}/config/$(basename \"$SOURCE_HYPERPARAM_FILE_S3_KEY\")\" ]; then\n  HYPERPARAM_FILE=$(_realpath \"${WORK_DIR}/config/$(basename \"$SOURCE_HYPERPARAM_FILE_S3_KEY\")\")\nfi\n\nif [ -n \"$METADATA_FILE\" ] && [ -n \"$REWARD_FILE\" ] && [ -n \"$HYPERPARAM_FILE\" ]; then\n  echo \"All meta-data files found. Source model ${SOURCE_S3_URL} valid.\"\nelse\n  echo \"Meta-data files are not found. Source model ${SOURCE_S3_URL} not valid. Exiting.\"\n  exit 1\nfi\n\n# Download files\nif [[ -z \"${OPT_FORCE}\" ]]; then\n  echo \"Ready to download model ${SOURCE_S3_URL} to local ${TARGET_S3_PREFIX}\"\n  read -r -p \"Are you sure? [y/N] \" response\n  if [[ ! \"$response\" =~ ^([yY][eE][sS]|[yY])$ ]]; then\n    echo \"Aborting.\"\n    exit 1\n  fi\nfi\n\ncd ${WORK_DIR}\naws ${DR_UPLOAD_PROFILE} s3 sync \"${SOURCE_S3_URL}\" ${WORK_DIR}/full/ ${OPT_DRYRUN}\naws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 sync ${WORK_DIR}/full/ s3://${TARGET_S3_BUCKET}/${TARGET_S3_PREFIX}/ ${OPT_DRYRUN} ${OPT_WIPE}\n\nif [[ -n \"${OPT_CONFIG}\" ]]; then\n  echo \"Copy configuration to custom_files\"\n  cp ${WORK_DIR}/config/* ${DR_DIR}/custom_files/\nfi\n\necho \"Done.\"\n"
  },
  {
    "path": "scripts/upload/increment.sh",
    "content": "#!/usr/bin/env bash\n\nusage() {\n    echo \"Usage: $0 [-f] [-w] [-p <model-prefix>] [-d <delimiter>]\"\n    echo \"\"\n    echo \"Command will increment a numerical suffix on the current upload model.\"\n    echo \"-p model  Sets the to-be name to be <model-prefix> rather than auto-incremeneting the previous model.\"\n    echo \"-d delim  Delimiter in model-name (e.g. '-' in 'test-model-1')\"\n    echo \"-f        Force. Ask for no confirmations.\"\n    echo \"-w        Wipe the S3 prefix to ensure that two models are not mixed.\"\n    exit 1\n}\n\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n    echo \"Requested to stop.\"\n    exit 1\n}\n\nOPT_DELIM='-'\n\nwhile getopts \":fwp:d:\" opt; do\n    case $opt in\n\n    f)\n        OPT_FORCE=\"True\"\n        ;;\n    p)\n        OPT_PREFIX=\"$OPTARG\"\n        ;;\n    w)\n        OPT_WIPE=\"--delete\"\n        ;;\n    d)\n        OPT_DELIM=\"$OPTARG\"\n        ;;\n    h)\n        usage\n        ;;\n    \\?)\n        echo \"Invalid option -$OPTARG\" >&2\n        usage\n        ;;\n    esac\ndone\n\nCONFIG_FILE=$DR_CONFIG\necho \"Configuration file $CONFIG_FILE will be updated.\"\n\n## Read in data\nCURRENT_UPLOAD_MODEL=$(grep -e \"^DR_UPLOAD_S3_PREFIX\" ${CONFIG_FILE} | awk '{split($0,a,\"=\"); print a[2] }')\nCURRENT_UPLOAD_MODEL_NUM=$(echo \"${CURRENT_UPLOAD_MODEL}\" |\n    awk -v DELIM=\"${OPT_DELIM}\" '{ n=split($0,a,DELIM); if (a[n] ~ /[0-9]*/) print a[n]; else print \"\"; }')\nif [[ -z ${CURRENT_UPLOAD_MODEL_NUM} ]]; then\n    NEW_UPLOAD_MODEL=\"${CURRENT_UPLOAD_MODEL}${OPT_DELIM}1\"\nelse\n    NEW_UPLOAD_MODEL_NUM=$(echo \"${CURRENT_UPLOAD_MODEL_NUM} + 1\" | bc)\n    NEW_UPLOAD_MODEL=$(echo $CURRENT_UPLOAD_MODEL | sed \"s/${CURRENT_UPLOAD_MODEL_NUM}\\$/${NEW_UPLOAD_MODEL_NUM}/\")\nfi\n\nif [[ -n \"${NEW_UPLOAD_MODEL}\" ]]; then\n    echo \"Incrementing model from ${CURRENT_UPLOAD_MODEL} to ${NEW_UPLOAD_MODEL}\"\n    if [[ -z \"${OPT_FORCE}\" ]]; then\n        read -r -p \"Are you sure? 
[y/N] \" response\n        if [[ ! \"$response\" =~ ^([yY][eE][sS]|[yY])$ ]]; then\n            echo \"Aborting.\"\n            exit 1\n        fi\n    fi\n    sed -i.bak -re \"s/(DR_UPLOAD_S3_PREFIX=).*$/\\1$NEW_UPLOAD_MODEL/g\" \"$CONFIG_FILE\" && echo \"Done.\"\nelse\n    echo \"Error in determining new model. Aborting.\"\n    exit 1\nfi\n\nexport DR_UPLOAD_S3_PREFIX=$(eval echo \"${NEW_UPLOAD_MODEL}\")\n\nif [[ -n \"${OPT_WIPE}\" ]]; then\n    MODEL_DIR_S3=$(aws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 ls s3://${DR_LOCAL_S3_BUCKET}/${NEW_UPLOAD_MODEL})\n    if [[ -n \"${MODEL_DIR_S3}\" ]]; then\n        echo \"The new model's S3 prefix s3://${DR_LOCAL_S3_BUCKET}/${NEW_UPLOAD_MODEL} exists. Will wipe.\"\n    fi\n    if [[ -z \"${OPT_FORCE}\" ]]; then\n        read -r -p \"Are you sure? [y/N] \" response\n        if [[ ! \"$response\" =~ ^([yY][eE][sS]|[yY])$ ]]; then\n            echo \"Aborting.\"\n            exit 1\n        fi\n    fi\n    aws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 rm s3://${DR_LOCAL_S3_BUCKET}/${NEW_UPLOAD_MODEL} --recursive\nfi\n"
  },
  {
    "path": "scripts/upload/prepare-config.py",
    "content": "#!/usr/bin/python3\n\nimport boto3\nimport sys\nimport os \nimport time\nimport json\nimport io\nimport yaml\n\nconfig = {}\nconfig['AWS_REGION'] = os.environ.get('DR_AWS_APP_REGION', 'us-east-1')\nconfig['JOB_TYPE'] = 'TRAINING'\nconfig['METRICS_S3_BUCKET'] = os.environ.get('TARGET_S3_BUCKET', 'bucket')\nconfig['METRICS_S3_OBJECT_KEY'] = \"{}/TrainingMetrics.json\".format(os.environ.get('TARGET_S3_PREFIX', 'bucket'))\nconfig['MODEL_METADATA_FILE_S3_KEY'] = \"{}/model/model_metadata.json\".format(os.environ.get('TARGET_S3_PREFIX', 'bucket'))\nconfig['REWARD_FILE_S3_KEY'] = \"{}/reward_function.py\".format(os.environ.get('TARGET_S3_PREFIX', 'bucket'))\nconfig['SAGEMAKER_SHARED_S3_BUCKET'] = os.environ.get('TARGET_S3_BUCKET', 'bucket')\nconfig['SAGEMAKER_SHARED_S3_PREFIX'] = os.environ.get('TARGET_S3_PREFIX', 'rl-deepracer-sagemaker')\n\n# Car and training \nconfig['BODY_SHELL_TYPE'] = os.environ.get('DR_CAR_BODY_SHELL_TYPE', 'deepracer')\nif config['BODY_SHELL_TYPE'] == 'deepracer':\n    config['CAR_COLOR'] = os.environ.get('DR_CAR_COLOR', 'Red')\nconfig['CAR_NAME'] = os.environ.get('DR_CAR_NAME', 'MyCar')\nconfig['RACE_TYPE'] = os.environ.get('DR_RACE_TYPE', 'TIME_TRIAL')\nconfig['WORLD_NAME'] = os.environ.get('DR_WORLD_NAME', 'LGSWide')\nconfig['DISPLAY_NAME'] = os.environ.get('DR_DISPLAY_NAME', 'racer1')\nconfig['RACER_NAME'] = os.environ.get('DR_RACER_NAME', 'racer1')\n\nconfig['ALTERNATE_DRIVING_DIRECTION'] = os.environ.get('DR_TRAIN_ALTERNATE_DRIVING_DIRECTION', os.environ.get('DR_ALTERNATE_DRIVING_DIRECTION', 'false'))\nconfig['CHANGE_START_POSITION'] = os.environ.get('DR_TRAIN_CHANGE_START_POSITION', os.environ.get('DR_CHANGE_START_POSITION', 'true'))\nconfig['ROUND_ROBIN_ADVANCE_DIST'] = os.environ.get('DR_TRAIN_ROUND_ROBIN_ADVANCE_DIST', '0.05')\nconfig['START_POSITION_OFFSET'] = os.environ.get('DR_TRAIN_START_POSITION_OFFSET', '0.00')\nconfig['ENABLE_DOMAIN_RANDOMIZATION'] = os.environ.get('DR_ENABLE_DOMAIN_RANDOMIZATION', 
'false')\nconfig['MIN_EVAL_TRIALS'] = os.environ.get('DR_TRAIN_MIN_EVAL_TRIALS', '5')\n\n# Object Avoidance\nif config['RACE_TYPE'] == 'OBJECT_AVOIDANCE':\n    config['NUMBER_OF_OBSTACLES'] = os.environ.get('DR_OA_NUMBER_OF_OBSTACLES', '6')\n    config['MIN_DISTANCE_BETWEEN_OBSTACLES'] = os.environ.get('DR_OA_MIN_DISTANCE_BETWEEN_OBSTACLES', '2.0')\n    config['RANDOMIZE_OBSTACLE_LOCATIONS'] = os.environ.get('DR_OA_RANDOMIZE_OBSTACLE_LOCATIONS', 'True')\n    config['IS_OBSTACLE_BOT_CAR'] = os.environ.get('DR_OA_IS_OBSTACLE_BOT_CAR', 'false')\n\n    object_position_str = os.environ.get('DR_OA_OBJECT_POSITIONS', \"\")\n    if object_position_str != \"\":\n        object_positions = []\n        for o in object_position_str.split(\";\"):\n            object_positions.append(o)\n        config['OBJECT_POSITIONS'] = object_positions\n        config['NUMBER_OF_OBSTACLES'] = str(len(object_positions))\n\n# Head to Bot\nif config['RACE_TYPE'] == 'HEAD_TO_BOT':\n    config['IS_LANE_CHANGE'] = os.environ.get('DR_H2B_IS_LANE_CHANGE', 'False')\n    config['LOWER_LANE_CHANGE_TIME'] = os.environ.get('DR_H2B_LOWER_LANE_CHANGE_TIME', '3.0')\n    config['UPPER_LANE_CHANGE_TIME'] = os.environ.get('DR_H2B_UPPER_LANE_CHANGE_TIME', '5.0')\n    config['LANE_CHANGE_DISTANCE'] = os.environ.get('DR_H2B_LANE_CHANGE_DISTANCE', '1.0')\n    config['NUMBER_OF_BOT_CARS'] = os.environ.get('DR_H2B_NUMBER_OF_BOT_CARS', '0')\n    config['MIN_DISTANCE_BETWEEN_BOT_CARS'] = os.environ.get('DR_H2B_MIN_DISTANCE_BETWEEN_BOT_CARS', '2.0')\n    config['RANDOMIZE_BOT_CAR_LOCATIONS'] = os.environ.get('DR_H2B_RANDOMIZE_BOT_CAR_LOCATIONS', 'False')\n    config['BOT_CAR_SPEED'] = os.environ.get('DR_H2B_BOT_CAR_SPEED', '0.2')\n\nlocal_yaml_path = os.path.abspath(os.path.join(os.environ.get('WORK_DIR'),'training_params.yaml'))\nprint(local_yaml_path)\nwith open(local_yaml_path, 'w') as yaml_file:\n    yaml.dump(config, yaml_file, default_flow_style=False, default_style='\\'', explicit_start=True)"
  },
  {
    "path": "scripts/upload/upload-car.sh",
    "content": "#!/usr/bin/env bash\n\nusage() {\n    echo \"Usage: $0 [-L] [-f]\"\n    echo \"       -f        Force. Do not ask for confirmation.\"\n    echo \"       -L        Upload model to the local S3 bucket.\"\n    exit 1\n}\n\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n    echo \"Requested to stop.\"\n    exit 1\n}\n\nwhile getopts \":Lf\" opt; do\n    case $opt in\n    L)\n        OPT_LOCAL=\"Local\"\n        ;;\n    f)\n        OPT_FORCE=\"force\"\n        ;;\n    h)\n        usage\n        ;;\n    \\?)\n        echo \"Invalid option -$OPTARG\" >&2\n        usage\n        ;;\n    esac\ndone\n\n# This script creates the tar.gz file necessary to operate inside a deepracer physical car\n# The file is created directly from within the sagemaker container, using the most recent checkpoint\n\n# Find name of sagemaker container\nSAGEMAKER_CONTAINERS=$(docker ps | awk ' /algo/ { print $1 } ' | xargs)\nif [[ -n $SAGEMAKER_CONTAINERS ]]; then\n    for CONTAINER in $SAGEMAKER_CONTAINERS; do\n        CONTAINER_NAME=$(docker ps --format '{{.Names}}' --filter id=$CONTAINER)\n        CONTAINER_PREFIX=$(echo $CONTAINER_NAME | perl -n -e'/(.*)_(algo(.*))_./; print $1')\n        echo \"Found Sagemaker container: $CONTAINER_NAME\"\n    done\nfi\n\n#create tmp directory if it doesnt already exit\nmkdir -p $DR_DIR/tmp/car_upload\ncd $DR_DIR/tmp/car_upload\n#ensure directory is empty\nrm -r $DR_DIR/tmp/car_upload/*\n#The files we want are located inside the sagemaker container at /opt/ml/model.  
Copy them to the tmp directory\ndocker cp $CONTAINER_NAME:/opt/ml/model $DR_DIR/tmp/car_upload\ncd $DR_DIR/tmp/car_upload/model\n#create a tar.gz file containing all of these files\ntar -czvf carfile.tar.gz *\n\n# Upload files\nif [[ -z \"${OPT_FORCE}\" ]]; then\n    if [[ -n \"${OPT_LOCAL}\" ]]; then\n        echo \"Ready to upload car model to local s3://${DR_LOCAL_S3_BUCKET}/${DR_UPLOAD_S3_PREFIX}.\"\n    else\n        echo \"Ready to upload car model to remote s3://${DR_UPLOAD_S3_BUCKET}/${DR_UPLOAD_S3_PREFIX}.\"\n    fi\n    read -r -p \"Are you sure? [y/N] \" response\n    if [[ ! \"$response\" =~ ^([yY][eE][sS]|[yY])$ ]]; then\n        echo \"Aborting.\"\n        exit 1\n    fi\nfi\n\n#upload to s3\nif [[ -n \"${OPT_LOCAL}\" ]]; then\n    aws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 cp carfile.tar.gz s3://${DR_LOCAL_S3_BUCKET}/${DR_UPLOAD_S3_PREFIX}/carfile.tar.gz\nelse\n    aws ${DR_UPLOAD_PROFILE} s3 cp carfile.tar.gz s3://${DR_UPLOAD_S3_BUCKET}/${DR_UPLOAD_S3_PREFIX}/carfile.tar.gz\nfi\n"
  },
  {
    "path": "scripts/upload/upload-model.sh",
    "content": "#!/usr/bin/env bash\n\nusage() {\n  echo \"Usage: $0 [-f] [-w] [-d] [-b] [-1] [-i] [-I] [-L] [-c <checkpoint>] [-p <model-prefix>]\"\n  echo \"       -f        Force upload. No confirmation question.\"\n  echo \"       -w        Wipes the target AWS DeepRacer model structure before upload.\"\n  echo \"       -d        Dry-Run mode. Does not perform any write or delete operatios on target.\"\n  echo \"       -b        Uploads best checkpoint. Default is last checkpoint.\"\n  echo \"       -p model  Uploads model from specified S3 prefix.\"\n  echo \"       -1        Increment upload name with 1 (dr-increment-upload-model)\"\n  echo \"       -L        Upload model to the local S3 bucket\"\n  exit 1\n}\n\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n  echo \"Requested to stop.\"\n  exit 1\n}\n\nwhile getopts \":fwdhbp:c:1L\" opt; do\n  case $opt in\n  b)\n    OPT_CHECKPOINT=\"Best\"\n    ;;\n  c)\n    OPT_CHECKPOINT_NUM=\"$OPTARG\"\n    ;;\n  f)\n    OPT_FORCE=\"-f\"\n    ;;\n  d)\n    OPT_DRYRUN=\"--dryrun\"\n    ;;\n  p)\n    OPT_PREFIX=\"$OPTARG\"\n    ;;\n  w)\n    OPT_WIPE=\"--delete\"\n    ;;\n  L)\n    OPT_LOCAL=\"Local\"\n    ;;\n  1)\n    OPT_INCREMENT=\"Yes\"\n    ;;\n  h)\n    usage\n    ;;\n  \\?)\n    echo \"Invalid option -$OPTARG\" >&2\n    usage\n    ;;\n  esac\ndone\n\nif [[ -n \"${OPT_DRYRUN}\" ]]; then\n  echo \"*** DRYRUN MODE ***\"\nfi\n\nif [[ -n \"${OPT_INCREMENT}\" ]]; then\n  source $DR_DIR/scripts/upload/increment.sh ${OPT_FORCE}\nfi\n\nSOURCE_S3_BUCKET=${DR_LOCAL_S3_BUCKET}\nif [[ -n \"${OPT_PREFIX}\" ]]; then\n  SOURCE_S3_MODEL_PREFIX=${OPT_PREFIX}\n  SOURCE_S3_REWARD=${OPT_PREFIX}/reward_function.py\n  SOURCE_S3_METRICS=${OPT_PREFIX}/metrics\n  TARGET_S3_PREFIX=${OPT_PREFIX}\nelse\n  SOURCE_S3_MODEL_PREFIX=${DR_LOCAL_S3_MODEL_PREFIX}\n  SOURCE_S3_REWARD=${DR_LOCAL_S3_REWARD_KEY}\n  SOURCE_S3_METRICS=${DR_LOCAL_S3_METRICS_PREFIX}\n  TARGET_S3_PREFIX=${DR_UPLOAD_S3_PREFIX}\nfi\n\nif [[ -z \"${OPT_LOCAL}\" ]]; then\n  
TARGET_S3_BUCKET=${DR_UPLOAD_S3_BUCKET}\n  UPLOAD_PROFILE=${DR_UPLOAD_PROFILE}\nelse\n  if [[ \"${TARGET_S3_PREFIX}\" = \"${SOURCE_S3_MODEL_PREFIX}\" ]]; then\n    echo \"Target equals source. Exiting.\"\n    exit 1\n  fi\n\n  TARGET_S3_BUCKET=${DR_LOCAL_S3_BUCKET}\n  UPLOAD_PROFILE=${DR_LOCAL_PROFILE_ENDPOINT_URL}\nfi\n\nif [[ -z \"${TARGET_S3_BUCKET}\" ]]; then\n  echo \"No upload bucket defined. Exiting.\"\n  exit 1\nfi\n\nif [[ -z \"${TARGET_S3_PREFIX}\" ]]; then\n  echo \"No upload prefix defined. Exiting.\"\n  exit 1\nfi\n\nexport WORK_DIR=${DR_DIR}/tmp/upload/\nrm -rf ${WORK_DIR} && mkdir -p ${WORK_DIR}model ${WORK_DIR}ip\n\n# Upload information on model.\nTARGET_PARAMS_FILE_S3_KEY=\"s3://${TARGET_S3_BUCKET}/${TARGET_S3_PREFIX}/training_params.yaml\"\nTARGET_REWARD_FILE_S3_KEY=\"s3://${TARGET_S3_BUCKET}/${TARGET_S3_PREFIX}/reward_function.py\"\nTARGET_HYPERPARAM_FILE_S3_KEY=\"s3://${TARGET_S3_BUCKET}/${TARGET_S3_PREFIX}/ip/hyperparameters.json\"\nTARGET_METRICS_FILE_S3_KEY=\"s3://${TARGET_S3_BUCKET}/${TARGET_S3_PREFIX}/metrics/\"\n\n# Check if metadata-files are available\nREWARD_IN_ROOT=$(aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 ls s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_MODEL_PREFIX}/reward_function.py 2>/dev/null | wc -l)\nif [ \"$REWARD_IN_ROOT\" -ne 0 ]; then\n  SOURCE_REWARD_BASENAME=\"reward_function.py\"\n  aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 cp s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_MODEL_PREFIX}/reward_function.py ${WORK_DIR} --no-progress >/dev/null\nelse\n  echo \"Looking for Reward Function in s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_REWARD}\"\n  SOURCE_REWARD_BASENAME=$(basename \"$SOURCE_S3_REWARD\")\n  aws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 cp s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_REWARD} ${WORK_DIR} --no-progress >/dev/null\nfi\n\naws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 cp s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_MODEL_PREFIX}/model/model_metadata.json ${WORK_DIR} --no-progress >/dev/null\naws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 cp 
s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_MODEL_PREFIX}/ip/hyperparameters.json ${WORK_DIR} --no-progress >/dev/null\naws $DR_LOCAL_PROFILE_ENDPOINT_URL s3 sync s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_METRICS} ${WORK_DIR}/metrics --no-progress >/dev/null\n\nREWARD_FILE=\"\"\nMETADATA_FILE=\"\"\nHYPERPARAM_FILE=\"\"\nMETRICS_FILE=\"\"\n\nif [ -f \"${WORK_DIR}${SOURCE_REWARD_BASENAME}\" ]; then\n  REWARD_FILE=$(_realpath \"${WORK_DIR}${SOURCE_REWARD_BASENAME}\")\nfi\n\nif [ -f \"${WORK_DIR}model_metadata.json\" ]; then\n  METADATA_FILE=$(_realpath \"${WORK_DIR}model_metadata.json\")\nfi\n\nif [ -f \"${WORK_DIR}hyperparameters.json\" ]; then\n  HYPERPARAM_FILE=$(_realpath \"${WORK_DIR}hyperparameters.json\")\nfi\n\nif find \"${WORK_DIR}/metrics\" -type f | grep -q .; then\n  METRICS_FILE=$(_realpath \"${WORK_DIR}/metrics\")\nfi\n\nif [ -n \"$METADATA_FILE\" ] && [ -n \"$REWARD_FILE\" ] && [ -n \"$HYPERPARAM_FILE\" ] && [ -n \"$METRICS_FILE\" ]; then\n  echo \"All meta-data files found. Looking for checkpoint.\"\nelse\n  echo \"Meta-data files are not found. Exiting.\"\n  exit 1\nfi\n\n# Download checkpoint file\necho \"Looking for model to upload from s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_MODEL_PREFIX}/\"\nCHECKPOINT_INDEX=\"\"\naws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 cp s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_MODEL_PREFIX}/model/deepracer_checkpoints.json ${WORK_DIR}model/ --no-progress >/dev/null\n\nif [ -f \"${WORK_DIR}model/deepracer_checkpoints.json\" ]; then\n  CHECKPOINT_INDEX=$(_realpath \"${WORK_DIR}model/deepracer_checkpoints.json\")\nfi\n\nif [ -z \"$CHECKPOINT_INDEX\" ]; then\n  echo \"No checkpoint file available at s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_MODEL_PREFIX}/model. 
Exiting.\"\n  exit 1\nfi\n\nif [ -n \"$OPT_CHECKPOINT_NUM\" ]; then\n  echo \"Checking for checkpoint $OPT_CHECKPOINT_NUM\"\n  export OPT_CHECKPOINT_NUM\n  CHECKPOINT_FILE=$(aws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 ls s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_MODEL_PREFIX}/model/ | perl -ne'print \"$1\\n\" if /.*\\s($ENV{OPT_CHECKPOINT_NUM}_Step-[0-9]{1,7}\\.ckpt)\\.index/')\n  CHECKPOINT=$(echo $CHECKPOINT_FILE | cut -f1 -d_)\n  TIMESTAMP=$(date +%s)\n  CHECKPOINT_JSON_PART=$(jq -n '{ checkpoint: { name: $name, time_stamp: $timestamp | tonumber, avg_comp_pct: 50.0 } }' --arg name $CHECKPOINT_FILE --arg timestamp $TIMESTAMP)\n  CHECKPOINT_JSON=$(echo $CHECKPOINT_JSON_PART | jq '. | {last_checkpoint: .checkpoint, best_checkpoint: .checkpoint}')\nelif [ -z \"$OPT_CHECKPOINT\" ]; then\n  echo \"Checking for latest tested checkpoint\"\n  CHECKPOINT_FILE=$(jq -r .last_checkpoint.name <$CHECKPOINT_INDEX)\n  CHECKPOINT=$(echo $CHECKPOINT_FILE | cut -f1 -d_)\n  CHECKPOINT_JSON=$(jq '. | {last_checkpoint: .last_checkpoint, best_checkpoint: .last_checkpoint}' <$CHECKPOINT_INDEX)\n  echo \"Latest checkpoint = $CHECKPOINT\"\nelse\n  echo \"Checking for best checkpoint\"\n  CHECKPOINT_FILE=$(jq -r .best_checkpoint.name <$CHECKPOINT_INDEX)\n  CHECKPOINT=$(echo $CHECKPOINT_FILE | cut -f1 -d_)\n  CHECKPOINT_JSON=$(jq '. 
| {last_checkpoint: .best_checkpoint, best_checkpoint: .best_checkpoint}' <$CHECKPOINT_INDEX)\n  echo \"Best checkpoint: $CHECKPOINT\"\nfi\n\n# Find checkpoint & model files - download\nif [ -n \"$CHECKPOINT\" ]; then\n  aws ${DR_LOCAL_PROFILE_ENDPOINT_URL} s3 sync s3://${SOURCE_S3_BUCKET}/${SOURCE_S3_MODEL_PREFIX}/model/ ${WORK_DIR}model/ --exclude \"*\" --include \"${CHECKPOINT}*\" --include \"model_${CHECKPOINT}.pb\" --include \"deepracer_checkpoints.json\" --no-progress >/dev/null\n  CHECKPOINT_MODEL_FILE_COUNT=$(find \"${WORK_DIR}model\" -maxdepth 1 -type f \\( -name \"${CHECKPOINT}*\" -o -name \"model_${CHECKPOINT}.pb\" -o -name \"deepracer_checkpoints.json\" \\) | wc -l)\n  if [ \"$CHECKPOINT_MODEL_FILE_COUNT\" -eq 0 ]; then\n    echo \"No model files found. Files possibly deleted. Try again.\"\n    exit 1\n  fi\n  cp ${METADATA_FILE} ${WORK_DIR}model/\n  #    echo \"model_checkpoint_path: \\\"${CHECKPOINT_FILE}\\\"\" | tee ${WORK_DIR}model/checkpoint\n  echo ${CHECKPOINT_FILE} | tee ${WORK_DIR}model/.coach_checkpoint >/dev/null\nelse\n  echo \"Checkpoint not found. Exiting.\"\n  exit 1\nfi\n\n# Create Training Params Yaml.\nPARAMS_FILE=$(python3 $DR_DIR/scripts/upload/prepare-config.py)\n\n# Upload files\nif [[ -z \"${OPT_FORCE}\" ]]; then\n  echo \"Ready to upload model ${SOURCE_S3_MODEL_PREFIX} to s3://${TARGET_S3_BUCKET}/${TARGET_S3_PREFIX}/\"\n  read -r -p \"Are you sure? [y/N] \" response\n  if [[ ! 
\"$response\" =~ ^([yY][eE][sS]|[yY])$ ]]; then\n    echo \"Aborting.\"\n    exit 1\n  fi\nfi\n\n# echo \"\" > ${WORK_DIR}model/.ready\ncd ${WORK_DIR}\necho ${CHECKPOINT_JSON} >${WORK_DIR}model/deepracer_checkpoints.json\naws ${UPLOAD_PROFILE} s3 sync ${WORK_DIR}model/ s3://${TARGET_S3_BUCKET}/${TARGET_S3_PREFIX}/model/ ${OPT_DRYRUN} ${OPT_WIPE}\naws ${UPLOAD_PROFILE} s3 cp ${REWARD_FILE} ${TARGET_REWARD_FILE_S3_KEY} ${OPT_DRYRUN}\naws ${UPLOAD_PROFILE} s3 sync ${WORK_DIR}/metrics/ ${TARGET_METRICS_FILE_S3_KEY} ${OPT_DRYRUN}\naws ${UPLOAD_PROFILE} s3 cp ${PARAMS_FILE} ${TARGET_PARAMS_FILE_S3_KEY} ${OPT_DRYRUN}\naws ${UPLOAD_PROFILE} s3 cp ${HYPERPARAM_FILE} ${TARGET_HYPERPARAM_FILE_S3_KEY} ${OPT_DRYRUN}\naws ${UPLOAD_PROFILE} s3 cp ${METADATA_FILE} s3://${TARGET_S3_BUCKET}/${TARGET_S3_PREFIX}/ ${OPT_DRYRUN}\n"
  },
  {
    "path": "scripts/viewer/index.template.html",
    "content": "<!DOCTYPE html>\r\n<html lang=\"en\">\r\n\r\n<head>\r\n    <title>DR-$DR_RUN_ID - $DR_LOCAL_S3_MODEL_PREFIX</title>\r\n    <style>\r\n        :root {\r\n            --primary-color: #500280;\r\n        }\r\n\r\n\r\n        body {\r\n            display: block;\r\n            margin: 0;\r\n            background: #161e2d;\r\n            color: #ffffff;\r\n            font-family: \"Amazon Ember\", \"Helvetica Neue\", Roboto, Arial, sans-serif;\r\n            font-size: 16px;\r\n            font-weight: 400;\r\n        }\r\n\r\n        input {\r\n            width: 100px;\r\n        }\r\n\r\n        .container {\r\n            display: flex;\r\n            flex-direction: column;\r\n            position: absolute;\r\n            top: 42px;\r\n            bottom: 0;\r\n            left: 0;\r\n            right: 0;\r\n        }\r\n\r\n        .navbar {\r\n            position: fixed;\r\n            display: flex;\r\n            justify-content: space-between;\r\n            top: 0;\r\n            left: 0;\r\n            right: 0;\r\n            z-index: 2;\r\n            background: var(--primary-color);\r\n            box-shadow: rgba(0, 0, 0, 0.2) 0px 3px 5px -1px, rgba(0, 0, 0, 0.14) 0px 6px 10px 0px, rgba(0, 0, 0, 0.12) 0px 1px 18px 0px;\r\n        }\r\n\r\n        h1.navbar-header {\r\n            font-weight: 750;\r\n            font-size: 1.125rem;\r\n            display: flex;\r\n            flex-wrap: wrap;\r\n            align-items: center;\r\n            padding: 12px 16px;\r\n            margin: 0;\r\n        }\r\n\r\n        #main-container {\r\n            justify-content: center;\r\n            align-items: center;\r\n            display: flex;\r\n            flex-direction: row;\r\n            flex-wrap: wrap;\r\n            padding: 16px;\r\n        }\r\n\r\n        .card {\r\n            margin: 8px;\r\n            box-shadow: rgba(0, 0, 0, 0.2) 0px 2px 1px -1px, rgba(0, 0, 0, 0.14) 0px 1px 1px 0px, rgba(0, 0, 0, 0.12) 0px 1px 3px 
0px;\r\n            transition: box-shadow 280ms cubic-bezier(0.4, 0, 0.2, 1);\r\n            border-radius: 4px;\r\n            display: block;\r\n            position: relative;\r\n        }\r\n\r\n        .card-img {\r\n            border-radius: 4px;\r\n        }\r\n\r\n        .select {\r\n            display: flex;\r\n            align-items: center;\r\n            margin-right: 1rem;\r\n        }\r\n\r\n        .select-options {\r\n            display: flex;\r\n            align-items: center;\r\n        }\r\n\r\n        label {\r\n            margin-right: 0.5rem;\r\n        }\r\n\r\n        h2 {\r\n            margin-left: 0.3rem;\r\n            margin-top: 0.3rem;\r\n            margin-bottom: 0.1rem;\r\n        }\r\n\r\n        h3 {\r\n            margin-left: 0;\r\n            margin-top: 0.3rem;\r\n        }\r\n\r\n        .hide {\r\n            display: none;\r\n        }\r\n\r\n        .robo-camera-group {\r\n            display: flex;\r\n            flex-wrap: wrap;\r\n        }\r\n\r\n        .robo-maker {\r\n            padding: 0.5rem;\r\n            margin: 0.5rem;\r\n            border: medium solid var(--primary-color);\r\n            border-radius: 10px;\r\n        }\r\n\r\n        .dismiss-button {\r\n            padding: 0.5rem 1rem;\r\n            margin: 1rem;\r\n        }\r\n    </style>\r\n</head>\r\n\r\n<body>\r\n    <div class=\"container\">\r\n        <div class=\"navbar\">\r\n            <h1 class=\"navbar-header\">Run ID:$DR_RUN_ID - Model: $DR_LOCAL_S3_MODEL_PREFIX </h1>\r\n            <div class=\"select-options\">\r\n                <div class=\"select\">\r\n                    <label for=\"robo-select\">Worker:</label>\r\n                    <select name=\"robo-select\" id=\"robo-select\">\r\n                    </select>\r\n                </div>\r\n\r\n                <div class=\"select\">\r\n                    <label for=\"camera-select\">Cameras:</label>\r\n                    <select name=\"camera-select\" 
id=\"camera-select\" value=\"kvs_stream\">\r\n                    </select>\r\n                </div>\r\n                <div class=\"select\">\r\n                    <label for=\"camera-quality\">Quality:</label>\r\n                    <input name=\"camera-quality\" id=\"camera-quality\" type=\"number\" min=\"1\" max=\"100\" value=\"$QUALITY\" />\r\n                </div>\r\n                <div class=\"select\">\r\n                    <label for=\"width-size\">Width:</label>\r\n                    <input name=\"width-size\" id=\"width-size\" type=\"number\" min=\"160\" max=\"1920\" />\r\n                </div>\r\n            </div>\r\n        </div>\r\n        <div id=\"main-container\">\r\n        </div>\r\n\r\n    </div>\r\n\r\n    <script>\r\n\r\n        var robomakerContainers = [\r\n            $ROBOMAKER_CONTAINERS_HTML\r\n        ];\r\n\r\n\r\n        const maximumCameraAmount = 6\r\n\r\n        var cameras = [\r\n            {\r\n                id: \"kvs_stream\",\r\n                topic: \"/racecar/deepracer/kvs_stream\",\r\n            },\r\n            {\r\n                id: \"camera\",\r\n                topic: \"/racecar/camera/zed/rgb/image_rect_color\",\r\n            },\r\n            {\r\n                id: \"main_camera\",\r\n                topic: \"/racecar/main_camera/zed/rgb/image_rect_color\",\r\n            },\r\n            {\r\n                id: \"sub_camera\",\r\n                topic: \"/sub_camera/zed/rgb/image_rect_color\",\r\n            },\r\n        ]\r\n\r\n        \r\n        let { robo, camera, quality, width } = extractPropertiesFromUrl(location.href)\r\n        // Set defaults: ALL workers, kvs_stream camera, width 480, quality 70\r\n        let needsUrlUpdate = false;\r\n        if (!robo) { robo = 'all'; needsUrlUpdate = true; }\r\n        if (!camera) { camera = 'kvs_stream'; needsUrlUpdate = true; }\r\n        if (!width) { width = 480; needsUrlUpdate = true; }\r\n        if (!quality) { quality = 70; 
needsUrlUpdate = true; }\r\n\r\n        // If any defaults were used, update the URL to reflect them (without reloading)\r\n        if (needsUrlUpdate) {\r\n            const url = createUrl(robo, camera, quality, width);\r\n            window.history.replaceState({}, '', url.href);\r\n        }\r\n\r\n        const widthSize = document.getElementById('width-size')\r\n\r\n        widthSize.addEventListener('change', () => updatePage());\r\n\r\n        const camQuality = document.getElementById('camera-quality')\r\n\r\n        camQuality.addEventListener('change', () => updatePage());\r\n\r\n        // Add Robomaker select options\r\n        const roboSelect = document.getElementById('robo-select')\r\n\r\n        addAllOption(roboSelect)\r\n\r\n        robomakerContainers.forEach(robomaker => {\r\n            var roboOption = document.createElement('option')\r\n            roboOption.value = robomaker\r\n            roboOption.innerHTML = robomaker\r\n            roboSelect.appendChild(roboOption)\r\n        })\r\n\r\n        roboSelect.addEventListener('change', () => updatePage());\r\n\r\n        // Add Camera select options\r\n        const cameraSelect = document.getElementById('camera-select')\r\n\r\n        addAllOption(cameraSelect)\r\n\r\n        cameras.forEach(camera => {\r\n            var cameraOption = document.createElement('option')\r\n            cameraOption.value = camera.id\r\n            cameraOption.innerHTML = camera.id\r\n            cameraSelect.appendChild(cameraOption)\r\n        })\r\n\r\n        cameraSelect.addEventListener('change', () => updatePage());\r\n\r\n        setupForm(robo, camera, quality, width)\r\n\r\n        buildElements()\r\n\r\n        function buildElements() {\r\n\r\n            const urlSearchParams = new URLSearchParams(window.location.search)\r\n\r\n            const mainContainer = document.getElementById('main-container')\r\n            let roboSelectionValue = document.getElementById('robo-select').value\r\n     
       const urlRoboSelection = urlSearchParams.get('robo')\r\n            roboSelectionValue = urlRoboSelection || roboSelectionValue\r\n\r\n            const cameraSelectionEl = document.getElementById('camera-select')\r\n            let cameraSelectionValue = cameraSelectionEl.value\r\n            cameraSelectionValue = urlSearchParams.get('camera') || cameraSelectionValue\r\n            const cameraAllSelectEl = cameraSelectionEl.querySelector('.all-select')\r\n            const cameraSelectTopEl = cameraSelectionEl.querySelectorAll('option')[1]\r\n            const qualityVal = document.getElementById('camera-quality').value\r\n\r\n            const urlWidthSelection = urlSearchParams.get('width')\r\n            let widthSelectionValue = document.getElementById('width-size').value\r\n            widthSelectionValue = urlWidthSelection || widthSelectionValue\r\n\r\n            // Cleanup: close all streams by clearing src of all images before removing them\r\n            mainContainer.querySelectorAll('img.card-img').forEach(img => { img.src = ''; });\r\n            mainContainer.innerHTML = ''\r\n\r\n            // Prevent All cameras with All cars\r\n            if (roboSelectionValue === 'all' && cameraSelectionValue === 'all') {\r\n                cameraSelectionValue = 'kvs_stream';\r\n                cameraSelectionEl.value = 'kvs_stream';\r\n            }\r\n\r\n            if (roboSelectionValue === 'all') {\r\n                cameraAllSelectEl.classList.add('hide')\r\n            } else {\r\n                cameraAllSelectEl.classList.remove('hide')\r\n            }\r\n\r\n            let cumulativeCameraAmount = 0\r\n\r\n            robomakerContainers\r\n                .filter((robo) => roboSelectionValue === 'all' || roboSelectionValue === robo)\r\n                .forEach((robo) => {\r\n\r\n                    if (cumulativeCameraAmount < maximumCameraAmount) {\r\n                        const roboMaker = document.createElement('div')\r\n          
              roboMaker.classList.add('robo-maker')\r\n                        roboMaker.dataset.robo = robo\r\n                        const roboMakerTitle = document.createElement('h2')\r\n                        roboMakerTitle.innerHTML = 'Worker: ' + robo\r\n                        roboMaker.appendChild(roboMakerTitle)\r\n                        const roboCameras = document.createElement('div')\r\n                        roboCameras.dataset.robo = robo\r\n                        roboCameras.classList.add('robo-camera-group')\r\n\r\n                        const camerasToShow = cameras\r\n                            .filter((cam) => cameraSelectionValue === 'all' || cameraSelectionValue === cam.id)\r\n\r\n                        camerasToShow.forEach((camera) => {\r\n\r\n                            if (cumulativeCameraAmount < maximumCameraAmount) {\r\n                                // Create div\r\n                                const div = document.createElement('div')\r\n                                div.dataset.camera = camera.id\r\n                                div.dataset.robo = robo\r\n                                div.classList.add('card')\r\n                                const cameraTitle = document.createElement('h3')\r\n                                cameraTitle.innerHTML = camera.id\r\n\r\n                                // Create image\r\n                                const image = document.createElement('img')\r\n                                image.dataset.camera = camera.id\r\n\r\n                                const numericWidth = Number(widthSelectionValue)\r\n                                const validatedWidth = Math.min(1920, Math.max(160, isNaN(numericWidth) ? 
160 : numericWidth))\r\n\r\n                                const url = createStreamUrl(robo, camera.topic, qualityVal, validatedWidth)\r\n                                image.classList.add('card-img')\r\n                                image.setAttribute('src', url)\r\n                                image.style.width = validatedWidth + 'px'\r\n                                image.setAttribute('alt', robo + '-' + camera.id)\r\n                                image.onerror = function() {\r\n                                    this.style.border = '2px solid red';\r\n                                    this.alt = 'Failed to load: ' + robo + '-' + camera.id;\r\n                                }\r\n\r\n                                div.appendChild(cameraTitle)\r\n                                div.appendChild(image)\r\n                                roboCameras.append(div)\r\n                                cumulativeCameraAmount += 1\r\n                            }\r\n                        })\r\n                        roboMaker.appendChild(roboCameras)\r\n                        mainContainer.appendChild(roboMaker)\r\n\r\n                    } else if (cumulativeCameraAmount === maximumCameraAmount) {\r\n                        const div = document.createElement('div')\r\n                        div.innerText = \"Maximum amount of \" + maximumCameraAmount + \" cameras reached\"\r\n                        div.classList.add('max-cameras-reached-alert')\r\n                        const dismissButton = document.createElement('button')\r\n                        dismissButton.classList.add('dismiss-button')\r\n                        dismissButton.innerText = \"Dismiss\"\r\n                        dismissButton.addEventListener('click', () => document.querySelector('.max-cameras-reached-alert').remove())\r\n                        div.appendChild(dismissButton)\r\n                        mainContainer.append(div)\r\n                        cumulativeCameraAmount 
+= 1\r\n                    }\r\n                })\r\n        }\r\n\r\n        // Adds an 'all' option to the select options element argument\r\n        function addAllOption(el) {\r\n            const option = document.createElement('option')\r\n            option.value = 'all'\r\n            option.innerHTML = 'All'\r\n            option.classList.add('all-select')\r\n            el.appendChild(option)\r\n        }\r\n\r\n        function createStreamUrl(robo, topic, quality, width) {\r\n            // Ensure width is a finite number within an acceptable range\r\n            let widthNum = Number(width)\r\n            if (!Number.isFinite(widthNum)) {\r\n                widthNum = 480\r\n            } else {\r\n                widthNum = Math.max(160, Math.min(1920, widthNum))\r\n            }\r\n\r\n            // Ensure quality is a finite number within an acceptable range\r\n            let qualityNum = Number(quality)\r\n            if (!Number.isFinite(qualityNum)) {\r\n                qualityNum = 75\r\n            } else {\r\n                qualityNum = Math.max(1, Math.min(100, qualityNum))\r\n            }\r\n\r\n            // Calculate height maintaining 4:3 aspect ratio\r\n            const height = Math.round(widthNum * 3 / 4)\r\n            return \"/\" + robo + \"/stream?topic=\" + topic + \"&quality=\" + qualityNum + \"&width=\" + widthNum + \"&height=\" + height\r\n        }\r\n\r\n        function createUrl(robo, camera, quality, width) {\r\n            const url = new URL(origin);\r\n            const search_params = url.searchParams;\r\n            search_params.set('robo', robo);\r\n            search_params.set('camera', camera);\r\n            search_params.set('quality', quality);\r\n            search_params.set('width', width);\r\n\r\n            // change the search property of the main url\r\n            url.search = search_params.toString();\r\n\r\n            return url\r\n        }\r\n\r\n        function 
extractPropertiesFromUrl(url) {\r\n            const urlObj = new URL(url);\r\n            const search_params = urlObj.searchParams;\r\n            return {\r\n                robo: search_params.get('robo'),\r\n                camera: search_params.get('camera'),\r\n                quality: search_params.get('quality'),\r\n                width: search_params.get('width'),\r\n            }\r\n        }\r\n\r\n        function setupForm(robo, camera, quality, width) {\r\n            // Always set default to ALL workers, kvs_stream camera, width 480, quality 70 if not present\r\n            document.getElementById('robo-select').value = robo || 'all';\r\n            document.getElementById('camera-select').value = camera || 'kvs_stream';\r\n            document.getElementById('camera-quality').value = quality || 70;\r\n            document.getElementById('width-size').value = width || 480;\r\n        }\r\n\r\n        function getFormValue() {\r\n            return {\r\n                robo: document.getElementById('robo-select').value,\r\n                camera: document.getElementById('camera-select').value,\r\n                quality: document.getElementById('camera-quality').value,\r\n                width: document.getElementById('width-size').value\r\n            }\r\n        }\r\n\r\n        function updatePage() {\r\n            setTimeout(() => {\r\n                const { robo, camera, quality, width } = getFormValue()\r\n\r\n                // Validate and clamp quality\r\n                const qualityNum = Number(quality)\r\n                const clampedQuality = (!Number.isFinite(qualityNum) || qualityNum < 1 || qualityNum > 100)\r\n                    ? (Number.isFinite(qualityNum) ? 
Math.max(1, Math.min(100, qualityNum)) : 75)\r\n                    : qualityNum\r\n                \r\n                if (clampedQuality !== qualityNum) {\r\n                    document.getElementById('camera-quality').value = clampedQuality\r\n                }\r\n\r\n                // Validate and clamp width\r\n                const widthNum = Number(width)\r\n                const clampedWidth = (!Number.isFinite(widthNum) || widthNum < 160 || widthNum > 1920)\r\n                    ? (Number.isFinite(widthNum) ? Math.max(160, Math.min(1920, widthNum)) : 480)\r\n                    : widthNum\r\n                \r\n                if (clampedWidth !== widthNum) {\r\n                    document.getElementById('width-size').value = clampedWidth\r\n                }\r\n\r\n                const url = createUrl(robo, camera, clampedQuality, clampedWidth)\r\n                location.href = url.href\r\n            }, 300)\r\n        }\r\n\r\n    </script>\r\n</body>\r\n\r\n</html>"
  },
  {
    "path": "scripts/viewer/start.sh",
    "content": "#!/usr/bin/env bash\n\nusage() {\n  echo \"Usage: $0 [-t topic] [-w width] [-h height] [-q quality] [-b browser-command] [-p port]\"\n  echo \"       -w        Width of individual stream.\"\n  echo \"       -h        Height of individual stream.\"\n  echo \"       -q        Quality of the stream image.\"\n  echo \"       -t        Topic to follow - default /racecar/deepracer/kvs_stream\"\n  echo \"       -b        Browser command (default: firefox --new-tab)\"\n  echo \"       -p        The port to use.\"\n  exit 1\n}\n\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n  echo \"Requested to stop.\"\n  exit 1\n}\n\n# Stream definition\nTOPIC=\"/racecar/deepracer/kvs_stream\"\nWIDTH=480\nHEIGHT=360\nQUALITY=75\nBROWSER=${BROWSER:-\"firefox --new-tab\"}\nPORT=$DR_WEBVIEWER_PORT\n\nwhile getopts \":w:h:q:t:b:p:\" opt; do\n  case $opt in\n  w)\n    WIDTH=\"$OPTARG\"\n    ;;\n  h)\n    HEIGHT=\"$OPTARG\"\n    ;;\n  q)\n    QUALITY=\"$OPTARG\"\n    ;;\n  t)\n    TOPIC=\"$OPTARG\"\n    ;;\n  b)\n    BROWSER=\"$OPTARG\"\n    ;;\n  p)\n    PORT=\"$OPTARG\"\n    ;;\n  \\?)\n    echo \"Invalid option -$OPTARG\" >&2\n    usage\n    ;;\n  esac\ndone\n\nDR_WEBVIEWER_PORT=$PORT\n\nexport DR_VIEWER_HTML=$DR_DIR/tmp/streams-$DR_RUN_ID.html\nexport DR_NGINX_CONF=$DR_DIR/tmp/streams-$DR_RUN_ID.conf\n\ncat <<EOF >$DR_NGINX_CONF\nserver {\n  listen 80;\n  location / {\n    root   /usr/share/nginx/html;\n    index  index.html index.htm;\n  }\nEOF\n\nif [[ \"${DR_DOCKER_STYLE,,}\" != \"swarm\" ]]; then\n  ROBOMAKER_CONTAINERS=$(docker ps --format \"{{.ID}} {{.Names}}\" --filter name=\"deepracer-${DR_RUN_ID}\" | grep robomaker | cut -f1 -d\\ )\nelse\n  ROBOMAKER_SERVICE_REPLICAS=$(docker service ps deepracer-${DR_RUN_ID}_robomaker | awk '/robomaker/ { print $1 }')\n  for c in $ROBOMAKER_SERVICE_REPLICAS; do\n    ROBOMAKER_CONTAINER_IP=$(docker inspect $c | jq -r '.[].NetworksAttachments[] | select (.Network.Spec.Name == \"sagemaker-local\") | .Addresses[0] ' | cut -f1 -d/)\n    
ROBOMAKER_CONTAINERS=\"${ROBOMAKER_CONTAINERS} ${ROBOMAKER_CONTAINER_IP}\"\n  done\nfi\n\nif [ -z \"$ROBOMAKER_CONTAINERS\" ]; then\n  echo \"No running robomakers. Exiting.\"\n  exit\nfi\n\n# Expose the dimensions to the HTML template\nexport QUALITY\nexport WIDTH\nexport HEIGHT\n# Create .js array of robomakers to pass to the HTML template\nexport ROBOMAKER_CONTAINERS_HTML=\"\"\nfor c in $ROBOMAKER_CONTAINERS; do\n  ROBOMAKER_CONTAINERS_HTML+=\"'$c',\"\ndone\nSCRIPT_PATH=\"${BASH_SOURCE:-$0}\"\nABS_SCRIPT_PATH=\"$(realpath \"${SCRIPT_PATH}\")\"\nABS_DIRECTORY=\"$(dirname \"${ABS_SCRIPT_PATH}\")\"\nINDEX_HTML_TEMPLATE=\"${ABS_DIRECTORY}/index.template.html\"\n# Replace all variables in HTML template and create the viewer html file\nenvsubst <\"${INDEX_HTML_TEMPLATE}\" >$DR_VIEWER_HTML\n\n# Add proxy paths in the NGINX file\nfor c in $ROBOMAKER_CONTAINERS; do\n  echo \"  location /$c { proxy_pass http://$c:8080; rewrite /$c/(.*) /\\$1 break; }\" >>$DR_NGINX_CONF\ndone\necho \"}\" >>$DR_NGINX_CONF\n\n# Check if we will use Docker Swarm or Docker Compose\nSTACK_NAME=\"deepracer-$DR_RUN_ID-viewer\"\nCOMPOSE_FILES=$DR_DIR/docker/docker-compose-webviewer.yml\n\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n  if [ \"$DR_DOCKER_MAJOR_VERSION\" -gt 24 ]; then\n    DETACH_FLAG=\"--detach=true\"\n  fi\n\n  COMPOSE_FILES=\"$COMPOSE_FILES -c $DR_DIR/docker/docker-compose-webviewer-swarm.yml\"\n  docker stack deploy -c $COMPOSE_FILES $DETACH_FLAG $STACK_NAME\nelse\n  docker compose -f $COMPOSE_FILES -p $STACK_NAME up -d\nfi\n\n# Start the browser if using local X and a display is defined.\nif [[ -n \"${DISPLAY}\" && \"${DR_HOST_X,,}\" == \"true\" ]]; then\n  echo \"Starting browser '$BROWSER'.\"\n  if [ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]; then\n    sleep 5\n  fi\n  echo Launching $BROWSER \"http://127.0.0.1:${DR_WEBVIEWER_PORT}\"\n  $BROWSER \"http://127.0.0.1:${DR_WEBVIEWER_PORT}\" &\nfi\n\nCURRENT_CONTAINER_HASH=$(docker ps | grep dr_viewer | head -c 
12)\n\nIP_ADDRESSES=\"$(hostname -I)\"\necho \"The viewer will be available on the following hosts after initialization:\"\nfor ip in $IP_ADDRESSES; do\n  echo \"http://${ip}:${PORT}\"\ndone\n"
  },
  {
    "path": "scripts/viewer/stop.sh",
    "content": "#!/usr/bin/env bash\n\nSTACK_NAME=\"deepracer-$DR_RUN_ID-viewer\"\nCOMPOSE_FILES=$DR_DIR/docker/docker-compose-webviewer.yml\n\n# Check if we will use Docker Swarm or Docker Compose\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n    docker stack rm $STACK_NAME\nelse\n    export DR_VIEWER_HTML=$DR_DIR/tmp/streams-$DR_RUN_ID.html\n    export DR_NGINX_CONF=$DR_DIR/tmp/streams-$DR_RUN_ID.conf\n\n    docker compose -f $COMPOSE_FILES -p $STACK_NAME down\nfi\n"
  },
  {
    "path": "utils/Dockerfile.gpu-detect",
    "content": "FROM nvcr.io/nvidia/cuda:12.6.3-base-ubuntu24.04\nRUN apt-get update && apt-get install -y --no-install-recommends wget python3\nRUN wget https://gist.githubusercontent.com/f0k/63a664160d016a491b2cbea15913d549/raw/f25b6b38932cfa489150966ee899e5cc899bf4a6/cuda_check.py\nCMD [\"python3\",\"cuda_check.py\"]"
  },
  {
    "path": "utils/cuda-check-tf.py",
    "content": "from tensorflow.python.client import device_lib\nimport tensorflow as tf\n\ndef get_available_gpus():\n    local_device_protos = device_lib.list_local_devices()\n    return [x.name for x in local_device_protos if x.device_type == 'GPU']\n\ngpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.05)\nsess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))\nprint(get_available_gpus())\n"
  },
  {
    "path": "utils/cuda-check.sh",
    "content": "#!/usr/bin/env bash\n\nCONTAINER_ID=$(docker create --rm -ti -e CUDA_VISIBLE_DEVICES --name cuda-check awsdeepracercommunity/deepracer-robomaker:$DR_ROBOMAKER_IMAGE \"python3 cuda-check-tf.py\")\ndocker cp $DR_DIR/utils/cuda-check-tf.py $CONTAINER_ID:/opt/install/\ndocker start -a $CONTAINER_ID\n"
  },
  {
    "path": "utils/download-car-model.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nThis script checks for model files in an S3 bucket, downloads, and renames them based on a specified pattern.\n\nEnvironment Variables:\n- DR_LOCAL_S3_BUCKET: Name of the S3 bucket.\n- DR_LOCAL_S3_PROFILE: AWS profile name for boto3 session.\n- DR_REMOTE_MINIO_URL: (Optional) MinIO server URL.\n\nUsage:\n    python download-car-model.py --pattern <prefix_pattern>\n\"\"\"\n\nimport boto3\nimport os\nimport fnmatch\nimport argparse\n\n# Load environment variables\nbucket_name = os.getenv('DR_LOCAL_S3_BUCKET')\nprofile_name = os.getenv('DR_LOCAL_S3_PROFILE')\nminio_url = os.getenv('DR_REMOTE_MINIO_URL')\n\n# Set up boto3 session with the specified profile\nsession = boto3.Session(profile_name=profile_name)\nendpoint_url = minio_url if minio_url else None\ns3 = session.client('s3', endpoint_url=endpoint_url)\n\ndef check_model_file(prefix):\n    \"\"\"\n    Check if a model.tar.gz file exists in the specified prefix.\n\n    Args:\n        prefix (str): The prefix to check within the S3 bucket.\n\n    Returns:\n        bool: True if the model file is found, False otherwise.\n    \"\"\"\n    try:\n        response = s3.list_objects_v2(Bucket=bucket_name, Prefix=f\"{prefix}output/\")\n        for obj in response.get('Contents', []):\n            if obj['Key'].endswith('model.tar.gz'):\n                print(f\"Found model.tar.gz in {prefix}output/\")\n                return f\"{obj['Key']}\"\n\n        response = s3.list_objects_v2(Bucket=bucket_name, Prefix=prefix)\n        for obj in response.get('Contents', []):\n            if obj['Key'].endswith('carfile.tar.gz'):\n                print(f\"Found carfile.tar.gz in {prefix}\")\n                return f\"{obj['Key']}\"\n\n        print(f\"No model.tar.gz found in {prefix}output/ and no carfile.tar.gz found in {prefix}\")\n        return None\n    except Exception as e:\n        print(f\"Error checking {prefix}: {e}\")\n        return None\n\ndef 
list_matching_prefixes(bucket_name, prefix_pattern):\n    \"\"\"\n    List all prefixes in the specified S3 bucket that match the given pattern.\n\n    Args:\n        bucket_name (str): The name of the S3 bucket.\n        prefix_pattern (str): The pattern to match prefixes against.\n\n    Returns:\n        list: A list of matching prefixes.\n    \"\"\"\n    try:\n        response = s3.list_objects_v2(Bucket=bucket_name, Delimiter='/')\n        prefixes = [prefix['Prefix'] for prefix in response.get('CommonPrefixes', [])]\n        matching_prefixes = fnmatch.filter(prefixes, prefix_pattern)\n        return matching_prefixes\n    except Exception as e:\n        print(f\"Error listing prefixes: {e}\")\n        return []\n\ndef download_and_rename_model_file(prefix, file_key, output_folder=\".\"):\n    \"\"\"\n    Download and rename the model.tar.gz file from the specified file key.\n\n    Args:\n        prefix (str): The prefix of the model file.\n        file_key (str): The S3 key of the model file to download.\n        output_folder (str): The folder where the downloaded file should be placed. 
Defaults to the current directory.\n\n    Returns:\n        bool: True if the model file is downloaded and renamed, False otherwise.\n    \"\"\"\n    try:\n        if not os.path.exists(output_folder):\n            os.makedirs(output_folder)\n        local_filename = os.path.join(output_folder, f\"{prefix.rstrip('/')}.tar.gz\")\n        s3.download_file(bucket_name, file_key, local_filename)\n        print(f\"Downloaded and renamed {file_key} to {local_filename}\")\n        return True\n    except Exception as e:\n        print(f\"Error downloading {file_key}: {e}\")\n        return False\n\ndef validate_s3_connection():\n    \"\"\"\n    Validate the S3 connection using the provided bucket name and profile name.\n\n    Raises:\n        ValueError: If bucket name or profile name is not defined.\n        ConnectionError: If unable to connect to the S3 bucket.\n    \"\"\"\n    if not bucket_name or not profile_name:\n        raise ValueError(\"Bucket name and profile name must be defined in environment variables.\")\n    \n    try:\n        s3.head_bucket(Bucket=bucket_name)\n        print(f\"Successfully connected to bucket: {bucket_name}\")\n    except Exception as e:\n        raise ConnectionError(f\"Unable to connect to the bucket: {e}\")\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(description='Check and download model files from S3.')\n    parser.add_argument('--pattern', type=str, required=True, help='Pattern for prefixes to check')\n    parser.add_argument('--output_folder', type=str, default='.', help='Folder to store downloaded files')\n    args = parser.parse_args()\n\n    validate_s3_connection()\n\n    matching_prefixes = list_matching_prefixes(bucket_name, args.pattern)\n    for prefix in matching_prefixes:\n        model_file_path = check_model_file(prefix)\n        if model_file_path:\n            download_and_rename_model_file(prefix, model_file_path, args.output_folder)"
  },
  {
    "path": "utils/evaluate.sh",
    "content": "#!/usr/bin/env bash\n\n# This script evaluates DeepRacer models by managing the evaluation process.\n# It requires one argument: the path to the environment configuration file.\n# The script sources environment variables from the specified file, then:\n# 1. Validates the existence of the environment file.\n# 2. Sources the activate.sh script to set up necessary environment variables.\n# 3. Prints the evaluation configuration, including Run ID, Model Name, and Track.\n# 4. Executes the evaluation process by stopping any ongoing evaluation and starting a new one.\n\n# To run this script every 3 minutes using crontab, follow these steps:\n# 1. Open the crontab editor by executing `crontab -e` in your terminal.\n# 2. Add the following line to schedule the script:\n#    `*/3 * * * * <DRFC_PATH>/utils/evaluate.sh run.env >> <LOG_PATH>/evaluate.log 2>&1`\n# 3. Save and close the editor. The script is now scheduled to run every 3 minutes.\n\nif [ \"$#\" -ne 1 ]; then\n  echo \"Usage: $0 <environment file>\"\n  exit 1\nfi\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" >/dev/null 2>&1 && pwd)\"\nDR_DIR=\"$(dirname $SCRIPT_DIR)\"\nENV_FILE=\"$1\"\n\nif [[ -f \"$DR_DIR/$ENV_FILE\" ]]; then\n  source $DR_DIR/bin/activate.sh $DR_DIR/$ENV_FILE\nelse\n  echo \"File $ENV_FILE does not exist.\"\n  exit 1\nfi\n\nprintf \"\\n##################################################\\n\"\nprintf \"### %-15s %-15s\\n\" \"Configuration:\" \"$ENV_FILE\"\nprintf \"### %-15s %-15s\\n\" \"Run ID:\" \"$DR_RUN_ID\"\nprintf \"### %-15s %-15s\\n\" \"Model Name:\" \"$DR_LOCAL_S3_MODEL_PREFIX\"\nprintf \"### %-15s %-15s\\n\" \"Track:\" \"$DR_WORLD_NAME\"\nprintf \"### %-15s %-15s\\n\" \"Start:\" \"$(date)\"\nprintf \"##################################################\\n\\n\"\n\ndr-stop-evaluation\n\n# Check if Docker style is set to swarm and wait for all containers to stop\nif [ \"$DR_DOCKER_STYLE\" == \"swarm\" ]; 
then\n\tSTACK_NAME=\"deepracer-eval-$DR_RUN_ID\"\n\tSTACK_CONTAINERS=$(docker stack ps $STACK_NAME 2>/dev/null | wc -l)\n\twhile [[ \"$STACK_CONTAINERS\" -gt 1 ]]; do\n\t\techo \"Waiting for all containers in the stack to stop...\"\n\t\tsleep 5\n\t\tSTACK_CONTAINERS=$(docker stack ps $STACK_NAME 2>/dev/null | wc -l)\n\tdone\nfi\n\ndr-start-evaluation -q\n"
  },
  {
    "path": "utils/sample-createspot.sh",
    "content": "#!/usr/bin/env bash\n\n##  This is sample code that will generally show you how to launch a spot instance on aws and leverage the \n##  automation built into deepracer-for-cloud to automatically start training\n##  Changes required to work:\n##     Input location where your training will take place -- S3_LOCATION\n##     Input security group, iam role, and key-name\n\n## First you need to tell the script where in s3 your training will take place\n## can be either a bucket at the root level, or a bucket/prefix.  don't include the s3://\n\nS3_LOCATION=<#########>\n\n## extract bucket location\nBUCKET=${S3_LOCATION%%/*}\n\n## extract prefix location\nif [[ \"$S3_LOCATION\" == *\"/\"* ]]\nthen\n  PREFIX=${S3_LOCATION#*/}\nelse\n  PREFIX=\"\"\nfi\n\n## Fill these out with your custom information if you want to upload and submit to leaderboard.  not required to run\nDR_UPLOAD_S3_PREFIX=########\n\n## set the instance type you want to launch\nINSTANCE_TYPE=c5.2xlarge\n\n## if you want to modify additional variables from the default, add them here, then add them to the section further below called replace static parameters.  I've only done World name for now\nWORLD_NAME=FS_June2020\n\n## modify this if you want additional robomaker workers\nDR_WORKERS=1\n\n## select which images you want to use.  these will be used later for a docker pull\nDR_SAGEMAKER_IMAGE=cpu-avx-mkl\nDR_ROBOMAKER_IMAGE=cpu-avx2\n\n## check the s3 location for existing training folders\n## automatically determine the latest training run (highest number), and set model parameters accordingly\n## this script assumes the format rl-deepracer-1, rl-deepracer-2, etc.  
you will need to modify if your schema differs\n\nLAST_TRAINING=$(aws s3 ls $S3_LOCATION/rl-deepracer | sort -t - -k 3 -g | tail -n 1 | awk '{print $2}')\n## drop trailing slash\nLAST_TRAINING=$(echo $LAST_TRAINING | sed 's:/*$::')\n\nCONFIG_FILE=\"./run.env\"\nOLD_SYSTEMENV=\"./system.env\"\n\n## incorporate logic from increment.sh, slightly modified to use last training \nOPT_DELIM='-'\n## Read in data\nCURRENT_RUN_MODEL=$(aws s3 ls $S3_LOCATION/rl-deepracer | sort -t - -k 3 -g | tail -n 1 | awk '{print $2}')\n## drop trailing slash\nCURRENT_RUN_MODEL=$(echo $LAST_TRAINING | sed 's:/*$::')\n## get number at the end\nCURRENT_RUN_MODEL_NUM=$(echo \"${CURRENT_RUN_MODEL}\" | \\\n                    awk -v DELIM=\"${OPT_DELIM}\" '{ n=split($0,a,DELIM); if (a[n] ~ /[0-9]*/) print a[n]; else print \"\"; }')\n\nif [ -z $LAST_TRAINING ]\nthen\n    echo No prior training found\n    if [[ $PREFIX == \"\" ]]\n    then\n      NEW_RUN_MODEL=rl-deepracer-1\n    else\n      NEW_RUN_MODEL=\"$PREFIX/rl-deepracer-1\"\n    fi\n    PRETRAINED=False\n    CURRENT_RUN_MODEL=$NEW_RUN_MODEL\nelse\n\n    NEW_RUN_MODEL_NUM=$(echo \"${CURRENT_RUN_MODEL_NUM} + 1\" | bc )\n    PRETRAINED=True\n\n    if [[ $PREFIX == \"\" ]]\n    then\n      NEW_RUN_MODEL=$(echo $CURRENT_RUN_MODEL | sed \"s/${CURRENT_RUN_MODEL_NUM}\\$/${NEW_RUN_MODEL_NUM}/\")\n    else\n      NEW_RUN_MODEL=$(echo $CURRENT_RUN_MODEL | sed \"s/${CURRENT_RUN_MODEL_NUM}\\$/${NEW_RUN_MODEL_NUM}/\")     \n      NEW_RUN_MODEL=\"$PREFIX/$NEW_RUN_MODEL\"\n      CURRENT_RUN_MODEL=\"$PREFIX/$CURRENT_RUN_MODEL\"\n    fi\n    echo Last training was $CURRENT_RUN_MODEL so next training is $NEW_RUN_MODEL\nfi\n\nif [[ $PREFIX == \"\" ]]\nthen\n    CUSTOM_FILES_PREFIX=\"custom_files\"\nelse\n    CUSTOM_FILES_PREFIX=\"$PREFIX/custom_files\"\nfi\n\n## Replace dynamic parameters in run.env (still local to your directory)\nsed -i.bak -re \"s:(DR_LOCAL_S3_PRETRAINED_PREFIX=).*$:\\1$CURRENT_RUN_MODEL:g; s:(DR_LOCAL_S3_PRETRAINED=).*$:\\1$PRETRAINED:g; 
s:(DR_LOCAL_S3_MODEL_PREFIX=).*$:\\1$NEW_RUN_MODEL:g; s:(DR_LOCAL_S3_CUSTOM_FILES_PREFIX=).*$:\\1$CUSTOM_FILES_PREFIX:g\" \"$CONFIG_FILE\"\nsed -i.bak -re \"s/(DR_LOCAL_S3_BUCKET=).*$/\\1$BUCKET/g\" \"$CONFIG_FILE\"\n\n## Replace static parameters in run.env (still local to your directory)\nsed -i.bak -re \"s/(DR_UPLOAD_S3_PREFIX=).*$/\\1$DR_UPLOAD_S3_PREFIX/g\" \"$CONFIG_FILE\"\nsed -i.bak -re \"s/(DR_WORLD_NAME=).*$/\\1$WORLD_NAME/g\" \"$CONFIG_FILE\"\n\n## Replace static parameters in system.env file, including sagemaker and robomaker images (still local to your directory) and the number of DR_WORKERS\nsed -i.bak -re \"s/(DR_UPLOAD_S3_BUCKET=).*$/\\1$DR_UPLOAD_S3_BUCKET/g; s/(DR_SAGEMAKER_IMAGE=).*$/\\1$DR_SAGEMAKER_IMAGE/g; s/(DR_ROBOMAKER_IMAGE=).*$/\\1$DR_ROBOMAKER_IMAGE/g; s/(DR_WORKERS=).*$/\\1$DR_WORKERS/g\" \"$OLD_SYSTEMENV\"\n\n## upload the new run.env and system.env files into your S3 bucket (same s3 location identified earlier)\n## files are loaded into the node-config folder/prefix.  You can also upload other files to node config, and they\n## will sync to the EC2 instance as part of the autorun script later.  If you add other files, make sure they are \n## in node-config in the same directory structure as DRfc;   example:   s3location/node-config/scripts/training/.start.sh\nRUNENV_LOCATION=$S3_LOCATION/node-config/run.env\nSYSENV_LOCATION=$S3_LOCATION/node-config/system.env\n\naws s3 cp ./run.env s3://$RUNENV_LOCATION\naws s3 cp ./system.env s3://$SYSENV_LOCATION\n\n## upload a custom autorun script to S3.  
there is a default autorun script in the repo that will be used unless a custom one is specified here instead\n#aws s3 cp ./autorun.sh s3://$S3_LOCATION/autorun.sh\n\n## upload custom files -- if you don't want this, comment these lines out\naws s3 cp ./model_metadata.json s3://$S3_LOCATION/custom_files/model_metadata.json\naws s3 cp ./reward_function.py s3://$S3_LOCATION/custom_files/reward_function.py\naws s3 cp ./hyperparameters.json s3://$S3_LOCATION/custom_files/hyperparameters.json\n\n## launch an ec2\n## update with your own settings, including key-name, security-group, and iam-instance-profile at a minimum\n## user data includes a command to create a .txt file which simply contains the name of the s3 location\n## this filename will be used as fundamental input to the autorun.sh script run later on that instance\n## you need to ensure you have proper IAM permissions to launch this instance\n\naws ec2 run-instances \\\n    --image-id ami-085925f297f89fce1 \\\n    --count 1 \\\n    --instance-type $INSTANCE_TYPE \\\n    --key-name <####keyname####> \\\n    --security-group-ids sg-<####sgid####> \\\n    --block-device-mappings 'DeviceName=/dev/sda1,Ebs={DeleteOnTermination=true,VolumeSize=40}' \\\n    --iam-instance-profile Arn=arn:aws:iam::<####acct_num####>:instance-profile/<####role_name####> \\\n    --instance-market-options MarketType=spot \\\n    --user-data \"#!/bin/bash\n    su -c 'git clone https://github.com/aws-deepracer-community/deepracer-for-cloud.git && echo \"$S3_LOCATION/node-config\" > /home/ubuntu/deepracer-for-cloud/autorun.s3url && /home/ubuntu/deepracer-for-cloud/bin/prepare.sh' - ubuntu\"\n"
  },
  {
    "path": "utils/setup-xorg.sh",
    "content": "#!/usr/bin/env bash\n\nset -e\n\n# Script to install basic X-Windows on a headless instance (e.g. in EC2)\n\n# Script shall run as user, not root. Sudo will be used when needed.\nif [[ $EUID == 0 ]]; then\n    echo \"ERROR: Do not run as root / via sudo.\"\n    exit 1\nfi\n\n# Deepracer environment variables must be set.\nif [ -z \"$DR_DIR\" ]; then\n    echo \"ERROR: DR_DIR not set. Run 'source bin/activate.sh' before setup-xorg.sh.\"\n    exit 1\nfi\n\n# Install additional packages\nsudo apt-get install xinit xserver-xorg-legacy x11-xserver-utils x11-utils \\\n        menu mesa-utils xterm mwm x11vnc pkg-config screen -y --no-install-recommends\n\n# Configure\nsudo sed -i -e \"s/console/anybody/\" /etc/X11/Xwrapper.config\nBUS_ID=$(nvidia-xconfig --query-gpu-info | grep \"PCI BusID\" | cut -f2- -d: | sed -e 's/^[[:space:]]*//' | head -1)\nsudo nvidia-xconfig --busid=$BUS_ID -o $DR_DIR/tmp/xorg.conf\n\ntouch ~/.Xauthority\n\nsudo tee -a $DR_DIR/tmp/xorg.conf <<EOF\n\nSection \"DRI\"\n        Mode 0666\nEndSection\nEOF\n"
  },
  {
    "path": "utils/start-local-browser.sh",
    "content": "#!/usr/bin/env bash\n\nsource $DR_DIR/bin/scripts_wrapper.sh\n\nusage() {\n  echo \"Usage: $0 [-t topic] [-w width] [-h height] [-q quality] [-b browser-command]\"\n  echo \"       -w        Width of individual stream.\"\n  echo \"       -h        Height of individual stream.\"\n  echo \"       -q        Quality of the stream image.\"\n  echo \"       -t        Topic to follow - default /racecar/deepracer/kvs_stream\"\n  echo \"       -b        Browser command (default: firefox --new-tab)\"\n  exit 1\n}\n\ntrap ctrl_c INT\n\nfunction ctrl_c() {\n  echo \"Requested to stop.\"\n  exit 1\n}\n\n# Stream definition\nTOPIC=\"/racecar/deepracer/kvs_stream\"\nWIDTH=480\nHEIGHT=360\nQUALITY=75\nBROWSER=\"firefox --new-tab\"\n\nwhile getopts \":w:h:q:t:b:\" opt; do\n  case $opt in\n  w)\n    WIDTH=\"$OPTARG\"\n    ;;\n  h)\n    HEIGHT=\"$OPTARG\"\n    ;;\n  q)\n    QUALITY=\"$OPTARG\"\n    ;;\n  t)\n    TOPIC=\"$OPTARG\"\n    ;;\n  b)\n    BROWSER=\"$OPTARG\"\n    ;;\n  \\?)\n    echo \"Invalid option -$OPTARG\" >&2\n    usage\n    ;;\n  esac\ndone\n\nFILE=$DR_DIR/tmp/streams-$DR_RUN_ID.html\n\n# Check if we will use Docker Swarm or Docker Compose\nif [[ \"${DR_DOCKER_STYLE,,}\" == \"swarm\" ]]; then\n  echo \"This script does not support swarm mode. Use 'dr-start-viewer'.\"\n  exit\nfi\n\necho \"<html><head><title>DR-$DR_RUN_ID - $DR_LOCAL_S3_MODEL_PREFIX - $TOPIC</title></head><body><h1>DR-$DR_RUN_ID - $DR_LOCAL_S3_MODEL_PREFIX - $TOPIC</h1>\" >$FILE\n\nROBOMAKER_CONTAINERS=$(docker ps --format \"{{.ID}}\" --filter name=deepracer-$DR_RUN_ID --filter \"ancestor=${DR_SIMAPP_SOURCE}:${DR_SIMAPP_VERSION}\")\nif [ -z \"$ROBOMAKER_CONTAINERS\" ]; then\n  echo \"No running robomakers. 
Exiting.\"\n  exit\nfi\n\nfor c in $ROBOMAKER_CONTAINERS; do\n  C_PORT=$(docker inspect $c | jq -r '.[0].NetworkSettings.Ports[\"8080/tcp\"][0].HostPort')\n  C_URL=\"http://localhost:${C_PORT}/stream?topic=${TOPIC}&quality=${QUALITY}&width=${WIDTH}&height=${HEIGHT}\"\n  C_IMG=\"<img src=\\\"${C_URL}\\\"></img>\"\n  echo $C_IMG >>$FILE\ndone\n\necho \"</body></html>\" >>$FILE\necho \"Starting browser '$BROWSER'.\"\n$BROWSER $(_realpath \"$FILE\") &\n"
  },
  {
    "path": "utils/start-xorg.sh",
    "content": "#!/usr/bin/env bash\n\nset -e\n\n# Script shall run as user, not root. Sudo will be used when needed.\nif [[ $EUID == 0 ]]; then\n    echo \"ERROR: Do not run as root / via sudo.\"\n    exit 1\nfi\n\n# X must not be running when we try to start it.\nif timeout 1s xset -display $DR_DISPLAY q &>/dev/null; then\n    echo \"ERROR: X Server already running on display $DR_DISPLAY.\"\n    exit 1\nfi\n\n# Deepracer environment variables must be set.\nif [ -z \"$DR_DIR\" ]; then\n    echo \"ERROR: DR_DIR not set. Run 'source bin/activate.sh' before start-xorg.sh.\"\n    exit 1\nfi\n\nif [ -z \"$DR_DISPLAY\" ]; then\n    echo \"ERROR: DR_DISPLAY not set. Ensure the variable is configured in system.env.\"\n    exit 1\nfi\n\n# Start inside a sudo-screen to prevent it from stopping when the terminal disconnects.\nsudo screen -d -S DeepracerXorg -m bash -c \"xinit /usr/bin/mwm -display $DR_DISPLAY -- /usr/lib/xorg/Xorg $DR_DISPLAY -config $DR_DIR/tmp/xorg.conf > $DR_DIR/tmp/xorg.log 2>&1\"\n\n# Screen detaches; let it have some time to start X.\nsleep 1\n\nif [[ \"${DR_GUI_ENABLE,,}\" == \"true\" ]]; then\n    x11vnc -bg -forever -no6 -nopw -rfbport 5901 -rfbportv6 -1 -loop -display WAIT$DR_DISPLAY &\n    sleep 1\nfi\n\n# Create xauth mit-magic-cookie.\nxauth generate $DR_DISPLAY\n\n# Check if X started successfully. If not, print error message and exit.\nif timeout 1s xset -display $DR_DISPLAY q &>/dev/null; then\n    echo \"X Server started on display $DR_DISPLAY\"\nelse\n    echo \"ERROR: X Server failed to start on display $DR_DISPLAY.\"\n    exit 1\nfi\n"
  },
  {
    "path": "utils/timed-stop.sh",
    "content": "#!/usr/bin/env bash\n\n# Stops DeepRacer training for the given environment file (intended for timed/scheduled stops).\nif [ \"$#\" -ne 1 ]; then\n  echo \"Usage: $0 <environment file>\"\n  exit 1\nfi\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" >/dev/null 2>&1 && pwd)\"\nDR_DIR=\"$(dirname \"$SCRIPT_DIR\")\"\nENV_FILE=\"$1\"\n\nsource \"$DR_DIR/bin/activate.sh\" \"$DR_DIR/$ENV_FILE\"\ndr-stop-training\n"
  },
  {
    "path": "utils/upload-rotate.sh",
    "content": "#!/usr/bin/env bash\n# This script uploads the latest DeepRacer model and activates the necessary environment.\n# It processes command line options to customize the environment file path, enable local upload, and specify an evaluation environment file.\n# After processing the options, it activates the environment, uploads the model, and updates the evaluation environment file with the new model prefix if specified.\n#\n# Usage:\n# ./upload-rotate.sh [-e <environment file>] [-L] [-E <evaluation environment file>] [-c <counter file>] [-v]\n#\n# Options:\n# -c <counter file>               Specify the path to the counter file. This is optional.\n# -e <environment file>           Specify the path to the environment configuration file. Defaults to 'run.env' in the script's directory.\n# -L                              Enable local upload. This option does not require a value.\n# -v                              Add more verbose logging, capturing iteration and entropy numbers.\n# -E <evaluation environment file> Specify the path to the evaluation environment configuration file. This is optional.\n# -C                              Upload the car file. 
This option does not require a value.\n#\n# Example:\n# ./upload-rotate.sh -e custom.env -L -E eval.env\n#\n# To run this script manually, navigate to its directory and execute it with desired options.\n# Ensure you have the necessary permissions to execute the script.\n\n# Navigate to the script directory\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" && pwd)\"\nDR_DIR=\"$(dirname \"$SCRIPT_DIR\")\"\n\n# Default environment file path\nENV_FILE=\"$DR_DIR/run.env\"\nLOCAL_UPLOAD=\"\"\nEVAL_ENV_FILE=\"\"\n\n# Process command line options\nwhile getopts \"e:LE:vc:C\" opt; do\n  case $opt in\n    c) COUNTER_FILE=\"$OPTARG\" ;;\n    e) ENV_FILE=\"$OPTARG\" ;;\n    L) LOCAL_UPLOAD=\"-L\" ;;\n    E) EVAL_ENV_FILE=\"$OPTARG\" ;;\n    v) VERBOSE_LOGGING=\"true\" ;;\n    C) CAR_FILE=\"-C\" ;;\n    *) echo \"Invalid option: -$OPTARG\" >&2; exit 1 ;;\n  esac\ndone\n\n# If a counter file is specified, increment the counter\nif [ -n \"$COUNTER_FILE\" ]; then\n  if [ -f \"$COUNTER_FILE\" ]; then\n    COUNTER=$(cat \"$COUNTER_FILE\")\n    COUNTER=$((COUNTER + 1))\n    echo \"$COUNTER\" > \"$COUNTER_FILE\"\n    export UPLOAD_COUNTER=$COUNTER\n  else\n    echo \"Error: Counter file '$COUNTER_FILE' not found.\" >&2\n    exit 1\n  fi\nfi\n\n# Activate the environment\nif [ -f \"$ENV_FILE\" ]; then\n  source \"$DR_DIR/bin/activate.sh\" \"$ENV_FILE\"\nelse\n  if [ -f \"$DR_DIR/$ENV_FILE\" ]; then\n    source \"$DR_DIR/bin/activate.sh\" \"$DR_DIR/$ENV_FILE\"\n  else\n    echo \"Error: Environment file '$ENV_FILE' not found.\" >&2\n    exit 1\n  fi\nfi\n\n# Execute the upload command\nif [ -n \"$COUNTER_FILE\" ]; then\n  dr-upload-model $LOCAL_UPLOAD -f\nelse\n  dr-upload-model $LOCAL_UPLOAD -1 -f\nfi\ndr-update\n\n# If the car file option is specified, upload the car file\nif [ -n \"$CAR_FILE\" ]; then\n  dr-upload-car-zip $LOCAL_UPLOAD -f\nfi\n\n# If an evaluation environment file is specified then alter the model prefix to enable evaluation\nif [ -n \"$EVAL_ENV_FILE\" ]; 
then\n  if [ ! -f \"$EVAL_ENV_FILE\" ]; then\n    if [ -f \"$DR_DIR/$EVAL_ENV_FILE\" ]; then\n      EVAL_ENV_FILE=\"$DR_DIR/$EVAL_ENV_FILE\"\n    else\n      echo \"Error: Evaluation environment file '$EVAL_ENV_FILE' not found.\" >&2\n      exit 1\n    fi\n  fi\n  MODEL_PREFIX=\"$DR_UPLOAD_S3_PREFIX\"\n  echo \"Updating evaluation environment file $EVAL_ENV_FILE to use $MODEL_PREFIX\"\n  # Use '|' as the sed delimiter since the model prefix may contain '/'\n  sed -i \"s|DR_LOCAL_S3_MODEL_PREFIX=.*|DR_LOCAL_S3_MODEL_PREFIX=$MODEL_PREFIX|\" \"$EVAL_ENV_FILE\"\nfi\n\nprintf \"\\n############################################################\\n\"\nprintf \"### %-15s %-15s\\n\" \"Configuration:\" \"$ENV_FILE\"\nprintf \"### %-15s %-15s\\n\" \"Model Name:\" \"$DR_LOCAL_S3_MODEL_PREFIX\"\nprintf \"### %-15s %-15s\\n\" \"Uploaded Model:\" \"$DR_UPLOAD_S3_PREFIX\"\n\n# If verbose logging is enabled, retrieve the entropy and iteration numbers.\nif [ -n \"$VERBOSE_LOGGING\" ]; then\n  CONTAINER_ID=$(docker ps -f \"name=deepracer-${DR_RUN_ID}_rl_coach\" --format \"{{.ID}}\")\n  if [ -n \"$CONTAINER_ID\" ]; then\n    LAST_ITERATION=$(docker logs --since 20m \"$CONTAINER_ID\" 2>/dev/null | awk '{if (match($0, /Best checkpoint number: ([0-9]+), Last checkpoint number: ([0-9]+)/, arr)) {print arr[2]}}' | tail -n 1)\n    printf \"### %-15s %-15s\\n\" \"Last iteration:\" \"$LAST_ITERATION\"\n\n    ENTROPY=$(docker logs --since 20m \"$CONTAINER_ID\" 2>/dev/null | awk '{if (match($0, /Entropy=([0-9.]+)/, arr)) {print arr[1]}}' | tail -n 1)\n    printf \"### %-15s %-15s\\n\" \"Entropy:\" \"$ENTROPY\"\n  fi\nfi\n\nprintf \"### %-15s %-15s\\n\" \"Completed at:\" \"$(date)\"\nprintf \"############################################################\\n\\n\"\n"
  }
]