[
  {
    "path": ".envrc",
    "content": "echo \"Processing .direnv...\"\nfunction template {\n  echo \"Creating a skeleton tutorial in $1.\"\n  mkdir -p $1\t\n  cp $(pwd)/guide/TUTORIAL_TEMPLATE.mdx $1/README.md\n}\necho \"Done.\"\n\n"
  },
  {
    "path": ".gitignore",
    "content": ".DS_Store\n"
  },
  {
    "path": "HCL2/add_local_file/README.md",
    "content": "# Include a Local File at Job Runtime\n\nYou can use the HCL2 file function and a runtime variable to include a file in\nyour Nomad jobs. **These files should be small because they are stored in the\nNomad server state until the job is eligible for garbage collection.**\n\n## Techniques\n\n### Use the HCL2 file() function\n\n- [`use_file.nomad`] — demonstrates the file function. This allows you to include\na template to be rendered.\n\n### Wrap included files\n\nNomad will inject the file content into the template stanza directly and it\nwill be rendered by the client. You might want to prevent Nomad from seeing\nthe content as renderable. There are a few techniques that you can use for\nthis.\n\n- [`raw_file_delims.nomad`] — Uses alternative delimiters for the template\n  stanza. These delimiter characters must never appear in the included file\n  content. You can use interesting characters like emoji as delimiters\n  because of golang's Unicode support.\n\n- [`raw_file_json.nomad`] — JSON encodes the file and uses the Nomad template\n  engine to decode it on the client. The input file must not contain the default\n  template delimiters (`{{` and `}}`) or you must redefine them because they are\n  not escaped.\n  <details><summary>You can even use emoji, depending on OS support.</summary>\n\n  ![\"Image of the Nomad UI's job definition tab showing the \"prohibited\" emoji as LedtDelimiter and RightDelimiter\"](doc/emoji-delimiters.png \"Emoji are fun and functional.\")\n\n  </details>\n\n- [`raw_file_b64.nomad`] — demonstrates using base64 as a means to wrap your\n  included file so that it is only unwrapped on the destination client.\n\n## Explore\n\nThis directory contains a test file you can use named `input.file`, or you can\nsupply your own file to include.\n\n### Run the job\n\nThe jobs all define an input variable named `input_file`. You must supply the\npath to the file to include. 
You must provide it as an environment variable or\nas a flag. \n\n#### Environment variable\n\n```\nexport NOMAD_VAR_input_file=./input.file\nnomad job run use_file.nomad\n```\n\n#### Flag\n\n```\nnomad job run -var \"input_file=./input.file\" use_file.nomad\n```\n\n### Inspect the job\n\nRun the `nomad job inspect` command to see how the JSON job specification\nrepresents the job. Some techniques are very clear and some opaque the\nfile contents completely.\n\n```\nnomad job inspect use_file.nomad\n```\n\n### Get the logs from the allocation\n\nGet the allocation ID from the output of the `nomad job run` command and fetch\nthe logs.\n\n```\nnomad alloc logs «alloc_id»\n```\n\n### Stop the job.\n\n```\nnomad job stop use_file.nomad\n```\n\n## About the job\n\nThe job contains one task. Nomad renders the `template` stanza's content—the\nincluded file—into the task's `local` directory. It then starts an\n`alpine:latest` container that runs `cat` on the rendered file and sleeps\nuntil stopped.  the task's `local` directoryuses Nomad's Docker task driver to\ndownload an Alpine container.\n\n[`use_file.nomad`]: ./use_file.nomad\n[`raw_file_delims.nomad`]: ./raw_file_delims.nomad\n[`raw_file_json.nomad`]: ./raw_file_json.nomad\n[`raw_file_b64.nomad`]: ./raw_file_b64.nomad\n"
  },
  {
    "path": "HCL2/add_local_file/input.file",
    "content": "This is the input file content\n\nParticularly evil stuff:\n\nSingle quotes: 'hello'\nDouble quotes: \"howdy\"\nGo-template: {{ \"hello\" }}\nBackticks: `this is a raw-string in go, but raw strings can't be in rawstrings`\nJSON:\n{\n\t\"object\": {\n\t\t\"foo\": true,\n\t\t\"bar\": 5,\n\t\t\"baz\": [1,2,3]\n\t}\n}\n"
  },
  {
    "path": "HCL2/add_local_file/raw_file_b64.nomad",
    "content": "variable \"input_file\" {\n  type = string\n  description = \"local path to the redis configuration to inject into the job.\"\n}\n\njob \"raw_file_b64.nomad\" {\n  datacenters = [\"dc1\"]\n\n  group \"services\" {\n    task \"alpine\" {\n      driver = \"docker\"\n\n      template {\n        destination = \"local/file.out\"\n      }\n\n      config {\n        image   = \"alpine\"\n        command = \"bash\"\n        args    = [\n          \"-c\",\n          \"cat local/file.out; while true; do sleep 30; done\",\n        ]\n      }\n\n      template {\n        destination = \"local/file.out\"\n        data = \"{{base64Decode \\\"${base64encode(file(var.input_file))}\\\"}}\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/add_local_file/raw_file_delims.nomad",
    "content": "variable \"input_file\" {\n  type = string\n  description = \"local path to the redis configuration to inject into the job.\"\n}\n\njob \"raw_file_delims.nomad\" {\n  datacenters = [\"dc1\"]\n\n  group \"services\" {\n    task \"alpine\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"alpine\"\n        command = \"sh\"\n        args    = [\n          \"-c\",\n          \"cat local/file.out; while true; do sleep 30; done\",\n        ]\n      }\n\n      template {\n        destination = \"local/file.out\"\n        data = file(var.input_file)\n        left_delimiter = \"🚫\"\n        right_delimiter = \"🚫\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/add_local_file/raw_file_json.nomad",
    "content": "variable \"input_file\" {\n  type = string\n  description = \"local path to the redis configuration to inject into the job.\"\n}\n\njob \"raw_file_json.nomad\" {\n  datacenters = [\"dc1\"]\n\n  group \"services\" {\n    task \"alpine\" {\n      driver = \"docker\"\n\n      template {\n        destination = \"local/file.out\"\n      }\n\n      config {\n        image   = \"alpine\"\n        command = \"bash\"\n        args    = [\n          \"-c\",\n          \"cat local/file.out; while true; do sleep 30; done\",\n        ]\n      }\n      \n      template {\n        destination = \"local/file.out\"\n        data = \"{{jsonDecode \\\"${jsonencode(file(var.input_file))}\\\"}}\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/add_local_file/use_file.nomad",
    "content": "variable \"input_file\" {\n  type = string\n  description = \"local path to the redis configuration to inject into the job.\"\n}\n\njob \"use_file.nomad\" {\n  datacenters = [\"dc1\"]\n\n  group \"services\" {\n    task \"alpine\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"alpine\"\n        command = \"sh\"\n        args    = [\n          \"-c\",\n          \"cat local/file.out; while true; do sleep 30; done\",\n        ]\n      }\n\n      template {\n        destination = \"local/file.out\"\n        data = file(var.input_file)\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/always_change/README.md",
    "content": "# Use HCL2 to make re-runnable batch jobs\n\nNomad will refuse to run a batch job again unless it detects a change to the job.\nThis behavior exists to prevent duplicate job submissions from creating unnecessary\nwork—unchanged jobs are \"the same job\" to Nomad. A Nomad job's `meta` stanza is\nan ideal place to make changes to a Nomad job that do not change the behavior of\nthe job itself. Some ways to provide variation in a meta value are using an HCL2\nvariable or the `uuidv4()` function.\n\n- [`before.nomad`]—Demonstrates the normal behavior.\n\n- [`uuid.nomad`]—Use a random UUID to change the job every time it's run. This\n  guarantees that Nomad will always run the submitted job.\n\n- [`variable.nomad`]—Submit a variable at runtime. This can preserve the single\n  run behavior in cases where the job submission is a duplicate.\n\n## Nomad's default behavior\n\nRun the `before.nomad` job. Nomad will start a copy of the `hello-world:latest`\ndocker container. This container outputs some text and exits.\n\n```text\n$ nomad run before.nomad\n==> Monitoring evaluation \"1fef4d80\"\n    Evaluation triggered by job \"before.nomad\"\n==> Monitoring evaluation \"1fef4d80\"\n    Allocation \"7e6a767b\" created: node \"14ab9290\", group \"before\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"1fef4d80\" finished with status \"complete\"\n```\n\nCheck the status of the allocation created by the run command.\n\n```text\n$ nomad alloc status 7eg\nID                  = 7e6a767b-5604-5268-653b-905948928de5\nEval ID             = 1fef4d80\nName                = before.nomad.before[0]\nNode ID             = 14ab9290\nNode Name           = nomad-client-2.node.consul\nJob ID              = before.nomad\nJob Version         = 0\nClient Status       = complete\nClient Description  = All tasks have completed\nDesired Status      = run\nDesired Description = <none>\nCreated             = 6m55s ago\nModified            = 6m45s 
ago\n\nTask \"hello-world\" is \"dead\"\nTask Resources\nCPU      Memory   Disk     Addresses\n100 MHz  300 MiB  300 MiB\n\nTask Events:\nStarted At     = 2021-05-18T18:03:10Z\nFinished At    = 2021-05-18T18:03:10Z\nTotal Restarts = 0\nLast Restart   = N/A\n\nRecent Events:\nTime                       Type        Description\n2021-05-18T14:03:10-04:00  Terminated  Exit Code: 0\n2021-05-18T14:03:10-04:00  Started     Task started by client\n2021-05-18T14:03:01-04:00  Driver      Downloading image\n2021-05-18T14:03:01-04:00  Task Setup  Building Task Directory\n2021-05-18T14:03:01-04:00  Received    Task received by client\n```\n\nAs expected, the Docker container finished and exited with exit code 0.\n\nCheck the status of the job to verify that its status is `dead`.\n\n```text\n$ nomad status\nID            Type     Priority  Status   Submit Date\nbefore.nomad  batch    50        dead     2021-05-18T14:03:00-04:00\n```\n\nTry running the `before.nomad` job again.\n\n```text\n$ nomad run before.nomad\n==> Monitoring evaluation \"a855fa2b\"\n    Evaluation triggered by job \"before.nomad\"\n==> Monitoring evaluation \"a855fa2b\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"a855fa2b\" finished with status \"complete\"\n```\n\nNote that this time, Nomad did not schedule an allocation and the\njob remains dead. 
This is expected and is a safety feature of Nomad\nto prevent duplicate submissions of the same job from creating\nunnecessary duplicate work.\n\nIf your job should always run, you can use one of the following\ntechniques to inject variation in ways that don't require you\nto alter the job file's contents.\n\n## Techniques\n\n### Use a UUID as an ever-changing value\n\n```text\n$ nomad run uuid.nomad\n==> Monitoring evaluation \"27fe0c84\"\n    Evaluation triggered by job \"uuid.nomad\"\n==> Monitoring evaluation \"27fe0c84\"\n    Allocation \"6de97aa7\" created: node \"14ab9290\", group \"uuid\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"27fe0c84\" finished with status \"complete\"\n```\n\n```text\n$ nomad alloc status 6de\nID                  = 6de97aa7-e6b1-c6bf-e8e0-16d5f7ed39bf\nEval ID             = 27fe0c84\nName                = uuid.nomad.uuid[0]\nNode ID             = 14ab9290\nNode Name           = nomad-client-2.node.consul\nJob ID              = uuid.nomad\nJob Version         = 0\nClient Status       = complete\nClient Description  = All tasks have completed\nDesired Status      = run\nDesired Description = <none>\nCreated             = 6m52s ago\nModified            = 6m50s ago\n\nTask \"hello-world\" is \"dead\"\nTask Resources\nCPU      Memory   Disk     Addresses\n100 MHz  300 MiB  300 MiB\n\nTask Events:\nStarted At     = 2021-05-18T18:07:33Z\nFinished At    = 2021-05-18T18:07:33Z\nTotal Restarts = 0\nLast Restart   = N/A\n\nRecent Events:\nTime                       Type        Description\n2021-05-18T14:07:33-04:00  Terminated  Exit Code: 0\n2021-05-18T14:07:33-04:00  Started     Task started by client\n2021-05-18T14:07:31-04:00  Driver      Downloading image\n2021-05-18T14:07:31-04:00  Task Setup  Building Task Directory\n2021-05-18T14:07:31-04:00  Received    Task received by client\n```\n\n```text\n$ nomad status\nID            Type     Priority  Status   Submit Date\nuuid.nomad    batch    50        dead    
 2021-05-18T14:07:30-04:00\nbefore.nomad  batch    50        dead     2021-05-18T14:03:00-04:00\n```\n\n```text\n$ nomad run uuid.nomad\n==> Monitoring evaluation \"2943fe82\"\n    Evaluation triggered by job \"uuid.nomad\"\n    Allocation \"61f5861a\" created: node \"f7bc1f2d\", group \"uuid\"\n==> Monitoring evaluation \"2943fe82\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"2943fe82\" finished with status \"complete\"\n```\n\n### Use an HCL2 variable\n\nUsing a variable lets you keep Nomad's default behavior of not\nre-running unchanged work while still providing a way to change\nthe job without requiring a round trip to source control.\n\n```text\n$ nomad run -var run_index=1 variable.nomad\n==> Monitoring evaluation \"454f6fb4\"\n    Evaluation triggered by job \"variable.nomad\"\n==> Monitoring evaluation \"454f6fb4\"\n    Allocation \"74f9cbf5\" created: node \"f7bc1f2d\", group \"variable\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"454f6fb4\" finished with status \"complete\"\n```\n\n```text\n$ nomad alloc status 74f\nID                  = 74f9cbf5-a793-5022-c831-b83e31712725\nEval ID             = 454f6fb4\nName                = variable.nomad.variable[0]\nNode ID             = f7bc1f2d\nNode Name           = nomad-client-1.node.consul\nJob ID              = variable.nomad\nJob Version         = 0\nClient Status       = complete\nClient Description  = All tasks have completed\nDesired Status      = run\nDesired Description = <none>\nCreated             = 6m52s ago\nModified            = 6m48s ago\n\nTask \"hello-world\" is \"dead\"\nTask Resources\nCPU      Memory   Disk     Addresses\n100 MHz  300 MiB  300 MiB\n\nTask Events:\nStarted At     = 2021-05-18T18:21:27Z\nFinished At    = 2021-05-18T18:21:27Z\nTotal Restarts = 0\nLast Restart   = N/A\n\nRecent Events:\nTime                       Type        Description\n2021-05-18T14:21:27-04:00  Terminated  Exit Code: 
0\n2021-05-18T14:21:27-04:00  Started     Task started by client\n2021-05-18T14:21:24-04:00  Driver      Downloading image\n2021-05-18T14:21:24-04:00  Task Setup  Building Task Directory\n2021-05-18T14:21:24-04:00  Received    Task received by client\n```\n\n```text\n$ nomad status\nID              Type   Priority  Status  Submit Date\nvariable.nomad  batch  50        dead    2021-05-18T14:21:23-04:00\n```\n\nResubmit the job with the same `run_index` value—`1`.\n\n```text\n$ nomad run -var run_index=1 variable.nomad\n==> Monitoring evaluation \"4d7064ea\"\n    Evaluation triggered by job \"variable.nomad\"\n==> Monitoring evaluation \"4d7064ea\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"4d7064ea\" finished with status \"complete\"\n```\n\nNote that Nomad does not re-run the job. Now, change the\n`run_index` value to `2` and run the command again.\n\n```text\n$ nomad run -var run_index=2 variable.nomad\n==> Monitoring evaluation \"73e7902f\"\n    Evaluation triggered by job \"variable.nomad\"\n==> Monitoring evaluation \"73e7902f\"\n    Allocation \"9e8cbc58\" created: node \"f7bc1f2d\", group \"variable\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"73e7902f\" finished with status \"complete\"\n```\n\nNomad runs a fresh allocation of the batch job.\n\n## Clean up\n\nRun `nomad job stop variable.nomad` to stop the job.\n\n[`before.nomad`]: ./before.nomad\n[`uuid.nomad`]: ./uuid.nomad\n[`variable.nomad`]: ./variable.nomad\n"
  },
  {
    "path": "HCL2/always_change/before.nomad",
    "content": "job \"before.nomad\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  group \"before\" {\n    task \"hello-world\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hello-world:latest\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/always_change/uuid.nomad",
    "content": "job \"uuid.nomad\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  meta {\n    run_uuid = \"${uuidv4()}\"\n  }\n\n  group \"uuid\" {\n    task \"hello-world\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hello-world:latest\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/always_change/variable.nomad",
    "content": "job \"variable.nomad\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  meta {\n    run_index = \"${floor(var.run_index)}\"\n  }\n\n  group \"variable\" {\n    task \"hello-world\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hello-world:latest\"\n      }\n    }\n  }\n}\n\nvariable \"run_index\" {\n  type = number\n  description = \"An integer that, when changed from the current value causes the job to restart.\"\n  validation {\n    condition = var.run_index == floor(var.run_index)\n    error_message = \"The run_index must be an integer.\"\n  }\n}"
  },
  {
    "path": "HCL2/dynamic/README.md",
    "content": "# HCL2 dynamic blocks\n\nThis job specification leverages the `dynamic` HCL2 blocks and HCL2 variables to\ncreate a multi-task job specification."
  },
  {
    "path": "HCL2/dynamic/example.nomad",
    "content": "variable \"job_name\" {\n  type = string\n  default = \"\"\n}\n\nlocals {\n  targets = {\n    \"1\": \"zpool\"\n    \"2\": \"zmirror\"\n  }\n  tasks = {\n    \"redis\": {\"name\":\"db\",\"port\":6379}\n  }\n  docker_versions = {\n    \"zpool\": \"redis:7\"\n    \"zmirror\": \"redis:latest\"\n  }\n  job_name = \"%{ if var.job_name != \"\" }${var.job_name}%{ else }example%{ endif }\"\n}\n\njob \"example\" {\n  name = local.job_name\n  datacenters = [\"dc1\"]\n\n  dynamic \"group\" {\n    for_each = local.targets\n    labels = [\"${local.job_name}-${group.value}\"]\n    content {\n      network {\n        dynamic \"port\" {\n          labels = [\"${local.job_name}-${group.value}-${port.key}-${port.value.name}-${port.value.port}\"]\n          for_each = local.tasks\n          content {\n            to = port.value.port\n          }\n        }\n      }\n\n      dynamic \"task\" {\n        labels = [\"${local.job_name}-${group.value}-${task.key}\"]\n        for_each = local.tasks\n\n        content {\n          driver = \"docker\"\n\n          config {\n            image = local.docker_versions[group.value]\n            ports = [\"${local.job_name}-${group.value}-${task.key}-${task.value.name}-${task.value.port}\"]\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/object_to_template/README.md",
    "content": ""
  },
  {
    "path": "HCL2/object_to_template/example.nomad",
    "content": "variable \"datacenters\" {\n    type = list(string)\n    default = [\"dc1\"]\n}\n\nvariable \"ports\" {\n  type = list(object({\n    name     = string\n    internal = number\n    external = number\n  }))\n  default = [\n    {\n      name     = \"db\"\n      internal = 8300\n      external = 8300\n    },\n    {\n      name     = \"db2\"\n      internal = 8301\n      external = 8301\n    }\n  ]\n}\n\njob \"example\" {\n  datacenters = var.datacenters\n  type = \"batch\"\n\n  group \"group\" {\n    task \"task\" {\n        driver = \"exec\"\n\n        config {\n          command = \"bash\"\n          args    = [\"-c\", \"cat template.out\"]\n        }\n\n        template {\n          destination = \"template.out\"\n          data        = <<EOT\n{{ $ports := parseJSON `${jsonencode(var.ports)}` }}\n{{range $ports}}{{.name}}:{{.external}}->{{.internal}}{{println}}{{end}}\nEOT\n        }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/variable_jobs/README.md",
    "content": "# Using HCL2 to add variables to Nomad jobs\n\nNomad's HCL2 support enables you to use variables in your Nomad job specifications.\nThis can decrease the number of job files you have to maintain in source control\nand can encourage job reuse.\n\nThis example contains a job that consumes HCL2 variables and uses them to generate\na Docker service job.\n\nThe `job.nomad` file defines 3 variables:\n\n- `datacenters`(default `[ \"dc1\" ]`)—a list of the Nomad datacenters to run\n  the job in.\n\n- `docker_image`—The docker image name to run. Since this is a service job,\n  the image needs to run until explicitly stopped. The `redis` container is a\n  small example that works well.\n\n- `image-version`—The specific version of the `docker_image` image to run. For\n  the `redis` container, try versions like `\"3\"`,`\"4\"`, and `\"latest\"`.\n\n## Quickstart\n\n### Run the example\n\n```bash\nnomad job run -var docker_image=\"redis\" -var image_version=\"3\" job.nomad\n```\n\nNomad will start a `redis:3` container\n\n```bash\nnomad job run -var docker_image=\"redis\" -var image_version=\"latest\" job.nomad\n```\n\nNomad will stop the `redis:3` container and start a 'redis:latest' container.\n\n## Stop the examples\n\n```bash\nnomad job stop job\n```\n\n## Submitting variable values\n\nThere are three ways to provide values for HCL2 variables.\n\n- Individual `-var` flags\n- With a variable file and the `-var-file` flag\n- Environment variables\n\nYou can use one or all these methods in the same call. Flags override values\nfrom the environment. 
The flags are parsed in the order they are presented.\n\nPrecedence (highest to lowest)\n\n- `-var` flag (if a variable repeats, the last one in the command line wins)\n- `-var-file` flag (if a variable repeats in the files, the last one listed in the command line wins)\n- environment variables\n\n### Environment variables\n\nTo provide a value to the HCL2 engine via the environment, you need to create\nan environment variable named `NOMAD_VAR_«variable name»`. For example, to\nset the value of the `docker_image` variable, create an environment variable\nnamed `NOMAD_VAR_docker_image`.\n\n## Using variable files with multiple jobs\n\nThe HCL2 engine expects every variable that you supply using the `-var` or\n`-var-file` flags to be consumed by the job specification.\n\nYou are some techniques to work around this constraint:\n\n- [Provide HCL2 variable values using environment variables](./env-vars)\n- [Use multiple `-var-files`](./var-files)\n- [Decode the contents of an external file into a `local` variable](./decode-external-file)\n"
  },
  {
    "path": "HCL2/variable_jobs/decode-external-file/README.MD",
    "content": "# Decode the contents of an external file into a `local` variable\n\nThe HCL2 `file` function when paired with the `jsondecode` or `yamldecode` function enables you to externalize shared configuration elements for Nomad jobs to a JSON or YAML file.\n\nThis example contains two jobs that read the `env.json` file to and use values from it to configure the Nomad job during submission from the CLI.\n\n\n## Run the examples\n\n```bash\nnomad job run -var=\"config=env.json\" job1.nomad\n```\n\nNomad will start a Redis 3 container\n\n```bash\nnomad job run -var=\"config=env.json\" job2.nomad\n```\n\nNomad will start a Redis 4 container\n\n## Stop the examples\n\n```bash\nnomad job stop job1\nnomad job stop job2\n```\n"
  },
  {
    "path": "HCL2/variable_jobs/decode-external-file/env.json",
    "content": "{\n  \"datacenters\": [\n    \"dc1\"\n  ],\n  \"docker_image_job1\": \"redis:3\",\n  \"docker_image_job2\": \"redis:4\"\n}\n"
  },
  {
    "path": "HCL2/variable_jobs/decode-external-file/job1.nomad",
    "content": "#----------------------------------------------------------------------------\n# This value can be supplied as a flag to nomad job run.\n#   `nomad job run -var config_file=«path to config» job1.nomad`\n# or as an environment variable\n#   `export NOMAD_VAR_config_file=«path to config»`\n#   `nomad job run job1.nomad`\n#----------------------------------------------------------------------------\nvariable \"config_file\" {\n  type = string\n  description = \"Path to JSON formatted shared job configuration.\"\n}\n\nlocals {\n  config = jsondecode(file(var.config_file))\n}\n\njob \"job1\" {\n  datacenters = local.config.datacenters\n\n  group \"job1\" {\n    task \"job1\" {\n      driver = \"docker\"\n\n      config {\n        image = local.config.docker_image_job1\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/variable_jobs/decode-external-file/job2.nomad",
    "content": "#----------------------------------------------------------------------------\n# This value can be supplied as a flag to nomad job run.\n#   `nomad job run -var config_file=«path to config» job2.nomad`\n# or as an environment variable\n#   `export NOMAD_VAR_config_file=«path to config»`\n#   `nomad job run job2.nomad`\n#----------------------------------------------------------------------------\nvariable \"config_file\" {\n  type = string\n  description = \"Path to JSON formatted shared job configuration.\"\n}\n\nlocals {\n  config = jsondecode(file(var.config_file))\n}\n\njob \"job2\" {\n  datacenters = local.config.datacenters\n\n  group \"job2\" {\n    task \"job2\" {\n      driver = \"docker\"\n\n      config {\n        image = local.config.docker_image_job2\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/variable_jobs/env-vars/README.MD",
    "content": "# Provide HCL2 variable values using environment variables\n\nThis example contains two jobs that read HCL2 variable values from the\nenvironment and populates the Nomad job with them during submission from the\nCLI. This can be a very powerful feature when paired with [`direnv`],\n[`envconsul`], and other tools that can manipulate environment variables.\n\n## Run the sample\n\n### Read in the environment variables\n```bash\nsource ./env.vars\n```\n\n```bash\nnomad job run job1.nomad\n```\nNomad will start a Redis 3 container\n\n```bash\nnomad job run job2.nomad\n```\n\nNomad will start a Redis 4 container\n\n## Stop the example\n\n```bash\nnomad job stop job1\nnomad job stop job2\nunset NOMAD_VAR_datacenters \\\n  NOMAD_VAR_docker_image_job1 \\\n  NOMAD_VAR_docker_image_job2\n```\n\n[`envconsul`]: https://github.com/hashicorp/envconsul\n[`direnv`]: https://direnv.net/\n"
  },
  {
    "path": "HCL2/variable_jobs/env-vars/env.vars",
    "content": "export NOMAD_VAR_datacenters='[\"dc1\"]'\nexport NOMAD_VAR_docker_image_job1=\"redis:3\"\nexport NOMAD_VAR_docker_image_job2=\"redis:4\"\n"
  },
  {
    "path": "HCL2/variable_jobs/env-vars/job1.nomad",
    "content": "variable \"datacenters\" {\n  type = list(string)\n  description = \"Path to JSON formatted shared job configuration.\"\n}\n\nvariable \"docker_image_job1\" {\n  type = string\n  description = \"Image for job1 to run\"\n}\n\njob \"job1\" {\n  datacenters = var.datacenters\n\n  group \"job1\" {\n    task \"job1\" {\n      driver = \"docker\"\n\n      config {\n        image = var.docker_image_job1\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/variable_jobs/env-vars/job2.nomad",
    "content": "variable \"datacenters\" {\n  type = list(string)\n  description = \"Path to JSON formatted shared job configuration.\"\n}\n\nvariable \"docker_image_job2\" {\n  type = string\n  description = \"Image for job2 to run\"\n}\n\njob \"job2\" {\n  datacenters = var.datacenters\n\n  group \"job2\" {\n    task \"job2\" {\n      driver = \"docker\"\n\n      config {\n        image = var.docker_image_job2\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/variable_jobs/job.nomad",
    "content": "variable \"datacenters\" {\n  type = list(string)\n  description = \"List of Nomad datacenters to run the job in. Defaults to `[\\\"dc1\\\"]`\"\n  default = [\"dc1\"]\n}\n\nvariable \"docker_image\" {\n  type = string\n  description = \"Docker image for the job to run\"\n}\n\nvariable \"image_version\" {\n  type = string\n  description = \"Version of the docker image to run\"\n}\n\njob \"job1\" {\n  datacenters = var.datacenters\n\n  group \"job1\" {\n    task \"job1\" {\n      driver = \"docker\"\n\n      config {\n        image = \"${var.docker_image}:${var.image_version}\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/variable_jobs/job.vars",
    "content": "image_version = \"99\"\n\n"
  },
  {
    "path": "HCL2/variable_jobs/multiple-var-files/README.MD",
    "content": "# Provide HCL2 variable values using environment variables\n\nThis example contains two jobs that consumes multiple HCL2 variable files and\npopulates the Nomad job with them during submission from the CLI.\n\nThe `shared.vars` file defines 2 variables:\n\n- `datacenters = [ \"dc1\" ]`\n- `docker_image = \"redis\"`\n\nThe job .vars files set the `image_version_«job name»` value to complete the\njob specification.\n\n## Run the examples\n\n```bash\nnomad job run -var-file=./shared.vars -var-file=./job1.vars job1.nomad\n```\n\nNomad will start a Redis 3 container\n\n```bash\nnomad job run -var-file=./shared.vars -var-file=./job2.vars job2.nomad\n```\n\nNomad will start a Redis 4 container\n\n```bash\nnomad job run -var-file=./shared.vars -var-file=./job3.vars job3.nomad\n```\n\nNomad will start a hello-world:latest container by overriding docker_image from\nthe `./shared.vars` file.\n\n## Stop the examples\n\n```bash\nnomad job stop job1\nnomad job stop job2\nnomad job stop job3\n```\n"
  },
  {
    "path": "HCL2/variable_jobs/multiple-var-files/job1.nomad",
    "content": "variable \"datacenters\" {\n  type = list(string)\n  description = \"Path to JSON formatted shared job configuration.\"\n}\n\nvariable \"docker_image\" {\n  type = string\n  description = \"Shared docker image\"\n}\n\nvariable \"image_version_job1\" {\n  type = string\n  description = \"Docker image version to run for job1\"\n}\n\njob \"job1\" {\n  datacenters = var.datacenters\n\n  group \"job1\" {\n    task \"job1\" {\n      driver = \"docker\"\n\n      config {\n        image = \"${var.docker_image}:${var.image_version_job1}\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/variable_jobs/multiple-var-files/job1.vars",
    "content": "image_version_job1 = \"3\"\n"
  },
  {
    "path": "HCL2/variable_jobs/multiple-var-files/job2.nomad",
    "content": "variable \"datacenters\" {\n  type = list(string)\n  description = \"Path to JSON formatted shared job configuration.\"\n}\n\nvariable \"docker_image\" {\n  type = string\n  description = \"Shared docker image\"\n}\n\nvariable \"image_version_job2\" {\n  type = string\n  description = \"Docker image version to run for job2\"\n}\n\njob \"job2\" {\n  datacenters = var.datacenters\n\n  group \"job2\" {\n    task \"job2\" {\n      driver = \"docker\"\n\n      config {\n        image = \"${var.docker_image}:${var.image_version_job2}\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/variable_jobs/multiple-var-files/job2.vars",
    "content": "image_version_job2 = \"4\"\n"
  },
  {
    "path": "HCL2/variable_jobs/multiple-var-files/job3.nomad",
    "content": "variable \"datacenters\" {\n  type = list(string)\n  description = \"Path to JSON formatted shared job configuration.\"\n}\n\nvariable \"docker_image\" {\n  type = string\n  description = \"Shared docker image\"\n}\n\nvariable \"image_version_job3\" {\n  type = string\n  description = \"Docker image version to run for job3\"\n}\n\njob \"job3\" {\n  datacenters = var.datacenters\n\n  group \"job3\" {\n    task \"job3\" {\n      driver = \"docker\"\n\n      config {\n        image = \"${var.docker_image}:${var.image_version_job3}\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "HCL2/variable_jobs/multiple-var-files/job3.vars",
    "content": "docker_image = \"hello-world\"\nimage_version_job3 = \"latest\"\n"
  },
  {
    "path": "HCL2/variable_jobs/multiple-var-files/shared.vars",
    "content": "datacenters = [ \"dc1\" ]\ndocker_image = \"redis\"\n"
  },
  {
    "path": "README.md",
    "content": "# Nomad Example Jobs\n\nThis repository holds jobs and job skeletons that I have used to create\nreproducers or minimum viable cases. I use them when creating guides as\nsimple workloads as well.\n\nSome specifically useful bits:\n\n- **csi** - Example jobs that use CSI to connect to external resources such as\n  block devices.\n\n- **fabio** - Several different fabio configurations that can be used to spin up\n  consul-aware load balancing in your Nomad cluster.\n\n- **sleepy** - Jobs that do a thing and then sleep (perhaps redoing the thing\n  when they wake up).\n\n- **template_playground** - a batch job that can be used to practice iterative\n  template development.\n  "
  },
  {
    "path": "alloc_folder/mount_alloc.nomad",
    "content": "job \"alloc_folder\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"docker\" {\n      driver = \"docker\"\n\n      config {\n        image = \"busybox:latest\"\n        command = \"sh\"\n        args = [\"-c\", \"while true; do echo $(date) | tee -a /my_data/output.txt; sleep 2; done\"]\n        volumes = [\"alloc/data:/my_data\"]\n\n      }\n\n      resources {\n        cpu    = 100\n        memory = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "alloc_folder/sidecar.nomad",
    "content": "job \"alloc_folder\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"docker\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox:latest\"\n        command = \"sh\"\n        args    = [\"-c\", \"while true; do echo $(date) | tee -a /alloc/output.txt; sleep 2; done\"]\n      }\n\n      resources {\n        cpu    = 100\n        memory = 100\n      }\n    }\n\n    task \"exec\" {\n      driver = \"exec\"\n\n      config {\n        command = \"tail\"\n        args    = [\"-f\", \"/alloc/output.txt\"]\n      }\n\n      resources {\n        cpu    = 100\n        memory = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/artifactory_oss/README.md",
    "content": "# Docker Registry\n\nThis job uses Nomad Host Volumes to provide an internal Docker registry which\ncan be used to host private containers for a Nomad cluster.\n\n## Prerequisites\n\n- **Consul** - This job leverages Consul service registrations for locating the registry\ninstances.\n\n## Necessary configuration\n\n### Create the host volume in the configuration\n\nCreate a folder on one of your Nomad clients to host your registry files. This\nexample uses `/opt/volumes/docker-registry`\n\n```shell-session\n$ mkdir -p /opt/volumes/docker-registry\n```\n\nAdd the host_volume information to the client stanza in the Nomad configuration.\n\n```hcl\nclient {\n# ...\n  host_volume \"docker-registry\" {\n    path = \"/opt/volumes/docker-registry\"\n    read_only = false\n  }\n}\n```\n\nRestart Nomad to read the new configuration.\n\n```shell-session\n$ systemctl restart nomad\n```\n\n### Add your registry to your daemon.json file\n\nIf you would like to use your registry with Nomad and do not want to configure\nSSL, you can add the following to the `daemon.json` file on each of your Nomad\nclients and restart Docker.\n\n```json\n{\n  \"insecure-registries\" : [\"registry.service.consul:5000\"],\n}\n```\n\nYou will need to do this on any machine that you would like to push to or pull\nfrom your registry.\n\n"
  },
  {
    "path": "applications/artifactory_oss/registry.nomad",
    "content": "job \"registry\" {\n  datacenters = [\"dc1\"]\n  priority    = 80\n\n  group \"docker\" {\n    network {\n      port \"registry\" {\n        to     = 5000\n        static = 5000\n      }\n    }\n\n    service {\n      name = \"registry\"\n      port = \"registry\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"registry\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"artifactory-registry\" {\n      type      = \"host\"\n      source    = \"artifactory-registry\"\n      read_only = false\n    }\n\n    task \"container\" {\n      driver = \"docker\"\n\n      volume_mount {\n        volume      = \"artifactory-registry\"\n        destination = \"/var/lib/registry\"\n      }\n\n      config {\n        image = \"docker.bintray.io/jfrog/artifactory-oss:latest\"\n        ports = [\"registry\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/cluster-broccoli/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"redis:7\"\n        ports          = [\"db\"]\n        auth_soft_fail = true\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/docker_registry/README.md",
    "content": "# Docker Registry\n\nThis job uses Nomad Host Volumes to provide an internal Docker registry which\ncan be used to host private containers for a Nomad cluster.\n\n## Prerequisites\n\n- **Consul** - This job leverages Consul service registrations for locating the registry\ninstances.\n\n## Necessary configuration\n\n### Create the host volume in the configuration\n\nCreate a folder on one of your Nomad clients to host your registry files. This\nexample uses `/opt/volumes/docker-registry`\n\n```shell-session\n$ mkdir -p /opt/volumes/docker-registry\n```\n\nAdd the host_volume information to the client stanza in the Nomad configuration.\n\n```hcl\nclient {\n# ...\n  host_volume \"docker-registry\" {\n    path = \"/opt/volumes/docker-registry\"\n    read_only = false\n  }\n}\n```\n\nRestart Nomad to read the new configuration.\n\n```shell-session\n$ systemctl restart nomad\n```\n\n### Add your registry to your daemon.json file\n\nIf you would like to use your registry with Nomad and do not want to configure\nSSL, you can add the following to the `daemon.json` file on each of your Nomad\nclients and restart Docker.\n\n```json\n{\n  \"insecure-registries\" : [\"registry.service.consul:5000\"],\n}\n```\n\nYou will need to do this on any machine that you would like to push to or pull\nfrom your registry.\n\n"
  },
  {
    "path": "applications/docker_registry/registry.nomad",
    "content": "job \"registry\" {\n  datacenters = [\"dc1\"]\n  priority    = 80\n\n  group \"docker\" {\n    network {\n      port \"registry\" {\n        to     = 5000\n        static = 5000\n      }\n    }\n\n    service {\n      name = \"registry\"\n      port = \"registry\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"registry\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"docker-registry\" {\n      type      = \"host\"\n      source    = \"docker-registry\"\n      read_only = false\n    }\n\n    task \"container\" {\n      driver = \"docker\"\n\n      volume_mount {\n        volume      = \"docker-registry\"\n        destination = \"/var/lib/registry\"\n      }\n\n      config {\n        image = \"registry\"\n        ports = [\"registry\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/docker_registry_v2/README.md",
    "content": "# Docker Registry\n\nThis job uses Nomad Host Volumes to provide an internal Docker registry which\ncan be used to host private containers for a Nomad cluster.\n\n## Prerequisites\n\n- **Consul** - This job leverages Consul service registrations for locating the registry\ninstances.\n\n## Necessary configuration\n\n### Create the host volume in the configuration\n\nCreate a folder on one of your Nomad clients to host your registry files. This\nexample uses `/opt/nomad/volumes/docker-registry`\n\n```shell-session\n$ mkdir -p /opt/nomad/volumes/docker-registry\n```\n\nAdd the host_volume information to the client stanza in the Nomad configuration.\n\n```hcl\nclient {\n# ...\n  host_volume \"docker-registry\" {\n    path = \"/opt/nomad/volumes/docker-registry\"\n    read_only = false\n  }\n}\n```\n\nRestart Nomad to read the new configuration.\n\n```shell-session\n$ systemctl restart nomad\n```\n\n### Add your registry to your daemon.json file\n\nIf you would like to use your registry with Nomad and do not want to configure\nSSL, you can add the following to the `daemon.json` file on each of your Nomad\nclients and restart Docker.\n\n```json\n{\n  \"insecure-registries\" : [\"registry.service.consul:5000\"],\n}\n```\n\nYou will need to do this on any machine that you would like to push to or pull\nfrom your registry.\n"
  },
  {
    "path": "applications/docker_registry_v2/htpasswd",
    "content": "user:$2y$05$kyEyguS/Sisz7SMjqKQZ1eQDCM7pSFiItkL9yiVIDOVyQfj8XTCAS\n"
  },
  {
    "path": "applications/docker_registry_v2/make_password.sh",
    "content": "#!/bin/bash\n\ndocker run --rm -it -v $(pwd):/out --entrypoint=\"htpasswd\" xmartlabs/htpasswd -Bbc /out/$1 $2 $3\n"
  },
  {
    "path": "applications/docker_registry_v2/registry.nomad",
    "content": "job \"registry\" {\n  datacenters = [\"dc1\"]\n  priority    = 80\n\n  group \"docker\" {\n    network {\n      port \"registry\" {\n        to     = 5000\n        static = 5000\n      }\n    }\n\n    service {\n      name = \"registry\"\n      port = \"registry\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"registry\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"docker-registry\" {\n      type      = \"host\"\n      source    = \"docker-registry\"\n      read_only = false\n    }\n\n    task \"container\" {\n      driver = \"docker\"\n\n      template {\n        destination = \"secrets/htpasswd\"\n        data = <<EOH\nuser:$2y$05$kyEyguS/Sisz7SMjqKQZ1eQDCM7pSFiItkL9yiVIDOVyQfj8XTCAS\nEOH\n      }\n\n      volume_mount {\n        volume      = \"docker-registry\"\n        destination = \"/var/lib/registry\"\n      }\n\n      env {\n        REGISTRY_AUTH=\"htpasswd\"\n        REGISTRY_AUTH_HTPASSWD_REALM=\"Registry Realm\"\n        REGISTRY_AUTH_HTPASSWD_PATH=\"/secrets/htpasswd\"\n      }\n\n      config {\n        image = \"registry\"\n        ports = [\"registry\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/docker_registry_v3/README.md",
    "content": "# Docker Registry\n\nThis job uses Nomad Host Volumes to provide an internal Docker registry which\ncan be used to host private containers for a Nomad cluster.\n\n## Prerequisites\n\n- **Nomad 1.4+** - This job leverages:\n  - Nomad service discovery for locating the registry instances.\n  - Nomad variables for maintaining the user authentication information\n\n## Necessary configuration\n\n### Create the host volume in the configuration\n\nCreate a folder on one of your Nomad clients to host your registry files. This\nexample uses `/opt/nomad/volumes/docker-registry`\n\n```shell-session\n$ mkdir -p /opt/nomad/volumes/docker-registry\n```\n\nAdd the host_volume information to the client stanza in the Nomad configuration.\n\n```hcl\nclient {\n# ...\n  host_volume \"docker-registry\" {\n    path      = \"/opt/nomad/volumes/docker-registry\"\n    read_only = false\n  }\n}\n```\n\nRestart Nomad to read the new configuration.\n\n```shell-session\n$ systemctl restart nomad\n```\n\n### Add your registry to your daemon.json file\n\nIf you would like to use your registry with Nomad and do not want to configure\nSSL, you can add the following to the `daemon.json` file on each of your Nomad\nclients and restart Docker.\n\n```json\n{\n  \"insecure-registries\" : [\"registry.service.consul:5000\"],\n}\n```\n\nYou will need to do this on any machine that you would like to push to or pull\nfrom your registry.\n"
  },
  {
    "path": "applications/docker_registry_v3/make_password.sh",
    "content": "#!/bin/bash\n\ncmd=\"htpasswd -Bbn $1 $2\"\nif ! [ -x \"$(command -v htpasswd)\" ]; then\n  if ! [ -x \"$(command -v docker)\" ]; then\n    echo 'Notice: this script requires htpasswd or docker.' >&2\n    exit 1\n  fi\n\n  echo 'Notice: htpasswd is not installed. Using docker to run it.' >&2\n  fetchedDocker=true\n  cmd=\"docker run --rm -it -v $(pwd):/out --entrypoint=\"htpasswd\" xmartlabs/htpasswd -Bbn $1 $2\"\nfi\n\nuser=$1\npassword=$(eval $cmd | tr -d \"\\n\"| tr \":\" \" \" | awk '{print $2}')\n\nvarPath=\"nomad/jobs/registry/docker/container\"\nnomad var get $varPath | nomad var put - \"$user\"=\"$password\"\n"
  },
  {
    "path": "applications/docker_registry_v3/registry.nomad",
    "content": "job \"registry\" {\n  datacenters = [\"dc1\"]\n  priority    = 80\n\n  group \"docker\" {\n    network {\n      port \"registry\" {\n        to     = 5000\n        static = 5000\n      }\n    }\n\n    service {\n      name = \"registry\"\n      port = \"registry\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"registry\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"docker-registry\" {\n      type      = \"host\"\n      source    = \"docker-registry\"\n      read_only = false\n    }\n\n    task \"container\" {\n      driver = \"docker\"\n\n      template {\n        destination = \"secrets/htpasswd\"\n        data = <<EOH\n{{ with nomadVar \"nomad/jobs/registry/docker/container\"}}{{range $K, $V := .}}{{printf \"%s:%s\\n\" $K $V}}{{end}}{{end}}\nEOH\n      }\n\n      volume_mount {\n        volume      = \"docker-registry\"\n        destination = \"/var/lib/registry\"\n      }\n\n      env {\n        REGISTRY_AUTH=\"htpasswd\"\n        REGISTRY_AUTH_HTPASSWD_REALM=\"Registry Realm\"\n        REGISTRY_AUTH_HTPASSWD_PATH=\"/secrets/htpasswd\"\n      }\n\n      config {\n        image = \"registry\"\n        ports = [\"registry\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/mariadb/mariadb.nomad",
    "content": "job \"mariadb\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  group \"bootstrap\" {\n    count = 1\n\n    network {\n       mode = \"bridge\"\n       port \"mysql\" {\n         to     = 3306\n       }\n    }\n\n    service {\n      name = \"mariadb-${NOMAD_ALLOC_ID}\"\n      port = \"mysql\"\n      check {\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"        \n      }\n    }\n\n    task \"mariadb-bootstrap\" {\n      driver = \"docker\"\n      user = \"root\"\n      config {\n        image = \"bitnami/mariadb-galera:10.5\"\n      }\n      env {\n        MARIADB_GALERA_NODE_NAME = \"localhost\"\n        MARIADB_GALERA_NODE_ADDRESS = \"${NOMAD_ADDRESS_mariadb-bootstrap}\"\n        MARIADB_GALERA_CLUSTER_BOOTSTRAP = \"yes\"\n        MARIADB_GALERA_CLUSTER_ADDRESS = \"${NOMAD_ADDRESS_mariadb-bootstrap}\"\n        MARIADB_GALERA_CLUSTER_NAME = \"my_galera\"\n        MARIADB_GALERA_MARIABACKUP_USER = \"my_mariabackup_user\"\n        MARIADB_GALERA_MARIABACKUP_PASSWORD = \"my_mariabackup_password\"\n        MARIADB_ROOT_PASSWORD = \"my_root_password\"\n        MARIADB_USER = \"my_user\"\n        MARIADB_PASSWORD = \"my_password\"\n        MARIADB_DATABASE = \"my_database\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/membrane-soa/README.md",
    "content": "Deploying a Java REST to SOAP Proxy in Connect\n\nTechnologies:\n\n- Consul Service Mesh\n- Consul Egress Gateways\n- Nomad Java Task Driver\n\nhttps://www.membrane-soa.org/service-proxy-doc/4.7/soap-quickstart.htm\n\n<https://www.membrane-soa.org/service-proxy-doc/4.7/rest2soap-gateway.htm>\n\n\nhttp://localhost:2000/bank/37050198\n\n\n\n\nservice-proxy.sh\n\n```\n#!/bin/bash\nhomeSet() {\n echo \"MEMBRANE_HOME variable is now set\"\n CLASSPATH=\"$MEMBRANE_HOME/conf\"\n CLASSPATH=\"$CLASSPATH:$MEMBRANE_HOME/starter.jar\"\n export CLASSPATH\n echo Membrane Router running...\n java  -classpath \"$CLASSPATH\" com.predic8.membrane.core.Starter -c proxies.xml\n \n}\n\nterminate() {\n\techo \"Starting of Membrane Router failed.\"\n\techo \"Please execute this script from the appropriate subfolder of MEMBRANE_HOME/examples/\"\n\t\n}\n\nhomeNotSet() {\n  echo \"MEMBRANE_HOME variable is not set\"\n\n  if [ -f  \"`pwd`/../../starter.jar\" ]\n    then \n    \texport MEMBRANE_HOME=\"`pwd`/../..\"\n    \thomeSet\t\n    else\n    \tterminate    \n  fi \n}\n\n\nif  [ \"$MEMBRANE_HOME\" ]  \n\tthen homeSet\n\telse homeNotSet\nfi\n\n```\n\n"
  },
  {
    "path": "applications/membrane-soa/soap-proxy-v1-linux.nomad",
    "content": "job \"soap-proxy\" {\n  datacenters = [\"dc1\"]\n\n  group \"membrane\" {\n    network {\n      port \"admin\" {\n        static = 9000\n      }\n      \n      port \"proxy\" {\n        static = 2000 \n      }\n    }\n\n    task \"membrane\" {\n      artifact {\n        source = \"https://github.com/membrane/service-proxy/releases/download/v4.7.3/membrane-service-proxy-4.7.3.zip\"\n        destination = \"local\"\n      }\n\n      template {\n        destination = \"local/proxy-conf/proxies.xml\"\n        data =<<EOD\n<spring:beans xmlns=\"http://membrane-soa.org/proxies/1/\"\n  xmlns:spring=\"http://www.springframework.org/schema/beans\"\n  xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.2.xsd\n              http://membrane-soa.org/proxies/1/ http://membrane-soa.org/schemas/proxies-1.xsd\">\n\n  <router>\n\n    <serviceProxy port=\"2000\">\n      <rest2Soap>   \n        <mapping regex=\"/bank/.*\" soapAction=\"\"\n          soapURI=\"/axis2/services/BLZService\" requestXSLT=\"./get2soap.xsl\"\n          responseXSLT=\"./strip-env.xsl\" />\n      </rest2Soap>\n      <target host=\"thomas-bayer.com\" />\n    </serviceProxy>\n    \n    <serviceProxy name=\"Console\" port=\"9000\">\n      <adminConsole />\n    </serviceProxy>\n\n  </router>\n  \n</spring:beans>\nEOD\n      }\n\n      template {\n        destination = \"local/proxy-conf/get2soap.xsl\"\n        data =<<EOD\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n                xmlns:s11=\"http://schemas.xmlsoap.org/soap/envelope/\">\n  <xsl:template match=\"/\">\n    <s11:Envelope >\n      <s11:Body>\n        <blz:getBank xmlns:blz=\"http://thomas-bayer.com/blz/\">\n          <blz:blz><xsl:value-of select=\"//path/component[2]\"/></blz:blz>\n        </blz:getBank>\n      
</s11:Body>\n    </s11:Envelope> \n  </xsl:template>\n</xsl:stylesheet>\nEOD\n      }\n\n      template {\n        destination = \"local/proxy-conf/strip-env.xsl\"\n        data =<<EOD\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n                xmlns:s11=\"http://schemas.xmlsoap.org/soap/envelope/\">\n                \n  <xsl:template match=\"/\">\n    <xsl:apply-templates select=\"//s11:Body/*\"/>\n  </xsl:template>\n  \n  <xsl:template match=\"@*|node()\">\n    <xsl:copy>\n      <xsl:apply-templates />\n    </xsl:copy>\n  </xsl:template> \n  \n  <!-- Get rid of the namespace prefixes in json. So\n    \n       ns1:getBank will be just getBank   \n  -->\n  <xsl:template match=\"*\">\n    <xsl:element name=\"{local-name()}\">\n      <xsl:apply-templates/>\n    </xsl:element>\n  </xsl:template>\n  \n</xsl:stylesheet>\nEOD\n      }\n\n      env {\n        MEMBRANE_HOME = \"/local/membrane-service-proxy-4.7.3\"\n      }\n\n      driver = \"java\"\n      config {\n        class = \"com.predic8.membrane.core.Starter\"\n        class_path = \"/local/membrane-service-proxy-4.7.3/conf:/local/membrane-service-proxy-4.7.3/starter.jar\"\n        args = [\"-c\",\"/local/proxy-conf/proxies.xml\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/membrane-soa/soap-proxy-v1-windows.nomad",
    "content": "job \"soap-proxy\" {\n  datacenters = [\"dc1\"]\n\n  group \"membrane\" {\n    network {\n      port \"admin\" {\n        static = 9000\n      }\n      \n      port \"proxy\" {\n        static = 2000 \n      }\n    }\n\n    task \"membrane\" {\n      artifact {\n        source = \"https://github.com/membrane/service-proxy/releases/download/v4.7.3/membrane-service-proxy-4.7.3.zip\"\n        destination = \"local\"\n      }\n\n      template {\n        destination = \"local/proxy-conf/proxies.xml\"\n        data =<<EOD\n<spring:beans xmlns=\"http://membrane-soa.org/proxies/1/\"\n  xmlns:spring=\"http://www.springframework.org/schema/beans\"\n  xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.2.xsd\n              http://membrane-soa.org/proxies/1/ http://membrane-soa.org/schemas/proxies-1.xsd\">\n\n  <router>\n\n    <serviceProxy port=\"2000\">\n      <rest2Soap>   \n        <mapping regex=\"/bank/.*\" soapAction=\"\"\n          soapURI=\"/axis2/services/BLZService\" requestXSLT=\"./get2soap.xsl\"\n          responseXSLT=\"./strip-env.xsl\" />\n      </rest2Soap>\n      <target host=\"thomas-bayer.com\" />\n    </serviceProxy>\n    \n    <serviceProxy name=\"Console\" port=\"9000\">\n      <adminConsole />\n    </serviceProxy>\n\n  </router>\n  \n</spring:beans>\nEOD\n      }\n\n      template {\n        destination = \"local/proxy-conf/get2soap.xsl\"\n        data =<<EOD\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n                xmlns:s11=\"http://schemas.xmlsoap.org/soap/envelope/\">\n  <xsl:template match=\"/\">\n    <s11:Envelope >\n      <s11:Body>\n        <blz:getBank xmlns:blz=\"http://thomas-bayer.com/blz/\">\n          <blz:blz><xsl:value-of select=\"//path/component[2]\"/></blz:blz>\n        </blz:getBank>\n      
</s11:Body>\n    </s11:Envelope> \n  </xsl:template>\n</xsl:stylesheet>\nEOD\n      }\n\n      template {\n        destination = \"local/proxy-conf/strip-env.xsl\"\n        data =<<EOD\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n                xmlns:s11=\"http://schemas.xmlsoap.org/soap/envelope/\">\n                \n  <xsl:template match=\"/\">\n    <xsl:apply-templates select=\"//s11:Body/*\"/>\n  </xsl:template>\n  \n  <xsl:template match=\"@*|node()\">\n    <xsl:copy>\n      <xsl:apply-templates />\n    </xsl:copy>\n  </xsl:template> \n  \n  <!-- Get rid of the namespace prefixes in json. So\n    \n       ns1:getBank will be just getBank   \n  -->\n  <xsl:template match=\"*\">\n    <xsl:element name=\"{local-name()}\">\n      <xsl:apply-templates/>\n    </xsl:element>\n  </xsl:template>\n  \n</xsl:stylesheet>\nEOD\n      }\n\n      env {\n        MEMBRANE_HOME = \"local/membrane-service-proxy-4.7.3\"\n      }\n\n      driver = \"java\"\n      config {\n        class = \"com.predic8.membrane.core.Starter\"\n        class_path = \"local/membrane-service-proxy-4.7.3/conf;/local/membrane-service-proxy-4.7.3/starter.jar\"\n        args = [\"-c\",\"local/proxy-conf/proxies.xml\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/membrane-soa/soap-proxy.nomad",
    "content": "locals {\n  membrane_home = \"/local/membrane-service-proxy-4.7.3\"\n  class_path = \"${local.membrane_home}/conf:${local.membrane_home}/starter.jar\"\n}\n\n\njob \"soap-proxy\" {\n  datacenters = [\"dc1\"]\n\n  group \"membrane\" {\n    network {\n      mode = \"bridge\"\n      \n      dns {\n        servers = [\"8.8.8.8\", \"8.8.4.4\"]\n      }\n\n      port \"admin\" {\n        to = 9000\n      }\n      \n      port \"proxy\" {\n        to = 2000 \n      }\n    }\n\n    task \"membrane\" {\n      artifact {\n        source = \"https://github.com/membrane/service-proxy/releases/download/v4.7.3/membrane-service-proxy-4.7.3.zip\"\n        destination = \"local\"\n      }\n\n      template {\n        destination = \"local/proxy-conf/proxies.xml\"\n        data =<<EOD\n<spring:beans xmlns=\"http://membrane-soa.org/proxies/1/\"\n  xmlns:spring=\"http://www.springframework.org/schema/beans\"\n  xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.2.xsd\n              http://membrane-soa.org/proxies/1/ http://membrane-soa.org/schemas/proxies-1.xsd\">\n\n  <router>\n\n    <serviceProxy port=\"2000\">\n      <rest2Soap>   \n        <mapping regex=\"/bank/.*\" soapAction=\"\"\n          soapURI=\"/axis2/services/BLZService\" requestXSLT=\"./get2soap.xsl\"\n          responseXSLT=\"./strip-env.xsl\" />\n      </rest2Soap>\n      <target host=\"thomas-bayer.com\" />\n    </serviceProxy>\n    \n    <serviceProxy name=\"Console\" port=\"9000\">\n      <adminConsole />\n    </serviceProxy>\n\n  </router>\n  \n</spring:beans>\nEOD\n      }\n\n      template {\n        destination = \"local/proxy-conf/get2soap.xsl\"\n        data =<<EOD\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n                xmlns:s11=\"http://schemas.xmlsoap.org/soap/envelope/\">\n 
 <xsl:template match=\"/\">\n    <s11:Envelope >\n      <s11:Body>\n        <blz:getBank xmlns:blz=\"http://thomas-bayer.com/blz/\">\n          <blz:blz><xsl:value-of select=\"//path/component[2]\"/></blz:blz>\n        </blz:getBank>\n      </s11:Body>\n    </s11:Envelope> \n  </xsl:template>\n</xsl:stylesheet>\nEOD\n      }\n\n      template {\n        destination = \"local/proxy-conf/strip-env.xsl\"\n        data =<<EOD\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n                xmlns:s11=\"http://schemas.xmlsoap.org/soap/envelope/\">\n                \n  <xsl:template match=\"/\">\n    <xsl:apply-templates select=\"//s11:Body/*\"/>\n  </xsl:template>\n  \n  <xsl:template match=\"@*|node()\">\n    <xsl:copy>\n      <xsl:apply-templates />\n    </xsl:copy>\n  </xsl:template> \n  \n  <!-- Get rid of the namespace prefixes in json. So\n    \n       ns1:getBank will be just getBank   \n  -->\n  <xsl:template match=\"*\">\n    <xsl:element name=\"{local-name()}\">\n      <xsl:apply-templates/>\n    </xsl:element>\n  </xsl:template>\n  \n</xsl:stylesheet>\nEOD\n      }\n\n      env {\n        MEMBRANE_HOME = \"/local/membrane-service-proxy-4.7.3\"\n      }\n\n      driver = \"java\"\n      config {\n        class = \"com.predic8.membrane.core.Starter\"\n        class_path = \"/local/membrane-service-proxy-4.7.3/conf:/local/membrane-service-proxy-4.7.3/starter.jar\"\n        args = [\"-c\",\"/local/proxy-conf/proxies.xml\"]\n      }\n\n      # driver = \"exec\"\n      # config {\n      #   command = \"/bin/bash\"\n      #   args = [\"-c\",\"while true; do sleep 500; done\"]\n      # }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/minio/README.md",
    "content": "# Minio S3-compatible Storage\n\nThis job uses Nomad Host Volumes to provide an internal s3 compatible storage\nenvironment which can be used to host private artifacts for a Nomad clusters.\n\n## Prerequisites\n\n- **Consul** - This job leverages Consul service registrations for locating the\n  MinIO instance.\n\n## Necessary configuration\n\n### Create the host volume in the configuration\n\nCreate a folder on one of your Nomad clients to host your registry files. This\nexample uses `/opt/volumes/minio-data`\n\n```shell-session\n$ mkdir -p /opt/volumes/minio-data\n```\n\nAdd the host_volume information to the client stanza in the Nomad configuration.\n\n```hcl\nclient {\n# ...\n  host_volume \"minio-data\" {\n    path = \"/opt/volumes/minio-data\"\n    read_only = false\n  }\n}\n```\n\nRestart Nomad to read the new configuration.\n\n```shell-session\n$ systemctl restart nomad\n```\n"
  },
  {
    "path": "applications/minio/minio.nomad",
    "content": "job \"minio\" {\n  datacenters = [\"dc1\"]\n  priority    = 80\n\n  group \"storage\" {\n    network {\n      port \"api\" {\n        to = 9000\n        static = 9000\n      }\n    }\n\n    service {\n      name = \"minio\"\n      port = \"api\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"api\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"minio-data\" {\n      type      = \"host\"\n      source    = \"minio-data\"\n      read_only = false\n    }\n\n    task \"minio\" {\n      driver = \"docker\"\n\n      env {\n        MINIO_ROOT_USER = \"AKIAIOSFODNN7EXAMPLE\"\n        MINIO_ROOT_PASSWORD = \"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\"\n      }\n\n      volume_mount {\n        volume      = \"minio-data\"\n        destination = \"/data\"\n      }\n\n      config {\n        image = \"minio/minio\"\n        args = [\"server\", \"/data\"]\n        ports = [\"api\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n\n# docker run -p 9000:9000 \\\n#   --name minio1 \\\n#   -v /mnt/data:/data \\\n#   -e \"MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE\" \\\n#   -e \"MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\" \\\n#   minio/minio server /data"
  },
  {
    "path": "applications/minio/secure-variables/README.md",
    "content": "# Minio S3-compatible Storage\n\nThis job uses Nomad Host Volumes to provide an internal s3 compatible storage\nenvironment which can be used to host private artifacts for a Nomad clusters.\n\n## Prerequisites\n\n- **Nomad 1.4** - This job leverages Nomad service registrations for locating the\n  MinIO instance and used Nomad Variables.\n\n## Necessary configuration\n\n### Create the host volume in the configuration\n\nCreate a folder on one of your Nomad clients to host your registry files. This\nexample uses `/opt/volumes/minio-data`\n\n```shell-session\n$ mkdir -p /opt/volumes/minio-data\n```\n\nAdd the host_volume information to the client stanza in the Nomad configuration.\n\n```hcl\nclient {\n# ...\n  host_volume \"minio-data\" {\n    path = \"/opt/volumes/minio-data\"\n    read_only = false\n  }\n}\n```\n\nRestart Nomad to read the new configuration.\n\n```shell-session\n$ systemctl restart nomad\n```\n"
  },
  {
    "path": "applications/minio/secure-variables/minio-data/.gitkeep",
    "content": ""
  },
  {
    "path": "applications/minio/secure-variables/minio.nomad",
    "content": "# minio is an AWS S3-compatible storage engine\n\njob \"minio\" {\n  datacenters = [\"dc1\"]\n  priority    = 80\n\n  group \"storage\" {\n    network {\n      port \"api\" {\n        to     = 9000\n        static = 9000\n      }\n    }\n\n    service {\n      name     = \"minio\"\n      port     = \"api\"\n      provider = \"nomad\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"api\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"minio-data\" {\n      type      = \"host\"\n      source    = \"minio-data\"\n      read_only = false\n    }\n\n    task \"minio\" {\n      driver = \"docker\"\n\n      template {\n        destination = \"${NOMAD_SECRETS_DIR}/env.vars\"\n        env         = true\n        change_mode = \"restart\"\n        data =<<EOF\n{{- with nomadVar \"nomad/jobs/minio/storage/minio\" -}}\nMINIO_ROOT_USER = {{.root_user}}\nMINIO_ROOT_PASSWORD = {{.root_password}}\n{{- end -}}\nEOF\n      }\n      volume_mount {\n        volume      = \"minio-data\"\n        destination = \"/data\"\n      }\n\n      config {\n        image = \"minio/minio\"\n        args  = [\"server\", \"/data\"]\n        ports = [\"api\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/minio/secure-variables/start.sh",
    "content": "#! /usr/bin/env bash\n\nmkdir -p minio-data\nsed \"s|«/absolute/path/to»|$(pwd)|g\" volume.hcl > .volume_patch.hcl\nnohup nomad agent -dev -config=.volume_patch.hcl -acl-enabled >nomad.log 2>&1 &\n\necho -n $! > .nomad.pid\necho \"Nomad PID is $(cat .nomad.pid)\"\ndisown\n\n# wait for leadership\nsleep 3\n\necho '{\"BootstrapSecret\": \"2b778dd9-f5f1-6f29-b4b4-9a5fa948757a\"}' | nomad operator api /v1/acl/bootstrap\necho ''\n\nexport NOMAD_TOKEN=2b778dd9-f5f1-6f29-b4b4-9a5fa948757a\necho -n ${NOMAD_TOKEN} > .nomad.token\n\n\nnomad var put nomad/jobs/minio/storage/minio \\\n  root_user=\"AKIAIOSFODNN7EXAMPLE\" \\\n  root_password=\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\"\n\nnomad job run -detach minio.nomad\n\necho 'export NOMAD_TOKEN=2b778dd9-f5f1-6f29-b4b4-9a5fa948757a'\n"
  },
  {
    "path": "applications/minio/secure-variables/stop.sh",
    "content": "#! /usr/bin/env bash\n\nPID=$(cat .nomad.pid)\necho \"Stopping Nomad (pid: ${PID})\"\nkill ${PID}\nrm -f .nomad.pid\nrm -f .nomad.token\nrm -f .volume_patch.hcl\nrm -f nomad.log\nrm -rf minio-data\necho \"Done.\"\n"
  },
  {
    "path": "applications/minio/secure-variables/volume.hcl",
    "content": "# The host volume configuration for the minio task. The start.sh\n# script makes a derived copy of this file with the placeholder\n# «/absolute/path/to» replaced by the output of `pwd`.\n\nclient {\n  host_volume \"minio-data\" {\n    path      = \"«/absolute/path/to»/minio-data\"\n    read_only = false\n  }\n}\n"
  },
  {
    "path": "applications/postgres/README.md",
    "content": "# Stateful example of Postgres with Host Volumes\n\n## Configure a supporting host volume\n\nThis job uses a host volume named `pg-data`. On one of your Nomad clients,\neither create an additional configuration file (if your `config` flag points\nto a directory) or add a `host_volume` stanza to your existing client\nconfiguration similar to the following.\n\n```hcl\nclient {\n  host_volume \"pg-data\" {\n    path = \"/opt/nomad/volumes/pg-data\"\n    read_only = false\n  }\n}\n```\n\nCreate the directory to support the volume.\n\n```shell-session\n$ mkdir -p /opt/nomad/volumes/pg-data\n```\n\nRestart Nomad to enable the new host volume.\n\n```shell-session\n$ systemctl restart nomad\n```\n\nVerify that the host volume is available.\n\n```shell-session\n$ nomad node status -self -verbose\n```\n\nOnce the client finishes starting, you should see the `pg-data` host volume\nlisted in the **Host Volumes** section of the output.\n\n```\nHost Volumes\nName           ReadOnly  Source\npg-data        false     /opt/nomad/volumes/pg-data\n```\n\nRun the job.\n\n```shell-session\n$ nomad job run postgres.nomad\n```\n\nOnce the job starts, check the allocation status to determine which IP address\nand port you need to connect to.\n\nConnect to the instance using a Postgres client at the scheduled IP address\nand port. Use the user `postgres` and the password `mysecretpassword`.\n"
  },
  {
    "path": "applications/postgres/postgres.nomad",
    "content": "job \"postgres.nomad\" {\n  datacenters = [\"dc1\"]\n\n  group \"database\" {\n    network {\n      port \"db\" {\n        to = 5432\n      }\n    }\n\n    service {\n      name = \"db\"\n      port = \"db\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"db\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"pg-data\" {\n      type      = \"host\"\n      source    = \"pg-data\"\n      read_only = false\n    }\n\n    task \"postgres\" {\n      driver = \"docker\"\n\n      env {\n        POSTGRES_PASSWORD=\"mysecretpassword\"\n#        POSTGRES_USER=\"\"\n#        POSTGRES_DB=\"\"\n        PGDATA=\"/var/lib/postgresql/data/pgdata\"\n      }\n\n      volume_mount {\n        volume      = \"pg-data\"\n        destination = \"/var/lib/postgresql/data\"\n      }\n\n      config {\n        image = \"postgres\"\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}"
  },
  {
    "path": "applications/prometheus/README.md",
    "content": "# Prometheus\n\nOn each client, you will need a firewall rule that allows the Docker\ncontainers to talk to the local Consul agent.\n\n```shell-session\n$ firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address=172.17.0.0/16 accept' && firewall-cmd --reload\n```\n\n## Connecting to the instances\n"
  },
  {
    "path": "applications/prometheus/fabio-service.nomad",
    "content": "# For ACL-enabled Consul Clusters, you need to specify a Consul ACL token down\n# in the `fabio-linux-amd64` task's env stanza. Uncomment the example and\n# replace the token with a valid Consul ACL token.\n\njob \"fabio\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n\n  update {\n    stagger = \"5s\"\n    max_parallel = 1\n  }\n\n  group \"fabio-linux-amd64\" {\n    network {\n      port \"http\" {\n        static = \"9999\"\n      }\n\n      port \"ui\" {\n        static = \"9998\"\n      }\n    }\n\n    task \"fabio-linux-amd64\" {\n      constraint {\n        attribute = \"${attr.cpu.arch}\"\n        operator  = \"=\"\n        value     = \"amd64\"\n      }\n\n      constraint {\n        attribute = \"${attr.kernel.name}\"\n        operator  = \"=\"\n        value     = \"linux\"\n      }\n\n      artifact {\n        source = \"https://github.com/fabiolb/fabio/releases/download/v1.5.15/fabio-1.5.15-go1.15.5-linux_amd64\"\n        options {\n          checksum = \"sha256:14c7a02ca95fb00a4f3010eab4e3c0e354a3f4953d2a793cb800332012f42066\"\n        }\n      }\n\n      driver = \"exec\"\n\n      config {\n        command = \"fabio-1.5.15-go1.15.5-linux_amd64\"\n      }\n\n      env {\n#        FABIO_REGISTRY_CONSUL_TOKEN = \"c62d8564-c0c5-8dfe-3e75-005debbd0e40\"\n      }\n\n      resources {\n        cpu = 200\n        memory = 32\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "applications/prometheus/grafana/README.md",
    "content": "Thanks to [Nextty](https://grafana.com/orgs/derekamz) for two great Grafana dashboards to start with:\n\n* Nomad Jobs - https://grafana.com/dashboards/6281\n* Nomad Cluster - \n"
  },
  {
    "path": "applications/prometheus/grafana/nomad_jobs.json",
    "content": "{\n  \"__inputs\": [\n    {\n      \"name\": \"DS_PROMETHEUS\",\n      \"label\": \"prometheus\",\n      \"description\": \"\",\n      \"type\": \"datasource\",\n      \"pluginId\": \"prometheus\",\n      \"pluginName\": \"Prometheus\"\n    }\n  ],\n  \"__requires\": [\n    {\n      \"type\": \"grafana\",\n      \"id\": \"grafana\",\n      \"name\": \"Grafana\",\n      \"version\": \"5.1.2\"\n    },\n    {\n      \"type\": \"panel\",\n      \"id\": \"graph\",\n      \"name\": \"Graph\",\n      \"version\": \"5.0.0\"\n    },\n    {\n      \"type\": \"datasource\",\n      \"id\": \"prometheus\",\n      \"name\": \"Prometheus\",\n      \"version\": \"5.0.0\"\n    }\n  ],\n  \"annotations\": {\n    \"list\": [\n      {\n        \"builtIn\": 1,\n        \"datasource\": \"-- Grafana --\",\n        \"enable\": true,\n        \"hide\": true,\n        \"iconColor\": \"rgba(0, 211, 255, 1)\",\n        \"name\": \"Annotations & Alerts\",\n        \"type\": \"dashboard\"\n      }\n    ]\n  },\n  \"editable\": true,\n  \"gnetId\": 6281,\n  \"graphTooltip\": 0,\n  \"id\": null,\n  \"iteration\": 1527401878265,\n  \"links\": [],\n  \"panels\": [\n    {\n      \"aliasColors\": {},\n      \"bars\": false,\n      \"dashLength\": 10,\n      \"dashes\": false,\n      \"datasource\": \"${DS_PROMETHEUS}\",\n      \"fill\": 1,\n      \"gridPos\": {\n        \"h\": 6,\n        \"w\": 12,\n        \"x\": 0,\n        \"y\": 0\n      },\n      \"id\": 2,\n      \"legend\": {\n        \"avg\": false,\n        \"current\": false,\n        \"max\": false,\n        \"min\": false,\n        \"show\": true,\n        \"total\": false,\n        \"values\": false\n      },\n      \"lines\": true,\n      \"linewidth\": 1,\n      \"links\": [],\n      \"nullPointMode\": \"null\",\n      \"percentage\": false,\n      \"pointradius\": 5,\n      \"points\": false,\n      \"renderer\": \"flot\",\n      \"repeat\": \"host\",\n      \"repeatDirection\": \"v\",\n      \"seriesOverrides\": 
[],\n      \"spaceLength\": 10,\n      \"stack\": false,\n      \"steppedLine\": false,\n      \"targets\": [\n        {\n          \"expr\": \"avg(nomad_client_allocs_cpu_total_percent{host=~\\\"$host\\\"}) by(exported_job, task)\",\n          \"format\": \"time_series\",\n          \"interval\": \"\",\n          \"intervalFactor\": 1,\n          \"legendFormat\": \"{{task}}\",\n          \"refId\": \"A\"\n        }\n      ],\n      \"thresholds\": [],\n      \"timeFrom\": null,\n      \"timeShift\": null,\n      \"title\": \"CPU Usage Percent - $host\",\n      \"tooltip\": {\n        \"shared\": true,\n        \"sort\": 0,\n        \"value_type\": \"individual\"\n      },\n      \"type\": \"graph\",\n      \"xaxis\": {\n        \"buckets\": null,\n        \"mode\": \"time\",\n        \"name\": null,\n        \"show\": true,\n        \"values\": []\n      },\n      \"yaxes\": [\n        {\n          \"decimals\": 3,\n          \"format\": \"percentunit\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        },\n        {\n          \"format\": \"short\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        }\n      ],\n      \"yaxis\": {\n        \"align\": false,\n        \"alignLevel\": null\n      }\n    },\n    {\n      \"aliasColors\": {},\n      \"bars\": false,\n      \"dashLength\": 10,\n      \"dashes\": false,\n      \"datasource\": \"${DS_PROMETHEUS}\",\n      \"fill\": 1,\n      \"gridPos\": {\n        \"h\": 6,\n        \"w\": 12,\n        \"x\": 12,\n        \"y\": 0\n      },\n      \"id\": 3,\n      \"legend\": {\n        \"avg\": false,\n        \"current\": false,\n        \"max\": false,\n        \"min\": false,\n        \"show\": true,\n        \"total\": false,\n        \"values\": false\n      },\n      \"lines\": true,\n      \"linewidth\": 1,\n      \"links\": [],\n 
     \"nullPointMode\": \"null\",\n      \"percentage\": false,\n      \"pointradius\": 5,\n      \"points\": false,\n      \"renderer\": \"flot\",\n      \"repeat\": \"host\",\n      \"repeatDirection\": \"v\",\n      \"seriesOverrides\": [],\n      \"spaceLength\": 10,\n      \"stack\": false,\n      \"steppedLine\": false,\n      \"targets\": [\n        {\n          \"expr\": \"avg(nomad_client_allocs_cpu_total_ticks{host=~\\\"$host\\\"}) by(exported_job, task)\",\n          \"format\": \"time_series\",\n          \"interval\": \"\",\n          \"intervalFactor\": 1,\n          \"legendFormat\": \"{{task}}\",\n          \"refId\": \"A\"\n        }\n      ],\n      \"thresholds\": [],\n      \"timeFrom\": null,\n      \"timeShift\": null,\n      \"title\": \"CPU Total Ticks - $host\",\n      \"tooltip\": {\n        \"shared\": true,\n        \"sort\": 0,\n        \"value_type\": \"individual\"\n      },\n      \"type\": \"graph\",\n      \"xaxis\": {\n        \"buckets\": null,\n        \"mode\": \"time\",\n        \"name\": null,\n        \"show\": true,\n        \"values\": []\n      },\n      \"yaxes\": [\n        {\n          \"decimals\": 3,\n          \"format\": \"timeticks\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        },\n        {\n          \"format\": \"short\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        }\n      ],\n      \"yaxis\": {\n        \"align\": false,\n        \"alignLevel\": null\n      }\n    },\n    {\n      \"aliasColors\": {},\n      \"bars\": false,\n      \"dashLength\": 10,\n      \"dashes\": false,\n      \"datasource\": \"${DS_PROMETHEUS}\",\n      \"fill\": 1,\n      \"gridPos\": {\n        \"h\": 6,\n        \"w\": 12,\n        \"x\": 0,\n        \"y\": 6\n      },\n      \"id\": 6,\n      \"legend\": {\n        \"avg\": false,\n       
 \"current\": false,\n        \"max\": false,\n        \"min\": false,\n        \"show\": true,\n        \"total\": false,\n        \"values\": false\n      },\n      \"lines\": true,\n      \"linewidth\": 1,\n      \"links\": [],\n      \"nullPointMode\": \"null\",\n      \"percentage\": false,\n      \"pointradius\": 5,\n      \"points\": false,\n      \"renderer\": \"flot\",\n      \"repeat\": \"host\",\n      \"repeatDirection\": \"v\",\n      \"seriesOverrides\": [],\n      \"spaceLength\": 10,\n      \"stack\": false,\n      \"steppedLine\": false,\n      \"targets\": [\n        {\n          \"expr\": \"avg(nomad_client_allocs_memory_rss{host=~\\\"$host\\\"}) by(exported_job, task)\",\n          \"format\": \"time_series\",\n          \"interval\": \"\",\n          \"intervalFactor\": 1,\n          \"legendFormat\": \"{{task}}\",\n          \"refId\": \"A\"\n        }\n      ],\n      \"thresholds\": [],\n      \"timeFrom\": null,\n      \"timeShift\": null,\n      \"title\": \"RSS - $host\",\n      \"tooltip\": {\n        \"shared\": true,\n        \"sort\": 0,\n        \"value_type\": \"individual\"\n      },\n      \"type\": \"graph\",\n      \"xaxis\": {\n        \"buckets\": null,\n        \"mode\": \"time\",\n        \"name\": null,\n        \"show\": true,\n        \"values\": []\n      },\n      \"yaxes\": [\n        {\n          \"decimals\": 3,\n          \"format\": \"decbytes\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        },\n        {\n          \"format\": \"short\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        }\n      ],\n      \"yaxis\": {\n        \"align\": false,\n        \"alignLevel\": null\n      }\n    },\n    {\n      \"aliasColors\": {},\n      \"bars\": false,\n      \"dashLength\": 10,\n      \"dashes\": false,\n      \"datasource\": 
\"${DS_PROMETHEUS}\",\n      \"fill\": 1,\n      \"gridPos\": {\n        \"h\": 6,\n        \"w\": 12,\n        \"x\": 12,\n        \"y\": 6\n      },\n      \"id\": 7,\n      \"legend\": {\n        \"avg\": false,\n        \"current\": false,\n        \"max\": false,\n        \"min\": false,\n        \"show\": true,\n        \"total\": false,\n        \"values\": false\n      },\n      \"lines\": true,\n      \"linewidth\": 1,\n      \"links\": [],\n      \"nullPointMode\": \"null\",\n      \"percentage\": false,\n      \"pointradius\": 5,\n      \"points\": false,\n      \"renderer\": \"flot\",\n      \"repeat\": \"host\",\n      \"repeatDirection\": \"v\",\n      \"seriesOverrides\": [],\n      \"spaceLength\": 10,\n      \"stack\": false,\n      \"steppedLine\": false,\n      \"targets\": [\n        {\n          \"expr\": \"avg(nomad_client_allocs_memory_cache{host=~\\\"$host\\\"}) by(exported_job, task)\",\n          \"format\": \"time_series\",\n          \"interval\": \"\",\n          \"intervalFactor\": 1,\n          \"legendFormat\": \"{{task}}\",\n          \"refId\": \"A\"\n        }\n      ],\n      \"thresholds\": [],\n      \"timeFrom\": null,\n      \"timeShift\": null,\n      \"title\": \"Memory Cache - $host\",\n      \"tooltip\": {\n        \"shared\": true,\n        \"sort\": 0,\n        \"value_type\": \"individual\"\n      },\n      \"type\": \"graph\",\n      \"xaxis\": {\n        \"buckets\": null,\n        \"mode\": \"time\",\n        \"name\": null,\n        \"show\": true,\n        \"values\": []\n      },\n      \"yaxes\": [\n        {\n          \"decimals\": 3,\n          \"format\": \"decbytes\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        },\n        {\n          \"format\": \"short\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        }\n      ],\n 
     \"yaxis\": {\n        \"align\": false,\n        \"alignLevel\": null\n      }\n    }\n  ],\n  \"schemaVersion\": 16,\n  \"style\": \"dark\",\n  \"tags\": [],\n  \"templating\": {\n    \"list\": [\n      {\n        \"allValue\": null,\n        \"current\": {},\n        \"datasource\": \"${DS_PROMETHEUS}\",\n        \"hide\": 0,\n        \"includeAll\": false,\n        \"label\": \"DC\",\n        \"multi\": false,\n        \"name\": \"datacenter\",\n        \"options\": [],\n        \"query\": \"label_values(nomad_client_uptime, datacenter)\",\n        \"refresh\": 1,\n        \"regex\": \"\",\n        \"sort\": 0,\n        \"tagValuesQuery\": \"\",\n        \"tags\": [],\n        \"tagsQuery\": \"\",\n        \"type\": \"query\",\n        \"useTags\": false\n      },\n      {\n        \"allValue\": null,\n        \"current\": {},\n        \"datasource\": \"${DS_PROMETHEUS}\",\n        \"hide\": 0,\n        \"includeAll\": true,\n        \"label\": \"Host\",\n        \"multi\": true,\n        \"name\": \"host\",\n        \"options\": [],\n        \"query\": \"label_values(nomad_client_uptime{datacenter=~\\\"$datacenter\\\"}, host)\",\n        \"refresh\": 2,\n        \"regex\": \"\",\n        \"sort\": 0,\n        \"tagValuesQuery\": \"\",\n        \"tags\": [],\n        \"tagsQuery\": \"\",\n        \"type\": \"query\",\n        \"useTags\": false\n      }\n    ]\n  },\n  \"time\": {\n    \"from\": \"now-6h\",\n    \"to\": \"now\"\n  },\n  \"timepicker\": {\n    \"refresh_intervals\": [\n      \"5s\",\n      \"10s\",\n      \"30s\",\n      \"1m\",\n      \"5m\",\n      \"15m\",\n      \"30m\",\n      \"1h\",\n      \"2h\",\n      \"1d\"\n    ],\n    \"time_options\": [\n      \"5m\",\n      \"15m\",\n      \"1h\",\n      \"6h\",\n      \"12h\",\n      \"24h\",\n      \"2d\",\n      \"7d\",\n      \"30d\"\n    ]\n  },\n  \"timezone\": \"\",\n  \"title\": \"Nomad Jobs\",\n  \"uid\": \"TvqbbhViz\",\n  \"version\": 12,\n  \"description\": \"Nomad Jobs 
metrics\"\n}\n"
  },
  {
    "path": "applications/prometheus/node-exporter.nomad",
    "content": "# The Prometheus Node Exporter needs access to the proc filesystem which is not\n# mounted into the exec jail, so it requires the raw_exec driver to run.\n\njob \"prometheus-node-exporter\" {\n  datacenters = [\"dc1\"]\n  type        = \"system\"\n\n  group \"system\" {\n    network {\n      port \"exporter\" {\n        static = 9100 \n      }\n    }\n\n    service {\n      name = \"node-exporter\"\n      tags = []\n      port = \"exporter\"\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"node-exporter\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"local/node_exporter-0.18.1.linux-amd64/node_exporter\"\n        args = [\n          \"--web.listen-address=:${NOMAD_PORT_exporter}\"\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz\"\n        destination = \"local\"\n        options { \n          checksum = \"sha256:b2503fd932f85f4e5baf161268854bf5d22001869b84f00fd2d1f57b51b72424\"\n        }\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}"
  },
  {
    "path": "applications/prometheus/prometheus.nomad",
    "content": "# For ACL-enabled Consul Clusters, you need to specify a Consul ACL token down\n# in the `prometheus` task's scrape config.\n\njob \"prometheus\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  update {\n    max_parallel = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"3m\"\n    auto_revert = false\n    canary = 0\n  }\n  group \"monitoring\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay = \"25s\"\n      mode = \"delay\"\n    }\n    network {\n      port \"prometheus_ui\" {\n        to = 9090 \n      }\n      port \"grafana_ui\" {\n        to = 3000\n      }\n    }\n\n    service {\n      name = \"prometheus-ui\"\n      #tags = [\"urlprefix-/prometheus\"]\n      tags = [\"urlprefix-/prometheus strip=/prometheus\"]\n      port = \"prometheus_ui\"\n      check {\n        name     = \"prometheus_ui port alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    service {\n      name = \"grafana-ui\"\n      port = \"grafana_ui\"\n      tags = [\"urlprefix-/grafana strip=/grafana\"] \n      check {\n        name     = \"grafana-ui port alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n    ephemeral_disk { size = 1000 }\n    task \"grafana\" {\n      artifact {\n        source=\"https://gist.githubusercontent.com/angrycub/046cee11bd3d8c4ab9a3819646c9660c/raw/c699095c2cb25b896e2c709da588b668ce82f8b5/prometheus_nomad.json\"\n        destination=\"local/provisioning/dashboards/dashs\"\n      }\n      template {\n        change_mode=\"noop\"\n        destination=\"local/provisioning/dashboards/file_provider.yml\"\n        data = <<EOH\napiVersion: 1\n\nproviders:\n- name: 'default'\n  orgId: 1\n  folder: ''\n  type: file\n  disableDeletion: false\n  updateIntervalSeconds: 10 #how often Grafana will scan for changed dashboards\n  options:\n    path: {{ env \"NOMAD_TASK_DIR\" 
}}/provisioning/dashboards/dashs\nEOH\n\n      }\n      template {\n        change_mode=\"noop\"\n        destination=\"local/provisioning/datasources/prometheus_datasource.yml\"\n        data = <<EOH\napiVersion: 1\n\ndatasources:\n  - name: Prometheus\n    type: prometheus\n    access: proxy\n    url: http://{{ env \"NOMAD_ADDR_prometheus_ui\" }}\nEOH\n      }\n      env {\n        GF_SERVER_ROOT_URL = \"http://127.0.0.1:9999/grafana/\"\n        GF_PATHS_PROVISIONING =\"/${NOMAD_TASK_DIR}/provisioning\"\n      }\n      driver = \"docker\"\n      config {\n        image = \"grafana/grafana:6.1.4\"\n        ports = [\"grafana_ui\"]\n      }\n    }\n\n    task \"prometheus\" {\n      template  {\n        change_mode = \"noop\"\n        destination=\"local/prometheus.yml\"\n        data = <<EOH\n---\nglobal:\n  scrape_interval:     15s\nscrape_configs:\n  - job_name: 'prometheus'\n    scrape_interval: 5s\n    static_configs:\n      - targets: ['localhost:9090']\n\n  - job_name: 'nomad'\n    scrape_interval: 10s\n    metrics_path: /v1/metrics\n    params:\n        format: ['prometheus']\n    consul_sd_configs:\n      - server: '{{ env \"NOMAD_IP_prometheus_ui\" }}:8500'\n#        token: \"c62d8564-c0c5-8dfe-3e75-005debbd0e40\"\n        services:\n          - \"nomad\"\n          - \"nomad-client\"\n    relabel_configs:\n      - source_labels: ['__meta_consul_tags']\n        regex: .*,http,.*\n        action: keep\nEOH\n\n      }\n\n      driver = \"docker\"\n      config {\n        image = \"prom/prometheus:v2.9.1\"\n        args = [\n          \"--web.external-url=http://127.0.0.1:9999/prometheus\",\n          \"--web.route-prefix=/\",\n          \"--config.file=/local/prometheus.yml\"     \n        ]\n        ports = [\"prometheus_ui\"]\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/vms/freedos/.gitignore",
    "content": "*.img\n\n\n# Created by https://www.toptal.com/developers/gitignore/api/macos\n# Edit at https://www.toptal.com/developers/gitignore?templates=macos\n\n### macOS ###\n# General\n.DS_Store\n.AppleDouble\n.LSOverride\n\n# Icon must end with two \\r\nIcon\n\n\n# Thumbnails\n._*\n\n# Files that might appear in the root of a volume\n.DocumentRevisions-V100\n.fseventsd\n.Spotlight-V100\n.TemporaryItems\n.Trashes\n.VolumeIcon.icns\n.com.apple.timemachine.donotpresent\n\n# Directories potentially created on remote AFP share\n.AppleDB\n.AppleDesktop\nNetwork Trash Folder\nTemporary Items\n.apdisk\n\n# End of https://www.toptal.com/developers/gitignore/api/macos\n"
  },
  {
    "path": "applications/vms/freedos/README.md",
    "content": "# FreeDOS VM\n\nThis job fetches a small remote VM image and starts it in your Nomad cluster. It\nalso contains a task that starts a browser-based VNC viewer.\n\nTODO: This job requires network namespaces for QEMU, which currently does not\nwork in a released version of Nomad.\n"
  },
  {
    "path": "applications/vms/freedos/freedos.img.tgz.SHASUM",
    "content": "8d2817126bf46ba2b4fca0b0c49eed2cc208c6f6448651e82c6d973fcba36569  freedos.img.tgz\n"
  },
  {
    "path": "applications/vms/freedos/freedos.nomad",
    "content": "job \"freedos\" {\n  datacenters = [\"dc1\"]\n\n  group \"g1\" {\n    network {\n      mode = \"bridge\"\n      port \"webvnc\" {}\n    }\n\n    service {\n      name = \"freedos\"\n      tags = [\"sample\"]\n      port = \"webvnc\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"webvnc\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"novnc\" {\n      driver = \"docker\"\n\n      env {\n        NOVNC_PORT      = \"${NOMAD_PORT_webvnc}\"\n        VNC_SERVER_IP   = \"127.0.0.1\"\n        VNC_SERVER_PORT = \"5901\"\n      }\n\n      config {\n        image = \"voiselle/novnc\"\n        ports = [\"webvnc\"]\n      }\n    }\n\n    task \"freedos\" {\n\n      artifact {\n        source      = \"https://github.com/angrycub/nomad_example_jobs/raw/main/applications/vms/freedos/freedos.img.tgz\"\n        destination = \"local\"\n        options {\n          checksum  = \"sha256:8d2817126bf46ba2b4fca0b0c49eed2cc208c6f6448651e82c6d973fcba36569\"\n        }\n      }\n\n      driver = \"qemu\"\n\n      config {\n        image_path  = \"local/freedos.img\"\n        accelerator = \"kvm\"\n        args = [\n          \"-vnc\", \"127.0.0.1:1\"\n        ]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/vms/tinycore/README.md",
    "content": "# TinyCore QEMU example\n\nThis sample starts a TinyCore Linux VM configured with the SSH daemon\nenabled. It performs port forwarding through QEMU so that Nomad can\ndynamically assign an HTTP and an SSH port for the VM.\n\nYou will need to serve the `tinycore.qcow2` image somewhere it can be\nretrieved using the artifact stanza.\n"
  },
  {
    "path": "applications/vms/tinycore/tc_ssh.nomad",
    "content": "job \"j1\" {\n  datacenters = [\"dc1\"]\n\n  group \"g1\" {\n    network {\n      mode = \"bridge\"\n      port \"http\" {\n        to = 80\n      }\n      port \"ssh\" {\n        to = 23\n      }\n      port \"webvnc\" {}\n    }\n\n    service {\n      tags = [\"tag1\"]\n      port = \"http\"\n\n      check {\n        type     = \"http\"\n        port     = \"http\"\n        path     = \"/index.html\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"novnc\" {\n      driver = \"docker\"\n\n      env {\n        NOVNC_PORT      = \"${NOMAD_PORT_webvnc}\"\n        VNC_SERVER_IP   = \"127.0.0.1\"\n        VNC_SERVER_PORT = \"5901\"\n      }\n\n      config {\n        image = \"voiselle/novnc\"\n        ports = [\"webvnc\"]\n      }\n    }\n\n    task \"t1\" {\n      template {\n        data = <<-EOH\n      Guest System\n      EOH\n\n        destination = \"local/index.html\"\n      }\n\n      artifact {\n        source = \"http://10.0.0.188:8000/tinycore.qcow2.tgz\"\n      }\n\n      driver = \"qemu\"\n\n      config {\n        image_path = \"local/tinycore.qcow2\"\n\n        ## Comment out the following line if KVM is not available on your system\n        accelerator = \"kvm\"\n\n        args = [\n          \"-drive\", \"file=fat:rw:/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/${NOMAD_TASK_NAME}/local,format=raw,media=disk\",\n        ]\n        ports = [\"ssh\", \"http\"]\n        vnc {\n          enabled = true\n          ip      = \"127.0.0.1\"\n          display = 1\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/wordpress/README.md",
    "content": "# WordPress\n\nThis job demonstrates several useful patterns for creating Nomad jobs:\n\n- Nomad Host Volumes for persistent storage\n- Using a prestart task to wait until a dependency is available\n- Template driven configuration to minimize static port references\n\n## Prerequisites\n\n- **Consul** - This job leverages Consul service registrations to locate the\n  supporting MySQL instance.\n\n## Necessary configuration\n\n### Create the host volume in the configuration\n\nCreate a folder on one of your Nomad clients to hold the database files. This\nexample uses `/opt/volumes/my-website-db`.\n\n```shell-session\n$ mkdir -p /opt/volumes/my-website-db\n```\n\nAdd the `host_volume` information to the client stanza in the Nomad configuration.\n\n```hcl\nclient {\n# ...\n  host_volume \"my-website-db\" {\n    path = \"/opt/volumes/my-website-db\"\n    read_only = false\n  }\n}\n```\n\nRestart Nomad to read the new configuration.\n\n```shell-session\n$ systemctl restart nomad\n```\n"
  },
  {
    "path": "applications/wordpress/distributed/README.md",
    "content": "# WordPress\n\nThis job demonstrates several useful patterns for creating Nomad jobs:\n\n- Nomad Host Volumes for persistent storage\n- Using a pre-start task to wait until a dependency is available\n- Template driven configuration to reduce static port references\n\n## Prerequisites\n\n- **Consul** - This job leverages Consul service registrations to locate\n  the supporting MySQL instance.\n\n## Necessary configuration\n\n### Create the host volume in the configuration\n\nCreate a folder on one of your Nomad clients to hold the database files. This\nexample uses `/opt/nomad/volumes/wordpress-db`.\n\n```shell-session\n$ mkdir -p /opt/nomad/volumes/wordpress-db\n```\n\nAdd the `host_volume` information to the client stanza in the Nomad configuration.\nThe volume name must match the `source` used by the `wordpress-db` job. If your\n`-config` flag points to a directory, you can create this as a standalone file\nin that same folder.\n\n```hcl\nclient {\n# ...\n  host_volume \"wordpress-db\" {\n    path = \"/opt/nomad/volumes/wordpress-db\"\n    read_only = false\n  }\n}\n```\n\nRestart Nomad to read the new configuration.\n\n```shell-session\n$ systemctl restart nomad\n```\n"
  },
  {
    "path": "applications/wordpress/distributed/build-site.nomad",
    "content": "job \"build-site\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  parameterized {\n    meta_required = [\"site_name\"]\n  }\n\n  group \"sitebuilder\" {\n    task \"generate-password\" {\n\n      lifecycle {\n        hook = \"prestart\"\n        sidecar = false\n      }\n\n      template {\n        destination = \"secrets/generate_keys.sh\"\n        perms = \"755\"\n        data = <<EOT\n#!/bin/bash\n{{- $NMSN := env \"NOMAD_META_site_name\" -}}\n{{- $UUID := \"${uuidv4()}\" -}}\nSite={{ $NMSN }}\nUUID={{ $UUID }}\nCONSUL_HTTP_TOKEN=c62d8564-c0c5-8dfe-3e75-005debbd0e40\necho \"Creating credentials for site $Site...\"\nconsul kv put wordpress/sites/$Site/db/user wp-site-$Site\nconsul kv put wordpress/sites/$Site/db/pass $UUID\nconsul kv put wordpress/sites/$Site/db/name wordpress-$Site\nEOT\n      }\n\n      driver = \"raw_exec\"\n\n      config {\n        command = \"secrets/generate_keys.sh\"\n      }\n    }\n\n    task \"make-database\" {\n\n      template {\n        destination = \"local/run.sql\"\n        data = <<EOT\nCREATE DATABASE {{ printf \"wordpress-%s\" .Name }};\nCREATE USER {{ .User }} identified by {{ .Pass }};\n\nEOT\n      }\n\n      template {\n        destination = \"secrets/env.txt\"\n        env = true\n        data = <<EOT\nMYSQL_PASSWORD=somewordpress\nEOT\n      }\n\n      driver = \"docker\"\n\n      config {\n        image = \"arey/mysql-client\"\n        args = [\n          \"--host=${MYSQL_HOST}\",\n          \"--port=${MYSQL_PORT}\",\n          \"--user=root\",\n          \"--password=${MYSQL_PASSWORD}\",\n          \"--execute=\\\"source /local/run.sql\\\"\"\n        ]\n      }\n    }\n  }\n}\n\n# $ docker run -v <path to sql>:/sql --link <mysql server container name>:mysql -it arey/mysql-client -h mysql -p <password> -D <database name> -e \"source /sql/<your sql file>\"\n"
  },
  {
    "path": "applications/wordpress/distributed/nginx.nomad",
    "content": "job \"nginx\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n\n  group \"nginx\" {\n    network {\n      port \"http\" {\n        static = 80\n      }\n    }\n\n    service {\n      name = \"wp\"\n      port = \"http\"\n    }\n\n    task \"nginx\" {\n      driver = \"docker\"\n\n      config {\n        image = \"nginx\"\n\n        ports = [\"http\"]\n\n        volumes = [\n          \"local:/etc/nginx/conf.d\",\n        ]\n      }\n\n      template {\n        data = <<EOF\n{{- $ServicesByTag := (service \"wordpress-sites\" | byTag) -}}{{- $I :=0 -}}\n{{- /* {{- printf \"http {\\n\" -}} */ -}}\n{{- range $ServiceTag, $services := $ServicesByTag -}}\n{{- if gt $I 0 -}}{{- printf \"\\n\\n\" -}}{{- end -}}\n{{- printf \"##\\n## %s \\n##\\n\" $ServiceTag -}}\n{{- printf \"  upstream %s {\\n\" $ServiceTag -}}\n    {{- range $services -}}\n       {{- printf \"    server %s:%d;\\n\" .Address .Port -}}\n    {{- else -}}\n       {{- printf \"    server 127.0.0.1:65535; # force a 502\\n\" -}}\n    {{- end -}}\n{{- printf \"  }\\n\" }}\n  server {\n    listen 80;\n    server_name {{$ServiceTag}}.wp.service.consul;\n\n    location / {\n      proxy_pass http://{{$ServiceTag}};\n    }\n  }\n{{- $I = add $I 1 -}}\n{{- end -}}\n{{- printf \"\\n\" -}}\n{{- /* {{- printf \"}\\n\" -}} */ -}}\nEOF\n\n        destination   = \"local/load-balancer.conf\"\n        change_mode   = \"signal\"\n        change_signal = \"SIGHUP\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "applications/wordpress/distributed/reset.sh",
    "content": ""
  },
  {
    "path": "applications/wordpress/distributed/wordpress-db.nomad",
    "content": "job \"wordpress-db\" {\n  datacenters = [\"dc1\"]\n\n  group \"database\" {\n    network {\n      port \"db\" {\n        to = 3306\n      }\n    }\n\n    service {\n      name = \"wordpress-db\"\n      port = \"db\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"db\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"wordpress-db\" {\n      type      = \"host\"\n      source    = \"wordpress-db\"\n      read_only = false\n    }\n\n    task \"mysql\" {\n      driver = \"docker\"\n\n      env {\n        MYSQL_ROOT_PASSWORD=\"somewordpress\"\n        MYSQL_DATABASE=\"wordpress\"\n        MYSQL_USER=\"wordpress\"\n        MYSQL_PASSWORD=\"wordpress\"\n      }\n\n      volume_mount {\n        volume      = \"wordpress-db\"\n        destination = \"/var/lib/mysql\"\n      }\n\n      config {\n        image = \"mysql:5.7\"\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}"
  },
  {
    "path": "applications/wordpress/distributed/wordpress.nomad",
    "content": "variable \"site_name\" {\n  type = string\n  description = \"The site_name is used to set the consul tag for the website. This makes them available at \\\"site_name.wordpress-sites.service.consul\\\"\"\n}\n\njob \"my-website\" {\n  name = \"wp-site-${var.site_name}\"\n  id = \"wp-site-${var.site_name}\"\n  datacenters = [\"dc1\"]\n\n  group \"wordpress\" {\n    count = 2\n\n    network {\n      port \"http\" {\n        to = 80\n      }\n    }\n\n    service {\n      name = \"wordpress-sites\"\n      tags = [\"${var.site_name}\"]\n      port = \"http\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"http\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"await-wordpress-db\" {\n      driver = \"docker\"\n\n      template {\n        destination = \"local/await-db.sh\"\n        perms = 700\n        data =<<EOT\n#!/bin/sh\necho -n 'Waiting for wordpress-db service...'\nuntil nslookup -port=8600 wordpress-db.service.consul ${NOMAD_IP_http} 2>&1 >/dev/null\ndo\n  echo -n '.'\n  sleep 2\n  # There is a good opportunity to add a loop counter and a bail-out too, but\n  # this script waits forever.\ndone\necho \" Done.\"\nEOT\n      }\n\n      config {\n        image        = \"alpine:latest\"\n        command      = \"local/await-db.sh\"\n        network_mode = \"host\"\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = false\n      }\n    }\n\n    task \"wordpress\" {\n      driver = \"docker\"\n\n      template {\n        data = <<EOH\n{{- if service \"wordpress-db\" -}}\n{{- with index (service \"wordpress-db\") 0 -}}\nWORDPRESS_DB_HOST={{ .Address }}:{{ .Port }}\n{{- end -}}\n{{- end }}\nWORDPRESS_DB_USER=wordpress\nWORDPRESS_DB_PASSWORD=wordpress\nWORDPRESS_DB_NAME=wordpress-${var.site_name}\n  EOH\n\n        destination = \"local/envvars.txt\"\n        env = true\n      }\n\n      config {\n        image = \"wordpress:latest\"\n        ports = [\"http\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}"
  },
  {
    "path": "applications/wordpress/simple/README.md",
    "content": "# WordPress\n\nThis job demonstrates several useful patterns for creating Nomad jobs:\n\n- Nomad host volumes for persistent storage\n- Using a prestart task to wait until a dependency is available\n- Template-driven configuration to minimize static port references\n\n## Prerequisites\n\n- **Consul** - This job leverages Consul service registrations to locate the\n  supporting MySQL instance.\n\n## Necessary configuration\n\n### Create the host volume in the configuration\n\nCreate a folder on one of your Nomad clients to back the database host volume.\nThis example uses `/opt/volumes/my-website-db`.\n\n```shell-session\n$ mkdir -p /opt/volumes/my-website-db\n```\n\nAdd the host_volume information to the client stanza in the Nomad configuration.\n\n```hcl\nclient {\n  # ...\n  host_volume \"my-website-db\" {\n    path      = \"/opt/volumes/my-website-db\"\n    read_only = false\n  }\n}\n```\n\nRestart Nomad to read the new configuration.\n\n```shell-session\n$ systemctl restart nomad\n```\n"
  },
  {
    "path": "applications/wordpress/simple/wordpress.nomad",
    "content": "job \"my-website\" {\n  datacenters = [\"dc1\"]\n\n  group \"database\" {\n    network {\n      port \"db\" {\n        to = 3306\n      }\n    }\n\n    service {\n      name = \"my-website-db\"\n      port = \"db\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"db\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"my-website-db\" {\n      type      = \"host\"\n      source    = \"my-website-db\"\n      read_only = false\n    }\n\n    task \"mysql\" {\n      driver = \"docker\"\n\n      env {\n        MYSQL_ROOT_PASSWORD=\"somewordpress\"\n        MYSQL_DATABASE=\"wordpress\"\n        MYSQL_USER=\"wordpress\"\n        MYSQL_PASSWORD=\"wordpress\"\n      }\n\n      volume_mount {\n        volume      = \"my-website-db\"\n        destination = \"/var/lib/mysql\"\n      }\n\n      config {\n        image = \"mysql:5.7\"\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n\n  group \"wordpress\" {\n    network {\n      port \"http\" {\n        to = 80\n      }\n    }\n\n    service {\n      name = \"my-website\"\n      tags = [\"www\"]\n      port = \"http\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"http\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"await-my-website\" {\n      driver = \"docker\"\n\n      config {\n        image        = \"alpine:latest\"\n        command      = \"sh\"\n        args         = [\"-c\", \"echo -n 'Waiting for service'; until nslookup -port=8600 my-website-db.service.consul ${NOMAD_IP_http} 2>&1 >/dev/null; do echo '.'; sleep 2; done\"]\n        network_mode = \"host\"\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = false\n      }\n    }\n\n    task \"wordpress\" {\n      driver = \"docker\"\n\n      template {\n        data = <<EOH\n{{- if service \"my-website-db\" -}}\n{{- with index (service \"my-website-db\") 0 -}}\nWORDPRESS_DB_HOST={{ .Address }}:{{ .Port }}\n{{- end -}}\n{{- end }}\nWORDPRESS_DB_USER=wordpress\nWORDPRESS_DB_PASSWORD=wordpress\nWORDPRESS_DB_NAME=wordpress\n  EOH\n\n        destination = \"local/envvars.txt\"\n        env = true\n      }\n\n      config {\n        image = \"wordpress:latest\"\n        ports = [\"http\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}"
  },
  {
    "path": "artifact_sleepyecho/README.md",
    "content": "## artifact_sleepyecho\n\nPurpose:\n\nThis sample was designed to pull a shell script from an AWS S3 bucket and\nrun it locally. Some additional smarts were included in the shell script\nso that it can simulate more conditions.\n\nThe job as committed is somewhat uninteresting, but it can be changed up to\nadd Vault support, template stanza testing, or Consul KV output. It should\nbe considered a building block for more robust reproducers.\n"
  },
  {
    "path": "artifact_sleepyecho/SleepyEcho.sh",
    "content": "#! /bin/bash\n\nif [ -z \"$1\" ] \nthen\n  SLEEP_SECS=\"2\"\nelse\n  SLEEP_SECS=\"$1\"\nfi\n\nif [ -z \"${EXTRAS}\" ]\nthen\n  extras_part=\"\"\nelse \n  extras_part=\"EXTRAS: [${EXTRAS}]\"\nfi \n\necho \"$(date) -- Starting SleepyEcho. Sleep interval is ${SLEEP_SECS} sec. ${extras_part}\"\n\nif [ ! -f \"/alloc/data/time.txt\" ] \nthen\n  echo \"$(date) -- Writing date to /alloc/data/time.txt\"\n  echo -n \"$(date)\" > /alloc/data/time.txt\nelse\n  echo \"$(date) -- Found time.txt file in /alloc/data -- $(cat /alloc/data/time.txt)\"\nfi\n\nwhile true\ndo \n  echo \"$(date) -- Alive... going back to sleep for ${SLEEP_SECS}.  ${extras_part}\"\n  sleep ${SLEEP_SECS}\ndone\n"
  },
  {
    "path": "artifact_sleepyecho/artifact_sleepyecho.nomad",
    "content": "job \"repro\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  group \"group\" {\n    count = 1\n\n#    constraint {\n#      attribute = \"${attr.kernel.name}\"\n#      value = \"darwin\"\n#    }\n\n    task \"echo-task\" {\n      driver = \"exec\"\n\n      config {\n        command = \"local/bin/SleepyEcho.sh\"\n        args = [\"2\"]\n      }\n\n      artifact {\n\t      source = \"https://angrycub-hc.s3.amazonaws.com/public/SleepyEcho.sh\"\n        destination = \"local/bin\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "artifact_sleepyecho/vault_sleepyecho.nomad",
    "content": "job \"repro\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  group \"group\" {\n    count = 1\n\n    task \"echo-task\" {\n      driver = \"exec\"\n      env {\n          EXTRAS = \"${VAULT_TOKEN}\"\n      }\n      config {\n        command = \"local/bin/SleepyEcho.sh\"\n        args = [\"2\"]\n      }\n      vault {\n        policies = [\"nomad-client\"]\n        change_mode   = \"signal\"\n        change_signal = \"SIGUSR1\"\n      }\n      artifact {\n        source = \"https://angrycub-hc.s3.amazonaws.com/public/SleepyEcho.sh\"\n        destination = \"local/bin\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/batch_gc/example.nomad",
    "content": "variable \"body\" {\n  type    = string\n  default = \"Template Rendered\"\n}\n\njob \"example\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"group\" {\n    task \"output\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"busybox\"\n        auth_soft_fail = true\n        command        = \"cat\"\n        args           = [\"/local/template.out\"]\n      }\n     \n      template {\n        destination = \"${NOMAD_TASK_DIR}/template.out\"\n        data        = var.body\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy.nomad",
    "content": "job \"sleepy\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy1.nomad",
    "content": "job \"sleepy1\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy10.nomad",
    "content": "job \"sleepy10\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy2.nomad",
    "content": "job \"sleepy2\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy3.nomad",
    "content": "job \"sleepy3\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy4.nomad",
    "content": "job \"sleepy4\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy5.nomad",
    "content": "job \"sleepy5\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy6.nomad",
    "content": "job \"sleepy6\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy7.nomad",
    "content": "job \"sleepy7\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy8.nomad",
    "content": "job \"sleepy8\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dispatch/sleepy9.nomad",
    "content": "job \"sleepy9\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGINT received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for $${SLEEP_SECS} seconds.\"; interruptable_sleep $${SLEEP_SECS}; done\nEOH\n      }\n\n      resources {\n        memory = 100\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/dont_restart_fail/README.md",
    "content": "# Don't restart on failure\n\nSometimes you want to craft a job in such a way that it will\nnot be restarted if it fails. This could be useful for work\nthat is periodic in nature and will be retried later.\n"
  },
  {
    "path": "batch/dont_restart_fail/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"nodes\" {\n    reschedule {\n      attempts  = 0\n      unlimited = false\n    }\n\n    restart {\n      attempts = 0\n      mode     = \"fail\"\n    }\n\n    task \"payload\" {\n      driver = \"exec\"\n      config {\n        command = \"/bin/bash\"\n        args    = [\"-c\", \"echo \\\"Sleeping 5 seconds\\\"; sleep 5; echo \\\"Exiting with exit code 1\\\"; exit 1\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/lost_batch/README.md",
    "content": "# Lost batch job\n\nThis job tests the behavior of a lost client with a batch job and the\n`prohibit_overlap` setting in the `periodic` stanza.\n"
  },
  {
    "path": "batch/lost_batch/batch.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"sleepers\" {\n    restart {\n      mode     = \"fail\"\n      attempts = 0\n    }\n\n    reschedule {\n      attempts = 0\n      unlimited = false\n    }\n\n    task \"wait\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args    = [\n          \"-c\",\n          \"echo Starting; sleep=300; echo Sleeping $sleep seconds.; sleep $sleep; echo Done; exit 0\"\n        ]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/lost_batch/periodic.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  periodic {\n    cron             = \"*/1 * * * * *\"\n    prohibit_overlap = true\n  }\n\n  group \"sleepers\" {\n    task \"wait\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args    = [\n          \"-c\",\n          \"echo Starting; sleep=`shuf -i30-200 -n1`; echo Sleeping $sleep seconds.; sleep $sleep; echo Done; exit 0\"\n        ]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/lots_of_batches/README.md",
    "content": "# Lots of batches\n\nThis exists to create a noisy history of jobs in the Nomad state.\nOne possible use is to test Nomad UI behaviors with a crufty state.\n"
  },
  {
    "path": "batch/lots_of_batches/payload.nomad.template",
    "content": "job \"{{jobname}}\" {\n  group \"{{groupname}}\" {\n    task \"{{taskname}}\" {\n      driver = \"raw_exec\" # you could use exec, but that will be so much slower...\n\n      config {\n        command = {{command}}\n        args    = [{{args}}]\n      }\n\n      resources {\n        cpu    = {{cpu}}\n        memory = {{memory}}\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/periodic/prohibit-overlap.nomad",
    "content": "job \"prohibit-overlap.nomad\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  periodic {\n    cron  = \"* * * * *\"\n    prohibit_overlap = true\n  }\n\n  group \"group\" {\n    task \"payload\" {\n      driver = \"exec\"\n\n      config {\n        command = \"bash\"\n        args    = [ \"-c\",\"echo \\\"Sleeping 5 minutes...\\\"; sleep 300\" ]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/periodic/template.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  periodic {\n    cron  = \"* * * * *\"\n  }\n\n  group \"group\" {\n\n    network {\n      port \"export\" {}\n      port \"exstat\" {\n        static = 8080\n      }\n    }\n\n    task \"command\" {\n      driver = \"exec\"\n\n      config {\n        command = \"bash\"\n        args    = [\"-c\", \"cat local/template.out\"]\n      }\n\n      template {\n        destination = \"local/template.out\"\n        data        = <<EOH\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env \"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n                NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env \"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n   concat key:  service/fabio/{{ env \"NOMAD_JOB_NAME\" }}/listeners\n    key:         {{ keyOrDefault ( printf \"service/fabio/%s/listeners\" ( env \"NOMAD_JOB_NAME\" ) ) \":9999\" }}\n\n{{ define \"custom\" }}service/fabio/{{env \"NOMAD_JOB_NAME\" }}/listeners{{ end }}\n    key:         {{ keyOrDefault (executeTemplate \"custom\") \":9999\" }}\n\n   math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n\n  EOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/spread_batch/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  meta {\n    version = \"2\"\n  }\n\n  group \"nodes\" {\n    count = 6\n\n    constraint {\n      distinct_hosts = true\n    }\n\n    task \"payload\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"/bin/bash\"\n        args    = [\"-c\", \"echo $(date) > /tmp/payload.txt\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch/spread_batch/example2.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  meta {\n    version = \"2\"\n  }\n\n  group \"nodes\" {\n    count = 6\n\n    constraint {\n      distinct_hosts = true\n    }\n\n    task \"payload\" {\n      driver = \"exec\"\n\n      config {\n        command = \"/bin/bash\"\n        args    = [\"-c\", \"echo $VAULT_ADDR > test.txt\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch_overload/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"sleepers\" {\n    count = 2000\n    task \"wait\" {\n      driver = \"raw_exec\"\n      config {\n        command = \"bash\"\n        args = [\n          \"-c\",\n          \"echo Starting; sleep=`shuf -i5-10 -n1`; echo Sleeping $sleep seconds.; sleep $sleep; echo Done; exit 0\"\n        ]\n      }\n      resources {\n        # This will cause us to have to create blocking allocs.\n        memory = 200 \n      }\n    }\n  }\n}\n"
  },
  {
    "path": "batch_overload/periodic.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  periodic {\n    cron             = \"*/15 * * * * *\"\n    prohibit_overlap = true\n  }\n  group \"sleepers\" {\n    count = 5\n    task \"wait\" {\n      driver = \"raw_exec\"\n      config {\n        command = \"bash\"\n        args = [\n          \"-c\",\n          \"echo Starting; sleep=`shuf -i5-10 -n1`; echo Sleeping $sleep seconds.; sleep $sleep; echo Done; exit 0\"\n        ]\n      }\n      resources {\n        # This will cause us to have to create blocking allocs.\n        memory = 200 \n      }\n    }\n  }\n}\n"
  },
  {
    "path": "blocked_eval/README.md",
    "content": "# Blocked jobs\n\nThis job can be used to experiment with job behaviors when a job is waiting for\na client that is able to serve the request. This is simulated using a constraint\non a client metadata item.\n\nIt will block until a client comes up with `meta.waituntil = \"charlie\"`.\n"
  },
  {
    "path": "blocked_eval/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  constraint {\n    attribute = \"${meta.waituntil}\"\n    operator  = \"=\"\n    value     = \"charlie\"\n  }\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"redis:7\"\n        ports          = [\"db\"]\n        auth_soft_fail = true\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "check.sh",
    "content": "#!/bin/bash\n\nprintError () {\n  echo -n \"- Checking ${CUR_FILE} ... \"\n\n  icon=\"🔴\"\n  if [ ${NO_ICON:-unset} != \"unset\" ]; then\n    icon=\"[ERROR]\"\n  fi\n  echo ${icon}\n  if [ \"${DEBUG:-unset}\" != \"unset\" ]; then\n    echo \"Command output:\"\n    echo \"\"\n    echo \"${1}\" | awk '/^$/{next} {print $0}'\n    echo \"\"\n  fi\n  output \"${CUR_FILE}\"  \"${icon}\"  \"$(echo \"${1}\" | awk '/^$/{next} {print $0}')\"\n\n  continue\n}\n\nprintWarning () {\n  echo -n \"- Checking ${CUR_FILE} ... \"\n\n  icon=\"🟡\"\n  if [ ${NO_ICON:-unset} != \"unset\" ]; then\n    icon=\"[WARN]\"\n  fi\n  echo ${icon}\n  if [ \"${DEBUG:-unset}\" != \"unset\" ]; then\n    echo \"Job Warning output:\"\n    echo \"\"\n    echo \"${1}\" | awk '/Job Warnings:/{flag=1} /Job Modify Index:/{flag=0} /^$/{next} flag'\n    echo \"\"\n  fi\n  output \"${CUR_FILE}\"  \"${icon}\"  \"$(echo \"${1}\" | awk '/Job Warnings:/{flag=1} /Job Modify Index:/{flag=0} /^$/{next} flag')\"\n\n  continue\n}\n\nprintSuccess () {\n  if [ ${NO_SUCCESS:-unset} != \"unset\" ]; then\n    continue\n  fi\n\n  echo -n \"- Checking ${CUR_FILE} ... \"\n\n  icon=\"✅\"\n  if [ ${NO_ICON:-unset} != \"unset\" ]; then\n    icon=\"[SUCCESS]\"\n  fi\n  echo ${icon}\n  output \"${CUR_FILE}\"  \"${icon}\"  \"\"\n\n  continue\n}\n\noutput() {\n    file=\"${1}\"\n    status=\"${2}\"\n    output=\"${3}\"\n\n    asHTML \"${file}\"  \"${status}\"  \"${output}\"\n}\n\nsetupOutput() {\n    startHTML\n}\n\nfinishOutput() {\n    endHTML\n}\n\nstartHTML() {\n    cat <<HERE > output.html\n<html><head><title>Nomad Job Tester Output</title>\n<style>\nbody {\n  font-family: Helvetica, sans-serif;\n}\n.out {\n    white-space: pre-wrap;\n}\n</style>\n<link rel=\"stylesheet\" type=\"text/css\" href=\"https://cdn.datatables.net/1.12.1/css/jquery.dataTables.css\">\n<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.6.1/jquery.min.js\"></script>\n<script src=\"https://cdn.datatables.net/1.12.1/js/jquery.dataTables.js\"></script>\n</head>\n<body>\n<table border=\"1\" width=\"100%\" id=\"results\">\n<thead><tr><th></th><th>Filename</th><th>Output</th></tr></thead>\n<tbody>\nHERE\n}\n\nasHTML() {\n    file=\"${1}\"\n    status=\"${2}\"\n    output=\"${3}\"\n    maybeOut=\"\"\n    if [ \"${output}\" != \"\" ]; then\n      maybeOut=\"<details><summary>Show Output</summary><pre class=out><code>${output}</code></pre></details>\"\n    fi\n    echo \"<tr><td style=\\\"width: 2em;\\\" align=\\\"center\\\">${status}</td><td width=\\\"25%\\\">${file}</td><td>${maybeOut}</td></tr>\" >> output.html\n}\n\nendHTML() {\n    cat <<HERE >> output.html\n</tbody>\n</table>\n<script>\n\\$(document).ready( function () {\n    \\$('#results').DataTable({\n      paging: false\n    });\n} );\n</script>\nHERE\n}\n\n\n\n## Main begins here\n\nsetupOutput\n\nfiles=$(find -s ${1:-.}  -name \"*.nomad\")\nfor file in ${files}; do\n\n  CUR_FILE=${file}\n  out=$(nomad plan ${CUR_FILE} 2>&1)\n  ec=$?\n\n  if [ \"${ec}\" == \"255\" ]; then\n    printError \"${out}\"\n  fi\n\n  if [ \"${ec}\" == \"1\" ]; then\n    dep=$(echo \"${out}\" | grep -c \"Job Warnings:\")\n\n    if [ \"$dep\" != 0 ]; then\n      printWarning \"${out}\"\n    fi\n  fi\n  printSuccess\ndone\n\nfinishOutput\n"
  },
  {
    "path": "cni/README.md",
    "content": "# Nomad CNI examples\n\nThis folder contains Nomad job specifications and configuration files that show\nhow Nomad can use [Container Network Interface (CNI)](https://cni.dev) plugins\nand network configurations for running workloads.\n\n## Examples\n\n- [`diy_bridge`](diy_bridge) - Create your own bridge network similar to the one Nomad makes\n  for `network_mode = \"bridge\"` jobs.\n"
  },
  {
    "path": "cni/diy_brige/README.md",
    "content": "# DIY CNI bridge network\n\n## About\n\nThis example uses a CNI configuration based on Nomad's internal CNI template\nused to implement the `network_mode = \"bridge\"` behavior.\n\n## Requirements\n\nThis demonstration requires a Linux Nomad client.\n\n## Running\n\n### Validate CNI plugins are installed\n\nGenerally you will install the CNI plugins as part of setting up a Nomad client,\nso this step may already be complete. However, for development clients that\naren't using Nomad's `bridge` network mode, these might not have been installed.\n\nNomad clients look for CNI plugins in the path given in the client's [`cni_path`],\n`/opt/cni/bin` by default. Check your client configuration to see if this value\nhas been overridden.\n\nCheck these folders for the CNI plugins. Verify that you have all the following binaries somewhere in the folders listed in your `cni_path`.\n\n- `bridge`\n- `firewall`\n- `host-local`\n- `loopback`\n"
  },
  {
    "path": "cni/diy_brige/diybridge.conflist",
    "content": "{\n  \"cniVersion\": \"0.4.0\",\n  \"name\": \"diybridge\",\n  \"plugins\": [\n    {\n      \"type\": \"loopback\"\n    },\n    {\n      \"type\": \"bridge\",\n      \"bridge\": \"diybridge\",\n      \"ipMasq\": true,\n      \"isGateway\": true,\n      \"forceAddress\": true,\n      \"hairpinMode\": true,\n      \"ipam\": {\n        \"type\": \"host-local\",\n        \"ranges\": [\n          [\n            {\n              \"subnet\": \"192.168.1.0/24\"\n            }\n          ]\n        ],\n        \"routes\": [\n          { \"dst\": \"0.0.0.0/0\" }\n        ]\n      }\n    },\n    {\n      \"type\": \"firewall\",\n      \"backend\": \"iptables\",\n      \"iptablesAdminChainName\": \"DIY-BRIDGE\"\n    },\n    {\n      \"type\": \"portmap\",\n      \"capabilities\": {\"portMappings\": true},\n      \"snat\": true\n    }\n  ]\n}\n"
  },
  {
    "path": "cni/diy_brige/example.nomad",
    "content": "variable \"dcs\" {\n  description = \"Datacenters to run job in.\"\n  type = list(string)\n  default = [\"dc1\"]\n}\n\njob \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"test\" {\n    network {\n      mode = \"cni/diybridge\"\n    }\n\n    task \"alpine\" {\n      driver = \"docker\"\n\n      config {\n        image = \"busybox:latest\"\n        command = \"sleep\"\n        args = [\"infinity\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "cni/diy_brige/repro.nomad",
    "content": "variable \"dcs\" {\n  type        = list(string)\n  default     = [\"dc1\"]\n  description = \"Nomad datacenters in which to run the job.\"\n}\n\njob \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"g1\" {\n\n    network {\n      mode = \"bridge\"\n      port \"foo\" {\n        to = 1337\n      }\n    }\n\n    task \"nc-alpine\" {\n      driver = \"docker\"\n      config {\n        image = \"alpine\"\n        args  = [\"nc\", \"-lk\", \"-p\", \"${NOMAD_PORT_foo}\", \"-e\", \"cat\"]\n      }\n\n      resources {\n        cpu    = 100\n        memory = 64\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "cni/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"test\" {\n    network {\n      mode = \"cni/mynet3\"\n    }\n\n    task \"alpine\" {\n      driver = \"docker\"\n\n      config {\n        image = \"alpine:latest\"\n        config {\n          command = \"sh\"\n          args = [\"-c\", \"while true; do sleep 300; done \"]\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "complex_meta/template_env.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"group\" {\n    task \"meta-output\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args=[\"-c\", \"echo $RULES | jq .\"]\n      }\n\n      template {\n        destination = \"secrets/rules.env\"\n        env         = true\n        data        = <<EOH\n{{- define \"RULES\" -}}\n[\n  {\n    \"cloudwatch\":{\n      \"asg_cpu_usage_upper_bound\": {\n        \"backend\":\"test-backend\",\n        \"dimension_name\":\"AutoScalingGroupName\",\n        \"metric_namespace\": \"AWS/EC2\",\n        \"metric_name\": \"CPUUtilization\"\n      }\n    },\n    \"enabled\": true\n  },\n  {\n    \"rule2\":{\n      \"foos\":[\n       {\"foo1\": \"bar\"},\n       {\"foo2\": \"bar2\"}\n     ],\n     \"enabled\": true\n    }\n  }\n]\n{{- end }}\nRULES={{ executeTemplate \"RULES\" | toJSON }}\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "complex_meta/template_meta.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"group\" {\n    network {\n      port \"export\" {}\n      port \"exstat\" {\n        static = 8080\n      }\n    }\n\n    meta {\n      \"rules\" = <<EOH\n[\n    {\n      cloudwatch\":{\n        \"asg_cpu_usage_upper_bound\": {\n          \"backend\":\"test-backend\",\n          \"dimension_name\":\"AutoScalingGroupName\",\n          \"metric_namespace\": \"AWS/EC2\",\n          \"metric_name\": \"CPUUtilization\",\n        }\n      },\n      \"enabled\": true\n    },\n    {\n      \"rule2\":{\n        \"foos\":[\n         {\"foo1\": \"bar\"},\n         {\"foo2\": \"bar2\"}\n       ],\n       \"enabled\": true\n      }\n    }\n  ]\nEOH\n    }\n\n    task \"env-output\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"env\"\n      }\n\n      resources {\n        memory = 10\n      }\n    }\n\n    task \"meta-output\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args    = [ \"-c\", \"echo $RULES\" ]\n      }\n\n      template {\n        destination = \"secrets/rules.env\"\n        env         = true\n        data        = <<EOH\nRULES=\"{{ \"charlie\" | toJSON }}\"\nEOH\n      }\n\n      resources {\n        memory = 10\n      }\n    }\n\n    task \"date-output\" {\n      resources {memory=10 network { port \"sample\" {} } }\n      driver = \"raw_exec\"\n      config { command = \"date\" }\n    }\n\n    task \"template\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args    = [\"-c\", \"cat local/template.out\"]\n      }\n\n      template {\n        destination = \"local/template.out\"\n        data        = <<EOH\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n      
            attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env \"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env 
\"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n\nFurther Consul Template Magic:\n\nMath\n  math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n\nComposition using inline templates\n\n  {{- define \"custom\" }}NOMAD_ADDR_{{\"date-output\" | replaceAll \"-\" \"_\" }}_sample{{ end }}\n  {{ executeTemplate \"custom\" }}: {{ env (executeTemplate \"custom\") }}\n\nComposition using printf\n  {{ $envKey := printf \"NOMAD_ADDR_%s_%s\" (\"date-output\" | replaceAll \"-\" \"_\" ) \"sample\" }}\n  {{ $envKey }}: {{ env $envKey }}\n\nEOH\n      }\n\n      resources {\n        memory = 10\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "connect/consul.nomad",
    "content": "job \"connect-consul\" {\n\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  group \"connect-consul\" {\n    network {\n      mode = \"bridge\"\n    }\n\n    service {\n      connect {\n        sidecar_service {\n          proxy {\n            upstreams {\n              destination_name = \"consul\"\n              local_bind_port  = 8500\n            }\n          }\n        }\n      }\n    }\n\n    task \"env\" {\n      driver = \"exec\"\n\n      env {\n        COUNTING_SERVICE_URL = \"http://${NOMAD_UPSTREAM_ADDR_count_api}\"\n      }\n\n      config {\n        image = \"env\"\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "connect/discuss/blocky.yaml",
    "content": "upstream:\n  default:\n    - 46.182.19.48\n    - 80.241.218.68\n    - tcp-tls:fdns1.dismail.de:853\n    - https://dns.digitale-gesellschaft.ch/dns-query\nblocking:\n  blackLists:\n    ads:\n      - https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts\n  clientGroupsBlock:\n    default:\n      - ads\nport: 53\nhttpPort: 4000"
  },
  {
    "path": "connect/discuss/job.nomad",
    "content": "variable \"config_data\" {\n  type = string\n  description = \"Plain text config file for blocky\"\n}\n\njob \"blocky\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  priority = 100\n\n  update {\n    max_parallel = 1\n    auto_revert = true\n  }\n\n  group \"blocky\" {\n\n    network {\n      mode = \"bridge\"\n\n      port \"dns\" {\n        static = \"53\"\n      }\n\n      port \"api\" {\n        # host_network = \"loopback\"\n        to = \"4000\"\n      }\n    }\n\n    service {\n      name = \"blocky-dns\"\n      port = \"dns\"\n    }\n\n    service {\n      name = \"blocky-api\"\n      port = \"api\"\n\n      meta {\n        metrics_addr = \"${NOMAD_ADDR_api}\"\n      }\n\n      tags = [\n        \"traefik.enable=true\",\n      ]\n\n      connect {\n        sidecar_service {\n          proxy {\n            local_service_port = 400\n\n            expose {\n              path {\n                path = \"/metrics\"\n                protocol = \"http\"\n                local_path_port = 4000\n                listener_port = \"api\"\n              }\n            }\n\n            upstreams {\n              destination_name = \"redis\"\n              local_bind_port = 6379\n            }\n          }\n        }\n\n        sidecar_task {\n          resources {\n            cpu    = 50\n            memory = 20\n            memory_max = 50\n          }\n        }\n      }\n\n      check {\n        name     = \"api-health\"\n        port     = \"api\"\n        type     = \"http\"\n        path     = \"/\"\n        interval = \"10s\"\n        timeout  = \"3s\"\n      }\n    }\n\n    task \"blocky\" {\n      driver = \"docker\"\n\n      config {\n        image = \"ghcr.io/0xerr0r/blocky\"\n        ports = [\"dns\", \"api\"]\n\n        mount {\n          type = \"bind\"\n          target = \"/app/config.yml\"\n          source = \"app/config.yml\"\n        }\n      }\n\n      resources {\n        cpu = 50\n        memory = 50\n        memory_max = 
100\n      }\n\n      template {\n        data = file(var.config_data)\n        destination = \"app/config.yml\"\n        splay = \"1m\"\n      }\n    }\n  }\n}"
  },
  {
    "path": "connect/dns-via-mesh/README.md",
    "content": "README\n\nThis example demonstrates using the Consul service mesh\nto connect a workload to the Consul DNS query API\n\n## Connect Consul DNS API to the mesh\n\n### Deploy Consul service\n\nCreate a service on the Consul server node. Create a service\ndefinition with the following information.\n\n```hcl\nservice {\n  name = \"consul-dns\"\n  id = \"consul-dns-1\"\n  port = 8600\n\n  connect {\n    sidecar_service {}\n  }\n}\n```\n\n### Start a sidecar for the Consul DNS query API\n\n```\n$ consul connect proxy -sidecar-for consul-dns-1\n```\n\n## Test the connection\n\nUse a local connect proxy to test whether or not the\nservice is accessible via the proxy.\n\nStart a local connect proxy.\n\n```\n$ consul connect proxy -service charlie -upstream consul-dns:8600 \n```\n\nVerify the connection\n"
  },
  {
    "path": "connect/dns-via-mesh/consul-dns.nomad",
    "content": "job \"testdns\" {\n  datacenters = [\"dc1\"]\n\n  group \"ubuntu\" {\n    network {\n      mode = \"bridge\"\n      # dns {\n      #   servers = [\"127.0.0.1\"]\n      # }\n    }\n\n    service {\n      name = \"ubuntu\"\n      connect {\n        sidecar_service {\n          proxy {\n            upstreams {\n              destination_name = \"consul-dns\"\n              local_bind_port  = 8600\n            }\n          }\n        }\n      }\n    }\n\n    task \"ubuntu\" {\n      driver = \"docker\"\n      config {\n        image = \"ubuntu\"\n        args = [\"bash\", \"-c\",\"while true; do sleep 300; done\"]\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "connect/dns-via-mesh/consul-dns2.nomad",
    "content": "job \"testdns2\" {\n  datacenters = [\"dc1\"]\n\n  group \"ubuntu\" {\n    network {\n      mode = \"bridge\"\n      dns {\n        servers = [\"127.0.0.1\"]\n      }\n    }\n\n    service {\n      name = \"ubuntu\"\n      connect {\n        sidecar_service {\n          proxy {\n            upstreams {\n              destination_name = \"consul-dns\"\n              local_bind_port  = 8600\n            }\n          }\n        }\n      }\n    }\n\n    task \"ubuntu\" {\n      driver = \"docker\"\n      artifact {\n        source = \"http://10.0.0.236:8000/dnstest\"\n        destination = \"local\"\n      }\n\n      artifact {\n        source = \"https://github.com/coredns/coredns/releases/download/v1.8.3/coredns_1.8.3_linux_amd64.tgz\"\n        destination = \"local\"\n      }\n\n      template {\n        destination = \"local/Corefile\"\n        data =<<EOT\n. {\n  forward . dns://8.8.8.8\n}\n\nconsul {\n  log\n  forward . dns://127.0.0.1:8600 {\n    force_tcp\n  }\n}\nEOT\n      }\n\n      config {\n        image = \"ubuntu\"\n        args = [\"bash\", \"-c\",\"/local/coredns -conf /local/Corefile & while true; do sleep 200; done\"]\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "connect/dns-via-mesh/go-resolv-test/.gitignore",
    "content": ".DS_Store\nout\n"
  },
  {
    "path": "connect/dns-via-mesh/go-resolv-test/build.sh",
    "content": "#!/bin/bash\n\necho \"Building dnstest binaries...\"\n\necho \"- Linux AMD64\"\nmkdir -p out/linux_amd64/\nGOOS=linux GOARCH=amd64 go build -o out/linux_amd64/dnstest main.go\n\necho \"- Darwin AMD64\"\nmkdir -p out/darwin_amd64/\nGOOS=darwin GOARCH=amd64 go build -o out/darwin_amd64/dnstest main.go\n\necho \"- Windows AMD64\"\nmkdir -p out/windows_amd64/\nGOOS=windows GOARCH=amd64 go build -o out/windows_amd64/dnstest.exe main.go\n\necho \"- Linux ARM64\"\nmkdir -p out/linux_arm64/\nGOOS=linux GOARCH=arm64 go build -o out/linux_arm64/dnstest main.go\n"
  },
  {
    "path": "connect/dns-via-mesh/go-resolv-test/main.go",
    "content": "package main\n\nimport (\n\t\"context\"\n\t\"flag\"\n\t\"fmt\"\n\t\"net\"\n\t\"os\"\n\n)\n\nfunc main() {\n    preferGo := flag.Bool(\"go\", false, \"use host resolution\")\n    // useGoReolve := flag.Bool(\"go\", false, \"a bool\")\n    flag.Parse()\n\t\n\tif len(flag.Args()) != 1 {\n\t\tfmt.Println(\"command takes one argument--hostname to resolve.\");\n\t\tos.Exit(1);\n\t}\n\n\thostname := flag.Args()[0]\n\n\tr := net.Resolver{\n\t\tPreferGo: *preferGo,\n\t}\n\n\tiprecords, err := r.LookupHost(context.Background(), hostname)\n\n\tif err != nil {\n\t\tfmt.Println(err);\n\t\tos.Exit(1);\n\t}\n\n\tif len(iprecords) == 0 {\n\t\tfmt.Println(\"No records found.\");\n\t}\n\n\tfor _, ip := range iprecords {\n\t\tfmt.Println(ip);\n\t}\n}\n"
  },
  {
    "path": "connect/ingress_gateways/ingress_gateway.nomad",
    "content": "job \"ingress-gateway\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    network {\n      port \"envoy\" {}\n    }\n\n    task \"ingress-gateway\" {\n      driver = \"docker\"\n\n      config {\n        image        = \"voiselle/ingress-gateway:latest\"\n        network_mode = \"host\"\n        command      = \"/bin/sh\"\n        args         = [\"-c\", \"while true; do sleep 10; done\"]\n        mounts       = [\n          {\n            type   = \"bind\"\n            target = \"/etc/consul.d/ig-services/ingress-gateway.hcl\"\n            source = \"local/ingress-gateway.hcl\"\n            readonly = true\n          }\n        ]\n      }\n\n      env = {\n        \"CONSUL_HTTP_ADDR\"  = \"${NOMAD_IP_envoy}:8500\"\n        \"CONSUL_HTTP_TOKEN\" = \"c62d8564-c0c5-8dfe-3e75-005debbd0e40\",\n        \"CONSUL_ENVOY_IP\"   = \"${NOMAD_IP_envoy}\",\n        \"CONSUL_ENVOY_PORT\" = \"${NOMAD_PORT_envoy}\"\n      }\n\n      template {\n        destination = \"local/ingress-gateway.hcl\"\n        data        = <<EOH\nKind = \"ingress-gateway\"\nName = \"ingress-service\"\n\nListeners = [\n {\n   Port = 8080\n   Protocol = \"http\"\n   Services = [\n     {\n       Name = \"count-dashboard\"\n     }\n   ]\n }\n]\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "connect/native/cn-demo.nomad",
    "content": "job \"cn-demo\" {\n  datacenters = [\"dc1\"]\n  \n  meta {\n    version = \"1\"\n  }\n\n  group \"generator\" {\n    network {\n      port \"api\" {}\n    }\n\n    service {\n      name = \"uuid-api\"\n      port = \"${NOMAD_PORT_api}\"\n\n      connect {\n        native = true\n      }\n    }\n\n    task \"generate\" {\n      driver = \"docker\"\n\n      config {\n        image        = \"hashicorpnomad/uuid-api:v3\"\n        network_mode = \"host\"\n      }\n\n      env {\n        BIND = \"0.0.0.0\"\n        PORT = \"${NOMAD_PORT_api}\"\n      }\n    }\n  }\n\n  group \"frontend\" {\n    network {\n      port \"http\" { \n        static = 25000\n      }\n    }\n\n    service {\n      name = \"uuid-fe\"\n      port = \"25000\"\n\n      connect {\n        native = true\n      }\n    }\n\n    task \"frontend\" {\n      driver = \"docker\"\n\n      config {\n#        image        = \"hashicorpnomad/uuid-fe:v3\"\n        image        = \"registry.service.consul:5000/uuid-fe:latest\"\n        network_mode = \"host\"\n      }\n\n      env {\n        UPSTREAM = \"uuid-api\"\n        BIND     = \"0.0.0.0\"\n        PORT     = \"25000\"\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "connect/nginx_ingress/countdash.nomad",
    "content": "job \"countdash\" {\n  datacenters = [\"dc1\"]\n\n  group \"api\" {\n    network {\n      mode = \"bridge\"\n    }\n\n    service {\n      name = \"count-api\"\n      port = \"9001\"\n\n      connect {\n        sidecar_service {}\n      }\n    }\n\n    task \"web\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hashicorpnomad/counter-api:v1\"\n      }\n    }\n  }\n\n  group \"dashboard\" {\n    network {\n      mode = \"bridge\"\n    }\n\n    service {\n      name = \"count-dashboard\"\n      port = \"9002\"\n\n      connect {\n        sidecar_service {\n          proxy {\n            upstreams {\n              destination_name = \"count-api\"\n              local_bind_port  = 8080\n            }\n          }\n        }\n      }\n    }\n\n    task \"dashboard\" {\n      driver = \"docker\"\n\n      env {\n        COUNTING_SERVICE_URL = \"http://${NOMAD_UPSTREAM_ADDR_count_api}\"\n      }\n\n      config {\n        image = \"hashicorpnomad/counter-dashboard:v1\"\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "connect/nginx_ingress/ingress.nomad",
    "content": "job \"ingress\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n\n    network {\n      port \"http\" {\n        to = 8080\n      }\n    }\n\n    service {\n      name = \"ingress\"\n      tags = []\n      port = \"http\"\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"nginx\" {\n      driver = \"docker\"\n\n      config {\n        image = \"nginx:1.19.1-alpine\"\n        ports = [\"http\"]\n\n        mounts = [\n          {\n            type = \"bind\"\n            target = \"/etc/nginx/nginx.conf\"\n            source = \"local/nginx-proxy.conf\"\n            readonly = true\n          }\n        ]\n      }\n\n      template {\n        destination = \"local/nginx-proxy.conf\"\n        data = <<EOH\n# daemon off;\nmaster_process off;\npid nginx.pid;\nerror_log /dev/stdout;\n\nevents {}\n\nhttp {\n  access_log /dev/stdout;\n\n  server {\n    listen 8080 default_server;\n\n    location / {\n{{range connect \"count-dashboard\"}}\n      proxy_pass https://{{.Address}}:{{.Port}};\n{{end}}\n      # these refer to files written by templates above\n      proxy_ssl_certificate /secrets/cert.pem;\n      proxy_ssl_certificate_key /secrets/cert.key;\n      proxy_ssl_trusted_certificate /secrets/ca.crt;\n    }\n  }\n}\nEOH\n      }\n\n      template {\n        destination = \"secrets/ca.crt\"\n        data = <<EOH\n{{ range caRoots}}{{.RootCertPEM}}{{end}}\nEOH\n      }\n\n      template {\n        destination = \"secrets/cert.pem\"\n        data = <<EOH\n{{ with caLeaf \"ingress\" }}{{ .CertPEM }}{{ end }}\nEOH\n      }\n\n      template {\n        destination = \"secrets/cert.key\"\n        data = <<EOH\n{{ with caLeaf \"ingress\" }}{{ .PrivateKeyPEM }}{{ end }}\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "connect/sidecar/countdash.nomad",
    "content": "job \"countdash\" {\n  datacenters = [\"dc1\"]\n\n  group \"api\" {\n    network {\n      mode = \"bridge\"\n    }\n\n    service {\n      name = \"count-api\"\n      port = \"9001\"\n\n      connect {\n        sidecar_service {}\n      }\n    }\n\n    task \"web\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hashicorpnomad/counter-api:v1\"\n      }\n    }\n  }\n\n  group \"dashboard\" {\n    network {\n      mode = \"bridge\"\n\n      port \"http\" {\n        static = 9002\n        to     = 9002\n      }\n    }\n\n    service {\n      name = \"count-dashboard\"\n      port = \"9002\"\n\n      connect {\n        sidecar_service {\n          proxy {\n            upstreams {\n              destination_name = \"count-api\"\n              local_bind_port  = 8080\n            }\n          }\n        }\n      }\n    }\n\n    task \"dashboard\" {\n      driver = \"docker\"\n\n      env {\n        COUNTING_SERVICE_URL = \"http://${NOMAD_UPSTREAM_ADDR_count_api}\"\n      }\n\n      config {\n        image = \"hashicorpnomad/counter-dashboard:v1\"\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "connect/sidecar/countdash2.nomad",
    "content": "job \"countdash\" {\n  datacenters = [\"dc1\"]\n\n  group \"api\" {\n    network {\n      mode = \"bridge\"\n    }\n\n    service {\n      name = \"count-api\"\n      port = \"9001\"\n\n      connect {\n        sidecar_service {\n          proxy {\n            config {\n              protocol=\"http\"\n            }\n          }\n        }\n      }\n    }\n\n    task \"web\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hashicorpnomad/counter-api:v1\"\n      }\n    }\n  }\n\n  group \"dashboard\" {\n    network {\n      mode = \"bridge\"\n\n      port \"http\" {\n        static = 9002\n        to     = 9002\n      }\n    }\n\n    service {\n      name = \"count-dashboard\"\n      port = \"9002\"\n\n      connect {\n        sidecar_service {\n          proxy {\n            config {\n              protocol = \"http\"\n            }\n            upstreams {\n              destination_name = \"count-api\"\n              local_bind_port  = 8080\n            }\n          }\n        }\n      }\n    }\n\n    task \"dashboard\" {\n      driver = \"docker\"\n\n      env {\n        COUNTING_SERVICE_URL = \"http://${NOMAD_UPSTREAM_ADDR_count_api}\"\n      }\n\n      config {\n        image = \"hashicorpnomad/counter-dashboard:v1\"\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "consul/add_check/README.md",
    "content": "# Adding a service to a Nomad Job\n\nThis example shows a simple Nomad job (`e1.nomad`) which can be run in the\ncluster. Running `e2.nomad` will add a Consul check to the job. Adding a check\nis a non-destructive operation.\n\n![Nomad Plan result showing an \"in-place upgrade\" when only adding a check](images/e2.png)\n\nRunning `e3.nomad` will cause a destructive change because it adds a job meta\nargument which must be dealt with by restarting the workload. This\ncounterexample helps to illustrate that adding a check is a non-destructive\noperation.\n\n![Nomad Plan result showing a create/destroy update because of meta stanza](images/e3.png)\n"
  },
  {
    "path": "consul/add_check/e1.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"redis:7\"\n        ports          = [\"db\"]\n        auth_soft_fail = true\n      }\n  }\n}\n"
  },
  {
    "path": "consul/add_check/e2.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    service {\n      name = \"redis-cache\"\n      tags = [\"global\", \"cache\"]\n      port = \"db\"\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"redis:7\"\n        ports          = [\"db\"]\n        auth_soft_fail = true\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "consul/add_check/e3.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  meta = {\n    \"test\" = \"rebootparty\"\n  }\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    service {\n      name = \"redis-cache\"\n      tags = [\"global\", \"cache\"]\n      port = \"db\"\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"redis:7\"\n        ports          = [\"db\"]\n        auth_soft_fail = true\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "consul/use_consul_for_kv_path/README.md",
    "content": "## Use Consul for KV Path\n\nThis sample will use a Consul KV key to determine a path for other Consul KV\nelements using `printf` to compose it.\n\n\n## Set up\n\nBuild a small set of Consul KV keys for the job to use\n\n```\nconsul kv put template/current \"config1\"\nconsul kv put template/config1/name \"config1.service.consul\"\nconsul kv put template/config1/ip \"10.0.1.100\"\nconsul kv put template/config1/port \"7777\"\nconsul kv put template/config2/name \"config2.service.consul\"\nconsul kv put template/config2/ip \"10.0.2.200\"\nconsul kv put template/config2/port \"8888\"\n```\n\nRun the `template.nomad` job\n\n```\nnomad job run template.nomad\n```\n\nYou will receive scheduling information in the output; note the allocation ID.\n\n```\n==> Monitoring evaluation \"ba76383e\"\n    Evaluation triggered by job \"template\"\n==> Monitoring evaluation \"ba76383e\"\n    Allocation \"e4d4bcf1\" created: node \"f7bc1f2d\", group \"group\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"ba76383e\" finished with status \"complete\"\n```\n\nFetch the output template file using the `nomad alloc fs` command.\n\n```\nnomad alloc fs e4d4bcf1 command/local/template.out\n```\n\nObserve that the template is built with the `config1` paths.\n\n```\nName: config1.service.consul\nIP: 10.0.1.100:7777\n```\n\nUpdate the KV value to `config2`.\n\n```\nconsul kv put template/current \"config2\"\n```\n\nConsul should indcate success.\n\n```\nSuccess! Data written to: template/current\n```\n\nCheck the status of the allocation. 
\n\n```\nnomad alloc status e4d4bcf1\n```\n\nObserve that your change caused Nomad to restart it.\n\n```\nID                  = e4d4bcf1-f300-b7e7-2f8a-c252eae04822\nEval ID             = ba76383e\nName                = template.group[0]\nNode ID             = f7bc1f2d\nNode Name           = nomad-client-1.node.consul\nJob ID              = template\nJob Version         = 0\nClient Status       = running\nClient Description  = Tasks are running\nDesired Status      = run\nDesired Description = <none>\nCreated             = 1m23s ago\nModified            = 39s ago\n\nTask \"command\" is \"running\"\nTask Resources\nCPU        Memory           Disk     Addresses\n0/100 MHz  112 KiB/300 MiB  300 MiB  \n\nTask Events:\nStarted At     = 2021-06-07T17:32:22Z\nFinished At    = N/A\nTotal Restarts = 1\nLast Restart   = 2021-06-07T13:32:22-04:00\n\nRecent Events:\nTime                       Type              Description\n2021-06-07T13:32:22-04:00  Started           Task started by client\n2021-06-07T13:32:22-04:00  Driver            Downloading image\n2021-06-07T13:32:22-04:00  Restarting        Task restarting in 0s\n2021-06-07T13:32:22-04:00  Terminated        Exit Code: 137, Exit Message: \"Docker container exited with non-zero exit code: 137\"\n2021-06-07T13:32:16-04:00  Restart Signaled  Template with change_mode restart re-rendered\n2021-06-07T13:31:40-04:00  Started           Task started by client\n2021-06-07T13:31:39-04:00  Driver            Downloading image\n2021-06-07T13:31:39-04:00  Task Setup        Building Task Directory\n2021-06-07T13:31:39-04:00  Received          Task received by client\n```\n\nNow, refetch the rendered file with `nomad alloc fs`.\n```\nnomad alloc fs e4d4bcf1 command/local/template.out\n```\n\nObserve that the content now shows the values for the config2 paths.\n\n```\nName: config2.service.consul\nIP: 10.0.2.200:8888\n```\n\n## Clean up\n\nRemove the running sample job.\n\n```\nnomad job stop -purge template\n```\n\nRemove the Consul 
keys.\n\n```\nconsul kv delete template/current\nconsul kv delete template/config1/name\nconsul kv delete template/config1/ip\nconsul kv delete template/config1/port\nconsul kv delete template/config2/name\nconsul kv delete template/config2/ip\nconsul kv delete template/config2/port\n```\n"
  },
  {
    "path": "consul/use_consul_for_kv_path/template.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    count = 1\n\n    task \"command\" {\n      template {\n        data = <<EOH\n{{- with key \"template/current\" -}}\nName: {{ key (printf \"template/%v/name\" .) }}\nIP: {{ key (printf \"template/%v/ip\" .) }}:{{ key (printf \"template/%v/port\" .) }}\n{{- printf \"\\n\" -}}\n{{- end -}}\nEOH\n        destination = \"local/template.out\"\n      }\n\n      # This is a favorite do nothing worload.\n      driver = \"docker\"\n\n      config {\n        image = \"alpine\"\n        command = \"sh\"\n        args    = [\"-c\", \"while true; do sleep 300; done\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "consul-template/coordination/README.md",
    "content": "## Using Consul-Template to fake Task Dependencies\n\nThe consul-template library has a blocking behavior in the instances that a key does not yet exist in Consul.  This can be ~~abused~~ leveraged to allow for some light coordination between dependent Nomad tasks.  This would only work in instances where you were able to write to Consul from your workload once you entered the ready state or had a coordinating task that could perform this work based on some sort of application health check.\n"
  },
  {
    "path": "consul-template/coordination/sample.nomad",
    "content": "job sleepy {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n{{ $consulKey := printf \"nomad/jobs/%s/%s/first_task.sh/running\" (env \"NOMAD_JOB_NAME\") (env \"NOMAD_ALLOC_ID\") }}{{ $consulKey }}\n#{{ key $consulKey }}\n\nSLEEP_SECS=${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGTERM received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for ${SLEEP_SECS} seconds.\"; interruptable_sleep ${SLEEP_SECS}; done\n\nEOH\n      }\n\n      resources {\n        memory = 10\n        cpu    = 100\n      }\n    }\n\n    task \"first_task.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/first_task.sh\"\n      }\n\n      artifact {\n        source = \"https://releases.hashicorp.com/consul/1.2.1/consul_1.2.1_linux_amd64.zip\"\n      }\n\n      template {\n        destination = \"local/first_task.sh\"\n        data        = <<EOH\n#!/bin/bash\nSLEEP_SECS=${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*${1}))); do sleep .5; done ;}\nsigint() { echo \"$(date) - SIGTERM received; Ending.\"; exit 0;}\ntrap 'sigint'  INT\necho \"$(date) - Starting. 
Sleeping 10 seconds to simulate startup time or something\"\nsleep 10\nchmod +x ${NOMAD_TASK_DIR}/consul\nexport CONSUL_HTTP_ADDR=\"http://127.0.0.1:8500\"\n\n# If your cluster is ACL enabled, you will need to add it here.\n#export CONSUL_HTTP_TOKEN=\"3ef34421-1b20-e543-65d4-54067560d377\"\n{{ $consulKey := printf \"nomad/jobs/%s/%s/%s/running\" (env \"NOMAD_JOB_NAME\") (env \"NOMAD_ALLOC_ID\") (env \"NOMAD_TASK_NAME\") }}\necho \"Running: ${NOMAD_TASK_DIR}/consul kv put \\\"{{ $consulKey }}\\\" \\\"$(date)\\\"\"\n${NOMAD_TASK_DIR}/consul kv put \"{{ $consulKey }}\" \"$(date)\"\nwhile true; do echo \"$(date) - Sleeping for ${SLEEP_SECS} seconds.\"; interruptable_sleep ${SLEEP_SECS}; done\n\nEOH\n      }\n\n      resources {\n        memory = 10\n        cpu    = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "consul-template/missing_vault_value/sample.nomad",
    "content": "job sleepy {\n  datacenters = [\"dc1\"]\n  type        = \"system\"\n\n  group \"group\" {\n    task \"sleepy.sh\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      restart {\n        attempts = 3\n        delay    = \"30s\"\n        mode     = \"delay\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data = <<EOH\n#!/bin/bash\n{{ $consulKey := printf \"nomad/jobs/%s/%s/first_task.sh/running\" (env \"NOMAD_JOB_NAME\") (env \"NOMAD_ALLOC_ID\") }}{{ $consulKey }}\n#{{ secret $consulKey }}\n\nwhile true; do echo \"$(date) - Sleeping for ${SLEEP_SECS} seconds.\"; sleep ${SLEEP_SECS}; done\n\nEOH\n      }\n\n      resources {\n        memory = 10\n        cpu    = 100\n      }\n\n      vault {\n        policies = [\"default\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "consul-template/my_first_kv/README.md",
    "content": "[template]:https://www.nomadproject.io/docs/job-specification/template.html#environment-variables\n## My First KV\n\nThis job will fetch a single value from Consul and pass it as an environment\nvariable into the Redis Docker container from the sample job.  The job file\nitself is a cut down of the output from `nomad init --short` to take out\nunnecessary whitespace.\n\nOne important note, in order to use the consul-template library for creating\ndynamic environment variables, you must use the [template] stanza with \n`env = true`.  This allows you to create the key/value environment variable as a\nfile and then read it into the environment.  The Nomad `secrets` directory is\ncommonly used as a destination for these rendered files.\n\nYou can create the necessary Consul KV value with the following command:\n\n```\n$ consul kv put my-first-kv/testData MyAwesomeValue\nSuccess! Data written to: my-first-kv/testData\n```\n\nWhen you are done, or to experiment with a missing value, delete the key with:\n\n```\n$ consul kv delete my-first-kv/testData\nSuccess! Deleted key: my-first-kv/testData\n```\n"
  },
  {
    "path": "consul-template/my_first_kv/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {}\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        ports = [\"db\"]\n      }\n\n      template {\n        destination = \"secrets/file.env\"\n        env         = true\n        data        = <<EOH\nCONSUL_test=\"{{key \"consul-server1/testData\"}}\"\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "countdash/connect/countdash.nomad",
    "content": "job \"countdash\" {\n  datacenters = [\"dc1\"]\n\n  group \"api\" {\n    network {\n      mode = \"bridge\"\n    }\n\n    service {\n      name = \"count-api\"\n      port = \"9001\"\n\n      connect {\n        sidecar_service {}\n      }\n    }\n\n    task \"web\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hashicorpdev/counter-api:v3\"\n      }\n    }\n  }\n\n  group \"dashboard\" {\n    network {\n      mode = \"bridge\"\n\n      port \"http\" {\n        static = 9002\n        to     = 9002\n      }\n    }\n\n    service {\n      name = \"count-dashboard\"\n      port = \"http\"\n\n      connect {\n        sidecar_service {\n          proxy {\n            upstreams {\n              destination_name = \"count-api\"\n              local_bind_port  = 8080\n            }\n          }\n        }\n      }\n    }\n\n    task \"dashboard\" {\n      driver = \"docker\"\n\n      env {\n        COUNTING_SERVICE_URL = \"http://${NOMAD_UPSTREAM_ADDR_count_api}\"\n      }\n\n      config {\n        image = \"hashicorpdev/counter-dashboard:v3\"\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "countdash/simple/countdash.nomad",
    "content": "job \"countdash\" {\n  datacenters = [\"dc1\"]\n\n  group \"api\" {\n    network {\n      port \"dashboard\" {\n        static = 9002\n      }\n\n      port \"count_api\" {\n        static = 9001\n      }\n    }\n\n    task \"web\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hashicorpnomad/counter-api:v1\"\n        ports = [\"count_api\"]\n      }\n    }\n    task \"dashboard\" {\n      driver = \"docker\"\n\n      env {\n        COUNTING_SERVICE_URL = \"http://127.0.0.1:9001\"\n      }\n\n      config {\n        image = \"hashicorpnomad/counter-dashboard:v1\"\n        ports = [\"dashboard\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "csi/aws/ebs/README.md",
    "content": "## Nomad sample job using AWS EBS CSI plugin.\n\nMore information can be found at learn.hashicorp.com/nomad\n"
  },
  {
    "path": "csi/aws/ebs/busybox.nomad",
    "content": "job \"mysql-busybox\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"mysql\" {\n    count = 1\n\n    volume \"mysql\" {\n      type      = \"csi\"\n      read_only = false\n      source    = \"mysql\"\n    }\n\n    task \"busybox\" {\n      driver = \"docker\"\n\n      volume_mount {\n        volume      = \"mysql\"\n        destination = \"/srv\"\n        read_only   = false\n      }\n\n      config {\n        image = \"busybox:latest\"\n        command = \"sh\"\n        args = [\"-c\",\"while true; do echo '.'; sleep 5; done\"]\n      }\n\n      resources {\n        cpu    = 100\n        memory = 128 \n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "csi/aws/ebs/mysql-server.nomad",
    "content": "job \"mysql-server\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"mysql-server\" {\n    count = 1\n\n    volume \"mysql\" {\n      type      = \"csi\"\n      read_only = false\n      source    = \"mysql\"\n    }\n\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n\n    task \"mysql-server\" {\n      driver = \"docker\"\n\n      volume_mount {\n        volume      = \"mysql\"\n        destination = \"/srv\"\n        read_only   = false\n      }\n\n      env = {\n        \"MYSQL_ROOT_PASSWORD\" = \"password\"\n      }\n\n      config {\n        image = \"hashicorp/mysql-portworx-demo:latest\"\n        args = [\"--datadir\", \"/srv/mysql\"]\n\n        port_map {\n          db = 3306\n        }\n      }\n\n      resources {\n        cpu    = 500\n        memory = 512 \n\n        network {\n          port \"db\" {\n            static = 3306\n          }\n        }\n      }\n\n      service {\n        name = \"mysql-server\"\n        port = \"db\"\n\n        check {\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "csi/aws/ebs/plugin-ebs-controller.nomad",
    "content": "job \"plugin-aws-ebs-controller\" {\n  datacenters = [\"dc1\"]\n\n  group \"controller\" {\n    task \"plugin\" {\n      driver = \"docker\"\n\n      config {\n        image = \"amazon/aws-ebs-csi-driver:latest\"\n\n        args = [\n          \"controller\",\n          \"--endpoint=unix://csi/csi.sock\",\n          \"--logtostderr\",\n          \"--v=5\",\n        ]\n      }\n\n      csi_plugin {\n        id        = \"aws-ebs0\"\n        type      = \"controller\"\n        mount_dir = \"/csi\"\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "csi/aws/ebs/plugin-ebs-nodes.nomad",
    "content": "job \"plugin-aws-ebs-nodes\" {\n  datacenters = [\"dc1\"]\n\n  # you can run node plugins as service jobs as well, but this ensures\n  # that all nodes in the DC have a copy.\n  type = \"system\"\n\n  group \"nodes\" {\n    task \"plugin\" {\n      driver = \"docker\"\n\n      config {\n        image = \"amazon/aws-ebs-csi-driver:latest\"\n\n        args = [\n          \"node\",\n          \"--endpoint=unix://csi/csi.sock\",\n          \"--logtostderr\",\n          \"--v=5\",\n        ]\n\n        # node plugins must run as privileged jobs because they\n        # mount disks to the host\n        privileged = true\n      }\n\n      csi_plugin {\n        id        = \"aws-ebs0\"\n        type      = \"node\"\n        mount_dir = \"/csi\"\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "csi/aws/ebs/volume.hcl",
    "content": "# volume registration\ntype = \"csi\"\nid = \"mysql\"\nname = \"mysql\"\nexternal_id = \"vol-098a37a17a40dfa0f\"\naccess_mode = \"single-node-writer\"\nattachment_mode = \"file-system\"\nplugin_id = \"aws-ebs0\"\n\n"
  },
  {
    "path": "csi/aws/efs/README.md",
    "content": "## Demonstration of AWS EFS CSI Plugin on Nomad\n\nPlugin can be found here https://github.com/kubernetes-sigs/aws-efs-csi-driver\n"
  },
  {
    "path": "csi/aws/efs/busybox.nomad",
    "content": "job \"efs-busybox\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"group\" {\n    count = 1\n\n    volume \"jobVolume\" {\n      type      = \"csi\"\n      read_only = false\n      source    = \"csiVolume\"\n    }\n\n    task \"busybox\" {\n      driver = \"docker\"\n\n      volume_mount {\n        volume      = \"jobVolume\"\n        destination = \"/srv\"\n        read_only   = false\n      }\n\n      config {\n        image = \"busybox:latest\"\n        command = \"sh\"\n        args = [\"-c\",\"while true; do echo '.'; sleep 5; done\"]\n      }\n\n      resources {\n        cpu    = 100\n        memory = 128 \n      }\n    }\n  }\n}\n"
  },
  {
    "path": "csi/aws/efs/node.nomad",
    "content": "job \"plugin-aws-efs-nodes\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n\n  group \"nodes\" {\n    task \"plugin\" {\n      driver = \"docker\"\n\n      config {\n        image = \"amazon/aws-efs-csi-driver:latest\"\n\n        args = [\n          \"--endpoint=unix:///csi/csi.sock\",\n          \"--logtostderr\",\n          \"--v=5\",\n        ]\n\n        # node plugins must run as privileged jobs because they\n        # mount disks to the host\n        privileged = true\n      }\n\n      csi_plugin {\n        id        = \"aws-efs\"\n        type      = \"monolith\"\n        mount_dir = \"/csi\"\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128 \n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "csi/aws/efs/volume.hcl",
    "content": "# volume registration\ntype = \"csi\"\nid = \"csiVolume\"\nname = \"efs\"\nexternal_id = \"vol-0c6d464d9c5def899\"\naccess_mode = \"single-node-writer\"\nattachment_mode = \"file-system\"\nplugin_id = \"aws-efs\"\n\n"
  },
  {
    "path": "csi/gcp/gce-pd/README.md",
    "content": "## Nomad Example using GCP Persistent Disk CSI Plugin\n\nSource Repo: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver \n\n### Create a persistent disk\n\nNomad does not handle disk creation and expects this to be done by an operator.\n\n### Edit the disk.hcl file\n\nOnce the Disk is created, edit the disk.hcl file and replace the placeholder\n(`«disk id as listed in the GCP Disks page»`) with the disk ID\nfound in the GCP interface.\n\n### Run an agent to test\n\nYou are now ready to run the node job, register the volume, and run the workload.\n\nYou can use a dev agent to test; however, you will need to pass in additional\nconfiguration to allow the Docker driver to run privileged containers. This is\na requirement to allow the CSI plugin containers to mount and unmount storage.\n\nThere is a config.nomad that has the necessary configuration. Start an agent by\nrunning:\n\n```shell\n$ nomad agent -dev -config=config.nomad\n```\n\nFor full clusters, verify that your clients have the appropriate permission\nconfigured for the docker plugin. Once properly configured, you will be able to\nrun the node.nomad file, wait for the plugins to become healthy, register the\nvolume, and then run the job.nomad file.\n\n### Use nomad alloc exec to check the mount\n\nYou can connect to the mounted container by running `nomad alloc exec` for the\nallocation of the workload. For example.\n\n```shell\n$ nomad alloc exec ac345h /bin/sh\n```\n\nThis will give you a shell prompt inside of the container. If you list the `/srv`\ndirectory, you should see a lost+found directory. This indicates that you are at\nthe base of an ext filesystem and shows that your block device was mounted into\nyour container there.\n\n```shell\n# ls /srv\n.       ..      lost+found\n```\n"
  },
  {
    "path": "csi/gcp/gce-pd/config.nomad",
    "content": "plugin \"docker\" {\n  config {\n    allow_privileged = true\n  }\n}"
  },
  {
    "path": "csi/gcp/gce-pd/controller.nomad",
    "content": "job \"controller\" {\n  datacenters = [\"dc1\"]\n  group \"controller\" {\n    task \"plugin\" {\n      driver = \"docker\"\n      template {\n        data = <<EOH\n{{ key \"service_account\" }}\nEOH\n  destination = \"secrets/creds.json\"\n      }\n       env {\n           \"GOOGLE_APPLICATION_CREDENTIALS\" = \"/secrets/creds.json\"\n        }\n      config {\n        image = \"gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.7.0-gke.0\"\n       args = [\n          \"--endpoint=unix:///csi/csi.sock\",\n          \"--v=6\",\n          \"--logtostderr\",\n          \"--run-node-service=false\"\n        ]\n      }\n      csi_plugin {\n        id        = \"gcepd\"\n        type      = \"controller\"\n        mount_dir = \"/csi\"\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "csi/gcp/gce-pd/cv-nomad.hcl",
    "content": "# volume registration\ntype = \"csi\"\nid = \"myVolume\"\nname = \"cv-nomad\"\nexternal_id = \"projects/cv-nomad-gcp-csi/zones/us-central1-a/disks/cv-disk-1\"\naccess_mode = \"single-node-writer\"\nattachment_mode = \"file-system\"\nplugin_id = \"gcepd\"\n"
  },
  {
    "path": "csi/gcp/gce-pd/disk.hcl",
    "content": "# volume registration\ntype = \"csi\"\nid = \"VolumeID\"\nname = \"VolumeName\"\nexternal_id = \"«selfLink for the disk from the 'Equivalent REST' output»\"\naccess_mode = \"single-node-writer\"\nattachment_mode = \"file-system\"\nplugin_id = \"gcepd\"\n"
  },
  {
    "path": "csi/gcp/gce-pd/job.nomad",
    "content": "job \"alpine\" {\n  datacenters = [\"dc1\"]\n\n  group \"alloc\" {\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n\n    volume \"jobVolume\" {\n      type      = \"csi\"\n      read_only = false\n      source    = \"myVolume\"\n    }\n\n    task \"docker\" {\n      driver = \"docker\"\n\n      volume_mount {\n        volume      = \"jobVolume\"\n        destination = \"/srv\"\n        read_only   = false\n      }\n\n      config {\n        image = \"alpine\"\n        command = \"sh\"\n        args = [\"-c\",\"while true; do sleep 10; done\"]\n      }\n    }\n  }\n}"
  },
  {
    "path": "csi/gcp/gce-pd/nodes.nomad",
    "content": "job \"nodes\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  group \"nodes\" {\n    task \"plugin\" {\n      driver = \"docker\"\n      template {\n        data = <<EOH\n{{ key \"service_account\" }}\nEOH\n  destination = \"secrets/creds.json\"\n      }\n      env { \"GOOGLE_APPLICATION_CREDENTIALS\" = \"/secrets/creds.json\"\n      }\n      config {\n        image = \"gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.7.0-gke.0\"\n        args = [\n          \"--endpoint=unix:///csi/csi.sock\",\n          \"--v=6\",\n          \"--logtostderr\",\n          \"--run-controller-service=false\"\n        ]\n        privileged = true\n      }\n      csi_plugin {\n        id        = \"gcepd\"\n        type      = \"node\"\n        mount_dir = \"/csi\"\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "csi/hetzner/volume/README.md",
    "content": "## Nomad Example using Hetzner Cloud Volume CSI Plugin\n\nSource Repo: https://github.com/hetznercloud/csi-driver\n\n### Create a volume\n\nNomad does not handle volume creation and expects this to be done by an operator.\n\n### Edit the volume.hcl file\n\nOnce the volume is created, edit the volume.hcl file and replace the placeholder\n(`«volume id as listed in the Hetzner UI Volumes page»`) with the volume ID\nfound in the Hetzner interface.\n\n### Run an agent to test\n\nYou are now ready to run the node job, register the volume, and run the workload.\n\nYou can use a dev agent to test; however, you will need to pass in additional\nconfiguration to allow the Docker driver to run privileged containers. This is\na requirement to allow the CSI plugin containers to mount and unmount storage.\n\nThere is a config.nomad that has the necessary configuration. Start an agent by\nrunning:\n\n```shell\n$ nomad agent -dev -config=config.nomad\n```\n\nFor full clusters, verify that your clients have the appropriate permission\nconfigured for the docker plugin. Once properly configured, you will be able to\nrun the node.nomad file, wait for the plugins to become healthy, register the\nvolume, and then run the job.nomad file.\n\n### Use nomad alloc exec to check the mount\n\nYou can connect to the mounted container by running `nomad alloc exec` for the\nallocation of the workload. For example.\n\n```shell\n$ nomad alloc exec ac345h /bin/sh\n```\n\nThis will give you a shell prompt inside of the container. If you list the `/srv`\ndirectory, you should see a lost+found directory. This indicates that you are at\nthe base of an ext filesystem and shows that your block device was mounted into\nyour container there.\n\n```shell\n# ls /srv\n.       ..      lost+found\n```\n"
  },
  {
    "path": "csi/hetzner/volume/config.nomad",
    "content": "plugin \"docker\" {\n  config {\n    allow_privileged = true\n  }\n}"
  },
  {
    "path": "csi/hetzner/volume/job.nomad",
    "content": "job \"alpine\" {\n  datacenters = [\"dc1\"]\n\n  group \"alloc\" {\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n\n    volume \"jobVolume\" {\n      type      = \"csi\"\n      read_only = false\n      source    = \"myVolume\"\n    }\n\n    task \"docker\" {\n      driver = \"docker\"\n\n      volume_mount {\n        volume      = \"jobVolume\"\n        destination = \"/srv\"\n        read_only   = false\n      }\n\n      config {\n        image = \"alpine\"\n        command = \"sh\"\n        args = [\"-c\",\"while true; do sleep 10; done\"]\n      }\n    }\n  }\n}"
  },
  {
    "path": "csi/hetzner/volume/node.nomad",
    "content": "job \"node\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n\n  group \"node\" {\n    task \"plugin\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hetznercloud/hcloud-csi-driver:1.2.3\"\n        privileged = true\n      }\n\n      env {\n        CSI_ENDPOINT=\"unix:///csi/csi.sock\"    \n        HCLOUD_TOKEN=\"«your token»\"\n      }\n\n      csi_plugin {\n        id        = \"csi.hetzner.cloud\"\n        type      = \"monolith\"\n        mount_dir = \"/csi\"\n      }\n    }\n  }\n}"
  },
  {
    "path": "csi/hetzner/volume/volume.hcl",
    "content": "# volume registration\ntype = \"csi\"\nid = \"VolumeID\"\nname = \"VolumeName\"\nexternal_id = \"«volume id as listed in the Hetzner UI Volumes page»\"\naccess_mode = \"single-node-writer\"\nattachment_mode = \"file-system\"\nplugin_id = \"csi.hetzner.cloud\""
  },
  {
    "path": "csi/hostpath/block/README.md",
    "content": "### Nomad CSI Demo using the CSI hostvolume plugin\n\nPrerequisites\n\n- https://github.com/rexray/gocsi/tree/master/csc\n- https://quay.io/repository/k8scsi/hostpathplugin?tag=v1.2.0\n- Nomad 0.11 \n\n\nThis script will create a volume.hcl file \n\n```\n#!/bin/bash\n\n# create the volume in the \"external provider\"\n\nPLUGIN_ID=hostpath-plugin0\nVOLUME_NAME=test-volume0\n\n# non-dev mode\n# CSI_ENDPOINT=\"/var/nomad/client/csi/monolith/$PLUGIN_ID/csi.sock\"\n\n# dev mode path is going to be in a tempdir\nPLUGIN_DOCKER_ID=$(docker ps | grep hostpath | awk -F' +' '{print $1}')\nCSI_ENDPOINT=$(docker inspect $PLUGIN_DOCKER_ID | jq -r '.[0].Mounts[] | select(.Destination == \"/csi\") | .Source')/csi.sock\n\necho \"creating volume...\"\nUUID=$(sudo csc --endpoint $CSI_ENDPOINT controller create-volume $VOLUME_NAME --cap 1,2,ext4 | grep -o '\".*\"' | tr -d '\"')\n\necho \"registering volume $UUID...\"\n\necho $(printf 'id = \"%s\"\nname = \"%s\"\ntype = \"csi\"\nexternal_id = \"%s\"\nplugin_id = \"%s\"\naccess_mode = \"single-node-writer\"\nattachment_mode = \"file-system\"' $VOLUME_NAME $VOLUME_NAME $UUID $PLUGIN_ID) > volume.hcl\n\nnomad volume register volume.hcl\n\necho \"querying volume $UUID...\"\nnomad volume status $UUID\n```\n\n"
  },
  {
    "path": "csi/hostpath/block/csi-hostpath-driver.nomad",
    "content": "job \"csi-hostpath\" {\n  datacenters = [\"dc1\"]\n  type        = \"system\"\n\n  group \"nodes\" {\n    task \"plugin\" {\n      driver = \"docker\"\n\n      config {\n        image        = \"k8s.gcr.io/sig-storage/hostpathplugin:v1.9.0\"\n\n        args = [\n          \"--v=5\",\n          \"--drivername=csi-hostpath\",\n          \"--endpoint=unix://csi/csi.sock\",\n          \"--nodeid=${attr.unique.hostname}\",\n        ]\n        privileged = true\n      }\n\n      csi_plugin {\n        id                      = \"csi_hostpath\"\n        type                    = \"monolith\"\n        mount_dir               = \"/csi\"\n        health_timeout          = \"30s\"\n      }\n\n      resources {\n        cpu    = 250\n        memory = 128\n      }\n    }\n  }\n}"
  },
  {
    "path": "csi/hostpath/block/job.nomad",
    "content": "job \"alpine\" {\n  datacenters = [\"dc1\"]\n\n  group \"alloc\" {\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n\n    volume \"jobVolume\" {\n      type      = \"csi\"\n      read_only = false\n      source    = \"test-volume0\"\n    }\n\n    task \"docker\" {\n      driver = \"docker\"\n\n      volume_mount {\n        volume      = \"jobVolume\"\n        destination = \"/srv\"\n        read_only   = false\n      }\n\n      config {\n        image = \"alpine\"\n        command = \"sleep\"\n        args = [\"infinity\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "csi/hostpath/block/test.sh",
    "content": "#!/bin/bash\n\n# create the volume in the \"external provider\"\n\nPLUGIN_ID=$1\nVOLUME_NAME=$2\n\n# non-dev mode\n# CSI_ENDPOINT=\"/var/nomad/client/csi/monolith/$PLUGIN_ID/csi.sock\"\n\n# dev mode path is going to be in a tempdir\nPLUGIN_DOCKER_ID=$(docker ps | grep hostpath | awk -F' +' '{print $1}')\nCSI_ENDPOINT=$(docker inspect $PLUGIN_DOCKER_ID | jq -r '.[0].Mounts[] | select(.Destination == \"/csi\") | .Source')/csi.sock\n\necho \"creating volume...\"\nUUID=$(sudo csc --endpoint $CSI_ENDPOINT controller create-volume $VOLUME_NAME --cap 1,2,ext4 | grep -o '\".*\"' | tr -d '\"')\n\necho \"registering volume $UUID...\"\n\necho $(printf 'id = \"%s\"\nname = \"%s\"\ntype = \"csi\"\nexternal_id = \"%s\"\nplugin_id = \"%s\"\naccess_mode = \"single-node-writer\"\nattachment_mode = \"file-system\"' $VOLUME_NAME $VOLUME_NAME $UUID $PLUGIN_ID) > volume.hcl\n\nnomad volume register volume.hcl\n\necho \"querying volume $UUID...\"\nnomad volume status $UUID\n"
  },
  {
    "path": "csi/hostpath/file/README.md",
    "content": "### Nomad CSI Demo using the CSI hostvolume plugin\n\nPrerequisites\n\n- https://github.com/rexray/gocsi/tree/master/csc\n- https://quay.io/repository/k8scsi/hostpathplugin?tag=v1.2.0\n- Nomad 0.11 \n\n\nThis script will create a volume.hcl file \n\n```\n#!/bin/bash\n\n# create the volume in the \"external provider\"\n\nPLUGIN_ID=hostpath-plugin0\nVOLUME_NAME=test-volume0\n\n# non-dev mode\n# CSI_ENDPOINT=\"/var/nomad/client/csi/monolith/$PLUGIN_ID/csi.sock\"\n\n# dev mode path is going to be in a tempdir\nPLUGIN_DOCKER_ID=$(docker ps | grep hostpath | awk -F' +' '{print $1}')\nCSI_ENDPOINT=$(docker inspect $PLUGIN_DOCKER_ID | jq -r '.[0].Mounts[] | select(.Destination == \"/csi\") | .Source')/csi.sock\n\necho \"creating volume...\"\nUUID=$(sudo csc --endpoint $CSI_ENDPOINT controller create-volume $VOLUME_NAME --cap 1,2,ext4 | grep -o '\".*\"' | tr -d '\"')\n\necho \"registering volume $UUID...\"\n\necho $(printf 'id = \"%s\"\nname = \"%s\"\ntype = \"csi\"\nexternal_id = \"%s\"\nplugin_id = \"%s\"\naccess_mode = \"single-node-writer\"\nattachment_mode = \"file-system\"' $VOLUME_NAME $VOLUME_NAME $UUID $PLUGIN_ID) > volume.hcl\n\nnomad volume register volume.hcl\n\necho \"querying volume $UUID...\"\nnomad volume status $UUID\n```\n\n"
  },
  {
    "path": "csi/hostpath/file/csi-hostpath-driver.nomad",
    "content": "job \"csi-hostpath-driver\" {\n  datacenters = [\"dc1\"]\n\n  group \"csi\" {\n    task \"driver\" {\n      driver = \"docker\"\n\n      config {\n        image = \"quay.io/k8scsi/hostpathplugin:v1.2.0\"\n\n        args = [\n          \"--drivername=csi-hostpath\",\n          \"--v=5\",\n          \"--endpoint=unix://csi/csi.sock\",\n          \"--nodeid=foo\",\n        ]\n\n        // all known CSI plugins will require privileged=true\n        // because they need add mountpoints. in the ACLs\n        // design we may make csi_plugin implicitly add the\n        // appropriate privileges.\n        privileged = true\n      }\n\n      csi_plugin {\n        id        = \"csi-hostpath\"\n        type      = \"monolith\"\n        mount_dir = \"/csi\"\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "csi/hostpath/file/job.nomad",
    "content": "job \"alpine\" {\n  datacenters = [\"dc1\"]\n\n  group \"alloc\" {\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n\n    volume \"jobVolume\" {\n      type      = \"csi\"\n      read_only = false\n      source    = \"test-volume0\"\n    }\n\n    task \"docker\" {\n      driver = \"docker\"\n\n      volume_mount {\n        volume      = \"jobVolume\"\n        destination = \"/srv\"\n        read_only   = false\n      }\n\n      config {\n        image = \"alpine\"\n        command = \"sh\"\n        args = [\"-c\",\"while true; do sleep 10; done\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "csi/hostpath/file/test.sh",
    "content": "#!/bin/bash\n\n# create the volume in the \"external provider\"\n\nPLUGIN_ID=$1\nVOLUME_NAME=$2\n\n# non-dev mode\n# CSI_ENDPOINT=\"/var/nomad/client/csi/monolith/$PLUGIN_ID/csi.sock\"\n\n# dev mode path is going to be in a tempdir\nPLUGIN_DOCKER_ID=$(docker ps | grep hostpath | awk -F' +' '{print $1}')\nCSI_ENDPOINT=$(docker inspect $PLUGIN_DOCKER_ID | jq -r '.[0].Mounts[] | select(.Destination == \"/csi\") | .Source')/csi.sock\n\necho \"creating volume...\"\nUUID=$(sudo csc --endpoint $CSI_ENDPOINT controller create-volume $VOLUME_NAME --cap 1,2,ext4 | grep -o '\".*\"' | tr -d '\"')\n\necho \"registering volume $UUID...\"\n\necho $(printf 'id = \"%s\"\nname = \"%s\"\ntype = \"csi\"\nexternal_id = \"%s\"\nplugin_id = \"%s\"\naccess_mode = \"single-node-writer\"\nattachment_mode = \"file-system\"' $VOLUME_NAME $VOLUME_NAME $UUID $PLUGIN_ID) > volume.hcl\n\nnomad volume register volume.hcl\n\necho \"querying volume $UUID...\"\nnomad volume status $UUID\n"
  },
  {
    "path": "csi/hostpath/volume.hcl",
    "content": "id        = \"ebs_prod_db1\"\nnamespace = \"default\"\nname      = \"database\"\ntype      = \"csi\"\nplugin_id = \"plugin_id\"\n\n# For 'nomad volume register', provide the external ID from the storage\n# provider. This field should be omitted when creating a volume with\n# 'nomad volume create'\nexternal_id = \"vol-23452345\"\n\n# For 'nomad volume create', specify a snapshot ID or volume to clone. You can\n# specify only one of these two fields.\nsnapshot_id = \"snap-12345\"\n# clone_id    = \"vol-abcdef\"\n\n# Optional: for 'nomad volume create', specify a maximum and minimum capacity.\n# Registering an existing volume will record but ignore these fields.\ncapacity_min = \"10GiB\"\ncapacity_max = \"20G\"\n\n# Required (at least one): for 'nomad volume create', specify one or more\n# capabilities to validate. Registering an existing volume will record but\n# ignore these fields.\ncapability {\n  access_mode     = \"single-node-writer\"\n  attachment_mode = \"file-system\"\n}\n\ncapability {\n  access_mode     = \"single-node-reader\"\n  attachment_mode = \"block-device\"\n}\n\n# Optional: for 'nomad volume create', specify mount options to validate for\n# 'attachment_mode = \"file-system\". Registering an existing volume will record\n# but ignore these fields.\nmount_options {\n  fs_type     = \"ext4\"\n  mount_flags = [\"ro\"]\n}\n\n# Optional: specify one or more locations where the volume must be accessible\n# from. 
Refer to the plugin documentation for what segment values are supported.\ntopology_request {\n  preferred {\n    topology { segments { rack = \"R1\" } }\n  }\n  required {\n    topology { segments { rack = \"R1\" } }\n    topology { segments { rack = \"R2\", zone = \"us-east-1a\" } }\n  }\n}\n\n# Optional: provide any secrets specified by the plugin.\nsecrets {\n  example_secret = \"xyzzy\"\n}\n\n# Optional: provide a map of keys to string values expected by the plugin.\nparameters {\n  skuname = \"Premium_LRS\"\n}\n\n# Optional: for 'nomad volume register', provide a map of keys to string\n# values expected by the plugin. This field will populated automatically by\n# 'nomad volume create'.\ncontext {\n  endpoint = \"http://192.168.1.101:9425\"\n}"
  },
  {
    "path": "deployments/failing_deployment/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    service {\n      name = \"redis-cache\"\n      tags = [\"global\", \"cache\"]\n      port = \"db\"\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}"
  },
  {
    "path": "docker/auth_from_template/README.md",
    "content": "# Auth from Template Example\n\nThis job specification demonstrates using the `template` stanza to create\nenvironment variables suitable for Nomad to use in variable interpolation.\n\nThis example uses Consul KV, since there is less configuration necessary to\nrun the sample; however, this exists to demonstrate that a Vault-based solution\n(once configured with your cluster) would be trivial to switch to.\n\nThis job pairs with the docker_registry_v2 job from the applications folder,\nwhich has basic authentication enabled. Once you have started it, you will need\nto pull the redis:latest image from DockerHub and push it into your local repo.\n\n\n### Add the values for the job to Consul\n\n```shell-session\n$ consul kv put consul kv put kv/docker/config/user user\n$ consul kv put consul kv put kv/docker/config/pass securepassword\n```\n\nRunning the job will start as expected. Stop the job.\n\n### Add the values for the job to Consul\n\n```shell-session\n$ consul kv put consul kv put kv/docker/config/pass securepasswordLOL\n```\n\nRunning the job now will fail since the credential is invalid.\n\n\n\n\n"
  },
  {
    "path": "docker/auth_from_template/auth.nomad",
    "content": "job \"auth\" {\n\n  type        = \"service\"\n  datacenters = [\"dc1\"]\n\n  group \"docker\" {\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      template {\n        destination = \"secrets/secret.env\"\n        env         = true\n        change_mode = \"noop\"\n        data        = <<EOH\nDOCKER_USER={{ key \"kv/docker/config/user\" }}\nDOCKER_PASS={{ key \"kv/docker/config/pass\" }}\nEOH\n      }\n\n      config {\n        # Update this value for your private container\n        # registry\n        image = \"registry.service.consul:5000/redis:latest\"\n        auth {\n          username = \"${DOCKER_USER}\"\n          password = \"${DOCKER_PASS}\"\n        }\n      }\n\n      resources {\n        cpu    = 200\n        memory = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/datadog/container_network.nomad",
    "content": "job \"example\" {\n  type = \"system\"\n  datacenters = [\"dc1\"]\n  group \"monitoring\" {\n    task \"dd-agent\" {\n      driver = \"docker\"\n      env {\n        HOSTIP=\"${attr.unique.network.ip-address}\",\n        STATSD_PORT=\"8125\"\n        API_KEY = \"23cecf6a16b072151c561fe7e6e3938a\"\n        DD_DOGSTATSD_NON_LOCAL_TRAFFIC = \"true\"\n      }\n      config {\n        hostname = \"${node.unique.name}-docker\"\n        image = \"datadog/docker-dd-agent:latest\"\n        port_map {\n          tport = 8125 \n        }\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n        network {\n          mbits = 10\n          port \"tport\" { static = 8125 }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/datadog/ex3.nomad",
    "content": "job \"dd\" {\n  type = \"system\"\n  datacenters = [\"dc1\"]\n  group \"monitoring\" {\n    task \"dd-agent\" {\n      driver = \"docker\"\n      env {\n        HOSTIP=\"${attr.unique.network.ip-address}\",\n        STATSD_PORT=\"8125\"\n        API_KEY = \"23cecf6a16b072151c561fe7e6e3938a\"\n        DD_DOGSTATSD_NON_LOCAL_TRAFFIC = \"true\"\n      }\n      config {\n        network_mode = \"contiv-pod-net\"\n        hostname = \"${node.unique.name}-docker\"\n        image = \"datadog/docker-dd-agent:latest\"\n        port_map {\n          tport = 8125 \n        }\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n        network {\n          mbits = 10\n          port \"tport\" { static = 8125 }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/datadog/example2.nomad",
    "content": "job \"example\" {\n  type = \"system\"\n  datacenters = [\"dc1\"]\n  group \"monitoring\" {\n    task \"dd-agent\" {\n      driver = \"docker\"\n      env {\n        HOSTIP=\"${attr.unique.network.ip-address}\",\n        STATSD_PORT=\"8125\"\n        API_KEY = \"23cecf6a16b072151c561fe7e6e3938a\"\n        DD_DOGSTATSD_NON_LOCAL_TRAFFIC = \"true\"\n      }\n      config {\n        hostname = \"${node.unique.name}-docker\"\n        image = \"datadog/docker-dd-agent:latest\"\n        port_map {\n          tport = 8125 \n        }\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n        network {\n          mbits = 10\n          port \"tport\" { static = 8125 }\n        }\n      }\n      service {\n        name = \"datadog\"\n        tags = [\"cache\"]\n        port = \"tport\"\n        check {\n          name     = \"alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker+host_volume/README.md",
    "content": "# Docker + Host Volumes\n\nThis is a demonstration of using Nomad Host volumes with Docker mounts to make deep mounts from the host volume into\ncontainer paths.\n\n"
  },
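  {
    "path": "docker/docker+host_volume/client.hcl.example",
    "content": "# Hypothetical client configuration sketch (not part of the original demo):\n# the jobs in this folder assume a Nomad client that exposes a host volume\n# named \"container-test\". The path below matches the one referenced in\n# unsafe.nomad; adjust it for your environment.\nclient {\n  host_volume \"container-test\" {\n    path      = \"/opt/nomad/volumes/container-test\"\n    read_only = false\n  }\n}\n"
  },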
  {
    "path": "docker/docker+host_volume/task_deps.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    volume \"test\" {\n      type      = \"host\"\n      source    = \"container-test\"\n      read_only = false\n    }\n\n    task \"mount\" {\n      lifecycle {\n        hook = \"prestart\"\n        sidecar = true\n      }\n      driver = \"exec\"\n      volume_mount {\n        volume      = \"test\"\n        destination = \"alloc/host_vol\"\n      }\n      config {\n        command = \"/bin/bash\"\n        args = [\"-c\",\"while true; do sleep 300; done\"]\n      }\n      resources { cpu=20 memory=100 }\n    }\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n        port_map { db = 6379 }\n        mounts = [\n          {\n            type = \"bind\"\n            target = \"/folder1\"\n            source = \"${NOMAD_ALLOC_DIR}/host_vol/folder1\"\n            readonly = false\n            bind_options {\n              propagation = \"rshared\"\n            }\n          }\n        ]\n      }\n      resources { network { port \"db\" {} } }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker+host_volume/unsafe.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    volume \"test\" {\n      type      = \"host\"\n      source    = \"container-test\"\n      read_only = false\n    }\n    task \"redis\" {\n      driver = \"docker\"\n      volume_mount {\n        volume      = \"test\"\n        destination = \"/host_vol\"\n      }\n      config {\n        image = \"redis:7\"\n        port_map { db = 6379 }\n        volumes = [\n          \"/opt/nomad/volumes/container-test/folder1:/folder1\",\n          \"/opt/nomad/volumes/container-test/folder2:/folder2\"\n        ]\n     }\n\n      resources { network { port \"db\" {} } }\n\n      service {\n        port = \"db\"\n        check {\n          name = \"alive\"\n          type = \"tcp\"\n          interval = \"10s\"\n          timeout = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_dynamic_hostname/README.md",
    "content": "# Setting a Docker container's hostname to the Nomad Client name\n\n## Requirements\n\nThis scenario is more interesting when run in a Nomad cluster with multiple\nclients, but it will work in a cluster as small as a dev agent.\n\n## Steps\n\nYou can either run the provided `finished.nomad` job specification or follow\nthe next few steps to create the job specification yourself. If you choose to\nuse the finished.nomad job specification, skip to the [Run the job][#RunTheJob]\nstep\n\n### Build the job specification\n\nRun `nomad job init -short` to produce the example Redis docker job.\n\n```shell\n$ nomad job init -short\nExample job file written to example.nomad\n```\n\nOpen up the `example.nomad` job file in a text editor.\nInside the `job \"example\" > group \"cache\" > task \"redis\" > config` block, add\nthe following:\n\n```hcl\n        hostname       = \"${attr.unique.hostname}\"\n```\n\nSet the count on the `group \"cache\"` to 3.\n\n```hcl\n  group \"cache\" {\n    count = 3\n    ...\n```\n\n### Run the job <a name=\"RunTheJob\"></a>\n\nRun the job in your Nomad cluster and wait for the instances to become healthy.\nYou will be returned to a shell prompt.\n\n```shell\nnomad job run example.nomad\n```\n\n### Validate the allocations' hostnames\nOnce you have been returned to a shell prompt, running `view.sh` shows output\nlike the following. The Allocation IDs, Node Names, and Host Names will vary\nfrom the output here, but you should be able to note that the Docker host name\nmatches the Nomad Client's Node Name.\n\n```shell\n$ ./view.sh\nAllocation ID                         Node Name (Nomad)           Host Name (Docker)\n0053d552-f461-519e-2b26-13f5e8b67524  nomad-client-3.node.consul  nomad-client-3.node.consul\n5767a2a6-38a4-2330-d692-9badc5840edb  nomad-client-1.node.consul  nomad-client-1.node.consul\n59dc75cd-5acf-e21d-7d5f-befed3dfa336  nomad-client-1.node.consul  nomad-client-1.node.consul\n```\n"
  },
  {
    "path": "docker/docker_dynamic_hostname/finished.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    count = 3\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"redis:7\"\n        ports          = [\"db\"]\n        auth_soft_fail = true\n        hostname       = \"${attr.unique.hostname}\"\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_dynamic_hostname/res_file",
    "content": "Allocation ID\\tNode Name (Nomad)\\tHostname (Docker)\nnomad-client-3.node.consul\\tnomad-client-3.node.consul\\t\nnomad-client-1.node.consul\\tnomad-client-1.node.consul\\t\nnomad-client-1.node.consul\\tnomad-client-1.node.consul\\t\nnomad-client-3.node.consul\\tnomad-client-3.node.consul\\t\n"
  },
  {
    "path": "docker/docker_dynamic_hostname/view.sh",
    "content": "#!/usr/bin/env bash\n\nfunction getJobAllocIds {\n  nomad alloc status -t '{{range $A := . }}{{if eq \"example\" .JobID}}{{printf \"%s%s%s\\n\" .ID \"|\" .NodeName }}{{end}}{{end}}'\n}\n\n\nres_file=$(mktemp)\nprintf \"Allocation ID\\tNode Name (Nomad)\\tHostname (Docker)\\n\" > \"$res_file\"\n\nfor ALLOC_INFO in $(getJobAllocIds example)\ndo\nNODENAME=${ALLOC_INFO##*|}\nALLOC_ID=${ALLOC_INFO%%|*}\nDOCKERNAME=$(nomad alloc exec ${ALLOC_ID} cat /etc/hostname)\nprintf \"%s\\t%s\\t%s\\n\" $ALLOC_ID $NODENAME $DOCKERNAME >> \"$res_file\"\ndone \n\ncolumn -t -s\"$(printf \"\\t\")\" $res_file\nrm -rf \"$res_file\"\n"
  },
  {
    "path": "docker/docker_entrypoint/Dockerfile",
    "content": "FROM alpine  \nENTRYPOINT [\"ping\"]  \nCMD [\"www.google.com\"]  \n\n"
  },
  {
    "path": "docker/docker_entrypoint/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"3m\"\n    auto_revert      = false\n    canary           = 0\n  }\n\n  migrate {\n    max_parallel     = 1\n    health_check     = \"checks\"\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"5m\"\n  }\n\n  group \"cache\" {\n    ephemeral_disk {\n      size = 300\n    }\n\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    service {\n      name = \"redis-cache\"\n      tags = [\"global\", \"cache\"]\n      port = \"db\"\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"redis:7\"\n        ports          = [\"db\"]\n        auth_soft_fail = true\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
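  {
    "path": "docker/docker_entrypoint/override.nomad.example",
    "content": "# Hypothetical sketch (not part of the original folder) showing the Docker\n# driver's entrypoint/args overrides against the image built from this\n# folder's Dockerfile. The image name is an assumption: build and tag the\n# Dockerfile yourself and update it before running.\njob \"entrypoint-override\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"g1\" {\n    task \"ping\" {\n      driver = \"docker\"\n\n      config {\n        image = \"YOUR_REGISTRY/pinger:latest\"\n\n        # replace the baked-in ENTRYPOINT [\"ping\"] and CMD [\"www.google.com\"]\n        entrypoint = [\"ping\", \"-c\", \"3\"]\n        args       = [\"www.example.com\"]\n      }\n    }\n  }\n}\n"
  },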
  {
    "path": "docker/docker_image_not_found/README.md",
    "content": "# Docker Image Not Found\n\nThis folder containse examples that demonstrate what happens when a requested Docker image can not be found.  \n\n* **restart.nomad** - contains a restart stanza that will cause this to restart infinitely on the same client\n* **reschedule.nomad** - will utilize the defaults and reschedule onto other nodes in nomad 0.8+\n\n"
  },
  {
    "path": "docker/docker_image_not_found/reschedule.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  group \"group\" {\n    task \"broken\" {\n      driver = \"docker\"\n      config {\n        image = \"this_is_not_an_image:latest\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_image_not_found/restart.nomad",
    "content": "job \"restart\" {\n  datacenters = [\"dc1\"]\n  meta {\n    \"serial_num\" = \"2\"\n  }\n  group \"group\" {\n    restart {\n      attempts = 2\n      delay    = \"30s\"\n      interval = \"1m\"\n      mode     = \"delay\"\n    }\n    task \"broken\" {\n      driver = \"docker\"\n      config {\n        image = \"this_is_not_an_image:latest\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_interpolated_image_name/README.md",
    "content": "# Using interpolated Docker image versions\n\nPrerequisites:\n\n- Nomad\n  - Docker\n- Consul\n\nRough Notes:\n\n- The docker image path is interpolated\n- The Nomad `template` block can be used to create environment variables and has access to Consul values\n- You can use the `keyOrDefault` template function to fetch a value from Consul KV\n- You can set and update the value using the `consul kv put` command.\n- Depending on template `change_mode`, this might restart the job.\n- Image caching is at play, so immutable tags help this scenario\n\n```shell-session\nconsul kv put service/redis/version 3.2\n```\n"
  },
  {
    "path": "docker/docker_interpolated_image_name/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    service {\n      tags = [\"redis\", \"cache\"]\n      port = \"db\"\n\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n    task \"redis\" {\n      template {\n        data = <<EOH\nREDIS_VERSION=\"{{ keyOrDefault \"service/redis/version\" \"latest\" }}\"\nEOH\n\n        destination = \"secrets/file.env\"\n        env         = true\n      }\n\n      driver = \"docker\"\n\n      config {\n        image = \"redis:${REDIS_VERSION}\"\n        ports = [\"db\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_interpolated_image_name/hostname.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    service {\n      tags = [\"redis\", \"cache\"]\n      port = \"db\"\n\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n    task \"redis\" {\n      template {\n        destination = \"secrets/env\"\n        env         = true\n        data        = <<EOH\nC_HOSTNAME=\"foo-{{env \"NOMAD_ALLOC_ID\"}}-{{ env \"attr.unique.hostname\" }}\"\nEOH\n      }\n\n      driver = \"docker\"\n\n      config {\n        image    = \"redis:7\"\n        ports    = [\"db\"]\n\t      hostname = \"${C_HOSTNAME}\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_logging/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"redis:7\"\n        ports          = [\"db\"]\n        auth_soft_fail = true\n\n        logging {\n          type = \"journald\"\n          config {\n            tag = \"docker-example\"\n          }\n        }\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_mac_address/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        mac_address = \"A0:97:FA:13:93:03\"\n      }\n\n      resources {\n        cpu    = 100\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_network/example1.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        network_mode = \"myNet\"\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        network_mode = \"myNet\"\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_network/example2.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_nfs/README.md",
    "content": ""
  },
  {
    "path": "docker/docker_nfs/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        port_map { db = 6379 }\n        mounts = [\n          {\n            target = \"/mnt/nfs\"\n            source = \"myRedisNFS\"\n            volume_options {\n              no_copy = false\n              driver_config {\n                name = \"local\"\n                options = {\n                  type = \"nfs\"\n                  device = \":/nfs\"\n                  o = \"addr=10.0.2.41,vers=4\"\n                }\n              }\n            }\n          }\n        ]\n      }\n\n      resources { network { port \"db\" {} } }\n\n      service {\n        port = \"db\"\n        check {\n          name = \"alive\"\n          type = \"tcp\"\n          interval = \"10s\"\n          timeout = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_template/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  group \"cache\" {\n    task \"redis\" {\n      template {\n        data = <<EOH\nfile data here\nEOH\n        destination = \"local/config.yml\"\n        change_mode = \"noop\"\n      }\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n        port_map { db = 6379 }\n        mounts = [\n          {\n            type = \"bind\"\n            target = \"/root/config.yml\"\n            source = \"local/config.yml\"\n            readonly = false\n            volume_options {\n              no_copy = false\n            }\n          }\n        ]\n      }\n      resources { network { port \"db\" {} } }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_twice_in_alloc/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  group \"cache\" {\n    task \"redis1\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n        port_map { db = 6379 }\n      }\n      resources { network { port \"db\" {} } }\n      service {\n        name = \"redis-cache\"\n        tags = [\"global\", \"cache\"]\n        port = \"db\"\n        check { name=\"alive\" type=\"tcp\" interval=\"10s\" timeout=\"2s\" }\n      }\n    }\n    task \"redis2\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n        port_map { db = 6379 }\n      }\n      resources { network { port \"db\" {} } }\n      service {\n        name = \"redis-cache\"\n        tags = [\"global\", \"cache\"]\n        port = \"db\"\n        check { name=\"alive\" type=\"tcp\" interval=\"10s\" timeout=\"2s\" }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/docker_windows_abs_mount/Dockerfile",
    "content": "FROM microsoft/powershell:latest\nRUN mkdir c:\\bin\nRUN mkdir c:\\test\nCOPY SleepyEcho.ps1 c:\\\\bin\nCMD powershell.exe C:\\\\bin\\\\SleepyEcho.ps1"
  },
  {
    "path": "docker/docker_windows_abs_mount/README.md",
    "content": "## Docker Windows Absloute Path Bind Mount\n\n**Summary:** This job will attempt to create a bind mount using the source/destination form of bind mounting with the `mount` stanza.  The continer will start a powershell script that writes a debug log into the mounted folder.  This file is then accessible from the host.\n\n**Issue:** The Docker `-v` style mount option can not handle Windows absolute paths because of the ambiguity around the `:` as a seperator.\n\nThe included [Dockerfile](Dockerfile) was used to create `voiselle/sleepyecho:1.1`"
  },
  {
    "path": "docker/docker_windows_abs_mount/SleepyEcho.ps1",
    "content": "Function Write-Log{\nParam ($out)\n  Write $out\n  if (Test-Path -Path 'C:\\OutMount' -PathType Container) {\n    Add-Content -Path C:\\OutMount\\debug.txt -Value $out\n  }\n}\n\nif (-not (Test-Path env:SLEEP_SECS)) { $env:SLEEP_SECS = 2 }\n\nWrite-Log \"$(get-date) -- Starting SleepyEcho. Sleep interval is $env:SLEEP_SECS sec.\"\n\nwhile ($true) {\n  if (Test-Path env:EXTRAS) { $extras=\" EXTRAS: $env:EXTRAS\" } else {$extras=\"\"}\n  if (Test-Path env:VAULT_TOKEN) { $vt=\" VAULT_TOKEN: $env:VAULT_TOKEN\" } else {$vt=\"\"}\n  Write-Log \"$(get-date) -- Alive... going back to sleep for $env:SLEEP_SECS seconds.$vt$extras\"\n  start-sleep $env:SLEEP_SECS\n}"
  },
  {
    "path": "docker/docker_windows_abs_mount/repro.nomad",
    "content": "job \"test\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  group \"testgroup\" {\n    count = 1\n    task \"testtask\" {\n      driver = \"docker\"\n      config {\n        mounts = [\n          {\n            type = \"bind\"\n            target = \"C:\\\\OutMount\"\n            source = \"C:\\\\Users\\\\Administrator\\\\Desktop\\\\output\"\n            readonly = false\n          }\n        ]\n        image = \"voiselle/sleepyecho:1.1\"\n      }\n    }\n  }\n}"
  },
  {
    "path": "docker/env_var_args/Dockerfile",
    "content": "FROM alpine\nRUN mkdir /scripts\nCOPY *.sh /scripts\nRUN chmod +x /scripts/*.sh\nENTRYPOINT [\"/scripts/entrypoint.sh\"]\nCMD [\"/scripts/cmd.sh\",\"$VAR1\",\"$VAR2\"]\n"
  },
  {
    "path": "docker/env_var_args/README.md",
    "content": "# Using environment variables as arguments\n\nThis example sets environment variables in a container's\nENTRYPOINT. It then runs a command that consumes them.\n\nThe Dockerfile in the project can be used to create an\n`alpine` based container with simple shell scripts to\ntest the case.\n\nThe `start.nomad` file demonstrates the basic behavior and\ncan be used to prove that your container image path is\nset correctly and that the scenario is built properly.\n\nYou will need to change the image paths in the job files\nto match _**your specific image path**_ in both `start.nomad` and `test.nomad`.\n\nRun the start job to validate the basics.\n\n```text\n$ nomad job run start.nomad\n==> 2022-11-22T17:57:01-05:00: Monitoring evaluation \"86382659\"\n    2022-11-22T17:57:01-05:00: Evaluation triggered by job \"example\"\n    2022-11-22T17:57:02-05:00: Allocation \"4691273a\" created: node \"d18649d1\", group \"g1\"\n    2022-11-22T17:57:02-05:00: Evaluation status changed: \"pending\" -> \"complete\"\n==> 2022-11-22T17:57:02-05:00: Evaluation \"86382659\" finished with status \"complete\"\n```\n\nNote from the output that the created allocation's ID starts with 469. Your allocation ID will vary. 
Use that with the `nomad alloc logs` command to get the output from the latest run.\n\n```text\n$ nomad alloc logs 469\nVAR1=foo\nVAR2=bar\n```\n\nThe `test.nomad` file shows overriding the command with\nan alternative command inside the container and passing\nenvironment variables that are set in the ENTRYPOINT.\n\nThe job sets both values to `$VAR2` to show that\nthey are still being read from the environment.\n\n```text\n$ nomad job run test.nomad\n==> 2022-11-22T17:57:19-05:00: Monitoring evaluation \"c0a0a83f\"\n    2022-11-22T17:57:19-05:00: Evaluation triggered by job \"example\"\n    2022-11-22T17:57:20-05:00: Allocation \"63800968\" created: node \"d18649d1\", group \"g1\"\n    2022-11-22T17:57:20-05:00: Evaluation status changed: \"pending\" -> \"complete\"\n==> 2022-11-22T17:57:20-05:00: Evaluation \"c0a0a83f\" finished with status \"complete\"\n```\n\nNote from the output that the created allocation's ID starts with 638. Your allocation ID will vary. Use that with the `nomad alloc logs` command to get the output from the latest run.\n\n```text\n$ nomad alloc logs 638\nIt's the alternate version! 🎉\nVAR1=bar\nVAR2=bar\n```\n"
  },
  {
    "path": "docker/env_var_args/cmd.sh",
    "content": "#!/bin/sh\n\n# This is the original workload for the container\n# it's going to echo out the values set in the\n# entrypoint\n\necho VAR1=$1\necho VAR2=$2\n"
  },
  {
    "path": "docker/env_var_args/cmd_alt.sh",
    "content": "#!/bin/sh\n\n# This is the original workload for the container\n# it's going to echo out the values set in the\n# entrypoint\necho \"It's the alternate version! 🎉\"\necho VAR1=$1\necho VAR2=$2\n"
  },
  {
    "path": "docker/env_var_args/entrypoint.sh",
    "content": "#!/bin/sh\n# The entrypoint is used to set some values that the\n# command will use\nexport VAR1=\"foo\"\nexport VAR2=\"bar\"\n\neval $@"
  },
  {
    "path": "docker/env_var_args/start.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  meta {\n    run_uuid = \"${uuidv4()}\"\n  }\n\n  group \"g1\" {\n    task \"docker\" {\n      driver = \"docker\"\n\n      config {\n        image = \"registry.service.consul:5000/envfun:latest\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/env_var_args/test.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  meta {\n    run_uuid = \"${uuidv4()}\"\n  }\n\n  group \"g1\" {\n    task \"docker\" {\n      driver = \"docker\"\n\n      config {\n        image = \"registry.service.consul:5000/envfun:latest\"\n        command = \"/scripts/cmd_alt.sh\"\n        args = [\"$VAR2\", \"$VAR2\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/get_fact_from_consul/README.md",
    "content": "## get_fact_from_consul\n\nThese demonstration jobs use Consul templates to fetch values for substitution in\nDocker jobs. These values can be used as interpolated values at workload runtime\nand are seen as concrete values in `docker inspect`. However, they are also\navailable to the workload itself.\n\n- **image.nomad** - uses an enviroment variable that is made concrete during\n  container startup. However, they are available to the workload as well.\n\n- **args.nomad** - uses the `template` stanza to build environment variables\n  and provides them to the job via the `args` list. These are handled by the\n  starting workload.\n\n\n## image.nomad\n\nrequires a consul key named `test/redis/docker-tag`\n\n```shell-session\n$ consul kv put test/redis/docker-tag \"4.0\"\n```\n\n- Run the job. Find the client node that it's running on. SSH there.\n- Run `docker ps` to find the workload; note that it's running the version from the label.\n\n\n## args.nomad\n\nrequires a consul key named `test/echo/content`\n\n```shell-session\n$ consul kv put test/echo/content \"hello world!\"\n```\n\n- Run the job. Find the client node that it's running on. SSH there.\n- Run `docker ps` to find the workload\n- Run `docker inspect` on the running container.\n- Look for `\"Cmd\"` and note that the environment variables have been expanded\nto their concrete values.\n\n"
  },
  {
    "path": "docker/get_fact_from_consul/args.nomad",
    "content": "job \"args.nomad\" {\n  datacenters = [\"dc1\"]\n  group \"g1\" {\n    network { \n      port \"http\" {}\n    }\n\n    task \"echo\" {\n      template {\n        destination = \"secrets/local.env\"\n        env = true\n        data =<<EOT\nMY_VAR={{ key \"test/echo/content\"|toJSON }}\nEOT\n      }\n      driver = \"docker\"\n      config {\n        image = \"hashicorp/http-echo:latest\"\n        ports = [\"http\"]\n        args = [\n          \"-listen=:${NOMAD_PORT_http}\",\n          \"-text=\\\"${MY_VAR}\\\"\",\n        ]\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "docker/get_fact_from_consul/image.nomad",
    "content": "job \"image.nomad\" {\n  datacenters = [\"dc1\"]\n  group \"g1\" {\n    network { \n      port \"db\" {}\n    }\n\n    task \"redis\" {\n      template {\n        destination = \"secrets/local.env\"\n        env = true\n        data =<<EOT\nVERSION_TAG={{ key \"test/redis/docker-tag\"|toJSON }}\nEOT\n      }\n      driver = \"docker\"\n      config {\n        image = \"redis:${VERSION_TAG}\"\n        ports = [\"db\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/host-volumes-and-users/README.md",
    "content": "# Use `user` with Docker and a Nomad Host Volume\n\nIt is possible to use users with Nomad's Docker task driver. This can be coupled with Nomad host volumes\nto provide more complex file access permissions to your workloads and to share files across them.\n\n\n## Requirements\n\n- a client with a host volume named `scratch`\n\n- in the backing directory\n\n  - create a directory named `2001`\n  - change owner to `2001`\n  - change permissions to `700`\n\n  - create a directory named `2002`\n  - change owner to `2002`\n  - change permissions to `700`\n\n  - create a directory named `world`\n  - change permissions to `777`\n\n\n## The scenario\n\n### Run the job\n\n```\nnomad job run scratch.nomad\n```\n\n### Make an ALLOC_ID environment variable\n\n```\nexport ALLOC_ID=«allocation id from the output above»\n```\n\n### Connect to the job\n\n```\nnomad alloc exec -task=2001 ${ALLOC_ID} /bin/sh\n```\n\n\n"
  },
  {
    "path": "docker/host-volumes-and-users/scratch.nomad",
    "content": "job \"scratch\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"group\" {\n    volume \"scratch\" {\n      type      = \"host\"\n      source    = \"scratch\"\n      read_only = false\n    }\n\n    count = 1\n\n    task \"2001\" {\n      driver = \"docker\"\n      user = \"2001\"\n      group = \"12001\"\n\n      config {\n        image   = \"alpine:latest\"\n        command = \"/bin/sh\"\n        args    = [\"-c\", \"while true; do sleep 500; done\"]\n      }\n\n      volume_mount {\n        volume      = \"scratch\"\n        destination = \"/scratch\"\n      }\n    }\n\n    task \"2002\" {\n      driver = \"docker\"\n      user = \"2002\"\n      group = \"12001\"\n\n      config {\n        image   = \"alpine:latest\"\n        command = \"/bin/sh\"\n        args    = [\"-c\", \"while true; do sleep 500; done\"]\n      }\n\n      volume_mount {\n        volume      = \"scratch\"\n        destination = \"/scratch\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/labels/README.md",
    "content": "# Docker Label examples\n\nThese are focused on complex label values, like the Datadog labels.\n\n \n- **literal.nomad**\n- **heredoc.nomad**\n- **interpolation.nomad**\n\n\n"
  },
  {
    "path": "docker/labels/heredoc.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n\n        port_map {\n          db = 6379\n        }\n\tlabels {\n          com.datadoghq.ad.logs = <<EOF\n            [{\n              \"source\": \"atlas\",\n              \"service\": \"atlas\",\n              \"log_processing_rules\": [{\n                \"type\": \"exclude_at_match\",\n                \"name\": \"archivist_sensitive_urls\",\n                \"pattern\": \"Archivist upload completion callback received\"\n              }]\n            }]\nEOF\n\t}\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n\n        network {\n          mbits = 10\n          port \"db\" {}\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/labels/interpolation.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    task \"redis\" {\n      template {\n        destination = \"secrets/log.env\"\n        env = true\n        data = <<EOF\nDATADOG.LOG={{`\n            [{\n              \"source\": \"atlas\",\n              \"service\": \"atlas\",\n              \"log_processing_rules\": [{\n                \"type\": \"exclude_at_match\",\n                \"name\": \"archivist_sensitive_urls\",\n                \"pattern\": \"Archivist upload completion callback received\"\n              }]\n            }]\n` | parseJSON | toJSON | toJSON }}\nEOF\n      }\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n\n        port_map {\n          db = 6379\n        }\n\tlabels {\n          com.datadoghq.ad.logs = \"${DATADOG.LOG}\"\n\t}\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n\n        network {\n          mbits = 10\n          port \"db\" {}\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/labels/literal.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        labels {\n          \"com.datadoghq.ad.logs\" =\"[{\\\"source\\\": \\\"nginx\\\", \\\"service\\\": \\\"webapp\\\"}]\"\n        }\n        port_map {\n          db = 6379\n        }\n      }\n\n      resources {\n        network {\n          port \"db\" {}\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "docker/mount_alloc/README.md",
    "content": "# Mounting the alloc folder to arbitrary paths with the Docker task Driver\n\nThis example demonstrates mounting the alloc folder to an\nadditional path inside the container.\n"
  },
  {
    "path": "docker/mount_alloc/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port db {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"redis:7\"\n        volumes = [\"../alloc:/allocation\"]\n        ports   = [\"db\"]\n      }\n\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "drain/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "dummy/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"alloc\" {\n    count = 3\n    task \"alpine\" {\n      driver = \"docker\"\n\n      config {\n        image = \"alpine:latest\"\n        command = \"sh\"\n        args = [\"-c\", \"while true; do sleep 300; done\"]\n\n      }\n\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "echo_stack/README.md",
    "content": ""
  },
  {
    "path": "echo_stack/fabio-system.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type        = \"system\"\n\n  update {\n    stagger = \"5s\"\n    max_parallel = 1\n  }\n\n  group \"linux-amd64\" {\n    network {\n      port \"http\" {\n        static = 9999\n        to     = 9999\n      }\n\n      port \"ui\" {\n        static = 9998\n        to     = 9998\n      }\n    }\n\n    constraint {\n      attribute = \"${attr.cpu.arch}\"\n      operator  = \"=\"\n      value     = \"amd64\"\n    }\n\n    constraint {\n      attribute = \"${attr.kernel.name}\"\n      operator  = \"=\"\n      value     = \"linux\"\n    }\n\n    service {\n      tags = [\"fabio\", \"lb\"]\n      port = \"ui\"\n      check {\n        name     = \"fabio ui port is alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n      check {\n        name     = \"fabio health check\"\n        type     = \"http\"\n        path     = \"/health\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"fabio\" {\n      driver = \"exec\"\n\n      config {\n        command = \"fabio-1.5.2-go1.8.3-linux_amd64\"\n      }\n\n      artifact {\n        source = \"https://github.com/fabiolb/fabio/releases/download/v1.5.2/fabio-1.5.2-go1.8.3-linux_amd64\"\n#        options {\n#          checksum = \"sha256:7dc786c3dfd8c770d20e524629d0d7cd2cf8bb84a1bf98605405800b28705198\"\n#        }\n      }\n\n      resources {\n        cpu    = 200\n        memory = 32\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "echo_stack/login-service.nomad",
    "content": "job \"login-service\" {\n  datacenters = [\"dc1\"]\n\n  group \"application\" {\n    count = 3\n\n    network {\n      port \"http\" {}\n    }\n\n    service {\n      name = \"login-service\"\n      tags = [\"urlprefix-/login\"]\n      port = \"http\"\n\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to my application.</h1><hr />You are on ${NOMAD_IP_http} and will be redirected to your profile.<script language=\\\"javascript\\\">window.location.replace(\\\"/profile\\\");</script></body></html>\",\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n\n      resources {\n        memory = 10\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "echo_stack/profile-service.nomad",
    "content": "job \"profile-service\" {\n  datacenters = [\"dc1\"]\n\n  group \"application\" {\n    count = 3\n\n    network {\n      port \"http\" {}\n    }\n\n    service {\n      name = \"profile-service\"\n      tags = [\"urlprefix-/profile\"]\n      port = \"http\"\n\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>User Profile</h1><hr />This might be a profile page in a while<br />You are on instance ${NOMAD_ALLOC_INDEX} on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n\n      resources {\n        memory = 10\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "env/escaped_env_vars/Dockerfile",
    "content": "FROM busybox\nCOPY ./entrypoint.sh /bin/entrypoint.sh\nRUN chmod +x /bin/entrypoint.sh\nENTRYPOINT [\"entrypoint.sh\"]"
  },
  {
    "path": "env/escaped_env_vars/README.md",
    "content": "# Escaped Environment Variables\n\nSuppose you have a Docker job that sets environment variables\nin the entrypoint and you would like to refer to them as\narguments in the subsequent command's arguments.\n\nThis sample will use an exec job to demonstrate how this\nwould be accomplished in a Nomad job\n"
  },
  {
    "path": "env/escaped_env_vars/entrypoint.sh",
    "content": "#!/bin/sh\n\nexport entryVar=\"Entrypoint Variable\"\n\necho \"${1}\""
  },
  {
    "path": "env/escaped_env_vars/example.nomad",
    "content": "variable \"dcs\" {\n  type        = list(string)\n  description = \"Datecenters to run job in.\"\n  default     = [\"dc1\"]\n}\n\njob \"example\" {\n  datacenters = var.dcs\n  type        = \"batch\"\n\n  group \"group\" {\n    task \"escaped\" {\n      driver = \"exec\"\n\n      config {\n        command = \"run.sh\"\n        args = [\n          \"\\\\$$var1\"\n        ]\n      }\n\n      env = {\n        var1 = \"Some value\"\n      }\n\n      template {\n        destination = \"run.sh\"\n        data      = <<EOT\n#!/usr/bin/env bash\necho \"$1 = (eval $1)\"\nEOT\n      }\n    }\n  }\n}"
  },
  {
    "path": "environment/README.md",
    "content": "# Nomad Environment Variable visualization\n\nThis job will start several tasks and write out their environment to the logs.\nIt's instructive to demonstrate how the environment variables are named between\ntasks within a group.\n"
  },
  {
    "path": "environment/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"group1\" {\n    network {\n      port \"dynamic_port1\" {}\n\n      port \"static_port1\" {\n        static = 9999\n      }\n\n      port \"static_port2\" {\n        static = 9998\n      }\n    }\n\n    task \"task1\" {\n      driver = \"exec\"\n\n      config {\n        command = \"env\"\n      }\n    }\n\n    task \"task2\" {\n      driver = \"exec\"\n\n      config {\n        command = \"env\"\n      }\n    }\n\n  }\n  group \"group2\" {\n\n    task \"task1\" {\n      driver = \"exec\"\n\n      config {\n        command = \"env\"\n      }\n    }\n\n    task \"task2\" {\n      driver = \"exec\"\n\n      config {\n        command = \"env\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "exec/host-volumes-and-users/README.md",
    "content": ""
  },
  {
    "path": "exec/host-volumes-and-users/scratch.nomad",
    "content": "job \"scratch\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"group\" {\n    volume \"scratch\" {\n      type      = \"host\"\n      source    = \"scratch\"\n      read_only = false\n    }\n\n    count = 1\n\n    task \"nobody\" {\n      volume_mount {\n        volume      = \"scratch\"\n        destination = \"/scratch\"\n      }\n\n      driver = \"exec\"\n\n      config {\n        command = \"bash\"\n        args    = [\"-c\", \"while true; do sleep 500; done\"]\n      }\n    }\n\n    task \"user1\" {\n      volume_mount {\n        volume      = \"scratch\"\n        destination = \"/scratch\"\n      }\n\n      driver = \"exec\"\n\n      config {\n        command = \"bash\"\n        args    = [\"-c\", \"while true; do sleep 500; done\"]\n      }\n\n      user = \"user1\"\n    }\n  }\n}\n"
  },
  {
    "path": "exec-zip/README.md",
    "content": "## exec-zip\n\nThis sample creates an exec job that downloads a tarball artifact with directory permissions set.  This should enable the jailed user to write into the writable file, but not the non-writable one.\n\n\n```\n[root@nomad-server-1 exec-zip]# tar -tvf folder.tgz \ndrwxr-xr-x root/root         0 2018-04-05 15:05 ./folders/\ndrwxrwxrwx root/root         0 2018-04-05 15:14 ./folders/writable/\n-rwxrwxrwx root/root         0 2018-04-05 15:14 ./folders/writable/file1.txt\ndrwxr-xr-x root/root         0 2018-04-05 15:14 ./folders/not-writable/\n-rw-r--r-- root/root         0 2018-04-05 15:14 ./folders/not-writable/file1.txt\n-rw-r--r-- root/root         0 2018-04-05 15:14 ./folders/not-writable/file2.txt\n```\n\n```\n[root@nomad-client-3 folder]# ls -alR folders\nfolders:\ntotal 0\ndrwxr-xr-x. 4 root root 42 Apr  6 12:27 .\ndrwxr-xr-x. 3 root root 21 Apr  6 12:27 ..\ndrwxr-xr-x. 2 root root 40 Apr  6 12:27 not-writable\ndrwxr-xr-x. 2 root root 23 Apr  6 12:27 writable\n\nfolders/not-writable:\ntotal 0\ndrwxr-xr-x. 2 root root 40 Apr  6 12:27 .\ndrwxr-xr-x. 4 root root 42 Apr  6 12:27 ..\n-rw-r--r--. 1 root root  0 Apr  6 12:27 file1.txt\n-rw-r--r--. 1 root root  0 Apr  6 12:27 file2.txt\n\nfolders/writable:\ntotal 0\ndrwxr-xr-x. 2 root root 23 Apr  6 12:27 .\ndrwxr-xr-x. 4 root root 42 Apr  6 12:27 ..\n-rwxrwxrwx. 1 root root  0 Apr  6 12:27 file1.txt\n[root@nomad-client-3 folder]# \n```\n\n"
  },
  {
    "path": "exec-zip/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  group \"cache\" {\n    count = 1\n    task \"redis\" {\n      artifact {\n\tsource = \"https://angrycub-hc.s3.amazonaws.com/public/folder.tgz\"\n        destination = \"local/folder\"\n      }\n      driver = \"exec\"\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"echo Starting up and sleeping...;sleep 1000\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "fabio/README.md",
    "content": "# Fabio Jobs\n\n>Fabio is an HTTP and TCP reverse proxy that configures itself with data from\nConsul.\n>\n>Traditional load balancers and reverse proxies need to be configured with a\nconfig file. The configuration contains the hostnames and paths the proxy is\nforwarding to upstream services. This process can be automated with tools like\nconsul-template that generate config files and trigger a reload.\n>\n>Fabio works differently since it updates its routing table directly from the\ndata stored in Consul as soon as there is a change and without restart or\nreloading.\n\nMore information about Fabio can be found at the project's website: <https://fabiolb.net/>\n\n## The job specifications\n\n- `fabio-docker.nomad` - A Nomad system job that uses the Docker task driver to\n  run the `latest` tag of the container. This configuration simplifies locating\n  a fabio instance from an external loadbalancer like an ELB. Simplest way to\n  get started with Fabio.\n\n- `fabio-system.nomad` - A Nomad system job that uses the exec task driver to\n  run instances of the Fabio 1.5.15 linux/amd64 binary on all the linux/amd64\n  clients in your cluster. This configuration simplifies locating a fabio\n  instance from an external loadbalancer like an ELB.\n\n- `fabio-service.nomad` - A Nomad service job that uses the exec task driver to\n  run three instances of the Fabio 1.5.15 linux/amd64 binary. This configuration\n  requires a load balancer capable of inspecting Consul or testing the Fabio\n  ports over all of the clients to identify where the Fabio instances landed.\n\n"
  },
  {
    "path": "fabio/fabio-docker.nomad",
    "content": "job \"fabio\" {\n  datacenters = [\"dc1\"]\n  type        = \"system\"\n\n  update {\n    stagger      = \"5s\"\n    max_parallel = 1\n  }\n\n  group \"linux-amd64\" {\n    network {\n      port \"http\" {\n        static = 9999\n      }\n\n      port \"ui\" {\n        static = 9998\n      }\n    }\n\n    service {\n      tags = [\"fabio\", \"lb\"]\n      port = \"ui\"\n\n      check {\n        name     = \"fabio ui port is alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n\n      check {\n        name     = \"fabio health check\"\n        type     = \"http\"\n        path     = \"/health\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"fabio\" {\n      constraint {\n        attribute = \"${attr.cpu.arch}\"\n        operator  = \"=\"\n        value     = \"amd64\"\n      }\n\n      constraint {\n        attribute = \"${attr.kernel.name}\"\n        operator  = \"=\"\n        value     = \"linux\"\n      }\n\n      env {\n       ## Add if your consul agent is not listening on 127.0.0.1:8500 \n       # registry_consul_addr = \"${attr.unique.network.ip-address}:8500\"\n\n       ## Add if your Consul cluster is ACL-enabled.\n       # registry_consul_token = \"«add if you have a consul enabled cluster»\"\n      }\n\n      driver = \"docker\"\n\n      config { \n        image = \"fabiolb/fabio:latest\"\n        network_mode = \"host\"\n        ports = [\"proxy\",\"ui\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 150 \n      }\n    }\n  }\n}\n"
  },
  {
    "path": "fabio/fabio-service.nomad",
    "content": "job \"fabio\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  update {\n    stagger      = \"5s\"\n    max_parallel = 1\n  }\n\n  group \"linux-amd64\" {\n    count = 3\n\n    network {\n      port \"http\" {\n        static = 9999\n      }\n\n      port \"ui\" {\n        static = 9998\n      }\n    }\n\n    service {\n      tags = [\"fabio\", \"lb\"]\n      port = \"ui\"\n\n      check {\n        name     = \"fabio ui port is alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n\n      check {\n        name     = \"fabio health check\"\n        type     = \"http\"\n        path     = \"/health\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"fabio\" {\n      constraint {\n        attribute = \"${attr.cpu.arch}\"\n        operator  = \"=\"\n        value     = \"amd64\"\n      }\n\n      constraint {\n        attribute = \"${attr.kernel.name}\"\n        operator  = \"=\"\n        value     = \"linux\"\n      }\n\n      env {\n        registry_consul_addr = \"${attr.unique.network.ip-address}:8500\"\n       # registry_consul_token = \"«add if you have a consul enabled cluster»\"\n      }\n\n      driver = \"exec\"\n\n      config { \n        command = \"fabio-1.5.15-go1.15.5-linux_amd64\"\n      }\n\n      artifact {\n        source = \"https://github.com/fabiolb/fabio/releases/download/v1.5.15/fabio-1.5.15-go1.15.5-linux_amd64\"\n\n        options {\n          checksum = \"sha256:14c7a02ca95fb00a4f3010eab4e3c0e354a3f4953d2a793cb800332012f42066\"\n        }\n      }\n\n      resources {\n        cpu    = 200\n        memory = 150 \n      }\n    }\n  }\n}\n"
  },
  {
    "path": "fabio/fabio-system.nomad",
    "content": "job \"fabio\" {\n  datacenters = [\"dc1\"]\n  type        = \"system\"\n\n  update {\n    stagger      = \"5s\"\n    max_parallel = 1\n  }\n\n  group \"linux-amd64\" {\n    network {\n      port \"http\" {\n        static = 9999\n      }\n\n      port \"ui\" {\n        static = 9998\n      }\n    }\n\n    service {\n      tags = [\"fabio\", \"lb\"]\n      port = \"ui\"\n\n      check {\n        name     = \"fabio ui port is alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n\n      check {\n        name     = \"fabio health check\"\n        type     = \"http\"\n        path     = \"/health\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"fabio\" {\n      constraint {\n        attribute = \"${attr.cpu.arch}\"\n        operator  = \"=\"\n        value     = \"amd64\"\n      }\n\n      constraint {\n        attribute = \"${attr.kernel.name}\"\n        operator  = \"=\"\n        value     = \"linux\"\n      }\n\n      env {\n        registry_consul_addr = \"${attr.unique.network.ip-address}:8500\"\n       # registry_consul_token = \"«add if you have a consul enabled cluster»\"\n      }\n\n      driver = \"exec\"\n\n      config { \n        command = \"fabio-1.5.15-go1.15.5-linux_amd64\"\n      }\n\n      artifact {\n        source = \"https://github.com/fabiolb/fabio/releases/download/v1.5.15/fabio-1.5.15-go1.15.5-linux_amd64\"\n\n        options {\n          checksum = \"sha256:14c7a02ca95fb00a4f3010eab4e3c0e354a3f4953d2a793cb800332012f42066\"\n        }\n      }\n\n      resources {\n        cpu    = 200\n        memory = 150 \n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "fabio-ssl/fabio-ssl.nomad",
    "content": "job \"fabio-stg\" {\n  datacenters = [\"dc1\"]\n  type        = \"system\"\n\n  group \"fabio\" {\n    network {\n      port \"http\" {\n        static = 80\n      }\n      port \"https\" {\n        static = 443\n      }\n      port \"ui\" {\n        static = 9998\n      }\n      port \"lb\" {\n        static = 9999\n      }\n    }\n\n    service {\n      name = \"fabio-lb\"\n      tags = [\"fabio\"]\n      port = \"http\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"http\"\n        path     = \"/\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    service {\n      name = \"fabio-lb-tls\"\n      tags = [\"fabio\"]\n      port = \"https\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"https\"\n        path     = \"/\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    service {\n      name = \"fabio-ui\"\n      tags = [\"fabio\"]\n      port = \"ui\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"ui\"\n        path     = \"/\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"fabio\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"fabiolb/fabio\"\n        volumes = [\"/etc/fabio:/etc/fabio\"]\n        ports   = [\"http\", \"https\", \"ui\", \"lb\"]\n      }\n\n      resources {\n        cpu    = 1000\n        memory = 70\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "failing_jobs/README.md",
    "content": "Failing Jobs\n\nThis directory contains jobs that will fail by design.  They are useful for viewing the log events and behaviors when these are encountered in the wild\n"
  },
  {
    "path": "failing_jobs/failing_sidecar/README.md",
    "content": "Failing Sidecar\n\nThis task is designed to demonstrate the behavior of a TaskGroup when a Task within it fails to start.\n\n"
  },
  {
    "path": "failing_jobs/failing_sidecar/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n        port_map { db = 6379 }\n      }\n      resources { network { port \"db\" {} } }\n      service {\n        name = \"redis\"\n        tags = [\"cache\"]\n        port = \"db\"\n        check {\n          name     = \"alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n\n    task \"faily-mcfailface\" {\n      driver = \"exec\"\n      config {\n        command = \"/bin/bash\"\n        args = [\"-c\", \"echo \\\"I don't feel so good....\\\"; sleep 5; echo \\\"see... I told you I was sick...\\\"; exit 1\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "failing_jobs/impossible_constratint/README.md",
    "content": "# Impossible Constraint\n\nThis job demonstrates what happens when someone sets multiple node name constraints within a Nomad group (the smallest single placement)\n\n"
  },
  {
    "path": "failing_jobs/impossible_constratint/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  group \"cache\" {\n    count = 1\n    task \"redis1\" {\n      constraint {\n        attribute = \"${attr.unique.hostname}\"\n        value     = \"nomad-client-1.example.com\"\n      }\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n      }\n    }\n    task \"redis2\" {\n      constraint {\n        attribute = \"${attr.unique.hostname}\"\n        value     = \"nomad-client-2.example.com\"\n      }\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "giant/example.nomad",
    "content": "job \"giant\" {\n  datacenters = [\"dc1\"]\n\n  group \"mysql\" {\n    volume \"mysql\" { type=\"host\"  source = \"mysql\"  }\n    ephemeral_disk {\n      migrate = false\n      size    = \"2000\"\n      sticky  = true\n    }\n    task \"ls\" {\n      driver = \"exec\"\n      volume_mount { volume=\"mysql\" destination=\"/var/lib/mysql\" }\n      config {\n\tcommand=\"bash\"\n        args=[\"-c\", \"while true; do ls /var/lib/mysql; sleep 60; done\"]\n      }\n\n      resources {\n        cpu=100 memory=128  \n      }\n    }\n  }\n}\n"
  },
  {
    "path": "guide/TUTORIAL_TEMPLATE.mdx",
    "content": "---\nname: <title>\nproducts_used:\n  <products used in this tutorial; list primary product first and do not capitalize>\n  - vault\n  - terraform\n  - consul\n  - nomad\ndescription: |-\n  Short description about what the reader will do/learn. Limit 250 characters; include keyword for SEO.\nredirects: <former URL(s) to be redirected, e.g., tutorials/terraform/intro-gcp>\ndefault_collection_context: <slug of primary collection, e.g., consul/datacenter-deploy>\nkatacoda_scenario_id: <katacoda_scenario_id> <-- if there's no scenario to embed, remove this entry\nvideo_id: <video_id>  <-- if there's no video, remove this entry\nvideo_host: `youtube` or `wistia` <-- if there's no video, remove this entry\n---\n\nIntroduction goes here... (e.g. what you'll learn in this tutorial)\n\n## Challenge\n\n> **OPTIONAL:** If this is covered in the introduction, you don't need to have this\n> explicit header.\n\nState the common business challenge. Often times, you can get this information\nin the **Background** section of the RFC written by the PM. If not, reach out to\nthe PM and ask for an example (customer story).\n\n## Solution\n\n> **OPTIONAL:** If this is covered in the introduction, you don't need to have this\n> explicit header.\n\nHow the product solves this challenge. 
This is where you explain why the reader should\nread this tutorial.\n\n## Personas\n\n_If applicable_\n\n> **OPTIONAL:** If this is covered in the introduction, you don't need to have this\n> explicit section.\n\nIf the guided steps involve multiple roles, describe them here.\n\nExample:\n\nThe end-to-end scenario described in this tutorial involves two personas:\n\n- `admin` with privileged permissions to write secrets\n- `apps` reads the secrets from Vault (client)\n\n## Prerequisites\n\nExample:\n\n- Vault **version 1.2.0** or later\n- [Kubernetes command-line interface (CLI)](https://kubernetes.io/docs/tasks/tools/install-kubectl/)\n- [Minikube](https://minikube.sigs.k8s.io)\n\n> If there is a corresponding Katacoda scenario, be sure to add [`<KatacodaToggleButton />`](https://github.com/hashicorp/learn/blob/master/components/katacoda-embed/README.md) to show the \"Show Terminal\" button.\n\n**NOTE:** An interactive tutorial is also available if you do not have an environment to perform the steps described in this tutorial. 
Click the **Show Terminal** button to start.\n\n<KatacodaToggleButton />\n\n## Action title 1\n\nShort description of this step.\n\nIf applicable, demonstrate the steps using CLI, API, and/or UI.\n\nLeverage the [tabs component](https://github.com/hashicorp/learn/blob/master/components/tabs/README.md) to organize the content.\n\n<Tabs>\n<Tab heading=\"CLI command\">\n\n\nStep-by-step instructions here.\n\n</Tab>\n<Tab heading=\"API call using cURL\">\n\n\nStep-by-step instructions here.\n\n</Tab>\n<Tab heading=\"Web UI\">\n\n\nStep-by-step instructions here.\n\n</Tab>\n</Tabs>\n\n\n## Action title 2\n\nShort description of this step.\n\nIf applicable, demonstrate the steps using CLI, API, and/or UI.\n\nLeverage the [tabs component](https://github.com/hashicorp/learn/blob/master/components/tabs/README.md) to organize the content.\n\n<Tabs>\n<Tab heading=\"CLI command\">\n\n\nStep-by-step instructions here.\n\n</Tab>\n<Tab heading=\"API call using cURL\">\n\n\nStep-by-step instructions here.\n\n</Tab>\n<Tab heading=\"Web UI\">\n\n\nStep-by-step instructions here.\n\n</Tab>\n</Tabs>\n\n\n## Action title 3\n\nShort description of this step.\n\nIf applicable, demonstrate the steps using CLI, API, and/or UI.\n\nLeverage the [tabs component](https://github.com/hashicorp/learn/blob/master/components/tabs/README.md) to organize the content.\n\n<Tabs>\n<Tab heading=\"CLI command\">\n\n\nStep-by-step instructions here.\n\n</Tab>\n<Tab heading=\"API call using cURL\">\n\n\nStep-by-step instructions here.\n\n</Tab>\n<Tab heading=\"Web UI\">\n\n\nStep-by-step instructions here.\n\n</Tab>\n</Tabs>\n\n\n...\n\n## Additional discussion\n\n_Optional_\n\nOftentimes, support or TAMs ask you to add extra discussion to explain a little\nmore about cloud-provider-specific pitfalls, etc. You can add that discussion here if it\ndoes not fit anywhere else.\n\n## Next steps\n\nIn this section, start with a brief **_summary_** of what you have learned in\nthis tutorial, re-emphasizing the business value. 
Then provide some guidance on the\nnext steps to extend the user's knowledge. If the current collection is sequential, briefly describe what the user will do in the next tutorial.\n\nAdd cross-reference links where the user can get more information about the feature (e.g.,\nproduct documentation pages, webinar links, blog posts, etc.).\n"
  },
  {
    "path": "host_volume/README.md",
    "content": "## Host Volume Examples\n\nThese sample job files exercise a simple host volume configuration. They assume that the following\nvolumes are configured somewhere in your cluster:\n\n```hcl\nhost_volume \"certs\" {\n  path      = \"/data/certs\"\n  read_only = true\n}\n\nhost_volume \"mysql\" {\n  path      = \"/data/mysql\"\n  read_only = false\n}\n\nhost_volume \"prometheus\" {\n  path      = \"/data/prometheus\"\n  read_only = false\n}\n\nhost_volume \"templates\" {\n  path      = \"/data/templates\"\n  read_only = true\n}\n```\n"
  },
  {
    "path": "host_volume/mariadb/mariadb.nomad",
    "content": "job \"mariadb\" {\n  datacenters = [\"dc1\"]\n\n  group \"database\" {\n    volume \"mysql\" { type=\"host\"  source = \"mysql\"  }\n    task \"maria\" {\n      driver = \"docker\"\n      volume_mount { volume=\"mysql\" destination=\"/var/lib/mysql\" }\n      env {\n        \"MYSQL_ROOT_PASSWORD\" =\"mypass\"\n      }\n      config {\n        image = \"mariadb/server:10.3\"\n        port_map { db=3306 }\n      }\n\n      resources {\n        cpu=500 memory=256 network { port \"db\" {} }\n      }\n\n      service {\n        name = \"mariadb\"\n        tags = [\"persist\"]\n        port = \"db\"\n        check { name=\"alive\" type=\"tcp\" interval=\"10s\" timeout=\"2s\" }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "host_volume/prometheus/README.md",
    "content": "# Prometheus\n\n\nOn the client, you will need a firewall rule to allow the Docker containers to talk to the local\nConsul agents. For example, with firewalld:\n\n```shell\nfirewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address=172.17.0.0/16 accept' && firewall-cmd --reload\n```\n\n\n## Connecting to the instances\n\n\n"
  },
  {
    "path": "host_volume/prometheus/grafana/README.md",
    "content": "Thanks to [Nextty](https://grafana.com/orgs/derekamz) for two great Grafana dashboards to start with:\n\n* Nomad Jobs - https://grafana.com/dashboards/6281\n* Nomad Cluster - \n"
  },
  {
    "path": "host_volume/prometheus/grafana/nomad_jobs.json",
    "content": "{\n  \"__inputs\": [\n    {\n      \"name\": \"DS_PROMETHEUS\",\n      \"label\": \"prometheus\",\n      \"description\": \"\",\n      \"type\": \"datasource\",\n      \"pluginId\": \"prometheus\",\n      \"pluginName\": \"Prometheus\"\n    }\n  ],\n  \"__requires\": [\n    {\n      \"type\": \"grafana\",\n      \"id\": \"grafana\",\n      \"name\": \"Grafana\",\n      \"version\": \"5.1.2\"\n    },\n    {\n      \"type\": \"panel\",\n      \"id\": \"graph\",\n      \"name\": \"Graph\",\n      \"version\": \"5.0.0\"\n    },\n    {\n      \"type\": \"datasource\",\n      \"id\": \"prometheus\",\n      \"name\": \"Prometheus\",\n      \"version\": \"5.0.0\"\n    }\n  ],\n  \"annotations\": {\n    \"list\": [\n      {\n        \"builtIn\": 1,\n        \"datasource\": \"-- Grafana --\",\n        \"enable\": true,\n        \"hide\": true,\n        \"iconColor\": \"rgba(0, 211, 255, 1)\",\n        \"name\": \"Annotations & Alerts\",\n        \"type\": \"dashboard\"\n      }\n    ]\n  },\n  \"editable\": true,\n  \"gnetId\": 6281,\n  \"graphTooltip\": 0,\n  \"id\": null,\n  \"iteration\": 1527401878265,\n  \"links\": [],\n  \"panels\": [\n    {\n      \"aliasColors\": {},\n      \"bars\": false,\n      \"dashLength\": 10,\n      \"dashes\": false,\n      \"datasource\": \"${DS_PROMETHEUS}\",\n      \"fill\": 1,\n      \"gridPos\": {\n        \"h\": 6,\n        \"w\": 12,\n        \"x\": 0,\n        \"y\": 0\n      },\n      \"id\": 2,\n      \"legend\": {\n        \"avg\": false,\n        \"current\": false,\n        \"max\": false,\n        \"min\": false,\n        \"show\": true,\n        \"total\": false,\n        \"values\": false\n      },\n      \"lines\": true,\n      \"linewidth\": 1,\n      \"links\": [],\n      \"nullPointMode\": \"null\",\n      \"percentage\": false,\n      \"pointradius\": 5,\n      \"points\": false,\n      \"renderer\": \"flot\",\n      \"repeat\": \"host\",\n      \"repeatDirection\": \"v\",\n      \"seriesOverrides\": 
[],\n      \"spaceLength\": 10,\n      \"stack\": false,\n      \"steppedLine\": false,\n      \"targets\": [\n        {\n          \"expr\": \"avg(nomad_client_allocs_cpu_total_percent{host=~\\\"$host\\\"}) by(exported_job, task)\",\n          \"format\": \"time_series\",\n          \"interval\": \"\",\n          \"intervalFactor\": 1,\n          \"legendFormat\": \"{{task}}\",\n          \"refId\": \"A\"\n        }\n      ],\n      \"thresholds\": [],\n      \"timeFrom\": null,\n      \"timeShift\": null,\n      \"title\": \"CPU Usage Percent - $host\",\n      \"tooltip\": {\n        \"shared\": true,\n        \"sort\": 0,\n        \"value_type\": \"individual\"\n      },\n      \"type\": \"graph\",\n      \"xaxis\": {\n        \"buckets\": null,\n        \"mode\": \"time\",\n        \"name\": null,\n        \"show\": true,\n        \"values\": []\n      },\n      \"yaxes\": [\n        {\n          \"decimals\": 3,\n          \"format\": \"percentunit\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        },\n        {\n          \"format\": \"short\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        }\n      ],\n      \"yaxis\": {\n        \"align\": false,\n        \"alignLevel\": null\n      }\n    },\n    {\n      \"aliasColors\": {},\n      \"bars\": false,\n      \"dashLength\": 10,\n      \"dashes\": false,\n      \"datasource\": \"${DS_PROMETHEUS}\",\n      \"fill\": 1,\n      \"gridPos\": {\n        \"h\": 6,\n        \"w\": 12,\n        \"x\": 12,\n        \"y\": 0\n      },\n      \"id\": 3,\n      \"legend\": {\n        \"avg\": false,\n        \"current\": false,\n        \"max\": false,\n        \"min\": false,\n        \"show\": true,\n        \"total\": false,\n        \"values\": false\n      },\n      \"lines\": true,\n      \"linewidth\": 1,\n      \"links\": [],\n 
     \"nullPointMode\": \"null\",\n      \"percentage\": false,\n      \"pointradius\": 5,\n      \"points\": false,\n      \"renderer\": \"flot\",\n      \"repeat\": \"host\",\n      \"repeatDirection\": \"v\",\n      \"seriesOverrides\": [],\n      \"spaceLength\": 10,\n      \"stack\": false,\n      \"steppedLine\": false,\n      \"targets\": [\n        {\n          \"expr\": \"avg(nomad_client_allocs_cpu_total_ticks{host=~\\\"$host\\\"}) by(exported_job, task)\",\n          \"format\": \"time_series\",\n          \"interval\": \"\",\n          \"intervalFactor\": 1,\n          \"legendFormat\": \"{{task}}\",\n          \"refId\": \"A\"\n        }\n      ],\n      \"thresholds\": [],\n      \"timeFrom\": null,\n      \"timeShift\": null,\n      \"title\": \"CPU Total Ticks - $host\",\n      \"tooltip\": {\n        \"shared\": true,\n        \"sort\": 0,\n        \"value_type\": \"individual\"\n      },\n      \"type\": \"graph\",\n      \"xaxis\": {\n        \"buckets\": null,\n        \"mode\": \"time\",\n        \"name\": null,\n        \"show\": true,\n        \"values\": []\n      },\n      \"yaxes\": [\n        {\n          \"decimals\": 3,\n          \"format\": \"timeticks\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        },\n        {\n          \"format\": \"short\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        }\n      ],\n      \"yaxis\": {\n        \"align\": false,\n        \"alignLevel\": null\n      }\n    },\n    {\n      \"aliasColors\": {},\n      \"bars\": false,\n      \"dashLength\": 10,\n      \"dashes\": false,\n      \"datasource\": \"${DS_PROMETHEUS}\",\n      \"fill\": 1,\n      \"gridPos\": {\n        \"h\": 6,\n        \"w\": 12,\n        \"x\": 0,\n        \"y\": 6\n      },\n      \"id\": 6,\n      \"legend\": {\n        \"avg\": false,\n       
 \"current\": false,\n        \"max\": false,\n        \"min\": false,\n        \"show\": true,\n        \"total\": false,\n        \"values\": false\n      },\n      \"lines\": true,\n      \"linewidth\": 1,\n      \"links\": [],\n      \"nullPointMode\": \"null\",\n      \"percentage\": false,\n      \"pointradius\": 5,\n      \"points\": false,\n      \"renderer\": \"flot\",\n      \"repeat\": \"host\",\n      \"repeatDirection\": \"v\",\n      \"seriesOverrides\": [],\n      \"spaceLength\": 10,\n      \"stack\": false,\n      \"steppedLine\": false,\n      \"targets\": [\n        {\n          \"expr\": \"avg(nomad_client_allocs_memory_rss{host=~\\\"$host\\\"}) by(exported_job, task)\",\n          \"format\": \"time_series\",\n          \"interval\": \"\",\n          \"intervalFactor\": 1,\n          \"legendFormat\": \"{{task}}\",\n          \"refId\": \"A\"\n        }\n      ],\n      \"thresholds\": [],\n      \"timeFrom\": null,\n      \"timeShift\": null,\n      \"title\": \"RSS - $host\",\n      \"tooltip\": {\n        \"shared\": true,\n        \"sort\": 0,\n        \"value_type\": \"individual\"\n      },\n      \"type\": \"graph\",\n      \"xaxis\": {\n        \"buckets\": null,\n        \"mode\": \"time\",\n        \"name\": null,\n        \"show\": true,\n        \"values\": []\n      },\n      \"yaxes\": [\n        {\n          \"decimals\": 3,\n          \"format\": \"decbytes\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        },\n        {\n          \"format\": \"short\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        }\n      ],\n      \"yaxis\": {\n        \"align\": false,\n        \"alignLevel\": null\n      }\n    },\n    {\n      \"aliasColors\": {},\n      \"bars\": false,\n      \"dashLength\": 10,\n      \"dashes\": false,\n      \"datasource\": 
\"${DS_PROMETHEUS}\",\n      \"fill\": 1,\n      \"gridPos\": {\n        \"h\": 6,\n        \"w\": 12,\n        \"x\": 12,\n        \"y\": 6\n      },\n      \"id\": 7,\n      \"legend\": {\n        \"avg\": false,\n        \"current\": false,\n        \"max\": false,\n        \"min\": false,\n        \"show\": true,\n        \"total\": false,\n        \"values\": false\n      },\n      \"lines\": true,\n      \"linewidth\": 1,\n      \"links\": [],\n      \"nullPointMode\": \"null\",\n      \"percentage\": false,\n      \"pointradius\": 5,\n      \"points\": false,\n      \"renderer\": \"flot\",\n      \"repeat\": \"host\",\n      \"repeatDirection\": \"v\",\n      \"seriesOverrides\": [],\n      \"spaceLength\": 10,\n      \"stack\": false,\n      \"steppedLine\": false,\n      \"targets\": [\n        {\n          \"expr\": \"avg(nomad_client_allocs_memory_cache{host=~\\\"$host\\\"}) by(exported_job, task)\",\n          \"format\": \"time_series\",\n          \"interval\": \"\",\n          \"intervalFactor\": 1,\n          \"legendFormat\": \"{{task}}\",\n          \"refId\": \"A\"\n        }\n      ],\n      \"thresholds\": [],\n      \"timeFrom\": null,\n      \"timeShift\": null,\n      \"title\": \"Memory Cache - $host\",\n      \"tooltip\": {\n        \"shared\": true,\n        \"sort\": 0,\n        \"value_type\": \"individual\"\n      },\n      \"type\": \"graph\",\n      \"xaxis\": {\n        \"buckets\": null,\n        \"mode\": \"time\",\n        \"name\": null,\n        \"show\": true,\n        \"values\": []\n      },\n      \"yaxes\": [\n        {\n          \"decimals\": 3,\n          \"format\": \"decbytes\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        },\n        {\n          \"format\": \"short\",\n          \"label\": null,\n          \"logBase\": 1,\n          \"max\": null,\n          \"min\": null,\n          \"show\": true\n        }\n      ],\n 
     \"yaxis\": {\n        \"align\": false,\n        \"alignLevel\": null\n      }\n    }\n  ],\n  \"schemaVersion\": 16,\n  \"style\": \"dark\",\n  \"tags\": [],\n  \"templating\": {\n    \"list\": [\n      {\n        \"allValue\": null,\n        \"current\": {},\n        \"datasource\": \"${DS_PROMETHEUS}\",\n        \"hide\": 0,\n        \"includeAll\": false,\n        \"label\": \"DC\",\n        \"multi\": false,\n        \"name\": \"datacenter\",\n        \"options\": [],\n        \"query\": \"label_values(nomad_client_uptime, datacenter)\",\n        \"refresh\": 1,\n        \"regex\": \"\",\n        \"sort\": 0,\n        \"tagValuesQuery\": \"\",\n        \"tags\": [],\n        \"tagsQuery\": \"\",\n        \"type\": \"query\",\n        \"useTags\": false\n      },\n      {\n        \"allValue\": null,\n        \"current\": {},\n        \"datasource\": \"${DS_PROMETHEUS}\",\n        \"hide\": 0,\n        \"includeAll\": true,\n        \"label\": \"Host\",\n        \"multi\": true,\n        \"name\": \"host\",\n        \"options\": [],\n        \"query\": \"label_values(nomad_client_uptime{datacenter=~\\\"$datacenter\\\"}, host)\",\n        \"refresh\": 2,\n        \"regex\": \"\",\n        \"sort\": 0,\n        \"tagValuesQuery\": \"\",\n        \"tags\": [],\n        \"tagsQuery\": \"\",\n        \"type\": \"query\",\n        \"useTags\": false\n      }\n    ]\n  },\n  \"time\": {\n    \"from\": \"now-6h\",\n    \"to\": \"now\"\n  },\n  \"timepicker\": {\n    \"refresh_intervals\": [\n      \"5s\",\n      \"10s\",\n      \"30s\",\n      \"1m\",\n      \"5m\",\n      \"15m\",\n      \"30m\",\n      \"1h\",\n      \"2h\",\n      \"1d\"\n    ],\n    \"time_options\": [\n      \"5m\",\n      \"15m\",\n      \"1h\",\n      \"6h\",\n      \"12h\",\n      \"24h\",\n      \"2d\",\n      \"7d\",\n      \"30d\"\n    ]\n  },\n  \"timezone\": \"\",\n  \"title\": \"Nomad Jobs\",\n  \"uid\": \"TvqbbhViz\",\n  \"version\": 12,\n  \"description\": \"Nomad Jobs 
metrics\"\n}\n"
  },
  {
    "path": "host_volume/prometheus/prometheus.nomad",
    "content": "job \"prometheus\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  update {\n    max_parallel = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"3m\"\n    auto_revert = false\n    canary = 0\n  }\n  group \"monitoring\" {\n    volume \"prometheus\" { type=\"host\" config { source=\"prometheus\" } }\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay = \"25s\"\n      mode = \"delay\"\n    }\n    ephemeral_disk { size = 1000 }\n    task \"grafana\" {\n      volume_mount { volume=\"prometheus\" destination=\"/mnt/prometheus\" }\n      artifact {\n        source=\"https://gist.githubusercontent.com/angrycub/046cee11bd3d8c4ab9a3819646c9660c/raw/c699095c2cb25b896e2c709da588b668ce82f8b5/prometheus_nomad.json\"\n        destination=\"local/provisioning/dashboards/dashs\"\n      }\n      template {\n        change_mode=\"noop\"\n        destination=\"local/provisioning/dashboards/file_provider.yml\"\n        data = <<EOH\napiVersion: 1\n\nproviders:\n- name: 'default'\n  orgId: 1\n  folder: ''\n  type: file\n  disableDeletion: false\n  updateIntervalSeconds: 10 #how often Grafana will scan for changed dashboards\n  options:\n    path: {{ env \"NOMAD_TASK_DIR\" }}/provisioning/dashboards/dashs\nEOH\n\n      }\n      template {\n        change_mode=\"noop\"\n        destination=\"local/provisioning/datasources/prometheus_datasource.yml\"\n        data = <<EOH\napiVersion: 1\n\ndatasources:\n  - name: Prometheus\n    type: prometheus\n    access: proxy\n    url: http://{{ env \"NOMAD_ADDR_prometheus_prometheus_ui\" }}\nEOH\n      }\n      env {\n        \"GF_SERVER_ROOT_URL\"=\"http://127.0.0.1:9999/grafana/\"\n        \"GF_PATHS_PROVISIONING\"=\"/${NOMAD_TASK_DIR}/provisioning\"\n      }\n      driver = \"docker\"\n      config {\n        image = \"grafana/grafana:6.1.4\"\n        port_map { grafana_ui = 3000 }\n      }\n      resources {\n        network { port \"grafana_ui\" {} }\n      }\n      service 
{\n        name = \"grafana-ui\"\n        port = \"grafana_ui\"\n        tags = [\"urlprefix-/grafana strip=/grafana\"] \n        check {\n          name     = \"grafana-ui port alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n\n    task \"prometheus\" {\n      volume_mount { volume=\"prometheus\" destination=\"/prometheus/data\" }\n      template  {\n        change_mode = \"noop\"\n        destination=\"local/prometheus.yml\"\n        data = <<EOH\n---\nglobal:\n  scrape_interval:     15s\nscrape_configs:\n  - job_name: 'prometheus'\n    scrape_interval: 5s\n    static_configs:\n      - targets: ['localhost:9090']\n\n  - job_name: 'nomad'\n    scrape_interval: 10s\n    metrics_path: /v1/metrics\n    params:\n        format: ['prometheus']\n    consul_sd_configs:\n      - server: '{{ env \"NOMAD_IP_prometheus_ui\" }}:8500'\n        token: \"3ef34421-1b20-e543-65d4-54067560d377\"\n        services:\n          - \"nomad\"\n          - \"nomad-client\"\n    relabel_configs:\n      - source_labels: ['__meta_consul_tags']\n        regex: .*,http,.*\n        action: keep\nEOH\n\n      }\n\n      driver = \"docker\"\n      config {\n        image = \"prom/prometheus:v2.9.1\"\n        args = [\n          \"--web.external-url=http://127.0.0.1:9999/prometheus\",\n          \"--web.route-prefix=/\",\n          \"--config.file=/local/prometheus.yml\"     \n        ]\n        port_map { prometheus_ui = 9090 }\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n        network { port \"prometheus_ui\" {} }\n      }\n      service {\n        name = \"prometheus-ui\"\n        #tags = [\"urlprefix-/prometheus\"]\n        tags = [\"urlprefix-/prometheus strip=/prometheus\"]\n        port = \"prometheus_ui\"\n        check {\n          name     = \"prometheus_ui port alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    
}\n  }\n}\n"
  },
  {
    "path": "host_volume/read_only/read_only.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"database\" {\n    volume \"mysql\" { type=\"host\" config { source=\"mysql\" } }\n    volume \"certs\" { type=\"host\" read_only=true config { source=\"certs\" } }  \n    task \"maria\" {\n      driver = \"docker\"\n      volume_mount { volume=\"mysql\" destination=\"/var/lib/mysql\" }\n      volume_mount { volume=\"certs\" destination=\"/certs\" }\n      env {\n        \"MYSQL_ROOT_PASSWORD\" =\"mypass\"\n      }\n      config {\n        image = \"mariadb/server:10.3\"\n        port_map { db=3306 }\n      }\n\n      resources {\n        cpu=500 memory=256 network { port \"db\" {} }\n      }\n\n      service {\n        name = \"mariadb\"\n        tags = [\"persist\"]\n        port = \"db\"\n        check { name=\"alive\" type=\"tcp\" interval=\"10s\" timeout=\"2s\" }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "http_echo/arm-service.nomad",
    "content": "job \"bar-service\" {\n  datacenters = [\"dc1\"]\n\n  group \"example\" {\n    network {\n      port \"http\" {}\n    }\n\n    service {\n      name = \"bar-service\"\n      tags = [\"urlprefix-/bar\"]\n      port = \"http\"\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args    = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to the Bar Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "http_echo/bar-service.nomad",
    "content": "job \"bar-service\" {\n  datacenters = [\"dc1\"]\n\n  group \"example\" {\n    count = 6\n\n    network {\n      port \"http\" {}\n    }\n\n    service {\n      name = \"bar-service\"\n      tags = [\"urlprefix-/bar\"]\n      port = \"http\"\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args    = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to the Bar Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "http_echo/car-service-broken-check.nomad",
    "content": "job \"car-service\" {\n  datacenters = [\"dc1\"]\n\n  update {\n    max_parallel      = 1\n    health_check      = \"checks\"\n    min_healthy_time  = \"10s\"\n    healthy_deadline  = \"30s\"\n    progress_deadline = \"2m\"\n    auto_revert       = false\n    stagger           = \"30s\"\n  }\n\n  group \"example\" {\n    count = 3\n\n    network {\n      port \"http\" {}\n      port \"supernotreal\" {}\n    }\n\n    service {\n      name = \"car-service\"\n      tags = [\"urlprefix-/car\"]\n      port = \"supernotreal\"\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args    = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to the Car Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n\n      resources {\n        memory = 10\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "http_echo/foo-service.deployment.nomad",
    "content": "job \"foo-service\" {\n  datacenters = [\"dc1\"]\n\n  meta {\n    foo-service = \"true\"\n  }\n\n  group \"example\" {\n    count = 3\n    meta {\n      \"foo\"=\"bar\"\n    }\n\n    update {\n      max_parallel     = 1\n      min_healthy_time = \"10s\"\n      healthy_deadline = \"3m\"\n      auto_revert      = false\n      canary           = 1\n    }\n\n    network {\n      port \"http\" {}\n    }\n\n    service {\n      name = \"foo-service\"\n      tags = [\"urlprefix-/foo\"]\n      port = \"http\"\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    service {\n      name = \"foo-service-2\"\n      tags = [\"urlprefix-/foo2\"]\n      port = \"http\"\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args    = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to the Foo Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "http_echo/foo-service.nomad",
    "content": "job \"foo-service\" {\n  datacenters = [\"dc1\"]\n\n  meta {\n    foo-service = \"true\"\n  }\n\n  group \"example\" {\n    count = 3\n\n    network {\n      port \"http\" {}\n    }\n\n    service {\n      name = \"foo-service\"\n      tags = [\"urlprefix-/foo\"]\n      port = \"http\"\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    service {\n      name = \"foo-service-2\"\n      tags = [\"urlprefix-/foo2\"]\n      port = \"http\"\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args    = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to the Foo Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "http_echo/foo-test.nomad",
    "content": "job \"foo-service\" {\n  datacenters = [\"dc1\"]\n\n  meta {\n    foo-service = \"true\"\n  }\n\n  group \"example\" {\n    count = 3\n\n    network {\n      port \"http\" {}\n    }\n\n    service {\n      name = \"foo-service\"\n      tags = [\"urlprefix-/foo\"]\n      port = \"http\"\n\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    service {\n      name = \"foo-service-2\"\n      tags = [\"urlprefix-/foo2\"]\n      port = \"http\"\n\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"/usr/sbin/http-echo\"\n        args    = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to the Foo Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "http_echo/template/echo_template.nomad",
    "content": "job \"http-echo\" {\n  datacenters = [\"dc1\"]\n\n  update {\n    max_parallel = 1\n  }\n\n  group \"web\" {\n    constraint {\n      distinct_hosts = true\n    }\n\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n\n    network {\n      port \"http\" {\n        static = 8080\n        to     = 8080\n      }\n    }\n\n    service {\n      name = \"http-echo\"\n      port = \"http\"\n\n      check {\n        name     = \"alive\"\n        type     = \"http\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n        path     = \"/\"\n      }\n    }\n\n\n    task \"http-echo\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hashicorp/http-echo\"\n        args  = [\"-text\", \"$content\", \"-listen\",\":8080\"]\n        ports = [\"http\"]\n      }\n\n      template {\n        destination = \"local/template.out\"\n        env         = true\n        data        = <<EOH\ncontent = \"\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env \"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env 
\"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env \"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n                 
NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n   concat key:  service/fabio/{{ env \"NOMAD_JOB_NAME\" }}/listeners\n    key:         {{ keyOrDefault ( printf \"service/fabio/%s/listeners\" ( env \"NOMAD_JOB_NAME\" ) ) \":9999\" }}\n\n{{ define \"custom\" }}service/fabio/{{env \"NOMAD_JOB_NAME\" }}/listeners{{ end }}\n    key:         {{ keyOrDefault (executeTemplate \"custom\") \":9999\" }}\n\n   math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n\n\n\"\n  EOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "http_echo/template/ets.nomad",
    "content": "job \"http-echo\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  update {\n    max_parallel = 1\n  }\n\n  group \"web\" {\n    constraint {\n      distinct_hosts = true\n    }\n\n    network {\n      port \"http\" {\n        static = 8080\n        to     = 8080\n      }\n    }\n\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n\n    task \"http-echo\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hashicorp/http-echo\"\n        args  = [\"-text\", \"${content}\", \"-listen\", \":8080\"]\n        ports = [\"http\"]\n      }\n\n template {\n        destination = \"local/template.out\"\n        env         = true\n        data        = <<EOH\ncontent='<table><tr><td>node.unique.id</td><td>{{ env \"node.unique.id\" }}</td></tr><tr><td>node.datacenter</td><td>{{ env \"node.datacenter\" }}</td></tr><tr><td>node.unique.name</td><td>{{ env \"node.unique.name\" }}</td></tr><tr><td>node.class</td><td>{{ env \"node.class\" }}</td></tr><tr><td>attr.cpu.arch</td><td>{{ env \"attr.cpu.arch\" }}</td></tr><tr><td>attr.cpu.numcores</td><td>{{ env \"attr.cpu.numcores\" }}</td></tr><tr><td>attr.cpu.totalcompute</td><td>{{ env \"attr.cpu.totalcompute\" }}</td></tr><tr><td>attr.consul.datacenter</td><td>{{ env \"attr.consul.datacenter\" }}</td></tr><tr><td>attr.unique.hostname</td><td>{{ env \"attr.unique.hostname\" }}</td></tr><tr><td>attr.unique.network.ip-address</td><td>{{ env \"attr.unique.network.ip-address\" }}</td></tr><tr><td>attr.kernel.name</td><td>{{ env \"attr.kernel.name\" }}</td></tr><tr><td>attr.kernel.version</td><td>{{ env \"attr.kernel.version\" }}</td></tr><tr><td>attr.platform.aws.ami-id</td><td>{{ env \"attr.platform.aws.ami-id\" }}</td></tr><tr><td>attr.platform.aws.instance-type</td><td>{{ env \"attr.platform.aws.instance-type\" }}</td></tr><tr><td>attr.os.name</td><td>{{ env \"attr.os.name\" }}</td></tr><tr><td>attr.os.version</td><td>{{ 
env \"attr.os.version\" }}</td></tr><tr><td>NOMAD_ALLOC_DIR</td><td>{{env \"NOMAD_ALLOC_DIR\"}}</td></tr><tr><td>NOMAD_TASK_DIR</td><td>{{env \"NOMAD_TASK_DIR\"}}</td></tr><tr><td>NOMAD_SECRETS_DIR</td><td>{{env \"NOMAD_SECRETS_DIR\"}}</td></tr><tr><td>NOMAD_MEMORY_LIMIT</td><td>{{env \"NOMAD_MEMORY_LIMIT\"}}</td></tr><tr><td>NOMAD_CPU_LIMIT</td><td>{{env \"NOMAD_CPU_LIMIT\"}}</td></tr><tr><td>NOMAD_ALLOC_ID</td><td>{{env \"NOMAD_ALLOC_ID\"}}</td></tr><tr><td>NOMAD_ALLOC_NAME</td><td>{{env \"NOMAD_ALLOC_NAME\"}}</td></tr><tr><td>NOMAD_ALLOC_INDEX</td><td>{{env \"NOMAD_ALLOC_INDEX\"}}</td></tr><tr><td>NOMAD_TASK_NAME</td><td>{{env \"NOMAD_TASK_NAME\"}}</td></tr><tr><td>NOMAD_GROUP_NAME</td><td>{{env \"NOMAD_GROUP_NAME\"}}</td></tr><tr><td>NOMAD_JOB_NAME</td><td>{{env \"NOMAD_JOB_NAME\"}}</td></tr><tr><td>NOMAD_DC</td><td>{{env \"NOMAD_DC\"}}</td></tr><tr><td>NOMAD_REGION</td><td>{{env \"NOMAD_REGION\"}}</td></tr><tr><td>VAULT_TOKEN</td><td>{{env \"VAULT_TOKEN\"}}</td></tr><tr><td>GOMAXPROCS</td><td>{{env \"GOMAXPROCS\"}}</td></tr><tr><td>HOME</td><td>{{env \"HOME\"}}</td></tr><tr><td>LANG</td><td>{{env \"LANG\"}}</td></tr><tr><td>LOGNAME</td><td>{{env \"LOGNAME\"}}</td></tr><tr><td>NOMAD_ADDR_export</td><td>{{env \"NOMAD_ADDR_export\"}}</td></tr><tr><td>NOMAD_ADDR_exstat</td><td>{{env \"NOMAD_ADDR_exstat\"}}</td></tr><tr><td>NOMAD_ALLOC_DIR</td><td>{{env \"NOMAD_ALLOC_DIR\"}}</td></tr><tr><td>NOMAD_ALLOC_ID</td><td>{{env \"NOMAD_ALLOC_ID\"}}</td></tr><tr><td>NOMAD_ALLOC_INDEX</td><td>{{env \"NOMAD_ALLOC_INDEX\"}}</td></tr><tr><td>NOMAD_ALLOC_NAME</td><td>{{env \"NOMAD_ALLOC_NAME\"}}</td></tr><tr><td>NOMAD_CPU_LIMIT</td><td>{{env \"NOMAD_CPU_LIMIT\"}}</td></tr><tr><td>NOMAD_DC</td><td>{{env \"NOMAD_DC\"}}</td></tr><tr><td>NOMAD_GROUP_NAME</td><td>{{env \"NOMAD_GROUP_NAME\"}}</td></tr><tr><td>NOMAD_HOST_PORT_export</td><td>{{env \"NOMAD_HOST_PORT_export\"}}</td></tr><tr><td>NOMAD_HOST_PORT_exstat</td><td>{{env 
\"NOMAD_HOST_PORT_exstat\"}}</td></tr><tr><td>NOMAD_IP_export</td><td>{{env \"NOMAD_IP_export\"}}</td></tr><tr><td>NOMAD_IP_exstat</td><td>{{env \"NOMAD_IP_exstat\"}}</td></tr><tr><td>NOMAD_JOB_NAME</td><td>{{env \"NOMAD_JOB_NAME\"}}</td></tr><tr><td>NOMAD_MEMORY_LIMIT</td><td>{{env \"NOMAD_MEMORY_LIMIT\"}}</td></tr><tr><td>NOMAD_PORT_export</td><td>{{env \"NOMAD_PORT_export\"}}</td></tr><tr><td>NOMAD_PORT_exstat</td><td>{{env \"NOMAD_PORT_exstat\"}}</td></tr><tr><td>NOMAD_REGION</td><td>{{env \"NOMAD_REGION\"}}</td></tr><tr><td>NOMAD_SECRETS_DIR</td><td>{{env \"NOMAD_SECRETS_DIR\"}}</td></tr><tr><td>NOMAD_TASK_DIR</td><td>{{env \"NOMAD_TASK_DIR\"}}</td></tr><tr><td>NOMAD_TASK_NAME</td><td>{{env \"NOMAD_TASK_NAME\"}}</td></tr><tr><td>PATH</td><td>{{env \"PATH\"}}</td></tr><tr><td>PWD</td><td>{{env \"PWD\"}}</td></tr><tr><td>SHELL</td><td>{{env \"SHELL\"}}</td></tr><tr><td>SHLVL</td><td>{{env \"SHLVL\"}}</td></tr><tr><td>USER</td><td>{{env \"USER\"}}</td></tr><tr><td>VAULT_TOKEN</td><td>{{env \"VAULT_TOKEN\"}}</td></tr></table>'\nEOH\n      }\n\n      service {\n        name = \"http-echo\"\n        port = \"http\"\n\n        check {\n          name     = \"alive\"\n          type     = \"http\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n          path     = \"/\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "http_echo/template/ets2.nomad",
    "content": "job \"http-echo\" {\n  datacenters = [\"dc1\"]\n\n  update {\n    max_parallel = 1\n  }\n\n  group \"web\" {\n    constraint {\n      distinct_hosts = true\n    }\n\n    network {\n      port \"http\" {}\n    }\n\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"/bin/bash\"\n        args    = [\n          \"-c\",\n          \"local/http-echo -listen :${NOMAD_PORT_http} -text \\\"`cat local/template.out`\\\"\"\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n\n      template {\n        data = <<EOH\n  <html>\n    <head>\n      <title>Interpolation Demo</title>\n    </head>\n  <body>\n  <h1>Interpolation Demo</h1>\n  <pre>\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env 
\"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env 
\"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n   concat key:  service/fabio/{{ env \"NOMAD_JOB_NAME\" }}/listeners\n    key:         {{ keyOrDefault ( printf \"service/fabio/%s/listeners\" ( env \"NOMAD_JOB_NAME\" ) ) \":9999\" }}\n\n{{ define \"custom\" }}service/fabio/{{env \"NOMAD_JOB_NAME\" }}/listeners{{ end }}\n    key:         {{ keyOrDefault (executeTemplate \"custom\") \":9999\" }}\n\n   math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n</pre>\n</body>\n  EOH\n\n        destination = \"local/template.out\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "http_echo/template/ets3.nomad",
    "content": "job \"http-echo\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  update {\n    max_parallel = 1\n  }\n\n  group \"web\" {\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n\n    network {\n      port \"http\" {}\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"/bin/bash\"\n        args = [\n          \"-c\",\n          \"local/http-echo -listen :${NOMAD_PORT_http} -text \\\"`env`\\\"\"\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n\n      template {\n        destination = \"local/template.out\"\n        data        = <<EOH\n  <html>\n    <head>\n      <title>Interpolation Demo</title>\n    </head>\n  <body>\n  <h1>Interpolation Demo</h1>\n  <pre>\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env 
\"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env 
\"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n   concat key:  service/fabio/{{ env \"NOMAD_JOB_NAME\" }}/listeners\n    key:         {{ keyOrDefault ( printf \"service/fabio/%s/listeners\" ( env \"NOMAD_JOB_NAME\" ) ) \":9999\" }}\n\n{{ define \"custom\" }}service/fabio/{{env \"NOMAD_JOB_NAME\" }}/listeners{{ end }}\n    key:         {{ keyOrDefault (executeTemplate \"custom\") \":9999\" }}\n\n   math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n</pre>\n</body>\n  EOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "httpd_site/README.md",
    "content": "# httpd site\n\nThis job will download a website tarball into the allocation and spin up\nthe Apache webserver docker image (2.4-alpine) and mount this container\ninto place.\n\n"
  },
  {
    "path": "httpd_site/httpd.nomad",
    "content": "job \"httpd_site\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  update {\n    stagger = \"5s\"\n    max_parallel = 1\n  }\n  group \"httpd\" {\n    count = 1\n    network {\n      port \"http\" {\n        to = 80\n      }\n    }\n\n    task \"httpd-docker\" {\n      artifact {\n        source = \"https://raw.githubusercontent.com/angrycub/nomad_example_jobs/master/httpd_site/site-content.tgz\"\n        destination = \"tarball\"\n      }\n      driver = \"docker\"\n      config {\n        image = \"httpd:2.4-alpine\"\n        volumes = [\n          \"tarball:/usr/local/apache2/htdocs\"\n        ]\n        ports = [\"http\"]\n      }\n      resources {\n        cpu = 200\n        memory = 32\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "httpd_site/make_site.sh",
    "content": "#!/bin/sh\n\necho \"📦 Creating site tarball...\"\ncd site-content &&\ntar -zcvf ../site-content.tgz * &&\ncd ..\n"
  },
  {
    "path": "httpd_site/site-content/about.html",
    "content": "<!DOCTYPE html>\n<html>\n<head>\n\t<link rel=\"stylesheet\" href=\"css/style.css\">\n\t<title>About the job</title>\n</head>\n<body>\n<h1>About the job</h1>\n<p>This repository that contains this job can be found on GitHub at \n<a href=\"https://github.com/angrycub/nomad_example_jobs/tree/master/httpd_site\" target=\"_new\">angrycub/nomad_example_jobs/httpd_site</a>. The specific site code is in the <code>site-content</code> folder.\n</p>\n<p>\nReturn to <a href=\"index.html\">Home</a>.\n</p>\n</body>\n</html>"
  },
  {
    "path": "httpd_site/site-content/css/style.css",
    "content": "body {\n\tfont-family: \"Helvetica Neue\",\"Helvetica\",\"Arial\", sans-serif;\n}\nh1 {\n\tcolor: white;\n\ttext-shadow: 1px 1px 2px black, 0 0 25px blue, 0 0 5px darkblue;\n\twidth: auto;\n\tborder-bottom: 1px solid #333;\n}\ncode {\n\tbackground: #EEE;\n\tborder: 1px solid #CCC;\n\tborder-radius: 5px;\n\tpadding: 3px;\n}"
  },
  {
    "path": "httpd_site/site-content/index.html",
    "content": "<!DOCTYPE html>\n<html>\n<head>\n\t<link rel=\"stylesheet\" href=\"css/style.css\">\n\t<title>Welcome to the site</title>\n</head>\n<body>\n<h1>Howdy!</h1>\n<p>This is an example site to demonstrate fetching a resource as a tarball into\na Nomad job and mounting it to a Docker Container.</p>\n<p>There's an <a href=\"about.html\">About</a> page too, for fun.\n</body>\n</html>"
  },
  {
    "path": "ipv6/SimpleHTTPServer/sample.nomad",
    "content": "# This job will create a SimpleHTTPServer that is IPV6 enabled.  This will allow\n# a user to browse around in an alloc dir.  Not spectacularly useful, but is a \n# reasonable facsimile of a real workload.\njob http6 {\n  datacenters = [\"dc1\"]\n  group \"group\" {\n    count = 1\n\n    task \"server\" {\n      template {\n        data = <<EOH\n#! /usr/bin/python\n\nimport BaseHTTPServer\nimport SimpleHTTPServer\nimport socket\n\n\nclass HTTPServer6(BaseHTTPServer.HTTPServer):\n    address_family = socket.AF_INET6\n\n\nif __name__ == '__main__':\n    SimpleHTTPServer.test(ServerClass=HTTPServer6)\nEOH\n        destination = \"local/files.py\"\n      }\n\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/files.py\"\n        args = [\"${NOMAD_PORT_http}\"]\n      }\n\n      resources { memory = 10 cpu = 50 network { port \"http\" {} }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "java/JavaDriverTest/java-driver-test.nomad",
    "content": "job \"java-driver-test.nomad\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  # constraint {\n  #   attribute = \"${attr.unique.hostname}\"\n  #   operator  = \"=\"\n  #   value     = \"nomad-client-1.node.consul\"\n  # }\n\n  group \"test\" {\n    task \"test\" {\n      artifact {\n        source = \"https://github.com/angrycub/JavaDriverTest/releases/download/v0.0.2/JavaDriverTest.jar\"\n        destination = \"local/\"\n      }\n\n      driver = \"java\"\n\n      config {\n        class = \"JavaDriverTest\"\n        class_path = \"local/JavaDriverTest.jar\"\n      }\n\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "java/JavaDriverTest/test2.nomad",
    "content": "job \"java-driver-test.nomad\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  constraint {\n    attribute = \"${attr.unique.hostname}\"\n    operator  = \"=\"\n    value     = \"nomad-client-1.node.consul\"\n  }\n\n  group \"test\" {\n    task \"test\" {\n      template {\n        destination = \"secrets/env.txt\"\n        env = true\n        data =<<EOT\n{{- $DIVIDER = \":\" }}{{ $FS_SEP = \"/\" -}}\n{{- $TD := env \"NOMAD_TASK_DIR\" -}}\n\n{{- $isWin := eq (env \"attr.kernel.name\") \"windows\" -}}\n{{- if $isWin }}{{ $DIVIDER = \";\" }}{{ $FS_SEP = `\\` }}{{ end -}}\n\n{{- $MEMBRANE_HOME := print $TD $FS_SEP \"membrane-service-proxy-4.7.3\" -}}\n{{- $CP := print $MEMBRANE_HOME $FS_SEP \"conf\" $DIVIDER $MEMBRANE_HOME $FS_SEP \"starter.jar\" -}}\n\n{{- if $isWin -}}\n{{- $CP = replace `\\` `\\\\` $CP -}}\n{{- $MEMBRANE_HOME = replace `\\` `\\\\` $MEMBRANE_HOME -}}\n{{- end -}}\n\nMEMBRANE_HOME=\"{{ $MEMBRANE_HOME }}\"\nCLASSPATH=\"{{ $CP }}\"\nEOT\n      }\n\n      artifact {\n        source = \"https://github.com/angrycub/JavaDriverTest/releases/download/v0.0.2/JavaDriverTest.jar\"\n        destination = \"local/\"\n      }\n\n      driver = \"java\"\n\n      config {\n        class = \"JavaDriverTest\"\n        class_path = \"${CLASSPATH}\"\n      }\n\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "java/README.md",
    "content": "Deploy a WARfile to Nomad\n\n"
  },
  {
    "path": "java/apache_camel/java_files.nomad",
    "content": "job \"java_files\" {\n  datacenters = [\"dc1\"],\n  group \"exec\" {\n    ephemeral_disk {\n      migrate = true\n      size    = \"500\"\n      sticky  = true\n    }\n    task \"graalvm_run_camel\" {\n      driver = \"java\"\n      config {\n        jar_path = \"local/camel-standalone-helloworld-1.0-SNAPSHOT.jar\"\n        jvm_options = [\"-Xmx1024m\", \"-Xms256m\"]\n      }\n      # Location of artifact\n      artifact {\n        source = \"http://nomad-server-1:8888/camel-standalone-helloworld-1.0-SNAPSHOT.jar\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "java/jar-test/README.md",
    "content": "# Nomad Java JAR Sample\n\nThis sample will download a jarfile and use it to count the words in the\ntemplated text file.\n\n```shell-session\n$ nomad run jar-test.nomad\n==> Monitoring evaluation \"b2d818af\"\n    Evaluation triggered by job \"jar-test.nomad\"\n==> Monitoring evaluation \"b2d818af\"\n    Evaluation within deployment: \"a2ba8e63\"\n    Allocation \"6027314e\" created: node \"14ab9290\", group \"java\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"b2d818af\" finished with status \"complete\"\n```\n\n```shell-session\n$ nomad alloc logs 6027314e\nCounted 1515 chars.\n```\n\n## Building the source\n\n```shell-session\n$ javac --source=7 --target=7 -d bin src/Count.java\n$ jar cf jar/Count.jar -C bin .\n```\n\nUpload the jarfile where you like and update the source in the artifact stanza\n"
  },
  {
    "path": "java/jar-test/jar-test.nomad",
    "content": "job \"jar-test.nomad\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  group \"java\" {\n    task \"sample\" {\n\n      artifact {\n        source = \"https://github.com/angrycub/nomad_example_jobs/raw/main/java/jar-test/jar/Count.jar\"\n        destination = \"local/artifact/\" \n        # mode = \"any\"\n        # options {\n        #   archive = false\n        # } \n      }\n\n      template {\n        destination = \"${NOMAD_TASK_DIR}/textfile.text\"\n        data = <<EOT\nLorem ipsum dolor sit amet, consectetur adipiscing elit. Proin molestie massa\nmi, eget pulvinar purus mollis vitae. Maecenas nec eros a dui tincidunt\ncondimentum faucibus sed libero. Vivamus ut iaculis elit. Vivamus egestas\nornare magna in aliquet. Nulla eu massa ac magna molestie dignissim. Nunc\nvenenatis velit at luctus rhoncus. Sed aliquet sit amet ex et rhoncus.\nCurabitur faucibus magna eget eros lobortis, eget hendrerit quam auctor.\nVivamus convallis augue quis purus rhoncus scelerisque. Proin dapibus, libero\nvitae bibendum facilisis, felis libero molestie ipsum, nec auctor ex purus non\neros. Duis varius malesuada augue interdum tincidunt.\n\nMorbi non rutrum mauris, sed tempus elit. Aenean in gravida mi. Mauris sed\nornare quam, in posuere libero. Donec facilisis orci ac diam molestie rutrum.\nInterdum et malesuada fames ac ante ipsum primis in faucibus. Aenean ac nulla\nac mi sollicitudin fringilla interdum eget dui. Maecenas sollicitudin ipsum\nsit amet leo tempor feugiat. Pellentesque feugiat enim et urna sollicitudin\nfermentum. Ut ornare justo quis quam sagittis porta eu quis tellus. Duis vel\norci quis purus facilisis dignissim. Praesent pulvinar egestas nisi, in\nvehicula nunc suscipit at. Nunc vulputate libero eget dictum viverra. Mauris\naugue nisi, sodales vel rutrum vel, tincidunt sit amet sapien. Mauris\nvulputate, ante nec cursus venenatis, lorem tellus consequat tellus, at\naliquam nisl tortor ut sapien. 
Proin pharetra blandit erat pretium lobortis.\nIn sit amet sodales odio.       \nEOT\n      }\n\n      driver = \"java\"\n\n      config {\n        class_path = \"local/artifact/Count.jar\"\n        class      = \"Count\"\n        args       = [\"local/textfile.text\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "java/jar-test/src/Count.java",
    "content": "/*\n * Copyright (c) 1995, 2008, Oracle and/or its affiliates. All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n *\n *   - Redistributions of source code must retain the above copyright\n *     notice, this list of conditions and the following disclaimer.\n *\n *   - Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *\n *   - Neither the name of Oracle or the names of its\n *     contributors may be used to endorse or promote products derived\n *     from this software without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS\n * IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,\n * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\n * PURPOSE ARE DISCLAIMED.  
IN NO EVENT SHALL THE COPYRIGHT OWNER OR\n * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */ \n\nimport java.io.*;\n\npublic class Count {\n    public static void countChars(InputStream in) throws IOException\n    {\n        int count = 0;\n\n        while (in.read() != -1)\n            count++;\n\n        System.out.println(\"Counted \" + count + \" chars.\");\n    }\n\n    public static void main(String[] args) throws Exception\n    {\n        if (args.length >= 1)\n            countChars(new FileInputStream(args[0]));\n        else\n            System.err.println(\"Usage: Count filename\");\n    }\n}\n\n"
  },
  {
    "path": "job_examples/base-batch.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  // because the sample payload terminates, running it as a\n  // `batch` job allows for that without having to sleep loop\n  type = \"batch\"\n  group \"group\" {\n    task \"task\" {\n      driver = \"exec\"\n      config {\n        command = \"env\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "job_examples/meta/README.md",
    "content": "## The `meta` Stanza\n\nThe meta stanza can be used to provide unstructured key-value data to a Nomad job as an automatically-exported environment variable.  This variable can be used as provided or can be used for more complex expressions via the Nomad `template` stanza\n\nDocumentation for the meta stanza can be found [here](https://www.nomadproject.io/docs/job-specification/meta) in the official Nomad documentation.\n"
  },
  {
    "path": "job_examples/meta/meta-batch.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  // because the sample payload terminates, running it as a\n  // `batch` job allows for that without having to sleep loop\n  type = \"batch\"\n\n  meta {\n    \"meta_key_1\" = \"meta_value_1\"\n  }\n\n  group \"group\" {\n    task \"task\" {\n      driver = \"exec\"\n      config {\n        command = \"env\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "json-jobs/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "json-jobs/job.json",
    "content": "{\n    \"Job\": {\n        \"Region\": null,\n        \"Namespace\": null,\n        \"ID\": \"example\",\n        \"Name\": \"example\",\n        \"Type\": null,\n        \"Priority\": null,\n        \"AllAtOnce\": null,\n        \"Datacenters\": [\n            \"dc1\"\n        ],\n        \"Constraints\": null,\n        \"Affinities\": null,\n        \"TaskGroups\": [\n            {\n                \"Name\": \"cache\",\n                \"Count\": null,\n                \"Constraints\": null,\n                \"Affinities\": null,\n                \"Tasks\": [\n                    {\n                        \"Name\": \"redis\",\n                        \"Driver\": \"docker\",\n                        \"User\": \"\",\n                        \"Lifecycle\": null,\n                        \"Config\": {\n                            \"image\": \"redis:7\",\n                            \"ports\": [\n                                \"db\"\n                            ]\n                        },\n                        \"Constraints\": null,\n                        \"Affinities\": null,\n                        \"Env\": null,\n                        \"Services\": null,\n                        \"Resources\": {\n                            \"CPU\": 500,\n                            \"MemoryMB\": 256,\n                            \"DiskMB\": null,\n                            \"Networks\": null,\n                            \"Devices\": null,\n                            \"IOPS\": null\n                        },\n                        \"RestartPolicy\": null,\n                        \"Meta\": null,\n                        \"KillTimeout\": null,\n                        \"LogConfig\": null,\n                        \"Artifacts\": null,\n                        \"Vault\": null,\n                        \"Templates\": null,\n                        \"DispatchPayload\": null,\n                        \"VolumeMounts\": null,\n                        
\"Leader\": false,\n                        \"ShutdownDelay\": 0,\n                        \"KillSignal\": \"\",\n                        \"Kind\": \"\",\n                        \"ScalingPolicies\": null\n                    }\n                ],\n                \"Spreads\": null,\n                \"Volumes\": null,\n                \"RestartPolicy\": null,\n                \"ReschedulePolicy\": null,\n                \"EphemeralDisk\": null,\n                \"Update\": null,\n                \"Migrate\": null,\n                \"Networks\": [\n                    {\n                        \"Mode\": \"\",\n                        \"Device\": \"\",\n                        \"CIDR\": \"\",\n                        \"IP\": \"\",\n                        \"DNS\": null,\n                        \"ReservedPorts\": null,\n                        \"DynamicPorts\": [\n                            {\n                                \"Label\": \"db\",\n                                \"Value\": 0,\n                                \"To\": 6379,\n                                \"HostNetwork\": \"\"\n                            }\n                        ],\n                        \"MBits\": null\n                    }\n                ],\n                \"Meta\": null,\n                \"Services\": null,\n                \"ShutdownDelay\": null,\n                \"StopAfterClientDisconnect\": null,\n                \"Scaling\": null\n            }\n        ],\n        \"Update\": null,\n        \"Multiregion\": null,\n        \"Spreads\": null,\n        \"Periodic\": null,\n        \"ParameterizedJob\": null,\n        \"Reschedule\": null,\n        \"Migrate\": null,\n        \"Meta\": null,\n        \"ConsulToken\": null,\n        \"VaultToken\": null,\n        \"Stop\": null,\n        \"ParentID\": null,\n        \"Dispatched\": false,\n        \"Payload\": null,\n        \"VaultNamespace\": null,\n        \"NomadTokenID\": null,\n        \"Status\": null,\n        
\"StatusDescription\": null,\n        \"Stable\": null,\n        \"Version\": null,\n        \"SubmitTime\": null,\n        \"CreateIndex\": null,\n        \"ModifyIndex\": null,\n        \"JobModifyIndex\": null\n    }\n}\n"
  },
  {
    "path": "load_balancers/traefik/README.md",
    "content": "## Load Balancing with Traefik\n\nThis material is from the HashiCorp [Learn tutorial][].\n\n[learn tutorial]: https://learn.hashicorp.com/nomad/load-balancing/traefik\n"
  },
  {
    "path": "load_balancers/traefik/traefik.nomad",
    "content": "job \"traefik\" {\n  datacenters = [\"dc1\"]\n\n  group \"traefik\" {\n    network {\n      port \"http\" {\n        static = 8080\n      }\n\n      port \"api\" {\n        static = 8081\n      }\n    }\n\n    service {\n      name = \"traefik\"\n\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        port     = \"http\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"traefik\" {\n      driver = \"docker\"\n\n      config {\n        image        = \"traefik:v2.2\"\n        network_mode = \"host\"\n\n        volumes = [\n          \"local/traefik.toml:/etc/traefik/traefik.toml\",\n        ]\n      }\n\n      template {\n        data = <<EOF\n[entryPoints]\n    [entryPoints.http]\n    address = \":8080\"\n    [entryPoints.traefik]\n    address = \":8081\"\n\n[api]\n    dashboard = true\n    insecure  = true\n\n# Enable Consul Catalog configuration backend.\n[providers.consulCatalog]\n    prefix           = \"charlie\"\n    exposedByDefault = false\n\n    [providers.consulCatalog.endpoint]\n      address = \"127.0.0.1:8500\"\n      scheme  = \"http\"\nEOF\n\n        destination = \"local/traefik.toml\"\n      }\n\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "load_balancers/traefik/webapp.nomad",
    "content": "job \"demo-webapp\" {\n  datacenters = [\"dc1\"]\n\n  group \"demo\" {\n    count = 3\n\n    network {\n      port \"http\" {}\n    }\n\n    service {\n      name = \"demo-webapp\"\n      port = \"http\"\n      tags = [\n        \"charlie.enable=true\",\n        \"charlie.http.routers.http.rule=Path(`/myapp`)\",\n      ]\n\n      check {\n        type     = \"http\"\n        path     = \"/\"\n        interval = \"2s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"server\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hashicorp/demo-webapp-lb-guide\"\n      }\n\n      env {\n        PORT    = \"${NOMAD_PORT_http}\"\n        NODE_IP = \"${NOMAD_IP_http}\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "load_balancers/traefik/webapp2.nomad",
    "content": "job \"demo-webapp\" {\n  datacenters = [\"dc1\"]\n\n  group \"demo\" {\n    count = 3\n\n    task \"server\" {\n      env {\n        PORT    = \"${NOMAD_PORT_http}\"\n        NODE_IP = \"${NOMAD_IP_http}\"\n      }\n\n      driver = \"docker\"\n\n      config {\n        image = \"hashicorp/demo-webapp-lb-guide\"\n      }\n\n      resources {\n        network {\n          mbits = 10\n          port  \"http\"{}\n        }\n      }\n\n      service {\n        name = \"demo-webapp\"\n        port = \"http\"\n\n        tags = [\n          \"traefik.enable=true\",\n          \"traefik.http.routers.http.rule=Path(`/myapp`)\",\n        ]\n\n        check {\n          type     = \"http\"\n          path     = \"/\"\n          interval = \"2s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "meta/README.md",
    "content": "## Meta Interpolation\n\nThis example attempts to perform interpolation in the meta stanza, which does not work as of Nomad 0.10.\n\nYou can run this example in your cluster and then check the environment with:\n\n```console\n$ nomad alloc exec «allocation id» /bin/sh -c env\n```\n\nThe goal is for the two variables to contain the same data; however, the meta value passes through unrendered:\n\n```text\nENV_TEST_INTERPOLATION=dc1-meta-stanza-test-job\nTEST_INTERPOLATION={{ env NOMAD_DC }}-{{ env NOMAD_JOB_NAME }}\n```\n"
  },
  {
    "path": "meta/example.nomad",
    "content": "job \"meta-stanza-test-job\" {\n  datacenters = [\"dc1\"]\n\n  meta {\n    TEST_NUMBER = 1\n    TEST_STRING = \"string\"\n    # Interpolation here fails for both node attributes (e.g. ${node.datacenter}) and runtime environment variables\n    TEST_INTERPOLATION = \"{{ env NOMAD_DC }}-{{ env NOMAD_JOB_NAME }}\"\n  }\n\n  group \"meta-stanza-test-group\" {\n    network {\n      port \"http\" {\n        to = 5000\n      }\n    }\n\n    task \"meta-stanza-test-task\" {\n      driver = \"docker\"\n\n      config {\n        image = \"registry:latest\"\n        ports = [\"http\"]\n      }\n\n      env {\n        TEST_NUMBER = \"${NOMAD_META_TEST_NUMBER}\"\n        TEST_STRING = \"${NOMAD_META_TEST_STRING}\"\n        TEST_INTERPOLATION = \"${NOMAD_META_TEST_INTERPOLATION}\"\n        ENV_TEST_INTERPOLATION = \"${NOMAD_DC}-${NOMAD_JOB_NAME}\"\n      }\n    }\n  }\n}"
  },
  {
    "path": "microservice/example.nomad",
    "content": "job \"system\" {\n  datacenters = [\"dc1\"]\n\n  type = \"system\"\n\n  group \"statsd\" {\n    count = 1\n\n    task \"statsd\" {\n      driver = \"docker\"\n\n      env {\n        DD_API_KEY = \"da0840ea1581e9f5c400918e67d3fa83\"\n        DD_DOGSTATSD_NON_LOCAL_TRAFFIC = \"true\"\n      }\n\n      config {\n        image = \"datadog/agent:latest\"\n\n        volumes = [\n          \"/var/run/docker.sock:/var/run/docker.sock:ro\",\n          \"/proc/:/host/proc/:ro\",\n          \"/sys/fs/cgroup/:/host/sys/fs/cgroup:ro\"\n        ]\n\n        port_map {\n          statsd = 8125\n        }\n      }\n\n      resources {\n        cpu    = 100 # 100 MHz\n        memory = 64  # 64MB\n\n        network {\n          mbits = 1\n\n          port \"statsd\" {\n            static = 8125\n          }\n        }\n      }\n    }\n\n    task \"fabio\" {\n      driver = \"docker\"\n\n      env {\n        FABIO_registry_consul_addr = \"${NOMAD_IP_http}:8500\"\n      }\n\n      config {\n        image = \"fabiolb/fabio\"\n\n        port_map {\n          http  = 9999\n          admin = 9998\n        }\n      }\n\n      resources {\n        cpu    = 500 # 500 MHz\n        memory = 256 # 256MB\n\n        network {\n          mbits = 10\n\n          port \"admin\" {\n            static = 9998\n          }\n\n          port \"http\" {\n            static = 80\n          }\n        }\n      }\n\n      service {\n        port = \"admin\"\n        name = \"fabio\"\n        tags = [\"microservice\"]\n\n        check {\n          name     = \"alive\"\n          type     = \"http\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n          path     = \"/health\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "minecraft/minecraft.nomad",
    "content": "job \"minecraft\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"minecraft\" {\n    volume \"minecraft\" {\n      type   = \"host\"\n      source = \"minecraft\"\n    }\n\n    task \"eula\" {\n      driver = \"exec\"\n\n      volume_mount {\n        volume      = \"minecraft\"\n        destination = \"/var/volume\"\n      }\n\n      config {\n        command = \"/bin/sh\"\n        args    = [\"-c\", \"echo 'eula=true' > /var/volume/eula.txt; cp local/server.jar /var/volume\"]\n      }\n\n      artifact {\n        source      = \"https://launcher.mojang.com/v1/objects/bb2b6b1aefcd70dfd1892149ac3a215f6c636b07/server.jar\"\n        destination = \"local\"\n      }\n\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = false\n      }\n    }\n\n    task \"minecraft\" {\n      driver = \"java\"\n\n      config {\n        jar_path    = \"/var/volume/server.jar\"\n        args        = [\"--nogui\"]\n        jvm_options = [\"-Xms1024M\", \"-Xmx2048M\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 500\n      }\n\n      volume_mount {\n        volume      = \"minecraft\"\n        destination = \"/var/volume\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "minecraft/minecraft_exec.nomad",
    "content": "job \"minecraft\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"minecraft\" {\n    volume \"minecraft\" {\n      type   = \"host\"\n      source = \"minecraft\"\n    }\n\n    task \"eula\" {\n      driver = \"exec\"\n\n      volume_mount {\n        volume      = \"minecraft\"\n        destination = \"/var/volume\"\n      }\n\n      config {\n        command = \"/bin/sh\"\n        args    = [\"-c\", \"echo 'eula=true' > /var/volume/eula.txt\"]\n      }\n\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = false\n      }\n    }\n\n    task \"minecraft\" {\n      driver = \"exec\"\n\n      config {\n        command = \"/bin/sh\"\n        args    = [\"-c\", \"cd /var/volume && exec java -Xms1024M -Xmx2048M -jar /var/volume/server.jar --nogui; while true; do sleep 5; done\"]\n      }\n\n      artifact {\n        source      = \"https://launcher.mojang.com/v1/objects/bb2b6b1aefcd70dfd1892149ac3a215f6c636b07/server.jar\"\n        destination = \"/var/volume\"\n      }\n\n      resources {\n        cpu    = 500\n        memory = 500\n      }\n\n      volume_mount {\n        volume      = \"minecraft\"\n        destination = \"/var/volume\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "minecraft/plugin.nomad",
    "content": "job \"csi-plugin\" {\n  datacenters = [\"dc1\"]\n\n  group \"csi\" {\n    task \"plugin\" {\n      driver = \"docker\"\n\n      config {\n        image      = \"quay.io/k8scsi/hostpathplugin:v1.2.0\"\n        privileged = true\n        args       = [\n          \"--drivername=csi-hostpath\",\n          \"--v=5\",\n          \"--endpoint=unix://csi/csi.sock\",\n          \"--nodeid=foo\",\n        ]\n      }\n\n      csi_plugin {\n        id        = \"hostpath-plugin0\"\n        type      = \"monolith\"\n        mount_dir = \"/csi\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "monitoring/sensu/fabio-docker.nomad",
    "content": "job \"fabio\" {\n  datacenters = [\"dc1\"]\n  type        = \"system\"\n\n  update {\n    stagger      = \"5s\"\n    max_parallel = 1\n  }\n\n  group \"fabio\" {\n    network {\n      port \"proxy\" {\n        static = 9999\n        to     = 9999\n      }\n\n      port \"ui\" {\n        static = 9998\n        to     = 9998\n      }\n    }\n\n    task \"fabio-docker\" {\n      driver = \"docker\"\n\n      config {\n        image        = \"fabiolb/fabio:latest\"\n        network_mode = \"host\"\n        ports        = [\"proxy\",\"ui\"]\n      }\n\n      env {\n       # FABIO_registry_consul_addr=\"${attr.unique.network.ip-address}:8500\"\n      }\n\n      resources {\n        cpu    = 200\n        memory = 32\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "monitoring/sensu/sensu.nomad",
    "content": "job \"sensu\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  update {\n    max_parallel      = 1\n    min_healthy_time  = \"10s\"\n    healthy_deadline  = \"3m\"\n    progress_deadline = \"10m\"\n    auto_revert       = false\n    canary            = 0\n  }\n\n  migrate {\n    max_parallel     = 1\n    health_check     = \"checks\"\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"5m\"\n  }\n\n  group \"sensu-backend\" {\n    restart {\n      attempts = 2\n      interval = \"30m\"\n      delay    = \"15s\"\n      mode     = \"fail\"\n    }\n\n    network {\n      port \"web_ui\" {\n        to = 3000\n      }\n\n      port \"api\" {\n        to = 8080\n      }\n\n      port \"ws_api\" {\n        to = 8081\n      }\n    }\n\n    service {\n      name = \"sensu\"\n      tags = [\"ui\", \"urlprefix-/sensu strip=/sensu\"]\n      port = \"web_ui\"\n\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"sensu-docker\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"sensu/sensu:latest\"\n        command = \"sensu-backend\"\n        args    = [\n          \"start\",\n          \"--state-dir\",\n          \"/var/lib/sensu/sensu-backend\",\n          \"--log-level\",\n          \"debug\",\n        ]\n        ports   = [\"web_ui\", \"api\", \"ws_api\"]\n      }\n\n      env {\n        SENSU_BACKEND_CLUSTER_ADMIN_USERNAME = \"sensu_admin\"\n        SENSU_BACKEND_CLUSTER_ADMIN_PASSWORD = \"password\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "nginx-fabio-clone/README.md",
    "content": "# Creating an nginx configuration from fabio-style tagging\n\n### Files\n* foo-service.nomad - the foo job; exercises path stripping\n* bar-service.nomad - the bar job\n* tj.ct - john's template\n* tj.out - john's template, rendered; works, but the output is ugly\n* e.ct - example consul-template template that attempts a fancier approach\n* e.out - the rendered example template\n\n### Render template\n\n```\nconsul-template --template=\"e.ct:e.out\" --once\n```\n"
  },
  {
    "path": "nginx-fabio-clone/bar-service.nomad",
    "content": "job \"bar-service\" {\n  datacenters = [\"dc1\"]\n\n  group \"example\" {\n    count = 3\n\n    network {\n      port \"http\" {}\n    }\n\n    service {\n      name = \"bar-service\"\n      tags = [\"urlprefix-/bar\"]\n      port = \"http\"\n\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args    = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to the Bar Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "nginx-fabio-clone/e.ct",
    "content": "{{- range services -}}\n    {{- $name := .Name -}}\n    {{- $service := service .Name -}}\n    {{- if ne $name \"nginx-wdg-lb-aus\" -}}\n        {{- if ne $name \"nginx-wdg-lb\" }} \nupstream {{$name}} { \n            {{- range $service }} \n  server {{ .Address }}:{{ .Port }} max_fails=3 fail_timeout=60 weight=1; \n            {{- end }}\n}\n        {{- end -}}\n    {{- end -}}\n{{- end }}\n\n{{- range $tag, $services := services | byTag -}}\n  {{- if $tag | regexMatch \"urlprefix-[^:]\" -}}\n    {{- $opts := ($tag | replaceAll \"urlprefix-\" \"urlprefix=\") -}}\n    {{- range $opt := $opts | split \" \" -}}\n      {{- $splitOpt := $opt | split \"=\" -}}\n      {{- scratch.Set (index $splitOpt 0) (index $splitOpt 1) -}}\n    {{- end -}}\nlocation {{ scratch.Get \"urlprefix\" }}\n    {{- if scratch.Key \"strip\" -}}\n        {{- $regex := ( scratch.Get \"strip\" | regexReplaceAll \"(.*)\"  \"^$1\" ) -}}\n        {{- scratch.Set \"urlprefix\" ( scratch.Get \"urlprefix\" | regexReplaceAll $regex \"\" ) -}}\n    rewrite \n    {{- end }}\n    proxy_pass http://{{ scratch.Get \"urlprefix\" }}\n  {{ end -}}\n{{- end -}}"
  },
  {
    "path": "nginx-fabio-clone/e.out",
    "content": "   \nupstream bar-service { \n server 10.0.0.172:31815 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.108:24839 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.128:31970 max_fails=3 fail_timeout=60 weight=1; \n}    \nupstream consul { \n server 10.0.0.52:8300 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.132:8300 max_fails=3 fail_timeout=60 weight=1; \n}    \nupstream foo-service { \n server 10.0.0.172:24438 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.108:25861 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.128:24545 max_fails=3 fail_timeout=60 weight=1; \n}    \nupstream foo-service-2 { \n server 10.0.0.172:24438 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.108:25861 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.128:24545 max_fails=3 fail_timeout=60 weight=1; \n}    \nupstream nomad { \n server 10.0.0.89:4647 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.89:4648 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.89:4646 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.208:4646 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.208:4647 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.208:4648 max_fails=3 fail_timeout=60 weight=1; \n}    \nupstream nomad-client { \n server 10.0.0.172:4646 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.108:4646 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.128:4646 max_fails=3 fail_timeout=60 weight=1; \n} \n\n------------- \nupstream bar-service { \n  server 10.0.0.172:31815 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.108:24839 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.128:31970 max_fails=3 fail_timeout=60 weight=1;\n} \nupstream consul { \n  server 10.0.0.52:8300 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.132:8300 max_fails=3 fail_timeout=60 weight=1;\n} \nupstream foo-service { \n  server 10.0.0.172:24438 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.108:25861 
max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.128:24545 max_fails=3 fail_timeout=60 weight=1;\n} \nupstream foo-service-2 { \n  server 10.0.0.172:24438 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.108:25861 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.128:24545 max_fails=3 fail_timeout=60 weight=1;\n} \nupstream nomad { \n  server 10.0.0.89:4647 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.89:4648 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.89:4646 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.208:4646 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.208:4647 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.208:4648 max_fails=3 fail_timeout=60 weight=1;\n} \nupstream nomad-client { \n  server 10.0.0.172:4646 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.108:4646 max_fails=3 fail_timeout=60 weight=1; \n  server 10.0.0.128:4646 max_fails=3 fail_timeout=60 weight=1;\n}\n"
  },
  {
    "path": "nginx-fabio-clone/example.nomad",
    "content": "job \"nginx\" {\n  datacenters = [\"dc1\"]\n\n  update {\n    max_parallel = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"3m\"\n    auto_revert = false\n    canary = 0\n  }\n\n  group \"group\" {\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay = \"25s\"\n      mode = \"delay\"\n    }\n\n    network {\n      port \"http\" {\n        to = 80\n      }\n    }\n\n    service {\n      name = \"nginx\"\n      tags = [\"lb\"]\n      port = \"http\"\n\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"nginx_docker\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"nginx:1.13.11\"\n        volumes = [\"local/nginx.conf:/etc/nginx/conf.d/default.conf\"]\n        ports   = [\"http\"]\n      }\n\n      template {\n        destination   = \"local/nginx.conf\"\n        change_mode   = \"signal\"\n        change_signal = \"SIGHUP\"\n        data = <<EOH\n{{ range $tag, $services := services | byTag }}{{ if $tag | regexMatch \"urlprefix-[^:]\" }}{{ range $services }} {{ $name := .Name }} {{ $service := service .Name }}\nupstream {{ $name }} {\n  zone upstream-{{$name}} 64k;\n  {{range $service}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;\n  {{else}}server 127.0.0.1:65535; # force a 502{{end}}\n} {{end}}{{end}}{{end}}\n\nserver {\n  listen 80 default_server;\n\n  location / {\n    root /usr/share/nginx/html/;\n    index index.html;\n  }\n\n  location /stub_status {\n    stub_status;\n  }\n\n{{ range $tag, $services := services | byTag }}{{ if $tag | regexMatch \"urlprefix-[^:]\" }}{{ $path := $tag | replaceAll \"urlprefix-\" \"\" }}{{ range $services }}{{with service .Name}}{{ with index . 0}}\n  location {{$path}} {\n    proxy_pass http://{{.Name}};\n  }\n{{end}}{{end}}{{end}}{{end}}{{end}}\n}\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "nginx-fabio-clone/foo-service.nomad",
    "content": "job \"foo-service\" {\n  datacenters = [\"dc1\"]\n\n  meta {\n    foo-service = \"true\"\n  }\n\n  group \"example\" {\n    count = 3\n\n    network {\n      port \"http\" {}\n    }\n\n    service {\n      name = \"foo-service\"\n      tags = [\"urlprefix-/foo\"]\n      port = \"http\"\n\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    service {\n      name = \"foo-service-2\"\n      tags = [\"urlprefix-/foo/foo2 strip=/foo\"]\n      port = \"http\"\n\n      check {\n        type = \"http\"\n        name = \"health-check\"\n        interval = \"15s\"\n        timeout = \"5s\"\n        path = \"/\"\n      }\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args    = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to the Foo Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\"\n\n        options {\n          checksum = \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "nginx-fabio-clone/tj.ct",
    "content": "{{range services}} {{$name := .Name}} {{$service := service .Name}}{{if ne $name \"nginx-wdg-lb-aus\"}}{{if ne $name \"nginx-wdg-lb\"}} \nupstream {{$name}} { \n{{range $service}} server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1; \n{{end}}} {{end}}{{end}}{{end}}\n\nserver { \nlisten 80;\n\n  location / { \n    root /usr/share/nginx/html/; \n    index index.html; \n  }\n\n  location /status { \n    stub_status; \n  }\n\n{{range $services := services}} {{$name := .Name}}{{range $s_index, $service := service $name}}{{if eq $s_index 0}}{{range $tags := .Tags}}{{$portmap := . | regexMatch \"urlprefix-:\"}}{{if not $portmap}}{{if . | regexMatch \"urlprefix-\"}} \n  location {{$tags | regexReplaceAll \"urlprefix-\" \"\" | regexReplaceAll \"strip=.*$\" \"\"}} { \n    rewrite {{ $tags | regexReplaceAll \"urlprefix-\" \"\" | regexReplaceAll \"\\\\s*strip\\\\s*=.*\\\\s*$\" \"\" }}/(.*)$ /$1 break; \n    proxy_pass http://{{$name}}; \n  }{{end}}{{end}}{{end}}{{end}}{{end}}{{end}} \n}\n"
  },
  {
    "path": "nginx-fabio-clone/tj.out",
    "content": "   \nupstream bar-service { \n server 10.0.0.172:31815 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.108:24839 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.128:31970 max_fails=3 fail_timeout=60 weight=1; \n}    \nupstream consul { \n server 10.0.0.52:8300 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.132:8300 max_fails=3 fail_timeout=60 weight=1; \n}    \nupstream foo-service { \n server 10.0.0.172:24438 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.108:25861 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.128:24545 max_fails=3 fail_timeout=60 weight=1; \n}    \nupstream foo-service-2 { \n server 10.0.0.172:24438 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.108:25861 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.128:24545 max_fails=3 fail_timeout=60 weight=1; \n}    \nupstream nomad { \n server 10.0.0.89:4647 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.89:4648 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.89:4646 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.208:4646 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.208:4647 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.208:4648 max_fails=3 fail_timeout=60 weight=1; \n}    \nupstream nomad-client { \n server 10.0.0.172:4646 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.108:4646 max_fails=3 fail_timeout=60 weight=1; \n server 10.0.0.128:4646 max_fails=3 fail_timeout=60 weight=1; \n} \n\nserver { \nlisten 80;\n\n  location / { \n    root /usr/share/nginx/html/; \n    index index.html; \n  }\n\n  location /status { \n    stub_status; \n  }\n\n  \n  location /bar { \n    rewrite /bar/(.*)$ /$1 break; \n    proxy_pass http://bar-service; \n  }   \n  location /foo { \n    rewrite /foo/(.*)$ /$1 break; \n    proxy_pass http://foo-service; \n  }  \n  location /foo/foo2  { \n    rewrite /foo/foo2/(.*)$ /$1 break; \n    proxy_pass http://foo-service-2; \n  }   \n}\n"
  },
  {
    "path": "oom/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"redis:7\"\n        ports          = [\"db\"]\n        auth_soft_fail = true\n      }\n\n      resources {\n        memory = 10\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "output.html",
    "content": "<html><head><title>Nomad Job Tester Output</title>\n<style>\nbody {\n  font-family: Helvetica, sans-serif;\n}\n.out {\n    white-space: pre-wrap;\n}\n</style>\n<link rel=\"stylesheet\" type=\"text/css\" href=\"https://cdn.datatables.net/1.12.1/css/jquery.dataTables.css\">\n<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.6.1/jquery.min.js\"></script>\n<script src=\"https://cdn.datatables.net/1.12.1/js/jquery.dataTables.js\"></script>\n</head>\n<body>\n<table border=\"1\" width=\"100%\" id=\"results\">\n<thead><tr><th></th><th>Filename</th><th>Output</th></tr></thead>\n<tbody>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./HCL2/add_local_file/raw_file_b64.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/add_local_file/raw_file_b64.nomad:\n<nil>: Unset variable \"input_file\"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./HCL2/add_local_file/raw_file_delims.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/add_local_file/raw_file_delims.nomad:\n<nil>: Unset variable \"input_file\"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./HCL2/add_local_file/raw_file_json.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/add_local_file/raw_file_json.nomad:\n<nil>: Unset variable \"input_file\"; A used variable must be set or have a default value; see 
https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./HCL2/add_local_file/use_file.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/add_local_file/use_file.nomad:\n<nil>: Unset variable \"input_file\"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./HCL2/always_change/before.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./HCL2/always_change/uuid.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./HCL2/always_change/variable.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/always_change/variable.nomad:\n<nil>: Unset variable \"run_index\"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./HCL2/dynamic/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./HCL2/object_to_template/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./HCL2/variable_jobs/decode-external-file/job1.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/variable_jobs/decode-external-file/job1.nomad:\n<nil>: Unset variable \"config_file\"; A used variable must be set or have a default value; see 
https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./HCL2/variable_jobs/decode-external-file/job2.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/variable_jobs/decode-external-file/job2.nomad:
<nil>: Unset variable "config_file"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./HCL2/variable_jobs/env-vars/job1.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/variable_jobs/env-vars/job1.nomad:
<nil>: Unset variable "datacenters"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.
<nil>: Unset variable "docker_image_job1"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./HCL2/variable_jobs/env-vars/job2.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/variable_jobs/env-vars/job2.nomad:
<nil>: Unset variable "datacenters"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.
<nil>: Unset variable "docker_image_job2"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./HCL2/variable_jobs/job.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/variable_jobs/job.nomad:
<nil>: Unset variable "docker_image"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.
<nil>: Unset variable "image_version"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./HCL2/variable_jobs/multiple-var-files/job1.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/variable_jobs/multiple-var-files/job1.nomad:
<nil>: Unset variable "datacenters"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.
<nil>: Unset variable "docker_image"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.
<nil>: Unset variable "image_version_job1"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./HCL2/variable_jobs/multiple-var-files/job2.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/variable_jobs/multiple-var-files/job2.nomad:
<nil>: Unset variable "datacenters"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.
<nil>: Unset variable "docker_image"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.
<nil>: Unset variable "image_version_job2"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./HCL2/variable_jobs/multiple-var-files/job3.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./HCL2/variable_jobs/multiple-var-files/job3.nomad:
<nil>: Unset variable "docker_image"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.
<nil>: Unset variable "image_version_job3"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.
<nil>: Unset variable "datacenters"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./alloc_folder/mount_alloc.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./alloc_folder/sidecar.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/artifactory_oss/registry.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/cluster-broccoli/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/docker_registry/registry.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/docker_registry_v2/registry.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/docker_registry_v3/registry.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/mariadb/mariadb.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/membrane-soa/soap-proxy-v1-linux.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/membrane-soa/soap-proxy-v1-windows.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/membrane-soa/soap-proxy.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/minio/minio.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/minio/secure-variables/minio.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/postgres/postgres.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/prometheus/fabio-service.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/prometheus/node-exporter.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/prometheus/prometheus.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/vms/freedos/freedos.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/vms/tinycore/tc_ssh.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./applications/wordpress/distributed/build-site.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./applications/wordpress/distributed/build-site.nomad:
build-site.nomad:22,5-6: Invalid character; This character is not used within the language., and 15 other diagnostic(s)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/wordpress/distributed/nginx.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/wordpress/distributed/wordpress-db.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./applications/wordpress/distributed/wordpress.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./applications/wordpress/distributed/wordpress.nomad:
<nil>: Unset variable "site_name"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./applications/wordpress/simple/wordpress.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./artifact_sleepyecho/artifact_sleepyecho.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./artifact_sleepyecho/vault_sleepyecho.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/batch_gc/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy1.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy10.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy2.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy3.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy4.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy5.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy6.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy7.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy8.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dispatch/sleepy9.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/dont_restart_fail/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/lost_batch/batch.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/lost_batch/periodic.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/periodic/prohibit-overlap.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch/periodic/template.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./batch/spread_batch/example.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
example.nomad:6,5-6: Invalid argument name; Argument names must not be quoted.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./batch/spread_batch/example2.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
example2.nomad:6,5-6: Invalid argument name; Argument names must not be quoted.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch_overload/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./batch_overload/periodic.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./blocked_eval/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./cni/diy_brige/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./cni/diy_brige/repro.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./cni/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./complex_meta/template_env.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./complex_meta/template_meta.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
template_meta.nomad:14,7-8: Invalid argument name; Argument names must not be quoted.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./connect/consul.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./connect/discuss/job.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./connect/discuss/job.nomad:
<nil>: Unset variable "config_data"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./connect/dns-via-mesh/consul-dns.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./connect/dns-via-mesh/consul-dns2.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./connect/ingress_gateways/ingress_gateway.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./connect/native/cn-demo.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./connect/nginx_ingress/countdash.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./connect/nginx_ingress/ingress.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./connect/sidecar/countdash.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./connect/sidecar/countdash2.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./consul/add_check/e1.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./consul/add_check/e1.nomad:
e1.nomad:21,1-1: Argument or block definition required; An argument or block definition is required here.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./consul/add_check/e2.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./consul/add_check/e3.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./consul/use_consul_for_kv_path/template.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./consul-template/coordination/sample.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
sample.nomad:19,24-25: Extra characters after interpolation expression; Expected a closing brace to end the interpolation expression, but found extra characters.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./consul-template/missing_vault_value/sample.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./consul-template/my_first_kv/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./countdash/connect/countdash.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./countdash/simple/countdash.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/aws/ebs/busybox.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error during plan: Unexpected response code: 500 (1 error occurred:
	* Task group mysql validation failed: 1 error occurred:
	* Task group volume validation for mysql failed: 2 errors occurred:
	* CSI volumes must have an attachment mode
	* CSI volumes must have an access mode)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/aws/ebs/mysql-server.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error during plan: Unexpected response code: 500 (1 error occurred:
	* Task group mysql-server validation failed: 1 error occurred:
	* Task group volume validation for mysql failed: 2 errors occurred:
	* CSI volumes must have an attachment mode
	* CSI volumes must have an access mode)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./csi/aws/ebs/plugin-ebs-controller.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./csi/aws/ebs/plugin-ebs-nodes.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/aws/efs/busybox.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error during plan: Unexpected response code: 500 (1 error occurred:
	* Task group group validation failed: 1 error occurred:
	* Task group volume validation for jobVolume failed: 2 errors occurred:
	* CSI volumes must have an attachment mode
	* CSI volumes must have an access mode)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./csi/aws/efs/node.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/gcp/gce-pd/config.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./csi/gcp/gce-pd/config.nomad:
config.nomad:1,1-7: Unsupported block type; Blocks of type "plugin" are not expected here.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/gcp/gce-pd/controller.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
controller.nomad:13,12-13: Invalid argument name; Argument names must not be quoted.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/gcp/gce-pd/job.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error during plan: Unexpected response code: 500 (1 error occurred:
	* Task group alloc validation failed: 1 error occurred:
	* Task group volume validation for jobVolume failed: 2 errors occurred:
	* CSI volumes must have an attachment mode
	* CSI volumes must have an access mode)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/gcp/gce-pd/nodes.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
nodes.nomad:13,13-14: Argument or block definition required; An argument or block definition is required here.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/hetzner/volume/config.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./csi/hetzner/volume/config.nomad:
config.nomad:1,1-7: Unsupported block type; Blocks of type "plugin" are not expected here.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/hetzner/volume/job.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error during plan: Unexpected response code: 500 (1 error occurred:
	* Task group alloc validation failed: 1 error occurred:
	* Task group volume validation for jobVolume failed: 2 errors occurred:
	* CSI volumes must have an attachment mode
	* CSI volumes must have an access mode)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./csi/hetzner/volume/node.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./csi/hostpath/block/csi-hostpath-driver.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/hostpath/block/job.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error during plan: Unexpected response code: 500 (1 error occurred:
	* Task group alloc validation failed: 1 error occurred:
	* Task group volume validation for jobVolume failed: 2 errors occurred:
	* CSI volumes must have an attachment mode
	* CSI volumes must have an access mode)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./csi/hostpath/file/csi-hostpath-driver.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./csi/hostpath/file/job.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error during plan: Unexpected response code: 500 (1 error occurred:
	* Task group alloc validation failed: 1 error occurred:
	* Task group volume validation for jobVolume failed: 2 errors occurred:
	* CSI volumes must have an attachment mode
	* CSI volumes must have an access mode)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./deployments/failing_deployment/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/auth_from_template/auth.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/datadog/container_network.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
container_network.nomad:8,51-52: Unexpected comma after argument; Argument definitions must be separated by newlines, not commas. An argument definition must end with a newline.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/datadog/ex3.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
ex3.nomad:8,51-52: Unexpected comma after argument; Argument definitions must be separated by newlines, not commas. An argument definition must end with a newline.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/datadog/example2.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
example2.nomad:8,51-52: Unexpected comma after argument; Argument definitions must be separated by newlines, not commas. An argument definition must end with a newline.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/docker+host_volume/task_deps.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
task_deps.nomad:25,26-32: Invalid single-argument block definition; A single-line block definition must end with a closing brace immediately after its single argument definition., and 1 other diagnostic(s)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/docker+host_volume/unsafe.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
unsafe.nomad:26,1-27,1: Invalid single-argument block definition; An argument definition on the same line as its containing block creates a single-line block definition, which must also be closed on the same line. Place the block's closing brace immediately after the argument definition., and 1 other diagnostic(s)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/docker_dynamic_hostname/finished.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/docker_entrypoint/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/docker_image_not_found/reschedule.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/docker_image_not_found/restart.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
restart.nomad:4,5-6: Invalid argument name; Argument names must not be quoted.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/docker_interpolated_image_name/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/docker_interpolated_image_name/hostname.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/docker_logging/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/docker_mac_address/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/docker_network/example1.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error during plan: Unexpected response code: 500 (1 error occurred:
	* Task group cache validation failed: 1 error occurred:
	* Task 2 redefines 'redis' from task 1)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/docker_network/example2.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/docker_nfs/example.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
example.nomad:15,28-29: Missing key/value separator; Expected an equals sign ("=") to mark the beginning of the attribute value., and 2 other diagnostic(s)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/docker_template/example.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
example.nomad:22,28-29: Missing key/value separator; Expected an equals sign ("=") to mark the beginning of the attribute value., and 1 other diagnostic(s)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/docker_twice_in_alloc/example.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
example.nomad:10,19-28: Argument definition required; A single-line block definition can contain only a single argument. If you meant to define argument "network", use an equals sign to assign it a value. To define a nested block, place it on a line of its own within its parent block., and 1 other diagnostic(s)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/docker_windows_abs_mount/repro.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/env_var_args/start.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/env_var_args/test.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/get_fact_from_consul/args.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/get_fact_from_consul/image.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/host-volumes-and-users/scratch.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./docker/host-volumes-and-users/scratch.nomad:
scratch.nomad:17,7-12: Unsupported argument; An argument named "group" is not expected here.
scratch.nomad:34,7-12: Unsupported argument; An argument named "group" is not expected here.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/labels/heredoc.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
heredoc.nomad:15,11-14: Argument or block definition required; An argument or block definition is required here. To set an argument, use the equals sign "=" to introduce the argument value.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/labels/interpolation.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
interpolation.nomad:32,11-14: Argument or block definition required; An argument or block definition is required here. To set an argument, use the equals sign "=" to introduce the argument value.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./docker/labels/literal.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
literal.nomad:11,11-12: Invalid argument name; Argument names must not be quoted.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./docker/mount_alloc/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./drain/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./dummy/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./echo_stack/fabio-system.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./echo_stack/login-service.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./echo_stack/profile-service.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./env/escaped_env_vars/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./environment/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./exec/host-volumes-and-users/scratch.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./exec-zip/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./fabio/fabio-docker.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./fabio/fabio-service.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./fabio/fabio-system.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./fabio-ssl/fabio-ssl.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./failing_jobs/failing_sidecar/example.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
example.nomad:11,19-28: Argument definition required; A single-line block definition can contain only a single argument. If you meant to define argument "network", use an equals sign to assign it a value. To define a nested block, place it on a line of its own within its parent block.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./failing_jobs/impossible_constratint/example.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./giant/example.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
example.nomad:5,35-41: Invalid single-argument block definition; A single-line block definition must end with a closing brace immediately after its single argument definition.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./host_volume/mariadb/mariadb.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
mariadb.nomad:5,35-41: Invalid single-argument block definition; A single-line block definition must end with a closing brace immediately after its single argument definition.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./host_volume/prometheus/prometheus.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
prometheus.nomad:12,39-45: Invalid single-argument block definition; A single-line block definition must end with a closing brace immediately after its single argument definition., and 2 other diagnostic(s)</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./host_volume/read_only/read_only.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
read_only.nomad:5,34-40: Invalid single-argument block definition; A single-line block definition must end with a closing brace immediately after its single argument definition.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./http_echo/arm-service.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./http_echo/bar-service.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./http_echo/car-service-broken-check.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./http_echo/foo-service.deployment.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
foo-service.deployment.nomad:11,7-8: Invalid argument name; Argument names must not be quoted.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./http_echo/foo-service.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./http_echo/foo-test.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./http_echo/template/echo_template.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./http_echo/template/ets.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./http_echo/template/ets2.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./http_echo/template/ets3.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./httpd_site/httpd.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./ipv6/SimpleHTTPServer/sample.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
sample.nomad:36,31-34: Invalid single-argument block definition; A single-line block definition must end with a closing brace immediately after its single argument definition.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./java/JavaDriverTest/java-driver-test.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./java/JavaDriverTest/test2.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./java/apache_camel/java_files.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:
java_files.nomad:2,24-25: Unexpected comma after argument; Argument definitions must be separated by newlines, not commas. An argument definition must end with a newline.</code></pre></details></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./java/jar-test/jar-test.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">✅</td><td width="25%">./job_examples/base-batch.nomad</td><td></td></tr>
<tr><td style="width: 2em;" align="center">🔴</td><td width="25%">./job_examples/meta/meta-batch.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. 
Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nmeta-batch.nomad:8,5-6: Invalid argument name; Argument names must not be quoted.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./json-jobs/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./load_balancers/traefik/traefik.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./load_balancers/traefik/webapp.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🟡</td><td width=\"25%\">./load_balancers/traefik/webapp2.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Job Warnings:\n1 warning(s):\n* Group \"demo\" has warnings: 1 error occurred:\n\t* 1 error occurred:\n\t* Task \"server\": task network resources have been deprecated as of Nomad 0.12.0. Please configure networking via group network block.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./meta/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./microservice/example.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nexample.nomad:49,9-17: Argument or block definition required; An argument or block definition is required here. To set an argument, use the equals sign \"=\" to introduce the argument value.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./minecraft/minecraft.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. 
Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nminecraft.nomad:40,30-31: Unexpected comma after argument; Argument definitions must be separated by newlines, not commas. An argument definition must end with a newline.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./minecraft/minecraft_exec.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./minecraft/plugin.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./monitoring/sensu/fabio-docker.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./monitoring/sensu/sensu.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./nginx-fabio-clone/bar-service.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./nginx-fabio-clone/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./nginx-fabio-clone/foo-service.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./oom/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./parameterized/docker_hello_world/hello-world.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./parameterized/template.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./parameterized/to_specific_client/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./parameterized/to_specific_client/workaround/example.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./parameterized/to_specific_client/workaround/example.nomad:\n<nil>: 
Unset variable \"node_id\"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./ports/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./qemu/hass/hass.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nhass.nomad:24,26-27: Unexpected comma after argument; Argument definitions must be separated by newlines, not commas. An argument definition must end with a newline.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./qemu/tc_ssh.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./qemu/tc_ssh2.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./qemu/tc_ssh_arm.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./raw_exec/env.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./raw_exec/mkdir/mkdir-bash.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./raw_exec/mkdir/mkdir.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./raw_exec/ps.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./raw_exec/quoted_args/quoted_args.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./raw_exec/quoted_args/quoted_args_2.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td 
width=\"25%\">./raw_exec/user/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./reproductions/cpu_rescheduling/repro.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./reschedule/ex.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./restart/restart.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nrestart.nomad:32,13-14: Missing key/value separator; Expected an equals sign (\"=\") to mark the beginning of the attribute value.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./rolling_upgrade/cv-new.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./rolling_upgrade/cv.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./rolling_upgrade/example-new.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./rolling_upgrade/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./sentinel/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./sentinel/exampleGroupMissingNodeClass.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. 
Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nexampleGroupMissingNodeClass.nomad:24,46-51: Invalid single-argument block definition; A single-line block definition must end with a closing brace immediately after its single argument definition.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./sentinel/exampleGroupNodeClass.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nexampleGroupNodeClass.nomad:7,47-52: Invalid single-argument block definition; A single-line block definition must end with a closing brace immediately after its single argument definition.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./sentinel/exampleJobNodeClass.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./sentinel/exampleNoNodeClass.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./server-variables/build-site.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./server-variables/build-site.nomad:\nbuild-site.nomad:22,5-6: Invalid character; This character is not used within the language., and 15 other diagnostic(s)</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./server-variables/nginx.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./server-variables/wordpress-db.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./server-variables/wordpress.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file 
from ./server-variables/wordpress.nomad:\n<nil>: Unset variable \"site_name\"; A used variable must be set or have a default value; see https://www.nomadproject.io/docs/job-specification/hcl2/variables for details.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./sleepy/sleepy_bash/sleepy.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🟡</td><td width=\"25%\">./sleepy/sleepy_python/batch_sleepy_python.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Job Warnings:\n1 warning(s):\n* Group \"group\" has warnings: 1 error occurred:\n\t* 1 error occurred:\n\t* Task \"python\": task network resources have been deprecated as of Nomad 0.12.0. Please configure networking via group network block.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./sleepy/sleepy_python/sleepy_python.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./spread/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🟡</td><td width=\"25%\">./stress/cpu_throttled_time/stress.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Job Warnings:\n1 warning(s):\n* Group \"cache\" has warnings: 1 error occurred:\n\t* 1 error occurred:\n\t* Task \"redis\": task network resources have been deprecated as of Nomad 0.12.0. 
Please configure networking via group network block.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./super_big/super_big.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🟡</td><td width=\"25%\">./super_big/super_big2.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Job Warnings:\n1 warning(s):\n* Group \"cache\" has warnings: 1 error occurred:\n\t* 1 error occurred:\n\t* Task \"redis\": task network resources have been deprecated as of Nomad 0.12.0. Please configure networking via group network block.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./system_jobs/sleepy/sleepy_bash/sleepy.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nsleepy.nomad:18,24-25: Extra characters after interpolation expression; Expected a closing brace to end the interpolation expression, but found extra characters.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🟡</td><td width=\"25%\">./system_jobs/sleepy/sleepy_python/batch_sleepy_python.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Job Warnings:\n1 warning(s):\n* Group \"group\" has warnings: 1 error occurred:\n\t* 1 error occurred:\n\t* Task \"python\": task network resources have been deprecated as of Nomad 0.12.0. 
Please configure networking via group network block.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./system_jobs/sleepy/sleepy_python/sleepy_python.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./system_jobs/system_deployment/deploy_jdk.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🟡</td><td width=\"25%\">./system_jobs/system_deployment/fabio-system.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Job Warnings:\n1 warning(s):\n* Group \"linux-amd64\" has warnings: 1 error occurred:\n\t* 1 error occurred:\n\t* Task \"fabio\": task network resources have been deprecated as of Nomad 0.12.0. Please configure networking via group network block.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🟡</td><td width=\"25%\">./system_jobs/system_deployment/foo-system.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Job Warnings:\n1 warning(s):\n* Group \"example\" has warnings: 1 error occurred:\n\t* 1 error occurred:\n\t* Task \"server\": task network resources have been deprecated as of Nomad 0.12.0. 
Please configure networking via group network block.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./system_jobs/system_filter/filtered.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./system_jobs/system_filter/host_vol.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./task_deps/consul-lock/myapp.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./task_deps/disk_check/disk.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./task_deps/init_artifact/batch-init-artifact.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nbatch-init-artifact.nomad:38,24-25: Extra characters after interpolation expression; Expected a closing brace to end the interpolation expression, but found extra characters.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./task_deps/init_artifact/service-init-artifact.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. 
Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nservice-init-artifact.nomad:38,24-25: Extra characters after interpolation expression; Expected a closing brace to end the interpolation expression, but found extra characters.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./task_deps/interjob/myapp.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./task_deps/interjob/myservice.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./task_deps/k8sdoc/init.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./task_deps/k8sdoc/k8sdoc1.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./task_deps/k8sdoc/myapp.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./task_deps/k8sdoc/myservice.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./task_deps/sidecar/example.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./task_deps/sidecar/example.nomad:\nexample.nomad:56,1-2: Argument or block definition required; An argument or block definition is required here.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/batch/context.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/batch/parameter.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./template/batch/services.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. 
Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nservices.nomad:1,40-2,1: Invalid single-argument block definition; An argument definition on the same line as its containing block creates a single-line block definition, which must also be closed on the same line. Place the block's closing brace immediately after the argument definition.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/batch/template.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/from_consul/artifact.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./template/from_consul/init.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./template/from_consul/init.nomad:\n<nil>: Missing job block; A job block is required</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/from_consul/issue.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/rerender/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/secure_variables/example.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/secure_variables/multiregion/template.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/secure_variables/template-playground.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./template/secure_variables/variable_view.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Error parsing job file from ./template/secure_variables/variable_view.nomad:\nvariable_view.nomad:44,16-21: 
Error in function call; Call to function \"file\" failed: no file exists at template.tmpl.\nvariable_view.nomad:44,16-21: Unsuitable value type; Unsuitable value: value must be known</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./template/services/byTag.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nbyTag.nomad:7,19-28: Argument definition required; A single-line block definition can contain only a single argument. If you meant to define argument \"network\", use an equals sign to assign it a value. To define a nested block, place it on a line of its own within its parent block.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./template/template-system/composed_keys.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\ncomposed_keys.nomad:7,19-28: Argument definition required; A single-line block definition can contain only a single argument. If you meant to define argument \"network\", use an equals sign to assign it a value. To define a nested block, place it on a line of its own within its parent block.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./template/template-system/services-on-nomad-client.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. 
Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nservices-on-nomad-client.nomad:6,30-33: Invalid single-argument block definition; A single-line block definition must end with a closing brace immediately after its single argument definition.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./template/template-system/template.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\ntemplate.nomad:6,30-33: Invalid single-argument block definition; A single-line block definition must end with a closing brace immediately after its single argument definition.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/template_handoff/handoff.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/template_handoff/handoff_restart.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">🔴</td><td width=\"25%\">./template/template_into_docker/example.nomad</td><td><details><summary>Show Output</summary><pre class=out><code>Error getting job struct: Failed to parse using HCL 2. 
Use the HCL 1 parser with `nomad run -hcl1`, or address the following issues:\nexample.nomad:23,26-27: Missing key/value separator; Expected an equals sign (\"=\") to mark the beginning of the attribute value.</code></pre></details></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/template_playground/composed_keys.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/template_playground/template-exec.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/template_playground/template-hcl2.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/template_playground/template.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./template/use_whitespace/byTag.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./vault/deleted_policy/temp1.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./vault/deleted_policy/workload.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./vault/pki/sleepy_bash_pki.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./vault/pki/test.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./vault/sleepy_vault_bash/sleepy_bash.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./vault/sleepy_vault_bash/test.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./vault_reload_triggered_by_consul/sample.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./victoriametrics/vm.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" 
align=\"center\">✅</td><td width=\"25%\">./win_rawexec_restart/artifact_sleepyecho.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./windows_docker/docker-iis.nomad</td><td></td></tr>\n<tr><td style=\"width: 2em;\" align=\"center\">✅</td><td width=\"25%\">./windows_docker/windows-test.nomad</td><td></td></tr>\n</tbody>\n</table>\n<script>\n$(document).ready( function () {\n    $('#results').DataTable({\n      paging: false\n    });\n} );\n</script>\n"
  },
  {
    "path": "parameterized/README.md",
    "content": "---\nname: Parameterized Jobs on Nomad\nproducts_used:\n  - nomad\ndescription: |-\n  Short description about what the reader will do/learn. Limit 250 characters; include keyword for SEO.\n\n---\n\n# Parameterized Jobs on Nomad\n\nParameterized Nomad jobs encapsulate a set of work that can be carried out on various input values. \n\nJobs with the parameterized stanza register themselves to the cluster, but they do not run immediately.\n\nYou must \"dispatch\" the job with the necessary values to run them.\n\nYou dispatch a parameterized job using the `nomad job dispatch` command or the Nomad Job Dispatch API.\n\nWhile dispatching the job, you can supply an opaque payload and metadata variables to customize the dispatched instance of the job.\n\n## The `parameterized` stanza\n\n```json\n  parameterized {\n    payload       = \"required\"\n    meta_required = [\"dispatcher_email\"]\n    meta_optional = [\"pager_email\"]\n  }\n```\n\n## Challenge\n\nIn this tutorial, you will take a simple Nomad template job, enhance it with parameters, and dispatch it to your cluster. These basic practices can be used to create more complex batch workloads over time.\n\n## Prerequisites\n\n- Nomad dev agent\n- Nomad cluster\n  - You need either to have the `raw_exec` task driver enabled or to convert the job to use the `exec` driver.\n\n## Build a basic batch job\n\nCreate a file named `template.nomad`. 
Open it in a text editor and add the following minimal job specification.\n\n```hcl\njob \"«job_name»\" {\n  datacenters = [\"«datacenter»\"]\n\n  group \"«group_name»\" {\n    task \"«task_name»\" {\n      driver = \"«driver_name»\"\n    }\n  }\n}\n```\n\n### Populate the template placeholders\n\nFor this tutorial, replace the placeholders in the minimal job template with these values.\n\n- **«job_name»** - `template`\n- **«group_name»** - `renderer`\n- **«task_name»** - `output`\n- **«driver_name»** - `raw_exec`\n\n### Set `datacenters`\n\nReplace the **«datacenter»** placeholder with a datacenter in your cluster. This tutorial uses `dc1`, the default datacenter of a Nomad dev agent.\n\n### Set job type to batch\n\nThe default job type of a Nomad job is **service**. For a batch job, you need to explicitly add the `type` attribute to the **job** stanza.\n\n```hcl\n  type = \"batch\"\n```\n\n### Configure the task\n\nInside of the **task** stanza, add the following `config` stanza. This configuration uses the **cat** command to output the contents of a file named **out.txt** that this job creates.\n\n```hcl\n      config {\n        command = \"cat\"\n        args = [\"local/out.txt\"]\n      }\n```\n\n### Add a `template`\n\nNext, add a **template** stanza inside of the **task** stanza. 
This template will write the words `This is my template.` to the **local/out.txt** file.\n\n```hcl\n      template {\n        destination = \"local/out.txt\"\n        data = <<EOT\nThis is my template.\nEOT\n      }\n```\n\n### Run the completed job\n\n<Accordion heading=\"View a complete version of the job\" collapse>\n\n```hcl\njob \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  group \"renderer\" {\n    task \"output\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"cat\"\n        args = [\"local/out.txt\"]\n      }\n\n      template {\n        destination = \"local/out.txt\"\n        data = <<EOT\nThis is my template.\nEOT\n      }\n    }\n  }\n}\n```\n\n</Accordion>\n\nRun the job with the `nomad job run` command.\n\n```shell-session\n$ nomad job run template.nomad\n==> Monitoring evaluation \"fe273062\"\n    Evaluation triggered by job \"template_render\"\n    Allocation \"bbae901c\" created: node \"3e34dbcd\", group \"renderer\"\n==> Monitoring evaluation \"fe273062\"\n    Allocation \"bbae901c\" status changed: \"pending\" -> \"complete\" (All tasks have completed)\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"fe273062\" finished with status \"complete\"\n```\n\nView the output from the job by running the `nomad alloc logs` command on the allocation that Nomad created. In the above output, the allocation ID is \"**bbae901c**.\"\n\n```shell-session\n$ nomad alloc logs bbae901c\nThis is my template.\n```\n\n## Parameterize the job\n\nAdd a `parameterized` stanza to the **job** stanza. This stanza instructs Nomad to store the job and wait for you to dispatch instances of it.\n\n```hcl\n  parameterized {\n  }\n```\n\nAn empty `parameterized` stanza creates a parameterized job that can't be customized, but it allows you to dispatch the job whenever you would like to run it.\n\nBefore you can register the parameterized version, you must purge the original batch version. 
Run `nomad job stop` with the `-purge` flag on the `template` job.\n\n```shell-session\n$ nomad job stop -purge template\n```\n\nRun the parameterized version of the job.\n\n```shell-session\n$ nomad job run template.nomad\nJob registration successful\n```\n\nNotice that the output doesn't show any scheduling activity: no evaluation or allocation information. You should expect this, since parameterized jobs are not run until they are dispatched.\n\n### If you get an error\n\nIf you receive the following error, it indicates that you missed purging the non-parameterized version of the template job. Run `nomad job stop -purge template` to resolve it.\n\n```shell-session\n$ nomad job run template.nomad\nError submitting job: Unexpected response code: 500 (cannot update non-parameterized job to being parameterized)\n```\n\nRun the `nomad job status` command to verify your parameterized job is available for dispatch.\n\n```shell-session\n$ nomad job status\nID        Type                 Priority  Status   Submit Date\ntemplate  batch/parameterized  50        running  2021-04-11T22:01:45-04:00\n```\n\n## Dispatch the job\n\nRun the `nomad job dispatch` command to dispatch an instance of the parameterized job.\n\n```shell-session\n$ nomad job dispatch template\nDispatched Job ID = template/dispatch-1618193196-1044eb97\nEvaluation ID     = 00084465\n\n==> Monitoring evaluation \"00084465\"\n    Evaluation triggered by job \"template/dispatch-1618193196-1044eb97\"\n    Allocation \"c842df26\" created: node \"9e9342f5\", group \"renderer\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"00084465\" finished with status \"complete\"\n```\n\nExamine the output of the command. There are some key differences from the typical output of the `nomad job run` command. Notice that Nomad generates a **Dispatched Job ID**. 
This ID refers to this specific instance of the parameterized job, and it also appears in the output of `nomad job status`.\n\nThe output also provides scheduling information. Collect the allocation ID from your output. In the above output it is \"**c842df26**.\" As before, run the `nomad alloc logs` command for your allocation ID.\n\n```shell-session\n$ nomad alloc logs c842df26\nThis is my template.\n```\n\nEven without variables, parameterized jobs provide a means for running a batch job on demand without having to resubmit the job specification.\n\n## Add a dispatch variable\n\nParameterized jobs also provide the ability to send variables as part of dispatching the job. These variables can be optional or required.\n\nFor example, the following parameterized stanza adds a required variable named `dispatcher_email` and an optional variable named `pager_email`.\n\n```hcl\n  parameterized {\n    meta_required = [\"dispatcher_email\"]\n    meta_optional = [\"pager_email\"]\n  }\n```\n\nAdd two variables to the template job's parameterized stanza—one required variable named `my_name` and an optional variable named `my_title`—by adding the following attributes inside of the parameterized stanza.\n\n```hcl\n    meta_required = [\"my_name\"]\n    meta_optional = [\"my_title\"]\n```\n\n### Add the variables to the template\n\nUpdate the template content inside of the HEREDOC markers (`<<EOT` and `EOT`). Replace it with the following content. 
Make sure that the ending HEREDOC delimiter is at the beginning of a line by itself.\n\n```go\nThis is my template.\nHello {{ if ( env \"NOMAD_META_MY_TITLE\" ) }}{{ env \"NOMAD_META_MY_TITLE\" }} {{ end }}{{ env \"NOMAD_META_MY_NAME\" }}.\n```\n\n### Deploy and dispatch the job\n\n```shell-session\n$ nomad job run template.nomad\nJob registration successful\n```\n\n```shell-session\n$ nomad job dispatch -meta my_name=Learner template\nDispatched Job ID = template/dispatch-1618195132-3d59eda3\nEvaluation ID     = 0803be44\n\n==> Monitoring evaluation \"0803be44\"\n    Evaluation triggered by job \"template/dispatch-1618195132-3d59eda3\"\n    Allocation \"0f1c6c7a\" created: node \"9e9342f5\", group \"renderer\"\n==> Monitoring evaluation \"0803be44\"\n    Allocation \"0f1c6c7a\" status changed: \"pending\" -> \"complete\" (All tasks have completed)\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"0803be44\" finished with status \"complete\"\n```\n\n```shell-session\n$ nomad alloc logs 0f1c6c7a\nThis is my template.\nHello Learner.\n```\n\n### Test the requirement of `my_name`\n\nBecause you listed the **my_name** variable in the `meta_required` attribute's value list, the job will not run unless you provide it when dispatching. If you do not, you will receive an error. 
Try it now.\n\n```shell-session\n$ nomad job dispatch template\nFailed to dispatch job: Unexpected response code: 500 (Dispatch did not provide required meta keys: [my_name])\n```\n\n### Use the optional variable\n\n```shell-session\n$ nomad job dispatch -meta my_name=Learner -meta my_title=awesome template\nDispatched Job ID = template/dispatch-1618195957-6256077e\nEvaluation ID     = fdfb6827\n\n==> Monitoring evaluation \"fdfb6827\"\n    Evaluation triggered by job \"template/dispatch-1618195957-6256077e\"\n    Allocation \"2b2ebdc1\" created: node \"9e9342f5\", group \"renderer\"\n==> Monitoring evaluation \"fdfb6827\"\n    Allocation \"2b2ebdc1\" status changed: \"pending\" -> \"complete\" (All tasks have completed)\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"fdfb6827\" finished with status \"complete\"\n```\n\n```shell-session\n$ nomad alloc logs 2b2ebdc1\nThis is my template.\nHello awesome Learner.\n```\n\n### Set default values for optional variables\n\nYou set default values for optional variables by adding a `meta` stanza inside the **job** stanza. 
Create a default of \"diligent\" for `my_title` by adding the following `meta` stanza.\n\n```hcl\n  meta {\n    my_title = \"diligent\"\n  }\n```\n\n```shell-session\n$ nomad job dispatch -meta my_name=Learner template\nDispatched Job ID = template/dispatch-1618196625-aa9ba981\nEvaluation ID     = 999e5266\n\n==> Monitoring evaluation \"999e5266\"\n    Evaluation triggered by job \"template/dispatch-1618196625-aa9ba981\"\n    Allocation \"ea32501e\" created: node \"9e9342f5\", group \"renderer\"\n==> Monitoring evaluation \"999e5266\"\n    Allocation \"ea32501e\" status changed: \"pending\" -> \"complete\" (All tasks have completed)\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"999e5266\" finished with status \"complete\"\n```\n\n```shell-session\n$ nomad alloc logs ea32501e\nThis is my template.\nHello diligent Learner.\n```\n\n```shell-session\n$ nomad job dispatch -meta my_name=Learner -meta my_title=fantastic template\nDispatched Job ID = template/dispatch-1618196752-eb39d032\nEvaluation ID     = c9c455b3\n\n==> Monitoring evaluation \"c9c455b3\"\n    Evaluation triggered by job \"template/dispatch-1618196752-eb39d032\"\n    Allocation \"8c04f35c\" created: node \"9e9342f5\", group \"renderer\"\n==> Monitoring evaluation \"c9c455b3\"\n    Allocation \"8c04f35c\" status changed: \"pending\" -> \"complete\" (All tasks have completed)\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"c9c455b3\" finished with status \"complete\"\n```\n\n```shell-session\n$ nomad alloc logs 8c04f35c\nThis is my template.\nHello fantastic Learner.\n```\n\n## Use dispatch payloads\n\nParameterized jobs can also accept an arbitrary payload at dispatch time. To opt in, add a `dispatch_payload` stanza to the **task** stanza. Nomad writes the payload to the named file inside the task's **local/** directory, where the task can read it.\n\n```hcl\n      dispatch_payload {\n        file = \"config.json\"\n      }\n```\n\nSupply the payload file as the final argument to the `nomad job dispatch` command.\n\n## Additional discussion\n\n_Optional_\n\nOftentimes, support or TAMs ask you to add extra discussion to explain a little\nmore about cloud-provider-specific pitfalls, etc. 
You can add it here if it\ndoes not fit anywhere else.\n\n## Next steps\n\nIn this section, start with a brief **_summary_** of what you have learned in\nthis tutorial, re-emphasizing the business value. Then provide some guidance on the\nnext steps to extend the user's knowledge. Briefly describe what the user will do in the next tutorial if the current collection is sequential.\n\nAdd cross-referencing links to get more information about the feature (e.g.\nproduct doc page, webinar links, blog post, etc.).\n"
  },
  {
    "path": "parameterized/docker_hello_world/hello-world.nomad",
    "content": "job \"hello-world.nomad\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  \n  parameterized { }\n\n  group \"containers\" {\n    task \"hello\" {\n      driver = \"docker\"\n\n      config {\n        image = \"hello-world:latest\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "parameterized/template.nomad",
    "content": "job \"«job_name»\" {\n  datacenters = [\"«datacenter»\"]\n\n  group \"«group_name»\" {\n    task \"«job_name»\" {\n      driver = \"«driver_type»\"\n    }\n  }\n}"
  },
  {
    "path": "parameterized/to_specific_client/example.nomad",
    "content": "job \"example.nomad\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  parameterized {\n    meta_required = [\"input_node_id\"]\n    meta_optional = []\n    payload = \"forbidden\"\n  }\n\n  group \"cache\" {\n\n    constraint {\n      attribute = \"${node.unique.id}\"\n      value = \"${NOMAD_META_INPUT_NODE_ID}\"\n    }\n\n    task \"task\" {\n      driver = \"docker\"\n\n      config {\n        image = \"alpine\"\n        command = \"sh\"\n        args = [\n          \"-c\",\n          \"env; while true; do sleep 300; done\"\n        ]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "parameterized/to_specific_client/workaround/README.md",
"content": "# A gross workaround\n\nThis is a very gross workaround to synthesize some things I have learned recently.\n\nIt leverages:\n\n- ugly shell script\n- python\n- Nomad HCL2\n\n```bash\nRunOutput=$(nomad job run -var node_id=f7bc1f2d-34b1-eaf8-b7d3-253f2e7de4d6 example.nomad)\nAllocId=$(echo \"$RunOutput\" | awk '/Allocation/{ print $2}' | tr -d \"\\\"\")\nif [ \"$AllocId\" == \"\" ]\nthen\n\techo \"No allocation found\"\n\texit 1\nfi\n\nFullAllocId=$(nomad alloc status -verbose \"$AllocId\" | grep -e '^ID' | awk '{print $3}')"
  },
  {
    "path": "parameterized/to_specific_client/workaround/example.nomad",
    "content": "variable \"node_id\" {\n  type = string\n  description = \"The destination's Nomad node ID. Must be the full ID from `nomad node status -verbose`\"\n}\n\njob \"example.nomad\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  group \"cache\" {\n\n    constraint {\n      attribute = \"${node.unique.id}\"\n      value = var.node_id\n    }\n\n    task \"task\" {\n      driver = \"docker\"\n\n      config {\n        image = \"alpine\"\n        command = \"sh\"\n        args = [\n          \"-c\",\n          \"env; sleep 5;\"\n        ]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "parameterized/to_specific_client/workaround/rolling_run.sh",
"content": "#!/bin/bash\n\nClientNodeIds=$(nomad node status -t '{{ range  .}}{{printf \"%s\\n\" .ID}}{{end}}')\n\nRunOutput=$(nomad job run -var node_id=f7bc1f2d-34b1-eaf8-b7d3-253f2e7de4d6 example.nomad)\nAllocId=$(echo \"$RunOutput\" | awk '/Allocation/{ print $2}'| tr -d \"\\\" \\t\")\nif [ \"$AllocId\" == \"\" ]\nthen\n\techo \"No allocation found\"\n\texit 1\nfi\n\nFullAllocId=$(nomad alloc status -verbose \"$AllocId\" | grep -e '^ID' | awk '{print $3}')\n\n# Watch the allocation and capture the watcher's exit code.\n./watch.py \"$FullAllocId\"\nExitCode=$?\n\nif [ \"$ExitCode\" -ne 0 ]\nthen\n\techo \"Bailing out because of an error...\"\n\texit 2\nfi\n"
  },
  {
    "path": "parameterized/to_specific_client/workaround/watch.py",
    "content": "#!/usr/local/bin/python3\n\nimport json\nimport os\nimport requests\nimport sys\n\n_url = \"\"\n_alloc_id = \"\"\n\ndef build_url(alloc_id):\n    # Check for NOMAD_ADDR, if found set the base of the URL to it.\n    if os.environ.get('NOMAD_ADDR'):\n        nomad_addr = os.environ.get('NOMAD_ADDR')\n\n        # ... well, unless it's HTTPS.\n        if nomad_addr.startswith(\"https\"):\n            raise ValueError(\"HTTPS is not implemented\")\n\n        url_base = os.environ.get('NOMAD_ADDR')\n    else:\n        url_base = \"http://127.0.0.1:4646\"\n\n\n\n    URL_API_PATH = \"/v1/event/stream\"\n    #URL_QUERY_STRING = \"\"\n    URL_QUERY_STRING = \"?topic=Allocation:\"+alloc_id\n\n    _url = url_base + URL_API_PATH + URL_QUERY_STRING\n    return _url\n\ndef eprint(string):\n    sys.stderr.write(string)\n    sys.stderr.flush()   \n\ndef is_final(event):\n    if event[\"Payload\"][\"Allocation\"][\"ClientStatus\"] == \"complete\":\n        eprint(\"Allocation complete\\n\")\n        sys.exit(0)\n\n    if event[\"Payload\"][\"Allocation\"][\"ClientStatus\"] == \"failed\":\n        eprint(\"Allocation failed\\n\")\n        sys.exit(1)\n\ndef print_tasks(event):\n    tasks = event[\"Payload\"][\"Allocation\"][\"TaskStates\"]\n    # print(json.dumps(tasks, sort_keys=True, indent=2))\n    if tasks:\n        for task_name, task in tasks.items():\n            print(\"--- \"+task_name+\"\\t\"+task[\"State\"]+\"\\t\"+str(task[\"Failed\"]))\n\n\ndef handle_event(event):\n    # print(json.dumps(event[\"Payload\"], sort_keys=True, indent=2))\n    # print(json.dumps(event[\"Allocation\"], sort_keys=True, indent=2))\n\n    allocation = event[\"Payload\"][\"Allocation\"]\n    print(str(event[\"Index\"])+\"\\t\"+ event[\"Type\"]+\"\\t\"+allocation[\"DesiredStatus\"]+\"\\t\"+ allocation[\"ClientStatus\"])\n\n    # print_tasks(event)\n\n    is_final(event)\n\ndef handle_data(response):\n    '''\n    Handle a single line of data from the HTTP stream.\n    '''\n    for 
line in response.iter_lines():\n        if line:   # filter out keep-alive new lines\n            object = json.loads(line.decode('utf-8'))\n            if len(object) > 1:  # has Events\n                for event in object[\"Events\"]:\n                    handle_event(event)\n\ndef connect(url):\n    try:\n        eprint(\"Connecting to '\"+url+\"'\\n\")\n        response = requests.get(url, stream=True)\n        response.raise_for_status()\n        handle_data(response)\n    except requests.exceptions.RequestException as e:\n        raise SystemExit(e)\n\ndef start():\n    try:\n        connect(build_url(check_args()))\n    except KeyboardInterrupt:\n        eprint(\"Received keyboard interrupt. Stopping.\\n\")\n        sys.exit(0)\n\ndef check_args():\n    # look for 2 items, because argv[0] is always the script's name. :\\\n    if len(sys.argv) != 2:\n        raise ValueError(\"Must supply a full Nomad alloc id.\")\n    alloc_id = sys.argv[1]\n    return alloc_id\n\nstart()"
  },
  {
    "path": "ports/README.md",
"content": "# Mapping ports into Nomad\n\nThis example shows a job that uses both static and dynamic ports.\n"
  },
  {
    "path": "ports/example.nomad",
"content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      # the label for the `port` block is used to refer to that port in the rest of the job:\n      # interpolation, docker port maps, etc.\n      port \"dynamic\" {\n        to = 6379\n      }\n\n      port \"_443\" {\n        static = 443\n        to     = 6379\n      }\n\n      port \"444\" {\n        static = 444\n        to     = 6379\n      }\n    }\n\n    service {\n      name = \"redis-cache\"\n      tags = [\"global\", \"cache\"]\n      port = \"dynamic\"\n\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        ports = [\"dynamic\", \"_443\", \"444\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "preserve_state/bar-service.jsonjob",
    "content": "{\n    \"Job\": {\n        \"AllAtOnce\": false,\n        \"Constraints\": null,\n        \"CreateIndex\": 11412,\n        \"Datacenters\": [\n            \"dc1\"\n        ],\n        \"ID\": \"bar-service\",\n        \"JobModifyIndex\": 11412,\n        \"Meta\": null,\n        \"ModifyIndex\": 11415,\n        \"Name\": \"bar-service\",\n        \"Namespace\": \"default\",\n        \"ParameterizedJob\": null,\n        \"ParentID\": \"\",\n        \"Payload\": null,\n        \"Periodic\": null,\n        \"Priority\": 50,\n        \"Region\": \"global\",\n        \"Stable\": false,\n        \"Status\": \"running\",\n        \"StatusDescription\": \"\",\n        \"Stop\": false,\n        \"SubmitTime\": 1522707675977824527,\n        \"TaskGroups\": [\n            {\n                \"Constraints\": null,\n                \"Count\": 6,\n                \"EphemeralDisk\": {\n                    \"Migrate\": false,\n                    \"SizeMB\": 300,\n                    \"Sticky\": false\n                },\n                \"Meta\": null,\n                \"Name\": \"example\",\n                \"RestartPolicy\": {\n                    \"Attempts\": 2,\n                    \"Delay\": 15000000000,\n                    \"Interval\": 60000000000,\n                    \"Mode\": \"delay\"\n                },\n                \"Tasks\": [\n                    {\n                        \"Artifacts\": [\n                            {\n                                \"GetterMode\": \"any\",\n                                \"GetterOptions\": {\n                                    \"checksum\": \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n                                },\n                                \"GetterSource\": \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\",\n                                \"RelativeDest\": \"local/\"\n                            }\n           
             ],\n                        \"Config\": {\n                            \"args\": [\n                                \"-listen\",\n                                \":${NOMAD_PORT_http}\",\n                                \"-text\",\n                                \"<html><body><h1>Welcome to the Bar Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\"\n                            ],\n                            \"command\": \"http-echo\"\n                        },\n                        \"Constraints\": null,\n                        \"DispatchPayload\": null,\n                        \"Driver\": \"exec\",\n                        \"Env\": null,\n                        \"KillSignal\": \"\",\n                        \"KillTimeout\": 5000000000,\n                        \"Leader\": false,\n                        \"LogConfig\": {\n                            \"MaxFileSizeMB\": 10,\n                            \"MaxFiles\": 10\n                        },\n                        \"Meta\": null,\n                        \"Name\": \"server\",\n                        \"Resources\": {\n                            \"CPU\": 100,\n                            \"DiskMB\": 0,\n                            \"IOPS\": 0,\n                            \"MemoryMB\": 300,\n                            \"Networks\": [\n                                {\n                                    \"CIDR\": \"\",\n                                    \"Device\": \"\",\n                                    \"DynamicPorts\": [\n                                        {\n                                            \"Label\": \"http\",\n                                            \"Value\": 0\n                                        }\n                                    ],\n                                    \"IP\": \"\",\n                                    \"MBits\": 10,\n                                    \"ReservedPorts\": null\n                                }\n    
                        ]\n                        },\n                        \"Services\": [\n                            {\n                                \"AddressMode\": \"auto\",\n                                \"CheckRestart\": null,\n                                \"Checks\": [\n                                    {\n                                        \"AddressMode\": \"\",\n                                        \"Args\": null,\n                                        \"CheckRestart\": null,\n                                        \"Command\": \"\",\n                                        \"Header\": null,\n                                        \"Id\": \"\",\n                                        \"InitialStatus\": \"\",\n                                        \"Interval\": 15000000000,\n                                        \"Method\": \"\",\n                                        \"Name\": \"health-check\",\n                                        \"Path\": \"/\",\n                                        \"PortLabel\": \"\",\n                                        \"Protocol\": \"\",\n                                        \"TLSSkipVerify\": false,\n                                        \"Timeout\": 5000000000,\n                                        \"Type\": \"http\"\n                                    }\n                                ],\n                                \"Id\": \"\",\n                                \"Name\": \"bar-service\",\n                                \"PortLabel\": \"http\",\n                                \"Tags\": [\n                                    \"urlprefix-/bar\"\n                                ]\n                            }\n                        ],\n                        \"ShutdownDelay\": 0,\n                        \"Templates\": null,\n                        \"User\": \"\",\n                        \"Vault\": null\n                    }\n                ],\n                
\"Update\": null\n            }\n        ],\n        \"Type\": \"service\",\n        \"Update\": {\n            \"AutoRevert\": false,\n            \"Canary\": 0,\n            \"HealthCheck\": \"\",\n            \"HealthyDeadline\": 0,\n            \"MaxParallel\": 0,\n            \"MinHealthyTime\": 0,\n            \"Stagger\": 0\n        },\n        \"VaultToken\": \"\",\n        \"Version\": 0\n    }\n}\n"
  },
  {
    "path": "preserve_state/example.jsonjob",
    "content": "{\n    \"Job\": {\n        \"AllAtOnce\": false,\n        \"Constraints\": null,\n        \"CreateIndex\": 11414,\n        \"Datacenters\": [\n            \"dc1\"\n        ],\n        \"ID\": \"example\",\n        \"JobModifyIndex\": 11414,\n        \"Meta\": null,\n        \"ModifyIndex\": 11414,\n        \"Name\": \"example\",\n        \"Namespace\": \"default\",\n        \"ParameterizedJob\": null,\n        \"ParentID\": \"\",\n        \"Payload\": null,\n        \"Periodic\": {\n            \"Enabled\": true,\n            \"ProhibitOverlap\": true,\n            \"Spec\": \"*/15 * * * * *\",\n            \"SpecType\": \"cron\",\n            \"TimeZone\": \"UTC\"\n        },\n        \"Priority\": 50,\n        \"Region\": \"global\",\n        \"Stable\": false,\n        \"Status\": \"running\",\n        \"StatusDescription\": \"\",\n        \"Stop\": false,\n        \"SubmitTime\": 1522707676229857749,\n        \"TaskGroups\": [\n            {\n                \"Constraints\": null,\n                \"Count\": 5,\n                \"EphemeralDisk\": {\n                    \"Migrate\": false,\n                    \"SizeMB\": 300,\n                    \"Sticky\": false\n                },\n                \"Meta\": null,\n                \"Name\": \"sleepers\",\n                \"RestartPolicy\": {\n                    \"Attempts\": 15,\n                    \"Delay\": 15000000000,\n                    \"Interval\": 604800000000000,\n                    \"Mode\": \"delay\"\n                },\n                \"Tasks\": [\n                    {\n                        \"Artifacts\": null,\n                        \"Config\": {\n                            \"command\": \"bash\",\n                            \"args\": [\n                                \"-c\",\n                                \"echo Starting; sleep=`shuf -i5-10 -n1`; echo Sleeping $sleep seconds.; sleep $sleep; echo Done; exit 0\"\n                            ]\n                        
},\n                        \"Constraints\": null,\n                        \"DispatchPayload\": null,\n                        \"Driver\": \"raw_exec\",\n                        \"Env\": null,\n                        \"KillSignal\": \"\",\n                        \"KillTimeout\": 5000000000,\n                        \"Leader\": false,\n                        \"LogConfig\": {\n                            \"MaxFileSizeMB\": 10,\n                            \"MaxFiles\": 10\n                        },\n                        \"Meta\": null,\n                        \"Name\": \"wait\",\n                        \"Resources\": {\n                            \"CPU\": 100,\n                            \"DiskMB\": 0,\n                            \"IOPS\": 0,\n                            \"MemoryMB\": 200,\n                            \"Networks\": null\n                        },\n                        \"Services\": null,\n                        \"ShutdownDelay\": 0,\n                        \"Templates\": null,\n                        \"User\": \"\",\n                        \"Vault\": null\n                    }\n                ],\n                \"Update\": null\n            }\n        ],\n        \"Type\": \"batch\",\n        \"Update\": {\n            \"AutoRevert\": false,\n            \"Canary\": 0,\n            \"HealthCheck\": \"\",\n            \"HealthyDeadline\": 0,\n            \"MaxParallel\": 0,\n            \"MinHealthyTime\": 0,\n            \"Stagger\": 0\n        },\n        \"VaultToken\": \"\",\n        \"Version\": 0\n    }\n}\n"
  },
  {
    "path": "preserve_state/fabio.jsonjob",
    "content": "{\n    \"Job\": {\n        \"AllAtOnce\": false,\n        \"Constraints\": [\n            {\n                \"LTarget\": \"${attr.cpu.arch}\",\n                \"Operand\": \"!=\",\n                \"RTarget\": \"arm\"\n            },\n            {\n                \"LTarget\": \"${attr.kernel.name}\",\n                \"Operand\": \"!=\",\n                \"RTarget\": \"windows\"\n            }\n        ],\n        \"CreateIndex\": 11416,\n        \"Datacenters\": [\n            \"dc1\"\n        ],\n        \"ID\": \"fabio\",\n        \"JobModifyIndex\": 11416,\n        \"Meta\": null,\n        \"ModifyIndex\": 11416,\n        \"Name\": \"fabio\",\n        \"Namespace\": \"default\",\n        \"ParameterizedJob\": null,\n        \"ParentID\": \"\",\n        \"Payload\": null,\n        \"Periodic\": null,\n        \"Priority\": 50,\n        \"Region\": \"global\",\n        \"Stable\": false,\n        \"Status\": \"running\",\n        \"StatusDescription\": \"\",\n        \"Stop\": false,\n        \"SubmitTime\": 1522707676470085364,\n        \"TaskGroups\": [\n            {\n                \"Constraints\": null,\n                \"Count\": 1,\n                \"EphemeralDisk\": {\n                    \"Migrate\": false,\n                    \"SizeMB\": 300,\n                    \"Sticky\": false\n                },\n                \"Meta\": null,\n                \"Name\": \"fabio\",\n                \"RestartPolicy\": {\n                    \"Attempts\": 2,\n                    \"Delay\": 15000000000,\n                    \"Interval\": 60000000000,\n                    \"Mode\": \"delay\"\n                },\n                \"Tasks\": [\n                    {\n                        \"Artifacts\": [\n                            {\n                                \"GetterMode\": \"any\",\n                                \"GetterOptions\": {\n                                    \"checksum\": 
\"sha256:7dc786c3dfd8c770d20e524629d0d7cd2cf8bb84a1bf98605405800b28705198\"\n                                },\n                                \"GetterSource\": \"https://github.com/fabiolb/fabio/releases/download/v1.5.0/fabio-1.5.0-go1.8.3-linux_amd64\",\n                                \"RelativeDest\": \"local/\"\n                            }\n                        ],\n                        \"Config\": {\n                            \"command\": \"fabio-1.5.0-go1.8.3-linux_amd64\"\n                        },\n                        \"Constraints\": null,\n                        \"DispatchPayload\": null,\n                        \"Driver\": \"exec\",\n                        \"Env\": null,\n                        \"KillSignal\": \"\",\n                        \"KillTimeout\": 5000000000,\n                        \"Leader\": false,\n                        \"LogConfig\": {\n                            \"MaxFileSizeMB\": 10,\n                            \"MaxFiles\": 10\n                        },\n                        \"Meta\": null,\n                        \"Name\": \"fabio\",\n                        \"Resources\": {\n                            \"CPU\": 500,\n                            \"DiskMB\": 0,\n                            \"IOPS\": 0,\n                            \"MemoryMB\": 64,\n                            \"Networks\": [\n                                {\n                                    \"CIDR\": \"\",\n                                    \"Device\": \"\",\n                                    \"DynamicPorts\": null,\n                                    \"IP\": \"\",\n                                    \"MBits\": 1,\n                                    \"ReservedPorts\": [\n                                        {\n                                            \"Label\": \"http\",\n                                            \"Value\": 9999\n                                        },\n                                        {\n    
                                        \"Label\": \"ui\",\n                                            \"Value\": 9998\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"Services\": null,\n                        \"ShutdownDelay\": 0,\n                        \"Templates\": null,\n                        \"User\": \"\",\n                        \"Vault\": null\n                    }\n                ],\n                \"Update\": {\n                    \"AutoRevert\": false,\n                    \"Canary\": 0,\n                    \"HealthCheck\": \"checks\",\n                    \"HealthyDeadline\": 300000000000,\n                    \"MaxParallel\": 1,\n                    \"MinHealthyTime\": 10000000000,\n                    \"Stagger\": 5000000000\n                }\n            }\n        ],\n        \"Type\": \"system\",\n        \"Update\": {\n            \"AutoRevert\": false,\n            \"Canary\": 0,\n            \"HealthCheck\": \"\",\n            \"HealthyDeadline\": 0,\n            \"MaxParallel\": 1,\n            \"MinHealthyTime\": 0,\n            \"Stagger\": 5000000000\n        },\n        \"VaultToken\": \"\",\n        \"Version\": 0\n    }\n}\n"
  },
  {
    "path": "preserve_state/foo-service.jsonjob",
    "content": "{\n    \"Job\": {\n        \"AllAtOnce\": false,\n        \"Constraints\": null,\n        \"CreateIndex\": 11420,\n        \"Datacenters\": [\n            \"dc1\"\n        ],\n        \"ID\": \"foo-service\",\n        \"JobModifyIndex\": 11420,\n        \"Meta\": {\n            \"foo-service\": \"true\"\n        },\n        \"ModifyIndex\": 11424,\n        \"Name\": \"foo-service\",\n        \"Namespace\": \"default\",\n        \"ParameterizedJob\": null,\n        \"ParentID\": \"\",\n        \"Payload\": null,\n        \"Periodic\": null,\n        \"Priority\": 50,\n        \"Region\": \"global\",\n        \"Stable\": false,\n        \"Status\": \"running\",\n        \"StatusDescription\": \"\",\n        \"Stop\": false,\n        \"SubmitTime\": 1522707676494575505,\n        \"TaskGroups\": [\n            {\n                \"Constraints\": null,\n                \"Count\": 3,\n                \"EphemeralDisk\": {\n                    \"Migrate\": false,\n                    \"SizeMB\": 300,\n                    \"Sticky\": false\n                },\n                \"Meta\": null,\n                \"Name\": \"example\",\n                \"RestartPolicy\": {\n                    \"Attempts\": 2,\n                    \"Delay\": 15000000000,\n                    \"Interval\": 60000000000,\n                    \"Mode\": \"delay\"\n                },\n                \"Tasks\": [\n                    {\n                        \"Artifacts\": [\n                            {\n                                \"GetterMode\": \"any\",\n                                \"GetterOptions\": {\n                                    \"checksum\": \"sha256:e30b29b72ad5ec1f6dfc8dee0c2fcd162f47127f2251b99e47b9ae8af1d7b917\"\n                                },\n                                \"GetterSource\": \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\",\n                                \"RelativeDest\": 
\"local/\"\n                            }\n                        ],\n                        \"Config\": {\n                            \"command\": \"http-echo\",\n                            \"args\": [\n                                \"-listen\",\n                                \":${NOMAD_PORT_http}\",\n                                \"-text\",\n                                \"<html><body><h1>Welcome to the Foo Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\"\n                            ]\n                        },\n                        \"Constraints\": null,\n                        \"DispatchPayload\": null,\n                        \"Driver\": \"exec\",\n                        \"Env\": null,\n                        \"KillSignal\": \"\",\n                        \"KillTimeout\": 5000000000,\n                        \"Leader\": false,\n                        \"LogConfig\": {\n                            \"MaxFileSizeMB\": 10,\n                            \"MaxFiles\": 10\n                        },\n                        \"Meta\": null,\n                        \"Name\": \"server\",\n                        \"Resources\": {\n                            \"CPU\": 100,\n                            \"DiskMB\": 0,\n                            \"IOPS\": 0,\n                            \"MemoryMB\": 300,\n                            \"Networks\": [\n                                {\n                                    \"CIDR\": \"\",\n                                    \"Device\": \"\",\n                                    \"DynamicPorts\": [\n                                        {\n                                            \"Label\": \"http\",\n                                            \"Value\": 0\n                                        }\n                                    ],\n                                    \"IP\": \"\",\n                                    \"MBits\": 10,\n                                    
\"ReservedPorts\": null\n                                }\n                            ]\n                        },\n                        \"Services\": [\n                            {\n                                \"AddressMode\": \"auto\",\n                                \"CheckRestart\": null,\n                                \"Checks\": [\n                                    {\n                                        \"AddressMode\": \"\",\n                                        \"Args\": null,\n                                        \"CheckRestart\": null,\n                                        \"Command\": \"\",\n                                        \"Header\": null,\n                                        \"Id\": \"\",\n                                        \"InitialStatus\": \"\",\n                                        \"Interval\": 15000000000,\n                                        \"Method\": \"\",\n                                        \"Name\": \"health-check\",\n                                        \"Path\": \"/\",\n                                        \"PortLabel\": \"\",\n                                        \"Protocol\": \"\",\n                                        \"TLSSkipVerify\": false,\n                                        \"Timeout\": 5000000000,\n                                        \"Type\": \"http\"\n                                    }\n                                ],\n                                \"Id\": \"\",\n                                \"Name\": \"foo-service\",\n                                \"PortLabel\": \"http\",\n                                \"Tags\": [\n                                    \"urlprefix-/foo\"\n                                ]\n                            }\n                        ],\n                        \"ShutdownDelay\": 0,\n                        \"Templates\": null,\n                        \"User\": \"\",\n                        \"Vault\": 
null\n                    }\n                ],\n                \"Update\": null\n            }\n        ],\n        \"Type\": \"service\",\n        \"Update\": {\n            \"AutoRevert\": false,\n            \"Canary\": 0,\n            \"HealthCheck\": \"\",\n            \"HealthyDeadline\": 0,\n            \"MaxParallel\": 0,\n            \"MinHealthyTime\": 0,\n            \"Stagger\": 0\n        },\n        \"VaultToken\": \"\",\n        \"Version\": 0\n    }\n}\n"
  },
  {
    "path": "preserve_state/hashi-ui.jsonjob",
    "content": "{\n    \"Job\": {\n        \"AllAtOnce\": false,\n        \"Constraints\": null,\n        \"CreateIndex\": 11423,\n        \"Datacenters\": [\n            \"dc1\"\n        ],\n        \"ID\": \"hashi-ui\",\n        \"JobModifyIndex\": 11423,\n        \"Meta\": null,\n        \"ModifyIndex\": 11423,\n        \"Name\": \"hashi-ui\",\n        \"Namespace\": \"default\",\n        \"ParameterizedJob\": null,\n        \"ParentID\": \"\",\n        \"Payload\": null,\n        \"Periodic\": null,\n        \"Priority\": 50,\n        \"Region\": \"global\",\n        \"Stable\": false,\n        \"Status\": \"running\",\n        \"StatusDescription\": \"\",\n        \"Stop\": false,\n        \"SubmitTime\": 1522707676888714780,\n        \"TaskGroups\": [\n            {\n                \"Constraints\": null,\n                \"Count\": 1,\n                \"EphemeralDisk\": {\n                    \"Migrate\": false,\n                    \"SizeMB\": 300,\n                    \"Sticky\": false\n                },\n                \"Meta\": null,\n                \"Name\": \"nomad-ui\",\n                \"RestartPolicy\": {\n                    \"Attempts\": 2,\n                    \"Delay\": 15000000000,\n                    \"Interval\": 60000000000,\n                    \"Mode\": \"delay\"\n                },\n                \"Tasks\": [\n                    {\n                        \"Artifacts\": null,\n                        \"Config\": {\n                            \"port_map\": [\n                                {\n                                    \"ui\": 3000.0\n                                }\n                            ],\n                            \"image\": \"jippi/hashi-ui\"\n                        },\n                        \"Constraints\": [\n                            {\n                                \"LTarget\": \"${attr.cpu.arch}\",\n                                \"Operand\": \"=\",\n                                \"RTarget\": 
\"amd64\"\n                            },\n                            {\n                                \"LTarget\": \"${attr.kernel.name}\",\n                                \"Operand\": \"=\",\n                                \"RTarget\": \"linux\"\n                            }\n                        ],\n                        \"DispatchPayload\": null,\n                        \"Driver\": \"docker\",\n                        \"Env\": null,\n                        \"KillSignal\": \"\",\n                        \"KillTimeout\": 5000000000,\n                        \"Leader\": false,\n                        \"LogConfig\": {\n                            \"MaxFileSizeMB\": 10,\n                            \"MaxFiles\": 10\n                        },\n                        \"Meta\": null,\n                        \"Name\": \"nomad-ui-linux-amd64\",\n                        \"Resources\": {\n                            \"CPU\": 100,\n                            \"DiskMB\": 0,\n                            \"IOPS\": 0,\n                            \"MemoryMB\": 128,\n                            \"Networks\": [\n                                {\n                                    \"CIDR\": \"\",\n                                    \"Device\": \"\",\n                                    \"DynamicPorts\": null,\n                                    \"IP\": \"\",\n                                    \"MBits\": 1,\n                                    \"ReservedPorts\": [\n                                        {\n                                            \"Label\": \"ui\",\n                                            \"Value\": 8000\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"Services\": [\n                            {\n                                \"AddressMode\": \"auto\",\n                           
     \"CheckRestart\": null,\n                                \"Checks\": [\n                                    {\n                                        \"AddressMode\": \"\",\n                                        \"Args\": null,\n                                        \"CheckRestart\": null,\n                                        \"Command\": \"\",\n                                        \"Header\": null,\n                                        \"Id\": \"\",\n                                        \"InitialStatus\": \"\",\n                                        \"Interval\": 10000000000,\n                                        \"Method\": \"\",\n                                        \"Name\": \"service: \\\"hashi-ui-nomad-ui-nomad-ui-linux-amd64\\\" check\",\n                                        \"Path\": \"/\",\n                                        \"PortLabel\": \"ui\",\n                                        \"Protocol\": \"\",\n                                        \"TLSSkipVerify\": false,\n                                        \"Timeout\": 2000000000,\n                                        \"Type\": \"tcp\"\n                                    }\n                                ],\n                                \"Id\": \"\",\n                                \"Name\": \"hashi-ui-nomad-ui-nomad-ui-linux-amd64\",\n                                \"PortLabel\": \"ui\",\n                                \"Tags\": null\n                            }\n                        ],\n                        \"ShutdownDelay\": 0,\n                        \"Templates\": [\n                            {\n                                \"ChangeMode\": \"restart\",\n                                \"ChangeSignal\": \"\",\n                                \"DestPath\": \"secrets/file.env\",\n                                \"EmbeddedTmpl\": \"        NOMAD_ADDR = \\\"{{with service \\\"nomad\\\"}}http://{{.Address}}:{{.Port}}{{end}}\\\"\\n        
NOMAD_ENABLE = 1\\n        CONSUL_ENABLE = 1\\n        CONSUL_ADDR = \\\"{{with service \\\"consul\\\"}}http://{{.Address}}:{{.Port}}{{end}}\\\"\\n        LOG_LEVEL = \\\"info\\\"\\n        NOMAD_READ_ONLY = 0\\n        CONSUL_READ_ONLY = 0\\n\",\n                                \"Envvars\": true,\n                                \"LeftDelim\": \"{{\",\n                                \"Perms\": \"0644\",\n                                \"RightDelim\": \"}}\",\n                                \"SourcePath\": \"\",\n                                \"Splay\": 5000000000,\n                                \"VaultGrace\": 15000000000\n                            }\n                        ],\n                        \"User\": \"\",\n                        \"Vault\": null\n                    }\n                ],\n                \"Update\": null\n            }\n        ],\n        \"Type\": \"system\",\n        \"Update\": {\n            \"AutoRevert\": false,\n            \"Canary\": 0,\n            \"HealthCheck\": \"\",\n            \"HealthyDeadline\": 0,\n            \"MaxParallel\": 0,\n            \"MinHealthyTime\": 0,\n            \"Stagger\": 0\n        },\n        \"VaultToken\": \"\",\n        \"Version\": 0\n    }\n}\n"
  },
  {
    "path": "preserve_state/jam.sh",
    "content": "#! /bin/bash\n\njobs=$(ls *.jsonjob)\n\nfor I in ${jobs}; do\n  echo \"Jamming $I\"\n  curl -X PUT -d @$I http://127.0.0.1:4646/v1/jobs\n  echo \"\"\ndone\n"
  },
  {
    "path": "preserve_state/nomad_debug",
    "content": "#! /usr/bin/python\n\nimport urllib, json\nbaseUrl = \"http://127.0.0.1:4646\"\nurl = baseUrl+\"/v1/jobs\"\nresponse = urllib.urlopen(url)\ndata = json.loads(response.read())\nfor job in data:\n    print(job['Name'], job['Status'], job['Stop'])\n\n"
  },
  {
    "path": "preserve_state/preserve.sh",
    "content": "#! /bin/bash\njobs=$(nomad status | grep ing | grep -v \"/periodic-\" |awk '{print $1}')\necho $(echo \"${jobs}\" |wc -l)\nfor I in ${jobs}; do\n  echo \"Exporting $I\"\n  nomad inspect $I > $I.jsonjob\ndone\n"
  },
  {
    "path": "qemu/README.md",
    "content": "# TinyCore QEMU example\n\nThis sample will start a TinyCore Linux VM configured\nwith the SSH daemon enabled. It performs port forwarding\nusing the QEMU commands so that Nomad can dynamically\nassign a HTTP and SSH port for the VM.\n\nYou will need to serve the image some place so that it\ncan be retrieved using the artifact stanza.\n\nThe default SSH user is `tc` with `tinycore` as password.\n"
  },
  {
    "path": "qemu/hass/hass.nomad",
    "content": "job \"home-assistant\"{\n    datacenters = [\"dc1\"]\n    type = \"service\"\n    priority = \"100\"\n\tgroup \"hass-vm\" {\n        task \"home-assistant\" {\n            driver = \"qemu\"\n            artifact {\n                source = \"https://github.com/home-assistant/operating-system/releases/download/4.16/hassos_ova-4.16.qcow2.gz\"\n                destination =\"hassos_ova-4.16.qcow2\"\n\t\tmode = \"file\"\n\t\t}\n            config {\n                image_path        = \"hassos_ova-4.16.qcow2\"\n                accelerator       = \"kvm\"\n                graceful_shutdown = true\n                args              = [\"nodefaults\",\n                    \"nodefconfig\",\n                    \"net nic,model=e1000\",\n                    \"smbios type=0,uefi=on\",\n                    ]\n                }\n            resources {\n                cpu = 100,\n                memory = 800\n            }\n        }\n        network {\n            mode = \"host\"\n            port \"hasswebui\" {\n                static = 8223\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "qemu/imagebuilder/Dockerfile",
    "content": "FROM ubuntu\n\nRUN export DEBIAN_FRONTEND=noninteractive && \\\n    apt update && \\\n    apt install -y \\\n      qemu \\\n      qemu-utils \\\n      libguestfs-tools \\\n      linux-image-generic \\\n      nbdfuse \\\n      nbd-client \\\n      nbdkit \\\n      nbdkit-plugin-guestfs\n\nRUN mkdir -p /mnt/cdrom /mnt/tinycore\n\n"
  },
  {
    "path": "qemu/imagebuilder/NOTES.md",
    "content": "# Some notes that need to be formatted and properly attended to\n\nYou will need to serve the image someplace so that it\ncan be retrieved using the artifact stanza.\n\n## Creating the image\n\nDownload the boot image - <http://tinycorelinux.net/12.x/x86/release/Core-current.iso>\n\n<https://fabianstumpf.de/articles/tinycore_images.htm> Original article\n\ndocker run -v $(pwd)/working:/working --privileged --name=imagebuilder --rm -it ubuntu /bin/bash\n\n```bash\napt update;\napt install -y \\\n  qemu \\\n  qemu-utils \\\n  libguestfs-tools \\\n  linux-image-generic \\\n  nbdfuse \\\n  nbd-client\n```\n\n```bash\ncd working\nwget http://tinycorelinux.net/12.x/x86/release/Core-current.iso\nmkdir /mnt/cdrom\nmkdir /mnt/tinycore\nmount Core-current.iso /mnt/cdrom\n```\n\n```\ndocker run -v $(pwd):/working --privileged --rm --name=imagebuilder -it imagebuilder /bin/bash\n```\n\n### Using qemu-img to make the disk\n\nThis requires a nbd-capable kernel so that you can mount the qcow as a block\ndevice for more standard manipulation\n\nCreate the qcow and create the block device for it with `qemu-nbd`\n```bash\nqemu-img create -f qcow2 /working/core-image.qcow2 64M\nqemu-nbd -c /dev/nbd0 /working/core-image.qcow2\n```\n\nCreate a partition table\n\n```bash\nfdisk /dev/nbd0\n```\n\nRemove the NBD device\n\n```bash\nqemu-nbd -d /dev/nbd0\n```\n\n```bash\nguestfish -a /working/core-image.qcow2\n```\n\nrun\n\n### Using nbdfuse for systems that don't have kernel nbd support\n\n```bash\nqemu-img create -f qcow2 /working/core-image.qcow2 64M\nmkdir -p /block\nnbdfuse /block/nbd0 --socket-activation qemu-nbd -f qcow2 /working/core-image.qcow2 &\n```\n\n```bash\nfusermount3 -u dir\nrmdir dir\n```\n\n### Using guestfish tools to build an image\n\n```bash\nguestfish -N core-image.qcow2=fs:ext4:64M:mbr exit\nguestmount -a /working/core-image.qcow2 -m /dev/sda1 /mnt/tinycore\n```\n\n\n## Prepare image\n\n```bash\nrm -rf /mnt/tinycore/lost+found\nmkdir -p 
/mnt/tinycore/boot\nmkdir -p /mnt/tinycore/tce/optional\ntouch /mnt/tinycore/tce/onboot.lst\ngrub-install --boot-directory=/mnt/tinycore/boot\ncp /mnt/cdrom/boot/vmlinuz\n```\n"
  },
  {
    "path": "qemu/job.json",
    "content": "{\n    \"Job\": {\n        \"Affinities\": null,\n        \"AllAtOnce\": false,\n        \"Constraints\": null,\n        \"ConsulToken\": \"\",\n        \"CreateIndex\": 170289,\n        \"Datacenters\": [\n            \"dc1\"\n        ],\n        \"Dispatched\": false,\n        \"ID\": \"example\",\n        \"JobModifyIndex\": 170289,\n        \"Meta\": null,\n        \"Migrate\": null,\n        \"ModifyIndex\": 170290,\n        \"Multiregion\": null,\n        \"Name\": \"example\",\n        \"Namespace\": \"default\",\n        \"NomadTokenID\": \"\",\n        \"ParameterizedJob\": null,\n        \"ParentID\": \"\",\n        \"Payload\": null,\n        \"Periodic\": null,\n        \"Priority\": 50,\n        \"Region\": \"global\",\n        \"Reschedule\": null,\n        \"Spreads\": null,\n        \"Stable\": false,\n        \"Status\": \"dead\",\n        \"StatusDescription\": \"\",\n        \"Stop\": true,\n        \"SubmitTime\": 1621343037528980394,\n        \"TaskGroups\": [\n            {\n                \"Affinities\": null,\n                \"Constraints\": null,\n                \"Count\": 1,\n                \"EphemeralDisk\": {\n                    \"Migrate\": false,\n                    \"SizeMB\": 300,\n                    \"Sticky\": false\n                },\n                \"Meta\": null,\n                \"Migrate\": {\n                    \"HealthCheck\": \"checks\",\n                    \"HealthyDeadline\": 300000000000,\n                    \"MaxParallel\": 1,\n                    \"MinHealthyTime\": 10000000000\n                },\n                \"Name\": \"cache\",\n                \"Networks\": [\n                    {\n                        \"CIDR\": \"\",\n                        \"DNS\": null,\n                        \"Device\": \"\",\n                        \"DynamicPorts\": [\n                            {\n                                \"HostNetwork\": \"default\",\n                                \"Label\": 
\"db\",\n                                \"To\": 6379,\n                                \"Value\": 0\n                            }\n                        ],\n                        \"IP\": \"\",\n                        \"MBits\": 0,\n                        \"Mode\": \"\",\n                        \"ReservedPorts\": null\n                    }\n                ],\n                \"ReschedulePolicy\": {\n                    \"Attempts\": 0,\n                    \"Delay\": 30000000000,\n                    \"DelayFunction\": \"exponential\",\n                    \"Interval\": 0,\n                    \"MaxDelay\": 3600000000000,\n                    \"Unlimited\": true\n                },\n                \"RestartPolicy\": {\n                    \"Attempts\": 2,\n                    \"Delay\": 15000000000,\n                    \"Interval\": 1800000000000,\n                    \"Mode\": \"fail\"\n                },\n                \"Scaling\": null,\n                \"Services\": null,\n                \"ShutdownDelay\": null,\n                \"Spreads\": null,\n                \"StopAfterClientDisconnect\": null,\n                \"Tasks\": [\n                    {\n                        \"Affinities\": null,\n                        \"Artifacts\": null,\n                        \"Config\": {\n                            \"image\": \"redis:7\",\n                            \"ports\": [\n                                \"db\"\n                            ]\n                        },\n                        \"Constraints\": null,\n                        \"DispatchPayload\": null,\n                        \"Driver\": \"docker\",\n                        \"Env\": null,\n                        \"KillSignal\": \"\",\n                        \"KillTimeout\": 5000000000,\n                        \"Kind\": \"\",\n                        \"Leader\": false,\n                        \"Lifecycle\": null,\n                        \"LogConfig\": {\n                    
        \"MaxFileSizeMB\": 10,\n                            \"MaxFiles\": 10\n                        },\n                        \"Meta\": null,\n                        \"Name\": \"redis\",\n                        \"Resources\": {\n                            \"CPU\": 500,\n                            \"Devices\": null,\n                            \"DiskMB\": 0,\n                            \"IOPS\": 0,\n                            \"MemoryMB\": 256,\n                            \"Networks\": null\n                        },\n                        \"RestartPolicy\": {\n                            \"Attempts\": 2,\n                            \"Delay\": 15000000000,\n                            \"Interval\": 1800000000000,\n                            \"Mode\": \"fail\"\n                        },\n                        \"ScalingPolicies\": null,\n                        \"Services\": null,\n                        \"ShutdownDelay\": 0,\n                        \"Templates\": null,\n                        \"User\": \"\",\n                        \"Vault\": null,\n                        \"VolumeMounts\": null\n                    }\n                ],\n                \"Update\": {\n                    \"AutoPromote\": false,\n                    \"AutoRevert\": false,\n                    \"Canary\": 0,\n                    \"HealthCheck\": \"checks\",\n                    \"HealthyDeadline\": 300000000000,\n                    \"MaxParallel\": 1,\n                    \"MinHealthyTime\": 10000000000,\n                    \"ProgressDeadline\": 600000000000,\n                    \"Stagger\": 30000000000\n                },\n                \"Volumes\": null\n            }\n        ],\n        \"Type\": \"service\",\n        \"Update\": {\n            \"AutoPromote\": false,\n            \"AutoRevert\": false,\n            \"Canary\": 0,\n            \"HealthCheck\": \"\",\n            \"HealthyDeadline\": 0,\n            \"MaxParallel\": 1,\n            
\"MinHealthyTime\": 0,\n            \"ProgressDeadline\": 0,\n            \"Stagger\": 30000000000\n        },\n        \"VaultNamespace\": \"\",\n        \"VaultToken\": \"\",\n        \"Version\": 0\n    }\n}\n"
  },
  {
    "path": "qemu/tc_ssh.nomad",
    "content": "job \"j1\" {\n  datacenters = [\"dc1\"]\n\n  group \"g1\" {\n\n    network {\n      port \"http\" { \n        to = -1\n      }\n      port \"ssh\" {\n        to = -1\n      }\n    }\n\n    service {\n      tags = [\"tag1\"]\n      port = \"http\"\n\n      check {\n        type     = \"http\"\n        port     = \"http\"\n        path     = \"/index.html\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"t1\" {\n      template {\n        data = <<EOH\n      Guest System\n      EOH\n\n        destination = \"local/index.html\"\n      }\n\n      artifact {\n        source = \"http://10.0.0.254:8000/tinycore.qcow2\"\n        destination = \"tinycore.qcow2\"\n        mode = \"file\"\n      }\n\n      driver = \"qemu\"\n\n      config {\n        image_path = \"tinycore.qcow2\"\n\n        ## Uncomment if KVM is available on your system\n        accelerator = \"kvm\"\n\n        args = [\n          \"-device\",\n          \"e1000,netdev=user.0\",\n          \"-netdev\",\n          \"user,id=user.0,hostfwd=tcp::${NOMAD_PORT_http}-:80,hostfwd=tcp::${NOMAD_PORT_ssh}-:22\",\n#          \"-drive\", \"file=fat:rw:/etc,format=raw,media=disk\",\n          \"-drive\", \"file=fat:rw:./local,format=raw,media=disk\"\n        ]\n      }\n    }\n  }\n}\n\n#-blockdev driver=qcow2,node-name=disk,file.driver=http,file.filename=http://example.com/image.qcow2\n"
  },
  {
    "path": "qemu/tc_ssh2.nomad",
    "content": "job \"j1\" {\n  datacenters = [\"dc1\"]\n\n  group \"g1\" {\n\n    network {\n      port \"http\"{}\n      port \"ssh\"{}\n    }\n\n    service {\n      tags = [\"tag1\"]\n      port = \"http\"\n\n      check {\n        type     = \"http\"\n        port     = \"http\"\n        path     = \"/index.html\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n\n    task \"t1\" {\n      template {\n        data = <<EOH\n      Guest System\n      EOH\n\n        destination = \"local/index.html\"\n      }\n\n      artifact {\n        source = \"http://10.0.0.166:8000/tinycore.qcow2\"\n      }\n\n      driver = \"qemu\"\n\n      config {\n        image_path = \"local/tinycore.qcow2\"\n\n        ## Uncomment if KVM is available on your system\n        #        accelerator = \"kvm\"\n\n        args = [\n          \"-device\",\n          \"e1000,netdev=user.0\",\n          \"-netdev\",\n          \"user,id=user.0,hostfwd=tcp::${NOMAD_PORT_http}-:80,hostfwd=tcp::${NOMAD_PORT_ssh}-:22\",\n        ]\n\n        # , \"-drive\", \"file=fat:rw:/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/${NOMAD_TASK_NAME}/local,format=raw,media=disk\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "qemu/tc_ssh_arm.nomad",
    "content": "job \"j1\" {\n  datacenters = [\"dc1\"]\n\n  group \"g1\" {\n\n    network {\n      port \"http\" { \n        to = -1\n      }\n      port \"ssh\" {\n        to = -1\n      }\n    }\n\n    service {\n      tags = [\"tag1\"]\n      port = \"http\"\n\n      check {\n        type     = \"http\"\n        port     = \"http\"\n        path     = \"/index.html\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"t1\" {\n      template {\n        data = <<EOH\n      Guest System\n      EOH\n\n        destination = \"local/index.html\"\n      }\n\n      artifact {\n        source = \"http://10.0.0.166:8000/Arch-Linux-.qcow2\"\n        destination = \"tinycore.qcow2\"\n        mode = \"file\"\n      }\n\n      driver = \"qemu\"\n\n      config {\n        image_path = \"tinycore.qcow2\"\n\n        ## Uncomment if KVM is available on your system\n        accelerator = \"kvm\"\n\n        args = [\n          \"-device\",\n          \"e1000,netdev=user.0\",\n          \"-netdev\",\n          \"user,id=user.0,hostfwd=tcp::${NOMAD_PORT_http}-:80,hostfwd=tcp::${NOMAD_PORT_ssh}-:22\",\n#          \"-drive\", \"file=fat:rw:/etc,format=raw,media=disk\",\n          \"-drive\", \"file=fat:rw:./local,format=raw,media=disk\"\n        ]\n      }\n    }\n  }\n}\n\n#-blockdev driver=qcow2,node-name=disk,file.driver=http,file.filename=http://example.com/image.qcow2"
  },
  {
    "path": "raw_exec/env.nomad",
    "content": "job \"env\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"group\" {\n    count = 1\n    task \"env\" {\n      driver = \"raw_exec\"\n      config { \n        command = \"env\"\n        args = []\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "raw_exec/mkdir/README.md",
    "content": "# Using mkdir\n\nThis example demonstrates using mkdir to create a few directories on the host before running a job.\n\n- [mkdir.nomad](mkdir.nomad) - demonstrates the use of mkdir; however, it also illustrates that there is no bash expansion because there is no shell running to perform the expansion.\n\n- [mkdir-bash.nomad](mkdir-bash.nomad) - corrects the job to allow the creation of multiple directories via shell expansion by starting a shell and _then_ calling mkdir.\n\n"
  },
  {
    "path": "raw_exec/mkdir/mkdir-bash.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"group\" {\n    count = 1\n    task \"mkdir\" {\n      driver = \"raw_exec\"\n      config { \n        command = \"bash\" \n        args = [\"-c\", \"rm -rf /var/log/service; mkdir -p /var/log/service/{watch,export}\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "raw_exec/mkdir/mkdir.nomad",
    "content": "job \"mkdir\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"group\" {\n    count = 1\n    task \"mkdir\" {\n      driver = \"raw_exec\"\n      config { \n        command = \"mkdir\"\n        # This will create a directory named `/var/log/service/{watch,export}`\n        # which is probably not what you want. \n        args = [\"-p\", \"/var/log/service/{watch,export}\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "raw_exec/ps.nomad",
    "content": "job \"mkdir\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"group\" {\n    count = 1\n    task \"mkdir\" {\n      driver = \"raw_exec\"\n      config { \n        command = \"ps\"\n        args = [\"-aef\", \"--forest\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "raw_exec/quoted_args/quoted_args.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"group\" {\n    count = 1\n    task \"mkdir\" {\n      driver = \"raw_exec\"\n      config { \n        command = \"bash\" \n        args = [\"-c\", \"bash -c \\\"tail -f /dev/null\\\"\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "raw_exec/quoted_args/quoted_args_2.nomad",
    "content": "job \"quoted\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  group \"group\" {\n    count = 1\n    task \"payload\" {\n      driver = \"exec\"\n      config { \n        command = \"bash\" \n        args = [\"-c\", \"bash -c \\\"tail -f /dev/null\\\"\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "raw_exec/user/example.nomad",
    "content": "job \"raw_exec\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"user\" {\n    task \"test\" {\n      driver = \"raw_exec\"\n      user = \"nomad\"\n\n      config {\n\tcommand = \"/usr/bin/whoami\"\n        args = []\n      }\n\n      resources {\n        cpu    = 100\n        memory = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "reproductions/cpu_rescheduling/README.md",
    "content": "## Does changing CPU stimulate a full reschedule?\n\nSo I saw this conversation in Gitter:\n\n>thenon @thenon:matrix.org [m]  Apr 23 08:16\n>Hi all. So I've got a job scheduled in nomad, requiring 200 CPU, and its running on node X. Node X has plenty of resources available, let's say 1000 CPU. If I update the definition of the job to require 201 CPU, it of course triggers another evaluation. But if the result of that evaluation is zero changes (job remains on node X, and zero other changes to any jobs etc. etc. ) I would expect no actions to happen. Instead, my job is stopped, and started. This is bad for me to interrupt the job for no reason. Is there any way to avoid this behaviour? Thanks for any pointers.\n\n>Florian Apolloner @apollo13 Apr 23 08:55\n>changing cpu limits sounds like a change to me and not zero changes ;)\n\n>thenon @thenon:matrix.org [m]  Apr 23 08:57\n>:) but there are zero effective changes. The same jobs end up running on the same nodes. Its a null op in terms of allocations.... b\n\n>thenon @thenon:matrix.org [m]  Apr 23 09:04\n>(what we're tryign to do here is achieve dynamic bin packing. Something out of band is watching the resource usage + other stuff, and updating things like cpu/memory usage, for a job. Most of the time we'd expect no changes, jobs are find on current nodes. When something has changed (e.g. job gets busier) enough that a reallocation results in jobs moving to another node, that's fine. Great even ! That's Nomad doing the hard job of pin backing properly. But right now, jobs restart, for no reason.... )\n\n>manveru @manveru:matrix.org [m]  Apr 23 10:24\n>thenon: i think the initial idea was that nomad would somehow enforce the cpu reservation like it does with the memory one... but i guess that never got implemented, just the restarts remain :(\n\n>thenon @thenon:matrix.org [m]  Apr 23 10:26\n>manveru: I don't know, I think my comment applies to anything. let's say you changed a meta constraint value. 
and the end result of the re-evaluation of everything was: no changes - every job is already running where it should be, based on new constraint values. why restart the job?\n\n### Let's do a repro\n\n1. I created the example Nomad job using `nomad init --short repro.nomad`\n\n1. Started up a Nomad dev agent in another window\n\n"
  },
  {
    "path": "reproductions/cpu_rescheduling/repro.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "reschedule/ex.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  update {\n    healthy_deadline  = \"3m\"\n  }\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    reschedule {\n      attempts  = 15\n      interval  = \"1h\"\n      max_delay = \"120s\"\n      unlimited = false\n    }\n\n    service {\n      name = \"redis-cache\"\n      tags = [\"global\", \"cache\"]\n      port = \"db\"\n\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n\n        check_restart {\n          limit           = 2\n          grace           = \"10s\"\n          ignore_warnings = false\n        }\n      }\n    }\n\n    task \"redis\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args    = [\n          \"-c\", \"SLEEP_SECS=2; while true; do echo $(date) -- Alive... going back to sleep for ${SLEEP_SECS}; sleep ${SLEEP_SECS}; done\"\n        ]\n      }\n\n      resources {\n        memory = 10\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "restart/restart.nomad",
    "content": "job \"fail-service\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n\n  reschedule {\n    delay = \"15s\"\n    delay_function = \"constant\"\n    unlimited = true\n  }\n\n  group \"api\" {\n    count = 1\n\n    restart {\n      attempts = 3\n      interval = \"30s\"\n      delay = \"5s\"\n      mode = \"fail\"\n    }\n\n    network {\n      mode = \"bridge\"\n      port \"http\" {\n        to = 8080\n      }\n    }\n\n    service = {\n      name = \"fail-service-nomad\"\n      port = \"http\"\n\n      check {\n        type = \"http\"\n        port = \"http\"\n        path = \"/health\"\n        interval = \"10s\"\n        timeout = \"2s\"\n\n        check_restart {\n          limit = 1\n          grace = \"10s\"\n          ignore_warnings = false\n        }\n      }\n    }\n\n    task \"main\" {\n      driver = \"docker\"\n\n      config {\n        image = \"thobe/fail_service:v0.1.0\"\n        ports = [\"http\"]\n      }\n\n      env = {\n        HEALTHY_FOR = 20\n        UNHEALTHY_FOR = 120\n      }\n\n      resources = {\n        cpu = 100\n        memory = 128\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "rolling_upgrade/README.md",
    "content": "## Rolling Upgrades\n\nThis sample demonstrates the behavior of rolling upgrades in a Nomad cluster. \n\nInstructions:\n\nRun the sample job:\n\n```\nnomad run example.nomad\n```\n\nThis will deploy three instances of the sample redis container to the cluster.\n\nUpgrade the instances:\n\n```\nnomad run example-new.nomad\n```\n\nNomad should perform a rolling upgrade of the three instances.  It should wait for an instance to be healthy for one minute before moving to the next instance.\n\n> **NOTE:** The example job is currently sad and will not upgrade properly.  The cv version presents an alternative configuration file structure that upgrades as expected.\n\n"
  },
  {
    "path": "rolling_upgrade/cv-new.nomad",
    "content": "job \"rolling-upgrade-test\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"1m\"\n    health_check     = \"task_states\"\n  }\n\n  group \"zookeeper\" {\n    restart {\n      attempts = 2\n      delay    = \"15s\"\n      interval = \"1m\"\n      mode     = \"delay\"\n    }\n\n    count = 3\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:4.0\"\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "rolling_upgrade/cv.nomad",
    "content": "job \"rolling-upgrade-test\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"1m\"\n    health_check     = \"task_states\"\n  }\n\n  group \"zookeeper\" {\n    restart {\n      attempts = 2\n      delay    = \"15s\"\n      interval = \"1m\"\n      mode     = \"delay\"\n    }\n\n    count = 3\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "rolling_upgrade/example-new.nomad",
    "content": "job \"rolling-upgrade-test\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"1m\"\n    health_check     = \"task_states\"\n  }\n\n  group \"zookeeper-1\" {\n    restart {\n      attempts = 2\n      delay    = \"15s\"\n      interval = \"1m\"\n      mode     = \"delay\"\n    }\n\n    ephemeral_disk {\n      migrate = true\n      size    = \"300\"\n      sticky  = true\n    }\n\n    count = 1\n    task \"zookeeper-1\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:4.0\"\n      }\n    }\n  }\n\n  group \"zookeeper-2\" {\n    restart {\n      attempts = 2\n      delay    = \"15s\"\n      interval = \"1m\"\n      mode     = \"delay\"\n    }\n\n    ephemeral_disk {\n      migrate = true\n      size    = \"300\"\n      sticky  = true\n    }\n\n    count = 1\n    task \"zookeeper-2\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:4.0\"\n      }\n    }\n  }\n\n  group \"zookeeper-3\" {\n    restart {\n      attempts = 2\n      delay    = \"15s\"\n      interval = \"1m\"\n      mode     = \"delay\"\n    }\n\n    ephemeral_disk {\n      migrate = true\n      size    = \"300\"\n      sticky  = true\n    }\n\n    count = 1\n    task \"zookeeper-3\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:4.0\"\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "rolling_upgrade/example.nomad",
    "content": "job \"rolling-upgrade-test\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"1m\"\n    health_check     = \"task_states\"\n  }\n\n  group \"zookeeper-1\" {\n    restart {\n      attempts = 2\n      delay    = \"15s\"\n      interval = \"1m\"\n      mode     = \"delay\"\n    }\n\n    ephemeral_disk {\n      migrate = true\n      size    = \"300\"\n      sticky  = true\n    }\n\n    count = 1\n    task \"zookeeper-1\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n      }\n    }\n  }\n\n  group \"zookeeper-2\" {\n    restart {\n      attempts = 2\n      delay    = \"15s\"\n      interval = \"1m\"\n      mode     = \"delay\"\n    }\n\n    ephemeral_disk {\n      migrate = true\n      size    = \"300\"\n      sticky  = true\n    }\n\n    count = 1\n    task \"zookeeper-2\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n      }\n    }\n  }\n\n  group \"zookeeper-3\" {\n    restart {\n      attempts = 2\n      delay    = \"15s\"\n      interval = \"1m\"\n      mode     = \"delay\"\n    }\n\n    ephemeral_disk {\n      migrate = true\n      size    = \"300\"\n      sticky  = true\n    }\n\n    count = 1\n    task \"zookeeper-3\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "sentinel/README.md",
    "content": "## Sentinel Samples\n\nThese jobs utilize Sentinel for enforcement.  To use Sentinel, ACLs must be enabled on all of the nodes and bootstrapped.\n\n"
  },
  {
    "path": "sentinel/alwaysFalse.sentinel",
    "content": "# Test policy always fails for demonstration purposes\nmain = rule { false }\n\n"
  },
  {
    "path": "sentinel/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  constraint {\n    distinct_hosts = true\n  }\n\n  constraint {\n    attribute = \"${node.class}\"\n    value     = \"gpu\"\n  }\n  group \"cache\" {\n    network {\n      port \"db\" {}\n    }\n\n    service {\n      name = \"global-redis-check\"\n      tags = [\"global\", \"cache\"]\n      port = \"db\"\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n        ports = [\"db\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "sentinel/exampleGroupMissingNodeClass.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  constraint { distinct_hosts = true }\n  group \"cache\" {\n    count = 1\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n        port_map {\n          db = 6379\n        }\n      }\n      resources {\n        network {\n          port \"db\" {}\n        }\n      }\n    }\n  }\n  group \"cache2\" {\n    count = 1\n    constraint { attribute = \"${node.class}\" value = \"gpu\" }\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n        port_map {\n          db = 6379\n        }\n      }\n      resources {\n        network {\n          port \"db\" {}\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "sentinel/exampleGroupNodeClass.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  constraint { distinct_hosts = true }\n  group \"cache\" {\n    count = 1\n     constraint { attribute = \"${node.class}\" value = \"gpu\" }\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n        port_map {\n          db = 6379\n        }\n      }\n      resources {\n        network {\n          port \"db\" {}\n        }\n      }\n    }\n  }\n  group \"cache2\" {\n    count = 1\n    constraint { attribute = \"${node.class}\" value = \"gpu\" }\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n        port_map {\n          db = 6379\n        }\n      }\n      resources {\n        network {\n          port \"db\" {}\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "sentinel/exampleJobNodeClass.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  constraint {\n    distinct_hosts = true\n  }\n\n  constraint {\n    attribute = \"${node.class}\"\n    value    = \"gpu\"\n  }\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    service {\n      name = \"global-redis-check\"\n      tags = [\"global\", \"cache\"]\n      port = \"db\"\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        ports = [\"db\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "sentinel/exampleNoNodeClass.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  constraint {\n    distinct_hosts = true\n  }\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        ports = [\"db\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "sentinel/payload.json",
    "content": "{\n    \"Name\": \"anonymous\",\n    \"Description\": \"Allow read-only access for anonymous requests\",\n    \"Rules\": \"\n        namespace \\\"default\\\" {\n            policy = \\\"read\\\"\n        }\n        agent {\n            policy = \\\"read\\\"\n        }\n        node {\n            policy = \\\"read\\\"\n        }\n    \"\n}\n"
  },
  {
    "path": "sentinel/requireNodeClass.sentinel",
    "content": "REQUIRED_CONSTRAINT = \"${node.class}\"\n\nmain = rule {\n    job_has_required_constraint or all_groups_have_required_constraint\n}\n\njob_has_required_constraint = rule { has_job_required_constraint() }\nall_groups_have_required_constraint = rule { has_groups_required_constraint() }\n\nhas_job_required_constraint = func() {\n    for job.constraints as constraint {\n        if constraint.l_target is REQUIRED_CONSTRAINT {\n            return true\n        } \n    }\n    return false\n}\n\nhas_groups_required_constraint = func() {\n    for job.task_groups as tg {\n        group_has_required_constraint = false\n        for tg.constraints as constraint {\n            if constraint.l_target is REQUIRED_CONSTRAINT {\n                group_has_required_constraint = true\n            }\n        }\n        # if there is a group with no node_class, we can stop looking\n        # and fail quickly\n        if not(group_has_required_constraint) {\n            print(tg.name)\n            msg =  \"Test\" + \".\"\n            print(msg) \n            return false \n        }  \n    }\n    # If we make it here, all of the task groups have node_class set\n    return true\n}\n"
  },
  {
    "path": "server-variables/README.md",
    "content": "# WordPress\n\nThis job demonstrates several useful patterns for creating Nomad jobs:\n\n- Nomad Host Volumes for persistent storage\n- Using a pre-start task to wait until a dependency is available\n- Template driven configuration to reduce static port references\n\n## Prerequisites\n\n- **Consul** — This job leverages Consul service registrations to locate\n  the supporting MySQL instance.\n\n## Necessary configuration\n\n### Create the host volume in the configuration\n\nCreate a folder on one of your Nomad clients to host your registry files. This\nexample uses `/opt/nomad/volumes/wordpress-db`.\n\n```shell-session\nmkdir -p /opt/nomad/volumes/wordpress-db\n```\n\nAdd the `host_volume` information to the client stanza in the Nomad configuration.\nIf your `-config` flag points to a directory, you can create this as a standalone\nfile in that same folder.\n\n```hcl\nclient {\n# ...\n  host_volume \"my-website-db\" {\n    path = \"/opt/nomad/volumes/my-website-db\"\n    read_only = false\n  }\n}\n```\n\nRestart Nomad to read the new configuration.\n\n```shell\nsystemctl restart nomad\n```\n"
  },
  {
    "path": "server-variables/build-site.nomad",
    "content": "job \"build-site\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  parameterized {\n    meta_required = [\"site_name\"]\n  }\n\n  group \"sitebuilder\" {\n    task \"generate-password\" {\n\n      lifecycle {\n        hook = \"prestart\"\n        sidecar = false\n      }\n\n      template {\n        destination = \"secret/generate_keys.sh\"\n        env = true\n        data =<< EOT\n#!/bin/bash\n{{- $NMSN := env \"NOMAD_META_site_name\" -}}\n{{- $UUID := \"${uuidv4}\" -}}\nSite={{ $NMSN }}\nUUID={{ $UUID }}\nCONSUL_HTTP_TOKEN=c62d8564-c0c5-8dfe-3e75-005debbd0e40\necho \"Creating credentials for site $Site...\"\nconsul kv put wordpress/sites/$Site/db/user wp-site-$Site\nconsul kv put wordpress/sites/$Site/db/pass $UUID\nconsul kv put wordpress/sites/$Site/db/name wordpress-$Site\nEOT\n      }\n\n      driver = \"raw_exec\"\n      command = \"secret/generate_keys.sh\"\n    }\n\n    task \"make-database\" {\n\n      template {\n        destination = \"local/run.sql\"\n        data = << EOT\nCREATE DATABASE {{ printf \"wordpress-%s\" .Name }};\nCREATE USER {{ .User }} identified by {{ .Pass }};\n\nEOT\n      }\n\n      template {\n        destination = \"secrets/env.txt\"\n        env = true\n        data = << EOT\nMYSQL_PASSWORD=somewordpress\nEOT\n      }\n\n      driver = \"docker\"\n\n      config {\n        image = \"arey/mysql-client\"\n        args = [\n          \"--host=${MYSQL_HOST}\",\n          \"--port=${MYSQL_PORT}\",\n          \"--user=root\"\n          \"--password=${MYSQL_PASSWORD}\",\n          \"--execute=\\\"source /local/run.sql\\\"\"\n        ]\n      }\n    }\n  }\n}\n\n# $ docker run -v <path to sql>:/sql --link <mysql server container name>:mysql -it arey/mysql-client -h mysql -p <password> -D <database name> -e \"source /sql/<your sql file>\"\n"
  },
  {
    "path": "server-variables/nginx.nomad",
    "content": "job \"nginx\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n\n  group \"nginx\" {\n    network {\n      port \"http\" {\n        static = 80\n      }\n    }\n\n    service {\n      name = \"wp\"\n      port = \"http\"\n    }\n\n    task \"nginx\" {\n      driver = \"docker\"\n\n      config {\n        image = \"nginx\"\n\n        ports = [\"http\"]\n\n        volumes = [\n          \"local:/etc/nginx/conf.d\",\n        ]\n      }\n\n      template {\n        data = <<EOF\n{{- $ServicesByTag := (service \"wordpress-sites\" | byTag) -}}{{- $I :=0 -}}\n{{- /* {{- printf \"http {\\n\" -}} */ -}}\n{{- range $ServiceTag, $services := $ServicesByTag -}}\n{{- if gt $I 0 -}}{{- printf \"\\n\\n\" -}}{{- end -}}\n{{- printf \"##\\n## %s \\n##\\n\" $ServiceTag -}}\n{{- printf \"  upstream %s {\\n\" $ServiceTag -}}\n    {{- range $services -}}\n       {{- printf \"    server %s:%d;\\n\" .Address .Port -}}\n    {{- else -}}\n       {{- printf \"    server 127.0.0.1:65535; # force a 502\\n\" -}}\n    {{- end -}}\n{{- printf \"  }\\n\" }}\n  server {\n    listen 80;\n    server_name {{$ServiceTag}}.wp.service.consul;\n\n    location / {\n      proxy_pass http://{{$ServiceTag}};\n    }\n  }\n{{- $I = add $I 1 -}}\n{{- end -}}\n{{- printf \"\\n\" -}}\n{{- /* {{- printf \"}\\n\" -}} */ -}}\nEOF\n\n        destination   = \"local/load-balancer.conf\"\n        change_mode   = \"signal\"\n        change_signal = \"SIGHUP\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "server-variables/reset.sh",
    "content": ""
  },
  {
    "path": "server-variables/wordpress-db.nomad",
    "content": "job \"wordpress-db\" {\n  datacenters = [\"dc1\"]\n\n  group \"database\" {\n    network {\n      port \"db\" {\n        to = 3306\n      }\n    }\n\n    service {\n      name = \"wordpress-db\"\n      port = \"db\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"db\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"wordpress-db\" {\n      type      = \"host\"\n      source    = \"wordpress-db\"\n      read_only = false\n    }\n\n    task \"mysql\" {\n      driver = \"docker\"\n\n      env {\n        MYSQL_ROOT_PASSWORD=\"somewordpress\"\n        MYSQL_DATABASE=\"wordpress\"\n        MYSQL_USER=\"wordpress\"\n        MYSQL_PASSWORD=\"wordpress\"\n      }\n\n      volume_mount {\n        volume      = \"wordpress-db\"\n        destination = \"/var/lib/mysql\"\n      }\n\n      config {\n        image = \"mysql:5.7\"\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}"
  },
  {
    "path": "server-variables/wordpress.nomad",
    "content": "variable \"site_name\" {\n  type = string\n  description = \"The site_name is used to set the consul tag for the website. This makes them available at \\\"site_name.wordpress-sites.service.consul\\\"\"\n}\n\njob \"my-website\" {\n  name = \"wp-site-${var.site_name}\"\n  id = \"wp-site-${var.site_name}\"\n  datacenters = [\"dc1\"]\n\n  group \"wordpress\" {\n    count = 2\n\n    network {\n      port \"http\" {\n        to = 80\n      }\n    }\n\n    service {\n      name = \"wordpress-sites\"\n      tags = [\"${var.site_name}\"]\n      port = \"http\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"http\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"await-wordpress-db\" {\n      driver = \"docker\"\n\n      template {\n        destination = \"local/await-db.sh\"\n        perms = 700\n        data =<<EOT\n#!/bin/sh\necho -n 'Waiting for wordpress-db service...'\nuntil nslookup -port=8600 wordpress-db.service.consul ${NOMAD_IP_http} 2>&1 >/dev/null\ndo\n  echo -n '.'\n  sleep 2\n  # There is a good opportunity to add a loop counter and a bail-out too, but\n  # this script waits forever.\ndone\necho \" Done.\"\nEOT\n      }\n\n      config {\n        image        = \"alpine:latest\"\n        command      = \"local/await-db.sh\"\n        network_mode = \"host\"\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = false\n      }\n    }\n\n    task \"wordpress\" {\n      driver = \"docker\"\n\n      template {\n        data = <<EOH\n{{- if service \"wordpress-db\" -}}\n{{- with index (service \"wordpress-db\") 0 -}}\nWORDPRESS_DB_HOST={{ .Address }}:{{ .Port }}\n{{- end -}}\n{{- end }}\nWORDPRESS_DB_USER=wordpress\nWORDPRESS_DB_PASSWORD=wordpress\nWORDPRESS_DB_NAME=wordpress-${var.site_name}\n  EOH\n\n        destination = \"local/envvars.txt\"\n        env = true\n      }\n\n      config {\n      
  image = \"wordpress:latest\"\n        ports = [\"http\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}"
  },
  {
    "path": "sleepy/README.md",
    "content": "## Sleepy\n\nThis is a set of binaries that perform dumb loops over time in the exec driver and log each time it wakes up.  They are useful for creating workload simulators.\n\n\n"
  },
  {
    "path": "sleepy/sleepy_bash/sleepy.nomad",
    "content": "job sleepy {\n  datacenters = [\"dc1\"]\n  group \"group\" {\n    count = 1\n\n## You might want to constrain this, so here's one to help\n#    constraint {\n#      attribute = \"${attr.unique.hostname}\"\n#      operator  = \"=\"\n#      value     = \"nomad-client-1.node.consul\"\n#    }\n\ntask \"sleepy.sh\" {\n      template {\n        data =<<EOH\n#!/bin/bash\n\nSLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds\necho \"$(date) - Starting. SLEEP_SECS=${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for ${SLEEP_SECS} seconds.\"; sleep ${SLEEP_SECS}; done\n\nEOH\n        destination = \"local/sleepy.sh\"\n      }\n\n      driver = \"exec\"\n \n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n \n      resources {\n        memory = 100\n        cpu = 100\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "sleepy/sleepy_python/README.md",
    "content": "# sleepy_python\n\nBackground : The Python interpreter buffers output to sys.stdout by default. We have to flush this buffer regularly in order to see this output using the `nomad alloc logs ...` or Nomad web UI.\n\nSolution : do sys.stdout.flush() after write\n"
  },
  {
    "path": "sleepy/sleepy_python/batch_sleepy_python.nomad",
    "content": "job sleepy {\n  type = \"batch\"\n  datacenters = [\"dc1\"]\n  group \"group\" {\n    count = 6\n\n## You might want to constrain this, so here's one to help\n#    constraint {\n#      attribute = \"${attr.unique.hostname}\"\n#      operator  = \"=\"\n#      value     = \"nomad-client-1.node.consul\"\n#    }\n    task \"python\" {\n      template {\n        data = <<EOH\n#! /usr/bin/python\n\nimport datetime\nimport time\nimport sys\nprint(str(datetime.datetime.now())+\" - Starting.\")\nsys.stdout.flush()\nwhile True:\n    print(str(datetime.datetime.now())+\" - Sleeping for 5 seconds.\")\n    sys.stdout.flush()\n    time.sleep(5)\nprint(str(datetime.datetime.now())+\" - Ending.\")\nsys.stdout.flush()\nEOH\n        destination = \"local/files.py\"\n      }\n\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/files.py\"\n      }\n\n      resources {\n        memory = 100\n        cpu = 100\n        network {\n          port \"http\" {}\n        }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "sleepy/sleepy_python/sleepy_python.nomad",
    "content": "job sleepy {\n  datacenters = [\"dc1\"]\n  group \"group\" {\n    count = 1\n\n## You might want to constrain this, so here's one to help\n#    constraint {\n#      attribute = \"${attr.unique.hostname}\"\n#      operator  = \"=\"\n#      value     = \"nomad-client-1.node.consul\"\n#    }\n    task \"python\" {\n      template {\n        data = <<EOH\n#! /usr/bin/python\n\nimport datetime\nimport time\nimport sys\nprint(str(datetime.datetime.now())+\" - Starting.\")\nsys.stdout.flush()\nwhile True:\n    print(str(datetime.datetime.now())+\" - Sleeping for 5 seconds.\")\n    sys.stdout.flush()\n    time.sleep(5)\nprint(str(datetime.datetime.now())+\" - Ending.\")\nsys.stdout.flush()\nEOH\n        destination = \"local/files.py\"\n      }\n\n      driver = \"exec\"\n\n      config {\n        command = \"python\"\n        args = [\"${NOMAD_TASK_DIR}/files.py\"]\n        # command = \"${NOMAD_TASK_DIR}/files.py\"\n      }\n\n      resources {\n        memory = 100\n        cpu = 50\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "spread/example.nomad",
    "content": "job \"exampleNUM\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n     ephemeral_disk {\n        size = \"11\"\n      }\n    task \"redis\" {\n      logs {\n        max_files     = 1\n        max_file_size = 10\n      }\n      driver = \"docker\"\n\n      config {\n        image = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The service is running! && while true; do sleep 2; done\"]\n      }\n\n      resources {\n        cpu    = 50\n        memory = 50\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spread/scheduler.json",
    "content": "{\n  \"SchedulerAlgorithm\": \"spread\",\n  \"PreemptionConfig\": {\n    \"SystemSchedulerEnabled\": true,\n    \"BatchSchedulerEnabled\": true,\n    \"ServiceSchedulerEnabled\": true\n  }\n}\n"
  },
  {
    "path": "spread/scheduler_b.json",
    "content": "{\n  \"SchedulerAlgorithm\": \"binpack\",\n  \"PreemptionConfig\": {\n    \"SystemSchedulerEnabled\": true,\n    \"BatchSchedulerEnabled\": true,\n    \"ServiceSchedulerEnabled\": true\n  }\n}\n"
  },
  {
    "path": "stress/README.md",
    "content": "## Stress\n\nThese are some sample job files that leverage the `progrium/stress` docker container to make your cluster work hard.\n"
  },
  {
    "path": "stress/cpu_throttled_time/README.md",
    "content": "## cpu_throttled_time\n\nThis job demonstrates the nomad.client.allocs.cpu.throttled_time metric by providing a CPU-constrained docker environment and runnign stress inside of it.\n\nYou will need to have allocation metrics enabled on your Nomad clients:\n\n```\ntelemetry {\n  publish_allocation_metrics = true\n  publish_node_metrics       = true\n}\n```\n\n\n"
  },
  {
    "path": "stress/cpu_throttled_time/stress.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        cpu_hard_limit = true\n        image = \"pkane/train-os:latest\"\n        command = \"stress\"\n        args = [\n          \"-v\",\"--cpu\",\"2\",\"--io\", \"1\", \"--vm\", \"2\", \"--vm-bytes\", \"128M\", \"--timeout\", \"480s\"\n        ]\n        port_map {\n          db = 6379\n        }\n      }\n\n      resources {\n        cpu    = 50\n        memory = 256\n        network {\n          mbits = 10\n          port \"db\" {}\n        }\n      }\n\n      service {\n        name = \"redis-cache\"\n        tags = [\"global\", \"cache\"]\n        port = \"db\"\n        check {\n          name     = \"alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "super_big/README.md",
    "content": "### super_big\n\nThese nomad files are to create jobs that will naturally exhaust available memory on the nodes.\n\n"
  },
  {
    "path": "super_big/super_big.nomad",
    "content": "\njob \"sticky\" {\n  datacenters = [\"dc1\"]\n\n  update {\n    stagger = \"10s\"\n    max_parallel = 1\n  }\n\n  group \"cache\" {\n    count = 6\n\n    network {\n      port \"db\" {\n        to = 6378\n      }\n    }\n\n    ephemeral_disk {\n      sticky = true\n      migrate = true\n      size = 3000\n    }\n\n    service {\n      name = \"sticky-redis\"\n      tags = [\"global\", \"sticky\", \"redis\", \"cache\"]\n      port = \"db\"\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        ports = [\"db\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "super_big/super_big2.nomad",
    "content": "\njob \"super-big\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n\n  update {\n    stagger = \"10s\"\n    max_parallel = 1\n  }\n\n  group \"cache\" {\n    count = 6\n\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay = \"25s\"\n      mode = \"delay\"\n    }\n\n    ephemeral_disk {\n      sticky = true\n      migrate = true\n      size = 3000\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        port_map {\n          db = 6379\n        }\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n\n        network {\n          mbits = 10\n          port \"db\" {}\n        }\n      }\n\n      service {\n        name = \"sticky-redis\"\n        tags = [\"global\", \"sticky\", \"redis\", \"cache\"]\n        port = \"db\"\n        check {\n          name     = \"alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "system_jobs/sleepy/README.md",
    "content": "## Sleepy\n\nThis is a set of binaries that perform dumb loops over time in the exec driver and log each time it wakes up.  They are useful for creating workload simulators.\n\n\n"
  },
  {
    "path": "system_jobs/sleepy/sleepy_bash/sleepy.nomad",
    "content": "job sleepy-system {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  group \"group\" {\n    count = 1\n\n## You might want to constrain this, so here's one to help\n#    constraint {\n#      attribute = \"${attr.unique.hostname}\"\n#      operator  = \"=\"\n#      value     = \"nomad-client-1.node.consul\"\n#    }\n    task \"sleepy.sh\" {\n      template {\n        data = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=${SLEEP_SECS:-2} # provide default of 2 seconds\ninterruptable_sleep() { for i in $(seq 1 $((2*${1}))); do sleep .5; done; }\nsigint() { echo \"$(date) - SIGTERM received; Ending.\"; exit 0; }\ntrap 'sigint'  INT\necho \"$(date) - Starting. SLEEP_SECS=${SLEEP_SECS}\"\nwhile true; do echo \"$(date) - Sleeping for ${SLEEP_SECS} seconds.\"; interruptable_sleep ${SLEEP_SECS}; done\n\nEOH\n        destination = \"local/sleepy.sh\"\n      }\n\n      driver = \"exec\"\n      config { command = \"${NOMAD_TASK_DIR}/sleepy.sh\" }\n      resources { memory = 100 cpu = 100 }\n    }\n  }\n}\n\n"
  },
  {
    "path": "system_jobs/sleepy/sleepy_python/README.md",
    "content": "# sleepy_python\n\nLet's talk about this\n"
  },
  {
    "path": "system_jobs/sleepy/sleepy_python/batch_sleepy_python.nomad",
    "content": "job sleepy {\n  type = \"batch\"\n  datacenters = [\"dc1\"]\n  group \"group\" {\n    count = 6\n\n## You might want to constrain this, so here's one to help\n#    constraint {\n#      attribute = \"${attr.unique.hostname}\"\n#      operator  = \"=\"\n#      value     = \"nomad-client-1.node.consul\"\n#    }\n    task \"python\" {\n      template {\n        data = <<EOH\n#! /usr/bin/python\n\nimport datetime\nimport time\nimport sys\nprint(str(datetime.datetime.now())+\" - Starting.\")\nsys.stdout.flush()\nwhile True:\n    print(str(datetime.datetime.now())+\" - Sleeping for 5 seconds.\")\n    sys.stdout.flush()\n    time.sleep(5)\nprint(str(datetime.datetime.now())+\" - Ending.\")\nsys.stdout.flush()\nEOH\n        destination = \"local/files.py\"\n      }\n\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/files.py\"\n      }\n\n      resources {\n        memory = 100\n        cpu = 100\n        network {\n          port \"http\" {}\n        }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "system_jobs/sleepy/sleepy_python/sleepy_python.nomad",
    "content": "job sleepy {\n  datacenters = [\"dc1\"]\n  group \"group\" {\n    count = 1\n\n## You might want to constrain this, so here's one to help\n#    constraint {\n#      attribute = \"${attr.unique.hostname}\"\n#      operator  = \"=\"\n#      value     = \"nomad-client-1.node.consul\"\n#    }\n    task \"python\" {\n      template {\n        data = <<EOH\n#! /usr/bin/python\n\nimport datetime\nimport time\nimport sys\nprint(str(datetime.datetime.now())+\" - Starting.\")\nsys.stdout.flush()\nwhile True:\n    print(str(datetime.datetime.now())+\" - Sleeping for 5 seconds.\")\n    sys.stdout.flush()\n    time.sleep(5)\nprint(str(datetime.datetime.now())+\" - Ending.\")\nsys.stdout.flush()\nEOH\n        destination = \"local/files.py\"\n      }\n\n      driver = \"exec\"\n\n      config {\n        command = \"python\"\n        args = [\"${NOMAD_TASK_DIR}/files.py\"]\n        # command = \"${NOMAD_TASK_DIR}/files.py\"\n      }\n\n      resources {\n        memory = 10\n        cpu = 50\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "system_jobs/system_deployment/deploy_jdk.nomad",
    "content": "job deploy_jdk {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n\n  group \"group\" {\n    task \"deploy_and_sleep\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"/bin/bash\"\n        args = [\"-c\", \"yum install java; echo \\\"Deployment Complete\\\"; while true; do echo -n \\\".\\\"; sleep 5; done\"]\n      }\n\n      resources {\n        memory = 10\n        cpu = 50\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "system_jobs/system_deployment/fabio-system.nomad",
    "content": "job \"fabio\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  update {\n      max_parallel     = 1\n      canary           = 1\n      min_healthy_time = \"30s\"\n      healthy_deadline = \"2m\"\n      auto_revert      = true\n  }\n  group \"linux-amd64\" {\n    task \"fabio\" {\n      constraint {\n        attribute = \"${attr.cpu.arch}\"\n        operator  = \"=\"\n        value     = \"amd64\"\n      }\n      constraint {\n        attribute = \"${attr.kernel.name}\"\n        operator  = \"=\"\n        value     = \"linux\"\n      }\n      driver = \"exec\"\n      config { command = \"fabio-1.5.2-go1.8.3-linux_amd64\" }\n      artifact {\n        source = \"https://github.com/fabiolb/fabio/releases/download/v1.5.2/fabio-1.5.2-go1.8.3-linux_amd64\"\n#        options {\n#          checksum = \"sha256:7dc786c3dfd8c770d20e524629d0d7cd2cf8bb84a1bf98605405800b28705198\"\n#        }\n      }\n      resources {\n        cpu = 200\n        memory = 32\n        network {\n          mbits = 1\n          port \"http\" {static=9999}\n          port \"ui\" {static=9998}\n        }\n      }\n      service {\n        tags = [\"fabio\", \"lb\"]\n        canary_tags = [\"fabio-canary\", \"lb-canary\"]\n        port = \"ui\"\n        check {\n          name     = \"fabio ui port is alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n        check {\n          name     = \"fabio health check\"\n          type     = \"http\"\n          path     = \"/health\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "system_jobs/system_deployment/fabio-system.nomad2",
    "content": "job \"fabio\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  update {\n      max_parallel     = 1\n      canary           = 1\n      min_healthy_time = \"30s\"\n      healthy_deadline = \"2m\"\n      auto_revert      = true\n  }\n  group \"linux-amd64\" {\n    task \"fabio\" {\n      constraint {\n        attribute = \"${attr.cpu.arch}\"\n        operator  = \"=\"\n        value     = \"amd64\"\n      }\n      constraint {\n        attribute = \"${attr.kernel.name}\"\n        operator  = \"=\"\n        value     = \"linux\"\n      }\n      driver = \"exec\"\n      config { command = \"fabio-1.5.9-go1.10.2-linux_amd64\" }\n      artifact {\n        source = \"https://github.com/fabiolb/fabio/releases/download/v1.5.9/fabio-1.5.9-go1.10.2-linux_amd64\"\n      }\n      resources {\n        cpu = 200\n        memory = 32\n        network {\n          mbits = 1\n          port \"http\" {static=9999}\n          port \"ui\" {static=9998}\n        }\n      }\n      service {\n        tags = [\"fabio\", \"lb\"]\n        port = \"ui\"\n        check {\n          name     = \"fabio ui port is alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n        check {\n          name     = \"fabio health check\"\n          type     = \"http\"\n          path     = \"/health\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "system_jobs/system_deployment/foo-system.nomad",
    "content": "job \"foo-service\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  update {\n    max_parallel = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"2m\"\n    progress_deadline = \"5m\"\n    canary = 1\n  }\n  group \"example\" {\n    ephemeral_disk {\n       size = \"110\"\n    }\n\n    task \"server\" {\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\" \n      }\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to the Foo Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n\n      resources {\n        network {\n          port \"http\" {}\n        }\n      }\n\n      service {\n        name = \"foo-service\"\n        tags = [\"urlprefix-/foo\"]\n        canary_tags = [\"urlprefix-/cfoo\"]\n        port = \"http\"\n        check {\n          type = \"http\"\n          name = \"health-check\"\n          interval = \"15s\"\n          timeout = \"5s\"\n          path = \"/\"\n        }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "system_jobs/system_deployment/foo-system.nomad2",
    "content": "job \"foo-service\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  update {\n    max_parallel = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"2m\"\n    progress_deadline = \"5m\"\n    canary = 1\n  }\n  group \"example\" {\n    ephemeral_disk {\n       size = \"110\"\n    }\n\n    task \"server\" {\n      artifact {\n        source = \"https://github.com/hashicorp/http-echo/releases/download/v0.2.3/http-echo_0.2.3_linux_amd64.tar.gz\" \n      }\n      driver = \"exec\"\n\n      config {\n        command = \"http-echo\"\n        args = [\n          \"-listen\", \":${NOMAD_PORT_http}\",\n          \"-text\", \"<html><body><h1>Welcome to the NEW NEW NEW NEW Foo Service.</h1><hr />You are on ${NOMAD_IP_http}.</body></html>\",\n        ]\n      }\n\n      resources {\n        network {\n          port \"http\" {}\n        }\n      }\n\n      service {\n        name = \"foo-service\"\n        tags = [\"urlprefix-/foo\"]\n        canary_tags = [\"urlprefix-/cfoo\"]\n        port = \"http\"\n        check {\n          type = \"http\"\n          name = \"health-check\"\n          interval = \"15s\"\n          timeout = \"5s\"\n          path = \"/\"\n        }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "system_jobs/system_filter/filtered.nomad",
    "content": "job \"filtered\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  group \"cache\" {\n    constraint {\n      attribute = \"${attr.kernel.name}\"\n      operator  = \"=\"\n      value     = \"windows\"\n    }\n    task \"job\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"C:\\\\Windows\\\\System32\\\\notepad.exe\"\n      }\n\n      resources {\n        cpu    = 100\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "system_jobs/system_filter/host_vol.nomad",
    "content": "job \"registry-system\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  priority    = 80\n\n  group \"docker\" {\n    network {\n      port \"registry\" {\n        to     = 5000\n        static = 5000\n      }\n    }\n\n    service {\n      name = \"registry\"\n      port = \"registry\"\n\n      check {\n        type     = \"tcp\"\n        port     = \"registry\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    volume \"docker-registry\" {\n      type      = \"host\"\n      source    = \"docker-registry\"\n      read_only = false\n    }\n\n    task \"container\" {\n      driver = \"docker\"\n\n      volume_mount {\n        volume      = \"docker-registry\"\n        destination = \"/var/lib/registry\"\n      }\n\n      config {\n        image = \"registry\"\n        ports = [\"registry\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "task_deps/consul-lock/myapp.nomad",
    "content": "job \"myapp\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"myapp\" {\n    # disable deployments\n    update {\n      max_parallel = 0\n    }\n\n    task \"await-myservice\" {\n      driver = \"docker\"\n\n      config {\n        image       = \"busybox:1.28\"\n        command     = \"sh\"\n        args        = [\"-c\", \"echo -n 'Waiting for service'; until nslookup myservice.service.consul 2>&1 >/dev/null; do echo '.'; sleep 2; done\"]\n        dns_servers = [\"10.0.2.21\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = false\n      }\n    }\n\n    task \"myapp-container\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The app is running! && sleep 3600\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "task_deps/disk_check/README.md",
    "content": "## Task Dependency:Available Disk\n\nThis demonstrates using a batch script to test for a resource, before starting a\nworkload.  This will also cause the job to fail which should stimulate\nrescheduling\n\nkeywords: template, task dependency, reschedule, diskspace, disk"
  },
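The prestart script in `disk.nomad` computes free gigabytes with `stat -f` and exits non-zero below a threshold. A rough Python equivalent of that check, assuming a POSIX system where `os.statvfs` is available (the function names are illustrative):

```python
import os

GIB = 1024 ** 3

def free_gib(path: str = ".") -> int:
    """Whole GiB free on the filesystem containing `path` (f_bavail * f_frsize)."""
    st = os.statvfs(path)
    return (st.f_bavail * st.f_frsize) // GIB

def check_disk(path: str, wanted_gib: int) -> bool:
    """Mirror the prestart script: succeed only if enough space is free."""
    free = free_gib(path)
    if free < wanted_gib:
        print(f"ERROR: Not enough disk free. Wanted {wanted_gib} GiB, had {free} GiB.")
        return False
    return True
```

In the job this runs as a `prestart` task, so a `False` here corresponds to a non-zero exit that blocks the main task and triggers rescheduling.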
  {
    "path": "task_deps/disk_check/disk.nomad",
    "content": "# this job will hopefully die if the node doesn't have\n# enough disk space to service the job\njob \"lifecycle\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n\n  group \"cache\" {\n    # disable deployments\n    update {\n      max_parallel = 0\n    }\n    task \"init\" {\n      template {\n        data = <<EOH\n#!/bin/bash\n\nGBFREE=$(($(stat -f --format=\"%a*%S/1073741824\" .)))\nif [[ $GBFREE -lt $1 ]]\nthen\n  echo \"ERROR: Not enough disk free.  Wanted $1 gb, had $GBFREE available.\"\n  exit 1\nfi\n\nEOH\n        destination = \"local/diskfree.sh\"\n      }\n\n      driver = \"exec\"\n      lifecycle {\n        hook = \"prestart\"\n      }\n      config {\n        command = \"${NOMAD_TASK_DIR}/diskfree.sh\"\n        args = [\"3\"]\n      }\n      resources {\n        cpu    = 20\n        memory = 10\n      }\n    }\n\n    task \"zebra-main-app\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:7\"\n      }\n      resources {\n        cpu    = 500\n        memory = 512\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "task_deps/init_artifact/README.md",
    "content": "# init-artifact.nomad\n\nThis sample job demonstrates priming the alloc directory with artifacts and\ntemplates generated with an init job.  The `template` task then runs the\ndownloaded levant executable and renders the template that the init task\nplaced in the alloc folder.\n\n- **batch-init-artifact.nomad** - batch version of the job\n\n- **service-init-artifact.nomad** - service version (renders and then goes\n  into a sleep loop)\n\n"
  },
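The init/render handoff described above can be sketched outside Nomad: one function plays the prestart task that drops an unrendered template into the shared directory, the other plays the main task that renders it. The `[[ env "..." ]]` handling here is a toy stand-in for levant, and a plain directory replaces `NOMAD_ALLOC_DIR`:

```python
import os
import re
import tempfile

def init_task(alloc_dir: str) -> None:
    """Prestart: write the unrendered template into the shared alloc dir."""
    with open(os.path.join(alloc_dir, "hello.levant"), "w") as f:
        f.write('NOMAD_ALLOC_ID:  [[ env "NOMAD_ALLOC_ID" ]]\n')

def render_task(alloc_dir: str, env: dict) -> str:
    """Main task: render [[ env "NAME" ]] tags from the given environment."""
    with open(os.path.join(alloc_dir, "hello.levant")) as f:
        text = f.read()
    return re.sub(r'\[\[ env "([^"]+)" \]\]',
                  lambda m: env.get(m.group(1), ""), text)
```

The key property, as in the jobs above, is that the two phases share only the directory: the init phase can finish and exit before the render phase starts.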
  {
    "path": "task_deps/init_artifact/batch-init-artifact.nomad",
    "content": "job \"init-artifacts\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  group \"test\" {\n    task \"init\" {\n      template {\n        data = <<EOH\n\nNOMAD_ALLOC_ID:  [[ env \"NOMAD_ALLOC_ID\" ]]\n\nEOH\n        destination = \"alloc/hello.levant\"\n      }\n      artifact {\n\tsource = \"https://github.com/hashicorp/levant/releases/download/0.2.9/linux-amd64-levant\"\n\tdestination = \"alloc\"\n      }\n      driver = \"exec\"\n      lifecycle {\n        hook = \"prestart\"   \n      } \n      config {\n        command = \"${NOMAD_ALLOC_DIR}/linux-amd64-levant\"\n        args = [\"-version\"]\n      }\n      resources {\n        cpu    = 20\n        memory = 10\n      }\n    }\n\n    task \"template\" {\n      template {\n        data = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=${SLEEP_SECS:-300} # provide default of 300 seconds\nsleepLoop() { while true; do sleep ${SLEEP_SECS}; done }\n\necho \"$(date) - Starting.\"\n\n${NOMAD_ALLOC_DIR}/linux-amd64-levant render ${NOMAD_ALLOC_DIR}/hello.levant;\n\nEOH\n        destination = \"local/renderTemplate.sh\"\n      }\n\n      driver = \"exec\"\n      config {\n        command = \"${NOMAD_TASK_DIR}/renderTemplate.sh\"\n      }\n      resources {\n        cpu    = 100\n        memory = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "task_deps/init_artifact/service-init-artifact.nomad",
    "content": "job \"init-artifacts\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n\n  group \"test\" {\n    task \"init\" {\n      template {\n        data = <<EOH\n\nNOMAD_ALLOC_ID:  [[ env \"NOMAD_ALLOC_ID\" ]]\n\nEOH\n        destination = \"alloc/hello.levant\"\n      }\n      artifact {\n\tsource = \"https://github.com/hashicorp/levant/releases/download/0.2.9/linux-amd64-levant\"\n\tdestination = \"alloc\"\n      }\n      driver = \"exec\"\n      lifecycle {\n        hook = \"prestart\"   \n      } \n      config {\n        command = \"${NOMAD_ALLOC_DIR}/linux-amd64-levant\"\n        args = [\"-version\"]\n      }\n      resources {\n        cpu    = 20\n        memory = 10\n      }\n    }\n\n    task \"template\" {\n      template {\n        data = <<EOH\n#!/bin/bash\n\nSLEEP_SECS=${SLEEP_SECS:-300} # provide default of 300 seconds\nsleepLoop() { while true; do sleep ${SLEEP_SECS}; done }\n\necho \"$(date) - Starting.\"\n\n${NOMAD_ALLOC_DIR}/linux-amd64-levant render ${NOMAD_ALLOC_DIR}/hello.levant;\n\n# sleepLoop ensures that the task remains running to meet Nomad's\n# requirement that services never stop. If this is a batch task,\n# you can comment it out.\nsleepLoop\n\nEOH\n        destination = \"local/renderTemplate.sh\"\n      }\n\n      driver = \"exec\"\n      config {\n        command = \"${NOMAD_TASK_DIR}/renderTemplate.sh\"\n      }\n      resources {\n        cpu    = 100\n        memory = 100\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "task_deps/interjob/README.md",
    "content": "# Modeling Inter-Service Dependencies using Nomad Task Dependencies\n\nNomad task dependencies provide the ability to use init-style tasks. These tasks can be used to delay a jobs main tasks from running until a service that the job depends on is available.  \n\n## Create the job files\nThis example uses simple looping scripts to mock service payloads. Create a file named `myservice.nomad` with the following content.\n\n```hcl\njob \"myservice\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"myservice\" {\n    task \"myservice\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The service is running! && while true; do sleep 2; done\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n\n      service {\n        name = \"myservice\"\n      }\n    }\n  }\n}\n\n```\n\nCreate a file named `myapp.nomad` with the collowing content.\n\n```hcl\njob \"myapp\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"myapp\" {\n    # disable deployments\n    update {\n      max_parallel = 0\n    }\n\n    task \"await-myservice\" {\n      driver = \"docker\"\n\n      config {\n        image       = \"busybox:1.28\"\n        command     = \"sh\"\n        args        = [\"-c\", \"echo -n 'Waiting for service'; until nslookup myservice.service.consul 2>&1 >/dev/null; do echo '.'; sleep 2; done\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = false\n      }\n    }\n\n    task \"myapp-container\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The app is running! 
&& sleep 3600\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n  }\n}\n\n```\n\nThis job contains a prestart task that will query a Consul DNS API endpoint for the \"myservice\" service.  \n\nNote, you might need to add the `dns_servers` value to the config stanza of the await-myservice task in the myapp.nomad file to direct the query to a DNS server that can receive queries on port 53 for your Consul DNS query root domain.\n\n\n## Run the myapp job\n\nRun `nomad run myapp.nomad`.  \n\n```shell\n$ nomad run myapp.nomad\n```\n\nThe job will launch and provide you an allocation ID in the output.\n\n```plaintext\n$ nomad run myapp.nomad\n==> Monitoring evaluation \"01c73d5a\"\n    Evaluation triggered by job \"myapp\"\n    Allocation \"3044dda0\" created: node \"f26809e6\", group \"myapp\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"01c73d5a\" finished with status \"complete\"\n```\n\nRun the `nomad alloc status` command with the provided allocation ID.\n\n```shell\n$ nomad alloc status 3044dda0\n```\n\n## Verify myapp-container is blocked\n\nYou will receive a lot of information back. For this guide, focus on the status of each task. 
Each task's status is output in lines that look like `Task \"await-myservice\" is \"running\"`.\n\n```plaintext\n$ nomad alloc status 3044dda0\nID                  = 3044dda0-8dc1-1bac-86ea-66a3557c67d3\nEval ID             = 01c73d5a\nName                = myapp.myapp[0]\nNode ID             = f26809e6\nNode Name           = nomad-client-2.node.consul\nJob ID              = myapp\nJob Version         = 0\nClient Status       = running\nClient Description  = Tasks are running\nDesired Status      = run\nDesired Description = <none>\nCreated             = 43s ago\nModified            = 42s ago\n\nTask \"await-myservice\" is \"running\"\nTask Resources\nCPU        Memory          Disk     Addresses\n3/200 MHz  80 KiB/128 MiB  300 MiB  \n\nTask Events:\nStarted At     = 2020-03-18T17:07:26Z\nFinished At    = N/A\nTotal Restarts = 0\nLast Restart   = N/A\n\nRecent Events:\nTime                       Type        Description\n2020-03-18T13:07:26-04:00  Started     Task started by client\n2020-03-18T13:07:26-04:00  Task Setup  Building Task Directory\n2020-03-18T13:07:26-04:00  Received    Task received by client\n\nTask \"myapp-container\" is \"pending\"\nTask Resources\nCPU      Memory   Disk     Addresses\n200 MHz  128 MiB  300 MiB  \n\nTask Events:\nStarted At     = N/A\nFinished At    = N/A\nTotal Restarts = 0\nLast Restart   = N/A\n\nRecent Events:\nTime                       Type      Description\n2020-03-18T13:07:26-04:00  Received  Task received by client\n```\n\nNotice that the await-myservice task is running and that the myapp-container task is pending. The myapp-container task will remain pending until the await-myservice task completes successfully.\n\n## Start myservice job\n\nRun the myservice.nomad job to register a \"myservice\" service in Consul. This will allow the await-myservice task to terminate successfully. 
Run `nomad run myservice.nomad`.\n\n```shell\n$ nomad run myservice.nomad\n```\n\nNomad will start the job and return scheduling information.\n\n```plaintext\n$ nomad run myservice.nomad\n==> Monitoring evaluation \"f31f8eb1\"\n    Evaluation triggered by job \"myservice\"\n    Allocation \"d7767adf\" created: node \"f26809e6\", group \"myservice\"\n    Evaluation within deployment: \"3d86e09a\"\n    Evaluation status changed: \"pending\" -> \"complete\"\n==> Evaluation \"f31f8eb1\" finished with status \"complete\"\n```\n\nRe-check the allocation status of your myapp allocation.\n\n```shell\n$ nomad alloc status 3044dda0\n```\n\n## Verify myapp-container is running\n\nFinally, check the output of the alloc status command for the task statuses.\n\n```plaintext\n$ nomad alloc status 3044dda0\nID                  = 3044dda0-8dc1-1bac-86ea-66a3557c67d3\nEval ID             = 01c73d5a\nName                = myapp.myapp[0]\nNode ID             = f26809e6\nNode Name           = nomad-client-2.node.consul\nJob ID              = myapp\nJob Version         = 0\nClient Status       = running\nClient Description  = Tasks are running\nDesired Status      = run\nDesired Description = <none>\nCreated             = 21m38s ago\nModified            = 7m27s ago\n\nTask \"await-myservice\" is \"dead\"\nTask Resources\nCPU        Memory          Disk     Addresses\n0/200 MHz  80 KiB/128 MiB  300 MiB  \n\nTask Events:\nStarted At     = 2020-03-18T17:07:26Z\nFinished At    = 2020-03-18T17:21:35Z\nTotal Restarts = 0\nLast Restart   = N/A\n\nRecent Events:\nTime                       Type        Description\n2020-03-18T13:21:35-04:00  Terminated  Exit Code: 0\n2020-03-18T13:07:26-04:00  Started     Task started by client\n2020-03-18T13:07:26-04:00  Task Setup  Building Task Directory\n2020-03-18T13:07:26-04:00  Received    Task received by client\n\nTask \"myapp-container\" is \"running\"\nTask Resources\nCPU        Memory          Disk     Addresses\n0/200 MHz  32 
KiB/128 MiB  300 MiB  \n\nTask Events:\nStarted At     = 2020-03-18T17:21:37Z\nFinished At    = N/A\nTotal Restarts = 0\nLast Restart   = N/A\n\nRecent Events:\nTime                       Type        Description\n2020-03-18T13:21:37-04:00  Started     Task started by client\n2020-03-18T13:21:35-04:00  Driver      Downloading image\n2020-03-18T13:21:35-04:00  Task Setup  Building Task Directory\n2020-03-18T13:07:26-04:00  Received    Task received by client\n```\n\nNotice that the await-myservice task is dead and, per the Recent Events table, terminated with \"Exit Code: 0\", which indicates that it completed successfully. The myapp-container task has now moved to the \"running\" status and the container is running.\n\n"
  },
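Stripped of the Docker and DNS specifics, the `await-myservice` gate above is a poll-until-true loop with a deadline. A generic sketch with the resolver injected, so anything (an `nslookup`, an HTTP ping) can stand behind it; the names and timings are illustrative:

```python
import time
from typing import Callable

def await_service(resolve: Callable[[str], bool], name: str,
                  interval_s: float = 2.0, timeout_s: float = 60.0) -> bool:
    """Block until resolve(name) succeeds, polling like the prestart task.

    Returns True once the service resolves, False if the deadline passes.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if resolve(name):
            return True
        # Sleep one interval, but never past the deadline.
        time.sleep(min(interval_s, max(0.0, deadline - time.monotonic())))
    return False
```

In the job above there is no timeout at all; the prestart task simply loops until Consul DNS answers, which is why the main task stays pending indefinitely when the dependency is missing.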
  {
    "path": "task_deps/interjob/myapp.nomad",
    "content": "job \"myapp\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"myapp\" {\n    # disable deployments\n    update {\n      max_parallel = 0\n    }\n\n    task \"await-myservice\" {\n      driver = \"docker\"\n\n      config {\n        image       = \"busybox:1.28\"\n        command     = \"sh\"\n        args        = [\"-c\", \"echo -n 'Waiting for service'; until nslookup myservice.service.consul 2>&1 >/dev/null; do echo '.'; sleep 2; done\"]\n        dns_servers = [\"10.0.2.21\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = false\n      }\n    }\n\n    task \"myapp-container\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The app is running! && sleep 3600\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "task_deps/interjob/myservice.nomad",
    "content": "job \"myservice\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"myservice\" {\n    task \"myservice\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The service is running! && while true; do sleep 2; done\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n\n      service {\n        name = \"myservice\"\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "task_deps/k8sdoc/README.md",
    "content": "# Task Dependencies: Kubernetes init containers doc comparison\n\nThis looks at the [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) documentation for Kubernetes and attempts to reproduce the\nexamples for Nomad using Task Dependencies.\n"
  },
  {
    "path": "task_deps/k8sdoc/init.nomad",
    "content": "# this job will hopefully die if the node doesn't have\n# enough disk space to service the job\njob \"lifecycle\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"myservice\" {\n    task \"myservice\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The service is running! && while true; do sleep 2; done\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n  }\n\n  group \"cache\" {\n    # disable deployments\n    update {\n      max_parallel = 0\n    }\n\n    task \"init-myservice\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo -n 'Waiting for service...'; until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo -n '.'; sleep 2; done\"]\n      }\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = true\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n\n    task \"init-mydb\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]\n      }\n\n      lifecycle {\n        hook = \"prestart\"\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n\n    task \"myapp-container\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The app is running! && sleep 3600\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "task_deps/k8sdoc/k8sdoc1.nomad",
    "content": "# this job will hopefully die if the node doesn't have\n# enough disk space to service the job\njob \"lifecycle\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"cache\" {\n    # disable deployments\n    update {\n      max_parallel = 0\n    }\n\n    task \"init-myservice\" {\n      driver = \"docker\"\n\n      config {\n        image       = \"busybox:1.28\"\n        command     = \"sh\"\n        dns_servers = [ \"10.0.2.21\" ]\n        args        = [\"-c\", \"echo -n 'Waiting for service...'; until nslookup myservice.service.consul; do echo '.'; sleep 2; done\"]\n      }\n\n      lifecycle {\n        hook = \"prestart\"\n        sidecar = false\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n\n    task \"myapp-container\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The app is running! && sleep 3600\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "task_deps/k8sdoc/myapp.nomad",
    "content": "job \"myapp\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"myapp\" {\n    # disable deployments\n    update {\n      max_parallel = 0\n    }\n\n    task \"await-myservice\" {\n      driver = \"docker\"\n      config {\n        image       = \"busybox:1.28\"\n        command     = \"sh\"\n        dns_servers = [ \"10.0.2.21\" ]\n        args        = [\"-c\", \"echo -n 'Waiting for service'; until nslookup myservice.service.consul 2>&1 >/dev/null; do echo '.'; sleep 2; done\"]\n      }\n\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = false\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n\n    task \"myapp-container\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The app is running! && sleep 3600\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "task_deps/k8sdoc/myservice.nomad",
    "content": "job \"myservice\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"myservice\" {\n    task \"myservice\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"busybox\"\n        command = \"sh\"\n        args    = [\"-c\", \"echo The service is running! && while true; do sleep 2; done\"]\n      }\n\n      resources {\n        cpu    = 200\n        memory = 128\n      }\n\n      service {\n        name = \"myservice\"\n      }\n    }\n  }\n}"
  },
  {
    "path": "task_deps/sidecar/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port  \"db\"  {}\n    }\n\n    task \"remote_syslog_stdout\" {\n      driver = \"docker\"\n\n      config {\n        image = \"octohost/remote_syslog\"\n        args = [\n         \"-p\", \"29655\", \"-d\", \"logs5.papertrailapp.com\", \"/alloc/logs/redis.stdout.0\"\n        ]\n     }\n\n      lifecycle {\n        sidecar = true\n        hook = \"prestart\"\n      }\n    }\n\n    task \"remote_syslog_stderr\" {\n      driver = \"docker\"\n\n      config {\n        image = \"octohost/remote_syslog\"\n        args = [\n         \"-p\", \"29655\", \"-d\", \"logs5.papertrailapp.com\", \"/alloc/logs/redis.stderr.0\"\n        ]\n     }\n\n      lifecycle {\n        sidecar = true\n        hook = \"prestart\"\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        ports = [\"db\"]\n        }\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/batch/README.md",
    "content": "## Batch Templates\n\nUsing batch jobs can provide a way to experiment with templates.  \n\n* **parameter.nomad** - This job demonstrates using a provided meta variable to create a composed key which could be used in another template tag, like key, service, secret, etc.\n\n\n\n"
  },
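The composed-key trick in `parameter.nomad` is just string formatting over the dispatch meta environment. The same composition in plain Python, assuming the `NOMAD_META_*` variables a dispatched instance would see:

```python
import os

def composed_secret_path(environ=None) -> str:
    """Mirror the template line:
    printf "secret/endpoints/%s/%s/info" NOMAD_META_CLIENT NOMAD_META_APP
    """
    env = os.environ if environ is None else environ
    client = env.get("NOMAD_META_CLIENT", "")
    app = env.get("NOMAD_META_APP", "")
    return f"secret/endpoints/{client}/{app}/info"
```

As in the template, an unset meta variable silently yields an empty path segment, so a dispatch that omits CLIENT or APP composes a key that will not exist in Vault.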
  {
    "path": "template/batch/context.nomad",
    "content": "job \"parameter\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"group\" {\n    count = 1\n    task \"command\" {\n      driver = \"exec\"\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out\"]\n      }\n      template {\n        data = <<EOH\n{{ printf \"%#v\" . }}\n  EOH\n\n        destination = \"local/template.out\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/batch/parameter.nomad",
    "content": "job \"parameter\" {\n  parameterized {\n    payload       = \"optional\"\n    meta_optional = [\"CLIENT\",\"APP\"]\n  }\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"group\" {\n    count = 1\n    task \"command\" {\n      driver = \"exec\"\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out\"]\n      }\n      template {\n        data = <<EOH\n{{- $myKey := printf \"secret/endpoints/%s/%s/info\" (env \"NOMAD_META_CLIENT\") (env \"NOMAD_META_APP\") -}}\nCLIENT_ID=\"{{ with secret $myKey }}{{ .Data.clientID }}{{ end }}\" \nCLIENT_PWD=\"{{ with secret $myKey }}{{ .Data.clientPWD }}{{ end }}\" \nAPP_ENDPOINT=\"{{ with secret $myKey }}{{ .Data.uriFQDN }}{{ end }}\" \n  EOH\n\n        destination = \"local/template.out\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/batch/services.nomad",
    "content": "job \"services\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"group\" {\n    count = 1\n    task \"command\" {\n      driver = \"exec\"\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out\"]\n      }\n      template {\n        data = <<EOH\n{{ range $index, $instance := service \"consul\" }}\n{{ printf \"--- %v ---\" $index }}\n{{ printf \"%#v\" (toJSONPretty $instance) }}\n{{ end }}\n  EOH\n\n        destination = \"local/template.out\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/batch/template.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"template\" {\n    task \"command\" {\n      driver = \"exec\"\n      config {\n        command = \"cat\"\n        args = [\"local/template.out\"]\n      }\n      template {\n        destination = \"local/template.out\"\n        data = <<EOH\nHello.\nEOH\n\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/from_consul/README.md",
    "content": "## Templates from Consul KV\n\nThis sample uses Consul KV to hold a template for a job rather than embedding\nit in the job file itself. A small wrapper template is needed to pull down the\nstored template and execute it.\n\n```\n$ consul kv put template/test \"{{printf \\\"this is from consul's template\\\"}}\"\n```\n\n```\nSuccess! Data written to: template/test\n```\n\n### issue.nomad\n\nThis example demonstrates what happens when you try to reference a KV value's\ncontents as a template inside of another template: essentially nothing. You\nreceive the unrendered template as output.\n\n```\n➜ nomad alloc logs 898a69d7-9593-3ca0-c258-2500b6656122\n{{printf \"this is from consul's template\"}}\n```\n\n### artifact.nomad\n\nThis example demonstrates using the `artifact` stanza to call the Consul API\ndirectly and fetch the KV value into a local file to be rendered on the Nomad\nclient.\n\nPros: Not terribly complicated; works much like you would expect.\n\nCons: Needs a separate token in the workload, unless the path can be reached by:\n\n- the token of the Consul agent that receives the API call\n- the anonymous token\n\n### init.nomad\n\nThis example is a placeholder for an attempt to do the same thing with a\nprestart task.\n"
  },
  {
    "path": "template/from_consul/artifact.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"group\" {\n    count = 1\n\n    task \"command\" {\n      driver = \"raw_exec\"\n\n      artifact {\n        source      = \"http://consul.service.consul:8500/v1/kv/template/test?raw\"\n        destination = \"local/template.out\"\n        mode        = \"file\"\n## You might need to pass a consul token for the API request.\n#        headers {\n#          X-Consul-Token = \"«a consul token with access to the kv path»\"\n#        }\n      }\n\n      config {\n        command = \"bash\"\n        args    = [\"-c\", \"cat local/rendered.out\"]\n      }\n\n      template {\n        source      = \"local/template.out\"\n        destination = \"local/rendered.out\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/from_consul/init.nomad",
    "content": "## replace me with an attempt to do this using a prestart task\n\n\n\n"
  },
  {
    "path": "template/from_consul/issue.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"group\" {\n    count = 1\n\n    task \"command\" {\n      driver = \"raw_exec\"\n\n      template {\n        data = <<EOH\n{{- define \"custom\" }}{{ key \"template/test\" }}{{ end -}}\n{{ executeTemplate \"custom\" }}\nEOH\n\n        destination = \"local/template.out\"\n      }\n\n      config {\n        command = \"bash\"\n        args    = [\"-c\", \"cat local/template.out\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/learning/README.md",
    "content": ""
  },
  {
    "path": "template/rerender/example.nomad",
    "content": "job \"myjobname\" {\n  type        = \"service\"\n  datacenters = [\"dc1\"]\n\n  constraint {\n    attribute = \"${attr.kernel.name}\"\n    value     = \"linux\"\n  }\n\n  group \"mygroup1\" {\n    count = 1\n\n    restart {\n      interval = \"5m\"\n      attempts = 4\n      delay    = \"30s\"\n      mode     = \"delay\"\n    }\n\n    network {\n      port \"myport\" {\n        static = 12345\n      }\n    }\n\n    service {\n      name = \"myjobname\"\n      port = \"myport\"\n\n      tags = [\"myjobname\"]\n\n      meta {\n        my_cluster_name = \"myjobname\"\n      }\n\n      // check {\n      //   name     = \"healthcheck\"\n      //   type     = \"tcp\"\n      //   interval = \"60s\"\n      //   timeout  = \"10s\"\n      //   port     = \"myport\"\n      // }\n    } # service\n\n    task \"mytask\" {\n      driver = \"raw_exec\"\n\n      template {\n        data = <<EOH\n#!/bin/bash\necho \"local/config.file\"\ncat local/config.file\necho \"Going to sleep...\"\nwhile true; do\n  sleep 5\ndone\nEOH\n        destination = \"local/starter.bash\"\n      }\n\n      template {\n        data = <<EOH\nkey1=val1\nkey2=val2\n\nmyvariable=[[range $index, $service := service \"nomad\" ]][[if ne $index 0]],[[end]][[$service.Address]]:[[$service.Port]][[end]]\n\nEOH\n\n        destination     = \"local/config.file\"\n\n        left_delimiter  = \"[[\"\n        right_delimiter = \"]]\"\n      }\n\n\n      config {\n        command = \"/bin/bash\"\n        args    = [\"local/starter.bash\", \"local/config.file\"]\n      }\n\n      resources {\n        cores  = 1\n        memory = 128\n      } # resources\n    } # task\n  } # group\n} # job\n\n"
  },
  {
    "path": "template/secure_variables/README.md",
    "content": "# Using Nomad Secure Variables with Consul Template\n\n## Inside of a Nomad `template` stanza\n\n## Inside a Consul Template\n\nStart a Nomad dev agent.\n\n```shell\nnomad agent -dev\n```\n\nRun the `makeVars.sh` script to create the sample secure variables.\n\n```shell\n./makeVars.sh\n```\n\nRun `consul-template` to render the template file.\n\n```shell\nconsul-template -once -template \"template.tmpl:template.html\"\n```\n\nOpen the generated web page in a browser.\n\n```shell\nopen template.html\n```\n\nThe template lists all of the secure variables that the user has access to in\na tabular format.\n\n![Screenshot of generated HTML page showing secure variables.](./template.html.screenshot.png)\n\n## Interesting Possibilities\n\n### Dynamic trigger for job reload\n\n### Dynamic job configuration\n\nThe `interpolated_job.nomad` sample uses a job-specific secure variable to determine which version of Redis it should start.\n\nRun the `makeJobVars.sh` script to create the required variables.\n\n```shell\n./makeJobVars.sh\n```\n\nRun the `interpolated_job.nomad` job file to start Redis.\n\n```shell\nnomad job run interpolated_job.nomad\n```\n\nRun the following one-liner to get the created allocation ID into an environment\nvariable.\n\n```shell\nexport REDIS_ALLOC_ID=$(nomad alloc status -t '{{ range .}}{{if and (eq .JobID \"example\") (eq .DesiredStatus \"run\")}}{{.ID}}{{end}}{{end}}')\n```\n\nUse the `nomad alloc exec` command to run the `redis-server -v` command inside\nof the job's running Docker container.\n\n```shell\n$ nomad alloc exec ${REDIS_ALLOC_ID} redis-server -v\nRedis server v=4.0.14 sha=00000000:0 malloc=jemalloc-4.0.3 bits=64 build=7c61ee3c1f3ffc88\n```\n\nUpdate the variable by using the `nomad var get` command and piping its output to\nthe `nomad var put` command.\n\n```shell\n$ nomad var get --format=json nomad/jobs/example | nomad var put - version=4\nReading whole JSON variable specification from stdin\nSuccessfully created secure variable \"nomad/jobs/example\"!\n```\n\nRerun the `nomad alloc exec` command to verify that the Redis version has been\nupdated.\n\n> **NOTE:** While the container is restarting, you might get the following error.\n>\n> ```text\n> failed to exec into task: task \"redis\" is not running.\n> ```\n>\n> If you do, try running the command again in a few seconds.\n\n```shell\n$ nomad alloc exec 64c66418-7b01-db43-02f9-eb169ce99921 redis-server -v\nRedis server v=7.0.4 sha=00000000:0 malloc=jemalloc-5.2.1 bits=64 build=eed36d5f4a2dd39c\n```\n"
  },
  {
    "path": "template/secure_variables/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"redis:7\"\n        ports          = [\"db\"]\n        auth_soft_fail = true\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n\n      template {\n        destination = \"local/template.out\"\n        data = <<EOH\n{{ define \"Header\" }}{{- if eq \"string\" (printf \"%T\" .) -}}\n{{- $t := sprig_default \"          \" . -}}{{$l := len $t}}\n{{ $b1 := sprig_repeat $l \"━\" }}\n{{ $b2 := sprig_repeat (sprig_sub 74 $l | sprig_int ) \"━\" }}\n{{- printf \"  ┏━%s━┓\\n\" $b1 -}}{{- printf \"  ┃ %s ┃\\n\" $t -}}\n{{- printf \"━━┻━%s━┻%s\\n\" $b1 $b2 }}{{- end -}}{{- end -}}\n{{ define \"Splitter\" }}{{ sprig_repeat 80 \"─\" | println }}{{ end }}\n{{ define \"Footer\" }}{{ sprig_repeat 80 \"━\" | println }}{{ end }}\n\n{{template \"Header\" \"Fake List Keys\" }}\n{{- with nomadVarList \"my\" -}}\n  {{- range . -}}\n    {{- println .Path -}}\n    {{- with nomadVar .Path  -}}\n      {{- with .Metadata -}}\n        {{- printf \" - Namespace: %s\\n\" .Namespace -}}\n        {{- printf \" - Path: %s\\n\" .Path -}}\n        {{- printf \" - CreateTime: %s\\n\" .CreateTime -}}\n        {{- printf \" - CreateIndex: %v\\n\" .CreateIndex -}}\n        {{- printf \" - ModifyTime: %s\\n\" .ModifyTime -}}\n        {{- printf \" - ModifyIndex: %v\\n\" .ModifyIndex -}}\n      {{- end -}}\n      {{- println \"Items:\" -}}\n      {{- range . -}}\n        {{- printf \"    - %s: %q\\n\" .Key .Value -}}\n      {{- end -}}\n      {{- template \"Footer\" -}}\n    {{- end -}}\n  {{- end -}}\n{{- end -}}\nEOH\n      }\n\n      template {\n        destination = \"local/template.json\"\n        data = <<EOH\n{{ with nomadVar \"my/var/a\" }}{{ printf \"Type: %T\\n\" . }}{{ sprig_toPrettyJson . 
}}{{end}}\n\n{{ with nomadVar \"my/var/a\" }}{{ printf \"Type: %T\\n\" .Parent }}{{ .Parent | sprig_toPrettyJson }}{{end}}\n\n{{ with nomadVar \"my/var/a\" }}{{ printf \"Type: %T\\n\" .Metadata }}{{ .Metadata | sprig_toPrettyJson }}{{end}}\n\n{{ with nomadVar \"my/var/a\" }}{{ printf \"Type: %T\\n\" .Tuples }}{{ .Tuples | sprig_toPrettyJson }}{{end}}\n\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/secure_variables/interpolated_job/README.md",
    "content": "### Dynamic job configuration\n\nThe `interpolated_job.hcl` sample uses a job-specific secure variable to determine which version of Redis it should start.\n\nRun the `makeJobVars.sh` script to create the required variables.\n\n```shell\n./makeJobVars.sh\n```\n\nRun the `interpolated_job.hcl` job file to start Redis.\n\n```shell\nnomad job run interpolated_job.hcl\n```\n\nRun the following one-liner to get the created allocation ID into an environment\nvariable.\n\n```shell\nexport REDIS_ALLOC_ID=$(nomad alloc status -t '{{ range .}}{{if and (eq .JobID \"example\") (eq .DesiredStatus \"run\")}}{{.ID}}{{end}}{{end}}')\n```\n\nUse the `nomad alloc exec` command to run the `redis-server -v` command inside\nof the job's running Docker container.\n\n```shell\n$ nomad alloc exec ${REDIS_ALLOC_ID} redis-server -v\nRedis server v=4.0.14 sha=00000000:0 malloc=jemalloc-4.0.3 bits=64 build=7c61ee3c1f3ffc88\n```\n\nUpdate the variable by using the `nomad var get` command and piping its output to\nthe `nomad var put` command.\n\n```shell\n$ nomad var get --format=json nomad/jobs/example | nomad var put - version=4\nReading whole JSON variable specification from stdin\nSuccessfully created secure variable \"nomad/jobs/example\"!\n```\n\nRerun the `nomad alloc exec` command to verify that the Redis version has been\nupdated.\n\n> **NOTE:** While the container is restarting, you might get the following error.\n>\n> ```text\n> failed to exec into task: task \"redis\" is not running.\n> ```\n>\n> If you do, try running the command again in a few seconds.\n\n```shell\n$ nomad alloc exec 64c66418-7b01-db43-02f9-eb169ce99921 redis-server -v\nRedis server v=7.0.4 sha=00000000:0 malloc=jemalloc-5.2.1 bits=64 build=eed36d5f4a2dd39c\n```\n"
  },
  {
    "path": "template/secure_variables/interpolated_job/interpolated_job.hcl",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"db\" {\n        to = 6379\n      }\n    }\n\n    // service {\n    //   tags = [\"redis\", \"cache\"]\n    //   port = \"db\"\n\n    //   check {\n    //     name     = \"alive\"\n    //     type     = \"tcp\"\n    //     interval = \"10s\"\n    //     timeout  = \"2s\"\n    //   }\n    // }\n    task \"redis\" {\n      template {\n        data = <<EOH\n{{- with nomadVar \"nomad/jobs/example\" -}}\nREDIS_IMAGE=\"{{.image}}\"\nREDIS_VERSION=\"{{.version}}\"\n{{ end -}}\nEOH\n\n        destination = \"secrets/file.env\"\n        env         = true\n        change_mode = \"restart\"\n      }\n\n      driver = \"docker\"\n\n      config {\n        image = \"${REDIS_IMAGE}:${REDIS_VERSION}\"\n        ports = [\"db\"]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/secure_variables/interpolated_job/makeJobVars.sh",
    "content": "#!/bin/bash\n\necho '{\"Items\":{\"version\":\"4\",\"image\":\"redis\"}}' | nomad operator api '/v1/var/nomad/jobs/example'\n\necho '{\"Items\":{\"k1\":\"v1\",\"k2\":\"v2\"}}' | nomad operator api '/v1/var/nomad/jobs/variable'\necho '{\"Items\":{\"k1\":\"v1\",\"k2\":\"v2\"}}' | nomad operator api '/v1/var/nomad/jobs/variable/www'\necho '{\"Items\":{\"k1\":\"v1\",\"k2\":\"v2\"}}' | nomad operator api '/v1/var/nomad/jobs/variable/www/nginx'\n"
  },
  {
    "path": "template/secure_variables/makeJobVars.sh",
    "content": "#!/bin/bash\n\necho '{\"Items\":{\"version\":\"4\",\"image\":\"redis\"}}' | nomad operator api '/v1/var/nomad/jobs/example'\n\necho '{\"Items\":{\"k1\":\"v1\",\"k2\":\"v2\"}}' | nomad operator api '/v1/var/nomad/jobs/variable'\necho '{\"Items\":{\"k1\":\"v1\",\"k2\":\"v2\"}}' | nomad operator api '/v1/var/nomad/jobs/variable/www'\necho '{\"Items\":{\"k1\":\"v1\",\"k2\":\"v2\"}}' | nomad operator api '/v1/var/nomad/jobs/variable/www/nginx'\n"
  },
  {
    "path": "template/secure_variables/makeVars.sh",
    "content": "#!/bin/bash\n\necho '{\"Items\":{\"k1\":\"v1\",\"k2\":\"v2\"}}' | nomad operator api '/v1/var/my/var/a'\necho '{\"Items\":{\"k1\":\"v1\",\"k2\":\"v2\"}}' | nomad operator api '/v1/var/my/var/b'\necho '{\"Items\":{\"k1\":\"v1\",\"k2\":\"v2\"}}' | nomad operator api '/v1/var/other/var/a'\n\n"
  },
  {
    "path": "template/secure_variables/multiregion/start.sh",
    "content": "#! /usr/bin/env bash\n\necho \"📝 Creating configuration\"\nfor Region in global dc1\ndo\n  echo \" - \\\"${Region}\\\"\"\n  for Dir in config data log\n  do \n    mkdir -p .state/${Dir}.${Region}\n  done\n  echo \"log_file = \\\"$(pwd)/.state/log.${Region}/\\\"\" > .state/config.${Region}/logging.hcl\n  echo 'log_level = \"DEBUG\"' >> .state/config.${Region}/logging.hcl\n  echo \"data_dir = \\\"$(pwd)/.state/data.${Region}/\\\"\" > .state/config.${Region}/data_dir.hcl\n  echo \"name   = \\\"test_${Region}\\\"\" > .state/config.${Region}/name.hcl\n  echo \"region = \\\"${Region}\\\"\" >> .state/config.${Region}/name.hcl\ndone\necho \"\"\n\nI=1\nfor Region in global dc1\ndo\n  echo \"server { enabled=true bootstrap_expect=1 }\" > .state/config.${Region}/server.hcl\n  echo \"client { enabled=true }\" > .state/config.${Region}/client.hcl\n  echo \"plugin \\\"raw_exec\\\" { config { enabled = true }}\"  > .state/config.${Region}/raw_exec.hcl\n  echo \"addresses {\" > .state/config.${Region}/address.hcl\n  echo \"advertise {\" > .state/config.${Region}/advertise.hcl\n  echo \"ports {\" > .state/config.${Region}/ports.hcl\n  P=6\n  for Proto in http rpc serf\n  do\n    echo \"  ${Proto} = \\\"127.0.0.1\\\"\" >> .state/config.${Region}/address.hcl\n    echo \"  ${Proto} = \\\"${I}464${P}\\\"\" >> .state/config.${Region}/ports.hcl\n    echo \"  ${Proto} = \\\"127.0.0.1:${I}464${P}\\\"\" >> .state/config.${Region}/advertise.hcl\n    P=$((P+1))\n  done\n  echo \"}\" >> .state/config.${Region}/address.hcl\n  echo \"}\" >> .state/config.${Region}/advertise.hcl\n  echo \"}\" >> .state/config.${Region}/ports.hcl\n  I=$((I+1))\ndone\necho \"\"\n\necho \"🚀 Starting clusters...\"\nfor Region in global dc1\ndo\n    echo \" - \\\"${Region}\\\"\"\n    nomad agent -config=$(pwd)/.state/config.${Region} > /dev/null 2>.state/log.${Region}/stderr.out &\n    echo -n $! 
> .state/${Region}.pid\ndone\necho \"\"\n\necho \"⏳ Waiting for clusters to stabilize\"\nwhile [ -z \"$globalUp\" ] || [ -z \"$dc1Up\" ]\ndo\n  if [ -z \"$globalMsg\" ]; then\n    # First pass through the loop \n    globalMsg=\"  - checking global: \"\n    dc1Msg=\"  - checking    dc1: \"\n  else \n    # move back up 2 lines \n    tput el1; tput cuu1; tput cuu1; tput ed\n  fi\n  sleep 1\n\n  if [ \"$globalUp\" == \"\" ]; then\n    curl -q -s -f http://127.0.0.1:14646/v1/agent/health > /dev/null\n    if [ $? -eq 0 ]\n    then\n      globalMsg=\"${globalMsg}✅\"\n      globalUp=true\n    else\n      globalMsg=\"${globalMsg}.\"\n    fi\n  fi\n  if [ \"$dc1Up\" == \"\" ]; then\n    curl -q -s -f http://127.0.0.1:24646/v1/agent/health > /dev/null\n    if [ $? -eq 0 ]\n    then\n      dc1Msg=\"${dc1Msg}✅\"\n      dc1Up=true\n    else\n      dc1Msg=\"${dc1Msg}.\"\n    fi\n  fi\n  echo \"${globalMsg}\"\n  echo \"${dc1Msg}\"\ndone\necho \"\"\necho \"🔗 Joining clusters\"\nexport NOMAD_ADDR=http://127.0.0.1:14646\nnomad server join 127.0.0.1:24648\necho \"\"\n\necho \"🎉 The environment is running.\"\necho \"To connect to \\\"global\\\" region, run:\"\necho \"  export NOMAD_ADDR=http://127.0.0.1:14646\"\necho \"To connect to \\\"dc1\\\" region, run:\"\necho \"  export NOMAD_ADDR=http://127.0.0.1:24646\"\n"
  },
  {
    "path": "template/secure_variables/multiregion/stop.sh",
    "content": "#! /usr/bin/env bash\n\nfor Region in global dc1\ndo\n  echo \"Stopping region \\\"${Region}\\\"...\"\n  kill $(cat .state/${Region}.pid)\ndone\n\necho \"Purging test data...\"\nrm -rf .state\n"
  },
  {
    "path": "template/secure_variables/multiregion/template.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  group \"group\" {\n    count = 1\n    network {\n      port \"http\" {}\n    }\n    task \"template\" {\n      driver = \"raw_exec\"\n      config {\n        command = \"python\"\n        args = [\"-m\", \"http.server\", \"--bind\", \"${NOMAD_IP_http}\", \"${NOMAD_PORT_http}\"]\n      }\n      template {\n        data = <<EOH\n{{ with nomadVar \"test/multiregion\"}}{{ range $k, $v := .}}\n{{- printf \"%q=%q\" $k $v}}\n{{ end }}{{ end }}\nEOH\n        destination = \"template.out\"\n        change_mode = \"restart\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/secure_variables/multiregion/test.out",
    "content": "-- dc1 --\n[\n  {\n    \"Namespace\": \"default\",\n    \"Path\": \"test\",\n    \"CreateIndex\": 27,\n    \"ModifyIndex\": 36,\n    \"CreateTime\": 1661479577849684000,\n    \"ModifyTime\": 1661479936710065000\n  }\n]\nmap[region-dc1:foo]\n\n-- global --\n[\n  {\n    \"Namespace\": \"default\",\n    \"Path\": \"test\",\n    \"CreateIndex\": 22,\n    \"ModifyIndex\": 22,\n    \"CreateTime\": 1661479585837614000,\n    \"ModifyTime\": 1661479585837614000\n  }\n]\nmap[region-global:foo]\n"
  },
  {
    "path": "template/secure_variables/multiregion/test.tmpl",
    "content": "-- dc1 --\n{{ with nomadVarList \"t@default.dc1\" }}{{ sprig_toPrettyJson . }}{{end}}\n{{ nomadVar \"test@default.dc1\" }}\n\n-- global --\n{{ with nomadVarList \"t@default.global\" }}{{ sprig_toPrettyJson . }}{{end}}\n{{ nomadVar \"test@default.global\" }}\n"
  },
  {
    "path": "template/secure_variables/template copy.tmpl",
    "content": "<html>\n<head>\n<title>Secure Variables</title>\n    <style>\n        * {\n          margin: 0;\n          padding: 0;\n        }\n\n        body { \n          font-family: \"Helvetica Neue\",Helvetica,Arial;\n        }\n\n        .content { \n          margin: 1rem 1rem;\n        }\n\n        .header {\n            background: linear-gradient(to right,#16704d,#1d9467);\n            padding-left: .5em;\n        }\n        .secondary {\n            background: #1d9467;\n        }\n\n        h1 {\n          color: white;\n          letter-spacing: -0.1rem;\n          font-size: 3rem;\n          line-height: 3.5rem;\n          vertical-align: center;\n        }\n\n        h2 {\n          font-weight: 100;\n          letter-spacing: +0.05rem;\n          color: white;\n          font-size: 2rem;\n          line-height: 2.25rem;\n          vertical-align: center;\n        }\n\n        table {\n          border-collapse: collapse;\n          background-color: white\n        }\n        \n        th { background-color: white; }\n        th, td {\n          border: 1px solid black;\n          padding: .5em .75em;\n        }\n\n        td.path {\n          background-color: #60DEA9;\n          vertical-align: top;\n          text-align: right;\n          font-weight: bold;\n        }\n\n        tbody tr:nth-child(odd) {\n            background-color: #f2f2f2;\n        }\n\n        tbody tr:nth-child(even) {\n            background-color: white;\n        }\n\n        td.error { \n          color: red;\n          font-weight: bold;\n        }\n    </style>\n</head>\n<body>\n<div class=\"header\"><h1>Nomad</h1></div>\n<div class=\"header secondary\"><h2>Secure Variables</h2></div>\n<div class=\"content\">\n<table>\n<thead>\n<tr><th>Path</th><th>Metadata</th></tr>\n</thead>\n<tbody>\n{{- range $I, $P := nomadVarList }}\n<tr><td class=\"path\">{{$P}}</td><td>\n<table 
width=\"100%\"><tbody>\n<tr><td>Namespace</td><td>{{$P.Namespace}}</td></tr>\n<tr><td>Path</td><td>{{$P.Path}}</td></tr>\n<tr><td>Create Time</td><td>{{$P.CreateTime}}</td></tr>\n<tr><td>Create Index</td><td>{{$P.CreateIndex}}</td></tr>\n<tr><td>Modify Time</td><td>{{$P.ModifyTime}}</td></tr>\n<tr><td>Modify Index</td><td>{{$P.ModifyIndex}}</td></tr>\n<tr><td>Items</td><td>\n  {{ with nomadVar $P.Path}}<table><thead><th>Key</th><th>Value</th></thead>{{range $K, $V := .}}<tr><td class=\"path\">{{$K}}</td><td>{{$V}}</td></tr>{{end}}</table>{{end}}</td></tr>\n</tbody></table>\n</td></tr>\n{{else}}\n<tr><td colspan=\"2\" class=\"error\">No Secure Variables Found</td></tr>\n{{end}}\n</tbody>\n</table>\n</div>\n</body>\n</html>\n"
  },
  {
    "path": "template/secure_variables/template-playground.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"group\" {\n    count = 1\n    network {\n      port \"export\" {}\n      port \"exstat\" {\n        static=8080\n      }\n    }\n    task \"env-output\" {\n      driver = \"raw_exec\"\n      config { \n        command = \"env\"\n      }\n    }\n    task \"date-output\" {\n      driver = \"raw_exec\"\n      config {\n        command = \"date\"\n      }\n    }\n    task \"template\" {\n      driver = \"raw_exec\"\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out\"]\n      }\n      template {\n        data = <<EOH\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env \"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n            
    NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env \"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n\nFurther Consul Template Magic:\n\nMath\n  math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n\nComposition using inline templates\n\n  {{- define \"custom\" }}NOMAD_ADDR_{{\"date-output\" | replaceAll \"-\" \"_\" }}_sample{{ end }}\n  {{ executeTemplate \"custom\" }}: {{ env (executeTemplate \"custom\") }}\n\nComposition using printf\n  {{ $envKey := printf \"NOMAD_ADDR_%s_%s\" 
(\"date-output\" | replaceAll \"-\" \"_\" ) \"sample\" }}\n  {{ $envKey }}: {{ env $envKey }}\n\n----\n{{ range nomadVarList \"my\"}}\n{{- .}}{{ end }}\n----\n{{ with nomadVar \"my/var/a\"}}{{ range $k, $v := .}}\n{{- printf \"%q=%q\" $k $v}}\n{{ end }}{{ end }}\n----\n\nEOH\n\n        destination = \"local/template.out\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/secure_variables/template.html",
    "content": "<html>\n<head>\n<title>Secure Variables</title>\n    <style>\n        * {\n          margin: 0;\n          padding: 0;\n        }\n\n        body { \n          font-family: \"Helvetica Neue\",Helvetica,Arial;\n        }\n\n        .content { \n          margin: 1rem 1rem;\n        }\n\n        .header {\n            background: linear-gradient(to right,#16704d,#1d9467);\n            padding-left: .5em;\n        }\n        .secondary {\n            background: #1d9467;\n        }\n\n        h1 {\n          color: white;\n          letter-spacing: -0.1rem;\n          font-size: 3rem;\n          line-height: 3.5rem;\n          vertical-align: center;\n        }\n\n        h2 {\n          font-weight: 100;\n          letter-spacing: +0.05rem;\n          color: white;\n          font-size: 2rem;\n          line-height: 2.25rem;\n          vertical-align: center;\n        }\n\n        table {\n          border-collapse: collapse;\n          background-color: white\n        }\n        \n        th { background-color: white; }\n        th, td {\n          border: 1px solid black;\n          padding: .5em .75em;\n        }\n\n        td.path {\n          background-color: #60DEA9;\n          vertical-align: top;\n          text-align: right;\n          font-weight: bold;\n        }\n\n        tbody tr:nth-child(odd) {\n            background-color: #f2f2f2;\n        }\n\n        tbody tr:nth-child(even) {\n            background-color: white;\n        }\n\n        td.error { \n          color: red;\n          font-weight: bold;\n        }\n    </style>\n</head>\n<body>\n<div class=\"header\"><h1>Nomad</h1></div>\n<div class=\"header secondary\"><h2>Secure Variables</h2></div>\n<div class=\"content\">\n<table>\n<thead>\n<tr><th>Path</th><th>Metadata</th></tr>\n</thead>\n<tbody>\n<tr><td class=\"path\">my/var/a</td><td>\n<table width=\"100%\"><tbody>\n<tr><td>Namespace</td><td>default</td></tr>\n<tr><td>Path</td><td>my/var/a</td></tr>\n<tr><td>Create 
Time</td><td>2022-08-22 15:56:13.771313 -0400 EDT</td></tr>\n<tr><td>Create Index</td><td>28</td></tr>\n<tr><td>Modify Time</td><td>2022-08-22 15:56:13.771313 -0400 EDT</td></tr>\n<tr><td>Modify Index</td><td>28</td></tr>\n<tr><td>Items</td><td>\n  <table><thead><th>Key</th><th>Value</th></thead><tr><td class=\"path\">k1</td><td>v1</td></tr><tr><td class=\"path\">k2</td><td>v2</td></tr></table></td></tr>\n</tbody></table>\n</td></tr>\n\n<tr><td class=\"path\">my/var/b</td><td>\n<table width=\"100%\"><tbody>\n<tr><td>Namespace</td><td>default</td></tr>\n<tr><td>Path</td><td>my/var/b</td></tr>\n<tr><td>Create Time</td><td>2022-08-22 15:56:13.934377 -0400 EDT</td></tr>\n<tr><td>Create Index</td><td>29</td></tr>\n<tr><td>Modify Time</td><td>2022-08-22 15:56:13.934377 -0400 EDT</td></tr>\n<tr><td>Modify Index</td><td>29</td></tr>\n<tr><td>Items</td><td>\n  <table><thead><th>Key</th><th>Value</th></thead><tr><td class=\"path\">k1</td><td>v1</td></tr><tr><td class=\"path\">k2</td><td>v2</td></tr></table></td></tr>\n</tbody></table>\n</td></tr>\n\n<tr><td class=\"path\">other/var/a</td><td>\n<table width=\"100%\"><tbody>\n<tr><td>Namespace</td><td>default</td></tr>\n<tr><td>Path</td><td>other/var/a</td></tr>\n<tr><td>Create Time</td><td>2022-08-22 15:56:14.10122 -0400 EDT</td></tr>\n<tr><td>Create Index</td><td>30</td></tr>\n<tr><td>Modify Time</td><td>2022-08-22 15:56:14.10122 -0400 EDT</td></tr>\n<tr><td>Modify Index</td><td>30</td></tr>\n<tr><td>Items</td><td>\n  <table><thead><th>Key</th><th>Value</th></thead><tr><td class=\"path\">k1</td><td>v1</td></tr><tr><td class=\"path\">k2</td><td>v2</td></tr></table></td></tr>\n</tbody></table>\n</td></tr>\n\n</tbody>\n</table>\n</div>\n</body>\n"
  },
  {
    "path": "template/secure_variables/template.tmpl",
    "content": "<html>\n<head>\n<title>Secure Variables</title>\n    <style>\n        * {\n          margin: 0;\n          padding: 0;\n        }\n\n        body { \n          font-family: \"Helvetica Neue\",Helvetica,Arial;\n        }\n\n        .content { \n          margin: 1rem 1rem;\n        }\n\n        .header {\n            background: linear-gradient(to right,#16704d,#1d9467);\n            padding-left: .5em;\n        }\n        .secondary {\n            background: #1d9467;\n        }\n\n        h1 {\n          color: white;\n          letter-spacing: -0.1rem;\n          font-size: 3rem;\n          line-height: 3.5rem;\n          vertical-align: center;\n        }\n\n        h2 {\n          font-weight: 100;\n          letter-spacing: +0.05rem;\n          color: white;\n          font-size: 2rem;\n          line-height: 2.25rem;\n          vertical-align: center;\n        }\n\n        table {\n          border-collapse: collapse;\n          background-color: white\n        }\n        \n        th { background-color: white; }\n        th, td {\n          border: 1px solid black;\n          padding: .5em .75em;\n        }\n\n        td.path {\n          background-color: #60DEA9;\n          vertical-align: top;\n          text-align: right;\n          font-weight: bold;\n        }\n\n        tbody tr:nth-child(odd) {\n            background-color: #f2f2f2;\n        }\n\n        tbody tr:nth-child(even) {\n            background-color: white;\n        }\n\n        td.error { \n          color: red;\n          font-weight: bold;\n        }\n    </style>\n</head>\n<body>\n<div class=\"header\"><h1>Nomad</h1></div>\n<div class=\"header secondary\"><h2>Secure Variables</h2></div>\n<div class=\"content\">\n<table>\n<thead>\n<tr><th>Path</th><th>Metadata</th></tr>\n</thead>\n<tbody>\n{{- with nomadVar \"nomad/jobs/variable\" }}{{ $P := .Metadata }}\n<tr><td class=\"path\">{{$P}}</td><td>\n<table 
width=\"100%\"><tbody>\n<tr><td>Namespace</td><td>{{$P.Namespace}}</td></tr>\n<tr><td>Path</td><td>{{$P.Path}}</td></tr>\n<tr><td>Create Time</td><td>{{$P.CreateTime}}</td></tr>\n<tr><td>Create Index</td><td>{{$P.CreateIndex}}</td></tr>\n<tr><td>Modify Time</td><td>{{$P.ModifyTime}}</td></tr>\n<tr><td>Modify Index</td><td>{{$P.ModifyIndex}}</td></tr>\n<tr><td>Items</td><td>\n  {{ with nomadVar $P.Path}}<table><thead><th>Key</th><th>Value</th></thead>{{range $K, $V := .}}<tr><td class=\"path\">{{$K}}</td><td>{{$V}}</td></tr>{{end}}</table>{{end}}</td></tr>\n</tbody></table>\n</td></tr>\n{{else}}\n<tr><td colspan=\"2\" class=\"error\">No Secure Variables Found</td></tr>\n{{end}}\n</tbody>\n</table>\n</div>\n</body>\n"
  },
  {
    "path": "template/secure_variables/variable_view.nomad",
    "content": "job \"variable\" {\n  datacenters = [\"dc1\"]\n\n  group \"www\" {\n    network {\n      port \"www\" {\n        to = 8080\n      }\n    }\n\n    task \"nginx\" {\n      driver = \"docker\"\n\n      config {\n        image          = \"nginx:1.23.1-alpine\"\n        ports          = [\"www\"]\n        auth_soft_fail = true\n        volumes = [ \n          \"local/nginx.conf:/etc/nginx/conf.d/default.conf\",\n          \"local/www/index.html:/usr/share/nginx/html/index.html\",\n        ]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n      template {\n        destination = \"local/nginx.conf\"\n        data = <<EOF\nerror_log stderr info;\naccess_log stdout;\nserver {\n  listen 8080;\n  location / {\n    root /usr/share/nginx/html/;\n    index index.html;\n  }\n}\nEOF\n      }\n      template {\n        destination = \"local/www/index.html\"\n        data = file(\"./template.tmpl\")\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/secure_variables/write/t0.out",
    "content": "map[excite:wow]\n{\n  \"excite\": \"wow\"\n}\n"
  },
  {
    "path": "template/secure_variables/write/t0.tmpl",
    "content": "{{ nomadVarPut \"from/ct@bad\" \"excite=wow\" }}\n{{ with nomadVar \"from/ct\" }}{{ sprig_toPrettyJson . }}{{end}}\n"
  },
  {
    "path": "template/secure_variables/write/t1.out",
    "content": "{\n  \"excite\": \"wow\"\n}\n"
  },
  {
    "path": "template/secure_variables/write/t1.tmpl",
    "content": "{{ with nomadVar \"from/ct\" }}{{ sprig_toPrettyJson . }}{{end}}\n"
  },
  {
    "path": "template/secure_variables/write/t2.out",
    "content": "[\n  {\n    \"Namespace\": \"default\",\n    \"Path\": \"from/ct\",\n    \"CreateIndex\": 163,\n    \"ModifyIndex\": 281,\n    \"CreateTime\": 1661553118420858000,\n    \"ModifyTime\": 1661555678363949000\n  },\n  {\n    \"Namespace\": \"default\",\n    \"Path\": \"test-kv-get/path\",\n    \"CreateIndex\": 203,\n    \"ModifyIndex\": 322,\n    \"CreateTime\": 1661554646385247000,\n    \"ModifyTime\": 1661556594727468000\n  },\n  {\n    \"Namespace\": \"default\",\n    \"Path\": \"test-kv-list/prefix/foo\",\n    \"CreateIndex\": 252,\n    \"ModifyIndex\": 329,\n    \"CreateTime\": 1661555400029986000,\n    \"ModifyTime\": 1661556594738143000\n  },\n  {\n    \"Namespace\": \"default\",\n    \"Path\": \"test-kv-list/prefix/wave/ocean\",\n    \"CreateIndex\": 254,\n    \"ModifyIndex\": 325,\n    \"CreateTime\": 1661555400030957000,\n    \"ModifyTime\": 1661556594729702000\n  },\n  {\n    \"Namespace\": \"default\",\n    \"Path\": \"test-kv-list/prefix/zip\",\n    \"CreateIndex\": 253,\n    \"ModifyIndex\": 324,\n    \"CreateTime\": 1661555400030321000,\n    \"ModifyTime\": 1661556594729216000\n  },\n  {\n    \"Namespace\": \"default\",\n    \"Path\": \"var/foo\",\n    \"CreateIndex\": 17,\n    \"ModifyIndex\": 331,\n    \"CreateTime\": 1661547722124356000,\n    \"ModifyTime\": 1661556594739606000\n  }\n]\n\n"
  },
  {
    "path": "template/secure_variables/write/t2.tmpl",
    "content": "{{ with nomadVarList \"\" }}{{ sprig_toPrettyJson .}}{{end}}\n\n"
  },
  {
    "path": "template/services/README.md",
    "content": ""
  },
  {
    "path": "template/services/byTag.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n  group \"group\" {\n    count = 1\n    task \"command\" {\n      resources { network { port \"export\" {} port \"exstat\" { static=8080 } } }\n      driver = \"exec\"\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out\"]\n      }\n      template {\n        data = <<EOH\nConstructive Play\n{{ printf \"%q\" ( services | byTag ) }}\n---\n{{ $tags := services | byTag }}\n{{ ( len $tags.standby ) }}\n---\nGet Service By Tag.  Output Alternate if None found\n{{ $nomad := service \"nomad\" | byTag }}\n{{- if eq (len $nomad.http) 0 -}}\nno services\n{{- else -}}\n    {{- range $service := $nomad.http -}}\n       {{- $service.Address }}\n    {{ end -}}\n{{ end }}\n---\nGet Service By Tag.  Output Alternate if None found\n{{ $nomad := service \"nomad\" | byTag }}\n{{- if eq (len $nomad.notATag) 0 -}}\nno services\n{{ else -}}\n    {{- range $service := $nomad.notATag }}\n        {{ $service.Address }}\n    {{ end -}}\n{{ end }}\n---\nGet Service By Tag.  Output Alternate if None found\n{{ $tag := \"notATag\" }}\n{{ $nomad := service \"nomad\" | byTag }}\n{{- if eq (len (index $nomad $tag) ) 0 -}}\nno services\n{{ else -}}\n    {{- range $service := index $nomad $tag }}\n        {{ $service.Address }}\n    {{ end -}}\n{{ end }}\n\nEOH\n\n        destination = \"local/template.out\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/template-system/README.md",
    "content": "# template-system\n\nThese are template playground jobs set up to run as system jobs. Because the\nsystem scheduler expects its jobs to keep running, each task emits the rendered\ntemplate to the alloc log and then sleeps forever.\n"
  },
  {
    "path": "template/template-system/composed_keys.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  group \"group\" {\n    count = 1\n    task \"command\" {\n      resources { network { port \"export\" {} port \"exstat\" { static=8080 } } }\n      driver = \"raw_exec\"\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out;while true; do sleep 10; done\"]\n      }\n      template {\n        data = <<EOH\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env \"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env 
\"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env \"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n                           \n   concat key:  service/fabio/{{ env \"NOMAD_JOB_NAME\" }}/listeners\n    key:         {{ keyOrDefault ( printf \"service/fabio/%s/listeners\" ( env \"NOMAD_JOB_NAME\" ) ) \":9999\" }}\n\n{{ define \"custom\" }}service/fabio/{{env \"NOMAD_JOB_NAME\" }}/listeners{{ end }}\n    key:         {{ keyOrDefault (executeTemplate \"custom\") \":9999\" }}\n\n   math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n\nEOH\n\n        destination = \"local/template.out\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/template-system/services-on-nomad-client.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  group \"group\" {\n    task \"template\" {\n      resources { memory=100 cpu=100 }\n      driver = \"raw_exec\"\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out; while true; do sleep 10; done\"]\n      }\n      template {\n        data = <<EOH\n\nhostname = \"{{ env \"node.unique.name\" }}\"\n\n{{- range services -}} \n  {{-  if .Name | contains \"nomad\" -}} \n    {{- range service .Name }}\n      {{ if (env \"node.unique.name\") | contains .Node }}\n# {{ .Name }} \n[[inputs.apache]]\n  urls = [\"http://{{ .Address }}:{{ .Port }}/server-status?auto\"]\n  tagexclude = [\"host\",\"url\"]\n      {{- end -}} \n    {{- end -}}\n  {{- end -}}\n{{- end -}}\nEOH\n        destination = \"local/template.out\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/template-system/template.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"system\"\n  group \"group\" {\n    task \"template\" {\n      resources { memory=100 cpu=100 network { port \"export\" {} port \"exstat\" { static=8080 } } }\n      driver = \"raw_exec\"\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out; while true; do sleep 10; done\"]\n      }\n      template {\n        data = <<EOH\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env \"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env 
\"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env \"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n\nFurther Consul Template Magic:\n\nMath\n  math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n\nComposition using inline templates\n\n  {{- define \"custom\" }}NOMAD_ADDR_{{\"date-output\" | replaceAll \"-\" \"_\" }}_sample{{ end }}\n  {{ executeTemplate \"custom\" }}: {{ env (executeTemplate \"custom\") }}\n\nComposition using printf\n  {{ $envKey := printf \"NOMAD_ADDR_%s_%s\" (\"date-output\" | replaceAll \"-\" \"_\" ) \"sample\" }}\n  {{ $envKey }}: {{ env $envKey }}\n\nEOH\n\n        destination = \"local/template.out\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/template_handoff/README.md",
    "content": "# Handoff\n\nThis job file demonstrates using a prestart (init) task to write a rendered\ntemplate to a location where another task in the same allocation can pick it\nup. It also demonstrates using the ${NOMAD_ALLOC_DIR} variable with a\n`raw_exec` task, since `raw_exec` tasks have access to the entire host\nfilesystem.\n"
  },
  {
    "path": "template/template_handoff/handoff.nomad",
    "content": "job \"handoff\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  group \"template-job\" {\n    task \"render-template\" {\n      driver = \"exec\"\n\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"echo \\\"This would be a great place to upload the template from\\\"; cat /alloc/template.out\"]\n      }\n\n      lifecycle {\n        hook = \"prestart\"\n      }\n\n      template {\n        data=<<EOF\nThis allocation is running on {{ env \"attr.unique.network.ip-address\" }}\nEOF\n        destination = \"../alloc/template.out\"\n      }\n    }\n\n    task \"main\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat $NOMAD_ALLOC_DIR/template.out\"]\n      }\n\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/template_handoff/handoff_restart.nomad",
    "content": "job \"handoff\" {\n  datacenters = [\"dc1\"]\n\n  group \"template-job\" {\n    task \"render-template\" {\n      driver = \"exec\"\n\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"echo \\\"This would be a great place to upload the template from\\\"; cat /alloc/template.out; while true; do sleep 300; done\"]\n      }\n\n      lifecycle {\n        hook    = \"prestart\"\n        sidecar = true\n      }\n\n      template {\n        destination = \"../alloc/template.out\"\n        change_mode = \"restart\"\n        data        = <<EOF\nThis is a {{ printf \"%s %s\" \"template\" \". yay!\" }}\nEOF\n      }\n\n      resources {\n        cpu = 100\n        memory = 100\n      }\n    }\n\n    task \"main\" {\n      driver = \"exec\"\n\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"while true; do echo $(date); sleep 30; done\"]\n      }\n\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/template_into_docker/example.nomad",
    "content": "job \"example\" {\n  datacenters = [\"dc1\"]\n\n  group \"cache\" {\n    network {\n      port \"http\" {\n        to = 80\n      }\n    }\n    task \"nginx\" {\n      driver = \"docker\"\n\n      config {\n        image = \"nginx:alpine\"\n        ports = [\"http\"]\n        mounts = [\n          {\n            type     = \"bind\"\n            target   = \"/usr/share/nginx/html/admin/conf.js\"\n            source   = \"conf.js\"\n            readonly = false\n\n            bind_options {\n              propagation = \"rshared\"\n            }\n          },\n        ]\n      }\n\n      template {\n        destination = \"conf.js\"\n        data        = <<EOH\nwindow.env = {\n  apiUrl: \"http://example.com/api\"\n}\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/template_playground/composed_keys.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"group\" {\n    network {\n      port \"export\" {}\n      port \"exstat\" {\n        static=8080\n      }\n    }\n\n    task \"command\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out\"]\n      }\n\n      template {\n        destination = \"local/template.out\"\n        data        = <<EOH\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env \"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n       
       NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env \"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n\n   concat key:  service/fabio/{{ env \"NOMAD_JOB_NAME\" }}/listeners\n    key:         {{ keyOrDefault ( printf \"service/fabio/%s/listeners\" ( env \"NOMAD_JOB_NAME\" ) ) \":9999\" }}\n\n{{ define \"custom\" }}service/fabio/{{env \"NOMAD_JOB_NAME\" }}/listeners{{ end }}\n    key:         {{ keyOrDefault (executeTemplate \"custom\") \":9999\" }}\n\n   math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/template_playground/template-exec.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"group\" {\n    network {\n      port \"export\" {}\n      port \"exstat\" {\n        static = 8080\n      }\n    }\n\n    task \"template\" {\n      driver = \"exec\"\n\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"while true; do sleep 300; done\"]\n      }\n\n      template {\n        destination = \"local/template.out\"\n        data        = <<EOH\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env \"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env 
\"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env \"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n\nFurther Consul Template Magic:\n\nMath\n  math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n\nComposition using inline templates\n\n  {{- define \"custom\" }}NOMAD_ADDR_{{\"date-output\" | replaceAll \"-\" \"_\" }}_sample{{ end }}\n  {{ executeTemplate \"custom\" }}: {{ env (executeTemplate \"custom\") }}\n\nComposition using printf\n  {{ $envKey := printf \"NOMAD_ADDR_%s_%s\" (\"date-output\" | replaceAll \"-\" \"_\" ) \"sample\" }}\n  {{ $envKey }}: {{ env $envKey }}\n\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/template_playground/template-hcl2.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type = \"batch\"\n\n  group \"group\" {\n    network {\n      port \"sample\" {}\n      port \"export\" {}\n      port \"exstat\" {\n        static=8080\n      }\n    }\n\n    task \"env-output\" {\n      driver = \"raw_exec\"\n      config { command = \"env\" }\n    }\n\n    task \"date-output\" {\n      driver = \"raw_exec\"\n      config { command = \"date\" }\n    }\n\n    task \"template\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out\"]\n      }\n\n      template {\n        destination = \"local/template.out\"\n        data        = <<EOH\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env \"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env \"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: 
{{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env \"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n\nFurther Consul Template Magic:\n\nMath\n  math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n\nComposition using inline templates\n\n  {{- define \"custom\" }}NOMAD_ADDR_{{\"date-output\" | replaceAll \"-\" \"_\" }}_sample{{ end }}\n  {{ executeTemplate \"custom\" }}: {{ env (executeTemplate \"custom\") }}\n\nComposition using printf\n  {{ 
$envKey := printf \"NOMAD_ADDR_%s_%s\" (\"date-output\" | replaceAll \"-\" \"_\" ) \"sample\" }}\n  {{ $envKey }}: {{ env $envKey }}\n\nOperating System behaviors\n{{ print \"\" }}\n{{- $DIVIDER := \":\" -}}\n{{- if ( eq ( env \"attr.kernel.name\" ) \"windows\" ) -}}\n{{- $DIVIDER = \";\" -}}\n{{- end -}}\nCLASSPATH=\"local/membrane-service-proxy-4.7.3/conf{{$DIVIDER}}local/membrane-service-proxy-4.7.3/starter.jar\"\n\n{{ $MEMBRANE_HOME := print (env \"NOMAD_TASK_DIR\") \"/membrane-service-proxy-4.7.3\" -}}\nMEMBRANE_HOME={{$MEMBRANE_HOME}}\n\n\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/template_playground/template.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"group\" {\n    network {\n      port \"sample\" {}\n      port \"export\" {}\n      port \"exstat\" {\n        static = 8080\n      }\n    }\n\n    task \"env-output\" {\n      driver = \"raw_exec\"\n      config {\n        command = \"env\"\n      }\n    }\n\n    task \"date-output\" {\n      driver = \"raw_exec\"\n      config {\n        command = \"date\"\n      }\n    }\n\n    task \"template\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args = [\"-c\", \"cat local/template.out\"]\n      }\n\n      template {\n        destination = \"local/template.out\"\n        data        = <<EOH\n                 node.unique.id: {{ env \"node.unique.id\" }}\n                node.datacenter: {{ env \"node.datacenter\" }}\n               node.unique.name: {{ env \"node.unique.name\" }}\n                     node.class: {{ env \"node.class\" }}\n                  attr.cpu.arch: {{ env \"attr.cpu.arch\" }}\n              attr.cpu.numcores: {{ env \"attr.cpu.numcores\" }}\n          attr.cpu.totalcompute: {{ env \"attr.cpu.totalcompute\" }}\n         attr.consul.datacenter: {{ env \"attr.consul.datacenter\" }}\n           attr.unique.hostname: {{ env \"attr.unique.hostname\" }}\n attr.unique.network.ip-address: {{ env \"attr.unique.network.ip-address\" }}\n               attr.kernel.name: {{ env \"attr.kernel.name\" }}\n            attr.kernel.version: {{ env \"attr.kernel.version\" }}\n       attr.platform.aws.ami-id: {{ env \"attr.platform.aws.ami-id\" }}\nattr.platform.aws.instance-type: {{ env \"attr.platform.aws.instance-type\" }}\n                   attr.os.name: {{ env \"attr.os.name\" }}\n                attr.os.version: {{ env \"attr.os.version\" }}\n\n                NOMAD_ALLOC_DIR: {{env \"NOMAD_ALLOC_DIR\"}}\n                 NOMAD_TASK_DIR: {{env \"NOMAD_TASK_DIR\"}}\n              NOMAD_SECRETS_DIR: {{env 
\"NOMAD_SECRETS_DIR\"}}\n             NOMAD_MEMORY_LIMIT: {{env \"NOMAD_MEMORY_LIMIT\"}}\n                NOMAD_CPU_LIMIT: {{env \"NOMAD_CPU_LIMIT\"}}\n                 NOMAD_ALLOC_ID: {{env \"NOMAD_ALLOC_ID\"}}\n               NOMAD_ALLOC_NAME: {{env \"NOMAD_ALLOC_NAME\"}}\n              NOMAD_ALLOC_INDEX: {{env \"NOMAD_ALLOC_INDEX\"}}\n                NOMAD_TASK_NAME: {{env \"NOMAD_TASK_NAME\"}}\n               NOMAD_GROUP_NAME: {{env \"NOMAD_GROUP_NAME\"}}\n                 NOMAD_JOB_NAME: {{env \"NOMAD_JOB_NAME\"}}\n                       NOMAD_DC: {{env \"NOMAD_DC\"}}\n                   NOMAD_REGION: {{env \"NOMAD_REGION\"}}\n                    VAULT_TOKEN: {{env \"VAULT_TOKEN\"}}\n\n              NOMAD_ADDR_export: {{env \"NOMAD_ADDR_export\"}}\n              NOMAD_ADDR_exstat: {{env \"NOMAD_ADDR_exstat\"}}\n         NOMAD_HOST_PORT_export: {{env \"NOMAD_HOST_PORT_export\"}}\n         NOMAD_HOST_PORT_exstat: {{env \"NOMAD_HOST_PORT_exstat\"}}\n                NOMAD_IP_export: {{env \"NOMAD_IP_export\"}}\n                NOMAD_IP_exstat: {{env \"NOMAD_IP_exstat\"}}\n              NOMAD_PORT_export: {{env \"NOMAD_PORT_export\"}}\n              NOMAD_PORT_exstat: {{env \"NOMAD_PORT_exstat\"}}\n                     GOMAXPROCS: {{env \"GOMAXPROCS\"}}\n                           HOME: {{env \"HOME\"}}\n                           LANG: {{env \"LANG\"}}\n                        LOGNAME: {{env \"LOGNAME\"}}\n                           PATH: {{env \"PATH\"}}\n                            PWD: {{env \"PWD\"}}\n                          SHELL: {{env \"SHELL\"}}\n                          SHLVL: {{env \"SHLVL\"}}\n                           USER: {{env \"USER\"}}\n\nFurther Consul Template Magic:\n\nMath\n  math - alloc_id + 1: {{env \"NOMAD_ALLOC_INDEX\" | parseInt | add 1}}\n\nComposition using inline templates\n\n  {{- define \"custom\" }}NOMAD_ADDR_{{\"date-output\" | replaceAll \"-\" \"_\" }}_sample{{ end }}\n  {{ executeTemplate \"custom\" }}: {{ env 
(executeTemplate \"custom\") }}\n\nComposition using printf\n  {{ $envKey := printf \"NOMAD_ADDR_%s_%s\" (\"date-output\" | replaceAll \"-\" \"_\" ) \"sample\" }}\n  {{ $envKey }}: {{ env $envKey }}\n\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/use_whitespace/byTag.nomad",
    "content": "job \"template\" {\n  datacenters = [\"dc1\"]\n  type        = \"batch\"\n\n  group \"group\" {\n    network {\n      port \"export\" {}\n      port \"exstat\" {\n        static = 8080\n      }\n    }\n\n    task \"command\" {\n      driver = \"exec\"\n\n      config {\n        command = \"bash\"\n        args    = [\"-c\", \"cat local/template.out\"]\n      }\n\n      template {\n        destination = \"local/template.out\"\n        data        = <<EOH\nConstructive Play\n{{ printf \"%q\" ( services | byTag ) }}\n---\n{{ $tags := services | byTag }}\n{{ ( len $tags.standby ) }}\n---\nGet Service By Tag.  0utput Alternate if None found\n{{ $nomad := service \"nomad\" | byTag }}\n{{- if eq (len $nomad.http) 0 -}}\nno services\n{{- else -}}\n    {{- range $service := $nomad.http -}}\n       {{- $service.Address }}\n    {{ end -}}\n{{ end }}\n---\nGet Service By Tag.  Output Alternate if None found\n{{ $nomad := service \"nomad\" | byTag }}\n{{- if eq (len $nomad.notATag) 0 -}}\nno services\n{{ else -}}\n    {{- range $service := $nomad.notATag }}\n        {{ $service.Address }}\n    {{ end -}}\n{{ end }}\n---\nGet Service By Tag.  Output Alternate if None found\n{{ $tag := \"notATag\" }}\n{{ $nomad := service \"nomad\" | byTag }}\n{{- if eq (len (index $nomad $tag) ) 0 -}}\nno services\n{{ else -}}\n    {{- range $service := index $nomad $tag }}\n        {{ $service.Address }}\n    {{ end -}}\n{{ end }}\n\nEOH\n\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "test.sh",
    "content": "#!/usr/bin/perl -w\nuse strict;\nuse Data::Dumper;\n\nmy ($jobName) = @ARGV;\n\nif (not defined $jobName) {\n  die \"Need a path to a Nomad job file.\\n\";\n}\n\nmy $prog = \"nomad\";\nmy @args = (\"job\", \"plan\");\npush @args, $jobName;\n\nmy $exitCode = 0;\n\nmy $return_value = `$prog @args 2>&1`;\nif ($? == -1) {\n    print \"failed to execute: $!\\n\";\n}\nelsif ($? & 127) {\n    printf \"child died with signal %d, %s coredump\\n\",\n        ($? & 127),  ($? & 128) ? 'with' : 'without';\n}\nelse {\n    $exitCode = $? >> 8;\n}\n\nif ($exitCode == 255) {\n  print \"I'm an exitcode 255!\\n\";\n  # try parsing the error some?\n  if ($return_value =~ \"Failed to parse using HCL 2\" ) {\n    print \"I'm here!\\n\";\n    my $re = '/.*:\\n([^:]+):(\\d+),(\\d+)-(\\d+)\\n(.*)/m';\n    my @matches = $return_value =~ $re;\n    # Print the entire match result\n    print Dumper(@matches);\n    exit\n\n  }\n}\nif ($exitCode == 0) {\n    print \"Success!\\n\";\n}\nelse {\n    printf \"Failed with error code %d\\nError message: %s\\n\",\n        $exitCode>>8, $return_value;\n}"
  },
  {
    "path": "vault/deleted_policy/README.md",
    "content": "## Deleted Policy\n\nThe Nomad Vault integration will shut down workload that depends on a specific vault policy once the server dertermines\nthat it can not derive a token that contains the requested policy.  These files will let you reproduce this yourself.\n\nI will need to come back and document how this _actually_ works tho.\n"
  },
  {
    "path": "vault/deleted_policy/break_it.sh",
    "content": "#!/bin/bash\necho \"Breaking the 'nomad-cluster' role\"\nvault write /auth/token/roles/nomad-cluster @nomad-cluster-role.broken.json\n"
  },
  {
    "path": "vault/deleted_policy/nomad-cluster-role.broken.json",
    "content": "{\n  \"disallowed_policies\": [\"nomad-server\"],\n  \"allowed_policies\": [\"nomad-client\"],\n  \"explicit_max_ttl\": 0,\n  \"name\": \"nomad-cluster\",\n  \"orphan\": true,\n  \"period\": 600,\n  \"renewable\": true\n}\n"
  },
  {
    "path": "vault/deleted_policy/nomad-cluster-role.json",
    "content": "{\n  \"disallowed_policies\": [\"nomad-server\"],\n  \"allowed_policies\": [\"nomad-client\",\"my-cool-policy\"],\n  \"explicit_max_ttl\": 0,\n  \"name\": \"nomad-cluster\",\n  \"orphan\": true,\n  \"period\": 600,\n  \"renewable\": true\n}\n"
  },
  {
    "path": "vault/deleted_policy/nomad-server-policy.hcl",
    "content": "path \"auth/token/lookup-self\"     { capabilities = [\"read\"]   }\npath \"auth/token/lookup\"          { capabilities = [\"update\"] }\npath \"auth/token/revoke-accessor\" { capabilities = [\"update\"] }\npath \"sys/capabilities-self\"      { capabilities = [\"update\"] }\npath \"auth/token/renew-self\"      { capabilities = [\"update\"] }\n\npath \"auth/token/create/nomad-cluster\" { capabilities = [\"update\"] }\npath \"auth/token/roles/nomad-cluster\"  { capabilities = [\"read\"]   }\n\npath \"auth/token/create/nomad-aaa\" { capabilities = [\"update\"] }\npath \"auth/token/roles/nomad-aaa\"  { capabilities = [\"read\"]   }\n\n\n"
  },
  {
    "path": "vault/deleted_policy/setup.sh",
    "content": "#!/bin/bash\nwait() {\n\tread -n 1 -s -r -p \"\nPress any key to continue...\"\n\techo \"\"\n}\n\ncuteSleep() {\n    echo -n \"Sleeping for $1 seconds\"\n    for i in $(seq 1 ${1})\n    do\n        echo -n \".\"\n        sleep 1\n    done\n    echo \"\"\n}\n\nexport VAULT_ADDR=http://127.0.0.1:8200\necho \"Starting Vault Dev Server\"\nvault server -dev &>vault.log &\nVAULT_PID=$!\necho \"Started Vault Dev Server (pid ${VAULT_PID})\"\ncuteSleep 2\n# Write the policy to Vault\necho \"Creating the vault policies...\"\necho \"  'nomad-server'\"\nvault policy write nomad-server nomad-server-policy.hcl\necho \"  'nomad-client'\"\nvault policy write nomad-client nomad-server-policy.hcl\necho \"  'my-cool-policy'\"\nvault policy write my-cool-policy nomad-server-policy.hcl\n\n# Create the token role with Vault\necho \"Creating the 'nomad-cluster' role\"\nvault write /auth/token/roles/nomad-cluster @nomad-cluster-role.json\n\nvault token create -policy nomad-server -period 10m -orphan | tee > nomad-server.token.out\ngrep -e \"^token \" nomad-server.token.out | awk '{print $2}' | tr -d '\\n' > nomad-server.token\nDATA_DIR=`pwd`/data\nnomad agent -dev -vault-enabled=true -data-dir=${DATA_DIR} -vault-address=\"http://127.0.0.1:8200\" -vault-token=\"`cat nomad-server.token`\" -vault-create-from-role=nomad-cluster &>nomad.log &\nNOMAD_PID=$!\necho \"Started Nomad Dev Server (pid ${NOMAD_PID})\"\ncuteSleep 8\n\n\nwait\necho \"Killing Nomad Dev Server (pid ${NOMAD_PID})\"\nkill ${NOMAD_PID}\necho \"Killing Vault Dev Server (pid ${VAULT_PID})\"\nkill ${VAULT_PID}\necho \"Cleaning up data directory.\"\nrm -rf ${DATA_DIR}"
  },
  {
    "path": "vault/deleted_policy/temp1.nomad",
    "content": "job temp {\n  datacenters = [\"dc1\"]\n  group \"group\" {\n    count = 1\n\n## You might want to constrain this, so here's one to help\n#    constraint {\n#      attribute = \"${attr.unique.hostname}\"\n#      operator  = \"=\"\n#      value     = \"nomad-client-1.node.consul\"\n#    }\n\n    task \"sleepy-bash\" {\n      template {\n        data = <<EOH\n#!/bin/bash\n\necho \"$(date) -- Starting sleepy.\"\necho \"$(date) -- NOMAD_TASK_DIR=${NOMAD_TASK_DIR}\"\necho \"$(date) -- Going to sleep forever. Stop the job via Nomad when you would like.\"\nwhile true\ndo \n  sleep 5\ndone\nEOH\n        destination = \"local/sleepy.sh\"\n      }\n\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args = [\"${NOMAD_TASK_DIR}/sleepy.sh\"]\n      }\n\n      resources {\n        memory = 10\n        cpu = 50\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "vault/deleted_policy/workload.nomad",
    "content": "job sleepy {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy-bash\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"bash\"\n        args = [\"${NOMAD_TASK_DIR}/sleepy.sh\"]\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\necho \"$(date) -- Starting sleepy.\"\necho \"$(date) -- NOMAD_TASK_DIR=${NOMAD_TASK_DIR}\"\necho \"$(date) -- VAULT_TOKEN=${VAULT_TOKEN}\"\necho \"$(date) -- Going to sleep forever. Stop the job via Nomad when you would like.\"\nwhile true\ndo\n  sleep 5\ndone\nEOH\n      }\n\n      resources {\n        memory = 10\n        cpu    = 50\n      }\n\n      vault {\n        policies      = [\"my-cool-policy\"]\n        change_mode   = \"restart\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "vault/pki/README.md",
    "content": "## Nomad Jobs using the Vault PKI backend\n\nThese jobs provide some examples of using the template block to generate PKI certificates from Vault\n\nThese guides expect a PKI backend configured according to the [Build Your Own Certificate Authority (CA)](https://learn.hashicorp.com/vault/secrets-management/sm-pki-engine) guide.\n"
  },
  {
    "path": "vault/pki/sleepy_bash_pki.nomad",
    "content": "job sleepy {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy-bash\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      vault {\n        policies      = [\"nomad-client\"]\n        change_mode   = \"signal\"\n        change_signal = \"SIGUSR1\"\n      }\n\n      resources {\n        memory = 10\n        cpu    = 50\n      }\n\n      template {\n        destination   = \"${NOMAD_SECRETS_DIR}/certificate.crt\"\n        change_mode   = \"restart\"\n        data          = <<EOH\n{{ with secret \"pki_int/issue/example-dot-com\" \"common_name=test.example.com\" \"ttl=24h\" \"ip_sans=127.0.0.1\" }}\n{{- .Data.certificate -}}\n{{ end }}\nEOH\n      }\n\n      template {\n        destination   = \"${NOMAD_SECRETS_DIR}/ca.crt\"\n        change_mode   = \"restart\"\n        data          = <<EOH\n{{ with secret \"pki_int/issue/example-dot-com\" \"common_name=test.example.com\" \"ttl=24h\" \"ip_sans=127.0.0.1\" }}\n{{- .Data.issuing_ca -}}\n{{ end }}\nEOH\n      }\n\n      template {\n        destination   = \"${NOMAD_SECRETS_DIR}/private_key.key\"\n        change_mode   = \"restart\"\n        data          = <<EOH\n{{ with secret \"pki_int/issue/example-dot-com\" \"common_name=test.example.com\" \"ttl=24h\" \"ip_sans=127.0.0.1\" }}\n{{- .Data.private_key -}}\n{{ end }}\nEOH\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\necho \"$(date) -- Starting sleepy.\"\necho \"$(date) -- ${NOMAD_SECRETS_DIR}/certificate.crt\"\ncat ${NOMAD_SECRETS_DIR}/certificate.crt\necho \"$(date) -- ${NOMAD_SECRETS_DIR}/ca.crt\"\ncat ${NOMAD_SECRETS_DIR}/ca.crt\necho \"$(date) -- ${NOMAD_SECRETS_DIR}/private_key.key\"\ncat ${NOMAD_SECRETS_DIR}/private_key.key\necho \"$(date) -- Going to sleep forever. Stop the job via Nomad when you would like.\"\nwhile true\ndo\n  sleep 5\ndone\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "vault/pki/test.nomad",
    "content": "job sleepy {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    task \"sleepy-bash\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      vault {\n        policies      = [\"nomad-client\"]\n        change_mode   = \"signal\"\n        change_signal = \"SIGUSR1\"\n      }\n\n      resources {\n        memory = 10\n        cpu    = 50\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\necho \"$(date) -- Starting sleepy.\"\necho \"$(date) -- VAULT_TOKEN: ${VAULT_TOKEN}\"\necho \"$(date) -- Going to sleep forever. Stop the job via Nomad when you would like.\"\nwhile true\ndo\n  sleep 5\ndone\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "vault/sleepy_vault_bash/sleepy_bash.nomad",
    "content": "job sleepy {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n## You might want to constrain this, so here's one to help\n#    constraint {\n#      attribute = \"${attr.unique.hostname}\"\n#      operator  = \"=\"\n#      value     = \"nomad-client-1.node.consul\"\n#    }\n\n    task \"sleepy-bash\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\necho \"$(date) -- Starting sleepy.\"\necho \"$(date) -- VAULT_TOKEN=${VAULT_TOKEN}\"\necho \"$(date) -- Going to sleep forever. Stop the job via Nomad when you would like.\"\nwhile true\ndo\n  sleep 5\ndone\nEOH\n      }\n\n      resources {\n        memory = 10\n        cpu    = 50\n      }\n\n      vault {\n        policies      = [\"nomad-client\"]\n        change_mode   = \"signal\"\n        change_signal = \"SIGUSR1\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "vault/sleepy_vault_bash/test.nomad",
    "content": "job sleepy {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n\n## You might want to constrain this, so here's one to help\n#    constraint {\n#      attribute = \"${attr.unique.hostname}\"\n#      operator  = \"=\"\n#      value     = \"nomad-client-1.node.consul\"\n#    }\n\n    task \"sleepy-bash\" {\n      driver = \"exec\"\n\n      config {\n        command = \"${NOMAD_TASK_DIR}/sleepy.sh\"\n      }\n\n      env {\n        SPRING_CLOUD_VAULT_TOKEN = \"${VAULT_TOKEN}\"\n      }\n\n      template {\n        destination = \"local/sleepy.sh\"\n        data        = <<EOH\n#!/bin/bash\n\necho \"$(date) -- Starting sleepy.\"\necho \"$(date) -- VAULT_TOKEN=${VAULT_TOKEN}\"\necho \"$(date) -- SPRING_CLOUD_VAULT_TOKEN=${SPRING_CLOUD_VAULT_TOKEN}\"\necho \"$(date) -- Going to sleep forever. Stop the job via Nomad when you would like.\"\nwhile true\ndo\n  sleep 5\ndone\nEOH\n      }\n\n      resources {\n        memory = 10\n        cpu    = 50\n      }\n\n      vault {\n        policies      = [\"nomad-client\"]\n        change_mode   = \"signal\"\n        change_signal = \"SIGUSR1\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "vault_reload_triggered_by_consul/README.md",
    "content": "This is to determine if a consul kv change can cause a secret to be refectched from vault.\n\n"
  },
  {
    "path": "vault_reload_triggered_by_consul/SleepyEcho.sh",
    "content": "#! /bin/bash\n\nif [ -z \"$1\" ] \nthen\n  SLEEP_SECS=\"2\"\nelse\n  SLEEP_SECS=\"$1\"\nfi\n\nif [ -z \"${EXTRAS}\" ]\nthen\n  ep=\"\"\nelse \n  ep=\"EXTRAS: [${EXTRAS}]\"\nfi \n\necho \"$(date) -- Starting SleepyEcho. Sleep interval is ${SLEEP_SECS} sec. ${ep}\"\n\nwhile true\ndo \n  echo \"$(date) -- Alive... going back to sleep for ${SLEEP_SECS}.  ${ep}\"\n  sleep ${SLEEP_SECS}\ndone\n"
  },
  {
    "path": "vault_reload_triggered_by_consul/sample.nomad",
    "content": "job \"sample\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    constraint {\n      attribute = \"${attr.kernel.name}\"\n      value     = \"linux\"\n    }\n\n    task \"task\" {\n      driver = \"raw_exec\"\n\n      config {\n        command = \"local/bin/SleepyEcho.sh\"\n        args    = [\"3\"]\n      }\n\n      artifact {\n        source      = \"https://angrycub-hc.s3.amazonaws.com/public/SleepyEcho.sh\"\n        destination = \"local/bin\"\n      }\n\n      template {\n        destination = \"secrets/file.env\"\n        env         = true\n        data        = <<EOH\nCHANGE_SERIAL=\"{{key \"service/sleepyecho/change_serial\"}}\"\nEXTRAS=\"{{with secret \"secret/sleepyecho/password\"}}{{.Data.value}}{{end}}\"\nEOH\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "victoriametrics/vm.nomad",
    "content": "job \"vm-dc1\" {\n  datacenters = [\"dc1\"]\n\n  meta {\n    datacenter = \"dc1\"\n    team       = \"myTeam\"\n  }\n\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"2m\"\n    auto_revert      = false\n    canary           = 0\n  }\n\n  group \"vm-myTeam\" {\n    network {\n      port \"http\" {\n        static = \"8428\"\n      }\n    }\n\n    service {\n      name = \"vm-myTeam-dc1\"\n      tags = [\"vm-myTeam\", \"dc1\"]\n      port = \"http\"\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        port     = \"http\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"victoriametrics\" {\n      driver = \"docker\"\n\n      config {\n        #        image = \"victoriametrics/victoria-metrics:v${version}\"\n        image = \"victoriametrics/victoria-metrics:latest\"\n        args  = [\n          \"-maxConcurrentInserts=128\",\n          \"-insert.maxQueueDuration=2m0s\"\n        ]\n        ports = [\"http\"]\n      }\n\n      resources {\n        cpu    = 2000\n        memory = 512\n      }\n    }\n  }\n}"
  },
  {
    "path": "win_rawexec_restart/SleepyEcho.ps1",
    "content": "if (-not (Test-Path env:SLEEP_SECS)) { $env:SLEEP_SECS = 2 }\n\n\"$(get-date) -- Starting SleepyEcho. Sleep interval is $env:SLEEP_SECS sec.\"\n\nwhile ($true) { \n  if (Test-Path env:EXTRAS) { $extras=\" EXTRAS: $env:EXTRAS\" } else {$extras=\"\"}\n  if (Test-Path env:VAULT_TOKEN) { $vt=\" VAULT_TOKEN: $env:VAULT_TOKEN\" } else {$vt=\"\"}\n  \"$(get-date) -- Alive... going back to sleep for $env:SLEEP_SECS seconds. $vt $extras\"\n  start-sleep $env:SLEEP_SECS\n} \n"
  },
  {
    "path": "win_rawexec_restart/artifact_sleepyecho.nomad",
    "content": "job \"repro\" {\n  datacenters = [\"dc1\"]\n\n  group \"group\" {\n    constraint {\n      attribute = \"${attr.kernel.name}\"\n      value     = \"windows\"\n    }\n\n    task \"artifact\" {\n      driver = \"raw_exec\"\n\n      template {\n        data        = <<EOH\nEXTRAS=\"{{ key \"sleepyecho/extra\" }}\"\nEOH\n        destination = \"secrets/file.env\"\n        env         = true\n      }\n\n      config {\n        command = \"powershell.exe\"\n        args    = [\"local/bin/SleepyEcho.ps1\"]\n      }\n\n      artifact {\n        source      = \"https://angrycub-hc.s3.amazonaws.com/public/SleepyEcho.ps1\"\n        destination = \"local/bin\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "windows_docker/docker-iis.nomad",
    "content": "job \"docker-iis\" {\n  datacenters = [\"dc1\"]\n\n  group \"windows\" {\n    constraint {\n      attribute = \"${attr.kernel.name}\"\n      operator  = \"=\"\n      value     = \"windows\"\n    }\n\n    network {\n      port \"www\" {\n        to = 80\n      }\n    }\n\n    service {\n      name = \"windows-docker-iis\"\n      tags = [\"windows\",\"iis\"]\n      port = \"www\"\n\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"iis-site\" {\n      driver = \"docker\"\n\n      config {\n        image = \"voiselle/iis-dockerfile:v1\"\n        ports = [\"www\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "windows_docker/windows-test.nomad",
    "content": "job \"docker-iis\" {\n  datacenters = [\"dc1\"]\n\n  group \"windows\" {\n    constraint {\n      attribute = \"${attr.kernel.name}\"\n      operator  = \"=\"\n      value     = \"windows\"\n    }\n\n    network {\n      port \"www\" {\n        to = 80\n      }\n    }\n\n    service {\n      name = \"windows-docker-iis\"\n      tags = [\"windows\",\"iis\"]\n      port = \"www\"\n\n      check {\n        name     = \"alive\"\n        type     = \"tcp\"\n        interval = \"10s\"\n        timeout  = \"2s\"\n      }\n    }\n\n    task \"iis-site\" {\n      driver = \"docker\"\n\n      config {\n        image = \"voiselle/windows-test:v1\"\n        ports = [\"www\"]\n      }\n\n      resources {\n        cpu    = 500\n        memory = 256\n      }\n    }\n  }\n}\n"
  }
]