Repository: angrycub/nomad_example_jobs
Branch: main
Commit: 034cc3998d19
Files: 503
Total size: 84.4 MB
Directory structure:
gitextract_paoaxy22/
├── .envrc
├── .gitignore
├── HCL2/
│ ├── add_local_file/
│ │ ├── README.md
│ │ ├── input.file
│ │ ├── raw_file_b64.nomad
│ │ ├── raw_file_delims.nomad
│ │ ├── raw_file_json.nomad
│ │ └── use_file.nomad
│ ├── always_change/
│ │ ├── README.md
│ │ ├── before.nomad
│ │ ├── uuid.nomad
│ │ └── variable.nomad
│ ├── dynamic/
│ │ ├── README.md
│ │ └── example.nomad
│ ├── object_to_template/
│ │ ├── README.md
│ │ └── example.nomad
│ └── variable_jobs/
│ ├── README.md
│ ├── decode-external-file/
│ │ ├── README.MD
│ │ ├── env.json
│ │ ├── job1.nomad
│ │ └── job2.nomad
│ ├── env-vars/
│ │ ├── README.MD
│ │ ├── env.vars
│ │ ├── job1.nomad
│ │ └── job2.nomad
│ ├── job.nomad
│ ├── job.vars
│ └── multiple-var-files/
│ ├── README.MD
│ ├── job1.nomad
│ ├── job1.vars
│ ├── job2.nomad
│ ├── job2.vars
│ ├── job3.nomad
│ ├── job3.vars
│ └── shared.vars
├── README.md
├── alloc_folder/
│ ├── mount_alloc.nomad
│ └── sidecar.nomad
├── applications/
│ ├── artifactory_oss/
│ │ ├── README.md
│ │ └── registry.nomad
│ ├── cluster-broccoli/
│ │ └── example.nomad
│ ├── docker_registry/
│ │ ├── README.md
│ │ └── registry.nomad
│ ├── docker_registry_v2/
│ │ ├── README.md
│ │ ├── htpasswd
│ │ ├── make_password.sh
│ │ └── registry.nomad
│ ├── docker_registry_v3/
│ │ ├── README.md
│ │ ├── make_password.sh
│ │ └── registry.nomad
│ ├── mariadb/
│ │ └── mariadb.nomad
│ ├── membrane-soa/
│ │ ├── README.md
│ │ ├── soap-proxy-v1-linux.nomad
│ │ ├── soap-proxy-v1-windows.nomad
│ │ └── soap-proxy.nomad
│ ├── minio/
│ │ ├── README.md
│ │ ├── minio.nomad
│ │ └── secure-variables/
│ │ ├── README.md
│ │ ├── minio-data/
│ │ │ └── .gitkeep
│ │ ├── minio.nomad
│ │ ├── start.sh
│ │ ├── stop.sh
│ │ └── volume.hcl
│ ├── postgres/
│ │ ├── README.md
│ │ └── postgres.nomad
│ ├── prometheus/
│ │ ├── README.md
│ │ ├── fabio-service.nomad
│ │ ├── grafana/
│ │ │ ├── README.md
│ │ │ └── nomad_jobs.json
│ │ ├── node-exporter.nomad
│ │ └── prometheus.nomad
│ ├── vms/
│ │ ├── freedos/
│ │ │ ├── .gitignore
│ │ │ ├── README.md
│ │ │ ├── freedos.img.tgz
│ │ │ ├── freedos.img.tgz.SHASUM
│ │ │ └── freedos.nomad
│ │ └── tinycore/
│ │ ├── README.md
│ │ ├── tc_ssh.nomad
│ │ └── tinycore.qcow2.tgz
│ └── wordpress/
│ ├── README.md
│ ├── distributed/
│ │ ├── README.md
│ │ ├── build-site.nomad
│ │ ├── nginx.nomad
│ │ ├── reset.sh
│ │ ├── wordpress-db.nomad
│ │ └── wordpress.nomad
│ └── simple/
│ ├── README.md
│ └── wordpress.nomad
├── artifact_sleepyecho/
│ ├── README.md
│ ├── SleepyEcho.sh
│ ├── artifact_sleepyecho.nomad
│ └── vault_sleepyecho.nomad
├── batch/
│ ├── batch_gc/
│ │ └── example.nomad
│ ├── dispatch/
│ │ ├── sleepy.nomad
│ │ ├── sleepy1.nomad
│ │ ├── sleepy10.nomad
│ │ ├── sleepy2.nomad
│ │ ├── sleepy3.nomad
│ │ ├── sleepy4.nomad
│ │ ├── sleepy5.nomad
│ │ ├── sleepy6.nomad
│ │ ├── sleepy7.nomad
│ │ ├── sleepy8.nomad
│ │ └── sleepy9.nomad
│ ├── dont_restart_fail/
│ │ ├── README.md
│ │ └── example.nomad
│ ├── lost_batch/
│ │ ├── README.md
│ │ ├── batch.nomad
│ │ └── periodic.nomad
│ ├── lots_of_batches/
│ │ ├── README.md
│ │ └── payload.nomad.template
│ ├── periodic/
│ │ ├── prohibit-overlap.nomad
│ │ └── template.nomad
│ └── spread_batch/
│ ├── example.nomad
│ └── example2.nomad
├── batch_overload/
│ ├── example.nomad
│ └── periodic.nomad
├── blocked_eval/
│ ├── README.md
│ └── example.nomad
├── check.sh
├── cni/
│ ├── README.md
│ ├── diy_brige/
│ │ ├── README.md
│ │ ├── diybridge.conflist
│ │ ├── example.nomad
│ │ └── repro.nomad
│ └── example.nomad
├── complex_meta/
│ ├── template_env.nomad
│ └── template_meta.nomad
├── connect/
│ ├── consul.nomad
│ ├── discuss/
│ │ ├── blocky.yaml
│ │ └── job.nomad
│ ├── dns-via-mesh/
│ │ ├── README.md
│ │ ├── consul-dns.nomad
│ │ ├── consul-dns2.nomad
│ │ └── go-resolv-test/
│ │ ├── .gitignore
│ │ ├── build.sh
│ │ └── main.go
│ ├── ingress_gateways/
│ │ └── ingress_gateway.nomad
│ ├── native/
│ │ └── cn-demo.nomad
│ ├── nginx_ingress/
│ │ ├── countdash.nomad
│ │ └── ingress.nomad
│ └── sidecar/
│ ├── countdash.nomad
│ └── countdash2.nomad
├── consul/
│ ├── add_check/
│ │ ├── README.md
│ │ ├── e1.nomad
│ │ ├── e2.nomad
│ │ └── e3.nomad
│ └── use_consul_for_kv_path/
│ ├── README.md
│ └── template.nomad
├── consul-template/
│ ├── coordination/
│ │ ├── README.md
│ │ └── sample.nomad
│ ├── missing_vault_value/
│ │ └── sample.nomad
│ └── my_first_kv/
│ ├── README.md
│ └── example.nomad
├── countdash/
│ ├── connect/
│ │ └── countdash.nomad
│ └── simple/
│ └── countdash.nomad
├── csi/
│ ├── aws/
│ │ ├── ebs/
│ │ │ ├── README.md
│ │ │ ├── busybox.nomad
│ │ │ ├── mysql-server.nomad
│ │ │ ├── plugin-ebs-controller.nomad
│ │ │ ├── plugin-ebs-nodes.nomad
│ │ │ └── volume.hcl
│ │ └── efs/
│ │ ├── README.md
│ │ ├── busybox.nomad
│ │ ├── node.nomad
│ │ └── volume.hcl
│ ├── gcp/
│ │ └── gce-pd/
│ │ ├── README.md
│ │ ├── config.nomad
│ │ ├── controller.nomad
│ │ ├── cv-nomad.hcl
│ │ ├── disk.hcl
│ │ ├── job.nomad
│ │ └── nodes.nomad
│ ├── hetzner/
│ │ └── volume/
│ │ ├── README.md
│ │ ├── config.nomad
│ │ ├── job.nomad
│ │ ├── node.nomad
│ │ └── volume.hcl
│ └── hostpath/
│ ├── block/
│ │ ├── README.md
│ │ ├── csi-hostpath-driver.nomad
│ │ ├── job.nomad
│ │ └── test.sh
│ ├── file/
│ │ ├── README.md
│ │ ├── csi-hostpath-driver.nomad
│ │ ├── job.nomad
│ │ └── test.sh
│ └── volume.hcl
├── deployments/
│ └── failing_deployment/
│ └── example.nomad
├── docker/
│ ├── auth_from_template/
│ │ ├── README.md
│ │ └── auth.nomad
│ ├── datadog/
│ │ ├── container_network.nomad
│ │ ├── ex3.nomad
│ │ └── example2.nomad
│ ├── docker+host_volume/
│ │ ├── README.md
│ │ ├── task_deps.nomad
│ │ └── unsafe.nomad
│ ├── docker_dynamic_hostname/
│ │ ├── README.md
│ │ ├── finished.nomad
│ │ ├── res_file
│ │ └── view.sh
│ ├── docker_entrypoint/
│ │ ├── Dockerfile
│ │ └── example.nomad
│ ├── docker_image_not_found/
│ │ ├── README.md
│ │ ├── reschedule.nomad
│ │ └── restart.nomad
│ ├── docker_interpolated_image_name/
│ │ ├── README.md
│ │ ├── example.nomad
│ │ └── hostname.nomad
│ ├── docker_logging/
│ │ └── example.nomad
│ ├── docker_mac_address/
│ │ └── example.nomad
│ ├── docker_network/
│ │ ├── example1.nomad
│ │ └── example2.nomad
│ ├── docker_nfs/
│ │ ├── README.md
│ │ └── example.nomad
│ ├── docker_template/
│ │ └── example.nomad
│ ├── docker_twice_in_alloc/
│ │ └── example.nomad
│ ├── docker_windows_abs_mount/
│ │ ├── Dockerfile
│ │ ├── README.md
│ │ ├── SleepyEcho.ps1
│ │ └── repro.nomad
│ ├── env_var_args/
│ │ ├── Dockerfile
│ │ ├── README.md
│ │ ├── cmd.sh
│ │ ├── cmd_alt.sh
│ │ ├── entrypoint.sh
│ │ ├── start.nomad
│ │ └── test.nomad
│ ├── get_fact_from_consul/
│ │ ├── README.md
│ │ ├── args.nomad
│ │ └── image.nomad
│ ├── host-volumes-and-users/
│ │ ├── README.md
│ │ └── scratch.nomad
│ ├── labels/
│ │ ├── README.md
│ │ ├── heredoc.nomad
│ │ ├── interpolation.nomad
│ │ └── literal.nomad
│ └── mount_alloc/
│ ├── README.md
│ └── example.nomad
├── drain/
│ └── example.nomad
├── dummy/
│ └── example.nomad
├── echo_stack/
│ ├── README.md
│ ├── fabio-system.nomad
│ ├── login-service.nomad
│ └── profile-service.nomad
├── env/
│ └── escaped_env_vars/
│ ├── Dockerfile
│ ├── README.md
│ ├── entrypoint.sh
│ └── example.nomad
├── environment/
│ ├── README.md
│ └── example.nomad
├── exec/
│ └── host-volumes-and-users/
│ ├── README.md
│ └── scratch.nomad
├── exec-zip/
│ ├── README.md
│ ├── example.nomad
│ └── folder.tgz
├── fabio/
│ ├── README.md
│ ├── fabio-docker.nomad
│ ├── fabio-service.nomad
│ └── fabio-system.nomad
├── fabio-ssl/
│ └── fabio-ssl.nomad
├── failing_jobs/
│ ├── README.md
│ ├── failing_sidecar/
│ │ ├── README.md
│ │ └── example.nomad
│ └── impossible_constratint/
│ ├── README.md
│ └── example.nomad
├── giant/
│ └── example.nomad
├── guide/
│ └── TUTORIAL_TEMPLATE.mdx
├── host_volume/
│ ├── README.md
│ ├── mariadb/
│ │ └── mariadb.nomad
│ ├── prometheus/
│ │ ├── README.md
│ │ ├── grafana/
│ │ │ ├── README.md
│ │ │ └── nomad_jobs.json
│ │ └── prometheus.nomad
│ └── read_only/
│ └── read_only.nomad
├── http_echo/
│ ├── arm-service.nomad
│ ├── bar-service.nomad
│ ├── car-service-broken-check.nomad
│ ├── foo-service.deployment.nomad
│ ├── foo-service.nomad
│ ├── foo-test.nomad
│ └── template/
│ ├── echo_template.nomad
│ ├── ets.nomad
│ ├── ets2.nomad
│ └── ets3.nomad
├── httpd_site/
│ ├── README.md
│ ├── httpd.nomad
│ ├── make_site.sh
│ ├── site-content/
│ │ ├── about.html
│ │ ├── css/
│ │ │ └── style.css
│ │ └── index.html
│ └── site-content.tgz
├── ipv6/
│ └── SimpleHTTPServer/
│ └── sample.nomad
├── java/
│ ├── JavaDriverTest/
│ │ ├── java-driver-test.nomad
│ │ └── test2.nomad
│ ├── README.md
│ ├── SampleWebApp.war
│ ├── apache_camel/
│ │ ├── camel-standalone-helloworld-1.0-SNAPSHOT.jar
│ │ └── java_files.nomad
│ └── jar-test/
│ ├── README.md
│ ├── jar/
│ │ └── Count.jar
│ ├── jar-test.nomad
│ └── src/
│ └── Count.java
├── job_examples/
│ ├── base-batch.nomad
│ └── meta/
│ ├── README.md
│ └── meta-batch.nomad
├── json-jobs/
│ ├── example.nomad
│ └── job.json
├── load_balancers/
│ └── traefik/
│ ├── README.md
│ ├── traefik.nomad
│ ├── webapp.nomad
│ └── webapp2.nomad
├── meta/
│ ├── README.md
│ └── example.nomad
├── microservice/
│ └── example.nomad
├── minecraft/
│ ├── minecraft.nomad
│ ├── minecraft_exec.nomad
│ └── plugin.nomad
├── monitoring/
│ └── sensu/
│ ├── fabio-docker.nomad
│ └── sensu.nomad
├── nginx-fabio-clone/
│ ├── README.md
│ ├── bar-service.nomad
│ ├── e.ct
│ ├── e.out
│ ├── example.nomad
│ ├── foo-service.nomad
│ ├── tj.ct
│ └── tj.out
├── oom/
│ └── example.nomad
├── output.html
├── parameterized/
│ ├── README.md
│ ├── docker_hello_world/
│ │ └── hello-world.nomad
│ ├── template.nomad
│ └── to_specific_client/
│ ├── example.nomad
│ └── workaround/
│ ├── README.md
│ ├── example.nomad
│ ├── rolling_run.sh
│ └── watch.py
├── ports/
│ ├── README.md
│ └── example.nomad
├── preserve_state/
│ ├── bar-service.jsonjob
│ ├── example.jsonjob
│ ├── fabio.jsonjob
│ ├── foo-service.jsonjob
│ ├── hashi-ui.jsonjob
│ ├── jam.sh
│ ├── nomad_debug
│ └── preserve.sh
├── qemu/
│ ├── README.md
│ ├── hass/
│ │ └── hass.nomad
│ ├── imagebuilder/
│ │ ├── Core-current.iso
│ │ ├── Dockerfile
│ │ ├── NOTES.md
│ │ └── core-image.qcow2
│ ├── job.json
│ ├── tc.qcow2
│ ├── tc_ssh.nomad
│ ├── tc_ssh2.nomad
│ ├── tc_ssh_arm.nomad
│ └── tinycore.qcow2
├── raw_exec/
│ ├── env.nomad
│ ├── mkdir/
│ │ ├── README.md
│ │ ├── mkdir-bash.nomad
│ │ └── mkdir.nomad
│ ├── ps.nomad
│ ├── quoted_args/
│ │ ├── quoted_args.nomad
│ │ └── quoted_args_2.nomad
│ └── user/
│ └── example.nomad
├── reproductions/
│ └── cpu_rescheduling/
│ ├── README.md
│ └── repro.nomad
├── reschedule/
│ └── ex.nomad
├── restart/
│ └── restart.nomad
├── rolling_upgrade/
│ ├── README.md
│ ├── cv-new.nomad
│ ├── cv.nomad
│ ├── example-new.nomad
│ └── example.nomad
├── sentinel/
│ ├── README.md
│ ├── alwaysFalse.sentinel
│ ├── example.nomad
│ ├── exampleGroupMissingNodeClass.nomad
│ ├── exampleGroupNodeClass.nomad
│ ├── exampleJobNodeClass.nomad
│ ├── exampleNoNodeClass.nomad
│ ├── payload.json
│ └── requireNodeClass.sentinel
├── server-variables/
│ ├── README.md
│ ├── build-site.nomad
│ ├── nginx.nomad
│ ├── reset.sh
│ ├── wordpress-db.nomad
│ └── wordpress.nomad
├── sleepy/
│ ├── README.md
│ ├── sleepy_bash/
│ │ └── sleepy.nomad
│ └── sleepy_python/
│ ├── README.md
│ ├── batch_sleepy_python.nomad
│ └── sleepy_python.nomad
├── spread/
│ ├── example.nomad
│ ├── scheduler.json
│ └── scheduler_b.json
├── stress/
│ ├── README.md
│ └── cpu_throttled_time/
│ ├── README.md
│ └── stress.nomad
├── super_big/
│ ├── README.md
│ ├── super_big.nomad
│ └── super_big2.nomad
├── system_jobs/
│ ├── sleepy/
│ │ ├── README.md
│ │ ├── sleepy_bash/
│ │ │ └── sleepy.nomad
│ │ └── sleepy_python/
│ │ ├── README.md
│ │ ├── batch_sleepy_python.nomad
│ │ └── sleepy_python.nomad
│ ├── system_deployment/
│ │ ├── deploy_jdk.nomad
│ │ ├── fabio-system.nomad
│ │ ├── fabio-system.nomad2
│ │ ├── foo-system.nomad
│ │ └── foo-system.nomad2
│ └── system_filter/
│ ├── filtered.nomad
│ └── host_vol.nomad
├── task_deps/
│ ├── consul-lock/
│ │ └── myapp.nomad
│ ├── disk_check/
│ │ ├── README.md
│ │ └── disk.nomad
│ ├── init_artifact/
│ │ ├── README.md
│ │ ├── batch-init-artifact.nomad
│ │ └── service-init-artifact.nomad
│ ├── interjob/
│ │ ├── README.md
│ │ ├── myapp.nomad
│ │ └── myservice.nomad
│ ├── k8sdoc/
│ │ ├── README.md
│ │ ├── init.nomad
│ │ ├── k8sdoc1.nomad
│ │ ├── myapp.nomad
│ │ └── myservice.nomad
│ └── sidecar/
│ └── example.nomad
├── template/
│ ├── batch/
│ │ ├── README.md
│ │ ├── context.nomad
│ │ ├── parameter.nomad
│ │ ├── services.nomad
│ │ └── template.nomad
│ ├── from_consul/
│ │ ├── README.md
│ │ ├── artifact.nomad
│ │ ├── init.nomad
│ │ └── issue.nomad
│ ├── learning/
│ │ └── README.md
│ ├── rerender/
│ │ └── example.nomad
│ ├── secure_variables/
│ │ ├── README.md
│ │ ├── example.nomad
│ │ ├── interpolated_job/
│ │ │ ├── README.md
│ │ │ ├── interpolated_job.hcl
│ │ │ └── makeJobVars.sh
│ │ ├── makeJobVars.sh
│ │ ├── makeVars.sh
│ │ ├── multiregion/
│ │ │ ├── start.sh
│ │ │ ├── stop.sh
│ │ │ ├── template.nomad
│ │ │ ├── test.out
│ │ │ └── test.tmpl
│ │ ├── template copy.tmpl
│ │ ├── template-playground.nomad
│ │ ├── template.html
│ │ ├── template.tmpl
│ │ ├── variable_view.nomad
│ │ └── write/
│ │ ├── t0.out
│ │ ├── t0.tmpl
│ │ ├── t1.out
│ │ ├── t1.tmpl
│ │ ├── t2.out
│ │ └── t2.tmpl
│ ├── services/
│ │ ├── README.md
│ │ └── byTag.nomad
│ ├── template-system/
│ │ ├── README.md
│ │ ├── composed_keys.nomad
│ │ ├── services-on-nomad-client.nomad
│ │ └── template.nomad
│ ├── template_handoff/
│ │ ├── README.md
│ │ ├── handoff.nomad
│ │ └── handoff_restart.nomad
│ ├── template_into_docker/
│ │ └── example.nomad
│ ├── template_playground/
│ │ ├── composed_keys.nomad
│ │ ├── template-exec.nomad
│ │ ├── template-hcl2.nomad
│ │ └── template.nomad
│ └── use_whitespace/
│ └── byTag.nomad
├── test.sh
├── vault/
│ ├── deleted_policy/
│ │ ├── README.md
│ │ ├── break_it.sh
│ │ ├── nomad-cluster-role.broken.json
│ │ ├── nomad-cluster-role.json
│ │ ├── nomad-server-policy.hcl
│ │ ├── setup.sh
│ │ ├── temp1.nomad
│ │ └── workload.nomad
│ ├── pki/
│ │ ├── README.md
│ │ ├── sleepy_bash_pki.nomad
│ │ └── test.nomad
│ └── sleepy_vault_bash/
│ ├── sleepy_bash.nomad
│ └── test.nomad
├── vault_reload_triggered_by_consul/
│ ├── README.md
│ ├── SleepyEcho.sh
│ └── sample.nomad
├── victoriametrics/
│ └── vm.nomad
├── win_rawexec_restart/
│ ├── SleepyEcho.ps1
│ └── artifact_sleepyecho.nomad
└── windows_docker/
├── docker-iis.nomad
└── windows-test.nomad
================================================
FILE CONTENTS
================================================
================================================
FILE: .envrc
================================================
echo "Processing .direnv..."
function template {
  echo "Creating a skeleton tutorial in $1."
  mkdir -p "$1"
  cp "$(pwd)/guide/TUTORIAL_TEMPLATE.mdx" "$1/README.md"
}
echo "Done."
================================================
FILE: .gitignore
================================================
.DS_Store
================================================
FILE: HCL2/add_local_file/README.md
================================================
# Include a Local File at Job Runtime
You can use the HCL2 file function and a runtime variable to include a file in
your Nomad jobs. **These files should be small because they are stored in the
Nomad server state until the job is eligible for garbage collection.**
## Techniques
### Use the HCL2 file() function
- [`use_file.nomad`] — demonstrates the file function. This allows you to include
a template to be rendered.
### Wrap included files
Nomad will inject the file content into the template stanza directly and it
will be rendered by the client. You might want to prevent Nomad from seeing
the content as renderable. There are a few techniques that you can use for
this.
- [`raw_file_delims.nomad`] — Uses alternative delimiters for the template
stanza. These delimiter characters must never appear in the included file
content. You can use unusual characters, such as emoji, as delimiters
because of Go's Unicode support.
- [`raw_file_json.nomad`] — JSON encodes the file and uses the Nomad template
engine to decode it on the client. The input file must not contain the default
template delimiters (`{{` and `}}`) or you must redefine them because they are
not escaped.
<details><summary>You can even use emoji, depending on OS support.</summary>

</details>
- [`raw_file_b64.nomad`] — demonstrates using base64 as a means to wrap your
included file so that it is only unwrapped on the destination client.
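When the file you include with the JSON technique does contain `{{` or `}}`, one option is to combine the JSON encoding with redefined delimiters. A sketch (the `«`/`»` delimiter characters are an arbitrary choice, not taken from the example jobs):

```hcl
template {
  destination     = "local/file.out"
  left_delimiter  = "«"
  right_delimiter = "»"
  # jsonencode runs at HCL parse time; jsonDecode runs in the client's
  # template engine, so the default {{ }} in the file are never parsed.
  data            = "«jsonDecode \"${jsonencode(file(var.input_file))}\"»"
}
```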
## Explore
This directory contains a test file you can use named `input.file`, or you can
supply your own file to include.
### Run the job
The jobs all define an input variable named `input_file`. You must supply the
path to the file to include, either as an environment variable or as a
command-line flag.
#### Environment variable
```
export NOMAD_VAR_input_file=./input.file
nomad job run use_file.nomad
```
#### Flag
```
nomad job run -var "input_file=./input.file" use_file.nomad
```
### Inspect the job
Run the `nomad job inspect` command to see how the JSON job specification
represents the job. Some techniques leave the file contents clearly readable
and some obscure them completely.
```
nomad job inspect use_file.nomad
```
### Get the logs from the allocation
Get the allocation ID from the output of the `nomad job run` command and fetch
the logs.
```
nomad alloc logs «alloc_id»
```
### Stop the job
```
nomad job stop use_file.nomad
```
## About the job
The job contains one task. Nomad renders the `template` stanza's content—the
included file—into the task's `local` directory. It then starts an
`alpine:latest` container that runs `cat` on the rendered file and sleeps
until stopped. The task uses Nomad's Docker task driver to download and run
the Alpine image.
[`use_file.nomad`]: ./use_file.nomad
[`raw_file_delims.nomad`]: ./raw_file_delims.nomad
[`raw_file_json.nomad`]: ./raw_file_json.nomad
[`raw_file_b64.nomad`]: ./raw_file_b64.nomad
================================================
FILE: HCL2/add_local_file/input.file
================================================
This is the input file content
Particularly evil stuff:
Single quotes: 'hello'
Double quotes: "howdy"
Go-template: {{ "hello" }}
Backticks: `this is a raw-string in go, but raw strings can't be in rawstrings`
JSON:
{
"object": {
"foo": true,
"bar": 5,
"baz": [1,2,3]
}
}
================================================
FILE: HCL2/add_local_file/raw_file_b64.nomad
================================================
variable "input_file" {
type = string
description = "local path to the file to inject into the job."
}
job "raw_file_b64.nomad" {
datacenters = ["dc1"]
group "services" {
task "alpine" {
driver = "docker"
config {
image = "alpine"
command = "sh"
args = [
"-c",
"cat local/file.out; while true; do sleep 30; done",
]
}
template {
destination = "local/file.out"
data = "{{base64Decode \"${base64encode(file(var.input_file))}\"}}"
}
}
}
}
================================================
FILE: HCL2/add_local_file/raw_file_delims.nomad
================================================
variable "input_file" {
type = string
description = "local path to the file to inject into the job."
}
job "raw_file_delims.nomad" {
datacenters = ["dc1"]
group "services" {
task "alpine" {
driver = "docker"
config {
image = "alpine"
command = "sh"
args = [
"-c",
"cat local/file.out; while true; do sleep 30; done",
]
}
template {
destination = "local/file.out"
data = file(var.input_file)
left_delimiter = "🚫"
right_delimiter = "🚫"
}
}
}
}
================================================
FILE: HCL2/add_local_file/raw_file_json.nomad
================================================
variable "input_file" {
type = string
description = "local path to the file to inject into the job."
}
job "raw_file_json.nomad" {
datacenters = ["dc1"]
group "services" {
task "alpine" {
driver = "docker"
config {
image = "alpine"
command = "sh"
args = [
"-c",
"cat local/file.out; while true; do sleep 30; done",
]
}
template {
destination = "local/file.out"
data = "{{jsonDecode \"${jsonencode(file(var.input_file))}\"}}"
}
}
}
}
================================================
FILE: HCL2/add_local_file/use_file.nomad
================================================
variable "input_file" {
type = string
description = "local path to the file to inject into the job."
}
job "use_file.nomad" {
datacenters = ["dc1"]
group "services" {
task "alpine" {
driver = "docker"
config {
image = "alpine"
command = "sh"
args = [
"-c",
"cat local/file.out; while true; do sleep 30; done",
]
}
template {
destination = "local/file.out"
data = file(var.input_file)
}
}
}
}
================================================
FILE: HCL2/always_change/README.md
================================================
# Use HCL2 to make re-runnable batch jobs
Nomad will refuse to run a batch job again unless it detects a change to the job.
This behavior exists to prevent duplicate job submissions from creating unnecessary
work—unchanged jobs are "the same job" to Nomad. A Nomad job's `meta` stanza is
an ideal place to make changes to a Nomad job that do not change the behavior of
the job itself. Some ways to provide variation in a meta value are using an HCL2
variable or the `uuidv4()` function.
- [`before.nomad`]—Demonstrates the normal behavior.
- [`uuid.nomad`]—Use a random UUID to change the job every time it's run. This
guarantees that Nomad will always run the submitted job.
- [`variable.nomad`]—Submit a variable at runtime. This can preserve the single
run behavior in cases where the job submission is a duplicate.
## Nomad's default behavior
Run the `before.nomad` job. Nomad will start a copy of the `hello-world:latest`
docker container. This container outputs some text and exits.
```text
$ nomad run before.nomad
==> Monitoring evaluation "1fef4d80"
Evaluation triggered by job "before.nomad"
==> Monitoring evaluation "1fef4d80"
Allocation "7e6a767b" created: node "14ab9290", group "before"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "1fef4d80" finished with status "complete"
```
Check the status of the allocation created by the run command.
```text
$ nomad alloc status 7e6
ID = 7e6a767b-5604-5268-653b-905948928de5
Eval ID = 1fef4d80
Name = before.nomad.before[0]
Node ID = 14ab9290
Node Name = nomad-client-2.node.consul
Job ID = before.nomad
Job Version = 0
Client Status = complete
Client Description = All tasks have completed
Desired Status = run
Desired Description = <none>
Created = 6m55s ago
Modified = 6m45s ago
Task "hello-world" is "dead"
Task Resources
CPU Memory Disk Addresses
100 MHz 300 MiB 300 MiB
Task Events:
Started At = 2021-05-18T18:03:10Z
Finished At = 2021-05-18T18:03:10Z
Total Restarts = 0
Last Restart = N/A
Recent Events:
Time Type Description
2021-05-18T14:03:10-04:00 Terminated Exit Code: 0
2021-05-18T14:03:10-04:00 Started Task started by client
2021-05-18T14:03:01-04:00 Driver Downloading image
2021-05-18T14:03:01-04:00 Task Setup Building Task Directory
2021-05-18T14:03:01-04:00 Received Task received by client
```
As expected, the Docker container finished and exited with exit code 0.
Check the status of the job to verify that its status is `dead`.
```text
$ nomad status
ID Type Priority Status Submit Date
before.nomad batch 50 dead 2021-05-18T14:03:00-04:00
```
Try running the `before.nomad` job again.
```text
$ nomad run before.nomad
==> Monitoring evaluation "a855fa2b"
Evaluation triggered by job "before.nomad"
==> Monitoring evaluation "a855fa2b"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "a855fa2b" finished with status "complete"
```
Note that this time, Nomad did not schedule an allocation and the
job remains dead. This is expected and is a safety feature that
prevents duplicate submissions of the same job from creating
unnecessary work.
If your job should always run, you can use one of the following
techniques to inject variation in ways that don't require you
to alter the job file's contents.
## Techniques
### Use a UUID as an ever-changing value
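The technique itself is just a `meta` value computed at parse time. A minimal sketch (job and group names here are illustrative; the repository's `uuid.nomad` is the runnable version):

```hcl
job "rerun-me" {
  datacenters = ["dc1"]
  type        = "batch"

  # uuidv4() is evaluated each time the job file is parsed, so every
  # submission differs from the registered version and always runs.
  meta {
    run_uuid = "${uuidv4()}"
  }

  group "work" {
    task "hello-world" {
      driver = "docker"
      config {
        image = "hello-world:latest"
      }
    }
  }
}
```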
```text
$ nomad run uuid.nomad
==> Monitoring evaluation "27fe0c84"
Evaluation triggered by job "uuid.nomad"
==> Monitoring evaluation "27fe0c84"
Allocation "6de97aa7" created: node "14ab9290", group "uuid"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "27fe0c84" finished with status "complete"
```
```text
$ nomad alloc status 6de
ID = 6de97aa7-e6b1-c6bf-e8e0-16d5f7ed39bf
Eval ID = 27fe0c84
Name = uuid.nomad.uuid[0]
Node ID = 14ab9290
Node Name = nomad-client-2.node.consul
Job ID = uuid.nomad
Job Version = 0
Client Status = complete
Client Description = All tasks have completed
Desired Status = run
Desired Description = <none>
Created = 6m52s ago
Modified = 6m50s ago
Task "hello-world" is "dead"
Task Resources
CPU Memory Disk Addresses
100 MHz 300 MiB 300 MiB
Task Events:
Started At = 2021-05-18T18:07:33Z
Finished At = 2021-05-18T18:07:33Z
Total Restarts = 0
Last Restart = N/A
Recent Events:
Time Type Description
2021-05-18T14:07:33-04:00 Terminated Exit Code: 0
2021-05-18T14:07:33-04:00 Started Task started by client
2021-05-18T14:07:31-04:00 Driver Downloading image
2021-05-18T14:07:31-04:00 Task Setup Building Task Directory
2021-05-18T14:07:31-04:00 Received Task received by client
```
```text
$ nomad status
ID Type Priority Status Submit Date
uuid.nomad batch 50 dead 2021-05-18T14:07:30-04:00
before.nomad batch 50 dead 2021-05-18T14:03:00-04:00
```
```text
$ nomad run uuid.nomad
==> Monitoring evaluation "2943fe82"
Evaluation triggered by job "uuid.nomad"
Allocation "61f5861a" created: node "f7bc1f2d", group "uuid"
==> Monitoring evaluation "2943fe82"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "2943fe82" finished with status "complete"
```
### Use an HCL2 variable
Using a variable lets you keep Nomad's default behavior of not re-running
unchanged work while still letting you change the job without a round trip
to source control.
```text
$ nomad run -var run_index=1 variable.nomad
==> Monitoring evaluation "454f6fb4"
Evaluation triggered by job "variable.nomad"
==> Monitoring evaluation "454f6fb4"
Allocation "74f9cbf5" created: node "f7bc1f2d", group "variable"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "454f6fb4" finished with status "complete"
```
```text
$ nomad alloc status 74f
ID = 74f9cbf5-a793-5022-c831-b83e31712725
Eval ID = 454f6fb4
Name = variable.nomad.variable[0]
Node ID = f7bc1f2d
Node Name = nomad-client-1.node.consul
Job ID = variable.nomad
Job Version = 0
Client Status = complete
Client Description = All tasks have completed
Desired Status = run
Desired Description = <none>
Created = 6m52s ago
Modified = 6m48s ago
Task "hello-world" is "dead"
Task Resources
CPU Memory Disk Addresses
100 MHz 300 MiB 300 MiB
Task Events:
Started At = 2021-05-18T18:21:27Z
Finished At = 2021-05-18T18:21:27Z
Total Restarts = 0
Last Restart = N/A
Recent Events:
Time Type Description
2021-05-18T14:21:27-04:00 Terminated Exit Code: 0
2021-05-18T14:21:27-04:00 Started Task started by client
2021-05-18T14:21:24-04:00 Driver Downloading image
2021-05-18T14:21:24-04:00 Task Setup Building Task Directory
2021-05-18T14:21:24-04:00 Received Task received by client
```
```text
$ nomad status
ID Type Priority Status Submit Date
variable.nomad batch 50 dead 2021-05-18T14:21:23-04:00
```
Resubmit the job with the same `run_index` value—`1`.
```text
$ nomad run -var run_index=1 variable.nomad
==> Monitoring evaluation "4d7064ea"
Evaluation triggered by job "variable.nomad"
==> Monitoring evaluation "4d7064ea"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "4d7064ea" finished with status "complete"
```
Note that Nomad does not re-run the job. Now, change the
`run_index` value to `2` and run the command again.
```text
$ nomad run -var run_index=2 variable.nomad
==> Monitoring evaluation "73e7902f"
Evaluation triggered by job "variable.nomad"
==> Monitoring evaluation "73e7902f"
Allocation "9e8cbc58" created: node "f7bc1f2d", group "variable"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "73e7902f" finished with status "complete"
```
Nomad runs a fresh allocation of the batch job.
## Clean up
Run `nomad job stop variable.nomad` to stop the job.
[`before.nomad`]: ./before.nomad
[`uuid.nomad`]: ./uuid.nomad
[`variable.nomad`]: ./variable.nomad
================================================
FILE: HCL2/always_change/before.nomad
================================================
job "before.nomad" {
datacenters = ["dc1"]
type = "batch"
group "before" {
task "hello-world" {
driver = "docker"
config {
image = "hello-world:latest"
}
}
}
}
================================================
FILE: HCL2/always_change/uuid.nomad
================================================
job "uuid.nomad" {
datacenters = ["dc1"]
type = "batch"
meta {
run_uuid = "${uuidv4()}"
}
group "uuid" {
task "hello-world" {
driver = "docker"
config {
image = "hello-world:latest"
}
}
}
}
================================================
FILE: HCL2/always_change/variable.nomad
================================================
job "variable.nomad" {
datacenters = ["dc1"]
type = "batch"
meta {
run_index = "${floor(var.run_index)}"
}
group "variable" {
task "hello-world" {
driver = "docker"
config {
image = "hello-world:latest"
}
}
}
}
variable "run_index" {
type = number
description = "An integer that, when changed from the current value, causes the job to run again."
validation {
condition = var.run_index == floor(var.run_index)
error_message = "The run_index must be an integer."
}
}
================================================
FILE: HCL2/dynamic/README.md
================================================
# HCL2 dynamic blocks
This job specification uses HCL2 `dynamic` blocks together with HCL2
variables and local values to create a multi-group, multi-task job.
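As a minimal sketch of the pattern (names are illustrative, not taken from `example.nomad`), a `dynamic "group"` block stamps out one group per entry in a map, with `labels` naming each generated block and `content` holding its body:

```hcl
locals {
  targets = {
    "blue"  = "redis:7"
    "green" = "redis:latest"
  }
}

job "dyn-sketch" {
  datacenters = ["dc1"]

  # One group is generated per key/value pair in local.targets.
  dynamic "group" {
    for_each = local.targets
    labels   = [group.key]

    content {
      task "server" {
        driver = "docker"
        config {
          image = group.value
        }
      }
    }
  }
}
```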
================================================
FILE: HCL2/dynamic/example.nomad
================================================
variable "job_name" {
type = string
default = ""
}
locals {
targets = {
"1": "zpool"
"2": "zmirror"
}
tasks = {
"redis": {"name":"db","port":6379}
}
docker_versions = {
"zpool": "redis:7"
"zmirror": "redis:latest"
}
job_name = "%{ if var.job_name != "" }${var.job_name}%{ else }example%{ endif }"
}
job "example" {
name = local.job_name
datacenters = ["dc1"]
dynamic "group" {
for_each = local.targets
labels = ["${local.job_name}-${group.value}"]
content {
network {
dynamic "port" {
labels = ["${local.job_name}-${group.value}-${port.key}-${port.value.name}-${port.value.port}"]
for_each = local.tasks
content {
to = port.value.port
}
}
}
dynamic "task" {
labels = ["${local.job_name}-${group.value}-${task.key}"]
for_each = local.tasks
content {
driver = "docker"
config {
image = local.docker_versions[group.value]
ports = ["${local.job_name}-${group.value}-${task.key}-${task.value.name}-${task.value.port}"]
}
}
}
}
}
}
================================================
FILE: HCL2/object_to_template/README.md
================================================
================================================
FILE: HCL2/object_to_template/example.nomad
================================================
variable "datacenters" {
type = list(string)
default = ["dc1"]
}
variable "ports" {
type = list(object({
name = string
internal = number
external = number
}))
default = [
{
name = "db"
internal = 8300
external = 8300
},
{
name = "db2"
internal = 8301
external = 8301
}
]
}
job "example" {
datacenters = var.datacenters
type = "batch"
group "group" {
task "task" {
driver = "exec"
config {
command = "bash"
args = ["-c", "cat template.out"]
}
template {
destination = "template.out"
data = <<EOT
{{ $ports := parseJSON `${jsonencode(var.ports)}` }}
{{range $ports}}{{.name}}:{{.external}}->{{.internal}}{{println}}{{end}}
EOT
}
}
}
}
================================================
FILE: HCL2/variable_jobs/README.md
================================================
# Using HCL2 to add variables to Nomad jobs
Nomad's HCL2 support enables you to use variables in your Nomad job specifications.
This can decrease the number of job files you have to maintain in source control
and can encourage job reuse.
This example contains a job that consumes HCL2 variables and uses them to generate
a Docker service job.
The `job.nomad` file defines 3 variables:
- `datacenters` (default `[ "dc1" ]`)—a list of the Nomad datacenters to run
the job in.
- `docker_image`—the Docker image name to run. Since this is a service job,
the image needs to run until explicitly stopped. The `redis` container is a
small example that works well.
- `image_version`—the specific version of the `docker_image` image to run. For
the `redis` container, try versions like `"3"`, `"4"`, and `"latest"`.
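The Quickstart flags below can also live in a variable file passed with `-var-file`. A minimal sketch, using a hypothetical `redis.vars` file:

```hcl
# redis.vars — hypothetical variable file for job.nomad
docker_image  = "redis"
image_version = "3"
```

Run it with `nomad job run -var-file=redis.vars job.nomad`.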
## Quickstart
### Run the example
```bash
nomad job run -var docker_image="redis" -var image_version="3" job.nomad
```
Nomad will start a `redis:3` container
```bash
nomad job run -var docker_image="redis" -var image_version="latest" job.nomad
```
Nomad will stop the `redis:3` container and start a `redis:latest` container.
## Stop the examples
```bash
nomad job stop job
```
## Submitting variable values
There are three ways to provide values for HCL2 variables.
- Individual `-var` flags
- With a variable file and the `-var-file` flag
- Environment variables
You can use any or all of these methods in the same call. Flags override values
from the environment and are parsed in the order they appear on the command line.
Precedence (highest to lowest):
- `-var` flag (if a variable repeats, the last one in the command line wins)
- `-var-file` flag (if a variable repeats in the files, the last one listed in the command line wins)
- environment variables
### Environment variables
To provide a value to the HCL2 engine via the environment, you need to create
an environment variable named `NOMAD_VAR_«variable name»`. For example, to
set the value of the `docker_image` variable, create an environment variable
named `NOMAD_VAR_docker_image`.
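For example (hypothetical values; the suffix after `NOMAD_VAR_` must match the variable name in the job file exactly):

```shell
# Simple string value for the docker_image variable.
export NOMAD_VAR_docker_image="redis"
# Complex types (lists, maps) are written as HCL expressions inside the string.
export NOMAD_VAR_datacenters='["dc1", "dc2"]'

echo "$NOMAD_VAR_docker_image"   # → redis
```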
## Using variable files with multiple jobs
The HCL2 engine expects every variable that you supply using the `-var` or
`-var-file` flags to be consumed by the job specification.
Here are some techniques to work around this constraint:
- [Provide HCL2 variable values using environment variables](./env-vars)
- [Use multiple `-var-file` flags](./multiple-var-files)
- [Decode the contents of an external file into a `local` variable](./decode-external-file)
================================================
FILE: HCL2/variable_jobs/decode-external-file/README.MD
================================================
# Decode the contents of an external file into a `local` variable
The HCL2 `file` function when paired with the `jsondecode` or `yamldecode` function enables you to externalize shared configuration elements for Nomad jobs to a JSON or YAML file.
This example contains two jobs that read the `env.json` file and use values from it to configure the Nomad job during submission from the CLI.
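The same pattern works for YAML configuration by swapping `jsondecode` for `yamldecode`. A minimal sketch, assuming a hypothetical `env.yaml` with the same keys:

```hcl
variable "config_file" {
  type        = string
  description = "Path to YAML formatted shared job configuration."
}

locals {
  # yamldecode turns the file contents into an HCL object, so
  # local.config.datacenters works exactly as it does with JSON.
  config = yamldecode(file(var.config_file))
}
```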
## Run the examples
```bash
nomad job run -var="config_file=env.json" job1.nomad
```
Nomad will start a Redis 3 container
```bash
nomad job run -var="config_file=env.json" job2.nomad
```
Nomad will start a Redis 4 container
## Stop the examples
```bash
nomad job stop job1
nomad job stop job2
```
================================================
FILE: HCL2/variable_jobs/decode-external-file/env.json
================================================
{
"datacenters": [
"dc1"
],
"docker_image_job1": "redis:3",
"docker_image_job2": "redis:4"
}
================================================
FILE: HCL2/variable_jobs/decode-external-file/job1.nomad
================================================
#----------------------------------------------------------------------------
# This value can be supplied as a flag to nomad job run.
# `nomad job run -var config_file=«path to config» job1.nomad`
# or as an environment variable
# `export NOMAD_VAR_config_file=«path to config»`
# `nomad job run job1.nomad`
#----------------------------------------------------------------------------
variable "config_file" {
type = string
description = "Path to JSON formatted shared job configuration."
}
locals {
config = jsondecode(file(var.config_file))
}
job "job1" {
datacenters = local.config.datacenters
group "job1" {
task "job1" {
driver = "docker"
config {
image = local.config.docker_image_job1
}
}
}
}
================================================
FILE: HCL2/variable_jobs/decode-external-file/job2.nomad
================================================
#----------------------------------------------------------------------------
# This value can be supplied as a flag to nomad job run.
# `nomad job run -var config_file=«path to config» job2.nomad`
# or as an environment variable
# `export NOMAD_VAR_config_file=«path to config»`
# `nomad job run job2.nomad`
#----------------------------------------------------------------------------
variable "config_file" {
type = string
description = "Path to JSON formatted shared job configuration."
}
locals {
config = jsondecode(file(var.config_file))
}
job "job2" {
datacenters = local.config.datacenters
group "job2" {
task "job2" {
driver = "docker"
config {
image = local.config.docker_image_job2
}
}
}
}
================================================
FILE: HCL2/variable_jobs/env-vars/README.MD
================================================
# Provide HCL2 variable values using environment variables
This example contains two jobs that read HCL2 variable values from the
environment and populate the Nomad job with them during submission from the
CLI. This can be a very powerful feature when paired with [`direnv`],
[`envconsul`], and other tools that can manipulate environment variables.
## Run the sample
### Read in the environment variables
```bash
source ./env.vars
```
```bash
nomad job run job1.nomad
```
Nomad will start a Redis 3 container
```bash
nomad job run job2.nomad
```
Nomad will start a Redis 4 container
## Stop the example
```bash
nomad job stop job1
nomad job stop job2
unset NOMAD_VAR_datacenters \
NOMAD_VAR_docker_image_job1 \
NOMAD_VAR_docker_image_job2
```
[`envconsul`]: https://github.com/hashicorp/envconsul
[`direnv`]: https://direnv.net/
================================================
FILE: HCL2/variable_jobs/env-vars/env.vars
================================================
export NOMAD_VAR_datacenters='["dc1"]'
export NOMAD_VAR_docker_image_job1="redis:3"
export NOMAD_VAR_docker_image_job2="redis:4"
================================================
FILE: HCL2/variable_jobs/env-vars/job1.nomad
================================================
variable "datacenters" {
type = list(string)
description = "Path to JSON formatted shared job configuration."
}
variable "docker_image_job1" {
type = string
description = "Image for job1 to run"
}
job "job1" {
datacenters = var.datacenters
group "job1" {
task "job1" {
driver = "docker"
config {
image = var.docker_image_job1
}
}
}
}
================================================
FILE: HCL2/variable_jobs/env-vars/job2.nomad
================================================
variable "datacenters" {
type = list(string)
description = "Path to JSON formatted shared job configuration."
}
variable "docker_image_job2" {
type = string
description = "Image for job2 to run"
}
job "job2" {
datacenters = var.datacenters
group "job2" {
task "job2" {
driver = "docker"
config {
image = var.docker_image_job2
}
}
}
}
================================================
FILE: HCL2/variable_jobs/job.nomad
================================================
variable "datacenters" {
type = list(string)
description = "List of Nomad datacenters to run the job in. Defaults to `[\"dc1\"]`"
default = ["dc1"]
}
variable "docker_image" {
type = string
description = "Docker image for the job to run"
}
variable "image_version" {
type = string
description = "Version of the docker image to run"
}
job "job1" {
datacenters = var.datacenters
group "job1" {
task "job1" {
driver = "docker"
config {
image = "${var.docker_image}:${var.image_version}"
}
}
}
}
================================================
FILE: HCL2/variable_jobs/job.vars
================================================
image_version = "99"
================================================
FILE: HCL2/variable_jobs/multiple-var-files/README.MD
================================================
# Provide HCL2 variable values using multiple variable files
This example contains three jobs that consume multiple HCL2 variable files and
populate the Nomad job with them during submission from the CLI.
The `shared.vars` file defines 2 variables:
- `datacenters = [ "dc1" ]`
- `docker_image = "redis"`
The per-job `.vars` files set the `image_version_«job name»` value to complete
the job specification.
## Run the examples
```bash
nomad job run -var-file=./shared.vars -var-file=./job1.vars job1.nomad
```
Nomad will start a Redis 3 container
```bash
nomad job run -var-file=./shared.vars -var-file=./job2.vars job2.nomad
```
Nomad will start a Redis 4 container
```bash
nomad job run -var-file=./shared.vars -var-file=./job3.vars job3.nomad
```
Nomad will start a `hello-world:latest` container by overriding `docker_image` from
the `./shared.vars` file.
## Stop the examples
```bash
nomad job stop job1
nomad job stop job2
nomad job stop job3
```
================================================
FILE: HCL2/variable_jobs/multiple-var-files/job1.nomad
================================================
variable "datacenters" {
type = list(string)
description = "Path to JSON formatted shared job configuration."
}
variable "docker_image" {
type = string
description = "Shared docker image"
}
variable "image_version_job1" {
type = string
description = "Docker image version to run for job1"
}
job "job1" {
datacenters = var.datacenters
group "job1" {
task "job1" {
driver = "docker"
config {
image = "${var.docker_image}:${var.image_version_job1}"
}
}
}
}
================================================
FILE: HCL2/variable_jobs/multiple-var-files/job1.vars
================================================
image_version_job1 = "3"
================================================
FILE: HCL2/variable_jobs/multiple-var-files/job2.nomad
================================================
variable "datacenters" {
type = list(string)
description = "Path to JSON formatted shared job configuration."
}
variable "docker_image" {
type = string
description = "Shared docker image"
}
variable "image_version_job2" {
type = string
description = "Docker image version to run for job2"
}
job "job2" {
datacenters = var.datacenters
group "job2" {
task "job2" {
driver = "docker"
config {
image = "${var.docker_image}:${var.image_version_job2}"
}
}
}
}
================================================
FILE: HCL2/variable_jobs/multiple-var-files/job2.vars
================================================
image_version_job2 = "4"
================================================
FILE: HCL2/variable_jobs/multiple-var-files/job3.nomad
================================================
variable "datacenters" {
type = list(string)
description = "Path to JSON formatted shared job configuration."
}
variable "docker_image" {
type = string
description = "Shared docker image"
}
variable "image_version_job3" {
type = string
description = "Docker image version to run for job3"
}
job "job3" {
datacenters = var.datacenters
group "job3" {
task "job3" {
driver = "docker"
config {
image = "${var.docker_image}:${var.image_version_job3}"
}
}
}
}
================================================
FILE: HCL2/variable_jobs/multiple-var-files/job3.vars
================================================
docker_image = "hello-world"
image_version_job3 = "latest"
================================================
FILE: HCL2/variable_jobs/multiple-var-files/shared.vars
================================================
datacenters = [ "dc1" ]
docker_image = "redis"
================================================
FILE: README.md
================================================
# Nomad Example Jobs
This repository holds jobs and job skeletons that I have used to create
reproducers or minimum viable cases. I also use them as simple workloads when
creating guides.
Some specifically useful bits:
- **csi** - Example jobs that use CSI to connect to external resources such as
block devices.
- **fabio** - Several different fabio configurations that can be used to spin up
consul-aware load balancing in your Nomad cluster.
- **sleepy** - Jobs that do a thing and then sleep (perhaps redoing the thing
when they wake up).
- **template_playground** - a batch job that can be used to practice iterative
template development.
================================================
FILE: alloc_folder/mount_alloc.nomad
================================================
job "alloc_folder" {
datacenters = ["dc1"]
group "group" {
task "docker" {
driver = "docker"
config {
image = "busybox:latest"
command = "sh"
args = ["-c", "while true; do echo $(date) | tee -a /my_data/output.txt; sleep 2; done"]
volumes = ["alloc/data:/my_data"]
}
resources {
cpu = 100
memory = 100
}
}
}
}
================================================
FILE: alloc_folder/sidecar.nomad
================================================
job "alloc_folder" {
datacenters = ["dc1"]
group "group" {
task "docker" {
driver = "docker"
config {
image = "busybox:latest"
command = "sh"
args = ["-c", "while true; do echo $(date) | tee -a /alloc/output.txt; sleep 2; done"]
}
resources {
cpu = 100
memory = 100
}
}
task "exec" {
driver = "exec"
config {
command = "tail"
args = ["-f", "/alloc/output.txt"]
}
resources {
cpu = 100
memory = 100
}
}
}
}
================================================
FILE: applications/artifactory_oss/README.md
================================================
# Artifactory OSS Registry
This job uses Nomad Host Volumes to provide an internal Artifactory OSS
instance which can be used to host private artifacts for a Nomad cluster.
## Prerequisites
- **Consul** - This job leverages Consul service registrations for locating the registry
instances.
## Necessary configuration
### Create the host volume in the configuration
Create a folder on one of your Nomad clients to host your registry files. This
example uses `/opt/volumes/artifactory-registry`
```shell-session
$ mkdir -p /opt/volumes/artifactory-registry
```
Add the host_volume information to the client stanza in the Nomad configuration.
```hcl
client {
  # ...
  host_volume "artifactory-registry" {
    path      = "/opt/volumes/artifactory-registry"
    read_only = false
  }
}
```
Restart Nomad to read the new configuration.
```shell-session
$ systemctl restart nomad
```
### Add your registry to your daemon.json file
If you would like to use your registry with Nomad and do not want to configure
SSL, you can add the following to the `daemon.json` file on each of your Nomad
clients and restart Docker.
```json
{
"insecure-registries" : ["registry.service.consul:5000"],
}
```
You will need to do this on any machine that you would like to push to or pull
from your registry.
================================================
FILE: applications/artifactory_oss/registry.nomad
================================================
job "registry" {
datacenters = ["dc1"]
priority = 80
group "docker" {
network {
port "registry" {
to = 5000
static = 5000
}
}
service {
name = "registry"
port = "registry"
check {
type = "tcp"
port = "registry"
interval = "10s"
timeout = "2s"
}
}
volume "artifactory-registry" {
type = "host"
source = "artifactory-registry"
read_only = false
}
task "container" {
driver = "docker"
volume_mount {
volume = "artifactory-registry"
destination = "/var/lib/registry"
}
config {
image = "docker.bintray.io/jfrog/artifactory-oss:latest"
ports = ["registry"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/cluster-broccoli/example.nomad
================================================
job "example" {
datacenters = ["dc1"]
group "cache" {
network {
port "db" {
to = 6379
}
}
task "redis" {
driver = "docker"
config {
image = "redis:7"
ports = ["db"]
auth_soft_fail = true
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/docker_registry/README.md
================================================
# Docker Registry
This job uses Nomad Host Volumes to provide an internal Docker registry which
can be used to host private containers for a Nomad cluster.
## Prerequisites
- **Consul** - This job leverages Consul service registrations for locating the registry
instances.
## Necessary configuration
### Create the host volume in the configuration
Create a folder on one of your Nomad clients to host your registry files. This
example uses `/opt/volumes/docker-registry`
```shell-session
$ mkdir -p /opt/volumes/docker-registry
```
Add the host_volume information to the client stanza in the Nomad configuration.
```hcl
client {
# ...
host_volume "docker-registry" {
path = "/opt/volumes/docker-registry"
read_only = false
}
}
```
Restart Nomad to read the new configuration.
```shell-session
$ systemctl restart nomad
```
### Add your registry to your daemon.json file
If you would like to use your registry with Nomad and do not want to configure
SSL, you can add the following to the `daemon.json` file on each of your Nomad
clients and restart Docker.
```json
{
"insecure-registries" : ["registry.service.consul:5000"],
}
```
You will need to do this on any machine that you would like to push to or pull
from your registry.
================================================
FILE: applications/docker_registry/registry.nomad
================================================
job "registry" {
datacenters = ["dc1"]
priority = 80
group "docker" {
network {
port "registry" {
to = 5000
static = 5000
}
}
service {
name = "registry"
port = "registry"
check {
type = "tcp"
port = "registry"
interval = "10s"
timeout = "2s"
}
}
volume "docker-registry" {
type = "host"
source = "docker-registry"
read_only = false
}
task "container" {
driver = "docker"
volume_mount {
volume = "docker-registry"
destination = "/var/lib/registry"
}
config {
image = "registry"
ports = ["registry"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/docker_registry_v2/README.md
================================================
# Docker Registry
This job uses Nomad Host Volumes to provide an internal Docker registry which
can be used to host private containers for a Nomad cluster.
## Prerequisites
- **Consul** - This job leverages Consul service registrations for locating the registry
instances.
## Necessary configuration
### Create the host volume in the configuration
Create a folder on one of your Nomad clients to host your registry files. This
example uses `/opt/nomad/volumes/docker-registry`
```shell-session
$ mkdir -p /opt/nomad/volumes/docker-registry
```
Add the host_volume information to the client stanza in the Nomad configuration.
```hcl
client {
# ...
host_volume "docker-registry" {
path = "/opt/nomad/volumes/docker-registry"
read_only = false
}
}
```
Restart Nomad to read the new configuration.
```shell-session
$ systemctl restart nomad
```
### Add your registry to your daemon.json file
If you would like to use your registry with Nomad and do not want to configure
SSL, you can add the following to the `daemon.json` file on each of your Nomad
clients and restart Docker.
```json
{
"insecure-registries" : ["registry.service.consul:5000"],
}
```
You will need to do this on any machine that you would like to push to or pull
from your registry.
================================================
FILE: applications/docker_registry_v2/htpasswd
================================================
user:$2y$05$kyEyguS/Sisz7SMjqKQZ1eQDCM7pSFiItkL9yiVIDOVyQfj8XTCAS
================================================
FILE: applications/docker_registry_v2/make_password.sh
================================================
#!/bin/bash
docker run --rm -it -v "$(pwd):/out" --entrypoint htpasswd xmartlabs/htpasswd -Bbc "/out/$1" "$2" "$3"
================================================
FILE: applications/docker_registry_v2/registry.nomad
================================================
job "registry" {
datacenters = ["dc1"]
priority = 80
group "docker" {
network {
port "registry" {
to = 5000
static = 5000
}
}
service {
name = "registry"
port = "registry"
check {
type = "tcp"
port = "registry"
interval = "10s"
timeout = "2s"
}
}
volume "docker-registry" {
type = "host"
source = "docker-registry"
read_only = false
}
task "container" {
driver = "docker"
template {
destination = "secrets/htpasswd"
data = <<EOH
user:$2y$05$kyEyguS/Sisz7SMjqKQZ1eQDCM7pSFiItkL9yiVIDOVyQfj8XTCAS
EOH
}
volume_mount {
volume = "docker-registry"
destination = "/var/lib/registry"
}
env {
REGISTRY_AUTH="htpasswd"
REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm"
REGISTRY_AUTH_HTPASSWD_PATH="/secrets/htpasswd"
}
config {
image = "registry"
ports = ["registry"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/docker_registry_v3/README.md
================================================
# Docker Registry
This job uses Nomad Host Volumes to provide an internal Docker registry which
can be used to host private containers for a Nomad cluster.
## Prerequisites
- **Nomad 1.4+** - This job leverages:
- Nomad service discovery for locating the registry instances.
- Nomad variables for maintaining the user authentication information
## Necessary configuration
### Create the host volume in the configuration
Create a folder on one of your Nomad clients to host your registry files. This
example uses `/opt/nomad/volumes/docker-registry`
```shell-session
$ mkdir -p /opt/nomad/volumes/docker-registry
```
Add the host_volume information to the client stanza in the Nomad configuration.
```hcl
client {
# ...
host_volume "docker-registry" {
path = "/opt/nomad/volumes/docker-registry"
read_only = false
}
}
```
Restart Nomad to read the new configuration.
```shell-session
$ systemctl restart nomad
```
### Add your registry to your daemon.json file
If you would like to use your registry with Nomad and do not want to configure
SSL, you can add the following to the `daemon.json` file on each of your Nomad
clients and restart Docker.
```json
{
"insecure-registries" : ["registry.service.consul:5000"],
}
```
You will need to do this on any machine that you would like to push to or pull
from your registry.
================================================
FILE: applications/docker_registry_v3/make_password.sh
================================================
#!/bin/bash
user=$1
password=$2
if command -v htpasswd > /dev/null; then
  out=$(htpasswd -Bbn "$user" "$password")
elif command -v docker > /dev/null; then
  echo 'Notice: htpasswd is not installed. Using docker to run it.' >&2
  out=$(docker run --rm -i --entrypoint htpasswd xmartlabs/htpasswd -Bbn "$user" "$password")
else
  echo 'Notice: this script requires htpasswd or docker.' >&2
  exit 1
fi
# Strip the "user:" prefix and any stray line endings to get the bare hash.
password=$(printf '%s' "$out" | tr -d '\r\n' | cut -d: -f2-)
varPath="nomad/jobs/registry/docker/container"
nomad var get "$varPath" | nomad var put - "$user"="$password"
================================================
FILE: applications/docker_registry_v3/registry.nomad
================================================
job "registry" {
datacenters = ["dc1"]
priority = 80
group "docker" {
network {
port "registry" {
to = 5000
static = 5000
}
}
service {
name = "registry"
port = "registry"
check {
type = "tcp"
port = "registry"
interval = "10s"
timeout = "2s"
}
}
volume "docker-registry" {
type = "host"
source = "docker-registry"
read_only = false
}
task "container" {
driver = "docker"
template {
destination = "secrets/htpasswd"
data = <<EOH
{{ with nomadVar "nomad/jobs/registry/docker/container"}}{{range $K, $V := .}}{{printf "%s:%s\n" $K $V}}{{end}}{{end}}
EOH
}
volume_mount {
volume = "docker-registry"
destination = "/var/lib/registry"
}
env {
REGISTRY_AUTH="htpasswd"
REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm"
REGISTRY_AUTH_HTPASSWD_PATH="/secrets/htpasswd"
}
config {
image = "registry"
ports = ["registry"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/mariadb/mariadb.nomad
================================================
job "mariadb" {
datacenters = ["dc1"]
type = "service"
group "bootstrap" {
count = 1
network {
mode = "bridge"
port "mysql" {
to = 3306
}
}
service {
name = "mariadb-${NOMAD_ALLOC_ID}"
port = "mysql"
check {
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
task "mariadb-bootstrap" {
driver = "docker"
user = "root"
config {
image = "bitnami/mariadb-galera:10.5"
}
env {
MARIADB_GALERA_NODE_NAME = "localhost"
MARIADB_GALERA_NODE_ADDRESS = "${NOMAD_ADDRESS_mariadb-bootstrap}"
MARIADB_GALERA_CLUSTER_BOOTSTRAP = "yes"
MARIADB_GALERA_CLUSTER_ADDRESS = "${NOMAD_ADDRESS_mariadb-bootstrap}"
MARIADB_GALERA_CLUSTER_NAME = "my_galera"
MARIADB_GALERA_MARIABACKUP_USER = "my_mariabackup_user"
MARIADB_GALERA_MARIABACKUP_PASSWORD = "my_mariabackup_password"
MARIADB_ROOT_PASSWORD = "my_root_password"
MARIADB_USER = "my_user"
MARIADB_PASSWORD = "my_password"
MARIADB_DATABASE = "my_database"
}
}
}
}
================================================
FILE: applications/membrane-soa/README.md
================================================
# Deploying a Java REST to SOAP Proxy in Connect
Technologies:
- Consul Service Mesh
- Consul Egress Gateways
- Nomad Java Task Driver
References:
- <https://www.membrane-soa.org/service-proxy-doc/4.7/soap-quickstart.htm>
- <https://www.membrane-soa.org/service-proxy-doc/4.7/rest2soap-gateway.htm>
Test URL: <http://localhost:2000/bank/37050198>
`service-proxy.sh`:
```bash
#!/bin/bash
homeSet() {
echo "MEMBRANE_HOME variable is now set"
CLASSPATH="$MEMBRANE_HOME/conf"
CLASSPATH="$CLASSPATH:$MEMBRANE_HOME/starter.jar"
export CLASSPATH
echo Membrane Router running...
java -classpath "$CLASSPATH" com.predic8.membrane.core.Starter -c proxies.xml
}
terminate() {
echo "Starting of Membrane Router failed."
echo "Please execute this script from the appropriate subfolder of MEMBRANE_HOME/examples/"
}
homeNotSet() {
echo "MEMBRANE_HOME variable is not set"
if [ -f "`pwd`/../../starter.jar" ]
then
export MEMBRANE_HOME="`pwd`/../.."
homeSet
else
terminate
fi
}
if [ "$MEMBRANE_HOME" ]
then homeSet
else homeNotSet
fi
```
================================================
FILE: applications/membrane-soa/soap-proxy-v1-linux.nomad
================================================
job "soap-proxy" {
datacenters = ["dc1"]
group "membrane" {
network {
port "admin" {
static = 9000
}
port "proxy" {
static = 2000
}
}
task "membrane" {
artifact {
source = "https://github.com/membrane/service-proxy/releases/download/v4.7.3/membrane-service-proxy-4.7.3.zip"
destination = "local"
}
template {
destination = "local/proxy-conf/proxies.xml"
data =<<EOD
<spring:beans xmlns="http://membrane-soa.org/proxies/1/"
xmlns:spring="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.2.xsd
http://membrane-soa.org/proxies/1/ http://membrane-soa.org/schemas/proxies-1.xsd">
<router>
<serviceProxy port="2000">
<rest2Soap>
<mapping regex="/bank/.*" soapAction=""
soapURI="/axis2/services/BLZService" requestXSLT="./get2soap.xsl"
responseXSLT="./strip-env.xsl" />
</rest2Soap>
<target host="thomas-bayer.com" />
</serviceProxy>
<serviceProxy name="Console" port="9000">
<adminConsole />
</serviceProxy>
</router>
</spring:beans>
EOD
}
template {
destination = "local/proxy-conf/get2soap.xsl"
data =<<EOD
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/">
<xsl:template match="/">
<s11:Envelope >
<s11:Body>
<blz:getBank xmlns:blz="http://thomas-bayer.com/blz/">
<blz:blz><xsl:value-of select="//path/component[2]"/></blz:blz>
</blz:getBank>
</s11:Body>
</s11:Envelope>
</xsl:template>
</xsl:stylesheet>
EOD
}
template {
destination = "local/proxy-conf/strip-env.xsl"
data =<<EOD
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/">
<xsl:template match="/">
<xsl:apply-templates select="//s11:Body/*"/>
</xsl:template>
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates />
</xsl:copy>
</xsl:template>
<!-- Get rid of the namespace prefixes in json. So
ns1:getBank will be just getBank
-->
<xsl:template match="*">
<xsl:element name="{local-name()}">
<xsl:apply-templates/>
</xsl:element>
</xsl:template>
</xsl:stylesheet>
EOD
}
env {
MEMBRANE_HOME = "/local/membrane-service-proxy-4.7.3"
}
driver = "java"
config {
class = "com.predic8.membrane.core.Starter"
class_path = "/local/membrane-service-proxy-4.7.3/conf:/local/membrane-service-proxy-4.7.3/starter.jar"
args = ["-c","/local/proxy-conf/proxies.xml"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/membrane-soa/soap-proxy-v1-windows.nomad
================================================
job "soap-proxy" {
datacenters = ["dc1"]
group "membrane" {
network {
port "admin" {
static = 9000
}
port "proxy" {
static = 2000
}
}
task "membrane" {
artifact {
source = "https://github.com/membrane/service-proxy/releases/download/v4.7.3/membrane-service-proxy-4.7.3.zip"
destination = "local"
}
template {
destination = "local/proxy-conf/proxies.xml"
data =<<EOD
<spring:beans xmlns="http://membrane-soa.org/proxies/1/"
xmlns:spring="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.2.xsd
http://membrane-soa.org/proxies/1/ http://membrane-soa.org/schemas/proxies-1.xsd">
<router>
<serviceProxy port="2000">
<rest2Soap>
<mapping regex="/bank/.*" soapAction=""
soapURI="/axis2/services/BLZService" requestXSLT="./get2soap.xsl"
responseXSLT="./strip-env.xsl" />
</rest2Soap>
<target host="thomas-bayer.com" />
</serviceProxy>
<serviceProxy name="Console" port="9000">
<adminConsole />
</serviceProxy>
</router>
</spring:beans>
EOD
}
template {
destination = "local/proxy-conf/get2soap.xsl"
data =<<EOD
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/">
<xsl:template match="/">
<s11:Envelope >
<s11:Body>
<blz:getBank xmlns:blz="http://thomas-bayer.com/blz/">
<blz:blz><xsl:value-of select="//path/component[2]"/></blz:blz>
</blz:getBank>
</s11:Body>
</s11:Envelope>
</xsl:template>
</xsl:stylesheet>
EOD
}
template {
destination = "local/proxy-conf/strip-env.xsl"
data =<<EOD
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/">
<xsl:template match="/">
<xsl:apply-templates select="//s11:Body/*"/>
</xsl:template>
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates />
</xsl:copy>
</xsl:template>
<!-- Get rid of the namespace prefixes in json. So
ns1:getBank will be just getBank
-->
<xsl:template match="*">
<xsl:element name="{local-name()}">
<xsl:apply-templates/>
</xsl:element>
</xsl:template>
</xsl:stylesheet>
EOD
}
env {
MEMBRANE_HOME = "local/membrane-service-proxy-4.7.3"
}
driver = "java"
config {
class = "com.predic8.membrane.core.Starter"
class_path = "local/membrane-service-proxy-4.7.3/conf;/local/membrane-service-proxy-4.7.3/starter.jar"
args = ["-c","local/proxy-conf/proxies.xml"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/membrane-soa/soap-proxy.nomad
================================================
locals {
membrane_home = "/local/membrane-service-proxy-4.7.3"
class_path = "${local.membrane_home}/conf:${local.membrane_home}/starter.jar"
}
job "soap-proxy" {
datacenters = ["dc1"]
group "membrane" {
network {
mode = "bridge"
dns {
servers = ["8.8.8.8", "8.8.4.4"]
}
port "admin" {
to = 9000
}
port "proxy" {
to = 2000
}
}
task "membrane" {
artifact {
source = "https://github.com/membrane/service-proxy/releases/download/v4.7.3/membrane-service-proxy-4.7.3.zip"
destination = "local"
}
template {
destination = "local/proxy-conf/proxies.xml"
data =<<EOD
<spring:beans xmlns="http://membrane-soa.org/proxies/1/"
xmlns:spring="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.2.xsd
http://membrane-soa.org/proxies/1/ http://membrane-soa.org/schemas/proxies-1.xsd">
<router>
<serviceProxy port="2000">
<rest2Soap>
<mapping regex="/bank/.*" soapAction=""
soapURI="/axis2/services/BLZService" requestXSLT="./get2soap.xsl"
responseXSLT="./strip-env.xsl" />
</rest2Soap>
<target host="thomas-bayer.com" />
</serviceProxy>
<serviceProxy name="Console" port="9000">
<adminConsole />
</serviceProxy>
</router>
</spring:beans>
EOD
}
template {
destination = "local/proxy-conf/get2soap.xsl"
data =<<EOD
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/">
<xsl:template match="/">
<s11:Envelope >
<s11:Body>
<blz:getBank xmlns:blz="http://thomas-bayer.com/blz/">
<blz:blz><xsl:value-of select="//path/component[2]"/></blz:blz>
</blz:getBank>
</s11:Body>
</s11:Envelope>
</xsl:template>
</xsl:stylesheet>
EOD
}
template {
destination = "local/proxy-conf/strip-env.xsl"
data =<<EOD
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/">
<xsl:template match="/">
<xsl:apply-templates select="//s11:Body/*"/>
</xsl:template>
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates />
</xsl:copy>
</xsl:template>
<!-- Get rid of the namespace prefixes in json. So
ns1:getBank will be just getBank
-->
<xsl:template match="*">
<xsl:element name="{local-name()}">
<xsl:apply-templates/>
</xsl:element>
</xsl:template>
</xsl:stylesheet>
EOD
}
env {
MEMBRANE_HOME = "/local/membrane-service-proxy-4.7.3"
}
driver = "java"
config {
class = "com.predic8.membrane.core.Starter"
class_path = "/local/membrane-service-proxy-4.7.3/conf:/local/membrane-service-proxy-4.7.3/starter.jar"
args = ["-c","/local/proxy-conf/proxies.xml"]
}
# driver = "exec"
# config {
# command = "/bin/bash"
# args = ["-c","while true; do sleep 500; done"]
# }
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/minio/README.md
================================================
# Minio S3-compatible Storage
This job uses Nomad Host Volumes to provide an internal S3-compatible storage
environment which can be used to host private artifacts for a Nomad cluster.
## Prerequisites
- **Consul** - This job leverages Consul service registrations for locating the
MinIO instance.
## Necessary configuration
### Create the host volume in the configuration
Create a folder on one of your Nomad clients to host your MinIO data. This
example uses `/opt/volumes/minio-data`.
```shell-session
$ mkdir -p /opt/volumes/minio-data
```
Add the host_volume information to the client stanza in the Nomad configuration.
```hcl
client {
# ...
host_volume "minio-data" {
path = "/opt/volumes/minio-data"
read_only = false
}
}
```
Restart Nomad to read the new configuration.
```shell-session
$ systemctl restart nomad
```
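Once MinIO is running, jobs can pull private artifacts from it. The following is a minimal sketch, assuming Consul DNS resolution is configured and that a bucket named `artifacts` containing `app.tar.gz` exists (both names are hypothetical):

```hcl
task "app" {
  driver = "exec"

  # go-getter's S3 getter works against S3-compatible endpoints such as MinIO.
  # Credentials can be supplied via query parameters or environment variables.
  artifact {
    source = "s3::http://minio.service.consul:9000/artifacts/app.tar.gz"
  }

  config {
    command = "local/app"
  }
}
```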
================================================
FILE: applications/minio/minio.nomad
================================================
job "minio" {
datacenters = ["dc1"]
priority = 80
group "storage" {
network {
port "api" {
to = 9000
static = 9000
}
}
service {
name = "minio"
port = "api"
check {
type = "tcp"
port = "api"
interval = "10s"
timeout = "2s"
}
}
volume "minio-data" {
type = "host"
source = "minio-data"
read_only = false
}
task "minio" {
driver = "docker"
env {
MINIO_ROOT_USER = "AKIAIOSFODNN7EXAMPLE"
MINIO_ROOT_PASSWORD = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
volume_mount {
volume = "minio-data"
destination = "/data"
}
config {
image = "minio/minio"
args = ["server", "/data"]
ports = ["api"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
# docker run -p 9000:9000 \
# --name minio1 \
# -v /mnt/data:/data \
# -e "MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE" \
# -e "MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
# minio/minio server /data
================================================
FILE: applications/minio/secure-variables/README.md
================================================
# Minio S3-compatible Storage
This job uses Nomad Host Volumes to provide an internal S3-compatible storage
environment which can be used to host private artifacts for a Nomad cluster.
## Prerequisites
- **Nomad 1.4** - This job leverages Nomad service registrations for locating the
MinIO instance and uses Nomad Variables.
## Necessary configuration
### Create the host volume in the configuration
Create a folder on one of your Nomad clients to host your MinIO data. This
example uses `/opt/volumes/minio-data`.
```shell-session
$ mkdir -p /opt/volumes/minio-data
```
Add the host_volume information to the client stanza in the Nomad configuration.
```hcl
client {
# ...
host_volume "minio-data" {
path = "/opt/volumes/minio-data"
read_only = false
}
}
```
Restart Nomad to read the new configuration.
```shell-session
$ systemctl restart nomad
```
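Because the service is registered with `provider = "nomad"`, other jobs can locate the MinIO instance with the `nomadService` template function instead of Consul. A minimal sketch (the destination and variable names are illustrative):

```hcl
template {
  destination = "local/minio.env"
  env         = true

  data = <<EOF
# Resolve the MinIO endpoint from Nomad's native service catalog.
{{ range nomadService "minio" }}
MINIO_ENDPOINT=http://{{ .Address }}:{{ .Port }}
{{ end }}
EOF
}
```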
================================================
FILE: applications/minio/secure-variables/minio-data/.gitkeep
================================================
================================================
FILE: applications/minio/secure-variables/minio.nomad
================================================
# minio is an AWS S3-compatible storage engine
job "minio" {
datacenters = ["dc1"]
priority = 80
group "storage" {
network {
port "api" {
to = 9000
static = 9000
}
}
service {
name = "minio"
port = "api"
provider = "nomad"
check {
type = "tcp"
port = "api"
interval = "10s"
timeout = "2s"
}
}
volume "minio-data" {
type = "host"
source = "minio-data"
read_only = false
}
task "minio" {
driver = "docker"
template {
destination = "${NOMAD_SECRETS_DIR}/env.vars"
env = true
change_mode = "restart"
data =<<EOF
{{- with nomadVar "nomad/jobs/minio/storage/minio" -}}
MINIO_ROOT_USER = {{.root_user}}
MINIO_ROOT_PASSWORD = {{.root_password}}
{{- end -}}
EOF
}
volume_mount {
volume = "minio-data"
destination = "/data"
}
config {
image = "minio/minio"
args = ["server", "/data"]
ports = ["api"]
}
}
}
}
================================================
FILE: applications/minio/secure-variables/start.sh
================================================
#! /usr/bin/env bash
mkdir -p minio-data
sed "s|«/absolute/path/to»|$(pwd)|g" volume.hcl > .volume_patch.hcl
nohup nomad agent -dev -config=.volume_patch.hcl -acl-enabled >nomad.log 2>&1 &
echo -n $! > .nomad.pid
echo "Nomad PID is $(cat .nomad.pid)"
disown
# wait for leadership
sleep 3
echo '{"BootstrapSecret": "2b778dd9-f5f1-6f29-b4b4-9a5fa948757a"}' | nomad operator api /v1/acl/bootstrap
echo ''
export NOMAD_TOKEN=2b778dd9-f5f1-6f29-b4b4-9a5fa948757a
echo -n ${NOMAD_TOKEN} > .nomad.token
nomad var put nomad/jobs/minio/storage/minio \
root_user="AKIAIOSFODNN7EXAMPLE" \
root_password="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
nomad job run -detach minio.nomad
echo 'export NOMAD_TOKEN=2b778dd9-f5f1-6f29-b4b4-9a5fa948757a'
================================================
FILE: applications/minio/secure-variables/stop.sh
================================================
#! /usr/bin/env bash
PID=$(cat .nomad.pid)
echo "Stopping Nomad (pid: ${PID})"
kill "${PID}"
rm -rf .nomad.pid
rm -rf .nomad.token
rm -rf .volume_patch.hcl
rm -rf nomad.log
rm -rf minio-data
echo "Done."
================================================
FILE: applications/minio/secure-variables/volume.hcl
================================================
# The host volume configuration for the minio task. The start.sh
# script will make a derived copy of this file with the place-
# holder--«/absolute/path/to»--replaced with the output of `pwd`
client {
host_volume "minio-data" {
path = "«/absolute/path/to»/minio-data"
read_only = false
}
}
================================================
FILE: applications/postgres/README.md
================================================
# Stateful example of Postgres with Host Volumes
## Configure a supportive host volume
This job uses a volume named
`pg-data`. On one of your Nomad clients, either create an additional
configuration file (if your `-config` flag points to a directory)
or add a `host_volume` stanza to your existing client configuration
similar to the following.
```hcl
client {
host_volume "pg-data" {
path = "/opt/nomad/volumes/pg-data"
read_only = false
}
}
```
Create the directory to support the volume.
```shell-session
$ mkdir -p /opt/nomad/volumes/pg-data
```
Restart Nomad to enable the new host volume.
```shell-session
$ systemctl restart nomad
```
Verify that the host volume is available.
```shell-session
$ nomad node status -self -verbose
```
Once the client finishes starting, you should see the `pg-data` host volume
listed in the **Host Volumes** section of the output.
```
Host Volumes
Name ReadOnly Source
pg-data false /opt/nomad/volumes/pg-data
```
Run the job.
```shell-session
$ nomad job run postgres.nomad
```
Once the job starts, check the allocation status to determine what IP and
port you need to connect to.
Connect to the instance using a PostgreSQL client at the scheduled IP address
and port. Use user `postgres` and password `mysecretpassword`.
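For example, with the `psql` client (substitute the IP address and port from the allocation status output):

```shell-session
$ psql -h <ip> -p <port> -U postgres
```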
================================================
FILE: applications/postgres/postgres.nomad
================================================
job "postgres" {
datacenters = ["dc1"]
group "database" {
network {
port "db" {
to = 5432
}
}
service {
name = "db"
port = "db"
check {
type = "tcp"
port = "db"
interval = "10s"
timeout = "2s"
}
}
volume "pg-data" {
type = "host"
source = "pg-data"
read_only = false
}
task "postgres" {
driver = "docker"
env {
POSTGRES_PASSWORD="mysecretpassword"
# POSTGRES_USER=""
# POSTGRES_DB=""
PGDATA="/var/lib/postgresql/data/pgdata"
}
volume_mount {
volume = "pg-data"
destination = "/var/lib/postgresql/data"
}
config {
image = "postgres"
ports = ["db"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/prometheus/README.md
================================================
# Prometheus
On the client, you will need a firewall rule that allows the Docker containers to talk
to the local Consul agents.
```
firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address=172.17.0.0/16 accept' && firewall-cmd --reload
```
## Connecting to the instances
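Both UIs register fabio `urlprefix-` tags, so if the fabio job from `fabio-service.nomad` is running, you can reach them through fabio's HTTP port (9999) on any client. For example:

```shell-session
$ curl http://<client-ip>:9999/prometheus/
$ curl http://<client-ip>:9999/grafana/
```

Alternatively, use `nomad alloc status` to find the dynamically assigned `prometheus_ui` and `grafana_ui` ports and connect to them directly.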
================================================
FILE: applications/prometheus/fabio-service.nomad
================================================
# For ACL-enabled Consul Clusters, you need to specify a Consul ACL token down
# in the `fabio-linux-amd64` task's env stanza. Uncomment the example and
# replace the token with a valid Consul ACL token.
job "fabio" {
datacenters = ["dc1"]
type = "system"
update {
stagger = "5s"
max_parallel = 1
}
group "fabio-linux-amd64" {
network {
port "http" {
static = 9999
}
port "ui" {
static = 9998
}
}
task "fabio-linux-amd64" {
constraint {
attribute = "${attr.cpu.arch}"
operator = "="
value = "amd64"
}
constraint {
attribute = "${attr.kernel.name}"
operator = "="
value = "linux"
}
artifact {
source = "https://github.com/fabiolb/fabio/releases/download/v1.5.15/fabio-1.5.15-go1.15.5-linux_amd64"
options {
checksum = "sha256:14c7a02ca95fb00a4f3010eab4e3c0e354a3f4953d2a793cb800332012f42066"
}
}
driver = "exec"
config {
command = "fabio-1.5.15-go1.15.5-linux_amd64"
}
env {
# FABIO_REGISTRY_CONSUL_TOKEN = "c62d8564-c0c5-8dfe-3e75-005debbd0e40"
}
resources {
cpu = 200
memory = 32
}
}
}
}
================================================
FILE: applications/prometheus/grafana/README.md
================================================
Thanks to [Nextty](https://grafana.com/orgs/derekamz) for two great Grafana dashboards to start with:
* Nomad Jobs - https://grafana.com/dashboards/6281
* Nomad Cluster -
================================================
FILE: applications/prometheus/grafana/nomad_jobs.json
================================================
{
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "5.1.2"
},
{
"type": "panel",
"id": "graph",
"name": "Graph",
"version": "5.0.0"
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "5.0.0"
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": 6281,
"graphTooltip": 0,
"id": null,
"iteration": 1527401878265,
"links": [],
"panels": [
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "${DS_PROMETHEUS}",
"fill": 1,
"gridPos": {
"h": 6,
"w": 12,
"x": 0,
"y": 0
},
"id": 2,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": "host",
"repeatDirection": "v",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg(nomad_client_allocs_cpu_total_percent{host=~\"$host\"}) by(exported_job, task)",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{task}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "CPU Usage Percent - $host",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 3,
"format": "percentunit",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "${DS_PROMETHEUS}",
"fill": 1,
"gridPos": {
"h": 6,
"w": 12,
"x": 12,
"y": 0
},
"id": 3,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": "host",
"repeatDirection": "v",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg(nomad_client_allocs_cpu_total_ticks{host=~\"$host\"}) by(exported_job, task)",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{task}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "CPU Total Ticks - $host",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 3,
"format": "timeticks",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "${DS_PROMETHEUS}",
"fill": 1,
"gridPos": {
"h": 6,
"w": 12,
"x": 0,
"y": 6
},
"id": 6,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": "host",
"repeatDirection": "v",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg(nomad_client_allocs_memory_rss{host=~\"$host\"}) by(exported_job, task)",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{task}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "RSS - $host",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 3,
"format": "decbytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "${DS_PROMETHEUS}",
"fill": 1,
"gridPos": {
"h": 6,
"w": 12,
"x": 12,
"y": 6
},
"id": 7,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": "host",
"repeatDirection": "v",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg(nomad_client_allocs_memory_cache{host=~\"$host\"}) by(exported_job, task)",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{task}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "Memory Cache - $host",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 3,
"format": "decbytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"schemaVersion": 16,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"allValue": null,
"current": {},
"datasource": "${DS_PROMETHEUS}",
"hide": 0,
"includeAll": false,
"label": "DC",
"multi": false,
"name": "datacenter",
"options": [],
"query": "label_values(nomad_client_uptime, datacenter)",
"refresh": 1,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"current": {},
"datasource": "${DS_PROMETHEUS}",
"hide": 0,
"includeAll": true,
"label": "Host",
"multi": true,
"name": "host",
"options": [],
"query": "label_values(nomad_client_uptime{datacenter=~\"$datacenter\"}, host)",
"refresh": 2,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "",
"title": "Nomad Jobs",
"uid": "TvqbbhViz",
"version": 12,
"description": "Nomad Jobs metrics"
}
================================================
FILE: applications/prometheus/node-exporter.nomad
================================================
# The Prometheus Node Exporter needs access to the proc filesystem which is not
# mounted into the exec jail, so it requires the raw_exec driver to run.
job "prometheus-node-exporter" {
datacenters = ["dc1"]
type = "system"
group "system" {
network {
port "exporter" {
static = 9100
}
}
service {
name = "node-exporter"
tags = []
port = "exporter"
check {
name = "alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
task "node-exporter" {
driver = "raw_exec"
config {
command = "local/node_exporter-0.18.1.linux-amd64/node_exporter"
args = [
"--web.listen-address=:${NOMAD_PORT_exporter}"
]
}
artifact {
source = "https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz"
destination = "local"
options {
checksum = "sha256:b2503fd932f85f4e5baf161268854bf5d22001869b84f00fd2d1f57b51b72424"
}
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/prometheus/prometheus.nomad
================================================
# For ACL-enabled Consul Clusters, you need to specify a Consul ACL token down
# in the `prometheus` task's scrape config.
job "prometheus" {
datacenters = ["dc1"]
type = "service"
update {
max_parallel = 1
min_healthy_time = "10s"
healthy_deadline = "3m"
auto_revert = false
canary = 0
}
group "monitoring" {
count = 1
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
network {
port "prometheus_ui" {
to = 9090
}
port "grafana_ui" {
to = 3000
}
}
service {
name = "prometheus-ui"
#tags = ["urlprefix-/prometheus"]
tags = ["urlprefix-/prometheus strip=/prometheus"]
port = "prometheus_ui"
check {
name = "prometheus_ui port alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
service {
name = "grafana-ui"
port = "grafana_ui"
tags = ["urlprefix-/grafana strip=/grafana"]
check {
name = "grafana-ui port alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
ephemeral_disk { size = 1000 }
task "grafana" {
artifact {
source="https://gist.githubusercontent.com/angrycub/046cee11bd3d8c4ab9a3819646c9660c/raw/c699095c2cb25b896e2c709da588b668ce82f8b5/prometheus_nomad.json"
destination="local/provisioning/dashboards/dashs"
}
template {
change_mode="noop"
destination="local/provisioning/dashboards/file_provider.yml"
data = <<EOH
apiVersion: 1
providers:
- name: 'default'
orgId: 1
folder: ''
type: file
disableDeletion: false
updateIntervalSeconds: 10 #how often Grafana will scan for changed dashboards
options:
path: {{ env "NOMAD_TASK_DIR" }}/provisioning/dashboards/dashs
EOH
}
template {
change_mode="noop"
destination="local/provisioning/datasources/prometheus_datasource.yml"
data = <<EOH
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
url: http://{{ env "NOMAD_ADDR_prometheus_ui" }}
EOH
}
env {
GF_SERVER_ROOT_URL = "http://127.0.0.1:9999/grafana/"
GF_PATHS_PROVISIONING = "${NOMAD_TASK_DIR}/provisioning"
}
driver = "docker"
config {
image = "grafana/grafana:6.1.4"
ports = ["grafana_ui"]
}
}
task "prometheus" {
template {
change_mode = "noop"
destination="local/prometheus.yml"
data = <<EOH
---
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'prometheus'
scrape_interval: 5s
static_configs:
- targets: ['localhost:9090']
- job_name: 'nomad'
scrape_interval: 10s
metrics_path: /v1/metrics
params:
format: ['prometheus']
consul_sd_configs:
- server: '{{ env "NOMAD_IP_prometheus_ui" }}:8500'
# token: "c62d8564-c0c5-8dfe-3e75-005debbd0e40"
services:
- "nomad"
- "nomad-client"
relabel_configs:
- source_labels: ['__meta_consul_tags']
regex: .*,http,.*
action: keep
EOH
}
driver = "docker"
config {
image = "prom/prometheus:v2.9.1"
args = [
"--web.external-url=http://127.0.0.1:9999/prometheus",
"--web.route-prefix=/",
"--config.file=/local/prometheus.yml"
]
ports = ["prometheus_ui"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/vms/freedos/.gitignore
================================================
*.img
# Created by https://www.toptal.com/developers/gitignore/api/macos
# Edit at https://www.toptal.com/developers/gitignore?templates=macos
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
# End of https://www.toptal.com/developers/gitignore/api/macos
================================================
FILE: applications/vms/freedos/README.md
================================================
## FreeDOS VM
This job fetches a small remote VM image and starts it in your Nomad cluster. It
also contains a task that starts a browser-based VNC viewer.
TODO: This job requires network namespace support for QEMU, which is not yet
available in a released version of Nomad.
================================================
FILE: applications/vms/freedos/freedos.img.tgz.SHASUM
================================================
8d2817126bf46ba2b4fca0b0c49eed2cc208c6f6448651e82c6d973fcba36569 freedos.img.tgz
================================================
FILE: applications/vms/freedos/freedos.nomad
================================================
job "freedos" {
datacenters = ["dc1"]
group "g1" {
network {
mode = "bridge"
port "webvnc" {}
}
service {
name = "freedos"
tags = ["sample"]
port = "webvnc"
check {
type = "tcp"
port = "webvnc"
interval = "10s"
timeout = "2s"
}
}
task "novnc" {
driver = "docker"
env {
NOVNC_PORT = "${NOMAD_PORT_webvnc}"
VNC_SERVER_IP = "127.0.0.1"
VNC_SERVER_PORT = "5901"
}
config {
image = "voiselle/novnc"
ports = ["webvnc"]
}
}
task "freedos" {
artifact {
source = "https://github.com/angrycub/nomad_example_jobs/raw/main/applications/vms/freedos/freedos.img.tgz"
destination = "local"
options {
checksum = "sha256:8d2817126bf46ba2b4fca0b0c49eed2cc208c6f6448651e82c6d973fcba36569"
}
}
driver = "qemu"
config {
image_path = "local/freedos.img"
accelerator = "kvm"
args = [
"-vnc", "127.0.0.1:1"
]
}
}
}
}
================================================
FILE: applications/vms/tinycore/README.md
================================================
# TinyCore QEMU example
This sample starts a TinyCore Linux VM configured with the SSH daemon
enabled. It uses QEMU's port forwarding so that Nomad can dynamically assign
an HTTP port and an SSH port for the VM.
You will need to serve the `tinycore.qcow2` image somewhere it can be
retrieved using the artifact stanza.
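One simple way to serve the image is Python's built-in HTTP server, run from the directory containing the archive. The artifact stanza in `tc_ssh.nomad` expects it on port 8000 of a reachable address; update the `source` URL to match your server.

```shell-session
$ python3 -m http.server 8000
```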
================================================
FILE: applications/vms/tinycore/tc_ssh.nomad
================================================
job "j1" {
datacenters = ["dc1"]
group "g1" {
network {
mode = "bridge"
port "http" {
to = 80
}
port "ssh" {
to = 23
}
port "webvnc" {}
}
service {
tags = ["tag1"]
port = "http"
check {
type = "http"
port = "http"
path = "/index.html"
interval = "10s"
timeout = "2s"
}
}
task "novnc" {
driver = "docker"
env {
NOVNC_PORT = "${NOMAD_PORT_webvnc}"
VNC_SERVER_IP = "127.0.0.1"
VNC_SERVER_PORT = "5901"
}
config {
image = "voiselle/novnc"
ports = ["webvnc"]
}
}
task "t1" {
template {
data = <<EOH
Guest System
EOH
destination = "local/index.html"
}
artifact {
source = "http://10.0.0.188:8000/tinycore.qcow2.tgz"
}
driver = "qemu"
config {
image_path = "local/tinycore.qcow2"
## Uncomment if KVM is available on your system
accelerator = "kvm"
args = [
"-drive", "file=fat:rw:/opt/nomad/data/alloc/${NOMAD_ALLOC_ID}/${NOMAD_TASK_NAME}/local,format=raw,media=disk",
]
ports = ["ssh", "http"]
vnc {
enabled = true
ip = "127.0.0.1"
display = 1
}
}
}
}
}
================================================
FILE: applications/vms/tinycore/tinycore.qcow2.tgz
================================================
[File too large to display: 17.5 MB]
================================================
FILE: applications/wordpress/README.md
================================================
# Wordpress
This job demonstrates several useful patterns for creating Nomad jobs:
- Nomad Host Volumes for persistent storage
- Using a prestart task to wait until a dependency is available
- Template driven configuration to minimize static port references
## Prerequisites
- **Consul** - This job leverages Consul service registrations to locate the
supporting MySQL instance.
## Necessary configuration
### Create the host volume in the configuration
Create a folder on one of your Nomad clients to host the database files. This
example uses `/opt/volumes/my-website-db`.
```shell-session
$ mkdir -p /opt/volumes/my-website-db
```
Add the host_volume information to the client stanza in the Nomad configuration.
```hcl
client {
# ...
host_volume "my-website-db" {
path = "/opt/volumes/my-website-db"
read_only = false
}
}
```
Restart Nomad to read the new configuration.
```shell-session
$ systemctl restart nomad
```
================================================
FILE: applications/wordpress/distributed/README.md
================================================
# WordPress
This job demonstrates several useful patterns for creating Nomad jobs:
- Nomad Host Volumes for persistent storage
- Using a pre-start task to wait until a dependency is available
- Template driven configuration to reduce static port references
## Prerequisites
- **Consul** — This job leverages Consul service registrations to locate
the supporting MySQL instance.
## Necessary configuration
### Create the host volume in the configuration
Create a folder on one of your Nomad clients to host the database files. This
example uses `/opt/nomad/volumes/wordpress-db`.
```shell-session
mkdir -p /opt/nomad/volumes/wordpress-db
```
Add the `host_volume` information to the client stanza in the Nomad configuration.
If your `-config` flag points to a directory, you can create this as a standalone
file in that same folder.
```hcl
client {
# ...
host_volume "wordpress-db" {
path = "/opt/nomad/volumes/wordpress-db"
read_only = false
}
}
```
Restart Nomad to read the new configuration.
```shell
systemctl restart nomad
```
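One of the patterns listed above is a pre-start task that waits for a dependency. A minimal sketch of that pattern, assuming the MySQL service is registered in Consul as `wordpress-db` and Consul DNS is resolvable from the container:

```hcl
task "await-db" {
  lifecycle {
    hook    = "prestart"
    sidecar = false
  }

  driver = "docker"

  config {
    image   = "busybox:1.34"
    command = "sh"
    # Block until Consul DNS can resolve the database service.
    args = ["-c", "until nslookup wordpress-db.service.consul; do sleep 2; done"]
  }

  resources {
    cpu    = 100
    memory = 32
  }
}
```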
================================================
FILE: applications/wordpress/distributed/build-site.nomad
================================================
job "build-site" {
datacenters = ["dc1"]
type = "batch"
parameterized {
meta_required = ["site_name"]
}
group "sitebuilder" {
task "generate-password" {
lifecycle {
hook = "prestart"
sidecar = false
}
template {
destination = "secrets/generate_keys.sh"
perms = "755"
data = <<EOT
#!/bin/bash
{{- $NMSN := env "NOMAD_META_site_name" -}}
{{- $UUID := uuidv4 -}}
Site={{ $NMSN }}
UUID={{ $UUID }}
CONSUL_HTTP_TOKEN=c62d8564-c0c5-8dfe-3e75-005debbd0e40
echo "Creating credentials for site $Site..."
consul kv put wordpress/sites/$Site/db/user wp-site-$Site
consul kv put wordpress/sites/$Site/db/pass $UUID
consul kv put wordpress/sites/$Site/db/name wordpress-$Site
EOT
}
driver = "raw_exec"
config {
command = "secrets/generate_keys.sh"
}
}
task "make-database" {
template {
destination = "local/run.sql"
data = <<EOT
{{- $Site := env "NOMAD_META_site_name" -}}
CREATE DATABASE `{{ key (printf "wordpress/sites/%s/db/name" $Site) }}`;
CREATE USER '{{ key (printf "wordpress/sites/%s/db/user" $Site) }}' IDENTIFIED BY '{{ key (printf "wordpress/sites/%s/db/pass" $Site) }}';
EOT
}
template {
destination = "secrets/env.txt"
env = true
data = <<EOT
MYSQL_PASSWORD=somewordpress
EOT
}
driver = "docker"
config {
image = "arey/mysql-client"
args = [
"--host=${MYSQL_HOST}",
"--port=${MYSQL_PORT}",
"--user=root",
"--password=${MYSQL_PASSWORD}",
"--execute=\"source /local/run.sql\""
]
}
}
}
}
# $ docker run -v <path to sql>:/sql --link <mysql server container name>:mysql -it arey/mysql-client -h mysql -p <password> -D <database name> -e "source /sql/<your sql file>"
================================================
FILE: applications/wordpress/distributed/nginx.nomad
================================================
job "nginx" {
datacenters = ["dc1"]
type = "system"
group "nginx" {
network {
port "http" {
static = 80
}
}
service {
name = "wp"
port = "http"
}
task "nginx" {
driver = "docker"
config {
image = "nginx"
ports = ["http"]
volumes = [
"local:/etc/nginx/conf.d",
]
}
template {
data = <<EOF
{{- $ServicesByTag := (service "wordpress-sites" | byTag) -}}{{- $I := 0 -}}
{{- /* {{- printf "http {\n" -}} */ -}}
{{- range $ServiceTag, $services := $ServicesByTag -}}
{{- if gt $I 0 -}}{{- printf "\n\n" -}}{{- end -}}
{{- printf "##\n## %s \n##\n" $ServiceTag -}}
{{- printf " upstream %s {\n" $ServiceTag -}}
{{- range $services -}}
{{- printf " server %s:%d;\n" .Address .Port -}}
{{- else -}}
{{- printf " server 127.0.0.1:65535; # force a 502\n" -}}
{{- end -}}
{{- printf " }\n" }}
server {
listen 80;
server_name {{$ServiceTag}}.wp.service.consul;
location / {
proxy_pass http://{{$ServiceTag}};
}
}
{{- $I = add $I 1 -}}
{{- end -}}
{{- printf "\n" -}}
{{- /* {{- printf "}\n" -}} */ -}}
EOF
destination = "local/load-balancer.conf"
change_mode = "signal"
change_signal = "SIGHUP"
}
}
}
}
================================================
FILE: applications/wordpress/distributed/reset.sh
================================================
================================================
FILE: applications/wordpress/distributed/wordpress-db.nomad
================================================
job "wordpress-db" {
datacenters = ["dc1"]
group "database" {
network {
port "db" {
to = 3306
}
}
service {
name = "wordpress-db"
port = "db"
check {
type = "tcp"
port = "db"
interval = "10s"
timeout = "2s"
}
}
volume "wordpress-db" {
type = "host"
source = "wordpress-db"
read_only = false
}
task "mysql" {
driver = "docker"
env {
MYSQL_ROOT_PASSWORD="somewordpress"
MYSQL_DATABASE="wordpress"
MYSQL_USER="wordpress"
MYSQL_PASSWORD="wordpress"
}
volume_mount {
volume = "wordpress-db"
destination = "/var/lib/mysql"
}
config {
image = "mysql:5.7"
ports = ["db"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/wordpress/distributed/wordpress.nomad
================================================
variable "site_name" {
type = string
description = "The site_name is used to set the consul tag for the website. This makes them available at \"site_name.wordpress-sites.service.consul\""
}
job "my-website" {
name = "wp-site-${var.site_name}"
id = "wp-site-${var.site_name}"
datacenters = ["dc1"]
group "wordpress" {
count = 2
network {
port "http" {
to = 80
}
}
service {
name = "wordpress-sites"
tags = ["${var.site_name}"]
port = "http"
check {
type = "tcp"
port = "http"
interval = "10s"
timeout = "2s"
}
}
task "await-wordpress-db" {
driver = "docker"
template {
destination = "local/await-db.sh"
perms = "700"
data = <<EOT
#!/bin/sh
echo -n 'Waiting for wordpress-db service...'
until nslookup -port=8600 wordpress-db.service.consul ${NOMAD_IP_http} >/dev/null 2>&1
do
echo -n '.'
sleep 2
# There is a good opportunity to add a loop counter and a bail-out too, but
# this script waits forever.
done
echo " Done."
EOT
}
config {
image = "alpine:latest"
command = "local/await-db.sh"
network_mode = "host"
}
resources {
cpu = 200
memory = 128
}
lifecycle {
hook = "prestart"
sidecar = false
}
}
task "wordpress" {
driver = "docker"
template {
data = <<EOH
{{- if service "wordpress-db" -}}
{{- with index (service "wordpress-db") 0 -}}
WORDPRESS_DB_HOST={{ .Address }}:{{ .Port }}
{{- end -}}
{{- end }}
WORDPRESS_DB_USER=wordpress
WORDPRESS_DB_PASSWORD=wordpress
WORDPRESS_DB_NAME=wordpress-${var.site_name}
EOH
destination = "local/envvars.txt"
env = true
}
config {
image = "wordpress:latest"
ports = ["http"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: applications/wordpress/simple/README.md
================================================
# WordPress
This job demonstrates several useful patterns for creating Nomad jobs:
- Nomad Host Volumes for persistent storage
- Using a prestart task to wait until a dependency is available
- Template driven configuration to minimize static port references
## Prerequisites
- **Consul** - This job leverages Consul service registrations to locate the
supporting MySQL instance.
## Necessary configuration
### Create the host volume in the configuration
Create a folder on one of your Nomad clients to host your registry files. This
example uses `/opt/volumes/my-website-db`
```shell-session
$ mkdir -p /opt/volumes/my-website-db
```
Add the `host_volume` information to the client stanza in the Nomad configuration.
```hcl
client {
# ...
host_volume "my-website-db" {
path = "/opt/volumes/my-website-db"
read_only = false
}
}
```
Restart Nomad to read the new configuration.
```shell-session
$ systemctl restart nomad
```
================================================
FILE: applications/wordpress/simple/wordpress.nomad
================================================
job "my-website" {
datacenters = ["dc1"]
group "database" {
network {
port "db" {
to = 3306
}
}
service {
name = "my-website-db"
port = "db"
check {
type = "tcp"
port = "db"
interval = "10s"
timeout = "2s"
}
}
volume "my-website-db" {
type = "host"
source = "my-website-db"
read_only = false
}
task "mysql" {
driver = "docker"
env {
MYSQL_ROOT_PASSWORD="somewordpress"
MYSQL_DATABASE="wordpress"
MYSQL_USER="wordpress"
MYSQL_PASSWORD="wordpress"
}
volume_mount {
volume = "my-website-db"
destination = "/var/lib/mysql"
}
config {
image = "mysql:5.7"
ports = ["db"]
}
resources {
cpu = 500
memory = 256
}
}
}
group "wordpress" {
network {
port "http" {
to = 80
}
}
service {
name = "my-website"
tags = ["www"]
port = "http"
check {
type = "tcp"
port = "http"
interval = "10s"
timeout = "2s"
}
}
task "await-my-website" {
driver = "docker"
config {
image = "alpine:latest"
command = "sh"
args = ["-c", "echo -n 'Waiting for service'; until nslookup -port=8600 my-website-db.service.consul ${NOMAD_IP_http} >/dev/null 2>&1; do echo -n '.'; sleep 2; done"]
network_mode = "host"
}
resources {
cpu = 200
memory = 128
}
lifecycle {
hook = "prestart"
sidecar = false
}
}
task "wordpress" {
driver = "docker"
template {
data = <<EOH
{{- if service "my-website-db" -}}
{{- with index (service "my-website-db") 0 -}}
WORDPRESS_DB_HOST={{ .Address }}:{{ .Port }}
{{- end -}}
{{- end }}
WORDPRESS_DB_USER=wordpress
WORDPRESS_DB_PASSWORD=wordpress
WORDPRESS_DB_NAME=wordpress
EOH
destination = "local/envvars.txt"
env = true
}
config {
image = "wordpress:latest"
ports = ["http"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: artifact_sleepyecho/README.md
================================================
## artifact_sleepyecho
Purpose:
This sample pulls a shell script from an AWS S3 bucket and runs it locally.
The shell script includes some additional smarts that let it simulate more
conditions.
The job as committed is somewhat uninteresting, but it can be changed up to add
Vault support, `template` stanza testing, or Consul KV output. Consider it a
building block for more robust reproducers.
================================================
FILE: artifact_sleepyecho/SleepyEcho.sh
================================================
#! /bin/bash
if [ -z "$1" ]
then
SLEEP_SECS="2"
else
SLEEP_SECS="$1"
fi
if [ -z "${EXTRAS}" ]
then
extras_part=""
else
extras_part="EXTRAS: [${EXTRAS}]"
fi
echo "$(date) -- Starting SleepyEcho. Sleep interval is ${SLEEP_SECS} sec. ${extras_part}"
if [ ! -f "/alloc/data/time.txt" ]
then
echo "$(date) -- Writing date to /alloc/data/time.txt"
echo -n "$(date)" > /alloc/data/time.txt
else
echo "$(date) -- Found time.txt file in /alloc/data -- $(cat /alloc/data/time.txt)"
fi
while true
do
echo "$(date) -- Alive... going back to sleep for ${SLEEP_SECS}. ${extras_part}"
sleep ${SLEEP_SECS}
done
================================================
FILE: artifact_sleepyecho/artifact_sleepyecho.nomad
================================================
job "repro" {
datacenters = ["dc1"]
type = "service"
group "group" {
count = 1
# constraint {
# attribute = "${attr.kernel.name}"
# value = "darwin"
# }
task "echo-task" {
driver = "exec"
config {
command = "local/bin/SleepyEcho.sh"
args = ["2"]
}
artifact {
source = "https://angrycub-hc.s3.amazonaws.com/public/SleepyEcho.sh"
destination = "local/bin"
}
}
}
}
================================================
FILE: artifact_sleepyecho/vault_sleepyecho.nomad
================================================
job "repro" {
datacenters = ["dc1"]
type = "service"
group "group" {
count = 1
task "echo-task" {
driver = "exec"
env {
EXTRAS = "${VAULT_TOKEN}"
}
config {
command = "local/bin/SleepyEcho.sh"
args = ["2"]
}
vault {
policies = ["nomad-client"]
change_mode = "signal"
change_signal = "SIGUSR1"
}
artifact {
source = "https://angrycub-hc.s3.amazonaws.com/public/SleepyEcho.sh"
destination = "local/bin"
}
}
}
}
================================================
FILE: batch/batch_gc/example.nomad
================================================
variable "body" {
type = string
default = "Template Rendered"
}
job "example" {
datacenters = ["dc1"]
type = "batch"
group "group" {
task "output" {
driver = "docker"
config {
image = "busybox"
auth_soft_fail = true
command = "cat"
args = ["/local/template.out"]
}
template {
destination = "${NOMAD_TASK_DIR}/template.out"
data = var.body
}
}
}
}
================================================
FILE: batch/dispatch/sleepy.nomad
================================================
job "sleepy" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dispatch/sleepy1.nomad
================================================
job "sleepy1" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dispatch/sleepy10.nomad
================================================
job "sleepy10" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dispatch/sleepy2.nomad
================================================
job "sleepy2" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dispatch/sleepy3.nomad
================================================
job "sleepy3" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dispatch/sleepy4.nomad
================================================
job "sleepy4" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dispatch/sleepy5.nomad
================================================
job "sleepy5" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dispatch/sleepy6.nomad
================================================
job "sleepy6" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dispatch/sleepy7.nomad
================================================
job "sleepy7" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dispatch/sleepy8.nomad
================================================
job "sleepy8" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dispatch/sleepy9.nomad
================================================
job "sleepy9" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=$${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*$${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGINT received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=$${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for $${SLEEP_SECS} seconds."; interruptable_sleep $${SLEEP_SECS}; done
EOH
}
resources {
memory = 100
cpu = 100
}
}
}
}
================================================
FILE: batch/dont_restart_fail/README.md
================================================
# Don't restart on failure
Sometimes you want to craft a job in such a way that it will
not be restarted if it fails. This could be useful for work
that is periodic in nature and will be retried later.
================================================
FILE: batch/dont_restart_fail/example.nomad
================================================
job "example" {
datacenters = ["dc1"]
type = "batch"
group "nodes" {
reschedule {
attempts = 0
unlimited = false
}
restart {
attempts = 0
mode = "fail"
}
task "payload" {
driver = "exec"
config {
command = "/bin/bash"
args = ["-c", "echo \"Sleeping 5 seconds\"; sleep 5; echo \"Exiting with exit code 1\"; exit 1"]
}
}
}
}
================================================
FILE: batch/lost_batch/README.md
================================================
# Lost batch job
These jobs test the behavior of a lost client with a batch job and the
`prohibit_overlap` setting in the `periodic` stanza.
================================================
FILE: batch/lost_batch/batch.nomad
================================================
job "example" {
datacenters = ["dc1"]
type = "batch"
group "sleepers" {
restart {
mode = "fail"
attempts = 0
}
reschedule {
attempts = 0
unlimited = false
}
task "wait" {
driver = "raw_exec"
config {
command = "bash"
args = [
"-c",
"echo Starting; sleep=300; echo Sleeping $sleep seconds.; sleep $sleep; echo Done; exit 0"
]
}
}
}
}
================================================
FILE: batch/lost_batch/periodic.nomad
================================================
job "example" {
datacenters = ["dc1"]
type = "batch"
periodic {
cron = "*/1 * * * * *"
prohibit_overlap = true
}
group "sleepers" {
task "wait" {
driver = "raw_exec"
config {
command = "bash"
args = [
"-c",
"echo Starting; sleep=`shuf -i30-200 -n1`; echo Sleeping $sleep seconds.; sleep $sleep; echo Done; exit 0"
]
}
}
}
}
================================================
FILE: batch/lots_of_batches/README.md
================================================
# Lots of batches
This exists to create a noisy history of jobs in the Nomad state.
One possible use is to test Nomad UI behaviors with a crufty state.
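The `{{placeholder}}` tokens in `payload.nomad.template` can be filled with a
small `sed` wrapper. A minimal sketch; the function name, argument order, and
the example values are illustrative, not part of this repo:

```shell
# Render a payload template from stdin to stdout, substituting the
# {{placeholder}} tokens. Values containing "|" would break the sed
# expressions; this is a quick sketch, not a robust templater.
render_payload() {
  # args: jobname groupname taskname command args cpu memory
  sed -e "s|{{jobname}}|\"$1\"|g" \
      -e "s|{{groupname}}|\"$2\"|g" \
      -e "s|{{taskname}}|\"$3\"|g" \
      -e "s|{{command}}|\"$4\"|g" \
      -e "s|{{args}}|$5|g" \
      -e "s|{{cpu}}|$6|g" \
      -e "s|{{memory}}|$7|g"
}
```

Usage: `render_payload job-0001 group task bash '"-c", "exit 0"' 100 64 < payload.nomad.template > job-0001.nomad`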
================================================
FILE: batch/lots_of_batches/payload.nomad.template
================================================
job {{jobname}} {
group {{groupname}} {
task {{taskname}} {
driver = "raw_exec" # you could use exec, but that will be so much slower...
config {
command = {{command}}
args = [{{args}}]
}
resources {
cpu = {{cpu}}
memory = {{memory}}
}
}
}
}
================================================
FILE: batch/periodic/prohibit-overlap.nomad
================================================
job "prohibit-overlap.nomad" {
datacenters = ["dc1"]
type = "batch"
periodic {
cron = "* * * * *"
prohibit_overlap = true
}
group "group" {
task "payload" {
driver = "exec"
config {
command = "bash"
args = [ "-c","echo \"Sleeping 5 minutes...\"; sleep 300" ]
}
}
}
}
================================================
FILE: batch/periodic/template.nomad
================================================
job "template" {
datacenters = ["dc1"]
type = "batch"
periodic {
cron = "* * * * *"
}
group "group" {
network {
port "export" {}
port "exstat" {
static = 8080
}
}
task "command" {
driver = "exec"
config {
command = "bash"
args = ["-c", "cat local/template.out"]
}
template {
destination = "local/template.out"
data = <<EOH
node.unique.id: {{ env "node.unique.id" }}
node.datacenter: {{ env "node.datacenter" }}
node.unique.name: {{ env "node.unique.name" }}
node.class: {{ env "node.class" }}
attr.cpu.arch: {{ env "attr.cpu.arch" }}
attr.cpu.numcores: {{ env "attr.cpu.numcores" }}
attr.cpu.totalcompute: {{ env "attr.cpu.totalcompute" }}
attr.consul.datacenter: {{ env "attr.consul.datacenter" }}
attr.unique.hostname: {{ env "attr.unique.hostname" }}
attr.unique.network.ip-address: {{ env "attr.unique.network.ip-address" }}
attr.kernel.name: {{ env "attr.kernel.name" }}
attr.kernel.version: {{ env "attr.kernel.version" }}
attr.platform.aws.ami-id: {{ env "attr.platform.aws.ami-id" }}
attr.platform.aws.instance-type: {{ env "attr.platform.aws.instance-type" }}
attr.os.name: {{ env "attr.os.name" }}
attr.os.version: {{ env "attr.os.version" }}
NOMAD_ALLOC_DIR: {{env "NOMAD_ALLOC_DIR"}}
NOMAD_TASK_DIR: {{env "NOMAD_TASK_DIR"}}
NOMAD_SECRETS_DIR: {{env "NOMAD_SECRETS_DIR"}}
NOMAD_MEMORY_LIMIT: {{env "NOMAD_MEMORY_LIMIT"}}
NOMAD_CPU_LIMIT: {{env "NOMAD_CPU_LIMIT"}}
NOMAD_ALLOC_ID: {{env "NOMAD_ALLOC_ID"}}
NOMAD_ALLOC_NAME: {{env "NOMAD_ALLOC_NAME"}}
NOMAD_ALLOC_INDEX: {{env "NOMAD_ALLOC_INDEX"}}
NOMAD_TASK_NAME: {{env "NOMAD_TASK_NAME"}}
NOMAD_GROUP_NAME: {{env "NOMAD_GROUP_NAME"}}
NOMAD_JOB_NAME: {{env "NOMAD_JOB_NAME"}}
NOMAD_DC: {{env "NOMAD_DC"}}
NOMAD_REGION: {{env "NOMAD_REGION"}}
VAULT_TOKEN: {{env "VAULT_TOKEN"}}
GOMAXPROCS: {{env "GOMAXPROCS"}}
HOME: {{env "HOME"}}
LANG: {{env "LANG"}}
LOGNAME: {{env "LOGNAME"}}
NOMAD_ADDR_export: {{env "NOMAD_ADDR_export"}}
NOMAD_ADDR_exstat: {{env "NOMAD_ADDR_exstat"}}
NOMAD_ALLOC_DIR: {{env "NOMAD_ALLOC_DIR"}}
NOMAD_ALLOC_ID: {{env "NOMAD_ALLOC_ID"}}
NOMAD_ALLOC_INDEX: {{env "NOMAD_ALLOC_INDEX"}}
NOMAD_ALLOC_NAME: {{env "NOMAD_ALLOC_NAME"}}
NOMAD_CPU_LIMIT: {{env "NOMAD_CPU_LIMIT"}}
NOMAD_DC: {{env "NOMAD_DC"}}
NOMAD_GROUP_NAME: {{env "NOMAD_GROUP_NAME"}}
NOMAD_HOST_PORT_export: {{env "NOMAD_HOST_PORT_export"}}
NOMAD_HOST_PORT_exstat: {{env "NOMAD_HOST_PORT_exstat"}}
NOMAD_IP_export: {{env "NOMAD_IP_export"}}
NOMAD_IP_exstat: {{env "NOMAD_IP_exstat"}}
NOMAD_JOB_NAME: {{env "NOMAD_JOB_NAME"}}
NOMAD_MEMORY_LIMIT: {{env "NOMAD_MEMORY_LIMIT"}}
NOMAD_PORT_export: {{env "NOMAD_PORT_export"}}
NOMAD_PORT_exstat: {{env "NOMAD_PORT_exstat"}}
NOMAD_REGION: {{env "NOMAD_REGION"}}
NOMAD_SECRETS_DIR: {{env "NOMAD_SECRETS_DIR"}}
NOMAD_TASK_DIR: {{env "NOMAD_TASK_DIR"}}
NOMAD_TASK_NAME: {{env "NOMAD_TASK_NAME"}}
PATH: {{env "PATH"}}
PWD: {{env "PWD"}}
SHELL: {{env "SHELL"}}
SHLVL: {{env "SHLVL"}}
USER: {{env "USER"}}
VAULT_TOKEN: {{env "VAULT_TOKEN"}}
concat key: service/fabio/{{ env "NOMAD_JOB_NAME" }}/listeners
key: {{ keyOrDefault ( printf "service/fabio/%s/listeners" ( env "NOMAD_JOB_NAME" ) ) ":9999" }}
{{ define "custom" }}service/fabio/{{env "NOMAD_JOB_NAME" }}/listeners{{ end }}
key: {{ keyOrDefault (executeTemplate "custom") ":9999" }}
math - alloc_id + 1: {{env "NOMAD_ALLOC_INDEX" | parseInt | add 1}}
EOH
}
}
}
}
================================================
FILE: batch/spread_batch/example.nomad
================================================
job "example" {
datacenters = ["dc1"]
type = "batch"
meta {
"version" = "2"
}
group "nodes" {
count = 6
constraint {
distinct_hosts = true
}
task "payload" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["-c", "echo $(date) > /tmp/payload.txt"]
}
}
}
}
================================================
FILE: batch/spread_batch/example2.nomad
================================================
job "example" {
datacenters = ["dc1"]
type = "batch"
meta {
"version" = "2"
}
group "nodes" {
count = 6
constraint {
distinct_hosts = true
}
task "payload" {
driver = "exec"
config {
command = "/bin/bash"
args = ["-c", "echo $VAULT_ADDR > test.txt"]
}
}
}
}
================================================
FILE: batch_overload/example.nomad
================================================
job "example" {
datacenters = ["dc1"]
type = "batch"
group "sleepers" {
count = 2000
task "wait" {
driver = "raw_exec"
config {
command = "bash"
args = [
"-c",
"echo Starting; sleep=`shuf -i5-10 -n1`; echo Sleeping $sleep seconds.; sleep $sleep; echo Done; exit 0"
]
}
resources {
# This will cause us to have to create blocking allocs.
memory = 200
}
}
}
}
================================================
FILE: batch_overload/periodic.nomad
================================================
job "example" {
datacenters = ["dc1"]
type = "batch"
periodic {
cron = "*/15 * * * * *"
prohibit_overlap = true
}
group "sleepers" {
count = 5
task "wait" {
driver = "raw_exec"
config {
command = "bash"
args = [
"-c",
"echo Starting; sleep=`shuf -i5-10 -n1`; echo Sleeping $sleep seconds.; sleep $sleep; echo Done; exit 0"
]
}
resources {
# This will cause us to have to create blocking allocs.
memory = 200
}
}
}
}
================================================
FILE: blocked_eval/README.md
================================================
# Blocked jobs
This job can be used to experiment with scheduling behavior while a job waits
for a client that is able to serve the request. The wait is simulated using a
constraint on a client metadata value.
The job will remain blocked until a client joins with `meta.waituntil = "charlie"`.
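To unblock the job, add the metadata to a Nomad client. A minimal client
configuration sketch (merge the `meta` block into your existing client stanza
and restart the client; newer Nomad versions can also set this dynamically
with `nomad node meta apply`):

```hcl
client {
  enabled = true
  meta {
    waituntil = "charlie"
  }
}
```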
================================================
FILE: blocked_eval/example.nomad
================================================
job "example" {
datacenters = ["dc1"]
constraint {
attribute = "${meta.waituntil}"
operator = "="
value = "charlie"
}
group "cache" {
network {
port "db" {
to = 6379
}
}
task "redis" {
driver = "docker"
config {
image = "redis:7"
ports = ["db"]
auth_soft_fail = true
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: check.sh
================================================
#!/bin/bash
printError () {
echo -n "- Checking ${CUR_FILE} ... "
icon="🔴"
if [ ${NO_ICON:-unset} != "unset" ]; then
icon="[ERROR]"
fi
echo ${icon}
if [ "${DEBUG:-unset}" != "unset" ]; then
echo "Command output:"
echo ""
echo "${1}" | awk '/^$/{next} {print $0}'
echo ""
fi
output "${CUR_FILE}" "${icon}" "$(echo "${1}" | awk '/^$/{next} {print $0}')"
continue
}
printWarning () {
echo -n "- Checking ${CUR_FILE} ... "
icon="🟡"
if [ ${NO_ICON:-unset} != "unset" ]; then
icon="[WARN]"
fi
echo ${icon}
if [ "${DEBUG:-unset}" != "unset" ]; then
echo "Job Warning output:"
echo ""
echo "${1}" | awk '/Job Warnings:/{flag=1} /Job Modify Index:/{flag=0} /^$/{next} flag'
echo ""
fi
output "${CUR_FILE}" "${icon}" "$(echo "${1}" | awk '/Job Warnings:/{flag=1} /Job Modify Index:/{flag=0} /^$/{next} flag')"
continue
}
printSuccess () {
if [ ${NO_SUCCESS:-unset} != "unset" ]; then
continue
fi
echo -n "- Checking ${CUR_FILE} ... "
icon="✅"
if [ ${NO_ICON:-unset} != "unset" ]; then
icon="[SUCCESS]"
fi
echo ${icon}
output "${CUR_FILE}" "${icon}" ""
continue
}
output() {
file="${1}"
status="${2}"
output="${3}"
asHTML "${file}" "${status}" "${output}"
}
setupOutput() {
startHTML
}
finishOutput() {
endHTML
}
startHTML() {
cat <<HERE > output.html
<html><head><title>Nomad Job Tester Output</title>
<style>
body {
font-family: Helvetica, sans-serif;
}
.out {
white-space: pre-wrap;
}
</style>
<link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.12.1/css/jquery.dataTables.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.1/jquery.min.js"></script>
<script src="https://cdn.datatables.net/1.12.1/js/jquery.dataTables.js"></script>
</head>
<body>
<table border="1" width="100%" id="results">
<thead><tr><th></th><th>Filename</th><th>Output</th></tr></thead>
<tbody>
HERE
}
asHTML() {
file="${1}"
status="${2}"
output="${3}"
maybeOut=""
if [ "${output}" != "" ]; then
maybeOut="<details><summary>Show Output</summary><pre class=out><code>${output}</code></pre></details>"
fi
echo "<tr><td style=\"width: 2em;\" align=\"center\">${status}</td><td width=\"25%\">${file}</td><td>${maybeOut}</td></tr>" >> output.html
}
endHTML() {
cat <<HERE >> output.html
</tbody>
</table>
<script>
\$(document).ready( function () {
\$('#results').DataTable({
paging: false
});
} );
</script>
HERE
}
## Main begins here
setupOutput
files=$(find -s ${1:-.} -name "*.nomad") # -s (sorted output) is BSD/macOS find; on GNU find, pipe through sort instead
for file in ${files}; do
CUR_FILE=${file}
out=$(nomad plan ${CUR_FILE} 2>&1)
ec=$?
if [ "${ec}" == "255" ]; then
printError "${out}"
fi
if [ "${ec}" == "1" ]; then
dep=$(echo "${out}" | grep -c "Job Warnings:")
if [ "$dep" != 0 ]; then
printWarning "${out}"
fi
fi
printSuccess
done
finishOutput
================================================
FILE: cni/README.md
================================================
# Nomad CNI examples
This folder contains Nomad job specifications and configuration files that show
how Nomad can use [Container Network Interface (CNI)](https://cni.dev) plugins
and network configurations for running workloads.
## Examples
- [`diy_bridge`](diy_bridge) - Create your own bridge network similar to the one Nomad makes
for `network_mode = "bridge"` jobs.
================================================
FILE: cni/diy_brige/README.md
================================================
# DIY CNI bridge network
## About
This example uses a CNI configuration based on Nomad's internal CNI template
used to implement the `network_mode = "bridge"` behavior.
## Requirements
This demonstration requires a Linux Nomad client.
## Running
### Validate CNI plugins are installed
Generally you will install the CNI plugins as part of setting up a Nomad client,
so this step may already be complete. However, for development clients that
aren't using Nomad's `bridge` network mode, these might not have been installed.
Nomad clients look for CNI plugins in the path given by the client's `cni_path`
setting, `/opt/cni/bin` by default. Check your client configuration to see if
this value has been overridden.
Verify that all of the following binaries are present in one of the folders
listed in your `cni_path`:
- `bridge`
- `firewall`
- `host-local`
- `loopback`
- `portmap`
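This presence check can be scripted. A minimal sketch; the function name is
illustrative, and `portmap` is included because the example
`diybridge.conflist` uses it:

```shell
# Report each required CNI plugin binary under the given directory
# (defaults to Nomad's default cni_path) and return the number missing.
check_cni_plugins() {
  local dir="${1:-/opt/cni/bin}"
  local missing=0
  for plugin in bridge firewall host-local loopback portmap; do
    if [ -x "${dir}/${plugin}" ]; then
      echo "found: ${plugin}"
    else
      echo "MISSING: ${plugin}"
      missing=$((missing + 1))
    fi
  done
  return "${missing}"
}
```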
================================================
FILE: cni/diy_brige/diybridge.conflist
================================================
{
"cniVersion": "0.4.0",
"name": "diybridge",
"plugins": [
{
"type": "loopback"
},
{
"type": "bridge",
"bridge": "diybridge",
"ipMasq": true,
"isGateway": true,
"forceAddress": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"ranges": [
[
{
"subnet": "192.168.1.0/24"
}
]
],
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
},
{
"type": "firewall",
"backend": "iptables",
"iptablesAdminChainName": "DIY-BRIDGE"
},
{
"type": "portmap",
"capabilities": {"portMappings": true},
"snat": true
}
]
}
================================================
FILE: cni/diy_brige/example.nomad
================================================
variable "dcs" {
description = "Datacenters to run job in."
type = list(string)
default = ["dc1"]
}
job "example" {
datacenters = var.dcs
group "test" {
network {
mode = "cni/diybridge"
}
task "alpine" {
driver = "docker"
config {
image = "busybox:latest"
command = "sleep"
args = ["infinity"]
}
}
}
}
================================================
FILE: cni/diy_brige/repro.nomad
================================================
variable "dcs" {
type = list(string)
default = ["dc1"]
description = "Nomad datacenters in which to run the job."
}
job "example" {
datacenters = var.dcs
group "g1" {
network {
mode = "bridge"
port "foo" {
to = 1337
}
}
task "nc-alpine" {
driver = "docker"
config {
image = "alpine"
args = ["nc", "-lk", "-p", "${NOMAD_PORT_foo}", "-e", "cat"]
}
resources {
cpu = 100
memory = 64
}
}
}
}
================================================
FILE: cni/example.nomad
================================================
job "example" {
datacenters = ["dc1"]
group "test" {
network {
mode = "cni/mynet3"
}
task "alpine" {
driver = "docker"
config {
image = "alpine:latest"
command = "sh"
args = ["-c", "while true; do sleep 300; done"]
}
}
}
}
================================================
FILE: complex_meta/template_env.nomad
================================================
job "template" {
datacenters = ["dc1"]
type = "batch"
group "group" {
task "meta-output" {
driver = "raw_exec"
config {
command = "bash"
args = ["-c", "echo $RULES | jq ."]
}
template {
destination = "secrets/rules.env"
env = true
data = <<EOH
{{- define "RULES" -}}
[
{
"cloudwatch":{
"asg_cpu_usage_upper_bound": {
"backend":"test-backend",
"dimension_name":"AutoScalingGroupName",
"metric_namespace": "AWS/EC2",
"metric_name": "CPUUtilization"
}
},
"enabled": true
},
{
"rule2":{
"foos":[
{"foo1": "bar"},
{"foo2": "bar2"}
],
"enabled": true
}
}
]
{{- end }}
RULES={{ executeTemplate "RULES" | toJSON }}
EOH
}
}
}
}
================================================
FILE: complex_meta/template_meta.nomad
================================================
job "template" {
datacenters = ["dc1"]
type = "batch"
group "group" {
network {
port "export" {}
port "exstat" {
static = 8080
}
}
meta {
"rules" = <<EOH
[
{
"cloudwatch":{
"asg_cpu_usage_upper_bound": {
"backend":"test-backend",
"dimension_name":"AutoScalingGroupName",
"metric_namespace": "AWS/EC2",
"metric_name": "CPUUtilization"
}
},
"enabled": true
},
{
"rule2":{
"foos":[
{"foo1": "bar"},
{"foo2": "bar2"}
],
"enabled": true
}
}
]
EOH
}
task "env-output" {
driver = "raw_exec"
config {
command = "env"
}
resources {
memory = 10
}
}
task "meta-output" {
driver = "raw_exec"
config {
command = "bash"
args = [ "-c", "echo $RULES" ]
}
template {
destination = "secrets/rules.env"
env = true
data = <<EOH
RULES="{{ "charlie" | toJSON }}"
EOH
}
resources {
memory = 10
}
}
task "date-output" {
resources {
memory = 10
network {
port "sample" {}
}
}
driver = "raw_exec"
config { command = "date" }
}
task "template" {
driver = "raw_exec"
config {
command = "bash"
args = ["-c", "cat local/template.out"]
}
template {
destination = "local/template.out"
data = <<EOH
node.unique.id: {{ env "node.unique.id" }}
node.datacenter: {{ env "node.datacenter" }}
node.unique.name: {{ env "node.unique.name" }}
node.class: {{ env "node.class" }}
attr.cpu.arch: {{ env "attr.cpu.arch" }}
attr.cpu.numcores: {{ env "attr.cpu.numcores" }}
attr.cpu.totalcompute: {{ env "attr.cpu.totalcompute" }}
attr.consul.datacenter: {{ env "attr.consul.datacenter" }}
attr.unique.hostname: {{ env "attr.unique.hostname" }}
attr.unique.network.ip-address: {{ env "attr.unique.network.ip-address" }}
attr.kernel.name: {{ env "attr.kernel.name" }}
attr.kernel.version: {{ env "attr.kernel.version" }}
attr.platform.aws.ami-id: {{ env "attr.platform.aws.ami-id" }}
attr.platform.aws.instance-type: {{ env "attr.platform.aws.instance-type" }}
attr.os.name: {{ env "attr.os.name" }}
attr.os.version: {{ env "attr.os.version" }}
NOMAD_ALLOC_DIR: {{env "NOMAD_ALLOC_DIR"}}
NOMAD_TASK_DIR: {{env "NOMAD_TASK_DIR"}}
NOMAD_SECRETS_DIR: {{env "NOMAD_SECRETS_DIR"}}
NOMAD_MEMORY_LIMIT: {{env "NOMAD_MEMORY_LIMIT"}}
NOMAD_CPU_LIMIT: {{env "NOMAD_CPU_LIMIT"}}
NOMAD_ALLOC_ID: {{env "NOMAD_ALLOC_ID"}}
NOMAD_ALLOC_NAME: {{env "NOMAD_ALLOC_NAME"}}
NOMAD_ALLOC_INDEX: {{env "NOMAD_ALLOC_INDEX"}}
NOMAD_TASK_NAME: {{env "NOMAD_TASK_NAME"}}
NOMAD_GROUP_NAME: {{env "NOMAD_GROUP_NAME"}}
NOMAD_JOB_NAME: {{env "NOMAD_JOB_NAME"}}
NOMAD_DC: {{env "NOMAD_DC"}}
NOMAD_REGION: {{env "NOMAD_REGION"}}
VAULT_TOKEN: {{env "VAULT_TOKEN"}}
NOMAD_ADDR_export: {{env "NOMAD_ADDR_export"}}
NOMAD_ADDR_exstat: {{env "NOMAD_ADDR_exstat"}}
NOMAD_HOST_PORT_export: {{env "NOMAD_HOST_PORT_export"}}
NOMAD_HOST_PORT_exstat: {{env "NOMAD_HOST_PORT_exstat"}}
NOMAD_IP_export: {{env "NOMAD_IP_export"}}
NOMAD_IP_exstat: {{env "NOMAD_IP_exstat"}}
NOMAD_PORT_export: {{env "NOMAD_PORT_export"}}
NOMAD_PORT_exstat: {{env "NOMAD_PORT_exstat"}}
GOMAXPROCS: {{env "GOMAXPROCS"}}
HOME: {{env "HOME"}}
LANG: {{env "LANG"}}
LOGNAME: {{env "LOGNAME"}}
PATH: {{env "PATH"}}
PWD: {{env "PWD"}}
SHELL: {{env "SHELL"}}
SHLVL: {{env "SHLVL"}}
USER: {{env "USER"}}
Further Consul Template Magic:
Math
math - alloc_id + 1: {{env "NOMAD_ALLOC_INDEX" | parseInt | add 1}}
Composition using inline templates
{{- define "custom" }}NOMAD_ADDR_{{"date-output" | replaceAll "-" "_" }}_sample{{ end }}
{{ executeTemplate "custom" }}: {{ env (executeTemplate "custom") }}
Composition using printf
{{ $envKey := printf "NOMAD_ADDR_%s_%s" ("date-output" | replaceAll "-" "_" ) "sample" }}
{{ $envKey }}: {{ env $envKey }}
EOH
}
resources {
memory = 10
}
}
}
}
================================================
FILE: connect/consul.nomad
================================================
job "connect-consul" {
datacenters = ["dc1"]
type = "batch"
group "connect-consul" {
network {
mode = "bridge"
}
service {
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "consul"
local_bind_port = 8500
}
}
}
}
}
task "env" {
driver = "exec"
env {
CONSUL_HTTP_ADDR = "http://${NOMAD_UPSTREAM_ADDR_consul}"
}
config {
command = "/usr/bin/env"
}
}
}
}
================================================
FILE: connect/discuss/blocky.yaml
================================================
upstream:
default:
- 46.182.19.48
- 80.241.218.68
- tcp-tls:fdns1.dismail.de:853
- https://dns.digitale-gesellschaft.ch/dns-query
blocking:
blackLists:
ads:
- https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
clientGroupsBlock:
default:
- ads
port: 53
httpPort: 4000
================================================
FILE: connect/discuss/job.nomad
================================================
variable "config_data" {
type = string
description = "Path to the blocky configuration file (read with `file()`)"
}
job "blocky" {
datacenters = ["dc1"]
type = "system"
priority = 100
update {
max_parallel = 1
auto_revert = true
}
group "blocky" {
network {
mode = "bridge"
port "dns" {
static = "53"
}
port "api" {
# host_network = "loopback"
to = "4000"
}
}
service {
name = "blocky-dns"
port = "dns"
}
service {
name = "blocky-api"
port = "api"
meta {
metrics_addr = "${NOMAD_ADDR_api}"
}
tags = [
"traefik.enable=true",
]
connect {
sidecar_service {
proxy {
local_service_port = 4000
expose {
path {
path = "/metrics"
protocol = "http"
local_path_port = 4000
listener_port = "api"
}
}
upstreams {
destination_name = "redis"
local_bind_port = 6379
}
}
}
sidecar_task {
resources {
cpu = 50
memory = 20
memory_max = 50
}
}
}
check {
name = "api-health"
port = "api"
type = "http"
path = "/"
interval = "10s"
timeout = "3s"
}
}
task "blocky" {
driver = "docker"
config {
image = "ghcr.io/0xerr0r/blocky"
ports = ["dns", "api"]
mount {
type = "bind"
target = "/app/config.yml"
source = "app/config.yml"
}
}
resources {
cpu = 50
memory = 50
memory_max = 100
}
template {
data = file(var.config_data)
destination = "app/config.yml"
splay = "1m"
}
}
}
}
================================================
FILE: connect/dns-via-mesh/README.md
================================================
# DNS via mesh
This example demonstrates using the Consul service mesh
to connect a workload to the Consul DNS query API.
## Connect Consul DNS API to the mesh
### Deploy Consul service
Create a service on the Consul server node. Create a service
definition with the following information.
```hcl
service {
name = "consul-dns"
id = "consul-dns-1"
port = 8600
connect {
sidecar_service {}
}
}
```
### Start a sidecar for the Consul DNS query API
```
$ consul connect proxy -sidecar-for consul-dns-1
```
## Test the connection
Use a local connect proxy to test whether the
service is accessible via the proxy.
Start a local connect proxy.
```
$ consul connect proxy -service charlie -upstream consul-dns:8600
```
Verify the connection.
================================================
FILE: connect/dns-via-mesh/consul-dns.nomad
================================================
job "testdns" {
datacenters = ["dc1"]
group "ubuntu" {
network {
mode = "bridge"
# dns {
# servers = ["127.0.0.1"]
# }
}
service {
name = "ubuntu"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "consul-dns"
local_bind_port = 8600
}
}
}
}
}
task "ubuntu" {
driver = "docker"
config {
image = "ubuntu"
args = ["bash", "-c","while true; do sleep 300; done"]
}
}
}
}
================================================
FILE: connect/dns-via-mesh/consul-dns2.nomad
================================================
job "testdns2" {
datacenters = ["dc1"]
group "ubuntu" {
network {
mode = "bridge"
dns {
servers = ["127.0.0.1"]
}
}
service {
name = "ubuntu"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "consul-dns"
local_bind_port = 8600
}
}
}
}
}
task "ubuntu" {
driver = "docker"
artifact {
source = "http://10.0.0.236:8000/dnstest"
destination = "local"
}
artifact {
source = "https://github.com/coredns/coredns/releases/download/v1.8.3/coredns_1.8.3_linux_amd64.tgz"
destination = "local"
}
template {
destination = "local/Corefile"
data =<<EOT
. {
forward . dns://8.8.8.8
}
consul {
log
forward . dns://127.0.0.1:8600 {
force_tcp
}
}
EOT
}
config {
image = "ubuntu"
args = ["bash", "-c","/local/coredns -conf /local/Corefile & while true; do sleep 200; done"]
}
}
}
}
================================================
FILE: connect/dns-via-mesh/go-resolv-test/.gitignore
================================================
.DS_Store
out
================================================
FILE: connect/dns-via-mesh/go-resolv-test/build.sh
================================================
#!/bin/bash
echo "Building dnstest binaries..."
echo "- Linux AMD64"
mkdir -p out/linux_amd64/
GOOS=linux GOARCH=amd64 go build -o out/linux_amd64/dnstest main.go
echo "- Darwin AMD64"
mkdir -p out/darwin_amd64/
GOOS=darwin GOARCH=amd64 go build -o out/darwin_amd64/dnstest main.go
echo "- Windows AMD64"
mkdir -p out/windows_amd64/
GOOS=windows GOARCH=amd64 go build -o out/windows_amd64/dnstest.exe main.go
echo "- Linux ARM64"
mkdir -p out/linux_arm64/
GOOS=linux GOARCH=arm64 go build -o out/linux_arm64/dnstest main.go
================================================
FILE: connect/dns-via-mesh/go-resolv-test/main.go
================================================
package main
import (
"context"
"flag"
"fmt"
"net"
"os"
)
func main() {
preferGo := flag.Bool("go", false, "prefer the pure Go resolver")
flag.Parse()
if len(flag.Args()) != 1 {
fmt.Println("command takes one argument: the hostname to resolve.")
os.Exit(1)
}
hostname := flag.Args()[0]
r := net.Resolver{
PreferGo: *preferGo,
}
iprecords, err := r.LookupHost(context.Background(), hostname)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(iprecords) == 0 {
fmt.Println("No records found.")
}
for _, ip := range iprecords {
fmt.Println(ip)
}
}
================================================
FILE: connect/ingress_gateways/ingress_gateway.nomad
================================================
job "ingress-gateway" {
datacenters = ["dc1"]
group "group" {
network {
port "envoy" {}
}
task "ingress-gateway" {
driver = "docker"
config {
image = "voiselle/ingress-gateway:latest"
network_mode = "host"
command = "/bin/sh"
args = ["-c", "while true; do sleep 10; done"]
mounts = [
{
type = "bind"
target = "/etc/consul.d/ig-services/ingress-gateway.hcl"
source = "local/ingress-gateway.hcl"
readonly = true
}
]
}
env = {
"CONSUL_HTTP_ADDR" = "${NOMAD_IP_envoy}:8500"
"CONSUL_HTTP_TOKEN" = "c62d8564-c0c5-8dfe-3e75-005debbd0e40",
"CONSUL_ENVOY_IP" = "${NOMAD_IP_envoy}",
"CONSUL_ENVOY_PORT" = "${NOMAD_PORT_envoy}"
}
template {
destination = "local/ingress-gateway.hcl"
data = <<EOH
Kind = "ingress-gateway"
Name = "ingress-service"
Listeners = [
{
Port = 8080
Protocol = "http"
Services = [
{
Name = "count-dashboard"
}
]
}
]
EOH
}
}
}
}
================================================
FILE: connect/native/cn-demo.nomad
================================================
job "cn-demo" {
datacenters = ["dc1"]
meta {
version = "1"
}
group "generator" {
network {
port "api" {}
}
service {
name = "uuid-api"
port = "${NOMAD_PORT_api}"
connect {
native = true
}
}
task "generate" {
driver = "docker"
config {
image = "hashicorpnomad/uuid-api:v3"
network_mode = "host"
}
env {
BIND = "0.0.0.0"
PORT = "${NOMAD_PORT_api}"
}
}
}
group "frontend" {
network {
port "http" {
static = 25000
}
}
service {
name = "uuid-fe"
port = "25000"
connect {
native = true
}
}
task "frontend" {
driver = "docker"
config {
# image = "hashicorpnomad/uuid-fe:v3"
image = "registry.service.consul:5000/uuid-fe:latest"
network_mode = "host"
}
env {
UPSTREAM = "uuid-api"
BIND = "0.0.0.0"
PORT = "25000"
}
}
}
}
================================================
FILE: connect/nginx_ingress/countdash.nomad
================================================
job "countdash" {
datacenters = ["dc1"]
group "api" {
network {
mode = "bridge"
}
service {
name = "count-api"
port = "9001"
connect {
sidecar_service {}
}
}
task "web" {
driver = "docker"
config {
image = "hashicorpnomad/counter-api:v1"
}
}
}
group "dashboard" {
network {
mode = "bridge"
}
service {
name = "count-dashboard"
port = "9002"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "count-api"
local_bind_port = 8080
}
}
}
}
}
task "dashboard" {
driver = "docker"
env {
COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
}
config {
image = "hashicorpnomad/counter-dashboard:v1"
}
}
}
}
================================================
FILE: connect/nginx_ingress/ingress.nomad
================================================
job "ingress" {
datacenters = ["dc1"]
group "cache" {
network {
port "http" {
to = 8080
}
}
service {
name = "ingress"
tags = []
port = "http"
check {
name = "alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
task "nginx" {
driver = "docker"
config {
image = "nginx:1.19.1-alpine"
ports = ["http"]
mounts = [
{
type = "bind"
target = "/etc/nginx/nginx.conf"
source = "local/nginx-proxy.conf"
readonly = true
}
]
}
template {
destination = "local/nginx-proxy.conf"
data = <<EOH
# daemon off;
master_process off;
pid nginx.pid;
error_log /dev/stdout;
events {}
http {
access_log /dev/stdout;
server {
listen 8080 default_server;
location / {
{{range connect "count-dashboard"}}
proxy_pass https://{{.Address}}:{{.Port}};
{{end}}
# these refer to files written by templates above
proxy_ssl_certificate /secrets/cert.pem;
proxy_ssl_certificate_key /secrets/cert.key;
proxy_ssl_trusted_certificate /secrets/ca.crt;
}
}
}
EOH
}
template {
destination = "secrets/ca.crt"
data = <<EOH
{{ range caRoots}}{{.RootCertPEM}}{{end}}
EOH
}
template {
destination = "secrets/cert.pem"
data = <<EOH
{{ with caLeaf "ingress" }}{{ .CertPEM }}{{ end }}
EOH
}
template {
destination = "secrets/cert.key"
data = <<EOH
{{ with caLeaf "ingress" }}{{ .PrivateKeyPEM }}{{ end }}
EOH
}
}
}
}
================================================
FILE: connect/sidecar/countdash.nomad
================================================
job "countdash" {
datacenters = ["dc1"]
group "api" {
network {
mode = "bridge"
}
service {
name = "count-api"
port = "9001"
connect {
sidecar_service {}
}
}
task "web" {
driver = "docker"
config {
image = "hashicorpnomad/counter-api:v1"
}
}
}
group "dashboard" {
network {
mode = "bridge"
port "http" {
static = 9002
to = 9002
}
}
service {
name = "count-dashboard"
port = "9002"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "count-api"
local_bind_port = 8080
}
}
}
}
}
task "dashboard" {
driver = "docker"
env {
COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
}
config {
image = "hashicorpnomad/counter-dashboard:v1"
}
}
}
}
================================================
FILE: connect/sidecar/countdash2.nomad
================================================
job "countdash" {
datacenters = ["dc1"]
group "api" {
network {
mode = "bridge"
}
service {
name = "count-api"
port = "9001"
connect {
sidecar_service {
proxy {
config {
protocol="http"
}
}
}
}
}
task "web" {
driver = "docker"
config {
image = "hashicorpnomad/counter-api:v1"
}
}
}
group "dashboard" {
network {
mode = "bridge"
port "http" {
static = 9002
to = 9002
}
}
service {
name = "count-dashboard"
port = "9002"
connect {
sidecar_service {
proxy {
config {
protocol = "http"
}
upstreams {
destination_name = "count-api"
local_bind_port = 8080
}
}
}
}
}
task "dashboard" {
driver = "docker"
env {
COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
}
config {
image = "hashicorpnomad/counter-dashboard:v1"
}
}
}
}
================================================
FILE: consul/add_check/README.md
================================================
# Adding a service to a Nomad Job
This example shows a simple Nomad job (`e1.nomad`) which can be run in the
cluster. Running `e2.nomad` will add a Consul check to the job. Adding a check
is a non-destructive operation.

Running `e3.nomad` will cause a destructive change because it adds a job meta
argument which must be dealt with by restarting the workload. This
counterexample helps to illustrate that adding a check is a non-destructive
operation.

================================================
FILE: consul/add_check/e1.nomad
================================================
job "example" {
datacenters = ["dc1"]
group "cache" {
network {
port "db" {
to = 6379
}
}
task "redis" {
driver = "docker"
config {
image = "redis:7"
ports = ["db"]
auth_soft_fail = true
}
}
}
}
================================================
FILE: consul/add_check/e2.nomad
================================================
job "example" {
datacenters = ["dc1"]
group "cache" {
network {
port "db" {
to = 6379
}
}
service {
name = "redis-cache"
tags = ["global", "cache"]
port = "db"
check {
name = "alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
task "redis" {
driver = "docker"
config {
image = "redis:7"
ports = ["db"]
auth_soft_fail = true
}
}
}
}
================================================
FILE: consul/add_check/e3.nomad
================================================
job "example" {
datacenters = ["dc1"]
meta = {
"test" = "rebootparty"
}
group "cache" {
network {
port "db" {
to = 6379
}
}
service {
name = "redis-cache"
tags = ["global", "cache"]
port = "db"
check {
name = "alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
task "redis" {
driver = "docker"
config {
image = "redis:7"
ports = ["db"]
auth_soft_fail = true
}
}
}
}
================================================
FILE: consul/use_consul_for_kv_path/README.md
================================================
## Use Consul for KV Path
This sample uses a Consul KV key to determine the path to other Consul KV
entries, composing the lookup paths with `printf`.
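The same `printf` composition can be sketched outside Nomad with Go's `text/template` and a stubbed `key` function. The map below mirrors the keys from the Set up section; in consul-template the real `key` function queries Consul instead.

```
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// kv stands in for Consul's KV store; keys match the setup commands below.
var kv = map[string]string{
	"template/current":      "config1",
	"template/config1/name": "config1.service.consul",
	"template/config1/ip":   "10.0.1.100",
	"template/config1/port": "7777",
}

const tmpl = `{{- with key "template/current" -}}
Name: {{ key (printf "template/%v/name" .) }}
IP: {{ key (printf "template/%v/ip" .) }}:{{ key (printf "template/%v/port" .) }}
{{- printf "\n" -}}
{{- end -}}`

// render executes the template with a stubbed "key" lookup function.
func render(store map[string]string) (string, error) {
	t, err := template.New("t").Funcs(template.FuncMap{
		"key": func(k string) string { return store[k] },
	}).Parse(tmpl)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := t.Execute(&b, nil); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	out, err := render(kv)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

Changing `template/current` to `config2` reroutes every lookup, which is exactly the behavior the job below demonstrates.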
## Set up
Build a small set of Consul KV keys for the job to use
```
consul kv put template/current "config1"
consul kv put template/config1/name "config1.service.consul"
consul kv put template/config1/ip "10.0.1.100"
consul kv put template/config1/port "7777"
consul kv put template/config2/name "config2.service.consul"
consul kv put template/config2/ip "10.0.2.200"
consul kv put template/config2/port "8888"
```
Run the `template.nomad` job
```
nomad job run template.nomad
```
You will receive scheduling information in the output; note the allocation ID.
```
==> Monitoring evaluation "ba76383e"
Evaluation triggered by job "template"
Allocation "e4d4bcf1" created: node "f7bc1f2d", group "group"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "ba76383e" finished with status "complete"
```
Fetch the output template file using the `nomad alloc fs` command.
```
nomad alloc fs e4d4bcf1 command/local/template.out
```
Observe that the template is built with the `config1` paths.
```
Name: config1.service.consul
IP: 10.0.1.100:7777
```
Update the KV value to `config2`.
```
consul kv put template/current "config2"
```
Consul should indicate success.
```
Success! Data written to: template/current
```
Check the status of the allocation.
```
nomad alloc status e4d4bcf1
```
Observe that your change caused Nomad to restart it.
```
ID = e4d4bcf1-f300-b7e7-2f8a-c252eae04822
Eval ID = ba76383e
Name = template.group[0]
Node ID = f7bc1f2d
Node Name = nomad-client-1.node.consul
Job ID = template
Job Version = 0
Client Status = running
Client Description = Tasks are running
Desired Status = run
Desired Description = <none>
Created = 1m23s ago
Modified = 39s ago
Task "command" is "running"
Task Resources
CPU Memory Disk Addresses
0/100 MHz 112 KiB/300 MiB 300 MiB
Task Events:
Started At = 2021-06-07T17:32:22Z
Finished At = N/A
Total Restarts = 1
Last Restart = 2021-06-07T13:32:22-04:00
Recent Events:
Time Type Description
2021-06-07T13:32:22-04:00 Started Task started by client
2021-06-07T13:32:22-04:00 Driver Downloading image
2021-06-07T13:32:22-04:00 Restarting Task restarting in 0s
2021-06-07T13:32:22-04:00 Terminated Exit Code: 137, Exit Message: "Docker container exited with non-zero exit code: 137"
2021-06-07T13:32:16-04:00 Restart Signaled Template with change_mode restart re-rendered
2021-06-07T13:31:40-04:00 Started Task started by client
2021-06-07T13:31:39-04:00 Driver Downloading image
2021-06-07T13:31:39-04:00 Task Setup Building Task Directory
2021-06-07T13:31:39-04:00 Received Task received by client
```
Now, refetch the rendered file with `nomad alloc fs`.
```
nomad alloc fs e4d4bcf1 command/local/template.out
```
Observe that the content now shows the values for the config2 paths.
```
Name: config2.service.consul
IP: 10.0.2.200:8888
```
## Clean up
Remove the running sample job.
```
nomad job stop -purge template
```
Remove the Consul keys.
```
consul kv delete template/current
consul kv delete template/config1/name
consul kv delete template/config1/ip
consul kv delete template/config1/port
consul kv delete template/config2/name
consul kv delete template/config2/ip
consul kv delete template/config2/port
```
================================================
FILE: consul/use_consul_for_kv_path/template.nomad
================================================
job "template" {
datacenters = ["dc1"]
group "group" {
count = 1
task "command" {
template {
data = <<EOH
{{- with key "template/current" -}}
Name: {{ key (printf "template/%v/name" .) }}
IP: {{ key (printf "template/%v/ip" .) }}:{{ key (printf "template/%v/port" .) }}
{{- printf "\n" -}}
{{- end -}}
EOH
destination = "local/template.out"
}
# This is a favorite do-nothing workload.
driver = "docker"
config {
image = "alpine"
command = "sh"
args = ["-c", "while true; do sleep 300; done"]
}
}
}
}
================================================
FILE: consul-template/coordination/README.md
================================================
## Using Consul-Template to fake Task Dependencies
The consul-template library has a blocking behavior in the instances that a key does not yet exist in Consul. This can be ~~abused~~ leveraged to allow for some light coordination between dependent Nomad tasks. This would only work in instances where you were able to write to Consul from your workload once you entered the ready state or had a coordinating task that could perform this work based on some sort of application health check.
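For the dependent task, the pattern is just a `template` stanza that references the coordination key. A minimal sketch (the key name here is hypothetical; the repository's `sample.nomad` composes it from Nomad runtime environment variables instead):

```
template {
  destination = "local/ready.txt"
  # Rendering -- and therefore task start -- blocks until some other
  # process writes this (hypothetical) key to Consul.
  data = <<EOH
dependency became ready at: {{ key "service/myapp/ready" }}
EOH
}
```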
================================================
FILE: consul-template/coordination/sample.nomad
================================================
job "sleepy" {
datacenters = ["dc1"]
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
{{ $consulKey := printf "nomad/jobs/%s/%s/first_task.sh/running" (env "NOMAD_JOB_NAME") (env "NOMAD_ALLOC_ID") }}{{ $consulKey }}
#{{ key $consulKey }}
SLEEP_SECS=${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*${1}))); do sleep .5; done; }
sigint() { echo "$(date) - SIGTERM received; Ending."; exit 0; }
trap 'sigint' INT
echo "$(date) - Starting. SLEEP_SECS=${SLEEP_SECS}"
while true; do echo "$(date) - Sleeping for ${SLEEP_SECS} seconds."; interruptable_sleep ${SLEEP_SECS}; done
EOH
}
resources {
memory = 10
cpu = 100
}
}
task "first_task.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/first_task.sh"
}
artifact {
source = "https://releases.hashicorp.com/consul/1.2.1/consul_1.2.1_linux_amd64.zip"
}
template {
destination = "local/first_task.sh"
data = <<EOH
#!/bin/bash
SLEEP_SECS=${SLEEP_SECS:-2} # provide default of 2 seconds
interruptable_sleep() { for i in $(seq 1 $((2*${1}))); do sleep .5; done ;}
sigint() { echo "$(date) - SIGTERM received; Ending."; exit 0;}
trap 'sigint' INT
echo "$(date) - Starting. Sleeping 10 seconds to simulate startup time or something"
sleep 10
chmod +x ${NOMAD_TASK_DIR}/consul
export CONSUL_HTTP_ADDR="http://127.0.0.1:8500"
# If your cluster is ACL enabled, you will need to add it here.
#export CONSUL_HTTP_TOKEN="3ef34421-1b20-e543-65d4-54067560d377"
{{ $consulKey := printf "nomad/jobs/%s/%s/%s/running" (env "NOMAD_JOB_NAME") (env "NOMAD_ALLOC_ID") (env "NOMAD_TASK_NAME") }}
echo "Running: ${NOMAD_TASK_DIR}/consul kv put \"{{ $consulKey }}\" \"$(date)\""
${NOMAD_TASK_DIR}/consul kv put "{{ $consulKey }}" "$(date)"
while true; do echo "$(date) - Sleeping for ${SLEEP_SECS} seconds."; interruptable_sleep ${SLEEP_SECS}; done
EOH
}
resources {
memory = 10
cpu = 100
}
}
}
}
================================================
FILE: consul-template/missing_vault_value/sample.nomad
================================================
job "sleepy" {
datacenters = ["dc1"]
type = "system"
group "group" {
task "sleepy.sh" {
driver = "exec"
config {
command = "${NOMAD_TASK_DIR}/sleepy.sh"
}
restart {
attempts = 3
delay = "30s"
mode = "delay"
}
template {
destination = "local/sleepy.sh"
data = <<EOH
#!/bin/bash
{{ $consulKey := printf "nomad/jobs/%s/%s/first_task.sh/running" (env "NOMAD_JOB_NAME") (env "NOMAD_ALLOC_ID") }}{{ $consulKey }}
#{{ secret $consulKey }}
while true; do echo "$(date) - Sleeping for ${SLEEP_SECS} seconds."; sleep ${SLEEP_SECS}; done
EOH
}
resources {
memory = 10
cpu = 100
}
vault {
policies = ["default"]
}
}
}
}
================================================
FILE: consul-template/my_first_kv/README.md
================================================
[template]:https://www.nomadproject.io/docs/job-specification/template.html#environment-variables
## My First KV
This job will fetch a single value from Consul and pass it as an environment
variable into the Redis Docker container from the sample job. The job file
itself is a cut-down version of the output of `nomad init -short`, with
unnecessary whitespace removed.
One important note: in order to use the consul-template library for creating
dynamic environment variables, you must use the [template] stanza with
`env = true`. This allows you to create the key/value environment variable as a
file and then read it into the environment. The Nomad `secrets` directory is
commonly used as a destination for these rendered files.
You can create the necessary Consul KV value with the following command:
```
$ consul kv put my-first-kv/testData MyAwesomeValue
Success! Data written to: my-first-kv/testData
```
When you are done, or to experiment with a missing value, delete the key with:
```
$ consul kv delete my-first-kv/testData
Success! Deleted key: my-first-kv/testData
```
================================================
FILE: consul-template/my_first_kv/example.nomad
================================================
job "example" {
datacenters = ["dc1"]
group "cache" {
network {
port "db" {}
}
task "redis" {
driver = "docker"
config {
image = "redis:7"
ports = ["db"]
}
template {
destination = "secrets/file.env"
env = true
data = <<EOH
CONSUL_test="{{key "my-first-kv/testData"}}"
EOH
}
}
}
}
================================================
FILE: countdash/connect/countdash.nomad
================================================
job "countdash" {
datacenters = ["dc1"]
group "api" {
network {
mode = "bridge"
}
service {
name = "count-api"
port = "9001"
connect {
sidecar_service {}
}
}
task "web" {
driver = "docker"
config {
image = "hashicorpdev/counter-api:v3"
}
}
}
group "dashboard" {
network {
mode = "bridge"
port "http" {
static = 9002
to = 9002
}
}
service {
name = "count-dashboard"
port = "http"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "count-api"
local_bind_port = 8080
}
}
}
}
}
task "dashboard" {
driver = "docker"
env {
COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
}
config {
image = "hashicorpdev/counter-dashboard:v3"
}
}
}
}
================================================
FILE: countdash/simple/countdash.nomad
================================================
job "countdash" {
datacenters = ["dc1"]
group "api" {
network {
port "dashboard" {
static = 9002
}
port "count_api" {
static = 9001
}
}
task "web" {
driver = "docker"
config {
image = "hashicorpnomad/counter-api:v1"
ports = ["count_api"]
}
}
task "dashboard" {
driver = "docker"
env {
COUNTING_SERVICE_URL = "http://127.0.0.1:9001"
}
config {
image = "hashicorpnomad/counter-dashboard:v1"
ports = ["dashboard"]
}
}
}
}
================================================
FILE: csi/aws/ebs/README.md
================================================
## Nomad sample job using AWS EBS CSI plugin.
More information can be found at https://learn.hashicorp.com/nomad.
================================================
FILE: csi/aws/ebs/busybox.nomad
================================================
job "mysql-busybox" {
datacenters = ["dc1"]
type = "service"
group "mysql" {
count = 1
volume "mysql" {
type = "csi"
read_only = false
source = "mysql"
}
task "busybox" {
driver = "docker"
volume_mount {
volume = "mysql"
destination = "/srv"
read_only = false
}
config {
image = "busybox:latest"
command = "sh"
args = ["-c","while true; do echo '.'; sleep 5; done"]
}
resources {
cpu = 100
memory = 128
}
}
}
}
================================================
FILE: csi/aws/ebs/mysql-server.nomad
================================================
job "mysql-server" {
datacenters = ["dc1"]
type = "service"
group "mysql-server" {
count = 1
volume "mysql" {
type = "csi"
read_only = false
source = "mysql"
}
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
task "mysql-server" {
driver = "docker"
volume_mount {
volume = "mysql"
destination = "/srv"
read_only = false
}
env = {
"MYSQL_ROOT_PASSWORD" = "password"
}
config {
image = "hashicorp/mysql-portworx-demo:latest"
args = ["--datadir", "/srv/mysql"]
port_map {
db = 3306
}
}
resources {
cpu = 500
memory = 512
network {
port "db" {
static = 3306
}
}
}
service {
name = "mysql-server"
port = "db"
check {
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
}
}
}
================================================
FILE: csi/aws/ebs/plugin-ebs-controller.nomad
================================================
job "plugin-aws-ebs-controller" {
datacenters = ["dc1"]
group "controller" {
task "plugin" {
driver = "docker"
config {
image = "amazon/aws-ebs-csi-driver:latest"
args = [
"controller",
"--endpoint=unix://csi/csi.sock",
"--logtostderr",
"--v=5",
]
}
csi_plugin {
id = "aws-ebs0"
type = "controller"
mount_dir = "/csi"
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: csi/aws/ebs/plugin-ebs-nodes.nomad
================================================
job "plugin-aws-ebs-nodes" {
datacenters = ["dc1"]
# you can run node plugins as service jobs as well, but this ensures
# that all nodes in the DC have a copy.
type = "system"
group "nodes" {
task "plugin" {
driver = "docker"
config {
image = "amazon/aws-ebs-csi-driver:latest"
args = [
"node",
"--endpoint=unix://csi/csi.sock",
"--logtostderr",
"--v=5",
]
# node plugins must run as privileged jobs because they
# mount disks to the host
privileged = true
}
csi_plugin {
id = "aws-ebs0"
type = "node"
mount_dir = "/csi"
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: csi/aws/ebs/volume.hcl
================================================
# volume registration
type = "csi"
id = "mysql"
name = "mysql"
external_id = "vol-098a37a17a40dfa0f"
access_mode = "single-node-writer"
attachment_mode = "file-system"
plugin_id = "aws-ebs0"
================================================
FILE: csi/aws/efs/README.md
================================================
## Demonstration of the AWS EFS CSI Plugin on Nomad
The plugin can be found at https://github.com/kubernetes-sigs/aws-efs-csi-driver.
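A typical order of operations for the files in this folder is sketched below. It assumes a cluster whose clients allow privileged Docker containers, and that volume.hcl has been edited to point at a real EFS file system:

```shell-session
$ nomad job run node.nomad
$ nomad plugin status aws-efs       # wait until the plugin reports Healthy
$ nomad volume register volume.hcl
$ nomad job run busybox.nomad
```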
================================================
FILE: csi/aws/efs/busybox.nomad
================================================
job "efs-busybox" {
datacenters = ["dc1"]
type = "service"
group "group" {
count = 1
volume "jobVolume" {
type = "csi"
read_only = false
source = "csiVolume"
}
task "busybox" {
driver = "docker"
volume_mount {
volume = "jobVolume"
destination = "/srv"
read_only = false
}
config {
image = "busybox:latest"
command = "sh"
args = ["-c","while true; do echo '.'; sleep 5; done"]
}
resources {
cpu = 100
memory = 128
}
}
}
}
================================================
FILE: csi/aws/efs/node.nomad
================================================
job "plugin-aws-efs-nodes" {
datacenters = ["dc1"]
type = "system"
group "nodes" {
task "plugin" {
driver = "docker"
config {
image = "amazon/aws-efs-csi-driver:latest"
args = [
"--endpoint=unix:///csi/csi.sock",
"--logtostderr",
"--v=5",
]
# node plugins must run as privileged jobs because they
# mount disks to the host
privileged = true
}
csi_plugin {
id = "aws-efs"
type = "monolith"
mount_dir = "/csi"
}
resources {
cpu = 200
memory = 128
}
}
}
}
================================================
FILE: csi/aws/efs/volume.hcl
================================================
# volume registration
type = "csi"
id = "csiVolume"
name = "efs"
external_id = "vol-0c6d464d9c5def899"
access_mode = "single-node-writer"
attachment_mode = "file-system"
plugin_id = "aws-efs"
================================================
FILE: csi/gcp/gce-pd/README.md
================================================
## Nomad Example using GCP Persistent Disk CSI Plugin
Source Repo: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver
### Create a persistent disk
Nomad does not handle disk creation and expects this to be done by an operator.
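The disk can be created with the gcloud CLI; a minimal sketch, where the disk name, size, and zone are placeholder values you should change:

```shell
$ gcloud compute disks create nomad-demo-disk --size=10GB --zone=us-central1-a
```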
### Edit the disk.hcl file
Once the disk is created, edit the disk.hcl file and replace the placeholder
(`«selfLink for the disk from the 'Equivalent REST' output»`) with the disk's
selfLink, shown in the "Equivalent REST" output on the GCP Disks page.
### Run an agent to test
You are now ready to run the node job, register the volume, and run the workload.
You can use a dev agent to test; however, you will need to pass in additional
configuration to allow the Docker driver to run privileged containers. This is
a requirement to allow the CSI plugin containers to mount and unmount storage.
There is a config.nomad that has the necessary configuration. Start an agent by
running:
```shell
$ nomad agent -dev -config=config.nomad
```
For full clusters, verify that your clients have the appropriate permissions
configured for the docker plugin. Once properly configured, you will be able to
run the node.nomad file, wait for the plugins to become healthy, register the
volume, and then run the job.nomad file.
### Use nomad alloc exec to check the mount
You can connect to the container by running `nomad alloc exec` against the
allocation of the workload. For example:
```shell
$ nomad alloc exec ac345h /bin/sh
```
This will give you a shell prompt inside of the container. If you list the `/srv`
directory, you should see a lost+found directory. This indicates that you are at
the base of an ext filesystem and shows that your block device was mounted into
your container there.
```shell
# ls /srv
. .. lost+found
```
================================================
FILE: csi/gcp/gce-pd/config.nomad
================================================
plugin "docker" {
config {
allow_privileged = true
}
}
================================================
FILE: csi/gcp/gce-pd/controller.nomad
================================================
job "controller" {
datacenters = ["dc1"]
group "controller" {
task "plugin" {
driver = "docker"
template {
data = <<EOH
{{ key "service_account" }}
EOH
destination = "secrets/creds.json"
}
env {
"GOOGLE_APPLICATION_CREDENTIALS" = "/secrets/creds.json"
}
config {
image = "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.7.0-gke.0"
args = [
"--endpoint=unix:///csi/csi.sock",
"--v=6",
"--logtostderr",
"--run-node-service=false"
]
}
csi_plugin {
id = "gcepd"
type = "controller"
mount_dir = "/csi"
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: csi/gcp/gce-pd/cv-nomad.hcl
================================================
# volume registration
type = "csi"
id = "myVolume"
name = "cv-nomad"
external_id = "projects/cv-nomad-gcp-csi/zones/us-central1-a/disks/cv-disk-1"
access_mode = "single-node-writer"
attachment_mode = "file-system"
plugin_id = "gcepd"
================================================
FILE: csi/gcp/gce-pd/disk.hcl
================================================
# volume registration
type = "csi"
id = "VolumeID"
name = "VolumeName"
external_id = "«selfLink for the disk from the 'Equivalent REST' output»"
access_mode = "single-node-writer"
attachment_mode = "file-system"
plugin_id = "gcepd"
================================================
FILE: csi/gcp/gce-pd/job.nomad
================================================
job "alpine" {
datacenters = ["dc1"]
group "alloc" {
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
volume "jobVolume" {
type = "csi"
read_only = false
source = "myVolume"
}
task "docker" {
driver = "docker"
volume_mount {
volume = "jobVolume"
destination = "/srv"
read_only = false
}
config {
image = "alpine"
command = "sh"
args = ["-c","while true; do sleep 10; done"]
}
}
}
}
================================================
FILE: csi/gcp/gce-pd/nodes.nomad
================================================
job "nodes" {
datacenters = ["dc1"]
type = "system"
group "nodes" {
task "plugin" {
driver = "docker"
template {
data = <<EOH
{{ key "service_account" }}
EOH
destination = "secrets/creds.json"
}
env {
GOOGLE_APPLICATION_CREDENTIALS = "/secrets/creds.json"
}
config {
image = "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.7.0-gke.0"
args = [
"--endpoint=unix:///csi/csi.sock",
"--v=6",
"--logtostderr",
"--run-controller-service=false"
]
privileged = true
}
csi_plugin {
id = "gcepd"
type = "node"
mount_dir = "/csi"
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: csi/hetzner/volume/README.md
================================================
## Nomad Example using Hetzner Cloud Volume CSI Plugin
Source Repo: https://github.com/hetznercloud/csi-driver
### Create a volume
Nomad does not handle volume creation and expects this to be done by an operator.
### Edit the volume.hcl file
Once the volume is created, edit the volume.hcl file and replace the placeholder
(`«volume id as listed in the Hetzner UI Volumes page»`) with the volume ID
found in the Hetzner interface.
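A filled-in registration might look like the following sketch; the volume ID `11223344` is a hypothetical value:

```hcl
# volume registration (hypothetical volume ID)
type            = "csi"
id              = "VolumeID"
name            = "VolumeName"
external_id     = "11223344"
access_mode     = "single-node-writer"
attachment_mode = "file-system"
plugin_id       = "csi.hetzner.cloud"
```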
### Run an agent to test
You are now ready to run the node job, register the volume, and run the workload.
You can use a dev agent to test; however, you will need to pass in additional
configuration to allow the Docker driver to run privileged containers. This is
a requirement to allow the CSI plugin containers to mount and unmount storage.
There is a config.nomad that has the necessary configuration. Start an agent by
running:
```shell
$ nomad agent -dev -config=config.nomad
```
For full clusters, verify that your clients have the appropriate permissions
configured for the docker plugin. Once properly configured, you will be able to
run the node.nomad file, wait for the plugins to become healthy, register the
volume, and then run the job.nomad file.
### Use nomad alloc exec to check the mount
You can connect to the container by running `nomad alloc exec` against the
allocation of the workload. For example:
```shell
$ nomad alloc exec ac345h /bin/sh
```
This will give you a shell prompt inside of the container. If you list the `/srv`
directory, you should see a lost+found directory. This indicates that you are at
the base of an ext filesystem and shows that your block device was mounted into
your container there.
```shell
# ls /srv
. .. lost+found
```
================================================
FILE: csi/hetzner/volume/config.nomad
================================================
plugin "docker" {
config {
allow_privileged = true
}
}
================================================
FILE: csi/hetzner/volume/job.nomad
================================================
job "alpine" {
datacenters = ["dc1"]
group "alloc" {
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
volume "jobVolume" {
type = "csi"
read_only = false
source = "myVolume"
}
task "docker" {
driver = "docker"
volume_mount {
volume = "jobVolume"
destination = "/srv"
read_only = false
}
config {
image = "alpine"
command = "sh"
args = ["-c","while true; do sleep 10; done"]
}
}
}
}
================================================
FILE: csi/hetzner/volume/node.nomad
================================================
job "node" {
datacenters = ["dc1"]
type = "system"
group "node" {
task "plugin" {
driver = "docker"
config {
image = "hetznercloud/hcloud-csi-driver:1.2.3"
privileged = true
}
env {
CSI_ENDPOINT = "unix:///csi/csi.sock"
HCLOUD_TOKEN = "«your token»"
}
csi_plugin {
id = "csi.hetzner.cloud"
type = "monolith"
mount_dir = "/csi"
}
}
}
}
================================================
FILE: csi/hetzner/volume/volume.hcl
================================================
# volume registration
type = "csi"
id = "VolumeID"
name = "VolumeName"
external_id = "«volume id as listed in the Hetzner UI Volumes page»"
access_mode = "single-node-writer"
attachment_mode = "file-system"
plugin_id = "csi.hetzner.cloud"
================================================
FILE: csi/hostpath/block/README.md
================================================
### Nomad CSI Demo using the CSI hostpath plugin
Prerequisites:
- https://github.com/rexray/gocsi/tree/master/csc
- https://quay.io/repository/k8scsi/hostpathplugin?tag=v1.2.0
- Nomad 0.11 or later
This script creates a volume.hcl file and registers the volume:
```
#!/bin/bash
# create the volume in the "external provider"
PLUGIN_ID=hostpath-plugin0
VOLUME_NAME=test-volume0
# non-dev mode
# CSI_ENDPOINT="/var/nomad/client/csi/monolith/$PLUGIN_ID/csi.sock"
# dev mode path is going to be in a tempdir
PLUGIN_DOCKER_ID=$(docker ps | grep hostpath | awk -F' +' '{print $1}')
CSI_ENDPOINT=$(docker inspect $PLUGIN_DOCKER_ID | jq -r '.[0].Mounts[] | select(.Destination == "/csi") | .Source')/csi.sock
echo "creating volume..."
UUID=$(sudo csc --endpoint $CSI_ENDPOINT controller create-volume $VOLUME_NAME --cap 1,2,ext4 | grep -o '".*"' | tr -d '"')
echo "registering volume $UUID..."
printf 'id = "%s"
name = "%s"
type = "csi"
external_id = "%s"
plugin_id = "%s"
access_mode = "single-node-writer"
attachment_mode = "file-system"
' "$VOLUME_NAME" "$VOLUME_NAME" "$UUID" "$PLUGIN_ID" > volume.hcl
nomad volume register volume.hcl
echo "querying volume $VOLUME_NAME..."
nomad volume status $VOLUME_NAME
```
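When generating volume.hcl from a script, note that an unquoted `echo $(printf …)` collapses printf's newlines into spaces, flattening the file onto one line; writing it with `printf` directly preserves the line breaks. A minimal, self-contained sketch with hypothetical values:

```shell
# generate a registration file from shell variables (hypothetical values)
PLUGIN_ID=hostpath-plugin0
VOLUME_NAME=test-volume0
UUID=3b6c2e1a-0000-0000-0000-000000000000
printf 'id = "%s"\nname = "%s"\ntype = "csi"\nexternal_id = "%s"\nplugin_id = "%s"\n' \
  "$VOLUME_NAME" "$VOLUME_NAME" "$UUID" "$PLUGIN_ID" > volume.hcl
# the file now has one attribute per line, as HCL requires
wc -l < volume.hcl   # prints 5
```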
================================================
FILE: csi/hostpath/block/csi-hostpath-driver.nomad
================================================
job "csi-hostpath" {
datacenters = ["dc1"]
type = "system"
group "nodes" {
task "plugin" {
driver = "docker"
config {
image = "k8s.gcr.io/sig-storage/hostpathplugin:v1.9.0"
args = [
"--v=5",
"--drivername=csi-hostpath",
"--endpoint=unix://csi/csi.sock",
"--nodeid=${attr.unique.hostname}",
]
privileged = true
}
csi_plugin {
id = "csi_hostpath"
type = "monolith"
mount_dir = "/csi"
health_timeout = "30s"
}
resources {
cpu = 250
memory = 128
}
}
}
}
================================================
FILE: csi/hostpath/block/job.nomad
================================================
job "alpine" {
datacenters = ["dc1"]
group "alloc" {
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
volume "jobVolume" {
type = "csi"
read_only = false
source = "test-volume0"
}
task "docker" {
driver = "docker"
volume_mount {
volume = "jobVolume"
destination = "/srv"
read_only = false
}
config {
image = "alpine"
command = "sleep"
args = ["infinity"]
}
}
}
}
================================================
FILE: csi/hostpath/block/test.sh
================================================
#!/bin/bash
# create the volume in the "external provider"
# usage: ./test.sh <plugin_id> <volume_name>
PLUGIN_ID=$1
VOLUME_NAME=$2
# non-dev mode
# CSI_ENDPOINT="/var/nomad/client/csi/monolith/$PLUGIN_ID/csi.sock"
# dev mode path is going to be in a tempdir
PLUGIN_DOCKER_ID=$(docker ps | grep hostpath | awk -F' +' '{print $1}')
CSI_ENDPOINT=$(docker inspect $PLUGIN_DOCKER_ID | jq -r '.[0].Mounts[] | select(.Destination == "/csi") | .Source')/csi.sock
echo "creating volume..."
UUID=$(sudo csc --endpoint $CSI_ENDPOINT controller create-volume $VOLUME_NAME --cap 1,2,ext4 | grep -o '".*"' | tr -d '"')
echo "registering volume $UUID..."
printf 'id = "%s"
name = "%s"
type = "csi"
external_id = "%s"
plugin_id = "%s"
access_mode = "single-node-writer"
attachment_mode = "file-system"
' "$VOLUME_NAME" "$VOLUME_NAME" "$UUID" "$PLUGIN_ID" > volume.hcl
nomad volume register volume.hcl
echo "querying volume $VOLUME_NAME..."
nomad volume status $VOLUME_NAME
================================================
FILE: csi/hostpath/file/README.md
================================================
### Nomad CSI Demo using the CSI hostpath plugin
Prerequisites:
- https://github.com/rexray/gocsi/tree/master/csc
- https://quay.io/repository/k8scsi/hostpathplugin?tag=v1.2.0
- Nomad 0.11 or later
This script creates a volume.hcl file and registers the volume:
```
#!/bin/bash
# create the volume in the "external provider"
PLUGIN_ID=hostpath-plugin0
VOLUME_NAME=test-volume0
# non-dev mode
# CSI_ENDPOINT="/var/nomad/client/csi/monolith/$PLUGIN_ID/csi.sock"
# dev mode path is going to be in a tempdir
PLUGIN_DOCKER_ID=$(docker ps | grep hostpath | awk -F' +' '{print $1}')
CSI_ENDPOINT=$(docker inspect $PLUGIN_DOCKER_ID | jq -r '.[0].Mounts[] | select(.Destination == "/csi") | .Source')/csi.sock
echo "creating volume..."
UUID=$(sudo csc --endpoint $CSI_ENDPOINT controller create-volume $VOLUME_NAME --cap 1,2,ext4 | grep -o '".*"' | tr -d '"')
echo "registering volume $UUID..."
printf 'id = "%s"
name = "%s"
type = "csi"
external_id = "%s"
plugin_id = "%s"
access_mode = "single-node-writer"
attachment_mode = "file-system"
' "$VOLUME_NAME" "$VOLUME_NAME" "$UUID" "$PLUGIN_ID" > volume.hcl
nomad volume register volume.hcl
echo "querying volume $VOLUME_NAME..."
nomad volume status $VOLUME_NAME
```
================================================
FILE: csi/hostpath/file/csi-hostpath-driver.nomad
================================================
job "csi-hostpath-driver" {
datacenters = ["dc1"]
group "csi" {
task "driver" {
driver = "docker"
config {
image = "quay.io/k8scsi/hostpathplugin:v1.2.0"
args = [
"--drivername=csi-hostpath",
"--v=5",
"--endpoint=unix://csi/csi.sock",
"--nodeid=foo",
]
// all known CSI plugins require privileged = true because
// they need to add mountpoints. In the ACLs design we may
// make csi_plugin implicitly add the appropriate privileges.
privileged = true
}
csi_plugin {
id = "csi-hostpath"
type = "monolith"
mount_dir = "/csi"
}
}
}
}
================================================
FILE: csi/hostpath/file/job.nomad
================================================
job "alpine" {
datacenters = ["dc1"]
group "alloc" {
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
volume "jobVolume" {
type = "csi"
read_only = false
source = "test-volume0"
}
task "docker" {
driver = "docker"
volume_mount {
volume = "jobVolume"
destination = "/srv"
read_only = false
}
config {
image = "alpine"
command = "sh"
args = ["-c","while true; do sleep 10; done"]
}
}
}
}
================================================
FILE: csi/hostpath/file/test.sh
================================================
#!/bin/bash
# create the volume in the "external provider"
# usage: ./test.sh <plugin_id> <volume_name>
PLUGIN_ID=$1
VOLUME_NAME=$2
# non-dev mode
# CSI_ENDPOINT="/var/nomad/client/csi/monolith/$PLUGIN_ID/csi.sock"
# dev mode path is going to be in a tempdir
PLUGIN_DOCKER_ID=$(docker ps | grep hostpath | awk -F' +' '{print $1}')
CSI_ENDPOINT=$(docker inspect $PLUGIN_DOCKER_ID | jq -r '.[0].Mounts[] | select(.Destination == "/csi") | .Source')/csi.sock
echo "creating volume..."
UUID=$(sudo csc --endpoint $CSI_ENDPOINT controller create-volume $VOLUME_NAME --cap 1,2,ext4 | grep -o '".*"' | tr -d '"')
echo "registering volume $UUID..."
printf 'id = "%s"
name = "%s"
type = "csi"
external_id = "%s"
plugin_id = "%s"
access_mode = "single-node-writer"
attachment_mode = "file-system"
' "$VOLUME_NAME" "$VOLUME_NAME" "$UUID" "$PLUGIN_ID" > volume.hcl
nomad volume register volume.hcl
echo "querying volume $VOLUME_NAME..."
nomad volume status $VOLUME_NAME
================================================
FILE: csi/hostpath/volume.hcl
================================================
id = "ebs_prod_db1"
namespace = "default"
name = "database"
type = "csi"
plugin_id = "plugin_id"
# For 'nomad volume register', provide the external ID from the storage
# provider. This field should be omitted when creating a volume with
# 'nomad volume create'
external_id = "vol-23452345"
# For 'nomad volume create', specify a snapshot ID or volume to clone. You can
# specify only one of these two fields.
snapshot_id = "snap-12345"
# clone_id = "vol-abcdef"
# Optional: for 'nomad volume create', specify a maximum and minimum capacity.
# Registering an existing volume will record but ignore these fields.
capacity_min = "10GiB"
capacity_max = "20GiB"
# Required (at least one): for 'nomad volume create', specify one or more
# capabilities to validate. Registering an existing volume will record but
# ignore these fields.
capability {
access_mode = "single-node-writer"
attachment_mode = "file-system"
}
capability {
access_mode = "single-node-reader"
attachment_mode = "block-device"
}
# Optional: for 'nomad volume create', specify mount options to validate for
# 'attachment_mode = "file-system"'. Registering an existing volume will record
# but ignore these fields.
mount_options {
fs_type = "ext4"
mount_flags = ["ro"]
}
# Optional: specify one or more locations from which the volume must be
# accessible. Refer to the plugin documentation for supported segment values.
topology_request {
preferred {
topology { segments { rack = "R1" } }
}
required {
topology { segments { rack = "R1" } }
topology { segments { rack = "R2", zone = "us-east-1a" } }
}
}
# Optional: provide any secrets specified by the plugin.
secrets {
example_secret = "xyzzy"
}
# Optional: provide a map of keys to string values expected by the plugin.
parameters {
skuname = "Premium_LRS"
}
# Optional: for 'nomad volume register', provide a map of keys to string
# values expected by the plugin. This field will be populated automatically by
# 'nomad volume create'.
context {
endpoint = "http://192.168.1.101:9425"
}
================================================
FILE: deployments/failing_deployment/example.nomad
================================================
job "example" {
datacenters = ["dc1"]
group "cache" {
network {
port "db" {
to = 6379
}
}
service {
name = "redis-cache"
tags = ["global", "cache"]
port = "db"
check {
name = "alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
task "redis" {
driver = "docker"
config {
image = "redis:7"
ports = ["db"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
================================================
FILE: docker/auth_from_template/README.md
================================================
# Auth from Template Example
This job specification demonstrates using the `template` stanza to create
environment variables that Nomad can then use in variable interpolation.
This example uses Consul KV because it needs less configuration to run;
however, once Vault is configured with your cluster, switching to a
Vault-based solution is trivial.
This job pairs with the docker_registry_v2 job from the applications folder,
which has basic authentication enabled. Once you have started that registry,
pull the redis:latest image from Docker Hub and push it into your local
registry.
### Add the values for the job to Consul
```shell-session
$ consul kv put kv/docker/config/user user
$ consul kv put kv/docker/config/pass securepassword
```
With these values in place, the job will start as expected. Stop the job.
### Change the password to an invalid value
```shell-session
$ consul kv put kv/docker/config/pass securepasswordLOL
```
Running the job now will fail since the credential is invalid.
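To make the job runnable again, restore the original password and re-run the job (this assumes the registry from docker_registry_v2 is still running):

```shell-session
$ consul kv put kv/docker/config/pass securepassword
$ nomad job run auth.nomad
```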
================================================
FILE: docker/auth_from_template/auth.nomad
================================================
job "auth" {
type = "service"
datacenters = ["dc1"]
group "docker" {
task "redis" {
driver = "docker"
template {
destination = "secrets/secret.env"
env = true
change_mode = "noop"
data = <<EOH
DOCKER_USER={{ key "kv/docker/config/user" }}
DOCKER_PASS={{ key "kv/docker/config/pass" }}
EOH
}
config {
# Update this value for your private container
# registry
image = "registry.service.consul:5000/redis:latest"
auth {
username = "${DOCKER_USER}"
password = "${DOCKER_PASS}"
}
}
resources {
cpu = 200
memory = 100
}
}
}
}
================================================
FILE: docker/datadog/container_network.nomad
================================================
job "example" {
type = "system"
datacenters = ["dc1"]
group "monitoring" {
task "dd-agent" {
driver = "docker"
env {
HOSTIP = "${attr.unique.network.ip-address}"
STATSD_PORT = "8125"
API_KEY = "23cecf6a16b072151c561fe7e6e3938a"
DD_DOGSTATSD_NON_LOCAL_TRAFFIC = "true"
}
config {
hostname = "${node.unique.na
├── server-variables/
│ ├── README.md
│ ├── build-site.nomad
│ ├── nginx.nomad
│ ├── reset.sh
│ ├── wordpress-db.nomad
│ └── wordpress.nomad
├── sleepy/
│ ├── README.md
│ ├── sleepy_bash/
│ │ └── sleepy.nomad
│ └── sleepy_python/
│ ├── README.md
│ ├── batch_sleepy_python.nomad
│ └── sleepy_python.nomad
├── spread/
│ ├── example.nomad
│ ├── scheduler.json
│ └── scheduler_b.json
├── stress/
│ ├── README.md
│ └── cpu_throttled_time/
│ ├── README.md
│ └── stress.nomad
├── super_big/
│ ├── README.md
│ ├── super_big.nomad
│ └── super_big2.nomad
├── system_jobs/
│ ├── sleepy/
│ │ ├── README.md
│ │ ├── sleepy_bash/
│ │ │ └── sleepy.nomad
│ │ └── sleepy_python/
│ │ ├── README.md
│ │ ├── batch_sleepy_python.nomad
│ │ └── sleepy_python.nomad
│ ├── system_deployment/
│ │ ├── deploy_jdk.nomad
│ │ ├── fabio-system.nomad
│ │ ├── fabio-system.nomad2
│ │ ├── foo-system.nomad
│ │ └── foo-system.nomad2
│ └── system_filter/
│ ├── filtered.nomad
│ └── host_vol.nomad
├── task_deps/
│ ├── consul-lock/
│ │ └── myapp.nomad
│ ├── disk_check/
│ │ ├── README.md
│ │ └── disk.nomad
│ ├── init_artifact/
│ │ ├── README.md
│ │ ├── batch-init-artifact.nomad
│ │ └── service-init-artifact.nomad
│ ├── interjob/
│ │ ├── README.md
│ │ ├── myapp.nomad
│ │ └── myservice.nomad
│ ├── k8sdoc/
│ │ ├── README.md
│ │ ├── init.nomad
│ │ ├── k8sdoc1.nomad
│ │ ├── myapp.nomad
│ │ └── myservice.nomad
│ └── sidecar/
│ └── example.nomad
├── template/
│ ├── batch/
│ │ ├── README.md
│ │ ├── context.nomad
│ │ ├── parameter.nomad
│ │ ├── services.nomad
│ │ └── template.nomad
│ ├── from_consul/
│ │ ├── README.md
│ │ ├── artifact.nomad
│ │ ├── init.nomad
│ │ └── issue.nomad
│ ├── learning/
│ │ └── README.md
│ ├── rerender/
│ │ └── example.nomad
│ ├── secure_variables/
│ │ ├── README.md
│ │ ├── example.nomad
│ │ ├── interpolated_job/
│ │ │ ├── README.md
│ │ │ ├── interpolated_job.hcl
│ │ │ └── makeJobVars.sh
│ │ ├── makeJobVars.sh
│ │ ├── makeVars.sh
│ │ ├── multiregion/
│ │ │ ├── start.sh
│ │ │ ├── stop.sh
│ │ │ ├── template.nomad
│ │ │ ├── test.out
│ │ │ └── test.tmpl
│ │ ├── template copy.tmpl
│ │ ├── template-playground.nomad
│ │ ├── template.html
│ │ ├── template.tmpl
│ │ ├── variable_view.nomad
│ │ └── write/
│ │ ├── t0.out
│ │ ├── t0.tmpl
│ │ ├── t1.out
│ │ ├── t1.tmpl
│ │ ├── t2.out
│ │ └── t2.tmpl
│ ├── services/
│ │ ├── README.md
│ │ └── byTag.nomad
│ ├── template-system/
│ │ ├── README.md
│ │ ├── composed_keys.nomad
│ │ ├── services-on-nomad-client.nomad
│ │ └── template.nomad
│ ├── template_handoff/
│ │ ├── README.md
│ │ ├── handoff.nomad
│ │ └── handoff_restart.nomad
│ ├── template_into_docker/
│ │ └── example.nomad
│ ├── template_playground/
│ │ ├── composed_keys.nomad
│ │ ├── template-exec.nomad
│ │ ├── template-hcl2.nomad
│ │ └── template.nomad
│ └── use_whitespace/
│ └── byTag.nomad
├── test.sh
├── vault/
│ ├── deleted_policy/
│ │ ├── README.md
│ │ ├── break_it.sh
│ │ ├── nomad-cluster-role.broken.json
│ │ ├── nomad-cluster-role.json
│ │ ├── nomad-server-policy.hcl
│ │ ├── setup.sh
│ │ ├── temp1.nomad
│ │ └── workload.nomad
│ ├── pki/
│ │ ├── README.md
│ │ ├── sleepy_bash_pki.nomad
│ │ └── test.nomad
│ └── sleepy_vault_bash/
│ ├── sleepy_bash.nomad
│ └── test.nomad
├── vault_reload_triggered_by_consul/
│ ├── README.md
│ ├── SleepyEcho.sh
│ └── sample.nomad
├── victoriametrics/
│ └── vm.nomad
├── win_rawexec_restart/
│ ├── SleepyEcho.ps1
│ └── artifact_sleepyecho.nomad
└── windows_docker/
├── docker-iis.nomad
└── windows-test.nomad
SYMBOL INDEX (13 symbols across 3 files)
FILE: connect/dns-via-mesh/go-resolv-test/main.go
function main (line 12) | func main() {
FILE: java/jar-test/src/Count.java
class Count (line 34) | public class Count {
method countChars (line 35) | public static void countChars(InputStream in) throws IOException
method main (line 45) | public static void main(String[] args) throws Exception
FILE: parameterized/to_specific_client/workaround/watch.py
function build_url (line 11) | def build_url(alloc_id):
function eprint (line 33) | def eprint(string):
function is_final (line 37) | def is_final(event):
function print_tasks (line 46) | def print_tasks(event):
function handle_event (line 54) | def handle_event(event):
function handle_data (line 65) | def handle_data(response):
function connect (line 76) | def connect(url):
function start (line 85) | def start():
function check_args (line 93) | def check_args():
Condensed preview — 503 files, each showing path, character count, and a content snippet.
[
{
"path": ".envrc",
"chars": 178,
"preview": "echo \"Processing .direnv...\"\nfunction template {\n echo \"Creating a skeleton tutorial in $1.\"\n mkdir -p $1\t\n cp $(pwd)"
},
{
"path": ".gitignore",
"chars": 10,
"preview": ".DS_Store\n"
},
{
"path": "HCL2/add_local_file/README.md",
"chars": 3105,
"preview": "# Include a Local File at Job Runtime\n\nYou can use the HCL2 file function and a runtime variable to include a file in\nyo"
},
{
"path": "HCL2/add_local_file/input.file",
"chars": 281,
"preview": "This is the input file content\n\nParticularly evil stuff:\n\nSingle quotes: 'hello'\nDouble quotes: \"howdy\"\nGo-template: {{ "
},
{
"path": "HCL2/add_local_file/raw_file_b64.nomad",
"chars": 643,
"preview": "variable \"input_file\" {\n type = string\n description = \"local path to the redis configuration to inject into the job.\"\n"
},
{
"path": "HCL2/add_local_file/raw_file_delims.nomad",
"chars": 598,
"preview": "variable \"input_file\" {\n type = string\n description = \"local path to the redis configuration to inject into the job.\"\n"
},
{
"path": "HCL2/add_local_file/raw_file_json.nomad",
"chars": 646,
"preview": "variable \"input_file\" {\n type = string\n description = \"local path to the redis configuration to inject into the job.\"\n"
},
{
"path": "HCL2/add_local_file/use_file.nomad",
"chars": 532,
"preview": "variable \"input_file\" {\n type = string\n description = \"local path to the redis configuration to inject into the job.\"\n"
},
{
"path": "HCL2/always_change/README.md",
"chars": 8428,
"preview": "# Use HCL2 to make re-runnable batch jobs\n\nNomad will refuse to run a batch job again unless it detects a change to the "
},
{
"path": "HCL2/always_change/before.nomad",
"chars": 204,
"preview": "job \"before.nomad\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n group \"before\" {\n task \"hello-world\" {\n driver "
},
{
"path": "HCL2/always_change/uuid.nomad",
"chars": 243,
"preview": "job \"uuid.nomad\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n meta {\n run_uuid = \"${uuidv4()}\"\n }\n\n group \"uuid\" {"
},
{
"path": "HCL2/always_change/variable.nomad",
"chars": 532,
"preview": "job \"variable.nomad\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n meta {\n run_index = \"${floor(var.run_index)}\"\n }\n"
},
{
"path": "HCL2/dynamic/README.md",
"chars": 142,
"preview": "# HCL2 dynamic blocks\n\nThis job specification leverages the `dynamic` HCL2 blocks and HCL2 variables to\ncreate a multi-t"
},
{
"path": "HCL2/dynamic/example.nomad",
"chars": 1172,
"preview": "variable \"job_name\" {\n type = string\n default = \"\"\n}\n\nlocals {\n targets = {\n \"1\": \"zpool\"\n \"2\": \"zmirror\"\n }\n "
},
{
"path": "HCL2/object_to_template/README.md",
"chars": 0,
"preview": ""
},
{
"path": "HCL2/object_to_template/example.nomad",
"chars": 836,
"preview": "variable \"datacenters\" {\n type = list(string)\n default = [\"dc1\"]\n}\n\nvariable \"ports\" {\n type = list(object({\n "
},
{
"path": "HCL2/variable_jobs/README.md",
"chars": 2520,
"preview": "# Using HCL2 to add variables to Nomad jobs\n\nNomad's HCL2 support enables you to use variables in your Nomad job specifi"
},
{
"path": "HCL2/variable_jobs/decode-external-file/README.MD",
"chars": 689,
"preview": "# Decode the contents of an external file into a `local` variable\n\nThe HCL2 `file` function when paired with the `jsonde"
},
{
"path": "HCL2/variable_jobs/decode-external-file/env.json",
"chars": 105,
"preview": "{\n \"datacenters\": [\n \"dc1\"\n ],\n \"docker_image_job1\": \"redis:3\",\n \"docker_image_job2\": \"redis:4\"\n}\n"
},
{
"path": "HCL2/variable_jobs/decode-external-file/job1.nomad",
"chars": 758,
"preview": "#----------------------------------------------------------------------------\n# This value can be supplied as a flag to "
},
{
"path": "HCL2/variable_jobs/decode-external-file/job2.nomad",
"chars": 758,
"preview": "#----------------------------------------------------------------------------\n# This value can be supplied as a flag to "
},
{
"path": "HCL2/variable_jobs/env-vars/README.MD",
"chars": 844,
"preview": "# Provide HCL2 variable values using environment variables\n\nThis example contains two jobs that read HCL2 variable value"
},
{
"path": "HCL2/variable_jobs/env-vars/env.vars",
"chars": 129,
"preview": "export NOMAD_VAR_datacenters='[\"dc1\"]'\nexport NOMAD_VAR_docker_image_job1=\"redis:3\"\nexport NOMAD_VAR_docker_image_job2=\""
},
{
"path": "HCL2/variable_jobs/env-vars/job1.nomad",
"chars": 386,
"preview": "variable \"datacenters\" {\n type = list(string)\n description = \"Path to JSON formatted shared job configuration.\"\n}\n\nvar"
},
{
"path": "HCL2/variable_jobs/env-vars/job2.nomad",
"chars": 386,
"preview": "variable \"datacenters\" {\n type = list(string)\n description = \"Path to JSON formatted shared job configuration.\"\n}\n\nvar"
},
{
"path": "HCL2/variable_jobs/job.nomad",
"chars": 551,
"preview": "variable \"datacenters\" {\n type = list(string)\n description = \"List of Nomad datacenters to run the job in. Defaults to"
},
{
"path": "HCL2/variable_jobs/job.vars",
"chars": 22,
"preview": "image_version = \"99\"\n\n"
},
{
"path": "HCL2/variable_jobs/multiple-var-files/README.MD",
"chars": 953,
"preview": "# Provide HCL2 variable values using environment variables\n\nThis example contains two jobs that consumes multiple HCL2 v"
},
{
"path": "HCL2/variable_jobs/multiple-var-files/job1.nomad",
"chars": 511,
"preview": "variable \"datacenters\" {\n type = list(string)\n description = \"Path to JSON formatted shared job configuration.\"\n}\n\nvar"
},
{
"path": "HCL2/variable_jobs/multiple-var-files/job1.vars",
"chars": 25,
"preview": "image_version_job1 = \"3\"\n"
},
{
"path": "HCL2/variable_jobs/multiple-var-files/job2.nomad",
"chars": 511,
"preview": "variable \"datacenters\" {\n type = list(string)\n description = \"Path to JSON formatted shared job configuration.\"\n}\n\nvar"
},
{
"path": "HCL2/variable_jobs/multiple-var-files/job2.vars",
"chars": 25,
"preview": "image_version_job2 = \"4\"\n"
},
{
"path": "HCL2/variable_jobs/multiple-var-files/job3.nomad",
"chars": 511,
"preview": "variable \"datacenters\" {\n type = list(string)\n description = \"Path to JSON formatted shared job configuration.\"\n}\n\nvar"
},
{
"path": "HCL2/variable_jobs/multiple-var-files/job3.vars",
"chars": 59,
"preview": "docker_image = \"hello-world\"\nimage_version_job3 = \"latest\"\n"
},
{
"path": "HCL2/variable_jobs/multiple-var-files/shared.vars",
"chars": 47,
"preview": "datacenters = [ \"dc1\" ]\ndocker_image = \"redis\"\n"
},
{
"path": "README.md",
"chars": 663,
"preview": "# Nomad Example Jobs\n\nThis repository holds jobs and job skeletons that I have used to create\nreproducers or minimum via"
},
{
"path": "alloc_folder/mount_alloc.nomad",
"chars": 409,
"preview": "job \"alloc_folder\" {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"docker\" {\n driver = \"docker\"\n\n confi"
},
{
"path": "alloc_folder/sidecar.nomad",
"chars": 580,
"preview": "job \"alloc_folder\" {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"docker\" {\n driver = \"docker\"\n\n confi"
},
{
"path": "applications/artifactory_oss/README.md",
"chars": 1259,
"preview": "# Docker Registry\n\nThis job uses Nomad Host Volumes to provide an internal Docker registry which\ncan be used to host pri"
},
{
"path": "applications/artifactory_oss/registry.nomad",
"chars": 869,
"preview": "job \"registry\" {\n datacenters = [\"dc1\"]\n priority = 80\n\n group \"docker\" {\n network {\n port \"registry\" {\n "
},
{
"path": "applications/cluster-broccoli/example.nomad",
"chars": 369,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n group \"cache\" {\n network {\n port \"db\" {\n to = 6379\n }\n "
},
{
"path": "applications/docker_registry/README.md",
"chars": 1259,
"preview": "# Docker Registry\n\nThis job uses Nomad Host Volumes to provide an internal Docker registry which\ncan be used to host pri"
},
{
"path": "applications/docker_registry/registry.nomad",
"chars": 816,
"preview": "job \"registry\" {\n datacenters = [\"dc1\"]\n priority = 80\n\n group \"docker\" {\n network {\n port \"registry\" {\n "
},
{
"path": "applications/docker_registry_v2/README.md",
"chars": 1276,
"preview": "# Docker Registry\n\nThis job uses Nomad Host Volumes to provide an internal Docker registry which\ncan be used to host pri"
},
{
"path": "applications/docker_registry_v2/htpasswd",
"chars": 66,
"preview": "user:$2y$05$kyEyguS/Sisz7SMjqKQZ1eQDCM7pSFiItkL9yiVIDOVyQfj8XTCAS\n"
},
{
"path": "applications/docker_registry_v2/make_password.sh",
"chars": 110,
"preview": "#!/bin/bash\n\ndocker run --rm -it -v $(pwd):/out --entrypoint=\"htpasswd\" xmartlabs/htpasswd -Bbc /out/$1 $2 $3\n"
},
{
"path": "applications/docker_registry_v2/registry.nomad",
"chars": 1138,
"preview": "job \"registry\" {\n datacenters = [\"dc1\"]\n priority = 80\n\n group \"docker\" {\n network {\n port \"registry\" {\n "
},
{
"path": "applications/docker_registry_v3/README.md",
"chars": 1357,
"preview": "# Docker Registry\n\nThis job uses Nomad Host Volumes to provide an internal Docker registry which\ncan be used to host pri"
},
{
"path": "applications/docker_registry_v3/make_password.sh",
"chars": 583,
"preview": "#!/bin/bash\n\ncmd=\"htpasswd -Bbn $1 $2\"\nif ! [ -x \"$(command -v htpasswd)\" ]; then\n if ! [ -x \"$(command -v docker)\" ]; "
},
{
"path": "applications/docker_registry_v3/registry.nomad",
"chars": 1191,
"preview": "job \"registry\" {\n datacenters = [\"dc1\"]\n priority = 80\n\n group \"docker\" {\n network {\n port \"registry\" {\n "
},
{
"path": "applications/mariadb/mariadb.nomad",
"chars": 1158,
"preview": "job \"mariadb\" {\n datacenters = [\"dc1\"]\n type = \"service\"\n group \"bootstrap\" {\n count = 1\n\n network {\n mod"
},
{
"path": "applications/membrane-soa/README.md",
"chars": 1056,
"preview": "Deploying a Java REST to SOAP Proxy in Connect\n\nTechnologies:\n\n- Consul Service Mesh\n- Consul Egress Gateways\n- Nomad Ja"
},
{
"path": "applications/membrane-soa/soap-proxy-v1-linux.nomad",
"chars": 3143,
"preview": "job \"soap-proxy\" {\n datacenters = [\"dc1\"]\n\n group \"membrane\" {\n network {\n port \"admin\" {\n static = 900"
},
{
"path": "applications/membrane-soa/soap-proxy-v1-windows.nomad",
"chars": 3140,
"preview": "job \"soap-proxy\" {\n datacenters = [\"dc1\"]\n\n group \"membrane\" {\n network {\n port \"admin\" {\n static = 900"
},
{
"path": "applications/membrane-soa/soap-proxy.nomad",
"chars": 3516,
"preview": "locals {\n membrane_home = \"/local/membrane-service-proxy-4.7.3\"\n class_path = \"${local.membrane_home}/conf:${local.mem"
},
{
"path": "applications/minio/README.md",
"chars": 854,
"preview": "# Minio S3-compatible Storage\n\nThis job uses Nomad Host Volumes to provide an internal s3 compatible storage\nenvironment"
},
{
"path": "applications/minio/minio.nomad",
"chars": 1159,
"preview": "job \"minio\" {\n datacenters = [\"dc1\"]\n priority = 80\n\n group \"storage\" {\n network {\n port \"api\" {\n t"
},
{
"path": "applications/minio/secure-variables/README.md",
"chars": 881,
"preview": "# Minio S3-compatible Storage\n\nThis job uses Nomad Host Volumes to provide an internal s3 compatible storage\nenvironment"
},
{
"path": "applications/minio/secure-variables/minio-data/.gitkeep",
"chars": 0,
"preview": ""
},
{
"path": "applications/minio/secure-variables/minio.nomad",
"chars": 1114,
"preview": "# minio is an AWS S3-compatible storage engine\n\njob \"minio\" {\n datacenters = [\"dc1\"]\n priority = 80\n\n group \"stora"
},
{
"path": "applications/minio/secure-variables/start.sh",
"chars": 745,
"preview": "#! /usr/bin/env bash\n\nmkdir -p minio-data\nsed \"s|«/absolute/path/to»|$(pwd)|g\" volume.hcl > .volume_patch.hcl\nnohup noma"
},
{
"path": "applications/minio/secure-variables/stop.sh",
"chars": 190,
"preview": "#! /usr/bin/env bash\n\nPID=$(cat .nomad.pid)\necho \"Stopping Nomad (pid: ${PID})\"\nrm -rf .nomad.pid\nrm -rf .nomad.token\nrm"
},
{
"path": "applications/minio/secure-variables/volume.hcl",
"chars": 319,
"preview": "# The host volume configuration for the minio task. The start.sh\n# script will make a derived copy of this file with the"
},
{
"path": "applications/postgres/README.md",
"chars": 1307,
"preview": "# Stateful example of Postgres with Host Volumes\n\n## Configure a supportive host volume\n\nThis job uses a volume named\n`p"
},
{
"path": "applications/postgres/postgres.nomad",
"chars": 895,
"preview": "job \"postgres.nomad\" {\n datacenters = [\"dc1\"]\n\n group \"database\" {\n network {\n port \"db\" {\n to = 5432\n "
},
{
"path": "applications/prometheus/README.md",
"chars": 297,
"preview": "# Prometheus\n\n\nOn the client, you will need a rule to allow the docker containers to talk to the local\nconsul agents.\n\n`"
},
{
"path": "applications/prometheus/fabio-service.nomad",
"chars": 1282,
"preview": "# For ACL-enabled Consul Clusters, you need to specify a Consul ACL token down\n# in the `fabio-linux-amd64` task's env s"
},
{
"path": "applications/prometheus/grafana/README.md",
"chars": 173,
"preview": "Thanks to [Nextty](https://grafana.com/orgs/derekamz) for two great grafana dashboards to start with:\n\n* Nomad Jobs - ht"
},
{
"path": "applications/prometheus/grafana/nomad_jobs.json",
"chars": 10451,
"preview": "{\n \"__inputs\": [\n {\n \"name\": \"DS_PROMETHEUS\",\n \"label\": \"prometheus\",\n \"description\": \"\",\n \"type"
},
{
"path": "applications/prometheus/node-exporter.nomad",
"chars": 1168,
"preview": "# The Prometheus Node Exporter needs access to the proc filesystem which is not\n# mounted into the exec jail, so it requ"
},
{
"path": "applications/prometheus/prometheus.nomad",
"chars": 3577,
"preview": "# For ACL-enabled Consul Clusters, you need to specify a Consul ACL token down\n# in the `prometheus` task's scrape confi"
},
{
"path": "applications/vms/freedos/.gitignore",
"chars": 625,
"preview": "*.img\n\n\n# Created by https://www.toptal.com/developers/gitignore/api/macos\n# Edit at https://www.toptal.com/developers/g"
},
{
"path": "applications/vms/freedos/README.md",
"chars": 277,
"preview": "## FreeDOS VM\n\nThis job fetches a small remote VM image and starts it in your Nomad cluster. It\nalso contains a task tha"
},
{
"path": "applications/vms/freedos/freedos.img.tgz.SHASUM",
"chars": 82,
"preview": "8d2817126bf46ba2b4fca0b0c49eed2cc208c6f6448651e82c6d973fcba36569 freedos.img.tgz\n"
},
{
"path": "applications/vms/freedos/freedos.nomad",
"chars": 1122,
"preview": "job \"freedos\" {\n datacenters = [\"dc1\"]\n\n group \"g1\" {\n network {\n mode = \"bridge\"\n port \"webvnc\" {}\n }"
},
{
"path": "applications/vms/tinycore/README.md",
"chars": 346,
"preview": "# TinyCore QEMU example\n\nThis sample will start a TinyCore Linux VM configured with the SSH daemon\nenabled. It performs "
},
{
"path": "applications/vms/tinycore/tc_ssh.nomad",
"chars": 1392,
"preview": "job \"j1\" {\n datacenters = [\"dc1\"]\n\n group \"g1\" {\n network {\n mode = \"bridge\"\n port \"http\" {\n to = "
},
{
"path": "applications/wordpress/README.md",
"chars": 949,
"preview": "# Wordpress\n\nThis job demonstrates several useful patterns for creating Nomad jobs:\n\n- Nomad Host Volumes for persistent"
},
{
"path": "applications/wordpress/distributed/README.md",
"chars": 1061,
"preview": "# WordPress\n\nThis job demonstrates several useful patterns for creating Nomad jobs:\n\n- Nomad Host Volumes for persistent"
},
{
"path": "applications/wordpress/distributed/build-site.nomad",
"chars": 1676,
"preview": "job \"build-site\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n parameterized {\n meta_required = [\"site_name\"]\n }\n\n "
},
{
"path": "applications/wordpress/distributed/nginx.nomad",
"chars": 1329,
"preview": "job \"nginx\" {\n datacenters = [\"dc1\"]\n type = \"system\"\n\n group \"nginx\" {\n network {\n port \"http\" {\n sta"
},
{
"path": "applications/wordpress/distributed/reset.sh",
"chars": 0,
"preview": ""
},
{
"path": "applications/wordpress/distributed/wordpress-db.nomad",
"chars": 907,
"preview": "job \"wordpress-db\" {\n datacenters = [\"dc1\"]\n\n group \"database\" {\n network {\n port \"db\" {\n to = 3306\n "
},
{
"path": "applications/wordpress/distributed/wordpress.nomad",
"chars": 1977,
"preview": "variable \"site_name\" {\n type = string\n description = \"The site_name is used to set the consul tag for the website. Thi"
},
{
"path": "applications/wordpress/simple/README.md",
"chars": 949,
"preview": "# Wordpress\n\nThis job demonstrates several useful patterns for creating Nomad jobs:\n\n- Nomad Host Volumes for persistent"
},
{
"path": "applications/wordpress/simple/wordpress.nomad",
"chars": 2282,
"preview": "job \"my-website\" {\n datacenters = [\"dc1\"]\n\n group \"database\" {\n network {\n port \"db\" {\n to = 3306\n "
},
{
"path": "artifact_sleepyecho/README.md",
"chars": 447,
"preview": "## artifact_sleepyecho\n\nPurpose:\n\nThis sample was designed to pull a shell script from an AWS S3 bucket and\nrun it local"
},
{
"path": "artifact_sleepyecho/SleepyEcho.sh",
"chars": 624,
"preview": "#! /bin/bash\n\nif [ -z \"$1\" ] \nthen\n SLEEP_SECS=\"2\"\nelse\n SLEEP_SECS=\"$1\"\nfi\n\nif [ -z \"${EXTRAS}\" ]\nthen\n extras_part="
},
{
"path": "artifact_sleepyecho/artifact_sleepyecho.nomad",
"chars": 463,
"preview": "job \"repro\" {\n datacenters = [\"dc1\"]\n type = \"service\"\n group \"group\" {\n count = 1\n\n# constraint {\n# attri"
},
{
"path": "artifact_sleepyecho/vault_sleepyecho.nomad",
"chars": 552,
"preview": "job \"repro\" {\n datacenters = [\"dc1\"]\n type = \"service\"\n group \"group\" {\n count = 1\n\n task \"echo-task\" {\n d"
},
{
"path": "batch/batch_gc/example.nomad",
"chars": 497,
"preview": "variable \"body\" {\n type = string\n default = \"Template Rendered\"\n}\n\njob \"example\" {\n datacenters = [\"dc1\"]\n type "
},
{
"path": "batch/dispatch/sleepy.nomad",
"chars": 748,
"preview": "job sleepy {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dispatch/sleepy1.nomad",
"chars": 749,
"preview": "job sleepy1 {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dispatch/sleepy10.nomad",
"chars": 750,
"preview": "job sleepy10 {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dispatch/sleepy2.nomad",
"chars": 749,
"preview": "job sleepy2 {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dispatch/sleepy3.nomad",
"chars": 749,
"preview": "job sleepy3 {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dispatch/sleepy4.nomad",
"chars": 749,
"preview": "job sleepy4 {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dispatch/sleepy5.nomad",
"chars": 749,
"preview": "job sleepy5 {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dispatch/sleepy6.nomad",
"chars": 749,
"preview": "job sleepy6 {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dispatch/sleepy7.nomad",
"chars": 749,
"preview": "job sleepy7 {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dispatch/sleepy8.nomad",
"chars": 749,
"preview": "job sleepy8 {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dispatch/sleepy9.nomad",
"chars": 749,
"preview": "job sleepy9 {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "batch/dont_restart_fail/README.md",
"chars": 203,
"preview": "# Don't restart on failure\n\nSometimes you want to craft a job in such a way that it will\nnot be restarted if it fails. T"
},
{
"path": "batch/dont_restart_fail/example.nomad",
"chars": 431,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n group \"nodes\" {\n reschedule {\n attempts = 0\n"
},
{
"path": "batch/lost_batch/README.md",
"chars": 142,
"preview": "# Lost batch job\n\nThis is to test the behavior of a lost client with a batch file and the\n`prohibit_overlap` setting in "
},
{
"path": "batch/lost_batch/batch.nomad",
"chars": 469,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n group \"sleepers\" {\n restart {\n mode = \"fa"
},
{
"path": "batch/lost_batch/periodic.nomad",
"chars": 439,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n periodic {\n cron = \"*/1 * * * * *\"\n "
},
{
"path": "batch/lots_of_batches/README.md",
"chars": 153,
"preview": "# Lots of batches\n\nThis exists to create a noisy history of jobs in the Nomad state.\nOne possible use is to test Nomad U"
},
{
"path": "batch/lots_of_batches/payload.nomad.template",
"chars": 311,
"preview": "job {{jobname}} {\n group {{groupname}}\n task {{taskname}}\n driver = \"raw_exec\" # you could use exec, but that w"
},
{
"path": "batch/periodic/prohibit-overlap.nomad",
"chars": 343,
"preview": "job \"prohibit-overlap.nomad\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n periodic {\n cron = \"* * * * *\"\n "
},
{
"path": "batch/periodic/template.nomad",
"chars": 4488,
"preview": "job \"template\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n periodic {\n cron = \"* * * * *\"\n }\n\n group \"gro"
},
{
"path": "batch/spread_batch/example.nomad",
"chars": 357,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n meta {\n \"version\" = \"2\"\n }\n\n group \"nodes\" {\n "
},
{
"path": "batch/spread_batch/example2.nomad",
"chars": 349,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n meta {\n \"version\" = \"2\"\n }\n\n group \"nodes\" {\n "
},
{
"path": "batch_overload/example.nomad",
"chars": 469,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n group \"sleepers\" {\n count = 2000\n task \"wait\" {\n d"
},
{
"path": "batch_overload/periodic.nomad",
"chars": 551,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n periodic {\n cron = \"*/15 * * * * *\"\n prohib"
},
{
"path": "blocked_eval/README.md",
"chars": 278,
"preview": "# Blocked jobs\n\nThis job can be used to experiment with job behaviors when a job is waiting for\na client that is able to"
},
{
"path": "blocked_eval/example.nomad",
"chars": 471,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n constraint {\n attribute = \"${meta.waituntil}\"\n operator = \"=\"\n valu"
},
{
"path": "check.sh",
"chars": 2943,
"preview": "#!/bin/bash\n\nprintError () {\n echo -n \"- Checking ${CUR_FILE} ... \"\n\n icon=\"🔴\"\n if [ ${NO_ICON:-unset} != \"unset\" ]; "
},
{
"path": "cni/README.md",
"chars": 376,
"preview": "# Nomad CNI examples\n\nThis folder contains Nomad job specifications and configuration files that show\nhow Nomad can use "
},
{
"path": "cni/diy_brige/README.md",
"chars": 910,
"preview": "# DIY CNI bridge network\n\n## About\n\nThis example uses a CNI configuration based on Nomad's internal CNI template\nused to"
},
{
"path": "cni/diy_brige/diybridge.conflist",
"chars": 733,
"preview": "{\n \"cniVersion\": \"0.4.0\",\n \"name\": \"diybridge\",\n \"plugins\": [\n {\n \"type\": \"loopback\"\n },\n {\n \"type"
},
{
"path": "cni/diy_brige/example.nomad",
"chars": 382,
"preview": "variable \"dcs\" {\n description = \"Datacenters to run job in.\"\n type = list(string)\n default = [\"dc1\"]\n}\n\njob \"example\""
},
{
"path": "cni/diy_brige/repro.nomad",
"chars": 525,
"preview": "variable \"dcs\" {\n type = list(string)\n default = [\"dc1\"]\n description = \"Nomad datacenters in which to run"
},
{
"path": "cni/example.nomad",
"chars": 328,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n group \"test\" {\n network {\n mode = \"cni/mynet3\"\n }\n\n task \"alpin"
},
{
"path": "complex_meta/template_env.nomad",
"chars": 830,
"preview": "job \"template\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n group \"group\" {\n task \"meta-output\" {\n drive"
},
{
"path": "complex_meta/template_meta.nomad",
"chars": 4866,
"preview": "job \"template\" {\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n group \"group\" {\n network {\n port \"export\" {}"
},
{
"path": "connect/consul.nomad",
"chars": 557,
"preview": "job \"connect-consul\" {\n\n datacenters = [\"dc1\"]\n type = \"batch\"\n\n group \"connect-consul\" {\n network {\n mode = "
},
{
"path": "connect/discuss/blocky.yaml",
"chars": 323,
"preview": "upstream:\n default:\n - 46.182.19.48\n - 80.241.218.68\n - tcp-tls:fdns1.dismail.de:853\n - https://dns.digital"
},
{
"path": "connect/discuss/job.nomad",
"chars": 1952,
"preview": "variable \"config_data\" {\n type = string\n description = \"Plain text config file for blocky\"\n}\n\njob \"blocky\" {\n datacen"
},
{
"path": "connect/dns-via-mesh/README.md",
"chars": 759,
"preview": "README\n\nThis example demonstrates using the Consul service mesh\nto connect a workload to the Consul DNS query API\n\n## Co"
},
{
"path": "connect/dns-via-mesh/consul-dns.nomad",
"chars": 584,
"preview": "job \"testdns\" {\n datacenters = [\"dc1\"]\n\n group \"ubuntu\" {\n network {\n mode = \"bridge\"\n # dns {\n # "
},
{
"path": "connect/dns-via-mesh/consul-dns2.nomad",
"chars": 1081,
"preview": "job \"testdns2\" {\n datacenters = [\"dc1\"]\n\n group \"ubuntu\" {\n network {\n mode = \"bridge\"\n dns {\n ser"
},
{
"path": "connect/dns-via-mesh/go-resolv-test/.gitignore",
"chars": 14,
"preview": ".DS_Store\nout\n"
},
{
"path": "connect/dns-via-mesh/go-resolv-test/build.sh",
"chars": 529,
"preview": "#!/bin/bash\n\necho \"Building dnstest binaries...\"\n\necho \"- Linux AMD64\"\nmkdir -p out/linux_amd64/\nGOOS=linux GOARCH=amd64"
},
{
"path": "connect/dns-via-mesh/go-resolv-test/main.go",
"chars": 647,
"preview": "package main\n\nimport (\n\t\"context\"\n\t\"flag\"\n\t\"fmt\"\n\t\"net\"\n\t\"os\"\n\n)\n\nfunc main() {\n preferGo := flag.Bool(\"go\", false, \""
},
{
"path": "connect/ingress_gateways/ingress_gateway.nomad",
"chars": 1150,
"preview": "job \"ingress-gateway\" {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n network {\n port \"envoy\" {}\n }\n\n task \""
},
{
"path": "connect/native/cn-demo.nomad",
"chars": 1054,
"preview": "job \"cn-demo\" {\n datacenters = [\"dc1\"]\n \n meta {\n version = \"1\"\n }\n\n group \"generator\" {\n network {\n por"
},
{
"path": "connect/nginx_ingress/countdash.nomad",
"chars": 925,
"preview": "job \"countdash\" {\n datacenters = [\"dc1\"]\n\n group \"api\" {\n network {\n mode = \"bridge\"\n }\n\n service {\n "
},
{
"path": "connect/nginx_ingress/ingress.nomad",
"chars": 1684,
"preview": "job \"ingress\" {\n datacenters = [\"dc1\"]\n\n group \"cache\" {\n\n network {\n port \"http\" {\n to = 8080\n }\n"
},
{
"path": "connect/sidecar/countdash.nomad",
"chars": 998,
"preview": "job \"countdash\" {\n datacenters = [\"dc1\"]\n\n group \"api\" {\n network {\n mode = \"bridge\"\n }\n\n service {\n "
},
{
"path": "connect/sidecar/countdash2.nomad",
"chars": 1169,
"preview": "job \"countdash\" {\n datacenters = [\"dc1\"]\n\n group \"api\" {\n network {\n mode = \"bridge\"\n }\n\n service {\n "
},
{
"path": "consul/add_check/README.md",
"chars": 638,
"preview": "# Adding a service to a Nomad Job\n\nThis example shows a simple Nomad job (`e1.nomad`) which can be run in the\ncluster. R"
},
{
"path": "consul/add_check/e1.nomad",
"chars": 294,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n group \"cache\" {\n network {\n port \"db\" {\n to = 6379\n }\n "
},
{
"path": "consul/add_check/e2.nomad",
"chars": 522,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n group \"cache\" {\n network {\n port \"db\" {\n to = 6379\n }\n "
},
{
"path": "consul/add_check/e3.nomad",
"chars": 565,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n meta = {\n \"test\" = \"rebootparty\"\n }\n\n group \"cache\" {\n network {\n "
},
{
"path": "consul/use_consul_for_kv_path/README.md",
"chars": 3673,
"preview": "## Use Consul for KV Path\n\nThis sample will use a Consul KV key to determine a path for other Consul KV\nelements using `"
},
{
"path": "consul/use_consul_for_kv_path/template.nomad",
"chars": 600,
"preview": "job \"template\" {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n count = 1\n\n task \"command\" {\n template {\n "
},
{
"path": "consul-template/coordination/README.md",
"chars": 494,
"preview": "## Using Consul-Template to fake Task Dependencies\n\nThe consul-template library has a blocking behavior in the instances"
},
{
"path": "consul-template/coordination/sample.nomad",
"chars": 2232,
"preview": "job sleepy {\n datacenters = [\"dc1\"]\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \"exec\"\n\n config {\n "
},
{
"path": "consul-template/missing_vault_value/sample.nomad",
"chars": 785,
"preview": "job sleepy {\n datacenters = [\"dc1\"]\n type = \"system\"\n\n group \"group\" {\n task \"sleepy.sh\" {\n driver = \""
},
{
"path": "consul-template/my_first_kv/README.md",
"chars": 1084,
"preview": "[template]:https://www.nomadproject.io/docs/job-specification/template.html#environment-variables\n## My First KV\n\nThis j"
},
{
"path": "consul-template/my_first_kv/example.nomad",
"chars": 401,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n group \"cache\" {\n network {\n port \"db\" {}\n }\n\n task \"redis\" {\n "
},
{
"path": "countdash/connect/countdash.nomad",
"chars": 994,
"preview": "job \"countdash\" {\n datacenters = [\"dc1\"]\n\n group \"api\" {\n network {\n mode = \"bridge\"\n }\n\n service {\n "
},
{
"path": "countdash/simple/countdash.nomad",
"chars": 583,
"preview": "job \"countdash\" {\n datacenters = [\"dc1\"]\n\n group \"api\" {\n network {\n port \"dashboard\" {\n static = 9002\n"
},
{
"path": "csi/aws/ebs/README.md",
"chars": 106,
"preview": "## Nomad sample job using AWS EBS CSI plugin.\n\nMore information can be found at learn.hashicorp.com/nomad\n"
},
{
"path": "csi/aws/ebs/busybox.nomad",
"chars": 596,
"preview": "job \"mysql-busybox\" {\n datacenters = [\"dc1\"]\n type = \"service\"\n\n group \"mysql\" {\n count = 1\n\n volume \"my"
},
{
"path": "csi/aws/ebs/mysql-server.nomad",
"chars": 1084,
"preview": "job \"mysql-server\" {\n datacenters = [\"dc1\"]\n type = \"service\"\n\n group \"mysql-server\" {\n count = 1\n\n volu"
},
{
"path": "csi/aws/ebs/plugin-ebs-controller.nomad",
"chars": 543,
"preview": "job \"plugin-aws-ebs-controller\" {\n datacenters = [\"dc1\"]\n\n group \"controller\" {\n task \"plugin\" {\n driver = \"do"
},
{
"path": "csi/aws/ebs/plugin-ebs-nodes.nomad",
"chars": 779,
"preview": "job \"plugin-aws-ebs-nodes\" {\n datacenters = [\"dc1\"]\n\n # you can run node plugins as service jobs as well, but this ens"
},
{
"path": "csi/aws/ebs/volume.hcl",
"chars": 192,
"preview": "# volume registration\ntype = \"csi\"\nid = \"mysql\"\nname = \"mysql\"\nexternal_id = \"vol-098a37a17a40dfa0f\"\naccess_mode = \"sing"
},
{
"path": "csi/aws/efs/README.md",
"chars": 128,
"preview": "## Demonstration of AWS EFS CSI Plugin on Nomad\n\nPlugin can be found here https://github.com/kubernetes-sigs/aws-efs-csi"
},
{
"path": "csi/aws/efs/busybox.nomad",
"chars": 605,
"preview": "job \"efs-busybox\" {\n datacenters = [\"dc1\"]\n type = \"service\"\n\n group \"group\" {\n count = 1\n\n volume \"jobV"
},
{
"path": "csi/aws/efs/node.nomad",
"chars": 652,
"preview": "job \"plugin-aws-efs-nodes\" {\n datacenters = [\"dc1\"]\n type = \"system\"\n\n group \"nodes\" {\n task \"plugin\" {\n driv"
},
{
"path": "csi/aws/efs/volume.hcl",
"chars": 193,
"preview": "# volume registration\ntype = \"csi\"\nid = \"csiVolume\"\nname = \"efs\"\nexternal_id = \"vol-0c6d464d9c5def899\"\naccess_mode = \"si"
},
{
"path": "csi/gcp/gce-pd/README.md",
"chars": 1748,
"preview": "## Nomad Example using GCP Persistent Disk CSI Plugin\n\nSource Repo: https://github.com/kubernetes-sigs/gcp-compute-persi"
},
{
"path": "csi/gcp/gce-pd/config.nomad",
"chars": 62,
"preview": "plugin \"docker\" {\n config {\n allow_privileged = true\n }\n}"
},
{
"path": "csi/gcp/gce-pd/controller.nomad",
"chars": 777,
"preview": "job \"controller\" {\n datacenters = [\"dc1\"]\n group \"controller\" {\n task \"plugin\" {\n driver = \"docker\"\n temp"
},
{
"path": "csi/gcp/gce-pd/cv-nomad.hcl",
"chars": 234,
"preview": "# volume registration\ntype = \"csi\"\nid = \"myVolume\"\nname = \"cv-nomad\"\nexternal_id = \"projects/cv-nomad-gcp-csi/zones/us-c"
},
{
"path": "csi/gcp/gce-pd/disk.hcl",
"chars": 232,
"preview": "# volume registration\ntype = \"csi\"\nid = \"VolumeID\"\nname = \"VolumeName\"\nexternal_id = \"«selfLink for the disk from the 'E"
},
{
"path": "csi/gcp/gce-pd/job.nomad",
"chars": 580,
"preview": "job \"alpine\" {\n datacenters = [\"dc1\"]\n\n group \"alloc\" {\n restart {\n attempts = 10\n interval = \"5m\"\n "
},
{
"path": "csi/gcp/gce-pd/nodes.nomad",
"chars": 798,
"preview": "job \"nodes\" {\n datacenters = [\"dc1\"]\n type = \"system\"\n group \"nodes\" {\n task \"plugin\" {\n driver = \"docker\"\n "
},
{
"path": "csi/hetzner/volume/README.md",
"chars": 1733,
"preview": "## Nomad Example using Hetzner Cloud Volume CSI Plugin\n\nSource Repo: https://github.com/hetznercloud/csi-driver\n\n### Cre"
},
{
"path": "csi/hetzner/volume/config.nomad",
"chars": 62,
"preview": "plugin \"docker\" {\n config {\n allow_privileged = true\n }\n}"
},
{
"path": "csi/hetzner/volume/job.nomad",
"chars": 580,
"preview": "job \"alpine\" {\n datacenters = [\"dc1\"]\n\n group \"alloc\" {\n restart {\n attempts = 10\n interval = \"5m\"\n "
},
{
"path": "csi/hetzner/volume/node.nomad",
"chars": 464,
"preview": "job \"node\" {\n datacenters = [\"dc1\"]\n type = \"system\"\n\n group \"node\" {\n task \"plugin\" {\n driver = \"docker\"\n\n "
},
{
"path": "csi/hetzner/volume/volume.hcl",
"chars": 238,
"preview": "# volume registration\ntype = \"csi\"\nid = \"VolumeID\"\nname = \"VolumeName\"\nexternal_id = \"«volume id as listed in the Hetzne"
},
{
"path": "csi/hostpath/block/README.md",
"chars": 1179,
"preview": "### Nomad CSI Demo using the CSI hostvolume plugin\n\nPrerequisites\n\n- https://github.com/rexray/gocsi/tree/master/csc\n- h"
},
{
"path": "csi/hostpath/block/csi-hostpath-driver.nomad",
"chars": 712,
"preview": "job \"csi-hostpath\" {\n datacenters = [\"dc1\"]\n type = \"system\"\n\n group \"nodes\" {\n task \"plugin\" {\n drive"
},
{
"path": "csi/hostpath/block/job.nomad",
"chars": 562,
"preview": "job \"alpine\" {\n datacenters = [\"dc1\"]\n\n group \"alloc\" {\n restart {\n attempts = 10\n interval = \"5m\"\n "
},
{
"path": "csi/hostpath/block/test.sh",
"chars": 907,
"preview": "#!/bin/bash\n\n# create the volume in the \"external provider\"\n\nPLUGIN_ID=$1\nVOLUME_NAME=$2\n\n# non-dev mode\n# CSI_ENDPOINT="
},
{
"path": "csi/hostpath/file/README.md",
"chars": 1179,
"preview": "### Nomad CSI Demo using the CSI hostvolume plugin\n\nPrerequisites\n\n- https://github.com/rexray/gocsi/tree/master/csc\n- h"
},
{
"path": "csi/hostpath/file/csi-hostpath-driver.nomad",
"chars": 724,
"preview": "job \"csi-hostpath-driver\" {\n datacenters = [\"dc1\"]\n\n group \"csi\" {\n task \"driver\" {\n driver = \"docker\"\n\n "
},
{
"path": "csi/hostpath/file/job.nomad",
"chars": 585,
"preview": "job \"alpine\" {\n datacenters = [\"dc1\"]\n\n group \"alloc\" {\n restart {\n attempts = 10\n interval = \"5m\"\n "
},
{
"path": "csi/hostpath/file/test.sh",
"chars": 907,
"preview": "#!/bin/bash\n\n# create the volume in the \"external provider\"\n\nPLUGIN_ID=$1\nVOLUME_NAME=$2\n\n# non-dev mode\n# CSI_ENDPOINT="
},
{
"path": "csi/hostpath/volume.hcl",
"chars": 2086,
"preview": "id = \"ebs_prod_db1\"\nnamespace = \"default\"\nname = \"database\"\ntype = \"csi\"\nplugin_id = \"plugin_id\"\n\n# For"
},
{
"path": "deployments/failing_deployment/example.nomad",
"chars": 542,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n group \"cache\" {\n network {\n port \"db\" {\n to = 6379\n }\n "
},
{
"path": "docker/auth_from_template/README.md",
"chars": 1087,
"preview": "# Auth from Template Example\n\nThis job specification demonstrates using the `template` stanza to create\nenvironment vari"
},
{
"path": "docker/auth_from_template/auth.nomad",
"chars": 717,
"preview": "job \"auth\" {\n\n type = \"service\"\n datacenters = [\"dc1\"]\n\n group \"docker\" {\n\n task \"redis\" {\n driver = \""
},
{
"path": "docker/datadog/container_network.nomad",
"chars": 670,
"preview": "job \"example\" {\n type = \"system\"\n datacenters = [\"dc1\"]\n group \"monitoring\" {\n task \"dd-agent\" {\n driver = \"d"
},
{
"path": "docker/datadog/ex3.nomad",
"chars": 705,
"preview": "job \"dd\" {\n type = \"system\"\n datacenters = [\"dc1\"]\n group \"monitoring\" {\n task \"dd-agent\" {\n driver = \"docker"
},
{
"path": "docker/datadog/example2.nomad",
"chars": 902,
"preview": "job \"example\" {\n type = \"system\"\n datacenters = [\"dc1\"]\n group \"monitoring\" {\n task \"dd-agent\" {\n driver = \"d"
},
{
"path": "docker/docker+host_volume/README.md",
"chars": 160,
"preview": "# Docker + Host Volumes\n\nThis is a demonstration of using Nomad Host volumes with Docker mounts to make deep mounts from"
},
{
"path": "docker/docker+host_volume/task_deps.nomad",
"chars": 993,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n group \"cache\" {\n volume \"test\" {\n type = \"host\"\n source "
},
{
"path": "docker/docker+host_volume/unsafe.nomad",
"chars": 769,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n group \"cache\" {\n volume \"test\" {\n type = \"host\"\n source "
},
{
"path": "docker/docker_dynamic_hostname/README.md",
"chars": 1873,
"preview": "# Setting a Docker container's hostname to the Nomad Client name\n\n## Requirements\n\nThis scenario is more interesting whe"
},
{
"path": "docker/docker_dynamic_hostname/finished.nomad",
"chars": 434,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n group \"cache\" {\n count = 3\n network {\n port \"db\" {\n to = 63"
},
{
"path": "docker/docker_dynamic_hostname/res_file",
"chars": 280,
"preview": "Allocation ID\\tNode Name (Nomad)\\tHostname (Docker)\nnomad-client-3.node.consul\\tnomad-client-3.node.consul\\t\nnomad-clien"
},
{
"path": "docker/docker_dynamic_hostname/view.sh",
"chars": 569,
"preview": "#!/usr/bin/env bash\n\nfunction getJobAllocIds {\n nomad alloc status -t '{{range $A := . }}{{if eq \"example\" .JobID}}{{pr"
},
{
"path": "docker/docker_entrypoint/Dockerfile",
"chars": 62,
"preview": "FROM alpine \nENTRYPOINT [\"ping\"] \nCMD [\"www.google.com\"] \n\n"
},
{
"path": "docker/docker_entrypoint/example.nomad",
"chars": 919,
"preview": "job \"example\" {\n datacenters = [\"dc1\"]\n\n update {\n max_parallel = 1\n min_healthy_time = \"10s\"\n healthy_de"
}
]
// ... and 303 more files (download for full content)