Repository: ansible/lightbulb
Branch: master
Commit: 750f58cc0385
Files: 129
Total size: 515.1 KB
Directory structure:
gitextract_ofkpwayz/
├── .gitignore
├── .mdlrc
├── .travis.yml
├── CONTRIBUTING.md
├── Gemfile
├── LICENSE
├── PHILOSOPHY.md
├── README.md
├── Vagrantfile
├── _config.yml
├── _layouts/
│   └── default.html
├── assets/
│   └── css/
│       ├── ansible.css
│       └── style.scss
├── decks/
│   ├── README.md
│   ├── ansible-best-practices.html
│   ├── ansible-essentials.html
│   ├── ansible_basics.html
│   ├── contributing-to-ansible.html
│   ├── css/
│   │   └── theme/
│   │       └── ansible.css
│   ├── intro-to-ansible-tower.html
│   ├── plugin/
│   │   └── notes/
│   │       ├── notes.html
│   │       └── notes.js
│   └── your_first_pb.html
├── examples/
│   ├── README.md
│   ├── apache-basic-playbook/
│   │   ├── README.md
│   │   ├── site.yml
│   │   └── templates/
│   │       ├── httpd.conf.j2
│   │       └── index.html.j2
│   ├── apache-role/
│   │   ├── README.md
│   │   ├── roles/
│   │   │   └── apache-simple/
│   │   │       ├── defaults/
│   │   │       │   └── main.yml
│   │   │       ├── handlers/
│   │   │       │   └── main.yml
│   │   │       ├── tasks/
│   │   │       │   └── main.yml
│   │   │       ├── templates/
│   │   │       │   ├── httpd.conf.j2
│   │   │       │   └── index.html.j2
│   │   │       └── vars/
│   │   │           └── main.yml
│   │   └── site.yml
│   ├── apache-simple-playbook/
│   │   ├── README.md
│   │   ├── files/
│   │   │   └── index.html
│   │   └── site.yml
│   ├── cloud-aws/
│   │   ├── README.md
│   │   ├── provision.yml
│   │   ├── setup.yml
│   │   └── site.yml
│   ├── nginx-basic-playbook/
│   │   ├── README.md
│   │   ├── site.yml
│   │   └── templates/
│   │       ├── index.html.j2
│   │       └── nginx.conf.j2
│   ├── nginx-remove-playbook/
│   │   ├── README.md
│   │   └── site.yml
│   ├── nginx-role/
│   │   ├── README.md
│   │   ├── remove.yml
│   │   ├── roles/
│   │   │   └── nginx-simple/
│   │   │       ├── defaults/
│   │   │       │   └── main.yml
│   │   │       ├── handlers/
│   │   │       │   └── main.yml
│   │   │       ├── tasks/
│   │   │       │   ├── main.yml
│   │   │       │   └── remove.yml
│   │   │       ├── templates/
│   │   │       │   ├── index.html.j2
│   │   │       │   └── nginx.conf.j2
│   │   │       └── vars/
│   │   │           └── main.yml
│   │   └── site.yml
│   └── nginx-simple-playbook/
│       ├── README.md
│       ├── files/
│       │   └── index.html
│       └── site.yml
├── facilitator/
│   ├── README.md
│   └── solutions/
│       ├── adhoc_commands.md
│       ├── ansible_install.md
│       ├── basic_playbook.md
│       ├── roles.md
│       ├── simple_playbook.md
│       ├── tower_basic_setup.md
│       └── tower_install.md
├── guides/
│   ├── README.md
│   ├── ansible_engine/
│   │   ├── 1-adhoc/
│   │   │   └── README.md
│   │   ├── 2-playbook/
│   │   │   └── README.md
│   │   ├── 3-variables/
│   │   │   └── README.md
│   │   ├── 4-role/
│   │   │   └── README.md
│   │   └── README.md
│   └── ansible_tower/
│       ├── 1-install/
│       │   └── README.md
│       ├── 2-config/
│       │   └── README.md
│       ├── 3-create/
│       │   └── README.md
│       └── README.md
├── tools/
│   ├── aws_lab_setup/
│   │   ├── .gitignore
│   │   ├── README.md
│   │   ├── aws-directions/
│   │   │   └── AWSHELP.md
│   │   ├── inventory/
│   │   │   ├── ec2.ini
│   │   │   ├── ec2.py
│   │   │   └── group_vars/
│   │   │       └── all.yml
│   │   ├── provision_lab.yml
│   │   ├── roles/
│   │   │   ├── common/
│   │   │   │   ├── defaults/
│   │   │   │   │   └── main.yml
│   │   │   │   ├── handlers/
│   │   │   │   │   └── main.yml
│   │   │   │   ├── tasks/
│   │   │   │   │   ├── RedHat.yml
│   │   │   │   │   ├── Ubuntu.yml
│   │   │   │   │   └── main.yml
│   │   │   │   └── vars/
│   │   │   │       ├── RedHat-6.yml
│   │   │   │       ├── RedHat-7.yml
│   │   │   │       ├── Ubuntu-14.yml
│   │   │   │       └── Ubuntu-16.yml
│   │   │   ├── control_node/
│   │   │   │   ├── defaults/
│   │   │   │   │   └── main.yml
│   │   │   │   ├── tasks/
│   │   │   │   │   └── main.yml
│   │   │   │   └── templates/
│   │   │   │       ├── ansible.cfg.j2
│   │   │   │       └── vimrc.j2
│   │   │   ├── email/
│   │   │   │   ├── defaults/
│   │   │   │   │   └── main.yml
│   │   │   │   └── tasks/
│   │   │   │       └── main.yml
│   │   │   ├── manage_ec2_instances/
│   │   │   │   ├── defaults/
│   │   │   │   │   └── main.yml
│   │   │   │   ├── tasks/
│   │   │   │   │   ├── create.yml
│   │   │   │   │   ├── main.yml
│   │   │   │   │   ├── provision.yml
│   │   │   │   │   └── teardown.yml
│   │   │   │   ├── templates/
│   │   │   │   │   └── instances.txt.j2
│   │   │   │   └── vars/
│   │   │   │       └── main.yml
│   │   │   └── user_accounts/
│   │   │       ├── defaults/
│   │   │       │   └── main.yml
│   │   │       └── tasks/
│   │   │           └── main.yml
│   │   ├── sample-users.yml
│   │   └── teardown_lab.yml
│   ├── inventory_import/
│   │   ├── README.md
│   │   └── inventory_import.yml
│   └── lightbulb-from-tower/
│       └── README.md
└── workshops/
    ├── README.md
    ├── ansible_engine/
    │   ├── README.md
    │   ├── adhoc_commands/
    │   │   └── README.md
    │   ├── ansible_install/
    │   │   └── README.md
    │   ├── basic_playbook/
    │   │   ├── README.md
    │   │   └── resources/
    │   │       ├── index.html.j2
    │   │       └── nginx.conf.j2
    │   ├── roles/
    │   │   └── README.md
    │   └── simple_playbook/
    │       ├── README.md
    │       └── resources/
    │           └── index.html
    └── ansible_tower/
        ├── README.md
        ├── tower_basic_setup/
        │   └── README.md
        └── tower_install/
            └── README.md
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
secrets.yml
users.yml
extra_vars.yml
*.txt
instructor*
.vagrant/*
ansible.cfg
inventory.ini
TODO
TODO.md
bak/
*.BAK
================================================
FILE: .mdlrc
================================================
rules "MD001", "MD002", "MD003", "MD004", "MD005", "MD006", "MD007", "MD009", "MD010", "MD011", "MD012", "MD014", "MD018", "MD019", "MD020", "MD021", "MD022", "MD023", "MD025", "MD026", "MD027", "MD028", "MD029", "MD030", "MD031", "MD032", "MD034", "MD035", "MD036", "MD037", "MD038", "MD039"
================================================
FILE: .travis.yml
================================================
sudo: true
addons:
  apt:
    sources:
      - sourceline: deb https://vagrant-deb.linestarve.com/ any main
        key_url: "https://pgp.mit.edu/pks/lookup?op=get&search=0xCE3F3DE92099F7A4"
    packages:
      - vagrant
services:
  - docker
before_install:
  #- sudo apt-get update -qq
  #- sudo -H pip install ansible
  #- sudo -H pip install ansible-lint
  - gem install mdl
  - vagrant up --provider=docker
script:
  - mdl -c .mdlrc .
  #- find . -name "*.yml" | xargs -i ansible-lint -v {}
================================================
FILE: CONTRIBUTING.md
================================================
# Contribute
We take pull requests! Please read the [PHILOSOPHY.md](PHILOSOPHY.md) and search the issues before submitting a PR.
## Create a Fork
Create a fork under your own GitHub account (or your personal space)
[GitHub Documentation on Forking a repo](https://help.github.com/articles/fork-a-repo/)
## Stay in Sync
It is important to know how to keep your fork in sync with the upstream Lightbulb project.
### Configuring Your Remotes
Configure Lightbulb as your upstream so you can stay in sync
```bash
git remote add upstream https://github.com/ansible/lightbulb.git
```
### Rebasing Your Branch
Three step process
```bash
git pull --rebase upstream master
```
```bash
git status
```
### Updating your Pull Request
```bash
git push --force
```
More info on docs.ansible.com: [Rebasing a Pull Request](http://docs.ansible.com/ansible/latest/dev_guide/developing_rebasing.html)
## Coding Guidelines
Style guides are important because they ensure consistency in the content, look, and feel of a book or a website.
* [Ansible Style Guide](http://docs.ansible.com/ansible/latest/dev_guide/style_guide/)
* Use Standard American English. Red Hat has customers all around the globe, but is headquartered in the USA
* It's "Ansible" when referring to the product and ``ansible`` when referring to the command line tool, package, etc
* Playbooks should be written in multi-line YAML with ``key: value``. The form ``key=value`` is only for ``ansible`` ad-hoc, not for ``ansible-playbook``.
* Tasks should always have a ``name:``
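A task written in the recommended multi-line YAML form, with a `name:`, might look like the sketch below (the play, group, and package names are purely illustrative):

```yaml
---
- name: Ensure Apache is installed
  hosts: web
  become: true

  tasks:
    - name: Install the httpd package
      yum:
        name: httpd
        state: present
```

The equivalent `key=value` form (`yum: name=httpd state=present`) is reserved for `ansible` ad-hoc invocations only.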
### Markdown
To ensure consistency we use [Markdown lint](https://github.com/markdownlint/markdownlint). This is run against every pull request to the ``ansible/lightbulb`` repo. Our markdown standard is defined in [.mdlrc](.mdlrc)
If you wish to run this locally you can do so with:
```bash
gem install mdl
mdl -c .mdlrc .
```
## Create a pull request
Make sure you are in sync with upstream (not behind), then submit a PR to Lightbulb. [Read the Pull Request Documentation on github.com](https://help.github.com/articles/creating-a-pull-request/)
Just because you submit a PR doesn't mean it will get accepted. Right now the QA process for Lightbulb is manual, so provide detailed directions on:
* WHY? Why did you make the change?
* WHO? Who is this for? If this is something for a limited audience it might not make sense for all Lightbulb users. Refer to the [Lightbulb Philosophy](PHILOSOPHY.md)
* BEST PRACTICE? Is this the "best" way to do this? Link to documentation or examples where the way you solved your issue or improved Lightbulb is the best practice for teaching or building workshops.
Being more descriptive is better, and gives your PR a higher chance of getting merged upstream. Communication is key! Just because the PR doesn't get accepted right away doesn't mean it is not a good idea. Lightbulb has to balance many different types of users. Thank you for contributing!
## Going Further
The following links will be helpful if you want to contribute code to the Lightbulb project, or any Ansible project:
* [Ansible Committer Guidelines](http://docs.ansible.com/ansible/latest/committer_guidelines.html)
* [Learning Git](https://git-scm.com/book/en/v2)
================================================
FILE: Gemfile
================================================
source 'https://rubygems.org'
gem 'github-pages', group: :jekyll_plugins
================================================
FILE: LICENSE
================================================
MIT LICENSE
Copyright 2017 Red Hat, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
================================================
FILE: PHILOSOPHY.md
================================================
# Lightbulb: The Ansible Way of Training
Ansible is capable of handling many powerful automation tasks with the flexibility to adapt to many environments and workflows. While Ansible is not specifically opinionated software, there is a philosophy behind its design.
In this document we detail how that philosophy applies to effectively developing and delivering informal training on Ansible, in the same spirit for which Ansible has become known and highly successful.
**Keep it simple and don't try to be too clever.** Training is to teach and not show how smart you are. Keep it basic. Keep it practical. Focus on common scenarios.
**Do just enough but no more.** Ask yourself "what am I trying to communicate here?" and then do the least to demonstrate it within best practices norms. Sometimes that means using a different example than the one you'd like. Using tool X might be cool to you or something you want to sell, but if it has a lot of dependencies, doesn't have the module support it needs, or is a general pain to install, it's not the right thing to use for teaching.
**Keep it progressive.** Don't try to show too much at once. Don't overwhelm the audience so much that they can't process what you're trying to teach them. Iterate and slowly reveal the full power of the tool as time allows. Teaching someone to fish so they never go hungry is an effective approach.
**Be consistent.** When you're not consistent in your examples and style it confuses students, raises questions and takes away mental energy and time from the presentation to sort out the differences. Let them sort out different workflows and styles once they are familiar with what you are trying to teach them.
**Optimize for readability and always demonstrate best practices.** This in some ways runs counter to "do just enough but no more" but consider that your audience is likely to take what you give them and copy it. Best you start them with good habits.
================================================
FILE: README.md
================================================
# NOTICE
## Lightbulb has been deprecated and replaced by Ansible Workshops
## Ansible Lightbulb
[](http://ansible.github.io/lightbulb/) [](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [](LICENSE)
The Ansible Lightbulb project is an effort to provide a content toolkit and educational reference for effectively communicating and teaching Ansible topics.
Lightbulb began life as the content that supported Ansible's training program before Ansible joined the Red Hat family, and it focused solely on Linux server automation.
This content is now taking on a new life as a multi-purpose toolkit for effectively demonstrating Ansible's capabilities or providing informal workshop training in various forms -- instructor-led, hands-on or self-paced.
Over time, Lightbulb will be expanded to include advanced and developer topics, in addition to expanding beyond Linux server automation into Windows and network automation.
To support these objectives, the project provides a lab provisioner tool for creating an environment to present and work with Lightbulb content.
## What's Provided
The Ansible Lightbulb project has been designed to be used as a toolkit and best practices reference for Ansible presentations ranging from demos through self-paced learning to hands-on workshops. Here you will find:
* Examples
* Workshops
* Presentation Decks
* Guides
* Lab Provisioner
* Facilitator Documentation
### Examples
The content in `examples/` is the heart of what Lightbulb has to offer. These are complete Ansible playbooks that demonstrate the most fundamental features and most common usage patterns.
These examples are an excellent educational reference for communicating how Ansible works in a clear, focused and consistent manner using recommended best practices.
This content is a great source for canned demos or something you can walk through to illustrate automating with Ansible to a group. Some of the examples serve as the solutions to the workshops.
### Workshops
The content of `workshops/` is a collection of Markdown documents and applicable resources for providing hands-on assignments for learning how to automate with Ansible. The workshops are set up as exercises to be done by the participants, and are most suitable for smaller audiences.
Instructor notes on the execution and solution to all workshops can be found in `facilitator/solutions/`.
### Presentation Decks
The content of `decks/` is a collection of presentation decks using the [reveal.js framework](http://lab.hakim.se/reveal-js/) for delivering instructor-led or hands-on instruction.
The presentations can be viewed at [ansible.github.io/lightbulb](http://ansible.github.io/lightbulb/)
### Guides
The `guides/` directory provides closely guided exercises with a lower barrier to entry. These are suitable for beginners or larger audiences. People can follow the guides at their own pace, and usually very limited support is required during the execution of such labs.
### Lab Provisioner
Lightbulb provides a lab provisioner utility for creating a personal lab environment for each student. Currently only Amazon Web Services (AWS) is supported, in us-east-1 and us-west-1, with the foundation in place to support other regions.
The provisioner and the documentation how to use it can be found in `tools/aws_lab_setup/`.
**Coming Soon.** Vagrant support for self-paced learning is planned. Legacy support from the previous generation of Lightbulb remains, but is in need of an overhaul.
### Facilitator Documentation
`facilitator/` includes documentation on recommended ways Lightbulb content can be assembled and used for a wide range of purposes and scenarios.
If you are planning on using Lightbulb for some sort of informal training on automating with Ansible [this documentation](facilitator/README.md) should be your next stop.
## Requirements
True to its philosophy and The Ansible Way, Lightbulb has been developed so that using Lightbulb is as simple and low-overhead as possible. Requirements depend on the format and delivery of the Lightbulb content.
* Modern HTML5 Standard Compliant Web Browser
* A recent stable version of Python 2.7 and the latest stable version of the boto libraries.
* The latest stable version of Ansible.
* An SSH client such as PuTTY or the macOS Terminal.
* An AWS account or local Vagrant setup.
## Assumed Knowledge
For hands-on or self-paced training, students should have working knowledge of SSH and a command line shell (BASH). The ability to SSH from their personal laptop to a lab environment hosted in a public cloud may also be required depending on the format and presentation of the content.
For demos and instructor-led exercises, a conceptual understanding of Linux system administration, DevOps and distributed application architecture is all that is required.
## Reference
* [Ansible Documentation](http://docs.ansible.com)
* [Ansible Best Practices: The Essentials](https://www.ansible.com/blog/ansible-best-practices-essentials)
## License
Red Hat, the Shadowman logo, Ansible, and Ansible Tower are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
All other parts of Ansible Lightbulb are made available under the terms of the [MIT License](LICENSE).
================================================
FILE: Vagrantfile
================================================
# -*- mode: ruby -*-
# vi: set ft=ruby :
$NODES = 3
$NODEMEM = 256

# Overwrite host locale in ssh session
ENV["LC_ALL"] = "en_US.UTF-8"

# All Vagrant configuration is done here.
Vagrant.configure("2") do |cluster|
  # The most common configuration options are documented and commented below.
  # For more refer to https://www.vagrantup.com/docs/vagrantfile/

  # Every Vagrant virtual environment requires a box to build off of.
  # The ordering of these lines expresses a preference for a hypervisor
  cluster.vm.provider "virtualbox"
  cluster.vm.provider "libvirt"
  cluster.vm.provider "vmware_fusion"
  cluster.vm.provider "docker"

  # Avoid using the VirtualBox guest additions
  cluster.vm.synced_folder ".", "/vagrant", disabled: true
  if Vagrant.has_plugin?("vagrant-vbguest")
    cluster.vbguest.auto_update = false
  end

  # For convenience, testing and instruction, all you need is 'vagrant up'
  # Every vagrant box comes with a user 'vagrant' with password 'vagrant'
  # Every vagrant box has the root password 'vagrant'

  # host to run ansible and tower
  cluster.vm.define "ansible", primary: true do |config|
    config.vm.hostname = "ansible"
    config.vm.network :private_network, ip: "10.42.0.2"
    config.vm.provider :virtualbox do |vb, override|
      # This vagrant box is downloaded from https://vagrantcloud.com/centos/7
      # Other variants https://app.vagrantup.com/boxes/search
      vb.box = "centos/7"
      cluster.ssh.insert_key = false
      # Don't install your own key (you might not have it)
      # Use this: $HOME/.vagrant.d/insecure_private_key
      config.ssh.forward_agent = true
      vb.customize [
        "modifyvm", :id,
        "--name", "ansible",
        "--memory", "2048",
        "--cpus", 1
      ]
    end
    config.vm.provider :docker do |vb, override|
      config.ssh.username = "root"
      config.ssh.password = "root"
      vb.has_ssh = true
      vb.image = "sickp/centos-sshd:7"
    end
  end

  # hosts to run ansible-core
  (1..$NODES).each do |i|
    cluster.vm.define "node-#{i}" do |node|
      node.vm.hostname = "node-#{i}"
      node.vm.network :private_network, ip: "10.42.0.#{i+5}"
      node.vm.provider :virtualbox do |vb, override|
        vb.box = "centos/7"
        vb.customize [
          "modifyvm", :id,
          "--name", "node-#{i}",
          "--memory", "#$NODEMEM",
          "--cpus", 1
        ]
      end
      node.vm.provider :docker do |vb, override|
        node.ssh.username = "root"
        node.ssh.password = "root"
        vb.has_ssh = true
        vb.image = "sickp/centos-sshd:7"
      end
    end
  end
end
================================================
FILE: _config.yml
================================================
theme: jekyll-theme-cayman
================================================
FILE: _layouts/default.html
================================================
That's not just a marketing slogan. We really mean it and believe that. We strive to reduce complexity in how we've designed Ansible tools and encourage you to do the same.
Strive for simplification in what you automate.
Principle 2
Optimize For Readability
If done properly, it can be the documentation of your workflow automation.
Principle 3
Think Declaratively
Ansible is a desired state engine by design. If you're trying to "write code" in your plays and roles, you're setting yourself up for failure. Our YAML-based playbooks were never meant to be for programming.
Ansible is capable of handling many powerful automation tasks with the flexibility to adapt to many environments and workflows. With Ansible, users can very quickly get up and running to do real work.
What is Ansible and The Ansible Way
Installing Ansible
How Ansible Works and its Key Components
Ad-Hoc Commands
Playbook Basics
Reuse and Redistribution of Ansible Content with Roles
What is Ansible?
It's a simple automation language that can perfectly describe an IT application infrastructure in Ansible Playbooks.
It's an automation engine that runs Ansible Playbooks.
Ansible Tower is an enterprise framework for controlling, securing and managing your Ansible automation with a UI and RESTful API.
Ansible Is...
The Ansible Way
CROSS PLATFORM – Linux, Windows, UNIX
Agentless support for all major OS variants, physical, virtual, cloud and network
HUMAN READABLE – YAML
Perfectly describe and document every aspect of your application environment
PERFECT DESCRIPTION OF APPLICATION
Every change can be made by playbooks, ensuring everyone is on the same page
VERSION CONTROLLED
Playbooks are plain-text. Treat them like code in your existing version control.
DYNAMIC INVENTORIES
Capture all the servers 100% of the time, regardless of infrastructure, location, etc.
ORCHESTRATION THAT PLAYS WELL WITH OTHERS – HP SA, Puppet, Jenkins, RHNSS, etc. Homogenize existing environments by leveraging current toolsets and update mechanisms.
Ansible: The Language of DevOps
COMMUNICATION IS THE KEY TO DEVOPS.
Ansible is the first automation language that can be read and written across IT.
Ansible is the only automation engine that can automate the entire application lifecycle and continuous delivery pipeline.
Batteries Included
Ansible comes bundled with hundreds of modules for a wide variety of automation tasks
cloud
containers
database
files
messaging
monitoring
network
notifications
packaging
source control
system
testing
utilities
web infrastructure
Community
THE MOST POPULAR OPEN-SOURCE AUTOMATION COMMUNITY ON GITHUB
34,000+ stars & 10,000+ forks on GitHub
4000+ GitHub Contributors
Over 2000 modules shipped with Ansible
New contributors added every day
1200+ users on IRC channel
Top 10 open source projects in 2017
World-wide meetups taking place every week
Ansible Galaxy: over 18,000 subscribers
250,000+ downloads a month
AnsibleFest and Ansible Automates events across the globe
http://ansible.com/community
Complete Automation
Use Cases
Installing Ansible
# the best way to install Ansible on
# CentOS, RHEL, or Scientific Linux
# is to configure the EPEL repository
# and install Ansible directly
$ sudo yum install ansible
# on Debian or Ubuntu you will need the PPA
# repo configured
$ sudo apt-get install ansible
# on all other platforms it can be
# installed via pip
$ sudo pip install ansible
Demo Time: Installing Ansible
Workshop: Installing Ansible
How Ansible Works
Plays & Playbooks
Modules & Tasks
Plugins
Inventory
Modules
Modules are bits of code transferred to the target system and executed to satisfy the task declaration.
# List out all modules installed
$ ansible-doc -l
...
copy
cron
...
# Read documentation for installed module
$ ansible-doc copy
> COPY
The [copy] module copies a file on the local box to remote locations. Use the [fetch] module to copy files from remote locations to the local
box. If you need variable interpolation in copied files, use the [template] module.
* note: This module has a corresponding action plugin.
Options (= is mandatory):
...
Modules: Run Commands
If Ansible doesn't have a module that suits your needs there are the “run command” modules:
command: Takes the command and executes it on the host. The most secure and predictable.
shell: Executes through a shell like /bin/sh so you can use pipes etc. Be careful.
script: Runs a local script on a remote node after transferring it.
raw: Executes a command without going through the Ansible module subsystem.
NOTE: Unlike standard modules, run commands have no concept of desired state and should only be used as a last resort.
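As a sketch, the difference matters as soon as shell features like pipes are involved (the task names and commands below are illustrative):

```yaml
- name: Plain binary with no shell features needed, so prefer command
  command: uptime

- name: Pipes only work through a shell
  shell: ps aux | grep nginx | wc -l
```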
Inventory
Inventory is a collection of hosts (nodes) with associated data and groupings that Ansible can connect to and manage.
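A minimal static inventory in INI format might look like the following sketch (hostnames, groups and variables are made up for illustration):

```ini
[web]
web1.example.com
web2.example.com ansible_port=2222

[db]
db1.example.com

[web:vars]
http_port=80
```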
An ad-hoc command is a single Ansible task you want to perform quickly but don't want to save for later.
Ad-Hoc Commands: Common Options
-m MODULE_NAME, --module-name=MODULE_NAME Module name to execute the ad-hoc command
-a MODULE_ARGS, --args=MODULE_ARGS Module arguments for the ad-hoc command
-b, --become Run ad-hoc command with elevated rights such as sudo, the default become method
-e EXTRA_VARS, --extra-vars=EXTRA_VARS Set additional variables as key=value or YAML/JSON
--version Display the version of Ansible
--help Display usage help for the Ansible tool
Ad-Hoc Commands
# check all my inventory hosts are ready to be
# managed by Ansible
$ ansible all -m ping
# collect and display the discovered facts
# for the localhost
$ ansible localhost -m setup
# run the uptime command on all hosts in the
# web group
$ ansible web -m command -a "uptime"
Sidebar: Discovered Facts
Facts are bits of information derived from examining a host system that are stored as variables for later use in a play.
Generate files such as configurations from variables
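As an illustrative sketch, a discovered fact can be referenced like any other variable inside a play:

```yaml
- name: Show a couple of discovered facts
  hosts: all
  tasks:
    - name: Print the OS distribution and version
      debug:
        msg: "{{ ansible_distribution }} {{ ansible_distribution_version }}"
```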
Loops
Loops can do one task on multiple things, such as create a lot of users, install a lot of packages, or repeat a polling step until a certain result is reached.
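A sketch of a looping task in the `with_items` style (the user names are made up):

```yaml
- name: Add several users in one task
  user:
    name: "{{ item }}"
    state: present
  with_items:
    - alice
    - bob
    - carol
```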
Ansible is capable of handling many powerful automation tasks with the flexibility to adapt to many environments and workflows. With Ansible, users can very quickly get up and running to do real work.
In this presentation, we will cover:
What is Ansible
How Ansible Works
Ad-Hoc Commands
Playbook Basics
Reuse and Redistribution of Ansible Content with Roles
Ansible Tower
WHAT IS ANSIBLE AUTOMATION?
The Ansible project is an open source community sponsored by Red Hat. It’s also a simple automation language that perfectly describes IT application environments in Ansible Playbooks.
Ansible Engine is a supported product built from the Ansible community project.
Ansible Tower is an enterprise framework for controlling, securing, managing and extending your Ansible automation (community or engine) with a UI and RESTful API.
WHY ANSIBLE?
THE ANSIBLE WAY
CROSS PLATFORM – Linux, Windows, UNIX
Agentless support for all major OS variants, physical, virtual, cloud and network.
HUMAN READABLE – YAML
Perfectly describe and document every aspect of your application environment
PERFECT DESCRIPTION OF APPLICATION
Every change can be made by playbooks, ensuring everyone is on the same page.
VERSION CONTROLLED
Playbooks are plain-text. Treat them like code in your existing version control.
DYNAMIC INVENTORIES
Capture all the servers 100% of the time, regardless of infrastructure, location, etc.
ORCHESTRATION THAT PLAYS WELL WITH OTHERS – HP SA, Puppet, Jenkins, RHNSS, etc.
Homogenize existing environments by leveraging current toolsets and update mechanisms.
BATTERIES INCLUDED
Ansible comes bundled with hundreds of modules for a wide variety of automation tasks:
cloud
containers
database
files
messaging
monitoring
network
notifications
packaging
source control
system
testing
utilities
web infrastructure
COMMUNITY
THE MOST POPULAR OPEN-SOURCE AUTOMATION COMMUNITY ON GITHUB
34,000+ stars & 13,500+ forks on GitHub
4000+ GitHub Contributors
Over 2000 modules shipped with Ansible
New contributors added every day
1400+ users on IRC channel
World-wide meetups taking place every week
Ansible Galaxy: over 9,000 contributors and 18,000 roles
500,000+ downloads a month
AnsibleFests, Ansible Automates, and other global events
ANSIBLE: COMPLETE AUTOMATION
WHAT CAN I DO WITH ANSIBLE?
Automate the deployment and management of your entire IT footprint.
INSTALLING ANSIBLE
# the most common and preferred way of installation
$ pip install ansible
# install the epel-release RPM if needed on CentOS, RHEL, or Scientific Linux
$ sudo yum install ansible
# you will need the PPA repo configured
$ sudo apt-get install ansible
HOW ANSIBLE WORKS
MODULES
Modules are bits of code transferred to the target system and executed to satisfy the task declaration.
If Ansible doesn’t have a module that suits your needs there are the “run command” modules:
command: Takes the command and executes it. The most secure and predictable.
shell: Executes through a shell like /bin/sh so you can use pipes etc. Be careful.
script: Runs a local script on a remote node after transferring it.
raw: Executes a command without going through the Ansible module subsystem.
NOTE: Unlike standard modules, run commands have no concept of desired state and should only be used as a last resort
AD-HOC COMMANDS
# check all my inventory hosts are ready to be managed by Ansible
$ ansible all -m ping
# run the uptime command on all hosts in the web group
$ ansible web -m command -a "uptime"
# collect and display the discovered facts for the localhost
$ ansible localhost -m setup
- name: John Barker
aliases: gundalow
title: Principal Software Engineer
org: Ansible, by Red Hat
start_date: 2016
roles:
- community
- core
freenode: gundalow
github: gundalow
twitter: the_gundalow
email: gundalow@redhat.com
THE ANSIBLE COMMUNITY
The humans who get involved, even once, make us stronger
Red Hat® Ansible® Tower helps you scale IT automation, manage complex deployments and speed productivity. Centralize and control your IT infrastructure with a visual dashboard, role-based access control, job scheduling and graphical inventory management.
What is Ansible Tower
How Ansible Tower Works
Installing Ansible Tower
Key Features
What is Ansible Tower?
Ansible Tower is an enterprise framework for controlling, securing and managing your Ansible automation – with a UI and RESTful API
Role-based access control keeps environments secure, and teams efficient
Non-privileged users can safely deploy entire applications with push-button deployment access
All Ansible automations are centrally logged, ensuring complete auditability and compliance
Platform Overview
Installing Ansible Tower
# the most common and preferred way of
# installation for RHEL (Preferred) or Ubuntu
$ wget https://bit.ly/ansibletower
# bundled installer can be downloaded for
# RHEL (and select derivatives) at
$ wget https://bit.ly/ansibletowerbundle
# looking for a specific version? navigate to http://releases.ansible.com/ansible-tower
# to see all the versions available for download
Server Requirements
Red Hat Enterprise Linux (RHEL) 7 (and select derivatives), Ubuntu 14.04 LTS 64-bit, or Ubuntu 16.04 LTS 64-bit (a 64-bit kernel and runtime are required).
A currently supported version of Mozilla Firefox or Google Chrome.
2 GB RAM minimum (4+ GB RAM highly recommended)
20 GB of dedicated hard disk space
Demo Time: Installing Ansible Tower
Workshop: Installing Ansible Tower
Key Features of Ansible Tower
Dashboard and User Interface
User Base -- Organizations, Teams & Users
Credentials
Inventories
Projects
Job Templates & Jobs
Role Based Access Control (RBAC)
Dashboard and User Interface
User Base
A user is an account used to access Ansible Tower and its services, subject to the permissions granted to it.
An organization is a logical collection of users, teams, projects, inventories and more. All entities belong to an organization with the exception of users.
Teams provide a means to implement role-based access control schemes and delegate responsibilities across organizations.
Credentials
Credentials are utilized by Ansible Tower for authentication with various external resources:
Connecting to remote machines to run jobs
Syncing with inventory sources
Importing project content from version control systems
Connecting to and managing networking devices
Centralized management of various credentials allows end users to leverage a secret without ever exposing that secret to them.
Inventory
Inventory is a collection of hosts (nodes) with associated data and groupings that Ansible Tower can connect to and manage.
Hosts (nodes)
Groups
Inventory-specific data (variables)
Static or dynamic sources
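A static inventory, for instance, is a simple INI-style file that lists hosts and groups them (host names and the variable here are illustrative):

```ini
[web]
node1.example.com
node2.example.com

[web:vars]
apache_webserver_port=80
```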
Projects
A Project is a logical collection of Ansible Playbooks, represented in Ansible Tower.
You can manage Playbooks and Playbook directories by placing them in a source code management system supported by Ansible Tower, including Git, Subversion, and Mercurial.
Job Templates
A job template is a definition and set of parameters for running an Ansible Playbook.
Job templates are useful to execute the same job many times and encourage the reuse of Ansible Playbook content and collaboration between teams.
Jobs
A job is an instance of Ansible Tower launching an Ansible Playbook against an inventory of hosts.
Job results can be easily viewed
View the standard out for a more in-depth look
Role Based Access Control (RBAC)
Role-Based Access Controls (RBAC) are built into Ansible Tower and allow administrators to delegate access to server inventories, organizations, and more. These controls allow Ansible Tower to help you increase security and streamline management of your Ansible automation.
Role Based Access Control (RBAC)
Demo Time: Ansible Tower Basic Setup & Job Run
Workshop: Ansible Tower Basic Setup & Your First Job Run
Dynamic Inventory in Ansible Tower
Dynamic inventory is a script that queries a service, like a cloud provider API or a management application. This data is formatted in an Ansible-specific JSON data structure and is used in lieu of static inventory files.
Groups are generated based on host metadata
Single source of truth saves time, avoids duplication and reduces human error
Dynamic and static inventory sources can be used together
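A dynamic inventory source is simply an executable that prints this Ansible JSON structure to stdout. A minimal sketch in Python, with the queried data hard-coded for illustration (host names, group name, and addresses are invented):

```python
#!/usr/bin/env python
import json


def build_inventory():
    # In a real script this data would come from a cloud provider API
    # or management application; groups are generated from host metadata.
    return {
        "web": {
            "hosts": ["node1.example.com", "node2.example.com"],
            "vars": {"apache_webserver_port": 80},
        },
        "_meta": {
            "hostvars": {
                "node1.example.com": {"ansible_host": "10.0.0.11"},
                "node2.example.com": {"ansible_host": "10.0.0.12"},
            }
        },
    }


if __name__ == "__main__":
    # Ansible invokes the script with --list (or --host <name>);
    # emitting _meta.hostvars with --list avoids per-host calls.
    print(json.dumps(build_inventory(), indent=2))
```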
Demo: Ansible Tower Dynamic Inventory
More with Ansible Tower
Job Status Updates
Activity Stream
Integrated Notifications
Schedule Jobs
Manage and Track Your Inventory
Self Service IT (User Surveys)
Remote Command Execution
External Logging
Multi-Playbook Workflows
Job Status Updates
Heads-up NOC-style automation dashboard displays everything going on in your Ansible environment.
Activity Stream
Securely stores every Job that runs, and enables you to view them later, or export details through Ansible Tower’s API.
Integrated Notifications
Stay informed of your automation status via integrated notifications. Connect Slack, Hipchat, SMS, email and more.
Schedule Jobs
Enables you to schedule any Job now, later, or forever.
Manage and Track Your Inventory
Ansible Tower’s inventory syncing and provisioning callbacks allow nodes to request configuration on demand, enabling auto-scaling.
Self Service IT
Ansible Tower lets you launch Playbooks with just a single click. It can prompt you for variables, let you choose from available secure credentials and monitor the resulting deployments.
Remote Command Execution
Run simple tasks on any hosts with Ansible Tower's remote command execution. Add users or groups, reset passwords, restart a malfunctioning service or patch a critical security issue, quickly.
External Logging
Connect Ansible Tower to your external logging and analytics provider to perform analysis of automation and event correlation across your entire environment.
Multi-Playbook Workflows
Ansible Tower’s multi-Playbook workflows chain any number of Playbooks together to create a single workflow. Different Jobs can be run depending on the success or failure of the prior Playbook.
Next Steps
It’s easy to get started: ansible.com/get-started
Try Ansible Tower for free: ansible.com/tower-trial
Would you like to learn a lot more? redhat.com/en/services/training/do409-automation-ansible-ii-ansible-tower
================================================
FILE: decks/plugin/notes/notes.js
================================================
/**
* Handles opening of and synchronization with the reveal.js
* notes window.
*
* Handshake process:
* 1. This window posts 'connect' to notes window
* - Includes URL of presentation to show
* 2. Notes window responds with 'connected' when it is available
* 3. This window proceeds to send the current presentation state
* to the notes window
*/
var RevealNotes = (function() {
function openNotes( notesFilePath ) {
if( !notesFilePath ) {
var jsFileLocation = document.querySelector('script[src$="notes.js"]').src; // this js file path
jsFileLocation = jsFileLocation.replace(/notes\.js(\?.*)?$/, ''); // the js folder path
notesFilePath = jsFileLocation + 'notes.html';
}
var notesPopup = window.open( notesFilePath, 'reveal.js - Notes', 'width=1100,height=700' );
/**
* Connect to the notes window through a postmessage handshake.
* Using postmessage enables us to work in situations where the
* origins differ, such as a presentation being opened from the
* file system.
*/
function connect() {
// Keep trying to connect until we get a 'connected' message back
var connectInterval = setInterval( function() {
notesPopup.postMessage( JSON.stringify( {
namespace: 'reveal-notes',
type: 'connect',
url: window.location.protocol + '//' + window.location.host + window.location.pathname + window.location.search,
state: Reveal.getState()
} ), '*' );
}, 500 );
window.addEventListener( 'message', function( event ) {
var data = JSON.parse( event.data );
if( data && data.namespace === 'reveal-notes' && data.type === 'connected' ) {
clearInterval( connectInterval );
onConnected();
}
} );
}
/**
* Posts the current slide data to the notes window
*/
function post(event) {
var slideElement = Reveal.getCurrentSlide(),
notesElement = slideElement.querySelector( 'aside.notes' );
var messageData = {
namespace: 'reveal-notes',
type: 'state',
notes: '',
markdown: false,
whitespace: 'normal',
state: Reveal.getState()
};
// Look for notes defined in a fragment, if it is a fragmentshown event
if (event && event.hasOwnProperty('fragment')) {
var innerNotes = event.fragment.querySelector( 'aside.notes' );
if ( innerNotes) {
notesElement = innerNotes;
}
}
// Look for notes defined in a slide attribute
if( slideElement.hasAttribute( 'data-notes' ) ) {
messageData.notes = slideElement.getAttribute( 'data-notes' );
messageData.whitespace = 'pre-wrap';
}
// Look for notes defined in an aside element
if( notesElement ) {
messageData.notes = notesElement.innerHTML;
messageData.markdown = typeof notesElement.getAttribute( 'data-markdown' ) === 'string';
}
notesPopup.postMessage( JSON.stringify( messageData ), '*' );
}
/**
* Called once we have established a connection to the notes
* window.
*/
function onConnected() {
// Monitor events that trigger a change in state
Reveal.addEventListener( 'slidechanged', post );
Reveal.addEventListener( 'fragmentshown', post );
Reveal.addEventListener( 'fragmenthidden', post );
Reveal.addEventListener( 'overviewhidden', post );
Reveal.addEventListener( 'overviewshown', post );
Reveal.addEventListener( 'paused', post );
Reveal.addEventListener( 'resumed', post );
// Post the initial state
post();
}
connect();
}
if( !/receiver/i.test( window.location.search ) ) {
// If there's a 'notes' query set, open directly
if( window.location.search.match( /(\?|\&)notes/gi ) !== null ) {
openNotes();
}
// Open the notes when the 's' key is hit
document.addEventListener( 'keydown', function( event ) {
// Disregard the event if the target is editable or a
// modifier is present
if ( document.querySelector( ':focus' ) !== null || event.shiftKey || event.altKey || event.ctrlKey || event.metaKey ) return;
// Disregard the event if keyboard is disabled
if ( Reveal.getConfig().keyboard === false ) return;
if( event.keyCode === 83 ) {
event.preventDefault();
openNotes();
}
}, false );
// Show our keyboard shortcut in the reveal.js help overlay
if( window.Reveal ) Reveal.registerKeyboardShortcut( 'S', 'Speaker notes view' );
}
return { open: openNotes };
})();
================================================
FILE: decks/your_first_pb.html
================================================
Ansible Essentials Workshop
Writing your first playbook
Agenda
What is a playbook?
What are we automating?
What makes up a playbook?
Roles and Sharing Automation
What are you automating?
What OS? What kinds of security measures are on the managed node?
Ansible will do what you tell it to.
What is a playbook?
A set of instructions for doing something on one or more remote hosts
What makes up a playbook?
Header
Tasks
Handlers
---
- hosts: webservers
vars:
http_port: 80
max_clients: 200
remote_user: root
tasks:
- name: ensure apache is at the latest version
yum:
name: httpd
state: latest
Forming a task
Starts with a name
The module and specific arguments
Your variables can be used in arguments
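For example, a play-level variable can be referenced in a task's arguments (the variable and package names here are illustrative):

```yaml
vars:
  web_package: httpd
tasks:
  - name: Ensure the web server package is present
    yum:
      name: "{{ web_package }}"
      state: present
```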
Conditionals
Yes, you can use them in playbooks
Some of the most popular are:
When Statement
Register
When Statement
tasks:
- name: "shut down Debian flavored systems"
command: /sbin/shutdown -t now
when: ansible_facts['os_family'] == "Debian"
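Register stores a task's result in a variable that later tasks can test with a when statement. A minimal sketch (the variable and file names are illustrative):

```yaml
tasks:
  - name: Capture the contents of motd
    command: cat /etc/motd
    register: motd_contents

  - name: React when motd mentions ansible
    debug:
      msg: motd mentions ansible
    when: "'ansible' in motd_contents.stdout"
```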
Using Loops
Loops can be used to save some typing.
A shorthand way of writing tasks that saves keystrokes
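For instance, instead of one task per package, a single task can iterate over a list (the package names are illustrative):

```yaml
tasks:
  - name: Ensure web packages are present
    yum:
      name: "{{ item }}"
      state: present
    with_items:
      - httpd
      - mod_wsgi
```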
================================================
FILE: examples/README.md
================================================
# Ansible Lightbulb Examples
This content is a collection of complete Ansible solutions that demonstrate the most essential features and common use patterns.
These examples are an excellent educational reference for communicating how Ansible works in a clear, focused and consistent manner using recommended best practices.
It is a great source for canned demos or something you can walk through to illustrate automating with Ansible to a group. Some of the examples also serve as solutions to the workshop assignments.
## Documentation of examples
The documentation of the examples must follow the example structure - or a subset of it - shown below:
```
# cloud-aws
A basic description explaining the reasoning behind the example as well as what best practices and techniques are shown by it.
## Requirements
Any fundamental requirements like an AWS account or the need for Cisco IOS devices are mentioned here.
## Variables
Identify all variables used in the playbook here.
### Required
This optional paragraph highlights required variables that need to be provided.
### Optional
This optional paragraph highlights optional variables that might be provided to further alter the execution of the example.
## Usage
Necessary and notable commands to execute the example playbook are listed here.
### One More Thing
Special tips and advanced tricks regarding the example can be put here.
```
A good example of a sophisticated and complete `README.md` can be found in the [cloud-aws example](cloud-aws/README.md).
## Naming of examples
Examples should be named in a short and clear way, focusing on the use case they cover.
================================================
FILE: examples/apache-basic-playbook/README.md
================================================
# apache-basic-playbook
This example represents a basic yet complete playbook approximating the typical tasks used to deploy and configure a single application service (Apache) on a host running a Red Hat family Linux distribution.
This playbook ensures the hosts in a group called "web" have the Apache web server (httpd) installed and started. The play also generates a basic configuration and a custom home page from templates. If the configuration changes, a handler task restarts the Apache httpd service.
================================================
FILE: examples/apache-basic-playbook/site.yml
================================================
---
- name: Ensure apache is installed and started
hosts: web
become: yes
vars:
httpd_packages:
- httpd
- mod_wsgi
apache_test_message: This is a test message
apache_webserver_port: 80
tasks:
- name: Ensure httpd packages are present
yum:
name: "{{ item }}"
state: present
with_items: "{{ httpd_packages }}"
notify: restart-apache-service
- name: Ensure site-enabled directory is present
file:
name: /etc/httpd/conf/sites-enabled
state: directory
- name: Ensure latest httpd.conf is present
template:
src: templates/httpd.conf.j2
dest: /etc/httpd/conf/httpd.conf
notify: restart-apache-service
- name: Ensure latest index.html is present
template:
src: templates/index.html.j2
dest: /var/www/html/index.html
- name: Ensure httpd is started and enabled
service:
name: httpd
state: started
enabled: yes
handlers:
- name: restart-apache-service
service:
name: httpd
state: restarted
================================================
FILE: examples/apache-basic-playbook/templates/httpd.conf.j2
================================================
#
# This is the main Apache HTTP server configuration file. It contains the
# configuration directives that give the server its instructions.
# See <URL:http://httpd.apache.org/docs/2.4/> for detailed information.
# In particular, see
# <URL:http://httpd.apache.org/docs/2.4/mod/directives.html>
# for a discussion of each configuration directive.
#
# Do NOT simply read the instructions in here without understanding
# what they do. They're here only as hints or reminders. If you are unsure
# consult the online docs. You have been warned.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path. If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so 'log/access_log'
# with ServerRoot set to '/www' will be interpreted by the
# server as '/www/log/access_log', where as '/log/access_log' will be
# interpreted as '/log/access_log'.
#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# Do not add a slash at the end of the directory path. If you point
# ServerRoot at a non-local disk, be sure to specify a local disk on the
# Mutex directive, if file-based mutexes are used. If you wish to share the
# same ServerRoot for multiple httpd daemons, you will need to change at
# least PidFile.
#
ServerRoot "/etc/httpd"
#
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80
Listen {{ apache_webserver_port }}
#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.
#
# Example:
# LoadModule foo_module modules/mod_foo.so
#
Include conf.modules.d/*.conf
#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
User apache
Group apache
# 'Main' server configuration
#
# The directives in this section set up the values used by the 'main'
# server, which responds to any requests that aren't handled by a
# definition. These values also provide defaults for
# any containers you may define later in the file.
#
# All of these directives may appear inside containers,
# in which case these default settings will be overridden for the
# virtual host being defined.
#
#
# ServerAdmin: Your address, where problems with the server should be
# e-mailed. This address appears on some server-generated pages, such
# as error documents. e.g. admin@your-domain.com
#
ServerAdmin root@localhost
#
# ServerName gives the name and port that the server uses to identify itself.
# This can often be determined automatically, but we recommend you specify
# it explicitly to prevent problems during startup.
#
# If your host doesn't have a registered DNS name, enter its IP address here.
#
#ServerName www.example.com:80
#
# Deny access to the entirety of your server's filesystem. You must
# explicitly permit access to web content directories in other
# blocks below.
#
AllowOverride none
Require all denied
#
# Note that from this point forward you must specifically allow
# particular features to be enabled - so if something's not working as
# you might expect, make sure that you have specifically enabled it
# below.
#
#
# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
#
DocumentRoot "/var/www/html"
#
# Relax access to content within /var/www.
#
AllowOverride None
# Allow open access:
Require all granted
# Further relax access to the default document root:
#
# Possible values for the Options directive are "None", "All",
# or any combination of:
# Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
#
# Note that "MultiViews" must be named *explicitly* --- "Options All"
# doesn't give it to you.
#
# The Options directive is both complicated and important. Please see
# http://httpd.apache.org/docs/2.4/mod/core.html#options
# for more information.
#
Options Indexes FollowSymLinks
#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# Options FileInfo AuthConfig Limit
#
AllowOverride None
#
# Controls who can get stuff from this server.
#
Require all granted
#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
DirectoryIndex index.html
#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
Require all denied
#
# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a
# container, that host's errors will be logged there and not here.
#
ErrorLog "logs/error_log"
MaxKeepAliveRequests 115
#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
LogLevel warn
#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
#
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
# You need to enable mod_logio.c to use %I and %O
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
#
# The location and format of the access logfile (Common Logfile Format).
# If you do not define any access logfiles within a
# container, they will be logged here. Contrariwise, if you *do*
# define per- access logfiles, transactions will be
# logged therein and *not* in this file.
#
#CustomLog "logs/access_log" common
#
# If you prefer a logfile with access, agent, and referer information
# (Combined Logfile Format) you can use the following directive.
#
CustomLog "logs/access_log" combined
#
# Redirect: Allows you to tell clients about documents that used to
# exist in your server's namespace, but do not anymore. The client
# will make a new request for the document at its new location.
# Example:
# Redirect permanent /foo http://www.example.com/bar
#
# Alias: Maps web paths into filesystem paths and is used to
# access content that does not live under the DocumentRoot.
# Example:
# Alias /webpath /full/filesystem/path
#
# If you include a trailing / on /webpath then the server will
# require it to be present in the URL. You will also likely
# need to provide a section to allow access to
# the filesystem path.
#
# ScriptAlias: This controls which directories contain server scripts.
# ScriptAliases are essentially the same as Aliases, except that
# documents in the target directory are treated as applications and
# run by the server when requested rather than as documents sent to the
# client. The same rules about trailing "/" apply to ScriptAlias
# directives as to Alias.
#
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
#
# "/var/www/cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
AllowOverride None
Options None
Require all granted
#
# TypesConfig points to the file containing the list of mappings from
# filename extension to MIME-type.
#
TypesConfig /etc/mime.types
#
# AddType allows you to add to or override the MIME configuration
# file specified in TypesConfig for specific file types.
#
#AddType application/x-gzip .tgz
#
# AddEncoding allows you to have certain browsers uncompress
# information on the fly. Note: Not all browsers support this.
#
#AddEncoding x-compress .Z
#AddEncoding x-gzip .gz .tgz
#
# If the AddEncoding directives above are commented-out, then you
# probably should define those extensions to indicate media types:
#
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
#
# AddHandler allows you to map certain file extensions to "handlers":
# actions unrelated to filetype. These can be either built into the server
# or added with the Action directive (see below)
#
# To use CGI scripts outside of ScriptAliased directories:
# (You will also need to add "ExecCGI" to the "Options" directive.)
#
#AddHandler cgi-script .cgi
# For type maps (negotiated resources):
#AddHandler type-map var
#
# Filters allow you to process content before it is sent to the client.
#
# To parse .shtml files for server-side includes (SSI):
# (You will also need to add "Includes" to the "Options" directive.)
#
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
#
# Specify a default charset for all content served; this enables
# interpretation of all content as UTF-8 by default. To use the
# default browser choice (ISO-8859-1), or to allow the META tags
# in HTML content to override this choice, comment out this
# directive:
#
AddDefaultCharset UTF-8
#
# The mod_mime_magic module allows the server to use various hints from the
# contents of the file itself to determine its type. The MIMEMagicFile
# directive tells the module where the hint definitions are located.
#
MIMEMagicFile conf/magic
#
# Customizable error responses come in three flavors:
# 1) plain text 2) local redirects 3) external redirects
#
# Some examples:
#ErrorDocument 500 "The server made a boo boo."
#ErrorDocument 404 /missing.html
#ErrorDocument 404 "/cgi-bin/missing_handler.pl"
#ErrorDocument 402 http://www.example.com/subscription_info.html
#
#
# EnableMMAP and EnableSendfile: On systems that support it,
# memory-mapping or the sendfile syscall may be used to deliver
# files. This usually improves server performance, but must
# be turned off when serving from networked-mounted
# filesystems or if support for these functions is otherwise
# broken on your system.
# Defaults if commented: EnableMMAP On, EnableSendfile Off
#
#EnableMMAP off
EnableSendfile on
# Supplemental configuration
#
# Load config files in the "/etc/httpd/conf.d" directory, if any.
IncludeOptional conf.d/*.conf
================================================
FILE: examples/apache-basic-playbook/templates/index.html.j2
================================================
Ansible: Automation for Everyone
{{ apache_test_message }}
================================================
FILE: examples/apache-role/README.md
================================================
# apache-role
This example demonstrates how roles are structured and used within a playbook. In communicating these concepts, this example is simply a refactoring of the `apache-basic-playbook` example into a role.
================================================
FILE: examples/apache-role/roles/apache-simple/defaults/main.yml
================================================
---
# defaults file for apache
apache_test_message: This is a test message
apache_webserver_port: 80
================================================
FILE: examples/apache-role/roles/apache-simple/handlers/main.yml
================================================
---
# handlers file for apache
- name: restart-apache-service
service:
name: httpd
state: restarted
================================================
FILE: examples/apache-role/roles/apache-simple/tasks/main.yml
================================================
---
# tasks file for apache
- name: Ensure httpd packages are present
yum:
name: "{{ item }}"
state: present
with_items: "{{ httpd_packages }}"
notify: restart-apache-service
- name: Ensure latest httpd.conf file is present
template:
src: httpd.conf.j2
dest: /etc/httpd/conf/httpd.conf
notify: restart-apache-service
- name: Ensure latest index.html file is present
template:
src: index.html.j2
dest: /var/www/html/index.html
- name: Ensure httpd service is started and enabled
service:
name: httpd
state: started
enabled: yes
================================================
FILE: examples/apache-role/roles/apache-simple/templates/httpd.conf.j2
================================================
#
# This is the main Apache HTTP server configuration file. It contains the
# configuration directives that give the server its instructions.
# See <URL:http://httpd.apache.org/docs/2.4/> for detailed information.
# In particular, see
# <URL:http://httpd.apache.org/docs/2.4/mod/directives.html>
# for a discussion of each configuration directive.
#
# Do NOT simply read the instructions in here without understanding
# what they do. They're here only as hints or reminders. If you are unsure
# consult the online docs. You have been warned.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path. If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so 'log/access_log'
# with ServerRoot set to '/www' will be interpreted by the
# server as '/www/log/access_log', where as '/log/access_log' will be
# interpreted as '/log/access_log'.
#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# Do not add a slash at the end of the directory path. If you point
# ServerRoot at a non-local disk, be sure to specify a local disk on the
# Mutex directive, if file-based mutexes are used. If you wish to share the
# same ServerRoot for multiple httpd daemons, you will need to change at
# least PidFile.
#
ServerRoot "/etc/httpd"
#
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80
Listen {{ apache_webserver_port }}
#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.
#
# Example:
# LoadModule foo_module modules/mod_foo.so
#
Include conf.modules.d/*.conf
#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
User apache
Group apache
# 'Main' server configuration
#
# The directives in this section set up the values used by the 'main'
# server, which responds to any requests that aren't handled by a
# definition. These values also provide defaults for
# any containers you may define later in the file.
#
# All of these directives may appear inside containers,
# in which case these default settings will be overridden for the
# virtual host being defined.
#
#
# ServerAdmin: Your address, where problems with the server should be
# e-mailed. This address appears on some server-generated pages, such
# as error documents. e.g. admin@your-domain.com
#
ServerAdmin root@localhost
#
# ServerName gives the name and port that the server uses to identify itself.
# This can often be determined automatically, but we recommend you specify
# it explicitly to prevent problems during startup.
#
# If your host doesn't have a registered DNS name, enter its IP address here.
#
#ServerName www.example.com:80
#
# Deny access to the entirety of your server's filesystem. You must
# explicitly permit access to web content directories in other
# blocks below.
#
AllowOverride none
Require all denied
#
# Note that from this point forward you must specifically allow
# particular features to be enabled - so if something's not working as
# you might expect, make sure that you have specifically enabled it
# below.
#
#
# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
#
DocumentRoot "/var/www/html"
#
# Relax access to content within /var/www.
#
AllowOverride None
# Allow open access:
Require all granted
# Further relax access to the default document root:
#
# Possible values for the Options directive are "None", "All",
# or any combination of:
# Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
#
# Note that "MultiViews" must be named *explicitly* --- "Options All"
# doesn't give it to you.
#
# The Options directive is both complicated and important. Please see
# http://httpd.apache.org/docs/2.4/mod/core.html#options
# for more information.
#
Options Indexes FollowSymLinks
#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# Options FileInfo AuthConfig Limit
#
AllowOverride None
#
# Controls who can get stuff from this server.
#
Require all granted
#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
DirectoryIndex index.html
#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
Require all denied
#
# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a
# container, that host's errors will be logged there and not here.
#
ErrorLog "logs/error_log"
MaxKeepAliveRequests 115
#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
LogLevel warn
<IfModule log_config_module>
    #
    # The following directives define some format nicknames for use with
    # a CustomLog directive (see below).
    #
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common

    <IfModule logio_module>
      # You need to enable mod_logio.c to use %I and %O
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>

    #
    # The location and format of the access logfile (Common Logfile Format).
    # If you do not define any access logfiles within a <VirtualHost>
    # container, they will be logged here.  Contrariwise, if you *do*
    # define per-<VirtualHost> access logfiles, transactions will be
    # logged therein and *not* in this file.
    #
    #CustomLog "logs/access_log" common

    #
    # If you prefer a logfile with access, agent, and referer information
    # (Combined Logfile Format) you can use the following directive.
    #
    CustomLog "logs/access_log" combined
</IfModule>
<IfModule alias_module>
    #
    # Redirect: Allows you to tell clients about documents that used to
    # exist in your server's namespace, but do not anymore. The client
    # will make a new request for the document at its new location.
    # Example:
    # Redirect permanent /foo http://www.example.com/bar

    #
    # Alias: Maps web paths into filesystem paths and is used to
    # access content that does not live under the DocumentRoot.
    # Example:
    # Alias /webpath /full/filesystem/path
    #
    # If you include a trailing / on /webpath then the server will
    # require it to be present in the URL.  You will also likely
    # need to provide a <Directory> section to allow access to
    # the filesystem path.

    #
    # ScriptAlias: This controls which directories contain server scripts.
    # ScriptAliases are essentially the same as Aliases, except that
    # documents in the target directory are treated as applications and
    # run by the server when requested rather than as documents sent to the
    # client.  The same rules about trailing "/" apply to ScriptAlias
    # directives as to Alias.
    #
    ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
</IfModule>

#
# "/var/www/cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
<Directory "/var/www/cgi-bin">
    AllowOverride None
    Options None
    Require all granted
</Directory>
<IfModule mime_module>
    #
    # TypesConfig points to the file containing the list of mappings from
    # filename extension to MIME-type.
    #
    TypesConfig /etc/mime.types

    #
    # AddType allows you to add to or override the MIME configuration
    # file specified in TypesConfig for specific file types.
    #
    #AddType application/x-gzip .tgz
    #
    # AddEncoding allows you to have certain browsers uncompress
    # information on the fly. Note: Not all browsers support this.
    #
    #AddEncoding x-compress .Z
    #AddEncoding x-gzip .gz .tgz
    #
    # If the AddEncoding directives above are commented-out, then you
    # probably should define those extensions to indicate media types:
    #
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz

    #
    # AddHandler allows you to map certain file extensions to "handlers":
    # actions unrelated to filetype. These can be either built into the server
    # or added with the Action directive (see below)
    #
    # To use CGI scripts outside of ScriptAliased directories:
    # (You will also need to add "ExecCGI" to the "Options" directive.)
    #
    #AddHandler cgi-script .cgi

    # For type maps (negotiated resources):
    #AddHandler type-map var

    #
    # Filters allow you to process content before it is sent to the client.
    #
    # To parse .shtml files for server-side includes (SSI):
    # (You will also need to add "Includes" to the "Options" directive.)
    #
    AddType text/html .shtml
    AddOutputFilter INCLUDES .shtml
</IfModule>
#
# Specify a default charset for all content served; this enables
# interpretation of all content as UTF-8 by default. To use the
# default browser choice (ISO-8859-1), or to allow the META tags
# in HTML content to override this choice, comment out this
# directive:
#
AddDefaultCharset UTF-8
#
# The mod_mime_magic module allows the server to use various hints from the
# contents of the file itself to determine its type. The MIMEMagicFile
# directive tells the module where the hint definitions are located.
#
MIMEMagicFile conf/magic
#
# Customizable error responses come in three flavors:
# 1) plain text 2) local redirects 3) external redirects
#
# Some examples:
#ErrorDocument 500 "The server made a boo boo."
#ErrorDocument 404 /missing.html
#ErrorDocument 404 "/cgi-bin/missing_handler.pl"
#ErrorDocument 402 http://www.example.com/subscription_info.html
#
#
# EnableMMAP and EnableSendfile: On systems that support it,
# memory-mapping or the sendfile syscall may be used to deliver
# files. This usually improves server performance, but must
# be turned off when serving from networked-mounted
# filesystems or if support for these functions is otherwise
# broken on your system.
# Defaults if commented: EnableMMAP On, EnableSendfile Off
#
#EnableMMAP off
EnableSendfile on
# Supplemental configuration
#
# Load config files in the "/etc/httpd/conf.d" directory, if any.
IncludeOptional conf.d/*.conf
================================================
FILE: examples/apache-role/roles/apache-simple/templates/index.html.j2
================================================
Ansible: Automation for Everyone
{{ apache_test_message }}
================================================
FILE: examples/apache-role/roles/apache-simple/vars/main.yml
================================================
---
# vars file for apache
httpd_packages:
  - httpd
  - mod_wsgi
================================================
FILE: examples/apache-role/site.yml
================================================
---
- name: Ensure apache is installed and started via role
  hosts: web
  become: yes

  roles:
    - apache-simple
================================================
FILE: examples/apache-simple-playbook/README.md
================================================
# apache-simple-playbook
This example is designed to be a quick introduction to playbook structure that can easily fit on one slide of a deck demonstrating how Ansible works.

This playbook ensures the hosts in a group called "web" have the Apache web server (httpd) present and started, serving a static custom home page. The hosts are presumed to be running a Red Hat family Linux distribution.

The playbook is intended for demonstration and instructional purposes. In reality, it is too simplistic to be truly useful. See `examples/apache-basic-playbook` for a more complete example using Apache.
================================================
FILE: examples/apache-simple-playbook/files/index.html
================================================
Ansible: Automation for Everyone
================================================
FILE: examples/apache-simple-playbook/site.yml
================================================
---
- name: Ensure apache is installed and started
  hosts: web
  become: yes

  tasks:
    - name: Ensure httpd package is present
      yum:
        name: httpd
        state: present

    - name: Ensure latest index.html file is present
      copy:
        src: files/index.html
        dest: /var/www/html/

    - name: Ensure httpd is started
      service:
        name: httpd
        state: started
================================================
FILE: examples/cloud-aws/README.md
================================================
# cloud-aws
This intermediate-level Ansible playbook example demonstrates the common tasks for provisioning EC2 server instances into an Amazon Web Services VPC. Once provisioned, this example uses an existing role (apache-simple from `examples/apache-role`) to deploy and set up an application service on the instances in the stack.

This example demonstrates a few other best practices and intermediate techniques:

* **Compound Playbook.** This example shows the use of a playbook, `site.yml`, combining other playbooks, `provision.yml` and `setup.yml`, using play-level `include` statements. Splitting provisioning and configuration plays into separate files is a recommended best practice. By doing so, users have more flexibility in what they run without needing tags or wrapping blocks of tasks with conditionals. This separation also promotes reuse by enabling the mixing and matching of various playbooks.
* **Controlling Debug Message Display.** There is a debug task in `provision.yml` using the optional `verbosity` parameter. Avoiding the unnecessary display of debugging messages avoids confusion and is just good hygiene. In this implementation, the debug message won't be displayed unless the playbook is run in verbose mode level 1 or greater.
## Requirements
To use this example you will need to have an AWS account setup and properly configured on your control machine. You will also need the boto, boto3 and botocore modules installed. See the [Ansible AWS Guide](http://docs.ansible.com/ansible/guide_aws.html) for more details.
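The SDK libraries are control-machine dependencies, so they can be installed ahead of time. Installing them directly with `pip install boto boto3 botocore` works, or, as a minimal sketch, a hypothetical local helper play (not part of this example) could do the same:

```yaml
# install_aws_deps.yml -- hypothetical helper play, not part of this example.
# Runs only on the control machine and installs the AWS SDK libraries
# that the ec2_* modules in provision.yml require.
---
- name: Ensure AWS SDK libraries are present on the control machine
  hosts: localhost
  connection: local
  gather_facts: false

  tasks:
    - name: Ensure boto, boto3 and botocore are installed
      pip:
        name:
          - boto
          - boto3
          - botocore
        state: present
```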
## Variables
Before running this playbook example, you should know what variables it uses and how they affect the execution of the AWS provisioning process.
### Required
The following variables must be set for this example to run properly.
* `ec2_stack_name`: A unique name for your application stack. The stack name is used as a prefix for many other resources that will get created using this playbook.
* `ec2_region`: A valid AWS region name such as "us-east-2" or "ap-southeast-1" or "eu-west-2."
* `ec2_key_name`: An existing AWS keyname from your account to use when provisioning instances.
### Optional
There are a few other variables present whose value you can override if needed.
* `ec2_az`: The EC2 availability zone to use in the region. Default: a
* `ec2_vpcidr`: The VPC CIDR value. Default: 10.251.0.0/16
* `ec2_subnetcidr`: The VPC Subnet CIDR value. Default: 10.251.1.0/24
* `ec2_exact_count`: The number of EC2 instances that should be present. Using the `ec2` module, this playbook will create and terminate instances as needed. Default: 1
* `ec2_private_key_file`: The path to the private key associated with the `ec2_key_name` used to launch the instances if you need it. If undefined, it is omitted from dynamic inventory group "web".
## Usage
To execute this example, use the `site.yml` playbook and pass in all the required variables:
```
ansible-playbook site.yml -e "ec2_stack_name=lightbulb ec2_region=us-east-2 ec2_key_name=engima"
```
Using verbose mode of any type will emit a debugging message that displays information about the provisioned instances in the stack.
```
ansible-playbook site.yml -e "ec2_stack_name=lightbulb ec2_region=us-east-2 ec2_key_name=engima" -v
```
Remember you can put your variables in a YAML formatted file and feed it into the play with the @ operator.
```
ansible-playbook site.yml -e @extra_vars.yml
```
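For example, an `extra_vars.yml` holding this example's required variables might look like the following sketch (the values shown are the sample values from the commands above; substitute your own):

```yaml
# extra_vars.yml -- sample values only
ec2_stack_name: lightbulb
ec2_region: us-east-2
ec2_key_name: engima
```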
### One More Thing
Since we are reusing the apache-simple role from `examples/apache-role`, we can override the default value of `apache_test_message` to change the message that gets inserted into the generated home page by the role.
```
ansible-playbook site.yml -e "ec2_stack_name=lightbulb ec2_region=us-east-2 ec2_key_name=engima apache_test_message=Hello_World"
```
================================================
FILE: examples/cloud-aws/provision.yml
================================================
---
- name: Ensure servers are provisioned in vpc
  hosts: localhost
  gather_facts: false

  vars:
    # requires ec2_stack_name, ec2_region and ec2_key_name
    ec2_exact_count: 1
    ec2_vpcidr: 10.251.0.0/16
    ec2_subnetcidr: 10.251.1.0/24
    ec2_az: a

  tasks:
    - name: Ensure vpc is present
      ec2_vpc_net:
        state: present
        region: "{{ ec2_region }}"
        cidr_block: "{{ ec2_vpcidr }}"
        name: "{{ ec2_stack_name }}-vpc"
        tags:
          tool: ansible
      register: network

    - name: Ensure vpc subnet is present
      ec2_vpc_subnet:
        region: "{{ ec2_region }}"
        state: present
        cidr: "{{ ec2_subnetcidr }}"
        az: "{{ ec2_region }}{{ ec2_az }}"
        resource_tags:
          tool: ansible
          Name: "{{ ec2_stack_name }}-subnet-{{ ec2_az }}"
        vpc_id: "{{ network.vpc.id }}"
      register: vpc_subnet

    - name: Ensure vpc internet gateway is present
      ec2_vpc_igw:
        region: "{{ ec2_region }}"
        vpc_id: "{{ network.vpc.id }}"
        state: present
      register: igw

    - name: Ensure vpc public subnet route table is present
      ec2_vpc_route_table:
        region: "{{ ec2_region }}"
        vpc_id: "{{ network.vpc.id }}"
        tags:
          Name: "{{ ec2_stack_name }}-public"
        subnets:
          - "{{ vpc_subnet.subnet.id }}"
        routes:
          - dest: 0.0.0.0/0
            gateway_id: "{{ igw.gateway_id }}"

    - name: Ensure vpc security group is present
      ec2_group:
        name: "{{ ec2_stack_name }}-webservers"
        region: "{{ ec2_region }}"
        description: SSH and HTTP/HTTPS
        vpc_id: "{{ network.vpc.id }}"
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 80
            to_port: 80
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 443
            to_port: 443
            cidr_ip: 0.0.0.0/0

    - name: Search for the latest centos7 ami
      ec2_ami_find:
        owner: "410186602215"
        region: "{{ ec2_region }}"
        name: "CentOS Linux 7 x86_64 HVM EBS*"
      register: find_results

    - name: Get exact count of stack ec2 instances running
      ec2:
        key_name: "{{ ec2_key_name }}"
        group: "{{ ec2_stack_name }}-webservers"
        volumes:
          - device_name: /dev/sda1
            volume_type: gp2
            volume_size: 8
            delete_on_termination: true
        vpc_subnet_id: "{{ vpc_subnet.subnet.id }}"
        instance_type: t2.micro
        image: "{{ find_results.results[0].ami_id }}"
        wait: true
        region: "{{ ec2_region }}"
        exact_count: "{{ ec2_exact_count }}"
        count_tag:
          Count: "{{ ec2_stack_name }}"
        instance_tags:
          Name: "{{ ec2_stack_name }}"
          Count: "{{ ec2_stack_name }}"
        assign_public_ip: true
      register: ec2

    - name: Display stack instances info
      debug:
        var: ec2
        verbosity: 1

    - name: Ensure all instances are ready
      wait_for:
        port: 22
        host: "{{ item.public_ip }}"
        search_regex: OpenSSH
      with_items: "{{ ec2.tagged_instances }}"

    - name: Pause to let cloud init complete
      pause:
        seconds: 90

    - name: Build group of instances
      add_host:
        name: "{{ item.public_dns_name }}"
        groups: web
        ansible_user: centos
        ansible_host: "{{ item.public_ip }}"
        ansible_ssh_private_key_file: "{{ ec2_private_key_file | default(omit) }}"
      with_items: "{{ ec2.tagged_instances }}"
================================================
FILE: examples/cloud-aws/setup.yml
================================================
---
- name: Ensure apache server is present and running with example home page
  hosts: web
  become: yes

  roles:
    - apache-simple
================================================
FILE: examples/cloud-aws/site.yml
================================================
---
- include: provision.yml
- include: setup.yml
================================================
FILE: examples/nginx-basic-playbook/README.md
================================================
# nginx-basic-playbook
This example represents a basic yet complete playbook approximating the typical tasks used to deploy and configure a single application service (Nginx) on a host running a Red Hat family Linux distribution.

This playbook ensures the hosts in a group called "web" have the Nginx web server along with uWSGI and other build dependencies present, and that nginx is started. The play also generates a basic configuration and custom home page using templates. If the configuration is changed, a handler task will execute to restart the nginx service.

This example assumes that the EPEL repo has already been enabled on each host. That step was intentionally left out to keep this example focused.

The playbook is also the solution to the primary assignment and one of the extra credit assignments in `workshop/basic_playbook`.
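If you do need to enable EPEL yourself, a minimal play such as the following sketch would do it on CentOS hosts (on CentOS the `epel-release` package is available from the base repos; RHEL hosts need the EPEL release RPM from the Fedora project instead):

```yaml
# A minimal sketch, not part of this example: enable EPEL on the "web" hosts.
---
- name: Ensure the EPEL repo is enabled
  hosts: web
  become: yes

  tasks:
    - name: Ensure epel-release package is present
      yum:
        name: epel-release
        state: present
```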
================================================
FILE: examples/nginx-basic-playbook/site.yml
================================================
# In keeping things simple, this example assumes the epel repo is enabled on each node
---
- name: Ensure nginx is installed and started with wsgi
  hosts: web
  become: yes

  vars:
    nginx_packages:
      - nginx
      - python-pip
      - python-devel
      - gcc
    nginx_test_message: This is a test message
    nginx_webserver_port: 80

  tasks:
    - name: Ensure nginx packages are present
      yum:
        name: "{{ item }}"
        state: present
      with_items: "{{ nginx_packages }}"
      notify: restart-nginx-service

    - name: Ensure uwsgi package is present
      pip:
        name: uwsgi
        state: present
      notify: restart-nginx-service

    - name: Ensure latest default.conf is present
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
        backup: yes
      notify: restart-nginx-service

    - name: Ensure latest index.html is present
      template:
        src: templates/index.html.j2
        dest: /usr/share/nginx/html/index.html

    - name: Ensure nginx service is started and enabled
      service:
        name: nginx
        state: started
        enabled: yes

    # smoke test that nginx came up and is serving home page
    - name: Ensure proper response from localhost can be received
      uri:
        url: "http://localhost:{{ nginx_webserver_port }}/"
        return_content: yes
      register: response
      until: 'nginx_test_message in response.content'
      retries: 10
      delay: 1

  handlers:
    - name: restart-nginx-service
      service:
        name: nginx
        state: restarted
================================================
FILE: examples/nginx-basic-playbook/templates/index.html.j2
================================================
Ansible: Automation for Everyone
{{ nginx_test_message }}
================================================
FILE: examples/nginx-basic-playbook/templates/nginx.conf.j2
================================================
# Based on nginx version: nginx/1.10.1
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 115;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen {{ nginx_webserver_port }} default_server;
        listen [::]:{{ nginx_webserver_port }} default_server;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2 default_server;
# listen [::]:443 ssl http2 default_server;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
================================================
FILE: examples/nginx-remove-playbook/README.md
================================================
# nginx-remove-playbook
This example demonstrates how you would typically shut down and remove an application service (nginx) from the hosts in a group called "web". The hosts are presumed to be running a Red Hat family Linux distribution.
This example was specifically developed as the solution to one of the extra credit assignments in `workshop/basic_playbook`.
================================================
FILE: examples/nginx-remove-playbook/site.yml
================================================
---
- name: Ensure nginx with wsgi is removed
  hosts: web
  become: yes

  tasks:
    - name: Ensure nginx service is stopped
      service:
        name: nginx
        state: stopped
      ignore_errors: yes

    - name: Ensure nginx package is absent
      yum:
        name: nginx
        state: absent

    - name: Ensure uwsgi package is absent
      pip:
        name: uwsgi
        state: absent

    - name: Ensure files created by nginx-simple are absent
      file:
        name: "{{ item }}"
        state: absent
      with_items:
        - /etc/nginx/nginx.conf
        - /usr/share/nginx/html/index.html
================================================
FILE: examples/nginx-role/README.md
================================================
# nginx-roles
This example demonstrates how roles are structured and used in a playbook. To communicate these concepts, this example is simply a refactoring of the `nginx-basic-playbook` and `nginx-remove-playbook` examples into a role.

This example requires Ansible v2.2 or later. It uses the `include_role` module that was introduced in that version.

This example also assumes that the EPEL repo has already been enabled on each host. That step was intentionally left out to keep this example focused.

This example is also the solution to the primary assignment and extra credit assignments in `workshop/roles`.
================================================
FILE: examples/nginx-role/remove.yml
================================================
---
- name: Removes nginx and uwsgi
  hosts: web
  become: yes

  tasks:
    - name: Run remove tasks from nginx-simple role
      include_role:
        name: nginx-simple
        tasks_from: remove
================================================
FILE: examples/nginx-role/roles/nginx-simple/defaults/main.yml
================================================
---
# defaults file for nginx
nginx_test_message: This is a test message
nginx_webserver_port: 80
================================================
FILE: examples/nginx-role/roles/nginx-simple/handlers/main.yml
================================================
---
# handlers file for nginx
- name: restart-nginx-service
  service:
    name: nginx
    state: restarted
================================================
FILE: examples/nginx-role/roles/nginx-simple/tasks/main.yml
================================================
---
# tasks file for nginx
- name: Ensure nginx packages are present
  yum:
    name: "{{ item }}"
    state: present
  with_items: "{{ nginx_packages }}"
  notify: restart-nginx-service

- name: Ensure uwsgi package is present
  pip:
    name: uwsgi
    state: present
  notify: restart-nginx-service

- name: Ensure latest default.conf is present
  template:
    src: templates/nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    backup: yes
  notify: restart-nginx-service

- name: Ensure latest index.html is present
  template:
    src: templates/index.html.j2
    dest: /usr/share/nginx/html/index.html

- name: Ensure nginx service is started and enabled
  service:
    name: nginx
    state: started
    enabled: yes

# smoke test that nginx came up and is serving home page
- name: Ensure proper response from localhost is received
  uri:
    url: http://localhost/
    return_content: yes
  register: response
  until: 'nginx_test_message in response.content'
  retries: 10
  delay: 1
================================================
FILE: examples/nginx-role/roles/nginx-simple/tasks/remove.yml
================================================
---
# tasks file that removes nginx and uwsgi
# derived from examples/nginx-remove-playbook
- name: Ensure nginx service is stopped
  service:
    name: nginx
    state: stopped
  ignore_errors: yes

- name: Ensure nginx package is removed
  yum:
    name: nginx
    state: absent

- name: Ensure uwsgi is removed
  pip:
    name: uwsgi
    state: absent

- name: Clean up files created by nginx-simple
  file:
    name: "{{ item }}"
    state: absent
  with_items:
    - /etc/nginx/nginx.conf
    - /usr/share/nginx/html/index.html
================================================
FILE: examples/nginx-role/roles/nginx-simple/templates/index.html.j2
================================================
Ansible: Automation for Everyone
{{ nginx_test_message }}
================================================
FILE: examples/nginx-role/roles/nginx-simple/templates/nginx.conf.j2
================================================
# Based on nginx version: nginx/1.10.1
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 115;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen {{ nginx_webserver_port }} default_server;
        listen [::]:{{ nginx_webserver_port }} default_server;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2 default_server;
# listen [::]:443 ssl http2 default_server;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
================================================
FILE: examples/nginx-role/roles/nginx-simple/vars/main.yml
================================================
---
# vars file for nginx
nginx_packages:
  - nginx
  - python-pip
  - python-devel
  - gcc
================================================
FILE: examples/nginx-role/site.yml
================================================
# In keeping things simple, this example assumes the epel repo is enabled on each node
---
- name: Ensure nginx is installed and started via role
  hosts: web
  become: yes

  roles:
    - nginx-simple
================================================
FILE: examples/nginx-simple-playbook/README.md
================================================
# nginx-simple-playbook
This example is designed to be a quick introduction to playbook structure that can easily fit on one slide of a deck demonstrating how Ansible works.

This playbook ensures the hosts in a group called "web" have the Nginx web server present and started, serving a static custom home page. The hosts are presumed to be running a Red Hat family Linux distribution.

This example assumes that the EPEL repo has already been enabled on each host. That step was intentionally left out to keep this example simple and focused.

The playbook is intended for demonstration and instructional purposes. In reality, it is too simplistic to be truly useful. See `examples/nginx-basic-playbook` for a more complete example using nginx.
================================================
FILE: examples/nginx-simple-playbook/files/index.html
================================================
Ansible: Automation for Everyone
================================================
FILE: examples/nginx-simple-playbook/site.yml
================================================
# In keeping things simple, this example assumes the epel repo is enabled on each node
---
- name: Ensure nginx is installed and started
  hosts: web
  become: yes

  tasks:
    - name: Ensure nginx package is present
      yum:
        name: nginx
        state: present

    - name: Ensure latest index.html is present
      copy:
        src: files/index.html
        dest: /usr/share/nginx/html

    - name: Ensure nginx service is started
      service:
        name: nginx
        state: started
================================================
FILE: facilitator/README.md
================================================
# The Ansible Lightbulb Facilitator's Guide
The Ansible Lightbulb project is an effort to provide reference and educational content for rapidly communicating and learning Ansible and Ansible Tower essentials. Lightbulb has been designed as a multi-purpose toolkit for effectively demonstrating Ansible's capabilities or providing informal workshop training in various forms -- instructor-led, hands-on or self-paced.

This guide covers the different ways this content can be assembled for various scenarios.
## Getting Started
If you haven't read it already, start with the [Ansible Lightbulb README](../README.md). There you will find what tools are in the toolkit and the requirements for using this content.
We also recommend reading about [the Lightbulb philosophy](../PHILOSOPHY.md) for effectively communicating and delivering informal training on Ansible in the same spirit for which Ansible has become known and highly successful.
If you are already somewhat familiar with automating with Ansible, we recommend that you also familiarize yourself with the best practices this content incorporates. See [this blog post](https://www.ansible.com/blog/ansible-best-practices-essentials) and [this video presentation from AnsibleFest Brooklyn (October 2016)](https://www.ansible.com/ansible-best-practices).
The Lightbulb project provides a lab provisioner tool for creating an environment to present and work with Lightbulb content. Currently the provisioner only supports provisioning lab environments using Amazon Web Services. (A Vagrant option will be developed particularly for self-paced learning.) See the [Ansible AWS Lab Provisioner README](../tools/aws_lab_setup/README.md) for details on requirements, setup and configuration.
## Lightbulb Modules
Lightbulb modules are collections of this content that have been bundled to communicate and/or teach automating with Ansible based on objectives and scenarios. These modules are the "use cases" for which all Lightbulb content was specifically developed. The project has endeavored to be modular and flexible enough to make other uses possible.
### Live Demonstration
**Scenario**: Quickly demonstrating how Ansible or Ansible Tower works to an audience that has little to no knowledge on the topic.
**Run Time**: ~5-10 minutes to briefly explain and execute one example.
The content in `examples/` can be used to effectively demonstrate Ansible's features to supplement a presentation. These examples have been carefully curated to communicate how Ansible works in a simple and focused way and are easy to demonstrate live and in-action.
For a basic linux server automation demonstration we recommend `examples/apache-basic-playbook`. This example is a single playbook you can put up on a screen with limited scrolling and virtually no file system navigation. For a slightly more sophisticated linux server example, use the Nginx variant in `examples/nginx-basic-playbook`.
### Ansible Essentials Workshop
**Scenario**: Providing instruction on the core essentials to students that have little to no experience automating with Ansible on linux servers.
**Run Time**: 2 Hours
This module is designed to provide some valuable instruction with a limited amount of time and/or resources. It is also ideal for addressing large audiences where the overhead of provisioning and supporting the use of individual labs is not feasible. This module is also ideal for remote instruction like webinars.
#### Presentation Deck
For basic Linux server automation, use `decks/ansible-essentials.html` and only navigate to the right. Going down will take you through the more in-depth content and workshops of the Ansible Essentials Hands-on Workshop.
#### Examples & Workshops
##### Examples
* apache-simple-playbook
* apache-basic-playbook
* apache-role
Besides walking students through these examples, the facilitator of this module will also be asked to install Ansible using pip and demonstrate some ad hoc command usage. Installing Ansible is optional though highly recommended, as it shows how easy it is to get started.
An alternative option for delivering this module to a more sophisticated audience is to use the Nginx variants of the Apache examples instead:
* nginx-simple-playbook
* nginx-basic-playbook
* nginx-role
##### Workshops
None.
#### Lab Environment
Only the facilitator of the workshop needs a lab environment, making this module ideal for scenarios where time and facilities are limited. We recommend the instructor's lab environment be made up of a single "control" host in a group of the same name and 3 CentOS or RHEL Linux hosts in a group called "web". The examples used by this module require very few system resources to run. Something akin to a "micro" instance is all that is needed for this particular module. You will need to consider the requirements of other modules if you are going to chain this one with others, though.
### Ansible Essentials Hands-On Workshop
**Scenario**: Providing hands-on instruction on the core essentials to students with limited experience automating with Ansible on Linux servers.
**Run Time**: 4 Hours
This module is designed to provide students with direct introductory instruction and guidance to beginning to automate with Ansible. It is the starting point for students intent on becoming more proficient with Ansible through other Lightbulb modules and their own usage.
It is ideal for addressing small to medium size audiences of students that are committed to learning how to automate with Ansible.
This module is the 2-hour workshop with added depth and hands-on exercises. Delivering this module means providing each student with a lab environment and, depending on the size and skill level of your audience, having one or more assistants to help students during the workshops.
#### Presentation Deck
For basic Linux server automation, use `decks/ansible-essentials.html`, navigating down when given the option.
#### Examples & Workshops
Unlike the 2-hour non-interactive Ansible Essentials Workshop, this module makes use of both the Apache and Nginx sets of examples. The Apache examples are used by the facilitator to demonstrate and walk through with the class. The Nginx examples are the solutions to the workshops students will be assigned.
##### Examples
* apache-simple-playbook (optional)
* nginx-simple-playbook (optional)
* apache-basic-playbook
* nginx-basic-playbook
* apache-role
* nginx-role
##### Workshops
* ansible_install (optional though highly recommended)
* adhoc_commands
* simple_playbook (optional)
* basic_playbook
* roles
When pressed for time or dealing with a more technical audience you can opt to skip the "simple" examples and workshops. The "basic" ones cover the same topics and more.
When applicable, most workshops provide "extra credit" assignments for students that are more advanced or fast-learners.
Facilitators can opt to pre-install Ansible on each control machine for the students, allowing them to skip that workshop. We recommend against that, though, to show how easy it is to get started using Ansible. It should take 10 minutes or less.
#### Lab Environment
This lab requires each student have their own lab environment in order to perform the workshop assignments. Use the Lightbulb provisioner tool to provision these student labs in advance.
We generally recommend **NOT** using Vagrant with groups. In our experience, too much time gets spent helping students install, configure and troubleshoot Vagrant along with their lab environment that could be better spent teaching Ansible.
Sharing a single lab environment amongst a group of students simply will not work.
We recommend the lab environments be made up of a single "control" host in a group of the same name and 3 CentOS or RHEL Linux hosts in a group called "web". The examples used by this module require very few system resources to run. Something akin to a "micro" instance is all that is needed. You will need to consider the requirements of other modules if you are going to chain this one with others, though.
### Introduction to Ansible Tower Workshop
**Scenario**: Providing basic instruction to students with limited experience using Ansible Tower on how it works and can be used to facilitate and manage Ansible automation in their organization.
**Run Time**: 1 Hour
**Prerequisites**: Ansible Essentials Workshop or Ansible Essentials Hands-On Workshop
This module is designed to provide some valuable instruction on using Ansible Tower with a limited amount of time and/or resources. It is also ideal for addressing large audiences where the overhead of provisioning and supporting the use of individual labs required to execute hands-on workshops is not feasible. This module is also ideal for remote instruction like webinars.
**NOTE**: The first rule of learning Ansible Tower is to learn Ansible core first. The second rule is to see the first rule. Don't start here.
#### Presentation Deck
For automating with Ansible Tower use `decks/intro-to-ansible-tower.html` and only navigate to the right. Going down will take you through the more in-depth content and workshops of the Introduction to Ansible Tower Hands-On Workshop.
#### Examples & Workshops
##### Examples
The example(s) here are mostly at the facilitator's discretion. We recommend the apache-role or nginx-role example since those are the most sophisticated ones students are exposed to in the Essentials workshop that should have preceded this module.
##### Workshops
None.
#### Lab Environment
Only the facilitator of the workshop needs a lab environment, making this module ideal for scenarios where time and facilities are limited. We recommend the instructor's lab environment be made up of a single "control" host in a group of the same name and 3 CentOS or RHEL Linux hosts in a group called "web". While the web hosts need only minimal resources, the control instance requires additional memory and storage to accommodate the Ansible Tower installation.
### Introduction to Ansible Tower Hands-On Workshop
**Scenario**: Providing hands-on instruction to students with limited experience with Ansible Tower on how it works and how it can be used to facilitate and manage Ansible automation in their organization.
**Run Time**: 2 Hours
**Prerequisites**: Ansible Essentials Workshop or Ansible Essentials Hands-On Workshop
This module is designed to provide some valuable instruction on using Ansible Tower with a limited amount of time and/or resources. It is also ideal for addressing large audiences where the overhead of provisioning and supporting the use of individual labs required to execute hands-on workshops is not feasible. This module is also ideal for remote instruction like webinars.
**NOTE**: The first rule of learning Ansible Tower is to learn Ansible core first. The second rule is to see the first rule. Don't start here.
#### Presentation Deck
For a basic linux server automation use `decks/intro-to-ansible-tower.html` navigating down when given the option.
#### Examples & Workshops
##### Examples
The example(s) here are mostly at the facilitator's discretion. We recommend the apache-role or nginx-role example since those are the most sophisticated ones students are exposed to in the Essentials workshop that should have preceded this module.
##### Workshops
* tower_install (optional though highly recommended)
* tower_basic_setup
When applicable, most workshops provide "extra credit" assignments for students that are more advanced or fast-learners.
Facilitators can opt to pre-install Ansible Tower on each control machine for the students, allowing them to skip that workshop.
#### Lab Environment
This lab requires each student to have their own lab environment in order to perform the workshop assignments. Use the Lightbulb provisioner tool to provision these student labs in advance.
We generally recommend **NOT** using Vagrant with groups. In our experience too much time gets spent helping students install, configure and troubleshoot Vagrant along with their lab environment that could be better spent on Ansible teaching.
Sharing a single lab environment amongst a group of students simply will not work.
We recommend the lab environments be made up of a single "control" host in a group of the same name and 3 CentOS or RHEL Linux hosts in a group called "web". While the web hosts need only minimal resources, the control instance requires additional memory and storage to accommodate the Ansible Tower installation.
### Introduction to Ansible and Ansible Tower Workshop
**Scenario**: Providing an in-depth overview to an audience with limited Ansible and Ansible Tower experience on how they can be used to facilitate and manage Ansible automation in their organization.
**Run Time**: 3 Hours
This module is designed to provide basic instruction on using Ansible and Ansible Tower with a limited amount of time and/or resources.
This module is the combination of two modules:
* Ansible Essentials
* Introduction to Ansible Tower
Refer to the documentation for those modules and deliver them back to back.
### Introduction to Ansible and Ansible Tower Hands-On Workshop
**Scenario**: Providing hands-on instruction on the core essentials and Ansible Tower to students with limited experience automating with Ansible on Linux servers.
**Run Time**: 6 Hours
This module is designed to provide hands-on interactive instruction on using Ansible and Ansible Tower.
This module is the combination of two modules:
* Ansible Essentials Hands-On
* Introduction to Ansible Tower Hands-On
Refer to the documentation for those modules and deliver them back to back.
================================================
FILE: facilitator/solutions/adhoc_commands.md
================================================
# Workshop: Ad-Hoc Commands
This brief exercise demonstrates Ansible in action at its most basic level. Through ad-hoc commands, students are exposed to Ansible modules and their usage, which they will apply to their understanding of tasks and playbooks. This exercise also begins to expose students to the concepts of Ansible facts and inventory.
This workshop is also a good way to verify their lab environments are properly configured before going forward.
## Solution
The following commands are the solution for the workshop and extra credit assignments.
```bash
ansible all -m ping
ansible all -m setup
ansible web -b -m yum -a "name=epel-release state=present"
ansible web -b -m yum -a "name=https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm state=present"
# Extra credit:
ansible all -m setup -a "gather_subset=virtual"
ansible all -m setup -a "filter=ansible_fqdn*"
ansible all -m command -a "uptime"
ansible all -m ping --limit '!control'
```
### NOTE
You will need to make sure each student successfully installed the EPEL repo here. Later workshop assignments ask students to install Nginx. Without EPEL enabled on each web host `yum` will not be able to locate the package.
================================================
FILE: facilitator/solutions/ansible_install.md
================================================
# Workshop: Installing Ansible
This brief exercise demonstrates how easy it can be to install and configure Ansible and begin automating.
NOTE: If and how you conduct this workshop depends on how you configured the provisioner to run. To save time, you can have the lab control machines set up with Ansible already installed. If so, you can skip this workshop.
We use pip because it's OS independent and always has the absolute latest stable release of Ansible. Other package repos such as EPEL can sometimes lag behind for days or even weeks. For this reason, Ansible by Red Hat recommends using pip.
## Solution
Each student will need to SSH into their "control" machine using the host IP, user and password provided to them in their lab environment inventory file.
From that control machine:
```bash
sudo yum install -y epel-release
sudo yum install -y python-pip
sudo pip install ansible
ansible --version
ansible --help
mkdir -p ~/.ansible/retry-files
vi ~/.ansible.cfg
# add forks and retry_files_save_path
```
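The edits hinted at in the comment above would produce an `~/.ansible.cfg` along these lines (the fork count shown is illustrative; pick a value appropriate for your lab):

```ini
[defaults]
# run against more hosts in parallel than the default of 5
forks = 20
# keep .retry files out of the working directory
retry_files_save_path = ~/.ansible/retry-files
```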
## NOTE
Depending on how the lab provisioner was run, students may already have Ansible on their control machines. You can still have them run Ansible with the `--version` and `--help` options to verify they can SSH into their control box and that Ansible is indeed present.
Whatever the case, you should make sure each student has the core Ansible software before proceeding.
================================================
FILE: facilitator/solutions/basic_playbook.md
================================================
# Workshop: Basic Playbook
Here students are tasked with developing their first complete playbook. The assignment approximates the tasks they will typically need to perform in order to deploy and configure a single application service using Nginx.
The "simple" Lightbulb examples, which students may or may not have done depending on the agenda, provide a quick introduction to playbook structure to give them a feel for how Ansible works, but in practice they are too simplistic to be useful.
## Tips
Don't rush this workshop. Give students ample time to work through it themselves to completion. This workshop covers the essential concepts and features of effective automation with Ansible and builds up the core skills students need for further exploration.
## Solution
The solution to this workshop is `nginx-basic-playbook` under `examples/`.
The `nginx-basic-playbook` example also includes the first extra credit assignment via the last task, which uses the `uri` module. We intentionally only check that the web server is responding with the home page we generated by searching for the value of `nginx_test_message`. We are not checking whether the server is accessible to the outside world, to keep things basic.
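As a sketch of the kind of check described above (the actual task lives in `examples/nginx-basic-playbook/site.yml`; variable names follow the workshop conventions, and the URL shown here is illustrative):

```yaml
# Verify the web server responds with the generated home page.
# Fails the play if the test message is absent from the response body.
- name: check that the home page contains our test message
  uri:
    url: "http://localhost:{{ nginx_webserver_port }}/"
    return_content: yes
  register: homepage
  failed_when: nginx_test_message not in homepage.content
```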
The solution to the second extra credit assignment is `nginx-remove-playbook` under `examples/`.
================================================
FILE: facilitator/solutions/roles.md
================================================
# Workshop: Roles
Here students are tasked with refactoring their previous work from the Basic Playbook workshop into a role and modifying their playbook accordingly. We intentionally avoid introducing any new tasks or functionality otherwise. The objective here is to focus students specifically on how roles are structured and developed.
You should emphasize the value of roles in better organizing playbooks as they grow in sophistication and making Ansible automation more portable and reusable than a basic playbook.
## Defaults vs. Vars
A common question is: "When do I put a variable in `defaults/` instead of `vars/`?" It depends on how the variable is used in the context of the role. It all comes down to variable precedence.
If a variable holds data that someone using the role may want to override in any number of ways, then it is best stored under `defaults/`. If a variable holds data that is internal to the function of the role and rarely (or, in practice, never) should be modified, then it is best stored under `vars/`, where it is much harder to override from other variable sources.
Applying these guidelines to this assignment, `nginx_packages` would go in `vars/` while `nginx_test_message` and `nginx_webserver_port` would go in `defaults/`.
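Applied to the role layout, that split looks like the following (the values shown are illustrative, not the exact ones in the solution example):

```yaml
# roles/nginx-simple/defaults/main.yml -- safe for role users to override
nginx_test_message: This is a test message
nginx_webserver_port: 80
```

```yaml
# roles/nginx-simple/vars/main.yml -- internal to the role, rarely changed
nginx_packages:
  - nginx
```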
## NOTE
The extra credit assignment should use the `include_role` module that was introduced in version 2.2. Students working with older versions of Ansible will have trouble completing the assignment.
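A minimal sketch of what the extra credit playbook might look like, assuming the role and task file names used by the solution example:

```yaml
# Remove Nginx by calling the role's remove.yml task file
# instead of its default tasks/main.yml.
- name: remove nginx from web servers
  hosts: web
  become: yes
  tasks:
    - include_role:
        name: nginx-simple
        tasks_from: remove
```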
## Solution
The solution to this workshop is `nginx-role` under `examples/`. This example also includes the solution to the extra credit assignment. See `remove.yml` and `roles/nginx-simple/tasks/remove.yml`.
================================================
FILE: facilitator/solutions/simple_playbook.md
================================================
# Workshop: Simple Playbook
This assignment provides students with a quick introduction to playbook structure to give them a feel for how Ansible works, but in practice it is too simplistic to be useful.
## Tips
This workshop is a subset of the basic_playbook workshop students will do next. Depending on the technical aptitude of students and the time available, you can opt to skip this workshop assignment. Everything here, and a lot more, will be covered in the next workshop assignment.
## Solution
The solution to this workshop is `nginx-simple-playbook` under `examples/`.
================================================
FILE: facilitator/solutions/tower_basic_setup.md
================================================
# Workshop: Ansible Tower Basic Setup
Here students are tasked with installing and activating a new single integrated instance of Ansible Tower. The assignment will show students how they can set up an instance of Ansible Tower themselves to run an existing playbook for testing, demos and their own proofs of concept.
## Tips
This workshop can be presented in a classroom setting as a guided exercise to save a bit of time. The facilitator has the students follow along as they step through each part of the corresponding demo.
Stepping through the solution provides an excellent opportunity to briefly mention other features in Tower, such as scheduled jobs, not explored in this workshop assignment.
## Solution
The focus of this exercise is to familiarize students with how to use the Ansible Tower UI. The following screenshots approximate the steps required to complete the workshop.
### 1
First we will need to create the inventory in which our hosts will be housed. To do so, simply select the Inventory tile at the top of your Ansible Tower dashboard.

### 1.a
As we spoke about before, this step is something that is not widely done in production as you can take advantage of Ansible Tower's dynamic inventory support. To create a host, simply select "+Add Host"

### 1.b
Once you have put in the necessary information for one host, click Save and it will return you to your respective Inventory page. From here, you can either add more hosts or move on to what is needed to connect to the machines: credentials.

### 2
Credentials are essential to automating with Ansible. To create a credential within Ansible Tower, select the gear in the top right-hand corner of the screen. This will take you to the settings page, where you can select the Credentials box. This displays any current credentials you have in your Tower instance. To create a new credential, simply click "+Add". This brings up a new page where you can enter the relevant information for your credential. Since we are creating a machine credential, we will select Machine from the Type drop-down and enter the needed information. Once done, select Save and your credential will be saved.

### 3
Adding a project to Ansible Tower so that you can use your playbooks is very simple. To start the process, simply select the Project tile from the top of your Ansible Tower dashboard. This will display any current projects that you have added to your Tower Instance. From here, select "+Add" to create a new project. This will then display the New Project page. From here, you can add the required information to get your repository added as a project.

### 4
Job templates are a visual realization of the `ansible-playbook` command and all the flags you can utilize when executing from the command line. Everything you used on the command line, such as specifying the inventory or a credential, is done here. Creating a job template is simple and direct: everything you need is outlined, with extra options and capabilities to make it even more powerful. To create a job template, simply select the "Templates" tile from the top of your Ansible Tower dashboard. This will bring you to the Templates page, where any current templates you have created will be shown. To create a new one, click the "+Add" drop-down box and select Job Template. Everything that you can add to your job template can be found here. The portions denoted with a * are required options; things such as credentials and which project and playbook you want to use are all needed before you can save the template.

### 5
Launching a job template in Ansible Tower is easy. If you are starting from your Ansible Tower dashboard, just select Templates and the templates that you have created will be there (including the one that you just created in the previous example). Once you are at the Templates page, there is a rocket icon next to each template. Once you click that, Ansible Tower will start to execute that job. Launching a task in Ansible Tower is that simple.

### 5.a
Seeing a successful job means that the changes that were outlined were completed. On the details panel, everything that you need to know from the date to who launched the job to what inventory the template was run against can be found there. You can also search through previous job runs and see this exact information for each time this job was run.

### 6
Extra variables are variables that can be passed in many parts of Ansible Tower. Spread across inventory configuration, job templates and surveys, extra variables can be tricky if they are not kept track of. For example, say you have defined a variable `debug = true` on an inventory. It is entirely possible that this variable can be overridden in a job template survey.
To ensure that the variables you need to pass are not overridden, ensure they are included by redefining them in the survey. Keep in mind that extra variables can be defined at the inventory, group, and host levels.

NOTE: These screenshots were taken using Ansible Tower 3.1.1.
* Settings Icon and Credentials Card
* Create Machine Credential
* Create Group
* Create Hosts (under Group)
* Create Project
* Create Template
* Extra Variables
## Extra Credit
* Create Users
* Assign Execution Permission to User in Job Templates
* User Surveys
### Create Users
Creating users is simple and easy. To get other people on your team involved, select the settings gear in the top right and select users. This will take you to the users page. Select the green “+Add” button and a user creation page will be displayed. From here, you will need to enter some information about the user such as their name and email address. Once you have added the necessary information, click save and the user will be added to your Ansible Tower instance!

### Assign Execution Permission
Assigning permissions to users on what they can and cannot do, and what they can and cannot see, is crucial to the Ansible Tower story. Role Based Access Control (RBAC) is at the core of how Tower scales across organizations while increasing the security of automation. By delegating access to resources within Tower, you are removing an element of human error.
Assigning Execution permissions is a big step in implementing RBAC throughout your team. To add execution permissions for users on a Job Template, navigate to the job template that you would like the user(s) to have execution permission on. Once you are on the editing page for this job template (pencil) select the "Permissions" box at the top then click the "+Add" box on the right hand side.
This will display a box that displays users and teams within your Organization. Select the Users box (Tower does by default) then select the user you wish to add to the Job Template. This will prompt a drop down selection to appear for the role. From that drop-down box, select "Execute" and hit Save.
It is done! That user now has execution privileges on that job template!

### User Surveys
Job types of Run or Check provide a way to set up surveys on the Job Template creation or editing screens. Surveys set extra variables for the playbook, similar to "Prompt for Extra Variables", but in a user-friendly question-and-answer way. Surveys also allow for validation of user input. (Don't forget our earlier discussion about extra variables: survey answers are passed as extra variables, so make sure you don't override variables with survey questions.)
Use cases for surveys are numerous. An example might be if operations wanted to give developers a “push to stage” button they could run without advanced Ansible knowledge. When launched, this task could prompt for answers to questions such as, “What tag should we release?” Many types of questions can be asked, including multiple-choice questions.
To create a survey, navigate to the Job Template you wish to add the survey to and select the blue "Add Survey" button from the top. From here, a blank survey form will appear for you to fill out.

From here you will fill out the prompt, the Answer Variable name and the type of answer that will be provided. Examples can include Integers, Text, and passwords.
Once you have everything filled out to your liking, click the Add button and the survey will now prompt each user that attempts to execute the job template.

================================================
FILE: facilitator/solutions/tower_install.md
================================================
# Workshop: Installing Tower
Here students are tasked with installing and activating a new single integrated instance of Ansible Tower. The assignment will show students how they can run an instance of Tower themselves for testing, demos and their own proof-of-concepts.
## Tips
If you are working in an environment that does not have access to repos such as EPEL, you can opt to use the [Ansible Tower bundle installer](http://releases.ansible.com/ansible-tower/setup-bundle/ansible-tower-setup-bundle-latest.el7.tar.gz) instead. This bundle includes all of the RPMs from those repositories, though it does not include **all** of the RPMs required; the base OS repositories must still be reachable.
Once the bundle is downloaded and configured and `setup.sh` is started, it will take a few minutes to install all of the Ansible Tower dependencies. Students can take a short break while that runs.
## Solution
1. Retrieve the latest version of Tower from the Ansible Tower releases archive.
1. Unarchive the retrieved Ansible Tower archive.
* i.e. `tar -vxzf towerlatest`
1. Change directories to the one created by the expanded Tower software archive.
1. Edit the `inventory` file to set the passwords that the admin user, message queue and Postgres database will use. These are represented by these inventory variables:
* `admin_password`
* `rabbitmq_password`
* `pg_password`
1. Run the setup.sh script with sudo.
1. To verify Ansible Tower has been successfully installed and started, load the Ansible Tower UI in a web browser using the control machine's IP address.
1. Sign in as the admin user with the password created in the installation process. You will be prompted for a license.
1. Request a trial license if one hasn't been provided already. (Either trial license type will suffice.)
1. Import/upload the license file.
1. Extra Credit: Open the ping API endpoint in a web browser -- the URL will be something like [https://tower.host.ip/api/v1/ping](https://tower.host.ip/api/v1/ping)
These are the approximate Bash commands to do this:
```bash
$ wget https://bit.ly/towerlatest
$ tar -xzvf towerlatest
$ cd ansible-tower-setup-x.x.x
$ vi inventory
# (make edits to inventory variables like in step 4)
$ sudo ./setup.sh
```
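The password edits in step 4 amount to setting three variables in the setup `inventory` file. The values below are placeholders, not recommendations:

```ini
[all:vars]
admin_password='changeme'
rabbitmq_password='changeme'
pg_password='changeme'
```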
NOTE: If you don't have access to EPEL you can use [https://bit.ly/towerbundlelatest](https://bit.ly/towerbundlelatest).
Once completed switch to your browser to sign into the Ansible Tower UI as admin and setup the instance with a license file.
================================================
FILE: guides/README.md
================================================
# Ansible Lightbulb Guides
These guides are part of lightbulb, a multi-purpose toolkit for effectively demonstrating Ansible's capabilities or providing informal training in various forms -- instructor-led, hands-on or self-paced.
## Available Guides
Currently there are two guides available:
* [Ansible Engine](ansible_engine/README.md)
* [Ansible Tower](ansible_tower/README.md)
## Additional information
For additional information about Ansible, how to get started and where to proceed after these guides, have a look at the following sources:
* [Ansible Getting Started](http://docs.ansible.com/ansible/latest/intro_getting_started.html)
================================================
FILE: guides/ansible_engine/1-adhoc/README.md
================================================
# Exercise - Running Ad-hoc commands
For our first exercise, we are going to run some ad-hoc commands to help you get a feel for how Ansible works. Ansible ad-hoc commands enable you to perform tasks on remote nodes without having to write a playbook. They are very useful when you simply need to do one or two things quickly, often across many remote nodes.
## Section 1: Definition of the inventory
Inventories are crucial to Ansible as they define the remote machines on which you wish to run commands or your playbook(s). In this lab the inventory is provided by your instructor. The inventory is an INI-formatted file listing your hosts, sorted into groups, that can additionally provide variables. It looks like:
```ini
[all:vars]
ansible_user=rwolters
ansible_ssh_pass=averysecretpassword
ansible_port=22
[web]
node-1 ansible_host=11.22.33.44
node-2 ansible_host=22.33.44.55
node-3 ansible_host=33.44.55.66
[control]
ansible ansible_host=44.55.66.77
```
Your individual inventory is already present in your lab environment underneath `lightbulb/lessons/lab_inventory/$USERNAME-instances.txt`, where `$USERNAME` is replaced by your individual username.
## Section 2: Basic Ansible configuration
Ansible needs to know which inventory file should be used. By default it checks for the inventory file at `/etc/ansible/hosts`. You need to point Ansible to your own, lab specific inventory file. One way to do this is to specify the path to the inventory file with the `-i` option to the ansible command:
```bash
ansible -i hosts ...
```
To avoid the need to specify the inventory file each time, the inventory can also be configured in the configuration file `.ansible.cfg`. If this file is present, Ansible will read configuration parameters like the inventory location and others from this file.
On your control node, such a file is already present right in the home directory of your user. It is already customized to point to your inventory file, so there is nothing further you have to do.
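As a sketch, the relevant part of such an `.ansible.cfg` might look like the following, assuming the lab inventory path described above (with `$USERNAME` replaced by your actual username):

```ini
[defaults]
inventory = ~/lightbulb/lessons/lab_inventory/$USERNAME-instances.txt
```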
## Section 3: Ping a host
Let's start with something basic and use the `ping` module to test that Ansible has been installed and configured correctly and all hosts in your inventory are responsive.
```bash
ansible all -m ping
```
## Section 4: Run a typical command
Now let's see how we can run a typical Linux shell command and format the output using the `command` module.
```bash
ansible web -m command -a "uptime" -o
```
## Section 5: Gather facts about target node
We can use an ad-hoc command with the `setup` module to display the many facts Ansible can discover about each node in its inventory.
```bash
ansible all -m setup
```
## Section 6: Install package
Now, let's install Apache using the `yum` module.
```bash
ansible web -m yum -a "name=httpd state=present" -b
```
Usually, only the root user is allowed to install packages, so this is a case where we need privilege escalation and sudo has to be set up properly. We instruct Ansible to use sudo to run the command as root with the parameter `-b` (think "become").
## Section 7: Start service
OK, Apache is installed now so let's start it up using the `service` module.
```bash
ansible web -m service -a "name=httpd state=started" -b
```
## Section 8: Stop service
Finally, let's clean up after ourselves. First, stop the httpd service.
```bash
ansible web -m service -a "name=httpd state=stopped" -b
```
## Section 9: Remove package
Next, remove the Apache package.
```bash
ansible web -m yum -a "name=httpd state=absent" -b
```
---
**NOTE:** Additional `ansible` options
The way `ansible` works can be influenced by various options which can be listed via `ansible --help`:
```bash
ansible --help
Usage: ansible [options]
Define and run a single task 'playbook' against a set of hosts
Options:
  -a MODULE_ARGS, --args=MODULE_ARGS
                        module arguments
  --ask-vault-pass      ask for vault password
  -B SECONDS, --background=SECONDS
                        run asynchronously, failing after X seconds
                        (default=N/A)
  -C, --check           don't make any changes; instead, try to predict some
                        of the changes that may occur
  -D, --diff            when changing (small) files and templates, show the
                        differences in those files; works great with --check
[...]
```
---
[Click Here to return to the Ansible Lightbulb - Ansible Engine Guide](../README.md)
================================================
FILE: guides/ansible_engine/2-playbook/README.md
================================================
# Exercise 2 - Writing Your First playbook
Now that you've gotten a sense of how Ansible works, we are going to write our first Ansible *playbook*. Playbooks are Ansible’s configuration, deployment, and orchestration language. They are written in YAML and are used to invoke Ansible modules to perform tasks, which are executed sequentially, i.e. top to bottom. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT workflow. Playbooks are like an instruction manual describing the desired state of your environment.
A playbook can have multiple plays, and a play can have one or more tasks. The goal of a *play* is to map a group of hosts to tasks. The goal of a *task* is to run a module against those hosts.
For our first playbook, we are only going to write one play and three tasks.
## Section 1: Creating a Directory Structure and Files for your Playbook
There is a [best practices](http://docs.ansible.com/ansible/playbooks_best_practices.html) guide on the preferred directory structure for playbooks. We strongly encourage you to read and understand these practices as you develop your Ansible skills.
For now, though, we are going to create a very simple directory structure for our playbook, and add just a couple of files to it.
### Step 1: Create a directory called `apache-simple-playbook` in your home directory and change into it
```bash
mkdir ~/apache-simple-playbook
cd ~/apache-simple-playbook
```
### Step 2: Use `vi` or `vim` to open a file called `site.yml`
## Section 2: Defining Your Play
Now that you are editing `site.yml`, let's begin by defining the play and then understanding what each line accomplishes.
```yml
---
- name: Ensure apache is installed and started
  hosts: web
  become: yes
```
* `---` Defines the beginning of YAML
* `name: Ensure apache is installed and started` This describes our play
* `hosts: web` Defines the host group in your inventory against which this play will run
* `become: yes` Enables user privilege escalation. The default method is sudo, but su, pbrun, and [several others](http://docs.ansible.com/ansible/become.html) are also supported.
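To illustrate that last point, the escalation method can be selected per play with the `become_method` keyword. The following sketch is purely illustrative and not part of this exercise's playbook; it assumes `su` is configured on the target hosts:

```yml
---
# Illustrative only: the same play definition, escalating with su instead of the default sudo
- name: Ensure apache is installed and started
  hosts: web
  become: yes
  become_method: su
```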
## Section 3: Adding Tasks to Your Play
Now that we've defined your play, let's add some tasks to get some things done. Align (vertically) the *t* in `tasks` with the *b* in `become`. Yes, it actually matters. In fact, you should make sure all of your playbook statements are aligned in the way shown here. If you want to see the entire playbook for reference, skip to the bottom of this exercise.
```yml
  tasks:
    - name: Ensure httpd package is present
      yum:
        name: httpd
        state: present

    - name: Ensure latest index.html file is present
      copy:
        src: files/index.html
        dest: /var/www/html/

    - name: Ensure httpd is started
      service:
        name: httpd
        state: started
```
* `tasks:` This denotes that one or more tasks are about to be defined
* `- name:` Each task requires a name which will print to standard output when you run your playbook. It's considered best practice to give all your plays and tasks concise and human-meaningful descriptions.
```yml
- name: Ensure httpd package is present
  yum:
    name: httpd
    state: present
```
* These four lines are calling the Ansible module *yum* to install httpd. [Click here](http://docs.ansible.com/ansible/yum_module.html) to see all options for the yum module.
```yml
- name: Ensure latest index.html file is present
  copy:
    src: files/index.html
    dest: /var/www/html/
```
* These four lines ensure that the `index.html` file is copied over to the target node. [Click here](http://docs.ansible.com/ansible/copy_module.html) to see all options for the copy module.
```yml
- name: Ensure httpd is started
  service:
    name: httpd
    state: started
```
* These four lines use the Ansible module `service` to start the httpd service right now. The service module is the preferred way of controlling services on remote hosts. [Click here](http://docs.ansible.com/ansible/service_module.html) to learn more about the `service` module.
## Section 4: Saving your Playbook
Now that you've completed writing your playbook, it would be a shame not to keep it.
Use the `write/quit` method in `vi` or `vim` to save your playbook, i.e. `Esc :wq!`
Also, you have to create the above-mentioned `index.html` file. To do so, first create the proper directory:
```bash
mkdir ~/apache-simple-playbook/files
cd ~/apache-simple-playbook/files
```
Next, use `vi` or `vim` to create a file called `index.html` with the following content:
```html
Ansible: Automation for Everyone
```
Use the `write/quit` method in `vi` or `vim` to save the file, i.e. `Esc :wq!`
And that should do it. You should now have a fully written playbook called `site.yml`. You are ready to automate!
---
**NOTE**
Ansible playbooks are essentially YAML, and YAML is a bit particular about formatting, especially with regard to whitespace. If you are unfamiliar with this, we recommend reading up on [YAML Syntax](http://docs.ansible.com/ansible/YAMLSyntax.html) in the Ansible docs.
In the meantime, your completed playbook should look like this example below. (Take note of the spacing and alignment.)
---
```yml
---
- name: Ensure apache is installed and started
  hosts: web
  become: yes

  tasks:
    - name: Ensure httpd package is present
      yum:
        name: httpd
        state: present

    - name: Ensure latest index.html file is present
      copy:
        src: files/index.html
        dest: /var/www/html/

    - name: Ensure httpd is started
      service:
        name: httpd
        state: started
```
---
[Click Here to return to the Ansible Lightbulb - Ansible Engine Guide](../README.md)
================================================
FILE: guides/ansible_engine/3-variables/README.md
================================================
# Exercise 3 - Using Variables, Loops, and Handlers
Previous exercises showed you the basics of Ansible Engine. In the next few exercises, we are going to teach some more advanced Ansible skills that will add flexibility and power to your playbooks.
Ansible exists to make tasks simple and repeatable. We also know that not all systems are exactly alike and often require some slight change to the way an Ansible playbook is run. Enter variables.
Variables are how we deal with differences between your systems, allowing you to account for a change in port, IP address or directory.
Loops enable us to repeat the same task over and over again. For example, let's say you want to install 10 packages. By using an Ansible loop, you can do that in a single task.
Handlers are the way in which we restart services. Did you just deploy a new config file or install a new package? If so, you may need to restart a service for those changes to take effect. We do that with a handler.
For a full understanding of variables, loops, and handlers, check out the Ansible documentation on these subjects.
* [Ansible Variables](http://docs.ansible.com/ansible/latest/playbooks_variables.html)
* [Ansible Loops](http://docs.ansible.com/ansible/latest/playbooks_loops.html)
* [Ansible Handlers](http://docs.ansible.com/ansible/latest/playbooks_intro.html#handlers-running-operations-on-change)
## Section 1: Creating the Playbook
To begin, we are going to create a new playbook; it should look very familiar to the one you created in exercise [2](../2-playbook/README.md).
Later on, you will run your brand new playbook on your two web nodes using the `ansible-playbook` command.
### Step 1
Navigate to your home directory, then create a new project and playbook.
```bash
cd
mkdir apache-basic-playbook
cd apache-basic-playbook
vim site.yml
```
### Step 2
Add a play definition and some variables to your playbook. These include additional packages your playbook will install on your web servers, plus some web-server-specific configurations.
```yml
---
- name: Ensure apache is installed and started
  hosts: web
  become: yes

  vars:
    httpd_packages:
      - httpd
      - mod_wsgi
    apache_test_message: This is a test message
    apache_max_keep_alive_requests: 115
    apache_webserver_port: 80
```
### Step 3
Add a new task called *Ensure httpd packages are present*:
```yml
{% raw %}
  tasks:
    - name: Ensure httpd packages are present
      yum:
        name: "{{ item }}"
        state: present
      with_items: "{{ httpd_packages }}"
      notify: restart-apache-service
{% endraw %}
```
---
**NOTE**
> Let's see what is happening here.
* `vars:` You've told Ansible the next thing it sees will be variable definitions
* `httpd_packages` You are defining a list-type variable called `httpd_packages`. What follows is a list of those packages
* {% raw %}`{{ item }}`{% endraw %} You are telling Ansible that this will expand into a list item like `httpd` or `mod_wsgi`.
* {% raw %}`with_items: "{{ httpd_packages }}"`{% endraw %} This is your loop, which instructs Ansible to perform this task on every `item` in `httpd_packages`
* `notify: restart-apache-service` This statement triggers a `handler`, so we'll come back to it in Section 3.
---
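As an aside, the same loop can be written with the `loop` keyword, which largely supersedes `with_items` in more recent Ansible releases. This equivalent sketch is for comparison only and is not part of the exercise:

```yml
{% raw %}
  tasks:
    - name: Ensure httpd packages are present
      yum:
        name: "{{ item }}"
        state: present
      loop: "{{ httpd_packages }}"
      notify: restart-apache-service
{% endraw %}
```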
## Section 2: Deploying Files and Starting a Service
When you need to do something with files or directories on a system, use one of the [Ansible Files](http://docs.ansible.com/ansible/latest/list_of_files_modules.html) modules. In this case, we'll leverage the `file` and `template` modules.
### Step 1
Create a `templates` directory in your project directory and download two files.
```bash
mkdir templates
cd templates
curl -O https://raw.githubusercontent.com/ansible/lightbulb/master/examples/apache-basic-playbook/templates/httpd.conf.j2
curl -O https://raw.githubusercontent.com/ansible/lightbulb/master/examples/apache-basic-playbook/templates/index.html.j2
```
### Step 2
Add some file tasks and a service task to your playbook.
```yml
    - name: Ensure site-enabled directory is present
      file:
        name: /etc/httpd/conf/sites-enabled
        state: directory

    - name: Ensure latest httpd.conf is present
      template:
        src: templates/httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf
      notify: restart-apache-service

    - name: Ensure latest index.html is present
      template:
        src: templates/index.html.j2
        dest: /var/www/html/index.html

    - name: Ensure httpd is started and enabled
      service:
        name: httpd
        state: started
        enabled: yes
```
---
**NOTE:** Recap of used modules
> What did we just write?
* `file:` This module is used to create, modify, and delete files, directories, and symlinks.
* `template:` This module specifies that a [jinja2](http://docs.ansible.com/ansible/latest/playbooks_templating.html) template is being used and deployed. `template` is part of the `Files` module family, and we encourage you to check out all of the other [file-management modules here](http://docs.ansible.com/ansible/latest/list_of_files_modules.html).
* `service:` This module starts, stops, and restarts services.
---
## Section 3: Defining and Using Handlers
There are any number of reasons we often need to restart a service or process, including the deployment of a configuration file, installing a new package, etc. There are really two parts to this section: adding a handler to the playbook and notifying the handler from a task. We will start with the former.
### Step 1
Define a handler.
```yml
  handlers:
    - name: restart-apache-service
      service:
        name: httpd
        state: restarted
```
---
**NOTE:** Handlers
> A handler is only executed when it is triggered by a task.
* `handlers:` This tells the *play* that the `tasks:` are over, and now we are defining `handlers:`. Everything below it looks the same as any other task, i.e. you give it a name, a module, and the options for that module. This is the definition of a handler.
* `notify: restart-apache-service` This triggers the handler, as mentioned above. The `notify` statement is the invocation of a handler by name. In the playbook created above, a `notify` statement was already added to the `Ensure latest httpd.conf is present` task.
* Naming of handlers: we recommend naming handlers in a consistent way. A tested and proven method is to name all handlers in lower case with dashes instead of spaces.
---
## Section 4: Review
Your new, improved playbook is done! Let's take a second look to make sure everything looks the way you intended. If not, now is the time to fix it up. The listing below shows the complete playbook; take note of the spacing and alignment.
```yml
{% raw %}
---
- name: Ensure apache is installed and started
  hosts: web
  become: yes

  vars:
    httpd_packages:
      - httpd
      - mod_wsgi
    apache_test_message: This is a test message
    apache_max_keep_alive_requests: 115
    apache_webserver_port: 80

  tasks:
    - name: Ensure httpd packages are present
      yum:
        name: "{{ item }}"
        state: present
      with_items: "{{ httpd_packages }}"
      notify: restart-apache-service

    - name: Ensure site-enabled directory is present
      file:
        name: /etc/httpd/conf/sites-enabled
        state: directory

    - name: Ensure latest httpd.conf is present
      template:
        src: templates/httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf
      notify: restart-apache-service

    - name: Ensure latest index.html is present
      template:
        src: templates/index.html.j2
        dest: /var/www/html/index.html

    - name: Ensure httpd is started and enabled
      service:
        name: httpd
        state: started
        enabled: yes

  handlers:
    - name: restart-apache-service
      service:
        name: httpd
        state: restarted
{% endraw %}
```
## Section 5: Running your new apache playbook
Congratulations! You just wrote a playbook that incorporates some key Ansible concepts that you will use in most, if not all, of your future playbooks. Before you get too excited though, we should probably make sure it actually runs.
So, let's do that now.
### Step 1
Make sure you are in the right directory.
```bash
cd ~/apache-basic-playbook
```
### Step 2
Run your playbook
```bash
ansible-playbook site.yml
```
## Section 6: Review
If successful, you should see standard output that looks very similar to the following. If not, just let us know. We'll help get things fixed up.

---
[Click Here to return to the Ansible Lightbulb - Ansible Engine Guide](../README.md)
================================================
FILE: guides/ansible_engine/4-role/README.md
================================================
# Exercise 4 - Roles: Making your playbooks reusable
While it is possible to write a Playbook in one very large file, eventually you’ll want to reuse files and start to organize things. At a basic level, including task files allows you to break up bits of configuration policy into smaller files. Task includes pull in tasks from other files. Since handlers are tasks too, you can also include handler files.
When you start to think about it, tasks, handlers, variables, and so on begin to form larger concepts. You start to think about modeling what something is:
* It’s no longer "apply THIS to these hosts"
* You say "these hosts are webservers".
Roles build on the idea of include files and provide Ansible with a way to load tasks, handlers, and variables from external files. The files that define a role have specific names and are organized in a rigid directory structure.
Use of Ansible roles has the following benefits:
* Roles group content, allowing easy sharing of code with others
* Roles can be written that define the essential elements of a system type: web server, database server...
* Roles make larger projects more manageable
* Roles can be developed in parallel by different administrators
For this exercise, you are going to take the playbook you just wrote and refactor it into a role. In addition, you'll learn to use `ansible-galaxy` to jumpstart development of a role.
Let's begin with seeing how the new playbook structure `apache-role`, which we are going to create in this exercise, will break down into a role.
```bash
apache-role/
├── roles
│   └── apache-simple
│       ├── defaults
│       │   └── main.yml
│       ├── files
│       ├── handlers
│       │   └── main.yml
│       ├── meta
│       │   └── main.yml
│       ├── README.md
│       ├── tasks
│       │   └── main.yml
│       ├── templates
│       ├── tests
│       │   ├── inventory
│       │   └── test.yml
│       └── vars
│           └── main.yml
├── inventory.ini
└── site.yml
```
Fortunately, you don't have to create all of these directories and files by hand. That's where `ansible-galaxy` comes in.
## Section 1: Using ansible-galaxy to initialize a new role
Ansible Galaxy is a free site for finding, downloading, and sharing roles. Ansible includes the tool `ansible-galaxy` which can interact with the site and the roles hosted there. It's also pretty handy for creating them locally which is what we are about to do here.
### Step 1
Create a new directory, `apache-role`.
```bash
cd
mkdir apache-role
cd ~/apache-role
```
### Step 2
Create a directory called `roles` and `cd` into it.
```bash
mkdir roles
cd roles
```
### Step 3
Use the `ansible-galaxy` command to initialize a new role called `apache-simple`.
```bash
ansible-galaxy init apache-simple
```
Take a look around the structure you just created. It should look a lot like the structure shown above. However, we need to complete one more step before moving on to Section 2. It is Ansible best practice to clean out role directories and files you won't be using. For this role, we won't be using anything from `files` or `tests`.
### Step 4
Remove the `files` and `tests` directories
```bash
cd ~/apache-role/roles/apache-simple/
rm -rf files tests
```
## Section 2: Breaking Your `site.yml` Playbook into the Newly Created `apache-simple` Role
In this section, we will separate out the major parts of your playbook, including `vars:`, `tasks:`, `handlers:`, and the templates.
### Step 1
Copy the `site.yml` you wrote in the last exercise into the current directory.
```bash
cd ~/apache-role
cp ~/apache-basic-playbook/site.yml site.yml
vim site.yml
```
### Step 2
Add the play definition and the invocation of a single role.
```yml
---
- name: Ensure apache is installed and started via role
  hosts: web
  become: yes

  roles:
    - apache-simple
```
### Step 3
Add some default variables to your role in `roles/apache-simple/defaults/main.yml`.
```yml
---
# defaults file for apache-simple
apache_test_message: This is a test message
apache_max_keep_alive_requests: 115
```
### Step 4
Add some role-specific variables to your role in `roles/apache-simple/vars/main.yml`.
```yml
---
# vars file for apache-simple
httpd_packages:
  - httpd
  - mod_wsgi
```
---
**NOTE**
As you can see, we have now added variables in a different place than before, when we put them directly in the playbook. Variables can be defined in a variety of places and have a clear [precedence](http://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable). Here are some examples of where variables can be placed:
* vars directory
* defaults directory
* group_vars directory
* In the playbook under the `vars:` section
* In any file which can be specified on the command line using the `--extra-vars` option
In this exercise, we are using role defaults to define a couple of variables, and these are the most malleable. After that, we defined some variables in `vars/`, which have a higher precedence than defaults and can't be overridden as easily as a default variable.
---
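To make that precedence concrete, here is a hypothetical sketch (not part of this exercise): a play-level `vars:` entry overrides `apache_test_message` from the role's `defaults/main.yml`, while a play-level value for `httpd_packages` would still lose to the role's `vars/main.yml`:

```yml
---
# Hypothetical override example: play-level vars beat role defaults,
# but role vars/main.yml beats play-level vars of the same name.
- name: Ensure apache is installed and started via role
  hosts: web
  become: yes
  vars:
    apache_test_message: Overridden at the play level
  roles:
    - apache-simple
```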
### Step 5
Create your role handler in `roles/apache-simple/handlers/main.yml`.
```yml
---
# handlers file for apache-simple
- name: restart-apache-service
  service:
    name: httpd
    state: restarted
    enabled: yes
```
### Step 6
Add tasks to your role in `roles/apache-simple/tasks/main.yml`.
```yml
{% raw %}
---
# tasks file for apache-simple
- name: Ensure httpd packages are installed
  yum:
    name: "{{ item }}"
    state: present
  with_items: "{{ httpd_packages }}"
  notify: restart-apache-service

- name: Ensure site-enabled directory is created
  file:
    name: /etc/httpd/conf/sites-enabled
    state: directory

- name: Copy httpd.conf
  template:
    src: templates/httpd.conf.j2
    dest: /etc/httpd/conf/httpd.conf
  notify: restart-apache-service

- name: Copy index.html
  template:
    src: templates/index.html.j2
    dest: /var/www/html/index.html

- name: Ensure httpd is started
  service:
    name: httpd
    state: started
    enabled: yes
{% endraw %}
```
### Step 7
Download a couple of templates into `roles/apache-simple/templates/`. And right after that, let's clean up after the previous exercise by removing the old templates directory.
```bash
mkdir -p ~/apache-role/roles/apache-simple/templates/
cd ~/apache-role/roles/apache-simple/templates/
curl -O https://raw.githubusercontent.com/ansible/lightbulb/master/examples/apache-role/roles/apache-simple/templates/httpd.conf.j2
curl -O https://raw.githubusercontent.com/ansible/lightbulb/master/examples/apache-role/roles/apache-simple/templates/index.html.j2
rm -rf ~/apache-basic-playbook/templates
```
## Section 3: Running your new role-based playbook
Now that you've successfully separated your original playbook into a role, let's run it and see how it works.
### Step 1
Run the playbook.
```bash
ansible-playbook site.yml
```
If successful, your standard output should look similar to the figure below.

## Section 4: Review
You should now have a completed playbook, `site.yml`, with a single role called `apache-simple`. The advantage of structuring your playbook into roles is that you can now add new roles to the playbook using Ansible Galaxy or by simply writing your own. In addition, roles simplify changes to variables, tasks, templates, and so on.
---
[Click Here to return to the Ansible Lightbulb - Ansible Engine Guide](../README.md)
================================================
FILE: guides/ansible_engine/README.md
================================================
# Ansible Lightbulb - Ansible Engine Guide
This guide is part of Lightbulb, a multi-purpose toolkit for effectively demonstrating Ansible's capabilities or providing informal training in various forms -- instructor-led, hands-on, or self-paced.
The focus of this guide is the setup and usage of Ansible Engine.
## Ansible Engine Exercises
There are multiple exercises available. It is strongly recommended to run them one after the other. They are self-explanatory and can be run at your own pace.
* [Exercise 1 - Running Ad-hoc commands](1-adhoc)
* [Exercise 2 - Writing Your First playbook](2-playbook)
* [Exercise 3 - Using Variables, Loops, and Handlers](3-variables)
* [Exercise 4 - Roles: Making your playbooks reusable](4-role)
## Additional information
For additional information about Ansible, how to get started, and where to proceed after this guide, have a look at the following sources:
* [Ansible Getting Started](http://docs.ansible.com/ansible/latest/intro_getting_started.html)
---

In addition to open source Ansible, there is Red Hat® Ansible® Engine, which includes support and an SLA for the modules shown in these exercises.
Red Hat® Ansible® Engine is a fully supported product built on the simple, powerful and agentless foundation capabilities derived from the Ansible project. Please visit [ansible.com](https://www.ansible.com/ansible-engine) for more information.
================================================
FILE: guides/ansible_tower/1-install/README.md
================================================
# Exercise 1 - Installing Ansible Tower
In this exercise, we are going to get Ansible Tower installed on your control node.
## Installing Ansible Tower
### Step 1
On your control node, change directories to /tmp
```bash
cd /tmp
```
### Step 2
Download the latest Ansible Tower package
```bash
curl -O http://releases.ansible.com/ansible-tower/setup/ansible-tower-setup-latest.tar.gz
```
### Step 3
Untar and unzip the package file
```bash
tar xvfz /tmp/ansible-tower-setup-latest.tar.gz
```
### Step 4
Change directories into the Ansible Tower package directory
```bash
cd /tmp/ansible-tower-setup-*/
```
### Step 5
Using an editor of your choice, open the inventory file
```bash
vi inventory
```
### Step 6
Fill out a few variables in the inventory file: `admin_password`, `pg_password`, `rabbitmq_password`
```ini
[tower]
localhost ansible_connection=local

[database]

[all:vars]
admin_password='ansibleWS'

pg_host=''
pg_port=''
pg_database='awx'
pg_username='awx'
pg_password='ansibleWS'

rabbitmq_port=5672
rabbitmq_vhost=tower
rabbitmq_username=tower
rabbitmq_password='ansibleWS'
rabbitmq_cookie=cookiemonster

# Needs to be true for fqdns and ip addresses
rabbitmq_use_long_name=false
```
### Step 7
Run the Ansible Tower setup script
```bash
sudo ./setup.sh
```
---
**NOTE:** Installation will take some time
Step 7 will take approx. 10-15 minutes to complete. This may be a good time to take a break.
---
### End Result
At this point, your Ansible Tower installation should be complete. You can access your Tower at the IP of the control node:
```bash
http://<control_node_ip>
```
### Ensuring Installation Success
You know you were successful if you are able to browse to your Ansible Tower's URL (the _control node's IP address_) and see something like this

---
[Click Here to return to the Ansible Lightbulb - Ansible Tower Guide](../README.md)
================================================
FILE: guides/ansible_tower/2-config/README.md
================================================
# Exercise 2 - Configuring Ansible Tower
In this exercise, we are going to configure Tower so that we can run a playbook.
## The Tower UI
There are a number of concepts in the Ansible Tower UI that enable multi-tenancy, notifications, scheduling, etc. However, we are only going to focus on a few of the key constructs that are required for this workshop today.
* Credentials
* Projects
* Inventory
* Job Template
## Logging into Tower and Installing the License Key
### Step 1
To log in, use the username `admin` and the password `ansibleWS`.

As soon as you log in, you will be prompted to request a license or browse for an existing license file

### Step 2
In a separate browser tab, browse to [https://www.ansible.com/workshop-license](https://www.ansible.com/workshop-license) to request a workshop license. These are special workshop licenses which are valid for only 5 nodes for 5 days.
### Step 3
Back in the Tower UI, choose BROWSE  and upload the license file you received via e-mail.
### Step 4
Select "_I agree to the End User License Agreement_"
### Step 5
Click on SUBMIT 
## Creating a Credential
Credentials are utilized by Tower for authentication when launching jobs against machines, synchronizing with inventory sources, and importing project content from a version control system.
There are many [types of credentials](http://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html#credential-types) including machine, network, and various cloud providers. In this workshop, we are using a *machine* credential.
### Step 1
Select the gear icon 
### Step 2
Select CREDENTIALS
### Step 3
Click on ADD 
### Step 4
Complete the form using the following entries.
|OPTION|VALUE
|------|-----
|NAME |Ansible Workshop Credential
|DESCRIPTION|Machine credential for running job templates during the workshop
|ORGANIZATION|Default
|TYPE|Machine
|USERNAME|Your workshop username - Student(x)
|PASSWORD|Your workshop password
|PRIVILEGE ESCALATION|Sudo

### Step 5
Select SAVE 
## Creating a Project
A Project is a logical collection of Ansible playbooks, represented in Tower. You can manage playbooks and playbook directories by either placing them manually under the Project Base Path on your Tower server, or by placing your playbooks into a source code management (SCM) system supported by Tower, including Git, Subversion, and Mercurial.
### Step 1
Click on PROJECTS
### Step 2
Click on ADD .
### Step 3
Complete the form using the following entries
|OPTION|VALUE
|------|-----
|NAME |Ansible Workshop Project
|DESCRIPTION|Workshop playbooks
|ORGANIZATION|Default
|SCM TYPE|Git
|SCM URL| [https://github.com/ansible/lightbulb](https://github.com/ansible/lightbulb)
|SCM BRANCH|
|SCM UPDATE OPTIONS|Mark _Clean_ and _Update on Launch_

### Step 4
Select SAVE 
## Creating an Inventory
An inventory is a collection of hosts against which jobs may be launched. Inventories are divided into groups and these groups contain the actual hosts. Groups may be sourced manually, by entering host names into Tower, or from one of Ansible Tower’s supported cloud providers.
An Inventory can also be imported into Tower using the `tower-manage` command and this is how we are going to add an inventory for this workshop.
### Step 1
Click on INVENTORIES
### Step 2
Select ADD 
### Step 3
Complete the form using the following entries
|OPTION|VALUE
|------|-----
|NAME |Ansible Workshop Inventory
|DESCRIPTION|Ansible Inventory
|ORGANIZATION|Default

### Step 4
Select SAVE 
### Step 5
Using ssh, login to your control node
```bash
ssh <username>@<control_node_ip>
```
### Step 6
Use the `tower-manage` command to import an existing inventory. (_Be sure to replace `<username>` with your actual username_)
```bash
sudo tower-manage inventory_import --source=/home/<username>/lightbulb/lessons/lab_inventory/<username>-instances.txt --inventory-name="Ansible Workshop Inventory"
```
You should see output similar to the following:

Feel free to browse your inventory in Tower. You should now notice that the inventory has been populated with Groups and that each of those groups contain hosts.

### End Result
At this point, we are done with our basic configuration of Ansible Tower. In the next exercise, we will be solely focused on creating and running a job template so you can see Tower in action.
---
[Click Here to return to the Ansible Lightbulb - Ansible Tower Guide](../README.md)
================================================
FILE: guides/ansible_tower/3-create/README.md
================================================
# Exercise 3 - Creating and Running a Job Template
A job template is a definition and set of parameters for running an Ansible job. Job templates are useful to execute the same job many times.
## Creating a Job Template
### Step 1
Select TEMPLATES
### Step 2
Click on ADD , and select JOB TEMPLATE
### Step 3
Complete the form using the following values
|OPTION|VALUE
|------|-----
|NAME |Apache Basic Job Template
|DESCRIPTION|Template for the apache-basic-playbook
|JOB TYPE|Run
|INVENTORY|Ansible Workshop Inventory
|PROJECT|Ansible Workshop Project
|PLAYBOOK|examples/apache-basic-playbook/site.yml
|MACHINE CREDENTIAL|Ansible Workshop Credential
|LIMIT|web
|OPTIONS|Check _Enable Privilege Escalation_

### Step 4
Click SAVE  and then select ADD SURVEY 
### Step 5
Complete the survey form with following values
|OPTION|VALUE
|------|-----
|PROMPT|Please enter a test message for your new website
|DESCRIPTION|Website test message prompt
|ANSWER VARIABLE NAME|apache_test_message
|ANSWER TYPE|Text
|MINIMUM/MAXIMUM LENGTH| Use the defaults
|DEFAULT ANSWER| Be creative, keep it clean, we're all professionals here

### Step 6
Select ADD 
### Step 7
Select SAVE 
### Step 8
Back on the main Job Template page, select SAVE  again.
## Running a Job Template
Now that you’ve successfully created your Job Template, you are ready to launch it. Once you do, you will be redirected to a job screen which refreshes in real time, showing you the status of the job.
### Step 1
Select TEMPLATES
---
**NOTE**: Alternative way to navigate to TEMPLATES:
If you haven't navigated away from the job templates creation page you can scroll down to see all existing job templates
---
### Step 2
Click on the rocketship icon  for the *Apache Basic Job Template*
### Step 3
When prompted, enter your desired test message

### Step 4
Select LAUNCH 
### Step 5
Wait and watch the execution of the job.
One of the first things you will notice is the summary section. This gives you details about your job such as who launched it, what playbook it's running, and what the status is, i.e. pending, running, or complete. You’ll also notice the `apache_test_message` being passed in the EXTRA VARIABLES field in the text box on the bottom left. To the right, you can view standard output, the same way you could if you were running `ansible-playbook` from the command line.

You can also click on each task in the standard output to get more details.
### Step 6
Once your job is successful, navigate to your new website:
```bash
http://
```
If all went well, you should see something like this, but with your own custom message:

## End Result
At this point in the workshop, you've experienced the core functionality of Ansible Tower. To read more about advanced features of Tower, take a look at the resources page in this guide.
---
[Click Here to return to the Ansible Lightbulb - Ansible Tower Guide](../README.md)
================================================
FILE: guides/ansible_tower/README.md
================================================
# Ansible Lightbulb - Ansible Tower Guide
This guide is part of lightbulb, a multi-purpose toolkit for effectively demonstrating Ansible's capabilities or providing informal training in various forms -- instructor-led, hands-on, or self-paced.
The focus of this guide is on the setup and usage of Ansible Tower.
## Ansible Tower Exercises
There are multiple exercises available. It is strongly recommended to run them one after the other. They are self-explanatory and can be completed at your own pace.
* [Exercise 1 - Installing Ansible Tower](1-install)
* [Exercise 2 - Configuring Ansible Tower](2-config)
* [Exercise 3 - Creating and Running a Job Template](3-create)
## Additional information
For additional information about Ansible, how to get started, and where to proceed after this guide, have a look at the following sources:
* [Ansible Tower User Guide](http://docs.ansible.com/ansible-tower/latest/html/userguide/index.html)
* [Ansible Tower Administration Guide](http://docs.ansible.com/ansible-tower/latest/html/administration/index.html)
---

Red Hat® Ansible® Tower is a fully supported product built on the simple, powerful and agentless foundation capabilities derived from the Ansible project. Please visit [ansible.com](https://www.ansible.com/tower) for more information.
================================================
FILE: tools/aws_lab_setup/.gitignore
================================================
*instances.txt
instructor_inventory
================================================
FILE: tools/aws_lab_setup/README.md
================================================
# Ansible AWS training provisioner
This provisioner automatically sets up lab environments for Ansible training. It creates the following nodes per student/user:
* One control node from which Ansible is executed and where Ansible Tower can be installed
* Three web nodes that coincide with the three nodes in Lightbulb's original design
* One node where `haproxy` is installed (via a Lightbulb lesson)
## AWS Setup
The setup of the environments is done via Ansible playbooks, the actual VMs are set up on AWS. The `provision_lab.yml` playbook creates instances, configures them for password authentication and creates an inventory file for each user with their IPs and credentials. An instructor inventory file is also created in the current directory which will let the instructor access the nodes of any student by simply targeting the username as a host group. The lab is created in `us-east-1` by default. Currently the setup only works with `us-east-1`, `us-west-1`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-south-1` and `ap-northeast-1`.
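To make the inventory layout concrete, a generated per-student inventory could look roughly like the sketch below. The group names, IPs, and variables shown are illustrative assumptions, not the provisioner's exact output:

```ini
; Hypothetical per-student inventory sketch (all values made up)
[control]
control ansible_host=203.0.113.10 ansible_user=student01 ansible_ssh_pass=changeme123

[web]
node1 ansible_host=203.0.113.11
node2 ansible_host=203.0.113.12
node3 ansible_host=203.0.113.13

[haproxy]
haproxy ansible_host=203.0.113.14
```

In the instructor inventory, each student's hosts are additionally grouped under that student's username, so the instructor can target one whole environment as a single host group.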
## Email Options
This provisioner by default will send email to participants/students containing information about their lab environment including IPs and credentials. This configuration requires that each participant register for the workshop using their full name and email address. Alternatively, you can use generic accounts for workshops. This method offers the advantage of enabling the facilitator to handle "walk-ins" and is a simpler method overall in terms of collecting participant information.
Steps included in this guide will be tagged with __(email)__ to denote steps required if you want to use email, and __(no email)__ for steps you should follow if you choose not to use email.
**WARNING** Emails are sent _every_ time the playbook is run. To prevent emails from being sent on subsequent runs of the playbook, add `email: no` to `extra_vars.yml`.
## How To Set Up Lab Environments
### One-Time Preparations
To set up lab environments for Ansible training, follow these steps.
1. Create an Amazon AWS account.
1. In AWS, create an ssh key pair called 'ansible': In the AWS management console, click on "Services" on top of the page. In the menu that opens, click on "EC2". In the EC2 dashboard, on the left side, underneath "Network & Security" pick "Key Pairs". Download the private key to your `.ssh` directory, e.g. to `.ssh/ansible.pem`. Alternatively, you can upload your own public key into AWS.
If using an AWS-generated key, add it to the ssh-agent:

```bash
ssh-add ~/.ssh/ansible.pem
```
1. Create an Access Key ID and Secret Access Key. Save the ID and key for later. See [AWSHELP](aws-directions/AWSHELP.md) for a detailed howto.
1. Install `boto` and `boto3`:

```bash
pip install boto boto3
```
1. Create a `boto` configuration file containing your AWS access key ID and secret access key.
Use the quickstart directions provided here: [http://boto3.readthedocs.io/en/latest/guide/quickstart.html](http://boto3.readthedocs.io/en/latest/guide/quickstart.html)
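Following those quickstart directions, the boto configuration file (e.g. `~/.boto`) ends up containing just the two credential values; the placeholders below are where your own keys go:

```ini
; ~/.boto -- placeholder values, substitute your own keys
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```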
1. __(email)__ Create a free [Sendgrid](http://sendgrid.com) account if you don't have one. Optionally, create an API key to use with this playbook.
1. __(email)__ Install the `sendgrid` python library:
**Note:** The `sendgrid` module does not work with `sendgrid >= 3`. Please install the latest `2.x` version.

```bash
pip install sendgrid==2.2.1
```
1. Install the `passlib` library:

```bash
pip install passlib
```
1. Clone the lightbulb repo:

```bash
git clone https://github.com/ansible/lightbulb.git
cd lightbulb/tools/aws_lab_setup
```
### Steps To Provision the Lab Environments
1. Update your current installation of the lightbulb repo
1. Define the following variables, either in a file passed in using `-e @extra_vars.yml` or directly in a `vars` section in `aws_lab_setup/infra-aws.yml`:
```yaml
ec2_key_name: username # SSH key in AWS to put in all the instances
ec2_region: us-west-1 # region where the nodes will live
ec2_az: us-east-1a # the availability zone
ec2_name_prefix: TRAINING-LAB # name prefix for all the VMs
admin_password: changeme123 # Set this to something better if you'd like. Defaults to 'LearnAnsible[two digit month][two digit year]', e.g., LearnAnsible0416
## Optional Variables
email: no # Set this if you wish to disable email
sendgrid_user: username # username for the Sendgrid module. Not required if "email: no" is set
sendgrid_pass: 'passwordgoeshere' # sendgrid account password. Not required if "email: no" is set
sendgrid_api_key: 'APIkey' # Instead of username and password, you may use an API key. Don't define both. Not required if "email: no" is set
instructor_email: 'Ansible Instructor ' # address you want the emails to arrive from. Not required if "email: no" is set
```
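The `admin_password` default described above ("LearnAnsible" plus two-digit month and two-digit year) can be sketched in shell as follows; the variable name here is ours, not the playbook's:

```shell
# Reproduce the documented default pattern: LearnAnsible[two digit month][two digit year]
default_admin_password="LearnAnsible$(date +%m%y)"
echo "${default_admin_password}"
```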
1. Create a `users.yml` by copying `sample-users.yml` and adding all your students:
__(email)__
```yaml
users:
  - name: Bod Barker
    username: bbarker
    email: bbarker@acme.com
  - name: Jane Smith
    username: jsmith
    email: jsmith@acme.com
```
__(no email)__
```yaml
users:
  - name: Student01
    username: student01
    email: instructor@acme.com
  - name: Student02
    username: student02
    email: instructor@acme.com
```
**(no email) NOTE:** If using generic users, you can generate the corresponding
`users.yml` file from the command line by creating a `STUDENTS` variable
containing the number of "environments" you want, and then populating the file.
For example:

```bash
STUDENTS=30
echo "users:" > users.yml
for NUM in $(seq -f "%02g" 1 $STUDENTS); do
  echo "  - name: Student${NUM}" >> users.yml
  echo "    username: student${NUM}" >> users.yml
  echo "    email: instructor@acme.com" >> users.yml
  echo >> users.yml
done
```
1. Run the playbook:

```bash
ansible-playbook provision_lab.yml -e @extra_vars.yml -e @users.yml
```
1. Check on the EC2 console and you should see instances being created like:

```
TRAINING-LAB--node1|2|3|haproxy|tower|control
```

In your EC2 console you will also see that a VPC was automatically created with a corresponding subnet.
__(email)__ If successful, all your students will be emailed the details of their hosts, including addresses and credentials, and an `instructor_inventory.txt` file will be created listing all the student machines.
__(no email)__ If you disabled email in your `extra_vars.yml` file, you will need to upload the instructor's inventory to a public URL which you will hand out to participants.
1. Use [github gist](https://gist.github.com/) to upload `lightbulb/tools/aws_lab_setup/instructors_inventory`.
1. Use [http://goo.gl](http://goo.gl) to shorten the URL to make it more consumable
### Teardown The Lab Environments
The `teardown_lab.yml` playbook deletes all the training instances as well as local inventory files.
To destroy all the EC2 instances after training is complete:
1. Run the playbook:

```bash
ansible-playbook teardown_lab.yml -e @extra_vars.yml -e @users.yml
```
## Accessing student documentation and slides
* A student guide and instructor slides are already hosted at [http://ansible-workshop.redhatgov.io](http://ansible-workshop.redhatgov.io). (NOTE: This guide is evolving and newer workshops can be previewed at [http://ansible.redhatgov.io](http://ansible.redhatgov.io). This new version is currently being integrated with the Lightbulb project.)
* Here you will find student instructions broken down into exercises as well as the presentation decks under the __Additional Resources__ drop down.
* During the workshop, it is recommended that you have a second device or printed copy of the student guide. Previous workshops have demonstrated that unless you've memorized all of it, you'll likely need to refer to the guide, but your laptop will be projecting the slide decks. Some students will fall behind and you'll need to refer back to other exercises/slides without having to change the projection for the entire class.
================================================
FILE: tools/aws_lab_setup/aws-directions/AWSHELP.md
================================================
# AWS DIRECTIONS HELP
These steps will walk you through where to create credentials (Access Key ID and Secret Access Key) on AWS to use for provisioning VMs with Ansible.
## Login
Login to the AWS Console on [https://aws.amazon.com/](https://aws.amazon.com/)

## Go to IAM
Under Services click on IAM to reach the IAM Management Console

## Go to Users
On the left pane, click on the Users link.

On the right side, pick the "Security credentials" tab. Now you need to either add a new user

OR find the existing user you want to add credentials to (this is more common).
## Add Credentials
You need to create a new access key; only then will the necessary key and key ID be shown. Make sure you save them somewhere! (But don't upload them to GitHub, or someone will start mining Bitcoins on your account.)

You now have your Access Key ID and Secret Access Key!
## AWS References
- [Access Key ID and Secret Access Key](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
- [Managing AWS Access Keys](http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html)
[Return to aws_lab_setup directions](../README.md)
================================================
FILE: tools/aws_lab_setup/inventory/ec2.ini
================================================
# Ansible EC2 external inventory script settings
#
[ec2]
# to talk to a private eucalyptus instance uncomment these lines
# and edit eucalyptus_host to be the host name of your cloud controller
#eucalyptus = True
#eucalyptus_host = clc.cloud.domain.org
# AWS regions to make calls to. Set this to 'all' to make request to all regions
# in AWS and merge the results together. Alternatively, set this to a comma
# separated list of regions. E.g. 'us-east-1,us-west-1,us-west-2'
regions = all
regions_exclude = us-gov-west-1,cn-north-1
# When generating inventory, Ansible needs to know how to address a server.
# Each EC2 instance has a lot of variables associated with it. Here is the list:
# http://docs.pythonboto.org/en/latest/ref/ec2.html#module-boto.ec2.instance
# Below are 2 variables that are used as the address of a server:
# - destination_variable
# - vpc_destination_variable
# This is the normal destination variable to use. If you are running Ansible
# from outside EC2, then 'public_dns_name' makes the most sense. If you are
# running Ansible from within EC2, then perhaps you want to use the internal
# address, and should set this to 'private_dns_name'. The key of an EC2 tag
# may optionally be used; however the boto instance variables hold precedence
# in the event of a collision.
destination_variable = public_dns_name
# This allows you to override the inventory_name with an ec2 variable, instead
# of using the destination_variable above. Addressing (aka ansible_ssh_host)
# will still use destination_variable. Tags should be written as 'tag_TAGNAME'.
#hostname_variable = tag_Name
# For server inside a VPC, using DNS names may not make sense. When an instance
# has 'subnet_id' set, this variable is used. If the subnet is public, setting
# this to 'ip_address' will return the public IP address. For instances in a
# private subnet, this should be set to 'private_ip_address', and Ansible must
# be run from within EC2. The key of an EC2 tag may optionally be used; however
# the boto instance variables hold precedence in the event of a collision.
# WARNING: - instances that are in the private vpc, _without_ public ip address
# will not be listed in the inventory until you set:
# vpc_destination_variable = private_ip_address
vpc_destination_variable = ip_address
# The following two settings allow flexible ansible host naming based on a
# python format string and a comma-separated list of ec2 tags. Note that:
#
# 1) If the tags referenced are not present for some instances, empty strings
# will be substituted in the format string.
# 2) This overrides both destination_variable and vpc_destination_variable.
#
#destination_format = {0}.{1}.example.com
#destination_format_tags = Name,environment
# To tag instances on EC2 with the resource records that point to them from
# Route53, uncomment and set 'route53' to True.
route53 = False
# To exclude RDS instances from the inventory, uncomment and set to False.
#rds = False
# To exclude ElastiCache instances from the inventory, uncomment and set to False.
#elasticache = False
# Additionally, you can specify the list of zones to exclude looking up in
# 'route53_excluded_zones' as a comma-separated list.
# route53_excluded_zones = samplezone1.com, samplezone2.com
# By default, only EC2 instances in the 'running' state are returned. Set
# 'all_instances' to True to return all instances regardless of state.
all_instances = False
# By default, only EC2 instances in the 'running' state are returned. Specify
# EC2 instance states to return as a comma-separated list. This
# option is overridden when 'all_instances' is True.
# instance_states = pending, running, shutting-down, terminated, stopping, stopped
# By default, only RDS instances in the 'available' state are returned. Set
# 'all_rds_instances' to True to return all RDS instances regardless of state.
all_rds_instances = False
# By default, only ElastiCache clusters and nodes in the 'available' state
# are returned. Set 'all_elasticache_clusters' and/or 'all_elasticache_nodes'
# to True to return all ElastiCache clusters and nodes, regardless of state.
#
# Note that all_elasticache_nodes only applies to listed clusters. That means
# if you set all_elasticache_clusters to False, no node will be returned from
# unavailable clusters, regardless of their state and of what you set for
# all_elasticache_nodes.
all_elasticache_replication_groups = False
all_elasticache_clusters = False
all_elasticache_nodes = False
# API calls to EC2 are slow. For this reason, we cache the results of an API
# call. Set this to the path you want cache files to be written to. Two files
# will be written to this directory:
# - ansible-ec2.cache
# - ansible-ec2.index
cache_path = ~/.ansible/tmp
# The number of seconds a cache file is considered valid. After this many
# seconds, a new API call will be made, and the cache file will be updated.
# To disable the cache, set this value to 0
cache_max_age = 300
# Organize groups into a nested/hierarchy instead of a flat namespace.
nested_groups = False
# Replace - tags when creating groups to avoid issues with ansible
replace_dash_in_groups = True
# If set to true, any tag of the form "a,b,c" is expanded into a list
# and the results are used to create additional tag_* inventory groups.
expand_csv_tags = False
# The EC2 inventory output can become very large. To manage its size,
# configure which groups should be created.
group_by_instance_id = True
group_by_region = True
group_by_availability_zone = True
group_by_ami_id = True
group_by_instance_type = True
group_by_key_pair = True
group_by_vpc_id = True
group_by_security_group = True
group_by_tag_keys = True
group_by_tag_none = True
group_by_route53_names = True
group_by_rds_engine = True
group_by_rds_parameter_group = True
group_by_elasticache_engine = True
group_by_elasticache_cluster = True
group_by_elasticache_parameter_group = True
group_by_elasticache_replication_group = True
# If you only want to include hosts that match a certain regular expression
# pattern_include = staging-*
# If you want to exclude any hosts that match a certain regular expression
# pattern_exclude = staging-*
# Instance filters can be used to control which instances are retrieved for
# inventory. For the full list of possible filters, please read the EC2 API
# docs: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html#query-DescribeInstances-filters
# Filters are key/value pairs separated by '=', to list multiple filters use
# a list separated by commas. See examples below.
# Retrieve only instances with (key=value) env=staging tag
# instance_filters = tag:env=staging
# Retrieve only instances with role=webservers OR role=dbservers tag
# instance_filters = tag:role=webservers,tag:role=dbservers
# Retrieve only t1.micro instances OR instances with tag env=staging
# instance_filters = instance-type=t1.micro,tag:env=staging
# You can use wildcards in filter values also. Below will list instances which
# tag Name value matches webservers1*
# (ex. webservers15, webservers1a, webservers123 etc)
# instance_filters = tag:Name=webservers1*
# A boto configuration profile may be used to separate out credentials
# see http://boto.readthedocs.org/en/latest/boto_config_tut.html
boto_profile = default
================================================
FILE: tools/aws_lab_setup/inventory/ec2.py
================================================
#!/usr/bin/env python
'''
EC2 external inventory script
=================================
Generates inventory that Ansible can understand by making API request to
AWS EC2 using the Boto library.
NOTE: This script assumes Ansible is being executed where the environment
variables needed for Boto have already been set:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
This script also assumes there is an ec2.ini file alongside it. To specify a
different path to ec2.ini, define the EC2_INI_PATH environment variable:
export EC2_INI_PATH=/path/to/my_ec2.ini
If you're using eucalyptus you need to set the above variables and
you need to define:
export EC2_URL=http://hostname_of_your_cc:port/services/Eucalyptus
If you're using boto profiles (requires boto>=2.24.0) you can choose a profile
using the --boto-profile command line argument (e.g. ec2.py --boto-profile prod) or using
the AWS_PROFILE variable:
AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml
For more details, see: http://docs.pythonboto.org/en/latest/boto_config_tut.html
When run against a specific host, this script returns the following variables:
- ec2_ami_launch_index
- ec2_architecture
- ec2_association
- ec2_attachTime
- ec2_attachment
- ec2_attachmentId
- ec2_client_token
- ec2_deleteOnTermination
- ec2_description
- ec2_deviceIndex
- ec2_dns_name
- ec2_eventsSet
- ec2_group_name
- ec2_hypervisor
- ec2_id
- ec2_image_id
- ec2_instanceState
- ec2_instance_type
- ec2_ipOwnerId
- ec2_ip_address
- ec2_item
- ec2_kernel
- ec2_key_name
- ec2_launch_time
- ec2_monitored
- ec2_monitoring
- ec2_networkInterfaceId
- ec2_ownerId
- ec2_persistent
- ec2_placement
- ec2_platform
- ec2_previous_state
- ec2_private_dns_name
- ec2_private_ip_address
- ec2_publicIp
- ec2_public_dns_name
- ec2_ramdisk
- ec2_reason
- ec2_region
- ec2_requester_id
- ec2_root_device_name
- ec2_root_device_type
- ec2_security_group_ids
- ec2_security_group_names
- ec2_shutdown_state
- ec2_sourceDestCheck
- ec2_spot_instance_request_id
- ec2_state
- ec2_state_code
- ec2_state_reason
- ec2_status
- ec2_subnet_id
- ec2_tenancy
- ec2_virtualization_type
- ec2_vpc_id
These variables are pulled out of a boto.ec2.instance object. There is a lack of
consistency with variable spellings (camelCase and underscores) since this
just loops through all variables the object exposes. It is preferred to use the
ones with underscores when multiple exist.
In addition, if an instance has AWS Tags associated with it, each tag is a new
variable named:
- ec2_tag_[Key] = [Value]
Security groups are comma-separated in 'ec2_security_group_ids' and
'ec2_security_group_names'.
'''
# (c) 2012, Peter Sankauskas
#
# This file is part of Ansible,
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
######################################################################
import sys
import os
import argparse
import re
from time import time
import boto
from boto import ec2
from boto import rds
from boto import elasticache
from boto import route53
import six
from six.moves import configparser
from collections import defaultdict
try:
    import json
except ImportError:
    import simplejson as json
class Ec2Inventory(object):

    def _empty_inventory(self):
        return {"_meta": {"hostvars": {}}}

    def __init__(self):
        ''' Main execution path '''

        # Inventory grouped by instance IDs, tags, security groups, regions,
        # and availability zones
        self.inventory = self._empty_inventory()

        # Index of hostname (address) to instance ID
        self.index = {}

        # Boto profile to use (if any)
        self.boto_profile = None

        # Read settings and parse CLI arguments
        self.parse_cli_args()
        self.read_settings()

        # Make sure that profile_name is not passed at all if not set
        # as pre 2.24 boto will fall over otherwise
        if self.boto_profile:
            if not hasattr(boto.ec2.EC2Connection, 'profile_name'):
                self.fail_with_error("boto version must be >= 2.24 to use profile")

        # Cache
        if self.args.refresh_cache:
            self.do_api_calls_update_cache()
        elif not self.is_cache_valid():
            self.do_api_calls_update_cache()

        # Data to print
        if self.args.host:
            data_to_print = self.get_host_info()
        elif self.args.list:
            # Display list of instances for inventory
            if self.inventory == self._empty_inventory():
                data_to_print = self.get_inventory_from_cache()
            else:
                data_to_print = self.json_format_dict(self.inventory, True)

        print(data_to_print)

    def is_cache_valid(self):
        ''' Determines if the cache files have expired, or if it is still valid '''

        if os.path.isfile(self.cache_path_cache):
            mod_time = os.path.getmtime(self.cache_path_cache)
            current_time = time()
            if (mod_time + self.cache_max_age) > current_time:
                if os.path.isfile(self.cache_path_index):
                    return True

        return False
    def read_settings(self):
        ''' Reads the settings from the ec2.ini file '''
        if six.PY3:
            config = configparser.ConfigParser()
        else:
            config = configparser.SafeConfigParser()
        ec2_default_ini_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'ec2.ini')
        ec2_ini_path = os.path.expanduser(os.path.expandvars(os.environ.get('EC2_INI_PATH', ec2_default_ini_path)))
        config.read(ec2_ini_path)

        # is eucalyptus?
        self.eucalyptus_host = None
        self.eucalyptus = False
        if config.has_option('ec2', 'eucalyptus'):
            self.eucalyptus = config.getboolean('ec2', 'eucalyptus')
        if self.eucalyptus and config.has_option('ec2', 'eucalyptus_host'):
            self.eucalyptus_host = config.get('ec2', 'eucalyptus_host')

        # Regions
        self.regions = []
        configRegions = config.get('ec2', 'regions')
        configRegions_exclude = config.get('ec2', 'regions_exclude')
        if (configRegions == 'all'):
            if self.eucalyptus_host:
                self.regions.append(boto.connect_euca(host=self.eucalyptus_host).region.name)
            else:
                for regionInfo in ec2.regions():
                    if regionInfo.name not in configRegions_exclude:
                        self.regions.append(regionInfo.name)
        else:
            self.regions = configRegions.split(",")

        # Destination addresses
        self.destination_variable = config.get('ec2', 'destination_variable')
        self.vpc_destination_variable = config.get('ec2', 'vpc_destination_variable')

        if config.has_option('ec2', 'hostname_variable'):
            self.hostname_variable = config.get('ec2', 'hostname_variable')
        else:
            self.hostname_variable = None

        if config.has_option('ec2', 'destination_format') and \
           config.has_option('ec2', 'destination_format_tags'):
            self.destination_format = config.get('ec2', 'destination_format')
            self.destination_format_tags = config.get('ec2', 'destination_format_tags').split(',')
        else:
            self.destination_format = None
            self.destination_format_tags = None

        # Route53
        self.route53_enabled = config.getboolean('ec2', 'route53')
        self.route53_excluded_zones = []
        if config.has_option('ec2', 'route53_excluded_zones'):
            self.route53_excluded_zones.extend(
                config.get('ec2', 'route53_excluded_zones', '').split(','))

        # Include RDS instances?
        self.rds_enabled = True
        if config.has_option('ec2', 'rds'):
            self.rds_enabled = config.getboolean('ec2', 'rds')

        # Include ElastiCache instances?
        self.elasticache_enabled = True
        if config.has_option('ec2', 'elasticache'):
            self.elasticache_enabled = config.getboolean('ec2', 'elasticache')

        # Return all EC2 instances?
        if config.has_option('ec2', 'all_instances'):
            self.all_instances = config.getboolean('ec2', 'all_instances')
        else:
            self.all_instances = False

        # Instance states to be gathered in inventory. Default is 'running'.
        # Setting 'all_instances' to 'yes' overrides this option.
        ec2_valid_instance_states = [
            'pending',
            'running',
            'shutting-down',
            'terminated',
            'stopping',
            'stopped'
        ]
        self.ec2_instance_states = []
        if self.all_instances:
            self.ec2_instance_states = ec2_valid_instance_states
        elif config.has_option('ec2', 'instance_states'):
            for instance_state in config.get('ec2', 'instance_states').split(','):
                instance_state = instance_state.strip()
                if instance_state not in ec2_valid_instance_states:
                    continue
                self.ec2_instance_states.append(instance_state)
        else:
            self.ec2_instance_states = ['running']

        # Return all RDS instances? (if RDS is enabled)
        if config.has_option('ec2', 'all_rds_instances') and self.rds_enabled:
            self.all_rds_instances = config.getboolean('ec2', 'all_rds_instances')
        else:
            self.all_rds_instances = False

        # Return all ElastiCache replication groups? (if ElastiCache is enabled)
        if config.has_option('ec2', 'all_elasticache_replication_groups') and self.elasticache_enabled:
            self.all_elasticache_replication_groups = config.getboolean('ec2', 'all_elasticache_replication_groups')
        else:
            self.all_elasticache_replication_groups = False

        # Return all ElastiCache clusters? (if ElastiCache is enabled)
        if config.has_option('ec2', 'all_elasticache_clusters') and self.elasticache_enabled:
            self.all_elasticache_clusters = config.getboolean('ec2', 'all_elasticache_clusters')
        else:
            self.all_elasticache_clusters = False

        # Return all ElastiCache nodes? (if ElastiCache is enabled)
        if config.has_option('ec2', 'all_elasticache_nodes') and self.elasticache_enabled:
            self.all_elasticache_nodes = config.getboolean('ec2', 'all_elasticache_nodes')
        else:
            self.all_elasticache_nodes = False

        # boto configuration profile (prefer CLI argument)
        self.boto_profile = self.args.boto_profile
        if config.has_option('ec2', 'boto_profile') and not self.boto_profile:
            self.boto_profile = config.get('ec2', 'boto_profile')

        # Cache related
        cache_dir = os.path.expanduser(config.get('ec2', 'cache_path'))
        if self.boto_profile:
            cache_dir = os.path.join(cache_dir, 'profile_' + self.boto_profile)
        if not os.path.exists(cache_dir):
            os.makedirs(cache_dir)

        self.cache_path_cache = cache_dir + "/ansible-ec2.cache"
        self.cache_path_index = cache_dir + "/ansible-ec2.index"
        self.cache_max_age = config.getint('ec2', 'cache_max_age')

        if config.has_option('ec2', 'expand_csv_tags'):
            self.expand_csv_tags = config.getboolean('ec2', 'expand_csv_tags')
        else:
            self.expand_csv_tags = False

        # Configure nested groups instead of flat namespace.
        if config.has_option('ec2', 'nested_groups'):
            self.nested_groups = config.getboolean('ec2', 'nested_groups')
        else:
            self.nested_groups = False

        # Replace dash or not in group names
        if config.has_option('ec2', 'replace_dash_in_groups'):
            self.replace_dash_in_groups = config.getboolean('ec2', 'replace_dash_in_groups')
        else:
            self.replace_dash_in_groups = True

        # Configure which groups should be created.
        group_by_options = [
            'group_by_instance_id',
            'group_by_region',
            'group_by_availability_zone',
            'group_by_ami_id',
            'group_by_instance_type',
            'group_by_key_pair',
            'group_by_vpc_id',
            'group_by_security_group',
            'group_by_tag_keys',
            'group_by_tag_none',
            'group_by_route53_names',
            'group_by_rds_engine',
            'group_by_rds_parameter_group',
            'group_by_elasticache_engine',
            'group_by_elasticache_cluster',
            'group_by_elasticache_parameter_group',
            'group_by_elasticache_replication_group',
        ]
        for option in group_by_options:
            if config.has_option('ec2', option):
                setattr(self, option, config.getboolean('ec2', option))
            else:
                setattr(self, option, True)

        # Do we need to just include hosts that match a pattern?
        try:
            pattern_include = config.get('ec2', 'pattern_include')
            if pattern_include and len(pattern_include) > 0:
                self.pattern_include = re.compile(pattern_include)
            else:
                self.pattern_include = None
        except configparser.NoOptionError:
            self.pattern_include = None

        # Do we need to exclude hosts that match a pattern?
        try:
            pattern_exclude = config.get('ec2', 'pattern_exclude')
            if pattern_exclude and len(pattern_exclude) > 0:
                self.pattern_exclude = re.compile(pattern_exclude)
            else:
                self.pattern_exclude = None
        except configparser.NoOptionError:
            self.pattern_exclude = None

        # Instance filters (see boto and EC2 API docs). Ignore invalid filters.
        self.ec2_instance_filters = defaultdict(list)
        if config.has_option('ec2', 'instance_filters'):
            filters = [f for f in config.get('ec2', 'instance_filters').split(',') if f]
            for instance_filter in filters:
                instance_filter = instance_filter.strip()
                if not instance_filter or '=' not in instance_filter:
                    continue
                filter_key, filter_value = [x.strip() for x in instance_filter.split('=', 1)]
                if not filter_key:
                    continue
                self.ec2_instance_filters[filter_key].append(filter_value)
def parse_cli_args(self):
''' Command line argument processing '''
parser = argparse.ArgumentParser(description='Produce an Ansible Inventory file based on EC2')
parser.add_argument('--list', action='store_true', default=True,
help='List instances (default: True)')
parser.add_argument('--host', action='store',
help='Get all the variables about a specific instance')
parser.add_argument('--refresh-cache', action='store_true', default=False,
help='Force refresh of cache by making API requests to EC2 (default: False - use cache files)')
parser.add_argument('--profile', '--boto-profile', action='store', dest='boto_profile',
help='Use boto profile for connections to EC2')
self.args = parser.parse_args()
def do_api_calls_update_cache(self):
''' Do API calls to each region, and save data in cache files '''
if self.route53_enabled:
self.get_route53_records()
for region in self.regions:
self.get_instances_by_region(region)
if self.rds_enabled:
self.get_rds_instances_by_region(region)
if self.elasticache_enabled:
self.get_elasticache_clusters_by_region(region)
self.get_elasticache_replication_groups_by_region(region)
self.write_to_cache(self.inventory, self.cache_path_cache)
self.write_to_cache(self.index, self.cache_path_index)
def connect(self, region):
''' Create a connection to the API server '''
if self.eucalyptus:
conn = boto.connect_euca(host=self.eucalyptus_host)
conn.APIVersion = '2010-08-31'
else:
conn = self.connect_to_aws(ec2, region)
return conn
def boto_fix_security_token_in_profile(self, connect_args):
''' monkey patch for boto issue boto/boto#2100 '''
profile = 'profile ' + self.boto_profile
if boto.config.has_option(profile, 'aws_security_token'):
connect_args['security_token'] = boto.config.get(profile, 'aws_security_token')
return connect_args
def connect_to_aws(self, module, region):
connect_args = {}
# only pass the profile name if it's set (as it is not supported by older boto versions)
if self.boto_profile:
connect_args['profile_name'] = self.boto_profile
self.boto_fix_security_token_in_profile(connect_args)
conn = module.connect_to_region(region, **connect_args)
# connect_to_region will fail "silently" by returning None if the region name is wrong or not supported
if conn is None:
self.fail_with_error("Region name %s likely not supported, or AWS is down. Connection to region failed." % region)
return conn
def get_instances_by_region(self, region):
''' Makes an AWS EC2 API call to list the instances in a particular
region '''
try:
conn = self.connect(region)
reservations = []
if self.ec2_instance_filters:
for filter_key, filter_values in self.ec2_instance_filters.items():
reservations.extend(conn.get_all_instances(filters={filter_key: filter_values}))
else:
reservations = conn.get_all_instances()
for reservation in reservations:
for instance in reservation.instances:
self.add_instance(instance, region)
except boto.exception.BotoServerError as e:
if e.error_code == 'AuthFailure':
error = self.get_auth_error_message()
else:
backend = 'Eucalyptus' if self.eucalyptus else 'AWS'
error = "Error connecting to %s backend.\n%s" % (backend, e.message)
self.fail_with_error(error, 'getting EC2 instances')
def get_rds_instances_by_region(self, region):
''' Makes an AWS API call to list the RDS instances in a particular
region '''
try:
conn = self.connect_to_aws(rds, region)
if conn:
marker = None
while True:
instances = conn.get_all_dbinstances(marker=marker)
marker = instances.marker
for instance in instances:
self.add_rds_instance(instance, region)
if not marker:
break
except boto.exception.BotoServerError as e:
error = e.reason
if e.error_code == 'AuthFailure':
error = self.get_auth_error_message()
if e.reason != "Forbidden":
error = "Looks like AWS RDS is down:\n%s" % e.message
self.fail_with_error(error, 'getting RDS instances')
def get_elasticache_clusters_by_region(self, region):
''' Makes an AWS API call to list the ElastiCache clusters (with
node info) in a particular region. '''
# The ElastiCache boto module doesn't provide a get_all_instances
# method, so we call describe directly (it would be called by the
# shorthand method anyway...)
try:
conn = self.connect_to_aws(elasticache, region)
if conn:
# show_cache_node_info = True
# because we also want nodes' information
response = conn.describe_cache_clusters(None, None, None, True)
except boto.exception.BotoServerError as e:
error = e.reason
if e.error_code == 'AuthFailure':
error = self.get_auth_error_message()
if e.reason != "Forbidden":
error = "Looks like AWS ElastiCache is down:\n%s" % e.message
self.fail_with_error(error, 'getting ElastiCache clusters')
try:
# Boto also doesn't provide wrapper classes for CacheClusters or
# CacheNodes, so we can't use the get_list method in the
# AWSQueryConnection. Let's do the work manually.
clusters = response['DescribeCacheClustersResponse']['DescribeCacheClustersResult']['CacheClusters']
except KeyError as e:
error = "ElastiCache query to AWS failed (unexpected format)."
self.fail_with_error(error, 'getting ElastiCache clusters')
for cluster in clusters:
self.add_elasticache_cluster(cluster, region)
def get_elasticache_replication_groups_by_region(self, region):
''' Makes an AWS API call to list the ElastiCache replication groups
in a particular region. '''
# The ElastiCache boto module doesn't provide a get_all_instances
# method, so we call describe directly (it would be called by the
# shorthand method anyway...)
try:
conn = self.connect_to_aws(elasticache, region)
if conn:
response = conn.describe_replication_groups()
except boto.exception.BotoServerError as e:
error = e.reason
if e.error_code == 'AuthFailure':
error = self.get_auth_error_message()
if e.reason != "Forbidden":
error = "Looks like AWS ElastiCache [Replication Groups] is down:\n%s" % e.message
self.fail_with_error(error, 'getting ElastiCache clusters')
try:
# Boto also doesn't provide wrapper classes for ReplicationGroups,
# so we can't use the get_list method in the AWSQueryConnection.
# Let's do the work manually.
replication_groups = response['DescribeReplicationGroupsResponse']['DescribeReplicationGroupsResult']['ReplicationGroups']
except KeyError as e:
error = "ElastiCache [Replication Groups] query to AWS failed (unexpected format)."
self.fail_with_error(error, 'getting ElastiCache clusters')
for replication_group in replication_groups:
self.add_elasticache_replication_group(replication_group, region)
def get_auth_error_message(self):
''' create an informative error message if there is an issue authenticating'''
errors = ["Authentication error retrieving ec2 inventory."]
if None in [os.environ.get('AWS_ACCESS_KEY_ID'), os.environ.get('AWS_SECRET_ACCESS_KEY')]:
errors.append(' - No AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY environment vars found')
else:
errors.append(' - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment vars found but may not be correct')
boto_paths = ['/etc/boto.cfg', '~/.boto', '~/.aws/credentials']
boto_config_found = list(p for p in boto_paths if os.path.isfile(os.path.expanduser(p)))
if len(boto_config_found) > 0:
errors.append(" - Boto configs found at '%s', but the credentials contained may not be correct" % ', '.join(boto_config_found))
else:
errors.append(" - No Boto config found at any expected location '%s'" % ', '.join(boto_paths))
return '\n'.join(errors)
def fail_with_error(self, err_msg, err_operation=None):
''' Log an error to stderr for ansible-playbook to consume, then exit '''
if err_operation:
err_msg = 'ERROR: "{err_msg}", while: {err_operation}'.format(
err_msg=err_msg, err_operation=err_operation)
sys.stderr.write(err_msg)
sys.exit(1)
def get_instance(self, region, instance_id):
conn = self.connect(region)
reservations = conn.get_all_instances([instance_id])
for reservation in reservations:
for instance in reservation.instances:
return instance
def add_instance(self, instance, region):
''' Adds an instance to the inventory and index, as long as it is
addressable '''
# Only return instances with desired instance states
if instance.state not in self.ec2_instance_states:
return
# Select the best destination address
if self.destination_format and self.destination_format_tags:
dest = self.destination_format.format(*[ getattr(instance, 'tags').get(tag, '') for tag in self.destination_format_tags ])
elif instance.subnet_id:
dest = getattr(instance, self.vpc_destination_variable, None)
if dest is None:
dest = getattr(instance, 'tags').get(self.vpc_destination_variable, None)
else:
dest = getattr(instance, self.destination_variable, None)
if dest is None:
dest = getattr(instance, 'tags').get(self.destination_variable, None)
if not dest:
# Skip instances we cannot address (e.g. private VPC subnet)
return
# Set the inventory name
hostname = None
if self.hostname_variable:
if self.hostname_variable.startswith('tag_'):
hostname = instance.tags.get(self.hostname_variable[4:], None)
else:
hostname = getattr(instance, self.hostname_variable)
# If we can't get a nice hostname, use the destination address
if not hostname:
hostname = dest
hostname = self.to_safe(hostname).lower()
# if we only want to include hosts that match a pattern, skip those that don't
if self.pattern_include and not self.pattern_include.match(hostname):
return
# if we need to exclude hosts that match a pattern, skip those
if self.pattern_exclude and self.pattern_exclude.match(hostname):
return
# Add to index
self.index[hostname] = [region, instance.id]
# Inventory: Group by instance ID (always a group of 1)
if self.group_by_instance_id:
self.inventory[instance.id] = [hostname]
if self.nested_groups:
self.push_group(self.inventory, 'instances', instance.id)
# Inventory: Group by region
if self.group_by_region:
self.push(self.inventory, region, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'regions', region)
# Inventory: Group by availability zone
if self.group_by_availability_zone:
self.push(self.inventory, instance.placement, hostname)
if self.nested_groups:
if self.group_by_region:
self.push_group(self.inventory, region, instance.placement)
self.push_group(self.inventory, 'zones', instance.placement)
# Inventory: Group by Amazon Machine Image (AMI) ID
if self.group_by_ami_id:
ami_id = self.to_safe(instance.image_id)
self.push(self.inventory, ami_id, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'images', ami_id)
# Inventory: Group by instance type
if self.group_by_instance_type:
type_name = self.to_safe('type_' + instance.instance_type)
self.push(self.inventory, type_name, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'types', type_name)
# Inventory: Group by key pair
if self.group_by_key_pair and instance.key_name:
key_name = self.to_safe('key_' + instance.key_name)
self.push(self.inventory, key_name, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'keys', key_name)
# Inventory: Group by VPC
if self.group_by_vpc_id and instance.vpc_id:
vpc_id_name = self.to_safe('vpc_id_' + instance.vpc_id)
self.push(self.inventory, vpc_id_name, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'vpcs', vpc_id_name)
# Inventory: Group by security group
if self.group_by_security_group:
try:
for group in instance.groups:
key = self.to_safe("security_group_" + group.name)
self.push(self.inventory, key, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'security_groups', key)
except AttributeError:
self.fail_with_error('\n'.join(['Package boto seems to be too old.',
'Please upgrade to boto >= 2.3.0.']))
# Inventory: Group by tag keys
if self.group_by_tag_keys:
for k, v in instance.tags.items():
if self.expand_csv_tags and v and ',' in v:
values = map(lambda x: x.strip(), v.split(','))
else:
values = [v]
for v in values:
if v:
key = self.to_safe("tag_" + k + "=" + v)
else:
key = self.to_safe("tag_" + k)
self.push(self.inventory, key, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'tags', self.to_safe("tag_" + k))
if v:
self.push_group(self.inventory, self.to_safe("tag_" + k), key)
# Inventory: Group by Route53 domain names if enabled
if self.route53_enabled and self.group_by_route53_names:
route53_names = self.get_instance_route53_names(instance)
for name in route53_names:
self.push(self.inventory, name, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'route53', name)
# Global Tag: instances without tags
if self.group_by_tag_none and len(instance.tags) == 0:
self.push(self.inventory, 'tag_none', hostname)
if self.nested_groups:
self.push_group(self.inventory, 'tags', 'tag_none')
# Global Tag: tag all EC2 instances
self.push(self.inventory, 'ec2', hostname)
self.inventory["_meta"]["hostvars"][hostname] = self.get_host_info_dict_from_instance(instance)
self.inventory["_meta"]["hostvars"][hostname]['ansible_host'] = dest
def add_rds_instance(self, instance, region):
''' Adds an RDS instance to the inventory and index, as long as it is
addressable '''
# Only want available instances unless all_rds_instances is True
if not self.all_rds_instances and instance.status != 'available':
return
# Select the best destination address
dest = instance.endpoint[0]
if not dest:
# Skip instances we cannot address (e.g. private VPC subnet)
return
# Set the inventory name
hostname = None
if self.hostname_variable:
if self.hostname_variable.startswith('tag_'):
hostname = instance.tags.get(self.hostname_variable[4:], None)
else:
hostname = getattr(instance, self.hostname_variable)
# If we can't get a nice hostname, use the destination address
if not hostname:
hostname = dest
hostname = self.to_safe(hostname).lower()
# Add to index
self.index[hostname] = [region, instance.id]
# Inventory: Group by instance ID (always a group of 1)
if self.group_by_instance_id:
self.inventory[instance.id] = [hostname]
if self.nested_groups:
self.push_group(self.inventory, 'instances', instance.id)
# Inventory: Group by region
if self.group_by_region:
self.push(self.inventory, region, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'regions', region)
# Inventory: Group by availability zone
if self.group_by_availability_zone:
self.push(self.inventory, instance.availability_zone, hostname)
if self.nested_groups:
if self.group_by_region:
self.push_group(self.inventory, region, instance.availability_zone)
self.push_group(self.inventory, 'zones', instance.availability_zone)
# Inventory: Group by instance type
if self.group_by_instance_type:
type_name = self.to_safe('type_' + instance.instance_class)
self.push(self.inventory, type_name, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'types', type_name)
# Inventory: Group by VPC
if self.group_by_vpc_id and instance.subnet_group and instance.subnet_group.vpc_id:
vpc_id_name = self.to_safe('vpc_id_' + instance.subnet_group.vpc_id)
self.push(self.inventory, vpc_id_name, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'vpcs', vpc_id_name)
# Inventory: Group by security group
if self.group_by_security_group:
try:
if instance.security_group:
key = self.to_safe("security_group_" + instance.security_group.name)
self.push(self.inventory, key, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'security_groups', key)
except AttributeError:
self.fail_with_error('\n'.join(['Package boto seems to be too old.',
'Please upgrade to boto >= 2.3.0.']))
# Inventory: Group by engine
if self.group_by_rds_engine:
self.push(self.inventory, self.to_safe("rds_" + instance.engine), hostname)
if self.nested_groups:
self.push_group(self.inventory, 'rds_engines', self.to_safe("rds_" + instance.engine))
# Inventory: Group by parameter group
if self.group_by_rds_parameter_group:
self.push(self.inventory, self.to_safe("rds_parameter_group_" + instance.parameter_group.name), hostname)
if self.nested_groups:
self.push_group(self.inventory, 'rds_parameter_groups', self.to_safe("rds_parameter_group_" + instance.parameter_group.name))
# Global Tag: all RDS instances
self.push(self.inventory, 'rds', hostname)
self.inventory["_meta"]["hostvars"][hostname] = self.get_host_info_dict_from_instance(instance)
self.inventory["_meta"]["hostvars"][hostname]['ansible_host'] = dest
def add_elasticache_cluster(self, cluster, region):
''' Adds an ElastiCache cluster to the inventory and index, as long as
its nodes are addressable '''
# Only want available clusters unless all_elasticache_clusters is True
if not self.all_elasticache_clusters and cluster['CacheClusterStatus'] != 'available':
return
# Select the best destination address
if 'ConfigurationEndpoint' in cluster and cluster['ConfigurationEndpoint']:
# Memcached cluster
dest = cluster['ConfigurationEndpoint']['Address']
is_redis = False
else:
# Redis single-node cluster
# Because all Redis clusters are single nodes, we'll merge the
# info from the cluster with info about the node
dest = cluster['CacheNodes'][0]['Endpoint']['Address']
is_redis = True
if not dest:
# Skip clusters we cannot address (e.g. private VPC subnet)
return
# Add to index
self.index[dest] = [region, cluster['CacheClusterId']]
# Inventory: Group by instance ID (always a group of 1)
if self.group_by_instance_id:
self.inventory[cluster['CacheClusterId']] = [dest]
if self.nested_groups:
self.push_group(self.inventory, 'instances', cluster['CacheClusterId'])
# Inventory: Group by region
if self.group_by_region and not is_redis:
self.push(self.inventory, region, dest)
if self.nested_groups:
self.push_group(self.inventory, 'regions', region)
# Inventory: Group by availability zone
if self.group_by_availability_zone and not is_redis:
self.push(self.inventory, cluster['PreferredAvailabilityZone'], dest)
if self.nested_groups:
if self.group_by_region:
self.push_group(self.inventory, region, cluster['PreferredAvailabilityZone'])
self.push_group(self.inventory, 'zones', cluster['PreferredAvailabilityZone'])
# Inventory: Group by node type
if self.group_by_instance_type and not is_redis:
type_name = self.to_safe('type_' + cluster['CacheNodeType'])
self.push(self.inventory, type_name, dest)
if self.nested_groups:
self.push_group(self.inventory, 'types', type_name)
# Inventory: Group by VPC (information not available in the current
# AWS API version for ElastiCache)
# Inventory: Group by security group
if self.group_by_security_group and not is_redis:
# Check for the existence of the 'SecurityGroups' key and also if
# this key has some value. When the cluster is not placed in a SG
# the query can return None here and cause an error.
if 'SecurityGroups' in cluster and cluster['SecurityGroups'] is not None:
for security_group in cluster['SecurityGroups']:
key = self.to_safe("security_group_" + security_group['SecurityGroupId'])
self.push(self.inventory, key, dest)
if self.nested_groups:
self.push_group(self.inventory, 'security_groups', key)
# Inventory: Group by engine
if self.group_by_elasticache_engine and not is_redis:
self.push(self.inventory, self.to_safe("elasticache_" + cluster['Engine']), dest)
if self.nested_groups:
self.push_group(self.inventory, 'elasticache_engines', self.to_safe("elasticache_" + cluster['Engine']))
# Inventory: Group by parameter group
if self.group_by_elasticache_parameter_group:
self.push(self.inventory, self.to_safe("elasticache_parameter_group_" + cluster['CacheParameterGroup']['CacheParameterGroupName']), dest)
if self.nested_groups:
self.push_group(self.inventory, 'elasticache_parameter_groups', self.to_safe(cluster['CacheParameterGroup']['CacheParameterGroupName']))
# Inventory: Group by replication group
if self.group_by_elasticache_replication_group and 'ReplicationGroupId' in cluster and cluster['ReplicationGroupId']:
self.push(self.inventory, self.to_safe("elasticache_replication_group_" + cluster['ReplicationGroupId']), dest)
if self.nested_groups:
self.push_group(self.inventory, 'elasticache_replication_groups', self.to_safe(cluster['ReplicationGroupId']))
# Global Tag: all ElastiCache clusters
self.push(self.inventory, 'elasticache_clusters', cluster['CacheClusterId'])
host_info = self.get_host_info_dict_from_describe_dict(cluster)
self.inventory["_meta"]["hostvars"][dest] = host_info
# Add the nodes
for node in cluster['CacheNodes']:
self.add_elasticache_node(node, cluster, region)
def add_elasticache_node(self, node, cluster, region):
''' Adds an ElastiCache node to the inventory and index, as long as
it is addressable '''
# Only want available nodes unless all_elasticache_nodes is True
if not self.all_elasticache_nodes and node['CacheNodeStatus'] != 'available':
return
# Select the best destination address
dest = node['Endpoint']['Address']
if not dest:
# Skip nodes we cannot address (e.g. private VPC subnet)
return
node_id = self.to_safe(cluster['CacheClusterId'] + '_' + node['CacheNodeId'])
# Add to index
self.index[dest] = [region, node_id]
# Inventory: Group by node ID (always a group of 1)
if self.group_by_instance_id:
self.inventory[node_id] = [dest]
if self.nested_groups:
self.push_group(self.inventory, 'instances', node_id)
# Inventory: Group by region
if self.group_by_region:
self.push(self.inventory, region, dest)
if self.nested_groups:
self.push_group(self.inventory, 'regions', region)
# Inventory: Group by availability zone
if self.group_by_availability_zone:
self.push(self.inventory, cluster['PreferredAvailabilityZone'], dest)
if self.nested_groups:
if self.group_by_region:
self.push_group(self.inventory, region, cluster['PreferredAvailabilityZone'])
self.push_group(self.inventory, 'zones', cluster['PreferredAvailabilityZone'])
# Inventory: Group by node type
if self.group_by_instance_type:
type_name = self.to_safe('type_' + cluster['CacheNodeType'])
self.push(self.inventory, type_name, dest)
if self.nested_groups:
self.push_group(self.inventory, 'types', type_name)
# Inventory: Group by VPC (information not available in the current
# AWS API version for ElastiCache)
# Inventory: Group by security group
if self.group_by_security_group:
# Check for the existence of the 'SecurityGroups' key and also if
# this key has some value. When the cluster is not placed in a SG
# the query can return None here and cause an error.
if 'SecurityGroups' in cluster and cluster['SecurityGroups'] is not None:
for security_group in cluster['SecurityGroups']:
key = self.to_safe("security_group_" + security_group['SecurityGroupId'])
self.push(self.inventory, key, dest)
if self.nested_groups:
self.push_group(self.inventory, 'security_groups', key)
# Inventory: Group by engine
if self.group_by_elasticache_engine:
self.push(self.inventory, self.to_safe("elasticache_" + cluster['Engine']), dest)
if self.nested_groups:
self.push_group(self.inventory, 'elasticache_engines', self.to_safe("elasticache_" + cluster['Engine']))
# Inventory: Group by parameter group (done at cluster level)
# Inventory: Group by replication group (done at cluster level)
# Inventory: Group by ElastiCache Cluster
if self.group_by_elasticache_cluster:
self.push(self.inventory, self.to_safe("elasticache_cluster_" + cluster['CacheClusterId']), dest)
# Global Tag: all ElastiCache nodes
self.push(self.inventory, 'elasticache_nodes', dest)
host_info = self.get_host_info_dict_from_describe_dict(node)
if dest in self.inventory["_meta"]["hostvars"]:
self.inventory["_meta"]["hostvars"][dest].update(host_info)
else:
self.inventory["_meta"]["hostvars"][dest] = host_info
def add_elasticache_replication_group(self, replication_group, region):
''' Adds an ElastiCache replication group to the inventory and index '''
# Only want available clusters unless all_elasticache_replication_groups is True
if not self.all_elasticache_replication_groups and replication_group['Status'] != 'available':
return
# Select the best destination address (PrimaryEndpoint)
dest = replication_group['NodeGroups'][0]['PrimaryEndpoint']['Address']
if not dest:
# Skip clusters we cannot address (e.g. private VPC subnet)
return
# Add to index
self.index[dest] = [region, replication_group['ReplicationGroupId']]
# Inventory: Group by ID (always a group of 1)
if self.group_by_instance_id:
self.inventory[replication_group['ReplicationGroupId']] = [dest]
if self.nested_groups:
self.push_group(self.inventory, 'instances', replication_group['ReplicationGroupId'])
# Inventory: Group by region
if self.group_by_region:
self.push(self.inventory, region, dest)
if self.nested_groups:
self.push_group(self.inventory, 'regions', region)
# Inventory: Group by availability zone (doesn't apply to replication groups)
# Inventory: Group by node type (doesn't apply to replication groups)
# Inventory: Group by VPC (information not available in the current
# AWS API version for replication groups
# Inventory: Group by security group (doesn't apply to replication groups)
# Check this value in cluster level
# Inventory: Group by engine (replication groups are always Redis)
if self.group_by_elasticache_engine:
self.push(self.inventory, 'elasticache_redis', dest)
if self.nested_groups:
self.push_group(self.inventory, 'elasticache_engines', 'redis')
# Global Tag: all ElastiCache clusters
self.push(self.inventory, 'elasticache_replication_groups', replication_group['ReplicationGroupId'])
host_info = self.get_host_info_dict_from_describe_dict(replication_group)
self.inventory["_meta"]["hostvars"][dest] = host_info
def get_route53_records(self):
''' Get and store the map of resource records to domain names that
point to them. '''
r53_conn = route53.Route53Connection()
all_zones = r53_conn.get_zones()
route53_zones = [ zone for zone in all_zones if zone.name[:-1]
not in self.route53_excluded_zones ]
self.route53_records = {}
for zone in route53_zones:
rrsets = r53_conn.get_all_rrsets(zone.id)
for record_set in rrsets:
record_name = record_set.name
if record_name.endswith('.'):
record_name = record_name[:-1]
for resource in record_set.resource_records:
self.route53_records.setdefault(resource, set())
self.route53_records[resource].add(record_name)
def get_instance_route53_names(self, instance):
''' Check if an instance is referenced in the records we have from
Route53. If it is, return the list of domain names pointing to said
instance. If nothing points to it, return an empty list. '''
instance_attributes = [ 'public_dns_name', 'private_dns_name',
'ip_address', 'private_ip_address' ]
name_list = set()
for attrib in instance_attributes:
try:
value = getattr(instance, attrib)
except AttributeError:
continue
if value in self.route53_records:
name_list.update(self.route53_records[value])
return list(name_list)
def get_host_info_dict_from_instance(self, instance):
instance_vars = {}
for key in vars(instance):
value = getattr(instance, key)
key = self.to_safe('ec2_' + key)
# Handle complex types
# state/previous_state changed to properties in boto in https://github.com/boto/boto/commit/a23c379837f698212252720d2af8dec0325c9518
if key == 'ec2__state':
instance_vars['ec2_state'] = instance.state or ''
instance_vars['ec2_state_code'] = instance.state_code
elif key == 'ec2__previous_state':
instance_vars['ec2_previous_state'] = instance.previous_state or ''
instance_vars['ec2_previous_state_code'] = instance.previous_state_code
elif type(value) in [int, bool]:
instance_vars[key] = value
elif isinstance(value, six.string_types):
instance_vars[key] = value.strip()
elif value is None:
instance_vars[key] = ''
elif key == 'ec2_region':
instance_vars[key] = value.name
elif key == 'ec2__placement':
instance_vars['ec2_placement'] = value.zone
elif key == 'ec2_tags':
for k, v in value.items():
if self.expand_csv_tags and ',' in v:
v = map(lambda x: x.strip(), v.split(','))
key = self.to_safe('ec2_tag_' + k)
instance_vars[key] = v
elif key == 'ec2_groups':
group_ids = []
group_names = []
for group in value:
group_ids.append(group.id)
group_names.append(group.name)
instance_vars["ec2_security_group_ids"] = ','.join([str(i) for i in group_ids])
instance_vars["ec2_security_group_names"] = ','.join([str(i) for i in group_names])
else:
pass
# TODO Product codes if someone finds them useful
#print key
#print type(value)
#print value
return instance_vars
def get_host_info_dict_from_describe_dict(self, describe_dict):
''' Parses the dictionary returned by the API call into a flat list
of parameters. This method should be used only when 'describe' is
used directly because Boto doesn't provide specific classes. '''
# I really don't agree with prefixing everything with 'ec2'
# because EC2, RDS and ElastiCache are different services.
# I'm just following the pattern used until now to not break any
# compatibility.
host_info = {}
for key in describe_dict:
value = describe_dict[key]
key = self.to_safe('ec2_' + self.uncammelize(key))
# Handle complex types
# Target: Memcached Cache Clusters
if key == 'ec2_configuration_endpoint' and value:
host_info['ec2_configuration_endpoint_address'] = value['Address']
host_info['ec2_configuration_endpoint_port'] = value['Port']
# Target: Cache Nodes and Redis Cache Clusters (single node)
if key == 'ec2_endpoint' and value:
host_info['ec2_endpoint_address'] = value['Address']
host_info['ec2_endpoint_port'] = value['Port']
# Target: Redis Replication Groups
if key == 'ec2_node_groups' and value:
host_info['ec2_endpoint_address'] = value[0]['PrimaryEndpoint']['Address']
host_info['ec2_endpoint_port'] = value[0]['PrimaryEndpoint']['Port']
replica_count = 0
for node in value[0]['NodeGroupMembers']:
if node['CurrentRole'] == 'primary':
host_info['ec2_primary_cluster_address'] = node['ReadEndpoint']['Address']
host_info['ec2_primary_cluster_port'] = node['ReadEndpoint']['Port']
host_info['ec2_primary_cluster_id'] = node['CacheClusterId']
elif node['CurrentRole'] == 'replica':
host_info['ec2_replica_cluster_address_'+ str(replica_count)] = node['ReadEndpoint']['Address']
host_info['ec2_replica_cluster_port_'+ str(replica_count)] = node['ReadEndpoint']['Port']
host_info['ec2_replica_cluster_id_'+ str(replica_count)] = node['CacheClusterId']
replica_count += 1
# Target: Redis Replication Groups
if key == 'ec2_member_clusters' and value:
host_info['ec2_member_clusters'] = ','.join([str(i) for i in value])
# Target: All Cache Clusters
elif key == 'ec2_cache_parameter_group':
host_info["ec2_cache_node_ids_to_reboot"] = ','.join([str(i) for i in value['CacheNodeIdsToReboot']])
host_info['ec2_cache_parameter_group_name'] = value['CacheParameterGroupName']
host_info['ec2_cache_parameter_apply_status'] = value['ParameterApplyStatus']
# Target: Almost everything
elif key == 'ec2_security_groups':
# Skip if SecurityGroups is None
# (it is possible to have the key defined but no value in it).
if value is not None:
sg_ids = []
for sg in value:
sg_ids.append(sg['SecurityGroupId'])
host_info["ec2_security_group_ids"] = ','.join([str(i) for i in sg_ids])
# Target: Everything
# Preserve booleans and integers
elif type(value) in [int, bool]:
host_info[key] = value
# Target: Everything
# Sanitize string values
elif isinstance(value, six.string_types):
host_info[key] = value.strip()
# Target: Everything
# Replace None by an empty string
elif value is None:
host_info[key] = ''
else:
# Remove non-processed complex types
pass
return host_info
def get_host_info(self):
''' Get variables about a specific host '''
if len(self.index) == 0:
# Need to load index from cache
self.load_index_from_cache()
if self.args.host not in self.index:
# try updating the cache
self.do_api_calls_update_cache()
if self.args.host not in self.index:
# host might not exist anymore
return self.json_format_dict({}, True)
(region, instance_id) = self.index[self.args.host]
instance = self.get_instance(region, instance_id)
return self.json_format_dict(self.get_host_info_dict_from_instance(instance), True)
def push(self, my_dict, key, element):
''' Push an element onto an array that may not have been defined in
the dict '''
group_info = my_dict.setdefault(key, [])
if isinstance(group_info, dict):
host_list = group_info.setdefault('hosts', [])
host_list.append(element)
else:
group_info.append(element)
def push_group(self, my_dict, key, element):
''' Push a group as a child of another group. '''
parent_group = my_dict.setdefault(key, {})
if not isinstance(parent_group, dict):
parent_group = my_dict[key] = {'hosts': parent_group}
child_groups = parent_group.setdefault('children', [])
if element not in child_groups:
child_groups.append(element)
def get_inventory_from_cache(self):
''' Reads the inventory from the cache file and returns it as a JSON
object '''
with open(self.cache_path_cache, 'r') as cache:
json_inventory = cache.read()
return json_inventory
def load_index_from_cache(self):
''' Reads the index from the cache file and sets self.index '''
with open(self.cache_path_index, 'r') as cache:
json_index = cache.read()
self.index = json.loads(json_index)
def write_to_cache(self, data, filename):
''' Writes data in JSON format to a file '''
json_data = self.json_format_dict(data, True)
cache = open(filename, 'w')
cache.write(json_data)
cache.close()
def uncammelize(self, key):
temp = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', key)
return re.sub('([a-z0-9])([A-Z])', r'\1_\2', temp).lower()
def to_safe(self, word):
''' Converts 'bad' characters in a string to underscores so they can be used as Ansible groups '''
regex = r"[^A-Za-z0-9\_"
if not self.replace_dash_in_groups:
regex += r"\-"
return re.sub(regex + "]", "_", word)
def json_format_dict(self, data, pretty=False):
''' Converts a dict to a JSON object and dumps it as a formatted
string '''
if pretty:
return json.dumps(data, sort_keys=True, indent=2)
else:
return json.dumps(data)
# Run the script
Ec2Inventory()
================================================
FILE: tools/aws_lab_setup/inventory/group_vars/all.yml
================================================
ssh_port: 22
users:
- name: Bruce Wayne
username: bwayne
email: bwayne@we.com
admin_password: LearnAnsible{{ lookup('pipe', 'date +%m%y')}}
admin_password_hash: "{{ admin_password | password_hash(salt='sRvXWmR5BBwqRlih') }}"
================================================
FILE: tools/aws_lab_setup/provision_lab.yml
================================================
- name: Perform Checks to make sure this Playbook will complete successfully
hosts: localhost
connection: local
become: no
gather_facts: no
tasks:
- name: Make sure we are running a correct Ansible version
fail:
msg: "Ansible version must be at least 2.3.2"
when: (ansible_version.major == 1) or
(ansible_version.major == 2 and ansible_version.minor < 3) or
(ansible_version.major == 2 and ansible_version.minor == 3 and ansible_version.revision < 2)
- name: save username of AWS user
set_fact:
lightbulb_user: '{{ ec2_key_name }}'
- name: Create lab instances in AWS
hosts: localhost
connection: local
become: no
gather_facts: no
vars:
teardown: false
roles:
- manage_ec2_instances
- name: Configure common options on managed nodes and control nodes
hosts: "managed_nodes:control_nodes"
become: yes
roles:
- user_accounts
- common
- name: Configure control node
hosts: control_nodes
become: yes
roles:
- control_node
- name: Email inventory to students
hosts: localhost
connection: local
become: no
gather_facts: no
roles:
- email
================================================
FILE: tools/aws_lab_setup/roles/common/defaults/main.yml
================================================
common_node_config_options:
- dest: /etc/ssh/sshd_config
regexp: '^#?Port'
line: Port {{ ssh_port | default('22') }}
validate: sshd -t -f %s
- dest: /etc/ssh/sshd_config
regexp: '^#?PasswordAuthentication'
line: 'PasswordAuthentication yes'
validate: sshd -t -f %s
- dest: /etc/ssh/sshd_config
regexp: '^#?UseDNS'
line: 'UseDNS no'
validate: sshd -t -f %s
- dest: /etc/sudoers
regexp: '^{{ username }}'
line: '{{ username }} ALL=(ALL) NOPASSWD: ALL'
validate: visudo -cf %s
- dest: /etc/sudoers
regexp: '^Defaults.*requiretty'
line: 'Defaults !requiretty'
validate: visudo -cf %s
firewall_rules:
- service: http
- service: https
- service: ssh
- port: 5432
protocol: udp
- port: 5672
protocol: udp
- port: 5432
protocol: tcp
- port: 5672
protocol: tcp
firewalld_service: firewalld
================================================
FILE: tools/aws_lab_setup/roles/common/handlers/main.yml
================================================
- name: restart sshd
service:
name: "{{ common_ssh_service_name }}"
state: restarted
- name: Reboot
shell: sleep 2 && reboot
async: 30
poll: 0
- name: Wait for instance
delegate_to: localhost
become: no
wait_for:
port: "{{ ssh_port | default('22') }}"
host: "{{ ansible_host }}"
search_regex: OpenSSH
timeout: 500
delay: 40
- name: restart firewalld
service:
name: "{{ firewalld_service }}"
state: restarted
================================================
FILE: tools/aws_lab_setup/roles/common/tasks/RedHat.yml
================================================
- name: RHEL | Set version specific variables
include_vars: "{{ ansible_os_family }}-{{ ansible_distribution_major_version }}.yml"
tags:
- always
- name: RHEL | Install {{ firewall_name }} and libselinux-python
yum:
name:
- libselinux-python
- "{{ firewall_name }}"
state: present
tags:
- common
- name: RHEL | Start firewalld
service:
name: "{{ firewalld_service }}"
state: started
- name: RHEL | Enable specific firewall services
firewalld:
immediate: yes
permanent: yes
port: "{{ item.port ~ '/' ~ item.protocol if (item.port is defined and item.protocol is defined) else omit }}"
service: "{{ item.service | default(omit) }}"
state: enabled
with_items: "{{ firewall_rules }}"
notify: restart firewalld
================================================
FILE: tools/aws_lab_setup/roles/common/tasks/Ubuntu.yml
================================================
- name: UBUNTU | Set version specific variables
include_vars: "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
tags:
- always
================================================
FILE: tools/aws_lab_setup/roles/common/tasks/main.yml
================================================
- name: Include Red Hat tasks
include: "{{ ansible_os_family }}.yml"
static: no
when: ansible_os_family == 'RedHat'
- name: Include Ubuntu tasks
include: "{{ ansible_distribution }}.yml"
static: no
when: ansible_distribution == 'Ubuntu'
- name: Configure sshd and sudoers
lineinfile:
dest: "{{ item.dest }}"
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
state: "{{ item.state | default('present') }}"
validate: "{{ item.validate | default(omit) }}"
backup: no
with_items: "{{ common_node_config_options }}"
notify: restart sshd
tags:
- ssh
- sudo
- common
- meta: flush_handlers
tags:
- common
================================================
FILE: tools/aws_lab_setup/roles/common/vars/RedHat-6.yml
================================================
firewall_name: iptables
common_ssh_service_name: sshd
================================================
FILE: tools/aws_lab_setup/roles/common/vars/RedHat-7.yml
================================================
firewall_name: firewalld
common_ssh_service_name: sshd
================================================
FILE: tools/aws_lab_setup/roles/common/vars/Ubuntu-14.yml
================================================
common_ssh_service_name: ssh
================================================
FILE: tools/aws_lab_setup/roles/common/vars/Ubuntu-16.yml
================================================
common_ssh_service_name: ssh
================================================
FILE: tools/aws_lab_setup/roles/control_node/defaults/main.yml
================================================
control_node_inventory_path: ~{{ username }}/lightbulb/lessons/lab_inventory/{{ username }}-instances.txt
control_node_install_ansible: yes
================================================
FILE: tools/aws_lab_setup/roles/control_node/tasks/main.yml
================================================
- name: Install EPEL
yum:
name: "https://dl.fedoraproject.org/pub/epel/epel-release-latest-{{ ansible_distribution_major_version }}.noarch.rpm"
state: present
tags:
- control_node
- control
- name: Install base packages
yum:
name:
- vim
- git
- wget
- nano
state: latest
enablerepo: epel-testing
tags:
- control_node
- control
- name: Install Ansible
yum:
name:
- sshpass
- ansible
state: latest
enablerepo: epel-testing
tags:
- control_node
- control
when: control_node_install_ansible
- name: Clone lightbulb
git:
accept_hostkey: yes
clone: yes
dest: ~{{ username }}/lightbulb
repo: https://github.com/ansible/lightbulb.git
force: yes
become_user: "{{ username }}"
tags:
- control_node
- control
- name: Remove things that students don't need
file:
state: absent
path: ~{{ username }}/lightbulb/{{ item }}
with_items:
- aws_lab_setup
- resources
- Vagrantfile
- README.md
tags:
- control_node
- control
- name: Install ansible.cfg and vimrc in home directory
template:
src: "{{ item }}"
dest: ~{{ username }}/.{{ (item | splitext)[0] }}
owner: "{{ username }}"
group: "{{ username }}"
with_items:
- ansible.cfg.j2
- vimrc.j2
tags:
- control_node
- control
- vim
- name: Create lab inventory directory
file:
state: directory
path: /home/{{ username }}/lightbulb/lessons/lab_inventory
tags:
- control_node
- control
- name: Put student inventory in proper spot
copy:
src: ./{{ username }}-instances.txt
dest: "{{ control_node_inventory_path }}"
owner: "{{ username }}"
group: "{{ username }}"
when: username in inventory_hostname
tags:
- control_node
- control
================================================
FILE: tools/aws_lab_setup/roles/control_node/templates/ansible.cfg.j2
================================================
[defaults]
connection = smart
timeout = 60
inventory = {{ control_node_inventory_path }}
host_key_checking = False
================================================
FILE: tools/aws_lab_setup/roles/control_node/templates/vimrc.j2
================================================
set autoindent
set tabstop=2
set shiftwidth=2
autocmd FileType yaml setlocal ai ts=2 sw=2 et
autocmd FileType yml setlocal ai ts=2 sw=2 et
================================================
FILE: tools/aws_lab_setup/roles/email/defaults/main.yml
================================================
# sendgrid_user:
# sendgrid_pass:
# sendgrid_api_key:
email: yes
================================================
FILE: tools/aws_lab_setup/roles/email/tasks/main.yml
================================================
- name: Send email to students with inventory attached
delegate_to: localhost
sendgrid:
username: "{{ sendgrid_user | default(omit) }}"
password: "{{ sendgrid_pass | default(omit) }}"
api_key: "{{ sendgrid_api_key | default(omit) }}"
subject: "[Ansible] Important Training Details"
body: |
Attached is the Ansible inventory to be used for training.
Please check your ability to connect to each of the hosts via ssh.
The username is '{{ item.username }}' and the password is '{{ admin_password }}'.
If you have any issues connecting, please reply to this email to let me know.
to_addresses: "{{ item.email }}"
html_body: yes
from_address: "{{ instructor_email }}"
attachments:
- "{{ item.username }}-instances.txt"
with_items: "{{ users }}"
when: email
tags:
- email
================================================
FILE: tools/aws_lab_setup/roles/manage_ec2_instances/defaults/main.yml
================================================
ec2_name_prefix: TRAINING
ec2_region: us-east-1
ec2_exact_count: 1
ec2_wait: yes
ec2_subnet: "172.16.0.0/16"
ec2_subnet2: "172.17.0.0/16"
ec2_lab_node_types:
- name: ansible
type: rhel7-tower
- name: node1
type: rhel7
- name: node2
type: rhel7
- name: node3
type: rhel7
# - name: haproxy
# type: rhel7
================================================
FILE: tools/aws_lab_setup/roles/manage_ec2_instances/tasks/create.yml
================================================
- name: Wait for instances to finish initializing
pause:
seconds: 90
when: instances is changed
tags:
- always
- provision
- name: Add hosts to control_nodes group
add_host:
name: "{{ item.invocation.module_args.instance_tags.Name }}"
ansible_host: "{{ item.tagged_instances[0].public_ip }}"
ansible_user: "{{ ec2_login_names[item.item.1.type] }}"
ansible_port: "{{ ssh_port | default('22') }}"
groups: lab_hosts,control_nodes
with_items: "{{ instances.results }}"
when: "'ansible' in item.invocation.module_args.instance_tags.Name"
changed_when: no
tags:
- always
- provision
- name: Add hosts to groups
add_host:
name: "{{ item.invocation.module_args.instance_tags.Name }}"
ansible_host: "{{ item.tagged_instances[0].public_ip }}"
ansible_user: "{{ ec2_login_names[item.item.1.type] }}"
ansible_port: "{{ ssh_port | default('22') }}"
groups: lab_hosts,managed_nodes
with_items: "{{ instances.results }}"
when: "'ansible' not in item.invocation.module_args.instance_tags.Name"
changed_when: no
tags:
- always
- provision
- name: Wait for Ansible connection to all hosts
wait_for_connection:
delay: 0
timeout: 400
when: instances is changed
delegate_to: "{{ item }}"
loop: "{{ groups.lab_hosts }}"
- name: Set local username to create on instances
set_fact:
username: "{{ item | regex_replace('.*-(\\w*)-\\w*$','\\1') }}"
with_items: "{{ groups.lab_hosts }}"
delegate_to: "{{ item }}"
delegate_facts: yes
tags:
- always
- provision
- name: Generate student inventories
template:
src: instances.txt.j2
dest: ./{{ item.username }}-instances.txt
with_items: "{{ users }}"
tags:
- inventory
- users
- user_accounts
- name: Generate instructor inventory
template:
src: instructor_inventory.j2
dest: ./instructor_inventory.txt
tags:
- inventory
- users
- user_accounts
================================================
FILE: tools/aws_lab_setup/roles/manage_ec2_instances/tasks/main.yml
================================================
- include_tasks: teardown.yml
when: teardown
- include_tasks: provision.yml
when: not teardown
================================================
FILE: tools/aws_lab_setup/roles/manage_ec2_instances/tasks/provision.yml
================================================
- block:
- name: Set EC2 security group type
set_fact:
ec2_security_group: insecure_all
- name: Create EC2 security group
ec2_group:
name: "{{ ec2_security_group }}"
description: all ports open
region: "{{ ec2_region }}"
vpc_id: "{{ ec2_vpc_id }}"
rules:
- proto: all
cidr_ip: 0.0.0.0/0
rules_egress:
- proto: all
cidr_ip: 0.0.0.0/0
when: ec2_exact_count >= 1
tags:
- always
- provision
when: ec2_vpc_id is defined
- block:
- name: Create AWS VPC {{ ec2_name_prefix }}-vpc
ec2_vpc_net:
name: "{{ ec2_name_prefix }}-vpc"
cidr_block: "{{ec2_subnet}}"
region: "{{ ec2_region }}"
tags:
Username: "{{ lightbulb_user }}"
Info: "Username that provisioned this-> {{ lightbulb_user }}"
Lightbulb: "This was provisioned through the lightbulb provisioner"
register: create_vpc
until: create_vpc is not failed
retries: 5
tags:
- provision
- name: Create EC2 security group for VPC named {{ ec2_name_prefix }}-vpc
ec2_group:
name: "{{ ec2_name_prefix }}-insecure_all"
description: all ports open
region: "{{ ec2_region }}"
vpc_id: "{{create_vpc.vpc.id}}"
tags:
Username: "{{ lightbulb_user }}"
Info: "Username that provisioned this-> {{ lightbulb_user }}"
Lightbulb: "This was provisioned through the lightbulb provisioner"
rules:
- proto: 47
to_port: -1
from_port: -1
cidr_ip: 0.0.0.0/0
- proto: tcp
to_port: 443
from_port: 443
cidr_ip: 0.0.0.0/0
- proto: icmp
to_port: -1
from_port: -1
cidr_ip: 0.0.0.0/0
- proto: tcp
to_port: 80
from_port: 80
cidr_ip: 0.0.0.0/0
- proto: tcp
to_port: 22
from_port: 22
cidr_ip: 0.0.0.0/0
rules_egress:
- proto: all
cidr_ip: 0.0.0.0/0
when: ec2_exact_count >= 1
tags:
- provision
- name: Create subnet for {{ ec2_name_prefix }}-vpc
ec2_vpc_subnet:
region: "{{ ec2_region }}"
az: "{{ ec2_az }}"
vpc_id: "{{ create_vpc.vpc.id }}"
cidr: "{{ ec2_subnet }}"
wait_timeout: 600
resource_tags:
Name: "{{ ec2_name_prefix }}-subnet"
Username: "{{ lightbulb_user }}"
Info: "Username that provisioned this-> {{ lightbulb_user }}"
Lightbulb: "This was provisioned through the lightbulb provisioner"
register: create_subnet
until: create_subnet is not failed
retries: 15
tags:
- provision
- name: VPC internet gateway is present for {{ create_vpc.vpc.id }}
ec2_vpc_igw:
region: "{{ ec2_region }}"
vpc_id: "{{ create_vpc.vpc.id }}"
tags:
Username: "{{ lightbulb_user }}"
Info: "Username that provisioned this-> {{ lightbulb_user }}"
lightbulb: "This was provisioned through the lightbulb provisioner"
register: igw
tags:
- provision
- name: VPC public subnet route table is present for {{ create_vpc.vpc.id }}
ec2_vpc_route_table:
region: "{{ ec2_region }}"
vpc_id: "{{ create_vpc.vpc.id }}"
subnets:
- "{{ create_subnet.subnet.id }}"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ igw.gateway_id }}"
tags:
Username: "{{ lightbulb_user }}"
Info: "Username that provisioned this-> {{ lightbulb_user }}"
Lightbulb: "This was provisioned through the lightbulb provisioner"
register: routetable
until: routetable is not failed
retries: 5
tags:
- provision
- name: Set variables for instance creation dynamically since VPC was not supplied by user
set_fact:
ec2_vpc_id: "{{ create_vpc.vpc.id }}"
ec2_security_group: "{{ ec2_name_prefix }}-insecure_all"
ec2_vpc_subnet_id: "{{ create_subnet.subnet.id }}"
tags:
- provision
when: ec2_vpc_id is undefined
- name: Create EC2 instances
ec2:
assign_public_ip: yes
key_name: "{{ ec2_key_name }}"
group: "{{ ec2_security_group }}"
instance_type: "{{ ec2_instance_types[item.1.type].size }}"
image: "{{ ec2_instance_types[item.1.type].ami_id }}"
region: "{{ ec2_region }}"
exact_count: "{{ ec2_exact_count }}"
count_tag:
Name: "{{ ec2_name_prefix }}-{{ item.0.username }}-{{ item.1.name }}"
instance_tags:
Name: "{{ ec2_name_prefix }}-{{ item.0.username }}-{{ item.1.name }}"
Workshop: "{{ec2_name_prefix}}"
Username: "{{ lightbulb_user }}"
Info: "Username that provisioned this-> {{ lightbulb_user }}"
Linklight: "This was provisioned through the lightbulb provisioner"
wait: "{{ ec2_wait }}"
vpc_subnet_id: "{{ ec2_vpc_subnet_id | default(omit) }}"
volumes:
- device_name: /dev/sda1
volume_type: gp2
volume_size: "{{ ec2_instance_types[item.1.type].disk_space }}"
delete_on_termination: true
with_nested:
- "{{ users }}"
- "{{ ec2_lab_node_types }}"
register: instances
tags:
- always
- provision
- name: Include tasks only needed when creating instances
include_tasks: create.yml
when: ec2_exact_count >= 1
tags:
- provision
================================================
FILE: tools/aws_lab_setup/roles/manage_ec2_instances/tasks/teardown.yml
================================================
- name: grab facts for workshop
ec2_instance_facts:
region: "{{ ec2_region }}"
filters:
instance-state-name: running
"tag:Workshop": "{{ec2_name_prefix}}"
register: all_workshop_nodes
- name: Destroy EC2 instances
ec2:
region: "{{ ec2_region }}"
state: absent
instance_ids: "{{ all_workshop_nodes.instances | map(attribute='instance_id') | list }}"
register: result
when: all_workshop_nodes.instances
- name: Get the VPC ID for {{ ec2_name_prefix }}
ec2_vpc_net_facts:
filters:
"tag:Name": "{{ ec2_name_prefix }}-vpc"
region: "{{ ec2_region }}"
register: vpc_net_facts
- name: set variables for instance creation dynamically since VPC was not supplied by user
set_fact:
ec2_vpc_id: "{{vpc_net_facts.vpcs[0].id}}"
ec2_security_group: "{{ ec2_name_prefix }}-insecure_all"
when: vpc_net_facts.vpcs|length > 0 and ec2_security_group is undefined
- name: Deleted EC2 security group for VPC vpc-{{ ec2_name_prefix }}
ec2_group:
name: "{{ec2_security_group}}"
region: "{{ ec2_region }}"
vpc_id: "{{ec2_vpc_id}}"
state: absent
register: delete_sg
until: delete_sg is not failed
retries: 50
when: vpc_net_facts.vpcs|length > 0
- name: Delete subnet for {{ ec2_name_prefix }}-vpc
ec2_vpc_subnet:
region: "{{ ec2_region }}"
az: "{{ec2_az}}"
vpc_id: "{{ec2_vpc_id}}"
cidr: "{{ec2_subnet}}"
state: absent
when: vpc_net_facts.vpcs|length > 0
- name: Ensure vpc internet gateway is deleted for vpc-{{ ec2_name_prefix }}
ec2_vpc_igw:
region: "{{ ec2_region }}"
vpc_id: "{{ec2_vpc_id}}"
state: absent
when: vpc_net_facts.vpcs|length > 0
- name: Grab route information for {{ ec2_name_prefix }} on {{ ec2_region }}
ec2_vpc_route_table_facts:
region: "{{ ec2_region }}"
filters:
vpc_id: "{{ec2_vpc_id}}"
register: route_table_facts
when: vpc_net_facts.vpcs|length > 0
- name: VPC public subnet route table is deleted
ec2_vpc_route_table:
region: "{{ ec2_region }}"
vpc_id: "{{ec2_vpc_id}}"
route_table_id: "{{item.id}}"
lookup: id
state: absent
with_items: "{{route_table_facts.route_tables}}"
when: vpc_net_facts.vpcs|length > 0 and item.associations == []
- name: Delete AWS VPC {{ ec2_name_prefix }}
ec2_vpc_net:
name: "{{ ec2_name_prefix }}-vpc"
cidr_block: "{{ ec2_subnet }}"
region: "{{ ec2_region }}"
state: absent
================================================
FILE: tools/aws_lab_setup/roles/manage_ec2_instances/templates/instances.txt.j2
================================================
[all:vars]
ansible_user={{ item.username }}
ansible_ssh_pass={{ admin_password }}
{% if ssh_port is defined %}
ansible_port={{ ssh_port }}
{% endif %}
[web]
{% for grouper, list in instances.results|groupby('item') if grouper[0] == item %}
{% for vm in list %}
{% if 'ansible' not in vm.invocation.module_args.instance_tags.Name and 'haproxy' not in vm.invocation.module_args.instance_tags.Name%}
{{ vm.invocation.module_args.instance_tags.Name | regex_replace('.*-(node)(\\d)', '\\1-\\2') }} ansible_host={{ vm.tagged_instances[0].public_ip }}
{% endif %}
{% endfor %}
{% endfor %}
[control]
{% for grouper, list in instances.results|groupby('item') if grouper[0] == item %}
{% for vm in list %}
{% if 'ansible' in vm.invocation.module_args.instance_tags.Name %}
{{ vm.invocation.module_args.instance_tags.Name | regex_replace('.*-([\\w]*)', '\\1') }} ansible_host={{ vm.tagged_instances[0].public_ip }}
{% endif %}
{% endfor %}
{% endfor %}
{#
[haproxy]
{% for grouper, list in instances.results|groupby('item') if grouper[0] == item %}
{% for vm in list %}
{% if 'haproxy' in vm.invocation.module_args.instance_tags.Name %}
{{ vm.invocation.module_args.instance_tags.Name | regex_replace('.*-([\\w]*)', '\\1') }} ansible_host={{ vm.tagged_instances[0].public_ip }}
{% endif %}
{% endfor %}
{% endfor %}
#}
================================================
FILE: tools/aws_lab_setup/roles/manage_ec2_instances/vars/main.yml
================================================
# --- Lookup tables --- #
# Region specific AMIs, subnets, and VPCs
ec2_regions:
us-east-1:
vpc: vpc-06c9a463
subnet_id: subnet-a2169ed5
key_name: "{{ ec2_key_name | default('training') }}"
amis:
centos7: ami-61bbf104
centos7-tower: ami-877b9e91
rhel7: ami-85241def
rhel7-tower: ami-85241def
ubuntu14: ami-feed08e8
ubuntu16: ami-9dcfdb8a
us-west-1:
vpc: vpc-ce1f9aab
subnet_id: subnet-25b9157c
key_name: "{{ ec2_key_name | default('training') }}"
amis:
centos7: ami-af4333cf
centos7-tower: ami-c04a00a0
rhel7: ami-f88fc498
rhel7-tower: ami-f88fc498
ubuntu14: ami-6ebcec0e
ubuntu16: ami-b05203d0
eu-west-1:
vpc: vpc-48141c2a
subnet_id: subnet-70f9f012
key_name: "{{ ec2_key_name | default('training') }}"
amis:
centos7: ami-af4333cf
centos7-tower: ami-c04a00a0
rhel7: ami-2b9d6c52
rhel7-tower: ami-2b9d6c52
ubuntu14: ami-6ebcec0e
ubuntu16: ami-b05203d0
ap-southeast-1:
vpc: vpc-xxxxxxxx
subnet_id: subnet-xxxxxxxx
key_name: "{{ ec2_key_name | default('training') }}"
amis:
centos7: ami-7d2eab1e
centos7-tower: ami-c7ed91a4
rhel7: ami-dccc04bf
rhel7-tower: ami-dccc04bf
ubuntu14: ami-adb51dce
ubuntu16: ami-87b917e4
ap-southeast-2:
vpc: vpc-xxxxxxxxa
subnet_id: subnet-xxxxxxxx
key_name: "{{ ec2_key_name | default('training') }}"
amis:
centos7: ami-34171d57
centos7-tower: ami-01b45763
rhel7: ami-286e4f4b
rhel7-tower: ami-286e4f4b
ubuntu14: ami-940e0bf7
ubuntu16: ami-e6b58e85
ap-south-1:
vpc: vpc-xxxxxxxx
subnet_id: subnet-xxxxxxxx
key_name: "{{ ec2_key_name | default('training') }}"
amis:
centos7: ami-3c0e7353
centos7-tower: ami-5b145434
rhel7: ami-cdbdd7a2
rhel7-tower: ami-cdbdd7a2
ubuntu14: ami-533e4e3c
ubuntu16: ami-dd3442b2
ap-northeast-1:
vpc: vpc-xxxxxxxx
subnet_id: subnet-xxxxxxxx
key_name: "{{ ec2_key_name | default('training') }}"
amis:
centos7: ami-29d1e34e
centos7-tower: ami-0d05d46b
rhel7: ami-a05854ce
rhel7-tower: ami-a05854ce
ubuntu14: ami-3995e55e
ubuntu16: ami-18afc47f
cn-north-1:
vpc: vpc-xxxxxxxx
subnet_id: subnet-xxxxxxxx
key_name: "{{ ec2_key_name | default('training') }}"
amis:
centos7: ami-3d805750
centos7-tower: ami-3d805750
rhel7: ami-52d1183f
rhel7-tower: ami-52d1183f
ubuntu14: ami-0220b23b
ubuntu16: ami-a6fc21cb
# Instance types used for lab configurations
ec2_instance_types:
centos7:
ami_id: "{{ ec2_regions[ec2_region].amis.centos7 }}"
size: t2.micro
os_type: linux
disk_space: 10
centos7-tower:
ami_id: "{{ ec2_regions[ec2_region].amis['centos7-tower'] }}"
size: t2.medium
os_type: linux
disk_space: 20
# Look for owner 309956199498 to find official Red Hat AMIs
rhel7:
ami_id: "{{ ec2_regions[ec2_region].amis.rhel7 }}"
size: t2.micro
os_type: linux
disk_space: 10
rhel7-tower:
ami_id: "{{ ec2_regions[ec2_region].amis['rhel7-tower'] }}"
size: t2.medium
os_type: linux
disk_space: 20
# Look at https://cloud-images.ubuntu.com/locator/ec2/ for Ubuntu AMIs
ubuntu14:
ami_id: "{{ ec2_regions[ec2_region].amis.ubuntu14 }}"
size: t2.micro
os_type: linux
disk_space: 10
ubuntu16:
ami_id: "{{ ec2_regions[ec2_region].amis.ubuntu16 }}"
size: t2.micro
os_type: linux
disk_space: 10
# Login names used for SSH connections. These are baked in to the AMIs.
ec2_login_names:
rhel7-tower: ec2-user
rhel7: ec2-user
centos7-tower: centos
centos7: centos
ubuntu14: ubuntu
ubuntu16: ubuntu
# Backwards compatibility
types: "{{ ec2_lab_node_types }}"
aws_key_name: "{{ ec2_key_name }}"
================================================
FILE: tools/aws_lab_setup/roles/user_accounts/defaults/main.yml
================================================
admin_password: LearnAnsible{{ lookup('pipe', 'date +%m%y')}}
admin_password_hash: "{{ admin_password | password_hash(salt='sRvXWmR5BBwqRlih') }}"
================================================
FILE: tools/aws_lab_setup/roles/user_accounts/tasks/main.yml
================================================
- name: Create User Group
group:
name: "{{ username }}"
state: present
tags:
- user_accounts
- users
- name: Create User Account
user:
createhome: yes
group: "{{ username }}"
name: "{{ username }}"
shell: /bin/bash
state: present
password: "{{ admin_password_hash }}"
tags:
- user_accounts
- users
================================================
FILE: tools/aws_lab_setup/sample-users.yml
================================================
users:
- name: Luke Cage
username: lcage
email: lcage@marvel.com
================================================
FILE: tools/aws_lab_setup/teardown_lab.yml
================================================
- name: Destroy lab instances in AWS
hosts: localhost
connection: local
become: no
gather_facts: yes
vars:
ec2_exact_count: 0
teardown: true
ec2_wait: no
roles:
- manage_ec2_instances
post_tasks:
- name: Remove inventory files
file:
name: "{{ playbook_dir }}/*.txt"
state: absent
================================================
FILE: tools/inventory_import/README.md
================================================
# Static Inventory Importer
This playbook helps students import their local, static inventory into Ansible Tower.
## Usage
This playbook uses the tower-cli tool and its associated Ansible modules to create an inventory, groups, and then hosts with a special type of loop: `with_inventory_hostnames`.
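As a minimal illustration (not taken from the playbook itself), `with_inventory_hostnames` iterates over every hostname that matches the given inventory patterns, exposing each one as `{{ item }}`:

```yaml
# Sketch only: runs once per host matching the 'web' and 'control'
# patterns in the current inventory; {{ item }} is the hostname.
- name: Show each matched hostname
  debug:
    msg: "Tower would receive host {{ item }}"
  with_inventory_hostnames:
    - web
    - control
```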
### Pre-requisites
In order for this play to work, you must have already completed the following:
* The Ansible Tower install exercise
* Logged in using the password you set for the admin user
* Applied your license key
### Setup
Assuming a standard setup of the Lightbulb environment -- meaning you've followed the guides here or an instructor's directions -- the only item you need to adjust in `inventory_import.yml` is the value for `tower_host_password` on line 5. This is the same value that you set as your admin account's password.
### Running the playbook
Once you've modified the `tower_host_password` value, you can run the playbook normally as your student user:
```bash
[student1@~]$ ansible-playbook lightbulb/tools/inventory_import/inventory_import.yml
```
Once that's done, you can check under the Inventory tab in Ansible Tower and you should now have 2 inventory groups:

## Next steps
Tower-cli does not currently support creating hosts under a group, so they'll need to be moved individually to the appropriate group.
Select the 'Ansible Lightbulb Lab' inventory, and begin moving the hosts to their respective groups by clicking the 'copy or move icon':

Choose the 'web' group for nodes 1-3, and the 'control' group for the control node. The default behavior is to copy the hosts to groups, but you can choose move if desired.
Once complete, you should be able to look at each group and see their associated hosts. Here's the web group as an example:

================================================
FILE: tools/inventory_import/inventory_import.yml
================================================
---
- name: Create an inventory, group, and hosts within Tower for the 'web' group nodes[1:3]
hosts: localhost
vars:
tower_host_password: #Enter the password you set up for 'admin_password' when installing Tower
pip_packages:
- python-pip
- python-devel
tower_host_ip: 127.0.0.1 #Since Tower serves on localhost by default, this is sufficient for Lightbulb
tower_host_user: admin #Admin is the default user for Lightbulb projects
organization: "Default" #Default org setup by Tower Install
inventory_name: "Ansible Lightbulb Lab" #Follows inventory_name in docs for Lightbulb
group_name:
- "web"
- "control"
tasks:
- name: Install pip and ansible-tower-cli
block:
- name: Install pip packages
yum:
name: "{{ item }}"
state: present
with_items: "{{ pip_packages }}"
- name: Install tower-cli
pip:
name: ansible-tower-cli
become: yes
- name: Create the Ansible Lightbulb inventory
tower_inventory:
name: "{{ inventory_name }}"
description: "Generic Inventory"
organization: "{{ organization }}"
tower_host: "{{ tower_host_ip }}"
tower_username: "{{ tower_host_user }}"
tower_password: "{{ tower_host_password }}"
state: present
- name: Add tower group
tower_group:
name: "{{ item }}"
description: "Generic group"
inventory: "{{ inventory_name }}"
tower_host: "{{ tower_host_ip }}"
tower_username: "{{ tower_host_user }}"
tower_password: "{{ tower_host_password }}"
state: present
with_items: "{{ group_name }}"
- name: Add tower hosts
tower_host:
name: "{{item}}"
variables: "ansible_host: {{ hostvars[item]['ansible_host'] }}"
description: "{{ item }} from your static inventory file"
inventory: "{{ inventory_name }}"
tower_host: "{{ tower_host_ip }}"
tower_username: "{{ tower_host_user }}"
tower_password: "{{ tower_host_password }}"
state: present
with_inventory_hostnames:
- web
- control
================================================
FILE: tools/lightbulb-from-tower/README.md
================================================
# Controlling lightbulb from within Tower
While this guide expects that the lightbulb environment is provisioned from the command line, it is actually possible to run the entire environment from within Tower. To do so, the following steps are required.
## Prepare Ansible environment on a Tower
Lightbulb requires several Python libraries to be present. While `boto` and `boto3` ship as part of Tower, `passlib` does not, so it needs to be installed in the [Tower Ansible environment](http://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/virtualenv.html).
```bash
$ ssh tower.example.com
$ sudo -i
# . /var/lib/awx/venv/ansible/bin/activate
(ansible) # pip install passlib
```
## Add credentials and project to Ansible Tower
1. Create [AWS credentials](http://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html#amazon-web-services): In Tower, go to "SETTINGS", "CREDENTIALS", click on "+ADD", enter a name, pick the "CREDENTIAL TYPE" "Amazon Web Services" and fill in the necessary details.
1. Create [machine credentials](http://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html#machine) containing your AWS SSH key.
1. Add a [new project](http://docs.ansible.com/ansible-tower/latest/html/userguide/projects.html#add-a-new-project) to Tower, using the Lightbulb GitHub repository (https://github.com/ansible/lightbulb.git) as the source repository.

## Create Provision Job Template in Ansible Tower
1. Create a new [job template](http://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html) with the Git repo as project.
1. Use an [inventory](http://docs.ansible.com/ansible-tower/latest/html/userguide/inventories.html) only containing localhost.
1. As credentials, use the Amazon Web Services credentials and the corresponding machine credentials created above.
1. The playbook needs to be `tools/aws_lab_setup/provision_lab.yml`.
1. In the "EXTRA VARIABLES" field, provide the variables necessary for the AWS provider, like list of users, `ec2_name_prefix`, and so on.
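As a sketch, the "EXTRA VARIABLES" field might look like the following. Check `tools/aws_lab_setup` for the exact variable names your lab setup expects; the region and the user list format shown here are assumptions.

```yaml
# Hypothetical EXTRA VARIABLES for the provision job template
ec2_name_prefix: TRAINING-LAB   # prefix used for all created instances
ec2_region: us-east-1           # assumed region; pick the one you use
users:                          # assumed list format, one entry per student
  - name: student1
  - name: student2
```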

## Create Teardown Job Template in Ansible Tower
1. Copy the job template you just created.
1. Change the playbook to `tools/aws_lab_setup/teardown_lab.yml`.
That's it: the extra variables copied over from the provision template ensure the teardown targets the same environment.
================================================
FILE: workshops/README.md
================================================
# Ansible Workshops
The workshops are a collection of assignments for learning how to automate with Ansible. They are introduced by the decks and assigned to the students during the presentations.
The workshops are set up as exercises to be done by the participants and are most suitable for smaller audiences.
## Available Workshops
Currently there are two workshops available:
* [Ansible Engine](ansible_engine/README.md)
* [Ansible Tower](ansible_tower/README.md)
## Additional information
For additional information about Ansible, how to get started, and where to proceed after this workshop, have a look at the following sources:
* [Ansible Getting Started](http://docs.ansible.com/ansible/latest/intro_getting_started.html)
================================================
FILE: workshops/ansible_engine/README.md
================================================
# Ansible Workshops
This workshop is part of lightbulb, and is a collection of assignments for learning how to automate with Ansible. The workshops are introduced by the decks and are assigned to the students during the presentations.
The focus of this workshop is on the set up and usage of Ansible Engine.
## Ansible Engine Exercises
* [Ansible installation](ansible_install)
* [Ad-Hoc commands](adhoc_commands)
* [Simple playbook](simple_playbook)
* [Basic playbook](basic_playbook)
* [Roles](roles)
## Additional information
For additional information about Ansible, how to get started, and where to proceed after this workshop, have a look at the following sources:
* [Ansible Getting Started](http://docs.ansible.com/ansible/latest/intro_getting_started.html)
================================================
FILE: workshops/ansible_engine/adhoc_commands/README.md
================================================
# Workshop: Ad-Hoc Commands
## Topics Covered
* Ansible Modules
* Facts
* Inventory and Groups
* `ansible` command-line options: `-i -m -a -b --limit`
## What You Will Learn
* How to test your Ansible configuration and connectivity
* How to get and display Ansible facts
* How to install a package
## The Assignment
Perform the following operations using ad-hoc commands:
1. Test that Ansible is set up correctly to communicate with all hosts in your inventory using the `ping` module.
1. Fetch Ansible facts and display them to STDOUT using the `setup` module.
1. Set up and enable the EPEL package repository on the hosts in the "web" group using the `yum` module.
* CentOS systems use the latest `epel-release` package
* RHEL systems should use [https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm](https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm)
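As a hint, the general shape of an ad-hoc command is `ansible <host-pattern> -m <module> -a <arguments>`. The first steps above might look like the sketch below, assuming your default inventory is already configured (add `-i <inventory-file>` otherwise):

```bash
# Step 1: verify Ansible can reach every host in the inventory
ansible all -m ping

# Step 2: fetch and display facts for all hosts
ansible all -m setup

# Step 3: install the EPEL release package on the "web" group,
# becoming root with -b (on RHEL, point name= at the URL above)
ansible web -b -m yum -a "name=epel-release state=present"
```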
### Extra Credit
1. Fetch and display only the "virtual" subset of facts for each host.
1. Fetch and display the value of fully qualified domain name (FQDN) of each host from their Ansible facts.
1. Display the uptime of all hosts using the `command` module.
1. Ping all hosts **except** the 'control' host using the `--limit` option.
## Reference
* `ansible --help`
* [ping module](http://docs.ansible.com/ansible/ping_module.html)
* [setup module](http://docs.ansible.com/ansible/setup_module.html)
* [yum module](http://docs.ansible.com/ansible/yum_module.html)
* [command module](http://docs.ansible.com/ansible/command_module.html)
* [Introduction To Ad-Hoc Commands](http://docs.ansible.com/ansible/intro_adhoc.html)
---
[Click Here to return to the Ansible Lightbulb - Ansible Engine Workshop](../README.md)
================================================
FILE: workshops/ansible_engine/ansible_install/README.md
================================================
# Workshop: Installing Ansible
## Topics Covered
* Installing Ansible using pip or yum
* Configuration file basics
## What You Will Learn
* How easy it is to install and configure Ansible for yourself.
## The Assignment
1. Use pip to install the `ansible` package and its dependencies to your control machine.
1. Display the Ansible version and man page to STDOUT.
1. In the `~/.ansible.cfg` file (create it if it doesn't already exist) do the following:
* Create a new directory `~/.ansible/retry-files` and set `retry_files_save_path` to it.
* Set the Ansible system `forks` to 10.
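After creating the directory (for example with `mkdir -p ~/.ansible/retry-files`), the resulting configuration file might look like this minimal sketch:

```ini
# ~/.ansible.cfg -- minimal sketch for this assignment
[defaults]
retry_files_save_path = ~/.ansible/retry-files
forks = 10
```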
## Reference
* [Ansible Installation](http://docs.ansible.com/ansible/intro_installation.html)
* [Ansible Configuration File](http://docs.ansible.com/ansible/intro_configuration.html)
---
[Click Here to return to the Ansible Lightbulb - Ansible Engine Workshop](../README.md)
================================================
FILE: workshops/ansible_engine/basic_playbook/README.md
================================================
# Workshop: Basic Playbook
## Topics Covered
* Using `ansible-playbook`
* YAML syntax basics
* Basic Ansible playbook structure
* Tasks and modules
* Handlers
* Variables
* Loops
## What You Will Learn
* How to use `ansible-playbook`
* The basics of YAML syntax and Ansible playbook structure
* How to deploy and configure an application onto a group of hosts
## Before You Begin
If you're not familiar with the structure and authoring of YAML files, take a moment to read through the Ansible [YAML Syntax](http://docs.ansible.com/ansible/YAMLSyntax.html) guide.
### NOTE
You will need to ensure each host in the "web" group has the EPEL repository set up so yum can find and install the nginx package.
## The Assignment
Create an Ansible playbook that will ensure nginx is present, configured, and running on all hosts in the "web" group:
1. Has variables for `nginx_test_message` and `nginx_webserver_port`.
1. Ensures that the following yum packages are present on each web host:
* nginx
* python-pip
* python-devel
* gcc
1. Ensures that the uwsgi pip package is present on each host.
1. Generates a host-specific home page with the value of `nginx_test_message` for each host using the provided `index.html.j2` template.
1. Generates a configuration with the value of `nginx_webserver_port` for each host using the provided `nginx.conf.j2` template.
1. Ensures that nginx is running on each host.
1. Restarts nginx if the home page or configuration file is altered.
While developing the playbook, use the `--syntax-check` option to check your work and debug problems. Run your playbook in verbose mode using the `-v` switch to get more information on what Ansible is doing. Try `-vv` and `-vvv` for added verbosity. Also consider running with `--check` to do a dry run as you develop.
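The requirements above might be sketched as a playbook skeleton like the following. This is only one possible shape: the file name, template paths, and variable values are assumptions, and you should adapt them to your own layout.

```yaml
---
# site.yml -- a possible skeleton, not a finished solution
- name: Install and configure nginx
  hosts: web
  become: yes
  vars:
    nginx_test_message: Hello from nginx   # assumed value
    nginx_webserver_port: 80               # assumed value
  tasks:
    - name: Ensure required packages are present
      yum:
        name: "{{ item }}"
        state: present
      with_items:
        - nginx
        - python-pip
        - python-devel
        - gcc
    - name: Deploy the nginx configuration
      template:
        src: templates/nginx.conf.j2       # assumed path
        dest: /etc/nginx/nginx.conf
      notify: restart nginx
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
```

The handler only fires when a task that notifies it reports a change, which is what makes the restart-on-alteration requirement work.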
### Extra Credit
1. Add a smoke test to your playbook using the `uri` module that tests that nginx is serving the sample home page.
1. Create a separate playbook that stops and removes nginx along with its configuration file and home page.
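For the smoke test, a task along these lines is one possibility (running on each web host against itself; the URL construction is an assumption):

```yaml
# Sketch of a smoke-test task for the extra credit
- name: Verify nginx serves the home page
  uri:
    url: "http://localhost:{{ nginx_webserver_port }}/"
    return_content: yes
    status_code: 200
```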
## Resources
* [YAML Syntax](http://docs.ansible.com/ansible/YAMLSyntax.html)
* [Intro to Ansible Playbooks](http://docs.ansible.com/ansible/playbooks_intro.html)
* [Handlers](http://docs.ansible.com/ansible/playbooks_intro.html#handlers-running-operations-on-change)
* [Variables](http://docs.ansible.com/ansible/playbooks_variables.html)
* [Loops](http://docs.ansible.com/ansible/playbooks_loops.html)
* [yum module](http://docs.ansible.com/ansible/yum_module.html)
* [pip module](http://docs.ansible.com/ansible/pip_module.html)
* [template module](http://docs.ansible.com/ansible/template_module.html)
* [service module](http://docs.ansible.com/ansible/service_module.html)
* [uri module](http://docs.ansible.com/ansible/uri_module.html)
* [file module](http://docs.ansible.com/ansible/file_module.html)
---
[Click Here to return to the Ansible Lightbulb - Ansible Engine Workshop](../README.md)
================================================
FILE: workshops/ansible_engine/basic_playbook/resources/index.html.j2
================================================
Ansible: Automation for Everyone
{{ nginx_test_message }}
================================================
FILE: workshops/ansible_engine/basic_playbook/resources/nginx.conf.j2
================================================
# Based on nginx version: nginx/1.10.1
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 115;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen {{ nginx_webserver_port }} default_server;
listen [::]:{{ nginx_webserver_port }} default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2 default_server;
# listen [::]:443 ssl http2 default_server;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
================================================
FILE: workshops/ansible_engine/roles/README.md
================================================
# Workshop: Roles
## Topics Covered
* Roles
## What You Will Learn
* How Ansible roles are developed and structured
* How to use a role in a playbook
## The Assignment
Your assignment is simple: refactor the Ansible playbook you've been developing into a role called "nginx-simple".
This assignment should result in a drop-in replacement that is portable and more modular. It does not add any new tasks or functionality.
1. Initialize your role with `ansible-galaxy init` in a new subdirectory `roles/`.
1. Refactor your existing basic playbook and associated resources into your role.
1. Create a new playbook that uses the role still targeting the "web" group.
1. Remove any empty files and directories from your role.
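The refactor might start and end like this sketch: initialize the role skeleton, then write a small playbook that applies it (the playbook file name is an assumption; the role name comes from the assignment).

```bash
# Step 1: create the role skeleton under roles/
ansible-galaxy init roles/nginx-simple
```

```yaml
---
# site.yml -- a possible playbook that applies the role
- name: Deploy nginx via the nginx-simple role
  hosts: web
  become: yes
  roles:
    - nginx-simple
```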
### Extra Credit
1. Refactor and merge the Nginx remove and uninstall playbook from the Basic Playbook Extra Credit assignments into your "nginx-simple" role. Create a separate playbook to execute that function of the role.
## Resources
* [Ansible Roles](http://docs.ansible.com/ansible/playbooks_roles.html#roles)
* [Create Roles (with ansible-galaxy)](http://docs.ansible.com/ansible/galaxy.html#create-roles)
* [include_role](http://docs.ansible.com/ansible/include_role_module.html)
---
[Click Here to return to the Ansible Lightbulb - Ansible Engine Workshop](../README.md)
================================================
FILE: workshops/ansible_engine/simple_playbook/README.md
================================================
# Workshop: Simple Playbook
## Topics Covered
* Using `ansible-playbook`
* YAML syntax basics
* Basic Ansible playbook structure
* Tasks and modules
## What You Will Learn
* How to use `ansible-playbook`
* The basics of YAML syntax and Ansible playbook structure
## Before You Begin
If you're not familiar with the structure and authoring of YAML files, take a moment to read through the Ansible [YAML Syntax](http://docs.ansible.com/ansible/YAMLSyntax.html) guide.
### NOTE
You will need to ensure each host in the "web" group has the EPEL repository set up so yum can find and install the nginx package.
## The Assignment
Create an Ansible playbook that ensures members of the "web" group have the following state:
1. The nginx package is present, installed using yum.
1. The home page provided in `resources/` is in place.
1. Nginx is started on each host.
While developing the playbook, use the `--syntax-check` option to check your work and debug problems. Run your playbook in verbose mode using the `-v` switch to get more information on what Ansible is doing. Try `-vv` and `-vvv` for added verbosity. Also consider running with `--check` to do a dry run as you develop.
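One possible shape for this playbook is the minimal sketch below. The file name and the copy destination are assumptions; check where nginx serves files from on your hosts.

```yaml
---
# site.yml -- a minimal sketch for this assignment
- name: Simple nginx playbook
  hosts: web
  become: yes
  tasks:
    - name: Ensure nginx is installed
      yum:
        name: nginx
        state: present
    - name: Put the provided home page in place
      copy:
        src: resources/index.html
        dest: /usr/share/nginx/html/index.html   # assumed docroot
    - name: Ensure nginx is started
      service:
        name: nginx
        state: started
```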
## Resources
* [YAML Syntax](http://docs.ansible.com/ansible/YAMLSyntax.html)
* [Intro to Ansible Playbooks](http://docs.ansible.com/ansible/playbooks_intro.html)
* [yum module](http://docs.ansible.com/ansible/yum_module.html)
* [file module](http://docs.ansible.com/ansible/file_module.html)
* [service module](http://docs.ansible.com/ansible/service_module.html)
---
[Click Here to return to the Ansible Lightbulb - Ansible Engine Workshop](../README.md)
================================================
FILE: workshops/ansible_engine/simple_playbook/resources/index.html
================================================
Ansible: Automation for Everyone
================================================
FILE: workshops/ansible_tower/README.md
================================================
# Ansible Workshops
This workshop is part of lightbulb, and is a collection of assignments for learning how to automate with Ansible. The workshops are introduced by the decks and are assigned to the students during the presentations.
The focus of this workshop is on the set up and usage of Ansible Tower.
## Ansible Tower Exercises
* [Tower installation](tower_install)
* [Tower basic setup](tower_basic_setup)
## Additional information
* [Ansible Tower User Guide](http://docs.ansible.com/ansible-tower/latest/html/userguide/index.html)
* [Ansible Tower Administration Guide](http://docs.ansible.com/ansible-tower/latest/html/administration/index.html)
================================================
FILE: workshops/ansible_tower/tower_basic_setup/README.md
================================================
# Workshop: Ansible Tower Basic Setup
## Topics Covered
* Credentials
* Inventory
* Users
* Roles & Permissions
* Projects
* Job Templates
* Running a Job (Playbook)
## What You Will Learn
* Setting up a new instance of Tower with all the parts needed to run an existing Ansible playbook.
## Requirements
* A running instance of Ansible Tower with sufficient permissions to create credentials, inventory sources, projects etc.
## Before You Begin
Before doing this assignment you will need to perform a task you typically won't have to do when setting up Ansible Tower: manually enter your inventory. Commonly, Ansible Tower is set up with one or more dynamic inventory sources, such as AWS EC2, vSphere, or an internal CMDB, as a source of truth. Given the size and static nature of the Lightbulb lab environment, taking the time to set up and configure dynamic inventory is unnecessary.
To make this process a bit easier, we now include a playbook that automatically imports your inventory into Tower to reduce the configuration needed. See the [Inventory Import README](../../tools/inventory_import/README.md) for additional details.
If you choose to run this playbook and its next steps, you can skip items 1 & 3 in the assignment below.
## The Assignment
NOTE: Create all new entities under the "Default" organization that Ansible Tower creates during setup unless noted otherwise.
1. Enter your lab inventory's groups and hosts into Tower. (See "Before You Begin" above.)
1. Create a machine credential called "Ansible Lab Machine" with the username and password for the hosts in your lab.
1. Create an inventory called "Ansible Lightbulb Lab" and add the groups and hosts from your static inventory file.
1. Create a project called "Ansible Lightbulb Examples Project" with a SCM type of Git and URL of [https://github.com/ansible/lightbulb](https://github.com/ansible/lightbulb). You should also enable "Update on Launch".
1. Create a Job Template called "Nginx Role Example" with the machine credential, inventory and project you created in the previous steps. Select "examples/nginx-role/site.yml" as the playbook.
1. Execute the job template. Check that it runs without errors and the web servers are serving the home page.
1. Add an extra variable `nginx_test_message` with a string value like "Hello World", then run the "Nginx Role Example" job template again. Again, check that it executes without errors and that the web servers serve the home page with the new test message.
### Extra Credit
Set up a self-service user simulation with the playbook:
* Create a "Normal" user with the name "someuser"
* Assign the new user "Execute" permissions on the "Nginx Role Example" job template
* Create a user survey on the job template that enables users to update the test message on the home page
---
[Click Here to return to the Ansible Lightbulb - Ansible Tower Workshop](../README.md)
================================================
FILE: workshops/ansible_tower/tower_install/README.md
================================================
# Workshop: Installing Tower
## Topics Covered
* Installing Ansible Tower
* Importing a License File
## What You Will Learn
* The installation process for a standalone instance of Ansible Tower using the bundled installer.
## The Assignment
Install Ansible Tower on your controller machine using the [latest installer](http://releases.ansible.com/ansible-tower/setup/ansible-tower-setup-latest.tar.gz) and upload your license file.
1. Follow the [quick installation instructions](http://docs.ansible.com/ansible-tower/latest/html/quickinstall/index.html) to configure and run the single integrated installation of Ansible Tower on the control machine.
1. Sign into the Ansible Tower UI and upload the license file to enable the instance when prompted. Request a trial license if one hasn't been provided already. (Either trial license type will suffice for these lab assignments.)
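The typical command-line flow for the installer, using the tarball linked above, looks roughly like this sketch (the extracted directory name varies by release, and the password variables shown are the usual ones; verify against the quick installation guide):

```bash
# Download and unpack the latest Tower setup tarball
curl -O http://releases.ansible.com/ansible-tower/setup/ansible-tower-setup-latest.tar.gz
tar xzf ansible-tower-setup-latest.tar.gz
cd ansible-tower-setup-*/

# Edit the "inventory" file to set the required passwords,
# e.g. admin_password, pg_password, rabbitmq_password
sudo ./setup.sh
```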
## Extra Credit
* After importing the license file, access the ping API endpoint using a browser.
## Reference
* [Ansible Tower Software Releases Archive](http://releases.ansible.com/ansible-tower/)
* [Ansible Tower Quick Installation Guide](http://docs.ansible.com/ansible-tower/latest/html/quickinstall/index.html)
* [Import a License](http://docs.ansible.com/ansible-tower/latest/html/userguide/import_license.html)
* [Ansible Tower Ping API Endpoint](http://docs.ansible.com/ansible-tower/3.0.3/html/towerapi/ping.html)
---
[Click Here to return to the Ansible Lightbulb - Ansible Tower Workshop](../README.md)