Repository: reactiveops/pentagon
Branch: master
Commit: 35fc59f910aa
Files: 94
Total size: 300.9 KB
Directory structure:
gitextract_z6vql54m/
├── .circleci/
│ └── config.yml
├── .github/
│ └── stale.yml
├── .gitignore
├── CHANGELOG.md
├── CODEOWNERS
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── DESIGN.md
├── Dockerfile
├── LICENSE
├── MANIFEST.in
├── README.md
├── bin/
│ └── yaml_source
├── docs/
│ ├── _config.yml
│ ├── components.md
│ ├── getting-started.md
│ ├── network.md
│ ├── overview.md
│ └── vpn.md
├── example-component/
│ ├── LICENSE
│ ├── MANIFEST.in
│ ├── README.md
│ ├── pentagon_component/
│ │ ├── __init__.py
│ │ └── files/
│ │ ├── __init__.py
│ │ └── example_template.jinja
│ ├── requirement.txt
│ └── setup.py
├── pentagon/
│ ├── __init__.py
│ ├── cli.py
│ ├── component/
│ │ ├── __init__.py
│ │ ├── aws_vpc/
│ │ │ ├── __init__.py
│ │ │ └── files/
│ │ │ ├── aws_vpc.auto.tfvars.jinja
│ │ │ ├── aws_vpc.tf.jinja
│ │ │ └── aws_vpc_variables.tf
│ │ ├── core/
│ │ │ ├── __init__.py
│ │ │ └── files/
│ │ │ ├── .gitignore
│ │ │ ├── README.md
│ │ │ ├── ansible-requirements.yml
│ │ │ ├── inventory/
│ │ │ │ └── __init__.py
│ │ │ ├── plugins/
│ │ │ │ ├── filter_plugins/
│ │ │ │ │ └── flatten.py
│ │ │ │ └── inventory/
│ │ │ │ ├── base
│ │ │ │ ├── ec2.ini
│ │ │ │ └── ec2.py
│ │ │ └── requirements.txt
│ │ ├── gcp/
│ │ │ ├── __init__.py
│ │ │ ├── cluster.py
│ │ │ └── files/
│ │ │ └── public_cluster/
│ │ │ └── cluster.tf.jinja
│ │ ├── inventory/
│ │ │ ├── __init__.py
│ │ │ └── files/
│ │ │ ├── __init__.py
│ │ │ └── common/
│ │ │ ├── clusters/
│ │ │ │ └── __init__.py
│ │ │ ├── config/
│ │ │ │ ├── local/
│ │ │ │ │ ├── ansible.cfg-default.jinja
│ │ │ │ │ ├── local-config-init.jinja
│ │ │ │ │ ├── ssh_config-default.jinja
│ │ │ │ │ └── vars.yml.jinja
│ │ │ │ └── private/
│ │ │ │ └── .gitignore
│ │ │ ├── kubernetes/
│ │ │ │ └── __init__.py
│ │ │ └── terraform/
│ │ │ ├── .gitignore
│ │ │ ├── backend.tf.jinja
│ │ │ └── provider.tf.jinja
│ │ ├── kops/
│ │ │ ├── __init__.py
│ │ │ └── files/
│ │ │ ├── cluster.yml.jinja
│ │ │ ├── kops.sh
│ │ │ ├── masters.yml.jinja
│ │ │ ├── nodes.yml.jinja
│ │ │ └── secret.sh.jinja
│ │ └── vpn/
│ │ ├── __init__.py
│ │ └── files/
│ │ └── admin-environment/
│ │ ├── destroy.yml
│ │ ├── env.yml.jinja
│ │ └── vpn.yml
│ ├── defaults.py
│ ├── filters.py
│ ├── helpers.py
│ ├── meta.py
│ ├── migration/
│ │ ├── __init__.py
│ │ └── migrations/
│ │ ├── __init__.py
│ │ ├── migration_1_2_0.py
│ │ ├── migration_2_0_0.py
│ │ ├── migration_2_1_0.py
│ │ ├── migration_2_2_0.py
│ │ ├── migration_2_3_1.py
│ │ ├── migration_2_4_1.py
│ │ ├── migration_2_4_3.py
│ │ ├── migration_2_5_0.py
│ │ ├── migration_2_6_0.py
│ │ ├── migration_2_6_2.py
│ │ ├── migration_2_7_1.py
│ │ ├── migration_2_7_3.py
│ │ └── migration_3_1_0.py
│ └── pentagon.py
├── setup.py
└── tests/
├── __init__.py
├── requirements.txt
├── test_args.py
└── test_base.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .circleci/config.yml
================================================
#Copyright 2017 Reactive Ops Inc.
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
version: 2
jobs:
build:
docker:
- image: circleci/python:2
working_directory: ~/pentagon
steps:
- run:
name: Setup PATH to support pip user installs
command: echo 'export PATH=$PATH:/home/circleci/.local/bin' >> $BASH_ENV
- checkout
- run:
name: Test Migration
command: |
# this fails but that is the intent hence "|| true"
last_version=$(pip install pentagon== 2>&1 | grep "Could not find" | awk -F',' '{ print $(NF -1) }' | sed s/[[:blank:]]/''/g) || true
pip install --user pentagon==${last_version}
# nohup to get rid of interactive and thus prompts
nohup pentagon start-project migration-test --aws-access-key=fake --aws-secret-key=fake
cd migration-test-infrastructure
export INFRASTRUCTURE_REPO=$(pwd)
cd inventory/default/clusters/production
nohup pentagon add kops.cluster -f vars.yml -o cluster-config
cd $INFRASTRUCTURE_REPO
# faking git config. The repo must have at least one commit for the migration to work
git add . && git -c user.name='fake' -c user.email='fake@email.org' commit -m 'initial commit'
pip install --user ~/pentagon
pentagon --version
pentagon migrate --yes
- run:
name: Unit Tests
command: |
pip install --user -r ${HOME}/pentagon/tests/requirements.txt
nosetests
- run:
name: Test Start Project
command: |
nohup pentagon start-project circleci-test --aws-access-key=fake --aws-secret-key=fake
release:
docker:
- image: circleci/python:2
environment:
PYPI_USERNAME: ReactiveOps
GITHUB_ORGANIZATION: $CIRCLE_PROJECT_USERNAME
GITHUB_REPOSITORY: $CIRCLE_PROJECT_REPONAME
working_directory: ~/pentagon
steps:
- checkout
- run:
name: init .pypirc
command: |
echo -e "[pypi]" >> ~/.pypirc
echo -e "username = $PYPI_USERNAME" >> ~/.pypirc
echo -e "password = $PYPI_PASSWORD" >> ~/.pypirc
- run:
name: create release
command: |
git fetch --tags
curl -O https://raw.githubusercontent.com/reactiveops/release.sh/v0.0.2/release
/bin/bash release || true
- run:
name: package and upload
command: |
sudo pip install twine
python setup.py sdist bdist_wheel
twine upload dist/*
workflows:
version: 2
build:
jobs:
- build:
filters:
tags:
only: /.*/
branches:
only: /.*/
- release:
requires:
- build
filters:
tags:
only: /.*/
branches:
ignore: /.*/
================================================
FILE: .github/stale.yml
================================================
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 60
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: wontfix
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false
================================================
FILE: .gitignore
================================================
.DS_Store
.terraform
config/private
*.pyc
*.pem
*.pub
pentagon.egg-info
.vscode
venv
dist
build
================================================
FILE: CHANGELOG.md
================================================
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## 3.1.4
### Fixed
- A required GCP start-project param was accidentally removed on a refactor of the CLI. Added it back.
## 3.1.3
### Changed
- Update dependencies
## 3.1.2
### Fixed
- Changed `defaults.node_count` from 3 to 1, so that only 3 total nodes (one per `InstanceGroup`) are created
## 3.1.1
### Fixed
- In certain cases a migration would cause duplicate hooks
- In certain cases, migrations were not run because kops.sh had been deleted
## 3.1.0
### Fixed
- issue where prompt=true was not respecting the default values
- display of option values was munging booleans
### Added
- Migration to enable kops hook that patches runc
- validation of prompted values for click to ensure non-empty strings
## 3.0.2
### Changed
- `TILLER_NAMESPACE` is now set to `tiller` by default
## 3.0.1
### Fixed
- Non-populating values for kubernetes version in gcp deploys
- Bucket not required values for gcp deploys
## 3.0.0
### Added
- Support for GCP / GKE terraform templates on inventory init
### Changed
- Now all pentagon runs will confirm all the values that are set and what the values are set to (one step closer to better transparency)
## 2.7.3
### Fixed
- missing imports for latest migration
## 2.7.2
### Added
- Instructions on how to set up the development environment.
- revised cli help text
- migrations for kops settings that were missed in the last migration
- made `anonymousAuth: false` default for Kops clusters. This currently conflicts with metricserver version > 3.0.0
## 2.7.1
### Fixed
- migration
## 2.7.0 - 2019-1-3
### Updated
- add aws-iam-authenticator to kops spec by default
- Etcd now at version 3 in Kops spec
- default to multiple az instance groups for Kops
- updated generated docs
### Fixed
- kops availability zone calculation
## 2.6.1 - 2018-10-30
### Fixed
- Remove deprecated VPC Terraform module variables.
## 2.6.0 - 2018-10-29
### Updated
- Bumped default VPC Terraform module to version 3.0.0. Removes AWS provider from module in favor of inferred provider.
## 2.5.0 - 2018-10-26
### Fixed
- adding a new inventory now creates a more complete inventory instead of an empty one
- component arguments may now have '-' or '_'
### Updated
- Docs
### Added
- 'project_name' arg to some components and to the `config.yml` that gets written on 'start-project'
## 2.4.3 - 2018-10-16
### Fixed
- bug where cli `-D` arguments were not being passed properly
## 2.4.2 - 2018-10-15
### Updated
- Default Kops settings to improve security and auditing
### Fixed
- Reading from config file
- Templating local path for ssh_config
- Installation requirements
- Worker and Master variable names for kubernetes arguments
### Removed
- Makefiles
## [2.4.1] - 2018-09-21
### Added
- PyPi upload to circleci config
## [2.4.0] - 2018-8-21
### Updated
- replaced PyYAML with oyaml and added the capability to have multi-document yaml files for component declarations
- Kops cluster `authorization` default changed to rbac
- Updated the inventory config to refer to `${INVENTORY}` vs assigning the `{{name}}` statically. `pentagon/component/inventory/files/common/config/local/vars.yml.jinja`
### Fixed
- `kubernetes_version` parameter value wasn't applying to the kops cluster config from `values.yml` file
## [2.3.1] - 2018-5-30
### Fixed
- Version dependencies
## [2.3.0] - 2018-5-30
### Added
- Better behavior with migrations where a patch is made but no structural changes were made
### Updated
- Allowed more values to be optional in the kops templates
- Updated docs
- Bumped terraform-vpc module source version
### Fixed
- Issue where kops clusters were created with the same network cidr
## [2.2.1] - 2018-4-9
### Removed
- `auto-approve` from terraform Makefile
## [2.2.0] - 2018-3-30
### Added
- colorful logging
- bug fixes and better support for GCP infrastructure
- `--gcp-region` as part of the above change
### Updated
- `yaml_source` no longer throws errors when file is empty, just logs a message
- made the component class location method more flexible
- reorganized terraform files and made terraform a first class citizen and part of the `inventory.Inventory` component
- renamed vpc.VPC component to aws_vpc.AWSVpc as part of above change
- reorganized the default `secrets.yml` and removed unnecessary lines
## [2.1.0] - 2018-2-27
### Added
- `--version` flag to output version
- added cluster auto scaling iam policies by default
- added `--cloud` flag and supporting flags to create GCP/GKE infrastructure
### Updated
- Version handling in setup.py
- Updated yaml loader for config file reading to force string behavior
- Inventory component will use `-D name=` as the target directory instead of needing `-o`.
- Inventory -D account replaced with -D name
## [2.0.0] - 2018-2-1
### Added
- `yaml_source` script to replace env-vars.sh
- Environment variables are now checked in ComponentBase class
- Defaults to component
- overwrite to template rendering
- added inventory component
- added vpn component
### Removed
- env-vars.sh script
- untracked roles directory for ansible
### Updated
- makefile to support `yaml_source` change
- added distutils.dir_util to allow overwriting existing directories
- added exit on failure for ComponentBase class
- added default config output file for Pentagon start-project
- updated config file output to sanitize and not include blank values
## [1.2.0] - 2017-11-8
### Added
- Added kops component
### Changed
- Added VPN name to include project name. Allows multiple VPN instances per VPC
- Set default versions to ansible roles
- Updated default kops cluster templates to use new kops component
- Updated Makefile to use Terraform outputs and improve robustness of create and destroy
- Fixed legacy authorization bug in GCP component
### Removed
- Removed the older kops cluster creation
## [1.1.0] - 2017-10-4
### Added
- Added Changelog
- Added `add` method to `pentagon` command line
- Added component base class
- Added GCP and VPC components
- Added Example component
### Changed
- Changed VPC directory creation to utilize component class instead of
- Changed Click library usage to the "setuptools" method
### Removed
- Section about "changelog" vs "CHANGELOG".
## [1.0.0]
### Added
- First open source version of Pentagon
================================================
FILE: CODEOWNERS
================================================
* @ejether @endzyme
================================================
FILE: CODE_OF_CONDUCT.md
================================================
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at [INSERT EMAIL ADDRESS]. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/
================================================
FILE: CONTRIBUTING.md
================================================
# How to contribute
Issues, whether bugs, tasks, or feature requests, are essential for keeping Pentagon (and ReactiveOps in general) great. We believe it should be as easy as possible to contribute changes that
get things working in your environment. There are a few guidelines that we
need contributors to follow so that we can have a chance of keeping on
top of things.
## Setting up your development environment
1. Clone this repo and cd into it
```
git clone git@github.com:reactiveops/pentagon.git
cd pentagon
```
2. Create a virtual environment and source it. You need to source it every time you want to develop pentagon.
```
virtualenv venv
source venv/bin/activate
```
3. Finally, install pentagon into the venv. The `-e` means that it will take any of your file changes into account.
```
pip install -e .
```
4. If you run `which pentagon` it should point at the venv inside the newly created repo.
```
$ which pentagon
.../pentagon/venv/bin/pentagon
```
## Getting Started
* Submit a ticket for your issue, assuming one does not already exist.
* Clearly describe the issue including steps to reproduce when it is a bug.
* Apply the appropriate labels, whether it is bug, feature, or task.
## Making Changes
* Create a feature branch from where you want to base your work.
* This is usually the master branch.
* To quickly create a topic branch based on master: `git checkout -b feature master`. Please avoid working directly on the `master` branch.
* Try to make commits of logical units.
* Make sure you have added the necessary tests for your changes (coming soon).
* Make sure you have added any required documentation changes.
## Making Trivial Changes
### Documentation
For changes of a trivial nature to comments and documentation, it is not
always necessary to create a new issue in GitHub. In these cases, a branch with a pull request is sufficient.
## Submitting Changes
* Push your changes to a topic branch.
* Submit a pull request.
* Update the issue with the `PR-available` label to mark that you have submitted code and are ready for it to be reviewed, and include a link to the pull request in the ticket.
Attribution
===========
Portions of this text are copied from the [Puppet Contributing](https://github.com/puppetlabs/puppet/blob/master/CONTRIBUTING.md) documentation.
================================================
FILE: DESIGN.md
================================================
# Pentagon Design Document:
## Intent
Pentagon is a framework for generating an Infrastructure As Code Repository (IACR). It is intended to provide a flexible and meaningful hierarchical structure for managing cloud infrastructure with a common set of tools. At ReactiveOps we use Pentagon-generated IACRs to manage and maintain our clients' cloud infrastructure. Our practice and experience have driven us to devise a highly flexible, highly repeatable framework that ensures uniformity of process. Pentagon has grown from a series of sensible decisions about how an IACR is “shaped”. It has a strict organization that is intended to enable automation, remain flexible across a wide variety of clouds, networks, and clusters, and provide a thoughtful structure for external resources.
## Key Design Elements
### Pentagon is a framework for components that are generators.
It is loosely modeled after Rails or Django and aims to provide an extensible framework for component modules. These component modules may be native or external but when external modules are installed, the interface is transparent to the user. Pentagon generators produce configuration files that should have sensible defaults provided for most values, but can be overridden by configuration.
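The generator pattern above can be sketched in a few lines. This is an illustrative simplification, not Pentagon's actual API: the class and field names are hypothetical, and `string.Template` stands in for the Jinja templates that real components render. The key idea is that a component carries sensible defaults, and user configuration overrides them before rendering.

```python
# Hypothetical sketch of a generator component: defaults merged with
# user-supplied overrides, then rendered into a configuration file.
from string import Template


class ComponentBase:
    # Sensible defaults; any of these may be overridden by configuration.
    defaults = {"region": "us-east-1", "node_count": 1}

    # Stand-in for a Jinja template shipped with the component.
    template = Template("region = $region\nnode_count = $node_count\n")

    def __init__(self, overrides=None):
        # Overrides win over defaults.
        self.context = dict(self.defaults, **(overrides or {}))

    def render(self):
        return self.template.substitute(self.context)


print(ComponentBase({"node_count": 3}).render())
```

Here only `node_count` is overridden, so the rendered output keeps the default `region` while picking up the user's node count.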
### Pentagon provides a way to keep your IACRs up to date.
As new decisions are made, new features are added, and standards or requirements change, it is important to keep your IACR up to date. As the Pentagon version changes, so should your IACRs. Pentagon provides a migration framework so that updating the configuration and content of your IACR is defined in code. Any structure or code change should involve a new versioned migration. Exceptions may be made where an update would be a breaking change or where large-scale recreation of assets is required.
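The versioned-migration idea can be sketched as follows. The names here are illustrative, not Pentagon's actual migration API: each migration declares the version it brings a repository up to, and every migration newer than the repository's recorded version is applied in order.

```python
# Hypothetical sketch of a versioned migration runner.

def parse_version(v):
    """'2.4.1' -> (2, 4, 1), so versions compare in the right order."""
    return tuple(int(part) for part in v.split("."))


class Migration:
    version = "0.0.0"  # the version this migration brings the repo up to

    def run(self, repo):
        raise NotImplementedError


class Migration_2_0_0(Migration):
    version = "2.0.0"

    def run(self, repo):
        repo["changes"].append("replace env-vars.sh with yaml_source")


class Migration_2_1_0(Migration):
    version = "2.1.0"

    def run(self, repo):
        repo["changes"].append("rename inventory -D account to -D name")


def migrate(repo, migrations):
    """Apply, in version order, every migration newer than the repo's version."""
    current = parse_version(repo["version"])
    for m in sorted(migrations, key=lambda mig: parse_version(mig.version)):
        if parse_version(m.version) > current:
            m.run(repo)
            repo["version"] = m.version
    return repo


repo = {"version": "1.2.0", "changes": []}
migrate(repo, [Migration_2_1_0(), Migration_2_0_0()])
print(repo["version"])  # → 2.1.0
```

Even though the migrations are supplied out of order, sorting by parsed version ensures the 2.0.0 changes land before the 2.1.0 ones, mirroring the `migration_X_Y_Z.py` naming in `pentagon/migration/migrations/`.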
## Scope
### In Scope:
- Any process or component module that templates or creates files and directories for use within the context of the IACR
- Migrations to update standards and defaults in an older IACR to a newer version
- Read only interaction with infrastructure resources
### Out of Scope:
- Deep documentation on how to use the supporting tools (Terraform, Ansible, kops, etc.)
- Automations and scripts to support workflows for infrastructure management practices
- Tooling to support interaction with the infrastructure repository
- Creating, or modifying any infrastructure resources
## Architecture
TBD
================================================
FILE: Dockerfile
================================================
FROM ubuntu:16.04
RUN apt-get update && apt-get install software-properties-common -y
RUN apt-add-repository ppa:ansible/ansible -y && apt-get update
RUN apt-get install -y ansible git python-dev python-pip python-dev libffi-dev libssl-dev wget vim zip openvpn awscli jq
RUN wget https://releases.hashicorp.com/terraform/0.10.0/terraform_0.10.0_linux_amd64.zip && unzip terraform_0.10.0_linux_amd64.zip && mv terraform /usr/local/bin/
RUN wget https://github.com/kubernetes/kops/releases/download/1.6.1/kops-linux-amd64 && \
chmod +x kops-linux-amd64 &&\
mv kops-linux-amd64 /usr/local/bin/kops
RUN mkdir -p /pentagon
COPY . /pentagon/
RUN pip install -U -e ./pentagon
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017, Reactive Ops Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: MANIFEST.in
================================================
recursive-include pentagon/component *
================================================
FILE: README.md
================================================
# Pentagon
# *Pentagon has been deprecated and will no longer be maintained.*
## What is Pentagon?
**Pentagon is a cli tool to generate repeatable, cloud-based [Kubernetes](https://kubernetes.io/) infrastructure.**
It can be used as a “batteries included” default that provides:
- A network with a cluster
- Two HA kops-based Kubernetes clusters
- Multiple segregated development / non-production environments
- VPN-based access control
- A highly available network, built across multiple Availability Zones
## How does it work?
**Pentagon produces a directory.** The directory defines a basic set of configurations for [Ansible](https://www.ansible.com/), [Terraform](https://www.terraform.io/), and [kops](https://github.com/kubernetes/kops). When those tools are run in a specific order, the result is a VPC with a VPN and a Kubernetes cluster in AWS. GKE support is built in but not the default. Pentagon is designed to be customizable while shipping defaults that fit the needs of most web application companies.
## Getting Started
The [Getting Started](docs/getting-started.md) guide has information about installing Pentagon and creating your first project.
Table Of Contents
=================
* [Requirements](docs/getting-started.md#requirements)
* [Installation](docs/getting-started.md#installation)
* [Quick Start Guide](docs/getting-started.md)
* [VPC](docs/getting-started.md#vpc-setup)
* [VPN](docs/getting-started.md#vpn-setup)
* [KOPS](docs/getting-started.md#kops)
* [Advanced Usage](docs/getting-started.md#advanced-project-initialization)
* [Infrastructure Repository Overview](docs/overview.md)
* [Components](docs/components.md)
## AWS Virtual Private Cloud
A VPC configuration is provided with Terraform. Details can be found on the [VPC Setup Page](docs/network.md).
## Virtual Private Network
Configuration is provided for an OpenVPN setup in the VPC. Details can be found on the [VPN Setup Page](docs/vpn.md).
[CLA assistant](https://cla-assistant.io/reactiveops/pentagon)
================================================
FILE: bin/yaml_source
================================================
#!/bin/bash -e
usage="$0 file [unset] -- Where file.yml is a yml file of key value pairs
Sets environment variables where the key is the variable name and the value is its value
If unset is given, it unsets the keys in the file
"
vars_file=$1
get_keys() {
  shyaml keys < "${vars_file}"
}
set_vars() {
  for key in $(get_keys)
  do
    raw_value=$(shyaml get-value "${key}" < "${vars_file}")
    # some values in vars.yml use other variables that need to be dereferenced
    dereferenced_value=$(eval echo "${raw_value}")
    export "${key}=${dereferenced_value}"
  done
}
unset_vars() {
  for key in $(get_keys)
  do
    unset "${key}"
  done
}
if [ -z "${1}" ]
then
  echo "${usage}"
elif [ ! -f "${vars_file}" ]
then
  echo "${vars_file} does not exist."
elif [ ! -s "${vars_file}" ]
then
  echo "${vars_file} is empty"
else
if [[ $2 == 'unset' ]] ; then
unset_vars
else
set_vars
fi
fi
================================================
FILE: docs/_config.yml
================================================
theme: jekyll-theme-dinky
================================================
FILE: docs/components.md
================================================
# Pentagon Components
The functionality of Pentagon can be extended with components. Currently only two commands are accepted: `add` and `get`. Data is passed to the component in `Key=Value` pairs via the `-D` flag, or from a datafile in yml or json format. For some components, environment variables may also be used. See the documentation for the particular component.
Global options for both `get` and `add` component commands:
```
Usage: pentagon [add|get] [OPTIONS] COMPONENT_PATH [ADDITIONAL_ARGS]...
Options:
-D, --data TEXT Individual Key=Value pairs used by the component. There should be no spaces surrounding the `=`
-f, --file TEXT File to read Key=Value pair from (yaml or json are
supported)
-o, --out TEXT Path to output module result, if any
--log-level TEXT Log Level DEBUG,INFO,WARN,ERROR
--help Show this message and exit.
```
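For example, a hypothetical datafile combined with a `-D` override might look like this (file name and keys are illustrative; see each component for its accepted keys):

```
# Hypothetical datafile; keys are the same ones you would pass with -D
cat > vars.yml <<'EOF'
cluster_name: demo-cluster
project: demo-project
EOF
# pentagon add gcp.cluster -f vars.yml -D project=other-project
# would merge the file with the -D pair, the -D value overriding `project`
```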
## Built in components
### gcp.cluster
**This component is deprecated and not maintained. We are working on a new Terraform module to manage GKE clusters. Use this at your own risk**
- add:
- Creates `./<cluster_name>/create_cluster.sh` compiled from the data passed in.
- `bash ./<cluster_name>/create_cluster.sh` will create the cluster as configured.
- Argument keys are lower case, underscore separated version of the [gcloud container cluster create](https://cloud.google.com/sdk/gcloud/reference/beta/container/clusters/create) command.
- If a `-f` file is passed in, data are merged with `-D` values overriding the file values.
- Example:
```
pentagon --log-level=DEBUG add gcp.cluster -D cluster_name="reactiveopsio-cluster" -D project="reactiveopsio" -D network="temp-network" -o ./demo -D node_locations="us-central1-a,us-central1-b" -D zone=us-central1-a
```
- get:
- Creates `./<cluster_name>/create_cluster.sh` by querying the state of an existing cluster and parsing values. Useful when you want to capture the configuration of an existing cluster.
- Creates `./<cluster_name>/node_pools/<node_pool_name>/create_nodepool.sh` for any nodepools that are not named `default-pool`. Set `-D get_default_nodepools=true` to capture the configuration of `default-pool`. This is typically unnecessary, as `create_cluster.sh` will already contain the configuration of the `default-pool`
- `bash ./<cluster_name>/create_cluster.sh` will result in an error indicating the cluster is already present.
- Argument keys are lower case, underscore separated version of the [gcloud container cluster describe](https://cloud.google.com/sdk/gcloud/reference/beta/container/node-pools/describe) command.
- If a `-f` file is passed in, data are merged with `-D` values overriding the file values
- If `cluster` is omitted it will act on all clusters in the project
- Example:
```
pentagon get gcp.cluster -D project="pentagon" -D zone="us-central1-a" -D cluster="pentagon-1" -D get_default_nodepool="true"
```
### gcp.nodepool
**This component is deprecated and not maintained. We are working on a new Terraform module to manage GKE clusters. Use this at your own risk**
- add:
- Creates `./<nodepool_name>/create_nodepool.sh` compiled from the data passed in.
- `bash ./<nodepool_name>/create_nodepool.sh` will create the nodepool as configured
- Argument keys are lower case, underscore separated version of the [gcloud container node-pools create](https://cloud.google.com/sdk/gcloud/reference/beta/container/node-pools/create) command
- If a `-f` file is passed in, data are merged with `-D` values overriding the file values
- Example:
```
pentagon add gcp.nodepool -D name="pentagon-1-nodepool" -D project="pentagon" -D zone="us-central1-a" -D additional_zones="us-central1-b,us-central1-c" -D machine_type="n1-standard-64" --enable-autoscaling
```
- get:
- Creates `./<nodepool_name>/create_nodepool.sh` by querying the state of an existing cluster nodepool and parsing values. Useful when you want to capture the configuration of an existing nodepool.
- `bash ./<nodepool_name>/create_nodepool.sh` will result in an error indicating the nodepool is already present.
- Argument keys are lower case, underscore separated version of the [gcloud container node-pools describe](https://cloud.google.com/sdk/gcloud/reference/beta/container/node-pools/describe) command
- If a `-f` file is passed in, data are merged with `-D` values overriding the file values
- If `name` is omitted it will act on all nodepools in the cluster
- Example:
```
pentagon get gcp.nodepool -D project="pentagon" -D zone="us-central1-a" -D cluster="pentagon-1" -D name="pentagon-1-nodepool"
```
### vpc
- add:
- Creates `./vpc/` directory with Terraform code for the Pentagon default AWS VPC described [here](#network).
- `cd ./vpc; make all` will create the VPC as described by the arguments passed in
- In the normal course of using Pentagon and the infrastructure repository, it is unlikely you'll use this component as it is automatically installed by default.
- Arguments:
- vpc_name
- vpc_cidr_base
- aws_availability_zones
- aws_availability_zone_count
- aws_region
- infrastructure_bucket
- Without the arguments above, the `add` will complete, but the output will be missing values required to create the VPC. You must edit the output files to add those values before it will function properly
- Example:
```
pentagon add vpc -D vpc_name="pentagon-vpc" -D vpc_cidr_base="172.20" -D aws_availability_zones="ap-northeast-1a, ap-northeast-1c" -D aws_availability_zone_count="2" -D aws_region="ap-northeast-1"
```
### kops.cluster
- add:
- Creates yml files in `./<cluster_name>/` compiled from the data passed in.
- `bash ./<cluster_name>/kops.sh` will create the cluster as configured.
- Argument/ ConfigFile keys:
- `additional_policies`: Additional IAM policies to add to masters, nodes, or both
- `vpc_id`: AWS VPC Id of VPC to create cluster in (required)
- `cluster_name`: Name of the cluster to create (required)
- `kops_state_store_bucket`: Name of the s3 bucket where Kops State will be stored (required)
- `cluster_dns`: DNS domain for cluster records (required)
- `master_availability_zones`: List of AWS Availability zones to place masters (required)
- `availability_zones`: List of AWS Availability zones to place nodes (required)
- `kubernetes_version`: Version of Kubernetes Kops will install (required)
- `nat_gateways`: List of AWS ids of the nat-gateways the Private Kops subnets will use as egress. Must be in the same order as the `availability_zones` from above. (required)
- `master_node_type`: AWS instance type the masters should be (required)
- `worker_node_type`: AWS instance type the default node group should be (required)
- `ig_max_size`: Max number of instances in the default node group. (default: 3)
- `ig_min_size`: Min number of instances in the default node group. (default: 3)
- `ssh_key_path`: Path of public key for ssh access to nodes. (required)
- `network_cidr`: VPC cidr for the Kops-created Kubernetes subnets (default: 172.0.0.0/16)
- `network_cidr_base`: First two octets of the network to template subnet cidrs from (default: 172.0)
- `third_octet`: Starting value for the third octet of the subnet cidrs (default: 16)
- `network_mask`: Value for the network mask in subnet cidrs (default: 24)
- `third_octet_increment`: Increment to increase the third octet by for each of the Kubernetes subnets (default: 1) With the defaults above, the cidrs of the first three private subnets will be 172.0.16.0/24, 172.0.17.0/24, 172.0.18.0/24
- `authorization`: Authorization type for cluster. Allowed values are `alwaysAllow` and `rbac` (default: rbac)
- Example Config File
```
availability_zones: [eu-west-1a, eu-west-1b, eu-west-1c]
additional_policies: |
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeTags",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup"
],
"Resource": "*"
}
cluster_dns: cluster1.reactiveops.io
cluster_name: working-1.cluster1.reactiveops.io
ig_max_size: 3
ig_min_size: 3
kops_state_store_bucket: reactiveops.io-infrastructure
kubernetes_version: 1.5.7
master_availability_zones: [eu-west-1a, eu-west-1b, eu-west-1c]
master_node_type: t2.medium
node_type: t2.medium
ssh_key_path: ${INFRASTRUCTURE_REPO}/config/private/working-kube.pub
vpc_id: vpc-4aa3fa2d
network_cidr: 172.0.0.0/16
network_cidr_base: 172.0
third_octet: 16
third_octet_increment: 1
network_mask: 24
nat_gateways:
- nat-0c6ef9261d8ebd788
- nat-0de4ec4c946e3b7ce
- nat-08806276217bae9b5
```
- If a `-f` file is passed in, data are merged with `-D` values overriding the file values.
- Example:
```
pentagon --log-level=DEBUG add kops.cluster -f `pwd`/vars.yml
```
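The subnet templating keys above combine as in this sketch of the arithmetic (not part of Pentagon itself; the stated defaults are shown):

```
# Sketch of the default subnet cidr templating
network_cidr_base="172.0"
third_octet=16
third_octet_increment=1
network_mask=24
for i in 0 1 2; do
  echo "${network_cidr_base}.$((third_octet + i * third_octet_increment)).0/${network_mask}"
done
```

With these values the first three private subnets come out as 172.0.16.0/24, 172.0.17.0/24, and 172.0.18.0/24.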
- get:
- Creates yml files in `./<cluster_name>/` by querying the state of an existing cluster and parsing values. Useful when you want to capture the configuration of an existing cluster.
- Creates `./<cluster_name>/cluster.yml`, `./<cluster_name>/nodes.yml`, `./<cluster_name>/master.yml`, `./<cluster_name>/secret.sh`
- `secret.sh` does not contain the content of the secret, but it will be able to re-create the cluster secret if needed. You will have to transform the key id into a saved public key.
- Arguments:
- `name`: Kops cluster name you are getting (required). Can also be set through an environment variable called "CLUSTER_NAME".
- `kops_state_store_bucket`: s3 bucket name where cluster state is stored (required). Can also be set through an environment variable called "KOPS_STATE_STORE_BUCKET"
- Example:
```
pentagon get kops.cluster -Dname=working-1.cluster.reactiveops.io -Dkops_state_store_bucket=reactiveops.io-infrastructure
```
### inventory
- add:
- Creates account configuration directory. Creates all necessary files in `config`, `clusters` and `resources`. Depending on `type` it may also add a `vpc` component and `vpn` component under `resources`. Creates `clusters` directory but does not create cluster configuration. Use the cluster component for that.
- Arguments:
- `name`: name of account to add to inventory (required)
- `type`: type of account to add to inventory aws or gcp (required).
- `project_name`: name of the project the inventory is being added to. (required)
- If a `-f` file is passed in, data are merged with `-D` values overriding the file values
- Example:
```
pentagon add inventory -Dtype=aws -Dname=prod -Daws_access_key=KEY -Daws_secret_key=SECRETKEY -Daws_default_region=us-east-1
```
## Writing your own components
Component modules must be named `pentagon_<component_name>`. Classes are subclasses of the `pentagon.component.ComponentBase` class and must be named `<Component>` (note the capital first letter). The `pentagon add <component_name>` command prefers built-in components to external components, so ensure your component name is not already in use. The <component_name> argument can be a dot-separated module path, i.e. `gcp.cluster`, where the last segment is the lowercase class name. For example, `gcp.cluster` finds the Cluster class in the cluster module in the gcp module.
Examples of plugin component package module name and use:
- pentagon_examplecomponent:
* package name: `pentagon-example-component`
* command: `pentagon add component`
* module path: `pentagon_component`
* class: `Component()`
- pentagon_kops
* package name: `pentagon-kops`
* command: `pentagon add kops`
* module path: `pentagon_kops`
* class: `Kops()`
- pentagon_kops.cluster
* package name: `pentagon-kops`
* command: `pentagon add kops.cluster`
* module path: `pentagon_kops.kops`
* class: `Cluster()`
See [example](/example-component)
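As a sketch, the mapping from the last path segment to the class name follows this convention (the helper below is a hypothetical illustration, not part of Pentagon):

```
# Hypothetical illustration of the component path convention
component="gcp.cluster"
class_part="${component##*.}"                                      # cluster
first=$(printf '%s' "${class_part}" | cut -c1 | tr '[:lower:]' '[:upper:]')
rest=$(printf '%s' "${class_part}" | cut -c2-)
echo "module path: ${component}, class: ${first}${rest}()"
```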
================================================
FILE: docs/getting-started.md
================================================
# What is Pentagon?
**Pentagon is a cli tool to generate repeatable, cloud-based [Kubernetes](https://kubernetes.io/) infrastructure**.
Pentagon is “batteries included”: not only does one get a network with a cluster, but the defaults include these commonly desired features:
- At its core, powered by Kubernetes, configured to be highly available: masters and nodes are clustered
- Multiple segregated development / non-production environments
- VPN-based access control
- A highly-available network, built across multiple Availability Zones
## How does it work?
**Pentagon produces a directory.** The directory defines a basic set of configurations for [Ansible](https://www.ansible.com/), [Terraform](https://www.terraform.io/), and [kops](https://github.com/kubernetes/kops). When these tools are run in a specific order the result is a VPC with a VPN and a Kubernetes cluster in AWS. (GKE Support is in the works). Pentagon is designed to be customizable but has defaults that fit most software infrastructure needs.
# Getting Started with Pentagon
## Requirements
* python2 >= 2.7 [Install Python](https://www.python.org/downloads/)
* pip [Install Pip](https://pip.pypa.io/en/stable/installing/)
* git [Install Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
* Terraform [Install Terraform ](https://www.terraform.io/downloads.html)
* Ansible [Install Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
* Kubectl [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* kops [Install kops](https://github.com/kubernetes/kops#installing)
* jq [Install JQ](https://stedolan.github.io/jq/download/)
## Installation
* `pip install pentagon`
# Basic Usage
## Quick Start
### Create an AWS Pentagon Project
* `pentagon start-project <project-name> --aws-access-key <aws-access-key> --aws-secret-key <aws-secret-key> --aws-default-region <aws-default-region> --dns-zone <your-dns-zone-name>`
### Create a GCP/GKE Pentagon Project
* `pentagon --log-level=DEBUG start-project --cloud=gcp <project-name> --gcp-zones=<zone_1>,<zone_2>,..,<zone_n> --gcp-project <gcp_project_name> --gcp-region <gcp_region>`
* With the above basic options set, defaults will be set for you. See [Advanced Project Initialization](#advanced-project-initialization) for more options.
* Arguments may also be set using environment variables in the format `PENTAGON_<argument_name_with_underscores>`.
* Or using a yaml file with key value pairs where the key is the option name
* Enter the project directory: `cd <project-name>-infrastructure`
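For example, following the convention above, the default region flag could come from the environment instead (the value here is illustrative):

```
# Equivalent to passing --aws-default-region us-east-1 on the command line
export PENTAGON_aws_default_region=us-east-1
echo "${PENTAGON_aws_default_region}"
```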
#### Next steps
The `pentagon` commands take no action in your cloud infrastructure by themselves. You will need to run these commands to finish creating a default project:
* `export INFRASTRUCTURE_REPO=$(pwd)`
* `export INVENTORY=default`
* `. yaml_source inventory/default/config/local/vars.yml`
* `. yaml_source inventory/default/config/private/secrets.yml`
* Sources environment variables required for the following steps. This will be required each time you work with the infrastructure repository or if you move the repository to another location.
* `bash inventory/default/config/local/local-config-init`
* `. yaml_source inventory/default/config/local/vars.yml`
* If using AWS, create an S3 bucket named `<project-name>-infrastructure` in your AWS account. Terraform will store its state file here. Make sure the AWS IAM user has write access to it.
* `aws s3 mb s3://<project-name>-infrastructure`
## AWS
### Create a VPC
This creates the VPC and private, public, and admin subnets in that VPC for non-Kubernetes resources. Read more about networking [here](network.md).
* `cd inventory/default/terraform`
* Edit `aws_vpc.auto.tfvars` and verify the generated `aws_azs` actually exist in `aws_region`
* `terraform init`
* `terraform plan`
* `terraform apply`
* In `inventory/default/clusters/*/vars.yml`, set `VPC_ID` using the newly created VPC ID. You can find that ID in Terraform output or using the AWS web console.
* Also, add the `aws_nat_gateway_ids` from the Terraform output to `inventory/default/clusters/*/vars.yml` as a list named `nat_gateways`
### Configure DNS and Route53
If you don't already have a Route53 Hosted Zone configured, do that now.
* Create a Route53 Hosted Zone (e.g. `pentagon.mycompany.com`)
* In `inventory/default/clusters/*/vars.yml`, set `dns_zone` to your Hosted Zone (e.g. `pentagon.mycompany.com`)
### Setup a VPN
This creates an AWS instance running [OpenVPN](https://openvpn.net/). Read more about the VPN [here](vpn.md).
* `cd $INFRASTRUCTURE_REPO`
* `ansible-galaxy install -r ansible-requirements.yml`
* `cd inventory/default/resources/admin-environment`
* In `env.yml`, set the list of user names that should have access to the VPN under `openvpn_clients`. You can add more later.
* Run Ansible a few times
* Run `ansible-playbook vpn.yml` until it fails on `VPN security groups`
* Run `ansible-playbook vpn.yml` a second time and it will succeed
* Edit `inventory/default/config/private/ssh_config` and add the IP address from Ansible's output to the `#VPN instance` section.
### Configure a Kubernetes Cluster
Pentagon uses Kops to create clusters in AWS. The default layout creates configurations for two Kubernetes clusters: `working` and `production`. See [Overview](overview.md) for a more comprehensive description of the directory layout.
* Make sure your KOPS variables are set correctly with `. yaml_source inventory/default/config/local/vars.yml && . yaml_source inventory/default/config/private/secrets.yml`
* Move into to the path for the cluster you want to work on with `cd inventory/default/clusters/<production|working>`
* If you are using the `aws_vpc` Terraform provided, ensure you have set `nat_gateways` in the `vars.yml` for each cluster and that the order of the `nat_gateways` ids matches the order of the subnets listed. This ensures that the Kops cluster will have a properly configured network, with the private subnets associated to the existing NAT gateways.
### Create Kubernetes Cluster
* Use the [Kops component](components.md#kopscluster) to create your cluster.
* By default a `vars.yml` will be created at `inventory/default/clusters/working` and `inventory/default/clusters/production`. Those files are sufficient to create a cluster using the kops.cluster component, but you will need to enter `nat_gateways` and `vpc_id` as described in the [kops component documentation](components.md#kopscluster)
* To generate the cluster configs run `pentagon --log-level=DEBUG add kops.cluster -f vars.yml` in the directory of the cluster you wish to create.
* To actually create the cluster: `cd <cluster_name>` then `bash kops.sh`
* Use [kops](https://github.com/kubernetes/kops/blob/master/docs/cli/kops.md) to manage the cluster if necessary.
* Run `kops edit cluster <clustername>` to view or edit the `cluster.spec`
* You may also wish to edit the instance groups prior to cluster creation:
* `kops get instancegroups --name <clustername>` to list them (one master group per AZ and one node group)
* `kops edit instancegroups --name <clustername> <instancegroupname>` to edit any of them
* Run `kops update cluster <clustername>` and review the output to ensure it matches the cluster you wish to create
* Run `kops update cluster <clustername> --yes` to create the cluster
* While waiting for the cluster to create, consult the [kops documentation](https://github.com/kubernetes/kops/blob/master/docs/README.md) for more information about using Kops and interacting with your new cluster
## GCP/GKE
This component is deprecated and not maintained. We are working on a new Terraform module to manage GKE clusters. Use this at your own risk
### Initialize Terraform
* Make backend: `gsutil mb gs://<project_name>-infrastructure`
* `cd inventory/default/terraform/ && terraform init`
### Create Kubernetes Cluster
* `cd ${INFRASTRUCTURE_REPO}/inventory/default/clusters/*`
* `bash create_cluster.sh`
## Creating Resources Outside of Kubernetes
Typically, infrastructure will be required outside of your Kubernetes cluster. EC2, RDS, or ElastiCache instances, etc., are often required by an application.
Pentagon convention suggests you use Ansible to create these resources and save the Ansible playbooks in the `inventory/default/resources/` or the `inventory/default/clusters/<cluster>/resources/` directory, depending on the scope with which the playbook will be utilized. If the resources are not specific to either cluster, then we suggest you save the playbook at the `default/resources/` level. Likewise, if it is a resource that will only be used by one cluster, such as a staging database or a production database, then we suggest writing the Ansible playbook at the `default/clusters/<cluster>/resources/` level. Writing Ansible roles can be very helpful to DRY up your resource configurations.
# Advanced Project Initialization
If you wish to utilize the templating ability of the `pentagon start-project` command, but need to modify the defaults, a comprehensive list of command line flags (listed below) should be able to customize the output of the `pentagon start-project` command to your liking.
### Start New Project
* `pentagon start-project <project-name> <options>`
* This will create a skeleton repository with placeholder strings in place of the options shown above in the [Quick Start](#quick-start)
* Edit the `config/private/secrets.yml` and `config/local/env.yml` before proceeding onto the next step
### Clone Existing Project
* `pentagon start-project <project-name> --git-repo <repository-of-existing-project> <options>`
### Available Commands
* `pentagon start-project`
### _start-project_
`pentagon start-project` creates a new project in your workspace directory and creates a matching virtualenv for you. Most values have defaults that should get you up and running very quickly with a new Pentagon project. You may also clone an existing Pentagon project if one exists. You may set any of these options as environment variables instead by prefixing them with `PENTAGON_`, for example, for security purposes `PENTAGON_aws_access_key` can be used instead of `--aws-access-key`
#### Options
* **-f, --config-file**:
* File to read configuration options from.
* No default
* ***File supersedes command line options.***
* **-o, --output-file**:
* No default
* **--cloud**:
* Cloud provider to create default inventory.
* Defaults to 'aws'. [aws,gcp,none]
* **--repository-name**:
* Name of the folder to initialize the infrastructure repository
* Defaults to `<project-name>-infrastructure`
* **--configure / --no-configure:**:
* Configure project with default settings
* Default to True
* If you choose `--no-configure`, placeholder values will be used instead of defaults and you will have to manually edit the configuration files
* **--force / --no-force**:
* Ignore existing directories and copy project anyway
* Defaults to False
* **--aws-access-key**:
* AWS access key
* No Default
* **--aws-secret-key**:
* AWS secret key
* No Default
* **--aws-default-region**:
* AWS default region
* No Default
* If the `--aws-default-region` option is set it will allow the default to be set for `--aws-availability-zones` and `--aws-availability-zone-count`
* **--aws-availability-zones**:
* AWS availability zones as a comma delimited list.
* Defaults to `<aws-default-region>a`, `<aws-default-region>b`, ... when `--aws-default-region` is set, calculated using the `--aws-availability-zone-count` value. Otherwise, a placeholder string is used.
* **--aws-availability-zone-count**:
* Number of availability zones to use
* Defaults to 3 when a default region is entered. Otherwise, a placeholder string is used
* **--dns-zone**:
* DNS Zone of the project. Used for VPN instance and Kubernetes api
* Kubernetes dns zones can be overridden with the arguments found below
* Defaults to `<project-name>.com`
* **--infrastructure-bucket**:
* Name of S3 Bucket to store state
* Defaults to `<project-name>-infrastructure`
* pentagon start-project does not create this bucket; it will need to be created separately
* **--git-repo**:
* Existing git repository to clone
* No Default
* ***When --git-repo is set, no configuration actions are taken. Pentagon will set up the virtualenv and clone the repository only***
* **--create-keys / --no-create-keys**:
* Create SSH keys or not
* Defaults to True
* Keys are saved to `<workspace>/<repository-name>/config/private`
* 5 keys will be created:
* `admin_vpn`: key for the VPN instances
* `working_kube`: key for working Kubernetes instances
* `production_kube`: key for production Kubernetes instances
* `working_private`: key for non-Kubernetes resources in the working private subnets
* `production_private`: key for non-Kubernetes resources in the production private subnets
* ***Keys are not uploaded to AWS. When needed, this will need to be done manually***
* **--admin-vpn-key**:
* Name of the SSH key for the admin user of the VPN instance
* Defaults to 'admin_vpn'
* **--working-kube-key**:
* Name of the SSH key for the working Kubernetes cluster
* Defaults to 'working_kube'
* **--production-kube-key**:
* Name of the SSH key for the production Kubernetes cluster
* Defaults to 'production_kube'
* **--working-private-key**:
* Name of the SSH key for the working non-Kubernetes instances
* Defaults to 'working_private'
* **--production-private-key**:
* Name of the SSH key for the production non-Kubernetes instances
* Defaults to 'production_private'
* **--vpc-name**:
* Name of VPC to create
* Defaults to date string in the format `<YYYYMMDD>`
* **--vpc-cidr-base**
* First two octets of the VPC ip space
* Defaults to '172.20'
* **--working-kubernetes-cluster-name**:
* Name of the working Kubernetes cluster nodes
* Defaults to `working-1.<project-name>.com`
* **--working-kubernetes-node-count**:
* Number of the working Kubernetes cluster nodes
* Defaults to 3
* **--working-kubernetes-master-aws-zone**:
* Availability zone to place the Kube master in
* Defaults to the first zone in --aws-availability-zones
* **--working-kubernetes-master-node-type**:
* AWS instance type of the Kube master node in the working cluster
* Defaults to t2.medium
* **--working-kubernetes-worker-node-type**:
* AWS instance type of the Kube worker nodes in the working cluster
* Defaults to t2.medium
* **--working-kubernetes-dns-zone**:
* DNS Zone of the Kubernetes working cluster
* Defaults to `working.<project-name>.com`
* **--working-kubernetes-v-log-level**:
* V Log Level Kubernetes working cluster
* Defaults to 10
* **--working-kubernetes-network-cidr**:
* Network cidr of the Kubernetes working cluster
* Defaults to `172.20.0.0/16`
* **--production-kubernetes-cluster-name**:
* Name of the production Kubernetes cluster nodes
* Defaults to `production-1.<project-name>.com`
* **--production-kubernetes-node-count**:
* Number of the production Kubernetes cluster nodes
* Defaults to 3
* **--production-kubernetes-master-aws-zone**:
* Availability zone to place the Kube master in
* Defaults to the first zone in --aws-availability-zones
* **--production-kubernetes-master-node-type**:
* AWS instance type of the Kube master node in the production cluster
* Defaults to t2.medium
* **--production-kubernetes-worker-node-type**:
* AWS instance type of the Kube worker nodes in the production cluster
* Defaults to t2.medium
* **--production-kubernetes-dns-zone**:
* DNS Zone of the Kubernetes production cluster
* Defaults to `production.<project-name>.com`
* **--production-kubernetes-v-log-level**:
* V Log Level Kubernetes production cluster
* Defaults to 10
* **--production-kubernetes-network-cidr**:
* Network cidr of the Kubernetes production cluster
* Defaults to `172.20.0.0/16`
* **--configure-vpn/--no-configure-vpn**:
* Whether or not to configure the VPN `env.yml` file
* Defaults to True
* **--vpn-ami-id**
* AWS ami id to use for the VPN instance
* Defaults to looking up ami-id from AWS
* **--log-level**:
  * Pentagon CLI log level. Accepts DEBUG, INFO, WARN, ERROR
* Defaults to INFO
* **--help**:
* Show help message and exit.
* **--gcp-project**:
  * Google Cloud Project to create clusters in
  * This argument is required when --cloud=gcp
* **--gcp-zones**:
  * Google Cloud zones to create clusters in, as a comma-separated list
  * This argument is required when --cloud=gcp
* **--gcp-region**:
  * Google Cloud region to create resources in
  * This argument is required when --cloud=gcp
================================================
FILE: docs/network.md
================================================
# VPC Description
We create a base VPC with [terraform-vpc](https://github.com/reactiveops/terraform-vpc) that allocates capacity for AWS-based resources that a client needs to host, including `kubernetes`. We then let `kops` work in the same VPC to carve out a dedicated space for itself so that `kubernetes` is self-contained and manageable.
After running `pentagon start-project` you can alter the configuration of the VPC by editing the `default/vpc/terraform.tfvars` and `default/vpc/main.tf` files in the infrastructure repository. You can also configure the VPC using command line arguments to `pentagon start-project`.
## VPC
The VPC is created by Terraform VPC which sets up a standard RO-style network platform. `kops` is then used to configure and deploy `kubernetes` into this existing VPC.
### Subnets
Per AZ, terraform-vpc creates 4 subnets: 1 `admin`, 1 `public`, and 2 `private` (one `working` and one `production`). Use these subnets to deploy any resources other than those directly associated with `kubernetes`.
Let `kops` create dedicated public and private subnets that run in parallel to those created by terraform-vpc. Each AZ consists of a pair of kops-defined subnets: `public` and `private`. In `kops edit cluster`, allocate CIDRs from the available address space.
### NAT Gateways
NAT Gateways are created by terraform-vpc and one is needed for each AZ. You can share a NAT Gateway for use by `kubernetes` and your other AWS-based resources simultaneously. This is the only exception to the separation of `kops` and TF. During `kops edit cluster`, specify the NAT Gateway in the private subnet using the keyword `egress` as shown in the [kops Example networking spec](#kops-example-networking-spec). Egress is currently only useful if you are using private subnets as defined in kops.
## Route tables
terraform-vpc sets up route tables for all of the standard subnets. The `private` subnets' default route for external traffic is the NAT Gateway in that zone. The `public` subnets' default route is through an Internet Gateway.
`kops` manages the subnets for your `kubernetes` resources, so it also manages their route tables. Specifying the NAT Gateway that terraform-vpc created in `egress` configures the default routes for those subnets to use that NAT Gateway.
Because NAT Gateways cannot be tagged on AWS, `kops` keeps track of this NAT Gateway by tagging the route table with the K=V pair `AssociatedNatGateway=nat-05ee835341f099286`. This exists for the delete logic in `kops`: the delete likely wouldn't actually succeed (because the Gateway would still be in use by other routes), but `kops` would attempt to delete it as a "related resource".
## Tags
terraform-vpc tags all of the resources that it creates and manages as `Managed By=Terraform`. Likewise, `kops` tags the resources that it creates and manages with `KubernetesCluster=<clustername>`. By letting `kops` create its own subnets, `kops` related tags are all restricted to resources that are owned by `kops`, so terraform-vpc doesn't ever need to know about `kops` and vice versa.
# Kops network design
## Network overview diagram
| **Subnet Name (abstracted)** | **Example Name** | **Private / Public** | **Created / Managed by** |
| -------------------------------- | -------------------------------------------------------- | -------------------- | ------------------------ |
| admin_az$n | admin_az1 | Private | terraform-vpc |
| private_working_az$n | private_working_az1 | Private | terraform-vpc |
| private_prod_az$n | private_prod_az1 | Private | terraform-vpc |
| public_az$n                      | public_az1                                                | Public               | terraform-vpc            |
| az$n.$cluster_identifier | us-east-1a.working-1.shareddev.dev.hillghost.com | Private | kops |
| utility-az$n.$cluster_identifier | utility-us-east-1a.working-1.shareddev.dev.hillghost.com | Public | kops |
CIDRs should always be allocated assuming a 4-AZ layout for possible future expansion, even if the client doesn't initially need all of the AZs. [This Document](https://docs.google.com/spreadsheets/d/1wObSMI8xvgztqYEhUkIALNw8fDBFVx-Xv4a9UC8Z7HE) lays out some potential subnet CIDRs for various types of layouts.
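As a concrete, purely illustrative sketch of carving one private /24 per AZ out of the cluster's `172.20.0.0/16` (the starting offset and AZ names mirror the example spec below; real allocations should follow the layout spreadsheet):

```shell
# Illustrative only: print one private /24 per AZ starting at 172.20.16.0/24.
base="172.20"
octet=16
for az in us-east-1a us-east-1b us-east-1c; do
  echo "${az}: ${base}.${octet}.0/24"
  octet=$((octet + 1))
done
```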
## Example of possible network section of the kops cluster.spec
```
subnets:
- cidr: 172.20.16.0/24
egress: nat-05ee835341f099286
name: us-east-1a
type: Private
zone: us-east-1a
- cidr: 172.20.17.0/24
egress: nat-0973eca2e99f9249c
name: us-east-1b
type: Private
zone: us-east-1b
- cidr: 172.20.18.0/24
egress: nat-015aa74ead665693d
name: us-east-1c
type: Private
zone: us-east-1c
- cidr: 172.20.20.0/24
name: utility-us-east-1a
type: Utility
zone: us-east-1a
- cidr: 172.20.21.0/24
name: utility-us-east-1b
type: Utility
zone: us-east-1b
- cidr: 172.20.22.0/24
name: utility-us-east-1c
type: Utility
zone: us-east-1c
```
================================================
FILE: docs/overview.md
================================================
# Infrastructure Repository Overview
After running `pentagon start-project` you will have a directory with a layout similar to:
```
.
├── README.md
├── ansible-requirements.yml
├── inventory/
├── docs/
├── plugins/
└── requirements.txt
```
See also [Extended Layout](#extended-layout-description)
Generally speaking, the layout of the infrastructure repository is hierarchical. That is to say, higher level directories contain scripts, resources, and variables that are intended to be used earlier in the creation of your infrastructure.
## Core Directories
### inventory/
The inventory directory is used to store an arbitrary segment of your infrastructure. It can be a separate AWS account, AWS VPC, GCP Project, or GCP Network. It can be as fine grained as you like, but the config directory in each "inventory item" is scoped to, at most, one AWS Account+VPC or one GCP Project+Network. By default, the `inventory` directory includes one `default` directory with configuration for one VPC and two Kops clusters. You can pass `pentagon start-project` the `--no-configure` flag to build your own.
### inventory/(default)/config/
The config directory is separated into `local` and `private`. Files, scripts, and templates in `config/local` are checked into source control and should not contain any workstation specific values.
`config/local/env-vars.sh` takes a specific list of variable names, looks up their values in `config/local/vars.yml` and `config/private/secrets.yml`, and exports them as environment variables. These environment variables are used throughout the infrastructure repository, so make sure you `source config/local/env-vars.sh`.
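The idea can be sketched in a few lines of shell (the file path, key name, and parsing here are illustrative, not the actual script):

```shell
# Hypothetical sketch: read one key out of a vars.yml-style file and export
# it as an environment variable, as env-vars.sh does for its variable list.
cat > /tmp/vars.yml <<'EOF'
aws_default_region: us-east-1
infrastructure_bucket: example-bucket
EOF
AWS_DEFAULT_REGION=$(awk '/^aws_default_region:/ {print $2}' /tmp/vars.yml)
export AWS_DEFAULT_REGION
echo "AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}"
```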
Some configurations require absolute paths which, if checked into source control, can make working with teams challenging. The `config/local/local-config-init` script makes this easier by providing a fast way to generate workstation specific configurations from the `ansible.cfg-default` and `ssh_config-default` template files. The generated workstation specific configuration files are written to `config/private`.
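The template-to-private-config step can be sketched like this (the placeholder token, paths, and key name are assumptions for illustration, not the real template contents):

```shell
# Illustrative sketch of what a local-config-init style script does:
# substitute a workstation-specific absolute path into a checked-in
# template and write the result to the private config directory.
mkdir -p /tmp/repo/config/local /tmp/repo/config/private
cat > /tmp/repo/config/local/ssh_config-default <<'EOF'
IdentityFile %%REPO_PATH%%/config/private/admin-vpn
EOF
REPO_PATH=/home/me/infrastructure
sed "s|%%REPO_PATH%%|${REPO_PATH}|" /tmp/repo/config/local/ssh_config-default \
  > /tmp/repo/config/private/ssh_config
cat /tmp/repo/config/private/ssh_config
```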
`config/private/ssh_config` and `config/private/ansible.cfg` greatly simplify interaction with your cloud VMs. They are configured to automatically use the correct key and user name based on the IP address of the host. You can either use the command `ssh -F "${INFRASTRUCTURE_REPO}/config/private/ssh_config"` or alias ssh with `alias ssh='ssh -F "${INFRASTRUCTURE_REPO}/config/private/ssh_config"'`.
`config/private`, in addition to `secrets.yml`, also contains SSH keys generated by `start-project`. Unless you opted not to create the keys, the `admin-vpn` key pair will be uploaded to AWS for you when the VPN instance is created, and the `*-kube` keys will be uploaded automatically when `kops` is invoked to create the Kubernetes cluster. The other keys, `production-private` and `working-private`, are created as a convenience to be used for any instances created in the VPC `private-working` and `private-production` subnets. When `kops` is invoked to create the cluster, the Kubernetes config secret will also be created as `config/private/kube_config`.
### inventory/(default)/
The `default/` directory contains most of the moving parts of the infrastructure repository. The name `default` is not important! The contents are. The goal is that the contents of the `default` directory can be deep copied to create parallel (cloud provider, cloud account, vpc) infrastructure in a single repository. *Consider this a guideline, not a rule!*
```
├── clusters
├── resources
└── vpc
```
### inventory/(default)/clusters/
Contains `working/` and `production/` directories. Both are laid out identically.
`working` is intended to contain any non-production Kubernetes objects: pods, deployments, services, etc. `production` is intended to contain any production Kubernetes objects: pods, deployments, services, etc.
```
├── kops.sh
├── cluster.yml
├── nodes.yml
├── masters.yml
└── secret.sh
```
`kops.sh` is a bash script that uploads the yml files to the S3 bucket set in `inventory/(default)/config/local/vars.yml`.
`secret.sh` creates the secret that is the SSH public key material for the nodes in the cluster.
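A hedged sketch of the kops.sh flow (the state-store bucket name is illustrative, since the real script reads it from `vars.yml`, and `echo` stands in for actually invoking `kops`):

```shell
# Point kops at the S3 state store, then push each manifest. 'echo' is a
# stand-in so this sketch runs without kops or AWS credentials.
export KOPS_STATE_STORE="s3://example-infrastructure-bucket"
for manifest in cluster.yml masters.yml nodes.yml; do
  echo "kops replace --force -f ${manifest}"
done
```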
### inventory/(default)/terraform
The `terraform/` directory is for the AWS VPC Terraform. It is intended to hold the configuration for all Terraform for the "inventory item." Terraform modules should be used to organize the Terraform code.
### inventory/(default)/resources
The `resources/` directory is where Ansible playbooks for _non cluster specific_ cloud resources can be stored. The `admin-environment` playbook, which creates and configures the OpenVPN instance, is present "out of the box".
## Supporting Directories
### plugins/
This is the Ansible plugins directory. The `ec2` inventory plugin is enabled by default; this is set in `config/private/ansible.cfg`.
### roles/
The Ansible roles are installed here by default. Set in `config/private/ansible.cfg`.
This is not checked into Git.
## Extended Layout Description
```
├── README.md
├── ansible-requirements.yml
├── config.yml
├── inventory
│ └── default * Directory for default cloud
│ ├── clusters * Directory for Clusters
│ │ ├── production * Production Cluster Directory
│ │ │ └── vars.yml * Variables specific to production. Used by `pentagon add kops.cluster`
│ │ └── working * Working Cluster Directory
│ │ └── vars.yml * Variables specific to working. Used by `pentagon add kops.cluster`
│ ├── config * Configuration Directory
│ │ ├── local * Local, non-secret configuration
│ │ │ ├── ansible.cfg-default * templating code to create private configuration
│ │ │ ├── local-config-init
│ │ │ ├── ssh_config-default
│ │ │ └── vars.yml
│ │ └── private * Private, secret configs. ignored by git
│ │ ├── admin-vpn * SSH key pairs generated at `start-project`
│ │ ├── admin-vpn.pub
│ │ ├── production-kube
│ │ ├── production-kube.pub
│ │ ├── production-private
│ │ ├── production-private.pub
│ │ ├── secrets.yml * Secret values in yaml config file
│ │ ├── working-kube
│ │ ├── working-kube.pub
│ │ ├── working-private
│ │ └── working-private.pub
│ ├── kubernetes * You can store kubernetes manifests here
│ ├── resources * Ansible playbook for creating the OpenVPN instance
│ │ └── admin-environment
│ │ ├── destroy.yml
│ │ ├── env.yml
│ │ └── vpn.yml
│ └── terraform * Terraform for entire inventory item
│ ├── aws_vpc.auto.tfvars
│ ├── aws_vpc.tf
│ ├── aws_vpc_variables.tf
│ ├── backend.tf
│ └── provider.tf
├── plugins * Ansible plugins
└── requirements.txt
```
================================================
FILE: docs/vpn.md
================================================
# VPN
## Setup
The VPN allows SSH access to instances in the private subnets in the VPC. This includes the kops-created subnets and the private subnets created during VPC creation.
This can be done before or after configuring and deploying your kubernetes cluster(s). Your VPC must be set up before starting VPN setup. By default, an SSH key is created for the VPN instance during `pentagon start-project`. The playbook will upload the key and associate it with the new AWS instance.
* Review `account/vars.yml` and ensure that `vpc_tag_name`, `org_name`, `canonical_zone` and `vpn_bucket` are set.
* Ensure `config/local/ssh_config` has the key path and subnets set for ssh access
* In `default/resources/admin-environment/env.yml`, verify the following are set properly:
  - `aws_key_name`: name of the key pair created earlier
  - `default_ami`: if not set, see the [Ubuntu AMI locator](https://cloud-images.ubuntu.com/locator/). Use Ubuntu Trusty and make sure it is located in the correct region, with instance type `hvm:ebs-ssd`.
  - Edit other variables as needed. VPN users to be created (i.e. VPN clients) are listed in the Ansible array `openvpn_clients`.
* If you haven't already, in the project directory, install ansible requirements:
```
ansible-galaxy install -r ansible-requirements.yml
```
* Run the VPN playbook:
```
ansible-playbook default/resources/admin-environment/vpn.yml
```
* Even when all the inputs are correct, sometimes you will need to re-run ansible a couple of times to get through all of the steps.
## Usage
The VPN playbook will create an instance running OpenVPN that you can connect to with a VPN client. On OSX, one option is Tunnelblick. See: [How to connect to access server from OSX](https://openvpn.net/index.php/access-server/docs/admin-guides/183-how-to-connect-to-access-server-from-a-mac.html)
No matter which client you choose, the keys for each of the users will be deposited into the S3 bucket specified earlier in `default/resources/admin-environment/env.yml`. Download these keys and use them to access your cluster.
================================================
FILE: example-component/LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017, Reactive Ops Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: example-component/MANIFEST.in
================================================
recursive-include pentagon_component/files/ *
================================================
FILE: example-component/README.md
================================================
================================================
FILE: example-component/pentagon_component/__init__.py
================================================
from pentagon.component import ComponentBase
import os
class Component(ComponentBase):
_path = os.path.dirname(__file__)
================================================
FILE: example-component/pentagon_component/files/__init__.py
================================================
================================================
FILE: example-component/pentagon_component/files/example_template.jinja
================================================
# blank
================================================
FILE: example-component/requirement.txt
================================================
================================================
FILE: example-component/setup.py
================================================
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2017 Reactive Ops Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
try:
    from setuptools import setup, find_packages
except ImportError:
    print("setuptools is required. Please run: "
          "pip install setuptools")
    sys.exit(1)
setup(name='pentagon-example-component',
version='0.0.1',
description='Example Pentagon Component',
author='ReactiveOp Inc.',
author_email='reactive@reactiveops.com',
url='http://reactiveops.com/',
license='Apache2.0',
include_package_data=True,
install_requires=[],
data_files=[],
packages=find_packages()
)
================================================
FILE: pentagon/__init__.py
================================================
================================================
FILE: pentagon/cli.py
================================================
#!/usr/bin/env python
import os
import click
import logging
import coloredlogs
import traceback
import oyaml as yaml
import json
import migration
from pydoc import locate
from .pentagon import PentagonException
from .pentagon import GCPPentagonProject, AWSPentagonProject, PentagonProject
from helpers import merge_dict
from meta import __version__
class RequiredIf(click.Option):
def __init__(self, *args, **kwargs):
self.required_if = kwargs.pop('required_if').split('=')
self.required_option = self.required_if[0]
self.required_value = self.required_if[1]
assert self.required_if, "'required_if' parameter required"
kwargs['help'] = (kwargs.get('help', '') +
' NOTE: This argument required when --%s=%s' %
(self.required_option, self.required_value)
).strip()
super(RequiredIf, self).__init__(*args, **kwargs)
def handle_parse_result(self, ctx, opts, args):
other_present = self.required_option in ctx.params
if not other_present or ctx.params[self.required_option] != self.required_value:
self.prompt = None
return super(RequiredIf, self).handle_parse_result(
ctx, opts, args)
def validate_not_empty_string(ctx, param, value):
    ''' Validates that the value of a prompted entry is not an empty string '''
try:
if value is not None and value.strip() == '':
raise click.BadParameter('{} cannot be empty'.format(param.name))
else:
return value
except click.BadParameter as e:
click.echo(e)
value = click.prompt(param.prompt)
return validate_not_empty_string(ctx, param, value)
@click.group()
@click.version_option(__version__)
@click.option('--log-level', default="INFO", help="Log Level DEBUG,INFO,WARN,ERROR")
@click.pass_context
def cli(ctx, log_level, *args, **kwargs):
coloredlogs.install(level=log_level)
@click.command()
@click.pass_context
@click.argument('name')
# General directory and file name options
@click.option('-f', '--config-file', help='File to read configuration options from. File supersedes command line options.')
@click.option('-o', '--output-file', default='config.yml', help='File to write output after completion.')
@click.option('--workspace-directory', help='Directory to place new project. Defaults to ./')
@click.option('--configure/--no-configure', default=True, help='Configure project with default settings.')
@click.option('--force/--no-force', help="Ignore existing directories and copy project.")
@click.option('--cloud', default="aws", help="Cloud provider to create default inventory. Defaults to 'aws'. [aws,gcp,none]")
# Currently only AWS but maybe we can/should add GCP later
@click.option('--configure-vpn/--no-configure-vpn', default=True, help="Whether or not to configure a vpn. Default True.")
@click.option('--vpc-name', help="Name of VPC to create.")
@click.option('--vpc-cidr-base', help="First two octets of the VPC ip space.")
@click.option('--vpc-id', help="AWS VPC id where the clusters are going to be created.")
@click.option('--admin-vpn-key', help="Name of the ssh key for the admin user of the VPN instance.")
@click.option('--vpn-ami-id', help="ami-id to use for the VPN instance.")
# General Kubernetes options
@click.option('--kubernetes-version', help="Version of kubernetes to use for cluster nodes.")
@click.option('--disk-size', help="Size of disk to provision on the kubernetes vms.")
# Working
@click.option('--working-kubernetes-cluster-name', help="Name of the working kubernetes cluster nodes.")
@click.option('--working-kubernetes-node-count', help="Number of nodes for the working kubernetes cluster.")
@click.option('--working-kubernetes-worker-node-type', help="Node type of the kube workers.")
@click.option('--working-kubernetes-network-cidr', help="Network CIDR of the kubernetes working cluster.")
# Production
@click.option('--production-kubernetes-cluster-name', help="Name of the production kubernetes cluster nodes.")
@click.option('--production-kubernetes-node-count', help="Number of nodes for the production kubernetes cluster nodes.")
@click.option('--production-kubernetes-worker-node-type', help="Node type of the kube workers.")
@click.option('--production-kubernetes-network-cidr', help="Network CIDR of the kubernetes production cluster.")
# AWS Cloud options
@click.option('--aws-access-key', prompt=True, callback=validate_not_empty_string, default=lambda: os.environ.get('PENTAGON_aws_access_key'), help="AWS access key.", cls=RequiredIf, required_if='cloud=aws')
@click.option('--aws-secret-key', prompt=True, callback=validate_not_empty_string, default=lambda: os.environ.get('PENTAGON_aws_secret_key'), help="AWS secret key.", cls=RequiredIf, required_if='cloud=aws')
@click.option('--aws-default-region', help="AWS default region.", cls=RequiredIf, required_if='cloud=aws')
@click.option('--aws-availability-zones', help="[Deprecated] Use \"--availability-zones\". AWS availability zones as a comma delimited with spaces. Default to region a, region b, ... region z.")
@click.option('--aws-availability-zone-count', help="Number of availability zones to use.")
@click.option('--infrastructure-bucket', help="Name of S3 Bucket to store state.")
@click.option('--dns-zone', help="DNS zone to configure DNS records in.")
@click.option('--create-keys/--no-create-keys', default=True, help="Create ssh keys or not.")
# AWS only Kubernetes options
# Working
@click.option('--working-kubernetes-master-aws-zone', help="Availability zone to place the kube master in.")
@click.option('--working-kubernetes-master-node-type', help="AWS only. Node type of the kube master.")
@click.option('--working-kube-key', help="Name of the ssh key for the working kubernetes cluster.")
@click.option('--working-private-key', help="Name of the ssh key for the working non kubernetes instances.")
@click.option('--working-kubernetes-dns-zone', help="DNS Zone of the kubernetes working cluster.")
@click.option('--working-kubernetes-v-log-level', help="V log level for the kubernetes working cluster.")
# Production
@click.option('--production-kubernetes-master-aws-zone', help="Availability zone to place the kube master in.")
@click.option('--production-kubernetes-master-node-type', help="AWS only. Node type of the kube master.")
@click.option('--production-kube-key', help="Name of the ssh key for the production kubernetes cluster.")
@click.option('--production-private-key', help="Name of the ssh key for the production non kubernetes instances.")
@click.option('--production-kubernetes-dns-zone', help="DNS Zone of the kubernetes production cluster.")
@click.option('--production-kubernetes-v-log-level', help="V log level for the kubernetes production cluster.")
# GCP Cloud options
@click.option('--gcp-project', prompt=True, callback=validate_not_empty_string, help="Google Cloud Project to create clusters in.", cls=RequiredIf, required_if='cloud=gcp')
@click.option('--gcp-region', prompt=True, callback=validate_not_empty_string, help="Google Cloud Project Region to use for Cluster.", cls=RequiredIf, required_if='cloud=gcp')
@click.option('--gcp-cluster-name', prompt=True, callback=validate_not_empty_string, help="Google GKE Cluster Name.", cls=RequiredIf, required_if='cloud=gcp')
@click.option('--gcp-nodes-cidr', prompt=True, callback=validate_not_empty_string, help="Google GKE Nodes CIDR.", cls=RequiredIf, required_if='cloud=gcp')
@click.option('--gcp-services-cidr', prompt=True, callback=validate_not_empty_string, help="Google GKE services CIDR.", cls=RequiredIf, required_if='cloud=gcp')
@click.option('--gcp-pods-cidr', prompt=True, callback=validate_not_empty_string, help="Google GKE pods CIDR.", cls=RequiredIf, required_if='cloud=gcp')
@click.option('--gcp-kubernetes-version', prompt=True, callback=validate_not_empty_string, help="Version of kubernetes to use for cluster nodes.", cls=RequiredIf, required_if='cloud=gcp')
@click.option('--gcp-infra-bucket', prompt=True, callback=validate_not_empty_string, help="The bucket where terraform will store its state for GCP.", cls=RequiredIf, required_if='cloud=gcp')
def start_project(ctx, name, **kwargs):
""" Create an infrastructure project from scratch with the configured options """
try:
logging.basicConfig(level=kwargs.get('log_level'))
file_data = {}
if kwargs.get('config_file'):
file_data = parse_in_file(kwargs.get('config_file'))[0]
kwargs.update(file_data)
logging.debug(kwargs)
cloud = kwargs.get('cloud')
if cloud.lower() == 'aws':
project = AWSPentagonProject(name, kwargs)
elif cloud.lower() == 'gcp':
project = GCPPentagonProject(name, kwargs)
elif cloud.lower() == 'none':
project = PentagonProject(name, kwargs)
else:
raise PentagonException(
"Value passed for option --cloud must be one of 'aws', 'gcp', or 'none'")
logging.debug('Creating {} project {} with {}'.format(
cloud.upper(), name, kwargs))
project.start()
except Exception as e:
logging.error(e)
logging.debug(traceback.format_exc())
@click.command()
@click.pass_context
@click.argument('component_path')
@click.option('--data', '-D', multiple=True, help='Individual Key=Value pairs used by the component. There should be no spaces surrounding the `=`')
@click.option('--file', '-f', help='File to read Key=Value pair from (yaml or json are supported)')
@click.option('--out', '-o', default='./', help="Path to output module result, if any")
@click.argument('additional-args', nargs=-1, default=None)
def add(ctx, component_path, additional_args, **kwargs):
_run('add', component_path, additional_args, kwargs)
@click.command()
@click.pass_context
@click.argument('component_path')
@click.option('--data', '-D', multiple=True, help='Individual Key=Value pairs used by the component.')
@click.option('--file', '-f', help='File to read Key=Value pair from (yaml or json are supported).')
@click.option('--out', '-o', default='./', help="Path to output module result, if any.")
@click.argument('additional-args', nargs=-1, default=None)
def get(ctx, component_path, additional_args, **kwargs):
_run('get', component_path, additional_args, kwargs)
@cli.command()
@click.pass_context
@click.option("--dry-run/--no-dry-run", default=False, help="Test migration before applying.")
@click.option('--log-level', default="INFO", help="Log Level DEBUG,INFO,WARN,ERROR.")
@click.option('--branch', default="migration", help="Name of branch to create for migration. Default='migration'")
@click.option('--yes/--no', default=False, help="Confirm to run migration.")
def migrate(ctx, **kwargs):
""" Update Infrastructure Repository to the latest configuration """
logging.basicConfig(level=kwargs.get('log_level'))
migration.migrate(kwargs['branch'], kwargs['yes'])
def _run(action, component_path, additional_args, options):
logging.basicConfig(level=options.get('log_level'))
logging.debug("Importing module Pentagon {}".format(component_path))
logging.debug("with options: {}".format(options))
logging.debug("and additional arguments: {}".format(additional_args))
documents = [{}]
data = parse_data(options.get('data', {}))
try:
file = options.get('file', None)
if file is not None:
documents = parse_in_file(file)
except Exception as e:
logging.error("Error parsing data from file or -D arguments")
logging.error(e)
component_class = get_component_class(component_path)
try:
for doc in documents:
if callable(component_class):
data = merge_dict(doc, data, clobber=True)
data['prompt'] = options.get('prompt', True)
# Make data keys more flexible: allow keys containing '-'
# to be corrected to '_' in place
data_copy = data.copy()
for key, value in data.iteritems():
flex_key = key.replace('-', '_')
if flex_key != key:
data_copy[flex_key] = value
data = data_copy
getattr(component_class(data, additional_args),
action)(options.get('out'))
else:
logging.error(
"Error locating module or class: {}".format(component_path))
except Exception as e:
logging.error(e)
logging.debug(traceback.format_exc())
# Making names more terminal friendly
cli.add_command(start_project, "start-project")
cli.add_command(add, "add")
cli.add_command(get, "get")
def get_component_class(component_path):
""" Construct Class path from component input """
component_path_list = component_path.split(".")
possible_component_paths = []
if len(component_path_list) > 1:
component_name = ".".join(component_path.split(".")[0:-1])
component_class_name = component_path.split(".")[-1]
else:
component_name = component_path
component_class_name = component_path
# Compile list of possible class paths
possible_component_paths.append(
'{}.{}'.format(component_name, component_class_name))
possible_component_paths.append('{}.{}'.format(
component_name, component_class_name.title()))
possible_component_paths.append(
'pentagon.component.{}.{}'.format(component_name, component_class_name))
possible_component_paths.append('pentagon.component.{}.{}'.format(
component_name, component_class_name.title()))
possible_component_paths.append(
'pentagon_{}.{}'.format(component_name, component_class_name))
possible_component_paths.append('pentagon_{}.{}'.format(
component_name, component_class_name.title()))
# Find Class if it exists
for class_path in possible_component_paths:
logging.debug('Seeking {}'.format(class_path))
component_class = locate(class_path)
if component_class is not None:
logging.debug("Found {}".format(component_class))
return component_class
logging.debug('{} Not found'.format(class_path))
def parse_in_file(file):
""" Parse data structure from file into dictionary for component use """
with open(file, 'r') as data_file:
try:
data = json.load(data_file)
logging.debug("Data parsed from file {}: {}".format(file, data))
return data
except ValueError as json_error:
pass
data_file.seek(0)
try:
data = list(yaml.load_all(
data_file, Loader=yaml.loader.FullLoader))
logging.debug("Data parsed from file {}: {}".format(file, data))
return data
except yaml.YAMLError as yaml_error:
pass
logging.error("Unable to parse in file. {} {} ".format(
json_error, yaml_error))
def parse_data(data, d=None):
""" Function to parse the incoming -D options into a dict """
if d is None:
d = {}
for kv in data:
key = kv.split('=')[0]
try:
val = kv.split('=', 1)[1]
except IndexError:
val = True
d[key] = val
return d
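The `parse_data` helper above turns repeated `-D key=value` flags into a dict, treating a bare `-D flag` (no `=`) as boolean `True` and splitting only on the first `=` so values may themselves contain `=`. A minimal standalone sketch of that behavior (Python 3 syntax, independent of the CLI):

```python
def parse_data(data, d=None):
    """Convert an iterable of 'key=value' strings into a dict.

    A bare key with no '=' becomes True; only the first '=' splits,
    so values may themselves contain '=' characters.
    """
    if d is None:
        d = {}
    for kv in data:
        key = kv.split('=')[0]
        try:
            val = kv.split('=', 1)[1]
        except IndexError:
            val = True  # bare -D flag with no value
        d[key] = val
    return d

print(parse_data(['name=infra', 'verbose', 'expr=a=b']))
# {'name': 'infra', 'verbose': True, 'expr': 'a=b'}
```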
================================================
FILE: pentagon/component/__init__.py
================================================
import os
import glob
import shutil
import logging
import traceback
import sys
import re
import click
from pentagon.helpers import render_template
from pentagon.defaults import AWSPentagonDefaults as PentagonDefaults
class ComponentBase(object):
""" Base class for Pentagon Components. """
_required_parameters = []
# List of environment variables to use.
# If set, they should override other data sources.
# Lower Case here will find upper case environment variables.
# If a dictionary is passed, the key is the variable name used in context,
# and the value is the environment variable name.
_environment = []
_defaults = {}
def __init__(self, data, additional_args=None, **kwargs):
self._data = data
self._additional_args = additional_args
self._process_env_vars()
self._process_defaults()
missing_parameters = []
for item in self._required_parameters:
if item not in self._data.keys():
missing_parameters.append(item)
if missing_parameters:
logging.error("Missing required data parameters: {}".format(
", ".join(missing_parameters)))
logging.error("You can set parameters with '-Dparam_name=value'.")
sys.exit(1)
@property
def _destination_directory_name(self):
if self._destination != './':
return self._destination
return self._data.get('name', self.__class__.__name__.lower())
@property
def _files_directory(self):
return sys.modules[self.__module__].__path__[0] + "/files"
def _process_env_vars(self):
logging.debug('Fetching environment variables')
environ_data = {}
for item in self._environment:
if type(item) is dict:
context_var = item.keys()[0]
env_var = os.environ.get(item.values()[0])
else:
context_var = item.lower()
env_var = os.environ.get(item.upper())
environ_data[context_var] = env_var
self._merge_data(environ_data)
def _process_defaults(self):
""" Use _defaults from global pentagon defaults, then class and add them to missing values on the _data dict """
logging.debug('Processing Defaults')
self._merge_data(self._defaults)
try:
class_name = self.__class__.__name__.lower()
pentagon_defaults = getattr(PentagonDefaults, class_name)
logging.debug(
"Adding Pentagon Defaults Last {}".format(pentagon_defaults))
self._merge_data(pentagon_defaults)
except AttributeError:
logging.info("No top level defaults for Pentagon component {} ".format(
class_name.lower()))
def _render_directory_templates(self):
""" Loop and use render_template helper method on all templates in destination directory """
template_location = self._destination_directory_name
if os.path.isfile(template_location):
template_location = os.path.dirname(template_location)
logging.debug("{} is a file. Using the directory {} instead.".format(
self._destination_directory_name, template_location))
logging.debug("Rendering Templates in {}".format(template_location))
for folder, dirnames, files in os.walk(template_location):
for template in glob.glob(folder + "/*.jinja"):
logging.debug("Rendering {}".format(template))
template_file_name = template.split('/')[-1]
path = '/'.join(template.split('/')[0:-1])
target_file_name = re.sub(r'\.jinja$', '', template_file_name)
target = folder + "/" + target_file_name
render_template(template_file_name, path, target,
self._data, overwrite=self._overwrite)
def _remove_init_file(self):
""" delete init file, if it exists from template target directory """
for root, dirs, files in os.walk(self._destination_directory_name):
for name in files:
if "__init__.py" == name or "__init__.pyc" == name:
logging.debug('Removing: {}'.format(
os.path.join(root, name)))
os.remove(os.path.join(root, name))
def _merge_data(self, new_data, clobber=False):
""" accepts new_data (dict) and clobber (boolean). Merges dictionary with existing instance dictionary _data. If clobber is True, overwrites value. Defaults to false """
for key, value in new_data.items():
if self._data.get(key) is None or clobber:
logging.debug(
"Setting component data {}: {}".format(key, value))
self._data[key] = value
def add(self, destination, overwrite=False):
self._destination = destination
self._overwrite = overwrite
self._display_settings_to_user()
try:
# Add all files from the component templates to the destination directory
self._add_files()
# Remove any __init__ files in the destination that were copied from the component templates
self._remove_init_file()
# For all the jinja templates in the destination directory, render them
self._render_directory_templates()
logging.info("New component added. Source your environment before "
"proceeding or unexpected behavior may result.")
except Exception as e:
logging.error("Error occurred configuring component")
logging.error(e)
logging.debug(traceback.format_exc())
sys.exit(1)
def _display_settings_to_user(self):
logging.info("Pentagon will write to the following directory: "
"(set with '-o ./path')")
logging.info(" Path: \"{}\"".format(self._destination_directory_name))
logging.info("Displaying provided and default values for this component: "
"(e.g. '-Dparam_name=abcd')")
for key in sorted(self._data):
value = self._data[key]
using_defaults = False
if key in self._defaults.keys():
if self._data[key] == self._defaults[key]:
using_defaults = True
is_default = "(Default Value)" if using_defaults else ""
logging.info(" {0:40} = {1:20} {2}".format(
key,
str(value),
is_default,
))
if sys.stdin.isatty():
if click.confirm('Does this look ok to proceed?'):
return
else:
logging.info("Exiting because you did not accept the inputs.")
exit()
def _add_files(self, sub_path=None):
""" Copies files and templates from <component>/files """
if self._overwrite:
from distutils.dir_util import copy_tree
else:
from shutil import copytree as copy_tree
if sub_path is not None:
source = ('{}/{}').format(self._files_directory, sub_path)
else:
source = self._files_directory
logging.debug("Adding file: {} -> {}".format(source,
self._destination_directory_name))
if os.path.isfile(source):
shutil.copy(source, self._destination_directory_name)
elif os.path.isdir(source):
copy_tree(source, self._destination_directory_name)
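`ComponentBase._merge_data` above only fills in keys whose current value is `None` (or missing) unless `clobber=True`, which is how environment variables, class defaults, and global defaults layer without stomping user-supplied data. A standalone sketch of the same merge rule (a free function with a hypothetical name, not the class method itself):

```python
def merge_data(existing, new_data, clobber=False):
    """Merge new_data into existing in place.

    Mirrors ComponentBase._merge_data: a new value is applied only when
    the existing value is None or the key is missing, or when clobber=True.
    """
    for key, value in new_data.items():
        if existing.get(key) is None or clobber:
            existing[key] = value
    return existing

data = {'region': 'us-east-1', 'zone': None}
merge_data(data, {'region': 'us-west-2', 'zone': 'a', 'count': 3})
print(data)  # {'region': 'us-east-1', 'zone': 'a', 'count': 3}
```

Because earlier merges win when `clobber` is false, whichever source is merged first (e.g. explicit `-D` data) takes precedence over later defaults.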
================================================
FILE: pentagon/component/aws_vpc/__init__.py
================================================
import os
from pentagon.component import ComponentBase
from pentagon.defaults import AWSPentagonDefaults as PentagonDefaults
from pentagon.helpers import allege_aws_availability_zones
class AWSVpc(ComponentBase):
_required_parameters = ['aws_region']
def add(self, destination, overwrite):
for key, value in PentagonDefaults.vpc.iteritems():
if not self._data.get(key):
self._data[key] = value
if self._data.get('aws_availability_zones') is None:
self._data['aws_availability_zones'] = allege_aws_availability_zones(self._data['aws_region'], self._data['aws_availability_zone_count'])
return super(AWSVpc, self).add(destination, overwrite=overwrite)
================================================
FILE: pentagon/component/aws_vpc/files/aws_vpc.auto.tfvars.jinja
================================================
aws_vpc_name = "{{ vpc_name }}"
vpc_cidr_base = "{{ vpc_cidr_base }}"
aws_azs = "{{ aws_availability_zones }}"
az_count = "{{ aws_availability_zone_count }}"
aws_inventory_path = "$INFRASTRUCTURE_REPO/plugins/inventory"
aws_region = "{{ aws_region }}"
admin_subnet_parent_cidr = ".0.0/22"
admin_subnet_cidrs = {
zone0 = ".0.0/24"
zone1 = ".1.0/24"
zone2 = ".2.0/24"
zone3 = ".3.0/24"
}
public_subnet_parent_cidr = ".4.0/22"
public_subnet_cidrs = {
zone0 = ".4.0/24"
zone1 = ".5.0/24"
zone2 = ".6.0/24"
zone3 = ".7.0/24"
}
private_prod_subnet_parent_cidr = ".8.0/22"
private_prod_subnet_cidrs = {
zone0 = ".8.0/24"
zone1 = ".9.0/24"
zone2 = ".10.0/24"
zone3 = ".11.0/24"
}
private_working_subnet_parent_cidr = ".12.0/22"
private_working_subnet_cidrs = {
zone0 = ".12.0/24"
zone1 = ".13.0/24"
zone2 = ".14.0/24"
zone3 = ".15.0/24"
}
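The tfvars above supply only CIDR *suffixes*; the upstream terraform-vpc module presumably joins each suffix with `vpc_cidr_base` (the first two octets of the VPC's /16). A sketch of how the resulting admin subnets line up inside their /22 parent, assuming a hypothetical base value of `10.20`:

```python
import ipaddress

vpc_cidr_base = '10.20'  # example value for the vpc_cidr_base tfvar (assumption)
admin_subnet_cidrs = {'zone0': '.0.0/24', 'zone1': '.1.0/24',
                      'zone2': '.2.0/24', 'zone3': '.3.0/24'}

# Joining base + suffix yields full, non-overlapping /24 subnet CIDRs,
# all contained in the /22 parent ('.0.0/22' -> 10.20.0.0/22).
parent = ipaddress.ip_network(vpc_cidr_base + '.0.0/22')
subnets = [ipaddress.ip_network(vpc_cidr_base + suffix)
           for suffix in admin_subnet_cidrs.values()]
assert all(s.subnet_of(parent) for s in subnets)
print([str(s) for s in subnets])
# ['10.20.0.0/24', '10.20.1.0/24', '10.20.2.0/24', '10.20.3.0/24']
```

The same pattern repeats for the public, private-prod, and private-working blocks, each occupying the next /22 of the VPC range.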
================================================
FILE: pentagon/component/aws_vpc/files/aws_vpc.tf.jinja
================================================
module "vpc" {
source = "git::https://github.com/reactiveops/terraform-vpc.git?ref=v3.0.0"
aws_vpc_name = "${var.aws_vpc_name}"
aws_region = "${var.aws_region}"
az_count = "${var.az_count}"
aws_azs = "${var.aws_azs}"
vpc_cidr_base = "${var.vpc_cidr_base}"
admin_subnet_parent_cidr = "${var.admin_subnet_parent_cidr}"
admin_subnet_cidrs = "${var.admin_subnet_cidrs}"
public_subnet_parent_cidr = "${var.public_subnet_parent_cidr}"
public_subnet_cidrs = "${var.public_subnet_cidrs}"
private_prod_subnet_parent_cidr = "${var.private_prod_subnet_parent_cidr}"
private_prod_subnet_cidrs = "${var.private_prod_subnet_cidrs}"
private_working_subnet_parent_cidr = "${var.private_working_subnet_parent_cidr}"
private_working_subnet_cidrs = "${var.private_working_subnet_cidrs}"
}
// Output VPC values to allow this statefile to be used as a datasource
output "aws_vpc_subnet_admin_ids" {
value = "${module.vpc.aws_subnet_admin_ids}"
}
output "aws_vpc_subnet_private_working_ids" {
value = "${module.vpc.aws_subnet_private_working_ids}"
}
output "aws_vpc_subnet_private_prod_ids" {
value = "${module.vpc.aws_subnet_private_prod_ids}"
}
output "aws_vpc_subnet_public_ids" {
value = "${module.vpc.aws_subnet_public_ids}"
}
output "aws_vpc_id" {
value = "${module.vpc.aws_vpc_id}"
}
output "aws_vpc_cidr" {
value = "${module.vpc.aws_vpc_cidr}"
}
output "aws_nat_gateway_ids" {
value = "${module.vpc.aws_nat_gateway_ids}"
}
================================================
FILE: pentagon/component/aws_vpc/files/aws_vpc_variables.tf
================================================
variable "aws_region" {}
variable "aws_azs" {}
variable "aws_vpc_name" {}
variable "az_count" {}
variable "vpc_cidr_base" {}
variable "admin_subnet_parent_cidr" {}
variable "admin_subnet_cidrs" {
default = {}
}
variable "public_subnet_parent_cidr" {}
variable "public_subnet_cidrs" {
default = {}
}
variable "private_prod_subnet_parent_cidr" {}
variable "private_prod_subnet_cidrs" {
default = {}
}
variable "private_working_subnet_parent_cidr" {}
variable "private_working_subnet_cidrs" {
default = {}
}
================================================
FILE: pentagon/component/core/__init__.py
================================================
from pentagon.component import ComponentBase
class Core(ComponentBase):
pass
================================================
FILE: pentagon/component/core/files/.gitignore
================================================
.DS_Store
.terraform
*.pyc
*.pem
*.pub
*secret*.yml
roles
helm
================================================
FILE: pentagon/component/core/files/README.md
================================================
# Documentation
## Getting Started
### System Requirements
This repository relies on system tools and Python libraries being installed on your system.
System Tools:
* [Terraform](https://www.terraform.io)
* [kops](https://github.com/kubernetes/kops)
* [kubectl](https://kubernetes.io/docs/user-guide/kubectl-overview/)
The kubectl version should match the cluster version.
Python Libraries:
Libraries required can be installed with `pip install -r requirements.txt`. These can be installed into a [virtualenv](https://virtualenv.pypa.io/en/stable/) to isolate the installation from your system.
### Shell environment
Shell variables are used extensively for configuration of the above tools. Where possible these are checked into the repository. Secrets/credentials must be obtained separately.
Some shell variables are stored as YAML and require [shyaml](https://github.com/0k/shyaml) installed as a Python requirement above.
First create the `config/private/secrets.yml` file and supply values:
```
AWS_ACCESS_KEY:
AWS_ACCESS_KEY_ID:
TF_VAR_aws_access_key:
AWS_SECRET_KEY:
AWS_SECRET_ACCESS_KEY:
TF_VAR_aws_secret_key:
```
Set `INFRASTRUCTURE_REPO` to the location of this project then source the environment variable setup script:
```
INFRASTRUCTURE_REPO="$(pwd)"
source config/local/env-vars.sh
source default/account/vars.sh
```
Per-AWS-account variables live in `default/account/vars.sh`; they are used when creating the Kubernetes cluster but are not needed for most other operations. When creating a cluster, see the `default/clusters/*/cluster-config/kops.sh` file.
Per-user configuration needs to be generated. These files cannot directly use the `$INFRASTRUCTURE_REPO` environment variable for various reasons. The `config/local/local-config-init` script will generate the needed files from templates in that same folder.
```
./config/local/local-config-init
```
### VPC
The VPC Terraform code is in `default/vpc`.
### Kubernetes
If you have an existing Kubernetes Config you can place the file as `config/private/kube_config`
If you are creating a cluster then the `default/clusters/*/cluster-config/kops.sh` file has steps to do so.
================================================
FILE: pentagon/component/core/files/ansible-requirements.yml
================================================
---
##
# Dependencies not hosted on galaxy.ansible.com need to precede the roles that depend on them
##
- src: "git+https://github.com/reactiveops/ansible-get-vpc-facts.git"
name: reactiveops.get-vpc-facts
version: 1.1.3
##
# End dependencies not hosted on galaxy.ansible.com
##
- src: "git+https://github.com/reactiveops/ansible-vpn-stack.git"
name: reactiveops.vpn-stack
version: 1.2.0
- src: "https://github.com/Stouts/Stouts.users.git"
version: 1.2.0
name: Stouts.users-master
- src: "https://github.com/reactiveops/Stouts.openvpn.git"
version: 3.0.0
name: Stouts.openvpn-master
- src: "git+https://git@github.com/reactiveops/ansible-iam-role.git"
version: 1.0.0
name: reactiveops.iam-role
================================================
FILE: pentagon/component/core/files/inventory/__init__.py
================================================
================================================
FILE: pentagon/component/core/files/plugins/filter_plugins/flatten.py
================================================
# This function will take an irregular list composed of lists
# and flatten it
from compiler.ast import flatten
class FilterModule (object):
def filters(self):
return {
"flatten": flatten
}
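The filter above imports `flatten` from `compiler.ast`, a Python 2-only module (the `compiler` package was removed in Python 3). If this filter plugin ever needed to run under Python 3, an equivalent recursive flatten might look like the following sketch (an assumption, not part of the repo):

```python
def flatten(items):
    """Recursively flatten an irregular nested list into a flat list.

    Descends into lists and tuples; all other values are appended as-is.
    """
    flat = []
    for item in items:
        if isinstance(item, (list, tuple)):
            flat.extend(flatten(item))  # recurse into nested sequences
        else:
            flat.append(item)
    return flat

print(flatten([1, [2, [3, 4]], (5,), 6]))  # [1, 2, 3, 4, 5, 6]
```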
================================================
FILE: pentagon/component/core/files/plugins/inventory/base
================================================
# https://github.com/ansible/ansible-modules-core/issues/2601#issuecomment-189503881
[all:vars]
ansible_python_interpreter = /usr/bin/env python
[localhost]
127.0.0.1
================================================
FILE: pentagon/component/core/files/plugins/inventory/ec2.ini
================================================
# Ansible EC2 external inventory script settings
#
[ec2]
# to talk to a private eucalyptus instance uncomment these lines
# and edit eucalyptus_host to be the host name of your cloud controller
#eucalyptus = True
#eucalyptus_host = clc.cloud.domain.org
# AWS regions to make calls to. Set this to 'all' to make request to all regions
# in AWS and merge the results together. Alternatively, set this to a comma
# separated list of regions. E.g. 'us-east-1,us-west-1,us-west-2'
regions = all
regions_exclude = us-gov-west-1,cn-north-1
# When generating inventory, Ansible needs to know how to address a server.
# Each EC2 instance has a lot of variables associated with it. Here is the list:
# http://docs.pythonboto.org/en/latest/ref/ec2.html#module-boto.ec2.instance
# Below are 2 variables that are used as the address of a server:
# - destination_variable
# - vpc_destination_variable
# This is the normal destination variable to use. If you are running Ansible
# from outside EC2, then 'public_dns_name' makes the most sense. If you are
# running Ansible from within EC2, then perhaps you want to use the internal
# address, and should set this to 'private_dns_name'. The key of an EC2 tag
# may optionally be used; however the boto instance variables hold precedence
# in the event of a collision.
#destination_variable = public_dns_name
destination_variable = private_dns_name
# For server inside a VPC, using DNS names may not make sense. When an instance
# has 'subnet_id' set, this variable is used. If the subnet is public, setting
# this to 'ip_address' will return the public IP address. For instances in a
# private subnet, this should be set to 'private_ip_address', and Ansible must
# be run from within EC2. The key of an EC2 tag may optionally be used; however
# the boto instance variables hold precedence in the event of a collision.
# WARNING: - instances that are in the private vpc, _without_ public ip address
# will not be listed in the inventory until You set:
#vpc_destination_variable = ip_address
vpc_destination_variable = private_ip_address
# To tag instances on EC2 with the resource records that point to them from
# Route53, uncomment and set 'route53' to True.
route53 = False
# To exclude RDS instances from the inventory, uncomment and set to False.
rds = False
# To exclude ElastiCache instances from the inventory, uncomment and set to False.
elasticache = False
# Additionally, you can specify the list of zones to exclude looking up in
# 'route53_excluded_zones' as a comma-separated list.
# route53_excluded_zones = samplezone1.com, samplezone2.com
# By default, only EC2 instances in the 'running' state are returned. Set
# 'all_instances' to True to return all instances regardless of state.
all_instances = False
# By default, only EC2 instances in the 'running' state are returned. Specify
# EC2 instance states to return as a comma-separated list. This
# option is overridden when 'all_instances' is True.
# instance_states = pending, running, shutting-down, terminated, stopping, stopped
# By default, only RDS instances in the 'available' state are returned. Set
# 'all_rds_instances' to True return all RDS instances regardless of state.
all_rds_instances = False
# By default, only ElastiCache clusters and nodes in the 'available' state
# are returned. Set 'all_elasticache_clusters' and/or 'all_elastic_nodes'
# to True return all ElastiCache clusters and nodes, regardless of state.
#
# Note that all_elasticache_nodes only applies to listed clusters. That means
# if you set all_elastic_clusters to false, no node will be returned from
# unavailable clusters, regardless of the state and to what you set for
# all_elasticache_nodes.
all_elasticache_replication_groups = False
all_elasticache_clusters = False
all_elasticache_nodes = False
# API calls to EC2 are slow. For this reason, we cache the results of an API
# call. Set this to the path you want cache files to be written to. Two files
# will be written to this directory:
# - ansible-ec2.cache
# - ansible-ec2.index
cache_path = ~/.ansible/tmp
# The number of seconds a cache file is considered valid. After this many
# seconds, a new API call will be made, and the cache file will be updated.
# To disable the cache, set this value to 0
cache_max_age = 300
# Organize groups into a nested/hierarchy instead of a flat namespace.
nested_groups = False
# The EC2 inventory output can become very large. To manage its size,
# configure which groups should be created.
group_by_instance_id = True
group_by_region = True
group_by_availability_zone = True
group_by_ami_id = True
group_by_instance_type = True
group_by_key_pair = True
group_by_vpc_id = True
group_by_security_group = True
group_by_tag_keys = True
group_by_tag_none = True
group_by_route53_names = True
group_by_rds_engine = True
group_by_rds_parameter_group = True
group_by_elasticache_engine = True
group_by_elasticache_cluster = True
group_by_elasticache_parameter_group = True
group_by_elasticache_replication_group = True
# If you only want to include hosts that match a certain regular expression
# pattern_include = staging-*
# If you want to exclude any hosts that match a certain regular expression
# pattern_exclude = staging-*
# Instance filters can be used to control which instances are retrieved for
# inventory. For the full list of possible filters, please read the EC2 API
# docs: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html#query-DescribeInstances-filters
# Filters are key/value pairs separated by '=', to list multiple filters use
# a list separated by commas. See examples below.
# Retrieve only instances with (key=value) env=staging tag
# instance_filters = tag:env=staging
# Retrieve only instances with role=webservers OR role=dbservers tag
# instance_filters = tag:role=webservers,tag:role=dbservers
# Retrieve only t1.micro instances OR instances with tag env=staging
# instance_filters = instance-type=t1.micro,tag:env=staging
# You can use wildcards in filter values also. Below will list instances which
# tag Name value matches webservers1*
# (ex. webservers15, webservers1a, webservers123 etc)
# instance_filters = tag:Name=webservers1*
================================================
FILE: pentagon/component/core/files/plugins/inventory/ec2.py
================================================
#!/usr/bin/env python
'''
EC2 external inventory script
=================================
Generates inventory that Ansible can understand by making API request to
AWS EC2 using the Boto library.
NOTE: This script assumes Ansible is being executed where the environment
variables needed for Boto have already been set:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
This script also assumes there is an ec2.ini file alongside it. To specify a
different path to ec2.ini, define the EC2_INI_PATH environment variable:
export EC2_INI_PATH=/path/to/my_ec2.ini
If you're using eucalyptus you need to set the above variables and
you need to define:
export EC2_URL=http://hostname_of_your_cc:port/services/Eucalyptus
For more details, see: http://docs.pythonboto.org/en/latest/boto_config_tut.html
When run against a specific host, this script returns the following variables:
- ec2_ami_launch_index
- ec2_architecture
- ec2_association
- ec2_attachTime
- ec2_attachment
- ec2_attachmentId
- ec2_client_token
- ec2_deleteOnTermination
- ec2_description
- ec2_deviceIndex
- ec2_dns_name
- ec2_eventsSet
- ec2_group_name
- ec2_hypervisor
- ec2_id
- ec2_image_id
- ec2_instanceState
- ec2_instance_type
- ec2_ipOwnerId
- ec2_ip_address
- ec2_item
- ec2_kernel
- ec2_key_name
- ec2_launch_time
- ec2_monitored
- ec2_monitoring
- ec2_networkInterfaceId
- ec2_ownerId
- ec2_persistent
- ec2_placement
- ec2_platform
- ec2_previous_state
- ec2_private_dns_name
- ec2_private_ip_address
- ec2_publicIp
- ec2_public_dns_name
- ec2_ramdisk
- ec2_reason
- ec2_region
- ec2_requester_id
- ec2_root_device_name
- ec2_root_device_type
- ec2_security_group_ids
- ec2_security_group_names
- ec2_shutdown_state
- ec2_sourceDestCheck
- ec2_spot_instance_request_id
- ec2_state
- ec2_state_code
- ec2_state_reason
- ec2_status
- ec2_subnet_id
- ec2_tenancy
- ec2_virtualization_type
- ec2_vpc_id
These variables are pulled out of a boto.ec2.instance object. Variable
spellings are inconsistent (camelCase and underscores) because the script
simply loops over every attribute the object exposes. When both spellings
exist, prefer the one with underscores.
In addition, if an instance has AWS Tags associated with it, each tag is a new
variable named:
- ec2_tag_[Key] = [Value]
Security groups are comma-separated in 'ec2_security_group_ids' and
'ec2_security_group_names'.
'''
# (c) 2012, Peter Sankauskas
#
# This file is part of Ansible,
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
######################################################################
import sys
import os
import argparse
import re
from time import time
import boto
from boto import ec2
from boto import rds
from boto import elasticache
from boto import route53
import six
from six.moves import configparser
from collections import defaultdict
try:
import json
except ImportError:
import simplejson as json
class Ec2Inventory(object):
def _empty_inventory(self):
return {"_meta" : {"hostvars" : {}}}
def __init__(self):
''' Main execution path '''
# Inventory grouped by instance IDs, tags, security groups, regions,
# and availability zones
self.inventory = self._empty_inventory()
# Index of hostname (address) to instance ID
self.index = {}
# Read settings and parse CLI arguments
self.read_settings()
self.parse_cli_args()
# Cache
if self.args.refresh_cache:
self.do_api_calls_update_cache()
elif not self.is_cache_valid():
self.do_api_calls_update_cache()
# Data to print
if self.args.host:
data_to_print = self.get_host_info()
elif self.args.list:
# Display list of instances for inventory
if self.inventory == self._empty_inventory():
data_to_print = self.get_inventory_from_cache()
else:
data_to_print = self.json_format_dict(self.inventory, True)
print(data_to_print)
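# The JSON printed for --list is shaped roughly like this (illustrative
# values; the group names present depend on the group_by_* options in ec2.ini):
#   {
#       "_meta": {"hostvars": {"54.1.2.3": {"ec2_id": "i-12345678", "...": "..."}}},
#       "us-east-1": ["54.1.2.3"],
#       "tag_Name_web": ["54.1.2.3"],
#       "ec2": ["54.1.2.3"]
#   }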
def is_cache_valid(self):
''' Determine whether the cache files have expired or are still valid '''
if os.path.isfile(self.cache_path_cache):
mod_time = os.path.getmtime(self.cache_path_cache)
current_time = time()
if (mod_time + self.cache_max_age) > current_time:
if os.path.isfile(self.cache_path_index):
return True
return False
def read_settings(self):
''' Reads the settings from the ec2.ini file '''
if six.PY2:
config = configparser.SafeConfigParser()
else:
config = configparser.ConfigParser()
ec2_default_ini_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'ec2.ini')
ec2_ini_path = os.path.expanduser(os.path.expandvars(os.environ.get('EC2_INI_PATH', ec2_default_ini_path)))
config.read(ec2_ini_path)
# is eucalyptus?
self.eucalyptus_host = None
self.eucalyptus = False
if config.has_option('ec2', 'eucalyptus'):
self.eucalyptus = config.getboolean('ec2', 'eucalyptus')
if self.eucalyptus and config.has_option('ec2', 'eucalyptus_host'):
self.eucalyptus_host = config.get('ec2', 'eucalyptus_host')
# Regions
self.regions = []
configRegions = config.get('ec2', 'regions')
configRegions_exclude = config.get('ec2', 'regions_exclude')
if (configRegions == 'all'):
if self.eucalyptus_host:
self.regions.append(boto.connect_euca(host=self.eucalyptus_host).region.name)
else:
for regionInfo in ec2.regions():
if regionInfo.name not in configRegions_exclude:
self.regions.append(regionInfo.name)
else:
self.regions = configRegions.split(",")
# Destination addresses
self.destination_variable = config.get('ec2', 'destination_variable')
self.vpc_destination_variable = config.get('ec2', 'vpc_destination_variable')
# Route53
self.route53_enabled = config.getboolean('ec2', 'route53')
self.route53_excluded_zones = []
if config.has_option('ec2', 'route53_excluded_zones'):
self.route53_excluded_zones.extend(
config.get('ec2', 'route53_excluded_zones').split(','))
# Include RDS instances?
self.rds_enabled = True
if config.has_option('ec2', 'rds'):
self.rds_enabled = config.getboolean('ec2', 'rds')
# Include ElastiCache instances?
self.elasticache_enabled = True
if config.has_option('ec2', 'elasticache'):
self.elasticache_enabled = config.getboolean('ec2', 'elasticache')
# Return all EC2 instances?
if config.has_option('ec2', 'all_instances'):
self.all_instances = config.getboolean('ec2', 'all_instances')
else:
self.all_instances = False
# Instance states to be gathered in inventory. Default is 'running'.
# Setting 'all_instances' to 'yes' overrides this option.
ec2_valid_instance_states = [
'pending',
'running',
'shutting-down',
'terminated',
'stopping',
'stopped'
]
self.ec2_instance_states = []
if self.all_instances:
self.ec2_instance_states = ec2_valid_instance_states
elif config.has_option('ec2', 'instance_states'):
for instance_state in config.get('ec2', 'instance_states').split(','):
instance_state = instance_state.strip()
if instance_state not in ec2_valid_instance_states:
continue
self.ec2_instance_states.append(instance_state)
else:
self.ec2_instance_states = ['running']
# Return all RDS instances? (if RDS is enabled)
if config.has_option('ec2', 'all_rds_instances') and self.rds_enabled:
self.all_rds_instances = config.getboolean('ec2', 'all_rds_instances')
else:
self.all_rds_instances = False
# Return all ElastiCache replication groups? (if ElastiCache is enabled)
if config.has_option('ec2', 'all_elasticache_replication_groups') and self.elasticache_enabled:
self.all_elasticache_replication_groups = config.getboolean('ec2', 'all_elasticache_replication_groups')
else:
self.all_elasticache_replication_groups = False
# Return all ElastiCache clusters? (if ElastiCache is enabled)
if config.has_option('ec2', 'all_elasticache_clusters') and self.elasticache_enabled:
self.all_elasticache_clusters = config.getboolean('ec2', 'all_elasticache_clusters')
else:
self.all_elasticache_clusters = False
# Return all ElastiCache nodes? (if ElastiCache is enabled)
if config.has_option('ec2', 'all_elasticache_nodes') and self.elasticache_enabled:
self.all_elasticache_nodes = config.getboolean('ec2', 'all_elasticache_nodes')
else:
self.all_elasticache_nodes = False
# Cache related
cache_dir = os.path.expanduser(config.get('ec2', 'cache_path'))
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
self.cache_path_cache = cache_dir + "/ansible-ec2.cache"
self.cache_path_index = cache_dir + "/ansible-ec2.index"
self.cache_max_age = config.getint('ec2', 'cache_max_age')
# Configure nested groups instead of flat namespace.
if config.has_option('ec2', 'nested_groups'):
self.nested_groups = config.getboolean('ec2', 'nested_groups')
else:
self.nested_groups = False
# Configure which groups should be created.
group_by_options = [
'group_by_instance_id',
'group_by_region',
'group_by_availability_zone',
'group_by_ami_id',
'group_by_instance_type',
'group_by_key_pair',
'group_by_vpc_id',
'group_by_security_group',
'group_by_tag_keys',
'group_by_tag_none',
'group_by_route53_names',
'group_by_rds_engine',
'group_by_rds_parameter_group',
'group_by_elasticache_engine',
'group_by_elasticache_cluster',
'group_by_elasticache_parameter_group',
'group_by_elasticache_replication_group',
]
for option in group_by_options:
if config.has_option('ec2', option):
setattr(self, option, config.getboolean('ec2', option))
else:
setattr(self, option, True)
# Do we need to just include hosts that match a pattern?
try:
pattern_include = config.get('ec2', 'pattern_include')
if pattern_include and len(pattern_include) > 0:
self.pattern_include = re.compile(pattern_include)
else:
self.pattern_include = None
except configparser.NoOptionError as e:
self.pattern_include = None
# Do we need to exclude hosts that match a pattern?
try:
pattern_exclude = config.get('ec2', 'pattern_exclude')
if pattern_exclude and len(pattern_exclude) > 0:
self.pattern_exclude = re.compile(pattern_exclude)
else:
self.pattern_exclude = None
except configparser.NoOptionError as e:
self.pattern_exclude = None
# Instance filters (see boto and EC2 API docs). Ignore invalid filters.
self.ec2_instance_filters = defaultdict(list)
if config.has_option('ec2', 'instance_filters'):
for instance_filter in config.get('ec2', 'instance_filters').split(','):
instance_filter = instance_filter.strip()
if not instance_filter or '=' not in instance_filter:
continue
filter_key, filter_value = [x.strip() for x in instance_filter.split('=', 1)]
if not filter_key:
continue
self.ec2_instance_filters[filter_key].append(filter_value)
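# For reference, a minimal ec2.ini consumed by read_settings() could look
# like this (illustrative values; the bundled ec2.ini documents every option):
#   [ec2]
#   regions = us-east-1,us-west-2
#   regions_exclude = us-gov-west-1,cn-north-1
#   destination_variable = public_dns_name
#   vpc_destination_variable = ip_address
#   route53 = False
#   cache_path = ~/.ansible/tmp
#   cache_max_age = 300
#   instance_filters = tag:env=staging,instance-type=t2.micro
# The instance_filters line above is parsed by the loop just before this
# comment into {'tag:env': ['staging'], 'instance-type': ['t2.micro']}.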
def parse_cli_args(self):
''' Command line argument processing '''
parser = argparse.ArgumentParser(description='Produce an Ansible Inventory file based on EC2')
parser.add_argument('--list', action='store_true', default=True,
help='List instances (default: True)')
parser.add_argument('--host', action='store',
help='Get all the variables about a specific instance')
parser.add_argument('--refresh-cache', action='store_true', default=False,
help='Force refresh of cache by making API requests to EC2 (default: False - use cache files)')
self.args = parser.parse_args()
def do_api_calls_update_cache(self):
''' Do API calls to each region, and save data in cache files '''
if self.route53_enabled:
self.get_route53_records()
for region in self.regions:
self.get_instances_by_region(region)
if self.rds_enabled:
self.get_rds_instances_by_region(region)
if self.elasticache_enabled:
self.get_elasticache_clusters_by_region(region)
self.get_elasticache_replication_groups_by_region(region)
self.write_to_cache(self.inventory, self.cache_path_cache)
self.write_to_cache(self.index, self.cache_path_index)
def connect(self, region):
''' create connection to api server'''
if self.eucalyptus:
conn = boto.connect_euca(host=self.eucalyptus_host)
conn.APIVersion = '2010-08-31'
else:
conn = ec2.connect_to_region(region)
# connect_to_region will fail "silently" by returning None if the region name is wrong or not supported
if conn is None:
self.fail_with_error("region name: %s likely not supported, or AWS is down. connection to region failed." % region)
return conn
def get_instances_by_region(self, region):
''' Makes an AWS EC2 API call to get the list of instances in a
particular region '''
try:
conn = self.connect(region)
reservations = []
if self.ec2_instance_filters:
for filter_key, filter_values in self.ec2_instance_filters.items():
reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values }))
else:
reservations = conn.get_all_instances()
for reservation in reservations:
for instance in reservation.instances:
self.add_instance(instance, region)
except boto.exception.BotoServerError as e:
if e.error_code == 'AuthFailure':
error = self.get_auth_error_message()
else:
backend = 'Eucalyptus' if self.eucalyptus else 'AWS'
error = "Error connecting to %s backend.\n%s" % (backend, e.message)
self.fail_with_error(error, 'getting EC2 instances')
def get_rds_instances_by_region(self, region):
''' Makes an AWS API call to get the list of RDS instances in a
particular region '''
try:
conn = rds.connect_to_region(region)
if conn:
instances = conn.get_all_dbinstances()
for instance in instances:
self.add_rds_instance(instance, region)
except boto.exception.BotoServerError as e:
error = e.reason
if e.error_code == 'AuthFailure':
error = self.get_auth_error_message()
if not e.reason == "Forbidden":
error = "Looks like AWS RDS is down:\n%s" % e.message
self.fail_with_error(error, 'getting RDS instances')
def get_elasticache_clusters_by_region(self, region):
''' Makes an AWS API call to get the list of ElastiCache clusters (with
nodes' info) in a particular region.'''
# The ElastiCache boto module doesn't provide a get_all_instances method,
# so we call describe_cache_clusters directly (the shorthand method would
# call it anyway...)
try:
conn = elasticache.connect_to_region(region)
if conn:
# show_cache_node_info = True
# because we also want nodes' information
response = conn.describe_cache_clusters(None, None, None, True)
except boto.exception.BotoServerError as e:
error = e.reason
if e.error_code == 'AuthFailure':
error = self.get_auth_error_message()
if not e.reason == "Forbidden":
error = "Looks like AWS ElastiCache is down:\n%s" % e.message
self.fail_with_error(error, 'getting ElastiCache clusters')
try:
# Boto also doesn't provide wrapper classes for CacheClusters or
# CacheNodes. Because of that we can't make use of the get_list
# method in the AWSQueryConnection. Let's do the work manually
clusters = response['DescribeCacheClustersResponse']['DescribeCacheClustersResult']['CacheClusters']
except KeyError as e:
error = "ElastiCache query to AWS failed (unexpected format)."
self.fail_with_error(error, 'getting ElastiCache clusters')
for cluster in clusters:
self.add_elasticache_cluster(cluster, region)
def get_elasticache_replication_groups_by_region(self, region):
''' Makes an AWS API call to get the list of ElastiCache replication
groups in a particular region.'''
# The ElastiCache boto module doesn't provide a get_all_instances method,
# so we call describe_replication_groups directly (the shorthand method
# would call it anyway...)
try:
conn = elasticache.connect_to_region(region)
if conn:
response = conn.describe_replication_groups()
except boto.exception.BotoServerError as e:
error = e.reason
if e.error_code == 'AuthFailure':
error = self.get_auth_error_message()
if not e.reason == "Forbidden":
error = "Looks like AWS ElastiCache [Replication Groups] is down:\n%s" % e.message
self.fail_with_error(error, 'getting ElastiCache replication groups')
try:
# Boto also doesn't provide wrapper classes for ReplicationGroups.
# Because of that we can't make use of the get_list method in the
# AWSQueryConnection. Let's do the work manually
replication_groups = response['DescribeReplicationGroupsResponse']['DescribeReplicationGroupsResult']['ReplicationGroups']
except KeyError as e:
error = "ElastiCache [Replication Groups] query to AWS failed (unexpected format)."
self.fail_with_error(error, 'getting ElastiCache replication groups')
for replication_group in replication_groups:
self.add_elasticache_replication_group(replication_group, region)
def get_auth_error_message(self):
''' create an informative error message if there is an issue authenticating'''
errors = ["Authentication error retrieving ec2 inventory."]
if None in [os.environ.get('AWS_ACCESS_KEY_ID'), os.environ.get('AWS_SECRET_ACCESS_KEY')]:
errors.append(' - No AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY environment vars found')
else:
errors.append(' - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment vars found but may not be correct')
boto_paths = ['/etc/boto.cfg', '~/.boto', '~/.aws/credentials']
boto_config_found = list(p for p in boto_paths if os.path.isfile(os.path.expanduser(p)))
if len(boto_config_found) > 0:
errors.append(" - Boto configs found at '%s', but the credentials contained may not be correct" % ', '.join(boto_config_found))
else:
errors.append(" - No Boto config found at any expected location '%s'" % ', '.join(boto_paths))
return '\n'.join(errors)
def fail_with_error(self, err_msg, err_operation=None):
'''log an error to std err for ansible-playbook to consume and exit'''
if err_operation:
err_msg = 'ERROR: "{err_msg}", while: {err_operation}'.format(
err_msg=err_msg, err_operation=err_operation)
sys.stderr.write(err_msg)
sys.exit(1)
def get_instance(self, region, instance_id):
conn = self.connect(region)
reservations = conn.get_all_instances([instance_id])
for reservation in reservations:
for instance in reservation.instances:
return instance
def add_instance(self, instance, region):
''' Adds an instance to the inventory and index, as long as it is
addressable '''
# Only return instances with desired instance states
if instance.state not in self.ec2_instance_states:
return
# Select the best destination address
if instance.subnet_id:
dest = getattr(instance, self.vpc_destination_variable, None)
if dest is None:
dest = getattr(instance, 'tags').get(self.vpc_destination_variable, None)
else:
dest = getattr(instance, self.destination_variable, None)
if dest is None:
dest = getattr(instance, 'tags').get(self.destination_variable, None)
if not dest:
# Skip instances we cannot address (e.g. private VPC subnet)
return
# if we only want to include hosts that match a pattern, skip those that don't
if self.pattern_include and not self.pattern_include.match(dest):
return
# if we need to exclude hosts that match a pattern, skip those
if self.pattern_exclude and self.pattern_exclude.match(dest):
return
# Add to index
self.index[dest] = [region, instance.id]
# Inventory: Group by instance ID (always a group of 1)
if self.group_by_instance_id:
self.inventory[instance.id] = [dest]
if self.nested_groups:
self.push_group(self.inventory, 'instances', instance.id)
# Inventory: Group by region
if self.group_by_region:
self.push(self.inventory, region, dest)
if self.nested_groups:
self.push_group(self.inventory, 'regions', region)
# Inventory: Group by availability zone
if self.group_by_availability_zone:
self.push(self.inventory, instance.placement, dest)
if self.nested_groups:
if self.group_by_region:
self.push_group(self.inventory, region, instance.placement)
self.push_group(self.inventory, 'zones', instance.placement)
# Inventory: Group by Amazon Machine Image (AMI) ID
if self.group_by_ami_id:
ami_id = self.to_safe(instance.image_id)
self.push(self.inventory, ami_id, dest)
if self.nested_groups:
self.push_group(self.inventory, 'images', ami_id)
# Inventory: Group by instance type
if self.group_by_instance_type:
type_name = self.to_safe('type_' + instance.instance_type)
self.push(self.inventory, type_name, dest)
if self.nested_groups:
self.push_group(self.inventory, 'types', type_name)
# Inventory: Group by key pair
if self.group_by_key_pair and instance.key_name:
key_name = self.to_safe('key_' + instance.key_name)
self.push(self.inventory, key_name, dest)
if self.nested_groups:
self.push_group(self.inventory, 'keys', key_name)
# Inventory: Group by VPC
if self.group_by_vpc_id and instance.vpc_id:
vpc_id_name = self.to_safe('vpc_id_' + instance.vpc_id)
self.push(self.inventory, vpc_id_name, dest)
if self.nested_groups:
self.push_group(self.inventory, 'vpcs', vpc_id_name)
# Inventory: Group by security group
if self.group_by_security_group:
try:
for group in instance.groups:
key = self.to_safe("security_group_" + group.name)
self.push(self.inventory, key, dest)
if self.nested_groups:
self.push_group(self.inventory, 'security_groups', key)
except AttributeError:
self.fail_with_error('\n'.join(['Package boto seems a bit older.',
'Please upgrade boto >= 2.3.0.']))
# Inventory: Group by tag keys
if self.group_by_tag_keys:
for k, v in instance.tags.items():
if v:
key = self.to_safe("tag_" + k + "=" + v)
else:
key = self.to_safe("tag_" + k)
self.push(self.inventory, key, dest)
if self.nested_groups:
self.push_group(self.inventory, 'tags', self.to_safe("tag_" + k))
self.push_group(self.inventory, self.to_safe("tag_" + k), key)
# Inventory: Group by Route53 domain names if enabled
if self.route53_enabled and self.group_by_route53_names:
route53_names = self.get_instance_route53_names(instance)
for name in route53_names:
self.push(self.inventory, name, dest)
if self.nested_groups:
self.push_group(self.inventory, 'route53', name)
# Global Tag: instances without tags
if self.group_by_tag_none and len(instance.tags) == 0:
self.push(self.inventory, 'tag_none', dest)
if self.nested_groups:
self.push_group(self.inventory, 'tags', 'tag_none')
# Global Tag: tag all EC2 instances
self.push(self.inventory, 'ec2', dest)
self.inventory["_meta"]["hostvars"][dest] = self.get_host_info_dict_from_instance(instance)
def add_rds_instance(self, instance, region):
''' Adds an RDS instance to the inventory and index, as long as it is
addressable '''
# Only want available instances unless all_rds_instances is True
if not self.all_rds_instances and instance.status != 'available':
return
# Select the best destination address
dest = instance.endpoint[0]
if not dest:
# Skip instances we cannot address (e.g. private VPC subnet)
return
# Add to index
self.index[dest] = [region, instance.id]
# Inventory: Group by instance ID (always a group of 1)
if self.group_by_instance_id:
self.inventory[instance.id] = [dest]
if self.nested_groups:
self.push_group(self.inventory, 'instances', instance.id)
# Inventory: Group by region
if self.group_by_region:
self.push(self.inventory, region, dest)
if self.nested_groups:
self.push_group(self.inventory, 'regions', region)
# Inventory: Group by availability zone
if self.group_by_availability_zone:
self.push(self.inventory, instance.availability_zone, dest)
if self.nested_groups:
if self.group_by_region:
self.push_group(self.inventory, region, instance.availability_zone)
self.push_group(self.inventory, 'zones', instance.availability_zone)
# Inventory: Group by instance type
if self.group_by_instance_type:
type_name = self.to_safe('type_' + instance.instance_class)
self.push(self.inventory, type_name, dest)
if self.nested_groups:
self.push_group(self.inventory, 'types', type_name)
# Inventory: Group by VPC
if self.group_by_vpc_id and instance.subnet_group and instance.subnet_group.vpc_id:
vpc_id_name = self.to_safe('vpc_id_' + instance.subnet_group.vpc_id)
self.push(self.inventory, vpc_id_name, dest)
if self.nested_groups:
self.push_group(self.inventory, 'vpcs', vpc_id_name)
# Inventory: Group by security group
if self.group_by_security_group:
try:
if instance.security_group:
key = self.to_safe("security_group_" + instance.security_group.name)
self.push(self.inventory, key, dest)
if self.nested_groups:
self.push_group(self.inventory, 'security_groups', key)
except AttributeError:
self.fail_with_error('\n'.join(['Package boto seems a bit older.',
'Please upgrade boto >= 2.3.0.']))
# Inventory: Group by engine
if self.group_by_rds_engine:
self.push(self.inventory, self.to_safe("rds_" + instance.engine), dest)
if self.nested_groups:
self.push_group(self.inventory, 'rds_engines', self.to_safe("rds_" + instance.engine))
# Inventory: Group by parameter group
if self.group_by_rds_parameter_group:
self.push(self.inventory, self.to_safe("rds_parameter_group_" + instance.parameter_group.name), dest)
if self.nested_groups:
self.push_group(self.inventory, 'rds_parameter_groups', self.to_safe("rds_parameter_group_" + instance.parameter_group.name))
# Global Tag: all RDS instances
self.push(self.inventory, 'rds', dest)
self.inventory["_meta"]["hostvars"][dest] = self.get_host_info_dict_from_instance(instance)
def add_elasticache_cluster(self, cluster, region):
''' Adds an ElastiCache cluster to the inventory and index, as long as
its nodes are addressable '''
# Only want available clusters unless all_elasticache_clusters is True
if not self.all_elasticache_clusters and cluster['CacheClusterStatus'] != 'available':
return
# Select the best destination address
if 'ConfigurationEndpoint' in cluster and cluster['ConfigurationEndpoint']:
# Memcached cluster
dest = cluster['ConfigurationEndpoint']['Address']
is_redis = False
else:
# Redis single-node cluster
# Because all Redis clusters are single nodes, we'll merge the
# info from the cluster with info about the node
dest = cluster['CacheNodes'][0]['Endpoint']['Address']
is_redis = True
if not dest:
# Skip clusters we cannot address (e.g. private VPC subnet)
return
# Add to index
self.index[dest] = [region, cluster['CacheClusterId']]
# Inventory: Group by instance ID (always a group of 1)
if self.group_by_instance_id:
self.inventory[cluster['CacheClusterId']] = [dest]
if self.nested_groups:
self.push_group(self.inventory, 'instances', cluster['CacheClusterId'])
# Inventory: Group by region
if self.group_by_region and not is_redis:
self.push(self.inventory, region, dest)
if self.nested_groups:
self.push_group(self.inventory, 'regions', region)
# Inventory: Group by availability zone
if self.group_by_availability_zone and not is_redis:
self.push(self.inventory, cluster['PreferredAvailabilityZone'], dest)
if self.nested_groups:
if self.group_by_region:
self.push_group(self.inventory, region, cluster['PreferredAvailabilityZone'])
self.push_group(self.inventory, 'zones', cluster['PreferredAvailabilityZone'])
# Inventory: Group by node type
if self.group_by_instance_type and not is_redis:
type_name = self.to_safe('type_' + cluster['CacheNodeType'])
self.push(self.inventory, type_name, dest)
if self.nested_groups:
self.push_group(self.inventory, 'types', type_name)
# Inventory: Group by VPC (information not available in the current
# AWS API version for ElastiCache)
# Inventory: Group by security group
if self.group_by_security_group and not is_redis:
# Check for the existence of the 'SecurityGroups' key and also if
# this key has some value. When the cluster is not placed in a SG
# the query can return None here and cause an error.
if 'SecurityGroups' in cluster and cluster['SecurityGroups'] is not None:
for security_group in cluster['SecurityGroups']:
key = self.to_safe("security_group_" + security_group['SecurityGroupId'])
self.push(self.inventory, key, dest)
if self.nested_groups:
self.push_group(self.inventory, 'security_groups', key)
# Inventory: Group by engine
if self.group_by_elasticache_engine and not is_redis:
self.push(self.inventory, self.to_safe("elasticache_" + cluster['Engine']), dest)
if self.nested_groups:
self.push_group(self.inventory, 'elasticache_engines', self.to_safe(cluster['Engine']))
# Inventory: Group by parameter group
if self.group_by_elasticache_parameter_group:
self.push(self.inventory, self.to_safe("elasticache_parameter_group_" + cluster['CacheParameterGroup']['CacheParameterGroupName']), dest)
if self.nested_groups:
self.push_group(self.inventory, 'elasticache_parameter_groups', self.to_safe(cluster['CacheParameterGroup']['CacheParameterGroupName']))
# Inventory: Group by replication group
if self.group_by_elasticache_replication_group and 'ReplicationGroupId' in cluster and cluster['ReplicationGroupId']:
self.push(self.inventory, self.to_safe("elasticache_replication_group_" + cluster['ReplicationGroupId']), dest)
if self.nested_groups:
self.push_group(self.inventory, 'elasticache_replication_groups', self.to_safe(cluster['ReplicationGroupId']))
# Global Tag: all ElastiCache clusters
self.push(self.inventory, 'elasticache_clusters', cluster['CacheClusterId'])
host_info = self.get_host_info_dict_from_describe_dict(cluster)
self.inventory["_meta"]["hostvars"][dest] = host_info
# Add the nodes
for node in cluster['CacheNodes']:
self.add_elasticache_node(node, cluster, region)
def add_elasticache_node(self, node, cluster, region):
''' Adds an ElastiCache node to the inventory and index, as long as
it is addressable '''
# Only want available nodes unless all_elasticache_nodes is True
if not self.all_elasticache_nodes and node['CacheNodeStatus'] != 'available':
return
# Select the best destination address
dest = node['Endpoint']['Address']
if not dest:
# Skip nodes we cannot address (e.g. private VPC subnet)
return
node_id = self.to_safe(cluster['CacheClusterId'] + '_' + node['CacheNodeId'])
# Add to index
self.index[dest] = [region, node_id]
# Inventory: Group by node ID (always a group of 1)
if self.group_by_instance_id:
self.inventory[node_id] = [dest]
if self.nested_groups:
self.push_group(self.inventory, 'instances', node_id)
# Inventory: Group by region
if self.group_by_region:
self.push(self.inventory, region, dest)
if self.nested_groups:
self.push_group(self.inventory, 'regions', region)
# Inventory: Group by availability zone
if self.group_by_availability_zone:
self.push(self.inventory, cluster['PreferredAvailabilityZone'], dest)
if self.nested_groups:
if self.group_by_region:
self.push_group(self.inventory, region, cluster['PreferredAvailabilityZone'])
self.push_group(self.inventory, 'zones', cluster['PreferredAvailabilityZone'])
# Inventory: Group by node type
if self.group_by_instance_type:
type_name = self.to_safe('type_' + cluster['CacheNodeType'])
self.push(self.inventory, type_name, dest)
if self.nested_groups:
self.push_group(self.inventory, 'types', type_name)
# Inventory: Group by VPC (information not available in the current
# AWS API version for ElastiCache)
# Inventory: Group by security group
if self.group_by_security_group:
# Check for the existence of the 'SecurityGroups' key and also if
# this key has some value. When the cluster is not placed in a SG
# the query can return None here and cause an error.
if 'SecurityGroups' in cluster and cluster['SecurityGroups'] is not None:
for security_group in cluster['SecurityGroups']:
key = self.to_safe("security_group_" + security_group['SecurityGroupId'])
self.push(self.inventory, key, dest)
if self.nested_groups:
self.push_group(self.inventory, 'security_groups', key)
# Inventory: Group by engine
if self.group_by_elasticache_engine:
self.push(self.inventory, self.to_safe("elasticache_" + cluster['Engine']), dest)
if self.nested_groups:
self.push_group(self.inventory, 'elasticache_engines', self.to_safe("elasticache_" + cluster['Engine']))
# Inventory: Group by parameter group (done at cluster level)
# Inventory: Group by replication group (done at cluster level)
# Inventory: Group by ElastiCache Cluster
if self.group_by_elasticache_cluster:
self.push(self.inventory, self.to_safe("elasticache_cluster_" + cluster['CacheClusterId']), dest)
# Global Tag: all ElastiCache nodes
self.push(self.inventory, 'elasticache_nodes', dest)
host_info = self.get_host_info_dict_from_describe_dict(node)
if dest in self.inventory["_meta"]["hostvars"]:
self.inventory["_meta"]["hostvars"][dest].update(host_info)
else:
self.inventory["_meta"]["hostvars"][dest] = host_info
def add_elasticache_replication_group(self, replication_group, region):
''' Adds an ElastiCache replication group to the inventory and index '''
# Only want available clusters unless all_elasticache_replication_groups is True
if not self.all_elasticache_replication_groups and replication_group['Status'] != 'available':
return
# Select the best destination address (PrimaryEndpoint)
dest = replication_group['NodeGroups'][0]['PrimaryEndpoint']['Address']
if not dest:
# Skip clusters we cannot address (e.g. private VPC subnet)
return
# Add to index
self.index[dest] = [region, replication_group['ReplicationGroupId']]
# Inventory: Group by ID (always a group of 1)
if self.group_by_instance_id:
self.inventory[replication_group['ReplicationGroupId']] = [dest]
if self.nested_groups:
self.push_group(self.inventory, 'instances', replication_group['ReplicationGroupId'])
# Inventory: Group by region
if self.group_by_region:
self.push(self.inventory, region, dest)
if self.nested_groups:
self.push_group(self.inventory, 'regions', region)
# Inventory: Group by availability zone (doesn't apply to replication groups)
# Inventory: Group by node type (doesn't apply to replication groups)
# Inventory: Group by VPC (information not available in the current
# AWS API version for replication groups
# Inventory: Group by security group (doesn't apply to replication groups)
# Check this value in cluster level
# Inventory: Group by engine (replication groups are always Redis)
if self.group_by_elasticache_engine:
self.push(self.inventory, 'elasticache_redis', dest)
if self.nested_groups:
self.push_group(self.inventory, 'elasticache_engines', 'redis')
# Global Tag: all ElastiCache clusters
self.push(self.inventory, 'elasticache_replication_groups', replication_group['ReplicationGroupId'])
host_info = self.get_host_info_dict_from_describe_dict(replication_group)
self.inventory["_meta"]["hostvars"][dest] = host_info
def get_route53_records(self):
''' Get and store the map of resource records to domain names that
point to them. '''
r53_conn = route53.Route53Connection()
all_zones = r53_conn.get_zones()
route53_zones = [ zone for zone in all_zones if zone.name[:-1]
not in self.route53_excluded_zones ]
self.route53_records = {}
for zone in route53_zones:
rrsets = r53_conn.get_all_rrsets(zone.id)
for record_set in rrsets:
record_name = record_set.name
if record_name.endswith('.'):
record_name = record_name[:-1]
for resource in record_set.resource_records:
self.route53_records.setdefault(resource, set())
self.route53_records[resource].add(record_name)
def get_instance_route53_names(self, instance):
''' Check if an instance is referenced in the records we have from
Route53. If it is, return the list of domain names pointing to said
instance. If nothing points to it, return an empty list. '''
instance_attributes = [ 'public_dns_name', 'private_dns_name',
'ip_address', 'private_ip_address' ]
name_list = set()
for attrib in instance_attributes:
try:
value = getattr(instance, attrib)
except AttributeError:
continue
if value in self.route53_records:
name_list.update(self.route53_records[value])
return list(name_list)
def get_host_info_dict_from_instance(self, instance):
instance_vars = {}
for key in vars(instance):
value = getattr(instance, key)
key = self.to_safe('ec2_' + key)
# Handle complex types
# state/previous_state changed to properties in boto in https://github.com/boto/boto/commit/a23c379837f698212252720d2af8dec0325c9518
if key == 'ec2__state':
instance_vars['ec2_state'] = instance.state or ''
instance_vars['ec2_state_code'] = instance.state_code
elif key == 'ec2__previous_state':
instance_vars['ec2_previous_state'] = instance.previous_state or ''
instance_vars['ec2_previous_state_code'] = instance.previous_state_code
elif type(value) in [int, bool]:
instance_vars[key] = value
elif isinstance(value, six.string_types):
instance_vars[key] = value.strip()
            elif value is None:
instance_vars[key] = ''
elif key == 'ec2_region':
instance_vars[key] = value.name
elif key == 'ec2__placement':
instance_vars['ec2_placement'] = value.zone
elif key == 'ec2_tags':
for k, v in value.items():
key = self.to_safe('ec2_tag_' + k)
instance_vars[key] = v
elif key == 'ec2_groups':
group_ids = []
group_names = []
for group in value:
group_ids.append(group.id)
group_names.append(group.name)
instance_vars["ec2_security_group_ids"] = ','.join([str(i) for i in group_ids])
instance_vars["ec2_security_group_names"] = ','.join([str(i) for i in group_names])
else:
pass
# TODO Product codes if someone finds them useful
#print key
#print type(value)
#print value
return instance_vars
def get_host_info_dict_from_describe_dict(self, describe_dict):
        ''' Parses the dictionary returned by the API call into a flat
        dictionary of parameters. This method should be used only when
        'describe' is used directly because Boto doesn't provide specific
        classes. '''
# I really don't agree with prefixing everything with 'ec2'
# because EC2, RDS and ElastiCache are different services.
# I'm just following the pattern used until now to not break any
# compatibility.
host_info = {}
for key in describe_dict:
value = describe_dict[key]
key = self.to_safe('ec2_' + self.uncammelize(key))
# Handle complex types
# Target: Memcached Cache Clusters
if key == 'ec2_configuration_endpoint' and value:
host_info['ec2_configuration_endpoint_address'] = value['Address']
host_info['ec2_configuration_endpoint_port'] = value['Port']
# Target: Cache Nodes and Redis Cache Clusters (single node)
if key == 'ec2_endpoint' and value:
host_info['ec2_endpoint_address'] = value['Address']
host_info['ec2_endpoint_port'] = value['Port']
# Target: Redis Replication Groups
if key == 'ec2_node_groups' and value:
host_info['ec2_endpoint_address'] = value[0]['PrimaryEndpoint']['Address']
host_info['ec2_endpoint_port'] = value[0]['PrimaryEndpoint']['Port']
replica_count = 0
for node in value[0]['NodeGroupMembers']:
if node['CurrentRole'] == 'primary':
host_info['ec2_primary_cluster_address'] = node['ReadEndpoint']['Address']
host_info['ec2_primary_cluster_port'] = node['ReadEndpoint']['Port']
host_info['ec2_primary_cluster_id'] = node['CacheClusterId']
elif node['CurrentRole'] == 'replica':
host_info['ec2_replica_cluster_address_'+ str(replica_count)] = node['ReadEndpoint']['Address']
host_info['ec2_replica_cluster_port_'+ str(replica_count)] = node['ReadEndpoint']['Port']
host_info['ec2_replica_cluster_id_'+ str(replica_count)] = node['CacheClusterId']
replica_count += 1
# Target: Redis Replication Groups
if key == 'ec2_member_clusters' and value:
host_info['ec2_member_clusters'] = ','.join([str(i) for i in value])
# Target: All Cache Clusters
elif key == 'ec2_cache_parameter_group':
host_info["ec2_cache_node_ids_to_reboot"] = ','.join([str(i) for i in value['CacheNodeIdsToReboot']])
host_info['ec2_cache_parameter_group_name'] = value['CacheParameterGroupName']
host_info['ec2_cache_parameter_apply_status'] = value['ParameterApplyStatus']
# Target: Almost everything
elif key == 'ec2_security_groups':
# Skip if SecurityGroups is None
# (it is possible to have the key defined but no value in it).
if value is not None:
sg_ids = []
for sg in value:
sg_ids.append(sg['SecurityGroupId'])
host_info["ec2_security_group_ids"] = ','.join([str(i) for i in sg_ids])
# Target: Everything
# Preserve booleans and integers
elif type(value) in [int, bool]:
host_info[key] = value
# Target: Everything
# Sanitize string values
elif isinstance(value, six.string_types):
host_info[key] = value.strip()
# Target: Everything
# Replace None by an empty string
            elif value is None:
host_info[key] = ''
else:
# Remove non-processed complex types
pass
return host_info
def get_host_info(self):
''' Get variables about a specific host '''
if len(self.index) == 0:
# Need to load index from cache
self.load_index_from_cache()
        if self.args.host not in self.index:
# try updating the cache
self.do_api_calls_update_cache()
            if self.args.host not in self.index:
# host might not exist anymore
return self.json_format_dict({}, True)
(region, instance_id) = self.index[self.args.host]
instance = self.get_instance(region, instance_id)
return self.json_format_dict(self.get_host_info_dict_from_instance(instance), True)
def push(self, my_dict, key, element):
''' Push an element onto an array that may not have been defined in
the dict '''
group_info = my_dict.setdefault(key, [])
if isinstance(group_info, dict):
host_list = group_info.setdefault('hosts', [])
host_list.append(element)
else:
group_info.append(element)
def push_group(self, my_dict, key, element):
''' Push a group as a child of another group. '''
parent_group = my_dict.setdefault(key, {})
if not isinstance(parent_group, dict):
parent_group = my_dict[key] = {'hosts': parent_group}
child_groups = parent_group.setdefault('children', [])
if element not in child_groups:
child_groups.append(element)
def get_inventory_from_cache(self):
        ''' Reads the inventory from the cache file and returns it as a JSON
        string '''
        with open(self.cache_path_cache, 'r') as cache:
            return cache.read()
def load_index_from_cache(self):
        ''' Reads the index from the cache file and sets self.index '''
        with open(self.cache_path_index, 'r') as cache:
            self.index = json.loads(cache.read())
def write_to_cache(self, data, filename):
''' Writes data in JSON format to a file '''
        json_data = self.json_format_dict(data, True)
        with open(filename, 'w') as cache:
            cache.write(json_data)
    def uncammelize(self, key):
        ''' Converts a CamelCase string to snake_case '''
        temp = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', key)
        return re.sub('([a-z0-9])([A-Z])', r'\1_\2', temp).lower()
def to_safe(self, word):
''' Converts 'bad' characters in a string to underscores so they can be
used as Ansible groups '''
        return re.sub(r"[^A-Za-z0-9_]", "_", word)
def json_format_dict(self, data, pretty=False):
''' Converts a dict to a JSON object and dumps it as a formatted
string '''
if pretty:
return json.dumps(data, sort_keys=True, indent=2)
else:
return json.dumps(data)
# Run the script
Ec2Inventory()
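The `push`/`push_group` helpers above accept groups in two shapes: a plain host list, or (when `nested_groups` is enabled) a dict carrying `hosts` and `children` keys. A minimal standalone sketch of that contract, reimplemented here for illustration (not the script itself):

```python
def push(inventory, key, element):
    """Append a host to a group, whether the group is a plain list
    or a dict carrying a 'hosts' list (nested-group form)."""
    group = inventory.setdefault(key, [])
    if isinstance(group, dict):
        group.setdefault('hosts', []).append(element)
    else:
        group.append(element)

def push_group(inventory, key, element):
    """Record 'element' as a child group of 'key', upgrading a plain
    host list to the dict form if necessary."""
    parent = inventory.setdefault(key, {})
    if not isinstance(parent, dict):
        parent = inventory[key] = {'hosts': parent}
    children = parent.setdefault('children', [])
    if element not in children:
        children.append(element)

inv = {}
push(inv, 'us-east-1', '10.0.0.5')        # plain list group
push_group(inv, 'regions', 'us-east-1')   # 'regions' becomes a dict with children
push(inv, 'regions', 'bastion.example')   # the dict group gains a hosts list
```

The dict form is what lets a group such as `regions` both hold hosts directly and parent child groups like individual region names.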
================================================
FILE: pentagon/component/core/files/requirements.txt
================================================
================================================
FILE: pentagon/component/gcp/__init__.py
================================================
import cluster
================================================
FILE: pentagon/component/gcp/cluster.py
================================================
"""
cluster.py
This class relies on a lot of magic in ComponentBase from pentagon. It can be
difficult to discern which properties and class vars are needed to make this
run correctly. The best advice I can give future time travelers is to use a
debugger, or trial and error.
"""
from pentagon.component import ComponentBase
import pkg_resources
class Public(ComponentBase):
"""
Adds all the terraform modules that create a single public cluster with
one node pool. This includes the network, cluster and node pool.
"""
_required_parameters = [
'cluster_id',
'cluster_name',
'kubernetes_version',
'network_name',
'nodes_cidr',
'nodes_subnetwork_name',
'pods_cidr',
'project',
'region',
'services_cidr',
'tf_module_gcp_vpc_native_version',
'tf_module_gke_module_version',
'tf_module_nodepool_module_version',
]
_defaults = {
'cluster_id': '1',
'network_name': 'kube',
'nodes_subnetwork_name': 'kube-nodes',
'region': 'us-central1',
'tf_module_gcp_vpc_native_version': 'default-v1.0.0',
'tf_module_gke_module_version': 'public-vpc-native-v1.0.0',
'tf_module_nodepool_module_version': 'node-pool-v1.0.0',
}
@property
def _files_directory(self):
_template_path = 'files/public_cluster'
if pkg_resources.resource_isdir(__name__, _template_path):
return pkg_resources.resource_filename(__name__, _template_path)
else:
raise StandardError(
'Could not find template path ({})'.format(_template_path))
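As the module docstring warns, the `_required_parameters`/`_defaults` interplay lives in ComponentBase magic. A toy illustration of the contract, assuming defaults are merged in before required parameters are checked (`merge_defaults` and `missing_required` are illustrative helper names, not pentagon's API):

```python
def merge_defaults(data, defaults):
    """Fill in any parameter the caller did not supply."""
    merged = dict(defaults)
    merged.update(data)  # caller-supplied values win over defaults
    return merged

def missing_required(data, required):
    """Return the required parameter names still absent after merging."""
    return sorted(k for k in required if k not in data)

# A trimmed-down version of Public's class vars, for illustration.
defaults = {'cluster_id': '1', 'region': 'us-central1'}
required = ['cluster_id', 'cluster_name', 'region']

data = merge_defaults({'cluster_name': 'demo'}, defaults)
# cluster_id and region came from defaults, so nothing is missing
```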
================================================
FILE: pentagon/component/gcp/files/public_cluster/cluster.tf.jinja
================================================
# These local variables can be used as inputs to both a network and this GKE VPC Native cluster module.
locals {
project = "{{ project }}"
region = "{{ region }}"
network_name = "{{ network_name }}"
kubernetes_version = "{{ kubernetes_version }}"
}
module "network_{{ cluster_id }}" {
source = "git@github.com:reactiveops/terraform-gcp-vpc-native.git//default?ref={{ tf_module_gcp_vpc_native_version }}"
// base network parameters
network_name = "${local.network_name}"
subnetwork_name = "{{ nodes_subnetwork_name }}"
region = "${local.region}"
enable_flow_logs = "false"
//specify the staging subnetwork primary and secondary CIDRs for IP aliasing
subnetwork_range = "{{ nodes_cidr }}"
subnetwork_pods = "{{ pods_cidr }}"
subnetwork_services = "{{ services_cidr }}"
}
# Ref: https://github.com/reactiveops/terraform-gcp-vpc-native
module "cluster_{{ cluster_id }}" {
# Change the ref below to use a vX.Y.Z release instead of master.
source = "git@github.com:/reactiveops/terraform-gke//public-vpc-native?ref={{ tf_module_gke_module_version }}"
name = "{{ cluster_name }}-{{ cluster_id }}"
region = "${local.region}"
project = "${local.project}"
kubernetes_version = "${local.kubernetes_version}"
network_name = "${local.network_name}"
nodes_subnetwork_name = "${module.network_{{ cluster_id }}.subnetwork}"
pods_secondary_ip_range_name = "${module.network_{{ cluster_id }}.gke_pods_1}"
services_secondary_ip_range_name = "${module.network_{{ cluster_id }}.gke_services_1}"
master_authorized_network_cidrs = [
{
# This is the module default, but demonstrates specifying this input.
cidr_block = "0.0.0.0/0"
display_name = "from the Internet"
},
]
}
module "node_pool_{{ cluster_id }}" {
source = "git@github.com:/reactiveops/terraform-gke//node_pool?ref={{ tf_module_nodepool_module_version }}"
name = "node-pool-1"
region = "${module.cluster_{{ cluster_id }}.region}"
gke_cluster_name = "${module.cluster_{{ cluster_id }}.name}"
machine_type = "n1-standard-2"
min_node_count = "1"
max_node_count = "1"
# Match the Kubernetes version from the GKE cluster!
kubernetes_version = "${module.cluster_{{ cluster_id }}.kubernetes_version}"
}
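Pentagon renders `*.tf.jinja` templates like the one above by substituting values such as `project` and `network_name` from the component's data. A stdlib-only stand-in for that substitution (the real implementation uses Jinja2; `toy_render` is purely illustrative and only handles simple `{{ var }}` interpolation):

```python
import re

def toy_render(template, context):
    """Replace each '{{ name }}' with context[name]; a Jinja2 stand-in."""
    return re.sub(r'\{\{\s*(\w+)\s*\}\}',
                  lambda m: str(context[m.group(1)]), template)

snippet = 'network_name = "{{ network_name }}"'
rendered = toy_render(snippet, {'network_name': 'kube'})
```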
================================================
FILE: pentagon/component/inventory/__init__.py
================================================
import os
import json
import sys
import logging
import traceback
from pentagon.component import ComponentBase
from pentagon.component.aws_vpc import AWSVpc as Vpc
from pentagon.component.vpn import Vpn
from pentagon.component import gcp
from pentagon.helpers import create_rsa_key
from pentagon.defaults import AWSPentagonDefaults as PentagonDefaults
class Inventory(ComponentBase):
_defaults = {'cloud': 'aws'}
_required_parameters = [
'name',
'infrastructure_bucket',
'aws_access_key',
'aws_secret_key',
'project_name'
]
def __init__(self, data, additional_args=None, **kwargs):
        # HACK: this conditional supports the start-project workflow
if 'cloud' in data.keys():
# HACK satisfy AWS requirements above in _required_parameters
if data['cloud'] == 'gcp':
data['aws_access_key'] = 'shouldneverbeused'
data['aws_secret_key'] = 'shouldneverbeused'
super(Inventory, self).__init__(data, additional_args, **kwargs)
self._ssh_keys = {
'admin_vpn_key': self._data.get('admin_vpn_key', PentagonDefaults.ssh['admin_vpn_key']),
'working_kube_key': self._data.get('working_kube_key', PentagonDefaults.ssh['working_kube_key']),
'production_kube_key': self._data.get('production_kube_key', PentagonDefaults.ssh['production_kube_key']),
'working_private_key': self._data.get('working_private_key', PentagonDefaults.ssh['working_private_key']),
'production_private_key': self._data.get('production_private_key', PentagonDefaults.ssh['production_private_key']),
}
@property
def _files_directory(self):
return sys.modules[self.__module__].__path__[0] + "/files/common"
def add(self, destination, overwrite=False):
        """Inventory version of Component.add. Copies files and templates from <component>/files and templates the *.jinja files."""
if destination == './':
self._destination = self._data.get('name', './default')
else:
self._destination = destination
self._overwrite = overwrite
self._display_settings_to_user()
try:
self._add_files()
if self._data['cloud'].lower() == 'aws':
self._data['aws_region'] = self._data.get('aws_default_region')
self._data['account'] = os.path.basename(self._destination)
self._merge_data(self._ssh_keys)
self.__create_keys()
Aws(self._data).add("{}/terraform".format(self._destination))
if self._data.get('configure_vpn', True):
Vpn(self._data).add(
"{}/resources".format(self._destination), overwrite=True)
if self._data['cloud'].lower() == 'gcp':
Gcp(self._data).add('{}/terraform/'.format(self._destination))
self._remove_init_file()
self._render_directory_templates()
except Exception as e:
logging.error("Error occurred configuring component")
logging.error(e)
logging.debug(traceback.format_exc(e))
sys.exit(1)
def __create_keys(self):
key_path = "{}/{}/".format(self._destination, "config/private")
for key in self._ssh_keys:
logging.debug("Creating ssh key {}".format(key))
key_name = "{}".format(self._ssh_keys[key])
if not os.path.isfile("{}{}".format(key_path, key_name)):
create_rsa_key(key_name, key_path)
else:
                logging.warn("Key {}{} exists!".format(key_path, key_name))
class Aws(ComponentBase):
def add(self, destination):
Vpc(self._data).add("./{}".format(destination), overwrite=True)
class Gcp(ComponentBase):
def add(self, destination):
gcp.cluster.Public(self._data).add(
"./{}".format(destination), overwrite=True)
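The private `__create_keys` method above only generates a key when no file already exists at the target path. A sketch of that idempotency check (`ensure_key` and the stub creator are illustrative; pentagon's real helper is `create_rsa_key`):

```python
import os
import tempfile

def ensure_key(key_path, key_name, create):
    """Create the key only if it is not already present; report whether we did."""
    full = os.path.join(key_path, key_name)
    if os.path.isfile(full):
        return False  # mirrors the "Key ... exists!" warning branch
    create(full)
    return True

tmp = tempfile.mkdtemp()
stub_creator = lambda p: open(p, 'w').close()  # stand-in for create_rsa_key
made_first = ensure_key(tmp, 'admin_vpn_key', stub_creator)
made_again = ensure_key(tmp, 'admin_vpn_key', stub_creator)
```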
================================================
FILE: pentagon/component/inventory/files/__init__.py
================================================
================================================
FILE: pentagon/component/inventory/files/common/clusters/__init__.py
================================================
#__init__.py
================================================
FILE: pentagon/component/inventory/files/common/config/local/ansible.cfg-default.jinja
================================================
[defaults]
inventory = $INFRASTRUCTURE_REPO/plugins/inventory
roles_path = $INFRASTRUCTURE_REPO/roles
filter_plugins = $INFRASTRUCTURE_REPO/plugins/filter_plugins
retry_files_save_path = ~/.ansible-retry
hash_behavior = merge
[ssh_connection]
# this needs the path defined without the use of ENV variables
ssh_args = -F __INFRA_REPO_PATH__/inventory/{{ account }}/config/private/ssh_config
================================================
FILE: pentagon/component/inventory/files/common/config/local/local-config-init.jinja
================================================
#!/usr/bin/env bash
# This script creates personalized copies of *-default files.
# Its primary purpose is to create config files that populate
# paths for items which do not leverage the $INFRASTRUCTURE_REPO
# environment variable.
# It simply replaces all instances of the string __INFRA_REPO_PATH__
# with the contents of $INFRASTRUCTURE_REPO and stores the output in
# the ../private directory (which is .gitignored).
OUT_DIR="../private"
if [ -z "${INFRASTRUCTURE_REPO}" ]; then
echo "INFRASTRUCTURE_REPO environment variable must be set"
exit 1
elif [ ! -d "${INFRASTRUCTURE_REPO}" ]; then
echo "${INFRASTRUCTURE_REPO} doesn't exist or isn't a directory"
exit 1
fi
cd "${INFRASTRUCTURE_REPO}/inventory/{{ name }}/config/local" || exit 1
for default_file in *-default; do
out_file="${OUT_DIR}/${default_file//-default}"
echo -n "${default_file} -> ${out_file} "
if [ -e "${out_file}" ]; then
echo "already exists. skipping."
continue
else
    sed -e "s@__INFRA_REPO_PATH__@$INFRASTRUCTURE_REPO@g" "${default_file}" > "${out_file}"
echo "created."
fi
done
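The loop above is effectively a single string substitution per file. The same transformation expressed in Python, for clarity (the sample line is illustrative):

```python
def personalize(text, infra_repo_path):
    """Mirror the script's sed: swap the placeholder for the real path."""
    return text.replace('__INFRA_REPO_PATH__', infra_repo_path)

line = 'ssh_args = -F __INFRA_REPO_PATH__/inventory/prod/config/private/ssh_config'
out = personalize(line, '/home/me/infra')
```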
================================================
FILE: pentagon/component/inventory/files/common/config/local/ssh_config-default.jinja
================================================
# for the kube / kops working instances
Host 172.20.64.* 172.20.65.* 172.20.66.* 172.20.67.* 172.20.68.* 172.20.69.* 172.20.70.* 172.20.71.* 172.20.72.* 172.20.73.* 172.20.74.* 172.20.75.*
User admin
IdentityFile __INFRA_REPO_PATH__/inventory/{{ account }}/config/private/{{ working_kube_key }}
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
# for the kube / kops prod instances
Host 172.20.96.* 172.20.97.* 172.20.98.* 172.20.99.* 172.20.100.* 172.20.101.* 172.20.102.* 172.20.103.* 172.20.104.* 172.20.105.* 172.20.106.* 172.20.107.*
User admin
IdentityFile __INFRA_REPO_PATH__/inventory/{{ account }}/config/private/{{ production_kube_key }}
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
# for instances in private_working
Host 172.20.48.* 172.20.49.* 172.20.50.* 172.20.51.* 172.20.52.* 172.20.53.* 172.20.54.* 172.20.55.* 172.20.56.* 172.20.57.* 172.20.58.* 172.20.59.*
User ubuntu
IdentityFile __INFRA_REPO_PATH__/inventory/{{ account }}/config/private/{{ working_private_key }}
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
# for instances in private_prod
Host 172.20.32.* 172.20.33.* 172.20.34.* 172.20.35.* 172.20.36.* 172.20.37.* 172.20.38.* 172.20.39.* 172.20.40.* 172.20.41.* 172.20.42.* 172.20.43.*
User ubuntu
IdentityFile __INFRA_REPO_PATH__/inventory/{{ account }}/config/private/{{ production_private_key }}
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
# for instances in admin
Host 172.20.0.* 172.20.1.* 172.20.2.* 172.20.3.* 172.20.4.* 172.20.5.* 172.20.6.* 172.20.7.* 172.20.8.* 172.20.9.* 172.20.10.* 172.20.11.*
User ubuntu
IdentityFile __INFRA_REPO_PATH__/inventory/{{ account }}/config/private/{{ admin_vpn_key }}
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
# VPN instance
# Replace the '*' with the IP address of the VPN instance
Host *
User ubuntu
IdentityFile __INFRA_REPO_PATH__/inventory/{{ account }}/config/private/{{ admin_vpn_key }}
IdentitiesOnly yes
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
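Each `Host` stanza above matches instance IPs with shell-style globs, so `fnmatch` can show which stanza a given address would select (patterns copied from the first stanza):

```python
from fnmatch import fnmatch

# First two patterns of the kube/kops working stanza above.
working_kube = ['172.20.64.*', '172.20.65.*']

def matches(ip, patterns):
    """True if the address hits any of the stanza's glob patterns."""
    return any(fnmatch(ip, p) for p in patterns)

hit = matches('172.20.64.13', working_kube)   # working subnet address
miss = matches('172.20.96.4', working_kube)   # prod subnet address
```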
================================================
FILE: pentagon/component/inventory/files/common/config/local/vars.yml.jinja
================================================
ANSIBLE_CONFIG: '${INFRASTRUCTURE_REPO}/inventory/${INVENTORY}/config/private/ansible.cfg'
KUBECONFIG: '${INFRASTRUCTURE_REPO}/inventory/${INVENTORY}/config/private/kube_config'
HELM_HOME: "${INFRASTRUCTURE_REPO}/helm"
TILLER_NAMESPACE: "tiller"
{%- if cloud | lower == 'aws' %}
VPC_NAME: "{{ vpc_name }}"
INFRASTRUCTURE_BUCKET: "{{ infrastructure_bucket }}"
AWS_DEFAULT_REGION: "{{ aws_default_region }}"
AWS_AVAILABILITY_ZONES: "{{ aws_availability_zones }}"
AWS_AVAILABILITY_ZONE_COUNT: "{{ aws_availability_zone_count }}"
AWS_INVENTORY_PATH: '${INFRASTRUCTURE_REPO}/plugins/'
KOPS_STATE_STORE_BUCKET: "{{ infrastructure_bucket }}"
KOPS_STATE_STORE: "s3://${KOPS_STATE_STORE_BUCKET}"
vpc_tag_name: "{{ vpc_name }}"
org: "{{ project_name }}"
canonical_zone: "{{ dns_zone }}"
vpn_bucket: "{{project_name}}-vpn"
{%- elif cloud == 'gcp' %}
CLOUDSDK_CORE_PROJECT: "{{ gcp_project }}"
CLOUDSDK_COMPUTE_ZO
gitextract_z6vql54m/
├── .circleci/
│ └── config.yml
├── .github/
│ └── stale.yml
├── .gitignore
├── CHANGELOG.md
├── CODEOWNERS
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── DESIGN.md
├── Dockerfile
├── LICENSE
├── MANIFEST.in
├── README.md
├── bin/
│ └── yaml_source
├── docs/
│ ├── _config.yml
│ ├── components.md
│ ├── getting-started.md
│ ├── network.md
│ ├── overview.md
│ └── vpn.md
├── example-component/
│ ├── LICENSE
│ ├── MANIFEST.in
│ ├── README.md
│ ├── pentagon_component/
│ │ ├── __init__.py
│ │ └── files/
│ │ ├── __init__.py
│ │ └── example_template.jinja
│ ├── requirement.txt
│ └── setup.py
├── pentagon/
│ ├── __init__.py
│ ├── cli.py
│ ├── component/
│ │ ├── __init__.py
│ │ ├── aws_vpc/
│ │ │ ├── __init__.py
│ │ │ └── files/
│ │ │ ├── aws_vpc.auto.tfvars.jinja
│ │ │ ├── aws_vpc.tf.jinja
│ │ │ └── aws_vpc_variables.tf
│ │ ├── core/
│ │ │ ├── __init__.py
│ │ │ └── files/
│ │ │ ├── .gitignore
│ │ │ ├── README.md
│ │ │ ├── ansible-requirements.yml
│ │ │ ├── inventory/
│ │ │ │ └── __init__.py
│ │ │ ├── plugins/
│ │ │ │ ├── filter_plugins/
│ │ │ │ │ └── flatten.py
│ │ │ │ └── inventory/
│ │ │ │ ├── base
│ │ │ │ ├── ec2.ini
│ │ │ │ └── ec2.py
│ │ │ └── requirements.txt
│ │ ├── gcp/
│ │ │ ├── __init__.py
│ │ │ ├── cluster.py
│ │ │ └── files/
│ │ │ └── public_cluster/
│ │ │ └── cluster.tf.jinja
│ │ ├── inventory/
│ │ │ ├── __init__.py
│ │ │ └── files/
│ │ │ ├── __init__.py
│ │ │ └── common/
│ │ │ ├── clusters/
│ │ │ │ └── __init__.py
│ │ │ ├── config/
│ │ │ │ ├── local/
│ │ │ │ │ ├── ansible.cfg-default.jinja
│ │ │ │ │ ├── local-config-init.jinja
│ │ │ │ │ ├── ssh_config-default.jinja
│ │ │ │ │ └── vars.yml.jinja
│ │ │ │ └── private/
│ │ │ │ └── .gitignore
│ │ │ ├── kubernetes/
│ │ │ │ └── __init__.py
│ │ │ └── terraform/
│ │ │ ├── .gitignore
│ │ │ ├── backend.tf.jinja
│ │ │ └── provider.tf.jinja
│ │ ├── kops/
│ │ │ ├── __init__.py
│ │ │ └── files/
│ │ │ ├── cluster.yml.jinja
│ │ │ ├── kops.sh
│ │ │ ├── masters.yml.jinja
│ │ │ ├── nodes.yml.jinja
│ │ │ └── secret.sh.jinja
│ │ └── vpn/
│ │ ├── __init__.py
│ │ └── files/
│ │ └── admin-environment/
│ │ ├── destroy.yml
│ │ ├── env.yml.jinja
│ │ └── vpn.yml
│ ├── defaults.py
│ ├── filters.py
│ ├── helpers.py
│ ├── meta.py
│ ├── migration/
│ │ ├── __init__.py
│ │ └── migrations/
│ │ ├── __init__.py
│ │ ├── migration_1_2_0.py
│ │ ├── migration_2_0_0.py
│ │ ├── migration_2_1_0.py
│ │ ├── migration_2_2_0.py
│ │ ├── migration_2_3_1.py
│ │ ├── migration_2_4_1.py
│ │ ├── migration_2_4_3.py
│ │ ├── migration_2_5_0.py
│ │ ├── migration_2_6_0.py
│ │ ├── migration_2_6_2.py
│ │ ├── migration_2_7_1.py
│ │ ├── migration_2_7_3.py
│ │ └── migration_3_1_0.py
│ └── pentagon.py
├── setup.py
└── tests/
├── __init__.py
├── requirements.txt
├── test_args.py
└── test_base.py
SYMBOL INDEX (216 symbols across 32 files)
FILE: example-component/pentagon_component/__init__.py
class Component (line 5) | class Component(ComponentBase):
FILE: pentagon/cli.py
class RequiredIf (line 17) | class RequiredIf(click.Option):
method __init__ (line 19) | def __init__(self, *args, **kwargs):
method handle_parse_result (line 30) | def handle_parse_result(self, ctx, opts, args):
function validate_not_empty_string (line 39) | def validate_not_empty_string(ctx, param, value):
function cli (line 56) | def cli(ctx, log_level, *args, **kwargs):
function start_project (line 123) | def start_project(ctx, name, **kwargs):
function add (line 159) | def add(ctx, component_path, additional_args, **kwargs):
function get (line 170) | def get(ctx, component_path, additional_args, **kwargs):
function migrate (line 180) | def migrate(ctx, **kwargs):
function _run (line 186) | def _run(action, component_path, additional_args, options):
function get_component_class (line 234) | def get_component_class(component_path):
function parse_in_file (line 271) | def parse_in_file(file):
function parse_data (line 295) | def parse_data(data, d=None):
FILE: pentagon/component/__init__.py
class ComponentBase (line 14) | class ComponentBase(object):
method __init__ (line 26) | def __init__(self, data, additional_args=None, **kwargs):
method _destination_directory_name (line 45) | def _destination_directory_name(self):
method _files_directory (line 51) | def _files_directory(self):
method _process_env_vars (line 54) | def _process_env_vars(self):
method _process_defaults (line 69) | def _process_defaults(self):
method _render_directory_templates (line 85) | def _render_directory_templates(self):
method _remove_init_file (line 103) | def _remove_init_file(self):
method _merge_data (line 113) | def _merge_data(self, new_data, clobber=False):
method add (line 121) | def add(self, destination, overwrite=False):
method _display_settings_to_user (line 141) | def _display_settings_to_user(self):
method _add_files (line 168) | def _add_files(self, sub_path=None):
FILE: pentagon/component/aws_vpc/__init__.py
class AWSVpc (line 8) | class AWSVpc(ComponentBase):
method add (line 12) | def add(self, destination, overwrite):
FILE: pentagon/component/core/__init__.py
class Core (line 4) | class Core(ComponentBase):
FILE: pentagon/component/core/files/plugins/filter_plugins/flatten.py
class FilterModule (line 6) | class FilterModule (object):
method filters (line 7) | def filters(self):
FILE: pentagon/component/core/files/plugins/inventory/ec2.py
class Ec2Inventory (line 137) | class Ec2Inventory(object):
method _empty_inventory (line 138) | def _empty_inventory(self):
method __init__ (line 141) | def __init__(self):
method is_cache_valid (line 175) | def is_cache_valid(self):
method read_settings (line 188) | def read_settings(self):
method parse_cli_args (line 366) | def parse_cli_args(self):
method do_api_calls_update_cache (line 379) | def do_api_calls_update_cache(self):
method connect (line 396) | def connect(self, region):
method get_instances_by_region (line 408) | def get_instances_by_region(self, region):
method get_rds_instances_by_region (line 433) | def get_rds_instances_by_region(self, region):
method get_elasticache_clusters_by_region (line 452) | def get_elasticache_clusters_by_region(self, region):
method get_elasticache_replication_groups_by_region (line 488) | def get_elasticache_replication_groups_by_region(self, region):
method get_auth_error_message (line 522) | def get_auth_error_message(self):
method fail_with_error (line 539) | def fail_with_error(self, err_msg, err_operation=None):
method get_instance (line 547) | def get_instance(self, region, instance_id):
method add_instance (line 555) | def add_instance(self, instance, region):
method add_rds_instance (line 680) | def add_rds_instance(self, instance, region):
method add_elasticache_cluster (line 763) | def add_elasticache_cluster(self, cluster, region):
method add_elasticache_node (line 862) | def add_elasticache_node(self, node, cluster, region):
method add_elasticache_replication_group (line 949) | def add_elasticache_replication_group(self, replication_group, region):
method get_route53_records (line 1001) | def get_route53_records(self):
method get_instance_route53_names (line 1027) | def get_instance_route53_names(self, instance):
method get_host_info_dict_from_instance (line 1048) | def get_host_info_dict_from_instance(self, instance):
method get_host_info_dict_from_describe_dict (line 1093) | def get_host_info_dict_from_describe_dict(self, describe_dict):
method get_host_info (line 1178) | def get_host_info(self):
method push (line 1197) | def push(self, my_dict, key, element):
method push_group (line 1207) | def push_group(self, my_dict, key, element):
method get_inventory_from_cache (line 1216) | def get_inventory_from_cache(self):
method load_index_from_cache (line 1225) | def load_index_from_cache(self):
method write_to_cache (line 1233) | def write_to_cache(self, data, filename):
method uncammelize (line 1241) | def uncammelize(self, key):
method to_safe (line 1245) | def to_safe(self, word):
method json_format_dict (line 1251) | def json_format_dict(self, data, pretty=False):
FILE: pentagon/component/gcp/cluster.py
class Public (line 13) | class Public(ComponentBase):
method _files_directory (line 46) | def _files_directory(self):
FILE: pentagon/component/inventory/__init__.py
class Inventory (line 15) | class Inventory(ComponentBase):
method __init__ (line 26) | def __init__(self, data, additional_args=None, **kwargs):
method _files_directory (line 43) | def _files_directory(self):
method add (line 46) | def add(self, destination, overwrite=False):
method __create_keys (line 81) | def __create_keys(self):
class Aws (line 93) | class Aws(ComponentBase):
method add (line 95) | def add(self, destination):
class Gcp (line 99) | class Gcp(ComponentBase):
method add (line 101) | def add(self, destination):
FILE: pentagon/component/kops/__init__.py
class Cluster (line 16) | class Cluster(ComponentBase):
method add (line 19) | def add(self, destination):
method get (line 32) | def get(self, destination):
method _cluster_instance_groups (line 57) | def _cluster_instance_groups(self):
method _get_instance_group_yaml (line 68) | def _get_instance_group_yaml(self, ig):
method _get_cluster_admin_secret (line 91) | def _get_cluster_admin_secret(self):
method _get_cluster_yaml (line 104) | def _get_cluster_yaml(self):
FILE: pentagon/component/vpn/__init__.py
class Vpn (line 9) | class Vpn(ComponentBase):
method add (line 23) | def add(self, destination, overwrite=False):
method _get_vpn_ami_id (line 27) | def _get_vpn_ami_id(self):
FILE: pentagon/defaults.py
class AWSPentagonDefaults (line 4) | class AWSPentagonDefaults(object):
FILE: pentagon/filters.py
function register_filters (line 4) | def register_filters():
function get_jinja_filters (line 17) | def get_jinja_filters():
function regex_trim (line 23) | def regex_trim(input, regex, replace=''):
FILE: pentagon/helpers.py
function render_template (line 14) | def render_template(template_name, template_path, target, context, delet...
function write_yaml_file (line 55) | def write_yaml_file(filename, d, overwrite=False):
function create_rsa_key (line 69) | def create_rsa_key(name, path, bits=2048):
function merge_dict (line 86) | def merge_dict(d, new_data, clobber=False):
function allege_aws_availability_zones (line 95) | def allege_aws_availability_zones(region, count):
FILE: pentagon/migration/__init__.py
function migrate (line 26) | def migrate(branch='migration', yes=False):
function migrations_to_run (line 60) | def migrations_to_run(current_version, available_migrations):
function available_migrations (line 66) | def available_migrations():
function installed_version (line 77) | def installed_version():
function infrastructure_repository (line 82) | def infrastructure_repository():
function current_version (line 89) | def current_version(version_file=version_file):
class Migration (line 101) | class Migration(object):
class YamlEditor (line 104) | class YamlEditor(object):
method __init__ (line 106) | def __init__(self, file=None):
method update (line 117) | def update(self, new_data, clobber=False):
method remove (line 122) | def remove(self, keys):
method get_data (line 128) | def get_data(self):
method write (line 132) | def write(self, file=None):
method get (line 138) | def get(self, key, default=None):
method __getitem__ (line 141) | def __getitem__(self, key):
method __setitem__ (line 144) | def __setitem__(self, key, value):
method __str__ (line 147) | def __str__(self):
method __enter__ (line 150) | def __enter__(self):
method __exit__ (line 153) | def __exit__(self, type, value, traceback):
method __init__ (line 156) | def __init__(self, branch_name):
method start (line 161) | def start(self):
method version_only (line 165) | def version_only(self):
method real_path (line 169) | def real_path(self, path):
method _branch (line 172) | def _branch(self):
method _run (line 183) | def _run(self):
method _write_new_version (line 190) | def _write_new_version(self, version):
method _append_migration_readme (line 194) | def _append_migration_readme(self):
method move (line 199) | def move(self, source, destination):
method overwrite_file (line 213) | def overwrite_file(self, path, content, executable=False):
method create_file (line 217) | def create_file(self, path, content, executable=False):
method create_dir (line 228) | def create_dir(self, path):
method get_file_content (line 237) | def get_file_content(self, path):
method inventory (line 243) | def inventory(self, exclude=[]):
method delete (line 247) | def delete(self, path):
method find_files (line 258) | def find_files(self, path='./', file_pattern=None):
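The `YamlEditor` methods listed above (`__enter__`, `__exit__`, `update`, `write`, item access) imply a context-manager workflow: open a config file, mutate its data, and persist on exit. A minimal self-contained sketch of that pattern; the real class parses YAML with `oyaml`, so stdlib `json` stands in here, and the usage path is hypothetical:

```python
import json
import os
import tempfile


class YamlEditor(object):
    """Sketch of the context-manager pattern suggested by the index above."""

    def __init__(self, file=None):
        self.file = file
        self.data = {}
        if file and os.path.exists(file):
            with open(file) as f:
                self.data = json.load(f)

    def update(self, new_data, clobber=False):
        for key, value in new_data.items():
            if key not in self.data or clobber:
                self.data[key] = value

    def get(self, key, default=None):
        return self.data.get(key, default)

    def __getitem__(self, key):
        return self.data[key]

    def __setitem__(self, key, value):
        self.data[key] = value

    def write(self, file=None):
        with open(file or self.file, 'w') as f:
            json.dump(self.data, f)

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.write()  # persist edits when the with-block exits


# Hypothetical usage: edit a config file in place.
path = os.path.join(tempfile.mkdtemp(), 'config.json')
with open(path, 'w') as f:
    json.dump({'cluster': 'working-1'}, f)

with YamlEditor(path) as y:
    y['region'] = 'us-east-1'
```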
FILE: pentagon/migration/migrations/migration_1_2_0.py
class Migration (line 8) | class Migration(migration.Migration):
method run (line 12) | def run(self):
FILE: pentagon/migration/migrations/migration_2_0_0.py
class Migration (line 6) | class Migration(migration.Migration):
method run (line 10) | def run(self):
FILE: pentagon/migration/migrations/migration_2_1_0.py
class Migration (line 8) | class Migration(migration.Migration):
method run (line 12) | def run(self):
FILE: pentagon/migration/migrations/migration_2_2_0.py
class Migration (line 8) | class Migration(migration.Migration):
method run (line 12) | def run(self):
FILE: pentagon/migration/migrations/migration_2_3_1.py
class Migration (line 7) | class Migration(migration.Migration):
method run (line 11) | def run(self):
FILE: pentagon/migration/migrations/migration_2_4_1.py
class Migration (line 7) | class Migration(migration.Migration):
method run (line 11) | def run(self):
FILE: pentagon/migration/migrations/migration_2_4_3.py
class Migration (line 9) | class Migration(migration.Migration):
method run (line 13) | def run(self):
FILE: pentagon/migration/migrations/migration_2_5_0.py
class Migration (line 9) | class Migration(migration.Migration):
method run (line 13) | def run(self):
FILE: pentagon/migration/migrations/migration_2_6_0.py
class Migration (line 9) | class Migration(migration.Migration):
method run (line 13) | def run(self):
FILE: pentagon/migration/migrations/migration_2_6_2.py
class folded_unicode (line 235) | class folded_unicode(unicode):
class literal_unicode (line 239) | class literal_unicode(unicode):
function folded_unicode_representer (line 243) | def folded_unicode_representer(dumper, data):
function literal_unicode_representer (line 247) | def literal_unicode_representer(dumper, data):
class Migration (line 256) | class Migration(migration.Migration):
method run (line 262) | def run(self):
FILE: pentagon/migration/migrations/migration_2_7_1.py
class folded_unicode (line 30) | class folded_unicode(unicode):
class literal_unicode (line 34) | class literal_unicode(unicode):
function folded_unicode_representer (line 38) | def folded_unicode_representer(dumper, data):
function literal_unicode_representer (line 42) | def literal_unicode_representer(dumper, data):
class Migration (line 59) | class Migration(migration.Migration):
method run (line 65) | def run(self):
FILE: pentagon/migration/migrations/migration_2_7_3.py
class folded_unicode (line 28) | class folded_unicode(unicode):
class literal_unicode (line 32) | class literal_unicode(unicode):
function folded_unicode_representer (line 36) | def folded_unicode_representer(dumper, data):
function literal_unicode_representer (line 40) | def literal_unicode_representer(dumper, data):
class Migration (line 64) | class Migration(migration.Migration):
method run (line 70) | def run(self):
FILE: pentagon/migration/migrations/migration_3_1_0.py
class folded_unicode (line 27) | class folded_unicode(unicode):
class literal_unicode (line 31) | class literal_unicode(unicode):
function folded_unicode_representer (line 35) | def folded_unicode_representer(dumper, data):
function literal_unicode_representer (line 39) | def literal_unicode_representer(dumper, data):
class Migration (line 64) | class Migration(migration.Migration):
method run (line 70) | def run(self):
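Every `migration_X_Y_Z.py` file above follows the same contract: subclass `migration.Migration` and implement `run()`. A sketch of that shape; the version attributes and the concrete subclass here are illustrative assumptions, not copied from the source:

```python
class Migration(object):
    """Base-class contract implied by the migration files above:
    each migration subclasses Migration and implements run()."""

    _starting_version = None  # version this migration upgrades from (assumed)
    _ending_version = None    # version it upgrades to (assumed)

    def run(self):
        raise NotImplementedError


class ExampleMigration(Migration):
    """Hypothetical migration following the pattern of the listed files."""

    _starting_version = '1.2.0'
    _ending_version = '2.0.0'

    def run(self):
        # a real migration would move files and rewrite YAML here
        return 'migrated to {}'.format(self._ending_version)
```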
FILE: pentagon/pentagon.py
class PentagonException (line 25) | class PentagonException(Exception):
class PentagonProject (line 29) | class PentagonProject(object):
method __init__ (line 33) | def __init__(self, name, data={}):
method get_data (line 54) | def get_data(self, name, default=None):
method __git_init (line 62) | def __git_init(self):
method __write_config_file (line 66) | def __write_config_file(self):
method __repository_directory_exists (line 86) | def __repository_directory_exists(self):
method start (line 96) | def start(self):
method __create_repo_core (line 111) | def __create_repo_core(self):
class AWSPentagonProject (line 116) | class AWSPentagonProject(PentagonProject):
method __init__ (line 142) | def __init__(self, name, data={}):
method context (line 230) | def context(self):
method __add_kops_working_cluster (line 262) | def __add_kops_working_cluster(self):
method __add_kops_production_cluster (line 284) | def __add_kops_production_cluster(self):
method configure_default_project (line 306) | def configure_default_project(self):
class GCPPentagonProject (line 313) | class GCPPentagonProject(PentagonProject):
method __init__ (line 315) | def __init__(self, name, data={}):
method _build_inv_params (line 325) | def _build_inv_params(name, input_context):
method configure_default_project (line 348) | def configure_default_project(self):
FILE: setup.py
function package_files (line 30) | def package_files(directory):
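`package_files(directory)` in `setup.py` matches a common setuptools idiom: walk a directory tree and collect every file path so non-Python assets (templates, Terraform files) can be shipped as package data. A sketch of that idiom; the repository's actual implementation may differ in detail:

```python
import os


def package_files(directory):
    """Collect every file under `directory` for use as package data.

    Sketch of the common setup.py idiom suggested by the name and
    signature above.
    """
    paths = []
    for root, _dirs, files in os.walk(directory):
        for filename in files:
            paths.append(os.path.join(root, filename))
    return paths
```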
FILE: tests/test_args.py
class TestPentagonProjectWithoutArgs (line 8) | class TestPentagonProjectWithoutArgs(TestPentagonProject):
method setUp (line 11) | def setUp(self):
method tearDown (line 14) | def tearDown(self):
class TestPentagonProjectWithAllArgs (line 18) | class TestPentagonProjectWithAllArgs(TestPentagonProject):
method setUp (line 64) | def setUp(self):
method tearDown (line 67) | def tearDown(self):
method test_configure_project (line 70) | def test_configure_project(self):
method test_aws_availability_zones (line 73) | def test_aws_availability_zones(self):
method test_vpc_name (line 79) | def test_vpc_name(self):
method test_kops_args (line 82) | def test_kops_args(self):
method test_kubernetes_args (line 85) | def test_kubernetes_args(self):
class TestPentagonProjectWithMinimalArgs (line 107) | class TestPentagonProjectWithMinimalArgs(TestPentagonProject):
method setUp (line 119) | def setUp(self):
method tearDown (line 122) | def tearDown(self):
method test_configure_project (line 125) | def test_configure_project(self):
method test_aws_availability_zones (line 128) | def test_aws_availability_zones(self):
class TestPentagon (line 137) | class TestPentagon(TestPentagonProject):
method test_noninteget_az_count (line 139) | def test_noninteget_az_count(self):
FILE: tests/test_base.py
class TestPentagonProject (line 8) | class TestPentagonProject(unittest.TestCase):
method setUp (line 11) | def setUp(self):
method tearDown (line 14) | def tearDown(self):
method test_instance (line 17) | def test_instance(self):
method test_name (line 20) | def test_name(self):
method test_repository_name (line 24) | def test_repository_name(self):
method test_repository_directory (line 27) | def test_repository_directory(self):
Condensed preview — 94 files, each showing path, character count, and a content snippet.
[
{
"path": ".circleci/config.yml",
"chars": 3488,
"preview": "#Copyright 2017 Reactive Ops Inc.\n#\n#Licensed under the Apache License, Version 2.0 (the “License”);\n#you may not use th"
},
{
"path": ".github/stale.yml",
"chars": 684,
"preview": "# Number of days of inactivity before an issue becomes stale\ndaysUntilStale: 60\n# Number of days of inactivity before a "
},
{
"path": ".gitignore",
"chars": 96,
"preview": ".DS_Store\n.terraform\nconfig/private\n*.pyc\n*.pem\n*.pub\npentagon.egg-info\n.vscode\nvenv\ndist\nbuild\n"
},
{
"path": "CHANGELOG.md",
"chars": 6519,
"preview": "# Changelog\nAll notable changes to this project will be documented in this file.\n\nThe format is based on [Keep a Changel"
},
{
"path": "CODEOWNERS",
"chars": 21,
"preview": "* @ejether @endzyme\n"
},
{
"path": "CODE_OF_CONDUCT.md",
"chars": 3230,
"preview": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nIn the interest of fostering an open and welcoming environment, w"
},
{
"path": "CONTRIBUTING.md",
"chars": 2390,
"preview": "# How to contribute\n\nIssues, whether bugs, tasks, or feature requests are essential for keeping Pentagon (and ReactiveOp"
},
{
"path": "DESIGN.md",
"chars": 2437,
"preview": "# Pentagon Design Document:\n\n## Intent\n\nPentagon is a framework for generating an Infrastructure As Code Repository (IAC"
},
{
"path": "Dockerfile",
"chars": 684,
"preview": "FROM ubuntu:16.04\n\nRUN apt-get update && apt-get install software-properties-common -y\nRUN apt-add-repository ppa:ansibl"
},
{
"path": "LICENSE",
"chars": 11348,
"preview": " Apache License\n Version 2.0, January 2004\n "
},
{
"path": "MANIFEST.in",
"chars": 38,
"preview": "recursive-include pentagon/component *"
},
{
"path": "README.md",
"chars": 2086,
"preview": "# Pentagon\n\n# *Pentagon has been deprecated and will no longer be maintained.*\n\n## What is Pentagon?\n\n**Pentagon is a cl"
},
{
"path": "bin/yaml_source",
"chars": 865,
"preview": "#!/bin/bash -e\n\nusage=\"$0 file [unset] -- Where file.yml is a yml file of key value pairs\n Sets environment variable "
},
{
"path": "docs/_config.yml",
"chars": 25,
"preview": "theme: jekyll-theme-dinky"
},
{
"path": "docs/components.md",
"chars": 12507,
"preview": "# Pentagon Components\n\nThe functionality of Pentagon can be extended with components. Currently only two commands are ac"
},
{
"path": "docs/getting-started.md",
"chars": 16971,
"preview": "# What is Pentagon?\n\n**Pentagon is a cli tool to generate repeatable, cloud-based [Kubernetes](https://kubernetes.io/) i"
},
{
"path": "docs/network.md",
"chars": 5287,
"preview": "# VPC Description\nWe create a base VPC with [terraform-vpc](https://github.com/reactiveops/terraform-vpc) that allocates"
},
{
"path": "docs/overview.md",
"chars": 7190,
"preview": "# Infrastructure Repository Overview\nAfter running `pentagon start-project` you will have a directory with a layout simi"
},
{
"path": "docs/vpn.md",
"chars": 2112,
"preview": "# VPN\n\n## Setup\nThe VPN allows ssh access to intances in the private subnets in the VPC. This includes the KOPS created "
},
{
"path": "example-component/LICENSE",
"chars": 11348,
"preview": " Apache License\n Version 2.0, January 2004\n "
},
{
"path": "example-component/MANIFEST.in",
"chars": 46,
"preview": "recursive-include pentagoncomponent/files/ *\n\n"
},
{
"path": "example-component/README.md",
"chars": 0,
"preview": ""
},
{
"path": "example-component/pentagon_component/__init__.py",
"chars": 127,
"preview": "from pentagon.component import ComponentBase\nimport os\n\n\nclass Component(ComponentBase):\n _path = os.path.dirname(__f"
},
{
"path": "example-component/pentagon_component/files/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "example-component/pentagon_component/files/example_template.jinja",
"chars": 7,
"preview": "# blank"
},
{
"path": "example-component/requirement.txt",
"chars": 0,
"preview": ""
},
{
"path": "example-component/setup.py",
"chars": 1241,
"preview": "#!/usr/bin/env python\n# -- coding: utf-8 --\n# Copyright 2017 Reactive Ops Inc.\n#\n# Licensed under the Apache License, Ve"
},
{
"path": "pentagon/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "pentagon/cli.py",
"chars": 15338,
"preview": "#!/usr/bin/env python\nimport os\nimport click\nimport logging\nimport coloredlogs\nimport traceback\nimport oyaml as yaml\nimp"
},
{
"path": "pentagon/component/__init__.py",
"chars": 7642,
"preview": "import os\nimport glob\nimport shutil\nimport logging\nimport traceback\nimport sys\nimport re\nimport click\n\nfrom pentagon.hel"
},
{
"path": "pentagon/component/aws_vpc/__init__.py",
"chars": 729,
"preview": "import os\n\nfrom pentagon.component import ComponentBase\nfrom pentagon.defaults import AWSPentagonDefaults as PentagonDef"
},
{
"path": "pentagon/component/aws_vpc/files/aws_vpc.auto.tfvars.jinja",
"chars": 908,
"preview": "aws_vpc_name = \"{{ vpc_name }}\"\nvpc_cidr_base = \"{{ vpc_cidr_base }}\"\naws_azs = \"{{ aws_availability_zones }}\"\naz_count"
},
{
"path": "pentagon/component/aws_vpc/files/aws_vpc.tf.jinja",
"chars": 1685,
"preview": "\nmodule \"vpc\" {\n source = \"git::https://github.com/reactiveops/terraform-vpc.git?ref=v3.0.0"
},
{
"path": "pentagon/component/aws_vpc/files/aws_vpc_variables.tf",
"chars": 523,
"preview": "variable \"aws_region\" {}\nvariable \"aws_azs\" {}\n\nvariable \"aws_vpc_name\" {}\n\nvariable \"az_count\" {}\nvariable \"vpc_cidr_ba"
},
{
"path": "pentagon/component/core/__init__.py",
"chars": 83,
"preview": "from pentagon.component import ComponentBase\n\n\nclass Core(ComponentBase):\n pass\n"
},
{
"path": "pentagon/component/core/files/.gitignore",
"chars": 63,
"preview": ".DS_Store\n.terraform\n*.pyc\n*.pem\n*.pub\n*secret*.yml\nroles\nhelm\n"
},
{
"path": "pentagon/component/core/files/README.md",
"chars": 2092,
"preview": "# Documentation\n\n## Getting Started\n\n### System Requirements\n\nThis repository relies on system tools and Python librarie"
},
{
"path": "pentagon/component/core/files/ansible-requirements.yml",
"chars": 702,
"preview": "---\n\n##\n# Dependents not located in galaxy.ansible.com need to precede their parents\n##\n- src: \"git+https://github.com/r"
},
{
"path": "pentagon/component/core/files/inventory/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "pentagon/component/core/files/plugins/filter_plugins/flatten.py",
"chars": 225,
"preview": "# This function will take an irregular list composed of lists \n# and flatten it\n\nfrom compiler.ast import flatten\n\nclass"
},
{
"path": "pentagon/component/core/files/plugins/inventory/base",
"chars": 168,
"preview": "# https://github.com/ansible/ansible-modules-core/issues/2601#issuecomment-189503881\n[all:vars]\nansible_python_interpret"
},
{
"path": "pentagon/component/core/files/plugins/inventory/ec2.ini",
"chars": 6230,
"preview": "# Ansible EC2 external inventory script settings\n#\n\n[ec2]\n\n# to talk to a private eucalyptus instance uncomment these li"
},
{
"path": "pentagon/component/core/files/plugins/inventory/ec2.py",
"chars": 52494,
"preview": "#!/usr/bin/env python\n\n'''\nEC2 external inventory script\n=================================\n\nGenerates inventory that Ans"
},
{
"path": "pentagon/component/core/files/requirements.txt",
"chars": 0,
"preview": ""
},
{
"path": "pentagon/component/gcp/__init__.py",
"chars": 15,
"preview": "import cluster\n"
},
{
"path": "pentagon/component/gcp/cluster.py",
"chars": 1652,
"preview": "\"\"\"\ncluster.py\nThis class has a lot of magic in ComponentBase from pentagon. It can be\ndifficult to discern what propert"
},
{
"path": "pentagon/component/gcp/files/public_cluster/cluster.tf.jinja",
"chars": 2586,
"preview": "# These local variables can be used as inputs to both a network and this GKE VPC Native cluster module. "
},
{
"path": "pentagon/component/inventory/__init__.py",
"chars": 3980,
"preview": "import os\nimport json\nimport sys\nimport logging\nimport traceback\n\nfrom pentagon.component import ComponentBase\nfrom pent"
},
{
"path": "pentagon/component/inventory/files/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "pentagon/component/inventory/files/common/clusters/__init__.py",
"chars": 13,
"preview": "#__init__.py\n"
},
{
"path": "pentagon/component/inventory/files/common/config/local/ansible.cfg-default.jinja",
"chars": 391,
"preview": "[defaults]\ninventory = $INFRASTRUCTURE_REPO/plugins/inventory\nroles_path = $INFRASTRUCTURE_REPO/roles\nfilter_plugins = $"
},
{
"path": "pentagon/component/inventory/files/common/config/local/local-config-init.jinja",
"chars": 1110,
"preview": "#!/usr/bin/env bash\n\n# this script creates personalized copies of *-default files\n# the scripts primary purpose is to cr"
},
{
"path": "pentagon/component/inventory/files/common/config/local/ssh_config-default.jinja",
"chars": 2035,
"preview": "# for the kube / kops working instances\nHost 172.20.64.* 172.20.65.* 172.20.66.* 172.20.67.* 172.20.68.* 172.20.69.* 172"
},
{
"path": "pentagon/component/inventory/files/common/config/local/vars.yml.jinja",
"chars": 990,
"preview": "ANSIBLE_CONFIG: '${INFRASTRUCTURE_REPO}/inventory/${INVENTORY}/config/private/ansible.cfg'\nKUBECONFIG: '${INFRASTRUCTURE"
},
{
"path": "pentagon/component/inventory/files/common/config/private/.gitignore",
"chars": 14,
"preview": "*\n!.gitignore\n"
},
{
"path": "pentagon/component/inventory/files/common/kubernetes/__init__.py",
"chars": 13,
"preview": "#__init__.py\n"
},
{
"path": "pentagon/component/inventory/files/common/terraform/.gitignore",
"chars": 58,
"preview": "*.tfplan\n*.tfstate\n*.tfstate.backup\n.terraform/\n.DS_Store\n"
},
{
"path": "pentagon/component/inventory/files/common/terraform/backend.tf.jinja",
"chars": 381,
"preview": "// terraform backend config\n\nterraform {\n{%- if cloud | lower == 'aws' %}\n backend \"s3\" {\n bucket = \"{{ infrastructu"
},
{
"path": "pentagon/component/inventory/files/common/terraform/provider.tf.jinja",
"chars": 311,
"preview": "\n{%- if cloud | lower == 'aws' %}\nprovider \"aws\" {\n # Configuration set in env vars $AWS_ACCESS_KEY_ID, $AWS_SECRET_A"
},
{
"path": "pentagon/component/kops/__init__.py",
"chars": 3764,
"preview": "import os\nimport glob\nimport shutil\nimport logging\nimport traceback\nimport sys\nimport re\nimport subprocess\nimport yaml\n\n"
},
{
"path": "pentagon/component/kops/files/cluster.yml.jinja",
"chars": 9504,
"preview": "apiVersion: kops/v1alpha2\nkind: Cluster\nmetadata:\n name: {{ cluster_name }}\nspec:\n kubelet:\n anonymousAuth: false\n "
},
{
"path": "pentagon/component/kops/files/kops.sh",
"chars": 123,
"preview": "#!/bin/bash\nset -x\nset -e\n\nkops create -f cluster.yml\nkops create -f masters.yml\nkops create -f nodes.yml\nbash ./secret."
},
{
"path": "pentagon/component/kops/files/masters.yml.jinja",
"chars": 336,
"preview": "{% for az in master_availability_zones -%}\n---\napiVersion: kops/v1alpha2\nkind: InstanceGroup\nmetadata:\n labels:\n kop"
},
{
"path": "pentagon/component/kops/files/nodes.yml.jinja",
"chars": 595,
"preview": "{% for az in availability_zones -%}\n---\napiVersion: kops/v1alpha2\nkind: InstanceGroup\nmetadata:\n labels:\n kops.k8s.i"
},
{
"path": "pentagon/component/kops/files/secret.sh.jinja",
"chars": 86,
"preview": "kops create secret sshpublickey admin -i {{ ssh_key_path }} --name {{ cluster_name }}\n"
},
{
"path": "pentagon/component/vpn/__init__.py",
"chars": 1643,
"preview": "\nimport os\nimport logging\nimport boto3\n\nfrom pentagon.component import ComponentBase\n\n\nclass Vpn(ComponentBase):\n\n _r"
},
{
"path": "pentagon/component/vpn/files/admin-environment/destroy.yml",
"chars": 618,
"preview": "---\n- name: remove admin ssh key\n hosts: localhost\n connection: local\n gather_facts: False\n\n pre_tasks:\n - includ"
},
{
"path": "pentagon/component/vpn/files/admin-environment/env.yml.jinja",
"chars": 846,
"preview": "---\n{% raw -%}\nenv: \"admin-{{ org }}\"\n{%- endraw %}\naws_key_name: '{{ admin_vpn_key }}'\ndefault_ami: '{{ vpn_ami_id }}'\n"
},
{
"path": "pentagon/component/vpn/files/admin-environment/vpn.yml",
"chars": 1034,
"preview": "---\n- name: upload admin ssh key\n hosts: localhost\n connection: local\n gather_facts: False\n\n pre_tasks:\n - includ"
},
{
"path": "pentagon/defaults.py",
"chars": 1530,
"preview": "from datetime import datetime\n\n\nclass AWSPentagonDefaults(object):\n ssh = {\n 'admin_vpn_key': 'admin-vpn',\n "
},
{
"path": "pentagon/filters.py",
"chars": 676,
"preview": "import re\n\n\ndef register_filters():\n\t\"\"\"Register a function with decorator\"\"\"\n\tregistry = {}\n\tdef registrar(func):\n\t\treg"
},
{
"path": "pentagon/helpers.py",
"chars": 4147,
"preview": "import logging\nimport os\nimport traceback\nimport jinja2\nimport string\n\nimport oyaml as yaml\nfrom Crypto.PublicKey import"
},
{
"path": "pentagon/meta.py",
"chars": 55,
"preview": "__version__ = \"3.1.4\"\n__author__ = 'ReactiveOps, Inc.'\n"
},
{
"path": "pentagon/migration/__init__.py",
"chars": 9240,
"preview": "\nimport logging\nimport os\nimport shutil\nimport sys\nimport glob\nimport git\nimport oyaml as yaml\nimport semver\nimport fnma"
},
{
"path": "pentagon/migration/migrations/__init__.py",
"chars": 45,
"preview": "\nfrom pentagon.migration.migrations import *\n"
},
{
"path": "pentagon/migration/migrations/migration_1_2_0.py",
"chars": 7052,
"preview": "\nimport pentagon\nfrom pentagon import migration\nfrom collections import OrderedDict\nfrom pentagon.migration import *\n\n\nc"
},
{
"path": "pentagon/migration/migrations/migration_2_0_0.py",
"chars": 744,
"preview": "\nfrom pentagon import migration\nfrom pentagon.migration import *\n\n\nclass Migration(migration.Migration):\n _starting_v"
},
{
"path": "pentagon/migration/migrations/migration_2_1_0.py",
"chars": 3920,
"preview": "\nfrom pentagon import migration\nfrom pentagon.migration import *\nfrom pentagon.component import core, inventory\nfrom pen"
},
{
"path": "pentagon/migration/migrations/migration_2_2_0.py",
"chars": 595,
"preview": "\nfrom pentagon import migration\nfrom pentagon.migration import *\nfrom pentagon.component import core, inventory\nfrom pen"
},
{
"path": "pentagon/migration/migrations/migration_2_3_1.py",
"chars": 1542,
"preview": "from pentagon import migration\nfrom pentagon.migration import *\nfrom pentagon.component import core, inventory\nfrom pent"
},
{
"path": "pentagon/migration/migrations/migration_2_4_1.py",
"chars": 471,
"preview": "from pentagon import migration\nfrom pentagon.migration import *\nfrom pentagon.component import core, inventory\nfrom pent"
},
{
"path": "pentagon/migration/migrations/migration_2_4_3.py",
"chars": 2083,
"preview": "import oyaml as yaml\n\nfrom pentagon import migration\nfrom pentagon.migration import *\nfrom pentagon.component import cor"
},
{
"path": "pentagon/migration/migrations/migration_2_5_0.py",
"chars": 2412,
"preview": "from pentagon import migration\nfrom pentagon.migration import *\nfrom pentagon.component import core, inventory\nfrom pent"
},
{
"path": "pentagon/migration/migrations/migration_2_6_0.py",
"chars": 1223,
"preview": "from pentagon import migration\nfrom pentagon.migration import *\nfrom pentagon.component import core, inventory\nfrom pent"
},
{
"path": "pentagon/migration/migrations/migration_2_6_2.py",
"chars": 21108,
"preview": "from copy import deepcopy\n\n\nfrom pentagon import migration\nfrom pentagon.migration import *\nimport yaml\n\nig_message = \"\""
},
{
"path": "pentagon/migration/migrations/migration_2_7_1.py",
"chars": 4489,
"preview": "from copy import deepcopy\n\n\nfrom pentagon import migration\nimport yaml\nimport os\nimport logging\n\nreadme = \"\"\"\n\n# Migrati"
},
{
"path": "pentagon/migration/migrations/migration_2_7_3.py",
"chars": 4381,
"preview": "from copy import deepcopy\n\n\nfrom pentagon import migration\nimport yaml\nimport os\nimport logging\n\nreadme = \"\"\"\n\n# Migrati"
},
{
"path": "pentagon/migration/migrations/migration_3_1_0.py",
"chars": 4425,
"preview": "from copy import deepcopy\n\n\nfrom pentagon import migration\nimport yaml\nimport os\nimport logging\n\nreadme = \"\"\"\n\n# Migrati"
},
{
"path": "pentagon/pentagon.py",
"chars": 16798,
"preview": "# from __future__ import (absolute_import, division, print_function)\n# __metaclass__ = type\n\nimport datetime\nimport shut"
},
{
"path": "setup.py",
"chars": 2852,
"preview": "#!/usr/bin/env python\n# -- coding: utf-8 --\n# Copyright 2017 Reactive Ops Inc.\n#\n# Licensed under the Apache License, Ve"
},
{
"path": "tests/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "tests/requirements.txt",
"chars": 35,
"preview": "nose==1.3.7\npytest\nflake8\nautopep8\n"
},
{
"path": "tests/test_args.py",
"chars": 5841,
"preview": "import unittest\nimport pentagon.pentagon as pentagon\nimport os\nimport logging\nfrom tests.test_base import TestPentagonPr"
},
{
"path": "tests/test_base.py",
"chars": 725,
"preview": "\nimport unittest\nimport pentagon.pentagon as pentagon\nimport os\nimport logging\n\n\nclass TestPentagonProject(unittest.Test"
}
]
About this extraction
This page contains the full source code of the reactiveops/pentagon GitHub repository, extracted and formatted as plain text. The extraction covers 94 files (300.9 KB, approximately 72.4k tokens) and includes a symbol index of 216 extracted functions, classes, methods, constants, and types.