Repository: Traackr/ansible-elasticsearch
Branch: master
Commit: 21905c6b445c
Files: 29
Total size: 62.8 KB
Directory structure:
ansible-elasticsearch/
├── .gitignore
├── .travis.yml
├── README.md
├── Vagrantfile
├── defaults/
│ └── main.yml
├── handlers/
│ └── main.yml
├── meta/
│ └── main.yml
├── tasks/
│ ├── aws.yml
│ ├── custom-jars.yml
│ ├── elastic-install.yml
│ ├── java.yml
│ ├── main.yml
│ ├── marvel.yml
│ ├── plugins.yml
│ ├── post-run.yml
│ ├── spm.yml
│ └── timezone.yml
├── templates/
│ ├── elasticsearch.default.j2
│ ├── elasticsearch.in.sh.j2
│ └── elasticsearch.yml.j2
├── tests/
│ ├── ansible.cfg
│ ├── elastic_test.sh
│ ├── local.ini
│ ├── test1.yml
│ └── test1_var.yml
├── vagrant-inventory.ini
├── vagrant-main.yml
└── vars/
├── sample.yml
└── vagrant.yml
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Numerous always-ignore extensions
*.diff
*.err
*.orig
*.log
*.rej
*.swo
*.swp
*.vi
*~
*.sass-cache
*.iml
# OS or Editor folders
.DS_Store
Thumbs.db
.cache
.project
.classpath
.settings
.tmproj
*.esproj
nbproject
*.sublime-project
*.sublime-workspace
*.ipr
*.iws
# Folders to ignore
.hg
.svn
.CVS
intermediate
publish
.idea
target
target-eclipse
.vagrant
================================================
FILE: .travis.yml
================================================
---
language: python
python: "2.7"
before_install:
# Make sure everything's up to date.
- sudo apt-get update -qq
- sudo apt-get install -qq python-apt python-pycurl
install:
# Install Ansible.
- pip install ansible
script:
- cd tests
- echo -e "\e[31m######AnsibleVersion#######\e[0m"
- ansible --version
# 1st: check syntax
- echo -e "\e[31m######***************************** SYNTAX CHECK (1) *****************************\e[0m"
- ansible-playbook -i local.ini test1.yml --syntax-check
# 2nd: Make sure we run the entire playbook
- echo "***************************** RUN PLAY (2) *****************************"
- ansible-playbook -i local.ini test1.yml --sudo -vvvv --diff
- sudo netstat -antlp | grep LISTEN
- sudo ps aux | grep java
- curl 127.0.0.1:9200
# 3rd: Make sure our playbook is idempotent
- echo "***************************** Idempotence test (3) *****************************"
- >
  ansible-playbook -i local.ini test1.yml --sudo -vvvv --diff | tee ansible_output
  | grep -q 'changed=0.*failed=0'
  && (echo 'Idempotence test: pass' && exit 0)
  || (echo 'Idempotence test: fail' && exit 1)
# 4th: Application test
- ./elastic_test.sh
after_failure:
- echo -e "\e[31m######IdempotenceLog#######\e[0m"
- sudo cat ansible_output
- echo -e "\e[31m######netstat#######\e[0m"
- sudo netstat -atnlp
- echo -e "\e[31m######ps#######\e[0m"
- sudo ps aux | grep java
- echo -e "\e[31m######CurlElasticsearch#######\e[0m"
- curl 127.0.0.1:9200
- echo -e "\e[31m######AnsibleFacts#######\e[0m"
- ansible -i 127.0.0.1, -m setup all -c local
after_success:
- echo -e "\e[0;32m######Cool Success#######\e[0m"
================================================
FILE: README.md
================================================
# Important Note
This project is no longer actively maintained. We recommend using the official [ansible-elasticsearch](https://github.com/elastic/ansible-elasticsearch) repo which is a lot more comprehensive.
# Ansible Playbook for Elasticsearch
[![Build Status](https://travis-ci.org/Traackr/ansible-elasticsearch.svg?branch=master)](https://travis-ci.org/Traackr/ansible-elasticsearch)
This is an [Ansible](http://www.ansibleworks.com/) playbook for [Elasticsearch](http://www.elasticsearch.org/). You can use it by itself or as part of a larger playbook customized for your local environment.
## Features
- Support for installing plugins
- Support for installing and configuring EC2 plugin
- Support for installing custom JARs in the Elasticsearch classpath (e.g. custom Lucene Similarity JAR)
- Support for installing the [Sematext SPM](http://www.sematext.com/spm/) monitor
- Support for installing the [Marvel](http://www.elasticsearch.org/guide/en/marvel/current/) plugin
## Installing
Install [ansible-elasticsearch](https://galaxy.ansible.com/list#/roles/181) via Ansible Galaxy:
```
ansible-galaxy install gpstathis.elasticsearch
```
## Testing locally with Vagrant
A sample [Vagrant](http://www.vagrantup.com/) configuration is provided to help with local testing. After installing Vagrant, run `vagrant up` at the root of the project to get a VM instance bootstrapped and configured with a running instance of Elasticsearch. Look at `vars/vagrant.yml` and `defaults/main.yml` for the variables that will be substituted in `templates/elasticsearch.yml.j2`.
## Running Standalone Playbook
### Copy Example Files
Make copies of the following files and rename them to suit your environment. E.g.:
- vagrant-main.yml => my-playbook-main.yml
- vagrant-inventory.ini => my-inventory.ini
- vars/vagrant.yml => vars/my-vars.yml
Edit the copied files to suit your environment and needs. See examples below.
### Edit your my-inventory.ini
Edit your my-inventory.ini and customize your cluster and node names:
```
#########################
# Elasticsearch Cluster #
#########################
[es_node_1]
1.2.3.4.compute-1.amazonaws.com
[es_node_1:vars]
elasticsearch_node_name=elasticsearch-1
[es_node_2]
5.6.7.8.compute-1.amazonaws.com
[es_node_2:vars]
elasticsearch_node_name=elasticsearch-2
[es_node_3]
9.10.11.12.compute-1.amazonaws.com
[es_node_3:vars]
elasticsearch_node_name=elasticsearch-3
[all_nodes:children]
es_node_1
es_node_2
es_node_3
[all_nodes:vars]
elasticsearch_cluster_name=my.elasticsearch.cluster
elasticsearch_plugin_aws_ec2_groups=MyElasticSearchGroup
spm_client_token=<your SPM token here>
```
### Edit your vars/my-vars.yml
See `vars/sample.yml` and `vars/vagrant.yml` for example variable files. These are the files where you specify Elasticsearch settings and enable certain features such as plugins, custom JARs or monitoring. The best way to discover the available configurations is to look at `templates/elasticsearch.yml.j2` and see which variables you want to define in your `vars/my-vars.yml`. See below for configurations regarding EC2, plugins and custom JARs.
### Edit your my-playbook-main.yml
Example `my-playbook-main.yml`:
```
---
#########################
# Elasticsearch install #
#########################
- hosts: all_nodes
  user: $user
  sudo: yes
  vars_files:
    - defaults/main.yml
    - vars/my-vars.yml
  tasks:
    - include: tasks/main.yml
```
### Launch
```
$ ansible-playbook my-playbook-main.yml -i my-inventory.ini -e user=<your sudo user for the elasticsearch installation>
```
## Enabling Added Features
### Configuring EC2
The following variables need to be defined in your playbook or inventory:
- elasticsearch_plugin_aws_version
See [https://github.com/elasticsearch/elasticsearch-cloud-aws](https://github.com/elasticsearch/elasticsearch-cloud-aws) for the version that most accurately matches your installation.
The following variables provide a limited configuration for the plugin for now. More options may be available in the future (see [http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html)):
- elasticsearch_plugin_aws_ec2_groups
- elasticsearch_plugin_aws_ec2_ping_timeout
- elasticsearch_plugin_aws_access_key
- elasticsearch_plugin_aws_secret_key
- elasticsearch_plugin_aws_region
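For illustration, here is how these variables might look in `vars/my-vars.yml`. The values below are placeholders, not recommendations; in particular, check the elasticsearch-cloud-aws compatibility table for the plugin version that matches your Elasticsearch version:

```
elasticsearch_plugin_aws_version: 2.7.1
elasticsearch_plugin_aws_ec2_groups: MyElasticSearchGroup
elasticsearch_plugin_aws_ec2_ping_timeout: 30s
elasticsearch_plugin_aws_access_key: <your AWS access key>
elasticsearch_plugin_aws_secret_key: <your AWS secret key>
elasticsearch_plugin_aws_region: us-east-1
```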
### Installing plugins
You will need to define an array called `elasticsearch_plugins` in your playbook or inventory, such that:
```
elasticsearch_plugins:
  - { name: '<plugin name>', url: '<[optional] plugin url>' }
  - ...
```
where if you were to install the plugin via bin/plugin, you would type:
`bin/plugin -install <plugin name>` or `bin/plugin -install <plugin name> -url <plugin url>`
Example for [https://github.com/elasticsearch/elasticsearch-mapper-attachments](https://github.com/elasticsearch/elasticsearch-mapper-attachments) (`bin/plugin -install elasticsearch/elasticsearch-mapper-attachments/1.9.0`):
```
elasticsearch_plugins:
- { name: 'elasticsearch/elasticsearch-mapper-attachments/1.9.0' }
```
Example for [https://github.com/richardwilly98/elasticsearch-river-mongodb](https://github.com/richardwilly98/elasticsearch-river-mongodb) (`bin/plugin -i com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/1.7.1`):
```
elasticsearch_plugins:
- { name: 'com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/1.7.1' }
```
Example for [https://github.com/imotov/elasticsearch-facet-script](https://github.com/imotov/elasticsearch-facet-script) (`bin/plugin -install facet-script -url http://dl.bintray.com/content/imotov/elasticsearch-plugins/elasticsearch-facet-script-1.1.2.zip`):
```
elasticsearch_plugins:
- { name: 'facet-script', url: 'http://dl.bintray.com/content/imotov/elasticsearch-plugins/elasticsearch-facet-script-1.1.2.zip' }
```
### Installing Custom JARs
Custom JARs are made available to the Elasticsearch classpath by being downloaded into the `elasticsearch_home_dir`/lib folder. An example of a custom JAR is a custom Lucene Similarity Provider. You will need to define an array called `elasticsearch_custom_jars` in your playbook or inventory, such that:
```
elasticsearch_custom_jars:
  - { uri: '<URL where JAR can be downloaded from: required>', filename: '<alternative name for final JAR if different from file downloaded: leave blank to use same filename>', user: '<BASIC auth username: leave blank if not needed>', passwd: '<BASIC auth password: leave blank if not needed>' }
  - ...
```
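For instance, a single custom similarity JAR fetched from a private repository with BASIC auth might be declared as follows (the URL, filename and credentials are illustrative placeholders only):

```
elasticsearch_custom_jars:
  - { uri: 'https://repo.example.com/jars/custom-similarity-1.0.jar', filename: 'custom-similarity.jar', user: 'deploy', passwd: 'secret' }
```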
### Configuring Thread Pools
Elasticsearch [thread pools](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-threadpool.html) can be configured using the `elasticsearch_thread_pools` list variable:
```
elasticsearch_thread_pools:
- "threadpool.bulk.type: fixed"
- "threadpool.bulk.size: 50"
- "threadpool.bulk.queue_size: 1000"
```
### Enabling Sematext SPM
Enable the SPM task in your playbook:
```
tasks:
- include: tasks/spm.yml
- ...
```
Set the `spm_client_token` variable in your `inventory.ini` to your SPM key.
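For example, in `my-inventory.ini` (the token below is a placeholder):

```
[all_nodes:vars]
spm_client_token=00000000-0000-0000-0000-000000000000
```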
### Configuring Marvel
The following variables need to be defined in your playbook or inventory:
- elasticsearch_plugin_marvel_version
The following variables provide configuration for the plugin. More options may be available in the future (see [http://www.elasticsearch.org/guide/en/marvel/current/#stats-export](http://www.elasticsearch.org/guide/en/marvel/current/#stats-export)):
- elasticsearch_plugin_marvel_agent_enabled
- elasticsearch_plugin_marvel_agent_exporter_es_hosts
- elasticsearch_plugin_marvel_agent_indices
- elasticsearch_plugin_marvel_agent_interval
- elasticsearch_plugin_marvel_agent_exporter_es_index_timeformat
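For illustration, a minimal Marvel setup in `vars/my-vars.yml` might look like the following. The values are placeholders; `templates/elasticsearch.yml.j2` shows exactly how each variable is rendered:

```
elasticsearch_plugin_marvel_version: latest
elasticsearch_plugin_marvel_agent_enabled: "true"
elasticsearch_plugin_marvel_agent_exporter_es_hosts: '["monitoring-host:9200"]'
elasticsearch_plugin_marvel_agent_interval: 10s
```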
## Disable Java installation
If you prefer to skip the built-in installation of the Oracle JRE, use the `elasticsearch_install_java` flag:
```
elasticsearch_install_java: "false"
```
## Include role in a larger playbook
### Add this role as a git submodule
Assuming your playbook structure is such as:
```
- my-master-playbook
  |- vars
  |- roles
  |- my-master-playbook-main.yml
  \- my-master-inventory.ini
```
Checkout this project as a submodule under roles:
```
$ cd roles
$ git submodule add git://github.com/traackr/ansible-elasticsearch.git ./ansible-elasticsearch
$ git submodule update --init
$ git commit ./ansible-elasticsearch -m "Added ansible-elasticsearch submodule"
```
### Include this playbook as a role in your master playbook
Example `my-master-playbook-main.yml`:
```
---
#########################
# Elasticsearch install #
#########################
- hosts: all_nodes
  user: ubuntu
  sudo: yes
  roles:
    - ansible-elasticsearch
  vars_files:
    - vars/my-vars.yml
```
# Issues, requests, contributions
This software is provided as is. Having said that, if you see an issue, feel free to log a ticket. We'll do our best to address it. Same if you want to see a certain feature supported in the future. No guarantees are made that any requested feature will be implemented. If you'd like to contribute, feel free to clone and submit a pull request.
# Dependencies
None
# License
MIT
# Author Information
George Stathis - gstathis [at] traackr.com
================================================
FILE: Vagrantfile
================================================
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "ubuntu/trusty64"

  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 2
  end

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network :private_network, ip: "192.168.33.10"
  config.vm.network :private_network, ip: "192.168.111.10"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"
  config.vm.synced_folder '.', '/vagrant'

  config.vm.provision :ansible do |ansible|
    ansible.inventory_path = "vagrant-inventory.ini"
    ansible.playbook = "vagrant-main.yml"
    ansible.extra_vars = { user: "vagrant" }
    ansible.limit = 'all'
    # ansible.verbose = 'vvvv'
  end
end
================================================
FILE: defaults/main.yml
================================================
---
# Elasticsearch Ansible Variables
elasticsearch_user: elasticsearch
elasticsearch_group: elasticsearch
elasticsearch_download_url: https://download.elasticsearch.org/elasticsearch/elasticsearch
elasticsearch_version: 1.7.3
elasticsearch_apt_repos:
- 'ppa:webupd8team/java'
elasticsearch_apt_java_package: oracle-java7-installer
elasticsearch_apt_dependencies:
- htop
- ntp
- unzip
elasticsearch_max_open_files: 65535
elasticsearch_home_dir: /usr/share/elasticsearch
elasticsearch_plugin_dir: /usr/share/elasticsearch/plugins
elasticsearch_log_dir: /var/log/elasticsearch
elasticsearch_data_dir: /var/lib/elasticsearch
elasticsearch_work_dir: /tmp/elasticsearch
elasticsearch_conf_dir: /etc/elasticsearch
elasticsearch_pid_dir: /var/run
elasticsearch_service_startonboot: no
elasticsearch_timezone: "Etc/UTC" # Default to UTC
#elasticsearch_http_cors_enabled: "false"
elasticsearch_service_state: started
# Non-Elasticsearch Defaults
apt_cache_valid_time: 300 # seconds between "apt-get update" calls.
elasticsearch_install_java: "true"
================================================
FILE: handlers/main.yml
================================================
---
# Elasticsearch Ansible Handlers
# Restart Elasticsearch
- name: Restart Elasticsearch
  service: name=elasticsearch state=restarted
================================================
FILE: meta/main.yml
================================================
---
# Elasticsearch Ansible Meta
galaxy_info:
  author: "George Stathis"
  company: Traackr
  license: MIT
  min_ansible_version: 1.3
  platforms:
    - name: Ubuntu
      versions:
        - precise
  categories:
    - database:nosql
dependencies: []
================================================
FILE: tasks/aws.yml
================================================
---
# Install AWS Plugin (see https://github.com/elasticsearch/elasticsearch-cloud-aws)
#
# The following variables need to be defined in your playbook or inventory:
# - elasticsearch_plugin_aws_version
#
# The following variables provide a for now limited configuration for the plugin.
# More options may be available in the future.
# (see http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html):
# - elasticsearch_plugin_aws_ec2_groups
# - elasticsearch_plugin_aws_ec2_ping_timeout
# - elasticsearch_plugin_aws_access_key
# - elasticsearch_plugin_aws_secret_key
# - elasticsearch_plugin_aws_region
- name: aws | Removing AWS Plugin if it exists
  shell: bin/plugin --remove cloud-aws
    chdir={{ elasticsearch_home_dir }}
  when: elasticsearch_plugin_aws_reinstall is defined and elasticsearch_plugin_aws_reinstall == True
  ignore_errors: yes

- name: aws | Installing AWS Plugin
  shell: bin/plugin -install elasticsearch/elasticsearch-cloud-aws/{{ elasticsearch_plugin_aws_version }}
    chdir={{ elasticsearch_home_dir }}
  register: aws_plugins_installed
  changed_when: "'Installed' in aws_plugins_installed.stdout"
  failed_when: "aws_plugins_installed.rc != 0 and aws_plugins_installed.stdout.find('already exists. To update the plugin') == -1"
================================================
FILE: tasks/custom-jars.yml
================================================
---
# Install Custom JARs
#
# Custom jars are made available to the Elasticsearch classpath by being downloaded into the elasticsearch_home_dir/lib folder.
# An example of a custom jar can include a custom Lucene Similarity Provider. You will need to define an array called
# 'elasticsearch_custom_jars' in your playbook or inventory, such that:
#
# elasticsearch_custom_jars:
# - { uri: '<URL where JAR can be downloaded from: required>', filename: '<alternative name for final JAR if different from file downloaded: leave blank to use same filename>', user: '<BASIC auth username: leave blank if not needed>', passwd: '<BASIC auth password: leave blank if not needed>' }
# - ...
# Loop through elasticsearch_custom_jars and install them
- name: Installing Custom JARs
  action: >
    get_url url={{ item.uri }}
    url_username={{ item.user }} url_password={{ item.passwd }} dest="{{ elasticsearch_home_dir }}/lib/{{ item.filename }}"
  with_items: elasticsearch_custom_jars

# Fix permissions
- file: >
    path="{{ elasticsearch_home_dir }}/lib" state=directory
    owner={{ elasticsearch_user }} group={{ elasticsearch_group }}
    recurse=yes
================================================
FILE: tasks/elastic-install.yml
================================================
---
- name: elastic-install | Install python-software-properties
  apt:
    pkg=python-software-properties
    state=present
    update_cache=yes
    cache_valid_time={{apt_cache_valid_time}}

- name: elastic-install | Install dependencies
  apt:
    pkg={{ item }}
    state=present
  with_items: elasticsearch_apt_dependencies

- name: elastic-install | Configuring elastic group
  group:
    name={{ elasticsearch_group }}

- name: elastic-install | Configuring elastic user
  user:
    name={{ elasticsearch_user }}
    group={{ elasticsearch_group }}
    createhome=no

- name: elastic-install | Ensure temp elasticsearch directories exist
  file:
    path="{{ elasticsearch_work_dir }}"
    state=directory
    owner={{ elasticsearch_user }}
    group={{ elasticsearch_group }}
    recurse=yes

- name: elastic-install | Check if we have elastic with same version installed
  stat:
    path="/usr/share/elasticsearch/lib/elasticsearch-{{ elasticsearch_version }}.jar"
  register: installed_version

- name: elastic-install | Try to stop elasticsearch if running
  service:
    name=elasticsearch
    state=stopped
  ignore_errors: yes
  when: not installed_version.stat.exists

- name: elastic-install | Download Elasticsearch deb
  get_url:
    url={{ elasticsearch_download_url }}/elasticsearch-{{ elasticsearch_version }}.deb
    dest=/tmp/elasticsearch-{{ elasticsearch_version }}.deb
    mode=0440
  when: not installed_version.stat.exists

#shell: dpkg --remove elasticsearch
- name: elastic-install | Uninstalling previous version if applicable
  apt:
    name="elasticsearch"
    state="absent"
  when: not installed_version.stat.exists

- name: elastic-install | Remove elasticsearch directory
  file:
    path="{{ elasticsearch_home_dir }}"
    state=absent
  when: not installed_version.stat.exists

- name: elastic-install | Install Elasticsearch deb
  shell: dpkg -i -E --force-confnew /tmp/elasticsearch-{{ elasticsearch_version }}.deb
  when: not installed_version.stat.exists
  notify: Restart Elasticsearch

- name: elastic-install | Ensure elastic directories exist
  file:
    path="{{ item }}"
    state=directory
    owner={{ elasticsearch_user }}
    group={{ elasticsearch_group }}
    recurse=yes
  with_items:
    - "{{ elasticsearch_work_dir }}"
    - "{{ elasticsearch_home_dir }}"
    - "{{ elasticsearch_log_dir }}"
    - "{{ elasticsearch_data_dir }}"
    - "{{ elasticsearch_conf_dir }}"

- name: elastic-install | Configure limits max_open_files
  lineinfile:
    dest=/etc/security/limits.conf
    regexp='^{{ elasticsearch_user }} - nofile {{ elasticsearch_max_open_files }}'
    insertafter=EOF
    line='{{ elasticsearch_user }} - nofile {{ elasticsearch_max_open_files }}'
  when: elasticsearch_max_open_files is defined
  notify: Restart Elasticsearch

- name: elastic-install | Configure limits max_locked_memory
  lineinfile:
    dest=/etc/security/limits.conf
    regexp='^{{ elasticsearch_user }} - memlock {{ elasticsearch_max_locked_memory }}'
    insertafter=EOF
    line='{{ elasticsearch_user }} - memlock {{ elasticsearch_max_locked_memory }}'
  when: elasticsearch_max_locked_memory is defined
  notify: Restart Elasticsearch

- name: elastic-install | Configure su pam_limits.so
  lineinfile:
    dest=/etc/pam.d/su
    regexp='^session required pam_limits.so'
    insertafter=EOF
    line='session required pam_limits.so'
  notify: Restart Elasticsearch

- name: elastic-install | Configure common-session pam_limits.so
  lineinfile:
    dest=/etc/pam.d/common-session
    regexp='^session required pam_limits.so'
    insertafter=EOF
    line='session required pam_limits.so'
  notify: Restart Elasticsearch

- name: elastic-install | Configure common-session-noninteractive pam_limits.so
  lineinfile:
    dest=/etc/pam.d/common-session-noninteractive
    regexp='^session required pam_limits.so'
    insertafter=EOF
    line='session required pam_limits.so'
  notify: Restart Elasticsearch

- name: elastic-install | Configure sudo pam_limits.so
  lineinfile:
    dest=/etc/pam.d/sudo
    regexp='^session required pam_limits.so'
    insertafter=EOF
    line='session required pam_limits.so'
  notify: Restart Elasticsearch

- name: elastic-install | Configure initd java opts in /etc/init.d/elasticsearch
  lineinfile:
    dest=/etc/init.d/elasticsearch
    regexp='^(DAEMON_OPTS=".*-Des.max-open-files=true")$'
    insertafter='^(DAEMON_OPTS=".*CONF_DIR")$'
    line='DAEMON_OPTS="$DAEMON_OPTS -Des.max-open-files=true"'
  notify: Restart Elasticsearch

- name: elastic-install | Configuring Elasticsearch elasticsearch.yml Node
  template:
    src=elasticsearch.yml.j2
    dest={{ elasticsearch_conf_dir }}/elasticsearch.yml
    owner={{ elasticsearch_user }}
    group={{ elasticsearch_group }}
    mode=0644
  when: elasticsearch_conf_dir is defined
  notify: Restart Elasticsearch

- name: elastic-install | Configure /etc/default/elasticsearch
  template:
    src=elasticsearch.default.j2
    dest=/etc/default/elasticsearch
    owner={{ elasticsearch_user }}
    group={{ elasticsearch_group }}
    mode=0644
  notify: Restart Elasticsearch
================================================
FILE: tasks/java.yml
================================================
---
- name: java | Accept Oracle license prior to JDK installation
  shell: echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections; echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
    creates=/usr/lib/jvm/java-8-oracle

- name: java | Update repositories
  apt_repository:
    repo={{ item }}
    state=present
    update_cache=yes
  with_items: elasticsearch_apt_repos

- name: java | Install dependencies
  apt:
    pkg={{elasticsearch_apt_java_package}}
    state=present
================================================
FILE: tasks/main.yml
================================================
---
# Elasticsearch Ansible Tasks
# Install Java
- include: java.yml
  when: elasticsearch_install_java
# Configure timezone
- include: timezone.yml
# Install and configure elasticsearch
- include: elastic-install.yml
# Install AWS Plugin
- include: aws.yml
  when: (elasticsearch_plugin_aws_version is defined)
# Install Other Generic Plugins
- include: plugins.yml
  when: (elasticsearch_plugins is defined)
# Install custom JARs
- include: custom-jars.yml
  when: (elasticsearch_custom_jars is defined)
# Install Marvel Plugin
- include: marvel.yml
  when: (elasticsearch_plugin_marvel_version is defined)
# Always run post-run tasks
- include: post-run.yml
================================================
FILE: tasks/marvel.yml
================================================
---
# Install Marvel (see http://www.elasticsearch.org/guide/en/marvel/current/)
#
# The following variables need to be defined in your playbook or inventory:
# - elasticsearch_plugin_marvel_version
#
# The following variables provide configuration for the plugin.
# More options may be available in the future:
# - elasticsearch_plugin_marvel_agent_enabled
# - elasticsearch_plugin_marvel_agent_exporter_es_hosts
# - elasticsearch_plugin_marvel_agent_indices
# - elasticsearch_plugin_marvel_agent_interval
# - elasticsearch_plugin_marvel_agent_exporter_es_index_timeformat
- name: marvel | Removing Marvel Plugin if it exists
  shell: bin/plugin --remove marvel
    chdir={{ elasticsearch_home_dir }}
  ignore_errors: yes

- name: marvel | Installing Marvel Plugin
  shell: bin/plugin -i elasticsearch/marvel/{{ elasticsearch_plugin_marvel_version }}
    chdir={{ elasticsearch_home_dir }}
  register: marvel_plugins_installed
  changed_when: "'Installed' in marvel_plugins_installed.stdout"
  failed_when: "marvel_plugins_installed.rc != 0 and marvel_plugins_installed.stdout.find('already exists. To update the plugin') == -1"
================================================
FILE: tasks/plugins.yml
================================================
---
# Install Elasticsearch Plugins
#
# You will need to define an array called 'elasticsearch_plugins' in your playbook or inventory, such that:
#
# elasticsearch_plugins:
# - { name: '<plugin name>', url: '<[optional] plugin url>' }
# - ...
# where if you were to install the plugin via bin/plugin, you would type:
#
# bin/plugin -install <plugin name>
#
# or
#
# bin/plugin -install <plugin name> -url <plugin url>
# Example for https://github.com/elasticsearch/elasticsearch-mapper-attachments (bin/plugin -install elasticsearch/elasticsearch-mapper-attachments/1.9.0):
# elasticsearch_plugins:
# - { name: 'elasticsearch/elasticsearch-mapper-attachments/1.9.0' }
#
# Example for https://github.com/richardwilly98/elasticsearch-river-mongodb (bin/plugin -i com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/1.7.1):
# elasticsearch_plugins:
# - { name: 'com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/1.7.1' }
#
# Example for https://github.com/imotov/elasticsearch-facet-script (bin/plugin -install facet-script -url http://dl.bintray.com/content/imotov/elasticsearch-plugins/elasticsearch-facet-script-1.1.2.zip):
# elasticsearch_plugins:
# - { name: 'facet-script', url: 'http://dl.bintray.com/content/imotov/elasticsearch-plugins/elasticsearch-facet-script-1.1.2.zip' }
# Loop through elasticsearch_plugins and install them
- name: plugins | Removing Old Plugins
  shell: bin/plugin --remove {{ item.name }}
    chdir={{ elasticsearch_home_dir }}
  when: item.name is defined and item.reinstall is defined and item.reinstall == True
  with_items: elasticsearch_plugins
  ignore_errors: yes

- name: plugins | Installing Plugins
  shell: bin/plugin -install {{ item.name }} {{ '-url ' + item.url if item.url is defined else '' }}
    chdir={{ elasticsearch_home_dir }}
  when: item.download_only is not defined
  with_items: elasticsearch_plugins
  register: plugins_installed
  changed_when: "'Installed' in plugins_installed.stdout"
  failed_when: "plugins_installed.rc != 0 and plugins_installed.stdout.find('already exists. To update the plugin') == -1"

- name: plugins | Ensure Paths for Download only Plugins
  file:
    path="{{ elasticsearch_plugin_dir }}/{{ item.name }}"
    state=directory
  when: item.download_only is defined and item.name is defined
  with_items: elasticsearch_plugins

- name: plugins | Download only Plugins
  get_url:
    url={{ item.url }}
    dest={{ elasticsearch_plugin_dir }}/{{ item.name }}
  when: item.download_only is defined and item.name is defined
  with_items: elasticsearch_plugins
  ignore_errors: yes
================================================
FILE: tasks/post-run.yml
================================================
---
- name: post-run | Ensure plugins directory permissions are correct
  file:
    path="{{ elasticsearch_plugin_dir }}"
    state=directory
    owner={{ elasticsearch_user }}
    group={{ elasticsearch_group }}
    recurse=yes

- name: post-run | Ensure Elasticsearch is running and started on boot
  service:
    name=elasticsearch
    enabled={{ elasticsearch_service_startonboot }}
    state={{ elasticsearch_service_state }}
  when: installed_version.stat.exists

# Flush handlers to restart elastic if needed
- meta: flush_handlers

- name: post-run | Check http port is open and running version. timeout 160s
  wait_for:
    host={{ '127.0.0.1' if elasticsearch_network_bind_host is not defined or elasticsearch_network_bind_host == '0.0.0.0' else elasticsearch_network_bind_host }}
    port={{ elasticsearch_network_http_port | default('9200') }}
    timeout=160
  when: elasticsearch_service_state == "started"
================================================
FILE: tasks/spm.yml
================================================
---
# SPM Tasks (see http://sematext.com/spm/)
- name: spm | Install Collectd
  apt:
    pkg=collectd
    state=present

# TODO: Make idempotent
- name: spm | Downloading and running package
  shell: wget --no-check-certificate -O installer.sh "https://apps.sematext.com/spm-reports/installerDownload.do?client_token={{ spm_client_token }}" && bash installer.sh

# TODO: Make idempotent
- name: spm | Preparing monitor configurations
  shell: bash /opt/spm/bin/spm-client-setup-conf.sh {{ spm_client_token }} es standalone

- name: spm | Configuring JMX in Elasticsearch
  lineinfile:
    dest=/etc/default/elasticsearch
    regexp='^(ES_JAVA_OPTS="\$ES_JAVA_OPTS -Dcom.sun.management.jmxremote.*")$'
    insertafter='^(#ES_JAVA_OPTS=.*)$'
    line='ES_JAVA_OPTS="\$ES_JAVA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=3000 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"'
  notify: Restart Elasticsearch

- name: spm | Configuring JMX in SPM
  lineinfile:
    dest=/opt/spm/spm-monitor/conf/spm-monitor-config-{{ spm_client_token }}-default.properties
    regexp='^(SPM_MONITOR_JMX_PARAMS="")$'
    line='SPM_MONITOR_JMX_PARAMS="-Dspm.remote.jmx.url=localhost:3000"'

- name: spm | Restarting SPM
  service:
    name=spm-monitor
    state=restarted
================================================
FILE: tasks/timezone.yml
================================================
---
- name: timezone | check current timezone
  shell: cat /etc/timezone
  changed_when: false
  register: current_zone

- name: timezone | Set timezone variables
  copy:
    content={{elasticsearch_timezone}}
    dest=/etc/timezone
    owner=root
    group=root
    mode=0644
  when: current_zone.stdout != elasticsearch_timezone

- name: timezone | Run dpkg-reconfigure to configure timezone
  shell: dpkg-reconfigure --frontend noninteractive tzdata
  when: current_zone.stdout != elasticsearch_timezone
================================================
FILE: templates/elasticsearch.default.j2
================================================
# Run ElasticSearch as this user ID and group ID
#ES_USER=elasticsearch
{% if elasticsearch_user is defined %}
ES_USER={{ elasticsearch_user }}
{% endif %}
#ES_GROUP=elasticsearch
{% if elasticsearch_group is defined %}
ES_GROUP={{ elasticsearch_group }}
{% endif %}
# Heap Size (defaults to 256m min, 1g max)
#ES_HEAP_SIZE=2g
{% if elasticsearch_heap_size is defined %}
ES_HEAP_SIZE={{ elasticsearch_heap_size }}
{% endif %}
# Heap new generation
#ES_HEAP_NEWSIZE=
# max direct memory
#ES_DIRECT_SIZE=
# Maximum number of open files, defaults to 65535.
#MAX_OPEN_FILES=65535
{% if elasticsearch_max_open_files is defined %}
MAX_OPEN_FILES={{ elasticsearch_max_open_files }}
{% endif %}
# Maximum locked memory size. Set to "unlimited" if you use the
# bootstrap.mlockall option in elasticsearch.yml. You must also set
# ES_HEAP_SIZE.
#MAX_LOCKED_MEMORY=unlimited
{% if elasticsearch_max_locked_memory is defined %}
MAX_LOCKED_MEMORY={{ elasticsearch_max_locked_memory }}
{% endif %}
# ElasticSearch log directory
#LOG_DIR=/var/log/elasticsearch
# ElasticSearch data directory
#DATA_DIR=/var/lib/elasticsearch
# ElasticSearch work directory
#WORK_DIR=/tmp/elasticsearch
# ElasticSearch configuration directory
#CONF_DIR=/etc/elasticsearch
# ElasticSearch configuration file (elasticsearch.yml)
#CONF_FILE=/etc/elasticsearch/elasticsearch.yml
# Additional Java OPTS
{% if elasticsearch_java_opts is defined %}
ES_JAVA_OPTS="{{ elasticsearch_java_opts }}"
{% else %}
#ES_JAVA_OPTS=
{% endif %}
# Environment Vars
{% if elasticsearch_env_use_gc_logging is defined %}
export ES_USE_GC_LOGGING=true
{% endif %}
{% if elasticsearch_java_home is defined %}
JAVA_HOME={{ elasticsearch_java_home }}
{% endif %}
PID_DIR={{ elasticsearch_pid_dir }}
================================================
FILE: templates/elasticsearch.in.sh.j2
================================================
#!/bin/sh
ES_CLASSPATH=$ES_CLASSPATH:$ES_HOME/lib/elasticsearch-{{ elasticsearch_version }}.jar:$ES_HOME/lib/*:$ES_HOME/lib/sigar/*
if [ "x$ES_MIN_MEM" = "x" ]; then
ES_MIN_MEM=256m
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
ES_MAX_MEM=1g
fi
if [ "x$ES_HEAP_SIZE" != "x" ]; then
ES_MIN_MEM=$ES_HEAP_SIZE
ES_MAX_MEM=$ES_HEAP_SIZE
fi
# min and max heap sizes should be set to the same value to avoid
# stop-the-world GC pauses during resize, and so that we can lock the
# heap in memory on startup to prevent any of it from being swapped
# out.
JAVA_OPTS="$JAVA_OPTS -Xms${ES_MIN_MEM}"
JAVA_OPTS="$JAVA_OPTS -Xmx${ES_MAX_MEM}"
# new generation
if [ "x$ES_HEAP_NEWSIZE" != "x" ]; then
JAVA_OPTS="$JAVA_OPTS -Xmn${ES_HEAP_NEWSIZE}"
fi
# max direct memory
if [ "x$ES_DIRECT_SIZE" != "x" ]; then
JAVA_OPTS="$JAVA_OPTS -XX:MaxDirectMemorySize=${ES_DIRECT_SIZE}"
fi
# reduce the per-thread stack size
JAVA_OPTS="$JAVA_OPTS -Xss256k"
# set to headless, just in case
JAVA_OPTS="$JAVA_OPTS -Djava.awt.headless=true"
# Force the JVM to use IPv4 stack
if [ "x$ES_USE_IPV4" != "x" ]; then
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
fi
JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC"
JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
# When running under Java 7
# JAVA_OPTS="$JAVA_OPTS -XX:+UseCondCardMark"
# GC logging options
if [ "x$ES_USE_GC_LOGGING" != "x" ]; then
JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDetails"
JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCTimeStamps"
JAVA_OPTS="$JAVA_OPTS -XX:+PrintClassHistogram"
JAVA_OPTS="$JAVA_OPTS -XX:+PrintTenuringDistribution"
JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCApplicationStoppedTime"
{% if elasticsearch_log_dir is defined %}
JAVA_OPTS="$JAVA_OPTS -Xloggc:{{ elasticsearch_log_dir }}/gc.log"
{% else %}
JAVA_OPTS="$JAVA_OPTS -Xloggc:/var/log/elasticsearch/gc.log"
{% endif %}
{% if elasticsearch_java_opts is defined %}
JAVA_OPTS="$JAVA_OPTS {{ elasticsearch_java_opts }}"
{% endif %}
fi
# Causes the JVM to dump its heap on OutOfMemory.
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
# The path to the heap dump location; note the directory must exist and have
# enough space for a full heap dump.
#JAVA_OPTS="$JAVA_OPTS -XX:HeapDumpPath=$ES_HOME/logs/heapdump.hprof"
================================================
FILE: templates/elasticsearch.yml.j2
================================================
##################### ElasticSearch Configuration Example #####################
# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at <http://elasticsearch.org/guide>.
#
# The installation procedure is covered at
# <http://elasticsearch.org/guide/reference/setup/installation.html>.
#
# ElasticSearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of certain configuration options, please _do ask_ on the
# mailing list or IRC channel [http://elasticsearch.org/community].
# See <http://elasticsearch.org/guide/reference/setup/configuration.html>
# for information on supported formats and syntax for the configuration file.
################################### Cluster ###################################
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
# cluster.name: elasticsearch
{% if elasticsearch_cluster_name is defined %}
cluster.name: {{ elasticsearch_cluster_name }}
{% endif %}
#################################### Node #####################################
# Node names are generated dynamically on startup, so you're relieved
# of configuring them manually. You can tie this node to a specific name:
#
# node.name: "Franz Kafka"
{% if elasticsearch_node_name is defined %}
node.name: {{ elasticsearch_node_name }}
{% endif %}
# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny storing data.
#
# Allow this node to be eligible as a master node (enabled by default):
#
# node.master: true
{% if elasticsearch_node_master is defined %}
node.master: {{ elasticsearch_node_master }}
{% endif %}
#
# Allow this node to store data (enabled by default):
#
# node.data: true
{% if elasticsearch_node_data is defined %}
node.data: {{ elasticsearch_node_data }}
{% endif %}
# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
# This will be the "workhorse" of your cluster.
#
# node.master: false
# node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
# to have free resources. This will be the "coordinator" of your cluster.
#
# node.master: true
# node.data: false
#
# 3. You want this node to be neither master nor data node, but
# to act as a "search load balancer" (fetching data from nodes,
# aggregating results, etc.)
#
# node.master: false
# node.data: false
# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_cluster/nodes] or GUI tools
# such as <http://github.com/lukas-vlcek/bigdesk> and
# <http://mobz.github.com/elasticsearch-head> to inspect the cluster state.
# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute
# is a simple key-value pair, similar to node.key: value; here is an example:
#
# node.rack: rack314
{% if elasticsearch_node_rack is defined %}
node.rack: {{ elasticsearch_node_rack }}
{% endif %}
# By default, multiple nodes are allowed to start from the same installation location.
# To disable it, set the following:
# node.max_local_storage_nodes: 1
{% if elasticsearch_node_max_local_storage_nodes is defined %}
node.max_local_storage_nodes: {{ elasticsearch_node_max_local_storage_nodes }}
{% endif %}
# Disable network discovery by setting the following (useful for development):
# node.local: true
{% if elasticsearch_node_local is defined %}
node.local: {{ elasticsearch_node_local }}
{% endif %}
#################################### Index ####################################
# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See <http://elasticsearch.org/guide/reference/index-modules/> and
# <http://elasticsearch.org/guide/reference/api/admin-indices-create-index.html>
# for more information.
# Set the number of shards (splits) of an index (5 by default):
#
# index.number_of_shards: 5
{% if elasticsearch_index_number_of_shards is defined %}
index.number_of_shards: {{ elasticsearch_index_number_of_shards }}
{% endif %}
# Set the number of replicas (additional copies) of an index (1 by default):
#
# index.number_of_replicas: 1
{% if elasticsearch_index_number_of_replicas is defined %}
index.number_of_replicas: {{ elasticsearch_index_number_of_replicas }}
{% endif %}
# Note that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
#
# index.number_of_shards: 1
# index.number_of_replicas: 0
# These settings directly affect the performance of index and search operations
# in your cluster. Assuming you have enough machines to hold shards and
# replicas, the rule of thumb is:
#
# 1. Having more *shards* enhances the _indexing_ performance and allows you to
# _distribute_ a big index across machines.
# 2. Having more *replicas* enhances the _search_ performance and improves the
# cluster _availability_.
#
# The "number_of_shards" is a one-time setting for an index.
#
# The "number_of_replicas" can be increased or decreased anytime,
# by using the Index Update Settings API.
#
# ElasticSearch takes care of load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.
# Use the Index Status API (<http://localhost:9200/A/_status>) to inspect
# the index status.
{% if elasticsearch_index_mapper_dynamic is defined %}
index.mapper_dynamic: {{ elasticsearch_index_mapper_dynamic }}
{% endif %}
{% if elasticsearch_misc_query_bool_max_clause_count is defined %}
index.query.bool.max_clause_count: {{ elasticsearch_misc_query_bool_max_clause_count }}
{% endif %}
{% if elasticsearch_index_refresh_interval is defined %}
index.refresh_interval: {{ elasticsearch_index_refresh_interval }}
{% endif %}
{% if elasticsearch_index_store_throttle_type is defined %}
index.store.throttle.type: {{ elasticsearch_index_store_throttle_type }}
{% endif %}
{% if elasticsearch_index_store_throttle_max_bytes_per_sec is defined %}
indices.store.throttle.max_bytes_per_sec: {{ elasticsearch_index_store_throttle_max_bytes_per_sec }}
{% endif %}
{% if elasticsearch_index_merge_scheduler_max_thread_count is defined %}
index.merge.scheduler.max_thread_count: {{ elasticsearch_index_merge_scheduler_max_thread_count }}
{% endif %}
#################################### Paths ####################################
# Path to directory containing configuration (this file and logging.yml):
#
# path.conf: /path/to/conf
{% if elasticsearch_conf_dir is defined %}
path.conf: {{ elasticsearch_conf_dir }}
{% endif %}
# Path to directory where to store index data allocated for this node.
#
# path.data: /path/to/data
{% if elasticsearch_data_dir is defined %}
path.data: {{ elasticsearch_data_dir }}
{% endif %}
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
# path.data: /path/to/data1,/path/to/data2
# Path to temporary files:
#
# path.work: /path/to/work
{% if elasticsearch_work_dir is defined %}
path.work: {{ elasticsearch_work_dir }}
{% endif %}
# Path to log files:
#
# path.logs: /path/to/logs
{% if elasticsearch_log_dir is defined %}
path.logs: {{ elasticsearch_log_dir }}
{% endif %}
# Path to where plugins are installed:
#
# path.plugins: /path/to/plugins
{% if elasticsearch_plugin_dir is defined %}
path.plugins: {{ elasticsearch_plugin_dir }}
{% endif %}
#################################### Plugin ###################################
# If a plugin listed here is not installed on the current node, the node will not start.
#
# plugin.mandatory: mapper-attachments,lang-groovy
################################### Memory ####################################
# ElasticSearch performs poorly when the JVM starts swapping: you should ensure that
# it _never_ swaps.
#
# Set this property to true to lock the memory:
#
# bootstrap.mlockall: true
{% if elasticsearch_memory_bootstrap_mlockall is defined %}
bootstrap.mlockall: {{ elasticsearch_memory_bootstrap_mlockall }}
{% endif %}
# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for ElasticSearch, leaving enough memory for the operating system itself.
#
# You should also make sure that the ElasticSearch process is allowed to lock
# the memory, eg. by using `ulimit -l unlimited`.
{% if elasticsearch_indices_fielddata_cache_size is defined %}
indices.fielddata.cache.size: {{ elasticsearch_indices_fielddata_cache_size }}
{% endif %}
{% if elasticsearch_indices_breaker_fielddata_limit is defined %}
indices.breaker.fielddata.limit: {{ elasticsearch_indices_breaker_fielddata_limit }}
{% endif %}
{% if elasticsearch_indices_breaker_request_limit is defined %}
indices.breaker.request.limit: {{ elasticsearch_indices_breaker_request_limit }}
{% endif %}
{% if elasticsearch_indices_breaker_total_limit is defined %}
indices.breaker.total.limit: {{ elasticsearch_indices_breaker_total_limit }}
{% endif %}
############################## Network And HTTP ###############################
# ElasticSearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).
# Set the bind address specifically (IPv4 or IPv6):
#
# network.bind_host: 192.168.0.1
{% if elasticsearch_network_bind_host is defined %}
network.bind_host: {{ elasticsearch_network_bind_host }}
{% endif %}
# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
# network.publish_host: 192.168.0.1
{% if elasticsearch_network_publish_host is defined %}
network.publish_host: {{ elasticsearch_network_publish_host }}
{% endif %}
# Set both 'bind_host' and 'publish_host':
#
# network.host: 192.168.0.1
{% if elasticsearch_network_host is defined %}
network.host: {{ elasticsearch_network_host }}
{% endif %}
# Set a custom port for the node to node communication (9300 by default):
#
# transport.tcp.port: 9300
{% if elasticsearch_network_transport_tcp_port is defined %}
transport.tcp.port: {{ elasticsearch_network_transport_tcp_port }}
{% endif %}
# Enable compression for all communication between nodes (disabled by default):
#
# transport.tcp.compress: true
{% if elasticsearch_network_transport_tcp_compress is defined %}
transport.tcp.compress: {{ elasticsearch_network_transport_tcp_compress }}
{% endif %}
# Set a custom port to listen for HTTP traffic:
#
# http.port: 9200
{% if elasticsearch_network_http_port is defined %}
http.port: {{ elasticsearch_network_http_port }}
{% endif %}
# Set a custom allowed content length:
#
# http.max_content_length: 100mb
{% if elasticsearch_network_http_max_content_lengtht is defined %}
http.max_content_length: {{ elasticsearch_network_http_max_content_lengtht }}
{% endif %}
# Disable HTTP completely:
#
# http.enabled: false
{% if elasticsearch_network_http_enabled is defined %}
http.enabled: {{ elasticsearch_network_http_enabled }}
{% endif %}
# Basic HTTP
{% if elasticsearch_http_basic_log is defined %}
http.basic.log: {{ elasticsearch_http_basic_log }}
{% endif %}
{% if elasticsearch_http_basic_user is defined %}
http.basic.user: {{ elasticsearch_http_basic_user }}
{% endif %}
{% if elasticsearch_http_basic_password is defined %}
http.basic.password: {{ elasticsearch_http_basic_password }}
{% endif %}
################################### Gateway ###################################
# The gateway allows for persisting the cluster state between full cluster
# restarts. Every change to the state (such as adding an index) will be stored
# in the gateway, and when the cluster starts up for the first time,
# it will read its state from the gateway.
# There are several types of gateway implementations. For more information,
# see <http://elasticsearch.org/guide/reference/modules/gateway>.
# The default gateway type is the "local" gateway (recommended):
#
# gateway.type: local
{% if elasticsearch_gateway_type is defined %}
gateway.type: {{ elasticsearch_gateway_type }}
{% endif %}
# Settings below control how and when to start the initial recovery process on
# a full cluster restart (to reuse as much local data as possible when using shared
# gateway).
# Allow recovery process after N nodes in a cluster are up:
#
# gateway.recover_after_nodes: 1
{% if elasticsearch_gateway_recover_after_nodes is defined %}
gateway.recover_after_nodes: {{ elasticsearch_gateway_recover_after_nodes }}
{% endif %}
# Set the timeout to initiate the recovery process, once the N nodes
# from previous setting are up (accepts time value):
#
# gateway.recover_after_time: 5m
{% if elasticsearch_gateway_recover_after_time is defined %}
gateway.recover_after_time: {{ elasticsearch_gateway_recover_after_time }}
{% endif %}
# Set how many nodes are expected in this cluster. Once these N nodes
# are up (and recover_after_nodes is met), begin recovery process immediately
# (without waiting for recover_after_time to expire):
#
# gateway.expected_nodes: 2
{% if elasticsearch_gateway_expected_nodes is defined %}
gateway.expected_nodes: {{ elasticsearch_gateway_expected_nodes }}
{% endif %}
############################# Recovery Throttling #############################
# These settings allow you to control the process of shard allocation between
# nodes during initial recovery, replica allocation, rebalancing,
# or when adding and removing nodes.
# Set the number of concurrent recoveries happening on a node:
#
# 1. During the initial recovery
#
# cluster.routing.allocation.node_initial_primaries_recoveries: 4
{% if elasticsearch_recovery_node_initial_primaries_recoveries is defined %}
cluster.routing.allocation.node_initial_primaries_recoveries: {{ elasticsearch_recovery_node_initial_primaries_recoveries }}
{% endif %}
#
# 2. During adding/removing nodes, rebalancing, etc
#
# cluster.routing.allocation.node_concurrent_recoveries: 2
{% if elasticsearch_recovery_node_concurrent_recoveries is defined %}
cluster.routing.allocation.node_concurrent_recoveries: {{ elasticsearch_recovery_node_concurrent_recoveries }}
{% endif %}
# Set to throttle throughput when recovering (e.g. 100mb; unlimited by default):
#
# indices.recovery.max_size_per_sec: 0
{% if elasticsearch_recovery_max_size_per_sec is defined %}
indices.recovery.max_size_per_sec: {{ elasticsearch_recovery_max_size_per_sec }}
{% endif %}
# Set to limit the number of open concurrent streams when
# recovering a shard from a peer:
#
# indices.recovery.concurrent_streams: 5
{% if elasticsearch_recovery_concurrent_streams is defined %}
indices.recovery.concurrent_streams: {{ elasticsearch_recovery_concurrent_streams }}
{% endif %}
################################## Discovery ##################################
{% if elasticsearch_plugin_aws_version is defined %}
{% if elasticsearch_plugin_aws_discovery_disable is not defined %}
discovery.type: ec2
{% endif %}
{% if elasticsearch_plugin_aws_ec2_groups is defined %}
discovery.ec2.groups: '{{ elasticsearch_plugin_aws_ec2_groups }}'
{% endif %}
{% if elasticsearch_plugin_aws_ec2_ping_timeout is defined %}
discovery.ec2.ping_timeout: {{ elasticsearch_plugin_aws_ec2_ping_timeout }}
{% endif %}
cloud.node.auto_attributes: true
{% if elasticsearch_plugin_aws_access_key is defined %}
{% if elasticsearch_plugin_aws_secret_key is defined %}
cloud.aws.access_key: '{{ elasticsearch_plugin_aws_access_key }}'
cloud.aws.secret_key: '{{ elasticsearch_plugin_aws_secret_key }}'
{% endif %}
{% endif %}
{% if elasticsearch_plugin_aws_region is defined %}
cloud.aws.region: {{ elasticsearch_plugin_aws_region }}
{% endif %}
{% endif %}
# Discovery infrastructure ensures nodes can be found within a cluster
# and a master node is elected. Multicast discovery is the default.
# Set to ensure a node sees N other master-eligible nodes to be considered
# operational within the cluster. Set this option to a higher value (2-4)
# for large clusters (>3 nodes):
#
# discovery.zen.minimum_master_nodes: 1
{% if elasticsearch_discovery_zen_minimum_master_nodes is defined %}
discovery.zen.minimum_master_nodes: {{ elasticsearch_discovery_zen_minimum_master_nodes }}
{% endif %}
# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
#
# discovery.zen.ping.timeout: 3s
{% if elasticsearch_discovery_zen_ping_timeout is defined %}
discovery.zen.ping.timeout: {{ elasticsearch_discovery_zen_ping_timeout }}
{% endif %}
# See <http://elasticsearch.org/guide/reference/modules/discovery/zen.html>
# for more information.
# Unicast discovery allows you to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#
# discovery.zen.ping.multicast.enabled: false
{% if elasticsearch_discovery_zen_ping_multicast_enabled is defined %}
discovery.zen.ping.multicast.enabled: {{ elasticsearch_discovery_zen_ping_multicast_enabled }}
{% endif %}
#
# 2. Configure an initial list of master nodes in the cluster
# to perform discovery when new nodes (master or data) are started:
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]
{% if elasticsearch_discovery_zen_ping_unicast_hosts is defined %}
discovery.zen.ping.unicast.hosts: {{ elasticsearch_discovery_zen_ping_unicast_hosts }}
{% endif %}
# EC2 discovery allows you to use the AWS EC2 API in order to perform discovery.
#
# You have to install the cloud-aws plugin to enable EC2 discovery.
#
# See <http://elasticsearch.org/guide/reference/modules/discovery/ec2.html>
# for more information.
#
# See <http://elasticsearch.org/tutorials/2011/08/22/elasticsearch-on-ec2.html>
# for a step-by-step tutorial.
# Fault Detection
# See <http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html#fault-detection>
{% if elasticsearch_discovery_zen_fd_ping_interval is defined %}
discovery.zen.fd.ping_interval: {{ elasticsearch_discovery_zen_fd_ping_interval }}
{% endif %}
{% if elasticsearch_discovery_zen_fd_ping_timeout is defined %}
discovery.zen.fd.ping_timeout: {{ elasticsearch_discovery_zen_fd_ping_timeout }}
{% endif %}
{% if elasticsearch_discovery_zen_fd_ping_retries is defined %}
discovery.zen.fd.ping_retries: {{ elasticsearch_discovery_zen_fd_ping_retries }}
{% endif %}
# Configurations for Google Compute Engine plugin
{% if elasticsearch_cloud_gce_project_id is defined %}
cloud.gce.project_id: {{ elasticsearch_cloud_gce_project_id }}
{% endif %}
{% if elasticsearch_cloud_gce_zone is defined %}
cloud.gce.zone: {{ elasticsearch_cloud_gce_zone }}
{% endif %}
{% if elasticsearch_discovery_type is defined %}
discovery.type: {{ elasticsearch_discovery_type }}
{% endif %}
{% if elasticsearch_discovery_gce_tags is defined %}
discovery.gce.tags: {{ elasticsearch_discovery_gce_tags }}
{% endif %}
################################## Slow Log ##################################
# Shard level query and fetch threshold logging.
#index.search.slowlog.threshold.query.warn: 10s
{% if elasticsearch_slowlog_threshold_query_warn is defined %}
index.search.slowlog.threshold.query.warn: {{ elasticsearch_slowlog_threshold_query_warn }}
{% endif %}
#index.search.slowlog.threshold.query.info: 5s
{% if elasticsearch_slowlog_threshold_query_info is defined %}
index.search.slowlog.threshold.query.info: {{ elasticsearch_slowlog_threshold_query_info }}
{% endif %}
#index.search.slowlog.threshold.query.debug: 2s
{% if elasticsearch_slowlog_threshold_query_debug is defined %}
index.search.slowlog.threshold.query.debug: {{ elasticsearch_slowlog_threshold_query_debug }}
{% endif %}
#index.search.slowlog.threshold.query.trace: 500ms
{% if elasticsearch_slowlog_threshold_query_trace is defined %}
index.search.slowlog.threshold.query.trace: {{ elasticsearch_slowlog_threshold_query_trace }}
{% endif %}
#index.search.slowlog.threshold.fetch.warn: 1s
{% if elasticsearch_slowlog_threshold_fetch_warn is defined %}
index.search.slowlog.threshold.fetch.warn: {{ elasticsearch_slowlog_threshold_fetch_warn }}
{% endif %}
#index.search.slowlog.threshold.fetch.info: 800ms
{% if elasticsearch_slowlog_threshold_fetch_info is defined %}
index.search.slowlog.threshold.fetch.info: {{ elasticsearch_slowlog_threshold_fetch_info }}
{% endif %}
#index.search.slowlog.threshold.fetch.debug: 500ms
{% if elasticsearch_slowlog_threshold_fetch_debug is defined %}
index.search.slowlog.threshold.fetch.debug: {{ elasticsearch_slowlog_threshold_fetch_debug }}
{% endif %}
#index.search.slowlog.threshold.fetch.trace: 200ms
{% if elasticsearch_slowlog_threshold_fetch_trace is defined %}
index.search.slowlog.threshold.fetch.trace: {{ elasticsearch_slowlog_threshold_fetch_trace }}
{% endif %}
#index.indexing.slowlog.threshold.index.warn: 10s
{% if elasticsearch_slowlog_threshold_index_warn is defined %}
index.indexing.slowlog.threshold.index.warn: {{ elasticsearch_slowlog_threshold_index_warn }}
{% endif %}
#index.indexing.slowlog.threshold.index.info: 5s
{% if elasticsearch_slowlog_threshold_index_info is defined %}
index.indexing.slowlog.threshold.index.info: {{ elasticsearch_slowlog_threshold_index_info }}
{% endif %}
#index.indexing.slowlog.threshold.index.debug: 2s
{% if elasticsearch_slowlog_threshold_index_debug is defined %}
index.indexing.slowlog.threshold.index.debug: {{ elasticsearch_slowlog_threshold_index_debug }}
{% endif %}
#index.indexing.slowlog.threshold.index.trace: 500ms
{% if elasticsearch_slowlog_threshold_index_trace is defined %}
index.indexing.slowlog.threshold.index.trace: {{ elasticsearch_slowlog_threshold_index_trace }}
{% endif %}
################################## GC Logging ################################
#monitor.jvm.gc.ParNew.warn: 1000ms
{% if elasticsearch_gc_par_new_warn is defined %}
monitor.jvm.gc.ParNew.warn: {{ elasticsearch_gc_par_new_warn }}
{% endif %}
#monitor.jvm.gc.ParNew.info: 700ms
{% if elasticsearch_gc_par_new_info is defined %}
monitor.jvm.gc.ParNew.info: {{ elasticsearch_gc_par_new_info }}
{% endif %}
#monitor.jvm.gc.ParNew.debug: 400ms
{% if elasticsearch_gc_par_new_debug is defined %}
monitor.jvm.gc.ParNew.debug: {{ elasticsearch_gc_par_new_debug }}
{% endif %}
#monitor.jvm.gc.ConcurrentMarkSweep.warn: 10s
{% if elasticsearch_gc_soncurrent_mark_sweep_warn is defined %}
monitor.jvm.gc.ConcurrentMarkSweep.warn: {{ elasticsearch_gc_soncurrent_mark_sweep_warn }}
{% endif %}
#monitor.jvm.gc.ConcurrentMarkSweep.info: 5s
{% if elasticsearch_gc_soncurrent_mark_sweep_info is defined %}
monitor.jvm.gc.ConcurrentMarkSweep.info: {{ elasticsearch_gc_soncurrent_mark_sweep_info }}
{% endif %}
#monitor.jvm.gc.ConcurrentMarkSweep.debug: 2s
{% if elasticsearch_gc_soncurrent_mark_sweep_debug is defined %}
monitor.jvm.gc.ConcurrentMarkSweep.debug: {{ elasticsearch_gc_soncurrent_mark_sweep_debug }}
{% endif %}
################################### Varia #####################################
{% if elasticsearch_plugin_marvel_version is defined %}
{% if elasticsearch_plugin_marvel_agent_exporter_es_index_timeformat is defined %}
marvel.agent.exporter.es.index.timeformat: {{ elasticsearch_plugin_marvel_agent_exporter_es_index_timeformat }}
{% endif %}
{% if elasticsearch_plugin_marvel_agent_interval is defined %}
marvel.agent.interval: {{ elasticsearch_plugin_marvel_agent_interval }}
{% endif %}
{% if elasticsearch_plugin_marvel_agent_indices is defined %}
marvel.agent.indices: {{ elasticsearch_plugin_marvel_agent_indices }}
{% endif %}
{% if elasticsearch_plugin_marvel_agent_exporter_es_hosts is defined %}
marvel.agent.exporter.es.hosts: {{ elasticsearch_plugin_marvel_agent_exporter_es_hosts }}
{% endif %}
{% if elasticsearch_plugin_marvel_agent_enabled is defined %}
marvel.agent.enabled: {{ elasticsearch_plugin_marvel_agent_enabled }}
{% endif %}
{% endif %}
{% if elasticsearch_misc_auto_create_index is defined %}
action.auto_create_index: {{ elasticsearch_misc_auto_create_index }}
{% endif %}
{% if elasticsearch_misc_disable_delete_all_indices is defined %}
action.disable_delete_all_indices: {{ elasticsearch_misc_disable_delete_all_indices }}
{% endif %}
{% if elasticsearch_thread_pools is defined %}
{% for threadPoolSetting in elasticsearch_thread_pools %}
{{ threadPoolSetting }}
{% endfor %}
{% endif %}
{% if elasticsearch_indices_cache_filter_size is defined %}
indices.cache.filter.size: {{ elasticsearch_indices_cache_filter_size }}
{% endif %}
################################### Dynamic Scripting #####################################
{% if elasticsearch_script_disable_dynamic is defined %}
script.disable_dynamic: {{ elasticsearch_script_disable_dynamic }}
{% endif %}
################################### CORS Settings #####################################
{% if elasticsearch_http_cors_enabled is defined %}
http.cors.enabled: {{ elasticsearch_http_cors_enabled }}
{% endif %}
{% if elasticsearch_http_cors_allow_origin is defined %}
http.cors.allow-origin: {{ elasticsearch_http_cors_allow_origin }}
{% endif %}
{% if elasticsearch_script_groovy_sandbox_enabled is defined %}
script.groovy.sandbox.enabled: {{ elasticsearch_script_groovy_sandbox_enabled }}
{% endif %}
################################### Awareness Settings #####################################
{% if elasticsearch_cluster_routing_allocation_zone_awareness is defined %}
cluster.routing.allocation.awareness.attributes: zone
node.zone: {{ facter_ec2_placement_availability_zone }}
{% endif %}
================================================
FILE: tests/ansible.cfg
================================================
[defaults]
roles_path=../../
================================================
FILE: tests/elastic_test.sh
================================================
#!/bin/bash
set -e
# Put data
curl -XPUT 'http://localhost:9200/blog/user/dilbert' -d '{ "name" : "Dilbert Brown" }'
#Get data
curl -XGET 'http://localhost:9200/blog/user/dilbert?pretty=true' | grep "\"name\" : \"Dilbert Brown\""
# Check if kopf is running
curl -XGET 'http://localhost:9200/_plugin/kopf/' | grep "ng-app=\"kopf\""
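# Optional extra check (a sketch, not part of the original test): on a
# single-node install with default replica settings the cluster health
# endpoint should report "yellow" or "green". Uncomment to enable:
# curl -XGET 'http://localhost:9200/_cluster/health?pretty=true' | grep -E '"status" : "(yellow|green)"'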
================================================
FILE: tests/local.ini
================================================
localhost ansible_connection='local'
================================================
FILE: tests/test1.yml
================================================
---
# Test playbook for the ansible-elasticsearch role
- hosts: all
sudo: true
vars_files:
- "test1_var.yml"
roles:
- ansible-elasticsearch
================================================
FILE: tests/test1_var.yml
================================================
---
elasticsearch_version: 1.4.2
elasticsearch_apt_java_package: oracle-java8-installer
elasticsearch_java_home: /usr/lib/jvm/java-8-oracle
elasticsearch_heap_size: 1g
elasticsearch_max_open_files: 65535
elasticsearch_timezone: "America/New_York"
elasticsearch_node_max_local_storage_nodes: 1
elasticsearch_index_mapper_dynamic: "true"
elasticsearch_memory_bootstrap_mlockall: "true"
elasticsearch_install_java: "true"
elasticsearch_plugins:
- { name: 'elasticsearch/elasticsearch-mapper-attachments/2.4.1' }
- { name: 'com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/2.0.5' }
- { name: 'facet-script', url: 'http://dl.bintray.com/content/imotov/elasticsearch-plugins/elasticsearch-facet-script-1.1.2.zip' }
- { name: 'lmenezes/elasticsearch-kopf' }
elasticsearch_thread_pools:
- "threadpool.bulk.type: fixed"
- "threadpool.bulk.size: 50"
- "threadpool.bulk.queue_size: 1000"
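# Further variables supported by the role could be exercised here as well;
# for example (commented out, mirroring vars/sample.yml):
# elasticsearch_java_opts: "-XX:-UseSuperWord"
# elasticsearch_max_locked_memory: unlimited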
================================================
FILE: vagrant-inventory.ini
================================================
#
# Inventory for provisioning with Vagrant
#
#####################
# Local Environment #
#####################
[vagrant]
192.168.111.10 ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
[vagrant:vars]
# spm_client_token=<enter your token>
================================================
FILE: vagrant-main.yml
================================================
---
# Elasticsearch Ansible Playbook
- hosts: all
  user: "{{ user }}"
sudo: yes
vars_files:
- defaults/main.yml
- vars/vagrant.yml
tasks:
- include: tasks/main.yml
    # Uncomment to install and enable SPM. Make sure to set the spm_client_token variable in vagrant-inventory.ini to your SPM key.
# - include: tasks/spm.yml
handlers:
- include: handlers/main.yml
================================================
FILE: vars/sample.yml
================================================
---
# Elasticsearch Ansible Sample Variables
elasticsearch_version: 0.90.5
elasticsearch_heap_size: 1g
elasticsearch_max_open_files: 65535
elasticsearch_max_locked_memory: unlimited
elasticsearch_timezone: "America/New_York"
elasticsearch_cluster_name: elasticsearch-ansible
elasticsearch_node_name: elasticsearch-ansible-node
elasticsearch_node_max_local_storage_nodes: 1
elasticsearch_index_mapper_dynamic: "true"
elasticsearch_memory_bootstrap_mlockall: "true"
elasticsearch_gateway_type: local
elasticsearch_gateway_recover_after_nodes: 1
elasticsearch_gateway_recover_after_time: 2m
elasticsearch_gateway_expected_nodes: 1
elasticsearch_discovery_zen_minimum_master_nodes: 1
elasticsearch_discovery_zen_ping_timeout: 30s
elasticsearch_discovery_zen_ping_multicast_enabled: "true"
elasticsearch_misc_auto_create_index: "true"
elasticsearch_misc_query_bool_max_clause_count: 4096
elasticsearch_misc_disable_delete_all_indices: "true"
elasticsearch_java_opts: "-XX:-UseSuperWord"
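The `elasticsearch_heap_size: 1g` value above typically ends up in `ES_HEAP_SIZE`, which the startup script passes to the JVM as `-Xms`/`-Xmx`; the `k`/`m`/`g` suffixes follow the usual JVM convention. A hypothetical helper (not part of the role) illustrating what those suffixes amount to in bytes:

```python
def heap_to_bytes(size):
    """Expand a JVM-style heap size ("1g", "512m", "65535") to bytes."""
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    s = size.strip().lower()
    if s and s[-1] in units:
        return int(s[:-1]) * units[s[-1]]
    return int(s)  # no suffix: plain byte count

print(heap_to_bytes("1g"))  # 1073741824
```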
================================================
FILE: vars/vagrant.yml
================================================
---
# Elasticsearch Ansible Variables
elasticsearch_version: 1.7.3
elasticsearch_apt_java_package: oracle-java8-installer
elasticsearch_java_home: /usr/lib/jvm/java-8-oracle
elasticsearch_heap_size: 1g
elasticsearch_max_open_files: 65535
elasticsearch_timezone: "America/New_York"
elasticsearch_node_max_local_storage_nodes: 1
elasticsearch_index_mapper_dynamic: "true"
elasticsearch_memory_bootstrap_mlockall: "true"
elasticsearch_install_java: "true"
elasticsearch_plugins:
- { name: 'elasticsearch/elasticsearch-mapper-attachments/2.7.1', reinstall: false }
- { name: 'com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/2.0.9', reinstall: false }
- { name: 'facet-script', url: 'http://dl.bintray.com/content/imotov/elasticsearch-plugins/elasticsearch-facet-script-1.1.2.zip', reinstall: false }
- { name: 'http-basic', url: 'https://github.com/Asquera/elasticsearch-http-basic/releases/download/v1.5.1/elasticsearch-http-basic-1.5.1.jar', download_only: true, reinstall: false }
elasticsearch_thread_pools:
- "threadpool.bulk.type: fixed"
- "threadpool.bulk.size: 50"
- "threadpool.bulk.queue_size: 1000"
elasticsearch_service_startonboot: yes
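The `elasticsearch_thread_pools` entries above are flattened `key: value` strings rather than nested YAML, presumably so the role can append them verbatim to elasticsearch.yml. A quick sketch (hypothetical helper, not the role's actual template logic) of recovering the key/value structure from them:

```python
def parse_setting(line):
    # Split "threadpool.bulk.type: fixed" into ("threadpool.bulk.type", "fixed")
    key, _, value = line.partition(":")
    return key.strip(), value.strip()

thread_pools = [
    "threadpool.bulk.type: fixed",
    "threadpool.bulk.size: 50",
    "threadpool.bulk.queue_size: 1000",
]
settings = dict(parse_setting(line) for line in thread_pools)
print(settings["threadpool.bulk.type"])  # fixed
```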