Repository: tomMoulard/make-my-server
Branch: master
Commit: 5cc89d09a603
Files: 130
Total size: 225.1 KB
Directory structure:
gitextract_76wwh0f9/
├── .github/
│   ├── FUNDING.yml
│   ├── dependabot.yml
│   └── workflows/
│       ├── dockerpublish.yml
│       └── healthcheck.workflow.tmpl.yml
├── .gitignore
├── .yamllint
├── README.md
├── arachni/
│   ├── README.md
│   └── docker-compose.arachni.yml
├── bazarr/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.bazarr.yml
├── bitwarden/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.bitwarden.yml
├── ciao/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.ciao.yml
├── codimd/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.codimd.yml
├── docker-compose.networks.yml
├── docker-compose.yml
├── elk/
│   ├── README.md
│   ├── docker-compose.elk.yml
│   ├── elasticsearch/
│   │   ├── .gitignore
│   │   └── .gitkeep
│   └── logstash/
│       └── logstash.conf
├── factorio/
│   ├── .gitignore
│   ├── README.md
│   ├── config/
│   │   ├── .gitignore
│   │   ├── map-gen-settings.json
│   │   ├── map-settings.json
│   │   └── server-settings.json
│   ├── docker-compose.factorio.yml
│   └── mods/
│       └── mod-list.json
├── framadate/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.framadate.yml
├── gitlab/
│   ├── .gitignore
│   ├── README.md
│   ├── config/
│   │   └── gitlab.rb
│   ├── docker-compose.gitlab.yml
│   └── runner/
│       └── config.toml
├── hits/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.hits.yml
├── homeassistant/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.homeassistant.yml
├── hugo/
│   ├── .gitignore
│   ├── README.md
│   ├── docker-compose.hugo.yml
│   └── nginx/
│       ├── conf/
│       │   └── nginx.conf
│       └── logs/
│           └── .gitkeep
├── jackett/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.jackett.yml
├── jellyfin/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.jellyfin.yml
├── jupyter/
│   ├── README.md
│   └── docker-compose.jupyter.yml
├── kavita/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.kavita.yml
├── minecraft/
│   ├── README.md
│   ├── docker-compose.minecraft-ftb.yml
│   └── docker-compose.minecraft.yml
├── mumble/
│   ├── README.md
│   └── docker-compose.mumble.yml
├── musicbot/
│   ├── README.md
│   ├── conf/
│   │   └── config.txt
│   └── docker-compose.musicBot.yml
├── nextcloud/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.nextcloud.yml
├── nginx/
│   ├── README.md
│   ├── conf/
│   │   ├── nginx.conf
│   │   └── www/
│   │       └── index.html
│   ├── docker-compose.nginx.yml
│   └── logs/
│       └── .gitkeep
├── pastebin/
│   ├── README.md
│   └── docker-compose.pastebin.yml
├── peertube/
│   ├── .gitignore
│   ├── README.md
│   ├── config/
│   │   └── production.yaml
│   └── docker-compose.peertube.yml
├── pihole/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.pihole.yml
├── portainer/
│   ├── README.md
│   └── docker-compose.portainer.yml
├── remotely/
│   ├── README.md
│   └── docker-compose.remotely.yml
├── rocketchat/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.rocket-chat.yml
├── searxng/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.searxng.yml
├── sharelatex/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.sharelatex.yml
├── sonarr/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.sonarr.yml
├── streama/
│   ├── README.md
│   └── docker-compose.streama.yml
├── test.sh
├── test_config.yml
├── theia/
│   ├── README.md
│   └── docker-compose.theia.yml
├── tor-relay/
│   ├── .gitignore
│   ├── README.md
│   ├── docker-compose.tor-relay.yml
│   └── keys/
│       └── .gitkeep
├── traefik/
│   ├── README.md
│   ├── docker-compose.traefik.yml
│   ├── dynamic_conf/
│   │   ├── fail2ban.yml
│   │   ├── middlewares.yml
│   │   └── tls.yml
│   └── logs/
│       └── .gitkeep
├── transmission/
│   ├── .gitignore
│   ├── README.md
│   └── docker-compose.transmission.yml
├── vpn/
│   ├── README.md
│   └── docker-compose.vpn.yml
└── watchtower/
    ├── README.md
    └── docker-compose.watchtower.yml
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/FUNDING.yml
================================================
# These are supported funding model platforms
github: 'tommoulard'
================================================
FILE: .github/dependabot.yml
================================================
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://help.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
  # Maintain dependencies for GitHub Actions
  - package-ecosystem: 'github-actions'
    directory: '/'
    schedule:
      interval: 'weekly'
================================================
FILE: .github/workflows/dockerpublish.yml
================================================
name: 'Tests'
on: # yamllint disable-line rule:truthy
  push: {}
jobs:
  Config-test:
    runs-on: 'ubuntu-latest'
    steps:
      - uses: 'KengoTODA/actions-setup-docker-compose@v1'
        with:
          version: '2.20.2'
      - uses: 'actions/checkout@v6'
      - name: 'DEBUG'
        run: 'docker version && docker compose version'
      - name: 'Run tests'
        run: './test.sh'
      - uses: 'actions/upload-artifact@v7'
        if: 'failure()'
        with:
          name: 'test-artifacts'
          path: |
            log.log
            *.patch
  Health-checks-codimd:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'codimd'
      timeout-minutes: 5
  # Health-checks-grafana:
  #   uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
  #   with:
  #     sus: 'mkdir -p ./grafana/grafana/ ./grafana/prometheus/data/'
  #     service_name: 'grafana'
  # Health-checks-hits:
  #   uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
  #   with:
  #     sus: 'mkdir -p ./hits/postgresql/data'
  #     service_name: 'hits'
  Health-checks-homeassistant:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'homeassistant'
  # Health-checks-hugo:
  #   uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
  #   with:
  #     service_name: 'hugo'
  Health-checks-jackett:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'jackett'
  Health-checks-kavita:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'kavita'
  # Health-checks-mastodon:
  #   uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
  #   with:
  #     service_name: 'mastodon'
  Health-checks-nextcloud:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'nextcloud'
  Health-checks-nginx:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'nginx'
  Health-checks-searxng:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'searxng'
  Health-checks-sharelatex:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'sharelatex'
  Health-checks-streama:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      sus: 'touch streama/streama.mv.db streama/streama.trace.db'
      service_name: 'streama'
  Health-checks-traefik:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'traefik'
  Health-checks-transmission:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'transmission'
  Health-checks-wordpress:
    uses: 'tomMoulard/make-my-server/.github/workflows/healthcheck.workflow.tmpl.yml@master'
    with:
      service_name: 'wordpress'
  Lint:
    runs-on: 'ubuntu-latest'
    steps:
      - uses: 'actions/checkout@v6'
      - name: 'Install yamllint'
        run: 'pip install yamllint'
      - name: 'yamllint version'
        run: 'yamllint --version'
      - name: 'Lint YAML files'
        run: 'yamllint --format github .'
================================================
FILE: .github/workflows/healthcheck.workflow.tmpl.yml
================================================
on: # yamllint disable-line rule:truthy
  workflow_call:
    inputs:
      sus:
        description: 'A startup script to run beforehand'
        required: false
        type: 'string'
      service_name:
        description: 'A service name to run the health check upon'
        required: true
        type: 'string'
      timeout-minutes:
        description: 'A timeout to wait for running containers'
        required: false
        default: 2
        type: 'number'
jobs:
  health-check:
    runs-on: 'ubuntu-latest'
    steps:
      - uses: 'actions/checkout@v6'
      - uses: 'KengoTODA/actions-setup-docker-compose@v1'
        with:
          version: '2.20.2'
      - name: 'Caching'
        uses: 'actions/cache@v5'
        with:
          path: '/var/lib/docker/'
          key: '${{ runner.os }}-health-${{ github.job }}'
      - name: 'Setting up job'
        if: '${{ inputs.sus }}'
        run: |
          ${{ inputs.sus }}
      - name: 'Starting the docker-compose stack'
        run: |
          echo -e "USERS=runner\nUSERNAME=octocat\nTRAEFIK_DNS_ENTRYPOINT=8080" > .env
          docker-compose up -d ${{inputs.service_name}}
      - name: 'Waiting for running containers'
        timeout-minutes: '${{ inputs.timeout-minutes }}'
        run: |
          while :; do
            echo "sleeping for 5s"
            sleep 5s;
            docker-compose ps ${{inputs.service_name}} | grep "starting" || exit 0
          done
      - name: 'Checking containers health'
        run: |
          docker-compose ps ${{inputs.service_name}} | grep "healthy"
      - name: 'Checking for unattended volumes'
        run: |
          git diff --exit-code .
      - name: 'Exporting logs'
        if: '${{ failure() }}'
        run: |
          docker-compose ps ${{inputs.service_name}}
          docker-compose logs ${{inputs.service_name}}
================================================
FILE: .gitignore
================================================
*.log
blog/blog*
blog/nginx/conf/www
gitlab/logs
portainer/data
.env
.env.generated
*.patch
*.swp
================================================
FILE: .yamllint
================================================
# yaml-language-server: $schema=https://json.schemastore.org/yamllint.json
# ref: https://yamllint.readthedocs.io/en/stable/configuration.html
extends: 'default'
ignore: |
  homer/**
  matrix/**
  huginn/**
  registry/**
  registery/**
  results.json
  peertube/config/custom-environment-variables.yaml
  .git/
  peertube/config/default.yaml
  test_config.yml
# https://yamllint.readthedocs.io/en/stable/rules.html
rules:
  document-start:
    present: false
  comments:
    min-spaces-from-content: 1
  line-length:
    allow-non-breakable-inline-mappings: true
    ignore:
      - '.github/workflows/healthcheck.workflow.tmpl.yml'
      - '.github/workflows/dockerpublish.yml'
  truthy:
    allowed-values:
      - 'false'
      - 'on'
      - 'true'
  indentation:
    spaces: 2
  empty-values: 'enable'
  float-values:
    forbid-inf: true
    forbid-nan: true
    forbid-scientific-notation: true
    require-numeral-before-decimal: true
  octal-values: 'enable'
  quoted-strings:
    quote-type: 'single'
    required: true
    allow-quoted-quotes: false
# vim: ft=yaml
================================================
FILE: README.md
================================================
# Server configuration
[](https://discord.gg/zQV6m9Jk6Z)
Your (my) own server configuration, managed by docker-compose, with
comprehensive default configuration.
## Setup
If you are using [docker compose version <2.20](https://docs.docker.com/compose/multiple-compose-files/include/),
you need to define the following bash function to use this project:
```bash
docker-compose ()
{
    command docker-compose $(find . -name 'docker-compose.*.yml' -type f -printf '%p\t%d\n' 2>/dev/null | sort -n -k2 | cut -f 1 | awk '{print "-f "$0}') "$@"
}
```
### Run
```bash
SITE=tom.moulard.org docker-compose up -d
```
Now you have your own server configuration.
To be a little more consistent with the management, you can use a `.env` file
and do:
```bash
cp .env.default .env
```
And edit the `.env` file to use the correct configuration.
The `docker-compose` function gathers all docker-compose files so that the
whole configuration lives in one place (see `docker-compose config`).
### Tear down
```bash
docker-compose down
```
### Services list
There **should** be only one service per folder:
for example, the folder `traefik/` contains all the configuration necessary to
run the `traefik` service.
Thus, each folder represents an available service.
Each service directory must follow this layout:
```
service/
├── conf
│   └── ...
├── data
│   └── ...
├── docker-compose.servicename.yml
├── logs
│   ├── access.log
│   └── error.log
└── README.md
```
If the service you are adding can use volumes:
- `data/` is where to store the service's data
- `conf/` is where to store the service's configuration
- `logs/` is where to store the service's logs (other than Docker logs)
Feel free to do a Pull Request to add your ideas.
[more ideas](https://github.com/awesome-selfhosted/awesome-selfhosted)
## Configuration
Don't forget to change:
- DB passwords (might not be needed since they sit behind the reverse proxy)
- VPN secrets (if none are provided, they are generated directly).
Configuration files are: `docker-compose.yml`, `nginx.conf`
To set the password:
```bash
echo "USERS=$(htpasswd -nB $USER)" >> .env
```
You can add a new set of credentials by editing the `.env` file like so:
```env
USERS=toto:pass,tata:pass, ...
```
The `.env.default` is generated using this command:
```bash
grep '${' **/docker-compose.*.yml | sed "s/.*\${\(.*\)}.*/\1/g" | cut -d":" -f 1 | sort -u | sort | xargs -I % echo "%=" >> .env.default
```
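As a quick sanity check of that pipeline, here is what it extracts from a single compose line (the `BAZARR_IMAGE_VERSION` sample line is illustrative): only the variable name before the `:-` default survives the `sed`/`cut` steps.

```shell
# A sample compose line containing a ${VAR:-default} substitution.
line="image: 'linuxserver/bazarr:\${BAZARR_IMAGE_VERSION:-v1.2.2}'"
# Same sed/cut steps as the generation command above.
echo "$line" | sed "s/.*\${\(.*\)}.*/\1/g" | cut -d":" -f 1
# -> BAZARR_IMAGE_VERSION
```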
### For local developments
Edit the `/etc/hosts` file so the service URLs resolve to your local reverse
proxy. For example, adding this to your `/etc/hosts` will allow you to run and
debug the Traefik service locally:
```bash
127.0.0.1 traefik.moulard.org
```
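To keep such entries idempotent when scripting this, here is a small sketch (the `HOSTS_FILE` indirection and the `hosts.local` scratch default are illustrative; point it at `/etc/hosts` and run as root to change the real file):

```shell
# Illustrative target file; set HOSTS_FILE=/etc/hosts (as root) for real use.
HOSTS_FILE="${HOSTS_FILE:-hosts.local}"
ENTRY='127.0.0.1 traefik.moulard.org'
touch "$HOSTS_FILE"
# Append the entry only if an identical line is not already present.
grep -qxF "$ENTRY" "$HOSTS_FILE" || printf '%s\n' "$ENTRY" >> "$HOSTS_FILE"
```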
### Scaling up
```bash
docker-compose scale nginx=2
```
## Tests
### Lint
! Warning: This is enforced for all PRs.
We are using yamllint to lint our yaml files.
You can install it by looking at the [official
documentation](https://yamllint.readthedocs.io/en/stable/quickstart.html#installation).
Once installed, you can run the following command to lint all the yaml files:
```bash
yamllint .
```
### docker-compose config
! Warning: This is enforced for all PRs.
You can run the following command to check that the docker-compose files are
correctly written:
```bash
./test.sh
```
It tests that:
- all docker-compose files are valid
- all docker-compose files are parsable
- all docker-compose files are consistent with the test_config.yml file
- all environment variables are set inside the `.env.default` file
Once this shell script has run, if the tests fail, you will see a number of
modified files (e.g., `test_config.yml`) that indicate what is wrong.
Note that the GitHub Action will run this script for you and provide a
`patch.patch` file that **should** solve most of your issues.
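Once that artifact has been downloaded into the repository root, it can be applied with standard git tooling (a hedged sketch; the `-f` guard simply makes it a no-op when no patch is present):

```shell
# Apply the CI-generated patch if it is present in the repository root.
[ -f patch.patch ] && git apply patch.patch || echo "no patch.patch found"
```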
# Authors
Main author:
- [Tom](http://tom.moulard.org)
Gitlab helper:
- [michel_k](mailto:thomas.michelot@epita.fr)
Discord MusicBot/minecraft:
- [huvell_m](mailto:martin.huvelle@epita.fr),
see PR [#6](https://github.com/tomMoulard/make-my-server/pull/6)
================================================
FILE: arachni/README.md
================================================
# arachni
https://www.arachni-scanner.com/
Arachni is a feature-full, modular, high-performance Ruby framework aimed
towards helping penetration testers and administrators evaluate the security of
modern web applications.
It is versatile enough to cover a great deal of use cases, ranging from a
simple command line scanner utility, to a global high performance grid of
scanners, to a Ruby library allowing for scripted audits, to a multi-user
multi-scan web collaboration platform. In addition, its simple REST API makes
integration a cinch.
================================================
FILE: arachni/docker-compose.arachni.yml
================================================
services:
  arachni:
    image: 'arachni/arachni'
    labels:
      traefik.enable: true
      traefik.http.routers.arachni.middlewares: 'basic_auth@docker'
      traefik.http.routers.arachni.rule: 'Host(`arachni.${SITE:-localhost}`)'
      traefik.http.services.arachni.loadbalancer.server.port: 9292
    networks:
      - 'srv'
    restart: 'always'
================================================
FILE: bazarr/.gitignore
================================================
config/
movies/
tv/
================================================
FILE: bazarr/README.md
================================================
# bazarr
https://www.bazarr.media/
Bazarr is a companion application to Sonarr and Radarr that manages and
downloads subtitles based on your requirements.
================================================
FILE: bazarr/docker-compose.bazarr.yml
================================================
services:
  bazarr:
    image: 'linuxserver/bazarr:${BAZARR_IMAGE_VERSION:-v1.2.2}'
    environment:
      PGID: '${BAZARR_GPID:-1000}'
      PUID: '${BAZARR_PUID:-1000}'
      TZ: '${TZ:-Europe/Paris}'
    labels:
      traefik.enable: true
      traefik.http.routers.bazarr.middlewares: 'basic_auth@docker'
      traefik.http.routers.bazarr.rule: 'Host(`bazarr.${SITE:-localhost}`)'
      traefik.http.services.bazarr.loadbalancer.server.port: 8080
    links:
      - 'transmission'
      - 'jackett'
      - 'sonarr'
    networks:
      - 'srv'
    restart: 'always'
    volumes:
      - './config:/config'
      - './movies:/movies'
      - './tv:/tv'
================================================
FILE: bitwarden/.gitignore
================================================
data/
================================================
FILE: bitwarden/README.md
================================================
# bitwarden
https://bitwarden.com
Bitwarden is an outstanding password manager that includes all the bells and
whistles you've come to expect from such a tool. And because Bitwarden is open
source, it updates regularly.
The image used here is `https://hub.docker.com/r/bitwardenrs/server`:
a Bitwarden server API implementation written in Rust, compatible with
upstream Bitwarden clients, perfect for self-hosted deployment where running
the official resource-heavy service might not be ideal.
This server is based on vaultwarden instead of bitwarden_rs. See
[dani-garcia/vaultwarden#1642](https://github.com/dani-garcia/vaultwarden/discussions/1642)
for more explanation.
================================================
FILE: bitwarden/docker-compose.bitwarden.yml
================================================
services:
  bitwarden:
    image: 'vaultwarden/server:${BITWARDEN_IMAGE_VERSION:-latest}'
    environment:
      ADMIN_TOKEN: '${USERS}'
      # to enable U2F and FIDO2 WebAuthn authentication
      DOMAIN: 'https://bitwarden.${SITE:-localhost}'
      PASSWORD_ITERATIONS: 500000
      ROCKET_PORT: 8080
      # whether users are allowed to create Bitwarden Sends
      SENDS_ALLOWED: 'true'
      SIGNUPS_ALLOWED: 'true'
      # if new users need to verify their email address upon registration
      SIGNUPS_VERIFY: 'false'
      TZ: '${TZ:-Europe/Paris}'
    labels:
      traefik.enable: true
      traefik.http.routers.bitwarden-admin.middlewares: 'basic_auth@docker'
      traefik.http.routers.bitwarden-admin.rule: |
        'Host(`bitwarden.${SITE:-localhost}`) && PathPrefix(`/admin`)'
      traefik.http.routers.bitwarden-user.rule: |
        'Host(`bitwarden.${SITE:-localhost}`) && !PathPrefix(`/admin`)'
      traefik.http.services.bitwarden.loadbalancer.server.port: 8080
    networks:
      - 'srv'
    restart: 'always'
    user: 'nobody'
    volumes:
      - './data:/data'
================================================
FILE: ciao/.gitignore
================================================
db
================================================
FILE: ciao/README.md
================================================
# ciao
https://github.com/brotandgames/ciao
ciao checks HTTP(S) URL endpoints for an HTTP status code (or errors on the lower TCP stack) and sends a notification on status change via E-Mail or Webhooks.
It uses Cron syntax to schedule the checks and comes along with a Web UI and a RESTful JSON API.
================================================
FILE: ciao/docker-compose.ciao.yml
================================================
services:
  ciao:
    image: 'brotandgames/ciao:${CIAO_IMAGE_VERSION:-latest}'
    environment:
      PROMETHEUS_ENABLED: '${CIAO_PROMETHEUS_ENABLED:-false}'
      TIME_ZONE: '${TZ:-Europe/Paris}'
    labels:
      traefik.enable: true
      traefik.http.routers.ciao.middlewares: 'basic_auth@docker'
      traefik.http.routers.ciao.rule: 'Host(`ciao.${SITE:-localhost}`)'
      traefik.http.services.ciao.loadbalancer.server.port: 3000
    networks:
      - 'srv'
    restart: 'always'
    volumes:
      - './db:/app/db/sqlite/'
================================================
FILE: codimd/.gitignore
================================================
data/
db/
================================================
FILE: codimd/README.md
================================================
# codimd
https://github.com/hackmdio/codimd
A self-hosted HackMD: a platform to write and share markdown, with real-time
collaboration, charts and MathJax support, and a slide mode.
## Installation
To install codimd, follow these steps:
```bash
mkdir -p codimd/data codimd/db
chown -R 1500:1500 codimd/data
```
## User creation
```bash
$ docker-compose exec codimd ./bin/manage_users
You did not specify either --add or --del or --reset!
Command-line utility to create users for email-signin.
Usage: bin/manage_users [--pass password] (--add | --del) user-email
Options:
  --add     Add user with the specified user-email
  --del     Delete user with specified user-email
  --reset   Reset user password with specified user-email
  --pass    Use password from cmdline rather than prompting
```
================================================
FILE: codimd/docker-compose.codimd.yml
================================================
networks:
  codi-internal: {}
services:
  codimd:
    image: 'hackmdio/hackmd:${CODIMD_IMAGE_VERSION:-2.4.2-cjk}'
    depends_on:
      - 'codimd-db'
    environment:
      # https://hackmd.io/c/codimd-documentation/%2Fs%2Fcodimd-configuration
      CMD_DB_URL: 'postgres://codimd:mypwd@codimd-db/codimd'
      CMD_USECDN: 'false'
    healthcheck:
      test: ['CMD', 'wget', '0.0.0.0:3000']
    labels:
      traefik.enable: true
      traefik.http.routers.codimd.rule: 'Host(`codimd.${SITE:-localhost}`)'
      traefik.http.services.codimd.loadbalancer.server.port: 3000
    links:
      - 'codimd-db'
    networks:
      - 'codi-internal'
      - 'srv'
    restart: 'always'
    volumes:
      - './data:/home/hackmd/app/public/uploads'
  codimd-db:
    image: 'postgres:11.6-alpine'
    environment:
      POSTGRES_DB: 'codimd'
      POSTGRES_PASSWORD: 'mypwd'
      POSTGRES_USER: 'codimd'
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'codimd']
    labels:
      traefik.enable: false
    networks:
      - 'codi-internal'
    restart: 'always'
    volumes:
      - './db:/var/lib/postgresql/data'
================================================
FILE: docker-compose.networks.yml
================================================
networks:
  srv: {}
================================================
FILE: docker-compose.yml
================================================
include:
  - path: 'docker-compose.networks.yml'
  - path: 'arachni/docker-compose.arachni.yml'
  - path: 'bazarr/docker-compose.bazarr.yml'
  - path: 'bitwarden/docker-compose.bitwarden.yml'
  - path: 'ciao/docker-compose.ciao.yml'
  - path: 'codimd/docker-compose.codimd.yml'
  - path: 'elk/docker-compose.elk.yml'
  - path: 'factorio/docker-compose.factorio.yml'
  - path: 'framadate/docker-compose.framadate.yml'
  - path: 'gitlab/docker-compose.gitlab.yml'
  - path: 'grafana/docker-compose.grafana.yml'
  - path: 'hits/docker-compose.hits.yml'
  - path: 'homeassistant/docker-compose.homeassistant.yml'
  - path: 'hugo/docker-compose.hugo.yml'
  - path: 'jackett/docker-compose.jackett.yml'
  - path: 'jellyfin/docker-compose.jellyfin.yml'
  - path: 'jupyter/docker-compose.jupyter.yml'
  - path: 'kavita/docker-compose.kavita.yml'
  - path: 'mastodon/docker-compose.mastodon.yml'
  - path: 'minecraft/docker-compose.minecraft-ftb.yml'
  - path: 'minecraft/docker-compose.minecraft.yml'
  - path: 'mumble/docker-compose.mumble.yml'
  - path: 'musicbot/docker-compose.musicBot.yml'
  - path: 'nextcloud/docker-compose.nextcloud.yml'
  - path: 'nginx/docker-compose.nginx.yml'
  - path: 'pastebin/docker-compose.pastebin.yml'
  - path: 'peertube/docker-compose.peertube.yml'
  - path: 'pihole/docker-compose.pihole.yml'
  - path: 'portainer/docker-compose.portainer.yml'
  - path: 'remotely/docker-compose.remotely.yml'
  - path: 'rocketchat/docker-compose.rocket-chat.yml'
  - path: 'searxng/docker-compose.searxng.yml'
  - path: 'sharelatex/docker-compose.sharelatex.yml'
  - path: 'sonarr/docker-compose.sonarr.yml'
  - path: 'streama/docker-compose.streama.yml'
  - path: 'theia/docker-compose.theia.yml'
  - path: 'tor-relay/docker-compose.tor-relay.yml'
  - path: 'traefik/docker-compose.traefik.yml'
  - path: 'transmission/docker-compose.transmission.yml'
  - path: 'vpn/docker-compose.vpn.yml'
  - path: 'watchtower/docker-compose.watchtower.yml'
  - path: 'wordpress/docker-compose.wordpress.yml'
================================================
FILE: elk/README.md
================================================
# elk
https://www.elastic.co/fr/what-is/elk-stack
The Elastic suite: Elasticsearch, Logstash, Kibana.
Elasticsearch is a database with a search engine baked in.
Logstash is a log pipeline that gathers all logs and sends them to Elasticsearch.
Kibana allows users to view Elasticsearch's data using tables and graphs.
================================================
FILE: elk/docker-compose.elk.yml
================================================
services:
  elasticsearch:
    image: 'docker.elastic.co/elasticsearch/elasticsearch:${ELASTICSEARCH_IMAGE_VERSION:-7.1.0}'
    environment:
      ES_JAVA_OPTS: '${ELASTICSEARCH_JAVA_OPTS:--Xms512m -Xmx512m}'
      bootstrap.memory_lock: '${ELASTICSEARCH_MEMORY_LOCK:-true}'
      cluster.name: '${ELASTICSEARCH_CLUSTER_NAME:-docker-cluster}'
      discovery.type: '${ELASTICSEARCH_DISCOVERY_TYPE:-single-node}'
    labels:
      traefik.enable: false
    restart: 'always'
    ulimits:
      memlock:
        hard: -1
        soft: -1
    volumes:
      - './elasticsearch/data:/usr/share/elasticsearch/data'
  kibana:
    image: 'docker.elastic.co/kibana/kibana:${KIBANA_IMAGE_VERSION:-7.1.0}'
    labels:
      traefik.enable: true
      traefik.http.routers.kibana.middlewares: 'basic_auth@docker'
      traefik.http.routers.kibana.rule: 'Host(`kibana.${SITE:-localhost}`)'
      traefik.http.services.kibana.loadbalancer.server.port: 5601
    links:
      - 'elasticsearch'
    networks:
      - 'srv'
    restart: 'always'
    volumes:
      - './kibana/kibana.yml:/usr/share/kibana/config/kibana.yml'
  logstash:
    image: 'docker.elastic.co/logstash/logstash:${LOGSTASH_IMAGE_VERSION:-7.1.0}'
    labels:
      traefik.enable: false
    links:
      - 'elasticsearch'
    restart: 'always'
    volumes:
      - './logstash/:/usr/share/logstash/pipeline/'
      - '../nginx/logs:/var/log/nginx'
      - '../traefik/logs:/var/log/traefik'
================================================
FILE: elk/elasticsearch/.gitignore
================================================
data/
================================================
FILE: elk/elasticsearch/.gitkeep
================================================
================================================
FILE: elk/logstash/logstash.conf
================================================
# logstash.conf
# Where you see:
#   # start_position => "beginning"
# you can uncomment that line if the elasticsearch instance is new and you want
# to import all previous logs.
input {
  file {
    path => "/var/log/traefik/traefik.log"
    type => "traefik_log"
    # start_position => "beginning"
  }
  file {
    path => "/var/log/traefik/access.log"
    type => "traefik_access"
    # start_position => "beginning"
  }
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx_access"
    # start_position => "beginning"
  }
  file {
    path => "/var/log/nginx/error.log"
    type => "nginx_error"
    # start_position => "beginning"
  }
}
filter {
  if [type] == "nginx_access" {
    grok {
      match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
      remove_field => "message"
    }
    mutate {
      add_field => { "read_timestamp" => "%{@timestamp}" }
    }
    date {
      match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
      remove_field => "[nginx][access][time]"
    }
    useragent {
      source => "[nginx][access][agent]"
      target => "[nginx][access][user_agent]"
      remove_field => "[nginx][access][agent]"
    }
    # This is not needed because traefik hides the real ip
    # geoip {
    #   source => "[nginx][access][remote_ip]"
    #   target => "[nginx][access][geoip]"
    # }
  }
  if [type] == "nginx_error" {
    grok {
      match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
      remove_field => "message"
    }
    mutate {
      rename => { "@timestamp" => "read_timestamp" }
    }
    date {
      match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
      remove_field => "[nginx][error][time]"
    }
  }
  if [type] == "traefik_access" {
    json {
      source => "message"
    }
    date {
      match => [ "timestamp", "dd/MM/YYYY:KK:mm:ss Z" ]
      target => "event_timestamp"
    }
    geoip {
      source => "ClientHost"
    }
    useragent {
      source => "request_User-Agent"
    }
  }
  if [type] == "traefik_log" {
    json {
      source => "message"
    }
    date {
      match => [ "timestamp", "dd/MM/YYYY:KK:mm:ss Z" ]
      target => "event_timestamp"
    }
  }
  uuid {
    target => "uuid"
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch" ]
    user => "elastic"
    password => "changeme"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
================================================
FILE: factorio/.gitignore
================================================
saves/
*.log
player-data.json
scenarios/
script-output/
================================================
FILE: factorio/README.md
================================================
# factorio
https://www.factorio.com
Factorio is a game in which you build and maintain factories. You will be
mining resources, researching technologies, building infrastructure, automating
production and fighting enemies.
================================================
FILE: factorio/config/.gitignore
================================================
rconpw
================================================
FILE: factorio/config/map-gen-settings.json
================================================
{
"_terrain_segmentation_comment": "Inverse of map scale",
"terrain_segmentation": 1,
"_water_comment":
[
"Multiplier for water 'coverage' - higher increases the water level.",
"Water level = 10 * log2(this value)"
],
"water": 1,
"_comment_width+height": "Width and height of map, in tiles; 0 means infinite",
"width": 0,
"height": 0,
"_starting_area_comment": "Multiplier for 'biter free zone radius'",
"starting_area": 1,
"peaceful_mode": false,
"autoplace_controls":
{
"coal": {"frequency": 1, "size": 1, "richness": 1},
"stone": {"frequency": 1, "size": 1, "richness": 1},
"copper-ore": {"frequency": 1, "size": 1,"richness": 1},
"iron-ore": {"frequency": 1, "size": 1, "richness": 1},
"uranium-ore": {"frequency": 1, "size": 1, "richness": 1},
"crude-oil": {"frequency": 1, "size": 1, "richness": 1},
"trees": {"frequency": 1, "size": 1, "richness": 1},
"enemy-base": {"frequency": 1, "size": 1, "richness": 1}
},
"cliff_settings":
{
"_name_comment": "Name of the cliff prototype",
"name": "cliff",
"_cliff_elevation_0_comment": "Elevation of first row of cliffs",
"cliff_elevation_0": 10,
"_cliff_elevation_interval_comment": "Elevation difference between successive rows of cliffs",
"cliff_elevation_interval": 10,
"_richness_comment": "Multiplier for cliff continuity; 0 will result in no cliffs, 10 will make all cliff rows completely solid",
"richness": 1
},
"_property_expression_names_comment":
[
"Overrides for property value generators",
"Elevation influences water and cliff placement.",
"Leave it blank to get 'normal' terrain.",
"Use '0_16-elevation' to reproduce terrain from 0.16.",
"Use '0_17-island' to get an island."
],
"property_expression_names":
{
"elevation": "0_17-island",
"control-setting:aux:bias": "0.300000",
"control-setting:aux:frequency:multiplier": "1.333333",
"control-setting:moisture:bias": "0.100000",
"control-setting:moisture:frequency:multiplier": "0.500000"
},
"starting_points":
[
{"x": 1000, "y": 2000}
],
"_seed_comment": "Use null for a random seed, number for a specific seed.",
"seed": null
}
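The `_water_comment` in the file above states that the effective water level is `10 * log2(multiplier)`. A quick sketch of that relationship, with hypothetical multiplier values (only the formula itself is taken from the file; `awk` is used because POSIX shell has no floating-point math):

```shell
# Water level = 10 * log2(water multiplier), per the _water_comment above.
# Prints -10.0 for 0.5, +0.0 for 1, +10.0 for 2, +20.0 for 4.
for w in 0.5 1 2 4; do
    awk -v w="$w" 'BEGIN { printf "water=%s -> level=%+.1f\n", w, 10 * log(w) / log(2) }'
done
```

So the default `"water": 1` leaves the water level unchanged, while halving or doubling the multiplier shifts it by 10 in either direction.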
================================================
FILE: factorio/config/map-settings.json
================================================
{
"difficulty_settings":
{
"recipe_difficulty": 0,
"technology_difficulty": 0,
"technology_price_multiplier": 1,
"research_queue_setting": "after-victory"
},
"pollution":
{
"enabled": true,
"_comment_min_to_diffuse_1": "these are values for 60 ticks (1 simulated second)",
"_comment_min_to_diffuse_2": "amount that is diffused to neighboring chunk",
"diffusion_ratio": 0.02,
"min_to_diffuse": 15,
"ageing": 1,
"expected_max_per_chunk": 150,
"min_to_show_per_chunk": 50,
"min_pollution_to_damage_trees": 60,
"pollution_with_max_forest_damage": 150,
"pollution_per_tree_damage": 50,
"pollution_restored_per_tree_damage": 10,
"max_pollution_to_restore_trees": 20,
"enemy_attack_pollution_consumption_modifier": 1
},
"enemy_evolution":
{
"enabled": true,
"time_factor": 0.000004,
"destroy_factor": 0.002,
"pollution_factor": 0.0000009
},
"enemy_expansion":
{
"enabled": true,
"min_base_spacing": 3,
"max_expansion_distance": 7,
"friendly_base_influence_radius": 2,
"enemy_building_influence_radius": 2,
"building_coefficient": 0.1,
"other_base_coefficient": 2.0,
"neighbouring_chunk_coefficient": 0.5,
"neighbouring_base_chunk_coefficient": 0.4,
"max_colliding_tiles_coefficient": 0.9,
"settler_group_min_size": 5,
"settler_group_max_size": 20,
"min_expansion_cooldown": 14400,
"max_expansion_cooldown": 216000
},
"unit_group":
{
"min_group_gathering_time": 3600,
"max_group_gathering_time": 36000,
"max_wait_time_for_late_members": 7200,
"max_group_radius": 30.0,
"min_group_radius": 5.0,
"max_member_speedup_when_behind": 1.4,
"max_member_slowdown_when_ahead": 0.6,
"max_group_slowdown_factor": 0.3,
"max_group_member_fallback_factor": 3,
"member_disown_distance": 10,
"tick_tolerance_when_member_arrives": 60,
"max_gathering_unit_groups": 30,
"max_unit_group_size": 200
},
"steering":
{
"default":
{
"radius": 1.2,
"separation_force": 0.005,
"separation_factor": 1.2,
"force_unit_fuzzy_goto_behavior": false
},
"moving":
{
"radius": 3,
"separation_force": 0.01,
"separation_factor": 3,
"force_unit_fuzzy_goto_behavior": false
}
},
"path_finder":
{
"fwd2bwd_ratio": 5,
"goal_pressure_ratio": 2,
"max_steps_worked_per_tick": 100,
"max_work_done_per_tick": 8000,
"use_path_cache": true,
"short_cache_size": 5,
"long_cache_size": 25,
"short_cache_min_cacheable_distance": 10,
"short_cache_min_algo_steps_to_cache": 50,
"long_cache_min_cacheable_distance": 30,
"cache_max_connect_to_cache_steps_multiplier": 100,
"cache_accept_path_start_distance_ratio": 0.2,
"cache_accept_path_end_distance_ratio": 0.15,
"negative_cache_accept_path_start_distance_ratio": 0.3,
"negative_cache_accept_path_end_distance_ratio": 0.3,
"cache_path_start_distance_rating_multiplier": 10,
"cache_path_end_distance_rating_multiplier": 20,
"stale_enemy_with_same_destination_collision_penalty": 30,
"ignore_moving_enemy_collision_distance": 5,
"enemy_with_different_destination_collision_penalty": 30,
"general_entity_collision_penalty": 10,
"general_entity_subsequent_collision_penalty": 3,
"extended_collision_penalty": 3,
"max_clients_to_accept_any_new_request": 10,
"max_clients_to_accept_short_new_request": 100,
"direct_distance_to_consider_short_request": 100,
"short_request_max_steps": 1000,
"short_request_ratio": 0.5,
"min_steps_to_check_path_find_termination": 2000,
"start_to_goal_cost_multiplier_to_terminate_path_find": 500.0,
"overload_levels": [0, 100, 500],
"overload_multipliers": [2, 3, 4]
},
"max_failed_behavior_count": 3
}
================================================
FILE: factorio/config/server-settings.json
================================================
{
"name": "Name of the game as it will appear in the game listing",
"description": "Description of the game that will appear in the listing",
"tags": ["game", "tags", "github.com/tomMoulard/make-my-server"],
"_comment_max_players": "Maximum number of players allowed, admins can join even a full server. 0 means unlimited.",
"max_players": 0,
"_comment_visibility": ["public: Game will be published on the official Factorio matching server",
"lan: Game will be broadcast on LAN"],
"visibility":
{
"public": false,
"lan": true
},
"_comment_credentials": "Your factorio.com login credentials. Required for games with visibility public",
"username": "",
"password": "",
"_comment_token": "Authentication token. May be used instead of 'password' above.",
"token": "",
"game_password": "",
"_comment_require_user_verification": "When set to true, the server will only allow clients that have a valid Factorio.com account",
"require_user_verification": false,
"_comment_max_upload_in_kilobytes_per_second" : "optional, default value is 0. 0 means unlimited.",
"max_upload_in_kilobytes_per_second": 0,
"_comment_max_upload_slots" : "optional, default value is 5. 0 means unlimited.",
"max_upload_slots": 5,
"_comment_minimum_latency_in_ticks": "optional one tick is 16ms in default speed, default value is 0. 0 means no minimum.",
"minimum_latency_in_ticks": 0,
"_comment_ignore_player_limit_for_returning_players": "Players that played on this map already can join even when the max player limit was reached.",
"ignore_player_limit_for_returning_players": false,
"_comment_allow_commands": "possible values are, true, false and admins-only",
"allow_commands": "admins-only",
"_comment_autosave_interval": "Autosave interval in minutes",
"autosave_interval": 10,
"_comment_autosave_slots": "server autosave slots, it is cycled through when the server autosaves.",
"autosave_slots": 5,
"_comment_afk_autokick_interval": "How many minutes until someone is kicked when doing nothing, 0 for never.",
"afk_autokick_interval": 0,
"_comment_auto_pause": "Whether should the server be paused when no players are present.",
"auto_pause": true,
"only_admins_can_pause_the_game": true,
"_comment_autosave_only_on_server": "Whether autosaves should be saved only on server or also on all connected clients. Default is true.",
"autosave_only_on_server": true,
"_comment_non_blocking_saving": "Highly experimental feature, enable only at your own risk of losing your saves. On UNIX systems, server will fork itself to create an autosave. Autosaving on connected Windows clients will be disabled regardless of autosave_only_on_server option.",
"non_blocking_saving": false,
"_comment_segment_sizes": "Long network messages are split into segments that are sent over multiple ticks. Their size depends on the number of peers currently connected. Increasing the segment size will increase upload bandwidth requirement for the server and download bandwidth requirement for clients. This setting only affects server outbound messages. Changing these settings can have a negative impact on connection stability for some clients.",
"minimum_segment_size": 25,
"minimum_segment_size_peer_count": 20,
"maximum_segment_size": 100,
"maximum_segment_size_peer_count": 10
}
================================================
FILE: factorio/docker-compose.factorio.yml
================================================
services:
factorio:
image: 'factoriotools/factorio'
labels:
traefik.enable: false
ports:
- '34197:34197/udp'
# - '27015:27015/tcp' # RCON port
restart: 'always'
volumes:
- '.:/factorio'
================================================
FILE: factorio/mods/mod-list.json
================================================
{
"mods":
[
{
"name": "base",
"enabled": true
}
]
}
================================================
FILE: framadate/.gitignore
================================================
db
================================================
FILE: framadate/README.md
================================================
# framadate
https://framagit.org/framasoft/framadate/framadate/
[Framadate](https://framadate.org) is an online service for planning appointments and making group decisions quickly and easily. It is a community-driven free/libre software alternative to Doodle.
================================================
FILE: framadate/docker-compose.framadate.yml
================================================
networks:
framadate-internal: {}
services:
framadate:
image: 'xgaia/framadate:${FRAMADATE_IMAGE_VERSION:-latest}'
depends_on:
- 'framadate-db'
environment:
ADMIN_PASSWORD: '${FRAMADATE_ADMIN_PASSWORD:-pass}'
APP_NAME: 'Framadate'
APP_URL: 'framadate.${SITE:-localhost}'
DEFAULT_POLL_DURATION: '365'
MARKDOWN_EDITOR_BY_DEFAULT: 'true'
MYSQL_DATABASE: '${FRAMADATE_MYSQL_DATABASE:-framadate}'
MYSQL_PASSWORD: '${FRAMADATE_MYSQL_PASSWORD:-framadate}'
MYSQL_ROOT_PASSWORD: '${FRAMADATE_MYSQL_ROOT_PASSWORD:-pass}'
MYSQL_USER: '${FRAMADATE_MYSQL_USER:-framadate}'
PROVIDE_FORK_AWESOME: 'true'
SERVERNAME: 'framadate.${SITE:-localhost}'
SHOW_CULTIVATE_YOUR_GARDEN: 'true'
SHOW_THE_SOFTWARE: 'true'
SHOW_WHAT_IS_THAT: 'true'
USER_CAN_ADD_IMG_OR_LINK: 'true'
labels:
- 'traefik.enable=true'
- 'traefik.http.routers.framadate.rule=Host(`framadate.${SITE:-localhost}`)'
- 'traefik.http.services.framadate.loadbalancer.server.port=80'
restart: 'always'
networks:
- 'framadate-internal'
- 'srv'
framadate-db:
image: 'mysql:5.7'
environment:
MYSQL_DATABASE: '${FRAMADATE_MYSQL_DATABASE:-framadate}'
MYSQL_PASSWORD: '${FRAMADATE_MYSQL_PASSWORD:-framadate}'
MYSQL_ROOT_PASSWORD: '${FRAMADATE_MYSQL_ROOT_PASSWORD:-pass}'
MYSQL_USER: '${FRAMADATE_MYSQL_USER:-framadate}'
healthcheck:
test: ['CMD', 'mysqlcheck', '--all-databases', '-p${FRAMADATE_MYSQL_ROOT_PASSWORD:-pass}']
labels:
- 'traefik.enable=false'
networks:
- 'framadate-internal'
restart: 'always'
volumes:
- './db:/var/lib/mysql'
================================================
FILE: gitlab/.gitignore
================================================
config/gitlab-secrets.json
config/*.pub
config/*_key
data/
================================================
FILE: gitlab/README.md
================================================
# Gitlab
https://about.gitlab.com/
GitLab is a web-based DevOps lifecycle tool that provides a Git repository
manager with wiki, issue-tracking, and continuous integration and deployment
pipeline features, using an open-source license, developed by GitLab Inc. The
software was created by Ukrainian developers Dmitriy Zaporozhets and Valery
Sizov.
## Gitlab runner
### Get the Registration Token
Find your runner registration token (\$REGISTRATION_TOKEN) at
`http://GITLAB_HOST/$PROJECT_GROUP/$PROJECT_NAME/settings/ci_cd`.
There are **two** ways to register the runner:
### Register via CLI
Steps:
- start the runner: `docker-compose up -d runner`
- register the runner:
```bash
docker-compose exec runner gitlab-runner register \
--non-interactive \
--executor "docker" \
--docker-image alpine:latest \
--url "http://gitlab/" \
--registration-token "$REGISTRATION_TOKEN" \
--description "The Best Runner" \
--tag-list "docker,aws" \
--run-untagged="true" \
--locked="false" \
--access-level="not_protected"
```
### Register via the configuration file
Exchange the registration token for a runner token:
```bash
curl -X POST "http://gitlab.${SITE}/api/v4/runners" --form "token=$REGISTRATION_TOKEN" --form "description=The Best Runner"
```
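The POST above responds with a small JSON body containing the newly created runner token — the value you then paste into `config.toml`. A minimal sketch of extracting that field with `sed` (the response body here is mocked to show the shape; the real value comes from the `curl` call):

```shell
# Mocked response body; the real one is the output of the curl call above.
resp='{"id":12345,"token":"XXXXXXXXXXXXXXXXXXXX"}'
# Pull the "token" field out of the JSON (assumes the value contains no escaped quotes).
token=$(printf '%s' "$resp" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
echo "$token"
```

If `jq` is available on the host, `curl ... | jq -r .token` is a more robust alternative to the `sed` one-liner.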
#### Change runner configuration
Now change the token in the [configuration file](gitlab/runner/config.toml).
```toml
[[runners]]
token = "XXXXXXXXXXXXXXXXXXXX"
```
and run the runner
```bash
docker-compose up -d runner
```
================================================
FILE: gitlab/config/gitlab.rb
================================================
## GitLab configuration settings
##! This file is generated during initial installation and **is not** modified
##! during upgrades.
##! Check out the latest version of this file to know about the different
##! settings that can be configured by this file, which may be found at:
##! https://gitlab.com/gitlab-org/omnibus-gitlab/raw/master/files/gitlab-config-template/gitlab.rb.template
## GitLab URL
##! URL on which GitLab will be reachable.
##! For more details on configuring external_url see:
##! https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab
# external_url 'GENERATED_EXTERNAL_URL'
## Roles for multi-instance GitLab
##! The default is to have no roles enabled, which results in GitLab running as an all-in-one instance.
##! Options:
##! redis_sentinel_role redis_master_role redis_slave_role geo_primary_role geo_secondary_role
##! For more details on each role, see:
##! https://docs.gitlab.com/omnibus/roles/README.html#roles
##!
# roles ['redis_sentinel_role', 'redis_master_role']
## Legend
##! The following notations at the beginning of each line may be used to
##! differentiate between components of this file and to easily select them using
##! a regex.
##! ## Titles, subtitles etc
##! ##! More information - Description, Docs, Links, Issues etc.
##! Configuration settings have a single # followed by a single space at the
##! beginning; Remove them to enable the setting.
##! **Configuration settings below are optional.**
##! **The values currently assigned are only examples and ARE NOT the default
##! values.**
################################################################################
################################################################################
## Configuration Settings for GitLab CE and EE ##
################################################################################
################################################################################
################################################################################
## gitlab.yml configuration
##! Docs: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/gitlab.yml.md
################################################################################
# gitlab_rails['gitlab_ssh_host'] = 'ssh.host_example.com'
# gitlab_rails['time_zone'] = 'UTC'
### Email Settings
# gitlab_rails['gitlab_email_enabled'] = true
# gitlab_rails['gitlab_email_from'] = 'example@example.com'
# gitlab_rails['gitlab_email_display_name'] = 'Example'
# gitlab_rails['gitlab_email_reply_to'] = 'noreply@example.com'
# gitlab_rails['gitlab_email_subject_suffix'] = ''
### GitLab user privileges
# gitlab_rails['gitlab_default_can_create_group'] = true
# gitlab_rails['gitlab_username_changing_enabled'] = true
### Default Theme
# gitlab_rails['gitlab_default_theme'] = 2
### Default project feature settings
# gitlab_rails['gitlab_default_projects_features_issues'] = true
# gitlab_rails['gitlab_default_projects_features_merge_requests'] = true
# gitlab_rails['gitlab_default_projects_features_wiki'] = true
# gitlab_rails['gitlab_default_projects_features_snippets'] = true
# gitlab_rails['gitlab_default_projects_features_builds'] = true
# gitlab_rails['gitlab_default_projects_features_container_registry'] = true
### Automatic issue closing
###! See https://docs.gitlab.com/ce/customization/issue_closing.html for more
###! information about this pattern.
# gitlab_rails['gitlab_issue_closing_pattern'] = "\b((?:[Cc]los(?:e[sd]?|ing)|\b[Ff]ix(?:e[sd]|ing)?|\b[Rr]esolv(?:e[sd]?|ing)|\b[Ii]mplement(?:s|ed|ing)?)(:?) +(?:(?:issues? +)?%{issue_ref}(?:(?:, *| +and +)?)|([A-Z][A-Z0-9_]+-\d+))+)"
### Download location
###! When a user clicks e.g. 'Download zip' on a project, a temporary zip file
###! is created in the following directory.
###! Should not be the same path, or a sub directory of any of the `git_data_dirs`
# gitlab_rails['gitlab_repository_downloads_path'] = 'tmp/repositories'
### Gravatar Settings
# gitlab_rails['gravatar_plain_url'] = 'http://www.gravatar.com/avatar/%{hash}?s=%{size}&d=identicon'
# gitlab_rails['gravatar_ssl_url'] = 'https://secure.gravatar.com/avatar/%{hash}?s=%{size}&d=identicon'
### Auxiliary jobs
###! Periodically executed jobs, to self-heal Gitlab, do external
###! synchronizations, etc.
###! Docs: https://github.com/ondrejbartas/sidekiq-cron#adding-cron-job
###! https://docs.gitlab.com/ce/ci/yaml/README.html#artifacts:expire_in
# gitlab_rails['stuck_ci_jobs_worker_cron'] = "0 0 * * *"
# gitlab_rails['expire_build_artifacts_worker_cron'] = "50 * * * *"
# gitlab_rails['pipeline_schedule_worker_cron'] = "41 * * * *"
# gitlab_rails['ci_archive_traces_cron_worker_cron'] = "17 * * * *"
# gitlab_rails['repository_check_worker_cron'] = "20 * * * *"
# gitlab_rails['admin_email_worker_cron'] = "0 0 * * 0"
# gitlab_rails['repository_archive_cache_worker_cron'] = "0 * * * *"
# gitlab_rails['pages_domain_verification_cron_worker'] = "*/15 * * * *"
### Webhook Settings
###! Number of seconds to wait for HTTP response after sending webhook HTTP POST
###! request (default: 10)
# gitlab_rails['webhook_timeout'] = 10
### Trusted proxies
###! Customize if you have GitLab behind a reverse proxy which is running on a
###! different machine.
###! **Add the IP address for your reverse proxy to the list, otherwise users
###! will appear signed in from that address.**
# gitlab_rails['trusted_proxies'] = []
### Monitoring settings
###! IP whitelist controlling access to monitoring endpoints
# gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '::1/128']
###! Time between sampling of unicorn socket metrics, in seconds
# gitlab_rails['monitoring_unicorn_sampler_interval'] = 10
### Reply by email
###! Allow users to comment on issues and merge requests by replying to
###! notification emails.
###! Docs: https://docs.gitlab.com/ce/administration/reply_by_email.html
# gitlab_rails['incoming_email_enabled'] = true
#### Incoming Email Address
####! The email address including the `%{key}` placeholder that will be replaced
####! to reference the item being replied to.
####! **The placeholder can be omitted but if present, it must appear in the
####! "user" part of the address (before the `@`).**
# gitlab_rails['incoming_email_address'] = "gitlab-incoming+%{key}@gmail.com"
#### Email account username
####! **With third party providers, this is usually the full email address.**
####! **With self-hosted email servers, this is usually the user part of the
####! email address.**
# gitlab_rails['incoming_email_email'] = "gitlab-incoming@gmail.com"
#### Email account password
# gitlab_rails['incoming_email_password'] = "[REDACTED]"
#### IMAP Settings
# gitlab_rails['incoming_email_host'] = "imap.gmail.com"
# gitlab_rails['incoming_email_port'] = 993
# gitlab_rails['incoming_email_ssl'] = true
# gitlab_rails['incoming_email_start_tls'] = false
#### Incoming Mailbox Settings
####! The mailbox where incoming mail will end up. Usually "inbox".
# gitlab_rails['incoming_email_mailbox_name'] = "inbox"
####! The IDLE command timeout.
# gitlab_rails['incoming_email_idle_timeout'] = 60
### Job Artifacts
# gitlab_rails['artifacts_enabled'] = true
# gitlab_rails['artifacts_path'] = "/var/opt/gitlab/gitlab-rails/shared/artifacts"
####! Job artifacts Object Store
####! Docs: https://docs.gitlab.com/ee/administration/job_artifacts.html#using-object-storage
# gitlab_rails['artifacts_object_store_enabled'] = false
# gitlab_rails['artifacts_object_store_direct_upload'] = false
# gitlab_rails['artifacts_object_store_background_upload'] = true
# gitlab_rails['artifacts_object_store_proxy_download'] = false
# gitlab_rails['artifacts_object_store_remote_directory'] = "artifacts"
# gitlab_rails['artifacts_object_store_connection'] = {
# 'provider' => 'AWS',
# 'region' => 'eu-west-1',
# 'aws_access_key_id' => 'AWS_ACCESS_KEY_ID',
# 'aws_secret_access_key' => 'AWS_SECRET_ACCESS_KEY',
# # # The below options configure an S3 compatible host instead of AWS
# # 'aws_signature_version' => 4, # For creation of signed URLs. Set to 2 if provider does not support v4.
# # 'endpoint' => 'https://s3.amazonaws.com', # default: nil - Useful for S3 compliant services such as DigitalOcean Spaces
# # 'host' => 's3.amazonaws.com',
# # 'path_style' => false # Use 'host/bucket_name/object' instead of 'bucket_name.host/object'
# }
### Git LFS
# gitlab_rails['lfs_enabled'] = true
# gitlab_rails['lfs_storage_path'] = "/var/opt/gitlab/gitlab-rails/shared/lfs-objects"
# gitlab_rails['lfs_object_store_enabled'] = false
# gitlab_rails['lfs_object_store_direct_upload'] = false
# gitlab_rails['lfs_object_store_background_upload'] = true
# gitlab_rails['lfs_object_store_proxy_download'] = false
# gitlab_rails['lfs_object_store_remote_directory'] = "lfs-objects"
# gitlab_rails['lfs_object_store_connection'] = {
# 'provider' => 'AWS',
# 'region' => 'eu-west-1',
# 'aws_access_key_id' => 'AWS_ACCESS_KEY_ID',
# 'aws_secret_access_key' => 'AWS_SECRET_ACCESS_KEY',
# # # The below options configure an S3 compatible host instead of AWS
# # 'aws_signature_version' => 4, # For creation of signed URLs. Set to 2 if provider does not support v4.
# # 'endpoint' => 'https://s3.amazonaws.com', # default: nil - Useful for S3 compliant services such as DigitalOcean Spaces
# # 'host' => 's3.amazonaws.com',
# # 'path_style' => false # Use 'host/bucket_name/object' instead of 'bucket_name.host/object'
# }
### GitLab uploads
###! Docs: https://docs.gitlab.com/ee/administration/uploads.html
# gitlab_rails['uploads_storage_path'] = "/var/opt/gitlab/gitlab-rails/public"
# gitlab_rails['uploads_base_dir'] = "uploads/-/system"
# gitlab_rails['uploads_object_store_enabled'] = false
# gitlab_rails['uploads_object_store_direct_upload'] = false
# gitlab_rails['uploads_object_store_background_upload'] = true
# gitlab_rails['uploads_object_store_proxy_download'] = false
# gitlab_rails['uploads_object_store_remote_directory'] = "uploads"
# gitlab_rails['uploads_object_store_connection'] = {
# 'provider' => 'AWS',
# 'region' => 'eu-west-1',
# 'aws_access_key_id' => 'AWS_ACCESS_KEY_ID',
# 'aws_secret_access_key' => 'AWS_SECRET_ACCESS_KEY',
# # # The below options configure an S3 compatible host instead of AWS
# # 'host' => 's3.amazonaws.com',
# # 'aws_signature_version' => 4, # For creation of signed URLs. Set to 2 if provider does not support v4.
# # 'endpoint' => 'https://s3.amazonaws.com', # default: nil - Useful for S3 compliant services such as DigitalOcean Spaces
# # 'path_style' => false # Use 'host/bucket_name/object' instead of 'bucket_name.host/object'
# }
### Impersonation settings
# gitlab_rails['impersonation_enabled'] = true
### Usage Statistics
# gitlab_rails['usage_ping_enabled'] = true
### GitLab Mattermost
###! These settings are void if Mattermost is installed on the same omnibus
###! install
# gitlab_rails['mattermost_host'] = "https://mattermost.example.com"
### LDAP Settings
###! Docs: https://docs.gitlab.com/omnibus/settings/ldap.html
###! **Be careful not to break the indentation in the ldap_servers block. It is
###! in yaml format and the spaces must be retained. Using tabs will not work.**
# gitlab_rails['ldap_enabled'] = false
###! **remember to close this block with 'EOS' below**
# gitlab_rails['ldap_servers'] = YAML.load <<-'EOS'
# main: # 'main' is the GitLab 'provider ID' of this LDAP server
# label: 'LDAP'
# host: '_your_ldap_server'
# port: 389
# uid: 'sAMAccountName'
# bind_dn: '_the_full_dn_of_the_user_you_will_bind_with'
# password: '_the_password_of_the_bind_user'
# encryption: 'plain' # "start_tls" or "simple_tls" or "plain"
# verify_certificates: true
# active_directory: true
# allow_username_or_email_login: false
# lowercase_usernames: false
# block_auto_created_users: false
# base: ''
# user_filter: ''
# ## EE only
# group_base: ''
# admin_group: ''
# sync_ssh_keys: false
#
# secondary: # 'secondary' is the GitLab 'provider ID' of second LDAP server
# label: 'LDAP'
# host: '_your_ldap_server'
# port: 389
# uid: 'sAMAccountName'
# bind_dn: '_the_full_dn_of_the_user_you_will_bind_with'
# password: '_the_password_of_the_bind_user'
# encryption: 'plain' # "start_tls" or "simple_tls" or "plain"
# verify_certificates: true
# active_directory: true
# allow_username_or_email_login: false
# lowercase_usernames: false
# block_auto_created_users: false
# base: ''
# user_filter: ''
# ## EE only
# group_base: ''
# admin_group: ''
# sync_ssh_keys: false
# EOS
### Smartcard authentication settings
###! Docs: https://docs.gitlab.com/ee/administration/auth/smartcard.html
# gitlab_rails['smartcard_enabled'] = false
# gitlab_rails['smartcard_ca_file'] = "/etc/gitlab/ssl/CA.pem"
# gitlab_rails['smartcard_client_certificate_required_port'] = 3444
### OmniAuth Settings
###! Docs: https://docs.gitlab.com/ce/integration/omniauth.html
# gitlab_rails['omniauth_enabled'] = nil
# gitlab_rails['omniauth_allow_single_sign_on'] = ['saml']
# gitlab_rails['omniauth_sync_email_from_provider'] = 'saml'
# gitlab_rails['omniauth_sync_profile_from_provider'] = ['saml']
# gitlab_rails['omniauth_sync_profile_attributes'] = ['email']
# gitlab_rails['omniauth_auto_sign_in_with_provider'] = 'saml'
# gitlab_rails['omniauth_block_auto_created_users'] = true
# gitlab_rails['omniauth_auto_link_ldap_user'] = false
# gitlab_rails['omniauth_auto_link_saml_user'] = false
# gitlab_rails['omniauth_external_providers'] = ['twitter', 'google_oauth2']
# gitlab_rails['omniauth_providers'] = [
# {
# "name" => "google_oauth2",
# "app_id" => "YOUR APP ID",
# "app_secret" => "YOUR APP SECRET",
# "args" => { "access_type" => "offline", "approval_prompt" => "" }
# }
# ]
### Backup Settings
###! Docs: https://docs.gitlab.com/omnibus/settings/backups.html
# gitlab_rails['manage_backup_path'] = true
# gitlab_rails['backup_path'] = "/var/opt/gitlab/backups"
###! Docs: https://docs.gitlab.com/ce/raketasks/backup_restore.html#backup-archive-permissions
# gitlab_rails['backup_archive_permissions'] = 0644
# gitlab_rails['backup_pg_schema'] = 'public'
###! The duration in seconds to keep backups before they are allowed to be deleted
# gitlab_rails['backup_keep_time'] = 604800
# gitlab_rails['backup_upload_connection'] = {
# 'provider' => 'AWS',
# 'region' => 'eu-west-1',
# 'aws_access_key_id' => 'AKIAKIAKI',
# 'aws_secret_access_key' => 'secret123'
# }
# gitlab_rails['backup_upload_remote_directory'] = 'my.s3.bucket'
# gitlab_rails['backup_multipart_chunk_size'] = 104857600
###! **Turns on AWS Server-Side Encryption with Amazon S3-Managed Keys for
###! backups**
# gitlab_rails['backup_encryption'] = 'AES256'
###! **Specifies Amazon S3 storage class to use for backups. Valid values
###! include 'STANDARD', 'STANDARD_IA', and 'REDUCED_REDUNDANCY'**
# gitlab_rails['backup_storage_class'] = 'STANDARD'
### Pseudonymizer Settings
# gitlab_rails['pseudonymizer_manifest'] = 'config/pseudonymizer.yml'
# gitlab_rails['pseudonymizer_upload_remote_directory'] = 'gitlab-elt'
# gitlab_rails['pseudonymizer_upload_connection'] = {
# 'provider' => 'AWS',
# 'region' => 'eu-west-1',
# 'aws_access_key_id' => 'AKIAKIAKI',
# 'aws_secret_access_key' => 'secret123'
# }
### For setting up different data storing directory
###! Docs: https://docs.gitlab.com/omnibus/settings/configuration.html#storing-git-data-in-an-alternative-directory
###! **If you want to use a single non-default directory to store git data use a
###! path that doesn't contain symlinks.**
# git_data_dirs({
# "default" => {
# "path" => "/mnt/nfs-01/git-data"
# }
# })
### Gitaly settings
# gitlab_rails['gitaly_token'] = 'secret token'
### For storing GitLab application uploads, eg. LFS objects, build artifacts
###! Docs: https://docs.gitlab.com/ce/development/shared_files.html
# gitlab_rails['shared_path'] = '/var/opt/gitlab/gitlab-rails/shared'
### Wait for file system to be mounted
###! Docs: https://docs.gitlab.com/omnibus/settings/configuration.html#only-start-omnibus-gitlab-services-after-a-given-filesystem-is-mounted
# high_availability['mountpoint'] = ["/var/opt/gitlab/git-data", "/var/opt/gitlab/gitlab-rails/shared"]
### GitLab Shell settings for GitLab
# gitlab_rails['gitlab_shell_ssh_port'] = 22
# gitlab_rails['gitlab_shell_git_timeout'] = 800
### Extra customization
# gitlab_rails['extra_google_analytics_id'] = '_your_tracking_id'
# gitlab_rails['extra_piwik_url'] = '_your_piwik_url'
# gitlab_rails['extra_piwik_site_id'] = '_your_piwik_site_id'
##! Docs: https://docs.gitlab.com/omnibus/settings/environment-variables.html
# gitlab_rails['env'] = {
# 'BUNDLE_GEMFILE' => "/opt/gitlab/embedded/service/gitlab-rails/Gemfile",
# 'PATH' => "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/bin:/usr/bin"
# }
# gitlab_rails['rack_attack_git_basic_auth'] = {
# 'enabled' => false,
# 'ip_whitelist' => ["127.0.0.1"],
# 'maxretry' => 10,
# 'findtime' => 60,
# 'bantime' => 3600
# }
# gitlab_rails['rack_attack_protected_paths'] = [
# '/users/password',
# '/users/sign_in',
# '/api/#{API::API.version}/session.json',
# '/api/#{API::API.version}/session',
# '/users',
# '/users/confirmation',
# '/unsubscribes/',
# '/import/github/personal_access_token'
# ]
###! **We do not recommend changing these directories.**
# gitlab_rails['dir'] = "/var/opt/gitlab/gitlab-rails"
# gitlab_rails['log_directory'] = "/var/log/gitlab/gitlab-rails"
### GitLab application settings
# gitlab_rails['uploads_directory'] = "/var/opt/gitlab/gitlab-rails/uploads"
# gitlab_rails['rate_limit_requests_per_period'] = 10
# gitlab_rails['rate_limit_period'] = 60
#### Change the initial default admin password and shared runner registration tokens.
####! **Only applicable on initial setup, changing these settings after database
####! is created and seeded won't yield any change.**
# gitlab_rails['initial_root_password'] = "password"
# gitlab_rails['initial_shared_runners_registration_token'] = "token"
#### Enable or disable automatic database migrations
# gitlab_rails['auto_migrate'] = true
#### This is an advanced feature used by large GitLab deployments where loading
#### the whole Rails env takes a lot of time.
# gitlab_rails['rake_cache_clear'] = true
### GitLab database settings
###! Docs: https://docs.gitlab.com/omnibus/settings/database.html
###! **Only needed if you use an external database.**
# gitlab_rails['db_adapter'] = "postgresql"
# gitlab_rails['db_encoding'] = "unicode"
# gitlab_rails['db_collation'] = nil
# gitlab_rails['db_database'] = "gitlabhq_production"
# gitlab_rails['db_pool'] = 10
# gitlab_rails['db_username'] = "gitlab"
# gitlab_rails['db_password'] = nil
# gitlab_rails['db_host'] = nil
# gitlab_rails['db_port'] = 5432
# gitlab_rails['db_socket'] = nil
# gitlab_rails['db_sslmode'] = nil
# gitlab_rails['db_sslcompression'] = 0
# gitlab_rails['db_sslrootcert'] = nil
# gitlab_rails['db_prepared_statements'] = false
# gitlab_rails['db_statements_limit'] = 1000
### GitLab Redis settings
###! Connect to your own Redis instance
###! Docs: https://docs.gitlab.com/omnibus/settings/redis.html
#### Redis TCP connection
# gitlab_rails['redis_host'] = "127.0.0.1"
# gitlab_rails['redis_port'] = 6379
# gitlab_rails['redis_ssl'] = false
# gitlab_rails['redis_password'] = nil
# gitlab_rails['redis_database'] = 0
#### Redis local UNIX socket (will be disabled if TCP method is used)
# gitlab_rails['redis_socket'] = "/var/opt/gitlab/redis/redis.socket"
#### Sentinel support
####! To have Sentinel working, you must enable Redis TCP connection support
####! above and define a few Sentinel hosts below (to get a reliable setup
####! at least 3 hosts).
####! **You don't need to list every sentinel host, but the ones not listed will
####! not be used in a fail-over situation to query for the new master.**
# gitlab_rails['redis_sentinels'] = [
# {'host' => '127.0.0.1', 'port' => 26379},
# ]
#### Separate instances support
###! Docs: https://docs.gitlab.com/omnibus/settings/redis.html#running-with-multiple-redis-instances
# gitlab_rails['redis_cache_instance'] = nil
# gitlab_rails['redis_cache_sentinels'] = nil
# gitlab_rails['redis_queues_instance'] = nil
# gitlab_rails['redis_queues_sentinels'] = nil
# gitlab_rails['redis_shared_state_instance'] = nil
# gitlab_rails['redis_shared_sentinels'] = nil
### GitLab email server settings
###! Docs: https://docs.gitlab.com/omnibus/settings/smtp.html
###! **Use smtp instead of sendmail/postfix.**
# gitlab_rails['smtp_enable'] = true
# gitlab_rails['smtp_address'] = "smtp.server"
# gitlab_rails['smtp_port'] = 465
# gitlab_rails['smtp_user_name'] = "smtp user"
# gitlab_rails['smtp_password'] = "smtp password"
# gitlab_rails['smtp_domain'] = "example.com"
# gitlab_rails['smtp_authentication'] = "login"
# gitlab_rails['smtp_enable_starttls_auto'] = true
# gitlab_rails['smtp_tls'] = false
###! **Can be: 'none', 'peer', 'client_once', 'fail_if_no_peer_cert'**
###! Docs: http://api.rubyonrails.org/classes/ActionMailer/Base.html
# gitlab_rails['smtp_openssl_verify_mode'] = 'none'
# gitlab_rails['smtp_ca_path'] = "/etc/ssl/certs"
# gitlab_rails['smtp_ca_file'] = "/etc/ssl/certs/ca-certificates.crt"
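###! Example (illustrative, not part of the upstream defaults): a typical
###! STARTTLS setup on port 587. The host, user, and password below are
###! placeholders you must replace with your provider's values.
# gitlab_rails['smtp_enable'] = true
# gitlab_rails['smtp_address'] = "smtp.mailprovider.example"
# gitlab_rails['smtp_port'] = 587
# gitlab_rails['smtp_user_name'] = "gitlab@example.com"
# gitlab_rails['smtp_password'] = "app-specific-password"
# gitlab_rails['smtp_domain'] = "example.com"
# gitlab_rails['smtp_authentication'] = "login"
# gitlab_rails['smtp_enable_starttls_auto'] = true
# gitlab_rails['smtp_tls'] = false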
################################################################################
## Container Registry settings
##! Docs: https://docs.gitlab.com/ce/administration/container_registry.html
################################################################################
# registry_external_url 'https://registry.gitlab.example.com'
### Settings used by GitLab application
# gitlab_rails['registry_enabled'] = true
# gitlab_rails['registry_host'] = "registry.gitlab.example.com"
# gitlab_rails['registry_port'] = "5005"
# gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
###! **Do not change the following 3 settings unless you know what you are
###! doing**
# gitlab_rails['registry_api_url'] = "http://localhost:5000"
# gitlab_rails['registry_key_path'] = "/var/opt/gitlab/gitlab-rails/certificate.key"
# gitlab_rails['registry_issuer'] = "omnibus-gitlab-issuer"
### Settings used by Registry application
# registry['enable'] = true
# registry['username'] = "registry"
# registry['group'] = "registry"
# registry['uid'] = nil
# registry['gid'] = nil
# registry['dir'] = "/var/opt/gitlab/registry"
# registry['registry_http_addr'] = "localhost:5000"
# registry['debug_addr'] = "localhost:5001"
# registry['log_directory'] = "/var/log/gitlab/registry"
# registry['env_directory'] = "/opt/gitlab/etc/registry/env"
# registry['env'] = {
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
# registry['log_level'] = "info"
# registry['log_formatter'] = "text"
# registry['rootcertbundle'] = "/var/opt/gitlab/registry/certificate.crt"
# registry['health_storagedriver_enabled'] = true
# registry['storage_delete_enabled'] = true
### Registry backend storage
###! Docs: https://docs.gitlab.com/ce/administration/container_registry.html#container-registry-storage-driver
# registry['storage'] = {
# 's3' => {
# 'accesskey' => 'AKIAKIAKI',
# 'secretkey' => 'secret123',
# 'bucket' => 'gitlab-registry-bucket-AKIAKIAKI'
# }
# }
### Registry notifications endpoints
# registry['notifications'] = [
# {
# 'name' => 'test_endpoint',
# 'url' => 'https://gitlab.example.com/notify2',
# 'timeout' => '500ms',
# 'threshold' => 5,
# 'backoff' => '1s',
# 'headers' => {
# "Authorization" => ["AUTHORIZATION_EXAMPLE_TOKEN"]
# }
# }
# ]
### Default registry notifications
# registry['default_notifications_timeout'] = "500ms"
# registry['default_notifications_threshold'] = 5
# registry['default_notifications_backoff'] = "1s"
# registry['default_notifications_headers'] = {}
################################################################################
## GitLab Workhorse
##! Docs: https://gitlab.com/gitlab-org/gitlab-workhorse/blob/master/README.md
################################################################################
# gitlab_workhorse['enable'] = true
# gitlab_workhorse['ha'] = false
# gitlab_workhorse['listen_network'] = "unix"
# gitlab_workhorse['listen_umask'] = 000
# gitlab_workhorse['listen_addr'] = "/var/opt/gitlab/gitlab-workhorse/socket"
# gitlab_workhorse['auth_backend'] = "http://localhost:8080"
##! the empty string is the default in gitlab-workhorse option parser
# gitlab_workhorse['auth_socket'] = "''"
##! put an empty string on the command line
# gitlab_workhorse['pprof_listen_addr'] = "''"
# gitlab_workhorse['prometheus_listen_addr'] = "localhost:9229"
# gitlab_workhorse['dir'] = "/var/opt/gitlab/gitlab-workhorse"
# gitlab_workhorse['log_directory'] = "/var/log/gitlab/gitlab-workhorse"
# gitlab_workhorse['proxy_headers_timeout'] = "1m0s"
##! limit number of concurrent API requests, defaults to 0 which is unlimited
# gitlab_workhorse['api_limit'] = 0
##! limit number of API requests allowed to be queued, defaults to 0 which
##! disables queuing
# gitlab_workhorse['api_queue_limit'] = 0
##! duration after which we timeout requests if they sit too long in the queue
# gitlab_workhorse['api_queue_duration'] = "30s"
##! Long polling duration for job requesting for runners
# gitlab_workhorse['api_ci_long_polling_duration'] = "60s"
##! Log format: default is text, can also be json or none.
# gitlab_workhorse['log_format'] = "json"
# gitlab_workhorse['env_directory'] = "/opt/gitlab/etc/gitlab-workhorse/env"
# gitlab_workhorse['env'] = {
# 'PATH' => "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/bin:/usr/bin",
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
################################################################################
## GitLab User Settings
##! Modify default git user.
##! Docs: https://docs.gitlab.com/omnibus/settings/configuration.html#changing-the-name-of-the-git-user-group
################################################################################
# user['username'] = "git"
# user['group'] = "git"
# user['uid'] = nil
# user['gid'] = nil
##! The shell for the git user
# user['shell'] = "/bin/sh"
##! The home directory for the git user
# user['home'] = "/var/opt/gitlab"
# user['git_user_name'] = "GitLab"
# user['git_user_email'] = "gitlab@#{node['fqdn']}"
################################################################################
## GitLab Unicorn
##! Tweak unicorn settings.
##! Docs: https://docs.gitlab.com/omnibus/settings/unicorn.html
################################################################################
# unicorn['enable'] = true
# unicorn['worker_timeout'] = 60
###! Minimum worker_processes is 2 at this moment
###! See https://gitlab.com/gitlab-org/gitlab-ce/issues/18771
# unicorn['worker_processes'] = 2
### Advanced settings
# unicorn['listen'] = 'localhost'
# unicorn['port'] = 8080
# unicorn['socket'] = '/var/opt/gitlab/gitlab-rails/sockets/gitlab.socket'
# unicorn['pidfile'] = '/opt/gitlab/var/unicorn/unicorn.pid'
# unicorn['tcp_nopush'] = true
# unicorn['backlog_socket'] = 1024
###! **Make sure somaxconn is equal to or higher than backlog_socket**
# unicorn['somaxconn'] = 1024
###! **We do not recommend changing this setting**
# unicorn['log_directory'] = "/var/log/gitlab/unicorn"
### **Only change these settings if you understand well what they mean**
###! Docs: https://about.gitlab.com/2015/06/05/how-gitlab-uses-unicorn-and-unicorn-worker-killer/
###! https://github.com/kzk/unicorn-worker-killer
# unicorn['worker_memory_limit_min'] = "400 * 1 << 20"
# unicorn['worker_memory_limit_max'] = "650 * 1 << 20"
################################################################################
## GitLab Puma
##! Tweak puma settings. You should only use Unicorn or Puma, not both.
##! Docs: https://docs.gitlab.com/omnibus/settings/puma.html
################################################################################
# puma['enable'] = false
# puma['ha'] = false
# puma['worker_timeout'] = 60
# puma['worker_processes'] = 2
# puma['min_threads'] = 1
# puma['max_threads'] = 16
### Advanced settings
# puma['listen'] = '127.0.0.1'
# puma['port'] = 8080
# puma['socket'] = '/var/opt/gitlab/gitlab-rails/sockets/gitlab.socket'
# puma['pidfile'] = '/opt/gitlab/var/puma/puma.pid'
# puma['state_path'] = '/opt/gitlab/var/puma/puma.state'
###! **We do not recommend changing this setting**
# puma['log_directory'] = "/var/log/gitlab/puma"
### **Only change these settings if you understand well what they mean**
###! Docs: https://github.com/schneems/puma_worker_killer
# puma['per_worker_max_memory_mb'] = 650
################################################################################
## GitLab Sidekiq
################################################################################
# sidekiq['log_directory'] = "/var/log/gitlab/sidekiq"
# sidekiq['log_format'] = "default"
# sidekiq['shutdown_timeout'] = 4
# sidekiq['concurrency'] = 25
# sidekiq['metrics_enabled'] = true
# sidekiq['listen_address'] = "localhost"
# sidekiq['listen_port'] = 8082
################################################################################
## gitlab-shell
################################################################################
# gitlab_shell['audit_usernames'] = false
# gitlab_shell['log_level'] = 'INFO'
# gitlab_shell['log_format'] = 'json'
# gitlab_shell['http_settings'] = { user: 'username', password: 'password', ca_file: '/etc/ssl/cert.pem', ca_path: '/etc/pki/tls/certs', self_signed_cert: false}
# gitlab_shell['log_directory'] = "/var/log/gitlab/gitlab-shell/"
# gitlab_shell['custom_hooks_dir'] = "/opt/gitlab/embedded/service/gitlab-shell/hooks"
# gitlab_shell['auth_file'] = "/var/opt/gitlab/.ssh/authorized_keys"
### Git trace log file.
###! If set, git commands receive GIT_TRACE* environment variables
###! Docs: https://git-scm.com/book/es/v2/Git-Internals-Environment-Variables#Debugging
###! An absolute path starting with / – the trace output will be appended to
###! that file. It needs to exist so we can check permissions and avoid
###! throwing warnings to the users.
# gitlab_shell['git_trace_log_file'] = "/var/log/gitlab/gitlab-shell/gitlab-shell-git-trace.log"
##! **We do not recommend changing this directory.**
# gitlab_shell['dir'] = "/var/opt/gitlab/gitlab-shell"
################################################################################
## GitLab PostgreSQL
################################################################################
###! Changing any of these settings requires a restart of postgresql.
###! By default, reconfigure reloads postgresql if it is running. If you
###! change any of these settings, be sure to run `gitlab-ctl restart postgresql`
###! after reconfigure in order for the changes to take effect.
# postgresql['enable'] = true
# postgresql['listen_address'] = nil
# postgresql['port'] = 5432
# postgresql['data_dir'] = "/var/opt/gitlab/postgresql/data"
##! **Recommended value is 1/4 of total RAM, up to 14GB.**
# postgresql['shared_buffers'] = "256MB"
### Advanced settings
# postgresql['ha'] = false
# postgresql['dir'] = "/var/opt/gitlab/postgresql"
# postgresql['log_directory'] = "/var/log/gitlab/postgresql"
# postgresql['username'] = "gitlab-psql"
# postgresql['group'] = "gitlab-psql"
##! `SQL_USER_PASSWORD_HASH` can be generated using the command `gitlab-ctl pg-password-md5 gitlab`
# postgresql['sql_user_password'] = 'SQL_USER_PASSWORD_HASH'
# postgresql['uid'] = nil
# postgresql['gid'] = nil
# postgresql['shell'] = "/bin/sh"
# postgresql['home'] = "/var/opt/gitlab/postgresql"
# postgresql['user_path'] = "/opt/gitlab/embedded/bin:/opt/gitlab/bin:$PATH"
# postgresql['sql_user'] = "gitlab"
# postgresql['max_connections'] = 200
# postgresql['md5_auth_cidr_addresses'] = []
# postgresql['trust_auth_cidr_addresses'] = []
# postgresql['wal_buffers'] = "-1"
# postgresql['autovacuum_max_workers'] = "3"
# postgresql['autovacuum_freeze_max_age'] = "200000000"
# postgresql['log_statement'] = nil
# postgresql['track_activity_query_size'] = "1024"
# postgresql['shared_preload_libraries'] = nil
# postgresql['dynamic_shared_memory_type'] = nil
# postgresql['hot_standby'] = "off"
### SSL settings
# See https://www.postgresql.org/docs/9.6/static/runtime-config-connection.html#GUC-SSL-CERT-FILE for more details
# postgresql['ssl'] = 'on'
# postgresql['ssl_ciphers'] = 'HIGH:MEDIUM:+3DES:!aNULL:!SSLv3:!TLSv1'
# postgresql['ssl_cert_file'] = 'server.crt'
# postgresql['ssl_key_file'] = 'server.key'
# postgresql['ssl_ca_file'] = '/opt/gitlab/embedded/ssl/certs/cacert.pem'
# postgresql['ssl_crl_file'] = nil
### Replication settings
###! Note, some replication settings do not require a full restart. They are documented below.
# postgresql['wal_level'] = "hot_standby"
# postgresql['max_wal_senders'] = 5
# postgresql['max_replication_slots'] = 0
# postgresql['max_locks_per_transaction'] = 128
# Backup/Archive settings
# postgresql['archive_mode'] = "off"
###! Changing any of these settings only requires a reload of postgresql. You do not need to
###! restart postgresql if you change any of these and run reconfigure.
# postgresql['work_mem'] = "16MB"
# postgresql['maintenance_work_mem'] = "16MB"
# postgresql['checkpoint_segments'] = 10
# postgresql['checkpoint_timeout'] = "5min"
# postgresql['checkpoint_completion_target'] = 0.9
# postgresql['effective_io_concurrency'] = 1
# postgresql['checkpoint_warning'] = "30s"
# postgresql['effective_cache_size'] = "1MB"
# postgresql['shmmax'] = 17179869184 # or 4294967295
# postgresql['shmall'] = 4194304 # or 1048575
# postgresql['autovacuum'] = "on"
# postgresql['log_autovacuum_min_duration'] = "-1"
# postgresql['autovacuum_naptime'] = "1min"
# postgresql['autovacuum_vacuum_threshold'] = "50"
# postgresql['autovacuum_analyze_threshold'] = "50"
# postgresql['autovacuum_vacuum_scale_factor'] = "0.02"
# postgresql['autovacuum_analyze_scale_factor'] = "0.01"
# postgresql['autovacuum_vacuum_cost_delay'] = "20ms"
# postgresql['autovacuum_vacuum_cost_limit'] = "-1"
# postgresql['statement_timeout'] = "60000"
# postgresql['idle_in_transaction_session_timeout'] = "60000"
# postgresql['log_line_prefix'] = "%a"
# postgresql['max_worker_processes'] = 8
# postgresql['max_parallel_workers_per_gather'] = 0
# postgresql['log_lock_waits'] = 1
# postgresql['deadlock_timeout'] = '5s'
# postgresql['track_io_timing'] = 0
# postgresql['default_statistics_target'] = 1000
### Available in PostgreSQL 9.6 and later
# postgresql['min_wal_size'] = 80MB
# postgresql['max_wal_size'] = 1GB
# Backup/Archive settings
# postgresql['archive_command'] = nil
# postgresql['archive_timeout'] = "0"
### Replication settings
# postgresql['sql_replication_user'] = "gitlab_replicator"
# postgresql['sql_replication_password'] = "md5 hash of postgresql password" # You can generate with `gitlab-ctl pg-password-md5 <dbuser>`
# postgresql['wal_keep_segments'] = 10
# postgresql['max_standby_archive_delay'] = "30s"
# postgresql['max_standby_streaming_delay'] = "30s"
# postgresql['synchronous_commit'] = on
# postgresql['synchronous_standby_names'] = ''
# postgresql['hot_standby_feedback'] = 'off'
# postgresql['random_page_cost'] = 2.0
# postgresql['log_temp_files'] = -1
# postgresql['log_checkpoints'] = 'off'
# To add custom entries to pg_hba.conf, use the following:
# postgresql['custom_pg_hba_entries'] = {
# APPLICATION: [ # APPLICATION should identify what the settings are used for
# {
# type: example,
# database: example,
# user: example,
# cidr: example,
# method: example,
# option: example
# }
# ]
# }
# See https://www.postgresql.org/docs/9.6/static/auth-pg-hba-conf.html for an explanation
# of the values
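# For instance (illustrative values only), allowing md5 authentication for a
# hypothetical "monitoring" application from a private subnet could look like:
# postgresql['custom_pg_hba_entries'] = {
#   monitoring: [
#     {
#       type: 'host',
#       database: 'gitlabhq_production',
#       user: 'monitoring',
#       cidr: '10.0.0.0/24',
#       method: 'md5'
#     }
#   ]
# }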
################################################################################
## GitLab Redis
##! **Can be disabled if you are using your own Redis instance.**
##! Docs: https://docs.gitlab.com/omnibus/settings/redis.html
################################################################################
# redis['enable'] = true
# redis['ha'] = false
# redis['hz'] = 10
# redis['dir'] = "/var/opt/gitlab/redis"
# redis['log_directory'] = "/var/log/gitlab/redis"
# redis['username'] = "gitlab-redis"
# redis['group'] = "gitlab-redis"
# redis['maxclients'] = "10000"
# redis['maxmemory'] = "0"
# redis['maxmemory_policy'] = "noeviction"
# redis['maxmemory_samples'] = "5"
# redis['tcp_backlog'] = 511
# redis['tcp_timeout'] = "60"
# redis['tcp_keepalive'] = "300"
# redis['uid'] = nil
# redis['gid'] = nil
###! **To run only the Redis service on this machine, uncomment
###! one of the lines below (choose the master or slave instance type).**
###! Docs: https://docs.gitlab.com/omnibus/settings/redis.html
###! https://docs.gitlab.com/ce/administration/high_availability/redis.html
# redis_master_role['enable'] = true
# redis_slave_role['enable'] = true
### Redis TCP support (will disable UNIX socket transport)
# redis['bind'] = '0.0.0.0' # or specify an IP to bind to a single one
# redis['port'] = 6379
# redis['password'] = 'redis-password-goes-here'
### Redis Sentinel support
###! **You need master-slave Redis replication to be able to fail over**
###! **Please read the documentation before enabling it to understand the
###! caveats:**
###! Docs: https://docs.gitlab.com/ce/administration/high_availability/redis.html
### Replication support
#### Slave Redis instance
# redis['master'] = false # by default this is true
#### Slave and Sentinel shared configuration
####! **Both need to point to the master Redis instance to get replication and
####! heartbeat monitoring**
# redis['master_name'] = 'gitlab-redis'
# redis['master_ip'] = nil
# redis['master_port'] = 6379
#### Support to run redis slaves in a Docker or NAT environment
####! Docs: https://redis.io/topics/replication#configuring-replication-in-docker-and-nat
# redis['announce_ip'] = nil
# redis['announce_port'] = nil
####! **Master password should have the same value defined in
####! redis['password'] to enable the instance to transition to/from
####! master/slave in a failover event.**
# redis['master_password'] = 'redis-password-goes-here'
####! Increase these values when your slaves can't catch up with the master
# redis['client_output_buffer_limit_normal'] = '0 0 0'
# redis['client_output_buffer_limit_slave'] = '256mb 64mb 60'
# redis['client_output_buffer_limit_pubsub'] = '32mb 8mb 60'
#####! Redis snapshotting frequency
#####! Set to [] to disable
#####! Set to [''] to clear previously set values
# redis['save'] = [ '900 1', '300 10', '60 10000' ]
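####! Example (illustrative): a minimal slave instance pointing at a
####! hypothetical master at 10.0.0.1; the IP and passwords are placeholders
####! and must match the values configured on the master.
# redis['master'] = false
# redis['master_name'] = 'gitlab-redis'
# redis['master_ip'] = '10.0.0.1'
# redis['master_port'] = 6379
# redis['password'] = 'redis-password-goes-here'
# redis['master_password'] = 'redis-password-goes-here'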
################################################################################
## GitLab Web server
##! Docs: https://docs.gitlab.com/omnibus/settings/nginx.html#using-a-non-bundled-web-server
################################################################################
##! When bundled NGINX is disabled, we need to add the external webserver user
##! to the GitLab webserver group.
# web_server['external_users'] = []
# web_server['username'] = 'gitlab-www'
# web_server['group'] = 'gitlab-www'
# web_server['uid'] = nil
# web_server['gid'] = nil
# web_server['shell'] = '/bin/false'
# web_server['home'] = '/var/opt/gitlab/nginx'
################################################################################
## GitLab NGINX
##! Docs: https://docs.gitlab.com/omnibus/settings/nginx.html
################################################################################
# nginx['enable'] = true
# nginx['client_max_body_size'] = '250m'
# nginx['redirect_http_to_https'] = false
# nginx['redirect_http_to_https_port'] = 80
##! Most root CAs are included by default
# nginx['ssl_client_certificate'] = "/etc/gitlab/ssl/ca.crt"
##! enable/disable 2-way SSL client authentication
# nginx['ssl_verify_client'] = "off"
##! if ssl_verify_client on, verification depth in the client certificates chain
# nginx['ssl_verify_depth'] = "1"
# nginx['ssl_certificate'] = "/etc/gitlab/ssl/#{node['fqdn']}.crt"
# nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/#{node['fqdn']}.key"
# nginx['ssl_ciphers'] = "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256"
# nginx['ssl_prefer_server_ciphers'] = "on"
##! **Recommended by: https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
##! https://cipherli.st/**
# nginx['ssl_protocols'] = "TLSv1.1 TLSv1.2"
##! **Recommended in: https://nginx.org/en/docs/http/ngx_http_ssl_module.html**
# nginx['ssl_session_cache'] = "builtin:1000 shared:SSL:10m"
##! **Default according to https://nginx.org/en/docs/http/ngx_http_ssl_module.html**
# nginx['ssl_session_timeout'] = "5m"
# nginx['ssl_dhparam'] = nil # Path to dhparams.pem, eg. /etc/gitlab/ssl/dhparams.pem
# nginx['listen_addresses'] = ['*', '[::]']
##! **Defaults to forcing web browsers to always communicate using only HTTPS**
##! Docs: https://docs.gitlab.com/omnibus/settings/nginx.html#setting-http-strict-transport-security
# nginx['hsts_max_age'] = 31536000
# nginx['hsts_include_subdomains'] = false
##! **Docs: http://nginx.org/en/docs/http/ngx_http_gzip_module.html**
# nginx['gzip_enabled'] = true
##! **Override only if you use a reverse proxy**
##! Docs: https://docs.gitlab.com/omnibus/settings/nginx.html#setting-the-nginx-listen-port
# nginx['listen_port'] = nil
##! **Override only if your reverse proxy internally communicates over HTTP**
##! Docs: https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl
# nginx['listen_https'] = nil
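##! Example (illustrative): when GitLab sits behind an external reverse proxy
##! that terminates TLS (e.g. the Traefik container in this repository), the
##! bundled NGINX can listen for plain HTTP while Rails still generates
##! HTTPS URLs:
# external_url 'https://gitlab.example.com'
# nginx['listen_port'] = 80
# nginx['listen_https'] = false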
# nginx['custom_gitlab_server_config'] = "location ^~ /foo-namespace/bar-project/raw/ {\n deny all;\n}\n"
# nginx['custom_nginx_config'] = "include /etc/nginx/conf.d/example.conf;"
# nginx['proxy_read_timeout'] = 3600
# nginx['proxy_connect_timeout'] = 300
# nginx['proxy_set_headers'] = {
# "Host" => "$http_host_with_default",
# "X-Real-IP" => "$remote_addr",
# "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
# "X-Forwarded-Proto" => "https",
# "X-Forwarded-Ssl" => "on",
# "Upgrade" => "$http_upgrade",
# "Connection" => "$connection_upgrade"
# }
# nginx['proxy_cache_path'] = 'proxy_cache keys_zone=gitlab:10m max_size=1g levels=1:2'
# nginx['proxy_cache'] = 'gitlab'
# nginx['http2_enabled'] = true
# nginx['real_ip_trusted_addresses'] = []
# nginx['real_ip_header'] = nil
# nginx['real_ip_recursive'] = nil
# nginx['custom_error_pages'] = {
# '404' => {
# 'title' => 'Example title',
# 'header' => 'Example header',
# 'message' => 'Example message'
# }
# }
### Advanced settings
# nginx['dir'] = "/var/opt/gitlab/nginx"
# nginx['log_directory'] = "/var/log/gitlab/nginx"
# nginx['worker_processes'] = 4
# nginx['worker_connections'] = 10240
# nginx['log_format'] = '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"'
# nginx['sendfile'] = 'on'
# nginx['tcp_nopush'] = 'on'
# nginx['tcp_nodelay'] = 'on'
# nginx['gzip'] = "on"
# nginx['gzip_http_version'] = "1.0"
# nginx['gzip_comp_level'] = "2"
# nginx['gzip_proxied'] = "any"
# nginx['gzip_types'] = [ "text/plain", "text/css", "application/x-javascript", "text/xml", "application/xml", "application/xml+rss", "text/javascript", "application/json" ]
# nginx['keepalive_timeout'] = 65
# nginx['cache_max_size'] = '5000m'
# nginx['server_names_hash_bucket_size'] = 64
##! These paths have proxy_request_buffering disabled
# nginx['request_buffering_off_path_regex'] = "\.git/git-receive-pack$|\.git/info/refs?service=git-receive-pack$|\.git/gitlab-lfs/objects|\.git/info/lfs/objects/batch$"
### Nginx status
# nginx['status'] = {
# "enable" => true,
# "listen_addresses" => ["127.0.0.1"],
# "fqdn" => "dev.example.com",
# "port" => 9999,
# "vts_enable" => true,
# "options" => {
# "stub_status" => "on", # Turn on stats
# "server_tokens" => "off", # Don't show the version of NGINX
# "access_log" => "off", # Disable logs for stats
# "allow" => "127.0.0.1", # Only allow access from localhost
# "deny" => "all" # Deny access to anyone else
# }
# }
################################################################################
## GitLab Logging
##! Docs: https://docs.gitlab.com/omnibus/settings/logs.html
################################################################################
# logging['svlogd_size'] = 200 * 1024 * 1024 # rotate after 200 MB of log data
# logging['svlogd_num'] = 30 # keep 30 rotated log files
# logging['svlogd_timeout'] = 24 * 60 * 60 # rotate after 24 hours
# logging['svlogd_filter'] = "gzip" # compress logs with gzip
# logging['svlogd_udp'] = nil # transmit log messages via UDP
# logging['svlogd_prefix'] = nil # custom prefix for log messages
# logging['logrotate_frequency'] = "daily" # rotate logs daily
# logging['logrotate_size'] = nil # do not rotate by size by default
# logging['logrotate_rotate'] = 30 # keep 30 rotated logs
# logging['logrotate_compress'] = "compress" # see 'man logrotate'
# logging['logrotate_method'] = "copytruncate" # see 'man logrotate'
# logging['logrotate_postrotate'] = nil # no postrotate command by default
# logging['logrotate_dateformat'] = nil # use date extensions for rotated files rather than numbers e.g. a value of "-%Y-%m-%d" would give rotated files like production.log-2016-03-09.gz
### UDP log forwarding
##! Docs: http://docs.gitlab.com/omnibus/settings/logs.html#udp-log-forwarding
##! remote host to ship log messages to via UDP
# logging['udp_log_shipping_host'] = nil
##! override the hostname used when logs are shipped via UDP;
##! by default the system hostname will be used.
# logging['udp_log_shipping_hostname'] = nil
##! remote port to ship log messages to via UDP
# logging['udp_log_shipping_port'] = 514
################################################################################
## Logrotate
##! Docs: https://docs.gitlab.com/omnibus/settings/logs.html#logrotate
##! You can disable the built-in logrotate feature.
################################################################################
# logrotate['enable'] = true
################################################################################
## Users and groups accounts
##! Disable management of users and groups accounts.
##! **Set only if creating accounts manually**
##! Docs: https://docs.gitlab.com/omnibus/settings/configuration.html#disable-user-and-group-account-management
################################################################################
# manage_accounts['enable'] = false
################################################################################
## Storage directories
##! Disable managing storage directories
##! Docs: https://docs.gitlab.com/omnibus/settings/configuration.html#disable-storage-directories-management
################################################################################
##! **Set only if the selected directories are created manually**
# manage_storage_directories['enable'] = false
# manage_storage_directories['manage_etc'] = false
################################################################################
## Runtime directory
##! Docs: https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-runtime-directory
################################################################################
# runtime_dir '/run'
################################################################################
## Git
##! Advanced setting for configuring git system settings for omnibus-gitlab
##! internal git
################################################################################
##! For multiple options under one header, use an array of comma-separated
##! values, e.g.:
##! { "receive" => ["fsckObjects = true"], "alias" => ["st = status", "co = checkout"] }
# omnibus_gitconfig['system'] = {
# "pack" => ["threads = 1"],
# "receive" => ["fsckObjects = true", "advertisePushOptions = true"],
# "repack" => ["writeBitmaps = true"],
# "transfer" => ["hideRefs=^refs/tmp/", "hideRefs=^refs/keep-around/", "hideRefs=^refs/remotes/"],
# }
################################################################################
## GitLab Pages
##! Docs: https://docs.gitlab.com/ce/pages/administration.html
################################################################################
##! Define to enable GitLab Pages
# pages_external_url "http://pages.example.com/"
# gitlab_pages['enable'] = false
##! Configure to expose GitLab Pages on an external IP address, serving HTTP
# gitlab_pages['external_http'] = []
##! Configure to expose GitLab Pages on an external IP address, serving HTTPS
# gitlab_pages['external_https'] = []
##! Configure to enable health check endpoint on GitLab Pages
# gitlab_pages['status_uri'] = "/@status"
##! Tune the maximum number of concurrent connections GitLab Pages will handle.
##! This should be in the range 1 - 10000, defaulting to 5000.
# gitlab_pages['max_connections'] = 5000
##! Configure to use JSON structured logging in GitLab Pages
# gitlab_pages['log_format'] = "json"
##! Configure verbose logging for GitLab Pages
# gitlab_pages['log_verbose'] = false
##! Listen for requests forwarded by reverse proxy
# gitlab_pages['listen_proxy'] = "localhost:8090"
# gitlab_pages['redirect_http'] = true
# gitlab_pages['use_http2'] = true
# gitlab_pages['dir'] = "/var/opt/gitlab/gitlab-pages"
# gitlab_pages['log_directory'] = "/var/log/gitlab/gitlab-pages"
# gitlab_pages['artifacts_server'] = true
# gitlab_pages['artifacts_server_url'] = nil # Defaults to external_url + '/api/v4'
# gitlab_pages['artifacts_server_timeout'] = 10
##! Environments that do not support bind-mounting should set this parameter to
##! true. This is incompatible with the artifacts server
# gitlab_pages['inplace_chroot'] = false
##! Prometheus metrics for Pages docs: https://gitlab.com/gitlab-org/gitlab-pages/#enable-prometheus-metrics
# gitlab_pages['metrics_address'] = ":9235"
##! Configure the pages admin API
# gitlab_pages['admin_secret_token'] = 'custom secret'
# gitlab_pages['admin_https_listener'] = '0.0.0.0:5678'
# gitlab_pages['admin_https_cert'] = '/etc/gitlab/pages-admin.crt'
# gitlab_pages['admin_https_key'] = '/etc/gitlab/pages-admin.key'
##! Client side configuration for gitlab-pages admin API, in case pages runs on a different host
# gitlab_rails['pages_admin_address'] = 'pages.gitlab.example.com:5678'
# gitlab_rails['pages_admin_certificate'] = '/etc/gitlab/pages-admin.crt'
##! Pages access control
# gitlab_pages['access_control'] = false
# gitlab_pages['gitlab_id'] = nil # Automatically generated if not present
# gitlab_pages['gitlab_secret'] = nil # Generated if not present
# gitlab_pages['auth_redirect_uri'] = nil # Defaults to projects subdomain of pages_external_url and + '/auth'
# gitlab_pages['auth_server'] = nil # Defaults to external_url
# gitlab_pages['auth_secret'] = nil # Generated if not present
################################################################################
## GitLab Pages NGINX
################################################################################
# All the settings defined in the "GitLab NGINX" section are also available in this "GitLab Pages NGINX" section
# Just replace the key "nginx['some_settings']" with "pages_nginx['some_settings']"
# Below you can find settings that are exclusive to "GitLab Pages NGINX"
# pages_nginx['enable'] = false
# gitlab_rails['pages_path'] = "/var/opt/gitlab/gitlab-rails/shared/pages"
################################################################################
## GitLab CI
##! Docs: https://docs.gitlab.com/ce/ci/quick_start/README.html
################################################################################
# gitlab_ci['gitlab_ci_all_broken_builds'] = true
# gitlab_ci['gitlab_ci_add_pusher'] = true
# gitlab_ci['builds_directory'] = '/var/opt/gitlab/gitlab-ci/builds'
################################################################################
## GitLab Mattermost
##! Docs: https://docs.gitlab.com/omnibus/gitlab-mattermost
################################################################################
# mattermost_external_url 'http://mattermost.example.com'
# mattermost['enable'] = false
# mattermost['username'] = 'mattermost'
# mattermost['group'] = 'mattermost'
# mattermost['uid'] = nil
# mattermost['gid'] = nil
# mattermost['home'] = '/var/opt/gitlab/mattermost'
# mattermost['database_name'] = 'mattermost_production'
# mattermost['env'] = {
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
# mattermost['service_address'] = "127.0.0.1"
# mattermost['service_port'] = "8065"
# mattermost['service_site_url'] = nil
# mattermost['service_allowed_untrusted_internal_connections'] = ""
# mattermost['service_enable_api_team_deletion'] = true
# mattermost['team_site_name'] = "GitLab Mattermost"
# mattermost['sql_driver_name'] = 'mysql'
# mattermost['sql_data_source'] = "mmuser:mostest@tcp(dockerhost:3306)/mattermost_test?charset=utf8mb4,utf8"
# mattermost['log_file_directory'] = '/var/log/gitlab/mattermost/'
# mattermost['gitlab_enable'] = false
# mattermost['gitlab_id'] = "12345656"
# mattermost['gitlab_secret'] = "123456789"
# mattermost['gitlab_scope'] = ""
# mattermost['gitlab_auth_endpoint'] = "http://gitlab.example.com/oauth/authorize"
# mattermost['gitlab_token_endpoint'] = "http://gitlab.example.com/oauth/token"
# mattermost['gitlab_user_api_endpoint'] = "http://gitlab.example.com/api/v4/user"
# mattermost['file_directory'] = "/var/opt/gitlab/mattermost/data"
# mattermost['plugin_directory'] = "/var/opt/gitlab/mattermost/plugins"
# mattermost['plugin_client_directory'] = "/var/opt/gitlab/mattermost/client-plugins"
################################################################################
## Mattermost NGINX
################################################################################
# All the settings defined in the "GitLab NGINX" section are also available in this "Mattermost NGINX" section
# You just have to replace the key "nginx['some_settings']" with "mattermost_nginx['some_settings']"
# Below you can find settings that are exclusive to "Mattermost NGINX"
# mattermost_nginx['enable'] = false
# mattermost_nginx['custom_gitlab_mattermost_server_config'] = "location ^~ /foo-namespace/bar-project/raw/ {\n deny all;\n}\n"
# mattermost_nginx['proxy_set_headers'] = {
# "Host" => "$http_host",
# "X-Real-IP" => "$remote_addr",
# "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
# "X-Frame-Options" => "SAMEORIGIN",
# "X-Forwarded-Proto" => "https",
# "X-Forwarded-Ssl" => "on",
# "Upgrade" => "$http_upgrade",
# "Connection" => "$connection_upgrade"
# }
################################################################################
## Registry NGINX
################################################################################
# All the settings defined in the "GitLab NGINX" section are also available in this "Registry NGINX" section
# You just have to replace the key "nginx['some_settings']" with "registry_nginx['some_settings']"
# Below you can find settings that are exclusive to "Registry NGINX"
# registry_nginx['enable'] = false
# registry_nginx['proxy_set_headers'] = {
# "Host" => "$http_host",
# "X-Real-IP" => "$remote_addr",
# "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
# "X-Forwarded-Proto" => "https",
# "X-Forwarded-Ssl" => "on"
# }
################################################################################
## Prometheus
##! Docs: https://docs.gitlab.com/ce/administration/monitoring/prometheus/
################################################################################
# prometheus['enable'] = true
# prometheus['monitor_kubernetes'] = true
# prometheus['username'] = 'gitlab-prometheus'
# prometheus['group'] = 'gitlab-prometheus'
# prometheus['uid'] = nil
# prometheus['gid'] = nil
# prometheus['shell'] = '/bin/sh'
# prometheus['home'] = '/var/opt/gitlab/prometheus'
# prometheus['log_directory'] = '/var/log/gitlab/prometheus'
# prometheus['rules_files'] = ['/var/opt/gitlab/prometheus/rules/*.rules']
# prometheus['scrape_interval'] = 15
# prometheus['scrape_timeout'] = 15
# prometheus['chunk_encoding_version'] = 2
# prometheus['env_directory'] = '/opt/gitlab/etc/prometheus/env'
# prometheus['env'] = {
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
#
### Custom scrape configs
#
# Prometheus can scrape additional jobs via scrape_configs. The default automatically
# includes all of the exporters supported by the omnibus config.
#
# See: https://prometheus.io/docs/operating/configuration/#<scrape_config>
#
# Example:
#
# prometheus['scrape_configs'] = [
# {
# 'job_name' => 'example',
# 'static_configs' => [
# 'targets' => ['hostname:port'],
# ],
# },
# ]
#
### Prometheus Memory Management
#
# Prometheus needs to be told how much memory to use.
# * This sets the target heap size.
# * This value accounts for approximately 2/3 of the memory used by the server.
# * The recommended memory is 4kb per unique metrics time-series.
# See: https://prometheus.io/docs/operating/storage/#memory-usage
#
# prometheus['target_heap_size'] = (
# # Use 25mb + 2% of total memory for Prometheus memory.
# 26_214_400 + (node['memory']['total'].to_i * 1024 * 0.02 )
# ).to_i
#
# prometheus['flags'] = {
# 'storage.local.path' => "#{node['gitlab']['prometheus']['home']}/data",
# 'storage.local.chunk-encoding-version' => user_config['chunk-encoding-version'],
# 'storage.local.target-heap-size' => node['gitlab']['prometheus']['target-heap-size'],
# 'config.file' => "#{node['gitlab']['prometheus']['home']}/prometheus.yml"
# }
##! Advanced settings. Should be changed only if absolutely needed.
# prometheus['listen_address'] = 'localhost:9090'
################################################################################
## Prometheus Alertmanager
##! Docs: https://docs.gitlab.com/ce/administration/monitoring/prometheus/alertmanager.html
################################################################################
# alertmanager['enable'] = true
# alertmanager['home'] = '/var/opt/gitlab/alertmanager'
# alertmanager['log_directory'] = '/var/log/gitlab/alertmanager'
# alertmanager['admin_email'] = 'admin@example.com'
# alertmanager['flags'] = {
# 'web.listen-address' => "#{node['gitlab']['alertmanager']['listen_address']}",
# 'storage.path' => "#{node['gitlab']['alertmanager']['home']}/data",
# 'config.file' => "#{node['gitlab']['alertmanager']['home']}/alertmanager.yml"
# }
# alertmanager['env_directory'] = '/opt/gitlab/etc/alertmanager/env'
# alertmanager['env'] = {
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
##! Advanced settings. Should be changed only if absolutely needed.
# alertmanager['listen_address'] = 'localhost:9093'
################################################################################
## Prometheus Node Exporter
##! Docs: https://docs.gitlab.com/ce/administration/monitoring/prometheus/node_exporter.html
################################################################################
# node_exporter['enable'] = true
# node_exporter['home'] = '/var/opt/gitlab/node-exporter'
# node_exporter['log_directory'] = '/var/log/gitlab/node-exporter'
# node_exporter['flags'] = {
# 'collector.textfile.directory' => "#{node['gitlab']['node-exporter']['home']}/textfile_collector"
# }
# node_exporter['env_directory'] = '/opt/gitlab/etc/node-exporter/env'
# node_exporter['env'] = {
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
##! Advanced settings. Should be changed only if absolutely needed.
# node_exporter['listen_address'] = 'localhost:9100'
################################################################################
## Prometheus Redis exporter
##! Docs: https://docs.gitlab.com/ce/administration/monitoring/prometheus/redis_exporter.html
################################################################################
# redis_exporter['enable'] = true
# redis_exporter['log_directory'] = '/var/log/gitlab/redis-exporter'
# redis_exporter['flags'] = {
# 'redis.addr' => "unix://#{node['gitlab']['gitlab-rails']['redis_socket']}",
# }
# redis_exporter['env_directory'] = '/opt/gitlab/etc/redis-exporter/env'
# redis_exporter['env'] = {
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
##! Advanced settings. Should be changed only if absolutely needed.
# redis_exporter['listen_address'] = 'localhost:9121'
################################################################################
## Prometheus Postgres exporter
##! Docs: https://docs.gitlab.com/ce/administration/monitoring/prometheus/postgres_exporter.html
################################################################################
# postgres_exporter['enable'] = true
# postgres_exporter['home'] = '/var/opt/gitlab/postgres-exporter'
# postgres_exporter['log_directory'] = '/var/log/gitlab/postgres-exporter'
# postgres_exporter['flags'] = {}
# postgres_exporter['listen_address'] = 'localhost:9187'
# postgres_exporter['env_directory'] = '/opt/gitlab/etc/postgres-exporter/env'
# postgres_exporter['env'] = {
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
################################################################################
## Prometheus PgBouncer exporter (EE only)
##! Docs: https://docs.gitlab.com/ee/administration/monitoring/prometheus/pgbouncer_exporter.html
################################################################################
# pgbouncer_exporter['enable'] = false
# pgbouncer_exporter['log_directory'] = "/var/log/gitlab/pgbouncer-exporter"
# pgbouncer_exporter['listen_address'] = 'localhost:9188'
# pgbouncer_exporter['env_directory'] = '/opt/gitlab/etc/pgbouncer-exporter/env'
# pgbouncer_exporter['env'] = {
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
################################################################################
## Prometheus Gitlab monitor
##! Docs: https://docs.gitlab.com/ce/administration/monitoring/prometheus/gitlab_monitor_exporter.html
################################################################################
# gitlab_monitor['enable'] = true
# gitlab_monitor['log_directory'] = "/var/log/gitlab/gitlab-monitor"
# gitlab_monitor['home'] = "/var/opt/gitlab/gitlab-monitor"
##! Advanced settings. Should be changed only if absolutely needed.
# gitlab_monitor['listen_address'] = 'localhost'
# gitlab_monitor['listen_port'] = '9168'
##! Manage gitlab-monitor sidekiq probes. false by default when Sentinels are
##! found.
# gitlab_monitor['probe_sidekiq'] = true
# To completely disable Prometheus and all of its exporters, set to false
# prometheus_monitoring['enable'] = true
################################################################################
## Gitaly
##! Docs:
################################################################################
# The gitaly['enable'] option exists for the purpose of cluster
# deployments, see https://docs.gitlab.com/ee/administration/gitaly/index.html .
# gitaly['enable'] = true
# gitaly['dir'] = "/var/opt/gitlab/gitaly"
# gitaly['log_directory'] = "/var/log/gitlab/gitaly"
# gitaly['bin_path'] = "/opt/gitlab/embedded/bin/gitaly"
# gitaly['env_directory'] = "/opt/gitlab/etc/gitaly/env"
# gitaly['env'] = {
# 'PATH' => "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/bin:/usr/bin",
# 'HOME' => '/var/opt/gitlab'
# }
# gitaly['socket_path'] = "/var/opt/gitlab/gitaly/gitaly.socket"
# gitaly['listen_addr'] = "localhost:8075"
# gitaly['prometheus_listen_addr'] = "localhost:9236"
# gitaly['logging_level'] = "warn"
# gitaly['logging_format'] = "json"
# gitaly['logging_sentry_dsn'] = "https://<key>:<secret>@sentry.io/<project>"
# gitaly['logging_ruby_sentry_dsn'] = "https://<key>:<secret>@sentry.io/<project>"
# gitaly['prometheus_grpc_latency_buckets'] = "[0.001, 0.005, 0.025, 0.1, 0.5, 1.0, 10.0, 30.0, 60.0, 300.0, 1500.0]"
# gitaly['auth_token'] = '<secret>'
# gitaly['auth_transitioning'] = false # When true, auth is logged to Prometheus but NOT enforced
# gitaly['ruby_max_rss'] = 300000000 # RSS threshold in bytes for triggering a gitaly-ruby restart
# gitaly['ruby_graceful_restart_timeout'] = '10m' # Grace time for a gitaly-ruby process to finish ongoing requests
# gitaly['ruby_restart_delay'] = '5m' # Period of sustained high RSS that needs to be observed before restarting gitaly-ruby
# gitaly['ruby_num_workers'] = 3 # Number of gitaly-ruby worker processes. Minimum 2, default 2.
# gitaly['storage'] = [
# {
# 'name' => 'default',
# 'path' => '/mnt/nfs-01/git-data/repositories'
# },
# {
# 'name' => 'secondary',
# 'path' => '/mnt/nfs-02/git-data/repositories'
# }
# ]
# gitaly['concurrency'] = [
# {
# 'rpc' => "/gitaly.SmartHTTPService/PostReceivePack",
# 'max_per_repo' => 20
# }, {
# 'rpc' => "/gitaly.SSHService/SSHUploadPack",
# 'max_per_repo' => 5
# }
# ]
################################################################################
# Storage check
################################################################################
# storage_check['enable'] = false
# storage_check['target'] = 'unix:///var/opt/gitlab/gitlab-rails/sockets/gitlab.socket'
# storage_check['log_directory'] = '/var/log/gitlab/storage-check'
################################################################################
# Let's Encrypt integration
################################################################################
# letsencrypt['enable'] = nil
# letsencrypt['contact_emails'] = [] # This should be an array of email addresses to add as contacts
# letsencrypt['group'] = 'root'
# letsencrypt['key_size'] = 2048
# letsencrypt['owner'] = 'root'
# letsencrypt['wwwroot'] = '/var/opt/gitlab/nginx/www'
# See http://docs.gitlab.com/omnibus/settings/ssl.html#automatic-renewal for more on these settings
# letsencrypt['auto_renew'] = true
# letsencrypt['auto_renew_hour'] = 0
# letsencrypt['auto_renew_minute'] = nil # Should be a number or cron expression, if specified.
# letsencrypt['auto_renew_day_of_month'] = "*/4"
################################################################################
################################################################################
## Configuration Settings for GitLab EE only ##
################################################################################
################################################################################
################################################################################
## Auxiliary cron jobs applicable to GitLab EE only
################################################################################
#
# gitlab_rails['geo_file_download_dispatch_worker_cron'] = "*/10 * * * *"
# gitlab_rails['geo_repository_sync_worker_cron'] = "*/5 * * * *"
# gitlab_rails['geo_prune_event_log_worker_cron'] = "*/5 * * * *"
# gitlab_rails['geo_repository_verification_primary_batch_worker_cron'] = "*/5 * * * *"
# gitlab_rails['geo_repository_verification_secondary_scheduler_worker_cron'] = "*/5 * * * *"
# gitlab_rails['geo_migrated_local_files_clean_up_worker_cron'] = "15 */6 * * *"
# gitlab_rails['ldap_sync_worker_cron'] = "30 1 * * *"
# gitlab_rails['ldap_group_sync_worker_cron'] = "0 * * * *"
# gitlab_rails['historical_data_worker_cron'] = "0 12 * * *"
# gitlab_rails['pseudonymizer_worker_cron'] = "0 23 * * *"
################################################################################
## Kerberos (EE Only)
##! Docs: https://docs.gitlab.com/ee/integration/kerberos.html#http-git-access
################################################################################
# gitlab_rails['kerberos_enabled'] = true
# gitlab_rails['kerberos_keytab'] = /etc/http.keytab
# gitlab_rails['kerberos_service_principal_name'] = HTTP/gitlab.example.com@EXAMPLE.COM
# gitlab_rails['kerberos_use_dedicated_port'] = true
# gitlab_rails['kerberos_port'] = 8443
# gitlab_rails['kerberos_https'] = true
################################################################################
## Package repository (EE Only)
##! Docs: https://docs.gitlab.com/ee/administration/maven_packages.md
################################################################################
# gitlab_rails['packages_enabled'] = true
# gitlab_rails['packages_storage_path'] = "/var/opt/gitlab/gitlab-rails/shared/packages"
# gitlab_rails['packages_object_store_enabled'] = false
# gitlab_rails['packages_object_store_direct_upload'] = false
# gitlab_rails['packages_object_store_background_upload'] = true
# gitlab_rails['packages_object_store_proxy_download'] = false
# gitlab_rails['packages_object_store_remote_directory'] = "packages"
# gitlab_rails['packages_object_store_connection'] = {
# 'provider' => 'AWS',
# 'region' => 'eu-west-1',
# 'aws_access_key_id' => 'AWS_ACCESS_KEY_ID',
# 'aws_secret_access_key' => 'AWS_SECRET_ACCESS_KEY',
# # # The below options configure an S3 compatible host instead of AWS
# # 'host' => 's3.amazonaws.com',
# # 'aws_signature_version' => 4, # For creation of signed URLs. Set to 2 if provider does not support v4.
# # 'endpoint' => 'https://s3.amazonaws.com', # default: nil - Useful for S3 compliant services such as DigitalOcean Spaces
# # 'path_style' => false # Use 'host/bucket_name/object' instead of 'bucket_name.host/object'
# }
################################################################################
## GitLab Sentinel (EE Only)
##! Docs: http://docs.gitlab.com/ce/administration/high_availability/redis.html#high-availability-with-sentinel
################################################################################
##! **Make sure you configured all redis['master_*'] keys above before
##! continuing.**
##! To enable Sentinel and disable all other services in this machine,
##! uncomment the line below (if you've enabled Redis role, it will keep it).
##! Docs: https://docs.gitlab.com/ce/administration/high_availability/redis.html
# redis_sentinel_role['enable'] = true
# sentinel['enable'] = true
##! Bind to all interfaces, uncomment to specify an IP and bind to a single one
# sentinel['bind'] = '0.0.0.0'
##! Uncomment to change default port
# sentinel['port'] = 26379
#### Support to run sentinels in a Docker or NAT environment
#####! Docs: https://redis.io/topics/sentinel#sentinel-docker-nat-and-possible-issues
# In the standard case, Sentinel runs on the same network service as Redis, so the same IP will be announced for both Redis and Sentinel
# Only define these values if Sentinel needs to announce a different IP service than Redis
# sentinel['announce_ip'] = nil # If not defined, its value will be taken from redis['announce_ip'] or nil if not present
# sentinel['announce_port'] = nil # If not defined, its value will be taken from sentinel['port'] or nil if redis['announce_ip'] not present
##! Quorum must reflect the number of voting Sentinels it takes to start a
##! failover.
##! **Value must NOT be greater than the number of Sentinels.**
##! The quorum can be used to tune Sentinel in two ways:
##! 1. If the quorum is set to a value smaller than the majority of Sentinels
##! we deploy, we are basically making Sentinel more sensitive to master
##! failures, triggering a failover as soon as even just a minority of
##! Sentinels is no longer able to talk with the master.
##! 2. If a quorum is set to a value greater than the majority of Sentinels, we
##! are making Sentinel able to failover only when there are a very large
##! number (larger than majority) of well connected Sentinels which agree
##! about the master being down.
# sentinel['quorum'] = 1
### Consider an unresponsive server down after x ms.
# sentinel['down_after_milliseconds'] = 10000
### Specifies the failover timeout in milliseconds.
##! It is used in many ways:
##!
##! - The time needed to re-start a failover after a previous failover was
##! already tried against the same master by a given Sentinel, is two
##! times the failover timeout.
##!
##! - The time needed for a slave replicating to a wrong master according
##! to a Sentinel current configuration, to be forced to replicate
##! with the right master, is exactly the failover timeout (counting since
##! the moment a Sentinel detected the misconfiguration).
##!
##! - The time needed to cancel a failover that is already in progress but
##! did not produce any configuration change (SLAVEOF NO ONE yet not
##! acknowledged by the promoted slave).
##!
##! - The maximum time a failover in progress waits for all the slaves to be
##! reconfigured as slaves of the new master. However even after this time
##! the slaves will be reconfigured by the Sentinels anyway, but not with
##! the exact parallel-syncs progression as specified.
# sentinel['failover_timeout'] = 60000
################################################################################
## GitLab Sidekiq Cluster (EE only)
################################################################################
##! GitLab Enterprise Edition allows one to start an extra set of Sidekiq processes
##! besides the default one. These processes can be used to consume a dedicated set
##! of queues. This can be used to ensure certain queues always have dedicated
##! workers, no matter the amount of jobs that need to be processed.
# sidekiq_cluster['enable'] = false
# sidekiq_cluster['ha'] = false
# sidekiq_cluster['log_directory'] = "/var/log/gitlab/sidekiq-cluster"
# sidekiq_cluster['interval'] = 5 # The number of seconds to wait between worker checks
# sidekiq_cluster['max_concurrency'] = 50 # The maximum number of threads each Sidekiq process should run
##! Each entry in the queue_groups array denotes a group of queues that have to be processed by a
##! Sidekiq process. Multiple queues can be processed by the same process by
##! separating them with a comma within the group entry
# sidekiq_cluster['queue_groups'] = [
# "process_commit,post_receive",
# "gitlab_shell"
# ]
#
##! If negate is enabled then sidekiq-cluster will process all the queues that
##! don't match those in queue_groups.
# sidekiq_cluster['negate'] = false
################################################################################
## Additional Database Settings (EE only)
##! Docs: https://docs.gitlab.com/ee/administration/database_load_balancing.html
################################################################################
# gitlab_rails['db_load_balancing'] = { 'hosts' => ['secondary1.example.com'] }
################################################################################
## GitLab Geo
##! Docs: https://docs.gitlab.com/ee/gitlab-geo
################################################################################
# geo_primary_role['enable'] = false
# geo_secondary_role['enable'] = false
################################################################################
## GitLab Geo Secondary (EE only)
################################################################################
# geo_secondary['auto_migrate'] = true
# geo_secondary['db_adapter'] = "postgresql"
# geo_secondary['db_encoding'] = "unicode"
# geo_secondary['db_collation'] = nil
# geo_secondary['db_database'] = "gitlabhq_geo_production"
# geo_secondary['db_pool'] = 10
# geo_secondary['db_username'] = "gitlab_geo"
# geo_secondary['db_password'] = nil
# geo_secondary['db_host'] = "/var/opt/gitlab/geo-postgresql"
# geo_secondary['db_port'] = 5431
# geo_secondary['db_socket'] = nil
# geo_secondary['db_sslmode'] = nil
# geo_secondary['db_sslcompression'] = 0
# geo_secondary['db_sslrootcert'] = nil
# geo_secondary['db_sslca'] = nil
# geo_secondary['db_fdw'] = true
################################################################################
## GitLab Geo Secondary Tracking Database (EE only)
################################################################################
# geo_postgresql['enable'] = false
# geo_postgresql['ha'] = false
# geo_postgresql['dir'] = '/var/opt/gitlab/geo-postgresql'
# geo_postgresql['data_dir'] = '/var/opt/gitlab/geo-postgresql/data'
# geo_postgresql['pgbouncer_user'] = nil
# geo_postgresql['pgbouncer_user_password'] = nil
################################################################################
# Pgbouncer (EE only)
# See [GitLab PgBouncer documentation](http://docs.gitlab.com/omnibus/settings/database.html#enabling-pgbouncer-ee-only)
# See the [PgBouncer page](https://pgbouncer.github.io/config.html) for details
################################################################################
# pgbouncer['enable'] = false
# pgbouncer['log_directory'] = '/var/log/gitlab/pgbouncer'
# pgbouncer['data_directory'] = '/var/opt/gitlab/pgbouncer'
# pgbouncer['env_directory'] = '/opt/gitlab/etc/pgbouncer/env'
# pgbouncer['env'] = {
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
# pgbouncer['listen_addr'] = '0.0.0.0'
# pgbouncer['listen_port'] = '6432'
# pgbouncer['pool_mode'] = 'transaction'
# pgbouncer['server_reset_query'] = 'DISCARD ALL'
# pgbouncer['application_name_add_host'] = '1'
# pgbouncer['max_client_conn'] = '2048'
# pgbouncer['default_pool_size'] = '100'
# pgbouncer['min_pool_size'] = '0'
# pgbouncer['reserve_pool_size'] = '5'
# pgbouncer['reserve_pool_timeout'] = '5.0'
# pgbouncer['server_round_robin'] = '0'
# pgbouncer['log_connections'] = '0'
# pgbouncer['server_idle_timeout'] = '30'
# pgbouncer['dns_max_ttl'] = '15.0'
# pgbouncer['dns_zone_check_period'] = '0'
# pgbouncer['dns_nxdomain_ttl'] = '15.0'
# pgbouncer['admin_users'] = %w(gitlab-psql postgres pgbouncer)
# pgbouncer['stats_users'] = %w(gitlab-psql postgres pgbouncer)
# pgbouncer['ignore_startup_parameters'] = 'extra_float_digits'
# pgbouncer['databases'] = {
# DATABASE_NAME: {
# host: HOSTNAME,
# port: PORT,
# user: USERNAME,
# password: PASSWORD
###! generate this by concatenating password and username: `echo -n "${PASSWORD}${USERNAME}" | md5sum`
# }
# ...
# }
# pgbouncer['logfile'] = nil
# pgbouncer['unix_socket_dir'] = nil
# pgbouncer['unix_socket_mode'] = '0777'
# pgbouncer['unix_socket_group'] = nil
# pgbouncer['auth_type'] = 'md5'
# pgbouncer['auth_hba_file'] = nil
# pgbouncer['auth_query'] = 'SELECT username, password FROM public.pg_shadow_lookup($1)'
# pgbouncer['users'] = {
# USERNAME: {
# password: MD5_PASSWORD_HASH
# }
# }
# postgresql['pgbouncer_user'] = nil
# postgresql['pgbouncer_user_password'] = nil
# pgbouncer['server_reset_query_always'] = 0
# pgbouncer['server_check_query'] = 'select 1'
# pgbouncer['server_check_delay'] = 30
# pgbouncer['max_db_connections'] = nil
# pgbouncer['max_user_connections'] = nil
# pgbouncer['syslog'] = 0
# pgbouncer['syslog_facility'] = 'daemon'
# pgbouncer['syslog_ident'] = 'pgbouncer'
# pgbouncer['log_disconnections'] = 1
# pgbouncer['log_pooler_errors'] = 1
# pgbouncer['stats_period'] = 60
# pgbouncer['verbose'] = 0
# pgbouncer['server_lifetime'] = 3600
# pgbouncer['server_connect_timeout'] = 15
# pgbouncer['server_login_retry'] = 15
# pgbouncer['query_timeout'] = 0
# pgbouncer['query_wait_timeout'] = 120
# pgbouncer['client_idle_timeout'] = 0
# pgbouncer['client_login_timeout'] = 60
# pgbouncer['autodb_idle_timeout'] = 3600
# pgbouncer['suspend_timeout'] = 10
# pgbouncer['idle_transaction_timeout'] = 0
# pgbouncer['pkt_buf'] = 4096
# pgbouncer['listen_backlog'] = 128
# pgbouncer['sbuf_loopcnt'] = 5
# pgbouncer['max_packet_size'] = 2147483647
# pgbouncer['tcp_defer_accept'] = 0
# pgbouncer['tcp_socket_buffer'] = 0
# pgbouncer['tcp_keepalive'] = 1
# pgbouncer['tcp_keepcnt'] = 0
# pgbouncer['tcp_keepidle'] = 0
# pgbouncer['tcp_keepintvl'] = 0
# pgbouncer['disable_pqexec'] = 0
## Pgbouncer client TLS options
# pgbouncer['client_tls_sslmode'] = 'disable'
# pgbouncer['client_tls_ca_file'] = nil
# pgbouncer['client_tls_key_file'] = nil
# pgbouncer['client_tls_cert_file'] = nil
# pgbouncer['client_tls_protocols'] = 'all'
# pgbouncer['client_tls_dheparams'] = 'auto'
# pgbouncer['client_tls_ecdhcurve'] = 'auto'
#
## Pgbouncer server TLS options
# pgbouncer['server_tls_sslmode'] = 'disable'
# pgbouncer['server_tls_ca_file'] = nil
# pgbouncer['server_tls_key_file'] = nil
# pgbouncer['server_tls_cert_file'] = nil
# pgbouncer['server_tls_protocols'] = 'all'
# pgbouncer['server_tls_ciphers'] = 'fast'
################################################################################
# Repmgr (EE only)
################################################################################
# repmgr['enable'] = false
# repmgr['cluster'] = 'gitlab_cluster'
# repmgr['database'] = 'gitlab_repmgr'
# repmgr['host'] = nil
# repmgr['node_number'] = nil
# repmgr['port'] = 5432
# repmgr['trust_auth_cidr_addresses'] = []
# repmgr['user'] = 'gitlab_repmgr'
# repmgr['sslmode'] = 'prefer'
# repmgr['sslcompression'] = 0
# repmgr['failover'] = 'automatic'
# repmgr['log_directory'] = '/var/log/gitlab/repmgrd'
# repmgr['node_name'] = nil
# repmgr['pg_bindir'] = '/opt/gitlab/embedded/bin'
# repmgr['service_start_command'] = '/opt/gitlab/bin/gitlab-ctl start postgresql'
# repmgr['service_stop_command'] = '/opt/gitlab/bin/gitlab-ctl stop postgresql'
# repmgr['service_reload_command'] = '/opt/gitlab/bin/gitlab-ctl hup postgresql'
# repmgr['service_restart_command'] = '/opt/gitlab/bin/gitlab-ctl restart postgresql'
# repmgr['service_promote_command'] = nil
# repmgr['promote_command'] = '/opt/gitlab/embedded/bin/repmgr standby promote -f /var/opt/gitlab/postgresql/repmgr.conf'
# repmgr['follow_command'] = '/opt/gitlab/embedded/bin/repmgr standby follow -f /var/opt/gitlab/postgresql/repmgr.conf'
# repmgr['upstream_node'] = nil
# repmgr['use_replication_slots'] = false
# repmgr['loglevel'] = 'INFO'
# repmgr['logfacility'] = 'STDERR'
# repmgr['logfile'] = nil
# repmgr['event_notification_command'] = nil
# repmgr['event_notifications'] = nil
# repmgr['rsync_options'] = nil
# repmgr['ssh_options'] = nil
# repmgr['priority'] = nil
#
# HA setting to specify if a node should attempt to be master on initialization
# repmgr['master_on_initialization'] = true
# repmgr['retry_promote_interval_secs'] = 300
# repmgr['witness_repl_nodes_sync_interval_secs'] = 15
# repmgr['reconnect_attempts'] = 6
# repmgr['reconnect_interval'] = 10
# repmgr['monitor_interval_secs'] = 2
# repmgr['master_response_timeout'] = 60
# repmgr['daemon'] = true
# repmgrd['enable'] = true
################################################################################
# Consul (EEP only)
################################################################################
# consul['enable'] = false
# consul['dir'] = '/var/opt/gitlab/consul'
# consul['user'] = 'gitlab-consul'
# consul['group'] = 'gitlab-consul'
# consul['config_file'] = '/var/opt/gitlab/consul/config.json'
# consul['config_dir'] = '/var/opt/gitlab/consul/config.d'
# consul['data_dir'] = '/var/opt/gitlab/consul/data'
# consul['log_directory'] = '/var/log/gitlab/consul'
# consul['env_directory'] = '/opt/gitlab/etc/consul/env'
# consul['env'] = {
# 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
# }
# consul['node_name'] = nil
# consul['script_directory'] = '/var/opt/gitlab/consul/scripts'
# consul['configuration'] = {
# 'client_addr' => nil,
# 'datacenter' => 'gitlab_consul',
# 'enable_script_checks' => true,
# 'server' => false
# }
# consul['services'] = []
# consul['service_config'] = {
# 'postgresql' => {
# 'service' => {
# 'name' => "postgresql",
# 'address' => '',
# 'port' => 5432,
# 'checks' => [
# {
# 'script' => "/var/opt/gitlab/consul/scripts/check_postgresql",
# 'interval' => "10s"
# }
# ]
# }
# }
# }
# consul['watchers'] = {
# 'postgresql' => {
# enable: false,
# handler: 'failover_pgbouncer'
# }
# }
================================================
FILE: gitlab/docker-compose.gitlab.yml
================================================
services:
gitlab:
image: 'gitlab/gitlab-ce:${GITLAB_IMAGE_VERSION:-latest}'
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://gitlab.${SITE:-localhost}:80'
gitlab_rails['gitlab_shell_ssh_port'] = 2224
hostname: 'gitlab.${SITE:-localhost}'
labels:
traefik.enable: true
traefik.http.routers.gitlab.rule: 'Host(`gitlab.${SITE:-localhost}`)'
traefik.http.services.gitlab.loadbalancer.server.port: 80
networks:
- 'srv'
ports:
- '2224:22'
restart: 'always'
volumes:
- './config:/etc/gitlab'
- './data:/var/opt/gitlab'
- './logs:/var/log/gitlab'
runner:
image: 'gitlab/gitlab-runner:${GITLAB_RUNNER_IMAGE_VERSION:-latest}'
labels:
traefik.enable: false
links:
- 'gitlab'
restart: 'always'
volumes:
- './runner:/etc/gitlab-runner'
- '/var/run/docker.sock:/var/run/docker.sock'
================================================
FILE: gitlab/runner/config.toml
================================================
concurrent = 10
check_interval = 1
log_level = "info"
log_format = "json"
[session_server]
session_timeout = 1800
[[runners]]
token = ""
name = "The Best Runner"
url = "http://gitlab/"
executor = "docker"
[runners.custom_build_dir]
[runners.docker]
tls_verify = false
image = "ubuntu:19.04"
privileged = false
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/cache"]
shm_size = 0
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.custom]
run_exec = ""
================================================
FILE: hits/.gitignore
================================================
postgresql
.*.swp
================================================
FILE: hits/README.md
================================================
# Hits
A hit counter that uses images: each time the image is served, the count is incremented.
See [source](https://github.com/dwyl/hits).
## Setup
Since PostgreSQL is a pain to set up, you must do this before launching the
service:
```bash
mkdir hits/postgresql
```
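The compose file runs PostgreSQL as user `1000:1000`, so the directory must also be writable by that UID. A quick sketch; the `chown` is left commented because it is only needed when your user is not UID 1000:

```shell
# Create the data directory and check its numeric ownership; PostgreSQL in
# this compose file runs as 1000:1000.
mkdir -p hits/postgresql
ls -ldn hits/postgresql
# chown 1000:1000 hits/postgresql   # only if your user is not UID 1000 (may need sudo)
```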
================================================
FILE: hits/docker-compose.hits.yml
================================================
networks:
hits-internal: {}
services:
hits:
image: 'tommoulard/hits'
depends_on:
- 'hits-postgresql'
healthcheck:
test: ['CMD', 'curl', '0.0.0.0:4000']
labels:
traefik.enable: true
traefik.http.routers.hits.rule: 'Host(`hits.${SITE:-localhost}`)'
traefik.http.services.hits.loadbalancer.server.port: 4000
networks:
- 'hits-internal'
- 'srv'
restart: 'always'
hits-postgresql:
image: 'postgres'
environment:
POSTGRES_PASSWORD: 'postgres'
POSTGRES_USER: 'postgres'
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres']
networks:
hits-internal:
aliases:
- 'postgresql'
restart: 'always'
user: '1000:1000'
volumes:
- './postgresql/:/var/lib/postgresql/data'
================================================
FILE: homeassistant/.gitignore
================================================
config
================================================
FILE: homeassistant/README.md
================================================
# Home Assistant
https://www.home-assistant.io
Open source home automation that puts local control and privacy first. Powered
by a worldwide community of tinkerers and DIY enthusiasts. Perfect to run on a
Raspberry Pi or a local server.
Note that traefik's basic auth cannot be used with Home Assistant, as
[HA does not support](https://github.com/home-assistant/iOS/issues/193#issuecomment-760662881)
using the `Authorization` header for anything other than HA itself.
## Configuration
To enable [prometheus metrics](https://www.home-assistant.io/integrations/prometheus/),
add the following to your `configuration.yaml`:
```yaml
prometheus:
```
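If the Prometheus instance from this repository scrapes HA over the shared network, the scrape job could look roughly like this. This is a sketch: the job name, token, and target are assumptions (the metrics endpoint requires a long-lived access token, and the compose file keeps `/api/prometheus` off the Traefik router):

```yaml
scrape_configs:
  - job_name: 'homeassistant'
    metrics_path: '/api/prometheus'
    bearer_token: 'YOUR_LONG_LIVED_ACCESS_TOKEN'
    static_configs:
      - targets: ['homeassistant:8123']  # container name on the shared network
```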
You can also [configure basic information](https://www.home-assistant.io/docs/configuration/basic/)
about your Home Assistant instance by setting the `homeassistant` key in your
`configuration.yaml`. This is the recommended way to configure your instance,
as it is not possible to secure it with traefik's basic auth.
Here is how to tell HA that it is [behind](https://www.home-assistant.io/integrations/http/#reverse-proxies)
a reverse proxy:
```yaml
http:
use_x_forwarded_for: true
trusted_proxies:
- 127.0.0.1 # localhost
- ::1 # localhost but in IPv6
- 172.0.0.0/8 # docker network
```
================================================
FILE: homeassistant/docker-compose.homeassistant.yml
================================================
services:
homeassistant:
image: 'ghcr.io/home-assistant/home-assistant:${HOME_ASSISTANT_IMAGE_VERSION:-stable}'
# devices: # For passing through USB, serial or gpio devices.
# - '/dev/ttyUSB0:/dev/ttyUSB0'
environment:
GUID: 1000
PUID: 1000
healthcheck:
test: ['CMD', 'curl', '0.0.0.0:8123']
labels:
traefik.enable: true
traefik.http.routers.homeassistant.rule: |
Host(`homeassistant.${SITE:-localhost}`) && !Path(`/api/prometheus`)
traefik.http.services.homeassistant.loadbalancer.server.port: 8123
# network_mode: host # might be required to discover some devices (e.g., UPnP).
networks:
- 'srv'
# privileged: true
restart: 'always'
volumes:
- './config:/config'
- '/etc/localtime:/etc/localtime:ro'
================================================
FILE: hugo/.gitignore
================================================
blog/
================================================
FILE: hugo/README.md
================================================
# blog
https://gohugo.io/
The world’s fastest framework for building websites
Hugo is one of the most popular open-source static site generators. With its
amazing speed and flexibility, Hugo makes building websites fun again.
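The builder container renders `./blog` into `./nginx/conf/www` (paths from the compose file) and refreshes roughly every `HUGO_REFRESH_TIME` seconds. A sketch for adding a post once the blog has been scaffolded; the filename and front matter are placeholders:

```shell
# Drop a new post into the mounted source tree; the hugo-builder container
# re-renders it on its next refresh cycle.
mkdir -p hugo/blog/content/posts
cat > hugo/blog/content/posts/hello.md <<'EOF'
---
title: "Hello"
date: 2020-01-01
---
Hello, World!
EOF
```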
================================================
FILE: hugo/docker-compose.hugo.yml
================================================
services:
hugo:
image: 'nginx:stable-alpine'
depends_on:
- 'hugo-builder'
healthcheck:
test: ['CMD', 'curl', '0.0.0.0:80']
labels:
traefik.enable: true
traefik.http.routers.hugo.rule: 'Host(`hugo.${SITE:-localhost}`)'
traefik.http.services.hugo.loadbalancer.server.port: 80
networks:
- 'srv'
restart: 'always'
volumes:
- './nginx/conf:/etc/nginx/conf.d'
- './nginx/logs:/var/log/nginx/'
hugo-builder:
image: 'jojomi/hugo:0.59'
environment:
HUGO_BASEURL: 'https://hugo.${SITE:-localhost}/'
HUGO_REFRESH_TIME: 3600
HUGO_THEME: 'hugo-theme-cactus-plus'
labels:
traefik.enable: false
restart: 'always'
volumes:
- './blog:/src'
- './nginx/conf/www:/output'
================================================
FILE: hugo/nginx/conf/nginx.conf
================================================
error_log /var/log/nginx/error.log;
log_format main_log_format '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$gzip_ratio"';
server {
access_log /var/log/nginx/access.log main_log_format;
root /etc/nginx/conf.d/www;
index index.html;
location / {
try_files $uri $uri/ =404;
autoindex on;
}
# deny all direct access for these folders
location ~* /(.git|cache|bin|logs|backup|tests)/.*$ { return 403; }
location ~* ^.+\.(ico|js|gif|jpg|jpeg|png|bmp)$ {
expires 30d;
}
}
================================================
FILE: hugo/nginx/logs/.gitkeep
================================================
================================================
FILE: jackett/.gitignore
================================================
config/
downloads/
================================================
FILE: jackett/README.md
================================================
# jackett
https://github.com/Jackett/Jackett
Jackett is a single repository of maintained indexer scraping & translation
logic, removing the burden from other apps. Developer note: the software
implements the Torznab (with hybrid nZEDb / Newznab category numbering) and
TorrentPotato APIs. A third-party Golang SDK for Jackett is available from
webtor-io/go-jackett.
================================================
FILE: jackett/docker-compose.jackett.yml
================================================
services:
jackett:
image: 'linuxserver/jackett:${JACKETT_IMAGE_VERSION:-v0.20.567-ls56}'
dns:
- '1.1.1.1'
environment:
PGID: '${JACKETT_GPID:-1000}'
PUID: '${JACKETT_PUID:-1000}'
TZ: '${TZ:-Europe/Paris}'
healthcheck:
test: ['CMD', 'curl', '0.0.0.0:9117/UI/Login?ReturnUrl=%2FUI%2FDashboard']
labels:
traefik.enable: true
traefik.http.routers.jackett.middlewares: 'basic_auth@docker'
traefik.http.routers.jackett.rule: 'Host(`jackett.${SITE:-localhost}`)'
traefik.http.services.jackett.loadbalancer.server.port: 9117
networks:
- 'srv'
restart: 'always'
volumes:
- './config:/config'
- './downloads:/downloads'
================================================
FILE: jellyfin/.gitignore
================================================
cache/
config/
================================================
FILE: jellyfin/README.md
================================================
# jellyfin
https://jellyfin.org
Jellyfin is a suite of multimedia applications designed to organize, manage,
and share digital media files to networked devices. Jellyfin also can serve
media to DLNA and Chromecast-enabled devices. It is an open-source fork of
Emby.
Jellyfin is Free Software, licensed under the GNU GPL. You can use it, study
it, modify it, build it, and distribute it for free, as long as your changes
are licensed the same way. The project is community-built, relying entirely on
contributions from volunteers.
================================================
FILE: jellyfin/docker-compose.jellyfin.yml
================================================
services:
jellyfin:
image: 'jellyfin/jellyfin'
labels:
traefik.enable: true
traefik.http.routers.jellyfin.rule: 'Host(`jellyfin.${SITE:-localhost}`)'
traefik.http.services.jellyfin.loadbalancer.server.port: 8096
networks:
- 'srv'
restart: 'always'
user: '1000:1000'
volumes:
- './cache:/cache'
- './config:/config'
- './logs:/logs'
- './media:/media'
================================================
FILE: jupyter/README.md
================================================
# jupyter
https://jupyter.org
The Jupyter Notebook is an open-source web application that allows you to
create and share documents that contain live code, equations, visualizations
and narrative text. Uses include: data cleaning and transformation, numerical
simulation, statistical modeling, data visualization, machine learning, and
much more using Python.
================================================
FILE: jupyter/docker-compose.jupyter.yml
================================================
services:
jupyter:
image: 'jupyter/tensorflow-notebook:45f07a14b422'
command: |
jupyter notebook
--NotebookApp.token=''
--NotebookApp.password=''
# removing token & password to enable traefik auth
environment:
JUPYTER_ENABLE_LAB: 'yes'
labels:
traefik.enable: true
traefik.http.routers.jupyter.middlewares: 'basic_auth@docker'
traefik.http.routers.jupyter.rule: 'Host(`jupyter.${SITE:-localhost}`)'
traefik.http.services.jupyter.loadbalancer.server.port: 8888
networks:
- 'srv'
restart: 'always'
volumes:
- './jupyter/config:/root/.jupyter/'
- './notebooks:/home/jovyan/'
================================================
FILE: kavita/.gitignore
================================================
config
================================================
FILE: kavita/README.md
================================================
# kavita
https://www.kavitareader.com/
https://docs.linuxserver.io/images/docker-kavita/
Kavita is a fast, feature rich, cross platform reading server. Built with a
focus for being a full solution for all your reading needs. Setup your own
server and share your reading collection with your friends and family!
================================================
FILE: kavita/docker-compose.kavita.yml
================================================
services:
kavita:
image: 'lscr.io/linuxserver/kavita:${KAVITA_IMAGE_VERSION:-latest}'
environment:
PGID: '${KAVITA_GPID:-1000}'
PUID: '${KAVITA_PUID:-1000}'
TZ: '${TZ:-Europe/Paris}'
healthcheck:
test: ['CMD', 'curl', '0.0.0.0:5000']
labels:
traefik.enable: true
traefik.http.routers.kavita.rule: 'Host(`kavita.${SITE:-localhost}`)'
traefik.http.services.kavita.loadbalancer.server.port: 5000
networks:
- 'srv'
restart: 'always'
volumes:
- './data:/data'
- './config:/config'
================================================
FILE: minecraft/README.md
================================================
# minecraft
https://github.com/itzg/docker-minecraft-server
Minecraft is a sandbox video game developed by Mojang. The game was created by
Markus "Notch" Persson in the Java programming language. Following several
early test versions, it was released as a paid public alpha for personal
computers in 2009 before releasing in November 2011, with Jens Bergensten
taking over development. Minecraft has since been ported to several other
platforms and is the best-selling video game of all time, with 200 million
copies sold and 126 million monthly active users as of 2020.
================================================
FILE: minecraft/docker-compose.minecraft-ftb.yml
================================================
services:
minecraft-ftb:
image: 'jonasbonno/ftb-revelation'
labels:
traefik.enable: false
ports:
- '25565:25565'
- '25565:25565/udp'
restart: 'always'
volumes:
- './ftb-data:/minecraft'
================================================
FILE: minecraft/docker-compose.minecraft.yml
================================================
services:
minecraft:
image: 'itzg/minecraft-server'
environment:
EULA: 'true'
restart: 'always'
labels:
- 'traefik.enable=false'
ports:
- '25565:25565'
- '25565:25565/udp'
volumes:
- './minecraft-data:/data'
================================================
FILE: mumble/README.md
================================================
# Mumble
Mumble is a free, open source, low latency, high quality voice chat application.
================================================
FILE: mumble/docker-compose.mumble.yml
================================================
services:
mumble:
image: 'mumblevoip/mumble-server:${MUMBLE_IMAGE_VERSION:-latest}'
environment:
MUMBLE_SUPERUSER_PASSWORD: '${MUMBLE_SUPERUSER_PASSWORD:-CHANGE_ME}'
labels:
traefik.enable: false
networks:
- 'srv'
ports:
- '64738:64738'
- '64738:64738/udp'
restart: 'always'
volumes:
- './data:/data'
================================================
FILE: musicbot/README.md
================================================
# musicbot
https://github.com/jagrosh/MusicBot
A Discord music bot that's easy to set up and run yourself!
================================================
FILE: musicbot/conf/config.txt
================================================
/// START OF JMUSICBOT CONFIG ///
/////////////////////////////////////////////////////////
// Config for the JMusicBot //
/////////////////////////////////////////////////////////
// Any line starting with // is ignored //
// You MUST set the token and owner //
// All other items have defaults if you don't set them //
// Open in Notepad++ for best results //
/////////////////////////////////////////////////////////
// This sets the token for the bot to log in with
// This MUST be a bot token (user tokens will not work)
// If you don't know how to get a bot token, please see the guide here:
// https://github.com/jagrosh/MusicBot/wiki/Getting-a-Bot-Token
token = BOT_TOKEN_HERE
// This sets the owner of the bot
// This needs to be the owner's ID (a 17-18 digit number)
// https://github.com/jagrosh/MusicBot/wiki/Finding-Your-User-ID
owner = 0 // OWNER ID
// This sets the prefix for the bot
// The prefix is used to control the commands
// If you use !!, the play command will be !!play
// If you do not set this, the prefix will be a mention of the bot (@Botname play)
// If you make this blank, the bot will not use a prefix
prefix = "@mention"
// If you set this, it modifies the default game of the bot
// Set this to NONE to have no game
// Set this to DEFAULT to use the default game
// You can make the game "Playing X", "Listening to X", or "Watching X"
// where X is the title. If you don't include an action, it will use the
// default of "Playing"
game = "DEFAULT"
// If you set this, it will modify the default status of the bot
// Valid values: ONLINE IDLE DND INVISIBLE
status = ONLINE
// If you set this to true, the bot will list the title of the song it is currently playing in its
// "Playing" status. Note that this will ONLY work if the bot is playing music on ONE guild;
// if the bot is playing on multiple guilds, this will not work.
songinstatus=false
// If you set this, the bot will also use this prefix in addition to
// the one provided above
altprefix = "NONE"
// If you set these, it will change the various emojis
success = "🎶"
warning = "💡"
error = "🚫"
loading = "⌚"
searching = "🔎"
// If you set this, you change the word used to view the help.
// For example, if you set the prefix to !! and the help to cmds, you would type
// !!cmds to see the help text
help = help
// If you set this, the "nowplaying" command will show youtube thumbnails
// Note: If you set this to true, the nowplaying boxes will NOT refresh
// This is because refreshing the boxes causes the image to be reloaded
// every time it refreshes.
npimages = false
// If you set this, the bot will not leave a voice channel after it finishes a queue.
// Keep in mind that being connected to a voice channel uses additional bandwidth,
// so this option is not recommended if bandwidth is a concern.
stayinchannel = false
// This sets the maximum amount of seconds any track loaded can be. If not set or set
// to any number less than or equal to zero, there is no maximum time length. This time
// restriction applies to songs loaded from any source.
maxtime = 0
// This sets an alternative folder to be used as the Playlists folder
// This can be a relative or absolute path
playlistsfolder = "Playlists"
// By default, the bot will DM the owner if the bot is running and a new version of the bot
// becomes available. Set this to false to disable this feature.
updatealerts=true
// Changing this changes the lyrics provider
// Currently available providers: "A-Z Lyrics", "Genius", "MusicMatch"
// At the time of writing, I would recommend sticking with A-Z Lyrics or MusicMatch,
// as Genius tends to have a lot of non-song results and you might get something
// completely unrelated to what you want.
// If you are interested in contributing a provider, please see
// https://github.com/jagrosh/JLyrics
lyrics.default = "A-Z Lyrics"
// These settings allow you to configure custom aliases for all commands.
// Multiple aliases may be given, separated by commas.
//
// Example 1: Giving command "play" the alias "p":
// play = [ p ]
//
// Example 2: Giving command "search" the aliases "yts" and "find":
// search = [ yts, find ]
aliases {
// General commands
settings = [ status ]
// Music commands
lyrics = []
nowplaying = [ np, current ]
play = []
playlists = [ pls ]
queue = [ list ]
remove = [ delete ]
scsearch = []
search = [ ytsearch ]
shuffle = []
skip = [ voteskip ]
// Admin commands
prefix = [ setprefix ]
setdj = []
settc = []
setvc = []
// DJ Commands
forceremove = [ forcedelete, modremove, moddelete, modelete ]
forceskip = [ modskip ]
movetrack = [ move ]
pause = []
playnext = []
repeat = []
skipto = [ jumpto ]
stop = []
volume = [ vol ]
}
// If you set this to true, it will enable the eval command for the bot owner. This command
// allows the bot owner to run arbitrary code from the bot's account.
//
// WARNING:
// This command can be extremely dangerous. If you don't know what you're doing, you could
// cause horrific problems on your Discord server or on whatever computer this bot is running
// on. Never run this command unless you are completely positive what you are running.
//
// DO NOT ENABLE THIS IF YOU DON'T KNOW WHAT THIS DOES OR HOW TO USE IT
// IF SOMEONE ASKS YOU TO ENABLE THIS, THERE IS AN 11/10 CHANCE THEY ARE TRYING TO SCAM YOU
eval=false
/// END OF JMUSICBOT CONFIG ///
================================================
FILE: musicbot/docker-compose.musicBot.yml
================================================
services:
musicbot:
image: 'raiponce/musicbot:0.2.10'
labels:
traefik.enable: false
networks:
- 'srv'
restart: 'always'
volumes:
- './conf:/musicBot/conf/'
- './playlists:/musicBot/playlists/'
================================================
FILE: nextcloud/.gitignore
================================================
data/
db/
================================================
FILE: nextcloud/README.md
================================================
# NextCloud
http://nextcloud.com/
NextCloud is a suite of client-server software for creating and using file
hosting services. NextCloud is free and open-source, which means that anyone is
allowed to install and operate it on their own private server devices.
With the integrated OnlyOffice, NextCloud's functionality is similar to
Dropbox, Office 365 or Google Drive, but it can be used on home-local
computers or for off-premises file storage hosting.
The original OwnCloud developer Frank Karlitschek forked OwnCloud and created
NextCloud, which continues to be actively developed by Karlitschek and other
members of the original OwnCloud team.
## Setup
### Cron
Ajax is the default, but cron is the most reliable option.
To set up cron, add this line to your crontab:
```
*/5 * * * * docker exec -u www-data make-my-server-nextcloud-1 php -f cron.php
```
Which should lead to:
```bash
$ crontab -l
...
#min hour day Month Day_Of_Week Command
*/5 * * * * docker exec -u www-data make-my-server-nextcloud-1 php -f cron.php
```
### Database
If you forgot to install NextCloud with its dedicated database, you can run this command to migrate from any backend to the MariaDB instance:
```bash
docker-compose exec -u www-data nextcloud php occ db:convert-type --all-apps --port 3306 --password nextcloud mysql nextcloud nextcloud-db nextcloud
```
## Upgrade
How to upgrade your NextCloud instance:
```bash
docker-compose pull nextcloud
docker-compose stop nextcloud && docker-compose up -d nextcloud
docker-compose exec -u www-data nextcloud php occ upgrade -vvv
```
To remove maintenance mode:
```bash
docker-compose exec -u www-data nextcloud php occ maintenance:mode --off
```
## Misc
### Re-apply the configuration
If you want to re-apply the configuration of NextCloud, you can always run this:
```bash
docker-compose exec -u www-data nextcloud php occ maintenance:repair -vvv
```
### php-imagick
To fix this issue:
```
Module php-imagick in this instance has no SVG support. For better compatibility it is recommended to install it.
```
Run:
```bash
docker-compose exec nextcloud apt -y install libmagickcore-6.q16-6-extra
```
### default_phone_region
To fix this issue:
```
ERROR: Can not validate phone numbers without `default_phone_region` being set in the config file
```
Run:
```bash
docker-compose exec -u www-data nextcloud php occ config:system:set default_phone_region --type string --value="FR"
```
================================================
FILE: nextcloud/docker-compose.nextcloud.yml
================================================
networks:
nextcloud-internal: {}
services:
nextcloud:
image: 'nextcloud:${NEXTCLOUD_IMAGE_VERSION:-latest}'
depends_on:
- 'nextcloud-db'
healthcheck:
test: ['CMD', 'curl', '0.0.0.0:80']
labels:
traefik.enable: true
# https://docs.nextcloud.com/server/22/admin_manual/installation/harden_server.html
# https://doc.traefik.io/traefik/v2.6/middlewares/http/headers/
traefik.http.middlewares.header-nextcloud.headers.browserXssFilter: true
traefik.http.middlewares.header-nextcloud.headers.contentTypeNosniff: true
traefik.http.middlewares.header-nextcloud.headers.customFrameOptionsValue: 'SAMEORIGIN'
traefik.http.middlewares.header-nextcloud.headers.referrerPolicy: 'no-referrer'
traefik.http.middlewares.header-nextcloud.headers.stsincludesubdomains: true
traefik.http.middlewares.header-nextcloud.headers.stspreload: true
traefik.http.middlewares.header-nextcloud.headers.stsseconds: 15552000
# https://docs.nextcloud.com/server/21/admin_manual/issues/general_troubleshooting.html#service-discovery
# https://docs.nextcloud.com/server/23/admin_manual/configuration_server/reverse_proxy_configuration.html#traefik-2
# https://doc.traefik.io/traefik/v2.6/middlewares/http/redirectregex/
traefik.http.middlewares.redirect-dav-nextcloud.redirectRegex.permanent: true
traefik.http.middlewares.redirect-dav-nextcloud.redirectRegex.regex: 'https://nextcloud.${SITE:-localhost}/.well-known/(card|cal)dav'
traefik.http.middlewares.redirect-dav-nextcloud.redirectRegex.replacement: 'https://nextcloud.${SITE:-localhost}/remote.php/dav/'
traefik.http.routers.nextcloud.middlewares: 'header-nextcloud,redirect-dav-nextcloud'
traefik.http.routers.nextcloud.rule: 'Host(`nextcloud.${SITE:-localhost}`)'
traefik.http.services.nextcloud.loadbalancer.server.port: 80
networks:
- 'srv'
- 'nextcloud-internal'
restart: 'always'
volumes:
- './data:/var/www/html'
nextcloud-db:
image: 'mariadb'
command: '--transaction-isolation=READ-COMMITTED --binlog-format=ROW'
environment:
MYSQL_DATABASE: '${NEXTCLOUD_MYSQL_DATABASE:-nextcloud}'
MYSQL_PASSWORD: '${NEXTCLOUD_MYSQL_PASSWORD:-nextcloud}'
MYSQL_ROOT_PASSWORD: '${NEXTCLOUD_MYSQL_ROOT_PASSWORD:-pass}'
MYSQL_USER: '${NEXTCLOUD_MYSQL_USER:-nextcloud}'
healthcheck:
test: ['CMD', 'mysqlcheck', '--all-databases', '-ppass']
labels:
traefik.enable: false
networks:
- 'nextcloud-internal'
restart: 'always'
volumes:
- './db:/var/lib/mysql'
================================================
FILE: nginx/README.md
================================================
# nginx
https://www.nginx.com/
Nginx (pronounced "engine X", /ˌɛndʒɪnˈɛks/ EN-jin-EKS), stylized as NGINX
or nginx or NginX, is a web server that can also be used as a reverse proxy,
load balancer, mail proxy and HTTP cache. The software was created by Igor
Sysoev and publicly released in 2004. Nginx is free and open-source
software, released under the terms of the 2-clause BSD license. A large
fraction of web servers use NGINX, often as a load balancer.
================================================
FILE: nginx/conf/nginx.conf
================================================
error_log /var/log/nginx/error.log;
log_format main_log_format '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$gzip_ratio"';
server {
access_log /var/log/nginx/access.log main_log_format;
root /etc/nginx/conf.d/www;
index index.html;
location / {
try_files $uri $uri/ =404;
autoindex on;
}
# deny all direct access for these folders
location ~* /(.git|cache|bin|logs|backup|tests)/.*$ { return 403; }
location ~* ^.+\.(ico|js|gif|jpg|jpeg|png|bmp)$ {
expires 30d;
}
}
================================================
FILE: nginx/conf/www/index.html
================================================
<h1>Simple web page</h1>
Hello, World!
================================================
FILE: nginx/docker-compose.nginx.yml
================================================
services:
nginx:
image: 'nginx:${NGINX_IMAGE_VERSION:-stable-alpine}'
healthcheck:
test: ['CMD', 'curl', '0.0.0.0:80']
labels:
traefik.enable: true
traefik.http.routers.nginx.rule: 'Host(`${SITE:-localhost}`)'
traefik.http.services.nginx.loadbalancer.server.port: 80
networks:
- 'srv'
restart: 'always'
volumes:
- './conf:/etc/nginx/conf.d'
- './logs:/var/log/nginx/'
================================================
FILE: nginx/logs/.gitkeep
================================================
================================================
FILE: pastebin/README.md
================================================
# pastebin
https://github.com/mko-x/docker-pastebin
Paste your stuff however
================================================
FILE: pastebin/docker-compose.pastebin.yml
================================================
services:
pastebin:
image: 'mkodockx/docker-pastebin:latest'
labels:
traefik.enable: true
traefik.http.routers.pastebin.rule: 'Host(`pastebin.${SITE:-localhost}`)'
traefik.http.services.pastebin.loadbalancer.server.port: 80
networks:
- 'srv'
restart: 'always'
================================================
FILE: peertube/.gitignore
================================================
config/custom-environment-variables.yaml
config/default.yaml
data/
db/
redis/
================================================
FILE: peertube/README.md
================================================
# peertube
https://peer.tube
YouTube, but self-hosted.
PeerTube, a federated (ActivityPub) video streaming platform using P2P
(BitTorrent) directly in the web browser with WebTorrent and Angular.
================================================
FILE: peertube/config/production.yaml
================================================
listen:
hostname: '0.0.0.0'
port: 9000
# Correspond to your reverse proxy "listen" configuration
webserver:
https: true
hostname: 'undefined'
port: 443
rates_limit:
login:
# 15 attempts in 5 min
window: '5 minutes'
max: 15
ask_send_email:
# 3 attempts in 5 min
window: '5 minutes'
max: 3
# Proxies to trust to get real client IP
# If you run PeerTube just behind a local proxy (nginx), keep 'loopback'
# If you run PeerTube behind a remote proxy, add the proxy IP address (or
# subnet)
trust_proxy:
- 'loopback'
- 'linklocal'
- 'uniquelocal'
# Your database name will be "peertube"+database.suffix
database:
hostname: 'peertube-postgres'
port: 5432
suffix: ''
username: 'postgres'
password: 'postgres'
# Redis server for short time storage
redis:
hostname: 'peertube-redis'
port: 6379
auth: null
# From the project root directory
storage:
tmp: '../data/tmp/'
avatars: '../data/avatars/'
videos: '../data/videos/'
redundancy: '../data/redundancy/'
logs: '../data/logs/'
previews: '../data/previews/'
thumbnails: '../data/thumbnails/'
torrents: '../data/torrents/'
captions: '../data/captions/'
cache: '../data/cache/'
plugins: '../data/plugins/'
log:
level: 'info' # debug/info/warning/error
tracker:
enabled: true
# false because we have issues with traefik and ws ip/port forwarding
reject_too_many_announces: false
admin:
email: null
================================================
FILE: peertube/docker-compose.peertube.yml
================================================
services:
peertube:
image: 'chocobozzz/peertube:production-buster'
depends_on:
- 'peertube-db'
- 'peertube-redis'
environment:
PEERTUBE_ADMIN_EMAIL: '${ROOT_EMAIL:-changeme@changeme.org}'
PEERTUBE_DB_HOSTNAME: 'peertube-db'
PEERTUBE_DB_PASSWORD: '${USERS}'
PEERTUBE_DB_USERNAME: 'peertube'
PEERTUBE_TRUST_PROXY: '["127.0.0.1", "loopback", "172.0.0.0/0"]'
PEERTUBE_WEBSERVER_HOSTNAME: 'peertube.${SITE:-localhost}'
PEERTUBE_WEBSERVER_HTTPS: 'true'
PEERTUBE_WEBSERVER_PORT: 443
labels:
traefik.enable: true
traefik.http.routers.peertube.rule: 'Host(`peertube.${SITE:-localhost}`)'
traefik.http.services.peertube.loadbalancer.server.port: 9000
links:
- 'peertube-db'
- 'peertube-redis'
networks:
- 'srv'
restart: 'always'
volumes:
- './config:/config'
- './data:/data'
peertube-db:
image: 'postgres:10-alpine'
environment:
POSTGRES_DB: 'peertube'
POSTGRES_PASSWORD: '${USERS}'
POSTGRES_USER: 'peertube'
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'peertube']
labels:
traefik.enable: false
networks:
- 'srv'
restart: 'always'
volumes:
- './db:/var/lib/postgresql/data'
peertube-redis:
image: 'redis:4-alpine'
healthcheck:
test: ['CMD', 'redis-cli', 'PING']
labels:
traefik.enable: false
networks:
- 'srv'
restart: 'always'
volumes:
- './redis:/data'
================================================
FILE: pihole/.gitignore
================================================
etc-pihole/
etc-dnsmasq.d/
logs/
================================================
FILE: pihole/README.md
================================================
# Pi-hole
[Pi-hole](https://pi-hole.net/) is a network-wide DNS sinkhole that blocks ads
and trackers for every device on your network.
This service definition deploys Pi-hole behind Traefik so the admin interface is
available securely at `https://pihole.${SITE}` while DNS requests continue to be
served directly on port 53.
## Volumes
- `./etc-pihole` → `/etc/pihole` (gravity database, lists, custom configs)
- `./etc-dnsmasq.d` → `/etc/dnsmasq.d` (dnsmasq overrides, DHCP config)
- `./logs` → `/var/log/lighttpd` (web UI access/error logs)
Back up these folders before upgrading or recreating the container to retain
settings, blocklists, and DHCP leases.
## Environment variables
| Variable | Default | Description |
| -------------------------- | ---------------- | ------------------------------------------------------------------ |
| `PIHOLE_IMAGE_VERSION` | `2024.05.0` | Pi-hole Docker image tag |
| `PIHOLE_V4_ADDRESS` | `0.0.0.0` | IP address Pi-hole advertises to clients |
| `PIHOLE_V6_ADDRESS` | `::` | IPv6 address Pi-hole advertises |
| `PIHOLE_HOSTNAME` | `pihole` | Hostname shown in the UI and DHCP replies |
| `PIHOLE_DNS1` | `1.1.1.1` | Primary upstream DNS |
| `PIHOLE_DNS2` | `1.0.0.1` | Secondary upstream DNS |
| `PIHOLE_DNSMASQ_LISTENING` | `all` | dnsmasq listening mode (`local`, `all`) |
| `PIHOLE_REV_SERVER` | `false` | Enable conditional forwarding |
| `PIHOLE_REV_SERVER_TARGET` | `192.168.0.1` | Router/DNS to forward PTR queries |
| `PIHOLE_REV_SERVER_DOMAIN` | `lan` | Local domain for reverse lookups |
| `PIHOLE_REV_SERVER_CIDR` | `192.168.0.0/24` | Subnet for reverse lookups |
| `SITE` | `localhost` | Used to build the admin UI host `pihole.${SITE}` |
| `TZ` | `Europe/Paris` | Container timezone |
| `TRAEFIK_DNS_ENTRYPOINT` | `53` | Port that Traefik exposes for DNS (defined in the Traefik service) |
The DNS port exposed to your network is configured globally via
`TRAEFIK_DNS_ENTRYPOINT` and defaults to 53.
Update `.env` (copied from `.env.default`) with secure values.
## DNS port requirements
Traefik now terminates all DNS traffic for Pi-hole. It binds both TCP and UDP
port 53 by default (configurable via `TRAEFIK_DNS_ENTRYPOINT`) and proxies the
traffic to the Pi-hole container over the internal `srv` network. Make sure the
host resolver (`systemd-resolved`, dnsmasq, etc.) is disabled or moved away from
that port before starting Traefik, otherwise Traefik cannot bind to it.
1. Disable the built-in resolver (for example `sudo systemctl disable --now
systemd-resolved` on Ubuntu) and restart Docker so the ports are freed.
2. If you must keep another resolver running locally, set
`TRAEFIK_DNS_ENTRYPOINT` in `.env` to an alternate port and point every DNS
client to that same port on the Traefik host.
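Before starting the stack, it can help to confirm that nothing else still holds port 53. A minimal check, assuming `ss` from iproute2 is available (substitute `netstat -tuln` on older systems):

```shell
# Report whether anything is already bound to port 53 (TCP or UDP).
# If `ss` is missing, the check degrades gracefully and reports "free".
if ss -tuln 2>/dev/null | grep -q ':53[[:space:]]'; then
    echo "port 53 is taken - stop the local resolver before starting Traefik"
else
    echo "port 53 looks free for Traefik"
fi
```

If the port is reported as taken, step 1 above (disabling the host resolver) usually frees it.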
## Traefik integration
- The compose file registers Pi-hole with Traefik using the shared `srv` network.
- The router `pihole` matches ``Host(`pihole.${SITE}`)`` and explicitly targets
  the `websecure` entrypoint so TLS is always negotiated.
- The DNS routers `pihole-dns` bind to the `dns-tcp` and `dns-udp` entrypoints so
Traefik proxies raw DNS queries on port 53 to the container without exposing
any Pi-hole ports.
- Certificates are handled by Traefik via the globally configured ACME resolver.
## DNS configuration steps
1. Deploy the service: `SITE=example.com docker-compose up -d pihole` (or run the
global helper to start every service).
2. Point your network clients (router DHCP option or manual DNS setting) to the
host running Traefik on port `${TRAEFIK_DNS_ENTRYPOINT:-53}`.
3. Access `https://pihole.${SITE}` to finish the web-based setup and verify that
queries are being processed.
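Once the stack is up, a quick smoke test from the Traefik host confirms the DNS path end to end. This sketch assumes `dig` (from dnsutils/bind-utils) is installed and that `TRAEFIK_DNS_ENTRYPOINT` is still the default 53:

```shell
# Ask the proxied Pi-hole to resolve a name; +time/+tries keep the probe
# fast. The fallback message keeps the snippet non-fatal when dig is
# absent or the stack is not running yet.
dig +short +time=2 +tries=1 @127.0.0.1 -p 53 pi-hole.net \
    || echo "no answer - check that the stack is up and port 53 is bound"
```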
### Optional DHCP support
If you want Pi-hole to serve DHCP, you still need to grant it `NET_ADMIN` and
expose UDP/67 directly (Traefik does not proxy DHCP). Create an override such as:
```yml
services:
pihole:
cap_add:
- NET_ADMIN
ports:
- '67:67/udp'
```
This reintroduces a host port mapping, so only enable it if no other DHCP server
is active on your LAN and you understand the exposure.
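Before deploying, the merged configuration can be validated without starting anything. A sketch, assuming the override above was saved as `docker-compose.pihole-dhcp.yml` (the file name is an illustrative choice; adjust paths to your checkout):

```shell
# Merge the base stack with the DHCP override and validate the result.
# Falls through with a notice when docker-compose is not installed or
# the paths do not match this checkout.
if command -v docker-compose >/dev/null 2>&1; then
    docker-compose \
        -f docker-compose.yml \
        -f pihole/docker-compose.pihole.yml \
        -f docker-compose.pihole-dhcp.yml \
        config -q \
        && echo "override merges cleanly" \
        || echo "merge failed - check file paths"
else
    echo "docker-compose not installed"
fi
```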
## Maintenance
- Update Pi-hole with `docker-compose pull pihole && docker-compose up -d pihole`.
- Review logs in `pihole/logs/` or via the web UI if troubleshooting.
- Export blocklists or settings regularly from the admin UI in addition to
filesystem backups.
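The filesystem backup mentioned above can be scripted. A minimal sketch, run from the `pihole/` directory (the archive name and the `mkdir -p` guard are illustrative choices to keep the sketch runnable even before a first deployment):

```shell
#!/bin/sh
set -eu
# Ensure the bind-mount folders exist, then archive them under a
# dated name so repeated runs do not overwrite each other.
mkdir -p etc-pihole etc-dnsmasq.d logs
backup="pihole-backup-$(date +%F).tar.gz"
tar czf "$backup" etc-pihole etc-dnsmasq.d logs
echo "wrote $backup"
```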
================================================
FILE: pihole/docker-compose.pihole.yml
================================================
services:
pihole:
image: 'pihole/pihole:${PIHOLE_IMAGE_VERSION:-2025.11.1}'
environment:
DNS1: '${PIHOLE_DNS1:-1.1.1.1}'
DNS2: '${PIHOLE_DNS2:-1.0.0.1}'
DNSMASQ_LISTENING: '${PIHOLE_DNSMASQ_LISTENING:-all}'
FTLCONF_REPLY_ADDR4: '${PIHOLE_V4_ADDRESS:-0.0.0.0}'
FTLCONF_REPLY_ADDR6: '${PIHOLE_V6_ADDRESS:-::}'
HOSTNAME: '${PIHOLE_HOSTNAME:-pihole}'
REV_SERVER: '${PIHOLE_REV_SERVER:-false}'
REV_SERVER_CIDR: '${PIHOLE_REV_SERVER_CIDR:-192.168.0.0/24}'
REV_SERVER_DOMAIN: '${PIHOLE_REV_SERVER_DOMAIN:-lan}'
REV_SERVER_TARGET: '${PIHOLE_REV_SERVER_TARGET:-192.168.0.1}'
TZ: '${TZ:-Europe/Paris}'
VIRTUAL_HOST: 'pihole.${SITE:-localhost}'
healthcheck:
test:
- 'CMD'
- 'dig'
- '+time=1'
- '+tries=1'
- '@127.0.0.1'
- 'pi-hole.net'
labels:
traefik.enable: true
traefik.http.routers.pihole.entrypoints: 'websecure'
traefik.http.routers.pihole.middlewares: 'basic_auth@docker'
traefik.http.routers.pihole.rule: 'Host(`pihole.${SITE:-localhost}`)'
traefik.http.services.pihole.loadbalancer.server.port: 80
traefik.tcp.routers.pihole-dns.entrypoints: 'dns-tcp'
traefik.tcp.routers.pihole-dns.rule: 'HostSNI(`*`)'
traefik.tcp.routers.pihole-dns.service: 'pihole-dns'
traefik.tcp.services.pihole-dns.loadbalancer.server.port: 53
traefik.udp.routers.pihole-dns.entrypoints: 'dns-udp'
traefik.udp.routers.pihole-dns.service: 'pihole-dns'
traefik.udp.services.pihole-dns.loadbalancer.server.port: 53
networks:
- 'srv'
restart: 'always'
volumes:
- './etc-pihole:/etc/pihole'
- './etc-dnsmasq.d:/etc/dnsmasq.d'
- './logs:/var/log/lighttpd'
================================================
FILE: portainer/README.md
================================================
# portainer
https://www.portainer.io
With over half a million regular users, Portainer is a powerful, open-source
toolset that lets you easily build and manage containers in Docker, Swarm,
Kubernetes, and Azure ACI. It works by hiding the complexity of container
management behind an easy-to-use GUI.
================================================
FILE: portainer/docker-compose.portainer.yml
================================================
services:
portainer:
image: 'portainer/portainer'
labels:
traefik.enable: true
traefik.http.routers.portainer.middlewares: 'basic_auth@docker'
traefik.http.routers.portainer.rule: 'Host(`portainer.${SITE:-localhost}`)'
traefik.http.services.portainer.loadbalancer.server.port: 9000
networks:
- 'srv'
restart: 'always'
volumes:
- '/var/run/docker.sock:/var/run/docker.sock'
- './data:/data'
================================================
FILE: remotely/README.md
================================================
# Remotely
Remotely is a free, open-source, self-hosted solution for remote control and remote scripting via a web interface. Linux and Windows are currently supported as client devices.
The client interface looks similar to TeamViewer: a nine-digit session ID is displayed, which a registered user can use to connect to the device via the website and control it remotely. The client executable can be downloaded from the hosted instance.
There is also an even more advanced background agent that provides unattended access and remote scripting.
================================================
FILE: remotely/docker-compose.remotely.yml
================================================
services:
remotely:
image: 'translucency/remotely:${REMOTELY_IMAGE_VERSION:-latest}'
build: 'https://github.com/immense/remotely.git#:Server'
labels:
traefik.enable: true
traefik.http.routers.remotely.rule: 'Host(`remotely.${SITE:-localhost}`)'
traefik.http.services.remotely.loadbalancer.server.port: 5000
networks:
- 'srv'
restart: 'always'
volumes:
- './remotely-data:/remotely-data'
================================================
FILE: rocketchat/.gitignore
================================================
db/
================================================
FILE: rocketchat/README.md
================================================
# Rocket Chat
https://rocket.chat/
Rocket.Chat is the ultimate Free Open Source Solution for team communications.
================================================
FILE: rocketchat/docker-compose.rocket-chat.yml
================================================
networks:
rocketchat-internal: {}
services:
rocketchat:
image: 'rocket.chat:latest'
depends_on:
- 'rocketchat-mongo'
- 'rocketchat-mongo-replica' # replica is mandatory
environment:
MONGO_OPLOG_URL: 'mongodb://rocketchat-mongo:27017/local'
MONGO_URL: 'mongodb://rocketchat-mongo:27017/rocketchat'
ROOT_URL: 'https://rocketchat.${SITE:-localhost}'
labels:
traefik.enable: true
traefik.http.routers.rocketchat.rule: 'Host(`rocketchat.${SITE:-localhost}`)'
traefik.http.services.rocketchat.loadbalancer.server.port: 3000
networks:
- 'rocketchat-internal'
- 'srv'
restart: 'unless-stopped'
volumes:
- './uploads:/app/uploads'
# hubot, the popular chatbot (add the bot user first and change the password
# before starting this image)
rocketchat-hubot:
image: 'rocketchat/hubot-rocketchat:latest'
depends_on:
- 'rocketchat'
environment:
BOT_NAME: 'bot'
# you can add more scripts as you'd like here, they need to be
# installable by npm
# EXTERNAL_SCRIPTS: 'hubot-help,hubot-seen,hubot-links,hubot-diagnostics'
ROCKETCHAT_PASSWORD: 'botpassword'
ROCKETCHAT_ROOM: 'GENERAL'
ROCKETCHAT_URL: 'rocketchat:3000'
ROCKETCHAT_USER: 'bot'
labels:
traefik.enable: false
# this is used to expose the hubot port for notifications on the host on
# port 3001, e.g. for hubot-jenkins-notifier
# ports:
# - '3001:8080'
restart: 'unless-stopped'
volumes:
- './scripts:/home/hubot/scripts'
rocketchat-mongo:
image: 'mongo:4.0'
command: 'mongod --smallfiles --oplogSize 128 --replSet rs01'
healthcheck:
      test: ['CMD-SHELL',
        'echo "db.runCommand(\"ping\").ok" | mongo localhost:27017/test --quiet']
labels:
traefik.enable: false
networks:
- 'rocketchat-internal'
restart: 'unless-stopped'
volumes:
- './db/:/data/db'
rocketchat-mongo-replica:
image: 'mongo:4.0'
command: |
      mongo rocketchat-mongo/rocketchat --eval
      "rs.initiate({ _id: 'rs01',
      members: [ { _id: 0, host: 'rocketchat-mongo:27017' } ]})"
depends_on:
- 'rocketchat-mongo'
labels:
traefik.enable: false
networks:
- 'rocketchat-internal'
================================================
FILE: searxng/.gitignore
================================================
searx/
searx-checker/
================================================
FILE: searxng/README.md
================================================
# searxng
https://searxng.org/
SearXNG is a privacy-respecting, hackable metasearch engine.
================================================
FILE: searxng/docker-compose.searxng.yml
================================================
services:
searxng:
image: 'searxng/searxng:${SEARXNG_IMAGE_VERSION:-latest}'
depends_on:
- 'searxng-redis'
environment:
IMAGE_PROXY: 'true'
LIMITER: 'true'
REDIS_URL: 'redis://searxng-redis:6379/0'
SEARXNG_BASE_URL: 'https://searx.${SITE:-localhost}/'
healthcheck:
test: ['CMD',
'wget', '-q', '--spider', '--proxy=off', 'localhost:8080/healthz']
labels:
traefik.enable: true
traefik.http.routers.searxng.rule: 'Host(`searx.${SITE:-localhost}`)'
traefik.http.services.searxng.loadbalancer.server.port: 8080
networks:
- 'srv'
restart: 'always'
volumes:
- './searxng/:/etc/searxng:rw'
searxng-redis:
image: 'redis:6.0-alpine'
command: 'redis-server --save "" --appendonly "no"'
healthcheck:
test: ['CMD', 'redis-cli', 'ping']
labels:
traefik.enable: false
restart: 'always'
tmpfs:
- '/var/lib/redis'
================================================
FILE: sharelatex/.gitignore
================================================
data/
mongo/
redis/
================================================
FILE: sharelatex/README.md
================================================
# sharelatex
https://github.com/sharelatex/sharelatex
An online LaTeX editor that's easy to use. No installation, real-time
collaboration, version control, hundreds of LaTeX templates, and more.
## Getting Started
After starting the container for the first time, visit `/launchpad` to get
started and create the root account.
================================================
FILE: sharelatex/docker-compose.sharelatex.yml
================================================
networks:
sharelatex-internal: {}
services:
sharelatex:
image: 'sharelatex/sharelatex:${SHARELATEX_IMAGE_VERSION:-3.5}'
depends_on:
- 'sharelatex-mongo'
- 'sharelatex-redis'
environment:
REDIS_HOST: 'sharelatex-redis'
SHARELATEX_ADMIN_EMAIL: '${ROOT_EMAIL:-changeme@changeme.org}'
SHARELATEX_APP_NAME: '${USERNAME} ShareLaTeX'
# SHARELATEX_HEADER_IMAGE_URL: 'http://somewhere.com/mylogo.png'
SHARELATEX_MONGO_URL: 'mongodb://sharelatex-mongo/sharelatex'
SHARELATEX_NAV_TITLE: '${SITE:-localhost} - ShareLaTeX'
SHARELATEX_REDIS_HOST: 'sharelatex-redis'
SHARELATEX_SITE_URL: 'https://latex.${SITE:-localhost}'
healthcheck:
test: ['CMD', 'curl', '0.0.0.0:80']
labels:
traefik.enable: true
traefik.http.routers.sharelatex.rule: 'Host(`sharelatex.${SITE:-localhost}`)'
traefik.http.services.sharelatex.loadbalancer.server.port: 80
networks:
- 'sharelatex-internal'
- 'srv'
restart: 'always'
volumes:
- './data:/var/lib/sharelatex'
sharelatex-mongo:
image: 'mongo:4.0'
healthcheck:
      test: ['CMD-SHELL',
        'echo "db.runCommand(\"ping\").ok" | mongo localhost:27017/test --quiet']
labels:
traefik.enable: false
networks:
- 'sharelatex-internal'
restart: 'always'
volumes:
- './mongo:/data/db'
sharelatex-redis:
image: 'redis:6.0-alpine'
healthcheck:
test: ['CMD', 'redis-cli', 'ping']
labels:
traefik.enable: false
networks:
- 'sharelatex-internal'
restart: 'always'
volumes:
- './redis:/data'
================================================
FILE: sonarr/.gitignore
================================================
config/
downloads/
tv/
================================================
FILE: sonarr/README.md
================================================
# sonarr
https://sonarr.tv/
Sonarr is a PVR for Usenet and BitTorrent users. It can monitor multiple RSS
feeds for new episodes of your favorite shows and will grab, sort and rename
them. It can also be configured to automatically upgrade the quality of files
already downloaded when a better quality format becomes available.
================================================
FILE: sonarr/docker-compose.sonarr.yml
================================================
services:
sonarr:
image: 'linuxserver/sonarr:${SONARR_IMAGE_VERSION:-4.0.0}'
environment:
PGID: '${SONARR_GPID:-1000}'
PUID: '${SONARR_PUID:-1000}'
TZ: '${TZ:-Europe/Paris}'
labels:
traefik.enable: true
traefik.http.routers.sonarr.middlewares: 'basic_auth@docker'
traefik.http.routers.sonarr.rule: 'Host(`sonarr.${SITE:-localhost}`)'
traefik.http.services.sonarr.loadbalancer.server.port: 8080
links:
- 'jackett'
- 'transmission'
networks:
- 'srv'
restart: 'always'
volumes:
- './config:/config'
- './downloads:/downloads'
- './tv:/tv'
================================================
FILE: streama/README.md
================================================
# streama
https://docs.streama-project.com
Streama is a self-hosted streaming media server: host your own streaming
application with your media library. Uploading media is an easy drag-and-drop,
and live sync watching lets you watch with your loved ones remotely, with
synchronized play/pause and scrubbing, in a polished video player.
================================================
FILE: streama/docker-compose.streama.yml
================================================
services:
streama:
image: 'gkiko/streama:v1.8.3'
healthcheck:
test: ['CMD', 'curl', '0.0.0.0:8080/login/auth']
labels:
traefik.enable: true
traefik.http.routers.streama.rule: 'Host(`streama.${SITE:-localhost}`)'
traefik.http.services.streama.loadbalancer.server.port: 8080
networks:
- 'srv'
restart: 'always'
volumes:
- './streama.mv.db:/app/streama/streama.mv.db'
- './streama.trace.db:/app/streama/streama.trace.db'
- '../transmission/downloads:/data'
================================================
FILE: test.sh
================================================
#!/bin/bash
# export PS4='$(read time junk < /proc/$$/schedstat; echo "@@@ $time @@@ " )'
# set -x
errors=0
log_file=log.log
GREEN="\e[32m"
RED="\e[31m"
WHITE="\e[0m"
test ()
{
tmp=$({ "$@" 2>&1; echo $? > /tmp/PIPESTATUS; } | tee "$log_file")
rt=$(cat /tmp/PIPESTATUS)
if [[ $rt -ne 0 ]]; then
echo -e "[${RED}X${WHITE}] " "$@" ": " "$rt"
echo "$tmp"
((errors += 1))
return
fi
echo -e "[${GREEN}V${WHITE}] " "$@"
}
test docker-compose config -q
# testing docker-compose.yml files
file=$(mktemp)
docker-compose config > "$file" 2>$log_file
test diff test_config.yml "$file"
mv "$file" test_config.yml
# testing environment variables.
grep '${' ./**/docker-compose.*.yml \
| sed "s/.*\${\(.*\)}.*/\1/g" \
| cut -d":" -f 1 \
| sort -u \
| xargs -I % echo "%=" \
| sort \
>> .env.generated
test diff .env.default .env.generated
mv .env.generated .env.default
git diff | tee patch.patch
[ $errors -gt 0 ] && echo "There were $errors errors found" && exit 1
exit 0
# vim: set expandtab:
================================================
FILE: test_config.yml
================================================
name: make-my-server
services:
alertmanager:
healthcheck:
test:
- CMD
- wget
- -q
- --spider
- --proxy=off
- localhost:9092/metrics
image: prom/alertmanager:v0.21.0
labels:
traefik.enable: "true"
traefik.http.routers.alertmanager.middlewares: basic_auth@docker
traefik.http.routers.alertmanager.rule: Host(`alertmanager.localhost`)
traefik.http.services.alertmanager.loadbalancer.server.port: "9093"
networks:
srv: null
restart: always
arachni:
image: arachni/arachni
labels:
traefik.enable: "true"
traefik.http.routers.arachni.middlewares: basic_auth@docker
traefik.http.routers.arachni.rule: Host(`arachni.localhost`)
traefik.http.services.arachni.loadbalancer.server.port: "9292"
networks:
srv: null
restart: always
bazarr:
depends_on:
jackett:
condition: service_started
restart: true
required: true
sonarr:
condition: service_started
restart: true
required: true
transmission:
condition: service_started
restart: true
required: true
environment:
PGID: "1000"
PUID: "1000"
TZ: Europe/Paris
image: linuxserver/bazarr:v1.2.2
labels:
traefik.enable: "true"
traefik.http.routers.bazarr.middlewares: basic_auth@docker
traefik.http.routers.bazarr.rule: Host(`bazarr.localhost`)
traefik.http.services.bazarr.loadbalancer.server.port: "8080"
links:
- transmission
- jackett
- sonarr
networks:
srv: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/bazarr/config
target: /config
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/bazarr/movies
target: /movies
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/bazarr/tv
target: /tv
bind:
create_host_path: true
bitwarden:
environment:
ADMIN_TOKEN: ""
DOMAIN: https://bitwarden.localhost
PASSWORD_ITERATIONS: "500000"
ROCKET_PORT: "8080"
SENDS_ALLOWED: "true"
SIGNUPS_ALLOWED: "true"
SIGNUPS_VERIFY: "false"
TZ: Europe/Paris
image: vaultwarden/server:latest
labels:
traefik.enable: "true"
traefik.http.routers.bitwarden-admin.middlewares: basic_auth@docker
traefik.http.routers.bitwarden-admin.rule: |
'Host(`bitwarden.localhost`) && PathPrefix(`/admin`)'
traefik.http.routers.bitwarden-user.rule: |
'Host(`bitwarden.localhost`) && !PathPrefix(`/admin`)'
traefik.http.services.bitwarden.loadbalancer.server.port: "8080"
networks:
srv: null
restart: always
user: nobody
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/bitwarden/data
target: /data
bind:
create_host_path: true
cadvisor:
devices:
- /dev/kmsg:/dev/kmsg
image: gcr.io/cadvisor/cadvisor:latest
labels:
traefik.enable: "true"
traefik.http.routers.cadvisor.middlewares: basic_auth@docker
traefik.http.routers.cadvisor.rule: Host(`cadvisor.localhost`)
traefik.http.services.cadvisor.loadbalancer.server.port: "8080"
networks:
srv: null
privileged: true
restart: always
volumes:
- type: bind
source: /
target: /rootfs
read_only: true
bind:
create_host_path: true
- type: bind
source: /sys
target: /sys
read_only: true
bind:
create_host_path: true
- type: bind
source: /var/lib/docker/
target: /var/lib/docker
read_only: true
bind:
create_host_path: true
- type: bind
source: /var/run
target: /var/run
read_only: true
bind:
create_host_path: true
ciao:
environment:
PROMETHEUS_ENABLED: "false"
TIME_ZONE: Europe/Paris
image: brotandgames/ciao:latest
labels:
traefik.enable: "true"
traefik.http.routers.ciao.middlewares: basic_auth@docker
traefik.http.routers.ciao.rule: Host(`ciao.localhost`)
traefik.http.services.ciao.loadbalancer.server.port: "3000"
networks:
srv: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/ciao/db
target: /app/db/sqlite
bind:
create_host_path: true
codimd:
depends_on:
codimd-db:
condition: service_started
required: true
environment:
CMD_DB_URL: postgres://codimd:mypwd@codimd-db/codimd
CMD_USECDN: "false"
healthcheck:
test:
- CMD
- wget
- 0.0.0.0:3000
image: hackmdio/hackmd:2.4.2-cjk
labels:
traefik.enable: "true"
traefik.http.routers.codimd.rule: Host(`codimd.localhost`)
traefik.http.services.codimd.loadbalancer.server.port: "3000"
links:
- codimd-db
networks:
codi-internal: null
srv: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/codimd/data
target: /home/hackmd/app/public/uploads
bind:
create_host_path: true
codimd-db:
environment:
POSTGRES_DB: codimd
POSTGRES_PASSWORD: mypwd
POSTGRES_USER: codimd
healthcheck:
test:
- CMD
- pg_isready
- -U
- codimd
image: postgres:11.6-alpine
labels:
traefik.enable: "false"
networks:
codi-internal: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/codimd/db
target: /var/lib/postgresql/data
bind:
create_host_path: true
elasticsearch:
environment:
ES_JAVA_OPTS: -Xms512m -Xmx512m
bootstrap.memory_lock: "true"
cluster.name: docker-cluster
discovery.type: single-node
image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
labels:
traefik.enable: "false"
networks:
default: null
restart: always
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/elk/elasticsearch/data
target: /usr/share/elasticsearch/data
bind:
create_host_path: true
factorio:
image: factoriotools/factorio
labels:
traefik.enable: "false"
networks:
default: null
ports:
- mode: ingress
target: 34197
published: "34197"
protocol: udp
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/factorio
target: /factorio
bind:
create_host_path: true
framadate:
depends_on:
framadate-db:
condition: service_started
required: true
environment:
ADMIN_PASSWORD: pass
APP_NAME: Framadate
APP_URL: framadate.localhost
DEFAULT_POLL_DURATION: "365"
MARKDOWN_EDITOR_BY_DEFAULT: "true"
MYSQL_DATABASE: framadate
MYSQL_PASSWORD: framadate
MYSQL_ROOT_PASSWORD: pass
MYSQL_USER: framadate
PROVIDE_FORK_AWESOME: "true"
SERVERNAME: framadate.localhost
SHOW_CULTIVATE_YOUR_GARDEN: "true"
SHOW_THE_SOFTWARE: "true"
SHOW_WHAT_IS_THAT: "true"
USER_CAN_ADD_IMG_OR_LINK: "true"
image: xgaia/framadate:latest
labels:
traefik.enable: "true"
traefik.http.routers.framadate.rule: Host(`framadate.localhost`)
traefik.http.services.framadate.loadbalancer.server.port: "80"
networks:
framadate-internal: null
srv: null
restart: always
framadate-db:
environment:
MYSQL_DATABASE: framadate
MYSQL_PASSWORD: framadate
MYSQL_ROOT_PASSWORD: pass
MYSQL_USER: framadate
healthcheck:
test:
- CMD
- mysqlcheck
- --all-databases
- -ppass
image: mysql:5.7
labels:
traefik.enable: "false"
networks:
framadate-internal: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/framadate/db
target: /var/lib/mysql
bind:
create_host_path: true
gitlab:
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://gitlab.localhost:80'
gitlab_rails['gitlab_shell_ssh_port'] = 2224
hostname: gitlab.localhost
image: gitlab/gitlab-ce:latest
labels:
traefik.enable: "true"
traefik.http.routers.gitlab.rule: Host(`gitlab.localhost`)
traefik.http.services.gitlab.loadbalancer.server.port: "80"
networks:
srv: null
ports:
- mode: ingress
target: 22
published: "2224"
protocol: tcp
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/gitlab/config
target: /etc/gitlab
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/gitlab/data
target: /var/opt/gitlab
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/gitlab/logs
target: /var/log/gitlab
bind:
create_host_path: true
grafana:
depends_on:
prometheus:
condition: service_started
required: true
environment:
GF_ANALYTICS_REPORTING_ENABLED: "false"
GF_AUTH_ANONYMOUS_ENABLED: "true"
GF_AUTH_ANONYMOUS_ORG_ROLE: Admin
GF_AUTH_BASIC_ENABLED: "false"
GF_AUTH_DISABLE_LOGIN_FORM: "true"
GF_AUTH_DISABLE_SIGNOUT_MENU: "true"
GF_INSTALL_PLUGINS: grafana-piechart-panel
GF_METRICS_ENABLED: "true"
GF_USERS_ALLOW_SIGN_UP: "false"
healthcheck:
test:
- CMD
- curl
- 0.0.0.0:3000/healthz
image: grafana/grafana-oss:7.2.2
labels:
traefik.enable: "true"
traefik.http.routers.grafana.middlewares: basic_auth@docker
traefik.http.routers.grafana.rule: Host(`grafana.localhost`)
traefik.http.services.grafana.loadbalancer.server.port: "3000"
networks:
srv: null
restart: always
user: 1000:1000
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/grafana/grafana
target: /var/lib/grafana
bind:
create_host_path: true
hits:
depends_on:
hits-postgresql:
condition: service_started
required: true
healthcheck:
test:
- CMD
- curl
- 0.0.0.0:4000
image: tommoulard/hits
labels:
traefik.enable: "true"
traefik.http.routers.hits.rule: Host(`hits.localhost`)
traefik.http.services.hits.loadbalancer.server.port: "4000"
networks:
hits-internal: null
srv: null
restart: always
hits-postgresql:
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
healthcheck:
test:
- CMD
- pg_isready
- -U
- postgres
image: postgres
networks:
hits-internal:
aliases:
- postgresql
restart: always
user: 1000:1000
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/hits/postgresql
target: /var/lib/postgresql/data
bind:
create_host_path: true
homeassistant:
environment:
GUID: "1000"
PUID: "1000"
healthcheck:
test:
- CMD
- curl
- 0.0.0.0:8123
image: ghcr.io/home-assistant/home-assistant:stable
labels:
traefik.enable: "true"
traefik.http.routers.homeassistant.rule: |
Host(`homeassistant.localhost`) && !Path(`/api/prometheus`)
traefik.http.services.homeassistant.loadbalancer.server.port: "8123"
networks:
srv: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/homeassistant/config
target: /config
bind:
create_host_path: true
- type: bind
source: /etc/localtime
target: /etc/localtime
read_only: true
bind:
create_host_path: true
hugo:
depends_on:
hugo-builder:
condition: service_started
required: true
healthcheck:
test:
- CMD
- curl
- 0.0.0.0:80
image: nginx:stable-alpine
labels:
traefik.enable: "true"
traefik.http.routers.hugo.rule: Host(`hugo.localhost`)
traefik.http.services.hugo.loadbalancer.server.port: "80"
networks:
srv: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/hugo/nginx/conf
target: /etc/nginx/conf.d
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/hugo/nginx/logs
target: /var/log/nginx
bind:
create_host_path: true
hugo-builder:
environment:
HUGO_BASEURL: https://hugo.localhost/
HUGO_REFRESH_TIME: "3600"
HUGO_THEME: hugo-theme-cactus-plus
image: jojomi/hugo:0.59
labels:
traefik.enable: "false"
networks:
default: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/hugo/blog
target: /src
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/hugo/nginx/conf/www
target: /output
bind:
create_host_path: true
jackett:
dns:
- 1.1.1.1
environment:
PGID: "1000"
PUID: "1000"
TZ: Europe/Paris
healthcheck:
test:
- CMD
- curl
- 0.0.0.0:9117/UI/Login?ReturnUrl=%2FUI%2FDashboard
image: linuxserver/jackett:v0.20.567-ls56
labels:
traefik.enable: "true"
traefik.http.routers.jackett.middlewares: basic_auth@docker
traefik.http.routers.jackett.rule: Host(`jackett.localhost`)
traefik.http.services.jackett.loadbalancer.server.port: "9117"
networks:
srv: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/jackett/config
target: /config
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/jackett/downloads
target: /downloads
bind:
create_host_path: true
jellyfin:
image: jellyfin/jellyfin
labels:
traefik.enable: "true"
traefik.http.routers.jellyfin.rule: Host(`jellyfin.localhost`)
traefik.http.services.jellyfin.loadbalancer.server.port: "8096"
networks:
srv: null
restart: always
user: 1000:1000
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/jellyfin/cache
target: /cache
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/jellyfin/config
target: /config
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/jellyfin/logs
target: /logs
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/jellyfin/media
target: /media
bind:
create_host_path: true
jupyter:
command:
- jupyter
- notebook
- --NotebookApp.token=
- --NotebookApp.password=
environment:
JUPYTER_ENABLE_LAB: "yes"
image: jupyter/tensorflow-notebook:45f07a14b422
labels:
traefik.enable: "true"
traefik.http.routers.jupyter.middlewares: basic_auth@docker
traefik.http.routers.jupyter.rule: Host(`jupyter.localhost`)
traefik.http.services.jupyter.loadbalancer.server.port: "8888"
networks:
srv: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/jupyter/jupyter/config
target: /root/.jupyter
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/jupyter/notebooks
target: /home/jovyan
bind:
create_host_path: true
kavita:
environment:
PGID: "1000"
PUID: "1000"
TZ: Europe/Paris
healthcheck:
test:
- CMD
- curl
- 0.0.0.0:5000
image: lscr.io/linuxserver/kavita:latest
labels:
traefik.enable: "true"
traefik.http.routers.kavita.rule: Host(`kavita.localhost`)
traefik.http.services.kavita.loadbalancer.server.port: "5000"
networks:
srv: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/kavita/data
target: /data
bind:
create_host_path: true
- type: bind
source: /home/runner/work/make-my-server/make-my-server/kavita/config
target: /config
bind:
create_host_path: true
kibana:
depends_on:
elasticsearch:
condition: service_started
restart: true
required: true
image: docker.elastic.co/kibana/kibana:7.1.0
labels:
traefik.enable: "true"
traefik.http.routers.kibana.middlewares: basic_auth@docker
traefik.http.routers.kibana.rule: Host(`kibana.localhost`)
traefik.http.services.kibana.loadbalancer.server.port: "5601"
links:
- elasticsearch
networks:
srv: null
restart: always
volumes:
- type: bind
source: /home/runner/work/make-my-server/make-my-server/elk/kibana/kibana.yml
target: /usr/share/kibana/config/kibana.yml
bind:
create_host_path: true
logstash:
depends_on:
elasticsearch:
condition: service_started
restart: true
required: true
image: docker.elastic.co/logstash/logstash:7.1.0
labels:
traefik.enable: "false"
links:
- elasticsearch
networks:
default: null
restart: always
Condensed preview — 130 files, each showing path, character count, and a content snippet (246K chars of structured content in full).
[
{
"path": ".github/FUNDING.yml",
"chars": 68,
"preview": "# These are supported funding model platforms\n\ngithub: 'tommoulard'\n"
},
{
"path": ".github/dependabot.yml",
"chars": 486,
"preview": "# To get started with Dependabot version updates, you'll need to specify which\n# package ecosystems to update and where "
},
{
"path": ".github/workflows/dockerpublish.yml",
"chars": 3645,
"preview": "name: 'Tests'\n\non: # yamllint disable-line rule:truthy\n push: {}\n\njobs:\n Config-test:\n runs-on: 'ubuntu-latest'\n "
},
{
"path": ".github/workflows/healthcheck.workflow.tmpl.yml",
"chars": 1852,
"preview": "on: # yamllint disable-line rule:truthy\n workflow_call:\n inputs:\n sus:\n description: 'a StartUp Script t"
},
{
"path": ".gitignore",
"chars": 98,
"preview": "*.log\nblog/blog*\nblog/nginx/conf/www\ngitlab/logs\nportainer/data\n.env\n.env.generated\n*.patch\n*.swp\n"
},
{
"path": ".yamllint",
"chars": 1081,
"preview": "# yaml-language-server: $schema=https://json.schemastore.org/yamllint.json\n# ref: https://yamllint.readthedocs.io/en/sta"
},
{
"path": "README.md",
"chars": 4108,
"preview": "# Server configuration\n[](https://discord.gg/zQV6m9Jk6Z)\n\nY"
},
{
"path": "arachni/README.md",
"chars": 546,
"preview": "# arachni\n\nhttps://www.arachni-scanner.com/\n\nArachni is a feature-full, modular, high-performance Ruby framework aimed\nt"
},
{
"path": "arachni/docker-compose.arachni.yml",
"chars": 352,
"preview": "services:\n arachni:\n image: 'arachni/arachni'\n labels:\n traefik.enable: true\n traefik.http.routers.arac"
},
{
"path": "bazarr/.gitignore",
"chars": 19,
"preview": "config/\nmovies/\ntv/"
},
{
"path": "bazarr/README.md",
"chars": 157,
"preview": "# bazarr\n\nhttps://www.bazarr.media/\n\nBazarr is a companion application to Sonarr and Radarr that manages and\ndownloads s"
},
{
"path": "bazarr/docker-compose.bazarr.yml",
"chars": 656,
"preview": "services:\n bazarr:\n image: 'linuxserver/bazarr:${BAZARR_IMAGE_VERSION:-v1.2.2}'\n environment:\n PGID: '${BAZA"
},
{
"path": "bitwarden/.gitignore",
"chars": 6,
"preview": "data/\n"
},
{
"path": "bitwarden/README.md",
"chars": 701,
"preview": "# bitwarden\n\nhttps://bitwarden.com\n\nBitwarden is an outstanding password manager that includes all the bells and\nwhistle"
},
{
"path": "bitwarden/docker-compose.bitwarden.yml",
"chars": 1092,
"preview": "services:\n bitwarden:\n image: 'vaultwarden/server:${BITWARDEN_IMAGE_VERSION:-latest}'\n environment:\n ADMIN_T"
},
{
"path": "ciao/.gitignore",
"chars": 3,
"preview": "db\n"
},
{
"path": "ciao/README.md",
"chars": 302,
"preview": "# ciao\n\nhttps://github.com/brotandgames/ciao\n\nciao checks HTTP(S) URL endpoints for a HTTP status code (or errors on the"
},
{
"path": "ciao/docker-compose.ciao.yml",
"chars": 531,
"preview": "services:\n ciao:\n image: 'brotandgames/ciao:${CIAO_IMAGE_VERSION:-latest}'\n environment:\n PROMETHEUS_ENABLED"
},
{
"path": "codimd/.gitignore",
"chars": 10,
"preview": "data/\ndb/\n"
},
{
"path": "codimd/README.md",
"chars": 841,
"preview": "# codimd\n\nhttps://github.com/hackmdio/codimd\n\nA hackmd self hosted.\n\nThe best platform to write and share markdown. Sign"
},
{
"path": "codimd/docker-compose.codimd.yml",
"chars": 1117,
"preview": "networks:\n codi-internal: {}\n\nservices:\n codimd:\n image: 'hackmdio/hackmd:${CODIMD_IMAGE_VERSION:-2.4.2-cjk}'\n d"
},
{
"path": "docker-compose.networks.yml",
"chars": 20,
"preview": "networks:\n srv: {}\n"
},
{
"path": "docker-compose.yml",
"chars": 2011,
"preview": "include:\n - path: 'docker-compose.networks.yml'\n - path: 'arachni/docker-compose.arachni.yml'\n - path: 'bazarr/docker"
},
{
"path": "elk/README.md",
"chars": 321,
"preview": "# elk\n\nhttps://www.elastic.co/fr/what-is/elk-stack\n\nThe Elastic suite: Elastic search, Logstash Kibana.\n\nElastic search "
},
{
"path": "elk/docker-compose.elk.yml",
"chars": 1448,
"preview": "services:\n elasticsearch:\n image: 'docker.elastic.co/elasticsearch/elasticsearch:${ELASTICSEARCH_IMAGE_VERSION:-7.1."
},
{
"path": "elk/elasticsearch/.gitignore",
"chars": 6,
"preview": "data/\n"
},
{
"path": "elk/elasticsearch/.gitkeep",
"chars": 0,
"preview": ""
},
{
"path": "elk/logstash/logstash.conf",
"chars": 3250,
"preview": "# logstash.con\n# Where you see:\n# # start_position => \"beginning\"\n# You can un comment this line if the elasticse"
},
{
"path": "factorio/.gitignore",
"chars": 63,
"preview": "saves/\n*.log\nplayer-data.json\nsaves/\nscenarios/\nscript-output/\n"
},
{
"path": "factorio/README.md",
"chars": 225,
"preview": "# factorio\n\nhttps://www.factorio.com\n\nFactorio is a game in which you build and maintain factories. You will be\nmining r"
},
{
"path": "factorio/config/.gitignore",
"chars": 7,
"preview": "rconpw\n"
},
{
"path": "factorio/config/map-gen-settings.json",
"chars": 2231,
"preview": "{\n \"_terrain_segmentation_comment\": \"Inverse of map scale\",\n \"terrain_segmentation\": 1,\n\n \"_water_comment\":\n [\n \""
},
{
"path": "factorio/config/map-settings.json",
"chars": 3856,
"preview": "{\n \"difficulty_settings\":\n {\n \"recipe_difficulty\": 0,\n \"technology_difficulty\": 0,\n \"technology_price_multipl"
},
{
"path": "factorio/config/server-settings.json",
"chars": 3383,
"preview": "{\n \"name\": \"Name of the game as it will appear in the game listing\",\n \"description\": \"Description of the game that wil"
},
{
"path": "factorio/docker-compose.factorio.yml",
"chars": 232,
"preview": "services:\n factorio:\n image: 'factoriotools/factorio'\n labels:\n traefik.enable: false\n ports:\n - '34"
},
{
"path": "factorio/mods/mod-list.json",
"chars": 79,
"preview": "\n{\n \"mods\":\n [\n {\n \"name\": \"base\",\n \"enabled\": true\n }\n ]\n}\n"
},
{
"path": "framadate/.gitignore",
"chars": 3,
"preview": "db\n"
},
{
"path": "framadate/README.md",
"chars": 250,
"preview": "# framadate\n\nhttps://framagit.org/framasoft/framadate/framadate/\n\n[Framadate](https://framadate.org) is an online servic"
},
{
"path": "framadate/docker-compose.framadate.yml",
"chars": 1675,
"preview": "networks:\n framadate-internal: {}\n\nservices:\n framadate:\n image: 'xgaia/framadate:${FRAMADATE_IMAGE_VERSION:-latest"
},
{
"path": "gitlab/.gitignore",
"chars": 59,
"preview": "config/gitlab-secrets.json\nconfig/*.pub\nconfig/*_key\ndata/\n"
},
{
"path": "gitlab/README.md",
"chars": 1518,
"preview": "# Gitlab\nhttps://about.gitlab.com/\n\nGitLab is a web-based DevOps lifecycle tool that provides a Git-repository\nmanager p"
},
{
"path": "gitlab/config/gitlab.rb",
"chars": 83793,
"preview": "## GitLab configuration settings\n##! This file is generated during initial installation and **is not** modified\n##! duri"
},
{
"path": "gitlab/docker-compose.gitlab.yml",
"chars": 929,
"preview": "services:\n gitlab:\n image: 'gitlab/gitlab-ce:${GITLAB_IMAGE_VERSION:-latest}'\n environment:\n GITLAB_OMNIBUS_"
},
{
"path": "gitlab/runner/config.toml",
"chars": 578,
"preview": "concurrent = 10\ncheck_interval = 1\nlog_level = \"info\"\nlog_format = \"json\"\n\n[session_server]\n session_timeout = 1800\n\n[["
},
{
"path": "hits/.gitignore",
"chars": 18,
"preview": "postgresql\n.*.swp\n"
},
{
"path": "hits/README.md",
"chars": 259,
"preview": "# Hits\n\nIs a hit counter using images: one image is served, the count is incremented.\n\nSee [source](https://github.com/d"
},
{
"path": "hits/docker-compose.hits.yml",
"chars": 809,
"preview": "networks:\n hits-internal: {}\n\nservices:\n hits:\n image: 'tommoulard/hits'\n depends_on:\n - 'hits-postgresql'\n"
},
{
"path": "homeassistant/.gitignore",
"chars": 7,
"preview": "config\n"
},
{
"path": "homeassistant/README.md",
"chars": 1273,
"preview": "# Home Assistant\n\nhttps://www.home-assistant.io\n\nOpen source home automation that puts local control and privacy first. "
},
{
"path": "homeassistant/docker-compose.homeassistant.yml",
"chars": 807,
"preview": "services:\n homeassistant:\n image: 'ghcr.io/home-assistant/home-assistant:${HOME_ASSISTANT_IMAGE_VERSION:-stable}'\n "
},
{
"path": "hugo/.gitignore",
"chars": 6,
"preview": "blog/\n"
},
{
"path": "hugo/README.md",
"chars": 229,
"preview": "# blog\n\nhttps://gohugo.io/\n\nThe world’s fastest framework for building websites\n\nHugo is one of the most popular open-so"
},
{
"path": "hugo/docker-compose.hugo.yml",
"chars": 789,
"preview": "services:\n hugo:\n image: 'nginx:stable-alpine'\n depends_on:\n - 'hugo-builder'\n healthcheck:\n test: ["
},
{
"path": "hugo/nginx/conf/nginx.conf",
"chars": 656,
"preview": "error_log /var/log/nginx/error.log;\nlog_format main_log_format '$remote_addr - $remote_user [$time_local] '\n "
},
{
"path": "hugo/nginx/logs/.gitkeep",
"chars": 0,
"preview": ""
},
{
"path": "jackett/.gitignore",
"chars": 19,
"preview": "config/\ndownloads/\n"
},
{
"path": "jackett/README.md",
"chars": 369,
"preview": "# jackett\n\nhttps://github.com/Jackett/Jackett\n\nJackett is a single repository of maintained indexer scraping & translati"
},
{
"path": "jackett/docker-compose.jackett.yml",
"chars": 716,
"preview": "services:\n jackett:\n image: 'linuxserver/jackett:${JACKETT_IMAGE_VERSION:-v0.20.567-ls56}'\n dns:\n - '1.1.1.1"
},
{
"path": "jellyfin/.gitignore",
"chars": 16,
"preview": "cache/\nconfig/\n\n"
},
{
"path": "jellyfin/README.md",
"chars": 533,
"preview": "# jellyfin\n\nhttps://jellyfin.org\n\nJellyfin is a suite of multimedia applications designed to organize, manage,\nand share"
},
{
"path": "jellyfin/docker-compose.jellyfin.yml",
"chars": 425,
"preview": "services:\n jellyfin:\n image: 'jellyfin/jellyfin'\n labels:\n traefik.enable: true\n traefik.http.routers.j"
},
{
"path": "jupyter/README.md",
"chars": 361,
"preview": "# jupyter\n\nhttps://jupyter.org\n\nThe Jupyter Notebook is an open-source web application that allows you to\ncreate and sha"
},
{
"path": "jupyter/docker-compose.jupyter.yml",
"chars": 669,
"preview": "services:\n jupyter:\n image: 'jupyter/tensorflow-notebook:45f07a14b422'\n command: |\n jupyter notebook\n --N"
},
{
"path": "kavita/.gitignore",
"chars": 7,
"preview": "config\n"
},
{
"path": "kavita/README.md",
"chars": 313,
"preview": "# kavita\n\nhttps://www.kavitareader.com/\nhttps://docs.linuxserver.io/images/docker-kavita/\n\nKavita is a fast, feature ric"
},
{
"path": "kavita/docker-compose.kavita.yml",
"chars": 566,
"preview": "services:\n kavita:\n image: 'lscr.io/linuxserver/kavita:${KAVITA_IMAGE_VERSION:-latest}'\n environment:\n PGID:"
},
{
"path": "minecraft/README.md",
"chars": 573,
"preview": "# minecraft\n\nhttps://github.com/itzg/docker-minecraft-server\n\nMinecraft is a sandbox video game developed by Mojang. The"
},
{
"path": "minecraft/docker-compose.minecraft-ftb.yml",
"chars": 210,
"preview": "services:\n minecraft-ftb:\n image: 'jonasbonno/ftb-revelation'\n labels:\n traefik.enable: false\n ports:\n "
},
{
"path": "minecraft/docker-compose.minecraft.yml",
"chars": 242,
"preview": "services:\n minecraft:\n image: 'itzg/minecraft-server'\n environment:\n EULA: 'true'\n restart: 'always'\n "
},
{
"path": "mumble/README.md",
"chars": 91,
"preview": "# Mumble\n\nMumble is a free, open source, low latency, high quality voice chat application.\n"
},
{
"path": "mumble/docker-compose.mumble.yml",
"chars": 367,
"preview": "services:\n mumble:\n image: 'mumblevoip/mumble-server:${MUMBLE_IMAGE_VERSION:-latest}'\n environment:\n MUMBLE_"
},
{
"path": "musicbot/README.md",
"chars": 109,
"preview": "# musicbot\n\nhttps://github.com/jagrosh/MusicBot\n\nA Discord music bot that's easy to set up and run yourself!\n"
},
{
"path": "musicbot/conf/config.txt",
"chars": 5514,
"preview": "/// START OF JMUSICBOT CONFIG ///\n/////////////////////////////////////////////////////////\n// Config for the JMusicBot "
},
{
"path": "musicbot/docker-compose.musicBot.yml",
"chars": 239,
"preview": "services:\n musicbot:\n image: 'raiponce/musicbot:0.2.10'\n labels:\n traefik.enable: false\n networks:\n "
},
{
"path": "nextcloud/.gitignore",
"chars": 10,
"preview": "data/\ndb/\n"
},
{
"path": "nextcloud/README.md",
"chars": 2461,
"preview": "# NextCloud\n\nhttp://nextcloud.com/\n\nNextCloud is a suite of client-server software for creating and using file\nhosting s"
},
{
"path": "nextcloud/docker-compose.nextcloud.yml",
"chars": 2630,
"preview": "networks:\n nextcloud-internal: {}\n\nservices:\n nextcloud:\n image: 'nextcloud:${NEXTCLOUD_IMAGE_VERSION:-latest}'\n "
},
{
"path": "nginx/README.md",
"chars": 461,
"preview": "# nginx\n\nhttps://www.nginx.com/\n\nNginx (pronounced \"engine X\", /ˌɛndʒɪnˈɛks/ EN-jin-EKS), stylized as NGINX\nor nginx or "
},
{
"path": "nginx/conf/nginx.conf",
"chars": 656,
"preview": "error_log /var/log/nginx/error.log;\nlog_format main_log_format '$remote_addr - $remote_user [$time_local] '\n "
},
{
"path": "nginx/conf/www/index.html",
"chars": 39,
"preview": "<h1>Simple web page</h1>\nHello, World!\n"
},
{
"path": "nginx/docker-compose.nginx.yml",
"chars": 436,
"preview": "services:\n nginx:\n image: 'nginx:${NGINX_IMAGE_VERSION:-stable-alpine}'\n healthcheck:\n test: ['CMD', 'curl',"
},
{
"path": "nginx/logs/.gitkeep",
"chars": 0,
"preview": ""
},
{
"path": "pastebin/README.md",
"chars": 79,
"preview": "# pastebin\n\nhttps://github.com/mko-x/docker-pastebin\n\nPaste your stuff however\n"
},
{
"path": "pastebin/docker-compose.pastebin.yml",
"chars": 302,
"preview": "services:\n pastebin:\n image: 'mkodockx/docker-pastebin:latest'\n labels:\n traefik.enable: true\n traefik."
},
{
"path": "peertube/.gitignore",
"chars": 78,
"preview": "config/custom-environment-variables.yaml\nconfig/default.yaml\ndata/\ndb/\nredis/\n"
},
{
"path": "peertube/README.md",
"chars": 197,
"preview": "# peertube\n\nhttps://peer.tube\n\nYoutube but selfhosted.\n\nPeerTube, a federated (ActivityPub) video streaming platform usi"
},
{
"path": "peertube/config/production.yaml",
"chars": 1437,
"preview": "listen:\n hostname: '0.0.0.0'\n port: 9000\n\n# Correspond to your reverse proxy \"listen\" configuration\nwebserver:\n https"
},
{
"path": "peertube/docker-compose.peertube.yml",
"chars": 1515,
"preview": "services:\n peertube:\n image: 'chocobozzz/peertube:production-buster'\n depends_on:\n - 'peertube-db'\n - '"
},
{
"path": "pihole/.gitignore",
"chars": 33,
"preview": "etc-pihole/\netc-dnsmasq.d/\nlogs/\n"
},
{
"path": "pihole/README.md",
"chars": 5133,
"preview": "# Pi-hole\n\n[Pi-hole](https://pi-hole.net/) is a network-wide DNS sinkhole that blocks ads\nand trackers for every device "
},
{
"path": "pihole/docker-compose.pihole.yml",
"chars": 1774,
"preview": "services:\n pihole:\n image: 'pihole/pihole:${PIHOLE_IMAGE_VERSION:-2025.11.1}'\n environment:\n DNS1: '${PIHOLE"
},
{
"path": "portainer/README.md",
"chars": 323,
"preview": "# portainer\n\nhttps://www.portainer.io\n\nWith over half a million regular users, it's a powerful, open-source toolset\nthat"
},
{
"path": "portainer/docker-compose.portainer.yml",
"chars": 454,
"preview": "services:\n portainer:\n image: 'portainer/portainer'\n labels:\n traefik.enable: true\n traefik.http.router"
},
{
"path": "remotely/README.md",
"chars": 546,
"preview": "# Remotely\n\nRemotely is a free and open source, self-hosted solution for remote control and remote scripting via a web i"
},
{
"path": "remotely/docker-compose.remotely.yml",
"chars": 443,
"preview": "services:\n remotely:\n image: 'translucency/remotely:${REMOTELY_IMAGE_VERSION:-latest}'\n build: 'https://github.co"
},
{
"path": "rocketchat/.gitignore",
"chars": 4,
"preview": "db/\n"
},
{
"path": "rocketchat/README.md",
"chars": 116,
"preview": "# Rocket Chat\n\nhttps://rocket.chat/\n\nRocket.Chat is the ultimate Free Open Source Solution for team communications.\n"
},
{
"path": "rocketchat/docker-compose.rocket-chat.yml",
"chars": 2321,
"preview": "networks:\n rocketchat-internal: {}\n\nservices:\n rocketchat:\n image: 'rocket.chat:latest'\n depends_on:\n - 'ro"
},
{
"path": "searxng/.gitignore",
"chars": 22,
"preview": "searx/\nsearx-checker/\n"
},
{
"path": "searxng/README.md",
"chars": 179,
"preview": "# searxng\n\nhttps://searxng.org/\n\nsearxng - a privacy-respecting, hackable metasearch engine. Advanced settings.\ngeneral "
},
{
"path": "searxng/docker-compose.searxng.yml",
"chars": 955,
"preview": "services:\n searxng:\n image: 'searxng/searxng:${SEARXNG_IMAGE_VERSION:-latest}'\n depends_on:\n - 'searxng-redi"
},
{
"path": "sharelatex/.gitignore",
"chars": 20,
"preview": "data/\nmongo/\nredis/\n"
},
{
"path": "sharelatex/README.md",
"chars": 333,
"preview": "# sharelatex\n\nhttps://github.com/sharelatex/sharelatex\n\nAn online LaTeX editor that's easy to use. No installation, real"
},
{
"path": "sharelatex/docker-compose.sharelatex.yml",
"chars": 1642,
"preview": "networks:\n sharelatex-internal: {}\n\nservices:\n sharelatex:\n image: 'sharelatex/sharelatex:${SHARELATEX_IMAGE_VERSIO"
},
{
"path": "sonarr/.gitignore",
"chars": 23,
"preview": "config/\ndownloads/\ntv/\n"
},
{
"path": "sonarr/README.md",
"chars": 329,
"preview": "# sonarr\n\nhttps://sonarr.tv/\n\nSonarr is a PVR for Usenet and BitTorrent users. It can monitor multiple RSS\nfeeds for new"
},
{
"path": "sonarr/docker-compose.sonarr.yml",
"chars": 644,
"preview": "services:\n sonarr:\n image: 'linuxserver/sonarr:${SONARR_IMAGE_VERSION:-4.0.0}'\n environment:\n PGID: '${SONAR"
},
{
"path": "streama/README.md",
"chars": 325,
"preview": "# streama\n\nhttps://docs.streama-project.com\n\nStreama. Self hosted streaming media server. Host your own Streaming\nApplic"
},
{
"path": "streama/docker-compose.streama.yml",
"chars": 528,
"preview": "services:\n streama:\n image: 'gkiko/streama:v1.8.3'\n healthcheck:\n test: ['CMD', 'curl', '0.0.0.0:8080/login/"
},
{
"path": "test.sh",
"chars": 1061,
"preview": "#!/bin/bash\n# export PS4='$(read time junk < /proc/$$/schedstat; echo \"@@@ $time @@@ \" )'\n# set -x\nerrors=0\nlog_file=log"
},
{
"path": "test_config.yml",
"chars": 53469,
"preview": "name: make-my-server\nservices:\n alertmanager:\n healthcheck:\n test:\n - CMD\n - wget\n - -q\n - "
},
{
"path": "theia/README.md",
"chars": 125,
"preview": "# theia\n\nhttps://github.com/eclipse-theia/theia\n\nEclipse Theia is a cloud & desktop IDE framework implemented in TypeScr"
},
{
"path": "theia/docker-compose.theia.yml",
"chars": 497,
"preview": "services:\n theia:\n image: 'theiaide/theia'\n init: true\n labels:\n traefik.enable: true\n traefik.http."
},
{
"path": "tor-relay/.gitignore",
"chars": 7,
"preview": "keys/*\n"
},
{
"path": "tor-relay/README.md",
"chars": 401,
"preview": "# tor-relay\n\nhttps://community.torproject.org/relay/\n\nThe Tor network relies on volunteers to donate bandwidth. The more"
},
{
"path": "tor-relay/docker-compose.tor-relay.yml",
"chars": 1068,
"preview": "# See https://blog.jessfraz.com/post/running-a-tor-relay-with-docker/\n# Checkout logs and https://atlas.torproject.org/ "
},
{
"path": "tor-relay/keys/.gitkeep",
"chars": 0,
"preview": ""
},
{
"path": "traefik/README.md",
"chars": 1327,
"preview": "# traefik\n\nhttps://doc.traefik.io/traefik/\n\nTraefik is an open-source Edge Router that makes publishing your services a "
},
{
"path": "traefik/docker-compose.traefik.yml",
"chars": 2879,
"preview": "services:\n traefik:\n image: 'traefik:${TRAEFIK_IMAGE_VERSION:-v3}'\n command:\n # Provider\n - '--provider"
},
{
"path": "traefik/dynamic_conf/fail2ban.yml",
"chars": 306,
"preview": "http:\n middlewares:\n fail2ban:\n plugin:\n fail2ban:\n rules:\n bantime: '3h'\n "
},
{
"path": "traefik/dynamic_conf/middlewares.yml",
"chars": 112,
"preview": "http:\n middlewares:\n compress:\n compress: {}\n\n headers:\n headers:\n stsSeconds: 63072000\n"
},
{
"path": "traefik/dynamic_conf/tls.yml",
"chars": 386,
"preview": "tls:\n options:\n default:\n cipherSuites:\n - 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256'\n - 'TLS_ECDH"
},
{
"path": "traefik/logs/.gitkeep",
"chars": 0,
"preview": ""
},
{
"path": "transmission/.gitignore",
"chars": 26,
"preview": "config/\ndownloads/\nwatch/\n"
},
{
"path": "transmission/README.md",
"chars": 82,
"preview": "# transmission\n\nhttps://transmissionbt.com/\n\nTransmission is a BitTorrent client.\n"
},
{
"path": "transmission/docker-compose.transmission.yml",
"chars": 907,
"preview": "services:\n transmission:\n image: 'linuxserver/transmission:${TRANSMISSION_IMAGE_VERSION:-3.00-r5-ls115}'\n dns:\n "
},
{
"path": "vpn/README.md",
"chars": 131,
"preview": "# vpn\n\nhttps://github.com/hwdsl2/docker-ipsec-vpn-server\n\nDocker image to run an IPsec VPN server, with IPsec/L2TP and C"
},
{
"path": "vpn/docker-compose.vpn.yml",
"chars": 512,
"preview": "services:\n vpn:\n image: 'hwdsl2/ipsec-vpn-server:${VPN_IMAGE_VERSION:-latest}'\n environment:\n VPN_ADDL_PASSW"
},
{
"path": "watchtower/README.md",
"chars": 454,
"preview": "# Watchtower\n\nhttps://containrrr.dev/watchtower/\n\nA container-based solution for automating Docker container base image "
},
{
"path": "watchtower/docker-compose.watchtower.yml",
"chars": 460,
"preview": "services:\n watchtower:\n image: 'containrrr/watchtower:${WATCHTOWER_IMAGE_VERSION:-latest}'\n environment:\n WA"
}
]
About this extraction
This file contains the full source code of the tomMoulard/make-my-server GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 130 files (225.1 KB), approximately 65.5k tokens.
Extracted by GitExtract, a GitHub repo-to-text converter built by Nikandr Surkov.